exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
###Markdown Cleaning up a service instance*Back to [table of contents](Table-of-Contents)*To clean all data on the service instance, you can run the following snippet. The code is self-contained and does not require you to execute any of the cells above. However, you will need to have the `key.json` containing a service key in place.You will need to set `CLEANUP_EVERYTHING = True` below to execute the cleanup.**NOTE: This will delete all data on the service instance!** ###Code CLEANUP_EVERYTHING = False def cleanup_everything(): import logging import sys logging.basicConfig(level=logging.INFO, stream=sys.stdout) import json import os if not os.path.exists("key.json"): msg = "key.json is not found. Please follow instructions above to create a service key of" msg += " Data Attribute Recommendation. Then, upload it into the same directory where" msg += " this notebook is saved." print(msg) raise ValueError(msg) with open("key.json") as file_handle: key = file_handle.read() SERVICE_KEY = json.loads(key) from sap.aibus.dar.client.model_manager_client import ModelManagerClient model_manager = ModelManagerClient.construct_from_service_key(SERVICE_KEY) for deployment in model_manager.read_deployment_collection()["deployments"]: model_manager.delete_deployment_by_id(deployment["id"]) for model in model_manager.read_model_collection()["models"]: model_manager.delete_model_by_name(model["name"]) for job in model_manager.read_job_collection()["jobs"]: model_manager.delete_job_by_id(job["id"]) from sap.aibus.dar.client.data_manager_client import DataManagerClient data_manager = DataManagerClient.construct_from_service_key(SERVICE_KEY) for dataset in data_manager.read_dataset_collection()["datasets"]: data_manager.delete_dataset_by_id(dataset["id"]) for dataset_schema in data_manager.read_dataset_schema_collection()["datasetSchemas"]: data_manager.delete_dataset_schema_by_id(dataset_schema["id"]) print("Cleanup done!") if CLEANUP_EVERYTHING: print("Cleaning up all resources in 
this service instance.") cleanup_everything() else: print("Not cleaning up. Set 'CLEANUP_EVERYTHING = True' above and run again.") ###Output _____no_output_____ ###Markdown Data Attribute Recommendation - TechED 2020 INT260Getting started with the Python SDK for the Data Attribute Recommendation service. Business ScenarioWe will consider a business scenario involving product master data. The creation and maintenance of this product master data requires the careful manual selection of the correct categories for a given product from a pre-defined hierarchy of product categories.In this workshop, we will explore how to automate this tedious manual task with the Data Attribute Recommendation service. This workshop will cover: * Data Upload* Model Training and Deployment* Inference Requests We will work through a basic example of how to achieve these tasks using the [Python SDK for Data Attribute Recommendation](https://github.com/SAP/data-attribute-recommendation-python-sdk). *Note: if you are doing several runs of this notebook on a trial account, you may see errors stating 'The resource can no longer be used. Usage limit has been reached'. It can be beneficial to [clean up the service instance](Cleaning-up-a-service-instance) to free up limited trial resources acquired by an earlier run of the notebook. 
[Some limits](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/c03b561eea1744c9b9892b416037b99a.html) cannot be reset this way.* Table of Contents* [Exercise 01.1](Exercise-01.1) - Installing the SDK and preparing the service key * [Creating a service instance and key on BTP Trial](Creating-a-service-instance-and-key) * [Installing the SDK](Installing-the-SDK) * [Loading the service key into your Jupyter Notebook](Loading-the-service-key-into-your-Jupyter-Notebook)* [Exercise 01.2](Exercise-01.2) - Uploading the data* [Exercise 01.3](Exercise-01.3) - Training the model* [Exercise 01.4](Exercise-01.4) - Deploying the Model and predicting labels* [Resources](Resources) - Additional reading* [Cleaning up a service instance](Cleaning-up-a-service-instance) - Clean up all resources on the service instance* [Optional Exercises](Optional-Exercises) - Optional exercises RequirementsSee the [README in the Github repository for this workshop](https://github.com/SAP-samples/teched2020-INT260/blob/master/exercises/ex1-DAR/README.md). Exercise 01.1*Back to [table of contents](Table-of-Contents)*In exercise 01.1, we will install the SDK and prepare the service key. Creating a service instance and key on BTP Trial Please log in to your trial account: https://cockpit.eu10.hana.ondemand.com/trial/In your global account screen, go to the "Boosters" tab:![trial_booster.png](attachment:trial_booster.png)*Boosters are only available on the Trial landscape. If you are using a production environment, please follow this tutorial to manually [create a service instance and a service key](https://developers.sap.com/tutorials/cp-aibus-dar-service-instance.html)*. In the Boosters tab, enter "Data Attribute Recommendation" into the search box. Then, select the service tile from the search results: ![trial_locate_dar_booster.png](attachment:trial_locate_dar_booster.png) The resulting screen shows details of the booster pack. 
Here, click the "Start" button and wait a few seconds.![trial_start_booster.png](attachment:trial_start_booster.png) Once the booster is finished, click the "go to Service Key" link to obtain your service key.![trial_booster_finished.png](attachment:trial_booster_finished.png) Finally, download the key and save it to disk.![trial_download_key.png](attachment:trial_download_key.png) Installing the SDK The Data Attribute Recommendation SDK is available from the Python package repository. It can be installed with the standard `pip` tool: ###Code ! pip install data-attribute-recommendation-sdk ###Output _____no_output_____ ###Markdown *Note: If you are not using a Jupyter notebook, but instead a regular Python development environment, we recommend using a Python virtual environment to set up your development environment. Please see [the dedicated tutorial to learn how to install the SDK inside a Python virtual environment](https://developers.sap.com/tutorials/cp-aibus-dar-sdk-setup.html).* Loading the service key into your Jupyter Notebook Once you have downloaded the service key from the Cockpit, upload it to your notebook environment. The service key must be uploaded to the same directory where the `teched2020-INT260_Data_Attribute_Recommendation.ipynb` is stored.We first navigate to the file browser in Jupyter. On the top of your Jupyter notebook, right-click on the Jupyter logo and open in a new tab.![service_key_main_jupyter_page.png](attachment:service_key_main_jupyter_page.png) **In the file browser, navigate to the directory where the `teched2020-INT260_Data_Attribute_Recommendation.ipynb` notebook file is stored. The service key must reside next to this file.**In the Jupyter file browser, click the **Upload** button (1). In the file selection dialog that opens, select the `defaultKey_*.json` file you downloaded previously from the SAP Cloud Platform Cockpit. Rename the file to `key.json`. 
Confirm the upload by clicking on the second **Upload** button (2).![service_key_upload.png](attachment:service_key_upload.png) The service key contains your credentials to access the service. Please treat this as carefully as you would treat any password. We keep the service key as a separate file outside this notebook to avoid leaking the secret credentials.The service key is a JSON file. We will load this file once and use the credentials throughout this workshop. ###Code # First, set up logging so we can see the actions performed by the SDK behind the scenes import logging import sys logging.basicConfig(level=logging.INFO, stream=sys.stdout) from pprint import pprint # for nicer output formatting import json import os if not os.path.exists("key.json"): msg = "key.json is not found. Please follow instructions above to create a service key of" msg += " Data Attribute Recommendation. Then, upload it into the same directory where" msg += " this notebook is saved." print(msg) raise ValueError(msg) with open("key.json") as file_handle: key = file_handle.read() SERVICE_KEY = json.loads(key) ###Output _____no_output_____ ###Markdown Summary Exercise 01.1In exercise 01.1, we have covered the following topics:* How to install the Python SDK for Data Attribute Recommendation* How to obtain a service key for the Data Attribute Recommendation service Exercise 01.2*Back to [table of contents](Table-of-Contents)**To perform this exercise, you need to execute the code in all previous exercises.*In exercise 01.2, we will upload our demo dataset to the service. The Dataset Obtaining the Data The dataset we use in this workshop is a CSV file containing product master data. The original data was released by BestBuy, a retail company, under an [open license](https://github.com/SAP-samples/data-attribute-recommendation-postman-tutorial-sampledata-and-license). This makes it ideal for first experiments with the Data Attribute Recommendation service. 
The dataset can be downloaded directly from Github using the following command: ###Code ! wget -O bestBuy.csv "https://raw.githubusercontent.com/SAP-samples/data-attribute-recommendation-postman-tutorial-sample/master/Tutorial_Example_Dataset.csv" # If you receive a "command not found" error (e.g. on Windows), try curl instead of wget: # ! curl -o bestBuy.csv "https://raw.githubusercontent.com/SAP-samples/data-attribute-recommendation-postman-tutorial-sample/master/Tutorial_Example_Dataset.csv" ###Output _____no_output_____ ###Markdown Let's inspect the data: ###Code # if you are experiencing an import error here, run the following in a new cell: # ! pip install pandas import pandas as pd df = pd.read_csv("bestBuy.csv") df.head(5) print() print(f"Data has {df.shape[0]} rows and {df.shape[1]} columns.") ###Output _____no_output_____ ###Markdown The CSV contains several products. For each product, the description, the manufacturer and the price are given. Additionally, three levels of the product hierarchy are given.The first product, a set of AAA batteries, is located in the following place in the product hierarchy:```level1_category: Connected Home & Housewares |level2_category: Housewares |level3_category: Household Batteries``` We will use the Data Attribute Recommendation service to predict the categories for a given product based on its **description**, **manufacturer** and **price**. Creating the DatasetSchema We first have to describe the shape of our data by creating a DatasetSchema. This schema informs the service about the individual column types found in the CSV. We also describe which columns are the targets used for training. These columns will be predicted later. In our case, these are the three category columns.The service currently supports three column types: **text**, **category** and **number**. 
For prediction, only **category** is currently supported.A DatasetSchema for the BestBuy dataset looks as follows:```json{ "features": [ {"label": "manufacturer", "type": "CATEGORY"}, {"label": "description", "type": "TEXT"}, {"label": "price", "type": "NUMBER"} ], "labels": [ {"label": "level1_category", "type": "CATEGORY"}, {"label": "level2_category", "type": "CATEGORY"}, {"label": "level3_category", "type": "CATEGORY"} ], "name": "bestbuy-category-prediction",}```We will now upload this DatasetSchema to the Data Attribute Recommendation service. The SDK provides the[`DataManagerClient.create_dataset_schema()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.data_manager_client.DataManagerClient.create_dataset_schema) method for this purpose. ###Code from sap.aibus.dar.client.data_manager_client import DataManagerClient dataset_schema = { "features": [ {"label": "manufacturer", "type": "CATEGORY"}, {"label": "description", "type": "TEXT"}, {"label": "price", "type": "NUMBER"} ], "labels": [ {"label": "level1_category", "type": "CATEGORY"}, {"label": "level2_category", "type": "CATEGORY"}, {"label": "level3_category", "type": "CATEGORY"} ], "name": "bestbuy-category-prediction", } data_manager = DataManagerClient.construct_from_service_key(SERVICE_KEY) response = data_manager.create_dataset_schema(dataset_schema) dataset_schema_id = response["id"] print() print("DatasetSchema created:") pprint(response) print() print(f"DatasetSchema ID: {dataset_schema_id}") ###Output _____no_output_____ ###Markdown The API responds with the newly created DatasetSchema resource. The service assigned an ID to the schema. We save this ID in a variable, as we will need it when we upload the data. 
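As a quick sanity check before uploading, you can verify that the CSV header covers every feature and label declared in the DatasetSchema. This is a small helper sketch, not part of the SDK; the `csv_columns` list below is written out manually for illustration and repeats the schema dict from the cell above. ###Code

```python
# Sanity-check helper (not part of the SDK): report any schema columns
# that are missing from the CSV header. The schema dict repeats the
# DatasetSchema created above.
dataset_schema = {
    "features": [
        {"label": "manufacturer", "type": "CATEGORY"},
        {"label": "description", "type": "TEXT"},
        {"label": "price", "type": "NUMBER"},
    ],
    "labels": [
        {"label": "level1_category", "type": "CATEGORY"},
        {"label": "level2_category", "type": "CATEGORY"},
        {"label": "level3_category", "type": "CATEGORY"},
    ],
    "name": "bestbuy-category-prediction",
}

def missing_columns(csv_columns, schema):
    # Every feature and label in the schema must appear in the CSV.
    expected = {column["label"]
                for column in schema["features"] + schema["labels"]}
    return sorted(expected - set(csv_columns))

# Column names as they appear in bestBuy.csv
csv_columns = ["description", "manufacturer", "price",
               "level1_category", "level2_category", "level3_category"]
print(missing_columns(csv_columns, dataset_schema))  # prints: []
```

An empty list means the upload in the next step should not fail due to missing columns.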
Uploading the Data to the service The [`DataManagerClient`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.data_manager_client.DataManagerClient) class is also responsible for uploading data to the service. This data must fit to an existing DatasetSchema. After uploading the data, the service will validate the Dataset against the DataSetSchema in a background process. The data must be a CSV file which can optionally be `gzip` compressed.We will now upload our `bestBuy.csv` file, using the DatasetSchema which we created earlier.Data upload is a two-step process. We first create the Dataset using [`DataManagerClient.create_dataset()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.data_manager_client.DataManagerClient.create_dataset). Then we can upload data to the Dataset using the [`DataManagerClient.upload_data_to_dataset()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.data_manager_client.DataManagerClient.upload_data_to_dataset) method. ###Code dataset_resource = data_manager.create_dataset("my-bestbuy-dataset", dataset_schema_id) dataset_id = dataset_resource["id"] print() print("Dataset created:") pprint(dataset_resource) print() print(f"Dataset ID: {dataset_id}") # Compress file first for a faster upload ! gzip -9 -c bestBuy.csv > bestBuy.csv.gz ###Output _____no_output_____ ###Markdown Note that the data upload can take a few minutes. Please do not restart the process while the cell is still running. ###Code # Open in binary mode. with open('bestBuy.csv.gz', 'rb') as file_handle: dataset_resource = data_manager.upload_data_to_dataset(dataset_id, file_handle) print() print("Dataset after data upload:") print() pprint(dataset_resource) ###Output _____no_output_____ ###Markdown Note that the Dataset status changed from `NO_DATA` to `VALIDATING`.Dataset validation is a background process. 
The status will eventually change from `VALIDATING` to `SUCCEEDED`.The SDK provides the [`DataManagerClient.wait_for_dataset_validation()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.data_manager_client.DataManagerClient.wait_for_dataset_validation) method to poll for the Dataset validation. ###Code dataset_resource = data_manager.wait_for_dataset_validation(dataset_id) print() print("Dataset after validation has finished:") print() pprint(dataset_resource) ###Output _____no_output_____ ###Markdown If the status is `FAILED` instead of `SUCCEEDED`, then the `validationMessage` will contain details about the validation failure. To better understand the Dataset lifecycle, refer to the [corresponding document on help.sap.com](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/a9b7429687a04e769dbc7955c6c44265.html). Summary Exercise 01.2In exercise 01.2, we have covered the following topics:* How to create a DatasetSchema* How to upload a Dataset to the serviceYou can find optional exercises related to exercise 01.2 [below](Optional-Exercises-for-01.2). Exercise 01.3*Back to [table of contents](Table-of-Contents)**To perform this exercise, you need to execute the code in all previous exercises.*In exercise 01.3, we will train the model. Training the Model The Dataset is now uploaded and has been validated successfully by the service.To train a machine learning model, we first need to select the correct model template. Selecting the right ModelTemplateThe Data Attribute Recommendation service currently supports two different ModelTemplates:| ID | Name | Description ||--------------------------------------|---------------------------|---------------------------------------------------------------------------|| d7810207-ca31-4d4d-9b5a-841a644fd81f | **Hierarchical template** | Recommended for the prediction of multiple classes that form a hierarchy. 
|| 223abe0f-3b52-446f-9273-f3ca39619d2c | **Generic template** | Generic neural network for multi-label, multi-class classification. || 188df8b2-795a-48c1-8297-37f37b25ea00 | **AutoML template** | Finds the [best traditional machine learning model out of several traditional algorithms](https://blogs.sap.com/2021/04/28/how-does-automl-works-in-data-attribute-recommendation/). Single label only. |We are building a model to predict product hierarchies. The **Hierarchical Template** is correct for this scenario. In this template, the first label in the DatasetSchema is considered the top-level category. Each subsequent label is considered to be further down in the hierarchy. Coming back to our example DatasetSchema:```json{ "labels": [ {"label": "level1_category", "type": "CATEGORY"}, {"label": "level2_category", "type": "CATEGORY"}, {"label": "level3_category", "type": "CATEGORY"} ]}```The first defined label is `level1_category`, which is given more weight during training than `level3_category`.Refer to the [official documentation on ModelTemplates](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/1e76e8c636974a06967552c05d40e066.html) to learn more. Additional model templates may be added over time, so check back regularly. Starting the training When working with models, we use the [`ModelManagerClient`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient) class.To start the training, we need the IDs of the dataset and the desired model template. We also have to provide a name for the model.The [`ModelManagerClient.create_job()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient.create_job) method launches the training Job.*Only one model of a given name can exist. 
If you receive a message stating 'The model name specified is already in use', you either have to remove the job and its associated model first or you have to change the `model_name` variable name below. You can also [clean up the entire service instance](Cleaning-up-a-service-instance).* ###Code from sap.aibus.dar.client.model_manager_client import ModelManagerClient from sap.aibus.dar.client.exceptions import DARHTTPException model_manager = ModelManagerClient.construct_from_service_key(SERVICE_KEY) model_template_id = "d7810207-ca31-4d4d-9b5a-841a644fd81f" # hierarchical template model_name = "bestbuy-hierarchy-model" job_resource = model_manager.create_job(model_name, dataset_id, model_template_id) job_id = job_resource['id'] print() print("Job resource:") print() pprint(job_resource) print() print(f"ID of submitted Job: {job_id}") ###Output _____no_output_____ ###Markdown The job is now running in the background. Similar to the DatasetValidation, we have to poll the job until it succeeds.The SDK provides the [`ModelManagerClient.wait_for_job()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient.wait_for_job) method: ###Code job_resource = model_manager.wait_for_job(job_id) print() print("Job resource after training is finished:") pprint(job_resource) ###Output _____no_output_____ ###Markdown To better understand the Training Job lifecycle, see the [corresponding document on help.sap.com](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/0fc40aa077ce4c708c1e5bfc875aa3be.html). IntermissionThe model training will take between 5 and 10 minutes.In the meantime, we can explore the available [resources](Resources) for both the service and the SDK. 
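Instead of blocking in `wait_for_job()`, you can also poll the job status yourself, for example to do other work between checks. This is a sketch, assuming a `read_job_by_id()` method analogous to the `delete_job_by_id()` used in the cleanup section; the polling interval is arbitrary. ###Code

```python
import time

# Sketch: manual polling loop as an alternative to wait_for_job().
# read_job_by_id() is an assumption about the ModelManagerClient API.
def poll_job(client, job_id, interval_seconds=30):
    while True:
        job = client.read_job_by_id(job_id)
        print("Job status:", job["status"])
        if job["status"] in ("SUCCEEDED", "FAILED"):
            return job
        time.sleep(interval_seconds)

# Usage in this notebook would be: poll_job(model_manager, job_id)
```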
Inspecting the ModelOnce the training job is finished successfully, we can inspect the model using [`ModelManagerClient.read_model_by_name()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient.read_model_by_name). ###Code model_resource = model_manager.read_model_by_name(model_name) print() pprint(model_resource) ###Output _____no_output_____ ###Markdown In the model resource, the `validationResult` key provides information about model performance. You can also use these metrics to compare performance of different [ModelTemplates](Selecting-the-right-ModelTemplate) or different datasets. Summary Exercise 01.3In exercise 01.3, we have covered the following topics:* How to select the appropriate ModelTemplate* How to train a Model from a previously uploaded DatasetYou can find optional exercises related to exercise 01.3 [below](Optional-Exercises-for-01.3). Exercise 01.4*Back to [table of contents](Table-of-Contents)**To perform this exercise, you need to execute the code in all previous exercises.*In exercise 01.4, we will deploy the model and predict labels for some unlabeled data. Deploying the Model The training job has finished and the model is ready to be deployed. By deploying the model, we create a server process in the background on the Data Attribute Recommendation service which will serve inference requests.In the SDK, the [`ModelManagerClient.create_deployment()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlmodule-sap.aibus.dar.client.model_manager_client) method lets us create a Deployment. 
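If a deployment for the model already exists, creating another one may fail. A small sketch to look up an existing deployment first, reusing the `read_deployment_collection()` method seen in the cleanup section; the `"modelName"` key is an assumption about the deployment resource shape. ###Code

```python
# Sketch: find an existing deployment for model_name before creating one.
# The "modelName" key is an assumption about the response shape.
def find_deployment(client, model_name):
    for deployment in client.read_deployment_collection()["deployments"]:
        if deployment.get("modelName") == model_name:
            return deployment
    return None

# Usage in this notebook would be: find_deployment(model_manager, model_name)
```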
###Code deployment_resource = model_manager.create_deployment(model_name) deployment_id = deployment_resource["id"] print() print("Deployment resource:") print() pprint(deployment_resource) print(f"Deployment ID: {deployment_id}") ###Output _____no_output_____ ###Markdown *Note: if you are using a trial account and you see errors such as 'The resource can no longer be used. Usage limit has been reached', consider [cleaning up the service instance](Cleaning-up-a-service-instance) to free up limited trial resources.* Similar to the data upload and the training job, model deployment is an asynchronous process. We have to poll the API until the Deployment is in status `SUCCEEDED`. The SDK provides the [`ModelManagerClient.wait_for_deployment()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient.wait_for_deployment) for this purpose. ###Code deployment_resource = model_manager.wait_for_deployment(deployment_id) print() print("Finished deployment resource:") print() pprint(deployment_resource) ###Output _____no_output_____ ###Markdown Once the Deployment is in status `SUCCEEDED`, we can run inference requests. To better understand the Deployment lifecycle, see the [corresponding document on help.sap.com](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/f473b5b19a3b469e94c40eb27623b4f0.html). *For trial users: the deployment will be stopped after 8 hours. You can restart it by deleting the deployment and creating a new one for your model. The [`ModelManagerClient.ensure_deployment_exists()`](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/c03b561eea1744c9b9892b416037b99a.html) method will delete and re-create automatically. 
Then, you need to poll until the deployment has succeeded using [`ModelManagerClient.wait_for_deployment()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient.wait_for_deployment) as above.* Executing Inference requests With a single inference request, we can send up to 50 objects to the service to predict the labels. The data sent to the service must match the `features` section of the DatasetSchema created earlier. The `labels` defined inside the DatasetSchema will be predicted for each object and returned as a response to the request.In the SDK, the [`InferenceClient.create_inference_request()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.inference_client.InferenceClient.create_inference_request) method handles submission of inference requests. ###Code from sap.aibus.dar.client.inference_client import InferenceClient inference = InferenceClient.construct_from_service_key(SERVICE_KEY) objects_to_be_classified = [ { "features": [ {"name": "manufacturer", "value": "Energizer"}, {"name": "description", "value": "Alkaline batteries; 1.5V"}, {"name": "price", "value": "5.99"}, ], }, ] inference_response = inference.create_inference_request(model_name, objects_to_be_classified) print() print("Inference request processed. Response:") print() pprint(inference_response) ###Output _____no_output_____ ###Markdown *Note: For trial accounts, you only have a limited number of objects which you can classify.* You can also try to come up with your own example: ###Code my_own_items = [ { "features": [ {"name": "manufacturer", "value": "EDIT THIS"}, {"name": "description", "value": "EDIT THIS"}, {"name": "price", "value": "0.00"}, ], }, ] inference_response = inference.create_inference_request(model_name, my_own_items) print() print("Inference request processed. 
Response:") print() pprint(inference_response) ###Output _____no_output_____ ###Markdown You can also classify multiple objects at once. For each object, the `top_n` parameter determines how many predictions are returned. ###Code objects_to_be_classified = [ { "objectId": "optional-identifier-1", "features": [ {"name": "manufacturer", "value": "Energizer"}, {"name": "description", "value": "Alkaline batteries; 1.5V"}, {"name": "price", "value": "5.99"}, ], }, { "objectId": "optional-identifier-2", "features": [ {"name": "manufacturer", "value": "Eidos"}, {"name": "description", "value": "Unravel a grim conspiracy at the brink of Revolution"}, {"name": "price", "value": "19.99"}, ], }, { "objectId": "optional-identifier-3", "features": [ {"name": "manufacturer", "value": "Cadac"}, {"name": "description", "value": "CADAC Grill Plate for Safari Chef Grills: 12\"" + "cooking surface; designed for use with Safari Chef grills;" + "105 sq. in. cooking surface; PTFE nonstick coating;" + " 2 grill surfaces" }, {"name": "price", "value": "39.99"}, ], } ] inference_response = inference.create_inference_request(model_name, objects_to_be_classified, top_n=3) print() print("Inference request processed. Response:") print() pprint(inference_response) ###Output _____no_output_____ ###Markdown We can see that the service now returns the `n-best` predictions for each label as indicated by the `top_n` parameter.In some cases, the predicted category has the special value `nan`. In the `bestBuy.csv` data set, not all records have the full set of three categories. Some records only have a top-level category. The model learns this fact from the data and will occasionally suggest that a record should not have a category. 
###Code # Inspect all video games with just a top-level category entry video_games = df[df['level1_category'] == 'Video Games'] video_games.loc[df['level2_category'].isna() & df['level3_category'].isna()].head(5) ###Output _____no_output_____ ###Markdown To learn how to execute inference calls without the SDK just using the underlying RESTful API, see [Inference without the SDK](Inference-without-the-SDK). Summary Exercise 01.4In exercise 01.4, we have covered the following topics:* How to deploy a previously trained model* How to execute inference requests against a deployed modelYou can find optional exercises related to exercise 01.4 [below](Optional-Exercises-for-01.4). Wrapping upIn this workshop, we looked into the following topics:* Installation of the Python SDK for Data Attribute Recommendation* Modelling data with a DatasetSchema* Uploading data into a Dataset* Training a model* Predicting labels for unlabelled dataUsing these tools, we are able to solve the problem of missing Master Data attributes starting from just a CSV file containing training data.Feel free to revisit the workshop materials at any time. The [resources](Resources) section below contains additional reading.If you would like to explore the additional capabilities of the SDK, visit the [optional exercises](Optional-Exercises) below. Cleanup During the course of the workshop, we have created several resources on the Data Attribute Recommendation Service:* DatasetSchema* Dataset* Job* Model* DeploymentThe SDK provides several methods to delete these resources. Note that there are dependencies between objects: you cannot delete a Dataset without deleting the Model beforehand.You will need to set `CLEANUP_SESSION = True` below to execute the cleanup. 
###Code # Clean up all resources created earlier CLEANUP_SESSION = False def cleanup_session(): model_manager.delete_deployment_by_id(deployment_id) # this can take a few seconds model_manager.delete_model_by_name(model_name) model_manager.delete_job_by_id(job_id) data_manager.delete_dataset_by_id(dataset_id) data_manager.delete_dataset_schema_by_id(dataset_schema_id) print("DONE cleaning up!") if CLEANUP_SESSION: print("Cleaning up resources generated in this session.") cleanup_session() else: print("Not cleaning up. Set 'CLEANUP_SESSION = True' above and run again!") ###Output _____no_output_____ ###Markdown Resources*Back to [table of contents](Table-of-Contents)* SDK Resources* [SDK source code on Github](https://github.com/SAP/data-attribute-recommendation-python-sdk)* [SDK documentation](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/)* [How to obtain support](https://github.com/SAP/data-attribute-recommendation-python-sdk/blob/master/README.mdhow-to-obtain-support)* [Tutorials: Classify Data Records with the SDK for Data Attribute Recommendation](https://developers.sap.com/group.cp-aibus-data-attribute-sdk.html) Data Attribute Recommendation* [SAP Help Portal](https://help.sap.com/viewer/product/Data_Attribute_Recommendation/SHIP/en-US)* [API Reference](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/b45cf9b24fd042d082c16191aa938c8d.html)* [Tutorials using Postman - interact with the service RESTful API directly](https://developers.sap.com/mission.cp-aibus-data-attribute.html)* [Trial Account Limits](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/c03b561eea1744c9b9892b416037b99a.html)* [Metering and Pricing](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/1e093326a2764c298759fcb92c5b0500.html) Addendum Inference without the SDK*Back to [table of contents](Table-of-Contents)* The Data Attribute Service exposes a RESTful API. 
The SDK we use in this workshop uses this API to interact with the DAR service.For custom integration, you can implement your own client for the API. The tutorial "[Use Machine Learning to Classify Data Records]" is a great way to explore the Data Attribute Recommendation API with the Postman REST client. Beyond the tutorial, the [API Reference] is a comprehensive documentation of the RESTful interface.[Use Machine Learning to Classify Data Records]: https://developers.sap.com/mission.cp-aibus-data-attribute.html[API Reference]: https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/b45cf9b24fd042d082c16191aa938c8d.htmlTo demonstrate the underlying API, the next example uses the `curl` command line tool to perform an inference request against the Inference API.The example uses the `jq` command to extract the credentials from the service. The authentication token is retrieved from the `uaa_url` and then used for the inference request. ###Code # If the following example gives you errors that the jq or curl commands cannot be found, # you may be able to install them from conda by uncommenting one of the lines below: #%conda install -q jq #%conda install -q curl %%bash -s "$model_name" # Pass the python model_name variable as the first argument to shell script model_name=$1 echo "Model: $model_name" key=$(cat key.json) url=$(echo $key | jq -r .url) uaa_url=$(echo $key | jq -r .uaa.url) clientid=$(echo $key | jq -r .uaa.clientid) clientsecret=$(echo $key | jq -r .uaa.clientsecret) echo "Service URL: $url" token_url=${uaa_url}/oauth/token?grant_type=client_credentials echo "Obtaining token with clientid $clientid from $token_url" bearer_token=$(curl \ --silent --show-error \ --user $clientid:$clientsecret \ $token_url \ | jq -r .access_token ) inference_url=${url}/inference/api/v3/models/${model_name}/versions/1 echo "Running inference request against endpoint $inference_url" echo "" # We pass the token in the Authorization header. 
# The payload for the inference request is passed as # the body of the POST request below. # The output of the curl command is piped through `jq` # for pretty-printing curl \ --silent --show-error \ --header "Authorization: Bearer ${bearer_token}" \ --header "Content-Type: application/json" \ -XPOST \ ${inference_url} \ -d '{ "objects": [ { "features": [ { "name": "manufacturer", "value": "Energizer" }, { "name": "description", "value": "Alkaline batteries; 1.5V" }, { "name": "price", "value": "5.99" } ] } ] }' | jq ###Output _____no_output_____
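For a Python-only integration without the SDK or shell tools, the same two-step flow (fetch an OAuth token, then POST the payload) can be sketched with the standard library. The endpoint paths mirror the shell example above, but treat this as an untested outline rather than official client code:

```python
import base64
import json
import urllib.request


def build_inference_payload(features: dict) -> dict:
    """Build the request body shown in the curl example above."""
    return {
        "objects": [
            {"features": [{"name": name, "value": value}
                          for name, value in features.items()]}
        ]
    }


def run_inference(service_key: dict, model_name: str, features: dict) -> dict:
    """Fetch an OAuth token, then POST the inference payload (untested outline)."""
    token_url = (service_key["uaa"]["url"]
                 + "/oauth/token?grant_type=client_credentials")
    request = urllib.request.Request(token_url)
    # HTTP basic auth with clientid/clientsecret, as in the shell example
    credentials = "{}:{}".format(service_key["uaa"]["clientid"],
                                 service_key["uaa"]["clientsecret"])
    request.add_header("Authorization",
                       "Basic " + base64.b64encode(credentials.encode()).decode())
    with urllib.request.urlopen(request) as response:
        token = json.loads(response.read())["access_token"]

    inference_url = "{}/inference/api/v3/models/{}/versions/1".format(
        service_key["url"], model_name)
    body = json.dumps(build_inference_payload(features)).encode()
    request = urllib.request.Request(inference_url, data=body, method="POST")
    request.add_header("Authorization", "Bearer " + token)
    request.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```

`run_inference` would be called with the parsed contents of `key.json`; only `build_inference_payload` is exercised without network access.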
examples/Train_ppo_cnn+eval_contact-(pretrained).ipynb
###Markdown If you wish to set which cores to use:

affinity_mask = {4, 5, 7}
affinity_mask = {6, 7, 9}
affinity_mask = {0, 1, 3}
affinity_mask = {2, 3, 5}
affinity_mask = {0, 2, 4, 6}
pid = 0
os.sched_setaffinity(pid, affinity_mask)
print("CPU affinity mask is modified to %s for process id 0" % affinity_mask)

DEFAULT 'CarRacing-v3' environment values

Continuous action = (steering_angle, throttle, brake)

ACT = [[0, 0, 0], [-0.4, 0, 0], [0.4, 0, 0], [0, 0.6, 0], [0, 0, 0.8]]
Discrete actions: center steering and no gas/brake, steer left, steer right, accel, brake --> actually a good choice, because car_dynamics softens the action's diff for gas and steering.

REWARDS
- reward given each step: step taken, distance to centerline, normalized speed [0-1], normalized steer angle [0-1]
- reward given on new tile touched: % proportional of advance, % advance/steps_taken
- reward given at episode end: all tiles touched (track finished), patience or off-road exceeded, out of bounds, max_steps exceeded
- reward for obstacles: obstacle hit (each step), obstacle collided (episode end)

GYM_REWARD = [ -0.1, 0.0, 0.0, 0.0, 10.0, 0.0, 0, -0, -100, -0, -0, -0 ]
STD_REWARD = [ -0.1, 0.0, 0.0, 0.0, 1.0, 0.0, 100, -20, -100, -50, -0, -0 ]
CONT_REWARD = [-0.11, 0.1, 0.0, 0.0, 1.0, 0.0, 100, -20, -100, -50, -5, -100 ]
See docu for RETURN computation details.

DEFAULT Environment Parameters (not related to the RL algorithm!)
- game_color = 1: state (frame) color option: 0 = RGB, 1 = Grayscale, 2 = Green only
- indicators = True: show or not the bottom Info Panel
- frames_per_state = 4: stacked (rolling history) frames on each state [1-inf], latest observation always on first frame
- skip_frames = 3: number of consecutive frames to skip between history saves [0-4]
- discre = ACT: action discretization function, format [[steer0, throttle0, brake0], [steer1, ...], ...]. None for continuous
- use_track = 1: number of times to use the same track, [1-100]. More than 20: high risk of overfitting!!
- episodes_per_track = 1: number of evenly distributed starting points on each track [1-20]. Every time you call reset(), the env automatically starts at the next point
- tr_complexity = 12: generated track geometric complexity, [6-20]
- tr_width = 45: relative track width, [30-50]
- patience = 2.0: max time in secs without progress, [0.5-20]
- off_track = 1.0: max time in secs driving on grass, [0.0-5]
- f_reward = CONT_REWARD: reward function coefficients, refer to docu for details
- num_obstacles = 5: obstacle objects placed on track [0-10]
- end_on_contact = False: stop episode on contact with an obstacle, not recommended for the starting phase of training
- obst_location = 0: array pre-setting obstacle location, in % of track. Negative value means the track's left-hand side. 0 for random location
- oily_patch = False: use all obstacles as low-friction road (oily patch)
- verbose = 2 ###Code # Imports assumed by the cells below (gym mod + stable-baselines 2.x API) import gym from stable_baselines import PPO2 from stable_baselines.common.vec_env import DummyVecEnv from stable_baselines.common.callbacks import StopTrainingOnRewardThreshold, EvalCallback ## Choose one agent, see Docu for description #agent='CarRacing-v0' #agent='CarRacing-v1' agent='CarRacing-v3' # Stop training when the model reaches the reward threshold callback_on_best = StopTrainingOnRewardThreshold(reward_threshold = 170, verbose=1) seed = 2000 ## SIMULATION param ## Changing these makes world models incompatible!!
game_color = 2 indicators = True fpst = 4 skip = 3 actions = [[0, 0, 0], [-0.4, 0, 0], [0.4, 0, 0], [0, 0.6, 0], [0, 0, 0.8]] #this is ACT obst_loc = [6, -12, 25, -50, 75, -37, 62, -87, 95, -29] #track percentage, negative for obstacle to the left-hand side ## Loading drive_pretained model import pickle root = 'ppo_cnn_gym-mod_' file = root+'c{:d}_f{:d}_s{:d}_{}_a{:d}'.format(game_color,fpst,skip,indicators,len(actions)) model = PPO2.load(file) ## This model param use = 6 # number of times to use same track [1,100] ept = 10 # different starting points on same track [1,20] patience = 1.0 track_complexity = 12 #REWARD2 = [-0.05, 0.1, 0.0, 0.0, 2.0, 0.0, 100, -20, -100, -50, -5, -100] if agent=='CarRacing-v3': env = gym.make(agent, seed=seed, game_color=game_color, indicators=indicators, frames_per_state=fpst, skip_frames=skip, # discre=actions, #passing custom actions use_track = use, episodes_per_track = ept, tr_complexity = track_complexity, tr_width = 45, patience = patience, off_track = patience, end_on_contact = True, #learning to avoid obstacles the-hard-way oily_patch = False, num_obstacles = 5, #some obstacles obst_location = obst_loc, #passing fixed obstacle location # f_reward = REWARD2, #passing a custom reward function verbose = 2 ) else: env = gym.make(agent) env = DummyVecEnv([lambda: env]) ## Training on obstacles model.set_env(env) batch_size = 256 updates = 700 model.learn(total_timesteps = updates*batch_size, log_interval=1) #, callback=eval_callback) #Save last updated model file = root+'c{:d}_f{:d}_s{:d}_{}_a{:d}__u{:d}_e{:d}_p{}_bs{:d}'.format( game_color,fpst,skip,indicators,len(actions),use,ept,patience,batch_size) model.save(file, cloudpickle=True) param_list=model.get_parameter_list() env.close() ## This model param #2 use = 6 # number of times to use same track [1,100] ept = 10 # different starting points on same track [1,20] patience = 1.0 track_complexity = 12 #REWARD2 = [-0.05, 0.1, 0.0, 0.0, 2.0, 0.0, 100, -20, -100, -50, -5, -100] seed 
= 25000 if agent=='CarRacing-v3': env2 = gym.make(agent, seed=seed, game_color=game_color, indicators=indicators, frames_per_state=fpst, skip_frames=skip, # discre=actions, #passing custom actions use_track = use, episodes_per_track = ept, tr_complexity = track_complexity, tr_width = 45, patience = patience, off_track = patience, end_on_contact = False, # CHANGED oily_patch = False, num_obstacles = 5, #some obstacles obst_location = 0, #using random obstacle location # f_reward = REWARD2, #passing a custom reward function verbose = 3 ) else: env2 = gym.make(agent) env2 = DummyVecEnv([lambda: env2]) ## Training on obstacles model.set_env(env2) #batch_size = 384 updates = 1500 ## Separate evaluation env test_freq = 100 #policy updates until evaluation test_episodes_per_track = 5 #number of starting points on test_track eval_log = './evals/' env_test = gym.make(agent, seed=int(3.14*seed), game_color=game_color, indicators=indicators, frames_per_state=fpst, skip_frames=skip, # discre=actions, #passing custom actions use_track = 1, #change test track after 1 ept round episodes_per_track = test_episodes_per_track, tr_complexity = 12, #test on a medium complexity track tr_width = 45, patience = 2.0, off_track = 2.0, end_on_contact = False, oily_patch = False, num_obstacles = 5, obst_location = obst_loc) #passing fixed obstacle location env_test = DummyVecEnv([lambda: env_test]) eval_callback = EvalCallback(env_test, callback_on_new_best=callback_on_best, #None, n_eval_episodes=test_episodes_per_track*3, eval_freq=test_freq*batch_size, best_model_save_path=eval_log, log_path=eval_log, deterministic=True, render = False) model.learn(total_timesteps = updates*batch_size, log_interval=1, callback=eval_callback) #Save last updated model #file = root+'c{:d}_f{:d}_s{:d}_{}_a{:d}__u{:d}_e{:d}_p{}_bs{:d}'.format( # game_color,fpst,skip,indicators,len(actions),use,ept,patience,batch_size) model.save(file+'_II', cloudpickle=True) param_list=model.get_parameter_list() env2.close() 
env_test.close() ## Enjoy last trained policy if agent=='CarRacing-v3': #create an independent test environment, almost everything in std/random definition env3 = gym.make(agent, seed=None, game_color=game_color, indicators = True, frames_per_state=fpst, skip_frames=skip, # discre=actions, use_track = 2, episodes_per_track = 1, patience = 5.0, off_track = 3.0 ) else: env3 = gym.make(agent) env3 = DummyVecEnv([lambda: env3]) obs = env3.reset() print(obs.shape) done = False pasos = 0 _states=None while not done: # and pasos<1500: action, _states = model.predict(obs, deterministic=True) obs, reward, done, info = env3.step(action) env3.render() pasos+=1 env3.close() print() print(reward, done, pasos) #, info) ## Enjoy best eval_policy obs = env3.reset() print(obs.shape) ## Load bestmodel from eval #if not isinstance(model_test, PPO2): model_test = PPO2.load(eval_log+'best_model', env3) done = False pasos = 0 _states=None while not done: # and pasos<1500: action, _states = model_test.predict(obs, deterministic=True) obs, reward, done, info = env3.step(action) env3.render() pasos+=1 env3.close() print() print(reward, done, pasos) print(action, _states) model_test.save(file+'_evalbest', cloudpickle=True) env2.close() env3.close() env_test.close() print(action, _states) obs.shape ###Output _____no_output_____
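As noted in the parameter list above, `obst_location` entries are track percentages whose sign selects the side of the track. A tiny helper (the function name is ours, not part of the environment's API) makes the convention explicit:

```python
def decode_obstacle_locations(locations):
    """Map signed track-percentage entries to (percent, side) pairs.

    Negative values place the obstacle on the track's left-hand side.
    """
    return [(abs(p), "left" if p < 0 else "right") for p in locations]


obst_loc = [6, -12, 25, -50, 75, -37, 62, -87, 95, -29]
print(decode_obstacle_locations(obst_loc)[:2])  # → [(6, 'right'), (12, 'left')]
```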
courses/08_Plotly_Bokeh/Fire_Australia19.ipynb
###Markdown Want to know which fires broke out after 15 September 2019? ###Code mes = australia_1[(australia_1["acq_date"] >= "2019-09-15")] mes.head() mes.describe() map_sett = folium.Map([-25.274398, 133.775136], zoom_start=4) lat_3 = mes["latitude"].values.tolist() long_3 = mes["longitude"].values.tolist() australia_cluster_3 = MarkerCluster().add_to(map_sett) for lat, lon in zip(lat_3, long_3): folium.Marker([lat, lon]).add_to(australia_cluster_3) map_sett ###Output _____no_output_____ ###Markdown Play with Folium ###Code # 44.4807035, 11.3712528 are the reference coordinates for the map below import folium m1 = folium.Map(location=[44.48, 11.37], tiles='openstreetmap', zoom_start=18) m1.save('map1.html') # folium maps are saved as HTML pages, not as images m1 ###Output _____no_output_____
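The date filter above works because `acq_date` holds ISO `YYYY-MM-DD` strings, for which lexicographic order matches chronological order. A minimal illustration of the same idea without pandas (the record layout here is invented for the example):

```python
def fires_after(records, cutoff):
    # ISO-formatted date strings compare correctly as plain strings
    return [r for r in records if r["acq_date"] >= cutoff]


sample = [
    {"acq_date": "2019-08-01", "latitude": -33.9, "longitude": 151.2},
    {"acq_date": "2019-09-15", "latitude": -25.3, "longitude": 133.8},
    {"acq_date": "2019-10-02", "latitude": -35.3, "longitude": 149.1},
]
print(len(fires_after(sample, "2019-09-15")))  # → 2
```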
presentations/How To - Estimate Pi.ipynb
###Markdown Estimating $\pi$ by Sampling Points

By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie

Notebook released under the Creative Commons Attribution 4.0 License.

---

A stochastic way to estimate the value of $\pi$ is to sample points from a square area. Some of the points will fall within the quarter circle defined by $x^2 + y^2 = 1$. We count what percentage of all points fall within this area, which allows us to estimate the area of the quarter circle and therefore $\pi$. ###Code # Import libraries import math import numpy as np import matplotlib.pyplot as plt in_circle = 0 outside_circle = 0 n = 10 ** 4 # Draw many random points X = np.random.rand(n) Y = np.random.rand(n) for i in range(n): if X[i]**2 + Y[i]**2 > 1: outside_circle += 1 else: in_circle += 1 area_of_quarter_circle = float(in_circle)/(in_circle + outside_circle) pi_estimate = area_of_circle = area_of_quarter_circle * 4 pi_estimate ###Output _____no_output_____ ###Markdown We can visualize the process to see how it works. ###Code # Plot a circle for reference circle1=plt.Circle((0,0),1,color='r', fill=False, lw=2) fig = plt.gcf() fig.gca().add_artist(circle1) # Set the axis limits so the circle doesn't look skewed plt.xlim((0, 1.8)) plt.ylim((0, 1.2)) plt.scatter(X, Y) ###Output _____no_output_____ ###Markdown Finally, let's see how our estimate gets better as we increase $n$. We'll do this by computing the estimate for $\pi$ at each step and plotting that estimate to see how it converges.
###Code in_circle = 0 outside_circle = 0 n = 10 ** 3 # Draw many random points X = np.random.rand(n) Y = np.random.rand(n) # Make a new array pi = np.ndarray(n) for i in range(n): if X[i]**2 + Y[i]**2 > 1: outside_circle += 1 else: in_circle += 1 area_of_quarter_circle = float(in_circle)/(in_circle + outside_circle) pi_estimate = area_of_circle = area_of_quarter_circle * 4 pi[i] = pi_estimate plt.plot(range(n), pi) plt.xlabel('n') plt.ylabel('pi estimate') plt.plot(range(n), [math.pi] * n) ###Output _____no_output_____
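The same estimator can be packaged as a self-contained, seed-controlled function for quick checks (standard library only); with a fixed seed the run is reproducible:

```python
import math
import random


def estimate_pi(n, seed=0):
    """Monte Carlo estimate of pi from n points in the unit square."""
    rng = random.Random(seed)
    in_circle = sum(1 for _ in range(n)
                    if rng.random() ** 2 + rng.random() ** 2 <= 1)
    return 4 * in_circle / n


print(abs(estimate_pi(100_000) - math.pi))  # small for large n
```

The standard error of the estimate shrinks like $1/\sqrt{n}$, which is why the convergence plot above flattens out slowly.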
Concise_Chit_Chat.ipynb
###Markdown Concise Chit Chat

GitHub Repository: 

Code TODO:
1. create a DataLoader class for dataset preprocessing. (Use tf.data.Dataset inside?)
1. Create a PyPI package for easy loading of the Cornell movie corpus dataset(?)
1. Use PyPI module `embeddings` to load `GloVe`, or use tfhub to load `GloVe`?
1. How to do a `clip_norm` (or set `clip_value`) in Keras with Eager mode but without `tf.contrib`?
1. Better names for variables & functions
1. Code cleanup
1. Encapsulate all layers to Model Class:
    1. ChitChatEncoder
    1. ChitChatDecoder
    1. ChitChatModel
1. Re-style to follow the book
1. ...?

Book TODO:
1. Outlines
1. What's seq2seq
1. What's word embedding
1. Split code into snips
1. Write for snips
1. Content cleaning and optimizing
1. ...?

Other:
1. `keras.callbacks.TensorBoard` instead of `tf.contrib.summary`? - `model.fit(callbacks=[TensorBoard(...)])`
1. download url? - http://old.pep.com.cn/gzsx/jszx_1/czsxtbjxzy/qrzptgjzxjc/dzkb/dscl/

config.py ###Code '''doc''' # GO for start of the sentence # DONE for end of the sentence GO = '\b' DONE = '\a' # max words per sentence MAX_LEN = 20 ###Output _____no_output_____ ###Markdown data_loader.py ###Code ''' data loader ''' import gzip import re from typing import ( # Any, List, Tuple, ) import tensorflow as tf import numpy as np # from .config import ( # GO, # DONE, # MAX_LEN, # ) DATASET_URL = 'https://github.com/huan/concise-chit-chat/releases/download/v0.0.1/dataset.txt.gz' DATASET_FILE_NAME = 'concise-chit-chat-dataset.txt.gz' class DataLoader(): '''data loader''' def __init__(self) -> None: print('DataLoader', 'downloading dataset from:', DATASET_URL) dataset_file = tf.keras.utils.get_file( DATASET_FILE_NAME, origin=DATASET_URL, ) print('DataLoader', 'loading dataset from:', dataset_file) # dataset_file = './data/dataset.txt.gz' # with open(path, encoding='iso-8859-1') as f: with gzip.open(dataset_file, 'rt') as f: self.raw_text = f.read().lower() self.queries, self.responses \ = self.__parse_raw_text(self.raw_text) self.size =
len(self.queries) def get_batch( self, batch_size=32, ) -> Tuple[List[List[str]], List[List[str]]]: '''get batch''' # print('corpus_list', self.corpus) batch_indices = np.random.choice( len(self.queries), size=batch_size, ) batch_queries = self.queries[batch_indices] batch_responses = self.responses[batch_indices] return batch_queries, batch_responses def __parse_raw_text( self, raw_text: str ) -> Tuple[List[List[str]], List[List[str]]]: '''doc''' query_list = [] response_list = [] for line in raw_text.strip('\n').split('\n'): query, response = line.split('\t') query, response = self.preprocess(query), self.preprocess(response) query_list.append('{} {} {}'.format(GO, query, DONE)) response_list.append('{} {} {}'.format(GO, response, DONE)) return np.array(query_list), np.array(response_list) def preprocess(self, text: str) -> str: '''doc''' new_text = text new_text = re.sub('[^a-zA-Z0-9 .,?!]', ' ', new_text) new_text = re.sub(' +', ' ', new_text) new_text = re.sub( '([\w]+)([,;.?!#&-\'\"-]+)([\w]+)?', r'\1 \2 \3', new_text, ) if len(new_text.split()) > MAX_LEN: new_text = (' ').join(new_text.split()[:MAX_LEN]) match = re.search('[.?!]', new_text) if match is not None: idx = match.start() new_text = new_text[:idx+1] new_text = new_text.strip().lower() return new_text ###Output _____no_output_____ ###Markdown vocabulary.py ###Code '''doc''' import re from typing import ( List, ) import tensorflow as tf # from .config import ( # DONE, # GO, # MAX_LEN, # ) class Vocabulary: '''voc''' def __init__(self, text: str) -> None: self.tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='') self.tokenizer.fit_on_texts( [GO, DONE] + re.split( r'[\s\t\n]', text, ) ) # additional 1 for the index 0 self.size = 1 + len(self.tokenizer.word_index.keys()) def texts_to_padded_sequences( self, text_list: List[List[str]] ) -> tf.Tensor: '''doc''' sequence_list = self.tokenizer.texts_to_sequences(text_list) padded_sequences = tf.keras.preprocessing.sequence.pad_sequences( 
sequence_list, maxlen=MAX_LEN, padding='post', truncating='post', ) return padded_sequences def padded_sequences_to_texts(self, sequence: List[int]) -> str: return 'tbw' ###Output _____no_output_____ ###Markdown model.py ###Code '''doc''' import tensorflow as tf import numpy as np from typing import ( List, ) # from .vocabulary import Vocabulary # from .config import ( # DONE, # GO, # MAX_LENGTH, # ) EMBEDDING_DIM = 300 LATENT_UNIT_NUM = 500 class ChitEncoder(tf.keras.Model): '''encoder''' def __init__( self, ) -> None: super().__init__() self.lstm_encoder = tf.keras.layers.CuDNNLSTM( units=LATENT_UNIT_NUM, return_state=True, ) def call( self, inputs: tf.Tensor, # shape: [batch_size, max_len, embedding_dim] training=None, mask=None, ) -> tf.Tensor: _, *state = self.lstm_encoder(inputs) return state # shape: ([latent_unit_num], [latent_unit_num]) class ChatDecoder(tf.keras.Model): '''decoder''' def __init__( self, voc_size: int, ) -> None: super().__init__() self.lstm_decoder = tf.keras.layers.CuDNNLSTM( units=LATENT_UNIT_NUM, return_sequences=True, return_state=True, ) self.dense = tf.keras.layers.Dense( units=voc_size, ) self.time_distributed_dense = tf.keras.layers.TimeDistributed( self.dense ) self.initial_state = None def set_state(self, state=None): '''doc''' # import pdb; pdb.set_trace() self.initial_state = state def call( self, inputs: tf.Tensor, # shape: [batch_size, None, embedding_dim] training=False, mask=None, ) -> tf.Tensor: '''chat decoder call''' # batch_size = tf.shape(inputs)[0] # max_len = tf.shape(inputs)[0] # outputs = tf.zeros(shape=( # batch_size, # batch_size # max_len, # max time step # LATENT_UNIT_NUM, # dimention of hidden state # )) # import pdb; pdb.set_trace() outputs, *states = self.lstm_decoder(inputs, initial_state=self.initial_state) self.initial_state = states outputs = self.time_distributed_dense(outputs) return outputs class ChitChat(tf.keras.Model): '''doc''' def __init__( self, vocabulary: Vocabulary, ) -> None: 
super().__init__() self.word_index = vocabulary.tokenizer.word_index self.index_word = vocabulary.tokenizer.index_word self.voc_size = vocabulary.size # [batch_size, max_len] -> [batch_size, max_len, voc_size] self.embedding = tf.keras.layers.Embedding( input_dim=self.voc_size, output_dim=EMBEDDING_DIM, mask_zero=True, ) self.encoder = ChitEncoder() # shape: [batch_size, state] self.decoder = ChatDecoder(self.voc_size) # shape: [batch_size, max_len, voc_size] def call( self, inputs: List[List[int]], # shape: [batch_size, max_len] teacher_forcing_targets: List[List[int]]=None, # shape: [batch_size, max_len] training=None, mask=None, ) -> tf.Tensor: # shape: [batch_size, max_len, embedding_dim] '''call''' batch_size = tf.shape(inputs)[0] inputs_embedding = self.embedding(tf.convert_to_tensor(inputs)) state = self.encoder(inputs_embedding) self.decoder.set_state(state) if training: teacher_forcing_targets = tf.convert_to_tensor(teacher_forcing_targets) teacher_forcing_embeddings = self.embedding(teacher_forcing_targets) # outputs[:, 0, :].assign([self.__go_embedding()] * batch_size) batch_go_embedding = tf.ones([batch_size, 1, 1]) * [self.__go_embedding()] batch_go_one_hot = tf.ones([batch_size, 1, 1]) * [tf.one_hot(self.word_index[GO], self.voc_size)] outputs = batch_go_one_hot output = self.decoder(batch_go_embedding) for t in range(1, MAX_LEN): outputs = tf.concat([outputs, output], 1) if training: target = teacher_forcing_embeddings[:, t, :] decoder_input = tf.expand_dims(target, axis=1) else: decoder_input = self.__indice_to_embedding(tf.argmax(output)) output = self.decoder(decoder_input) return outputs def predict(self, inputs: List[int], temperature=1.) 
-> List[int]: '''doc''' outputs = self([inputs]) outputs = tf.squeeze(outputs) word_list = [] for t in range(1, MAX_LEN): output = outputs[t] indice = self.__logit_to_indice(output, temperature=temperature) word = self.index_word[indice] if indice == self.word_index[DONE]: break word_list.append(word) return ' '.join(word_list) def __go_embedding(self) -> tf.Tensor: return self.embedding( tf.convert_to_tensor(self.word_index[GO])) def __logit_to_indice( self, inputs, temperature=1., ) -> int: ''' [vocabulary_size] convert one hot encoding to indice with temperature ''' inputs = tf.squeeze(inputs) prob = tf.nn.softmax(inputs / temperature).numpy() indice = np.random.choice(self.voc_size, p=prob) return indice def __indice_to_embedding(self, indice: int) -> tf.Tensor: tensor = tf.convert_to_tensor([[indice]]) return self.embedding(tensor) ###Output _____no_output_____ ###Markdown Train Tensor Board[Quick guide to run TensorBoard in Google Colab](https://www.dlology.com/blog/quick-guide-to-run-tensorboard-in-google-colab/)`tensorboard` vs `tensorboard/` ? ###Code LOG_DIR = '/content/data/tensorboard/' get_ipython().system_raw( 'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &' .format(LOG_DIR) ) # Install ! npm install -g localtunnel # Tunnel port 6006 (TensorBoard assumed running) get_ipython().system_raw('lt --port 6006 >> url.txt 2>&1 &') # Get url ! cat url.txt '''train''' import tensorflow as tf # from chit_chat import ( # ChitChat, # DataLoader, # Vocabulary, # ) tf.enable_eager_execution() data_loader = DataLoader() vocabulary = Vocabulary(data_loader.raw_text) chitchat = ChitChat(vocabulary=vocabulary) def loss(model, x, y) -> tf.Tensor: '''doc''' weights = tf.cast( tf.not_equal(y, 0), tf.float32, ) prediction = model( inputs=x, teacher_forcing_targets=y, training=True, ) # implment the following contrib function in a loop ? 
# https://stackoverflow.com/a/41135778/1123955 # https://stackoverflow.com/q/48025004/1123955 return tf.contrib.seq2seq.sequence_loss( prediction, tf.convert_to_tensor(y), weights, ) def grad(model, inputs, targets): '''doc''' with tf.GradientTape() as tape: loss_value = loss(model, inputs, targets) return tape.gradient(loss_value, model.variables) def train() -> int: '''doc''' learning_rate = 1e-3 num_batches = 8000 batch_size = 128 print('Dataset size: {}, Vocabulary size: {}'.format( data_loader.size, vocabulary.size, )) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) root = tf.train.Checkpoint( optimizer=optimizer, model=chitchat, optimizer_step=tf.train.get_or_create_global_step(), ) root.restore(tf.train.latest_checkpoint('./data/save')) print('checkpoint restored.') writer = tf.contrib.summary.create_file_writer('./data/tensorboard') writer.set_as_default() global_step = tf.train.get_or_create_global_step() for batch_index in range(num_batches): global_step.assign_add(1) queries, responses = data_loader.get_batch(batch_size) encoder_inputs = vocabulary.texts_to_padded_sequences(queries) decoder_outputs = vocabulary.texts_to_padded_sequences(responses) grads = grad(chitchat, encoder_inputs, decoder_outputs) optimizer.apply_gradients( grads_and_vars=zip(grads, chitchat.variables) ) if batch_index % 10 == 0: print("batch %d: loss %f" % (batch_index, loss( chitchat, encoder_inputs, decoder_outputs).numpy())) root.save('./data/save/model.ckpt') print('checkpoint saved.') with tf.contrib.summary.record_summaries_every_n_global_steps(1): # your model code goes here tf.contrib.summary.scalar('loss', loss( chitchat, encoder_inputs, decoder_outputs).numpy()) # print('summary had been written.') return 0 def main() -> int: '''doc''' return train() main() #! rm -fvr data/tensorboard # ! pwd # ! rm -frv data/save # ! rm -fr /content/data/tensorboard # ! kill 2823 # ! kill -9 2823 # ! ps axf | grep lt ! 
cat url.txt ###Output your url is: https://bright-fox-51.localtunnel.me ###Markdown chat.py ###Code '''train''' # import tensorflow as tf # from chit_chat import ( # ChitChat, # DataLoader, # Vocabulary, # DONE, # GO, # ) # tf.enable_eager_execution() def main() -> int: '''chat main''' data_loader = DataLoader() vocabulary = Vocabulary(data_loader.raw_text) print('Dataset size: {}, Vocabulary size: {}'.format( data_loader.size, vocabulary.size, )) chitchat = ChitChat(vocabulary) checkpoint = tf.train.Checkpoint(model=chitchat) checkpoint.restore(tf.train.latest_checkpoint('./data/save')) print('checkpoint restored.') return cli(chitchat, vocabulary=vocabulary, data_loader=data_loader) def cli(chitchat: ChitChat, data_loader: DataLoader, vocabulary: Vocabulary): '''command line interface''' index_word = vocabulary.tokenizer.index_word word_index = vocabulary.tokenizer.word_index query = '' while True: try: # Get input sentence query = input('> ').lower() # Check if it is quit case if query == 'q' or query == 'quit': break # Normalize sentence query = data_loader.preprocess(query) query = '{} {} {}'.format(GO, query, DONE) # Evaluate sentence query_sequence = vocabulary.texts_to_padded_sequences([query])[0] response_sequence = chitchat.predict(query_sequence, 1) # Format and print response sentence response_word_list = [ index_word[indice] for indice in response_sequence if indice != 0 and indice != word_index[DONE] ] print('Bot:', ' '.join(response_word_list)) except KeyError: print("Error: Encountered unknown word.") main() ! 
cat /proc/cpuinfo ###Output processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 63 model name : Intel(R) Xeon(R) CPU @ 2.30GHz stepping : 0 microcode : 0x1 cpu MHz : 2299.998 cache size : 46080 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 1 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms xsaveopt arch_capabilities bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf bogomips : 4599.99 clflush size : 64 cache_alignment : 64 address sizes : 46 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 63 model name : Intel(R) Xeon(R) CPU @ 2.30GHz stepping : 0 microcode : 0x1 cpu MHz : 2299.998 cache size : 46080 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 1 apicid : 1 initial apicid : 1 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms xsaveopt arch_capabilities bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf bogomips : 4599.99 clflush size : 64 cache_alignment : 64 address sizes : 46 bits physical, 48 bits virtual power management:
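The `DataLoader.preprocess` rules above (strip unknown characters, collapse spaces, pad punctuation with spaces, cut at the first sentence end, lowercase) can be sanity-checked in isolation. This trimmed re-implementation repeats the same regexes outside the class:

```python
import re

MAX_LEN = 20  # as in config.py above


def preprocess(text: str) -> str:
    # keep only letters, digits, space, and . , ? !
    new_text = re.sub('[^a-zA-Z0-9 .,?!]', ' ', text)
    new_text = re.sub(' +', ' ', new_text)
    # separate punctuation from adjacent words
    new_text = re.sub(r'([\w]+)([,;.?!#&-\'\"-]+)([\w]+)?', r'\1 \2 \3', new_text)
    if len(new_text.split()) > MAX_LEN:
        new_text = ' '.join(new_text.split()[:MAX_LEN])
    match = re.search('[.?!]', new_text)
    if match is not None:
        new_text = new_text[:match.start() + 1]
    return new_text.strip().lower()


print(preprocess("What?"))  # → what ?
print(preprocess("don't"))  # → don t
```

Note that the first substitution already replaces characters like the apostrophe with spaces, so much of the punctuation class in the third regex can never fire; that quirk is preserved here deliberately.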
community/awards/teach_me_qiskit_2018/quantum_machine_learning/1_K_Means/Quantum K-Means Algorithm.ipynb
###Markdown _*Quantum K-Means algorithm*_ The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.*** Contributors Shan Jin, Xi He, Xiaokai Hou, Li Sun, Dingding Wen, Shaojun Wu and Xiaoting Wang$^{1}$1. Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China, 610051*** Introduction Clustering is a typical unsupervised learning task, mainly used to automatically group similar samples into one category. In a clustering algorithm, the samples are divided into different categories according to the similarity between them. Different similarity measures yield different clustering results; the most commonly used measure is the Euclidean distance. What we want to show is the quantum K-Means algorithm. K-Means is a distance-based clustering algorithm that uses distance as the evaluation index for similarity: the smaller the distance between two objects, the greater their similarity. The algorithm considers a cluster to be composed of objects that are close together, so compact and well-separated clusters are the ultimate goal. Experiment design The implementation of the quantum K-Means algorithm mainly uses the swap test to compare the distances among the input data points. Select K points randomly from N data points as centroids, measure the distance from each point to each centroid, and assign it to the nearest centroid's class; recalculate the centroids of each class, and repeat these two steps until the change in the centroids falls below a specified threshold, at which point the algorithm ends. In our example, we selected 10 data points and 2 centroids, and used the swap test circuit to calculate the distances.
Finally, we obtained two clusters of data points. $|0\rangle$ is an auxiliary qubit; through the left $H$ gate, it is changed to $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$. Then, under the control of $|1\rangle$, the circuit swaps the two vectors $|x\rangle$ and $|y\rangle$. Finally, we get the result at the right end of the circuit:$$|0_{anc}\rangle |x\rangle |y\rangle \rightarrow \frac{1}{2}|0_{anc}\rangle(|xy\rangle + |yx\rangle) + \frac{1}{2}|1_{anc}\rangle(|xy\rangle - |yx\rangle)$$If we measure the auxiliary qubit alone, the probability of finding it in the state $|1\rangle$ is:$$P(|1_{anc}\rangle) = \frac{1}{2} - \frac{1}{2}|\langle x | y \rangle|^2$$From this overlap, the Euclidean distance between the two points follows as:$$Euclidean \ distance = \sqrt{2 - 2|\langle x | y \rangle|}$$So we can see that the probability of measuring $|1\rangle$ is positively correlated with the Euclidean distance. The schematic diagram of quantum K-Means is shown in the following picture. [1] To make our algorithm runnable with qiskit, we designed a more detailed circuit to achieve our algorithm. Quantum K-Means circuit

Data points

| point num | theta | phi | lam | x | y |
|---|---|---|---|---|---|
| 1 | 0.01 | pi | pi | 0.710633 | 0.703562 |
| 2 | 0.02 | pi | pi | 0.714142 | 0.7 |
| 3 | 0.03 | pi | pi | 0.717633 | 0.696421 |
| 4 | 0.04 | pi | pi | 0.721107 | 0.692824 |
| 5 | 0.05 | pi | pi | 0.724562 | 0.68921 |
| 6 | 1.31 | pi | pi | 0.886811 | 0.462132 |
| 7 | 1.32 | pi | pi | 0.889111 | 0.457692 |
| 8 | 1.33 | pi | pi | 0.891388 | 0.453241 |
| 9 | 1.34 | pi | pi | 0.893643 | 0.448779 |
| 10 | 1.35 | pi | pi | 0.895876 | 0.444305 |

Quantum K-Means algorithm program ###Code # import math lib from math import pi # import Qiskit from qiskit import Aer, IBMQ, execute from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister # import basic plot tools from qiskit.tools.visualization import plot_histogram # To use local qasm simulator backend = Aer.get_backend('qasm_simulator') ###Output _____no_output_____ ###Markdown In this section, we import the qiskit and math packages used to implement the following code.
We show our algorithm on the ibm_qasm_simulator; if you need to run it on a real quantum computer, please remove the "#" in front of "import Qconfig". ###Code theta_list = [0.01, 0.02, 0.03, 0.04, 0.05, 1.31, 1.32, 1.33, 1.34, 1.35] ###Output _____no_output_____ ###Markdown Here we use the constant pi from the math lib, because the u3 gate takes angle parameters. We also define a list of theta parameter values used by the u3 gate. As above, if you want to run on a real quantum computer, please remove the "#" symbol and configure your local Qconfig.py file. ###Code # create Quantum Register called "qr" with 5 qubits qr = QuantumRegister(5, name="qr") # create Classical Register called "cr" with 5 bits cr = ClassicalRegister(5, name="cr") # Creating Quantum Circuit called "qc" involving your Quantum Register "qr" # and your Classical Register "cr" qc = QuantumCircuit(qr, cr, name="k_means") # Define a loop to compute the distance between each pair of points for i in range(9): for j in range(1,10-i): # Set the theta parameters for this pair of points theta_1 = theta_list[i] theta_2 = theta_list[i+j] # Build the quantum circuit via qiskit qc.h(qr[2]) qc.h(qr[1]) qc.h(qr[4]) qc.u3(theta_1, pi, pi, qr[1]) qc.u3(theta_2, pi, pi, qr[4]) qc.cswap(qr[2], qr[1], qr[4]) qc.h(qr[2]) qc.measure(qr[2], cr[2]) qc.reset(qr) job = execute(qc, backend=backend, shots=1024) result = job.result() print(result) print('theta_1:' + str(theta_1)) print('theta_2:' + str(theta_2)) # print( result.get_data(qc)) plot_histogram(result.get_counts()) ###Output COMPLETED theta_1:0.01 theta_2:0.02
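Because each point is encoded as $U3(\theta, \pi, \pi)|0\rangle$, the swap-test statistics have a closed form that is useful for sanity-checking the simulator counts. A standard-library sketch (the helper names are ours):

```python
import math


def overlap(theta_1, theta_2):
    # U3(theta, pi, pi)|0> = cos(theta/2)|0> - sin(theta/2)|1>,
    # so the inner product of two encoded points is cos((theta_1 - theta_2)/2)
    return math.cos((theta_1 - theta_2) / 2)


def p_ancilla_one(theta_1, theta_2):
    # probability of measuring |1> on the ancilla in the swap test
    return 0.5 - 0.5 * overlap(theta_1, theta_2) ** 2


def euclidean_distance(theta_1, theta_2):
    return math.sqrt(2 - 2 * abs(overlap(theta_1, theta_2)))


# points within one cluster are closer than points across clusters
print(p_ancilla_one(0.01, 0.02) < p_ancilla_one(0.01, 1.31))  # → True
```

Comparing `p_ancilla_one` against the measured frequency of `1` on `cr[2]` (out of the 1024 shots) is a quick way to validate each pairwise circuit.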
run/monitor-flir-service.ipynb
###Markdown Install and monitor the FLIR camera service Install ###Code ! sudo cp flir-server.service /etc/systemd/system/flir-server.service ###Output _____no_output_____ ###Markdown Start the service ###Code ! sudo systemctl start flir-server.service ###Output _____no_output_____ ###Markdown Stop the service ###Code ! sudo systemctl stop flir-server.service ###Output Warning: The unit file, source configuration file or drop-ins of flir-server.service changed on disk. Run 'systemctl daemon-reload' to reload units. ###Markdown Enable it so that it starts on boot ###Code ! sudo systemctl enable flir-server.service # enable at boot ###Output _____no_output_____ ###Markdown Disable it so that it does not start on boot ###Code ! sudo systemctl disable flir-server.service # disable at boot ###Output _____no_output_____ ###Markdown To show status of the service ###Code ! sudo systemctl status flir-server.service ###Output ● flir-server.service - FLIR-camera server service Loaded: loaded (/etc/systemd/system/flir-server.service; enabled; vendor pres Active: active (running) since Tue 2020-03-24 07:53:04 NZDT; 20min ago Main PID: 765 (python) Tasks: 17 (limit: 4915) CGroup: /system.slice/flir-server.service └─765 /home/rov/.virtualenvs/flir/bin/python -u run/flir-server.py  Mar 24 07:53:06 rov-UP python[765]: 19444715 - executing: "Gain.SetValue(6)" Mar 24 07:53:06 rov-UP python[765]: 19444715 - executing: "BlackLevelSelector.Se Mar 24 07:53:06 rov-UP python[765]: 19444715 - executing: "BlackLevel.SetValue(0 Mar 24 07:53:07 rov-UP python[765]: 19444715 - executing: "GammaEnable.SetValue( Mar 24 07:53:07 rov-UP python[765]: Starting : FrontLeft Mar 24 07:53:17 rov-UP python[765]: Stopping FrontLeft due to inactivity. Mar 24 07:54:12 rov-UP python[765]: Starting : FrontLeft Mar 24 07:54:26 rov-UP python[765]: Stopping FrontLeft due to inactivity. Mar 24 07:54:33 rov-UP python[765]: Starting : FrontLeft Mar 24 07:54:48 rov-UP python[765]: Stopping FrontLeft due to inactivity. 
###Markdown Install and monitor the FLIR camera service Install ###Code ! sudo cp flir-server.service /etc/systemd/system/flir-server.service ###Output _____no_output_____ ###Markdown Start the service ###Code ! sudo systemctl start flir-server.service ! sudo systemctl daemon-reload ###Output _____no_output_____ ###Markdown Stop the service ###Code ! sudo systemctl stop flir-server.service ###Output _____no_output_____ ###Markdown Disable it so that it does not start on boot ###Code ! sudo systemctl disable flir-server.service # disable at boot ###Output _____no_output_____ ###Markdown Enable it so that it starts on boot ###Code ! sudo systemctl enable flir-server.service # enable at boot ###Output _____no_output_____ ###Markdown To show status of the service ###Code ! sudo systemctl --no-pager status flir-server.service ! sudo systemctl --no-pager status jupyter.service ###Output Unit jupyter.service could not be found.
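The `flir-server.service` unit file is copied into place above but its contents are never shown. A minimal sketch of what such a unit might contain is below; the `Description` and `ExecStart` line are taken from the `systemctl status` output above, while the remaining directives (`After`, `Restart`, `WantedBy`) are assumptions to adapt:

```ini
[Unit]
Description=FLIR-camera server service
After=network.target

[Service]
# command line matches the one shown by `systemctl status` above
ExecStart=/home/rov/.virtualenvs/flir/bin/python -u run/flir-server.py
Restart=on-failure

[Install]
# `systemctl enable` creates the symlink for this target
WantedBy=multi-user.target
```

After editing the file, run `sudo systemctl daemon-reload` so systemd picks up the change (this is what the warning in the stop step above is asking for).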
Problem_3.ipynb
###Markdown ###Code import math def f(x): return(math.exp(x)) # exponential function a = -1 b = 1 n = 10 h = (b-a)/n # Width of each trapezoid S = h * (f(a)+f(b)) # Starting value of the summation for i in range(1,n): S += f(a+i*h) Integral = S*h print('Integral = %0.4f' %Integral) ###Output Integral = 2.1731 ###Markdown ###Code from math import e def f(x): return e**x a = -1 b = 1 n = 11 h = (b-a)/n S = h*(f(a)+f(b)) for i in range(1,n): S += f(a+i*h) Integral = h*S print('Integral = %.1f' %Integral) ###Output _____no_output_____ ###Markdown Problem 3. Create a Python program that integrates the function f(x) = e^x from x1 = -1 to x2 = 1 as the interval. Save your program to your repository and send your GitHub link here. (30 points) ###Code import math def f(x): return(math.exp(x)) a = -1 b = 1 n = 10 h = (b-a)/n S = h* (f(a)+f(b)) for i in range(1,n): S+=f(a+i*h) Integral = S*h print('Integral = %0.4f' %Integral) ###Output Integral = 2.1731
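Note that the cells above multiply the endpoint term by `h` twice (once when initializing `S` and again in `S*h`), so they underweight the endpoints and print 2.1731 rather than the exact value e − 1/e ≈ 2.3504. For comparison, a sketch of the standard composite trapezoidal rule, which halves the endpoint weights instead:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule: h * [(f(a) + f(b)) / 2 + sum of interior samples]."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2.0
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

approx = trapezoid(math.exp, -1.0, 1.0, 1000)
exact = math.e - 1.0 / math.e  # integral of e^x over [-1, 1]
print('Integral = %0.4f (exact %0.4f)' % (approx, exact))
```

With n = 10 this already gives about 2.3582, and refining n brings it arbitrarily close to the exact answer.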
eda/hyper-parameter_tuning/random_forest-Level0.ipynb
###Markdown Get Training Data ###Code # get training data train_df = pd.read_csv(os.path.join(ROOT_DIR,DATA_DIR,FEATURE_SET,'train.csv.gz')) X_train = train_df.drop(ID_VAR + [TARGET_VAR],axis=1) y_train = train_df.loc[:,TARGET_VAR] X_train.shape y_train.shape y_train[:10] ###Output _____no_output_____ ###Markdown Setup pipeline for hyper-parameter tuning ###Code # set up pipeline pipe = Pipeline([('this_model',ThisModel(n_jobs=-1))]) ###Output _____no_output_____ ###Code def kag_rmsle(y,y_hat): return np.sqrt(mean_squared_error(y,y_hat)) this_scorer = make_scorer(kag_rmsle, greater_is_better=False) grid_search = RandomizedSearchCV(pipe, param_distributions=PARAM_GRID, scoring=this_scorer,cv=5, n_iter=N_ITER, verbose=2, n_jobs=1, refit=False) grid_search.fit(X_train,y_train) grid_search.best_params_ grid_search.best_score_ df = pd.DataFrame(grid_search.cv_results_).sort_values('rank_test_score') df hyper_parameters = dict(FeatureSet=FEATURE_SET,cv_run=df) with open(os.path.join(CONFIG['ROOT_DIR'],'eda','hyper-parameter_tuning',MODEL_ALGO),'wb') as f: pickle.dump(hyper_parameters,f) ###Output _____no_output_____
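`PARAM_GRID` and `N_ITER` are defined in cells not shown here. Conceptually, `RandomizedSearchCV` samples `n_iter` parameter combinations from the grid and keeps the one with the best cross-validated score. A library-free sketch of that sampling loop (the parameter names and the toy loss function are made up purely for illustration):

```python
import random

def random_search(score_fn, param_space, n_iter=50, seed=0):
    """Sample n_iter parameter combinations and keep the lowest-scoring one
    (the core idea behind RandomizedSearchCV, minus the cross-validation)."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_iter):
        params = {k: rng.choice(v) for k, v in param_space.items()}
        s = score_fn(params)
        if s < best_score:
            best_params, best_score = params, s
    return best_params, best_score

# toy "loss surface": pretend max_depth=8, n_estimators=200 is optimal
space = {"max_depth": [2, 4, 8, 16], "n_estimators": [50, 100, 200]}
loss = lambda p: abs(p["max_depth"] - 8) + abs(p["n_estimators"] - 200) / 100
best, score = random_search(loss, space, n_iter=100)
print(best, score)
```

Unlike an exhaustive grid search, the cost is fixed by `n_iter` regardless of how many parameter combinations exist, which is why it scales well to large grids.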
05-statistics.ipynb
###Markdown Statistics **Quick intro to the following packages**- `hepstats`.I will not discuss here the `pyhf` package, which is very niche.Please refer to the [GitHub repository](https://github.com/scikit-hep/pyhf) or related material at https://scikit-hep.org/resources. **`hepstats` - statistics tools and utilities**The package contains 2 submodules:- `hypotests`: provides tools to do hypothesis tests such as discovery test and computations of upper limits or confidence intervals.- `modeling`: includes the Bayesian Block algorithm that can be used to improve the binning of histograms.Note: feel free to complement the introduction below with the several tutorials available from the [GitHub repository](https://github.com/scikit-hep/hepstats). **1. Adaptive binning determination**The Bayesian Block algorithm produces histograms that accurately represent the underlying distribution while being robust to statistical fluctuations. ###Code import numpy as np import matplotlib.pyplot as plt from hepstats.modeling import bayesian_blocks data = np.append(np.random.laplace(size=10000), np.random.normal(5., 1., size=15000)) bblocks = bayesian_blocks(data) plt.hist(data, bins=1000, label='Fine Binning', density=True) plt.hist(data, bins=bblocks, label='Bayesian Blocks', histtype='step', linewidth=2, density=True) plt.legend(loc=2); ###Output _____no_output_____
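Under the hood, Bayesian Blocks is a dynamic program that maximizes a fitness over all partitions of the data (Scargle's "events" fitness, N·log(N/width) per block, minus a per-block prior). A dependency-free sketch of the O(N²) recursion is below, for intuition only; use `hepstats.modeling.bayesian_blocks` for real work, and note that `ncp_prior=4` here is an illustrative constant, not the tuned prior the library uses:

```python
import math

def bayesian_blocks_sketch(t, ncp_prior=4.0):
    """O(N^2) dynamic program: best[k] is the fitness of the optimal
    partition of the first k+1 points; last[k] is where its final block starts."""
    t = sorted(t)
    n = len(t)
    # cell edges: data range endpoints plus midpoints between neighbours
    edges = [t[0]] + [0.5 * (t[i] + t[i + 1]) for i in range(n - 1)] + [t[-1]]
    best = [0.0] * n
    last = [0] * n
    for k in range(n):
        best_fit, best_i = -math.inf, 0
        for i in range(k + 1):
            width = max(edges[k + 1] - edges[i], 1e-12)  # guard duplicate points
            cnt = k + 1 - i
            fit = cnt * (math.log(cnt) - math.log(width)) - ncp_prior
            if i > 0:
                fit += best[i - 1]
            if fit > best_fit:
                best_fit, best_i = fit, i
        best[k], last[k] = best_fit, best_i
    # walk the change points back from the end
    indices = [n]
    while indices[-1] > 0:
        indices.append(last[indices[-1] - 1])
    return [edges[i] for i in reversed(indices)]

# a dense clump next to a sparse tail -> a block edge near the density change
data = [i / 100 for i in range(20)] + [1.0 + i for i in range(20)]
print(bayesian_blocks_sketch(data))
```

The returned edges can be passed straight to `plt.hist(..., bins=edges)`, exactly like the `bblocks` array in the cell above.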
wgan_experiment/WGAN_experiment.ipynb
###Markdown Let's look at: number of labels per image (histogram); quality score per image for images with multiple labels (sigmoid?) ###Code import csv from itertools import islice from collections import defaultdict import pandas as pd import matplotlib.pyplot as plt import torch import torchvision import numpy as np CSV_PATH = 'wgangp_data.csv' realness = {} # real_votes = defaultdict(int) # fake_votes = defaultdict(int) total_votes = defaultdict(int) correct_votes = defaultdict(int) with open(CSV_PATH) as f: dictreader = csv.DictReader(f) for line in dictreader: img_name = line['img'] assert(line['realness'] in ('True', 'False')) assert(line['correctness'] in ('True', 'False')) realness[img_name] = line['realness'] == 'True' if line['correctness'] == 'True': correct_votes[img_name] += 1 total_votes[img_name] += 1 pdx = pd.read_csv(CSV_PATH) pdx pdx[pdx.groupby('img').count() > 50] pdx #df.img # print(df.columns) # print(df['img']) # How much of the time do people guess "fake"? Slightly more than half! pdx[pdx.correctness != pdx.realness].count()/pdx.count() # How much of the time do people guess right? 94.4% pdx[pdx.correctness].count()/pdx.count() #90.3% of the time, real images are correctly labeled as real pdx[pdx.realness][pdx.correctness].count()/pdx[pdx.realness].count() #98.5% of the time, fake images are correctly labeled as fake pdx[~pdx.realness][pdx.correctness].count()/pdx[~pdx.realness].count() len(total_votes.values()) img_dict = {img: [realness[img], correct_votes[img], total_votes[img], correct_votes[img]/total_votes[img]] for img in realness } # print(img_dict.keys()) #img_dict['celeba500/005077_crop.jpg'] plt.hist([v[3] for k,v in img_dict.items() if 'celeb' in k]) def getVotesDict(img_dict): votes_dict = defaultdict(int) for img in total_votes: votes_dict[img_dict[img][2]] += 1 return votes_dict votes_dict = getVotesDict(img_dict) for i in sorted(votes_dict.keys()): print(i, votes_dict[i]) selected_img_dict = {img:value for img, value in img_dict.items() if img_dict[img][2] > 10} less_than_50_dict = {img:value for img, value in img_dict.items() if img_dict[img][2] < 10} imgs_over_50 = list(selected_img_dict.keys()) # print(len(selected_img_dict)) # print(imgs_over_50) pdx_50 = pdx[pdx.img.apply(lambda x: x in imgs_over_50)] len(pdx_50) pdx_under_50 = pdx[pdx.img.apply(lambda x: x not in imgs_over_50)] len(pdx_under_50) len(pdx_under_50[pdx_under_50.img.apply(lambda x: 'wgan' not in x)]) correctness = sorted([value[3] for key, value in selected_img_dict.items()]) print(correctness) plt.hist(correctness) plt.show() correctness = sorted([value[3] for key, value in less_than_50_dict.items()]) # print(correctness) plt.hist(correctness) plt.show() ct = [] # selected_img = [img in total_votes.keys() if total_votes[img] > 1 ] discriminator = torch.load('discriminator.pt', map_location='cpu') # torch.load_state_dict('discriminator.pt') discriminator(torch.zeros(64,64,3)) ###Output _____no_output_____
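The per-image accuracy built above with `defaultdict` loops can be expressed more directly with a pandas groupby. A sketch, using the same column names as the CSV read above (`img`, `correctness`):

```python
import pandas as pd

def per_image_stats(df):
    """Vote count and accuracy per image, equivalent to the dict-based loop above."""
    grouped = df.groupby('img')['correctness']
    return pd.DataFrame({
        'total_votes': grouped.size(),
        'accuracy': grouped.mean(),  # mean of booleans = fraction of correct votes
    })

# tiny illustrative frame standing in for wgangp_data.csv
demo = pd.DataFrame({
    'img': ['a.jpg', 'a.jpg', 'b.jpg'],
    'correctness': [True, False, True],
})
print(per_image_stats(demo))
```

Filtering then becomes a one-liner, e.g. `stats[stats.total_votes > 10]` instead of rebuilding dictionaries.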
site/en/guide/data.ipynb
###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. 
The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. ###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable. 
This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. 
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the `Dataset.map()`, and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. 
Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the python generator. Caution: While this is a convenient approach it has limited portability and scalability. It must run in the same python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended as many tensorflow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. 
###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)` ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( lambda: img_gen.flow_from_directory(flowers), output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds.element_spec for images, label in ds.take(1): print('images.shape: ', images.shape) print('labels.shape: ', labels.shape) ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The 
`tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. 
###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples. 
The CSV file format is a popular format for storing tabular data in plain text.For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).The `experimental.make_csv_dataset` function is the high level interface for reading sets of csv files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. 
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer grained control. It does not support column type inference. Instead you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from two CSV files, each with # four float columns which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file,which may not be desirable, for example if the file starts with a header linethat should be ignored, or if some columns are not required in the input.These lines and fields can be removed with the `header` and `select_cols`arguments respectively. ###Code # Creates a dataset that reads all of the records from two CSV files with # headers, extracting float data from columns 2 and 4. 
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e. 
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0.<!--TODO(mrry): Add this section. 
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. 
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
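The fixed-size buffer mechanism described above can be sketched in plain Python. This is an illustrative analogy only, not how `tf.data` implements `Dataset.shuffle` internally, and the helper name `buffered_shuffle` is made up for this sketch:

```python
import random

def buffered_shuffle(iterable, buffer_size, seed=None):
    """Plain-Python sketch of a fixed-size shuffle buffer.

    Fill a buffer with up to `buffer_size` elements, then repeatedly emit
    a uniformly random element from the buffer and refill it from the input.
    """
    rng = random.Random(seed)
    buffer = []
    for item in iterable:
        buffer.append(item)
        if len(buffer) >= buffer_size:
            # Emit a uniformly random element from the full buffer.
            yield buffer.pop(rng.randrange(len(buffer)))
    # Drain the remaining buffer once the input is exhausted.
    while buffer:
        yield buffer.pop(rng.randrange(len(buffer)))

shuffled = list(buffered_shuffle(range(10), buffer_size=3, seed=0))
print(shuffled)  # A permutation of 0..9; the first output comes from {0, 1, 2}.
```

Note that an element can appear at most `buffer_size - 1` positions earlier than it occurred in the input, which is why a small buffer gives only weak shuffling.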
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
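As a minimal warm-up before the examples that follow (this snippet is not part of the original guide), `Dataset.map` applies the function element-wise, so doubling a range of integers works exactly as you would expect:

```python
import tensorflow as tf

# Map a simple element-wise function over a dataset of integers.
dataset = tf.data.Dataset.range(5)
doubled = dataset.map(lambda x: x * 2)

print([int(x) for x in doubled])  # [0, 2, 4, 6, 8]
```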
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
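Before working with real TFRecord data, it can help to see the structure of a single message. The sketch below builds a `tf.train.Example` by hand and round-trips it through serialization; the feature names (`'text'`, `'label'`) are arbitrary choices for this illustration:

```python
import tensorflow as tf

# Build a tf.train.Example with one bytes feature and one int64 feature.
example = tf.train.Example(features=tf.train.Features(feature={
    'text': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'hello'])),
    'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[7])),
}))
serialized = example.SerializeToString()

# Round-trip: parse the serialized proto back into tensors.
parsed = tf.io.parse_single_example(serialized, {
    'text': tf.io.FixedLenFeature([], tf.string),
    'label': tf.io.FixedLenFeature([], tf.int64),
})
print(parsed['text'].numpy(), int(parsed['label']))
```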
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset, you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 10 steps batch[-5:]) # Take the last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 3 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predicted_steps = tf.data.Dataset.zip((features, labels)) for features, label in predicted_steps.take(5): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
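Before looking at `Dataset.window` itself, the `size`/`shift`/`stride` bookkeeping can be sketched in plain Python. This is an analogy, not the real implementation, and it folds in the `drop_remainder=True` batching used later in this section:

```python
def windows(seq, size, shift=1, stride=1):
    """Sliding windows over a list, mimicking Dataset.window semantics.

    Each window starts `shift` input elements after the previous one and
    takes `size` elements spaced `stride` apart; incomplete trailing
    windows are dropped (like batching with drop_remainder=True).
    """
    out = []
    start = 0
    # A window is complete while its last element, at index
    # start + (size - 1) * stride, is still inside the sequence.
    while start + (size - 1) * stride < len(seq):
        out.append(seq[start:start + size * stride:stride])
        start += shift
    return out

print(windows(list(range(10)), size=5, shift=1))
print(windows(list(range(20)), size=5, shift=3, stride=2))
```

With `shift=1, stride=1` this reproduces the overlapping windows printed by the `flat_map` example below.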
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together, you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts, it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note however that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using tf.data with tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training.
The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. ###Code !pip install tf-nightly import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable.
This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor.
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b, c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer.
Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length.
###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([], [None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown Note: As of **TensorFlow 2.2** the `padded_shapes` argument is no longer required. The default behavior is to pad all axes to the longest in the batch.
###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator`. ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Downloads the FSNS test file. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings.
Therefore if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together.
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer grained control. It does not support column type inference. Instead you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from two CSV files, each with # four float columns which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file,which may not be desirable, for example if the file starts with a header linethat should be ignored, or if some columns are not required in the input.These lines and fields can be removed with the `header` and `select_cols`arguments respectively. ###Code # Creates a dataset that reads all of the records from two CSV files with # headers, extracting float data from columns 2 and 4. 
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY; see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, '/')[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0.<!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
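Before the image-processing examples that follow, here is a minimal, self-contained `Dataset.map()` sketch; the element values are illustrative only:

```python
import tensorflow as tf

# Square each element of a small integer dataset.
dataset = tf.data.Dataset.range(5)
squared = dataset.map(lambda x: x * x)

print([int(x) for x in squared])  # [0, 1, 4, 9, 16]
```

In real pipelines you would usually also pass `num_parallel_calls=tf.data.experimental.AUTOTUNE` to `Dataset.map` so the mapped function runs on several elements in parallel.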
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact.Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice. 
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 5 steps batch[-5:]) # take the remainder predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes, it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`. 
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets`, pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimator To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
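The source-plus-transformation split can be sketched in a few lines; the values here are illustrative only:

```python
import tensorflow as tf

# Source: a Dataset built from in-memory data.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])

# Transformations: each chained call returns a new Dataset.
pipeline = dataset.map(lambda x: x * 2).batch(3)

# Prints two batches of three elements each.
for batch in pipeline:
    print(batch.numpy())
```

Every transformation returns a new `Dataset`, so pipelines compose naturally by method chaining.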
###Code from __future__ import absolute_import, division, print_function, unicode_literals try: !pip install tf-nightly except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable. This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers.
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure.
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function:

###Code
dataset1 = tf.data.Dataset.from_tensor_slices(
    tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32))

dataset1

for z in dataset1:
  print(z.numpy())

dataset2 = tf.data.Dataset.from_tensor_slices(
    (tf.random.uniform([4]),
     tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))

dataset2

dataset3 = tf.data.Dataset.zip((dataset1, dataset2))

dataset3

for a, (b, c) in dataset3:
  print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c))
###Output
_____no_output_____
###Markdown
Reading input data

Consuming NumPy arrays

See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples.

If all of your input data fits in memory, the simplest way to create a `Dataset` from it is to convert it to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`.

###Code
train, test = tf.keras.datasets.fashion_mnist.load_data()

images, labels = train
images = images/255

dataset = tf.data.Dataset.from_tensor_slices((images, labels))
dataset
###Output
_____no_output_____
###Markdown
Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer.

Consuming Python generators

Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator.

Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code
def count(stop):
  i = 0
  while i < stop:
    yield i
    i += 1

for n in count(5):
  print(n)
###Output
_____no_output_____
###Markdown
The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`.

The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments.

The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`.

###Code
ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=())

for count_batch in ds_counter.repeat().batch(10).take(10):
  print(count_batch.numpy())
###Output
_____no_output_____
###Markdown
The `output_shapes` argument is not *required*, but is highly recommended as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`.

It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods.

Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length.

###Code
def gen_series():
  i = 0
  while True:
    size = np.random.randint(0, 10)
    yield i, np.random.normal(size=(size,))
    i += 1

for i, series in gen_series():
  print(i, ":", str(series))
  if i > 5:
    break
###Output
_____no_output_____
###Markdown
The first output is an `int32`, the second is a `float32`.

The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`.

###Code
ds_series = tf.data.Dataset.from_generator(
    gen_series,
    output_types=(tf.int32, tf.float32),
    output_shapes=((), (None,)))

ds_series
###Output
_____no_output_____
###Markdown
Now it can be used like a regular `tf.data.Dataset`.
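For example, you can pull a few elements with `Dataset.take`; the sketch below redefines the generator and dataset so it runs on its own:

```python
import numpy as np
import tensorflow as tf

def gen_series():
    # Same generator as above: (index, random vector of random length) pairs.
    i = 0
    while True:
        size = np.random.randint(0, 10)
        yield i, np.random.normal(size=(size,))
        i += 1

ds_series = tf.data.Dataset.from_generator(
    gen_series,
    output_types=(tf.int32, tf.float32),
    output_shapes=((), (None,)))

# Each element is a (scalar index, variable-length vector) pair.
for i, series in ds_series.take(3):
    print(i.numpy(), series.shape)
```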
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([], [None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown Note: As of **TensorFlow 2.2** the `padded_shapes` argument is no longer required. The default behavior is to pad all axes to the longest in the batch. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord dataSee [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example.The `tf.data` API supports a variety of file formats so that you can processlarge datasets that do not fit in memory. For example, the TFRecord file formatis a simple record-oriented binary format that many TensorFlow applications usefor training data. The `tf.data.TFRecordDataset` class enables you tostream over the contents of one or more TFRecord files as part of an inputpipeline. Here is an example using the test file from the French Street Name Signs (FSNS). 
###Code # Creates a dataset that reads all of the examples from two files. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. Therefore if you havetwo sets of files for training and validation purposes, you can create a factorymethod that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text dataSee [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.Many datasets are distributed as one or more text files. The`tf.data.TextLineDataset` provides an easy way to extract lines from one or moretext files. Given one or more filenames, a `TextLineDataset` will produce onestring-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. 
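To see the interleaving pattern in isolation, here is a minimal sketch using small in-memory datasets in place of files:

```python
import tensorflow as tf

# Three small "files", each represented by a row of numbered elements.
sources = tf.data.Dataset.from_tensor_slices(
    [[10, 11, 12], [20, 21, 22], [30, 31, 32]])

# Pull one element from each source in turn before cycling back.
interleaved = sources.interleave(
    tf.data.Dataset.from_tensor_slices, cycle_length=3, block_length=1)

print([int(x.numpy()) for x in interleaved])
# → [10, 20, 30, 11, 21, 31, 12, 22, 32]
```

`cycle_length` controls how many sources are open at once, and `block_length` how many consecutive elements are taken from each before moving on.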
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which maynot be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or`Dataset.filter()` transformations. Here, you skip the first line, then filter tofind only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples. 
The CSV file format is a popular format for storing tabular data in plain text.For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).The `experimental.make_csv_dataset` function is the high level interface for reading sets of csv files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. 
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer grained control. It does not support column type inference. Instead you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from two CSV files, each with # four float columns which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file,which may not be desirable, for example if the file starts with a header linethat should be ignored, or if some columns are not required in the input.These lines and fields can be removed with the `header` and `select_cols`arguments respectively. ###Code # Creates a dataset that reads all of the records from two CSV files with # headers, extracting float data from columns 2 and 4. 
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, '/')[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batchingThe simplest form of batching stacks `n` consecutive elements of a dataset intoa single element. The `Dataset.batch()` transformation does exactly this, withthe same constraints as the `tf.stack()` operator, applied to each componentof the elements: i.e. 
for each component *i*, all elements must have a tensorof the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with paddingThe above recipe works for tensors that all have the same size. However, manymodels (e.g. sequence models) work with input data that can have varying size(e.g. sequences of different lengths). To handle this case, the`Dataset.padded_batch` transformation enables you to batch tensors ofdifferent shape by specifying one or more dimensions in which they may bepadded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different paddingfor each dimension of each component, and it may be variable-length (signifiedby `None` in the example above) or constant-length. It is also possible tooverride the padding value, which defaults to 0.<!--TODO(mrry): Add this section. 
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochsThe `tf.data` API offers two main ways to process multiple epochs of the samedata.The simplest way to iterate over a dataset in multiple epochs is to use the`Dataset.repeat()` transformation. First, create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeatthe input indefinitely.The `Dataset.repeat` transformation concatenates itsarguments without signaling the end of one epoch and the beginning of the nextepoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. 
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input dataThe `Dataset.shuffle()` transformation maintains a fixed-sizebuffer and chooses the next element uniformly at random from that buffer.Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters.`Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. 
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing dataThe `Dataset.map(f)` transformation produces a new dataset by applying a givenfunction `f` to each element of the input dataset. It is based on the[`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) functionthat is commonly applied to lists (and other structures) in functionalprogramming languages. The function `f` takes the `tf.Tensor` objects thatrepresent a single element in the input, and returns the `tf.Tensor` objectsthat will represent a single element in the new dataset. Its implementation usesstandard TensorFlow operations to transform one element into another.This section covers common examples of how to use `Dataset.map()`. 
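As a minimal warm-up before the image examples, here is `Dataset.map()` applied to a simple arithmetic function:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(5)

# The mapped function receives one element at a time and
# returns the transformed element.
squared = dataset.map(lambda x: x * x)

print([int(x.numpy()) for x in squared])  # → [0, 1, 4, 9, 16]
```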
Decoding image data and resizing itWhen training a neural network on real-world image data, it is often necessaryto convert images of different sizes to a common size, so that they may bebatched into a fixed size.Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logicFor performance reasons, use TensorFlow operations forpreprocessing your data whenever possible. However, it is sometimes useful tocall external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. 
Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`.To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`, you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messagesMany input pipelines extract `tf.train.Example` protocol buffer messages from aTFRecord format. Each `tf.train.Example` record contains one or more "features",and the input pipeline typically converts these features into tensors. 
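To see what such a message looks like, here is a minimal sketch that builds a `tf.train.Example` by hand, serializes it, and parses it back; the feature names and values are illustrative:

```python
import tensorflow as tf

# Build an Example with one bytes feature and one int64 feature.
example = tf.train.Example(features=tf.train.Features(feature={
    'image/text': tf.train.Feature(
        bytes_list=tf.train.BytesList(value=[b'Rue Perreyon'])),
    'image/height': tf.train.Feature(
        int64_list=tf.train.Int64List(value=[64])),
}))

serialized = example.SerializeToString()

# Parse it back with a matching feature spec.
parsed = tf.io.parse_single_example(serialized, {
    'image/text': tf.io.FixedLenFeature([], tf.string),
    'image/height': tf.io.FixedLenFeature([], tf.int64),
})

print(parsed['image/text'].numpy())  # → b'Rue Perreyon'
```

An input pipeline typically applies the same parsing spec via `Dataset.map`, as shown with the real data below.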
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact.Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice. 
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 5 steps batch[-5:]) # take the remainder predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes, it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`. 
This is more applicable when you have a separate `data.Dataset` for each class.Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is thatit needs a separate `tf.data.Dataset` per class. Using `Dataset.filter`works, but results in all the data being loaded twice.The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.`data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.The elements of `creditcard_ds` are already `(features, label)` pairs. 
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. 
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you also need to pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. 
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimatorTo use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simplyreturn the `Dataset` from the `input_fn` and the framework will take care of consuming its elementsfor you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple,reusable pieces. For example, the pipeline for an image model might aggregatedata from files in a distributed file system, apply random perturbations to eachimage, and merge randomly selected images into a batch for training. Thepipeline for a text model might involve extracting symbols from raw text data,converting them to embedding identifiers with a lookup table, and batchingtogether sequences of different lengths. The `tf.data` API makes it possible tohandle large amounts of data, different data formats, and perform complextransformations.The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents asequence of elements, in which each element consists of one or more components.For example, in an image pipeline, an element might be a single trainingexample, with a pair of tensor components representing the image and its label.There are two distinct ways to create a dataset:* A data **source** constructs a `Dataset` from data stored in memory or in one or more files.* A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. 
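The source/transformation split mirrors ordinary Python iterator pipelines. As a rough, framework-free analogy (the helper names below are illustrative, not `tf.data` APIs), a source yields elements, and every transformation wraps an existing iterable in a new one:

```python
# "Source": produces elements from data already in memory.
def from_list(data):
    for item in data:
        yield item

# "Transformations": each one wraps an existing iterable in a new one.
def map_every(iterable, fn):
    for item in iterable:
        yield fn(item)

def batch_every(iterable, size):
    buf = []
    for item in iterable:
        buf.append(item)
        if len(buf) == size:
            yield buf
            buf = []
    if buf:  # final, possibly smaller, batch
        yield buf

# Chained much like Dataset.from_tensor_slices(...).map(...).batch(...):
pipeline = batch_every(map_every(from_list([1, 2, 3, 4, 5]), lambda x: x * 2), 2)
print(list(pipeline))  # [[2, 4], [6, 8], [10]]
```

`tf.data` applies the same idea, but its datasets are re-iterable, graph-friendly, and can be backed by files rather than in-memory lists.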
###Code from __future__ import absolute_import, division, print_function, unicode_literals try: %tensorflow_version 2.x except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanicsTo create an input pipeline, you must start with a data *source*. For example,to construct a `Dataset` from data in memory, you can use`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.Alternatively, if your input data is stored in a file in the recommendedTFRecord format, you can use `tf.data.TFRecordDataset()`.Once you have a `Dataset` object, you can *transform* it into a new `Dataset` bychaining method calls on the `tf.data.Dataset` object. For example, you canapply per-element transformations such as `Dataset.map()`, and multi-elementtransformations such as `Dataset.batch()`. See the documentation for`tf.data.Dataset` for a complete list of transformations.The `Dataset` object is a Python iterable. This makes it possible to consume itselements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming itselements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce`transformation, which reduces all elements to produce a single result. Thefollowing example illustrates how to use the `reduce` transformation to computethe sum of a dataset of integers. 
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structureA dataset contains elements that each have the same (nested) structure and theindividual components of the structure can be of any type representable by`tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`,`TensorArray`, or `Dataset`.The `Dataset.element_spec` property allows you to inspect the type of eachelement component. The property returns a *nested structure* of `tf.TypeSpec`objects, matching the structure of the element, which may be a single component,a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. 
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from it is to convert it to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). 
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0,10) yield i, np.random.normal(size=(size,)) i += 1 for i,series in gen_series(): print(i,":",str(series)) if i>5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes = ((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. 
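Incidentally, the reason `Dataset.from_generator` takes a callable rather than a generator object (noted above) is easy to see in plain Python: a generator is exhausted after one pass, while a callable can produce a fresh generator for each epoch. A minimal sketch, with no `tf.data` involved:

```python
def count(stop):
    # Same toy generator as above.
    i = 0
    while i < stop:
        yield i
        i += 1

# A generator object is exhausted after a single pass...
gen = count(3)
print(list(gen))  # [0, 1, 2]
print(list(gen))  # [] -- nothing left to yield

# ...whereas a callable can build a fresh generator every time,
# which is roughly what from_generator relies on to restart iteration.
print(list(count(3)))  # [0, 1, 2]
print(list(count(3)))  # [0, 1, 2]
```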
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([],[None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes = ([32,256,256,3],[32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord dataSee [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example.The `tf.data` API supports a variety of file formats so that you can processlarge datasets that do not fit in memory. For example, the TFRecord file formatis a simple record-oriented binary format that many TensorFlow applications usefor training data. The `tf.data.TFRecordDataset` class enables you tostream over the contents of one or more TFRecord files as part of an inputpipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files. 
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. Therefore if you havetwo sets of files for training and validation purposes, you can create a factorymethod that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file ]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text dataSee [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.Many datasets are distributed as one or more text files. The`tf.data.TextLineDataset` provides an easy way to extract lines from one or moretext files. Given one or more filenames, a `TextLineDataset` will produce onestring-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url+file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`, this makes it easier to shuffle files together. 
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i%3==0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which maynot be desirable, for example if the file starts with a header line, or containscomments. These lines can be removed using the `Dataset.skip()` and`Dataset.filter()` transformations. To apply these transformations to eachfile separately, we use `Dataset.flat_map()` to create a nested `Dataset` foreach file. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples. 
The CSV file format is a popular format for storing tabular data in plain text.For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).The `experimental.make_csv_dataset` function, is the high level interface for reading sets of csv files. It supports column-type-inference, and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. 
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class','fare','survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class, which provides finer-grained control. It does not support column-type inference; instead, you specify the type of each column, which also determines the items yielded by the dataset: ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, which has # four columns that may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments, respectively. ###Code # Creates a dataset that reads all of the records from the CSV file with a # header, extracting data from columns 2 and 4. 
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1,3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown To load the data from the files use the `tf.io.read_file` function: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Convert the file paths to (image, label) pairs: ###Code def process_path(file_path): parts = tf.strings.split(file_path, '/') return tf.io.read_file(file_path), parts[-2] labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batchingThe simplest form of batching stacks `n` consecutive elements of a dataset intoa single element. The `Dataset.batch()` transformation does exactly this, withthe same constraints as the `tf.stack()` operator, applied to each componentof the elements: i.e. for each component *i*, all elements must have a tensorof the exact same shape. 
###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) it = iter(batched_dataset) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` results in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with paddingThe above recipe works for tensors that all have the same size. However, manymodels (e.g. sequence models) work with input data that can have varying size(e.g. sequences of different lengths). To handle this case, the`Dataset.padded_batch()` transformation enables you to batch tensors ofdifferent shape by specifying one or more dimensions in which they may bepadded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch()` transformation allows you to set different paddingfor each dimension of each component, and it may be variable-length (signifiedby `None` in the example above) or constant-length. It is also possible tooverride the padding value, which defaults to 0.<!--TODO(mrry): Add this section. 
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. For example, to create a dataset that repeats its input for 3 epochs: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation passes the input dataset through a random shuffle queue, `tf.queues.RandomShuffleQueue`. 
It maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While a large `buffer_size` shuffles more thoroughly, it can take a lot of memory and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, 
label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing dataThe `Dataset.map(f)` transformation produces a new dataset by applying a givenfunction `f` to each element of the input dataset. It is based on the[`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) functionthat is commonly applied to lists (and other structures) in functionalprogramming languages. The function `f` takes the `tf.Tensor` objects thatrepresent a single element in the input, and returns the `tf.Tensor` objectsthat will represent a single element in the new dataset. Its implementation usesstandard TensorFlow operations to transform one element into another.This section covers common examples of how to use `Dataset.map()`. Decoding image data and resizing itWhen training a neural network on real-world image data, it is often necessaryto convert images of different sizes to a common size, so that they may bebatched into a fixed size.Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(file_path, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. 
###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logicFor performance reasons, we encourage you to use TensorFlow operations forpreprocessing your data whenever possible. However, it is sometimes useful tocall external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`.To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30,30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`, you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,]= tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messagesMany input pipelines extract `tf.train.Example` protocol buffer messages from aTFRecord format. Each `tf.train.Example` record contains one or more "features",and the input pipeline typically converts these features into tensors. 
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames=[fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(raw_example): example = tf.io.parse_example( raw_example[tf.newaxis], {'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string)}) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset, you can split each batch into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: all but the last 5 steps batch[-5:]) # Labels: the last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x:x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func=count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = creditcard_ds.unbatch().filter(lambda features, label: label == 0).repeat() positive_ds = creditcard_ds.unbatch().filter(lambda features, label: label == 1).repeat() for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets([negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds , steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimatorTo use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simplyreturn the `Dataset` from the `input_fn` and the framework will take care of consuming its elementsfor you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
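As a minimal sketch of these two ideas (assuming TensorFlow is imported as `tf`, as in the setup cell that follows), a pipeline chains transformations onto a source:

```python
import tensorflow as tf

# Source: build a Dataset from an in-memory list.
ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])

# Transformations: each call returns a new Dataset.
ds = ds.map(lambda x: x * 2).batch(2)

for batch in ds:
    print(batch.numpy())
```

Here `from_tensor_slices` is the source and `map`/`batch` are transformations; each transformation returns a new `Dataset` rather than mutating the old one.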
###Code from __future__ import absolute_import, division, print_function, unicode_literals try: !pip install tf-nightly except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanicsTo create an input pipeline, you must start with a data *source*. For example,to construct a `Dataset` from data in memory, you can use`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.Alternatively, if your input data is stored in a file in the recommendedTFRecord format, you can use `tf.data.TFRecordDataset()`.Once you have a `Dataset` object, you can *transform* it into a new `Dataset` bychaining method calls on the `tf.data.Dataset` object. For example, you canapply per-element transformations such as `Dataset.map()`, and multi-elementtransformations such as `Dataset.batch()`. See the documentation for`tf.data.Dataset` for a complete list of transformations.The `Dataset` object is a Python iterable. This makes it possible to consume itselements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming itselements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce`transformation, which reduces all elements to produce a single result. Thefollowing example illustrates how to use the `reduce` transformation to computethe sum of a dataset of integers. 
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structureA dataset contains elements that each have the same (nested) structure and theindividual components of the structure can be of any type representable by`tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`,`TensorArray`, or `Dataset`.The `Dataset.element_spec` property allows you to inspect the type of eachelement component. The property returns a *nested structure* of `tf.TypeSpec`objects, matching the structure of the element, which may be a single component,a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. 
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b, c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i < stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=(), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
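For instance, standard `Dataset` methods such as `take` work directly on the generator-backed dataset. A self-contained sketch (repeating the `gen_series` definition from above so it runs on its own):

```python
import numpy as np
import tensorflow as tf

def gen_series():
    # Yields (index, random-length vector) pairs, as above.
    i = 0
    while True:
        size = np.random.randint(0, 10)
        yield i, np.random.normal(size=(size,))
        i += 1

ds_series = tf.data.Dataset.from_generator(
    gen_series,
    output_types=(tf.int32, tf.float32),
    output_shapes=((), (None,)))

# take(), like any other Dataset method, works on the generator-backed dataset.
for i, series in ds_series.take(3):
    print(i.numpy(), series.numpy().shape)
```

Because `from_generator` holds a callable rather than an iterator, each fresh iteration over `ds_series` restarts the generator from the beginning.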
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([], [None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown Note: As of **TensorFlow 2.2** the `padded_shapes` argument is no longer required. The default behavior is to pad all axes to the longest in the batch. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord dataSee [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example.The `tf.data` API supports a variety of file formats so that you can processlarge datasets that do not fit in memory. For example, the TFRecord file formatis a simple record-oriented binary format that many TensorFlow applications usefor training data. The `tf.data.TFRecordDataset` class enables you tostream over the contents of one or more TFRecord files as part of an inputpipeline. Here is an example using the test file from the French Street Name Signs (FSNS). 
###Code # Download the FSNS test file. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames=[fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together.
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which maynot be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or`Dataset.filter()` transformations. Here, you skip the first line, then filter tofind only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples. 
The CSV file format is a popular format for storing tabular data in plain text.For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).The `experimental.make_csv_dataset` function is the high level interface for reading sets of csv files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. 
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, which has # four columns that may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads records from the CSV file, extracting only the # second and fourth columns.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, '/')[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batchingThe simplest form of batching stacks `n` consecutive elements of a dataset intoa single element. The `Dataset.batch()` transformation does exactly this, withthe same constraints as the `tf.stack()` operator, applied to each componentof the elements: i.e. 
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
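The fixed-size-buffer mechanism is easy to simulate outside of TensorFlow. Below is a minimal pure-Python sketch — an analogy, not `tf.data`'s actual implementation — in which elements enter a buffer and each output is drawn uniformly at random from it:

```python
import random

def buffered_shuffle(iterable, buffer_size, seed=None):
    """Mimic Dataset.shuffle: keep a fixed-size buffer and emit a
    uniformly-random buffer element as each new element arrives."""
    rng = random.Random(seed)
    buffer = []
    for item in iterable:
        buffer.append(item)
        if len(buffer) >= buffer_size:
            yield buffer.pop(rng.randrange(len(buffer)))
    # Input exhausted: drain whatever is left, in random order.
    rng.shuffle(buffer)
    yield from buffer

out = list(buffered_shuffle(range(10), buffer_size=3, seed=0))
print(out)  # every element appears exactly once, in a perturbed order
```

One consequence of this scheme is that an element can only trail its input position by at most `buffer_size` output positions — which is exactly why, with `buffer_size=100` and a batch size of 20, the first batch above contains no index over 120.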
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
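The per-element semantics can be sketched in plain Python — an illustrative analogy only, not `tf.data` itself: the transformation is applied lazily, one element at a time, and structured (tuple) elements are unpacked into the function's arguments:

```python
def dataset_map(elements, f):
    """Lazily apply f to each element, unpacking tuple elements into
    separate arguments the way Dataset.map does for structured elements."""
    for element in elements:
        if isinstance(element, tuple):
            yield f(*element)
        else:
            yield f(element)

# Hypothetical (filename, label) pairs, just for illustration.
pairs = [("rose.jpg", 0), ("tulip.jpg", 1)]
print(list(dataset_map(pairs, lambda path, label: (path.upper(), label))))
# → [('ROSE.JPG', 0), ('TULIP.JPG', 1)]
```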
Decoding image data and resizing itWhen training a neural network on real-world image data, it is often necessaryto convert images of different sizes to a common size, so that they may bebatched into a fixed size.Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logicFor performance reasons, use TensorFlow operations forpreprocessing your data whenever possible. However, it is sometimes useful tocall external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. 
Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`.To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`, you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messagesMany input pipelines extract `tf.train.Example` protocol buffer messages from aTFRecord format. Each `tf.train.Example` record contains one or more "features",and the input pipeline typically converts these features into tensors. 
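The record-to-tensors flow can be sketched without protocol buffers. The dict-based records below are purely illustrative stand-ins for parsed `tf.train.Example` messages (the field names mirror the FSNS features used next, and the values are invented):

```python
# Illustrative stand-ins for serialized records: each record exposes a
# dict of named features, as a parsed tf.train.Example would.
records = [
    {"image/encoded": b"<png bytes>", "image/text": b"Rue Perreyon"},
    {"image/encoded": b"<png bytes>", "image/text": b"Avenue de la Gare"},
]

def parse_record(record, keys=("image/encoded", "image/text")):
    # Pick out and order the features the model needs, analogous to
    # tf.io.parse_example with a feature spec.
    return tuple(record[k] for k in keys)

for encoded, text in map(parse_record, records):
    print(text.decode())
```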
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact.Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice. 
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 5 steps batch[-5:]) # take the remainder predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
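Before reaching for `Dataset.window`, it can help to see the `shift`/`stride` logic on a plain list. A minimal pure-Python sketch — an analogy only, since `tf.data` produces nested `Dataset`s rather than lists — keeping only full windows, as `drop_remainder=True` would:

```python
def sliding_windows(seq, size, shift=1, stride=1):
    """Yield full windows of `size` elements: a new window starts every
    `shift` positions; within a window, take every `stride`-th element."""
    start = 0
    while start + (size - 1) * stride < len(seq):
        yield seq[start : start + size * stride : stride]
        start += shift

print(list(sliding_windows(list(range(10)), size=5, shift=1))[:3])
# → [[0, 1, 2, 3, 4], [1, 2, 3, 4, 5], [2, 3, 4, 5, 6]]
print(list(sliding_windows(list(range(10)), size=3, shift=2, stride=2)))
# → [[0, 2, 4], [2, 4, 6], [4, 6, 8]]
```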
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes, it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`. 
This is more applicable when you have a separate `data.Dataset` for each class.Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is thatit needs a separate `tf.data.Dataset` per class. Using `Dataset.filter`works, but results in all the data being loaded twice.The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.`data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.The elements of `creditcard_ds` are already `(features, label)` pairs. 
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown A dataset can also be passed directly to `Model.evaluate`: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimator To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple,reusable pieces. For example, the pipeline for an image model might aggregatedata from files in a distributed file system, apply random perturbations to eachimage, and merge randomly selected images into a batch for training. Thepipeline for a text model might involve extracting symbols from raw text data,converting them to embedding identifiers with a lookup table, and batchingtogether sequences of different lengths. The `tf.data` API makes it possible tohandle large amounts of data, read from different data formats, and performcomplex transformations.The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents asequence of elements, in which each element consists of one or more components.For example, in an image pipeline, an element might be a single trainingexample, with a pair of tensor components representing the image and its label.There are two distinct ways to create a dataset:* A data **source** constructs a `Dataset` from data stored in memory or in one or more files.* A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. 
###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable. This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers.
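For comparison, the same fold can be written with Python's `functools.reduce`, whose contract `Dataset.reduce` mirrors — an initial state plus a two-argument reduce function (note that `functools.reduce` takes the function first and the initial state last):

```python
from functools import reduce

elements = [8, 3, 0, 8, 2, 1]  # same values as the dataset above

# Same contract as dataset.reduce(initial_state, reduce_func): fold every
# element into a running state, then return the final state.
total = reduce(lambda state, value: state + value, elements, 0)
print(total)  # → 22
```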
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structureA dataset contains elements that each have the same (nested) structure and theindividual components of the structure can be of any type representable by`tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`,`tf.TensorArray`, or `tf.data.Dataset`.The `Dataset.element_spec` property allows you to inspect the type of eachelement component. The property returns a *nested structure* of `tf.TypeSpec`objects, matching the structure of the element, which may be a single component,a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. 
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b, c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i < stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=(), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator`: ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( lambda: img_gen.flow_from_directory(flowers), output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds.element_spec for images, label in ds.take(1): print('images.shape: ', images.shape) print('labels.shape: ', labels.shape) ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads the examples from the downloaded FSNS test file.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore, if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together.
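The alternating behavior can be sketched in plain Python as a round-robin over the underlying iterators (a simplified model of `interleave` with `block_length=1`; the helper name is made up):

```python
def round_robin(sources):
    """Yield one item from each source in turn until all are exhausted."""
    iterators = [iter(s) for s in sources]
    while iterators:
        alive = []
        for it in iterators:
            try:
                yield next(it)       # one element per source per cycle
                alive.append(it)
            except StopIteration:
                pass                 # drop exhausted sources
        iterators = alive

files = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]
assert list(round_robin(files)) == ["a1", "b1", "c1", "a2", "b2", "c2"]
```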
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb) and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, which has # four integer columns that may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments, respectively. ###Code # Creates a dataset that reads all of the records from the CSV file, # extracting data from columns 2 and 4 only.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY; see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large `buffer_size` values shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
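That fixed-size buffer can be modeled in plain Python (a simplified sketch, not the actual implementation): fill a buffer from the input, then repeatedly emit a uniformly chosen buffer element and backfill it from the remaining input.

```python
import random

def buffered_shuffle(items, buffer_size, seed=0):
    """Approximate shuffle: sample uniformly from a fixed-size buffer."""
    rng = random.Random(seed)
    buffer, out = [], []
    for item in items:
        buffer.append(item)
        if len(buffer) > buffer_size:
            # Emit a random buffered element; the new item replaces it.
            out.append(buffer.pop(rng.randrange(len(buffer))))
    while buffer:  # drain the buffer at the end of the input
        out.append(buffer.pop(rng.randrange(len(buffer))))
    return out

shuffled = buffered_shuffle(range(1000), buffer_size=100)
assert sorted(shuffled) == list(range(1000))  # a permutation of the input
assert max(shuffled[:20]) <= 120              # early outputs come from early inputs
```

Note how, with a buffer of 100, the first 20 outputs can only come from the first 120 inputs, which is the same index effect shown above.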
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
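The analogy with the functional `map()` is direct; in plain Python, an element-wise transform over `(feature, label)` pairs looks like this (the record values here are made up for illustration):

```python
# Plain-Python analogue of Dataset.map: apply a function to each
# element, where an element may itself be a (feature, label) tuple.
records = [("img_a", 0), ("img_b", 1)]

def preprocess(element):
    feature, label = element
    return feature.upper(), label  # transform one element into another

mapped = list(map(preprocess, records))
assert mapped == [("IMG_A", 0), ("IMG_B", 1)]
```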
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset, you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take all but the last 5 steps batch[-5:]) # Take the last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `tf.data.Dataset` for each class. Here, just use `filter` to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets`, pass the datasets and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts, it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note, however, that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using tf.data with tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation, you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training.
The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. ###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable.
This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor.
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer.
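The arithmetic behind that warning can be sketched without TensorFlow at all: a dense constant serializes at roughly `element_count * itemsize` bytes, so it is easy to check an array against the ~2 GB `tf.GraphDef` ceiling before embedding it. `GRAPHDEF_LIMIT` and `embedded_size_bytes` are illustrative names for this sketch, not `tf.data` APIs:

```python
# Rough estimate of whether embedding an in-memory array as a tf.constant()
# would approach the ~2 GB tf.GraphDef protocol buffer limit noted above.
# Pure-Python sketch; the names and the byte estimate are assumptions.

GRAPHDEF_LIMIT = 2 * 1024**3  # ~2 GiB protobuf message-size ceiling

def embedded_size_bytes(shape, itemsize):
    """Approximate byte size of a dense constant with this shape and dtype size."""
    count = 1
    for dim in shape:
        count *= dim
    return count * itemsize

# The 60000 Fashion-MNIST images above, stored as 28x28 float64 pixels:
size = embedded_size_bytes((60000, 28, 28), 8)
fits = size < GRAPHDEF_LIMIT  # ~376 MB, well under the limit
```

Repeating the estimate for larger images makes it clear why streaming from files, as in the sections below, is preferred for big datasets.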
Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length.
###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)` ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( lambda: img_gen.flow_from_directory(flowers), output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds.element_spec for images, label in ds.take(1): print('images.shape: ', images.shape) print('labels.shape: ', labels.shape) ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The
`tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Downloads a test file of serialized examples. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files.
###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb) and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, supplying # a default for each of its four columns wherever a value is missing. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads records from the CSV file, # extracting data from columns 1 and 3 only.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
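Because `Dataset.map(f)` mirrors the functional `map()`, its element-wise semantics can be sketched with plain Python before the TensorFlow examples below: `f` is applied lazily, once per element, yielding a new sequence of the same length. This is only an analogue for illustration; the real transformation traces `f` into a TensorFlow graph:

```python
def dataset_map(elements, f):
    """Plain-Python analogue of Dataset.map: lazily apply f to every element."""
    for element in elements:
        yield f(element)

# Square each element, much as Dataset.range(5).map(lambda x: x**2) would.
squared = list(dataset_map(range(5), lambda x: x ** 2))
# squared == [0, 1, 4, 9, 16]
```

Nothing is computed until the generator is consumed, which matches the lazy, streaming evaluation of `tf.data` pipelines.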
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 5 steps batch[-5:]) # take the remainder predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note however that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using tf.data with tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training.
The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. ###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable.
This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output 8 3 0 8 2 1 ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output 8 ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output 22 ###Markdown Dataset structure A dataset produces a sequence of *elements*, where each element is the same (nested) structure of *components*. Individual components of the structure can be of any type representable by `tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`. The Python constructs that can be used to express the (nested) structure of elements include `tuple`, `dict`, `NamedTuple`, and `OrderedDict`. In particular, `list` is not a valid construct for expressing the structure of dataset elements. This is because early tf.data users felt strongly about `list` inputs (e.g. passed to `tf.data.Dataset.from_tensors`) being automatically packed as tensors and `list` outputs (e.g. return values of user-defined functions) being coerced into a `tuple`. As a consequence, if you would like a `list` input to be treated as a structure, you need to convert it into `tuple` and if you would like a `list` output to be a single component, then you need to explicitly pack it using `tf.stack`. The `Dataset.element_spec` property allows you to inspect the type of each element component.
The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output shapes: (10,), (), (100,) shapes: (10,), (), (100,) shapes: (10,), (), (100,) shapes: (10,), (), (100,) ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`.
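The slicing behavior is easy to mirror in plain Python: `from_tensor_slices` yields one element per index along the first axis, and a tuple of inputs becomes a stream of tuples. The helper below is a sketch only (not the actual tf.data implementation):

```python
def from_tensor_slices(*arrays):
    # Yield one element per row along the first axis; a tuple of inputs
    # becomes a stream of tuples, so all inputs must agree on the size
    # of their first dimension.
    n = len(arrays[0])
    assert all(len(a) == n for a in arrays), "first dimensions must match"
    for i in range(n):
        yield arrays[0][i] if len(arrays) == 1 else tuple(a[i] for a in arrays)

images = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
labels = [7, 8, 9]
print(list(from_tensor_slices(images, labels)))
# → [([0.1, 0.2], 7), ([0.3, 0.4], 8), ([0.5, 0.6], 9)]
```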
###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output 0 1 2 3 4 ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`.
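The callable-not-iterator requirement can be seen in plain Python: an exhausted generator stays empty, while holding the generator *function* allows a fresh start, which is what `from_generator` relies on. A minimal sketch, reusing the `count` generator defined above:

```python
def count(stop):
    i = 0
    while i < stop:
        yield i
        i += 1

gen = count(3)
print(list(gen))  # → [0, 1, 2]
print(list(gen))  # → []  (the iterator is exhausted)

# Holding the callable instead allows a restart, which is what
# Dataset.from_generator does when it reaches the end of the stream:
make_gen = lambda: count(3)
print(list(make_gen()))  # → [0, 1, 2]
print(list(make_gen()))  # → [0, 1, 2]
```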
###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output [0 1 2 3 4 5 6 7 8 9] [10 11 12 13 14 15 16 17 18 19] [20 21 22 23 24 0 1 2 3 4] [ 5 6 7 8 9 10 11 12 13 14] [15 16 17 18 19 20 21 22 23 24] [0 1 2 3 4 5 6 7 8 9] [10 11 12 13 14 15 16 17 18 19] [20 21 22 23 24 0 1 2 3 4] [ 5 6 7 8 9 10 11 12 13 14] [15 16 17 18 19 20 21 22 23 24] ###Markdown The `output_shapes` argument is not *required* but is highly recommended as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output 0 : [-1.3227 -0.331 -0.3325 -0.5189 0.5194] 1 : [ 0.5885 -0.6888] 2 : [-0.8533] 3 : [] 4 : [ 1.9703 1.1762 -1.031 -0.1325 -0.7229 1.9447 -1.1872 -0.3329 -2.8705] 5 : [ 0.5678 -0.0725 -1.7162 -0.3061 -1.5617] 6 : [-1.2142 -3.412 -0.2757 0.4994] ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)` ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`.
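The padding logic itself is simple and can be sketched in plain Python (the `padded_batch` helper below is hypothetical, not the tf.data API): each batch is right-padded with a fill value up to the length of its longest member.

```python
def padded_batch(sequences, batch_size, pad_value=0.0):
    # Group consecutive variable-length sequences and right-pad each
    # batch to the length of its longest member.
    batches = []
    for i in range(0, len(sequences), batch_size):
        batch = sequences[i:i + batch_size]
        width = max(len(s) for s in batch)
        batches.append([s + [pad_value] * (width - len(s)) for s in batch])
    return batches

print(padded_batch([[1.0], [1.0, 2.0, 3.0], [1.0, 2.0]], batch_size=3))
# → [[[1.0, 0.0, 0.0], [1.0, 2.0, 3.0], [1.0, 2.0, 0.0]]]
```

Note that the padded width varies per batch, which is why the batched dataset can keep a `(None,)` inner shape.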
###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output [18 0 6 12 14 19 3 1 4 27] [[ 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [-0.3802 0.1529 -1.2023 0.187 1.2217 0. 0. 0. 0. ] [ 0.4518 -1.5763 0.4504 -0.1157 1.6575 1.3638 -0.616 0.0365 -0.0865] [ 0.2785 -0.8416 0. 0. 0. 0. 0. 0. 0. ] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 1.9329 -1.1417 0.1192 0.2928 0. 0. 0. 0. 0. ] [-1.6077 0.7447 0.623 0.1847 0. 0. 0. 0. 0. ] [ 2.1967 -1.3904 0.6585 -1.9359 0. 0. 0. 0. 0. ] [-0.4517 1.2796 -2.4923 0. 0. 0. 0. 0. 0. ] [ 0.3869 0. 0. 0. 0. 0. 0. 0. 0. ]] ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output Downloading data from https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz 228818944/228813984 [==============================] - 2s 0us/step 228827136/228813984 [==============================] - 2s 0us/step ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( lambda: img_gen.flow_from_directory(flowers), output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds.element_spec for images, label in ds.take(1): print('images.shape: ', images.shape) print('labels.shape: ', labels.shape) ###Output Found 3670 images belonging to 5 classes.
images.shape: (32, 256, 256, 3) labels.shape: (32, 5) ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tfrecord.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from the file. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001 7905280/7904079 [==============================] - 0s 0us/step 7913472/7904079 [==============================] - 0s 0us/step ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files.
These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors.
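The skip-and-filter pattern corresponds to ordinary list processing. A plain-Python analogue, on made-up sample lines (the data shown here is illustrative, not the actual Titanic CSV contents):

```python
lines = [
    "survived,sex,age,fare",   # header line to skip
    "0,male,22.0,7.25",
    "1,female,38.0,71.2833",
    "1,female,26.0,7.925",
]

def is_survivor(line):
    # Keep lines whose first character is not "0" -- the same
    # first-character test a substring-based predicate would apply.
    return line[0] != "0"

survivors = [line for line in lines[1:] if is_survivor(line)]
print(survivors)
# → ['1,female,38.0,71.2833', '1,female,26.0,7.925']
```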
###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas_dataframe.ipynb) for more examples. The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple.
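To see the shape of what a CSV-batching loader produces, here is a plain-Python sketch (the `csv_batches` helper and the sample rows are made up, not part of tf.data) that turns row-oriented CSV into the column-oriented, batched form: a dict mapping each column name to a batch of values, plus a separate label batch.

```python
import csv
import io

data = """survived,sex,fare
0,male,7.25
1,female,71.28
1,female,7.92
0,male,8.05
"""

def csv_batches(text, batch_size, label_name):
    # Parse rows, then regroup them into (features, labels) batches where
    # features maps column name -> list of values, as make_csv_dataset does.
    rows = list(csv.DictReader(io.StringIO(text)))
    for i in range(0, len(rows), batch_size):
        chunk = rows[i:i + batch_size]
        labels = [r.pop(label_name) for r in chunk]
        features = {k: [r[k] for r in chunk] for k in chunk[0]}
        yield features, labels

features, labels = next(csv_batches(data, batch_size=2, label_name="survived"))
print(labels)    # → ['0', '1']
print(features)  # → {'sex': ['male', 'female'], 'fare': ['7.25', '71.28']}
```

Unlike this sketch, `make_csv_dataset` also infers column dtypes and can shuffle; the point here is only the row-to-column regrouping.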
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from a CSV file with # four float columns which may have missing values.
record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads all of the records from a CSV file with a # header, extracting float data from columns 2 and 4. record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details.
The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e. for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full.
Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section. Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation.
First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem.
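The buffer strategy can be sketched in plain Python (the `buffered_shuffle` helper below is hypothetical, not the tf.data implementation): fill a fixed-size buffer, then repeatedly emit a uniformly random buffered element and replace it with the next input element. A consequence worth noting is that an element can only move forward or backward by roughly `buffer_size` positions, which is why small buffers shuffle weakly.

```python
import random

def buffered_shuffle(iterable, buffer_size, seed=0):
    # Fill a fixed-size buffer, then repeatedly emit a random buffered
    # element and replace it with the next input element.
    rng = random.Random(seed)
    it = iter(iterable)
    buffer = []
    for item in it:
        buffer.append(item)
        if len(buffer) == buffer_size:
            break
    for item in it:
        i = rng.randrange(len(buffer))
        yield buffer[i]
        buffer[i] = item
    rng.shuffle(buffer)  # drain whatever is left at the end of the input
    yield from buffer

out = list(buffered_shuffle(range(20), buffer_size=5))
print(sorted(out) == list(range(20)))  # → True: every element appears exactly once
```

Because the buffer holds at most `buffer_size` pending elements, the value emitted at position `i` always comes from the first `i + buffer_size` inputs.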
Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset.
It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`. Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible.
However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example see: [Time series forecasting](../../tutorials/structured_data/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: All except the last 5 steps batch[-5:]) # Labels: The last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 3 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predicted_steps = tf.data.Dataset.zip((features, labels)) for features, label in predicted_steps.take(5): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
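Before working through the tf.data version, the index arithmetic of `window_size`, `shift`, and `stride` can be sketched in plain Python. This is illustrative only; `list_windows` is a made-up helper, not a tf.data API:

```python
def list_windows(seq, window_size, shift=1, stride=1):
    """Full windows over seq: each window takes window_size elements
    spaced `stride` apart; successive windows start `shift` apart."""
    windows = []
    start = 0
    while True:
        window = seq[start : start + (window_size - 1) * stride + 1 : stride]
        if len(window) < window_size:  # like drop_remainder=True
            break
        windows.append(window)
        start += shift
    return windows

print(list_windows(list(range(20)), window_size=5, shift=3, stride=2))
```

The real `Dataset.window` yields each window as a nested dataset rather than a list, which is why the `flat_map` and `batch` steps below are needed.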
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together, you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note, however, that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using tf.data with tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training.
The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. ###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable.
This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset produces a sequence of *elements*, where each element is the same (nested) structure of *components*. Individual components of the structure can be of any type representable by `tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`. The Python constructs that can be used to express the (nested) structure of elements include `tuple`, `dict`, `NamedTuple`, and `OrderedDict`. In particular, `list` is not a valid construct for expressing the structure of dataset elements. This is because early tf.data users felt strongly about `list` inputs (e.g. passed to `tf.data.Dataset.from_tensors`) being automatically packed as tensors and `list` outputs (e.g. return values of user-defined functions) being coerced into a `tuple`. As a consequence, if you would like a `list` input to be treated as a structure, you need to convert it into `tuple` and if you would like a `list` output to be a single component, then you need to explicitly pack it using `tf.stack`. The `Dataset.element_spec` property allows you to inspect the type of each element component.
The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`.
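As a plain-Python analogy (not TensorFlow code), slicing a `(features, labels)` pair along the first dimension behaves like `zip`: one `(feature_row, label)` pair per example. The toy data here is made up for illustration:

```python
features = [[1, 2], [3, 4], [5, 6]]  # three examples, two features each
labels = [0, 1, 0]                   # one label per example

# from_tensor_slices((features, labels)) yields one example at a time:
examples = list(zip(features, labels))
for feature_row, label in examples:
    print(feature_row, label)
```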
###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank.
If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)` ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`.
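Conceptually, padded batching pads every sequence in a batch to the length of the longest sequence in that batch. A plain-Python sketch (illustrative only; `pad_batch` is a made-up helper, not a tf.data API):

```python
def pad_batch(sequences, padding_value=0):
    """Pad each sequence to the longest length in the batch."""
    max_len = max(len(s) for s in sequences)
    return [list(s) + [padding_value] * (max_len - len(s)) for s in sequences]

print(pad_batch([[1], [2, 3, 4], [5, 6]]))
# [[1, 0, 0], [2, 3, 4], [5, 6, 0]]
```

Because the pad length is computed per batch, different batches can be padded to different lengths, which is why the batched sequence axis has an unknown (`None`) shape.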
###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( lambda: img_gen.flow_from_directory(flowers), output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds.element_spec for images, label in ds.take(1): print('images.shape: ', images.shape) print('labels.shape: ', labels.shape) ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together.
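Conceptually, with `cycle_length=3` and the default `block_length=1`, `interleave` draws one element from each of the three file datasets in round-robin order. A plain-Python sketch of that ordering (the line contents here are placeholders, and this ignores how interleave handles inputs of unequal length):

```python
file_lines = {
    "cowper.txt": ["cowper line 1", "cowper line 2"],
    "derby.txt": ["derby line 1", "derby line 2"],
    "butler.txt": ["butler line 1", "butler line 2"],
}

# Round-robin: one line from each file, then the next line from each file.
interleaved = [line for group in zip(*file_lines.values()) for line in group]
print(interleaved)
```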
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, which has # four integer columns that may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads all of the records from the same CSV file, # extracting data from columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large `buffer_size`s shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
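As a minimal illustration of the pattern just described (a sketch, not one of the guide's own cells), `Dataset.map` can apply an elementwise function to a small integer dataset:

```python
import tensorflow as tf

# map() applies the function to each element; here each int64 element
# of the range dataset is squared to form a new dataset.
squares = tf.data.Dataset.range(5).map(lambda x: x * x)
print([int(x) for x in squares])  # [0, 1, 4, 9, 16]
```

The sections below apply the same mechanism to more realistic elements such as image files.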
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example, see [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take all but the last 5 steps as features batch[-5:]) # Take the last 5 steps as labels predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 3 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predicted_steps = tf.data.Dataset.zip((features, labels)) for features, label in predicted_steps.take(5): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
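Before loading the real data, the class-imbalance problem itself can be sketched on a small synthetic dataset (a hypothetical 90/10 split; the name `toy_ds` is illustrative, not from the guide):

```python
import tensorflow as tf

# A hypothetical imbalanced dataset: 90 negative and 10 positive labels.
labels = tf.concat([tf.zeros(90, tf.int32), tf.ones(10, tf.int32)], axis=0)
features = tf.random.uniform((100, 4))
toy_ds = tf.data.Dataset.from_tensor_slices((features, labels))

# Counting the positive labels exposes the skew a resampling step must correct.
n_pos = sum(int(label) for _, label in toy_ds)
print(n_pos / 100)  # 0.1
```

The credit card fraud data used below has the same shape of problem, only with a far more extreme ratio.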
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets`, pass the datasets and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts, it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note, however, that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state, such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using tf.data with tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`: ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training.
The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. ###Code from __future__ import absolute_import, division, print_function, unicode_literals try: %tensorflow_version 2.x except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable.
This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure, and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor.
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer.
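To make the slicing behavior concrete, here is a minimal sketch (the small arrays are invented for illustration): `from_tensor_slices` cuts its inputs along the first axis, so each dataset element is one `(row, label)` pair.

```python
import numpy as np
import tensorflow as tf

# from_tensor_slices slices along the first axis: each element of the
# dataset is one (feature_row, label) pair.
features = np.array([[1, 2], [3, 4], [5, 6]])
labels = np.array([0, 1, 0])

ds = tf.data.Dataset.from_tensor_slices((features, labels))
pairs = [(f.numpy().tolist(), int(l)) for f, l in ds]
print(pairs)  # [([1, 2], 0), ([3, 4], 1), ([5, 6], 0)]
```

Both components must have the same size along the first axis, since that axis becomes the element index.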
Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length.
###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`; the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([], [None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator`: ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not
fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS) dataset. ###Code # Download the FSNS test file. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore, if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files.
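Before downloading real files below, the per-line behavior can be sketched with a small throwaway file (the path and contents here are invented for illustration):

```python
import os
import tempfile

import tensorflow as tf

# Write a tiny temporary text file to show the one-element-per-line behavior.
path = os.path.join(tempfile.mkdtemp(), "lines.txt")
with open(path, "w") as f:
    f.write("alpha\nbeta\ngamma\n")

# TextLineDataset yields one byte-string element per line, newline stripped.
ds = tf.data.TextLineDataset(path)
print([line.numpy().decode() for line in ds])  # ['alpha', 'beta', 'gamma']
```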
###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files, use `Dataset.interleave`. This makes it easier to shuffle files together. Here are the first, second, and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here we skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb) and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file; each # record has four integer columns which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example, if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments, respectively. ###Code # Creates a dataset that reads all of the records from the CSV file, # extracting data from columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files Many datasets are distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: These images are licensed CC-BY; see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown We can read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, '/')[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch()` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch()` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0.<!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First we create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large `buffer_size` values shuffle more thoroughly, they can take a lot of memory and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
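As a minimal illustration of the one-element-in, one-element-out contract before moving to the image examples:

```python
import tensorflow as tf

# map applies f to each element; here f squares each integer.
ds = tf.data.Dataset.range(5).map(lambda x: x * x)
print([int(x) for x in ds])  # [0, 1, 4, 9, 16]
```

Because `f` is traced into a `tf.Graph`, it must be expressed in TensorFlow operations; arbitrary Python needs `tf.py_function`, as shown later in this section.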
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, we encourage you to use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example, see [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset, you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: all except the last 5 steps batch[-5:]) # Labels: the last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together, you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use `Dataset.filter` to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets`, pass the datasets and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels:

###Code
def class_func(features, label):
  return label
###Output
_____no_output_____
###Markdown
The resampler also needs a target distribution, and optionally an initial distribution estimate:
###Code
resampler = tf.data.experimental.rejection_resample(
    class_func, target_dist=[0.5, 0.5], initial_dist=fractions)
###Output
_____no_output_____
###Markdown
The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler:
###Code
resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10)
###Output
_____no_output_____
###Markdown
The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels:
###Code
balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label)
###Output
_____no_output_____
###Markdown
Now the dataset produces examples of each class with 50/50 probability:
###Code
for features, labels in balanced_ds.take(10):
  print(labels.numpy())
###Output
_____no_output_____
###Markdown
Using high-level APIs

tf.keras

The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup:

###Code
train, test = tf.keras.datasets.fashion_mnist.load_data()

images, labels = train
images = images/255.0
labels = labels.astype(np.int32)

fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)

model = tf.keras.Sequential([
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`:
###Code
model.fit(fmnist_train_ds, epochs=2)
###Output
_____no_output_____
###Markdown
If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument:
###Code
model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
For evaluation, you can pass the dataset directly:
###Code
loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
For long datasets, set the number of steps to evaluate:
###Code
loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimatorTo use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simplyreturn the `Dataset` from the `input_fn` and the framework will take care of consuming its elementsfor you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
tf.data: Build TensorFlow input pipelines

The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations.

The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label.

There are two distinct ways to create a dataset:

* A data **source** constructs a `Dataset` from data stored in memory or in one or more files.
* A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
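As a loose, pure-Python sketch (an illustration only, not the `tf.data` API), a *source* can be modeled as a generator that yields elements, and *transformations* as functions that wrap one iterable in another:

```python
# A pure-Python sketch of the source/transformation idea (not tf.data):
# a "source" yields elements; each "transformation" wraps an iterable
# and yields new elements.

def source():
    # Data source: produces elements from memory.
    yield from [1, 2, 3, 4, 5]

def map_transform(dataset, fn):
    # Per-element transformation, analogous to Dataset.map().
    for element in dataset:
        yield fn(element)

def batch_transform(dataset, batch_size):
    # Multi-element transformation, analogous to Dataset.batch().
    batch = []
    for element in dataset:
        batch.append(element)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

pipeline = batch_transform(map_transform(source(), lambda x: x * 2), 2)
print(list(pipeline))  # [[2, 4], [6, 8], [10]]
```

The chained-generator shape mirrors how `Dataset` method calls are chained, though the real API adds graph construction, parallelism, and prefetching on top.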
###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanicsTo create an input pipeline, you must start with a data *source*. For example,to construct a `Dataset` from data in memory, you can use`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.Alternatively, if your input data is stored in a file in the recommendedTFRecord format, you can use `tf.data.TFRecordDataset()`.Once you have a `Dataset` object, you can *transform* it into a new `Dataset` bychaining method calls on the `tf.data.Dataset` object. For example, you canapply per-element transformations such as `Dataset.map()`, and multi-elementtransformations such as `Dataset.batch()`. See the documentation for`tf.data.Dataset` for a complete list of transformations.The `Dataset` object is a Python iterable. This makes it possible to consume itselements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming itselements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce`transformation, which reduces all elements to produce a single result. Thefollowing example illustrates how to use the `reduce` transformation to computethe sum of a dataset of integers. 
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structureA dataset contains elements that each have the same (nested) structure and theindividual components of the structure can be of any type representable by`tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`,`tf.TensorArray`, or `tf.data.Dataset`.The `Dataset.element_spec` property allows you to inspect the type of eachelement component. The property returns a *nested structure* of `tf.TypeSpec`objects, matching the structure of the element, which may be a single component,a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. 
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function:

###Code
dataset1 = tf.data.Dataset.from_tensor_slices(
    tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32))

dataset1

for z in dataset1:
  print(z.numpy())

dataset2 = tf.data.Dataset.from_tensor_slices(
   (tf.random.uniform([4]),
    tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))

dataset2

dataset3 = tf.data.Dataset.zip((dataset1, dataset2))

dataset3

for a, (b,c) in dataset3:
  print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c))
###Output
_____no_output_____
###Markdown
Reading input data

Consuming NumPy arrays

See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples.

If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`.

###Code
train, test = tf.keras.datasets.fashion_mnist.load_data()

images, labels = train
images = images/255

dataset = tf.data.Dataset.from_tensor_slices((images, labels))
dataset
###Output
_____no_output_____
###Markdown
Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer.

Consuming Python generators

Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator.

Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code
def count(stop):
  i = 0
  while i<stop:
    yield i
    i += 1

for n in count(5):
  print(n)
###Output
_____no_output_____
###Markdown
The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`.

The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments.

The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`.

###Code
ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=())

for count_batch in ds_counter.repeat().batch(10).take(10):
  print(count_batch.numpy())
###Output
_____no_output_____
###Markdown
The `output_shapes` argument is not *required* but is highly recommended as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`.

It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods.

Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length.

###Code
def gen_series():
  i = 0
  while True:
    size = np.random.randint(0, 10)
    yield i, np.random.normal(size=(size,))
    i += 1

for i, series in gen_series():
  print(i, ":", str(series))
  if i > 5:
    break
###Output
_____no_output_____
###Markdown
The first output is an `int32`, the second is a `float32`.

The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`.

###Code
ds_series = tf.data.Dataset.from_generator(
    gen_series,
    output_types=(tf.int32, tf.float32),
    output_shapes=((), (None,)))

ds_series
###Output
_____no_output_____
###Markdown
Now it can be used like a regular `tf.data.Dataset`.
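Because the vectors in this dataset have different lengths, batching them requires padding them to a common length. The padding idea behind `Dataset.padded_batch` can be sketched in pure Python (an illustration only, not the tf.data implementation):

```python
# Pure-Python sketch of padding a batch of variable-length sequences:
# pad each sequence with a fill value up to the batch's longest length,
# so the batch forms a rectangular array.
def pad_batch(sequences, pad_value=0):
    longest = max(len(s) for s in sequences)
    return [s + [pad_value] * (longest - len(s)) for s in sequences]

batch = [[1, 2, 3], [4], [5, 6]]
print(pad_batch(batch))  # [[1, 2, 3], [4, 0, 0], [5, 6, 0]]
```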
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`.

###Code
ds_series_batch = ds_series.shuffle(20).padded_batch(10)

ids, sequence_batch = next(iter(ds_series_batch))
print(ids.numpy())
print()
print(sequence_batch.numpy())
###Output
_____no_output_____
###Markdown
For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.

First download the data:

###Code
flowers = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)
###Output
_____no_output_____
###Markdown
Create the `image.ImageDataGenerator`:
###Code
img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20)

images, labels = next(img_gen.flow_from_directory(flowers))

print(images.dtype, images.shape)
print(labels.dtype, labels.shape)

ds = tf.data.Dataset.from_generator(
    img_gen.flow_from_directory, args=[flowers],
    output_types=(tf.float32, tf.float32),
    output_shapes=([32,256,256,3], [32,5])
)

ds
###Output
_____no_output_____
###Markdown
Consuming TFRecord data

See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example.

The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline.

Here is an example using the test file from the French Street Name Signs (FSNS).

###Code
# Download the FSNS test file.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. Therefore if you havetwo sets of files for training and validation purposes, you can create a factorymethod that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text dataSee [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.Many datasets are distributed as one or more text files. The`tf.data.TextLineDataset` provides an easy way to extract lines from one or moretext files. Given one or more filenames, a `TextLineDataset` will produce onestring-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. 
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which maynot be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or`Dataset.filter()` transformations. Here, you skip the first line, then filter tofind only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples. 
The CSV file format is a popular format for storing tabular data in plain text.For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).The `experimental.make_csv_dataset` function is the high level interface for reading sets of csv files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. 
###Code
titanic_batches = tf.data.experimental.make_csv_dataset(
    titanic_file, batch_size=4,
    label_name="survived", select_columns=['class', 'fare', 'survived'])

for feature_batch, label_batch in titanic_batches.take(1):
  print("'survived': {}".format(label_batch))
  for key, value in feature_batch.items():
    print("  {!r:20s}: {}".format(key, value))
###Output
_____no_output_____
###Markdown
There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead you must specify the type of each column.

###Code
titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string]
dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True)

for line in dataset.take(10):
  print([item.numpy() for item in line])
###Output
_____no_output_____
###Markdown
If some columns are empty, this low-level interface allows you to provide default values instead of column types.

###Code
%%writefile missing.csv
1,2,3,4
,2,3,4
1,,3,4
1,2,,4
1,2,3,
,,,

# Creates a dataset that reads all of the records from the CSV file, which has
# four columns that may have missing values.
record_defaults = [999,999,999,999]
dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults)
dataset = dataset.map(lambda *items: tf.stack(items))
dataset

for line in dataset:
  print(line.numpy())
###Output
_____no_output_____
###Markdown
By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively.

###Code
# Creates a dataset that reads records from the CSV file, extracting only
# the second and fourth columns.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batchingThe simplest form of batching stacks `n` consecutive elements of a dataset intoa single element. The `Dataset.batch()` transformation does exactly this, withthe same constraints as the `tf.stack()` operator, applied to each componentof the elements: i.e. 
for each component *i*, all elements must have a tensorof the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with paddingThe above recipe works for tensors that all have the same size. However, manymodels (e.g. sequence models) work with input data that can have varying size(e.g. sequences of different lengths). To handle this case, the`Dataset.padded_batch` transformation enables you to batch tensors ofdifferent shape by specifying one or more dimensions in which they may bepadded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different paddingfor each dimension of each component, and it may be variable-length (signifiedby `None` in the example above) or constant-length. It is also possible tooverride the padding value, which defaults to 0.<!--TODO(mrry): Add this section. 
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochsThe `tf.data` API offers two main ways to process multiple epochs of the samedata.The simplest way to iterate over a dataset in multiple epochs is to use the`Dataset.repeat()` transformation. First, create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeatthe input indefinitely.The `Dataset.repeat` transformation concatenates itsarguments without signaling the end of one epoch and the beginning of the nextepoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. 
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input dataThe `Dataset.shuffle()` transformation maintains a fixed-sizebuffer and chooses the next element uniformly at random from that buffer.Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters.`Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. 
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing dataThe `Dataset.map(f)` transformation produces a new dataset by applying a givenfunction `f` to each element of the input dataset. It is based on the[`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) functionthat is commonly applied to lists (and other structures) in functionalprogramming languages. The function `f` takes the `tf.Tensor` objects thatrepresent a single element in the input, and returns the `tf.Tensor` objectsthat will represent a single element in the new dataset. Its implementation usesstandard TensorFlow operations to transform one element into another.This section covers common examples of how to use `Dataset.map()`. 
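As a pure-Python analogy (an illustration only, not the tf.data implementation), a `map`-style transformation applies a function to each `(features, label)` element and yields a transformed element of the same structure:

```python
# Pure-Python sketch of a per-element preprocessing step, analogous to
# Dataset.map(f) over (feature, label) pairs.
def preprocess(feature, label):
    # Normalize the feature; pass the label through unchanged.
    return feature / 255.0, label

pairs = [(255, 1), (0, 0), (51, 1)]
processed = [preprocess(f, l) for f, l in pairs]
print(processed)  # [(1.0, 1), (0.0, 0), (0.2, 1)]
```

In `tf.data`, the same function would receive `tf.Tensor` arguments and run as TensorFlow graph operations, which is what allows the map to execute efficiently and in parallel.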
Decoding image data and resizing itWhen training a neural network on real-world image data, it is often necessaryto convert images of different sizes to a common size, so that they may bebatched into a fixed size.Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logicFor performance reasons, use TensorFlow operations forpreprocessing your data whenever possible. However, it is sometimes useful tocall external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. 
Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`.To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`, you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messagesMany input pipelines extract `tf.train.Example` protocol buffer messages from aTFRecord format. Each `tf.train.Example` record contains one or more "features",and the input pipeline typically converts these features into tensors. 
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact.Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice. 
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset, you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: all except the last 5 steps batch[-5:]) # Labels: the last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
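The window geometry can be sketched in plain Python before looking at the `tf.data` version. This is a simplified analogue of `Dataset.window(size, shift=shift, stride=stride)` followed by `sub.batch(size, drop_remainder=True)` — a sketch for intuition only, not the actual implementation:

```python
# Plain-Python analogue of windowing: each window starts `shift` elements
# after the previous one and takes every `stride`-th element; incomplete
# trailing windows are dropped (like drop_remainder=True).
def make_windows(seq, window_size=5, shift=1, stride=1):
    windows = []
    start = 0
    while True:
        window = seq[start:start + window_size * stride:stride]
        if len(window) < window_size:
            break
        windows.append(window)
        start += shift
    return windows

print(make_windows(list(range(20)), window_size=4, shift=5, stride=3))
# [[0, 3, 6, 9], [5, 8, 11, 14], [10, 13, 16, 19]]
```

With `shift` smaller than `window_size`, consecutive windows overlap, which is exactly what the dense time-series labels above rely on.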
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
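The class counting performed a few cells below with `Dataset.reduce` folds label batches into running per-class totals. The same fold can be sketched with `functools.reduce` on hypothetical label batches (a stdlib analogue, not the `tf.data` code):

```python
from functools import reduce

# Plain-Python analogue of the streaming count below: fold each batch of
# labels into running per-class totals, then convert to fractions.
def count(counts, labels):
    return {"class_0": counts["class_0"] + labels.count(0),
            "class_1": counts["class_1"] + labels.count(1)}

batches = [[0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]]  # hypothetical labels
counts = reduce(count, batches, {"class_0": 0, "class_1": 0})
total = counts["class_0"] + counts["class_1"]
print(counts, [counts["class_0"] / total, counts["class_1"] / total])
```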
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes, it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`. 
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler returns `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note however that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using high-level APIs tf.kerasThe `tf.keras` API simplifies many aspects of creating and executing machinelearning models. Its `.fit()` and `.evaluate()` and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) 
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimator To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed
under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple,reusable pieces. For example, the pipeline for an image model might aggregatedata from files in a distributed file system, apply random perturbations to eachimage, and merge randomly selected images into a batch for training. Thepipeline for a text model might involve extracting symbols from raw text data,converting them to embedding identifiers with a lookup table, and batchingtogether sequences of different lengths. The `tf.data` API makes it possible tohandle large amounts of data, read from different data formats, and performcomplex transformations.The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents asequence of elements, in which each element consists of one or more components.For example, in an image pipeline, an element might be a single trainingexample, with a pair of tensor components representing the image and its label.There are two distinct ways to create a dataset:* A data **source** constructs a `Dataset` from data stored in memory or in one or more files.* A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. 
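The source-versus-transformation split described above can be sketched with plain Python generators — a conceptual analogue only (the names `source`, `map_t`, and `batch_t` are made up here), not how `tf.data` is actually implemented:

```python
# Plain-Python sketch of the source/transformation split: a source yields
# elements from memory; each transformation wraps an iterable and yields
# new elements lazily, so transformations compose by chaining.
def source(data):                  # analogue of Dataset.from_tensor_slices
    yield from data

def map_t(fn, iterable):           # analogue of Dataset.map
    return (fn(x) for x in iterable)

def batch_t(n, iterable):          # analogue of Dataset.batch
    batch = []
    for x in iterable:
        batch.append(x)
        if len(batch) == n:
            yield batch
            batch = []
    if batch:                      # short final batch, like drop_remainder=False
        yield batch

pipeline = batch_t(2, map_t(lambda x: x + 1, source([8, 3, 0, 8, 2, 1])))
print(list(pipeline))  # [[9, 4], [1, 9], [3, 2]]
```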
###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanicsTo create an input pipeline, you must start with a data *source*. For example,to construct a `Dataset` from data in memory, you can use`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.Alternatively, if your input data is stored in a file in the recommendedTFRecord format, you can use `tf.data.TFRecordDataset()`.Once you have a `Dataset` object, you can *transform* it into a new `Dataset` bychaining method calls on the `tf.data.Dataset` object. For example, you canapply per-element transformations such as `Dataset.map`, and multi-elementtransformations such as `Dataset.batch`. Refer to the documentation for`tf.data.Dataset` for a complete list of transformations.The `Dataset` object is a Python iterable. This makes it possible to consume itselements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming itselements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce`transformation, which reduces all elements to produce a single result. Thefollowing example illustrates how to use the `reduce` transformation to computethe sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structureA dataset produces a sequence of *elements*, where each element isthe same (nested) structure of *components*. 
Individual componentsof the structure can be of any type representable by`tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`,`tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`.The Python constructs that can be used to express the (nested)structure of elements include `tuple`, `dict`, `NamedTuple`, and`OrderedDict`. In particular, `list` is not a valid construct forexpressing the structure of dataset elements. This is becauseearly `tf.data` users felt strongly about `list` inputs (for example, when passedto `tf.data.Dataset.from_tensors`) being automatically packed astensors and `list` outputs (for example, return values of user-definedfunctions) being coerced into a `tuple`. As a consequence, if youwould like a `list` input to be treated as a structure, you needto convert it into `tuple` and if you would like a `list` outputto be a single component, then you need to explicitly pack itusing `tf.stack`.The `Dataset.element_spec` property allows you to inspect the typeof each element component. The property returns a *nested structure*of `tf.TypeSpec` objects, matching the structure of the element,which may be a single component, a tuple of components, or a nestedtuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. 
When using the`Dataset.map`, and `Dataset.filter` transformations,which apply a function to each element, the element structure determines thearguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arraysRefer to the [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) tutorial for more examples.If all of your input data fits in memory, the simplest way to create a `Dataset`from them is to convert them to `tf.Tensor` objects and use`Dataset.from_tensor_slices`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arraysin your TensorFlow graph as `tf.constant()` operations. This works well for asmall dataset, but wastes memory---because the contents of the array will becopied multiple times---and can run into the 2GB limit for the `tf.GraphDef`protocol buffer. Consuming Python generatorsAnother common data source that can easily be ingested as a `tf.data.Dataset` is the python generator.Caution: While this is a convenient approach it has limited portability and scalability. It must run in the same python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). 
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the python generator to a fully functional `tf.data.Dataset`.The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments.The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended as many TensorFlow operations do not support tensors with an unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`.It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods.Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32` the second is a `float32`.The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)` ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. 
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( lambda: img_gen.flow_from_directory(flowers), output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds.element_spec for images, labels in ds.take(1): print('images.shape: ', images.shape) print('labels.shape: ', labels.shape) ###Output _____no_output_____ ###Markdown Consuming TFRecord dataRefer to the [Loading TFRecords](../tutorials/load_data/tfrecord.ipynb) tutorial for an end-to-end example.The `tf.data` API supports a variety of file formats so that you can processlarge datasets that do not fit in memory. For example, the TFRecord file formatis a simple record-oriented binary format that many TensorFlow applications usefor training data. The `tf.data.TFRecordDataset` class enables you tostream over the contents of one or more TFRecord files as part of an inputpipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files. 
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. Therefore if you havetwo sets of files for training and validation purposes, you can create a factorymethod that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text dataRefer to the [Load text](../tutorials/load_data/text.ipynb) tutorial for an end-to-end example.Many datasets are distributed as one or more text files. The`tf.data.TextLineDataset` provides an easy way to extract lines from one or moretext files. Given one or more filenames, a `TextLineDataset` will produce onestring-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. 
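The round-robin pattern that `Dataset.interleave` produces can be sketched in plain Python. Note this simplified analogue, unlike the real operation, simply drops an exhausted iterator instead of replacing it with a new input:

```python
# Plain-Python sketch of round-robin interleaving: cycle_length iterators
# are advanced one element at a time (like interleave with block_length=1).
def interleave(sources, cycle_length):
    iterators = [iter(s) for s in sources[:cycle_length]]
    while iterators:
        for it in list(iterators):
            try:
                yield next(it)
            except StopIteration:
                iterators.remove(it)

files = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]  # stand-ins for per-file line datasets
print(list(interleave(files, cycle_length=3)))  # ['a1', 'b1', 'c1', 'a2', 'b2', 'c2']
```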
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which maynot be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or`Dataset.filter` transformations. Here, you skip the first line, then filter tofind only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data Refer to the [Loading CSV Files](../tutorials/load_data/csv.ipynb) and [Loading Pandas DataFrames](../tutorials/load_data/pandas_dataframe.ipynb) tutorials for more examples.The CSV file format is a popular format for storing tabular data in plain text.For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable 
approach is to load from disk as necessary.The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).The `tf.data.experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer grained control. It does not support column type inference. Instead you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. 
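The default-substitution idea can be sketched with the standard `csv` module: empty fields take the per-column default, and the default's type says how non-empty fields are parsed. This is a stdlib analogue of what `record_defaults` does, not `CsvDataset` itself; the 999 defaults mirror the example below:

```python
import csv
import io

# Plain-Python sketch of per-column default substitution: empty CSV fields
# are replaced by the default, other fields are parsed to the default's type.
def parse_with_defaults(text, defaults):
    rows = []
    for row in csv.reader(io.StringIO(text)):
        rows.append([type(d)(field) if field != "" else d
                     for field, d in zip(row, defaults)])
    return rows

text = "1,2,3,4\n,2,3,4\n1,,3,4\n"
print(parse_with_defaults(text, defaults=[999, 999, 999, 999]))
# [[1, 2, 3, 4], [999, 2, 3, 4], [1, 999, 3, 4]]
```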
###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file above, # which has four integer columns that may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads records from the CSV file, extracting # only columns 2 and 4 (indices 1 and 3). record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. 
The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e. for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. 
Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (including sequence models) work with input data that can have varying size (for example, sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shapes by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section. Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. 
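As a minimal sketch of `Dataset.repeat` on a toy range dataset (not the Titanic data):

```python
import tensorflow as tf

# Minimal sketch: repeat(2) concatenates two passes over the data
# into one stream, with no marker between the passes.
ds = tf.data.Dataset.range(3).repeat(2)
print([e.numpy() for e in ds])  # [0, 1, 2, 0, 1, 2]
```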
First, create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (for example, to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large `buffer_size` values shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. 
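That file-level approach can be sketched with synthetic text shards (the shard files and their contents here are made up purely for illustration):

```python
import os
import tempfile

import tensorflow as tf

# Write three tiny synthetic text shards (hypothetical data).
tmp = tempfile.mkdtemp()
paths = []
for s in range(3):
    path = os.path.join(tmp, "shard{}.txt".format(s))
    with open(path, "w") as f:
        f.write("\n".join("s{}-r{}".format(s, r) for r in range(4)))
    paths.append(path)

# Interleave lines across the shards, then shuffle with a modest buffer.
files = tf.data.Dataset.from_tensor_slices(paths)
lines = files.interleave(tf.data.TextLineDataset, cycle_length=3)
mixed = lines.shuffle(buffer_size=6)
```

Because interleaving already mixes records from different files, the shuffle buffer does not need to span the whole dataset to produce a reasonable shuffle.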
Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. 
It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`. Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.io.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. 
However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function` operation in a `Dataset.map` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors. 
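To make the record layout concrete, here is a hand-built `tf.train.Example` round-trip; the feature names and values are invented for illustration:

```python
import tensorflow as tf

# Build a tf.train.Example by hand (feature names/values are made up).
example = tf.train.Example(features=tf.train.Features(feature={
    'image/text': tf.train.Feature(
        bytes_list=tf.train.BytesList(value=[b'Rue Perrault'])),
    'image/width': tf.train.Feature(
        int64_list=tf.train.Int64List(value=[600])),
}))

# Serialize to the wire format stored inside TFRecord files,
# then parse it back from the serialized bytes.
serialized = example.SerializeToString()
parsed = tf.train.Example.FromString(serialized)
print(parsed.features.feature['image/text'].bytes_list.value[0])
```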
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example see: [Time series forecasting](../../tutorials/structured_data/time_series.ipynb). 
Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice. The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset, you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: All except the last 5 steps batch[-5:]) # Labels: The last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 3 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predicted_steps = tf.data.Dataset.zip((features, labels)) for features, label in predicted_steps.take(5): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer 
control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. Go to the [Dataset structure](dataset_structure) section for details. ###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `Dataset.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together, you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: Go to [Classification on imbalanced data](../tutorials/structured_data/imbalanced_data.ipynb) for a full tutorial. 
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func=count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`. 
This is more applicable when you have a separate `tf.data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.Dataset.sample_from_datasets`, pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.Dataset.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with a 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `Dataset.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. You could use `Dataset.filter` to create those two datasets, but that results in all the data being loaded twice. The `tf.data.Dataset.rejection_resample` method can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. The `rejection_resample` method takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The goal here is to balance the label distribution, and the elements of `creditcard_ds` are already `(features, label)` pairs. 
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampling method deals with individual examples, so in this case you must `unbatch` the dataset before applying that method. The method needs a target distribution, and optionally an initial distribution estimate, as inputs. ###Code resample_ds = ( creditcard_ds .unbatch() .rejection_resample(class_func, target_dist=[0.5,0.5], initial_dist=fractions) .batch(10)) ###Output _____no_output_____ ###Markdown The `rejection_resample` method returns `(class, example)` pairs where the `class` is the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with a 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](./checkpoint.ipynb) so that when your training process restarts, it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note, however, that iterator checkpoints may be large, since transformations such as `Dataset.shuffle` and `Dataset.prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor. 
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on an external state, such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using `tf.data` with `tf.keras` The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `Model.fit`, `Model.evaluate`, and `Model.predict` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = 
model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. 
The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. ###Code !pip install tf-nightly import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable. 
This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. 
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b, c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. 
Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=(), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. 
###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([], [None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown Note: As of **TensorFlow 2.2** the `padded_shapes` argument is no longer required. The default behavior is to pad all axes to the longest in the batch. 
###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord dataSee [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example.The `tf.data` API supports a variety of file formats so that you can processlarge datasets that do not fit in memory. For example, the TFRecord file formatis a simple record-oriented binary format that many TensorFlow applications usefor training data. The `tf.data.TFRecordDataset` class enables you tostream over the contents of one or more TFRecord files as part of an inputpipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. 
Therefore if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. 
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which maynot be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or`Dataset.filter()` transformations. Here, you skip the first line, then filter tofind only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples. 
The CSV file format is a popular format for storing tabular data in plain text.For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).The `experimental.make_csv_dataset` function is the high level interface for reading sets of csv files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. 
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, # substituting a default of 999 for any missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments, respectively. ###Code # Creates a dataset that reads records from the CSV file, # extracting data only from columns 1 and 3. 
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batchingThe simplest form of batching stacks `n` consecutive elements of a dataset intoa single element. The `Dataset.batch()` transformation does exactly this, withthe same constraints as the `tf.stack()` operator, applied to each componentof the elements: i.e. 
for each component *i*, all elements must have a tensorof the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with paddingThe above recipe works for tensors that all have the same size. However, manymodels (e.g. sequence models) work with input data that can have varying size(e.g. sequences of different lengths). To handle this case, the`Dataset.padded_batch` transformation enables you to batch tensors ofdifferent shape by specifying one or more dimensions in which they may bepadded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different paddingfor each dimension of each component, and it may be variable-length (signifiedby `None` in the example above) or constant-length. It is also possible tooverride the padding value, which defaults to 0.<!--TODO(mrry): Add this section. 
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochsThe `tf.data` API offers two main ways to process multiple epochs of the samedata.The simplest way to iterate over a dataset in multiple epochs is to use the`Dataset.repeat()` transformation. First, create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeatthe input indefinitely.The `Dataset.repeat` transformation concatenates itsarguments without signaling the end of one epoch and the beginning of the nextepoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. 
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input dataThe `Dataset.shuffle()` transformation maintains a fixed-sizebuffer and chooses the next element uniformly at random from that buffer.Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters.`Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. 
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing dataThe `Dataset.map(f)` transformation produces a new dataset by applying a givenfunction `f` to each element of the input dataset. It is based on the[`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) functionthat is commonly applied to lists (and other structures) in functionalprogramming languages. The function `f` takes the `tf.Tensor` objects thatrepresent a single element in the input, and returns the `tf.Tensor` objectsthat will represent a single element in the new dataset. Its implementation usesstandard TensorFlow operations to transform one element into another.This section covers common examples of how to use `Dataset.map()`. 
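As a minimal warm-up sketch (a toy example, not one of the notebook's cells), `Dataset.map` applies a function to every element and yields the results as a new dataset:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(5)        # elements: 0, 1, 2, 3, 4
squared = dataset.map(lambda x: x * x)    # apply the function to each element

print([int(x) for x in squared])          # [0, 1, 4, 9, 16]
```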
Decoding image data and resizing itWhen training a neural network on real-world image data, it is often necessaryto convert images of different sizes to a common size, so that they may bebatched into a fixed size.Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logicFor performance reasons, use TensorFlow operations forpreprocessing your data whenever possible. However, it is sometimes useful tocall external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. 
Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`.To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`, you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messagesMany input pipelines extract `tf.train.Example` protocol buffer messages from aTFRecord format. Each `tf.train.Example` record contains one or more "features",and the input pipeline typically converts these features into tensors. 
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact.Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice. 
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset, you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: all except the last 5 steps batch[-5:]) # Labels: the last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes, it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`. 
This is more applicable when you have a separate `data.Dataset` for each class.Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is thatit needs a separate `tf.data.Dataset` per class. Using `Dataset.filter`works, but results in all the data being loaded twice.The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.`data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.The elements of `creditcard_ds` are already `(features, label)` pairs. 
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note however that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor. 
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) 
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimator
To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed
under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines
The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
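To make the source/transformation split concrete, here is a deliberately simplified, TF-free sketch. The helper names (`from_memory`, `map_transform`, `batch_transform`) are invented for illustration, and real `tf.data` pipelines are lazy graph-backed objects rather than eager Python lists:

```python
# Toy sketch of the source/transformation split (hypothetical helper names;
# real tf.data objects are lazy, unlike these eager lists).

def from_memory(values):
    """A 'source': builds a dataset (here just a list) from in-memory data."""
    return list(values)

def map_transform(dataset, fn):
    """A 'transformation': derives a new dataset from an existing one."""
    return [fn(x) for x in dataset]

def batch_transform(dataset, n):
    """Another transformation: groups n consecutive elements."""
    return [dataset[i:i + n] for i in range(0, len(dataset), n)]

ds = from_memory([1, 2, 3, 4])           # source
ds = map_transform(ds, lambda x: x * 2)  # per-element transformation
ds = batch_transform(ds, 2)              # multi-element transformation
print(ds)  # [[2, 4], [6, 8]]
```

In `tf.data`, the analogous chain would be `tf.data.Dataset.from_tensor_slices([1, 2, 3, 4]).map(...).batch(2)`, evaluated lazily as the dataset is consumed.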
###Code from __future__ import absolute_import, division, print_function, unicode_literals try: %tensorflow_version 2.x except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics
To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable. This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers.
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure
A dataset contains elements that each have the same (nested) structure, and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure.
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays
See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from it is to convert it to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators
Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0,10) yield i, np.random.normal(size=(size,)) i += 1 for i,series in gen_series(): print(i,":",str(series)) if i>5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes = ((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([],[None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator`: ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes = ([32,256,256,3],[32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data
See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Download the FSNS test file of serialized Example records.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data
See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url+file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files, use `Dataset.interleave`; this makes it easier to shuffle files together.
Here are the first, second, and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i%3==0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` and `Dataset.filter()` transformations. To apply these transformations to each file separately, use `Dataset.flat_map()` to create a nested `Dataset` for each file. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data
See [Loading CSV Files](../tutorials/load_data/csv.ipynb) and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column-type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class','fare','survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class, which provides finer-grained control. It does not support column-type inference; instead, you specify the type of each column, and that determines the items yielded by the dataset. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, which has # four integer columns that may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads all of the records from the CSV file, # extracting data only from columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1,3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files
There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY; see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown To load the data from the files, use the `tf.io.read_file` function: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Convert the file paths to (image, label) pairs: ###Code def process_path(file_path): parts = tf.strings.split(file_path, '/') return tf.io.read_file(file_path), parts[-2] labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching
The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e. for each component *i*, all elements must have a tensor of the exact same shape.
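Conceptually, the stacking behavior can be sketched in plain Python; the `batch` helper below is a hypothetical, eager stand-in for `Dataset.batch()` (the real transformation stacks tensors lazily):

```python
# Hypothetical, eager sketch of Dataset.batch(n): group n consecutive
# elements into one "batch" (a plain list stands in for a stacked tensor).
def batch(elements, n, drop_remainder=False):
    batches = [elements[i:i + n] for i in range(0, len(elements), n)]
    if drop_remainder and batches and len(batches[-1]) < n:
        batches.pop()  # discard the final, partial batch
    return batches

print(batch(list(range(10)), 4))
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]  (last batch is partial)
print(batch(list(range(10)), 4, drop_remainder=True))
# [[0, 1, 2, 3], [4, 5, 6, 7]]
```

This also shows why the static batch dimension is unknown by default: the final batch can be smaller than the rest, which is exactly what the `drop_remainder` argument addresses.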
###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding
The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch()` transformation enables you to batch tensors of different shapes by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch()` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs
The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. For example, to create a dataset that repeats its input for 3 epochs: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data
The `Dataset.shuffle()` transformation passes the input dataset through a random shuffle queue, similar to `tf.queue.RandomShuffleQueue`.
It maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large buffer sizes shuffle more thoroughly, they can take a lot of memory and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle,
label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data
The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`. Decoding image data and resizing it
When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset.
###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic
For performance reasons, we encourage you to use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30,30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,]= tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages
Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
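Schematically, each `Example` message is a mapping from feature names to typed value lists. A plain dict can mimic that structure; the `record` contents and the `parse` helper below are illustrative stand-ins, not the real protocol buffer API:

```python
# TF-free sketch of the tf.train.Example idea: each record maps feature
# names to typed value lists, and the pipeline pulls values out of those
# lists. This dict only mimics the proto structure; it is not the real API.
record = {
    "image/encoded": [b"\x89PNG..."],   # bytes_list with one element
    "image/text": [b"Rue Perreyon"],    # bytes_list with one element
    "image/height": [150],              # int64_list with one element
}

def parse(record, keys):
    """Extract scalar features, loosely like parsing with a fixed-length spec."""
    return tuple(record[k][0] for k in keys)

encoded, text = parse(record, ["image/encoded", "image/text"])
print(text)  # b'Rue Perreyon'
```

In the real pipeline, `tf.io.parse_example` plays the role of `parse`, mapping serialized `Example` protos to a dict of tensors according to a feature spec.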
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(raw_example): example = tf.io.parse_example( raw_example[tf.newaxis], {'image/encoded':tf.io.FixedLenFeature(shape=(),dtype=tf.string), 'image/text':tf.io.FixedLenFeature(shape=(), dtype=tf.string)}) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing
For an end-to-end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset, you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 10 steps batch[-5:]) # Take the remaining 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
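To see exactly which elements `window_size`, `shift`, and `stride` select, here is a plain-Python sketch of the same indexing (a hypothetical helper, not part of `tf.data`): window k starts at `k*shift` and takes every `stride`-th element until it has `window_size` of them, dropping incomplete windows.

```python
# Plain-Python analog of Dataset.window(...).flat_map(sub_to_batch) with
# drop_remainder=True; purely illustrative, not part of tf.data.
def make_windows(data, window_size=5, shift=1, stride=1):
    windows = []
    start = 0
    # A window is kept only while its last element,
    # start + (window_size - 1) * stride, is still inside the data
    # (the drop_remainder behavior).
    while start + (window_size - 1) * stride < len(data):
        windows.append(data[start : start + window_size * stride : stride])
        start += shift
    return windows

print(make_windows(list(range(20)), window_size=3, shift=5, stride=2))
# → [[0, 2, 4], [5, 7, 9], [10, 12, 14], [15, 17, 19]]
```

The tf.data cells that follow produce the same windows, just lazily and as nested datasets.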
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
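The class-count bookkeeping used in this section (a running-state reduction over batches of labels) can be sketched in plain Python first. No TensorFlow is needed for the sketch, and the label batches here are made up:

```python
from functools import reduce

# Running-state reduction over batches of labels, mirroring the
# Dataset.reduce counting step used below; the data is illustrative.
def count(counts, labels):
    return {
        "class_0": counts["class_0"] + sum(1 for y in labels if y == 0),
        "class_1": counts["class_1"] + sum(1 for y in labels if y == 1),
    }

batches = [[0, 0, 1, 0], [0, 0, 0, 1]]
counts = reduce(count, batches, {"class_0": 0, "class_1": 0})
fraction_positive = counts["class_1"] / (counts["class_0"] + counts["class_1"])
print(counts, fraction_positive)  # → {'class_0': 6, 'class_1': 2} 0.25
```

The tf.data version below does the same thing with tensor batches and `tf.reduce_sum`.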
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func=count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = creditcard_ds.unbatch().filter(lambda features, label: label == 0).repeat() positive_ds = creditcard_ds.unbatch().filter(lambda features, label: label == 1).repeat() for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets([negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimator To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable. This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset produces a sequence of *elements*, where each element is the same (nested) structure of *components*.
Individual components of the structure can be of any type representable by `tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`. The Python constructs that can be used to express the (nested) structure of elements include `tuple`, `dict`, `NamedTuple`, and `OrderedDict`. In particular, `list` is not a valid construct for expressing the structure of dataset elements. This is because early tf.data users felt strongly about `list` inputs (e.g. passed to `tf.data.Dataset.from_tensors`) being automatically packed as tensors and `list` outputs (e.g. return values of user-defined functions) being coerced into a `tuple`. As a consequence, if you would like a `list` input to be treated as a structure, you need to convert it into `tuple`, and if you would like a `list` output to be a single component, then you need to explicitly pack it using `tf.stack`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure.
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b, c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=(), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended as many TensorFlow operations do not support tensors with an unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator`: ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( lambda: img_gen.flow_from_directory(flowers), output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds.element_spec for images, labels in ds.take(1): print('images.shape: ', images.shape) print('labels.shape: ', labels.shape) ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tfrecord.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. Therefore if you havetwo sets of files for training and validation purposes, you can create a factorymethod that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple,reusable pieces. For example, the pipeline for an image model might aggregatedata from files in a distributed file system, apply random perturbations to eachimage, and merge randomly selected images into a batch for training. Thepipeline for a text model might involve extracting symbols from raw text data,converting them to embedding identifiers with a lookup table, and batchingtogether sequences of different lengths. 
The `tf.data` API makes it possible tohandle large amounts of data, read from different data formats, and performcomplex transformations.The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents asequence of elements, in which each element consists of one or more components.For example, in an image pipeline, an element might be a single trainingexample, with a pair of tensor components representing the image and its label.There are two distinct ways to create a dataset:* A data **source** constructs a `Dataset` from data stored in memory or in one or more files.* A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. ###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanicsTo create an input pipeline, you must start with a data *source*. For example,to construct a `Dataset` from data in memory, you can use`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.Alternatively, if your input data is stored in a file in the recommendedTFRecord format, you can use `tf.data.TFRecordDataset()`.Once you have a `Dataset` object, you can *transform* it into a new `Dataset` bychaining method calls on the `tf.data.Dataset` object. For example, you canapply per-element transformations such as `Dataset.map()`, and multi-elementtransformations such as `Dataset.batch()`. See the documentation for`tf.data.Dataset` for a complete list of transformations.The `Dataset` object is a Python iterable. 
This makes it possible to consume itselements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming itselements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce`transformation, which reduces all elements to produce a single result. Thefollowing example illustrates how to use the `reduce` transformation to computethe sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structureA dataset produces a sequence of *elements*, where each element isthe same (nested) structure of *components*. Individual componentsof the structure can be of any type representable by`tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`,`tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`.The Python constructs that can be used to express the (nested)structure of elements include `tuple`, `dict`, `NamedTuple`, and`OrderedDict`. In particular, `list` is not a valid construct forexpressing the structure of dataset elements. This is becauseearly tf.data users felt strongly about `list` inputs (e.g. passedto `tf.data.Dataset.from_tensors`) being automatically packed astensors and `list` outputs (e.g. return values of user-definedfunctions) being coerced into a `tuple`. As a consequence, if youwould like a `list` input to be treated as a structure, you needto convert it into `tuple` and if you would like a `list` outputto be a single component, then you need to explicitly pack itusing `tf.stack`.The `Dataset.element_spec` property allows you to inspect the typeof each element component. 
The property returns a *nested structure*of `tf.TypeSpec` objects, matching the structure of the element,which may be a single component, a tuple of components, or a nestedtuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the`Dataset.map()`, and `Dataset.filter()` transformations,which apply a function to each element, the element structure determines thearguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arraysSee [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples.If all of your input data fits in memory, the simplest way to create a `Dataset`from them is to convert them to `tf.Tensor` objects and use`Dataset.from_tensor_slices()`. 
###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arraysin your TensorFlow graph as `tf.constant()` operations. This works well for asmall dataset, but wastes memory---because the contents of the array will becopied multiple times---and can run into the 2GB limit for the `tf.GraphDef`protocol buffer. Consuming Python generatorsAnother common data source that can easily be ingested as a `tf.data.Dataset` is the python generator.Caution: While this is a convienient approach it has limited portability and scalibility. It must run in the same python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the python generator to a fully functional `tf.data.Dataset`.The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments.The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended as many TensorFlow operations do not support tensors with an unknown rank. 
If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`.It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods.Here is an example generator that demonstrates both aspects, it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32` the second is a `float32`.The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)` ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. 
###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator`: ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( lambda: img_gen.flow_from_directory(flowers), output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds.element_spec for images, labels in ds.take(1): print('images.shape: ', images.shape) print('labels.shape: ', labels.shape) ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tfrecord.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS) dataset. ###Code # Download a TFRecord test file.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore, if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files, use `Dataset.interleave`. This makes it easier to shuffle files together.
Here are the first, second, and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb) and [Loading Pandas DataFrames](../tutorials/load_data/pandas_dataframe.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, which has # four columns that may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments, respectively. ###Code # Creates a dataset that reads records from the CSV file, # extracting data from columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY; see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large `buffer_size` values shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
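As a minimal, self-contained sketch of the idea (assuming TensorFlow 2.x with eager execution), `Dataset.map` applies the given function to every element and yields a new dataset of the results:

```python
import tensorflow as tf

# Minimal sketch: square each element of a small integer dataset.
ds = tf.data.Dataset.range(5).map(lambda x: x * x)

squares = [int(x) for x in ds]
print(squares)  # [0, 1, 4, 9, 16]
```

The sections below apply the same pattern to more realistic elements such as image files and serialized records.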
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.io.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example, see [Time series forecasting](../../tutorials/structured_data/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: All except the last 5 steps batch[-5:]) # Labels: The last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 3 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predicted_steps = tf.data.Dataset.zip((features, labels)) for features, label in predicted_steps.take(5): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together, you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use `filter` to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.Dataset.sample_from_datasets`, pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.Dataset.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `Dataset.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. You could use `Dataset.filter` to create those two datasets, but that results in all the data being loaded twice. The `data.Dataset.rejection_resample` method can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.Dataset.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The goal here is to balance the label distribution, and the elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampling method deals with individual examples, so in this case you must `unbatch` the dataset before applying that method. The method needs a target distribution, and optionally an initial distribution estimate, as inputs. ###Code resample_ds = ( creditcard_ds .unbatch() .rejection_resample(class_func, target_dist=[0.5,0.5], initial_dist=fractions) .batch(10)) ###Output _____no_output_____ ###Markdown The `rejection_resample` method returns `(class, example)` pairs where the `class` is the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts, it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note, however, that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using tf.data with tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown Labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: The input pipeline API View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training.
The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. ###Code from __future__ import absolute_import, division, print_function, unicode_literals try: import tensorflow.compat.v2 as tf #nightly except Exception: pass tf.enable_v2_behavior() import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable.
This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure, and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor.
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from it is to convert it to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator.
Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length.
###Code def gen_series(): i = 0 while True: size = np.random.randint(0,10) yield i, np.random.normal(size=(size,)) i += 1 for i,series in gen_series(): print(i,":",str(series)) if i>5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes = ((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([],[None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes = ([32,256,256,3],[32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit
in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Download the FSNS test file. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore, if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file ]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files.
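In pure Python terms, line-per-element reading, and the round-robin mixing that `Dataset.interleave` provides further down, can be sketched like this. The "files" here are in-memory lists of strings standing in for real text files:

```python
# In-memory stand-ins for three text files.
files = [
    ["cowper line 1", "cowper line 2", "cowper line 3"],
    ["derby line 1", "derby line 2", "derby line 3"],
    ["butler line 1", "butler line 2", "butler line 3"],
]

def text_line_dataset(lines):
    # One string element per line, like TextLineDataset.
    yield from lines

def interleave(datasets):
    # Round-robin over the open "files", like
    # interleave(..., cycle_length=len(files)).
    iterators = [iter(d) for d in datasets]
    while iterators:
        still_open = []
        for it in iterators:
            try:
                yield next(it)
                still_open.append(it)
            except StopIteration:
                pass
        iterators = still_open

mixed = list(interleave(text_line_dataset(f) for f in files))
```

The first three elements of `mixed` are the first line of each "file", then the second lines, and so on.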
###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url+file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`; this makes it easier to shuffle files together. Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i%3==0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` and `Dataset.filter()` transformations. To apply these transformations to each file separately, we use `Dataset.flat_map()` to create a nested `Dataset` for each file. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class','fare','survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class, which provides finer-grained control. It does not support column type inference; instead you specify the type of each column, and the dataset yields a tuple of scalars per record. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file above, # whose four columns may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads records from the CSV file above, # extracting float data from columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1,3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown To load the data from the files use the `tf.io.read_file` function: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Convert the file paths to (image, label) pairs: ###Code def process_path(file_path): parts = tf.strings.split(file_path, '/') return tf.io.read_file(file_path), parts[-2] labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e. for each component *i*, all elements must have a tensor of the exact same shape.
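A pure-Python sketch of this batching behaviour, including the `drop_remainder` option, might look like the following (an illustration of the idea, not `tf.data` code):

```python
def batch(iterable, n, drop_remainder=False):
    """Stack n consecutive elements into one element, like Dataset.batch()."""
    buf = []
    for element in iterable:
        buf.append(element)
        if len(buf) == n:
            yield tuple(buf)
            buf = []
    if buf and not drop_remainder:
        # The last batch may be partial, which is why the batch
        # dimension is unknown unless drop_remainder=True.
        yield tuple(buf)

full = list(batch(range(10), 4))
dropped = list(batch(range(10), 4, drop_remainder=True))
```

With `drop_remainder=True` the short final batch is discarded, mirroring how `Dataset.batch(…, drop_remainder=True)` gives every batch a fully known shape.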
###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) it = iter(batched_dataset) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch()` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch()` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0.<!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. For example, to create a dataset that repeats its input for 3 epochs: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation passes the input dataset through a random shuffle queue, `tf.queues.RandomShuffleQueue`.
It maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While a large `buffer_size` shuffles more thoroughly, it can take a lot of memory and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat,
label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`. Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset.
###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, we encourage you to use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30,30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,]= tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
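Conceptually, each serialized record maps feature names to typed value lists, and parsing pulls the named features out of each record. A pure-Python caricature of that decoding step (the feature names and values here are hypothetical, not the real FSNS contents):

```python
# Hypothetical records: each maps feature names to typed value lists,
# like the features of a tf.train.Example.
records = [
    {"image/text": ["Rue Perreyon"], "image/width": [600]},
    {"image/text": ["Avenue Foch"], "image/width": [600]},
]

def parse(record, feature_names):
    # Roughly analogous to parsing with a feature spec: keep only the
    # requested features, one value per scalar feature.
    return tuple(record[name][0] for name in feature_names)

parsed = [parse(r, ["image/text"]) for r in records]
```

The real `tf.io.parse_example` additionally handles the binary wire format and produces batched tensors rather than Python tuples.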
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file ]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(raw_example): example = tf.io.parse_example( raw_example[tf.newaxis], {'image/encoded':tf.io.FixedLenFeature(shape=(),dtype=tf.string), 'image/text':tf.io.FixedLenFeature(shape=(), dtype=tf.string)}) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
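In plain Python terms, a contiguous time slice is just a slice of the series; non-overlapping slices of a fixed size (the effect of batching the range with `drop_remainder=True`) can be sketched as:

```python
series = list(range(30))

def contiguous_slices(seq, size):
    # Non-overlapping windows of `size` elements; a trailing partial
    # window is dropped, like drop_remainder=True.
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, size)]

slices = contiguous_slices(series, 10)
```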
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset, you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 10 steps as features batch[-5:]) # Take the last 5 steps as labels predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
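In pure Python, the windowing logic — window size, shift between window starts, stride within a window, and dropping incomplete trailing windows — can be sketched as a list-based illustration of the idea (not `tf.data` code):

```python
def make_windows(seq, window_size=5, shift=1, stride=1):
    """Overlapping windows over a sequence, dropping incomplete ones,
    like window(...) followed by flat_map with drop_remainder=True."""
    windows = []
    for start in range(0, len(seq), shift):
        window = seq[start:start + window_size * stride:stride]
        if len(window) == window_size:
            windows.append(window)
    return windows

ws = make_windows(list(range(10)), window_size=5, shift=1, stride=1)
strided = make_windows(list(range(20)), window_size=5, shift=5, stride=3)
```

With `shift=1` consecutive windows overlap by all but one element; `stride` skips elements within a window.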
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together, you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
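Before downloading the real data, a small synthetic sketch of what "class-imbalanced" means — the labels below are made up, with only 2 positives in 100 examples:

```python
import tensorflow as tf

# Hypothetical labels: 98 negatives, 2 positives (illustrative values only).
labels = tf.data.Dataset.from_tensor_slices([0] * 98 + [1] * 2)

# Count the positives by summing the labels.
n_positive = labels.reduce(0, lambda state, label: state + label)
print(n_positive.numpy())  # 2
```

With this degree of skew, a model that always predicts the negative class is 98% accurate, which is why the distribution check below matters.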
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func=count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `tf.data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = creditcard_ds.unbatch().filter(lambda features, label: label == 0).repeat() positive_ds = creditcard_ds.unbatch().filter(lambda features, label: label == 1).repeat() for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets([negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimator To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
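The distinction can be sketched in a tiny, self-contained example (values are illustrative): a *source* builds a `Dataset` from in-memory data, and *transformations* derive new `Dataset`s from it.

```python
import tensorflow as tf

# Source: a Dataset built from an in-memory list.
ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])

# Transformations: each call returns a new Dataset derived from the previous one.
ds = ds.map(lambda x: x * 2).batch(2)

print([batch.numpy().tolist() for batch in ds])  # [[2, 4], [6, 8]]
```

Because each transformation returns a new `Dataset`, pipelines are built by chaining calls, as the rest of this guide demonstrates.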
###Code from __future__ import absolute_import, division, print_function, unicode_literals try: %tensorflow_version 2.x except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable. This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers.
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure, and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure.
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b, c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from it is to convert it to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i < stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=(), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32` and the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes = ((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
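For instance, drawing a few elements with `take` (the generator and dataset definitions from above are repeated here so the snippet is self-contained):

```python
import numpy as np
import tensorflow as tf

# Generator yielding (id, variable-length vector) pairs, as defined above.
def gen_series():
    i = 0
    while True:
        size = np.random.randint(0, 10)
        yield i, np.random.normal(size=(size,))
        i += 1

ds_series = tf.data.Dataset.from_generator(
    gen_series,
    output_types=(tf.int32, tf.float32),
    output_shapes=((), (None,)))

for i, series in ds_series.take(3):
    # Each element is a scalar id and a vector whose length varies.
    print(i.numpy(), series.shape)
```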
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([], [None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32, 256, 256, 3], [32, 5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Download the FSNS test file.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file ]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together.
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` and `Dataset.filter()` transformations. To apply these transformations to each file separately, we use `Dataset.flat_map()` to create a nested `Dataset` for each file. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class, which provides finer-grained control. It does not support column type inference; instead, you specify the type of each column, which determines the items yielded by the dataset. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from missing.csv, a file with # four integer columns which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Reads missing.csv again, keeping only the data in columns 1 and 3.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown To load the data from the files use the `tf.io.read_file` function: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Convert the file paths to (image, label) pairs: ###Code def process_path(file_path): parts = tf.strings.split(file_path, '/') return tf.io.read_file(file_path), parts[-2] labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e. for each component *i*, all elements must have a tensor of the exact same shape.
###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) it = iter(batched_dataset) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch()` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch()` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. For example, to create a dataset that repeats its input for 3 epochs: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation passes the input dataset through a random shuffle queue, `tf.queues.RandomShuffleQueue`.
It maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large `buffer_size`s shuffle more thoroughly, they can take a lot of memory and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle,
label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`. Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset.
###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logicFor performance reasons, we encourage you to use TensorFlow operations forpreprocessing your data whenever possible. However, it is sometimes useful tocall external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`.To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30,30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`, you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,]= tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messagesMany input pipelines extract `tf.train.Example` protocol buffer messages from aTFRecord format. Each `tf.train.Example` record contains one or more "features",and the input pipeline typically converts these features into tensors. 
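As a minimal illustration of that decoding step, a `tf.train.Example` can be built, serialized, and parsed back by hand; the feature names and values below are made up for the sketch and are not the FSNS schema used next:

```python
import tensorflow as tf

# Build a tiny tf.train.Example by hand; the feature names are
# illustrative only.
example = tf.train.Example(features=tf.train.Features(feature={
    "text": tf.train.Feature(
        bytes_list=tf.train.BytesList(value=[b"rue Perreyon"])),
    "length": tf.train.Feature(
        int64_list=tf.train.Int64List(value=[12])),
}))
serialized = example.SerializeToString()

# Parsing turns the serialized record back into tensors.
parsed = tf.io.parse_single_example(serialized, {
    "text": tf.io.FixedLenFeature([], tf.string),
    "length": tf.io.FixedLenFeature([], tf.int64),
})
print(parsed["text"].numpy())
print(parsed["length"].numpy())
```

The same `tf.io.parse_example` / `tf.io.parse_single_example` calls are what an input pipeline applies to records read from a TFRecord file.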
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames=[fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(raw_example): example = tf.io.parse_example( raw_example[tf.newaxis], {'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string)}) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: all except the last 5 steps batch[-5:]) # Labels: the last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x:x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip','.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes, it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0':0, 'class_1':0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`. 
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use `Dataset.filter` to generate them from the credit card fraud data: ###Code negative_ds = creditcard_ds.unbatch().filter(lambda features, label: label==0).repeat() positive_ds = creditcard_ds.unbatch().filter(lambda features, label: label==1).repeat() for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets([negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds , steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimatorTo use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simplyreturn the `Dataset` from the `input_fn` and the framework will take care of consuming its elementsfor you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple,reusable pieces. For example, the pipeline for an image model might aggregatedata from files in a distributed file system, apply random perturbations to eachimage, and merge randomly selected images into a batch for training. Thepipeline for a text model might involve extracting symbols from raw text data,converting them to embedding identifiers with a lookup table, and batchingtogether sequences of different lengths. The `tf.data` API makes it possible tohandle large amounts of data, read from different data formats, and performcomplex transformations.The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents asequence of elements, in which each element consists of one or more components.For example, in an image pipeline, an element might be a single trainingexample, with a pair of tensor components representing the image and its label.There are two distinct ways to create a dataset:* A data **source** constructs a `Dataset` from data stored in memory or in one or more files.* A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. 
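Those two building blocks compose; a minimal sketch, with made-up values, chains a source with a couple of transformations:

```python
import tensorflow as tf

# A *source* builds a Dataset from in-memory data...
source = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5])

# ...and each *transformation* derives a new Dataset from an existing one.
pipeline = source.map(lambda x: x * 2).filter(lambda x: x > 4)

print([int(x) for x in pipeline])  # [6, 8, 10]
```

The rest of this guide fills in the details of both kinds of building blocks.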
###Code from __future__ import absolute_import, division, print_function, unicode_literals ###Output _____no_output_____ ###Markdown ###Code try: !pip install tf-nightly except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanicsTo create an input pipeline, you must start with a data *source*. For example,to construct a `Dataset` from data in memory, you can use`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.Alternatively, if your input data is stored in a file in the recommendedTFRecord format, you can use `tf.data.TFRecordDataset()`.Once you have a `Dataset` object, you can *transform* it into a new `Dataset` bychaining method calls on the `tf.data.Dataset` object. For example, you canapply per-element transformations such as `Dataset.map()`, and multi-elementtransformations such as `Dataset.batch()`. See the documentation for`tf.data.Dataset` for a complete list of transformations.The `Dataset` object is a Python iterable. This makes it possible to consume itselements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming itselements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce`transformation, which reduces all elements to produce a single result. Thefollowing example illustrates how to use the `reduce` transformation to computethe sum of a dataset of integers. 
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structureA dataset contains elements that each have the same (nested) structure and theindividual components of the structure can be of any type representable by`tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`,`TensorArray`, or `Dataset`.The `Dataset.element_spec` property allows you to inspect the type of eachelement component. The property returns a *nested structure* of `tf.TypeSpec`objects, matching the structure of the element, which may be a single component,a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. 
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord dataSee [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example.The `tf.data` API supports a variety of file formats so that you can processlarge datasets that do not fit in memory. For example, the TFRecord file formatis a simple record-oriented binary format that many TensorFlow applications usefor training data. The `tf.data.TFRecordDataset` class enables you tostream over the contents of one or more TFRecord files as part of an inputpipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files. 
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. Therefore if you havetwo sets of files for training and validation purposes, you can create a factorymethod that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text dataSee [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.Many datasets are distributed as one or more text files. The`tf.data.TextLineDataset` provides an easy way to extract lines from one or moretext files. Given one or more filenames, a `TextLineDataset` will produce onestring-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. 
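The cycling behavior of `interleave` can be sketched with a small in-memory dataset (the values below are made up) before applying it to files:

```python
import tensorflow as tf

# Each element spawns its own small dataset (the element repeated twice);
# interleave then cycles between those datasets one element at a time.
ds = tf.data.Dataset.from_tensor_slices([1, 2, 3]).interleave(
    lambda x: tf.data.Dataset.from_tensors(x).repeat(2),
    cycle_length=3, block_length=1)

print([int(x) for x in ds])  # [1, 2, 3, 1, 2, 3]
```

`cycle_length` controls how many input elements are processed concurrently, and `block_length` how many consecutive elements are taken from each before moving on.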
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which maynot be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or`Dataset.filter()` transformations. Here, you skip the first line, then filter tofind only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples. 
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file above, which # has four columns that may contain missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads records from the same CSV file, extracting # data only from columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, '/')[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0.<!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
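Before the image-specific examples that follow, a minimal sketch of the mechanics (using a toy range dataset that is not part of the original examples):

```python
import tensorflow as tf

# A minimal Dataset.map sketch: the mapping function is applied to every element.
dataset = tf.data.Dataset.range(5)
squared = dataset.map(lambda x: x * x)
print([int(x.numpy()) for x in squared])  # [0, 1, 4, 9, 16]
```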
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take all but the last 5 steps as features batch[-5:]) # Take the last 5 steps as labels predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Shuffled datasets don't work with rejection resample in TF2.1 shuffle = False, # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `tf.data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).shuffle(100).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. Caution: This function only works correctly if the dataset returns the same sequence of elements on each run. Don't pass a shuffled dataset to `rejection_resample`. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).shuffle(100).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimator To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: The input pipeline API The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
###Code from __future__ import absolute_import, division, print_function, unicode_literals try: %tensorflow_version 2.x except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable. This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers.
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure.
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b, c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from it is to convert it to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=()) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects; it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0,10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([], [None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator`: ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Downloads the FSNS test file used in the examples below.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore, if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames=[fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url+file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files, use `Dataset.interleave`. This makes it easier to shuffle files together.
Here are the first, second, and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` and `Dataset.filter()` transformations. To apply these transformations to each file separately, we use `Dataset.flat_map()` to create a nested `Dataset` for each file. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class, which provides finer-grained control. It does not support column type inference; instead, you specify the type of each column, which determines the structure of the items yielded by the dataset. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads records from a CSV file whose four columns # may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads records from the CSV file, extracting data # from columns 1 and 3 only.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1,3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: These images are licensed CC-BY; see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown To load the data from the files, use the `tf.io.read_file` function: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Convert the file paths to (image, label) pairs: ###Code def process_path(file_path): parts = tf.strings.split(file_path, '/') return tf.io.read_file(file_path), parts[-2] labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e. for each component *i*, all elements must have a tensor of the exact same shape.
###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch()` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch()` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0.<!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. For example, to create a dataset that repeats its input for 3 epochs: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation passes the input dataset through a random shuffle queue, `tf.queues.RandomShuffleQueue`.
It maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large `buffer_size`s shuffle more thoroughly, they can take a lot of memory and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle,
label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`. Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset.
###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, we encourage you to use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames=[fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(raw_example): example = tf.io.parse_example( raw_example[tf.newaxis], {'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string)}) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example, see [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset, you can split each batch into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: all except the last 5 steps batch[-5:]) # Labels: the last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together, you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func=count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use `filter` to generate them from the credit card fraud data: ###Code negative_ds = creditcard_ds.unbatch().filter(lambda features, label: label == 0).repeat() positive_ds = creditcard_ds.unbatch().filter(lambda features, label: label == 1).repeat() for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets`, pass the datasets and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets([negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels:
###Code
def class_func(features, label):
  return label
###Output
_____no_output_____
###Markdown
The resampler also needs a target distribution, and optionally an initial distribution estimate:
###Code
resampler = tf.data.experimental.rejection_resample(
    class_func, target_dist=[0.5, 0.5], initial_dist=fractions)
###Output
_____no_output_____
###Markdown
The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler:
###Code
resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10)
###Output
_____no_output_____
###Markdown
The resampler returns `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels:
###Code
balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label)
###Output
_____no_output_____
###Markdown
Now the dataset produces examples of each class with 50/50 probability:
###Code
for features, labels in balanced_ds.take(10):
  print(labels.numpy())
###Output
_____no_output_____
###Markdown
Using high-level APIs

tf.keras

The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup:
###Code
train, test = tf.keras.datasets.fashion_mnist.load_data()

images, labels = train
images = images/255.0
labels = labels.astype(np.int32)

fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)

model = tf.keras.Sequential([
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`:
###Code
model.fit(fmnist_train_ds, epochs=2)
###Output
_____no_output_____
###Markdown
If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument:
###Code
model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
For evaluation you can pass the number of evaluation steps:
###Code
loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
For long datasets, set the number of steps to evaluate:
###Code
loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
The labels are not required when calling `Model.predict`.
###Code
predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32)
result = model.predict(predict_ds, steps=10)
print(result.shape)
###Output
_____no_output_____
###Markdown
But the labels are ignored if you do pass a dataset containing them:
###Code
result = model.predict(fmnist_train_ds, steps=10)
print(result.shape)
###Output
_____no_output_____
###Markdown
tf.estimator

To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example:
###Code
import tensorflow_datasets as tfds

def train_input_fn():
  titanic = tf.data.experimental.make_csv_dataset(
      titanic_file, batch_size=32,
      label_name="survived")
  titanic_batches = (
      titanic.cache().repeat().shuffle(500)
      .prefetch(tf.data.experimental.AUTOTUNE))
  return titanic_batches

embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32)
cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third'])
age = tf.feature_column.numeric_column('age')

import tempfile
model_dir = tempfile.mkdtemp()
model = tf.estimator.LinearClassifier(
    model_dir=model_dir,
    feature_columns=[embark, cls, age],
    n_classes=2
)

model = model.train(input_fn=train_input_fn, steps=100)

result = model.evaluate(train_input_fn, steps=10)

for key, value in result.items():
  print(key, ":", value)

for pred in model.predict(train_input_fn):
  for key, value in pred.items():
    print(key, ":", value)
  break
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.

Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
tf.data: Build TensorFlow input pipelines

View on TensorFlow.org | Run in Google Colab | View source on GitHub | Download notebook

The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations.

The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label.

There are two distinct ways to create a dataset:

* A data **source** constructs a `Dataset` from data stored in memory or in one or more files.
* A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
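As a minimal, self-contained sketch of these two ideas (a hedged example; the values and the squaring function are arbitrary choices for illustration):

```python
import tensorflow as tf

# A data *source*: a Dataset built from values already in memory.
ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])

# A data *transformation*: a new Dataset derived from the source.
squares = ds.map(lambda x: x * x)

print([int(x) for x in squares])  # -> [1, 4, 9, 16]
```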
###Code
from __future__ import absolute_import, division, print_function, unicode_literals

try:
  %tensorflow_version 2.x
except Exception:
  pass
import tensorflow as tf

import pathlib
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

np.set_printoptions(precision=4)
###Output
_____no_output_____
###Markdown
Basic mechanics

To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`.

Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations.

The `Dataset` object is a Python iterable. This makes it possible to consume its elements using a for loop:
###Code
dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])
dataset

for elem in dataset:
  print(elem.numpy())
###Output
_____no_output_____
###Markdown
Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`:
###Code
it = iter(dataset)

print(next(it).numpy())
###Output
_____no_output_____
###Markdown
Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers.
###Code
print(dataset.reduce(0, lambda state, value: state + value).numpy())
###Output
_____no_output_____
###Markdown
Dataset structure

A dataset contains elements that each have the same (nested) structure, and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`.

The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example:
###Code
dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10]))

dataset1.element_spec

dataset2 = tf.data.Dataset.from_tensor_slices(
   (tf.random.uniform([4]),
    tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))

dataset2.element_spec

dataset3 = tf.data.Dataset.zip((dataset1, dataset2))

dataset3.element_spec

# Dataset containing a sparse tensor.
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4]))

dataset4.element_spec

# Use value_type to see the type of value represented by the element spec
dataset4.element_spec.value_type
###Output
_____no_output_____
###Markdown
The `Dataset` transformations support datasets of any structure.
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function:
###Code
dataset1 = tf.data.Dataset.from_tensor_slices(
    tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32))

dataset1

for z in dataset1:
  print(z.numpy())

dataset2 = tf.data.Dataset.from_tensor_slices(
   (tf.random.uniform([4]),
    tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))

dataset2

dataset3 = tf.data.Dataset.zip((dataset1, dataset2))

dataset3

for a, (b, c) in dataset3:
  print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c))
###Output
_____no_output_____
###Markdown
Reading input data

Consuming NumPy arrays

See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples.

If all of your input data fits in memory, the simplest way to create a `Dataset` from it is to convert it to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`.
###Code
train, test = tf.keras.datasets.fashion_mnist.load_data()

images, labels = train
images = images/255

dataset = tf.data.Dataset.from_tensor_slices((images, labels))
dataset
###Output
_____no_output_____
###Markdown
Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer.

Consuming Python generators

Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator.

Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code
def count(stop):
  i = 0
  while i < stop:
    yield i
    i += 1

for n in count(5):
  print(n)
###Output
_____no_output_____
###Markdown
The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`.

The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments.

The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`.
###Code
ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=(), )

for count_batch in ds_counter.repeat().batch(10).take(10):
  print(count_batch.numpy())
###Output
_____no_output_____
###Markdown
The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`.

It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods.

Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length.
###Code
def gen_series():
  i = 0
  while True:
    size = np.random.randint(0, 10)
    yield i, np.random.normal(size=(size,))
    i += 1

for i, series in gen_series():
  print(i, ":", str(series))
  if i > 5:
    break
###Output
_____no_output_____
###Markdown
The first output is an `int32`; the second is a `float32`.

The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`.
###Code
ds_series = tf.data.Dataset.from_generator(
    gen_series,
    output_types=(tf.int32, tf.float32),
    output_shapes=((), (None,)))

ds_series
###Output
_____no_output_____
###Markdown
Now it can be used like a regular `tf.data.Dataset`.
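For instance, `take` pulls a few elements, with the generator restarted each time the dataset is iterated. This is a hedged, self-contained sketch that repeats the generator definition from above so it can run on its own:

```python
import numpy as np
import tensorflow as tf

def gen_series():
    # Scalar id plus a variable-length vector, as above.
    i = 0
    while True:
        size = np.random.randint(0, 10)
        yield i, np.random.normal(size=(size,))
        i += 1

ds_series = tf.data.Dataset.from_generator(
    gen_series,
    output_types=(tf.int32, tf.float32),
    output_shapes=((), (None,)))

# Ordinary Dataset methods such as take() now work.
for i, series in ds_series.take(3):
    print(int(i), series.shape)
```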
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`.
###Code
ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([], [None]))

ids, sequence_batch = next(iter(ds_series_batch))
print(ids.numpy())
print()
print(sequence_batch.numpy())
###Output
_____no_output_____
###Markdown
For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.

First download the data:
###Code
flowers = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)
###Output
_____no_output_____
###Markdown
Create the `image.ImageDataGenerator`:
###Code
img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20)

images, labels = next(img_gen.flow_from_directory(flowers))

print(images.dtype, images.shape)
print(labels.dtype, labels.shape)

ds = tf.data.Dataset.from_generator(
    img_gen.flow_from_directory, args=[flowers],
    output_types=(tf.float32, tf.float32),
    output_shapes=([32,256,256,3], [32,5])
)

ds
###Output
_____no_output_____
###Markdown
Consuming TFRecord data

See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example.

The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline.

Here is an example using the test file from the French Street Name Signs (FSNS).
###Code
# Creates a dataset that reads all of the examples from two files.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001")
###Output
_____no_output_____
###Markdown
The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument:
###Code
dataset = tf.data.TFRecordDataset(filenames=[fsns_test_file])
dataset
###Output
_____no_output_____
###Markdown
Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected:
###Code
raw_example = next(iter(dataset))
parsed = tf.train.Example.FromString(raw_example.numpy())

parsed.features.feature['image/text']
###Output
_____no_output_____
###Markdown
Consuming text data

See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example.

Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files.
###Code
directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
file_names = ['cowper.txt', 'derby.txt', 'butler.txt']

file_paths = [
    tf.keras.utils.get_file(file_name, directory_url + file_name)
    for file_name in file_names
]

dataset = tf.data.TextLineDataset(file_paths)
###Output
_____no_output_____
###Markdown
Here are the first few lines of the first file:
###Code
for line in dataset.take(5):
  print(line.numpy())
###Output
_____no_output_____
###Markdown
To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together.
Here are the first, second and third lines from each translation:
###Code
files_ds = tf.data.Dataset.from_tensor_slices(file_paths)
lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3)

for i, line in enumerate(lines_ds.take(9)):
  if i % 3 == 0:
    print()
  print(line.numpy())
###Output
_____no_output_____
###Markdown
By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here we skip the first line, then filter to find only survivors.
###Code
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic_lines = tf.data.TextLineDataset(titanic_file)

for line in titanic_lines.take(10):
  print(line.numpy())

def survived(line):
  return tf.not_equal(tf.strings.substr(line, 0, 1), "0")

survivors = titanic_lines.skip(1).filter(survived)

for line in survivors.take(10):
  print(line.numpy())
###Output
_____no_output_____
###Markdown
Consuming CSV data

See [Loading CSV Files](../tutorials/load_data/csv.ipynb) and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example:
###Code
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")

df = pd.read_csv(titanic_file, index_col=None)
df.head()
###Output
_____no_output_____
###Markdown
If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported:
###Code
titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df))

for feature_batch in titanic_slices.take(1):
  for key, value in feature_batch.items():
    print("  {!r:20s}: {}".format(key, value))
###Output
_____no_output_____
###Markdown
A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).

The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple.
###Code
titanic_batches = tf.data.experimental.make_csv_dataset(
    titanic_file, batch_size=4,
    label_name="survived")

for feature_batch, label_batch in titanic_batches.take(1):
  print("'survived': {}".format(label_batch))
  print("features:")
  for key, value in feature_batch.items():
    print("  {!r:20s}: {}".format(key, value))
###Output
_____no_output_____
###Markdown
You can use the `select_columns` argument if you only need a subset of columns.
###Code
titanic_batches = tf.data.experimental.make_csv_dataset(
    titanic_file, batch_size=4,
    label_name="survived", select_columns=['class', 'fare', 'survived'])

for feature_batch, label_batch in titanic_batches.take(1):
  print("'survived': {}".format(label_batch))
  for key, value in feature_batch.items():
    print("  {!r:20s}: {}".format(key, value))
###Output
_____no_output_____
###Markdown
There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column.
###Code
titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string]
dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True)

for line in dataset.take(10):
  print([item.numpy() for item in line])
###Output
_____no_output_____
###Markdown
If some columns are empty, this low-level interface allows you to provide default values instead of column types.
###Code
%%writefile missing.csv
1,2,3,4
,2,3,4
1,,3,4
1,2,,4
1,2,3,
,,,

# Creates a dataset that reads all of the records from the CSV file, which has
# four columns that may have missing values.
record_defaults = [999,999,999,999]
dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults)
dataset = dataset.map(lambda *items: tf.stack(items))
dataset

for line in dataset:
  print(line.numpy())
###Output
_____no_output_____
###Markdown
By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively.
###Code
# Creates a dataset that reads all of the records from the CSV file with
# headers, extracting data from columns 2 and 4.
record_defaults = [999, 999]  # Only provide defaults for the selected columns
dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3])
dataset = dataset.map(lambda *items: tf.stack(items))
dataset

for line in dataset:
  print(line.numpy())
###Output
_____no_output_____
###Markdown
Consuming sets of files

There are many datasets distributed as a set of files, where each file is an example.
###Code
flowers_root = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)
flowers_root = pathlib.Path(flowers_root)
###Output
_____no_output_____
###Markdown
Note: these images are licensed CC-BY, see LICENSE.txt for details.

The root directory contains a directory for each class:
###Code
for item in flowers_root.glob("*"):
  print(item.name)
###Output
_____no_output_____
###Markdown
To load the data from the files use the `tf.io.read_file` function:
###Code
list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))

for f in list_ds.take(5):
  print(f.numpy())
###Output
_____no_output_____
###Markdown
Convert the file paths to (image, label) pairs:
###Code
def process_path(file_path):
  parts = tf.strings.split(file_path, '/')
  return tf.io.read_file(file_path), parts[-2]

labeled_ds = list_ds.map(process_path)

for image_raw, label_text in labeled_ds.take(1):
  print(repr(image_raw.numpy()[:100]))
  print()
  print(label_text.numpy())
###Output
_____no_output_____
###Markdown
<!--
TODO(mrry): Add this section.

Handling text data with unusual sizes
-->

Batching dataset elements

Simple batching

The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e. for each component *i*, all elements must have a tensor of the exact same shape.
###Code
inc_dataset = tf.data.Dataset.range(100)
dec_dataset = tf.data.Dataset.range(0, -100, -1)
dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset))
batched_dataset = dataset.batch(4)

for batch in batched_dataset.take(4):
  print([arr.numpy() for arr in batch])
###Output
_____no_output_____
###Markdown
While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape:
###Code
batched_dataset
###Output
_____no_output_____
###Markdown
Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation:
###Code
batched_dataset = dataset.batch(7, drop_remainder=True)
batched_dataset
###Output
_____no_output_____
###Markdown
Batching tensors with padding

The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch()` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded.
###Code
dataset = tf.data.Dataset.range(100)
dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x))
dataset = dataset.padded_batch(4, padded_shapes=(None,))

for batch in dataset.take(2):
  print(batch.numpy())
  print()
###Output
_____no_output_____
###Markdown
The `Dataset.padded_batch()` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0.

<!--
TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor
-->

Training workflows

Processing multiple epochs

The `tf.data` API offers two main ways to process multiple epochs of the same data.

The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First we create a dataset of titanic data:
###Code
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic_lines = tf.data.TextLineDataset(titanic_file)

def plot_batch_sizes(ds):
  batch_sizes = [batch.shape[0] for batch in ds]
  plt.bar(range(len(batch_sizes)), batch_sizes)
  plt.xlabel('Batch number')
  plt.ylabel('Batch size')
###Output
_____no_output_____
###Markdown
Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely.

The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries:
###Code
titanic_batches = titanic_lines.repeat(3).batch(128)
plot_batch_sizes(titanic_batches)
###Output
_____no_output_____
###Markdown
If you need clear epoch separation, put `Dataset.batch` before the repeat:
###Code
titanic_batches = titanic_lines.batch(128).repeat(3)

plot_batch_sizes(titanic_batches)
###Output
_____no_output_____
###Markdown
If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch:
###Code
epochs = 3
dataset = titanic_lines.batch(128)

for epoch in range(epochs):
  for batch in dataset:
    print(batch.shape)
  print("End of epoch: ", epoch)
###Output
_____no_output_____
###Markdown
Randomly shuffling input data

The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer.

Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem.

Add an index to the dataset so you can see the effect:
###Code
lines = tf.data.TextLineDataset(titanic_file)
counter = tf.data.experimental.Counter()

dataset = tf.data.Dataset.zip((counter, lines))
dataset = dataset.shuffle(buffer_size=100)
dataset = dataset.batch(20)
dataset
###Output
_____no_output_____
###Markdown
Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120.
###Code
n, line_batch = next(iter(dataset))
print(n.numpy())
###Output
_____no_output_____
###Markdown
As with `Dataset.batch`, the order relative to `Dataset.repeat` matters.

`Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next:
###Code
dataset = tf.data.Dataset.zip((counter, lines))
shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2)

print("Here are the item ID's near the epoch boundary:\n")
for n, line_batch in shuffled.skip(60).take(5):
  print(n.numpy())

shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled]
plt.plot(shuffle_repeat, label="shuffle().repeat()")
plt.ylabel("Mean item ID")
plt.legend()
###Output
_____no_output_____
###Markdown
But a repeat before a shuffle mixes the epoch boundaries together:
###Code
dataset = tf.data.Dataset.zip((counter, lines))
shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10)

print("Here are the item ID's near the epoch boundary:\n")
for n, line_batch in shuffled.skip(55).take(15):
  print(n.numpy())

repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled]

plt.plot(shuffle_repeat, label="shuffle().repeat()")
plt.plot(repeat_shuffle, label="repeat().shuffle()")
plt.ylabel("Mean item ID")
plt.legend()
###Output
_____no_output_____
###Markdown
Preprocessing data

The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another.

This section covers common examples of how to use `Dataset.map()`.
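Before the image examples, the basic shape of a `map` call can be sketched in isolation (a hedged, self-contained example; the doubling function is an arbitrary stand-in for `f`):

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(5)        # elements 0..4

# f maps one input element to one output element.
doubled = dataset.map(lambda x: x * 2)

print([int(x) for x in doubled])  # -> [0, 2, 4, 6, 8]
```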
Decoding image data and resizing it

When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size.

Rebuild the flower filenames dataset:
###Code
list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))
###Output
_____no_output_____
###Markdown
Write a function that manipulates the dataset elements.
###Code
# Reads an image from a file, decodes it into a dense tensor, and resizes it
# to a fixed shape.
def parse_image(filename):
  parts = tf.strings.split(filename, '/')
  label = parts[-2]

  image = tf.io.read_file(filename)
  image = tf.image.decode_jpeg(image)
  image = tf.image.convert_image_dtype(image, tf.float32)
  image = tf.image.resize(image, [128, 128])
  return image, label
###Output
_____no_output_____
###Markdown
Test that it works.
###Code
file_path = next(iter(list_ds))
image, label = parse_image(file_path)

def show(image, label):
  plt.figure()
  plt.imshow(image)
  plt.title(label.numpy().decode('utf-8'))
  plt.axis('off')

show(image, label)
###Output
_____no_output_____
###Markdown
Map it over the dataset.
###Code
images_ds = list_ds.map(parse_image)

for image, label in images_ds.take(2):
  show(image, label)
###Output
_____no_output_____
###Markdown
Applying arbitrary Python logic

For performance reasons, we encourage you to use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation.

For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`.

To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages

Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(raw_example): example = tf.io.parse_example( raw_example[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example, see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: all but the last 5 steps batch[-5:]) # Labels: the last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
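Before looking at `window` itself, the semantics can be sketched in plain Python. The helper below is hypothetical (not part of `tf.data`) and models `Dataset.window(size, shift, stride)` with `drop_remainder=True`: each window starts `shift` elements after the previous one and takes every `stride`-th element.

```python
def windows(seq, size, shift=1, stride=1):
    # Hypothetical plain-Python model of Dataset.window(size, shift, stride)
    # with drop_remainder=True: stop as soon as a full window no longer fits.
    result = []
    start = 0
    while True:
        window = seq[start:start + size * stride:stride]
        if len(window) < size:
            break
        result.append(window)
        start += shift
    return result

print(windows(list(range(12)), size=5, shift=1))
print(windows(list(range(20)), size=5, shift=3, stride=2))
```

The real `Dataset.window` yields sub-`Dataset`s rather than lists, which is why the examples that follow use `flat_map` to get at the individual elements.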
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = creditcard_ds.unbatch().filter(lambda features,label: label==0).repeat() positive_ds = creditcard_ds.unbatch().filter(lambda features,label: label==1).repeat() for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets([negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras

The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimatorTo use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simplyreturn the `Dataset` from the `input_fn` and the framework will take care of consuming its elementsfor you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
###Code from __future__ import absolute_import, division, print_function, unicode_literals try: %tensorflow_version 2.x except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanicsTo create an input pipeline, you must start with a data *source*. For example,to construct a `Dataset` from data in memory, you can use`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.Alternatively, if your input data is stored in a file in the recommendedTFRecord format, you can use `tf.data.TFRecordDataset()`.Once you have a `Dataset` object, you can *transform* it into a new `Dataset` bychaining method calls on the `tf.data.Dataset` object. For example, you canapply per-element transformations such as `Dataset.map()`, and multi-elementtransformations such as `Dataset.batch()`. See the documentation for`tf.data.Dataset` for a complete list of transformations.The `Dataset` object is a Python iterable. This makes it possible to consume itselements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming itselements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce`transformation, which reduces all elements to produce a single result. Thefollowing example illustrates how to use the `reduce` transformation to computethe sum of a dataset of integers. 
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structureA dataset contains elements that each have the same (nested) structure and theindividual components of the structure can be of any type representable by`tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`,`TensorArray`, or `Dataset`.The `Dataset.element_spec` property allows you to inspect the type of eachelement component. The property returns a *nested structure* of `tf.TypeSpec`objects, matching the structure of the element, which may be a single component,a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. 
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays

See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators

Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)` ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([], [None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data

See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Download the FSNS test file.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. Therefore if you havetwo sets of files for training and validation purposes, you can create a factorymethod that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text dataSee [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.Many datasets are distributed as one or more text files. The`tf.data.TextLineDataset` provides an easy way to extract lines from one or moretext files. Given one or more filenames, a `TextLineDataset` will produce onestring-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. 
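As a rough plain-Python model (a hypothetical helper, not the library's implementation), `interleave` with `block_length=1` takes one element from each of `cycle_length` sources in round-robin order:

```python
from itertools import chain, zip_longest

def interleave(sources, cycle_length):
    # Round-robin over `cycle_length` sources, one element at a time,
    # skipping sources that run out early (models block_length=1).
    fill = object()
    rounds = zip_longest(*sources[:cycle_length], fillvalue=fill)
    return [item for item in chain.from_iterable(rounds) if item is not fill]

print(interleave([['a1', 'a2'], ['b1', 'b2'], ['c1', 'c2']], cycle_length=3))
# ['a1', 'b1', 'c1', 'a2', 'b2', 'c2']
```

The real transformation is lazier and, when there are more sources than `cycle_length`, starts a new source as soon as one is exhausted; this sketch only covers the simple case above.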
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which maynot be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or`Dataset.filter()` transformations. Here we skip the first line, then filter tofind only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples. 
The CSV file format is a popular format for storing tabular data in plain text.For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).The `experimental.make_csv_dataset` function is the high level interface for reading sets of csv files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. 
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, which has # four integer columns that may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads records from the CSV file, # extracting data from columns 1 and 3 (zero-based indices).
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown To load the data from the files use the `tf.io.read_file` function: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Convert the file paths to (image, label) pairs: ###Code def process_path(file_path): parts = tf.strings.split(file_path, '/') return tf.io.read_file(file_path), parts[-2] labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batchingThe simplest form of batching stacks `n` consecutive elements of a dataset intoa single element. The `Dataset.batch()` transformation does exactly this, withthe same constraints as the `tf.stack()` operator, applied to each componentof the elements: i.e. for each component *i*, all elements must have a tensorof the exact same shape. 
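The stacking behaviour can be sketched in plain Python (a hypothetical helper for illustration; `Dataset.batch` additionally stacks the grouped elements into a single tensor per component):

```python
def batch(seq, batch_size, drop_remainder=False):
    # Group `batch_size` consecutive elements; the final group may be
    # smaller unless drop_remainder=True, mirroring Dataset.batch.
    groups = [seq[i:i + batch_size] for i in range(0, len(seq), batch_size)]
    if drop_remainder and groups and len(groups[-1]) < batch_size:
        groups.pop()
    return groups

print(batch(list(range(10)), 4))                       # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
print(batch(list(range(10)), 4, drop_remainder=True))  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

The possibly-smaller final group is exactly why, without `drop_remainder`, the batch dimension cannot be known statically.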
###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with paddingThe above recipe works for tensors that all have the same size. However, manymodels (e.g. sequence models) work with input data that can have varying size(e.g. sequences of different lengths). To handle this case, the`Dataset.padded_batch()` transformation enables you to batch tensors ofdifferent shape by specifying one or more dimensions in which they may bepadded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch()` transformation allows you to set different paddingfor each dimension of each component, and it may be variable-length (signifiedby `None` in the example above) or constant-length. It is also possible tooverride the padding value, which defaults to 0.<!--TODO(mrry): Add this section. 
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. 
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input dataThe `Dataset.shuffle()` transformation maintains a fixed-sizebuffer and chooses the next element uniformly at random from that buffer.Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters.`Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. 
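The fixed-size-buffer algorithm described above can be sketched in plain Python. This is only an illustration of the mechanics, not TensorFlow's actual implementation:

```python
import random

def buffered_shuffle(iterable, buffer_size, rng=None):
    """Sketch of Dataset.shuffle's mechanics: fill a fixed-size buffer,
    then repeatedly emit a uniformly random element from it, replacing
    each emitted element with the next input element."""
    rng = rng or random.Random(0)
    buffer = []
    for item in iterable:
        buffer.append(item)
        if len(buffer) >= buffer_size:
            # Swap a random element to the end and pop it (O(1)).
            idx = rng.randrange(len(buffer))
            buffer[idx], buffer[-1] = buffer[-1], buffer[idx]
            yield buffer.pop()
    # Input exhausted: drain what is left in the buffer.
    while buffer:
        idx = rng.randrange(len(buffer))
        buffer[idx], buffer[-1] = buffer[-1], buffer[idx]
        yield buffer.pop()

shuffled = list(buffered_shuffle(range(10), buffer_size=3))
print(shuffled)  # a permutation of 0..9; the first element is one of 0, 1, 2
```

This also explains the observation above: with `buffer_size=100`, the i-th output can only come from the first `100 + i` inputs, so the first batch of 20 contains no large indices.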
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing dataThe `Dataset.map(f)` transformation produces a new dataset by applying a givenfunction `f` to each element of the input dataset. It is based on the[`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) functionthat is commonly applied to lists (and other structures) in functionalprogramming languages. The function `f` takes the `tf.Tensor` objects thatrepresent a single element in the input, and returns the `tf.Tensor` objectsthat will represent a single element in the new dataset. Its implementation usesstandard TensorFlow operations to transform one element into another.This section covers common examples of how to use `Dataset.map()`. 
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, we encourage you to use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. 
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors. 
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact.Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice. 
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 5 steps batch[-5:]) # take the remainder predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
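Before the TensorFlow code, the windowing semantics can be previewed with a list-based plain-Python sketch. The `window_size`, `shift`, and `stride` names mirror `Dataset.window`'s parameters (illustration only, not how `tf.data` implements it):

```python
def make_windows(seq, window_size=5, shift=1, stride=1):
    """List-based sketch of Dataset.window: each window starts `shift`
    elements after the previous one and takes every `stride`-th element,
    dropping short trailing windows (like drop_remainder=True)."""
    windows = []
    start = 0
    while True:
        window = seq[start:start + window_size * stride:stride]
        if len(window) < window_size:
            break
        windows.append(window)
        start += shift
    return windows

print(make_windows(list(range(20)), window_size=5, shift=3, stride=2))
# [[0, 2, 4, 6, 8], [3, 5, 7, 9, 11], [6, 8, 10, 12, 14], [9, 11, 13, 15, 17]]
```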
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`. 
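The idea behind `sample_from_datasets` can be sketched in plain Python: on every step, pick one of the input streams with the given probabilities and emit its next element (illustration only; the stream names are stand-ins, not part of the TF API):

```python
import itertools
import random

def sample_from_iterators(iterators, weights, rng=None):
    """Sketch of sample_from_datasets' mechanics: at each step, choose
    one input stream with the given probability and take its next element."""
    rng = rng or random.Random(0)
    while True:
        stream = rng.choices(iterators, weights=weights)[0]
        yield next(stream)

negatives = itertools.repeat(0)  # stand-in for the class-0 dataset
positives = itertools.repeat(1)  # stand-in for the class-1 dataset
balanced = sample_from_iterators([negatives, positives], [0.5, 0.5])
sample = [next(balanced) for _ in range(10)]
print(sample)  # roughly half 0s and half 1s
```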
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = creditcard_ds.unbatch().filter(lambda features,label: label==0).repeat() positive_ds = creditcard_ds.unbatch().filter(lambda features,label: label==1).repeat() for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets`, pass the datasets and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets([negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs. 
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler returns `(class, example)` pairs built from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. 
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. 
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimatorTo use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simplyreturn the `Dataset` from the `input_fn` and the framework will take care of consuming its elementsfor you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple,reusable pieces. For example, the pipeline for an image model might aggregatedata from files in a distributed file system, apply random perturbations to eachimage, and merge randomly selected images into a batch for training. Thepipeline for a text model might involve extracting symbols from raw text data,converting them to embedding identifiers with a lookup table, and batchingtogether sequences of different lengths. The `tf.data` API makes it possible tohandle large amounts of data, read from different data formats, and performcomplex transformations.The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents asequence of elements, in which each element consists of one or more components.For example, in an image pipeline, an element might be a single trainingexample, with a pair of tensor components representing the image and its label.There are two distinct ways to create a dataset:* A data **source** constructs a `Dataset` from data stored in memory or in one or more files.* A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. 
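The source-plus-chained-transformations idea behaves much like composed Python generators. As a plain-Python analogy of the dataflow (an illustration only, not how `tf.data` is implemented):

```python
def source():
    # A data "source": produces elements from data held in memory.
    yield from [8, 3, 0, 8, 2, 1]

def map_fn(stream, f):
    # A "transformation": consumes one element stream, emits another.
    for item in stream:
        yield f(item)

def batch(stream, n):
    # Another "transformation": groups n consecutive elements.
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) == n:
            yield buf
            buf = []
    if buf:  # the last, possibly partial, batch
        yield buf

pipeline = batch(map_fn(source(), lambda x: x * 2), 4)
print(list(pipeline))  # [[16, 6, 0, 16], [4, 2]]
```

Like a `tf.data` pipeline, nothing runs until the composed object is iterated.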
###Code from __future__ import absolute_import, division, print_function, unicode_literals try: !pip install tf-nightly except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanicsTo create an input pipeline, you must start with a data *source*. For example,to construct a `Dataset` from data in memory, you can use`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.Alternatively, if your input data is stored in a file in the recommendedTFRecord format, you can use `tf.data.TFRecordDataset()`.Once you have a `Dataset` object, you can *transform* it into a new `Dataset` bychaining method calls on the `tf.data.Dataset` object. For example, you canapply per-element transformations such as `Dataset.map()`, and multi-elementtransformations such as `Dataset.batch()`. See the documentation for`tf.data.Dataset` for a complete list of transformations.The `Dataset` object is a Python iterable. This makes it possible to consume itselements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming itselements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce`transformation, which reduces all elements to produce a single result. Thefollowing example illustrates how to use the `reduce` transformation to computethe sum of a dataset of integers. 
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structureA dataset contains elements that each have the same (nested) structure and theindividual components of the structure can be of any type representable by`tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`,`TensorArray`, or `Dataset`.The `Dataset.element_spec` property allows you to inspect the type of eachelement component. The property returns a *nested structure* of `tf.TypeSpec`objects, matching the structure of the element, which may be a single component,a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. 
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from it is to convert it to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). 
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`; the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. 
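Batching variable-length vectors requires padding them to a common length first. A plain-Python sketch of that padding step (illustration only; `pad_batch` is a hypothetical helper, not a TF function):

```python
def pad_batch(sequences, pad_value=0):
    """Pad a list of variable-length lists to the length of the longest
    one so they can be stacked into a rectangular batch."""
    width = max(len(seq) for seq in sequences)
    return [seq + [pad_value] * (width - len(seq)) for seq in sequences]

print(pad_batch([[1], [2, 3], [4, 5, 6]]))
# [[1, 0, 0], [2, 3, 0], [4, 5, 6]]
```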
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord dataSee [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example.The `tf.data` API supports a variety of file formats so that you can processlarge datasets that do not fit in memory. For example, the TFRecord file formatis a simple record-oriented binary format that many TensorFlow applications usefor training data. The `tf.data.TFRecordDataset` class enables you tostream over the contents of one or more TFRecord files as part of an inputpipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files. 
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. Therefore if you havetwo sets of files for training and validation purposes, you can create a factorymethod that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text dataSee [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.Many datasets are distributed as one or more text files. The`tf.data.TextLineDataset` provides an easy way to extract lines from one or moretext files. Given one or more filenames, a `TextLineDataset` will produce onestring-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. 
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which maynot be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or`Dataset.filter()` transformations. Here we skip the first line, then filter tofind only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples. 
The CSV file format is a popular format for storing tabular data in plain text. For example:
###Code
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
df = pd.read_csv(titanic_file, index_col=None)
df.head()
###Output _____no_output_____ ###Markdown
If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported:
###Code
titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df))

for feature_batch in titanic_slices.take(1):
  for key, value in feature_batch.items():
    print("  {!r:20s}: {}".format(key, value))
###Output _____no_output_____ ###Markdown
A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).

The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple.
###Code
titanic_batches = tf.data.experimental.make_csv_dataset(
    titanic_file, batch_size=4,
    label_name="survived")

for feature_batch, label_batch in titanic_batches.take(1):
  print("'survived': {}".format(label_batch))
  print("features:")
  for key, value in feature_batch.items():
    print("  {!r:20s}: {}".format(key, value))
###Output _____no_output_____ ###Markdown
You can use the `select_columns` argument if you only need a subset of columns.
###Code
titanic_batches = tf.data.experimental.make_csv_dataset(
    titanic_file, batch_size=4,
    label_name="survived", select_columns=['class', 'fare', 'survived'])

for feature_batch, label_batch in titanic_batches.take(1):
  print("'survived': {}".format(label_batch))
  for key, value in feature_batch.items():
    print("  {!r:20s}: {}".format(key, value))
###Output _____no_output_____ ###Markdown
There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column.
###Code
titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string]
dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True)

for line in dataset.take(10):
  print([item.numpy() for item in line])
###Output _____no_output_____ ###Markdown
If some columns are empty, this low-level interface allows you to provide default values instead of column types.
###Code
%%writefile missing.csv
1,2,3,4
,2,3,4
1,,3,4
1,2,,4
1,2,3,
,,,
# Creates a dataset that reads all of the records from the CSV file above,
# which has four columns that may contain missing values.
record_defaults = [999, 999, 999, 999]
dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults)
dataset = dataset.map(lambda *items: tf.stack(items))
dataset

for line in dataset:
  print(line.numpy())
###Output _____no_output_____ ###Markdown
By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively.
###Code
# Creates a dataset that reads records from the same CSV file,
# extracting values from columns 2 and 4 only.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown We can read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, '/')[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batchingThe simplest form of batching stacks `n` consecutive elements of a dataset intoa single element. The `Dataset.batch()` transformation does exactly this, withthe same constraints as the `tf.stack()` operator, applied to each componentof the elements: i.e. 
for each component *i*, all elements must have a tensorof the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with paddingThe above recipe works for tensors that all have the same size. However, manymodels (e.g. sequence models) work with input data that can have varying size(e.g. sequences of different lengths). To handle this case, the`Dataset.padded_batch` transformation enables you to batch tensors ofdifferent shape by specifying one or more dimensions in which they may bepadded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different paddingfor each dimension of each component, and it may be variable-length (signifiedby `None` in the example above) or constant-length. It is also possible tooverride the padding value, which defaults to 0.<!--TODO(mrry): Add this section. 
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochsThe `tf.data` API offers two main ways to process multiple epochs of the samedata.The simplest way to iterate over a dataset in multiple epochs is to use the`Dataset.repeat()` transformation. First we create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeatthe input indefinitely.The `Dataset.repeat` transformation concatenates itsarguments without signaling the end of one epoch and the beginning of the nextepoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. 
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input dataThe `Dataset.shuffle()` transformation maintains a fixed-sizebuffer and chooses the next element uniformly at random from that buffer.Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters.`Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. 
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing dataThe `Dataset.map(f)` transformation produces a new dataset by applying a givenfunction `f` to each element of the input dataset. It is based on the[`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) functionthat is commonly applied to lists (and other structures) in functionalprogramming languages. The function `f` takes the `tf.Tensor` objects thatrepresent a single element in the input, and returns the `tf.Tensor` objectsthat will represent a single element in the new dataset. Its implementation usesstandard TensorFlow operations to transform one element into another.This section covers common examples of how to use `Dataset.map()`. 
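As a mental model, the per-element contract of `Dataset.map(f)` mirrors a plain Python comprehension (a standard-library sketch only; the real transformation additionally traces `f` into a TensorFlow graph and applies it to tensors):

```python
def to_pair(x):
  # A per-element function: one input element in, one output element out.
  # The output may have a different *structure*, e.g. a (feature, label) pair.
  return (x * x, x % 2)

elements = [1, 2, 3, 4]
mapped = [to_pair(x) for x in elements]   # one output per input, order preserved
```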
Decoding image data and resizing itWhen training a neural network on real-world image data, it is often necessaryto convert images of different sizes to a common size, so that they may bebatched into a fixed size.Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logicFor performance reasons, we encourage you to use TensorFlow operations forpreprocessing your data whenever possible. However, it is sometimes useful tocall external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. 
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead:
###Code
import scipy.ndimage as ndimage

def random_rotate_image(image):
  image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False)
  return image

image, label = next(iter(images_ds))
image = random_rotate_image(image)
show(image, label)
###Output _____no_output_____ ###Markdown
To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function:
###Code
def tf_random_rotate_image(image, label):
  im_shape = image.shape
  [image,] = tf.py_function(random_rotate_image, [image], [tf.float32])
  image.set_shape(im_shape)
  return image, label

rot_ds = images_ds.map(tf_random_rotate_image)

for image, label in rot_ds.take(2):
  show(image, label)
###Output _____no_output_____ ###Markdown
Parsing `tf.Example` protocol buffer messages

Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact.Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice. 
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 5 steps batch[-5:]) # take the remainder predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
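The meaning of the `size`, `shift`, and `stride` parameters can be sketched in plain Python over a list (an illustration of the semantics only; the helper name is made up, and the real `Dataset.window` stays streaming and returns nested `Dataset`s rather than lists):

```python
def window(seq, size, shift=1, stride=1, drop_remainder=True):
  # Windows of `size` elements, starting points `shift` apart,
  # taking every `stride`-th element inside each window.
  out = []
  start = 0
  while start < len(seq):
    w = seq[start:start + (size - 1) * stride + 1:stride]
    if len(w) == size or not drop_remainder:
      out.append(w)
    start += shift
  return out

windows = window(list(range(10)), size=5, shift=1)   # [0..4], [1..5], ...
```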
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes, it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`. 
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data:
###Code
negative_ds = (
  creditcard_ds
    .unbatch()
    .filter(lambda features, label: label==0)
    .repeat())
positive_ds = (
  creditcard_ds
    .unbatch()
    .filter(lambda features, label: label==1)
    .repeat())

for features, label in positive_ds.batch(10).take(1):
  print(label.numpy())
###Output _____no_output_____ ###Markdown
To use `tf.data.experimental.sample_from_datasets`, pass the datasets and the weight for each:
###Code
balanced_ds = tf.data.experimental.sample_from_datasets(
    [negative_ds, positive_ds], [0.5, 0.5]).batch(10)
###Output _____no_output_____ ###Markdown
Now the dataset produces examples of each class with 50/50 probability:
###Code
for features, labels in balanced_ds.take(10):
  print(labels.numpy())
###Output _____no_output_____ ###Markdown
Rejection resampling

One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice.

The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.

`data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.

The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels:
###Code
def class_func(features, label):
  return label
###Output _____no_output_____ ###Markdown
The resampler also needs a target distribution, and optionally an initial distribution estimate:
###Code
resampler = tf.data.experimental.rejection_resample(
    class_func, target_dist=[0.5, 0.5], initial_dist=fractions)
###Output _____no_output_____ ###Markdown
The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler:
###Code
resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10)
###Output _____no_output_____ ###Markdown
The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels:
###Code
balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label)
###Output _____no_output_____ ###Markdown
Now the dataset produces examples of each class with 50/50 probability:
###Code
for features, labels in balanced_ds.take(10):
  print(labels.numpy())
###Output _____no_output_____ ###Markdown
Using high-level APIs

tf.keras

The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup:
###Code
train, test = tf.keras.datasets.fashion_mnist.load_data()

images, labels = train
images = images/255.0
labels = labels.astype(np.int32)

fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)

model = tf.keras.Sequential([
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])
###Output _____no_output_____ ###Markdown
Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`:
###Code
model.fit(fmnist_train_ds, epochs=2)
###Output _____no_output_____ ###Markdown
If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument:
###Code
model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20)
###Output _____no_output_____ ###Markdown
For evaluation you can pass the number of evaluation steps:
###Code
loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output _____no_output_____ ###Markdown
For long datasets, set the number of steps to evaluate:
###Code
loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output _____no_output_____ ###Markdown
The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimatorTo use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simplyreturn the `Dataset` from the `input_fn` and the framework will take care of consuming its elementsfor you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple,reusable pieces. For example, the pipeline for an image model might aggregatedata from files in a distributed file system, apply random perturbations to eachimage, and merge randomly selected images into a batch for training. Thepipeline for a text model might involve extracting symbols from raw text data,converting them to embedding identifiers with a lookup table, and batchingtogether sequences of different lengths. The `tf.data` API makes it possible tohandle large amounts of data, read from different data formats, and performcomplex transformations.The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents asequence of elements, in which each element consists of one or more components.For example, in an image pipeline, an element might be a single trainingexample, with a pair of tensor components representing the image and its label.There are two distinct ways to create a dataset:* A data **source** constructs a `Dataset` from data stored in memory or in one or more files.* A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. 
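The two roles above — sources that materialize elements, and transformations that derive new datasets from existing ones — can be sketched with plain Python lists (purely illustrative helper names, not the `tf.data` API, which is streaming rather than list-based):

```python
def from_slices(values):
  # A "source": materializes elements from in-memory data.
  return list(values)

def map_ds(ds, f):
  # A per-element "transformation", mirroring Dataset.map.
  return [f(x) for x in ds]

def batch(ds, n):
  # A multi-element "transformation", mirroring Dataset.batch.
  return [ds[i:i + n] for i in range(0, len(ds), n)]

# Chaining a source through two transformations:
pipeline = batch(map_ds(from_slices(range(6)), lambda x: x * 2), 2)
```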
###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanicsTo create an input pipeline, you must start with a data *source*. For example,to construct a `Dataset` from data in memory, you can use`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.Alternatively, if your input data is stored in a file in the recommendedTFRecord format, you can use `tf.data.TFRecordDataset()`.Once you have a `Dataset` object, you can *transform* it into a new `Dataset` bychaining method calls on the `tf.data.Dataset` object. For example, you canapply per-element transformations such as `Dataset.map()`, and multi-elementtransformations such as `Dataset.batch()`. See the documentation for`tf.data.Dataset` for a complete list of transformations.The `Dataset` object is a Python iterable. This makes it possible to consume itselements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming itselements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce`transformation, which reduces all elements to produce a single result. Thefollowing example illustrates how to use the `reduce` transformation to computethe sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structureA dataset produces a sequence of *elements*, where each element isthe same (nested) structure of *components*. 
Individual componentsof the structure can be of any type representable by`tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`,`tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`.The Python constructs that can be used to express the (nested)structure of elements include `tuple`, `dict`, `NamedTuple`, and`OrderedDict`. In particular, `list` is not a valid construct forexpressing the structure of dataset elements. This is becauseearly tf.data users felt strongly about `list` inputs (e.g. passedto `tf.data.Dataset.from_tensors`) being automatically packed astensors and `list` outputs (e.g. return values of user-definedfunctions) being coerced into a `tuple`. As a consequence, if youwould like a `list` input to be treated as a structure, you needto convert it into `tuple` and if you would like a `list` outputto be a single component, then you need to explicitly pack itusing `tf.stack`.The `Dataset.element_spec` property allows you to inspect the typeof each element component. The property returns a *nested structure*of `tf.TypeSpec` objects, matching the structure of the element,which may be a single component, a tuple of components, or a nestedtuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. 
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b, c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arraysSee [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples.If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generatorsAnother common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator.Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). 
###Code def count(stop): i = 0 while i < stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`.The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments.The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=(), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`.It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods.Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`; the second is a `float32`.The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. 
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator`: ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( lambda: img_gen.flow_from_directory(flowers), output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds.element_spec for images, label in ds.take(1): print('images.shape: ', images.shape) print('labels.shape: ', labels.shape) ###Output _____no_output_____ ###Markdown Consuming TFRecord dataSee [Loading TFRecords](../tutorials/load_data/tfrecord.ipynb) for an end-to-end example.The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS) dataset. ###Code # Download the FSNS test file. 
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. Therefore if you havetwo sets of files for training and validation purposes, you can create a factorymethod that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text dataSee [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.Many datasets are distributed as one or more text files. The`tf.data.TextLineDataset` provides an easy way to extract lines from one or moretext files. Given one or more filenames, a `TextLineDataset` will produce onestring-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. 
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which maynot be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or`Dataset.filter()` transformations. Here, you skip the first line, then filter tofind only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas_dataframe.ipynb) for more examples. 
The CSV file format is a popular format for storing tabular data in plain text.For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).The `experimental.make_csv_dataset` function is the high level interface for reading sets of csv files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. 
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer grained control. It does not support column type inference. Instead you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from two CSV files, each with # four float columns which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file,which may not be desirable, for example if the file starts with a header linethat should be ignored, or if some columns are not required in the input.These lines and fields can be removed with the `header` and `select_cols`arguments respectively. ###Code # Creates a dataset that reads all of the records from two CSV files with # headers, extracting float data from columns 2 and 4. 
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batchingThe simplest form of batching stacks `n` consecutive elements of a dataset intoa single element. The `Dataset.batch()` transformation does exactly this, withthe same constraints as the `tf.stack()` operator, applied to each componentof the elements: i.e. 
for each component *i*, all elements must have a tensorof the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with paddingThe above recipe works for tensors that all have the same size. However, manymodels (e.g. sequence models) work with input data that can have varying size(e.g. sequences of different lengths). To handle this case, the`Dataset.padded_batch` transformation enables you to batch tensors ofdifferent shape by specifying one or more dimensions in which they may bepadded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different paddingfor each dimension of each component, and it may be variable-length (signifiedby `None` in the example above) or constant-length. It is also possible tooverride the padding value, which defaults to 0.<!--TODO(mrry): Add this section. 
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochsThe `tf.data` API offers two main ways to process multiple epochs of the samedata.The simplest way to iterate over a dataset in multiple epochs is to use the`Dataset.repeat()` transformation. First, create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeatthe input indefinitely.The `Dataset.repeat` transformation concatenates itsarguments without signaling the end of one epoch and the beginning of the nextepoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. 
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input dataThe `Dataset.shuffle()` transformation maintains a fixed-sizebuffer and chooses the next element uniformly at random from that buffer.Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters.`Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. 
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing dataThe `Dataset.map(f)` transformation produces a new dataset by applying a givenfunction `f` to each element of the input dataset. It is based on the[`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) functionthat is commonly applied to lists (and other structures) in functionalprogramming languages. The function `f` takes the `tf.Tensor` objects thatrepresent a single element in the input, and returns the `tf.Tensor` objectsthat will represent a single element in the new dataset. Its implementation usesstandard TensorFlow operations to transform one element into another.This section covers common examples of how to use `Dataset.map()`. 
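Before the fuller examples, the core pattern can be sketched in a few lines (the names here are invented for illustration):

```python
import tensorflow as tf

# Dataset.map(f) applies f to every element, producing a new dataset.
dataset = tf.data.Dataset.range(5)
squared = dataset.map(lambda x: x * x)

print([int(x) for x in squared])  # [0, 1, 4, 9, 16]
```

The function runs inside the `tf.data` graph, so it should be built from TensorFlow operations where possible.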
Decoding image data and resizing itWhen training a neural network on real-world image data, it is often necessaryto convert images of different sizes to a common size, so that they may bebatched into a fixed size.Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logicFor performance reasons, use TensorFlow operations forpreprocessing your data whenever possible. However, it is sometimes useful tocall external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. 
Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`.To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`, you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messagesMany input pipelines extract `tf.train.Example` protocol buffer messages from aTFRecord format. Each `tf.train.Example` record contains one or more "features",and the input pipeline typically converts these features into tensors. 
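As a minimal, self-contained illustration of that decoding step, the sketch below builds a `tf.train.Example` by hand (the `label` and `text` feature names are invented for this example, not taken from the FSNS data) and decodes it back into tensors with `tf.io.parse_single_example`:

```python
import tensorflow as tf

# Build a serialized tf.train.Example message by hand; in practice this bytes
# string would come from a TFRecord file.
example = tf.train.Example(features=tf.train.Features(feature={
    'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[1])),
    'text': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'hello'])),
}))
serialized = example.SerializeToString()

# Decode the serialized message back into a dict of tensors.
parsed = tf.io.parse_single_example(serialized, features={
    'label': tf.io.FixedLenFeature([], tf.int64),
    'text': tf.io.FixedLenFeature([], tf.string),
})
print(parsed['label'].numpy())  # 1
print(parsed['text'].numpy())   # b'hello'
```

The batched variant, `tf.io.parse_example`, follows the same feature-spec pattern and is used with the FSNS records below.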
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/structured_data/time_series.ipynb). Time series data is often organized with the time axis intact.Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice. 
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: All except the last 5 steps batch[-5:]) # Labels: The last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 3 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predicted_steps = tf.data.Dataset.zip((features, labels)) for features, label in predicted_steps.take(5): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func=count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets`, pass the datasets and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note, however, that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
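As a conceptual analogy only (this is not how `tf.train.Checkpoint` works internally), checkpointing an iterator amounts to saving and restoring its position in the stream:

```python
class ResumableIterator:
    """Toy analogy: an iterator over a list whose position can be
    saved and restored, like checkpointing a dataset iterator."""
    def __init__(self, data):
        self.data = data
        self.pos = 0

    def __next__(self):
        if self.pos >= len(self.data):
            raise StopIteration
        value = self.data[self.pos]
        self.pos += 1
        return value

    def state(self):           # "save a checkpoint"
        return self.pos

    def restore(self, state):  # "restore the checkpoint"
        self.pos = state

it = ResumableIterator(list(range(20)))
print([next(it) for _ in range(5)])  # [0, 1, 2, 3, 4]
ckpt = it.state()                    # save after 5 elements
print([next(it) for _ in range(5)])  # [5, 6, 7, 8, 9]
it.restore(ckpt)                     # roll back to the saved position
print([next(it) for _ in range(5)])  # [5, 6, 7, 8, 9] again
```

The real mechanism below checkpoints the iterator's internal buffers as well, which is why such checkpoints can be large.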
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using tf.data with tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training.
The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. ###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable.
This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset produces a sequence of *elements*, where each element is the same (nested) structure of *components*. Individual components of the structure can be of any type representable by `tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`. The Python constructs that can be used to express the (nested) structure of elements include `tuple`, `dict`, `NamedTuple`, and `OrderedDict`. In particular, `list` is not a valid construct for expressing the structure of dataset elements. This is because early tf.data users felt strongly about `list` inputs (e.g. passed to `tf.data.Dataset.from_tensors`) being automatically packed as tensors and `list` outputs (e.g. return values of user-defined functions) being coerced into a `tuple`. As a consequence, if you would like a `list` input to be treated as a structure, you need to convert it into `tuple`, and if you would like a `list` output to be a single component, then you need to explicitly pack it using `tf.stack`. The `Dataset.element_spec` property allows you to inspect the type of each element component.
The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b, c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`.
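In NumPy terms, the slicing that `from_tensor_slices` performs is simply iteration along the first axis; as a sketch (not the actual implementation):

```python
import numpy as np

features = np.arange(12).reshape(4, 3)  # 4 examples, 3 features each
labels = np.array([0, 1, 0, 1])

# from_tensor_slices((features, labels)) yields one (row, label) pair
# per step along axis 0 -- equivalent to zipping the first axes:
elements = list(zip(features, labels))
print(len(elements))         # 4 elements
print(elements[0][0].shape)  # each feature component has shape (3,)
```

The first dimension becomes the "example" dimension, which is why all components passed to `from_tensor_slices` must agree on it.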
###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=(), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank.
If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`.
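A simplified plain-Python sketch of what `padded_batch` does for a single ragged dimension: every sequence in a batch is right-padded (with 0 by default) up to the length of the batch's longest sequence. The `pad_batch` helper here is hypothetical, purely for illustration:

```python
def pad_batch(sequences, pad_value=0):
    """Right-pad each list to the length of the longest one,
    mimicking padded_batch's default behaviour for one ragged axis."""
    max_len = max(len(s) for s in sequences)
    return [s + [pad_value] * (max_len - len(s)) for s in sequences]

batch = [[1], [2, 2], [3, 3, 3]]
print(pad_batch(batch))  # [[1, 0, 0], [2, 2, 0], [3, 3, 3]]
```

Because the pad length is chosen per batch, different batches can have different padded widths, which is why the batched shape shows `None` for that axis.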
###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator`: ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( lambda: img_gen.flow_from_directory(flowers), output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds.element_spec for images, label in ds.take(1): print('images.shape: ', images.shape) print('labels.shape: ', labels.shape) ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore, if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames=[fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together.
Here are the first, second, and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from two CSV files, each with # four float columns which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments, respectively. ###Code # Creates a dataset that reads all of the records from two CSV files with # headers, extracting float data from columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
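The mechanics can be sketched on toy data first (a minimal sketch, not one of the guide's examples): the mapped function receives one `tf.Tensor` argument per element component, and the result becomes the new element.

```python
import tensorflow as tf

# Each element is a scalar tensor; the function receives it directly.
ds = tf.data.Dataset.range(5).map(lambda x: x * 2)
print(list(ds.as_numpy_iterator()))  # [0, 2, 4, 6, 8]

# With tuple-structured elements, the function receives one argument per component.
pairs = tf.data.Dataset.from_tensor_slices(([1, 2, 3], [10, 20, 30]))
summed = pairs.map(lambda a, b: a + b)
print(list(summed.as_numpy_iterator()))  # [11, 22, 33]
```

The same argument-unpacking rule applies in the image-parsing function that follows, which maps a filename element to an `(image, label)` pair.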
Decoding image data and resizing itWhen training a neural network on real-world image data, it is often necessaryto convert images of different sizes to a common size, so that they may bebatched into a fixed size.Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logicFor performance reasons, use TensorFlow operations forpreprocessing your data whenever possible. However, it is sometimes useful tocall external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. 
Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`.To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
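Before decoding records from a real file, it can help to see the proto structure itself. Here is a small sketch (with made-up feature names, not the FSNS schema) that builds a `tf.train.Example`, serializes it the way a TFRecord file stores it, and parses it back:

```python
import tensorflow as tf

# Build a toy Example: one int64 feature and one bytes feature.
example = tf.train.Example(features=tf.train.Features(feature={
    'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[1])),
    'text': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'hello'])),
}))

# TFRecord files store this serialized form, one record per dataset element.
serialized = example.SerializeToString()

# Round-trip: this is the structure the input pipeline decodes into tensors.
parsed = tf.train.Example.FromString(serialized)
print(parsed.features.feature['label'].int64_list.value[0])  # 1
print(parsed.features.feature['text'].bytes_list.value[0])   # b'hello'
```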
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/structured_data/time_series.ipynb). Time series data is often organized with the time axis intact.Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice. 
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 5 steps batch[-5:]) # take the remainder predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 3 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predicted_steps = tf.data.Dataset.zip((features, labels)) for features, label in predicted_steps.take(5): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
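The balancing idea can be sketched on synthetic data first (toy streams standing in for real per-class datasets): `tf.data.experimental.sample_from_datasets` interleaves its inputs, drawing each element from a dataset chosen according to the given weights.

```python
import tensorflow as tf

# Two synthetic "class" streams: zeros for the majority class, ones for the minority.
majority = tf.data.Dataset.from_tensors(0).repeat()
minority = tf.data.Dataset.from_tensors(1).repeat()

# Draw from each stream with probability 0.5, regardless of the true class ratio.
balanced = tf.data.experimental.sample_from_datasets(
    [majority, minority], weights=[0.5, 0.5])

labels = [x.numpy() for x in balanced.take(1000)]
print(sum(labels) / len(labels))  # roughly 0.5
```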
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes, it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`. 
This is more applicable when you have a separate `data.Dataset` for each class.Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is thatit needs a separate `tf.data.Dataset` per class. Using `Dataset.filter`works, but results in all the data being loaded twice.The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.`data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.The elements of `creditcard_ds` are already `(features, label)` pairs. 
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note however that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using tf.data with tf.keras The `tf.keras` API simplifies many aspects of creating and executing machinelearning models. Its `.fit()` and `.evaluate()` and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) 
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training.
Thepipeline for a text model might involve extracting symbols from raw text data,converting them to embedding identifiers with a lookup table, and batchingtogether sequences of different lengths. The `tf.data` API makes it possible tohandle large amounts of data, read from different data formats, and performcomplex transformations.The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents asequence of elements, in which each element consists of one or more components.For example, in an image pipeline, an element might be a single trainingexample, with a pair of tensor components representing the image and its label.There are two distinct ways to create a dataset:* A data **source** constructs a `Dataset` from data stored in memory or in one or more files.* A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. ###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanicsTo create an input pipeline, you must start with a data *source*. For example,to construct a `Dataset` from data in memory, you can use`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.Alternatively, if your input data is stored in a file in the recommendedTFRecord format, you can use `tf.data.TFRecordDataset()`.Once you have a `Dataset` object, you can *transform* it into a new `Dataset` bychaining method calls on the `tf.data.Dataset` object. For example, you canapply per-element transformations such as `Dataset.map()`, and multi-elementtransformations such as `Dataset.batch()`. See the documentation for`tf.data.Dataset` for a complete list of transformations.The `Dataset` object is a Python iterable. 
This makes it possible to consume itselements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming itselements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce`transformation, which reduces all elements to produce a single result. Thefollowing example illustrates how to use the `reduce` transformation to computethe sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structureA dataset produces a sequence of *elements*, where each element isthe same (nested) structure of *components*. Individual componentsof the structure can be of any type representable by`tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`,`tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`.The Python constructs that can be used to express the (nested)structure of elements include `tuple`, `dict`, `NamedTuple`, and`OrderedDict`. In particular, `list` is not a valid construct forexpressing the structure of dataset elements. This is becauseearly tf.data users felt strongly about `list` inputs (e.g. passedto `tf.data.Dataset.from_tensors`) being automatically packed astensors and `list` outputs (e.g. return values of user-definedfunctions) being coerced into a `tuple`. As a consequence, if youwould like a `list` input to be treated as a structure, you needto convert it into `tuple` and if you would like a `list` outputto be a single component, then you need to explicitly pack itusing `tf.stack`.The `Dataset.element_spec` property allows you to inspect the typeof each element component. 
The property returns a *nested structure*of `tf.TypeSpec` objects, matching the structure of the element,which may be a single component, a tuple of components, or a nestedtuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the`Dataset.map()`, and `Dataset.filter()` transformations,which apply a function to each element, the element structure determines thearguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arraysSee [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples.If all of your input data fits in memory, the simplest way to create a `Dataset`from them is to convert them to `tf.Tensor` objects and use`Dataset.from_tensor_slices()`. 
###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended as many TensorFlow operations do not support tensors with unknown rank.
If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)` ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`.
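A minimal sketch of what `Dataset.padded_batch` does, on toy data rather than the generator above: each component is padded to the longest shape in the batch (in recent TensorFlow versions the padded shapes can be inferred, and the default padding value is zero).

```python
import tensorflow as tf

# Vectors of length 1, 2, 3 -- a variable trailing dimension.
ds = tf.data.Dataset.range(1, 4).map(lambda n: tf.range(n))

# padded_batch pads each element to the longest length in the batch,
# here with the default padding value of 0.
batch = next(iter(ds.padded_batch(3)))
print(batch.numpy())
# [[0 0 0]
#  [0 1 0]
#  [0 1 2]]
```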
###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( lambda: img_gen.flow_from_directory(flowers), output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds.element_spec for images, label in ds.take(1): print('images.shape: ', images.shape) print('labels.shape: ', labels.shape) ###Output _____no_output_____ ###Markdown Consuming TFRecord dataSee [Loading TFRecords](../tutorials/load_data/tfrecord.ipynb) for an end-to-end example.The `tf.data` API supports a variety of file formats so that you can processlarge datasets that do not fit in memory. For example, the TFRecord file formatis a simple record-oriented binary format that many TensorFlow applications usefor training data. The `tf.data.TFRecordDataset` class enables you tostream over the contents of one or more TFRecord files as part of an inputpipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files. 
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. Therefore if you havetwo sets of files for training and validation purposes, you can create a factorymethod that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text dataSee [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.Many datasets are distributed as one or more text files. The`tf.data.TextLineDataset` provides an easy way to extract lines from one or moretext files. Given one or more filenames, a `TextLineDataset` will produce onestring-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. 
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which maynot be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or`Dataset.filter()` transformations. Here, you skip the first line, then filter tofind only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas_dataframe.ipynb) for more examples. 
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, which has # four columns that may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads all of the records from the CSV file with # a header, extracting data from columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
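As a minimal, self-contained sketch of the idea (using a toy integer dataset rather than real data; the names here are illustrative), `Dataset.map` transforms each element and yields a new dataset:

```python
import tensorflow as tf

# A toy source dataset containing the integers 0..4.
ds = tf.data.Dataset.range(5)

# Dataset.map applies the function to every element and
# returns a new dataset of the transformed elements.
squares = ds.map(lambda x: x * x)

print([int(v) for v in squares])  # → [0, 1, 4, 9, 16]
```

Note that the lambda runs in graph mode, so it must be built from TensorFlow operations; the following sections show what to do when plain Python logic is needed.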
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.io.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/structured_data/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: All except the last 5 steps batch[-5:]) # Labels: The last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 3 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predicted_steps = tf.data.Dataset.zip((features, labels)) for features, label in predicted_steps.take(5): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together, you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func=count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.Dataset.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.Dataset.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `Dataset.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. You could use `Dataset.filter` to create those two datasets, but that results in all the data being loaded twice. The `data.Dataset.rejection_resample` method can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.Dataset.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The goal here is to balance the label distribution, and the elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampling method deals with individual examples, so in this case you must `unbatch` the dataset before applying that method. The method needs a target distribution, and optionally an initial distribution estimate as inputs. ###Code resample_ds = ( creditcard_ds .unbatch() .rejection_resample(class_func, target_dist=[0.5,0.5], initial_dist=fractions) .batch(10)) ###Output _____no_output_____ ###Markdown The `rejection_resample` method returns `(class, example)` pairs where the `class` is the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note however that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using tf.data with tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training.
The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. ###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable.
This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor.
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b, c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. 
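The same structure rules apply to `Dataset.filter`: the predicate receives one argument per element component and must return a scalar boolean. A minimal, self-contained sketch (the tensor values here are made up) filtering a `(features, label)` dataset by label:

```python
import tensorflow as tf

# Each element is a (features, label) pair, so the predicate
# receives two arguments and must return a scalar tf.bool.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.uniform([6, 2]), tf.constant([0, 1, 0, 1, 0, 1])))

zeros_only = dataset.filter(lambda features, label: label == 0)

print(sum(1 for _ in zeros_only))  # 3
```

Because the predicate runs inside the `tf.data` graph, it must use TensorFlow ops rather than arbitrary Python logic.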
Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i < stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=(), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. 
###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`; the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator`: ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32, 256, 256, 3], [32, 5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. 
For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS) dataset. ###Code # Download the FSNS test file of serialized examples. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore, if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames=[fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. 
###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files, use `Dataset.interleave`. This makes it easier to shuffle files together. Here are the first, second, and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples. 
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. 
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, which has # four columns that may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments, respectively. ###Code # Creates a dataset that reads records from the CSV file, extracting # only the data from columns 2 and 4. 
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY; see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e. 
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section. 
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. 
to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large `buffer_size` values shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. 
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`. 
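Before the image examples, the core idea of `Dataset.map()` can be shown in a minimal, self-contained sketch that squares each element of an integer dataset:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(5)

# `f` receives one tf.Tensor per element component and returns
# the transformed element; here each scalar is squared.
squared = dataset.map(lambda x: x * x)

print([int(x) for x in squared])  # [0, 1, 4, 9, 16]
```

The lambda is traced into the `tf.data` graph, so it should be built from TensorFlow operations.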
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. 
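Before falling back to Python, note that many common augmentations other than rotation can be expressed entirely with built-in `tf.image` ops; a small sketch on a synthetic image (the flip and brightness parameters are arbitrary choices):

```python
import tensorflow as tf

def augment(image):
    # Pure-TensorFlow augmentations run inside the tf.data graph,
    # unlike Python-level functions wrapped with tf.py_function.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.2)
    return tf.clip_by_value(image, 0.0, 1.0)

images = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 8, 8, 3]))
augmented = images.map(augment)

for image in augmented.take(1):
    print(image.shape)  # (8, 8, 3)
```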
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors. 
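For context, the serialized records parsed in this section are assembled from `tf.train.Feature` values. A minimal round-trip sketch (the feature key and text here are illustrative):

```python
import tensorflow as tf

def bytes_feature(value):
    # Wrap a bytes value in the protocol-buffer feature type.
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

example = tf.train.Example(features=tf.train.Features(feature={
    "image/text": bytes_feature(b"Rue Perreyon"),
}))
serialized = example.SerializeToString()

# Parsing reverses the serialization.
parsed = tf.train.Example.FromString(serialized)
print(parsed.features.feature["image/text"].bytes_list.value[0])  # b'Rue Perreyon'
```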
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames=[fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example, see [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate:
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 5 steps batch[-5:]) # take the remainder predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together, you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes, it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`. 
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets`, pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs. 
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note, however, that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state, such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the dataset directly: ###Code loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimator To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed
under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset:
* A data **source** constructs a `Dataset` from data stored in memory or in one or more files.
* A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
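Both creation paths fit in a few lines. The sketch below (a minimal illustration with arbitrary toy data, not an example from this guide) builds a `Dataset` from an in-memory list (a source) and then derives a new one from it (transformations):

```python
import tensorflow as tf

# A data source: build a Dataset from in-memory data.
source_ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])

# Data transformations: derive a new Dataset from an existing one.
transformed_ds = source_ds.map(lambda x: x * 2).batch(2)

for batch in transformed_ds:
    print(batch.numpy())  # [2 4], then [6 8]
```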
###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable. This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset produces a sequence of *elements*, where each element is the same (nested) structure of *components*.
Individual components of the structure can be of any type representable by `tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`. The Python constructs that can be used to express the (nested) structure of elements include `tuple`, `dict`, `NamedTuple`, and `OrderedDict`. In particular, `list` is not a valid construct for expressing the structure of dataset elements. This is because early tf.data users felt strongly about `list` inputs (e.g. passed to `tf.data.Dataset.from_tensors`) being automatically packed as tensors and `list` outputs (e.g. return values of user-defined functions) being coerced into a `tuple`. As a consequence, if you would like a `list` input to be treated as a structure, you need to convert it into a `tuple`, and if you would like a `list` output to be a single component, then you need to explicitly pack it using `tf.stack`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure.
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b, c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
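For instance, iterating over a few elements shows that each element's second component has a different length, which is why plain batching won't work here. A short sketch (redefining `gen_series` so the snippet is self-contained):

```python
import numpy as np
import tensorflow as tf

def gen_series():
    i = 0
    while True:
        size = np.random.randint(0, 10)
        yield i, np.random.normal(size=(size,))
        i += 1

ds_series = tf.data.Dataset.from_generator(
    gen_series,
    output_types=(tf.int32, tf.float32),
    output_shapes=((), (None,)))

# Each element pairs a scalar id with a float vector of variable length.
for i, series in ds_series.take(3):
    print(i.numpy(), series.shape)
```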
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First, download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator`: ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( lambda: img_gen.flow_from_directory(flowers), output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds.element_spec for images, label in ds.take(1): print('images.shape: ', images.shape) print('labels.shape: ', labels.shape) ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tfrecord.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads the examples from a file.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore, if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together.
Here are the first, second, and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads the records from a CSV file with # four columns, which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments, respectively. ###Code # Creates a dataset that reads the records from a CSV file, # extracting data from columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY; see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large buffer sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
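As a minimal illustration (a sketch with an assumed toy dataset, not an example from this guide), `Dataset.map` applies the function element-wise, much like Python's built-in `map`:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(5)

# Square each element; the lambda receives and returns tf.Tensor objects.
squared = dataset.map(lambda x: x * x)

print([int(x.numpy()) for x in squared])  # [0, 1, 4, 9, 16]
```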
Decoding image data and resizing itWhen training a neural network on real-world image data, it is often necessaryto convert images of different sizes to a common size, so that they may bebatched into a fixed size.Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logicFor performance reasons, use TensorFlow operations forpreprocessing your data whenever possible. However, it is sometimes useful tocall external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. 
Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`.To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`, you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messagesMany input pipelines extract `tf.train.Example` protocol buffer messages from aTFRecord format. Each `tf.train.Example` record contains one or more "features",and the input pipeline typically converts these features into tensors. 
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/structured_data/time_series.ipynb). Time series data is often organized with the time axis intact.Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice. 
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset, you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take all but the last 5 steps as features batch[-5:]) # and the last 5 steps as labels predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 3 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predicted_steps = tf.data.Dataset.zip((features, labels)) for features, label in predicted_steps.take(5): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
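Before moving on to `Dataset.window` itself, the window/shift/stride semantics can be sketched in plain Python. This is a simplified model for intuition only, not the `tf.data` implementation; `windows` is a hypothetical helper, and it drops partial trailing windows the way the `drop_remainder=True` batching step does:

```python
def windows(seq, size, shift=1, stride=1):
    # Each window starts `shift` elements after the previous one and
    # takes `size` elements spaced `stride` apart; partial windows at
    # the end are dropped.
    out = []
    start = 0
    while start + (size - 1) * stride < len(seq):
        out.append([seq[start + i * stride] for i in range(size)])
        start += shift
    return out

print(windows(list(range(10)), size=3, shift=5, stride=2))
# → [[0, 2, 4], [5, 7, 9]]
```

With `shift` smaller than `size` the windows overlap; with `shift` equal to `size` and `stride=1` this degenerates to plain contiguous batching.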
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
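The rejection-resampling idea used later in this section can be sketched numerically before touching any data: an element of class `c` is kept with probability proportional to `target[c] / initial[c]`, scaled so the most under-represented class is always kept. This is a plain-Python illustration of the principle; the `acceptance_probs` helper is hypothetical and not part of `tf.data`:

```python
def acceptance_probs(initial_dist, target_dist):
    # Keep an element of class c with probability proportional to
    # target[c] / initial[c]; normalize so the largest ratio maps to 1.0.
    ratios = [t / i for t, i in zip(target_dist, initial_dist)]
    largest = max(ratios)
    return [r / largest for r in ratios]

# A 99%/1% class split rebalanced to 50/50: every minority example is
# kept, while roughly 99% of majority examples are rejected.
print(acceptance_probs([0.99, 0.01], [0.5, 0.5]))
```

This is also why rejection resampling shrinks the dataset: balance is achieved by dropping majority-class elements rather than by duplicating minority-class ones.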
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func=count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets`, pass the datasets and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts, it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note, however, that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state, such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using tf.data with tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation, you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files, use `Dataset.interleave`.
This makes it easier to shuffle files together. Here are the first, second, and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, which has # four columns whose values may be missing; missing values default to 999. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads records from the CSV file, # extracting values from columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY; see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch, it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large `buffer_size` values shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
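The contract just described can be mirrored in plain Python: `f` turns one input element into exactly one output element, so the element structure may change but the element count never does. This is a sketch for intuition only; `map_dataset` is a hypothetical stand-in, not the `tf.data` implementation:

```python
def map_dataset(elements, f):
    # One output element per input element, in order: Dataset.map
    # in miniature.
    return [f(x) for x in elements]

# Mimic a (path -> (path, label)) transformation: the structure of
# each element changes, but two elements in means two elements out.
paths = ["roses/1.jpg", "tulips/2.jpg"]
print(map_dataset(paths, lambda p: (p, p.split("/")[0])))
# → [('roses/1.jpg', 'roses'), ('tulips/2.jpg', 'tulips')]
```

If you need a one-to-many mapping instead (one element producing several), that is `Dataset.flat_map`, which was shown in the windowing section above.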
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown When using this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example, see: [Time series forecasting](../../tutorials/structured_data/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: All except the last 5 steps batch[-5:]) # Labels: The last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 3 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predicted_steps = tf.data.Dataset.zip((features, labels)) for features, label in predicted_steps.take(5): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
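The behavior of `window(size, shift, stride)` can be sketched in plain Python before looking at the real thing. The `window` function below is a hypothetical helper, not the tf.data implementation, and it folds in the `drop_remainder=True` batching used later in this section:

```python
def window(elements, size, shift=1, stride=1):
    # Each window starts `shift` positions after the previous one and takes
    # every `stride`-th element; short trailing windows are dropped, which
    # matches batching each sub-dataset with drop_remainder=True.
    windows = []
    for start in range(0, len(elements), shift):
        w = elements[start:start + size * stride:stride]
        if len(w) == size:
            windows.append(w)
    return windows

print(window(list(range(8)), size=5))                     # overlapping, shift=1
print(window(list(range(20)), size=5, shift=5, stride=3)) # strided windows
```

The first call mirrors `range_ds.window(5, shift=1)` on the first few elements; the second mirrors the `make_window_dataset` parameters used below.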
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
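The balancing idea used in this section can be sketched without tf.data: draw each element from one of several sources according to fixed weights. `sample_from_sources` is a hypothetical helper (seeded here only to keep the sketch deterministic), not the library API:

```python
import itertools
import random

def sample_from_sources(sources, weights, n, rng):
    # Pick a source index per draw according to `weights`, then take that
    # source's next element -- the idea behind sample_from_datasets.
    iterators = [iter(s) for s in sources]
    picks = rng.choices(range(len(sources)), weights=weights, k=n)
    return [next(iterators[i]) for i in picks]

negatives = itertools.cycle([0])  # stands in for a stream of class-0 examples
positives = itertools.cycle([1])  # stands in for a stream of class-1 examples
draws = sample_from_sources([negatives, positives], [0.5, 0.5], 10,
                            random.Random(0))
print(draws)
```

With equal weights, each draw is equally likely to come from either stream, which is how the balanced dataset below achieves its 50/50 label mix.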
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class.Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice.The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.`data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts, it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note, however, that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
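Conceptually, an iterator's progress is just state that can be saved and restored. A toy plain-Python sketch of the idea (the real mechanism serializes internal iterator state, including shuffle and prefetch buffers, rather than a bare index):

```python
class CheckpointableIterator:
    # Toy stand-in for checkpointing a dataset iterator: the "checkpoint"
    # here is simply the position reached so far.
    def __init__(self, data):
        self.data = data
        self.position = 0

    def __next__(self):
        value = self.data[self.position]
        self.position += 1
        return value

    def save(self):
        return self.position

    def restore(self, checkpoint):
        self.position = checkpoint

it = CheckpointableIterator(list(range(20)))
first = [next(it) for _ in range(5)]
ckpt = it.save()                       # remember where we were
skipped = [next(it) for _ in range(5)]
it.restore(ckpt)                       # roll back to the saved position
resumed = [next(it) for _ in range(5)]
print(first, resumed)                  # resumed repeats the post-checkpoint run
```

The real API below does the same thing through `tf.train.Checkpoint` and `tf.train.CheckpointManager`.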
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using tf.data with tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation, you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training.
The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. ###Code from __future__ import absolute_import, division, print_function, unicode_literals try: %tensorflow_version 2.x except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable.
This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`.The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor.
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer.
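What `from_tensor_slices` does with multiple arrays can be pictured in plain Python: it slices each input along its first axis and pairs the slices up element by element. The `slices` function below is a toy stand-in for illustration, not the tf.data implementation:

```python
def slices(*arrays):
    # Pair up the inputs row by row along the first axis, mirroring how
    # (images, labels) above becomes a dataset of (image, label) elements.
    return list(zip(*arrays))

images = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]  # shape (3, 2)
labels = [7, 8, 9]                             # shape (3,)
elements = slices(images, labels)
print(elements[0])  # ([0.1, 0.2], 7)
```

An array of shape `(N, ...)` thus becomes `N` dataset elements of shape `(...)`, which is why the Fashion-MNIST arrays above turn into a dataset of individual `(image, label)` pairs.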
Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`.The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments.The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`.It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length.
###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([], [None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First, download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not
fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Download the FSNS test file. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore, if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files.
###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files, use `Dataset.interleave`. This makes it easier to shuffle files together. Here are the first, second, and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here we skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from two CSV files, each with # four float columns which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments, respectively. ###Code # Creates a dataset that reads all of the records from two CSV files with # headers, extracting float data from columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY; see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown We can read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, '/')[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch()` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch()` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0.<!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First we create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
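The analogy to the functional `map()` mentioned above can be sketched in plain Python. This is an illustration only, not tf.data code (`Dataset.map` traces `f` into a TensorFlow graph and runs it per element):

```python
# Plain-Python analogy of Dataset.map: a function is applied to every
# element of a sequence, yielding a new sequence of transformed elements.
def f(x):
    return x * x

elements = [1, 2, 3, 4]
mapped = [f(x) for x in elements]
print(mapped)  # [1, 4, 9, 16]
```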
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, we encourage you to use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 5 steps batch[-5:]) # take the remainder predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
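To build intuition for the `size`, `shift`, and `stride` parameters used with `Dataset.window`, here is a plain-Python sketch of the windowing arithmetic. It illustrates only the indexing (with `drop_remainder=True` semantics), not how tf.data implements it; the function name is made up for this sketch:

```python
def window_indices(n, size, shift=1, stride=1):
    """Indices selected by window(size, shift, stride) over range(n),
    dropping incomplete windows: each window starts `shift` elements
    after the previous one and takes every `stride`-th element."""
    windows = []
    start = 0
    while True:
        idx = list(range(start, start + size * stride, stride))
        if idx[-1] >= n:  # window would be incomplete; stop here
            break
        windows.append(idx)
        start += shift
    return windows

print(window_indices(20, size=5, shift=5, stride=3))
# [[0, 3, 6, 9, 12], [5, 8, 11, 14, 17]]
```

With `shift` equal to `size` and `stride=1` this reduces to plain non-overlapping batching; smaller `shift` values produce overlapping windows.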
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
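The accept/reject idea behind rejection-based rebalancing can be sketched in plain Python. This is purely an illustration of the statistics, not tf.data's implementation, and the function name is made up for the sketch: each class is accepted with a probability proportional to `target_dist / initial_dist`, scaled so the most under-represented class is kept in full.

```python
import random

def rejection_resample(stream, initial_dist, target_dist, seed=0):
    # Accept an element of class c with probability proportional to
    # target_dist[c] / initial_dist[c], scaled so the largest ratio is 1:
    # that class is kept entirely; over-represented classes are thinned.
    ratios = [t / d for t, d in zip(target_dist, initial_dist)]
    scale = max(ratios)
    accept = [r / scale for r in ratios]
    rng = random.Random(seed)
    for label in stream:
        if rng.random() < accept[label]:
            yield label

stream = [0] * 950 + [1] * 50   # a 95% / 5% imbalanced label stream
balanced = list(rejection_resample(stream, [0.95, 0.05], [0.5, 0.5]))
frac_1 = sum(balanced) / len(balanced)
print(round(frac_1, 2))  # close to 0.5
```

The price of this approach is that elements of the over-represented class are dropped, which is also true of `rejection_resample` below.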
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods that enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimator To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
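Before the TensorFlow setup below, the source/transformation idea can be sketched in plain Python. This is purely illustrative (tf.data is lazy and graph-based, not eager lists): a source produces elements, and each transformation consumes one sequence to produce another, so pipelines compose by chaining.

```python
source = range(10)                    # a data "source" of elements
mapped = (x * 2 for x in source)      # per-element transformation (like map)
batched = []                          # multi-element transformation (like batch)
batch = []
for x in mapped:
    batch.append(x)
    if len(batch) == 4:
        batched.append(batch)
        batch = []                    # incomplete final batch is dropped
print(batched)  # [[0, 2, 4, 6], [8, 10, 12, 14]]
```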
###Code from __future__ import absolute_import, division, print_function, unicode_literals try: !pip install tf-nightly except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable. This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers.
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure, and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure.
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
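A plain-Python property of generators motivates the API shown next: a generator object is single-use, so restarting iteration (for example, for a new epoch) requires calling the generator function again. The `squares` generator here is made up for the illustration:

```python
# A generator object is exhausted after one pass, so a fresh pass
# requires invoking the generator function again.
def squares(stop):
    for i in range(stop):
        yield i * i

gen = squares(3)
assert list(gen) == [0, 1, 4]          # first pass consumes the object
assert list(gen) == []                 # exhausted: nothing left
assert list(squares(3)) == [0, 1, 4]   # a fresh call restarts cleanly
```

This is why `Dataset.from_generator` takes a callable rather than an iterator: it can re-invoke the callable whenever it needs to restart the stream.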
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)` ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. Therefore if you havetwo sets of files for training and validation purposes, you can create a factorymethod that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text dataSee [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.Many datasets are distributed as one or more text files. The`tf.data.TextLineDataset` provides an easy way to extract lines from one or moretext files. Given one or more filenames, a `TextLineDataset` will produce onestring-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. 
Here are the first, second, and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text.For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).The `experimental.make_csv_dataset` function is the high level interface for reading sets of csv files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. 
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer grained control. It does not support column type inference. Instead you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from two CSV files, each with # four float columns which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file,which may not be desirable, for example if the file starts with a header linethat should be ignored, or if some columns are not required in the input.These lines and fields can be removed with the `header` and `select_cols`arguments respectively. ###Code # Creates a dataset that reads all of the records from two CSV files with # headers, extracting float data from columns 2 and 4. 
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, '/')[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batchingThe simplest form of batching stacks `n` consecutive elements of a dataset intoa single element. The `Dataset.batch()` transformation does exactly this, withthe same constraints as the `tf.stack()` operator, applied to each componentof the elements: i.e. 
for each component *i*, all elements must have a tensorof the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with paddingThe above recipe works for tensors that all have the same size. However, manymodels (e.g. sequence models) work with input data that can have varying size(e.g. sequences of different lengths). To handle this case, the`Dataset.padded_batch` transformation enables you to batch tensors ofdifferent shape by specifying one or more dimensions in which they may bepadded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different paddingfor each dimension of each component, and it may be variable-length (signifiedby `None` in the example above) or constant-length. It is also possible tooverride the padding value, which defaults to 0.<!--TODO(mrry): Add this section. 
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochsThe `tf.data` API offers two main ways to process multiple epochs of the samedata.The simplest way to iterate over a dataset in multiple epochs is to use the`Dataset.repeat()` transformation. First, create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeatthe input indefinitely.The `Dataset.repeat` transformation concatenates itsarguments without signaling the end of one epoch and the beginning of the nextepoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. 
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input dataThe `Dataset.shuffle()` transformation maintains a fixed-sizebuffer and chooses the next element uniformly at random from that buffer.Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters.`Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. 
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing dataThe `Dataset.map(f)` transformation produces a new dataset by applying a givenfunction `f` to each element of the input dataset. It is based on the[`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) functionthat is commonly applied to lists (and other structures) in functionalprogramming languages. The function `f` takes the `tf.Tensor` objects thatrepresent a single element in the input, and returns the `tf.Tensor` objectsthat will represent a single element in the new dataset. Its implementation usesstandard TensorFlow operations to transform one element into another.This section covers common examples of how to use `Dataset.map()`. 
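Before the image-specific examples below, the basic shape of a `Dataset.map()` call can be sketched with toy data (an illustrative snippet, not from the original guide):

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(5)

# The mapped function receives tf.Tensor elements and returns tf.Tensor
# elements; here each integer is squared with a standard TensorFlow op.
squared = dataset.map(lambda x: x * x)

print([int(x.numpy()) for x in squared])  # [0, 1, 4, 9, 16]
```
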
Decoding image data and resizing itWhen training a neural network on real-world image data, it is often necessaryto convert images of different sizes to a common size, so that they may bebatched into a fixed size.Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logicFor performance reasons, use TensorFlow operations forpreprocessing your data whenever possible. However, it is sometimes useful tocall external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. 
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact.Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice. 
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 5 steps batch[-5:]) # take the remainder predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes, it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`. 
This is more applicable when you have a separate `data.Dataset` for each class.Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is thatit needs a separate `tf.data.Dataset` per class. Using `Dataset.filter`works, but results in all the data being loaded twice.The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.`data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.The elements of `creditcard_ds` are already `(features, label)` pairs. 
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown Evaluation works the same way; pass the dataset directly: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimatorTo use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simplyreturn the `Dataset` from the `input_fn` and the framework will take care of consuming its elementsfor you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
###Code from __future__ import absolute_import, division, print_function, unicode_literals try: %tensorflow_version 2.x except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanicsTo create an input pipeline, you must start with a data *source*. For example,to construct a `Dataset` from data in memory, you can use`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.Alternatively, if your input data is stored in a file in the recommendedTFRecord format, you can use `tf.data.TFRecordDataset()`.Once you have a `Dataset` object, you can *transform* it into a new `Dataset` bychaining method calls on the `tf.data.Dataset` object. For example, you canapply per-element transformations such as `Dataset.map()`, and multi-elementtransformations such as `Dataset.batch()`. See the documentation for`tf.data.Dataset` for a complete list of transformations.The `Dataset` object is a Python iterable. This makes it possible to consume itselements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming itselements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce`transformation, which reduces all elements to produce a single result. Thefollowing example illustrates how to use the `reduce` transformation to computethe sum of a dataset of integers. 
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structureA dataset contains elements that each have the same (nested) structure and theindividual components of the structure can be of any type representable by`tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`,`TensorArray`, or `Dataset`.The `Dataset.element_spec` property allows you to inspect the type of eachelement component. The property returns a *nested structure* of `tf.TypeSpec`objects, matching the structure of the element, which may be a single component,a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. 
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([], [None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord dataSee [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example.The `tf.data` API supports a variety of file formats so that you can processlarge datasets that do not fit in memory. For example, the TFRecord file formatis a simple record-oriented binary format that many TensorFlow applications usefor training data. The `tf.data.TFRecordDataset` class enables you tostream over the contents of one or more TFRecord files as part of an inputpipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files. 
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. Therefore if you havetwo sets of files for training and validation purposes, you can create a factorymethod that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text dataSee [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.Many datasets are distributed as one or more text files. The`tf.data.TextLineDataset` provides an easy way to extract lines from one or moretext files. Given one or more filenames, a `TextLineDataset` will produce onestring-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. 
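The round-robin order that `interleave` produces (with the default `block_length=1`) can be sketched in plain Python. The `interleave` generator here is a hypothetical illustration of that ordering, not the tf.data API, and is simplified to handle no more than `cycle_length` sources:

```python
def interleave(sources, cycle_length):
    """Yield one element from each source in turn, dropping exhausted ones.

    A plain-Python sketch (hypothetical, simplified) of the order produced by
    Dataset.interleave with block_length=1 and at most cycle_length sources.
    """
    iterators = [iter(s) for s in sources[:cycle_length]]
    while iterators:
        still_open = []
        for it in iterators:
            try:
                yield next(it)        # take one element from this source
                still_open.append(it)
            except StopIteration:
                pass                  # this source is exhausted; drop it
        iterators = still_open

order = list(interleave([['a1', 'a2'], ['b1'], ['c1', 'c2', 'c3']],
                        cycle_length=3))
# → ['a1', 'b1', 'c1', 'a2', 'c2', 'c3']
```

With three text files and `cycle_length=3`, this is why the output alternates line 1 of each file, then line 2 of each file, and so on.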
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here we skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text.For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).The `experimental.make_csv_dataset` function is the high level interface for reading sets of csv files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. 
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer grained control. It does not support column type inference. Instead you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from two CSV files, each with # four float columns which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file,which may not be desirable, for example if the file starts with a header linethat should be ignored, or if some columns are not required in the input.These lines and fields can be removed with the `header` and `select_cols`arguments respectively. ###Code # Creates a dataset that reads all of the records from two CSV files with # headers, extracting float data from columns 2 and 4. 
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown We can read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, '/')[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batchingThe simplest form of batching stacks `n` consecutive elements of a dataset intoa single element. The `Dataset.batch()` transformation does exactly this, withthe same constraints as the `tf.stack()` operator, applied to each componentof the elements: i.e. 
for each component *i*, all elements must have a tensorof the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with paddingThe above recipe works for tensors that all have the same size. However, manymodels (e.g. sequence models) work with input data that can have varying size(e.g. sequences of different lengths). To handle this case, the`Dataset.padded_batch()` transformation enables you to batch tensors ofdifferent shape by specifying one or more dimensions in which they may bepadded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch()` transformation allows you to set different paddingfor each dimension of each component, and it may be variable-length (signifiedby `None` in the example above) or constant-length. It is also possible tooverride the padding value, which defaults to 0.<!--TODO(mrry): Add this section. 
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochsThe `tf.data` API offers two main ways to process multiple epochs of the samedata.The simplest way to iterate over a dataset in multiple epochs is to use the`Dataset.repeat()` transformation. First we create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeatthe input indefinitely.The `Dataset.repeat` transformation concatenates itsarguments without signaling the end of one epoch and the beginning of the nextepoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. 
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input dataThe `Dataset.shuffle()` transformation maintains a fixed-sizebuffer and chooses the next element uniformly at random from that buffer.Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters.`Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. 
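The buffer mechanism itself can be sketched in plain Python. `buffered_shuffle` is a hypothetical generator illustrating the idea, not the tf.data implementation: each output is drawn uniformly from the buffer, and the freed slot is refilled from the input, so the i-th output can only come from the first `i + buffer_size` inputs.

```python
import random
from itertools import islice

def buffered_shuffle(stream, buffer_size, seed=0):
    """Plain-Python sketch (hypothetical) of a fixed-size shuffle buffer."""
    rng = random.Random(seed)
    it = iter(stream)
    buf = list(islice(it, buffer_size))   # fill the buffer
    for item in it:
        i = rng.randrange(len(buf))
        yield buf[i]                      # emit a random buffered element
        buf[i] = item                     # refill the freed slot
    rng.shuffle(buf)                      # drain whatever remains
    yield from buf

out = list(buffered_shuffle(range(1000), buffer_size=100))
```

This is also why, in the example above, a buffer of 100 and a batch size of 20 means the first batch contains no element with an index over 120.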
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing dataThe `Dataset.map(f)` transformation produces a new dataset by applying a givenfunction `f` to each element of the input dataset. It is based on the[`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) functionthat is commonly applied to lists (and other structures) in functionalprogramming languages. The function `f` takes the `tf.Tensor` objects thatrepresent a single element in the input, and returns the `tf.Tensor` objectsthat will represent a single element in the new dataset. Its implementation usesstandard TensorFlow operations to transform one element into another.This section covers common examples of how to use `Dataset.map()`. 
Decoding image data and resizing itWhen training a neural network on real-world image data, it is often necessaryto convert images of different sizes to a common size, so that they may bebatched into a fixed size.Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logicFor performance reasons, we encourage you to use TensorFlow operations forpreprocessing your data whenever possible. However, it is sometimes useful tocall external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. 
Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`.To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`, you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messagesMany input pipelines extract `tf.train.Example` protocol buffer messages from aTFRecord format. Each `tf.train.Example` record contains one or more "features",and the input pipeline typically converts these features into tensors. 
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact.Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice. 
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 5 steps batch[-5:]) # take the remainder predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
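Before using `Dataset.window` itself, the size/shift/stride bookkeeping can be previewed with a plain-Python sketch. `make_windows` is a hypothetical helper, not part of tf.data: each window starts `shift` elements after the previous one, takes every `stride`-th element `size` times, and short trailing windows are dropped (as `drop_remainder=True` would do).

```python
def make_windows(seq, size, shift=1, stride=1):
    """Plain-Python sketch (hypothetical) of window size/shift/stride."""
    windows, start = [], 0
    # Keep only windows whose last index fits inside the sequence.
    while start + (size - 1) * stride < len(seq):
        windows.append([seq[start + k * stride] for k in range(size)])
        start += shift
    return windows

windows = make_windows(list(range(40)), size=10, shift=5, stride=3)
# → three windows; the first is [0, 3, 6, 9, 12, 15, 18, 21, 24, 27]
```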
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func=count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use `filter` to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required in when calling `Model.predict`. 
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimatorTo use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simplyreturn the `Dataset` from the `input_fn` and the framework will take care of consuming its elementsfor you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
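As a rough plain-Python analogy (not the tf.data machinery itself, and with made-up function names): a *source* is anything that yields elements, and a *transformation* turns one element stream into another, so pipelines compose by chaining:

```python
def source():
    # A data "source": yields raw elements (here, just integers).
    yield from range(10)

def map_transform(stream, fn):
    # A per-element "transformation", loosely analogous to Dataset.map().
    return (fn(x) for x in stream)

def batch_transform(stream, n):
    # A multi-element "transformation", loosely analogous to
    # Dataset.batch(n, drop_remainder=True): partial batches are dropped.
    batch = []
    for x in stream:
        batch.append(x)
        if len(batch) == n:
            yield batch
            batch = []

pipeline = batch_transform(map_transform(source(), lambda x: x * x), 4)
print(list(pipeline))  # [[0, 1, 4, 9], [16, 25, 36, 49]] (remainder dropped)
```

Like tf.data pipelines, nothing here runs until the final stream is consumed; the generators are evaluated lazily.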
###Code !pip install tf-nightly import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanicsTo create an input pipeline, you must start with a data *source*. For example,to construct a `Dataset` from data in memory, you can use`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.Alternatively, if your input data is stored in a file in the recommendedTFRecord format, you can use `tf.data.TFRecordDataset()`.Once you have a `Dataset` object, you can *transform* it into a new `Dataset` bychaining method calls on the `tf.data.Dataset` object. For example, you canapply per-element transformations such as `Dataset.map()`, and multi-elementtransformations such as `Dataset.batch()`. See the documentation for`tf.data.Dataset` for a complete list of transformations.The `Dataset` object is a Python iterable. This makes it possible to consume itselements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming itselements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce`transformation, which reduces all elements to produce a single result. Thefollowing example illustrates how to use the `reduce` transformation to computethe sum of a dataset of integers. 
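As a plain-Python analogue of the reduction the next cell performs, `functools.reduce` applies the same pattern to an ordinary list: an initial state plus a two-argument reducer:

```python
import functools

# The same element values used for the dataset above.
dataset = [8, 3, 0, 8, 2, 1]

# State starts at 0; each element is folded into the running sum.
total = functools.reduce(lambda state, value: state + value, dataset, 0)
print(total)  # 22
```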
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structureA dataset contains elements that each have the same (nested) structure and theindividual components of the structure can be of any type representable by`tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`,`tf.TensorArray`, or `tf.data.Dataset`.The `Dataset.element_spec` property allows you to inspect the type of eachelement component. The property returns a *nested structure* of `tf.TypeSpec`objects, matching the structure of the element, which may be a single component,a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. 
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b, c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i < stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=(), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`; the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([], [None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown Note: As of **TensorFlow 2.2** the `padded_shapes` argument is no longer required. The default behavior is to pad all axes to the longest in the batch. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord dataSee [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example.The `tf.data` API supports a variety of file formats so that you can processlarge datasets that do not fit in memory. For example, the TFRecord file formatis a simple record-oriented binary format that many TensorFlow applications usefor training data. The `tf.data.TFRecordDataset` class enables you tostream over the contents of one or more TFRecord files as part of an inputpipeline. Here is an example using the test file from the French Street Name Signs (FSNS). 
###Code # Download the FSNS test file. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore, if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames=[fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files, use `Dataset.interleave`. This makes it easier to shuffle files together.
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which maynot be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or`Dataset.filter()` transformations. Here, you skip the first line, then filter tofind only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples. 
The CSV file format is a popular format for storing tabular data in plain text.For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).The `experimental.make_csv_dataset` function is the high level interface for reading sets of csv files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. 
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from a CSV file with # four columns, which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads all of the records from a CSV file, # extracting data from columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batchingThe simplest form of batching stacks `n` consecutive elements of a dataset intoa single element. The `Dataset.batch()` transformation does exactly this, withthe same constraints as the `tf.stack()` operator, applied to each componentof the elements: i.e. 
for each component *i*, all elements must have a tensorof the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with paddingThe above recipe works for tensors that all have the same size. However, manymodels (e.g. sequence models) work with input data that can have varying size(e.g. sequences of different lengths). To handle this case, the`Dataset.padded_batch` transformation enables you to batch tensors ofdifferent shape by specifying one or more dimensions in which they may bepadded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different paddingfor each dimension of each component, and it may be variable-length (signifiedby `None` in the example above) or constant-length. It is also possible tooverride the padding value, which defaults to 0.<!--TODO(mrry): Add this section. 
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochsThe `tf.data` API offers two main ways to process multiple epochs of the samedata.The simplest way to iterate over a dataset in multiple epochs is to use the`Dataset.repeat()` transformation. First, create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeatthe input indefinitely.The `Dataset.repeat` transformation concatenates itsarguments without signaling the end of one epoch and the beginning of the nextepoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. 
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input dataThe `Dataset.shuffle()` transformation maintains a fixed-sizebuffer and chooses the next element uniformly at random from that buffer.Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters.`Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. 
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
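As a plain-Python analogue of this per-element mapping (only a sketch over lists; `Dataset.map` additionally traces the function into a TensorFlow graph), note in particular that a structured element's components arrive as separate function arguments, just as in the earlier `dataset3` example:

```python
def double_features(features, label):
    # With structured (features, label) elements, the components become
    # separate arguments, mirroring how Dataset.map calls its function.
    return features * 2, label

# Made-up (features, label) pairs for illustration.
pairs = [(1, 'a'), (2, 'b'), (3, 'c')]
mapped = [double_features(*elem) for elem in pairs]
print(mapped)  # [(2, 'a'), (4, 'b'), (6, 'c')]
```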
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
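As background for the next section: TFRecord is a record-oriented binary container, meaning each serialized `tf.train.Example` is stored with framing metadata so records can be read back sequentially. The sketch below illustrates the general idea with simple length-prefixed framing; it is *not* the actual TFRecord wire format, which additionally stores CRC32C checksums for the length and the data.

```python
import struct
import io

def write_records(stream, records):
    # Each record: an 8-byte little-endian length prefix, then the payload.
    # (Real TFRecord files also wrap each field with CRC32C checksums.)
    for payload in records:
        stream.write(struct.pack('<Q', len(payload)))
        stream.write(payload)

def read_records(stream):
    # Stream records back out one at a time until the stream is exhausted.
    while True:
        header = stream.read(8)
        if not header:
            return
        (length,) = struct.unpack('<Q', header)
        yield stream.read(length)

buf = io.BytesIO()
write_records(buf, [b'first record', b'second record'])
buf.seek(0)
records = list(read_records(buf))
print(records)
```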
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames=[fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset, you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: all except the last 5 steps batch[-5:]) # Labels: the last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
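To make the `window_size`, `shift`, and `stride` parameters of `Dataset.window` concrete, here is a rough pure-Python model of which input indices land in each window. It assumes incomplete trailing windows are dropped, which in `tf.data` actually happens in a separate `batch(..., drop_remainder=True)` step rather than in `window` itself; the function name is made up for this sketch.

```python
def window_indices(n, window_size=5, shift=1, stride=1, drop_remainder=True):
    """Model Dataset.window semantics over range(n): each window starts
    `shift` elements after the previous one and takes every `stride`-th
    element until it holds `window_size` elements."""
    windows = []
    start = 0
    while start < n:
        window = list(range(start, min(n, start + window_size * stride), stride))
        if len(window) == window_size or not drop_remainder:
            windows.append(window)
        start += shift
    return windows

# window_size=5, shift=3, stride=2 over 20 elements:
print(window_indices(20, window_size=5, shift=3, stride=2))
```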
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
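The balancing idea — keep one stream per class and draw from them with chosen weights — can be sketched without TensorFlow. This is an analogy for intuition only: the stream names and helper function below are made up for this sketch, and the real `tf.data` API for this is shown in the following cells.

```python
import itertools
import random

def sample_from_streams(streams, weights, rng, n):
    """Draw n items, picking a source stream with the given weights --
    a rough analogue of balancing with per-class datasets."""
    out = []
    for _ in range(n):
        stream = rng.choices(streams, weights=weights)[0]
        out.append(next(stream))
    return out

rng = random.Random(0)
negatives = itertools.cycle(['neg'])   # stand-in for the majority-class stream
positives = itertools.cycle(['pos'])   # stand-in for the minority-class stream

balanced = sample_from_streams([negatives, positives], [0.5, 0.5], rng, 1000)
print(balanced.count('pos') / len(balanced))  # close to 0.5
```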
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func=count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `tf.data.Dataset` for each class. Here, just use `Dataset.filter` to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note, however, that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state, such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimator To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed
under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
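As a loose mental model only — with none of `tf.data`'s graph execution, parallelism, or prefetching — a source plus chained transformations resembles composing ordinary Python generators. The helper names below are invented for this sketch:

```python
def from_list(data):
    # A "source": produces elements from data already in memory.
    yield from data

def map_transform(ds, f):
    # A "transformation": wraps one pipeline stage in another.
    return (f(x) for x in ds)

def batch_transform(ds, n):
    # Group consecutive elements; drop an incomplete final group,
    # like batch(..., drop_remainder=True).
    group = []
    for x in ds:
        group.append(x)
        if len(group) == n:
            yield group
            group = []

# Compose the stages: source -> map -> batch. Nothing runs until iterated.
pipeline = batch_transform(map_transform(from_list(range(10)), lambda x: x * x), 4)
result = list(pipeline)
print(result)
```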
###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable. This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers.
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure.
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
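One subtlety behind the generator-based API: a generator object is exhausted after a single pass, so a restartable pipeline must be built from a *callable* that produces a fresh generator each time it is invoked. A quick framework-free illustration (the `count_to` name is invented for this sketch):

```python
def count_to(stop):
    i = 0
    while i < stop:
        yield i
        i += 1

gen = count_to(3)
first_pass = list(gen)
second_pass = list(gen)   # the same generator object is already exhausted

# Calling the function again produces a fresh generator each time.
fresh_passes = [list(count_to(3)) for _ in range(2)]

print(first_pass, second_pass, fresh_passes)
```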
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=(), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS) dataset. ###Code # Download the FSNS test file.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames=[fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together.
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb) and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, which has # four integer columns that may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads records from the CSV file, extracting # only the second and fourth columns.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY; see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0.<!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large buffer sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
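The fixed-size shuffle buffer described above can be sketched in plain Python (a hypothetical eager stand-in, not TensorFlow's actual implementation): fill a buffer from the input, then repeatedly emit a random buffer slot and refill it from the remaining input.

```python
import random

def buffered_shuffle(iterable, buffer_size, seed=None):
    """Sketch of a fixed-size shuffle buffer: each output element is
    drawn uniformly at random from the current buffer contents."""
    rng = random.Random(seed)
    it = iter(iterable)
    # Fill the buffer with up to buffer_size elements.
    buffer = []
    for x in it:
        buffer.append(x)
        if len(buffer) == buffer_size:
            break
    # Emit a random slot, then refill it from the remaining input.
    while buffer:
        i = rng.randrange(len(buffer))
        out = buffer[i]
        try:
            buffer[i] = next(it)
        except StopIteration:
            buffer.pop(i)  # input exhausted; let the buffer drain
        yield out

shuffled = list(buffered_shuffle(range(100), buffer_size=10, seed=0))
```

This also makes the locality limit of buffered shuffling visible: the element at output position `i` can only come from the first `i + buffer_size` input elements, which is why, with a buffer of 100 and a batch size of 20, the first batch contains no index over 120.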
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
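As a minimal plain-Python sketch of the `map` semantics just described (an eager stand-in for illustration; the real `Dataset.map` uses TensorFlow operations, not Python iteration):

```python
def map_dataset(elements, f):
    """Lazily apply f to each element, like Dataset.map over an input dataset."""
    for element in elements:
        yield f(element)

# Each input element is transformed independently into one output element.
squares = list(map_dataset([1, 2, 3, 4], lambda x: x * x))  # [1, 4, 9, 16]
```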
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example, see [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 5 steps batch[-5:]) # take the remainder predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
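Before the TensorFlow code, the `size`/`shift`/`stride` semantics can be sketched in plain Python (`make_windows` is a hypothetical helper for illustration, roughly equivalent to `window(...)` followed by flat-mapping each window through `batch(size, drop_remainder=True)`):

```python
def make_windows(seq, size, shift=1, stride=1):
    """Yield fixed-size windows from seq: start a new window every
    `shift` elements, taking every `stride`-th element within it."""
    windows = []
    start = 0
    while True:
        window = seq[start:start + size * stride:stride]
        if len(window) < size:
            break  # drop the incomplete remainder
        windows.append(window)
        start += shift
    return windows

windows = make_windows(list(range(20)), size=5, shift=3, stride=2)
# First window: [0, 2, 4, 6, 8]
```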
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together, you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets`, pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts, it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note however that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation, you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimator To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed
under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: The input pipeline API View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
###Code from __future__ import absolute_import, division, print_function, unicode_literals try: %tensorflow_version 2.x except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable. This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers.
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure.
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fit in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0,10) yield i, np.random.normal(size=(size,)) i += 1 for i,series in gen_series(): print(i,":",str(series)) if i>5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes = ((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
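For example, iterating over a few elements with `Dataset.take` (shown here as a self-contained sketch that redefines `gen_series`, so the snippet runs on its own):

```python
import numpy as np
import tensorflow as tf

def gen_series():
    # Same element structure as above: a counter plus a variable-length vector.
    i = 0
    while True:
        size = np.random.randint(0, 10)
        yield i, np.random.normal(size=(size,))
        i += 1

ds_series = tf.data.Dataset.from_generator(
    gen_series,
    output_types=(tf.int32, tf.float32),
    output_shapes=((), (None,)))

# Standard Dataset methods such as take() work as usual.
for i, series in ds_series.take(3):
    print(i.numpy(), series.numpy().shape)
```

Because `from_generator` holds a callable rather than an iterator, each fresh iteration over `ds_series` restarts the generator from the beginning.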
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([],[None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator`: ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes = ([32,256,256,3],[32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from the file.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. Therefore if you havetwo sets of files for training and validation purposes, you can create a factorymethod that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file ]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text dataSee [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.Many datasets are distributed as one or more text files. The`tf.data.TextLineDataset` provides an easy way to extract lines from one or moretext files. Given one or more filenames, a `TextLineDataset` will produce onestring-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url+file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`, this makes it easier to shuffle files together. 
Here are the first, second, and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i%3==0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` and `Dataset.filter()` transformations. To apply these transformations to each file separately, we use `Dataset.flat_map()` to create a nested `Dataset` for each file. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class','fare','survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class, which provides finer-grained control. It does not support column type inference; instead, you specify the type of each column, which determines the items yielded by the dataset. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, each with # four float columns which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1,3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: These images are licensed CC-BY; see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown To load the data from the files, use the `tf.io.read_file` function: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Convert the file paths to (image, label) pairs: ###Code def process_path(file_path): parts = tf.strings.split(file_path, '/') return tf.io.read_file(file_path), parts[-2] labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e. for each component *i*, all elements must have a tensor of the exact same shape.
###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) it = iter(batched_dataset) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch()` transformation enables you to batch tensors of different shapes by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch()` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0.<!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. For example, to create a dataset that repeats its input for 3 epochs: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation passes the input dataset through a random shuffle queue, `tf.queues.RandomShuffleQueue`.
It maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle,
label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`. Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset.
###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, we encourage you to use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30,30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(raw_example): example = tf.io.parse_example( raw_example[tf.newaxis], {'image/encoded':tf.io.FixedLenFeature(shape=(),dtype=tf.string), 'image/text':tf.io.FixedLenFeature(shape=(), dtype=tf.string)}) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example, see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset, you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: all except the last 5 steps batch[-5:]) # Labels: the last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together, you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip','.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0':0, 'class_1':0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = creditcard_ds.unbatch().filter(lambda features, label: label==0).repeat() positive_ds = creditcard_ds.unbatch().filter(lambda features, label: label==1).repeat() for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets`, pass the datasets and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets([negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels:
###Code
def class_func(features, label):
  return label
###Output
_____no_output_____
###Markdown
The resampler also needs a target distribution, and optionally an initial distribution estimate:
###Code
resampler = tf.data.experimental.rejection_resample(
    class_func,
    target_dist=[0.5, 0.5],
    initial_dist=fractions)
###Output
_____no_output_____
###Markdown
The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler:
###Code
resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10)
###Output
_____no_output_____
###Markdown
The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels:
###Code
balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label)
###Output
_____no_output_____
###Markdown
Now the dataset produces examples of each class with 50/50 probability:
###Code
for features, labels in balanced_ds.take(10):
  print(labels.numpy())
###Output
_____no_output_____
###Markdown
Using high-level APIs

tf.keras

The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup:
###Code
train, test = tf.keras.datasets.fashion_mnist.load_data()

images, labels = train
images = images/255.0
labels = labels.astype(np.int32)

fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)

model = tf.keras.Sequential([
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`:
###Code
model.fit(fmnist_train_ds, epochs=2)
###Output
_____no_output_____
###Markdown
If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument:
###Code
model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
For evaluation you can pass the number of evaluation steps:
###Code
loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
For long datasets, set the number of steps to evaluate:
###Code
loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
The labels are not required when calling `Model.predict`.
###Code
predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32)
result = model.predict(predict_ds, steps=10)
print(result.shape)
###Output
_____no_output_____
###Markdown
But the labels are ignored if you do pass a dataset containing them:
###Code
result = model.predict(fmnist_train_ds, steps=10)
print(result.shape)
###Output
_____no_output_____
###Markdown
tf.estimator

To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example:
###Code
import tensorflow_datasets as tfds

def train_input_fn():
  titanic = tf.data.experimental.make_csv_dataset(
      titanic_file, batch_size=32,
      label_name="survived")
  titanic_batches = (
      titanic.cache().repeat().shuffle(500)
      .prefetch(tf.data.experimental.AUTOTUNE))
  return titanic_batches

embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32)
cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third'])
age = tf.feature_column.numeric_column('age')

import tempfile
model_dir = tempfile.mkdtemp()
model = tf.estimator.LinearClassifier(
    model_dir=model_dir,
    feature_columns=[embark, cls, age],
    n_classes=2
)

model = model.train(input_fn=train_input_fn, steps=100)

result = model.evaluate(train_input_fn, steps=10)

for key, value in result.items():
  print(key, ":", value)

for pred in model.predict(train_input_fn):
  for key, value in pred.items():
    print(key, ":", value)
  break
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.

Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
tf.data: Build TensorFlow input pipelines

View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook

The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations.

The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label.

There are two distinct ways to create a dataset:

* A data **source** constructs a `Dataset` from data stored in memory or in one or more files.
* A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
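The two ideas above can be sketched in a few lines (the values here are illustrative, not from the guide): a source builds the `Dataset`, and each transformation call returns a new `Dataset`:

```python
import tensorflow as tf

# Source: build a Dataset from an in-memory list.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])

# Transformations: each call returns a new Dataset object,
# so they can be chained.
dataset = dataset.map(lambda x: x * 2).batch(3)

for batch in dataset:
  print(batch.numpy())  # two batches: [2 4 6] and [8 10 12]
```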
###Code
from __future__ import absolute_import, division, print_function, unicode_literals

try:
  !pip install tf-nightly
except Exception:
  pass
import tensorflow as tf

import pathlib
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

np.set_printoptions(precision=4)
###Output
_____no_output_____
###Markdown
Basic mechanics

To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`.

Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations.

The `Dataset` object is a Python iterable. This makes it possible to consume its elements using a for loop:
###Code
dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])
dataset

for elem in dataset:
  print(elem.numpy())
###Output
_____no_output_____
###Markdown
Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`:
###Code
it = iter(dataset)

print(next(it).numpy())
###Output
_____no_output_____
###Markdown
Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers.
###Code
print(dataset.reduce(0, lambda state, value: state + value).numpy())
###Output
_____no_output_____
###Markdown
Dataset structure

A dataset contains elements that each have the same (nested) structure and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`.

The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example:
###Code
dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10]))

dataset1.element_spec

dataset2 = tf.data.Dataset.from_tensor_slices(
   (tf.random.uniform([4]),
    tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))

dataset2.element_spec

dataset3 = tf.data.Dataset.zip((dataset1, dataset2))

dataset3.element_spec

# Dataset containing a sparse tensor.
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4]))

dataset4.element_spec

# Use value_type to see the type of value represented by the element spec
dataset4.element_spec.value_type
###Output
_____no_output_____
###Markdown
The `Dataset` transformations support datasets of any structure.
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function:
###Code
dataset1 = tf.data.Dataset.from_tensor_slices(
    tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32))

dataset1

for z in dataset1:
  print(z.numpy())

dataset2 = tf.data.Dataset.from_tensor_slices(
   (tf.random.uniform([4]),
    tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))

dataset2

dataset3 = tf.data.Dataset.zip((dataset1, dataset2))

dataset3

for a, (b, c) in dataset3:
  print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c))
###Output
_____no_output_____
###Markdown
Reading input data

Consuming NumPy arrays

See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples.

If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`.
###Code
train, test = tf.keras.datasets.fashion_mnist.load_data()

images, labels = train
images = images/255

dataset = tf.data.Dataset.from_tensor_slices((images, labels))
dataset
###Output
_____no_output_____
###Markdown
Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer.

Consuming Python generators

Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator.

Caution: While this is a convenient approach it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code
def count(stop):
  i = 0
  while i < stop:
    yield i
    i += 1

for n in count(5):
  print(n)
###Output
_____no_output_____
###Markdown
The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`.

The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments.

The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`.
###Code
ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=())

for count_batch in ds_counter.repeat().batch(10).take(10):
  print(count_batch.numpy())
###Output
_____no_output_____
###Markdown
The `output_shapes` argument is not *required* but is highly recommended as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`.

It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods.

Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length.
###Code
def gen_series():
  i = 0
  while True:
    size = np.random.randint(0, 10)
    yield i, np.random.normal(size=(size,))
    i += 1

for i, series in gen_series():
  print(i, ":", str(series))
  if i > 5:
    break
###Output
_____no_output_____
###Markdown
The first output is an `int32`, the second is a `float32`.

The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`
###Code
ds_series = tf.data.Dataset.from_generator(
    gen_series,
    output_types=(tf.int32, tf.float32),
    output_shapes=((), (None,)))

ds_series
###Output
_____no_output_____
###Markdown
Now it can be used like a regular `tf.data.Dataset`.
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`.
###Code
ds_series_batch = ds_series.shuffle(20).padded_batch(10)

ids, sequence_batch = next(iter(ds_series_batch))
print(ids.numpy())
print()
print(sequence_batch.numpy())
###Output
_____no_output_____
###Markdown
For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.

First download the data:
###Code
flowers = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)
###Output
_____no_output_____
###Markdown
Create the `image.ImageDataGenerator`
###Code
img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20)

images, labels = next(img_gen.flow_from_directory(flowers))

print(images.dtype, images.shape)
print(labels.dtype, labels.shape)

ds = tf.data.Dataset.from_generator(
    img_gen.flow_from_directory, args=[flowers],
    output_types=(tf.float32, tf.float32),
    output_shapes=([32,256,256,3], [32,5])
)

ds
###Output
_____no_output_____
###Markdown
Consuming TFRecord data

See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example.

The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline.

Here is an example using the test file from the French Street Name Signs (FSNS).
###Code
# Creates a dataset that reads all of the examples from two files.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001")
###Output
_____no_output_____
###Markdown
The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument:
###Code
dataset = tf.data.TFRecordDataset(filenames=[fsns_test_file])
dataset
###Output
_____no_output_____
###Markdown
Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected:
###Code
raw_example = next(iter(dataset))
parsed = tf.train.Example.FromString(raw_example.numpy())

parsed.features.feature['image/text']
###Output
_____no_output_____
###Markdown
Consuming text data

See [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.

Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files.
###Code
directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
file_names = ['cowper.txt', 'derby.txt', 'butler.txt']

file_paths = [
    tf.keras.utils.get_file(file_name, directory_url + file_name)
    for file_name in file_names
]

dataset = tf.data.TextLineDataset(file_paths)
###Output
_____no_output_____
###Markdown
Here are the first few lines of the first file:
###Code
for line in dataset.take(5):
  print(line.numpy())
###Output
_____no_output_____
###Markdown
To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together.
Here are the first, second and third lines from each translation:
###Code
files_ds = tf.data.Dataset.from_tensor_slices(file_paths)
lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3)

for i, line in enumerate(lines_ds.take(9)):
  if i % 3 == 0:
    print()
  print(line.numpy())
###Output
_____no_output_____
###Markdown
By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors.
###Code
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic_lines = tf.data.TextLineDataset(titanic_file)

for line in titanic_lines.take(10):
  print(line.numpy())

def survived(line):
  return tf.not_equal(tf.strings.substr(line, 0, 1), "0")

survivors = titanic_lines.skip(1).filter(survived)

for line in survivors.take(10):
  print(line.numpy())
###Output
_____no_output_____
###Markdown
Consuming CSV data

See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text.

For example:
###Code
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")

df = pd.read_csv(titanic_file, index_col=None)
df.head()
###Output
_____no_output_____
###Markdown
If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported:
###Code
titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df))

for feature_batch in titanic_slices.take(1):
  for key, value in feature_batch.items():
    print("  {!r:20s}: {}".format(key, value))
###Output
_____no_output_____
###Markdown
A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).

The `experimental.make_csv_dataset` function is the high level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple.
###Code
titanic_batches = tf.data.experimental.make_csv_dataset(
    titanic_file, batch_size=4,
    label_name="survived")

for feature_batch, label_batch in titanic_batches.take(1):
  print("'survived': {}".format(label_batch))
  print("features:")
  for key, value in feature_batch.items():
    print("  {!r:20s}: {}".format(key, value))
###Output
_____no_output_____
###Markdown
You can use the `select_columns` argument if you only need a subset of columns.
###Code
titanic_batches = tf.data.experimental.make_csv_dataset(
    titanic_file, batch_size=4,
    label_name="survived", select_columns=['class', 'fare', 'survived'])

for feature_batch, label_batch in titanic_batches.take(1):
  print("'survived': {}".format(label_batch))
  for key, value in feature_batch.items():
    print("  {!r:20s}: {}".format(key, value))
###Output
_____no_output_____
###Markdown
There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead you must specify the type of each column.
###Code
titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string]
dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True)

for line in dataset.take(10):
  print([item.numpy() for item in line])
###Output
_____no_output_____
###Markdown
If some columns are empty, this low-level interface allows you to provide default values instead of column types.
###Code
%%writefile missing.csv
1,2,3,4
,2,3,4
1,,3,4
1,2,,4
1,2,3,
,,,

# Creates a dataset that reads all of the records from two CSV files, each with
# four float columns which may have missing values.
record_defaults = [999,999,999,999]
dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults)
dataset = dataset.map(lambda *items: tf.stack(items))
dataset

for line in dataset:
  print(line.numpy())
###Output
_____no_output_____
###Markdown
By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively.
###Code
# Creates a dataset that reads all of the records from two CSV files with
# headers, extracting float data from columns 2 and 4.
record_defaults = [999, 999]  # Only provide defaults for the selected columns
dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3])
dataset = dataset.map(lambda *items: tf.stack(items))
dataset

for line in dataset:
  print(line.numpy())
###Output
_____no_output_____
###Markdown
Consuming sets of files

There are many datasets distributed as a set of files, where each file is an example.
###Code
flowers_root = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)
flowers_root = pathlib.Path(flowers_root)
###Output
_____no_output_____
###Markdown
Note: these images are licensed CC-BY, see LICENSE.txt for details.

The root directory contains a directory for each class:
###Code
for item in flowers_root.glob("*"):
  print(item.name)
###Output
_____no_output_____
###Markdown
The files in each class directory are examples:
###Code
list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))

for f in list_ds.take(5):
  print(f.numpy())
###Output
_____no_output_____
###Markdown
Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs:
###Code
def process_path(file_path):
  label = tf.strings.split(file_path, '/')[-2]
  return tf.io.read_file(file_path), label

labeled_ds = list_ds.map(process_path)

for image_raw, label_text in labeled_ds.take(1):
  print(repr(image_raw.numpy()[:100]))
  print()
  print(label_text.numpy())
###Output
_____no_output_____
###Markdown
<!--TODO(mrry): Add this section. Handling text data with unusual sizes-->

Batching dataset elements

Simple batching

The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape.
###Code
inc_dataset = tf.data.Dataset.range(100)
dec_dataset = tf.data.Dataset.range(0, -100, -1)
dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset))
batched_dataset = dataset.batch(4)

for batch in batched_dataset.take(4):
  print([arr.numpy() for arr in batch])
###Output
_____no_output_____
###Markdown
While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape:
###Code
batched_dataset
###Output
_____no_output_____
###Markdown
Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation:
###Code
batched_dataset = dataset.batch(7, drop_remainder=True)
batched_dataset
###Output
_____no_output_____
###Markdown
Batching tensors with padding

The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded.
###Code
dataset = tf.data.Dataset.range(100)
dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x))
dataset = dataset.padded_batch(4, padded_shapes=(None,))

for batch in dataset.take(2):
  print(batch.numpy())
  print()
###Output
_____no_output_____
###Markdown
The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0.

<!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor-->

Training workflows

Processing multiple epochs

The `tf.data` API offers two main ways to process multiple epochs of the same data.

The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of titanic data:
###Code
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic_lines = tf.data.TextLineDataset(titanic_file)

def plot_batch_sizes(ds):
  batch_sizes = [batch.shape[0] for batch in ds]
  plt.bar(range(len(batch_sizes)), batch_sizes)
  plt.xlabel('Batch number')
  plt.ylabel('Batch size')
###Output
_____no_output_____
###Markdown
Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely.

The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries:
###Code
titanic_batches = titanic_lines.repeat(3).batch(128)
plot_batch_sizes(titanic_batches)
###Output
_____no_output_____
###Markdown
If you need clear epoch separation, put `Dataset.batch` before the repeat:
###Code
titanic_batches = titanic_lines.batch(128).repeat(3)

plot_batch_sizes(titanic_batches)
###Output
_____no_output_____
###Markdown
If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch:
###Code
epochs = 3
dataset = titanic_lines.batch(128)

for epoch in range(epochs):
  for batch in dataset:
    print(batch.shape)
  print("End of epoch: ", epoch)
###Output
_____no_output_____
###Markdown
Randomly shuffling input data

The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer.

Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem.

Add an index to the dataset so you can see the effect:
###Code
lines = tf.data.TextLineDataset(titanic_file)
counter = tf.data.experimental.Counter()

dataset = tf.data.Dataset.zip((counter, lines))
dataset = dataset.shuffle(buffer_size=100)
dataset = dataset.batch(20)
dataset
###Output
_____no_output_____
###Markdown
Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120.
###Code
n, line_batch = next(iter(dataset))
print(n.numpy())
###Output
_____no_output_____
###Markdown
As with `Dataset.batch` the order relative to `Dataset.repeat` matters.

`Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next:
###Code
dataset = tf.data.Dataset.zip((counter, lines))
shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2)

print("Here are the item ID's near the epoch boundary:\n")
for n, line_batch in shuffled.skip(60).take(5):
  print(n.numpy())

shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled]
plt.plot(shuffle_repeat, label="shuffle().repeat()")
plt.ylabel("Mean item ID")
plt.legend()
###Output
_____no_output_____
###Markdown
But a repeat before a shuffle mixes the epoch boundaries together:
###Code
dataset = tf.data.Dataset.zip((counter, lines))
shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10)

print("Here are the item ID's near the epoch boundary:\n")
for n, line_batch in shuffled.skip(55).take(15):
  print(n.numpy())

repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled]

plt.plot(shuffle_repeat, label="shuffle().repeat()")
plt.plot(repeat_shuffle, label="repeat().shuffle()")
plt.ylabel("Mean item ID")
plt.legend()
###Output
_____no_output_____
###Markdown
Preprocessing data

The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another.

This section covers common examples of how to use `Dataset.map()`.
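As a minimal, self-contained sketch of the pattern (not one of the guide's own examples), `map` applies a per-element function that takes tensors and returns tensors:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(5)

# The mapped function receives one tf.Tensor per element component
# and must return tensors; here each integer is squared.
squared = dataset.map(lambda x: x * x)

print([int(x) for x in squared])  # [0, 1, 4, 9, 16]
```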
Decoding image data and resizing it

When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size.

Rebuild the flower filenames dataset:
###Code
list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))
###Output
_____no_output_____
###Markdown
Write a function that manipulates the dataset elements.
###Code
# Reads an image from a file, decodes it into a dense tensor, and resizes it
# to a fixed shape.
def parse_image(filename):
  parts = tf.strings.split(filename, '/')
  label = parts[-2]

  image = tf.io.read_file(filename)
  image = tf.image.decode_jpeg(image)
  image = tf.image.convert_image_dtype(image, tf.float32)
  image = tf.image.resize(image, [128, 128])
  return image, label
###Output
_____no_output_____
###Markdown
Test that it works.
###Code
file_path = next(iter(list_ds))
image, label = parse_image(file_path)

def show(image, label):
  plt.figure()
  plt.imshow(image)
  plt.title(label.numpy().decode('utf-8'))
  plt.axis('off')

show(image, label)
###Output
_____no_output_____
###Markdown
Map it over the dataset.
###Code
images_ds = list_ds.map(parse_image)

for image, label in images_ds.take(2):
  show(image, label)
###Output
_____no_output_____
###Markdown
Applying arbitrary Python logic

For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation.

For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`.To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messagesMany input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example, see [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset, you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 10 steps as features batch[-5:]) # Take the last 5 steps as labels predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
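Before the `tf.data` version, the index arithmetic behind `Dataset.window(size, shift, stride)` can be sketched in plain Python: `shift` is how far the start of successive windows advances, and `stride` is the step between elements inside one window. Here `window_indices` is a made-up helper for illustration only; it keeps full windows only, matching the `drop_remainder=True` batching used in this section:

```python
# Which source indices does each full window cover?
# Window i starts at i*shift and takes every stride-th element.
def window_indices(n, size, shift=1, stride=1):
    windows = []
    start = 0
    while start + (size - 1) * stride < n:  # keep only full windows
        windows.append([start + k * stride for k in range(size)])
        start += shift
    return windows

# Mirrors window_size=10, shift=5, stride=3 over a length-40 range:
for w in window_indices(40, size=10, shift=5, stride=3):
    print(w)
```

The first window covers indices 0, 3, ..., 27 and the second starts 5 elements later, which is the same pattern the `tf.data` windowing code in this section prints.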
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods that enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class.Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is thatit needs a separate `tf.data.Dataset` per class. Using `Dataset.filter`works, but results in all the data being loaded twice.The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.`data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.The elements of `creditcard_ds` are already `(features, label)` pairs. 
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.kerasThe `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimatorTo use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simplyreturn the `Dataset` from the `input_fn` and the framework will take care of consuming its elementsfor you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple,reusable pieces. For example, the pipeline for an image model might aggregatedata from files in a distributed file system, apply random perturbations to eachimage, and merge randomly selected images into a batch for training. Thepipeline for a text model might involve extracting symbols from raw text data,converting them to embedding identifiers with a lookup table, and batchingtogether sequences of different lengths. The `tf.data` API makes it possible tohandle large amounts of data, read from different data formats, and performcomplex transformations.The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents asequence of elements, in which each element consists of one or more components.For example, in an image pipeline, an element might be a single trainingexample, with a pair of tensor components representing the image and its label.There are two distinct ways to create a dataset:* A data **source** constructs a `Dataset` from data stored in memory or in one or more files.* A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. 
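The source/transformation split can be sketched in plain Python. `MiniDataset` below is a made-up illustration, not a `tf.data` class: its constructor plays the role of a source (wrapping data in memory), while `map` and `batch` play the role of transformations that derive a new dataset from an existing one. The real `Dataset` is lazy and graph-backed rather than an eager list:

```python
# A toy, eager stand-in for tf.data.Dataset, to illustrate the split
# between a "source" (the constructor) and "transformations" (methods
# that return a new dataset). Illustration only.
class MiniDataset:
    def __init__(self, elements):          # source: from data in memory
        self._elements = list(elements)

    def map(self, f):                      # transformation: element-wise
        return MiniDataset(f(x) for x in self._elements)

    def batch(self, n):                    # transformation: group into lists
        els = self._elements
        return MiniDataset([els[i:i + n] for i in range(0, len(els), n)])

    def __iter__(self):
        return iter(self._elements)

ds = MiniDataset([8, 3, 0, 8, 2, 1]).map(lambda x: x + 1).batch(3)
print(list(ds))  # [[9, 4, 1], [9, 3, 2]]
```

The chaining style — start from a source, then call transformation methods that each return a new dataset — is exactly how the `tf.data` pipelines in the rest of this guide are built.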
###Code from __future__ import absolute_import, division, print_function, unicode_literals try: !pip install tf-nightly except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanicsTo create an input pipeline, you must start with a data *source*. For example,to construct a `Dataset` from data in memory, you can use`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.Alternatively, if your input data is stored in a file in the recommendedTFRecord format, you can use `tf.data.TFRecordDataset()`.Once you have a `Dataset` object, you can *transform* it into a new `Dataset` bychaining method calls on the `tf.data.Dataset` object. For example, you canapply per-element transformations such as `Dataset.map()`, and multi-elementtransformations such as `Dataset.batch()`. See the documentation for`tf.data.Dataset` for a complete list of transformations.The `Dataset` object is a Python iterable. This makes it possible to consume itselements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming itselements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce`transformation, which reduces all elements to produce a single result. Thefollowing example illustrates how to use the `reduce` transformation to computethe sum of a dataset of integers. 
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structureA dataset contains elements that each have the same (nested) structure and theindividual components of the structure can be of any type representable by`tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`,`TensorArray`, or `Dataset`.The `Dataset.element_spec` property allows you to inspect the type of eachelement component. The property returns a *nested structure* of `tf.TypeSpec`objects, matching the structure of the element, which may be a single component,a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. 
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arraysSee [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples.If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generatorsAnother common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`.The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments.The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`.It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods.Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`.The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
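The point above about `from_generator` taking a callable rather than an iterator is easy to see in plain Python: calling the function again produces a fresh generator for each pass over the data, while a single generator object, once exhausted, stays empty. This reuses the `count` generator defined earlier:

```python
# Same generator function as defined earlier in this section.
def count(stop):
    i = 0
    while i < stop:
        yield i
        i += 1

# Calling the function again restarts iteration from the beginning,
# which is what lets tf.data run multiple epochs over the generator:
print(list(count(3)), list(count(3)))  # [0, 1, 2] [0, 1, 2]

# A single generator object cannot be rewound once exhausted:
it = count(3)
print(list(it))  # [0, 1, 2]
print(list(it))  # []
```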
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord dataSee [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example.The `tf.data` API supports a variety of file formats so that you can processlarge datasets that do not fit in memory. For example, the TFRecord file formatis a simple record-oriented binary format that many TensorFlow applications usefor training data. The `tf.data.TFRecordDataset` class enables you tostream over the contents of one or more TFRecord files as part of an inputpipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files. 
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. Therefore if you havetwo sets of files for training and validation purposes, you can create a factorymethod that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text dataSee [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.Many datasets are distributed as one or more text files. The`tf.data.TextLineDataset` provides an easy way to extract lines from one or moretext files. Given one or more filenames, a `TextLineDataset` will produce onestring-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. 
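With the default `block_length=1`, `Dataset.interleave` with `cycle_length=3` pulls one element from each of three inputs in turn. A pure-Python sketch of that ordering follows; `round_robin` is a made-up helper for illustration, and the real transformation is lazy and can read the inputs in parallel:

```python
# Take one element from each stream per cycle, dropping exhausted streams.
def round_robin(streams):
    iterators = [iter(s) for s in streams]
    out = []
    while iterators:
        alive = []
        for it in iterators:
            try:
                out.append(next(it))
                alive.append(it)
            except StopIteration:
                pass
        iterators = alive
    return out

print(round_robin([['a1', 'a2'], ['b1', 'b2'], ['c1']]))
# ['a1', 'b1', 'c1', 'a2', 'b2']
```

Applied to the three translation files, this is why the interleaved dataset yields line 1 of each file, then line 2 of each file, and so on.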
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb) and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text.For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).The `experimental.make_csv_dataset` function is the high level interface for reading sets of csv files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. 
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class that provides finer-grained control. It does not support column type inference. Instead you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, which has # four columns that may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads all of the records from the CSV file with a # header, extracting data from columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, '/')[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batchingThe simplest form of batching stacks `n` consecutive elements of a dataset intoa single element. The `Dataset.batch()` transformation does exactly this, withthe same constraints as the `tf.stack()` operator, applied to each componentof the elements: i.e. 
for each component *i*, all elements must have a tensorof the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with paddingThe above recipe works for tensors that all have the same size. However, manymodels (e.g. sequence models) work with input data that can have varying size(e.g. sequences of different lengths). To handle this case, the`Dataset.padded_batch` transformation enables you to batch tensors ofdifferent shape by specifying one or more dimensions in which they may bepadded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different paddingfor each dimension of each component, and it may be variable-length (signifiedby `None` in the example above) or constant-length. It is also possible tooverride the padding value, which defaults to 0.<!--TODO(mrry): Add this section. 
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochsThe `tf.data` API offers two main ways to process multiple epochs of the samedata.The simplest way to iterate over a dataset in multiple epochs is to use the`Dataset.repeat()` transformation. First, create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeatthe input indefinitely.The `Dataset.repeat` transformation concatenates itsarguments without signaling the end of one epoch and the beginning of the nextepoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. 
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input dataThe `Dataset.shuffle()` transformation maintains a fixed-sizebuffer and chooses the next element uniformly at random from that buffer.Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters.`Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. 
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing dataThe `Dataset.map(f)` transformation produces a new dataset by applying a givenfunction `f` to each element of the input dataset. It is based on the[`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) functionthat is commonly applied to lists (and other structures) in functionalprogramming languages. The function `f` takes the `tf.Tensor` objects thatrepresent a single element in the input, and returns the `tf.Tensor` objectsthat will represent a single element in the new dataset. Its implementation usesstandard TensorFlow operations to transform one element into another.This section covers common examples of how to use `Dataset.map()`. 
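As a warm-up before the image-specific recipes below, here is a minimal, self-contained `Dataset.map` sketch on a toy integer dataset (the toy data is illustrative, not from the guide):

```python
import tensorflow as tf

# A toy dataset of the integers 0..4.
dataset = tf.data.Dataset.range(5)

# map applies the function to each element, producing a new dataset.
doubled = dataset.map(lambda x: x * 2)

print([int(x) for x in doubled])  # [0, 2, 4, 6, 8]
```

The same pattern scales from this one-liner to the image-parsing functions shown next.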
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`.To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`, you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messagesMany input pipelines extract `tf.train.Example` protocol buffer messages from aTFRecord format. Each `tf.train.Example` record contains one or more "features",and the input pipeline typically converts these features into tensors. 
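Before the FSNS download below, here is a self-contained sketch of the round trip: serializing one `tf.train.Example` into a TFRecord file and parsing it back into tensors (the file name `demo.tfrec` and the feature names are made up for illustration):

```python
import tensorflow as tf

# Build a tf.train.Example with one bytes feature and one int64 feature.
example = tf.train.Example(features=tf.train.Features(feature={
    'text': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'hello'])),
    'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[1])),
}))

# Serialize it into a TFRecord file.
with tf.io.TFRecordWriter('demo.tfrec') as writer:
    writer.write(example.SerializeToString())

# Stream the records back and parse each one into a dict of tensors.
dataset = tf.data.TFRecordDataset(['demo.tfrec'])
parsed = dataset.map(lambda raw: tf.io.parse_single_example(raw, {
    'text': tf.io.FixedLenFeature([], tf.string),
    'label': tf.io.FixedLenFeature([], tf.int64),
}))

for features in parsed:
    print(features['text'].numpy(), features['label'].numpy())
```

The FSNS example that follows applies the same parse step, just with real image features instead of toy ones.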
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact.Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice. 
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 5 steps batch[-5:]) # take the remainder predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
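Since the credit-card download below needs network access, the balancing idea can also be previewed on a synthetic skewed dataset; the 90/10 class split and the seed here are arbitrary choices for the sketch:

```python
import tensorflow as tf

# Synthetic imbalanced data: one stream of class 0, one of class 1.
negative = tf.data.Dataset.from_tensor_slices(tf.zeros(900, tf.int32)).repeat()
positive = tf.data.Dataset.from_tensor_slices(tf.ones(100, tf.int32)).repeat()

# Draw from each source with equal probability to balance the classes.
balanced = tf.data.experimental.sample_from_datasets(
    [negative, positive], weights=[0.5, 0.5], seed=42)

labels = [int(l) for l in balanced.take(1000)]
print(sum(labels) / len(labels))  # close to 0.5
```

The credit-card workflow below is the same construction, with `filter` producing the per-class datasets.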
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes, it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`. 
This is more applicable when you have a separate `data.Dataset` for each class.Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is thatit needs a separate `tf.data.Dataset` per class. Using `Dataset.filter`works, but results in all the data being loaded twice.The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.`data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.The elements of `creditcard_ds` are already `(features, label)` pairs. 
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown Evaluation works the same way: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimatorTo use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simplyreturn the `Dataset` from the `input_fn` and the framework will take care of consuming its elementsfor you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
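A minimal sketch of that source/transformation split, using a toy range dataset rather than real data:

```python
import tensorflow as tf

# Source: a Dataset constructed directly from data.
source = tf.data.Dataset.range(10)

# Transformations: each method call returns a new Dataset.
pipeline = (source
            .map(lambda x: x * x)           # per-element transformation
            .filter(lambda x: x % 2 == 0)   # keep only even squares
            .batch(2))                      # multi-element transformation

for batch in pipeline:
    print(batch.numpy())
```

Because each transformation returns a new `Dataset`, pipelines compose by simple method chaining, as the rest of this guide demonstrates.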
###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanicsTo create an input pipeline, you must start with a data *source*. For example,to construct a `Dataset` from data in memory, you can use`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.Alternatively, if your input data is stored in a file in the recommendedTFRecord format, you can use `tf.data.TFRecordDataset()`.Once you have a `Dataset` object, you can *transform* it into a new `Dataset` bychaining method calls on the `tf.data.Dataset` object. For example, you canapply per-element transformations such as `Dataset.map()`, and multi-elementtransformations such as `Dataset.batch()`. See the documentation for`tf.data.Dataset` for a complete list of transformations.The `Dataset` object is a Python iterable. This makes it possible to consume itselements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming itselements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce`transformation, which reduces all elements to produce a single result. Thefollowing example illustrates how to use the `reduce` transformation to computethe sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structureA dataset produces a sequence of *elements*, where each element isthe same (nested) structure of *components*. 
Individual componentsof the structure can be of any type representable by`tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`,`tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`.The Python constructs that can be used to express the (nested)structure of elements include `tuple`, `dict`, `NamedTuple`, and`OrderedDict`. In particular, `list` is not a valid construct forexpressing the structure of dataset elements. This is becauseearly tf.data users felt strongly about `list` inputs (e.g. passedto `tf.data.Dataset.from_tensors`) being automatically packed astensors and `list` outputs (e.g. return values of user-definedfunctions) being coerced into a `tuple`. As a consequence, if youwould like a `list` input to be treated as a structure, you needto convert it into `tuple` and if you would like a `list` outputto be a single component, then you need to explicitly pack itusing `tf.stack`.The `Dataset.element_spec` property allows you to inspect the typeof each element component. The property returns a *nested structure*of `tf.TypeSpec` objects, matching the structure of the element,which may be a single component, a tuple of components, or a nestedtuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. 
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the python generator to a fully functional `tf.data.Dataset`.The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments.The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended as many TensorFlow operations do not support tensors with an unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`.It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods.Here is an example generator that demonstrates both aspects, it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32` the second is a `float32`.The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)` ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. 
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( lambda: img_gen.flow_from_directory(flowers), output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds.element_spec for images, label in ds.take(1): print('images.shape: ', images.shape) print('labels.shape: ', labels.shape) ###Output _____no_output_____ ###Markdown Consuming TFRecord dataSee [Loading TFRecords](../tutorials/load_data/tfrecord.ipynb) for an end-to-end example.The `tf.data` API supports a variety of file formats so that you can processlarge datasets that do not fit in memory. For example, the TFRecord file formatis a simple record-oriented binary format that many TensorFlow applications usefor training data. The `tf.data.TFRecordDataset` class enables you tostream over the contents of one or more TFRecord files as part of an inputpipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files. 
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore, if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files, use `Dataset.interleave`. This makes it easier to shuffle files together.
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas_dataframe.ipynb) for more examples.
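The skip-then-filter pattern above has a direct plain-Python analogue. This sketch uses made-up lines in the Titanic CSV's layout (label in the first column); it illustrates only the control flow, not the TensorFlow API:

```python
import itertools

lines = [
    "survived,sex,age",  # header line to skip
    "0,male,22.0",
    "1,female,38.0",
    "1,female,26.0",
    "0,male,35.0",
]

def survived(line):
    # Keep lines whose first character (the label) is not "0".
    return line[0] != "0"

# islice(lines, 1, None) plays the role of skip(1); filter() keeps survivors.
survivors = list(filter(survived, itertools.islice(lines, 1, None)))
print(survivors)  # ['1,female,38.0', '1,female,26.0']
```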
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads the records from missing.csv, which has # four columns that may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments, respectively. ###Code # Creates a dataset that reads records from missing.csv, extracting # only columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths.
The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. ###Code from __future__ import absolute_import, division, print_function, unicode_literals try: !pip install tf-nightly except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable.
This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor.
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer.
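Conceptually, `from_tensor_slices` slices its input along the first axis and yields one element per index. A rough plain-Python sketch of that behavior (a hypothetical helper, not the TensorFlow implementation):

```python
def from_tensor_slices(*components):
    # Yield one tuple per index along the first axis, the way
    # tf.data slices an (images, labels) pair into examples.
    if len({len(c) for c in components}) != 1:
        raise ValueError("all components must share the same first dimension")
    yield from zip(*components)

images = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
labels = [0, 1, 0]
pairs = list(from_tensor_slices(images, labels))
print(pairs[0])  # ([0.1, 0.2], 0)
```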
Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended as many TensorFlow operations do not support tensors with an unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length.
###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator`: ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory.
For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads the examples from the downloaded test file. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore, if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files.
###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files, use `Dataset.interleave`. This makes it easier to shuffle files together. Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
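The round-robin merging that `Dataset.interleave` performs above with `cycle_length=3` can be mimicked with `itertools` in plain Python (the file contents here are made up for the sketch):

```python
from itertools import chain, zip_longest

files = [
    ["cowper 1", "cowper 2", "cowper 3"],
    ["derby 1", "derby 2", "derby 3"],
    ["butler 1", "butler 2", "butler 3"],
]

# Take one line from each file in turn, like interleave with cycle_length=3.
_PAD = object()
interleaved = [
    line
    for line in chain.from_iterable(zip_longest(*files, fillvalue=_PAD))
    if line is not _PAD
]
print(interleaved[:3])  # ['cowper 1', 'derby 1', 'butler 1']
```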
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads the records from missing.csv, which has # four columns that may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments, respectively. ###Code # Creates a dataset that reads records from missing.csv, extracting # only columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, '/')[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large `buffer_size` values shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
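At its core, `Dataset.map(f)` is element-wise function application, directly analogous to Python's built-in `map`; a minimal plain-Python sketch of that semantics:

```python
dataset = [1, 2, 3, 4]

def f(x):
    # Stand-in for a per-element preprocessing function.
    return x * x

# Each input element is passed through f to produce the output element.
mapped = list(map(f, dataset))
print(mapped)  # [1, 4, 9, 16]
```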
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: all except the last 5 steps batch[-5:]) # Labels: the last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the dataset directly: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimator To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details.
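The file layout relied on below (one sub-directory per class) means the label can be recovered from the second-to-last path component. A pure-Python sketch of that convention (the paths here are hypothetical, for illustration only):

```python
import os

# The class label is the name of the directory containing the file,
# i.e. the second-to-last component of the path.
def label_from_path(file_path):
    parts = file_path.split(os.sep)
    return parts[-2]

path = os.sep.join(["flower_photos", "roses", "img_001.jpg"])
print(label_from_path(path))  # roses
```

This is the same indexing the TensorFlow `process_path` function below performs with `tf.strings.split(file_path, os.sep)[-2]`.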
The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e. for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full.
Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation.
First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem.
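The fixed-size buffer just described can be modeled in a few lines of plain Python. This is a simplified sketch of the sampling scheme, not the actual `tf.data` implementation:

```python
import random

def buffered_shuffle(iterable, buffer_size, rng=random.Random(0)):
    # Keep at most `buffer_size` pending elements; once the buffer is full,
    # emit a uniformly random buffered element for each new element read,
    # then drain whatever remains at the end of the input.
    buffer = []
    for item in iterable:
        buffer.append(item)
        if len(buffer) > buffer_size:
            yield buffer.pop(rng.randrange(len(buffer)))
    while buffer:
        yield buffer.pop(rng.randrange(len(buffer)))

shuffled = list(buffered_shuffle(range(1000), buffer_size=100))
# The j-th emitted element can only come from input indices 0..100+j,
# which is why early batches contain only small indices.
print(shuffled[:10])
```

This also explains the observation below that, with `buffer_size=100` and a batch size of 20, the first batch contains no index over 120.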
Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset.
It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`. Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.io.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible.
However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example see: [Time series forecasting](../../tutorials/structured_data/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: All except the last 5 steps batch[-5:]) # Labels: The last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 3 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predicted_steps = tf.data.Dataset.zip((features, labels)) for features, label in predicted_steps.take(5): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
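Before the TensorFlow version, the `window_size`/`shift`/`stride` arithmetic of `Dataset.window` (combined with `drop_remainder=True` batching) can be previewed with plain list slicing — an illustrative sketch only, not the `tf.data` implementation:

```python
def list_windows(seq, window_size=5, shift=1, stride=1):
    # Each window starts `shift` positions after the previous one and takes
    # every `stride`-th element; windows that would run past the end are
    # dropped, mirroring drop_remainder=True.
    out = []
    start = 0
    while start + (window_size - 1) * stride < len(seq):
        out.append(seq[start:start + window_size * stride:stride])
        start += shift
    return out

# shift == window_size reproduces plain batching:
print(list_windows(list(range(10)), window_size=5, shift=5))
# [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
print(list_windows(list(range(20)), window_size=5, shift=3, stride=2))
```

With `shift` smaller than `window_size` the windows overlap, which is exactly the finer control `Dataset.window` provides below.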
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes, it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`. 
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.Dataset.sample_from_datasets`, pass the datasets and the weight for each: ###Code balanced_ds = tf.data.Dataset.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `Dataset.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. You could use `Dataset.filter` to create those two datasets, but that results in all the data being loaded twice. The `data.Dataset.rejection_resample` method can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.Dataset.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The goal here is to balance the label distribution, and the elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampling method deals with individual examples, so in this case you must `unbatch` the dataset before applying that method. The method needs a target distribution, and optionally an initial distribution estimate, as inputs. ###Code resample_ds = ( creditcard_ds .unbatch() .rejection_resample(class_func, target_dist=[0.5,0.5], initial_dist=fractions) .batch(10)) ###Output _____no_output_____ ###Markdown The `rejection_resample` method returns `(class, example)` pairs where the `class` is the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts, it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note, however, that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state, such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using tf.data with tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training.
The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset:
* A data **source** constructs a `Dataset` from data stored in memory or in one or more files.
* A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
###Code import tensorflow as tf import pathlib import os import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable.
This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure, and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor.
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b, c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer.
Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length.
###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator`: ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory.
For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files.
###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files, use `Dataset.interleave`. This makes it easier to shuffle files together. Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from two CSV files, each with # four float columns which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads all of the records from two CSV files with # headers, extracting float data from columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY; see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, os.sep)[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large buffer sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item IDs near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
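As a plain-Python analogue of these semantics (a hedged sketch for intuition, not the tf.data implementation; the helper name `map_dataset` is invented here): `Dataset.map` applies `f` element-wise, and when an element is a tuple of components, the components are passed to `f` as separate arguments:

```python
# Plain-Python analogue of Dataset.map semantics (illustrative only).
def map_dataset(elements, f):
    # Tuple elements unpack into separate function arguments,
    # mirroring how tf.data passes a (features, label) pair to f.
    return [f(*e) if isinstance(e, tuple) else f(e) for e in elements]

# Single-component elements: f receives one argument.
print(map_dataset([1, 2, 3], lambda x: x * 2))  # [2, 4, 6]

# Tuple elements, e.g. (feature, label): f receives two arguments.
pairs = [(1, "a"), (2, "b")]
print(map_dataset(pairs, lambda feat, label: (feat + 10, label)))
# [(11, 'a'), (12, 'b')]
```

This is why, for a dataset of `(features, labels)` pairs, a mapped function is written as `lambda features, labels: ...` rather than taking a single tuple argument.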
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example, see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: all except the last 5 steps batch[-5:]) # Labels: the last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
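The `size`/`shift`/`stride` semantics used by `Dataset.window` can be sketched in plain Python first. This is an illustrative analogue (the function name `sliding_windows` is invented here), assuming short trailing windows are dropped, as with `drop_remainder=True`:

```python
def sliding_windows(seq, size, shift=1, stride=1):
    """Plain-Python sketch of Dataset.window(size, shift, stride):
    each window starts `shift` elements after the previous one and
    holds `size` elements spaced `stride` apart; short windows at
    the end are dropped."""
    out, start = [], 0
    while True:
        window = seq[start:start + size * stride:stride]
        if len(window) < size:
            return out
        out.append(window)
        start += shift
```

For example, `sliding_windows(list(range(100)), size=10, shift=5, stride=3)` produces the same windows as a `window(10, shift=5, stride=3)` pipeline over a long range: [0, 3, …, 27], [5, 8, …, 32], and so on.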
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note however that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state, such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using tf.data with tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown Labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: The input pipeline API View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training.
The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects. ###Code from __future__ import absolute_import, division, print_function, unicode_literals try: %tensorflow_version 2.x except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable.
This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers. ###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure, and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor.
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure. When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from it is to convert it to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator.
Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock). ###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length.
###Code def gen_series(): i = 0 while True: size = np.random.randint(0,10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes = ((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([],[None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes = ([32,256,256,3],[32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit
in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Creates a dataset that reads all of the examples from two files. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files.
###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url+file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` and `Dataset.filter()` transformations. To apply these transformations to each file separately, we use `Dataset.flat_map()` to create a nested `Dataset` for each file. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class, which provides finer-grained control. It does not support column type inference; instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, each with # four integer columns which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively. ###Code # Creates a dataset that reads records from the CSV file, extracting # just the second and fourth columns.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown To load the data from the files use the `tf.io.read_file` function: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Convert the file paths to (image, label) pairs: ###Code def process_path(file_path): parts = tf.strings.split(file_path, '/') return tf.io.read_file(file_path), parts[-2] labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e. for each component *i*, all elements must have a tensor of the exact same shape.
###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch()` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch()` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. For example, to create a dataset that repeats its input for 3 epochs: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation passes the input dataset through a random shuffle queue, `tf.queues.RandomShuffleQueue`.
It maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large `buffer_size`s shuffle more thoroughly, they can take a lot of memory and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle,
label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`. Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset.
###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, we encourage you to use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames=[fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(raw_example): example = tf.io.parse_example( raw_example[tf.newaxis], {'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string)}) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: all except the last 5 steps batch[-5:]) # Labels: the last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over. Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs, labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func=count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = creditcard_ds.unbatch().filter(lambda features, label: label == 0).repeat() positive_ds = creditcard_ds.unbatch().filter(lambda features, label: label == 1).repeat() for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets([negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the evaluation dataset directly: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps=10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimator To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
###Code !pip install tf-nightly import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable. This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers.
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure.
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b, c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i < stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes=()) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([], [None])) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown Note: As of **TensorFlow 2.2** the `padded_shapes` argument is no longer required. The default behavior is to pad all axes to the longest in the batch. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator` ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord dataSee [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example.The `tf.data` API supports a variety of file formats so that you can processlarge datasets that do not fit in memory. For example, the TFRecord file formatis a simple record-oriented binary format that many TensorFlow applications usefor training data. The `tf.data.TFRecordDataset` class enables you tostream over the contents of one or more TFRecord files as part of an inputpipeline. Here is an example using the test file from the French Street Name Signs (FSNS). 
###Code # Creates a dataset that reads all of the examples from two files. fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be astring, a list of strings, or a `tf.Tensor` of strings. Therefore if you havetwo sets of files for training and validation purposes, you can create a factorymethod that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text dataSee [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.Many datasets are distributed as one or more text files. The`tf.data.TextLineDataset` provides an easy way to extract lines from one or moretext files. Given one or more filenames, a `TextLineDataset` will produce onestring-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. 
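The round-robin behaviour of `interleave` with `cycle_length=3` can be sketched in plain Python. This is a simplification that ignores `block_length`, parallelism, and the pulling-in of later sources, none of which the three-file case needs:

```python
def interleave(sources, cycle_length):
    """Cycle through up to `cycle_length` iterators, taking one item
    from each in turn; an exhausted iterator is dropped from the cycle."""
    iterators = [iter(s) for s in sources[:cycle_length]]
    while iterators:
        alive = []
        for it in iterators:
            try:
                yield next(it)
                alive.append(it)
            except StopIteration:
                pass
        iterators = alive

# Hypothetical "files", each a list of lines.
files = [["a1", "a2"], ["b1", "b2"], ["c1"]]
print(list(interleave(files, cycle_length=3)))
# -> ['a1', 'b1', 'c1', 'a2', 'b2']
```

The output alternates one line at a time between the sources, which is the shuffling-across-files effect described above.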
Here are the first, second and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which maynot be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or`Dataset.filter()` transformations. Here, you skip the first line, then filter tofind only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples. 
The CSV file format is a popular format for storing tabular data in plain text.For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).The `experimental.make_csv_dataset` function is the high level interface for reading sets of csv files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns. 
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer grained control. It does not support column type inference. Instead you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from two CSV files, each with # four float columns which may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file,which may not be desirable, for example if the file starts with a header linethat should be ignored, or if some columns are not required in the input.These lines and fields can be removed with the `header` and `select_cols`arguments respectively. ###Code # Creates a dataset that reads all of the records from two CSV files with # headers, extracting float data from columns 2 and 4. 
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY, see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, '/')[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batchingThe simplest form of batching stacks `n` consecutive elements of a dataset intoa single element. The `Dataset.batch()` transformation does exactly this, withthe same constraints as the `tf.stack()` operator, applied to each componentof the elements: i.e. 
for each component *i*, all elements must have a tensorof the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with paddingThe above recipe works for tensors that all have the same size. However, manymodels (e.g. sequence models) work with input data that can have varying size(e.g. sequences of different lengths). To handle this case, the`Dataset.padded_batch` transformation enables you to batch tensors ofdifferent shape by specifying one or more dimensions in which they may bepadded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different paddingfor each dimension of each component, and it may be variable-length (signifiedby `None` in the example above) or constant-length. It is also possible tooverride the padding value, which defaults to 0.<!--TODO(mrry): Add this section. 
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochsThe `tf.data` API offers two main ways to process multiple epochs of the samedata.The simplest way to iterate over a dataset in multiple epochs is to use the`Dataset.repeat()` transformation. First, create a dataset of titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeatthe input indefinitely.The `Dataset.repeat` transformation concatenates itsarguments without signaling the end of one epoch and the beginning of the nextepoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g. 
to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input dataThe `Dataset.shuffle()` transformation maintains a fixed-sizebuffer and chooses the next element uniformly at random from that buffer.Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n,line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch` the order relative to `Dataset.repeat` matters.`Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. 
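The fixed-size-buffer mechanism can be sketched in plain Python: fill a buffer, emit one element chosen at random, and refill from the input. This is an illustrative simplification of `Dataset.shuffle`, not its actual implementation:

```python
import random

def buffered_shuffle(iterable, buffer_size, rng):
    it = iter(iterable)
    buffer = []
    # Fill the buffer from the head of the input.
    for item in it:
        buffer.append(item)
        if len(buffer) >= buffer_size:
            break
    while buffer:
        # Emit a uniformly random buffered element (swap-and-pop),
        # then refill the freed slot from the input.
        idx = rng.randrange(len(buffer))
        buffer[idx], buffer[-1] = buffer[-1], buffer[idx]
        yield buffer.pop()
        try:
            buffer.append(next(it))
        except StopIteration:
            pass

rng = random.Random(0)
out = list(buffered_shuffle(range(1000), buffer_size=100, rng=rng))
# Every input element appears exactly once, and the first output must
# come from the first `buffer_size` elements of the input.
print(out[:5])
```

This also makes the note above concrete: the shuffle is only as thorough as the buffer is large, and filling the buffer costs memory and time up front.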
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing dataThe `Dataset.map(f)` transformation produces a new dataset by applying a givenfunction `f` to each element of the input dataset. It is based on the[`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) functionthat is commonly applied to lists (and other structures) in functionalprogramming languages. The function `f` takes the `tf.Tensor` objects thatrepresent a single element in the input, and returns the `tf.Tensor` objectsthat will represent a single element in the new dataset. Its implementation usesstandard TensorFlow operations to transform one element into another.This section covers common examples of how to use `Dataset.map()`. 
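As a plain-Python analogy (not tf.data itself), `Dataset.map(f)` applies `f` element-wise, and for tuple-structured elements the components are unpacked as separate function arguments, matching the convention shown at the top of this guide:

```python
def map_dataset(elements, f):
    # Element-wise application; tuple elements are unpacked into
    # separate function arguments, mirroring tf.data's convention.
    for element in elements:
        if isinstance(element, tuple):
            yield f(*element)
        else:
            yield f(element)

squares = list(map_dataset([1, 2, 3], lambda x: x * x))
pairs = list(map_dataset([(1, 10), (2, 20)], lambda x, y: x + y))
print(squares, pairs)  # [1, 4, 9] [11, 22]
```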
Decoding image data and resizing itWhen training a neural network on real-world image data, it is often necessaryto convert images of different sizes to a common size, so that they may bebatched into a fixed size.Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logicFor performance reasons, use TensorFlow operations forpreprocessing your data whenever possible. However, it is sometimes useful tocall external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. 
Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`.To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`, you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messagesMany input pipelines extract `tf.train.Example` protocol buffer messages from aTFRecord format. Each `tf.train.Example` record contains one or more "features",and the input pipeline typically converts these features into tensors. 
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end to end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact.Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice. 
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Take the first 5 steps batch[-5:]) # take the remainder predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details. 
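The index arithmetic behind `window(size, shift, stride)` can be sketched in plain Python before looking at the real API (illustrative parameters; `drop_remainder=True` behaviour is assumed, so partial trailing windows are dropped):

```python
def window_indices(n, size, shift=1, stride=1):
    """Window start positions advance by `shift`; within a window,
    elements are taken every `stride` steps. Windows that would run
    past the end of the data are dropped."""
    windows = []
    start = 0
    while start + (size - 1) * stride < n:
        windows.append(list(range(start, start + size * stride, stride)))
        start += shift
    return windows

print(window_indices(30, size=5, shift=10, stride=3))
# -> [[0, 3, 6, 9, 12], [10, 13, 16, 19, 22]]
```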
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown ResamplingWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial. 
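The class-count bookkeeping used below can be sketched in plain Python on a hypothetical label stream; the real code streams batches through `Dataset.reduce`, but the accumulation is the same:

```python
def class_fractions(labels):
    # Accumulate per-class counts, then normalise -- the same
    # bookkeeping the reduce step below performs on batches.
    counts = {0: 0, 1: 0}
    for label in labels:
        counts[label] += 1
    total = sum(counts.values())
    return [counts[0] / total, counts[1] / total]

# A toy, heavily skewed label stream (hypothetical data).
labels = [0] * 99 + [1]
print(class_fractions(labels))  # [0.99, 0.01]
```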
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes, it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`. 
This is more applicable when you have a separate `data.Dataset` for each class.Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is thatit needs a separate `tf.data.Dataset` per class. Using `Dataset.filter`works, but results in all the data being loaded twice.The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.`data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.The elements of `creditcard_ds` are already `(features, label)` pairs. 
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Iterator Checkpointing TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note however that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
###Code range_ds = tf.data.Dataset.range(20) iterator = iter(range_ds) ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator) manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3) print([next(iterator).numpy() for _ in range(5)]) save_path = manager.save() print([next(iterator).numpy() for _ in range(5)]) ckpt.restore(manager.latest_checkpoint) print([next(iterator).numpy() for _ in range(5)]) ###Output _____no_output_____ ###Markdown Note: It is not possible to checkpoint an iterator which relies on external state such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state. Using high-level APIs tf.kerasThe `tf.keras` API simplifies many aspects of creating and executing machinelearning models. Its `.fit()` and `.evaluate()` and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) 
print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`. ###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimator To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. Licensed
under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown tf.data: Build TensorFlow input pipelines The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. There are two distinct ways to create a dataset: * A data **source** constructs a `Dataset` from data stored in memory or in one or more files. * A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
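To make the source/transformation distinction concrete, here is a minimal sketch (assuming TensorFlow 2.x with eager execution) with one source and one chained transformation:

```python
import tensorflow as tf

# Source: build a Dataset from an in-memory list.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])

# Transformation: derive a new Dataset from an existing one.
doubled = dataset.map(lambda x: x * 2)

print([int(x) for x in doubled])  # [2, 4, 6]
```

The same pattern scales up: real pipelines chain many transformations off one or more sources.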
###Code from __future__ import absolute_import, division, print_function, unicode_literals try: !pip install tf-nightly except Exception: pass import tensorflow as tf import pathlib import matplotlib.pyplot as plt import pandas as pd import numpy as np np.set_printoptions(precision=4) ###Output _____no_output_____ ###Markdown Basic mechanics To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations. The `Dataset` object is a Python iterable. This makes it possible to consume its elements using a for loop: ###Code dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]) dataset for elem in dataset: print(elem.numpy()) ###Output _____no_output_____ ###Markdown Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`: ###Code it = iter(dataset) print(next(it).numpy()) ###Output _____no_output_____ ###Markdown Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers.
###Code print(dataset.reduce(0, lambda state, value: state + value).numpy()) ###Output _____no_output_____ ###Markdown Dataset structure A dataset contains elements that each have the same (nested) structure, and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `Tensor`, `SparseTensor`, `RaggedTensor`, `TensorArray`, or `Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example: ###Code dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10])) dataset1.element_spec dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2.element_spec dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3.element_spec # Dataset containing a sparse tensor. dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])) dataset4.element_spec # Use value_type to see the type of value represented by the element spec dataset4.element_spec.value_type ###Output _____no_output_____ ###Markdown The `Dataset` transformations support datasets of any structure.
When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function: ###Code dataset1 = tf.data.Dataset.from_tensor_slices( tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) dataset1 for z in dataset1: print(z.numpy()) dataset2 = tf.data.Dataset.from_tensor_slices( (tf.random.uniform([4]), tf.random.uniform([4, 100], maxval=100, dtype=tf.int32))) dataset2 dataset3 = tf.data.Dataset.zip((dataset1, dataset2)) dataset3 for a, (b,c) in dataset3: print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c)) ###Output _____no_output_____ ###Markdown Reading input data Consuming NumPy arrays See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from it is to convert it to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`. ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255 dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset ###Output _____no_output_____ ###Markdown Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer. Consuming Python generators Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator. Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
###Code def count(stop): i = 0 while i<stop: yield i i += 1 for n in count(5): print(n) ###Output _____no_output_____ ###Markdown The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`. ###Code ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), ) for count_batch in ds_counter.repeat().batch(10).take(10): print(count_batch.numpy()) ###Output _____no_output_____ ###Markdown The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length. ###Code def gen_series(): i = 0 while True: size = np.random.randint(0, 10) yield i, np.random.normal(size=(size,)) i += 1 for i, series in gen_series(): print(i, ":", str(series)) if i > 5: break ###Output _____no_output_____ ###Markdown The first output is an `int32`, the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`. ###Code ds_series = tf.data.Dataset.from_generator( gen_series, output_types=(tf.int32, tf.float32), output_shapes=((), (None,))) ds_series ###Output _____no_output_____ ###Markdown Now it can be used like a regular `tf.data.Dataset`.
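For instance (a self-contained sketch that re-creates `gen_series` from above), standard transformations such as `take` apply directly to the generator-backed dataset:

```python
import numpy as np
import tensorflow as tf

def gen_series():
    i = 0
    while True:
        size = np.random.randint(0, 10)
        yield i, np.random.normal(size=(size,))
        i += 1

ds_series = tf.data.Dataset.from_generator(
    gen_series,
    output_types=(tf.int32, tf.float32),
    output_shapes=((), (None,)))

# take, map, filter, etc. all work as on any other Dataset.
for i, series in ds_series.take(3):
    print(i.numpy(), series.shape)  # ids 0, 1, 2 with variable-length series
```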
Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`. ###Code ds_series_batch = ds_series.shuffle(20).padded_batch(10) ids, sequence_batch = next(iter(ds_series_batch)) print(ids.numpy()) print() print(sequence_batch.numpy()) ###Output _____no_output_____ ###Markdown For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data: ###Code flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) ###Output _____no_output_____ ###Markdown Create the `image.ImageDataGenerator`: ###Code img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20) images, labels = next(img_gen.flow_from_directory(flowers)) print(images.dtype, images.shape) print(labels.dtype, labels.shape) ds = tf.data.Dataset.from_generator( img_gen.flow_from_directory, args=[flowers], output_types=(tf.float32, tf.float32), output_shapes=([32,256,256,3], [32,5]) ) ds ###Output _____no_output_____ ###Markdown Consuming TFRecord data See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS). ###Code # Download the FSNS test file.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") ###Output _____no_output_____ ###Markdown The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore, if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument: ###Code dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) parsed.features.feature['image/text'] ###Output _____no_output_____ ###Markdown Consuming text data See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files. ###Code directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' file_names = ['cowper.txt', 'derby.txt', 'butler.txt'] file_paths = [ tf.keras.utils.get_file(file_name, directory_url + file_name) for file_name in file_names ] dataset = tf.data.TextLineDataset(file_paths) ###Output _____no_output_____ ###Markdown Here are the first few lines of the first file: ###Code for line in dataset.take(5): print(line.numpy()) ###Output _____no_output_____ ###Markdown To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together.
Here are the first, second, and third lines from each translation: ###Code files_ds = tf.data.Dataset.from_tensor_slices(file_paths) lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3) for i, line in enumerate(lines_ds.take(9)): if i % 3 == 0: print() print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors. ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) for line in titanic_lines.take(10): print(line.numpy()) def survived(line): return tf.not_equal(tf.strings.substr(line, 0, 1), "0") survivors = titanic_lines.skip(1).filter(survived) for line in survivors.take(10): print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming CSV data See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text. For example: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") df = pd.read_csv(titanic_file, index_col=None) df.head() ###Output _____no_output_____ ###Markdown If your data fits in memory, the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported: ###Code titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) for feature_batch in titanic_slices.take(1): for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived") for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown You can use the `select_columns` argument if you only need a subset of columns.
###Code titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ###Output _____no_output_____ ###Markdown There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column. ###Code titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True) for line in dataset.take(10): print([item.numpy() for item in line]) ###Output _____no_output_____ ###Markdown If some columns are empty, this low-level interface allows you to provide default values instead of column types. ###Code %%writefile missing.csv 1,2,3,4 ,2,3,4 1,,3,4 1,2,,4 1,2,3, ,,, # Creates a dataset that reads all of the records from the CSV file, which has # four columns that may have missing values. record_defaults = [999,999,999,999] dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example, if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments, respectively. ###Code # Creates a dataset that reads the records from the CSV file, skipping the # header and extracting data from columns 2 and 4.
record_defaults = [999, 999] # Only provide defaults for the selected columns dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3]) dataset = dataset.map(lambda *items: tf.stack(items)) dataset for line in dataset: print(line.numpy()) ###Output _____no_output_____ ###Markdown Consuming sets of files There are many datasets distributed as a set of files, where each file is an example. ###Code flowers_root = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) flowers_root = pathlib.Path(flowers_root) ###Output _____no_output_____ ###Markdown Note: these images are licensed CC-BY; see LICENSE.txt for details. The root directory contains a directory for each class: ###Code for item in flowers_root.glob("*"): print(item.name) ###Output _____no_output_____ ###Markdown The files in each class directory are examples: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) for f in list_ds.take(5): print(f.numpy()) ###Output _____no_output_____ ###Markdown Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs: ###Code def process_path(file_path): label = tf.strings.split(file_path, '/')[-2] return tf.io.read_file(file_path), label labeled_ds = list_ds.map(process_path) for image_raw, label_text in labeled_ds.take(1): print(repr(image_raw.numpy()[:100])) print() print(label_text.numpy()) ###Output _____no_output_____ ###Markdown <!--TODO(mrry): Add this section. Handling text data with unusual sizes--> Batching dataset elements Simple batching The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e.
for each component *i*, all elements must have a tensor of the exact same shape. ###Code inc_dataset = tf.data.Dataset.range(100) dec_dataset = tf.data.Dataset.range(0, -100, -1) dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset)) batched_dataset = dataset.batch(4) for batch in batched_dataset.take(4): print([arr.numpy() for arr in batch]) ###Output _____no_output_____ ###Markdown While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape: ###Code batched_dataset ###Output _____no_output_____ ###Markdown Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation: ###Code batched_dataset = dataset.batch(7, drop_remainder=True) batched_dataset ###Output _____no_output_____ ###Markdown Batching tensors with padding The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded. ###Code dataset = tf.data.Dataset.range(100) dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x)) dataset = dataset.padded_batch(4, padded_shapes=(None,)) for batch in dataset.take(2): print(batch.numpy()) print() ###Output _____no_output_____ ###Markdown The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0. <!--TODO(mrry): Add this section.
Dense ragged -> tf.SparseTensor--> Training workflows Processing multiple epochs The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of Titanic data: ###Code titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic_lines = tf.data.TextLineDataset(titanic_file) def plot_batch_sizes(ds): batch_sizes = [batch.shape[0] for batch in ds] plt.bar(range(len(batch_sizes)), batch_sizes) plt.xlabel('Batch number') plt.ylabel('Batch size') ###Output _____no_output_____ ###Markdown Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries: ###Code titanic_batches = titanic_lines.repeat(3).batch(128) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you need clear epoch separation, put `Dataset.batch` before the repeat: ###Code titanic_batches = titanic_lines.batch(128).repeat(3) plot_batch_sizes(titanic_batches) ###Output _____no_output_____ ###Markdown If you would like to perform a custom computation (e.g.
to collect statistics) at the end of each epoch, then it's simplest to restart the dataset iteration on each epoch: ###Code epochs = 3 dataset = titanic_lines.batch(128) for epoch in range(epochs): for batch in dataset: print(batch.shape) print("End of epoch: ", epoch) ###Output _____no_output_____ ###Markdown Randomly shuffling input data The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer. Note: While large `buffer_size` values shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem. Add an index to the dataset so you can see the effect: ###Code lines = tf.data.TextLineDataset(titanic_file) counter = tf.data.experimental.Counter() dataset = tf.data.Dataset.zip((counter, lines)) dataset = dataset.shuffle(buffer_size=100) dataset = dataset.batch(20) dataset ###Output _____no_output_____ ###Markdown Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120. ###Code n, line_batch = next(iter(dataset)) print(n.numpy()) ###Output _____no_output_____ ###Markdown As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty.
So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(60).take(5): print(n.numpy()) shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown But a repeat before a shuffle mixes the epoch boundaries together: ###Code dataset = tf.data.Dataset.zip((counter, lines)) shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10) print("Here are the item ID's near the epoch boundary:\n") for n, line_batch in shuffled.skip(55).take(15): print(n.numpy()) repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled] plt.plot(shuffle_repeat, label="shuffle().repeat()") plt.plot(repeat_shuffle, label="repeat().shuffle()") plt.ylabel("Mean item ID") plt.legend() ###Output _____no_output_____ ###Markdown Preprocessing data The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.
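As a minimal warm-up (a toy sketch, separate from the image examples that follow), `Dataset.map` applies `f` element-wise and returns a new dataset of the results:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(5)       # 0, 1, 2, 3, 4
squared = ds.map(lambda x: x * x)   # f runs once per element

print([int(x) for x in squared])  # [0, 1, 4, 9, 16]
```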
Decoding image data and resizing it When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset: ###Code list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*')) ###Output _____no_output_____ ###Markdown Write a function that manipulates the dataset elements. ###Code # Reads an image from a file, decodes it into a dense tensor, and resizes it # to a fixed shape. def parse_image(filename): parts = tf.strings.split(filename, '/') label = parts[-2] image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [128, 128]) return image, label ###Output _____no_output_____ ###Markdown Test that it works. ###Code file_path = next(iter(list_ds)) image, label = parse_image(file_path) def show(image, label): plt.figure() plt.imshow(image) plt.title(label.numpy().decode('utf-8')) plt.axis('off') show(image, label) ###Output _____no_output_____ ###Markdown Map it over the dataset. ###Code images_ds = list_ds.map(parse_image) for image, label in images_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Applying arbitrary Python logic For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate`. To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead: ###Code import scipy.ndimage as ndimage def random_rotate_image(image): image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False) return image image, label = next(iter(images_ds)) image = random_rotate_image(image) show(image, label) ###Output _____no_output_____ ###Markdown To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function: ###Code def tf_random_rotate_image(image, label): im_shape = image.shape [image,] = tf.py_function(random_rotate_image, [image], [tf.float32]) image.set_shape(im_shape) return image, label rot_ds = images_ds.map(tf_random_rotate_image) for image, label in rot_ds.take(2): show(image, label) ###Output _____no_output_____ ###Markdown Parsing `tf.Example` protocol buffer messages Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
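As a minimal illustration before working with real records (the `'text'` and `'value'` feature names here are hypothetical, not part of any real dataset), you can build a `tf.train.Example` in memory, serialize it, and parse it back into tensors:

```python
import tensorflow as tf

# Build a tf.train.Example with one bytes feature and one float feature.
example = tf.train.Example(features=tf.train.Features(feature={
    'text': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'hello'])),
    'value': tf.train.Feature(float_list=tf.train.FloatList(value=[3.14])),
}))
serialized = example.SerializeToString()

# Parse the serialized proto back, declaring the expected feature types.
parsed = tf.io.parse_single_example(serialized, {
    'text': tf.io.FixedLenFeature([], tf.string),
    'value': tf.io.FixedLenFeature([], tf.float32),
})
print(parsed['text'].numpy())  # b'hello'
```

The same feature-description dictionary drives `tf.io.parse_example` when decoding batches inside a `Dataset.map`.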
###Code fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001") dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file]) dataset ###Output _____no_output_____ ###Markdown You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data: ###Code raw_example = next(iter(dataset)) parsed = tf.train.Example.FromString(raw_example.numpy()) feature = parsed.features.feature raw_img = feature['image/encoded'].bytes_list.value[0] img = tf.image.decode_png(raw_img) plt.imshow(img) plt.axis('off') _ = plt.title(feature["image/text"].bytes_list.value[0]) raw_example = next(iter(dataset)) def tf_parse(eg): example = tf.io.parse_example( eg[tf.newaxis], { 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string), 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string) }) return example['image/encoded'][0], example['image/text'][0] img, txt = tf_parse(raw_example) print(txt.numpy()) print(repr(img.numpy()[:20]), "...") decoded = dataset.map(tf_parse) decoded image_batch, text_batch = next(iter(decoded.batch(10))) image_batch.shape ###Output _____no_output_____ ###Markdown Time series windowing For an end-to-end time series example, see [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate: ###Code range_ds = tf.data.Dataset.range(100000) ###Output _____no_output_____ ###Markdown Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data: Using `batch` ###Code batches = range_ds.batch(10, drop_remainder=True) for batch in batches.take(5): print(batch.numpy()) ###Output _____no_output_____ ###Markdown Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other: ###Code def dense_1_step(batch): # Shift features and labels one step relative to each other. return batch[:-1], batch[1:] predict_dense_1_step = batches.map(dense_1_step) for features, label in predict_dense_1_step.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To predict a whole window instead of a fixed offset, you can split the batches into two parts: ###Code batches = range_ds.batch(15, drop_remainder=True) def label_next_5_steps(batch): return (batch[:-5], # Inputs: all except the last 5 steps batch[-5:]) # Labels: the last 5 steps predict_5_steps = batches.map(label_next_5_steps) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`: ###Code feature_length = 10 label_length = 5 features = range_ds.batch(feature_length, drop_remainder=True) labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5]) predict_5_steps = tf.data.Dataset.zip((features, labels)) for features, label in predict_5_steps.take(3): print(features.numpy(), " => ", label.numpy()) ###Output _____no_output_____ ###Markdown Using `window` While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](#dataset_structure) for details.
###Code window_size = 5 windows = range_ds.window(window_size, shift=1) for sub_ds in windows.take(5): print(sub_ds) ###Output _____no_output_____ ###Markdown The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset: ###Code for x in windows.flat_map(lambda x: x).take(30): print(x.numpy(), end=' ') ###Output _____no_output_____ ###Markdown In nearly all cases, you will want to `.batch` the dataset first: ###Code def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) for example in windows.flat_map(sub_to_batch).take(5): print(example.numpy()) ###Output _____no_output_____ ###Markdown Now, you can see that the `shift` argument controls how much each window moves over.Putting this together you might write this function: ###Code def make_window_dataset(ds, window_size=5, shift=1, stride=1): windows = ds.window(window_size, shift=shift, stride=stride) def sub_to_batch(sub): return sub.batch(window_size, drop_remainder=True) windows = windows.flat_map(sub_to_batch) return windows ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3) for example in ds.take(10): print(example.numpy()) ###Output _____no_output_____ ###Markdown Then it's easy to extract labels, as before: ###Code dense_labels_ds = ds.map(dense_1_step) for inputs,labels in dense_labels_ds.take(3): print(inputs.numpy(), "=>", labels.numpy()) ###Output _____no_output_____ ###Markdown Resampling When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem. Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
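Before turning to the credit card data, the core idea behind rebalancing (keep every minority-class example, randomly drop majority-class ones) can be sketched with plain NumPy; this is a toy illustration of the concept, not `tf.data`'s actual implementation:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical stream of labels: roughly 1% positive (class 1), 99% negative.
labels = (rng.random(100_000) < 0.01).astype(int)

# Keep every positive example; keep a negative with probability p1 / (1 - p1),
# so both classes survive at roughly the same absolute rate.
p1 = labels.mean()
keep = (labels == 1) | (rng.random(labels.size) < p1 / (1 - p1))
balanced = labels[keep]

print("fraction positive before:", round(labels.mean(), 3))
print("fraction positive after: ", round(balanced.mean(), 3))
```

After filtering, the surviving stream is close to 50/50, at the cost of discarding most of the majority class, which is the same trade-off the `tf.data` utilities below manage for you.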
###Code zip_path = tf.keras.utils.get_file( origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip', fname='creditcard.zip', extract=True) csv_path = zip_path.replace('.zip', '.csv') creditcard_ds = tf.data.experimental.make_csv_dataset( csv_path, batch_size=1024, label_name="Class", # Set the column types: 30 floats and an int. column_defaults=[float()]*30+[int()]) ###Output _____no_output_____ ###Markdown Now, check the distribution of classes; it is highly skewed: ###Code def count(counts, batch): features, labels = batch class_1 = labels == 1 class_1 = tf.cast(class_1, tf.int32) class_0 = labels == 0 class_0 = tf.cast(class_0, tf.int32) counts['class_0'] += tf.reduce_sum(class_0) counts['class_1'] += tf.reduce_sum(class_1) return counts counts = creditcard_ds.take(10).reduce( initial_state={'class_0': 0, 'class_1': 0}, reduce_func = count) counts = np.array([counts['class_0'].numpy(), counts['class_1'].numpy()]).astype(np.float32) fractions = counts/counts.sum() print(fractions) ###Output _____no_output_____ ###Markdown A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow: Datasets sampling One approach to resampling a dataset is to use `sample_from_datasets`.
This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data: ###Code negative_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==0) .repeat()) positive_ds = ( creditcard_ds .unbatch() .filter(lambda features, label: label==1) .repeat()) for features, label in positive_ds.batch(10).take(1): print(label.numpy()) ###Output _____no_output_____ ###Markdown To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each: ###Code balanced_ds = tf.data.experimental.sample_from_datasets( [negative_ds, positive_ds], [0.5, 0.5]).batch(10) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Rejection resampling One problem with the above `experimental.sample_from_datasets` approach is thatit needs a separate `tf.data.Dataset` per class. Using `Dataset.filter`works, but results in all the data being loaded twice.The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.`data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.The elements of `creditcard_ds` are already `(features, label)` pairs.
So the `class_func` just needs to return those labels: ###Code def class_func(features, label): return label ###Output _____no_output_____ ###Markdown The resampler also needs a target distribution, and optionally an initial distribution estimate: ###Code resampler = tf.data.experimental.rejection_resample( class_func, target_dist=[0.5, 0.5], initial_dist=fractions) ###Output _____no_output_____ ###Markdown The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler: ###Code resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10) ###Output _____no_output_____ ###Markdown The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels: ###Code balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label) ###Output _____no_output_____ ###Markdown Now the dataset produces examples of each class with 50/50 probability: ###Code for features, labels in balanced_ds.take(10): print(labels.numpy()) ###Output _____no_output_____ ###Markdown Using high-level APIs tf.keras The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs.
Here is a quick dataset and model setup: ###Code train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`: ###Code model.fit(fmnist_train_ds, epochs=2) ###Output _____no_output_____ ###Markdown If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument: ###Code model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20) ###Output _____no_output_____ ###Markdown For evaluation, you can pass the number of evaluation steps: ###Code loss, accuracy = model.evaluate(fmnist_train_ds) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown For long datasets, set the number of steps to evaluate: ###Code loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10) print("Loss :", loss) print("Accuracy :", accuracy) ###Output _____no_output_____ ###Markdown The labels are not required when calling `Model.predict`.
###Code predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32) result = model.predict(predict_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown But the labels are ignored if you do pass a dataset containing them: ###Code result = model.predict(fmnist_train_ds, steps = 10) print(result.shape) ###Output _____no_output_____ ###Markdown tf.estimator To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, simply return the `Dataset` from the `input_fn` and the framework will take care of consuming its elements for you. For example: ###Code import tensorflow_datasets as tfds def train_input_fn(): titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.experimental.AUTOTUNE)) return titanic_batches embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32) cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) age = tf.feature_column.numeric_column('age') import tempfile model_dir = tempfile.mkdtemp() model = tf.estimator.LinearClassifier( model_dir=model_dir, feature_columns=[embark, cls, age], n_classes=2 ) model = model.train(input_fn=train_input_fn, steps=100) result = model.evaluate(train_input_fn, steps=10) for key, value in result.items(): print(key, ":", value) for pred in model.predict(train_input_fn): for key, value in pred.items(): print(key, ":", value) break ###Output _____no_output_____
Clustering Chicago Public Libraries.ipynb
###Markdown Clustering Chicago Public Libraries by Top 10 Nearby Venues Author: Kunyu He, University of Chicago CAPP'20 Executive Summary In this notebook, I clustered 80 public libraries in the city of Chicago into 7 clusters, based on the categories of their top ten venues nearby. It would be a nice guide for those who would like to spend their days in these libraries, exploring their surroundings, but become tired of staying in only one or a few of them over time. The rest of this notebook is organized as follows: the [Data](#Data) section briefly introduces the data source. The [Methodology](#Methodology) section briefly introduces the unsupervised learning algorithms used. In the [Imports and Format Parameters](#Imports-and-Format-Parameters) section, I install and import the Python libraries used and set the global constants for future use.
The [Getting and Cleaning Data](#Getting-and-Cleaning-Data) section contains code for downloading and cleaning the public library and nearby venue data from external sources. I perform dimension reduction, clustering, and labelling mainly in the [Data Analysis](#Data-Analysis) section. Finally, the resulting folium map is presented in the [Results](#Results) section, and the [Discussions](#Discussions) section covers caveats and potential improvements. Data Information of the public libraries is provided by [Chicago Public Library](https://www.chipublib.org/).
You can access the data [here](https://data.cityofchicago.org/Education/Libraries-Locations-Hours-and-Contact-Information/x8fc-8rcq). Information on the top venues near the public libraries (within a range of 1,000 meters) is acquired from the [FourSquare API](https://developer.foursquare.com/). You can explore the surroundings of any geographical coordinates of interest with a developer account. Methodology The clustering algorithms used include: * [Principal Component Analysis](https://en.wikipedia.org/wiki/Principal_component_analysis) with [Truncated SVD](http://infolab.stanford.edu/pub/cstr/reports/na/m/86/36/NA-M-86-36.pdf); * [KMeans Clustering](https://en.wikipedia.org/wiki/K-means_clustering); * [Hierarchical Clustering](https://en.wikipedia.org/wiki/Hierarchical_clustering) with [Ward's Method](https://en.wikipedia.org/wiki/Ward%27s_method). PCA with TSVD is used for reducing the dimension of our feature matrix, which is a [sparse matrix](https://en.wikipedia.org/wiki/Sparse_matrix). KMeans and hierarchical clustering are applied to cluster the libraries in terms of their top ten nearby venue categories, and the final labels are derived from hierarchical clustering with Ward distance. Imports and Format Parameters ###Code import pandas as pd import numpy as np import re import requests import matplotlib.pyplot as plt from matplotlib.font_manager import FontProperties from pandas.io.json import json_normalize from sklearn.decomposition import TruncatedSVD from sklearn.cluster import KMeans from scipy.cluster.hierarchy import linkage, dendrogram, fcluster ###Output _____no_output_____ ###Markdown For visualization, install [folium](https://github.com/python-visualization/folium) and make an additional import.
###Code !conda install --quiet -c conda-forge folium --yes import folium %matplotlib inline title = FontProperties() title.set_family('serif') title.set_size(16) title.set_weight('bold') axis = FontProperties() axis.set_family('serif') axis.set_size(12) plt.rcParams['figure.figsize'] = [12, 8] ###Output _____no_output_____ ###Markdown Hard-code the geographical coordinates of the City of Chicago based on [this](https://www.latlong.net/place/chicago-il-usa-1855.html) page. Also prepare formatting parameters for folium map markers. ###Code LATITUDE, LOGITUDE = 41.881832, -87.623177 ICON_COLORS = ['red', 'blue', 'green', 'purple', 'orange', 'beige', 'darkred'] HTML = """ <center><h4><b>Library {}</b></h4></center> <h5><b>Cluster:</b> {};</h5> <h5><b>Hours of operation:</b><br> {}</h5> <h5><b>Top five venues:</b><br> <center>{}<br> {}<br> {}<br> {}<br> {}</center></h5> """ ###Output _____no_output_____ ###Markdown Getting and Cleaning Data Public Library Data ###Code !wget --quiet https://data.cityofchicago.org/api/views/x8fc-8rcq/rows.csv?accessType=DOWNLOAD -O libraries.csv lib = pd.read_csv('libraries.csv', usecols=['NAME ', 'HOURS OF OPERATION', 'LOCATION']) lib.columns = ['library', 'hours', 'location'] lib.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 80 entries, 0 to 79 Data columns (total 3 columns): library 80 non-null object hours 80 non-null object location 80 non-null object dtypes: object(3) memory usage: 2.0+ KB ###Markdown Notice that locations are stored as strings of tuples. Applying the following function to `lib`, we can convert `location` into two separate columns of latitudes and longitudes of the libraries.
###Code def sep_location(row): """ Purpose: separate the location string in a given row, convert it into a tuple of floats, representing latitude and longitude of the library respectively Inputs: row (PandasSeries): a row from the `lib` dataframe Outputs: (tuple): of floats representing latitude and longitude of the library """ return tuple(float(re.compile('[()]').sub("", coordinate)) for \ coordinate in row.location.split(', ')) lib[['latitude', 'longitude']] = lib.apply(sep_location, axis=1).apply(pd.Series) lib.drop('location', axis=1, inplace=True) lib.head() ###Output _____no_output_____ ###Markdown Now data on the public libraries is ready for analysis. Venue Data Use the sensitive code cell below to enter FourSquare credentials. ###Code # The code was removed by Watson Studio for sharing. ###Output _____no_output_____ ###Markdown Get the top ten venues near the libraries and store the data in the `venues` dataframe, with the radius set to 1000 meters by default. You can update the `VERSION` parameter to get up-to-date venue information.
###Code VERSION = '20181206' FEATURES = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng'] def get_venues(libraries, latitudes, longitudes, limit=10, radius=1000.0): """ Purpose: download nearby venues information through FourSquare API in a dataframe Inputs: libraries (PandasSeries): names of the public libraries latitudes (PandasSeries): latitudes of the public libraries longitudes (PandasSeries): longitudes of the public libraries limit (int): number of top venues to explore, default to 10 radius (float): range of the circle coverage to define 'nearby', default to 1000.0 Outputs: (DataFrame) """ venues_lst = [] for library, lat, lng in zip(libraries, latitudes, longitudes): url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format( \ CLIENT_ID, CLIENT_SECRET, VERSION, lat, lng, radius, limit) items = requests.get(url).json()["response"]['groups'][0]['items'] venues_lst.append([(library, lat, lng, \ item['venue']['name'], \ item['venue']['location']['lat'], item['venue']['location']['lng'], \ item['venue']['categories'][0]['name']) for item in items]) venues = pd.DataFrame([item for venues_lst in venues_lst for item in venues_lst]) venues.columns = ['Library', 'Library Latitude', 'Library Longitude', \ 'Venue', 'Venue Latitude', 'Venue Longitude', 'Venue Category'] return venues venues = get_venues(lib.library, lib.latitude, lib.longitude) venues.head() ###Output _____no_output_____ ###Markdown Count unique libraries, venues and venue categories in our `venues` dataframe. ###Code print('There are {} unique libraries, {} unique venues and {} unique categories.'.format( \ len(venues.Library.unique()), \ len(venues.Venue.unique()), \ len(venues['Venue Category'].unique()))) ###Output There are 80 unique libraries, 653 unique venues and 173 unique categories. ###Markdown Now our `venues` data is also ready for further analysis.
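With the venues table ready, the summary used in the analysis that follows is the relative frequency of each venue category around each library. That computation can be sketched with the standard library on a few hypothetical rows (not the real FourSquare data):

```python
from collections import Counter

# Hypothetical (library, venue category) rows, mimicking the `venues` table.
rows = [
    ("Austin", "Park"), ("Austin", "Café"), ("Austin", "Park"), ("Austin", "Bakery"),
    ("Edgewater", "Café"), ("Edgewater", "Café"), ("Edgewater", "Theater"),
]

def category_frequencies(rows):
    """Per-library relative frequency of each venue category."""
    by_library = {}
    for library, category in rows:
        by_library.setdefault(library, Counter())[category] += 1
    return {
        lib: {cat: n / sum(counts.values()) for cat, n in counts.items()}
        for lib, counts in by_library.items()
    }

freqs = category_frequencies(rows)
print(freqs["Austin"])  # {'Park': 0.5, 'Café': 0.25, 'Bakery': 0.25}
```

The pandas one-hot encoding plus `groupby().mean()` pipeline in the next cell computes exactly these per-library frequencies, just vectorized across all categories at once.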
Data Analysis Data Preprocessing Apply one-hot encoding to get our feature matrix, group the venues by libraries and calculate the frequency of each venue category around a specific library by taking the mean. ###Code features = pd.get_dummies(venues['Venue Category'], prefix="", prefix_sep="") features.insert(0, 'Library Name', venues.Library) X = features.groupby(['Library Name']).mean().iloc[:, 1:] X.head() ###Output _____no_output_____ ###Markdown There are too many categories of venues in our features dataframe. Perform PCA to reduce the dimension of our data. Notice that most of the entries in our feature matrix are zero, which means our data is sparse, so we perform dimension reduction with truncated SVD. First, attempt to find the smallest number of dimensions that keeps 85% of the variance and transform the feature matrix. ###Code tsvd = TruncatedSVD(n_components=X.shape[1]-1, random_state=0).fit(X) least_n = np.argmax(tsvd.explained_variance_ratio_.cumsum() > 0.85) print("In order to keep 85% of total variance, we need to keep at least {} dimensions.".format(least_n)) X_t = pd.DataFrame(TruncatedSVD(n_components=least_n, random_state=0).fit_transform(X)) ###Output In order to keep 85% of total variance, we need to keep at least 36 dimensions. ###Markdown Use KMeans on the transformed data and find the best number of k below. ###Code ks = np.arange(1, 51) inertias = [] for k in ks: model = KMeans(n_clusters=k, random_state=0).fit(X_t) inertias.append(model.inertia_) plt.plot(ks, inertias, linewidth=2) plt.title("Figure 1 KMeans: Finding Best k", fontproperties=title) plt.xlabel('Number of Clusters (k)', fontproperties=axis) plt.ylabel('Within-cluster Sum-of-squares', fontproperties=axis) plt.xticks(np.arange(1, 51, 2)) plt.show() ###Output _____no_output_____ ###Markdown It's really hard to decide based on the elbow plot, as the downward trend lasts until 50. Alternatively, try the Ward hierarchical clustering method.
###Code merging = linkage(X_t, 'ward') plt.figure(figsize=[20, 10]) dendrogram(merging, leaf_rotation=90, leaf_font_size=10, distance_sort='descending', show_leaf_counts=True) plt.axhline(y=0.65, dashes=[6, 2], c='r') plt.xlabel('Library Names', fontproperties=axis) plt.title("Figure 2 Hierarchical Clustering with Ward Distance: Cutting at 0.65", fontproperties=title) plt.show() ###Output _____no_output_____ ###Markdown The result is way better than KMeans. We see six clusters when cutting at 0.65. Label the clustered libraries below. Join the labelled library names with `lib` to bind geographical coordinates and hours of operation of the public libraries. ###Code labels = fcluster(merging, t=0.65, criterion='distance') df = pd.DataFrame(list(zip(X.index.values, labels))) df.columns = ['library', 'cluster'] merged = pd.merge(lib, df, how='inner', on='library') merged.head() ###Output _____no_output_____ ###Markdown Results Create a `folium.Map` instance `chicago` with initial zoom level of 11. ###Code chicago = folium.Map(location=[LATITUDE, LOGITUDE], zoom_start=11) ###Output _____no_output_____ ###Markdown Check the clustered map! Click on the icons to see the name, hours of operation and top five nearby venues of each public library in the city of Chicago! ###Code for index, row in merged.iterrows(): venues_name = venues[venues.Library == row.library].Venue.values label = folium.Popup(HTML.format(row.library, row.cluster, row.hours, venues_name[0], venues_name[1], venues_name[2], venues_name[3], venues_name[4]), parse_html=False) folium.Marker([row.latitude, row.longitude], popup=label, icon=folium.Icon(color=ICON_COLORS[row.cluster-1], icon='book')).add_to(chicago) chicago ###Output _____no_output_____
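One small detail in the marker loop above: `fcluster` returns 1-based labels, which is why the palette is indexed with `row.cluster - 1`. A standard-library sketch of that mapping, with hypothetical cluster assignments:

```python
ICON_COLORS = ['red', 'blue', 'green', 'purple', 'orange', 'beige', 'darkred']

# fcluster labels are 1-based, so shift by one when indexing the palette.
labels = [1, 3, 7, 2]  # hypothetical cluster assignments
colors = [ICON_COLORS[c - 1] for c in labels]
print(colors)  # ['red', 'green', 'darkred', 'blue']
```

Forgetting the `- 1` shift would raise an `IndexError` for the highest cluster label, since a palette of seven colors only has indices 0 through 6.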
src/ipython/45 Simulation_Test.ipynb
###Markdown Simulation Test Introduction ###Code import sys import random import numpy as np import pylab from scipy import stats sys.path.insert(0, '../simulation') from environment import Environment from predator import Predator params = { 'env_size': 1000, 'n_patches': 20, 'n_trials': 100, 'max_moves': 5000, 'max_entities_per_patch': 50, 'min_entities_per_patch': 5, } entity_results = [] captured_results = [] for trial in range(params['n_trials']): # Set up the environment env = Environment(params['env_size'], params['env_size'], params['n_patches']) entities = random.randint( params['min_entities_per_patch'], params['max_entities_per_patch'] ) for patch in env.children: patch.create_entities(entities) pred = Predator() pred.xpos = env.length / 2.0 pred.y_pos = env.width / 2.0 pred.parent = env for i in range(params['max_moves']): pred.move() entity = pred.detect() pred.capture(entity) entity_results.append(entities) captured_results.append(len(pred.captured)) x = np.array(entity_results) y = np.array(captured_results) slope, intercept, r_value, p_value, slope_std_error = stats.linregress(x, y) print("Slope, intercept:", slope, intercept) print("R-squared:", r_value**2) # Calculate some additional outputs predict_y = intercept + slope * x pred_error = y - predict_y degrees_of_freedom = len(x) - 2 residual_std_error = np.sqrt(np.sum(pred_error**2) / degrees_of_freedom) print("Residual Std Error = ", residual_std_error) # Plotting pylab.plot(x, y, 'o') pylab.plot(x, predict_y, 'k-') pylab.show() z = np.divide(np.multiply(y, 1.0), np.multiply(x, 1.0)) pylab.plot(x, z, 'o') ###Output _____no_output_____
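The regression summary above (slope, intercept, residual standard error) can be sanity-checked by computing the same quantities by hand on a toy dataset with a known answer, independent of the simulation output:

```python
import numpy as np

# A toy dataset with a known answer: y = 2x + 1 exactly.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

# Closed-form least-squares estimates, the same quantities linregress reports.
slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()

# Residual standard error, mirroring the notebook's calculation.
predict_y = intercept + slope * x
degrees_of_freedom = len(x) - 2
residual_std_error = np.sqrt(np.sum((y - predict_y) ** 2) / degrees_of_freedom)

print(slope, intercept, residual_std_error)  # 2.0 1.0 0.0
```

On perfectly linear data the residuals vanish, so the residual standard error is zero; any noise in the simulation results shows up as a positive value of the same statistic.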
23 - Python for Finance/2_Calculating and Comparing Rates of Return in Python/11_Calculating the Rate of Return of Indices (5:03)/Calculating the Return of Indices - Solution_Yahoo_Py3.ipynb
###Markdown Calculating the Return of Indices *Suggested Answers follow (usually there are multiple ways to solve a problem in Python).* Consider three famous American market indices – Dow Jones, S&P 500, and the Nasdaq for the period of 1st of January 2000 until today. ###Code import numpy as np import pandas as pd from pandas_datareader import data as wb import matplotlib.pyplot as plt tickers = ['^DJI', '^GSPC', '^IXIC'] ind_data = pd.DataFrame() for t in tickers: ind_data[t] = wb.DataReader(t, data_source='yahoo', start='2000-1-1')['Adj Close'] ind_data.head() ind_data.tail() ###Output _____no_output_____ ###Markdown Normalize the data to 100 and plot the results on a graph. ###Code (ind_data / ind_data.iloc[0] * 100).plot(figsize=(15, 6)); plt.show() ###Output _____no_output_____ ###Markdown How would you explain the common and the different parts of the behavior of the three indices? ***** Obtain the simple returns of the indices. ###Code ind_returns = (ind_data / ind_data.shift(1)) - 1 ind_returns.tail() ###Output _____no_output_____ ###Markdown Estimate the average annual return of each index. ###Code annual_ind_returns = ind_returns.mean() * 250 annual_ind_returns ###Output _____no_output_____
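The two steps above (daily simple returns, then annualizing the mean by 250 trading days) can be verified on a tiny hand-made price series; the numbers below are hypothetical, not market data:

```python
# Hypothetical three-day price series.
prices = [100.0, 110.0, 99.0]

# Simple return: P_t / P_{t-1} - 1, the same quantity as
# `ind_data / ind_data.shift(1) - 1` above.
returns = [prices[t] / prices[t - 1] - 1 for t in range(1, len(prices))]
print([round(r, 10) for r in returns])  # [0.1, -0.1]

# Annualize the mean daily return with the ~250 trading days in a year.
mean_daily = sum(returns) / len(returns)
annual_return = mean_daily * 250
```

Note that a +10% day followed by a -10% day gives a mean simple return of zero even though the price ended below where it started (99 < 100), which is one reason log returns are sometimes preferred for multi-period analysis.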
docs/notebooks/negative_binomial.ipynb
###Markdown Negative Binomial Regression (Students absence example) Negative binomial distribution review I always experience some kind of confusion when looking at the negative binomial distribution after a while of not working with it. There are so many different definitions that I usually need to read everything more than once. The definition I first learned, and the one I like the most, says as follows: The negative binomial distribution is the distribution of a random variable that is defined as the number of independent Bernoulli trials until the k-th "success". In short, we repeat a Bernoulli experiment until we observe k successes and record the number of trials it required. $$Y \sim \text{NB}(k, p)$$ where $0 \le p \le 1$ is the probability of success in each Bernoulli trial, $k > 0$, usually an integer, and $y \in \{k, k + 1, \cdots\}$ The probability mass function (pmf) is $$p(y | k, p) = \binom{y - 1}{y - k}(1 - p)^{y - k}p^k$$ If you, like me, find it hard to remember whether $y$ starts at $0$, $1$, or $k$, try to think twice about the definition of the variable. But how? First, recall we aim to have $k$ successes. And success is one of the two possible outcomes of a trial, so the number of trials can never be smaller than the number of successes. Thus, we can be confident to say that $y \ge k$. But this is not the only way of defining the negative binomial distribution; there are plenty of options! One of the most interesting, and the one you see in [PyMC3](https://docs.pymc.io/api/distributions/discrete.html#pymc3.distributions.discrete.NegativeBinomial), the library we use in Bambi for the backend, is as a continuous mixture. The negative binomial distribution describes a Poisson random variable whose rate is also a random variable (not a fixed constant!) following a gamma distribution.
Or in other words, conditional on a gamma-distributed variable $\mu$, the variable $Y$ has a Poisson distribution with mean $\mu$. Under this alternative definition, the pmf is $$\displaystyle p(y | \mu, \alpha) = \binom{y + \alpha - 1}{y} \left(\frac{\alpha}{\mu + \alpha}\right)^\alpha\left(\frac{\mu}{\mu + \alpha}\right)^y$$ where $\mu$ is the parameter of the Poisson distribution (the mean, and variance too!) and $\alpha$ is the shape parameter of the gamma. ###Code import arviz as az import bambi as bmb import matplotlib.pyplot as plt import numpy as np import pandas as pd from scipy.stats import nbinom az.style.use("arviz-darkgrid") import warnings warnings.simplefilter(action='ignore', category=FutureWarning) ###Output _____no_output_____ ###Markdown In SciPy, the definition of the negative binomial distribution differs a little from the one in our introduction. They define $Y$ = Number of failures until k successes and then $y$ starts at 0. In the following plot, we have the probability of observing $y$ failures before we see $k=3$ successes. ###Code y = np.arange(0, 30) k = 3 p1 = 0.5 p2 = 0.3 fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True) ax[0].bar(y, nbinom.pmf(y, k, p1)) ax[0].set_xticks(np.linspace(0, 30, num=11)) ax[0].set_title(f"k = {k}, p = {p1}") ax[1].bar(y, nbinom.pmf(y, k, p2)) ax[1].set_xticks(np.linspace(0, 30, num=11)) ax[1].set_title(f"k = {k}, p = {p2}") fig.suptitle("Y = Number of failures until k successes", fontsize=16); ###Output _____no_output_____ ###Markdown For example, when $p=0.5$, the probability of seeing $y=0$ failures before 3 successes (or in other words, the probability of having 3 successes out of 3 trials) is 0.125, and the probability of seeing $y=3$ failures before 3 successes is 0.156.
###Code print(nbinom.pmf(y, k, p1)[0]) print(nbinom.pmf(y, k, p1)[3]) ###Output 0.12499999999999997 0.15624999999999992 ###Markdown Finally, if one wants to show this probability mass function as if we are following the first definition of the negative binomial distribution we introduced, we just need to shift the whole thing to the right by adding $k$ to the $y$ values. ###Code fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True) ax[0].bar(y + k, nbinom.pmf(y, k, p1)) ax[0].set_xticks(np.linspace(3, 30, num=10)) ax[0].set_title(f"k = {k}, p = {p1}") ax[1].bar(y + k, nbinom.pmf(y, k, p2)) ax[1].set_xticks(np.linspace(3, 30, num=10)) ax[1].set_title(f"k = {k}, p = {p2}") fig.suptitle("Y = Number of trials until k successes", fontsize=16); ###Output _____no_output_____ ###Markdown Negative binomial in GLM The negative binomial distribution belongs to the exponential family, and the canonical link function is $$g(\mu_i) = \log\left(\frac{\mu_i}{k + \mu_i}\right) = -\log\left(\frac{k}{\mu_i} + 1\right)$$ but it is difficult to interpret. The log link is usually preferred because of the analogy with the Poisson model, and it also tends to give better results. Load and explore Students data This example is based on this [UCLA example](https://stats.idre.ucla.edu/r/dae/negative-binomial-regression/). School administrators study the attendance behavior of high school juniors at two schools. Predictors of the **number of days of absence** include the **type of program** in which the student is enrolled and a **standardized test in math**. We have attendance data on 314 high school juniors. The variables of interest in the dataset are: * daysabs: The number of days of absence. It is our response variable. * prog: The type of program. Can be one of 'General', 'Academic', or 'Vocational'. * math: Score in a standardized math test.
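Before loading the data, here is a quick numeric sanity check of the mixture parametrization reviewed earlier: its pmf should sum to one, with mean $\mu$ and variance $\mu + \mu^2/\alpha$. The sketch below uses only the standard library and assumes an integer $\alpha$ so that `math.comb` applies:

```python
from math import comb

def nb_pmf(y, mu, alpha):
    """Mixture-parametrization pmf; alpha kept integer so math.comb applies."""
    return comb(y + alpha - 1, y) * (alpha / (mu + alpha)) ** alpha * (mu / (mu + alpha)) ** y

mu, alpha = 4.0, 3
probs = [nb_pmf(y, mu, alpha) for y in range(400)]  # tail beyond 400 is negligible

total = sum(probs)
mean = sum(y * p for y, p in enumerate(probs))
var = sum((y - mean) ** 2 * p for y, p in enumerate(probs))

print(round(total, 6), round(mean, 6), round(var, 6))  # 1.0 4.0 9.333333
```

The variance 9.33 exceeds the mean 4, which is the overdispersion that makes the negative binomial a better fit than the Poisson for count data like days of absence.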
###Code data = pd.read_stata("https://stats.idre.ucla.edu/stat/stata/dae/nb_data.dta") data.head() ###Output _____no_output_____ ###Markdown We assign categories to the values 1, 2, and 3 of our `"prog"` variable. ###Code data["prog"] = data["prog"].map({1: "General", 2: "Academic", 3: "Vocational"}) data.head() ###Output _____no_output_____ ###Markdown The Academic program is the most popular program (167/314) and General is the least popular one (40/314). ###Code data["prog"].value_counts() ###Output _____no_output_____ ###Markdown Let's explore the distributions of math score and days of absence for each of the three programs listed above. The vertical lines indicate the mean values. ###Code fig, ax = plt.subplots(3, 2, figsize=(8, 6), sharex="col") programs = list(data["prog"].unique()) programs.sort() for idx, program in enumerate(programs): # Histogram ax[idx, 0].hist(data[data["prog"] == program]["math"], edgecolor='black', alpha=0.9) ax[idx, 0].axvline(data[data["prog"] == program]["math"].mean(), color="C1") # Barplot days = data[data["prog"] == program]["daysabs"] days_mean = days.mean() days_counts = days.value_counts() values = list(days_counts.index) count = days_counts.values ax[idx, 1].bar(values, count, edgecolor='black', alpha=0.9) ax[idx, 1].axvline(days_mean, color="C1") # Titles ax[idx, 0].set_title(program) ax[idx, 1].set_title(program) plt.setp(ax[-1, 0], xlabel="Math score") plt.setp(ax[-1, 1], xlabel="Days of absence"); ###Output _____no_output_____ ###Markdown The first impression we have is that the distribution of math scores is not equal for any of the programs. It looks right-skewed for students under the Academic program, left-skewed for students under the Vocational program, and roughly uniform for students in the General program (although there's a drop in the highest values). Clearly those in the Vocational program have the highest mean for the math score. 
On the other hand, the distribution of the days of absence is right-skewed in all cases. Students in the General program present the highest absence mean while the Vocational group is the one that misses the fewest classes on average. Models We are interested in measuring the association between the type of the program and the math score with the days of absence. It's also of interest to see if the association between math score and days of absence is different in each type of program. In order to answer our questions, we are going to fit and compare two models. The first model uses the type of the program and the math score as predictors. The second model also includes the interaction between these two variables. The score in the math test is going to be standardized in both cases to make things easier for the sampler and save some seconds. A good idea to follow along is to run these models without scaling `math` and comparing how long it takes to fit. We are going to use a negative binomial likelihood to model the days of absence. But let's stop here and think why we use this likelihood. Earlier, we said that the negative binomial distribution arises when our variable represents the number of trials until we get $k$ successes. However, the number of trials is fixed, i.e. the number of school days in a given year is not a random variable. So if we stick to the definition, we could think of two alternative views of this problem* Each of the $n$ days is a trial, and we record whether the student is absent ($y=1$) or not ($y=0$). This corresponds to a binary regression setting, where we could think of logistic regression or something similar. A problem here is that we have the sum of $y$ for a student, but not the $n$.* The whole school year represents the space where events occur and we count how many absences we see in that space for each student. 
This gives us a Poisson regression setting (count of an event in a given space or time). We also know that when $n$ is large and $p$ is small, the Binomial distribution can be approximated with a Poisson distribution with $\lambda = n * p$. We don't know exactly $n$ in this scenario, but we know it is around 180, and we do know that $p$ is small because you can't skip classes all the time. So both modeling approaches should give similar results. But then, why negative binomial? Can't we just use a Poisson likelihood? Yes, we can. However, using a Poisson likelihood implies that the mean is equal to the variance, and that is usually an unrealistic assumption. If it turns out the variance is either substantially smaller or greater than the mean, the Poisson regression model results in a poor fit. Alternatively, if we use a negative binomial likelihood, the variance is not forced to be equal to the mean, there's more flexibility to handle a given dataset, and consequently the fit tends to be better. 
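The moment formulas make this concrete: for the same mean $\mu$, the negative binomial variance is $\mu + \mu^2 / \alpha$, which always exceeds the Poisson variance $\mu$. A quick check with SciPy (the values of `mu` and `alpha` are arbitrary, chosen only for illustration):

```python
import numpy as np
from scipy.stats import nbinom, poisson

mu, alpha = 6.0, 1.5               # arbitrary illustrative values
p = alpha / (alpha + mu)           # SciPy's success probability for this (mu, alpha) pair

assert np.isclose(poisson.var(mu), mu)                       # Poisson: variance forced to equal the mean
assert np.isclose(nbinom.mean(alpha, p), mu)                 # negative binomial keeps the same mean...
assert np.isclose(nbinom.var(alpha, p), mu + mu**2 / alpha)  # ...but inflates the variance by mu**2 / alpha
print(nbinom.var(alpha, p))  # close to 30, five times the Poisson variance of 6
```

Note also that as $\alpha \to \infty$ the extra term vanishes, so the negative binomial collapses back to the Poisson.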
Model 1 $$\log{\mu_i} = \beta_1 \text{Academic}_i + \beta_2 \text{General}_i + \beta_3 \text{Vocational}_i + \beta_4 \text{Math_std}_i$$ Model 2 $$\log{\mu_i} = \beta_1 \text{Academic}_i + \beta_2 \text{General}_i + \beta_3 \text{Vocational}_i + \beta_4 \text{Math_std}_i + \beta_5 \text{General}_i \cdot \text{Math_std}_i + \beta_6 \text{Vocational}_i \cdot \text{Math_std}_i$$ In both cases we have the following dummy variables$$\text{Academic}_i = \left\{ \begin{array}{ll} 1 & \textrm{if student is under Academic program} \\ 0 & \textrm{other case} \end{array}\right.$$$$\text{General}_i = \left\{ \begin{array}{ll} 1 & \textrm{if student is under General program} \\ 0 & \textrm{other case} \end{array}\right.$$$$\text{Vocational}_i = \left\{ \begin{array}{ll} 1 & \textrm{if student is under Vocational program} \\ 0 & \textrm{other case} \end{array}\right.$$ and $\mu_i$ is the mean of $Y_i$, the days of absence, since the log link models the mean of the response rather than the response itself. So, for example, the first model for a student under the Vocational program reduces to$$\log{\mu_i} = \beta_3 + \beta_4 \text{Math_std}_i$$ And one last thing to note is that we've decided not to include an intercept term, which is why you don't see any $\beta_0$ above. This choice allows us to represent the effect of each program directly with $\beta_1$, $\beta_2$, and $\beta_3$. Model fit It's very easy to fit these models with Bambi. We just pass a formula describing the terms in the model and Bambi will know how to handle each of them correctly. The `0` on the right-hand side of `~` simply means we don't want to have the intercept term that is added by default. `scale(math)` tells Bambi we want to standardize `math` before it is included in the model. By default, Bambi uses a log link for negative binomial GLMs. We'll stick to this default here. Model 1 ###Code model_additive = bmb.Model("daysabs ~ 0 + prog + scale(math)", data, family="negativebinomial") idata_additive = model_additive.fit() ###Output Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... 
Multiprocess sampling (4 chains in 4 jobs) NUTS: [prog, scale(math), daysabs_alpha] ###Markdown Model 2 For this second model we just add `prog:scale(math)` to indicate the interaction. A shorthand would be to use `y ~ 0 + prog*scale(math)`, which uses the **full interaction** operator. In other words, it just means we want to include the interaction between `prog` and `scale(math)` as well as their main effects. ###Code model_interaction = bmb.Model("daysabs ~ 0 + prog + scale(math) + prog:scale(math)", data, family="negativebinomial") idata_interaction = model_interaction.fit() ###Output Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (4 chains in 4 jobs) NUTS: [prog, scale(math), prog:scale(math), daysabs_alpha] ###Markdown Explore models The first thing we do is to call `az.summary()`. Here we pass the `InferenceData` object that `.fit()` returned. This prints information about the marginal posteriors for each parameter in the model as well as convergence diagnostics. ###Code az.summary(idata_additive) az.summary(idata_interaction) ###Output _____no_output_____ ###Markdown The information in the two tables above can be visualized in a more concise manner using a forest plot. ArviZ provides us with `plot_forest()`. There we simply pass a list containing the `InferenceData` objects of the models we want to compare. ###Code az.plot_forest( [idata_additive, idata_interaction], model_names=["Additive", "Interaction"], var_names=["prog", "scale(math)"], combined=True, figsize=(8, 4) ); ###Output _____no_output_____ ###Markdown One of the first things to note in this plot is the similarity between the marginal posteriors. Maybe one can conclude that the variability of the marginal posterior of `scale(math)` is slightly lower in the model that considers the interaction, but the difference is not significant. 
We can also draw conclusions about the association between the program and the math score with the days of absence. First, we see the posterior for the Vocational group is to the left of the posterior for the two other programs, meaning it is associated with fewer absences (as we have seen when first exploring our data). There also seems to be a difference between General and Academic, where we may conclude the students in the General group tend to miss more classes. In addition, the marginal posterior for `math` shows negative values in both cases. This means that students with higher math scores tend to miss fewer classes. Below, we see a forest plot with the posteriors for the coefficients of the interaction effects. Both of them overlap with 0, which means the data does not give much evidence to support there is an interaction effect between program and math score (i.e., the association between math and days of absence is similar for all the programs). ###Code az.plot_forest(idata_interaction, var_names=["prog:scale(math)"], combined=True, figsize=(8, 4)) plt.axvline(0); ###Output _____no_output_____ ###Markdown Plot predicted mean response We finish this example showing how we can get predictions for new data and plot the mean response for each program together with credible intervals. ###Code math_score = np.arange(1, 100) # This function takes a model and an InferenceData object. # It returns a list of length 3 with predictions for each type of program. 
def predict(model, idata): predictions = [] for program in programs: new_data = pd.DataFrame({"math": math_score, "prog": [program] * len(math_score)}) new_idata = model.predict( idata, data=new_data, inplace=False ) prediction = new_idata.posterior.stack(sample=["chain", "draw"])["daysabs_mean"].values predictions.append(prediction) return predictions prediction_additive = predict(model_additive, idata_additive) prediction_interaction = predict(model_interaction, idata_interaction) mu_additive = [prediction.mean(1) for prediction in prediction_additive] mu_interaction = [prediction.mean(1) for prediction in prediction_interaction] fig, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize = (10, 4)) for idx, program in enumerate(programs): ax[0].plot(math_score, mu_additive[idx], label=f"{program}", color=f"C{idx}", lw=2) az.plot_hdi(math_score, prediction_additive[idx].T, color=f"C{idx}", ax=ax[0]) ax[1].plot(math_score, mu_interaction[idx], label=f"{program}", color=f"C{idx}", lw=2) az.plot_hdi(math_score, prediction_interaction[idx].T, color=f"C{idx}", ax=ax[1]) ax[0].set_title("Additive"); ax[1].set_title("Interaction"); ax[0].set_xlabel("Math score") ax[1].set_xlabel("Math score") ax[0].set_ylim(0, 25) ax[0].legend(loc="upper right"); ###Output _____no_output_____ ###Markdown As we can see in this plot, the interval for the mean response for the Vocational program does not overlap with the interval for the other two groups, representing the group of students who miss fewer classes. On the right panel we can also see that including interaction terms does not change the slopes significantly because the posterior distributions of these coefficients have a substantial overlap with 0. If you've made it to the end of this notebook and you're still curious about what else you can do with these two models, you're invited to use `az.compare()` to compare the fit of the two models. What do you expect before seeing the plot? Why? 
Is there anything else you could do to improve the fit of the model? Also, if you're still curious about what this model would have looked like with the Poisson likelihood, you just need to replace `family="negativebinomial"` with `family="poisson"` and then you're ready to compare results! ###Code %load_ext watermark %watermark -n -u -v -iv -w ###Output Last updated: Wed Jun 01 2022 Python implementation: CPython Python version : 3.9.7 IPython version : 8.3.0 sys : 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] pandas : 1.4.2 numpy : 1.21.5 arviz : 0.12.1 matplotlib: 3.5.1 bambi : 0.7.1 Watermark: 2.3.0 
Model 1 $$\log{Y_i} = \beta_1 \text{Academic}_i + \beta_2 \text{General}_i + \beta_3 \text{Vocational}_i + \beta_4 \text{Math_std}_i$$ Model 2$$\log{Y_i} = \beta_1 \text{Academic}_i + \beta_2 \text{General}_i + \beta_3 \text{Vocational}_i + \beta_4 \text{Math_std}_i + \beta_5 \text{General}_i \cdot \text{Math_std}_i + \beta_6 \text{Vocational}_i \cdot \text{Math_std}_i$$In both cases we have the following dummy variables$$\text{Academic}_i = \left\{ \begin{array}{ll} 1 & \textrm{if student is under Academic program} \\ 0 & \textrm{other case} \end{array}\right.$$$$\text{General}_i = \left\{ \begin{array}{ll} 1 & \textrm{if student is under General program} \\ 0 & \textrm{other case} \end{array}\right.$$$$\text{Vocational}_i = \left\{ \begin{array}{ll} 1 & \textrm{if student is under Vocational program} \\ 0 & \textrm{other case} \end{array}\right.$$and $Y$ represents the days of absence.So, for example, the first model for a student under the Vocational program reduces to$$\log{Y_i} = \beta_3 + \beta_4 \text{Math_std}_i$$And one last thing to note is we've decided not to inclide an intercept term, that's why you don't see any $\beta_0$ above. This choice allows us to represent the effect of each program directly with $\beta_1$, $\beta_2$, and $\beta_3$. Model fitIt's very easy to fit these models with Bambi. We just pass a formula describing the terms in the model and Bambi will know how to handle each of them correctly. The `0` on the right hand side of `~` simply means we don't want to have the intercept term that is added by default. `scale(math)` tells Bambi we want to use standardize `math` before being included in the model. By default, Bambi uses a log link for negative binomial GLMs. We'll stick to this default here. Model 1 ###Code model_additive = bmb.Model("daysabs ~ 0 + prog + scale(math)", data, family="negativebinomial") idata_additive = model_additive.fit() ###Output Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... 
Multiprocess sampling (2 chains in 2 jobs) NUTS: [daysabs_alpha, scale(math), prog] ###Markdown Model 2For this second model we just add `prog:scale(math)` to indicate the interaction. A shorthand would be to use `y ~ 0 + prog*scale(math)`, which uses the **full interaction** operator. In other words, it just means we want to include the interaction between `prog` and `scale(math)` as well as their main effects. ###Code model_interaction = bmb.Model("daysabs ~ 0 + prog + scale(math) + prog:scale(math)", data, family="negativebinomial") idata_interaction = model_interaction.fit() ###Output Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (2 chains in 2 jobs) NUTS: [daysabs_alpha, prog:scale(math), scale(math), prog] ###Markdown Explore models The first thing we do is calling `az.summary()`. Here we pass the `InferenceData` object the `.fit()` returned. This prints information about the marginal posteriors for each parameter in the model as well as convergence diagnostics. ###Code az.summary(idata_additive) az.summary(idata_interaction) ###Output _____no_output_____ ###Markdown The information in the two tables above can be visualized in a more concise manner using a forest plot. ArviZ provides us with `plot_forest()`. There we simply pass a list containing the `InferenceData` objects of the models we want to compare. ###Code az.plot_forest( [idata_additive, idata_interaction], model_names=["Additive", "Interaction"], var_names=["prog", "scale(math)"], combined=True, figsize=(8, 4) ); ###Output _____no_output_____ ###Markdown One of the first things one can note when seeing this plot is the similarity between the marginal posteriors. Maybe one can conclude that the variability of the marginal posterior of `scale(math)` is slightly lower in the model that considers the interaction, but the difference is not significant. 
We can also make conclusions about the association between the program and the math score with the days of absence. First, we see the posterior for the Vocational group is to the left of the posterior for the two other programs, meaning it is associated with fewer absences (as we have seen when first exploring our data). There also seems to be a difference between General and Academic, where we may conclude the students in the General group tend to miss more classes.In addition, the marginal posterior for `math` shows negative values in both cases. This means that students with higher math scores tend to miss fewer classes. Below, we see a forest plot with the posteriors for the coefficients of the interaction effects. Both of them overlap with 0, which means the data does not give much evidence to support there is an interaction effect between program and math score (i.e., the association between math and days of absence is similar for all the programs). ###Code az.plot_forest(idata_interaction, var_names=["prog:scale(math)"], combined=True, figsize=(8, 4)) plt.axvline(0); ###Output _____no_output_____ ###Markdown Plot predicted mean responseWe finish this example showing how we can get predictions for new data and plot the mean response for each program together with confidence intervals. ###Code math_score = np.arange(1, 100) # This function takes a model and an InferenceData object. # It returns of length 3 with predictions for each type of program. 
def predict(model, idata): predictions = [] for program in programs: new_data = pd.DataFrame({"math": math_score, "prog": [program] * len(math_score)}) new_idata = model.predict( idata, data=new_data, inplace=False ) prediction = new_idata.posterior.stack(sample=["chain", "draw"])["daysabs_mean"].values predictions.append(prediction) return predictions prediction_additive = predict(model_additive, idata_additive) prediction_interaction = predict(model_interaction, idata_interaction) mu_additive = [prediction.mean(1) for prediction in prediction_additive] mu_interaction = [prediction.mean(1) for prediction in prediction_interaction] fig, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize = (10, 4)) for idx, program in enumerate(programs): ax[0].plot(math_score, mu_additive[idx], label=f"{program}", color=f"C{idx}", lw=2) az.plot_hdi(math_score, prediction_additive[idx].T, color=f"C{idx}", ax=ax[0]) ax[1].plot(math_score, mu_interaction[idx], label=f"{program}", color=f"C{idx}", lw=2) az.plot_hdi(math_score, prediction_interaction[idx].T, color=f"C{idx}", ax=ax[1]) ax[0].set_title("Additive"); ax[1].set_title("Interaction"); ax[0].set_xlabel("Math score") ax[1].set_xlabel("Math score") ax[0].set_ylim(0, 25) ax[0].legend(loc="upper right"); ###Output _____no_output_____ ###Markdown As we can see in this plot, the interval for the mean response for the Vocational program does not overlap with the interval for the other two groups, representing the group of students who miss fewer classes. On the right panel we can also see that including interaction terms does not change the slopes significantly because the posterior distributions of these coefficients have a substantial overlap with 0. If you've made it to the end of this notebook and you're still curious about what else you can do with these two models, you're invited to use `az.compare()` to compare the fit of the two models. What do you expect before seeing the plot? Why? 
Is there anything else you could do to improve the fit of the model?Also, if you're still curious about what this model would have looked like with the Poisson likelihood, you just need to replace `family="negativebinomial"` with `family="poisson"` and then you're ready to compare results! ###Code %load_ext watermark %watermark -n -u -v -iv -w ###Output Last updated: Tue Jul 27 2021 Python implementation: CPython Python version : 3.8.5 IPython version : 7.18.1 bambi : 0.5.0 numpy : 1.20.1 matplotlib: 3.3.3 pandas : 1.2.2 arviz : 0.11.2 json : 2.0.9 Watermark: 2.1.0 ###Markdown Negative Binomial Regression (Students absence example) Negative binomial distribution review I always experience some kind of confusion when looking at the negative binomial distribution after a while of not working with it. There are so many different definitions that I usually need to read everything more than once. The definition I've first learned, and the one I like the most, says as follows: The negative binomial distribution is the distribution of a random variable that is defined as the number of independent Bernoulli trials until the k-th "success". In short, we repeat a Bernoulli experiment until we observe k successes and record the number of trials it required.$$Y \sim \text{NB}(k, p)$$where $0 \le p \le 1$ is the probability of success in each Bernoulli trial, $k > 0$, usually integer, and $y \in \{k, k + 1, \cdots\}$The probability mass function (pmf) is $$p(y | k, p)= \binom{y - 1}{y-k}(1 -p)^{y - k}p^k$$If you, like me, find it hard to remember whether $y$ starts at $0$, $1$, or $k$, try to think twice about the definition of the variable. But how? First, recall we aim to have $k$ successes. And success is one of the two possible outcomes of a trial, so the number of trials can never be smaller than the number of successes. Thus, we can be confident to say that $y \ge k$. But this is not the only way of defining the negative binomial distribution, there are plenty of options! 
One of the most interesting, and the one you see in [PyMC3](https://docs.pymc.io/api/distributions/discrete.html#pymc3.distributions.discrete.NegativeBinomial), the library Bambi uses for the backend, is as a continuous mixture. The negative binomial distribution describes a Poisson random variable whose rate is itself a random variable (not a fixed constant!) following a gamma distribution. In other words, conditional on a gamma-distributed rate, the variable $Y$ has a Poisson distribution with that rate; marginally, its mean is $\mu$.

Under this alternative definition, the pmf is

$$\displaystyle p(y | \mu, \alpha) = \binom{y + \alpha - 1}{y} \left(\frac{\alpha}{\mu + \alpha}\right)^\alpha\left(\frac{\mu}{\mu + \alpha}\right)^y$$

where $\mu$ is the mean of the distribution and $\alpha$ is the shape parameter of the mixing gamma distribution.
###Code
import warnings

import arviz as az
import bambi as bmb
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.stats import nbinom

az.style.use("arviz-darkgrid")
warnings.simplefilter(action='ignore', category=FutureWarning)
###Output
_____no_output_____
###Markdown
In SciPy, the definition of the negative binomial distribution differs a little from the one in our introduction. They define $Y$ = number of failures until $k$ successes, and then $y$ starts at 0. In the following plot, we have the probability of observing $y$ failures before we see $k=3$ successes.
###Code
y = np.arange(0, 30)
k = 3
p1 = 0.5
p2 = 0.3

fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True)

ax[0].bar(y, nbinom.pmf(y, k, p1))
ax[0].set_xticks(np.linspace(0, 30, num=11))
ax[0].set_title(f"k = {k}, p = {p1}")

ax[1].bar(y, nbinom.pmf(y, k, p2))
ax[1].set_xticks(np.linspace(0, 30, num=11))
ax[1].set_title(f"k = {k}, p = {p2}")

fig.suptitle("Y = Number of failures until k successes", fontsize=16);
###Output
_____no_output_____
###Markdown
For example, when $p=0.5$, the probability of seeing $y=0$ failures before 3 successes (or in other words, the probability of having 3 successes out of 3 trials) is 0.125, and the probability of seeing $y=3$ failures before 3 successes is 0.156.
###Code
print(nbinom.pmf(y, k, p1)[0])
print(nbinom.pmf(y, k, p1)[3])
###Output
0.12500000000000003
0.15625000000000003
###Markdown
Finally, if one wants to show this probability mass function as if we are following the first definition of the negative binomial distribution we introduced, we just need to shift the whole thing to the right by adding $k$ to the $y$ values.
###Code
fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True)

ax[0].bar(y + k, nbinom.pmf(y, k, p1))
ax[0].set_xticks(np.linspace(3, 30, num=10))
ax[0].set_title(f"k = {k}, p = {p1}")

ax[1].bar(y + k, nbinom.pmf(y, k, p2))
ax[1].set_xticks(np.linspace(3, 30, num=10))
ax[1].set_title(f"k = {k}, p = {p2}")

fig.suptitle("Y = Number of trials until k successes", fontsize=16);
###Output
_____no_output_____
###Markdown
Negative binomial in GLM

The negative binomial distribution belongs to the exponential family, and the canonical link function is

$$g(\mu_i) = \log\left(\frac{\mu_i}{k + \mu_i}\right) = -\log\left(\frac{k}{\mu_i} + 1\right)$$

but it is difficult to interpret. The log link is usually preferred because of the analogy with the Poisson model, and it also tends to give better results.
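The continuous-mixture definition above is easy to check numerically: draw a rate from a gamma distribution with mean $\mu$, then draw a Poisson count with that rate, and the resulting counts follow the negative binomial pmf. A minimal sketch (the parameter values are arbitrary choices for illustration):

```python
import math

import numpy as np

rng = np.random.default_rng(1234)
mu, alpha = 4.0, 2  # marginal mean and gamma shape; arbitrary illustrative values

# Gamma-Poisson mixture: rate ~ Gamma(shape=alpha, mean=mu), Y | rate ~ Poisson(rate)
rates = rng.gamma(shape=alpha, scale=mu / alpha, size=200_000)
samples = rng.poisson(rates)

# Marginally, Y ~ NB with mean mu, variance mu + mu**2 / alpha,
# and P(Y = y) follows the pmf above with p = alpha / (alpha + mu)
p = alpha / (alpha + mu)
pmf_3 = math.comb(3 + alpha - 1, 3) * p**alpha * (1 - p) ** 3

print(samples.mean())         # close to 4.0
print(samples.var())          # close to 4 + 16/2 = 12
print((samples == 3).mean())  # close to pmf_3 (about 0.13)
```

Note that the simulated variance is well above the mean: this overdispersion is exactly what the negative binomial buys us over a plain Poisson.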
Load and explore Students data

This example is based on this [UCLA example](https://stats.idre.ucla.edu/r/dae/negative-binomial-regression/).

School administrators study the attendance behavior of high school juniors at two schools. Predictors of the **number of days of absence** include the **type of program** in which the student is enrolled and a **standardized test in math**. We have attendance data on 314 high school juniors.

The variables of interest in the dataset are

* daysabs: The number of days of absence. It is our response variable.
* prog: The type of program. Can be one of 'General', 'Academic', or 'Vocational'.
* math: Score in a standardized math test.

###Code
data = pd.read_stata("https://stats.idre.ucla.edu/stat/stata/dae/nb_data.dta")
data.head()
###Output
_____no_output_____
###Markdown
We assign categories to the values 1, 2, and 3 of our `"prog"` variable.
###Code
data["prog"] = data["prog"].map({1: "General", 2: "Academic", 3: "Vocational"})
data.head()
###Output
_____no_output_____
###Markdown
The Academic program is the most popular program (167/314) and General is the least popular one (40/314).
###Code
data["prog"].value_counts()
###Output
_____no_output_____
###Markdown
Let's explore the distributions of math score and days of absence for each of the three programs listed above. The vertical lines indicate the mean values.
###Code
fig, ax = plt.subplots(3, 2, figsize=(8, 6), sharex="col")

programs = list(data["prog"].unique())
programs.sort()

for idx, program in enumerate(programs):
    # Histogram
    ax[idx, 0].hist(data[data["prog"] == program]["math"], edgecolor='black', alpha=0.9)
    ax[idx, 0].axvline(data[data["prog"] == program]["math"].mean(), color="C1")

    # Barplot
    days = data[data["prog"] == program]["daysabs"]
    days_mean = days.mean()
    days_counts = days.value_counts()
    values = list(days_counts.index)
    count = days_counts.values
    ax[idx, 1].bar(values, count, edgecolor='black', alpha=0.9)
    ax[idx, 1].axvline(days_mean, color="C1")

    # Titles
    ax[idx, 0].set_title(program)
    ax[idx, 1].set_title(program)

plt.setp(ax[-1, 0], xlabel="Math score")
plt.setp(ax[-1, 1], xlabel="Days of absence");
###Output
_____no_output_____
###Markdown
The first impression we have is that the distribution of math scores is not equal for any of the programs. It looks right-skewed for students under the Academic program, left-skewed for students under the Vocational program, and roughly uniform for students in the General program (although there's a drop in the highest values). Clearly, those in the Vocational program have the highest mean math score. On the other hand, the distribution of the days of absence is right-skewed in all cases. Students in the General program present the highest mean absence, while the Vocational group is the one that misses fewer classes on average.

Models

We are interested in measuring the association between the type of the program and the math score with the days of absence. It's also of interest to see if the association between math score and days of absence is different in each type of program. In order to answer our questions, we are going to fit and compare two models. The first model uses the type of the program and the math score as predictors. The second model also includes the interaction between these two variables.
The score in the math test is going to be standardized in both cases to make things easier for the sampler and save some seconds. A good idea to follow along is to run these models without scaling `math` and compare how long they take to fit.

We are going to use a negative binomial likelihood to model the days of absence. But let's stop here and think about why we use this likelihood. Earlier, we said that the negative binomial distribution arises when our variable represents the number of trials until we get $k$ successes. However, here the number of trials is fixed, i.e. the number of school days in a given year is not a random variable. So if we stick to the definition, we could think of two alternative views for this problem:

* Each of the $n$ days is a trial, and we record whether the student is absent ($y=1$) or not ($y=0$). This corresponds to a binary regression setting, where we could think of logistic regression or something alike. A problem here is that we have the sum of $y$ for a student, but not the $n$.
* The whole school year represents the space where events occur and we count how many absences we see in that space for each student. This gives us a Poisson regression setting (count of an event in a given space or time).

We also know that when $n$ is large and $p$ is small, the Binomial distribution can be approximated with a Poisson distribution with $\lambda = n \cdot p$. We don't know exactly $n$ in this scenario, but we know it is around 180, and we do know that $p$ is small because you can't skip classes all the time. So both modeling approaches should give similar results.

But then, why negative binomial? Can't we just use a Poisson likelihood? Yes, we can. However, using a Poisson likelihood implies that the mean is equal to the variance, and that is usually an unrealistic assumption. If it turns out the variance is either substantially smaller or greater than the mean, the Poisson regression model results in a poor fit.
Alternatively, if we use a negative binomial likelihood, the variance is not forced to be equal to the mean, there's more flexibility to handle a given dataset, and consequently, the fit tends to be better.

Model 1

$$\log{Y_i} = \beta_1 \text{Academic}_i + \beta_2 \text{General}_i + \beta_3 \text{Vocational}_i + \beta_4 \text{Math_std}_i$$

Model 2

$$\log{Y_i} = \beta_1 \text{Academic}_i + \beta_2 \text{General}_i + \beta_3 \text{Vocational}_i + \beta_4 \text{Math_std}_i + \beta_5 \text{General}_i \cdot \text{Math_std}_i + \beta_6 \text{Vocational}_i \cdot \text{Math_std}_i$$

In both cases we have the following dummy variables

$$\text{Academic}_i = \left\{ \begin{array}{ll} 1 & \textrm{if student is under Academic program} \\ 0 & \textrm{other case} \end{array}\right.$$

$$\text{General}_i = \left\{ \begin{array}{ll} 1 & \textrm{if student is under General program} \\ 0 & \textrm{other case} \end{array}\right.$$

$$\text{Vocational}_i = \left\{ \begin{array}{ll} 1 & \textrm{if student is under Vocational program} \\ 0 & \textrm{other case} \end{array}\right.$$

and $Y$ represents the days of absence.

So, for example, the first model for a student under the Vocational program reduces to

$$\log{Y_i} = \beta_3 + \beta_4 \text{Math_std}_i$$

And one last thing to note is that we've decided not to include an intercept term; that's why you don't see any $\beta_0$ above. This choice allows us to represent the effect of each program directly with $\beta_1$, $\beta_2$, and $\beta_3$.

Model fit

It's very easy to fit these models with Bambi. We just pass a formula describing the terms in the model and Bambi will know how to handle each of them correctly. The `0` on the right-hand side of `~` simply means we don't want to have the intercept term that is added by default. `scale(math)` tells Bambi we want to standardize `math` before it is included in the model. By default, Bambi uses a log link for negative binomial GLMs. We'll stick to this default here.
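To make the dummy coding above concrete, here is a rough sketch of the kind of design matrix implied by the intercept-free formula. The toy values are made up, and the exact column ordering and scaling conventions Bambi uses internally may differ:

```python
import pandas as pd

toy = pd.DataFrame({
    "prog": ["General", "Academic", "Vocational", "Academic"],
    "math": [40.0, 55.0, 63.0, 70.0],
})

# One 0/1 column per program (cell-means coding, no intercept), matching beta_1..beta_3
X = pd.get_dummies(toy["prog"]).astype(int)

# Standardized math score, matching the scale(math) term (beta_4)
X["math_std"] = (toy["math"] - toy["math"].mean()) / toy["math"].std()

print(X)
```

Each row has exactly one 1 among the program columns, which is why every $\beta_1, \beta_2, \beta_3$ acts as a group-specific intercept.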
Model 1
###Code
model_additive = bmb.Model("daysabs ~ 0 + prog + scale(math)", data, family="negativebinomial")
idata_additive = model_additive.fit()
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [daysabs_alpha, scale(math), prog]
###Markdown
Model 2

For this second model we just add `prog:scale(math)` to indicate the interaction. A shorthand would be to use `y ~ 0 + prog*scale(math)`, which uses the **full interaction** operator. In other words, it just means we want to include the interaction between `prog` and `scale(math)` as well as their main effects.
###Code
model_interaction = bmb.Model("daysabs ~ 0 + prog + scale(math) + prog:scale(math)", data, family="negativebinomial")
idata_interaction = model_interaction.fit()
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [daysabs_alpha, prog:scale(math), scale(math), prog]
###Markdown
Explore models

The first thing we do is call `az.summary()`. Here we pass the `InferenceData` object that `.fit()` returned. This prints information about the marginal posteriors for each parameter in the model, as well as convergence diagnostics.
###Code
az.summary(idata_additive)
az.summary(idata_interaction)
###Output
_____no_output_____
###Markdown
The information in the two tables above can be visualized in a more concise manner using a forest plot. ArviZ provides us with `plot_forest()`. There we simply pass a list containing the `InferenceData` objects of the models we want to compare.
###Code
az.plot_forest(
    [idata_additive, idata_interaction],
    model_names=["Additive", "Interaction"],
    var_names=["prog", "scale(math)"],
    combined=True,
    figsize=(8, 4)
);
###Output
_____no_output_____
###Markdown
One of the first things one can note when seeing this plot is the similarity between the marginal posteriors.
Maybe one can conclude that the variability of the marginal posterior of `scale(math)` is slightly lower in the model that considers the interaction, but the difference is not significant. We can also make conclusions about the association between the program and the math score with the days of absence. First, we see the posterior for the Vocational group is to the left of the posterior for the two other programs, meaning it is associated with fewer absences (as we have seen when first exploring our data). There also seems to be a difference between General and Academic, where we may conclude the students in the General group tend to miss more classes.

In addition, the marginal posterior for `math` shows negative values in both cases. This means that students with higher math scores tend to miss fewer classes.

Below, we see a forest plot with the posteriors for the coefficients of the interaction effects. Both of them overlap with 0, which means the data does not give much evidence to support there is an interaction effect between program and math score (i.e., the association between math and days of absence is similar for all the programs).
###Code
az.plot_forest(idata_interaction, var_names=["prog:scale(math)"], combined=True, figsize=(8, 4))
plt.axvline(0);
###Output
_____no_output_____
###Markdown
Plot predicted mean response

We finish this example showing how we can get predictions for new data and plot the mean response for each program together with credible intervals.
###Code
math_score = np.arange(1, 100)

# This function takes a model and an InferenceData object.
# It returns a list of length 3 with predictions for each type of program.
def predict(model, idata):
    predictions = []
    for program in programs:
        new_data = pd.DataFrame({"math": math_score, "prog": [program] * len(math_score)})
        new_idata = model.predict(idata, data=new_data, inplace=False)
        prediction = new_idata.posterior.stack(sample=["chain", "draw"])["daysabs_mean"].values
        predictions.append(prediction)
    return predictions

prediction_additive = predict(model_additive, idata_additive)
prediction_interaction = predict(model_interaction, idata_interaction)

mu_additive = [prediction.mean(1) for prediction in prediction_additive]
mu_interaction = [prediction.mean(1) for prediction in prediction_interaction]

fig, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(10, 4))

for idx, program in enumerate(programs):
    ax[0].plot(math_score, mu_additive[idx], label=f"{program}", color=f"C{idx}", lw=2)
    az.plot_hdi(math_score, prediction_additive[idx].T, color=f"C{idx}", ax=ax[0])
    ax[1].plot(math_score, mu_interaction[idx], label=f"{program}", color=f"C{idx}", lw=2)
    az.plot_hdi(math_score, prediction_interaction[idx].T, color=f"C{idx}", ax=ax[1])

ax[0].set_title("Additive")
ax[1].set_title("Interaction")
ax[0].set_xlabel("Math score")
ax[1].set_xlabel("Math score")
ax[0].set_ylim(0, 25)
ax[0].legend(loc="upper right");
###Output
_____no_output_____
###Markdown
As we can see in this plot, the interval for the mean response for the Vocational program does not overlap with the intervals for the other two groups, representing the group of students who miss fewer classes. On the right panel we can also see that including interaction terms does not change the slopes significantly, because the posterior distributions of these coefficients have a substantial overlap with 0.

If you've made it to the end of this notebook and you're still curious about what else you can do with these two models, you're invited to use `az.compare()` to compare the fit of the two models. What do you expect before seeing the plot? Why?
Is there anything else you could do to improve the fit of the model?Also, if you're still curious about what this model would have looked like with the Poisson likelihood, you just need to replace `family="negativebinomial"` with `family="poisson"` and then you're ready to compare results! ###Code %load_ext watermark %watermark -n -u -v -iv -w ###Output Last updated: Tue Jul 27 2021 Python implementation: CPython Python version : 3.8.5 IPython version : 7.18.1 bambi : 0.5.0 numpy : 1.20.1 matplotlib: 3.3.3 pandas : 1.2.2 arviz : 0.11.2 json : 2.0.9 Watermark: 2.1.0 ###Markdown Negative Binomial Regression Negative binomial distribution review I always experience some kind of confusion when looking at the negative binomial distribution after a while of not working with it. There are so many different definitions that I usually need to read everything more than once. The definition I've first learned, and the one I like the most, says as follows: The negative binomial distribution is the distribution of a random variable that is defined as the number of independent Bernoulli trials until the k-th "success". In short, we repeat a Bernoulli experiment until we observe k successes and record the number of trials it required.$$Y \sim \text{NB}(k, p)$$where $0 \le p \le 1$ is the probability of success in each Bernoulli trial, $k > 0$, usually integer, and $y \in \{k, k + 1, \cdots\}$The probability mass function (pmf) is $$p(y | k, p)= \binom{y - 1}{y-k}(1 -p)^{y - k}p^k$$If you, like me, find it hard to remember whether $y$ starts at $0$, $1$, or $k$, try to think twice about the definition of the variable. But how? First, recall we aim to have $k$ successes. And success is one of the two possible outcomes of a trial, so the number of trials can never be smaller than the number of successes. Thus, we can be confident to say that $y \ge k$. But this is not the only way of defining the negative binomial distribution, there are plenty of options! 
One of the most interesting, and the one you see in [PyMC3](https://docs.pymc.io/api/distributions/discrete.htmlpymc3.distributions.discrete.NegativeBinomial), the library we use in Bambi for the backend, is as a continuous mixture. The negative binomial distribution describes a Poisson random variable whose rate is also a random variable (not a fixed constant!) following a gamma distribution. Or in other words, conditional on a gamma-distributed variable $\mu$, the variable $Y$ has a Poisson distribution with mean $\mu$.Under this alternative definition, the pmf is$$\displaystyle p(y | k, \alpha) = \binom{y + \alpha - 1}{y} \left(\frac{\alpha}{\mu + \alpha}\right)^\alpha\left(\frac{\mu}{\mu + \alpha}\right)^y$$where $\mu$ is the parameter of the Poisson distribution (the mean, and variance too!) and $\alpha$ is the rate parameter of the gamma. ###Code import arviz as az import bambi as bmb import matplotlib.pyplot as plt import numpy as np import pandas as pd from scipy.stats import nbinom az.style.use("arviz-darkgrid") import warnings warnings.simplefilter(action='ignore', category=FutureWarning) ###Output _____no_output_____ ###Markdown In SciPy, the definition of the negative binomial distribution differs a little from the one in our introduction. They define $Y$ = Number of failures until k sucesses and then $y$ starts at 0. In the following plot, we have the probability of observing $y$ failures before we see $k=3$ successes. 
###Code y = np.arange(0, 30) k = 3 p1 = 0.5 p2 = 0.3 fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True) ax[0].bar(y, nbinom.pmf(y, k, p1)) ax[0].set_xticks(np.linspace(0, 30, num=11)) ax[0].set_title(f"k = {k}, p = {p1}") ax[1].bar(y, nbinom.pmf(y, k, p2)) ax[1].set_xticks(np.linspace(0, 30, num=11)) ax[1].set_title(f"k = {k}, p = {p2}") fig.suptitle("Y = Number of failures until k successes", fontsize=16); ###Output _____no_output_____ ###Markdown For example, when $p=0.5$, the probability of seeing $y=0$ failures before 3 successes (or in other words, the probability of having 3 successes out of 3 trials) is 0.125, and the probability of seeing $y=3$ failures before 3 successes is 0.156. ###Code print(nbinom.pmf(y, k, p1)[0]) print(nbinom.pmf(y, k, p1)[3]) ###Output 0.12500000000000003 0.15625000000000003 ###Markdown Finally, if one wants to show this probability mass function as if we are following the first definition of negative binomial distribution we introduced, we just need to shift the whole thing to the right by adding $k$ to the $y$ values. ###Code fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True) ax[0].bar(y + k, nbinom.pmf(y, k, p1)) ax[0].set_xticks(np.linspace(3, 30, num=10)) ax[0].set_title(f"k = {k}, p = {p1}") ax[1].bar(y + k, nbinom.pmf(y, k, p2)) ax[1].set_xticks(np.linspace(3, 30, num=10)) ax[1].set_title(f"k = {k}, p = {p2}") fig.suptitle("Y = Number of trials until k successes", fontsize=16); ###Output _____no_output_____ ###Markdown Negative binomial in GLM The negative binomial distribution belongs to the exponential family, and the canonical link function is $$g(\mu_i) = \log\left(\frac{\mu_i}{k + \mu_i}\right) = \log\left(\frac{k}{\mu_i} + 1\right)$$but it is difficult to interpret. The log link is usually preferred because of the analogy with Poisson model, and it also tends to give better results. 
Load and explore Students data This example is based on this [UCLA example](https://stats.idre.ucla.edu/r/dae/negative-binomial-regression/). School administrators study the attendance behavior of high school juniors at two schools. Predictors of the **number of days of absence** include the **type of program** in which the student is enrolled and a **standardized test in math**. We have attendance data on 314 high school juniors. The variables of interest in the dataset are * daysabs: The number of days of absence. It is our response variable. * prog: The type of program. Can be one of 'General', 'Academic', or 'Vocational'. * math: Score in a standardized math test. ###Code data = pd.read_stata("https://stats.idre.ucla.edu/stat/stata/dae/nb_data.dta") data.head() ###Output _____no_output_____ ###Markdown We assign categories to the values 1, 2, and 3 of our `"prog"` variable. ###Code data["prog"] = data["prog"].map({1: "General", 2: "Academic", 3: "Vocational"}) data.head() ###Output _____no_output_____ ###Markdown The Academic program is the most popular program (167/314) and General is the least popular one (40/314). ###Code data["prog"].value_counts() ###Output _____no_output_____
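A quick reason to reach for the negative binomial rather than the Poisson with data like this is overdispersion: for counts such as `daysabs`, within-group variances are typically much larger than the group means, while a Poisson model forces them to be equal. A sketch of the check on a small made-up frame (hypothetical rows; the real check would use the UCLA data loaded above):

```python
import pandas as pd

# hypothetical stand-in rows mimicking the prog/daysabs columns above
df = pd.DataFrame({
    "prog": ["General", "Academic", "Academic", "Vocational", "Academic", "General"],
    "daysabs": [10, 4, 35, 2, 0, 17],
})

# a group variance far above the group mean signals overdispersion
stats = df.groupby("prog")["daysabs"].agg(["mean", "var"])
```

On the real attendance data the same `groupby`/`agg` call shows variances well above the means in every program, which motivates the negative binomial family used below.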
Step3_SVM-ClassifierTo-LabelVideoData.ipynb
###Markdown Check the distribution of the true and false trials ###Code mu, sigma = 0, 0.1 # mean and standard deviation s = np.random.normal(mu, sigma, 1000) k2_test, p_test = sc.stats.normaltest(s, axis=0, nan_policy='omit') print("p = {:g}".format(p_test)) if p_test < 0.05: # null hypothesis - the distribution is normally distributed; less than alpha - reject null hypothesis print('This random distribution is not normally distributed') else: print('This random distribution is normally distributed') trueTrials = file.FramesInView[file.TrialStatus == 1] k2_true, p_true = sc.stats.normaltest(np.log(trueTrials), axis=0, nan_policy='omit') print("p = {:g}".format(p_true)) if p_true < 0.05: # null hypothesis - the distribution is normally distributed; less than alpha - reject null hypothesis print('the true trials are not normally distributed') else: print('The true trials are normally distributed') falseTrials = file.FramesInView[file.TrialStatus == 0] k2_false, p_false = sc.stats.normaltest(np.log(falseTrials), axis=0, nan_policy='omit') print("p = {:g}".format(p_false)) if p_false < 0.05: # null hypothesis - the distribution is normally distributed; less than alpha - reject null hypothesis print('the false trials are not normally distributed') else: print('The false trials are normally distributed') x = np.asarray(file.FramesInView) y = np.zeros(len(x)) data = np.transpose(np.array([x,y])) Manual_Label = np.asarray(file.TrialStatus) plt.scatter(data[:,0],data[:,1], c = Manual_Label) #see what the data looks like # build the linear classifier clf = svm.SVC(kernel = 'linear', C = 1.0) clf.fit(data,Manual_Label) w = clf.coef_[0] y0 = clf.intercept_ new_line = w[0]*data[:,0] - y0 new_line.shape # see what the classifier did to the labels - find a way to draw a line along the "point" and draw "margin" plt.hist(trueTrials, bins =10**np.linspace(0, 4, 40), color = 'lightyellow', label = 'true trials', zorder=0) plt.hist(falseTrials, bins =10**np.linspace(0, 4, 40), color 
= 'mediumpurple', alpha=0.35, label = 'false trials', zorder=5) annotation = [] for x,_ in data: YY = clf.predict([[x,0]])[0] annotation.append(YY) plt.scatter(data[:,0],data[:,1]+10, c = annotation, alpha=0.3, edgecolors='none', zorder=10, label = 'post-classification') # plt.plot(new_line) plt.xscale("log") plt.yscale('linear') plt.xlabel('Trial length (in frame Number)') plt.title('Using a Classifier to identify true trials') plt.legend() # plt.savefig(r'C:\Users\Daniellab\Desktop\Light_level_videos_c-10\Data\Step3\Annotation\Figuers_3.svg') plt.tight_layout() # run the predictor for all datasets and annotate them direc = r'C:\Users\Daniellab\Desktop\Light_level_videos_second_batch\Data\Step2_Tanvi_Method' new_path = r'C:\Users\Daniellab\Desktop\Light_level_videos_second_batch\Data\Step3' file = [file for file in os.listdir(direc) if file.endswith('.csv')] # test = file[0] for item in file: print(item) df = pd.read_csv(direc + '/' + item) label = [] # run the classifier on this for xx in df.Frames_In_View: YY = clf.predict([[xx,0]])[0] label.append(YY) df1 = pd.DataFrame({'label': label}) new_df = pd.concat([df, df1], axis = 1) # new_df.to_csv(new_path + '/' + item[:-4] + '_labeled.csv') ###Output L0.1_c-3_m10_MothInOut.csv L0.1_c-3_m12_MothInOut.csv L0.1_c-3_m20_MothInOut.csv L0.1_c-3_m21_MothInOut.csv L0.1_c-3_m22_MothInOut.csv L0.1_c-3_m23_MothInOut.csv L0.1_c-3_m24_MothInOut.csv L0.1_c-3_m25_MothInOut.csv L0.1_c-3_m27_MothInOut.csv L0.1_c-3_m2_MothInOut.csv L0.1_c-3_m32_MothInOut.csv L0.1_c-3_m34_MothInOut.csv L0.1_c-3_m37_MothInOut.csv L0.1_c-3_m38_MothInOut.csv L0.1_c-3_m39_MothInOut.csv L0.1_c-3_m40_MothInOut.csv L0.1_c-3_m41_MothInOut.csv L0.1_c-3_m43_MothInOut.csv L0.1_c-3_m44_MothInOut.csv L0.1_c-3_m45_MothInOut.csv L0.1_c-3_m46_MothInOut.csv L0.1_c-3_m47_MothInOut.csv L0.1_c-3_m48_MothInOut.csv L0.1_c-3_m49_MothInOut.csv L0.1_c-3_m50_MothInOut.csv L0.1_c-3_m54_MothInOut.csv L0.1_c-3_m57_MothInOut.csv L0.1_c-3_m5_MothInOut.csv L0.1_c-3_m8_MothInOut.csv 
L50_c-3_m10_MothInOut.csv L50_c-3_m12_MothInOut.csv L50_c-3_m13_MothInOut.csv L50_c-3_m14_MothInOut.csv L50_c-3_m15_MothInOut.csv L50_c-3_m21_MothInOut.csv L50_c-3_m22_MothInOut.csv L50_c-3_m24_MothInOut.csv L50_c-3_m25_MothInOut.csv L50_c-3_m26_MothInOut.csv L50_c-3_m2_MothInOut.csv L50_c-3_m30_MothInOut.csv L50_c-3_m32_MothInOut.csv L50_c-3_m33_MothInOut.csv L50_c-3_m34_MothInOut.csv L50_c-3_m35_MothInOut.csv L50_c-3_m37_MothInOut.csv L50_c-3_m38_MothInOut.csv L50_c-3_m39_MothInOut.csv L50_c-3_m45_MothInOut.csv L50_c-3_m49_MothInOut.csv L50_c-3_m50_MothInOut.csv L50_c-3_m51_MothInOut.csv L50_c-3_m58_MothInOut.csv L50_c-3_m6_MothInOut.csv L50_c-3_m9_MothInOut.csv
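As an aside, the per-row `clf.predict([[xx, 0]])` loops above can be replaced by a single vectorized call, since `predict` accepts a 2-D array. A sketch with a small stand-in classifier (the fitted `clf` and the CSV files above are not reproduced here, so the training rows are hypothetical):

```python
import numpy as np
from sklearn import svm

# stand-in training data: frame counts in column 0, dummy zeros in column 1,
# mimicking the (FramesInView, 0) layout used above
X_train = np.array([[5.0, 0.0], [8.0, 0.0], [300.0, 0.0], [500.0, 0.0]])
y_train = np.array([0, 0, 1, 1])  # hypothetical manual labels

clf = svm.SVC(kernel='linear', C=1.0)
clf.fit(X_train, y_train)

frames = np.array([3.0, 10.0, 250.0, 900.0])
# one predict call labels every trial at once, no Python loop needed
labels = clf.predict(np.column_stack([frames, np.zeros_like(frames)]))
```

The same pattern would label a whole `Frames_In_View` column in one call per file.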
lectures/ng/Lecture-07-Support-Vector-Machines.ipynb
###Markdown Versions and Acknowledgements ###Code import sys sys.path.append("../../src") # add our class modules to the system PYTHON_PATH from ml_python_class.custom_funcs import version_information version_information() ###Output Module Versions -------------------- ------------------------------------------------------------ matplotlib: ['3.3.0'] numpy: ['1.18.5'] pandas: ['1.0.5'] ###Markdown This week, you will be learning about the support vector machine (SVM) algorithm. SVMs are considered by many to be the most powerful 'black box' learning algorithm, and by posing a cleverly-chosen optimization objective, one of the most widely used learning algorithms today. Video W7 01: Optimization Objective[YouTube Video Link](https://www.youtube.com/watch?v=r3uBEDCqIN0&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW&index=71)One way of understanding SVM is that it is a simple modification of logistic regression (just as logistic regression is a simple extension of linear regression, and neural networks are a way of extending the concepts of logistic regression, etc.) Recall that the basic modification we made for logistic regression was to pass our hypothesis through a logistic (or sigmoid) function. 
This caused the output from our hypothesis to always be bound to a value in the range from 0.0 to 1.0:$$h_{\theta}(x) = \frac{1}{1 + e^{-\theta^T x}}$$ ###Code def sigmoid(z): return 1.0 / (1.0 + np.e**-z) plt.figure() x = np.linspace(-5.0, 5.0) z = sigmoid(x) plt.plot(x, z) plt.axis([-5, 5, 0, 1]) plt.grid() plt.xlabel('$z = \\theta^Tx$', fontsize=20) plt.text(-1, 0.85, '$h_{\\theta}(x) = g(z)$', fontsize=20); ###Output _____no_output_____ ###Markdown Recall that for a single input/output pair $(x, y)$, the objective or cost function for logistic regression has the following form:$$-y \;\; \textrm{log} \frac{1}{1 + e^{-\theta^T x}} - (1 - y) \;\; \textrm{log} \; \big(1 - \frac{1}{1 + e^{-\theta^T x}}\big)$$This expression will only involve either the left or right side, depending on whether $y = 1$ or $y = 0$ (recall that in logistic regression, we are performing a classification, where each training example is either in the class, or it is not in the class). So for example, if we want $y = 1$, then we want $\theta^Tx \gg 0$. The curve for the function when $y = 1$ looks like the following: ###Code z = np.linspace(-3.0, 3.0) y = -np.log( 1.0 / (1.0 + np.exp(-z)) ) plt.figure() plt.plot(z, y) plt.xlabel('$z$', fontsize=20) plt.grid() plt.text(1, 3, '$-log \\; \\frac{1}{1 + e^{-z}}$', fontsize=20); ###Output _____no_output_____ ###Markdown Likewise, for the case where $y = 0$ then we want $\theta^T x \ll 0$. 
The curve for the objective function when $y = 0$ similarly looks like the following: ###Code z = np.linspace(-3.0, 3.0) y = -np.log( 1.0 - (1.0 / (1.0 + np.exp(-z))) ) plt.figure() plt.plot(z, y) plt.xlabel('$z$', fontsize=20) plt.grid() plt.text(-3, 3, '$-log \\; ( 1 - \\frac{1}{1 + e^{-z}} )$', fontsize=20); ###Output _____no_output_____ ###Markdown The full cost function we were trying to minimize, then, for logistic regression was:$$\frac{1}{m} \big[ \sum_{i=1}^{m} y^{(i)} \big( - \textrm{log} \; h_{\theta}(x^{(i)}) \big) + (1 - y^{(i)}) \big( - \textrm{log} \; (1 - h_{\theta}(x^{(i)})) \big) \big] + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2$$For the support vector machine, we change the terms relating to the hypothesis to functions $\textrm{cost}_1$ and $\textrm{cost}_0$:$$\frac{1}{m} \big[ \sum_{i=1}^{m} y^{(i)} \; \textrm{cost}_1( \theta^{T} x^{(i)} )+ (1 - y^{(i)}) \; \textrm{cost}_0 ( \theta^{T} x^{(i)} ) \big] + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2$$As described in the video, by convention in SVM we remove the division by $m$ and we parameterize the regularization and cost terms a bit differently. Usually you will see the objective function for SVM specified in this slightly different but equivalent form:$$\underset{\theta}{\textrm{min}} \; C\sum_{i=1}^{m} \big[ y^{(i)} \; \textrm{cost}_1( \theta^{T} x^{(i)} )+ (1 - y^{(i)}) \; \textrm{cost}_0 ( \theta^{T} x^{(i)} ) \big]+ \frac{1}{2} \sum_{j=1}^{n} \theta_j^2$$In this formulation of the objective function, the term C is being used as the regularization parameter. But now, the larger the value of C, the more emphasis that is placed on the cost terms (and the less that is placed on the regularization terms). 
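The minimization objective above translates almost directly into code. Here is a minimal sketch using the standard hinge-style choices for $\textrm{cost}_1$ and $\textrm{cost}_0$ (one common concrete choice; the videos leave their exact form unspecified until later), with the intercept $\theta_0$ left unregularized by the usual convention:

```python
import numpy as np

def cost_1(z):
    # penalty applied when y = 1: zero once z >= 1, linear below that
    return np.maximum(0.0, 1.0 - z)

def cost_0(z):
    # penalty applied when y = 0: zero once z <= -1, linear above that
    return np.maximum(0.0, 1.0 + z)

def svm_objective(theta, X, y, C):
    """C * sum of per-example costs + (1/2) * sum_j theta_j^2 for j >= 1."""
    z = X @ theta  # X includes a leading column of 1's for the intercept
    per_example = y * cost_1(z) + (1 - y) * cost_0(z)
    return C * per_example.sum() + 0.5 * np.sum(theta[1:] ** 2)

# tiny check: both examples sit outside the margin (z = 2 and z = -2),
# so only the regularization term contributes
theta = np.array([0.0, 1.0])
X = np.array([[1.0, 2.0], [1.0, -2.0]])
y = np.array([1, 0])
obj = svm_objective(theta, X, y, C=1.0)
```

Increasing `C` in this sketch scales up the per-example cost terms relative to the regularization term, matching the discussion above.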
Video W7 02: Large Margin Intuition[YouTube Video Link](https://www.youtube.com/watch?v=yjH3ZSPqLhU&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW&index=72)Because of the form of the $\textrm{cost}_0$ and $\textrm{cost}_1$ functions (which we haven't specified yet), these naturally favor cost functions that give wide margins between the hypothesis outputs for $y=0$ and $y=1$. Intuitively, as shown in the video, the objective function that we have defined will find decision boundaries that maximize the margin between the negative and positive examples. This is where the name large margin classifier comes from. The term `support vector` from the name for SVM also refers to the mathematical properties of these objective functions that try and maximize this margin between positive and negative examples. The $cost_0$ and $cost_1$ functions described in the video are basically the same idea as what are known as rectified linear units (ReLU) in neural networks. Here we give a linear activation response when the value is above (or below) some threshold, and 0 otherwise. The following figures compare a possible implementation of the discussed $cost_0$ and $cost_1$ functions to this type of threshold ReLU activation function. 
###Code def cost_0(z): return np.where(z > -1, z+1, 0) def cost_1(z): return np.where(z < 1, -z+1, 0) z = np.linspace(-3.0, 3.0) y = -np.log( 1.0 / (1.0 + np.exp(-z)) ) # logistic cost function, for y=1 plt.figure() plt.plot(z, y, label='logistic cost function $y=1$') plt.xlabel('$z$', fontsize=20) plt.grid() # cost_1 function, ReLU like y = cost_1(z) plt.plot(z, y, label='$cost_1$ RELU approximation for SVM') plt.legend(); z = np.linspace(-3.0, 3.0) y = -np.log( 1.0 - (1.0 / (1.0 + np.exp(-z))) ) # logistic cost function, for y=0 plt.figure() plt.plot(z, y, label='logistic cost function $y=0$') plt.xlabel('$z$', fontsize=20) plt.grid() # cost_0 function, ReLU like y = cost_0(z) plt.plot(z, y, label='$cost_0$ RELU approximation for SVM') plt.legend(); ###Output _____no_output_____ ###Markdown Video W7 03: Mathematics Behind Large Margin Intuition (Optional)[YouTube Video Link](https://www.youtube.com/watch?v=Jm49m7ey34o&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW&index=73)The nitty gritty of the mathematics behind how the SVM optimization finds large margin decision boundaries is not necessary for using SVM well. But at least watch the video to get a bit of a feel for what happens behind the scenes when creating an SVM and how it finds such decision boundaries given our definition of the cost function. Video W7 04: Kernels I[YouTube Video Link](https://www.youtube.com/watch?v=0Fg2U6LN3pg&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW&index=74)This video starts by giving a good explanation of what are known as gaussian feature kernels. When we looked at logistic regression, we did examine using nonlinear features to produce more complex decision boundaries. 
Kernel methods, used most commonly with SVM systems, allow us to create sets of nonlinear features, but in a more directed and less random way than simply using polynomial combinations of basic features. The [gaussian function](https://en.wikipedia.org/wiki/Gaussian_function) discussed in the video is related to gaussian or normal distributions that you may be familiar with (e.g. the standard 'bell curve' distribution). For a single feature, the gaussian function is usually specified in terms of $\mu$, the mean or location of the feature, and $\sigma^2$, the squared deviation of the feature. So for example, as given in the video, we can think of the similarity measure for a system that has a single feature, with a landmark at the point $\mu = 0$, as follows:$$f(x) = exp \Big(- \frac{(x - \mu)^2}{2\sigma^2} \Big)$$The expression $(x - \mu)^2$ is really just an expression of the distance from some input $x$ to the landmark. So when we only have a single feature, and our landmark is at the origin point 0 (e.g. $\mu = 0$), then we have: ###Code def gauss(x, mu, sigma): return np.exp(- (x - mu)**2.0 / (2 * sigma**2.0)) x = np.linspace(-3.0, 3.0) plt.figure() plt.plot(x, gauss(x, 0.0, 1.0), 'k-') plt.xlabel('$x_1$', fontsize=20) plt.ylabel('gaussian similarity function'); ###Output _____no_output_____ ###Markdown This is the basic gaussian distribution, with a mean of 0 and a standard deviation of 1. In the context of a gaussian kernel function, we will return a similarity of 1.0 for any feature that is exactly the same as our landmark ($\mu$ or 0 in this case), and we will return lesser values, eventually approaching 0, as we get a further distance from the landmark $\mu$ location of 0. In the videos, the linear algebra norm simply calculates the distance from an input $x$ when we have more than 1 feature. 
So for example, when we have 2 features, or 2 dimensional space, we need to visualize the gaussian function using a 3 dimensional plot, where we plot our two features $x_1$ and $x_2$ on two orthogonal axes, and plot the gaussian function on the 3rd orthogonal z axis. So for example, as shown in the video, if we have 2 features, and our landmark is located at the position where $x_1 = 3$ and $x_2 = 5$, e.g. $$l^{(1)} =\begin{bmatrix}3 \\5 \\\end{bmatrix}$$we will simply have a gaussian function in 2 dimensions (features) that has a value of 1.0 exactly at that $\mu$ location (in 2 dimensions), and falls away in the bell curve shape in both dimensions as a function of the $\sigma$ (the deviation value). So the gaussian similarity function written in the video is:$$f_1 = \mathrm{exp} \Big(- \frac{ \|x - l^{(1)}\|^2 }{2 \sigma^2} \Big)$$The top part of the fraction is simply calculating the distance between some point $x$ and the landmark location in a 2 or higher dimensional space (e.g. the sum of all of the differences for each individual dimension, then squaring this sum; in linear algebra this is simply the square of the norm of the difference of these two vectors). So for a two dimensional feature, the gaussian function can be plotted on a 3D plot in python as follows: ###Code # first plot as a contour map def gauss(x, mu, sigma): """A multi-dimensional version of the gaussian function. 
x and mu are n dimensional feature vectors, so we take the linear algebra norm of the difference and square this).""" from numpy.linalg import norm return np.exp(- norm(x - mu, axis=1)**2.0 / (2 * sigma**2.0)) # the landmark, I have been calling it mu mu = np.array([3, 5]) sigma = 1.0 # we create a mesh so we can plot our gaussian function in 3d x1_min, x1_max = -2.0, 8.0 x2_min, x2_max = 0.0, 10.0 h = 0.02 # step size in the mesh x1, x2 = np.meshgrid(np.arange(x1_min, x1_max, h), np.arange(x2_min, x2_max, h)) x = np.c_[x1.ravel(), x2.ravel()] Z = gauss(x, mu, sigma) Z = Z.reshape(x1.shape) # plot the 2 feature dimensional gaussian as a contour map plt.contourf(x1, x2, Z, cmap=plt.cm.jet, alpha=0.8) plt.colorbar() plt.xlabel('$x_1$', fontsize=20) plt.ylabel('$x_2$', fontsize=20); # now plot as a 3D surface plot from mpl_toolkits.mplot3d import Axes3D Z = Z.reshape(x1.shape) fig = plt.figure(figsize=(12,12)) ax = fig.gca(projection='3d') surf = ax.plot_surface(x1, x2, Z, cmap=plt.cm.jet) ax.set_xlabel('$x_1$', fontsize=20) ax.set_ylabel('$x_2$', fontsize=20); ###Output _____no_output_____ ###Markdown You should try changing the landmark location $\mu$ and the sigma value (which controls how fast the change in distance affects the value of the gaussian function) in the previous cell. As shown in the video (closer to the end), gaussian kernels allow for nonlinear decision boundaries. But unlike creating an (exponential) combination of polynomial features, we can simply pick an appropriate number of gaussian kernels that will likely produce a good enough decision boundary for our given set of data. As discussed in this video, a good way of thinking of the gaussian kernels is as landmarks that are chosen (we discuss how to choose the landmarks in the next video) and features are then simply similarity measures to the chosen set of landmarks. 
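The "features as similarities to landmarks" idea can be made concrete in a few lines. A sketch (the function name and example values are my own, not from the lecture) that maps each input to one gaussian-similarity feature per landmark:

```python
import numpy as np

def gaussian_features(X, landmarks, sigma=1.0):
    """Map each row of X to similarities f_1..f_k against k landmarks."""
    # squared euclidean distance between every input and every landmark
    d2 = ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma**2))

# one landmark at (3, 5): an input exactly on it maps to 1.0,
# a far-away input maps to nearly 0
X = np.array([[3.0, 5.0], [0.0, 0.0]])
landmarks = np.array([[3.0, 5.0]])
F = gaussian_features(X, landmarks)
```

The resulting matrix `F` (one row per input, one column per landmark) is what the SVM would then be trained on in place of the raw features.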
Video W7 05: Kernels II[YouTube Video Link](https://www.youtube.com/watch?v=P9Xjvr2JfOk&index=75&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW)In practice, landmarks for gaussian kernels are chosen by putting landmarks at each of the training example locations. Thus the number of landmarks will grow linearly with the size of our training set data (instead of being a combinatorial explosion in terms of the number of input features, as creating polynomial terms from the features tends to do). Video W7 06: Using an SVM[YouTube Video Link](https://www.youtube.com/watch?v=wtno4WSDTlY&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW&index=76)This video shows using SVM packages for octave/matlab. In this class we have been using Python, of course. There are many good implementations of SVM in Python. For example, the libsvm mentioned in the video is actually a language neutral implementation of svm, and there are extensions available to use libsvm in python:[libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/)As of the creation of this notebook, however (Fall 2015), I do recommend using the svm implementation in the scikit learn library. It is the most mature and has the best (most consistent) user interface. You will have to install scikit learn in your environment in order to use it. If you are using the enthought Python environment, it should have been installed for you. The documentation for the scikit learn svm library is [here](http://scikit-learn.org/stable/modules/svm.html). As discussed in this video, sometimes we want to do an SVM classification, but not use any complex kernels, e.g. a straightforward linear SVM classifier. If you want to do this, you should use the `SVC` in scikit learn with a linear kernel to do a linear SVM classifier. For example, recall that in assignment 03 we used logistic regression to classify exam score data (with a single class, admit or not admit) using a linear decision boundary. 
The data looked like this: ###Code data = pd.read_csv('../../data/assg-03-data.csv', names=['exam1', 'exam2', 'admitted']) x = data[['exam1', 'exam2']].values y = data.admitted.values m = y.size print((x.shape)) print((y.shape)) print(m) X = np.ones( (3, m) ) X[1:,:] = x.T # the second column contains the raw inputs X = X.T neg_indexes = np.where(y==0)[0] pos_indexes = np.where(y==1)[0] plt.plot(x[neg_indexes, 0], x[neg_indexes, 1], 'yo', label='Not admitted') plt.plot(x[pos_indexes, 0], x[pos_indexes, 1], 'r^', label='Admitted') plt.title('Admit/No Admit as a function of Exam Scores') plt.xlabel('Exam 1 score') plt.ylabel('Exam 2 score') plt.legend(); ###Output (100, 2) (100,) 100 ###Markdown Before we do a linear SVM, lets use the logistic regression functions from scikit learn to perform a logistic regression. The scikit learn library uses C rather than $\lambda$ to specify the amount of regularization. In our assignment we didn't use any regularization. We can get similar theta parameters by using a large C, which will do the optimization using only the cost, without much weight for the regularization. But try it with more regularization (e.g. smaller C values), and you will see that the decision boundary is still basically the same. ###Code from sklearn import linear_model logreg = linear_model.LogisticRegression(C=1e6, solver='lbfgs') logreg.fit(x, y) print((logreg.coef_)) # show the coefficients that were fitted to the data by logistic regression print((logreg.intercept_)) ###Output [[0.20623222 0.2014719 ]] [-25.16138556] ###Markdown Notice that for scikit learn we don't have to add in the column of 1's to the input data. By default, most scikit learn functions will assume they need to add in such an intercept parameter. So there will only be two theta parameters in this case, but the parameter corresponding to the intercept value is in a separate constant after we fit our model to the training data. Here is a plot of the decision boundary that was found. 
###Code # display the decision boundary for the coefficients neg_indexes = np.where(y==0)[0] pos_indexes = np.where(y==1)[0] # visualize the data points of the two categories plt.plot(x[neg_indexes, 0], x[neg_indexes, 1], 'yo', label='Not admitted') plt.plot(x[pos_indexes, 0], x[pos_indexes, 1], 'r^', label='Admitted') plt.title('Admit/No Admit as a function of Exam Scores') plt.xlabel('Exam 1 score') plt.ylabel('Exam 2 score') plt.legend() # add the decision boundary line dec_xpts = np.arange(30, 93) theta = logreg.coef_[0] dec_ypts = - (logreg.intercept_ + theta[0] * dec_xpts) / theta[1] plt.plot(dec_xpts, dec_ypts, 'b-'); ###Output _____no_output_____ ###Markdown Now lets use the linear SVM classifier from scikit learn to perform the same classification. ###Code from sklearn import svm linclf = svm.SVC(kernel='linear', C=1e6) linclf.fit(x, y) print((linclf.coef_)) # show the coefficients that were fitted to the data by the linear SVM print((linclf.intercept_)) ###Output [[64.17436109 67.1465802 ]] [-8028.53017612] ###Markdown Notice that the parameters found for the model and the intercept are a bit different, but these do actually correspond to basically about the same decision boundary as before. If we plot it you can see this is the case: ###Code # display the decision boundary for the coefficients neg_indexes = np.where(y==0)[0] pos_indexes = np.where(y==1)[0] # visualize the data points of the two categories plt.plot(x[neg_indexes, 0], x[neg_indexes, 1], 'yo', label='Not admitted') plt.plot(x[pos_indexes, 0], x[pos_indexes, 1], 'r^', label='Admitted') plt.title('Admit/No Admit as a function of Exam Scores') plt.xlabel('Exam 1 score') plt.ylabel('Exam 2 score') plt.legend() # add the decision boundary line dec_xpts = np.arange(30, 93) theta = linclf.coef_[0] dec_ypts = - (linclf.intercept_ + theta[0] * dec_xpts) / theta[1] plt.plot(dec_xpts, dec_ypts, 'b-'); ###Output _____no_output_____ ###Markdown And finally, lets use an SVM with a gaussian kernel. 
It is not so interesting to use a nonlinear classifier with the previous data, so lets make up some data, similar to the data shown in our companion video, of a class surrounded by another class. Here we use a function from the scikit learn library that can be used to create data at random. The data has 2 features, and only 2 classes (either positive or negative, e.g. admitted or not admitted). The random data generated from this function is centered at the origin (0, 0). The further away the data is from the center, the more probable it is in another class (using a gaussian probability function). Thus with two classes we tend to get a class inside surrounded by another class, with a basically circular decision boundary. ###Code from sklearn.datasets import make_gaussian_quantiles X, Y = make_gaussian_quantiles(n_features=2, n_classes=2) neg_indexes = np.where(Y==0)[0] pos_indexes = np.where(Y==1)[0] plt.plot(X[neg_indexes, 0], X[neg_indexes, 1], 'yo', label='negative examples') plt.plot(X[pos_indexes, 0], X[pos_indexes, 1], 'r^', label='positive examples'); ###Output _____no_output_____ ###Markdown Here then we will use an SVM with gaussian kernels to create a classifier for the data. Note that we specify 'rbf' for the kernel; these are radial basis function kernels. Radial basis function kernels include gaussian functions, as well as the polynomial functions discussed in our companion videos. You specify the gamma, degree and coef0 parameters to get the different types of kernel functions that were discussed. I believe that by specifying a gamma of 1.0 we will be using simple gaussian kernel functions as were shown in our videos. ###Code from sklearn import svm rbfclf = svm.SVC(kernel='rbf', gamma=1.0) rbfclf.fit(X, Y) # Now display the results. We don't really have simple theta parameters anymore, the parameters are specifying # relative values of the gaussian kernels now. In fact, rbfclf.coef_ will not be defined for non linear kernels. 
# Here we use an alternative method to visualize the decision boundary that was discovered. # create a mesh to plot in x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 h = .02 # step size in the mesh xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = rbfclf.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8) # plot the original data neg_indexes = np.where(Y==0)[0] pos_indexes = np.where(Y==1)[0] plt.plot(X[neg_indexes, 0], X[neg_indexes, 1], 'yo', label='negative examples') plt.plot(X[pos_indexes, 0], X[pos_indexes, 1], 'r^', label='positive examples') plt.legend(); ###Output _____no_output_____ ###Markdown More SciKit-Learn Examples Linear SVC on iris dataset Use a LinearSVC (non-kernel) based SVM. LinearSVC will be much faster than using SVC and specifying a 'linear' kernel. ###Code from sklearn import datasets from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.svm import LinearSVC iris = datasets.load_iris() X = iris['data'][:, (2, 3)] # petal length, petal width y = (iris['target'] == 2).astype(np.float64) # Iris-virginica C = 1.0 svmclf = Pipeline(( ("scaler", StandardScaler()), ("linear_svc", LinearSVC(C=C, loss='hinge')), )) svmclf.fit(X, y) svmclf.predict([[5.5, 1.7]]) # visualize resulting decision boundary x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 h = .02 # step size in the mesh xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = svmclf.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8) # plot the original data neg_indexes = np.where(y==0)[0] pos_indexes = np.where(y==1)[0] plt.plot(X[neg_indexes, 0], 
X[neg_indexes, 1], 'yo', label='other') plt.plot(X[pos_indexes, 0], X[pos_indexes, 1], 'r^', label='Iris-virginica') plt.legend(); plt.title('C = %f' % C) ###Output _____no_output_____ ###Markdown Moon data using Generated Polynomial Features Example using the PolynomialFeatures class to create all feature combinations up to degree 3 polynomials here. ###Code from sklearn.datasets import make_moons from sklearn.preprocessing import PolynomialFeatures X, y = make_moons(noise=0.1) d = 3 # polynomial degree C = 10 polynomial_svm_clf = Pipeline(( ('poly_features', PolynomialFeatures(degree=d)), ('scaler', StandardScaler()), ('svm_clf', LinearSVC(C=C, loss='hinge')), )) polynomial_svm_clf.fit(X, y) # visualize resulting decision boundary x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 h = .02 # step size in the mesh xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = polynomial_svm_clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8) # plot the original data neg_indexes = np.where(y==0)[0] pos_indexes = np.where(y==1)[0] plt.plot(X[neg_indexes, 0], X[neg_indexes, 1], 'yo', label='negative class') plt.plot(X[pos_indexes, 0], X[pos_indexes, 1], 'r^', label='positive class') plt.legend(); plt.title('C = %f, degree=%d' % (C, d)) ###Output _____no_output_____ ###Markdown Polynomial Kernel ###Code from sklearn.svm import SVC d = 10 c0 = 100 C = 5 poly_kernel_svm_clf = Pipeline(( ('scaler', StandardScaler()), ('svm_clf', SVC(kernel='poly', degree=d, coef0=c0, C=C)), )) poly_kernel_svm_clf.fit(X, y) # visualize resulting decision boundary x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 h = .02 # step size in the mesh xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = 
poly_kernel_svm_clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8) # plot the original data neg_indexes = np.where(y==0)[0] pos_indexes = np.where(y==1)[0] plt.plot(X[neg_indexes, 0], X[neg_indexes, 1], 'yo', label='negative class') plt.plot(X[pos_indexes, 0], X[pos_indexes, 1], 'r^', label='positive class') plt.legend(); plt.title('C = %f, degree=%d, c0=%f' % (C, d, c0)) ###Output _____no_output_____ ###Markdown Gaussian RBF Kernel ###Code g = 5 C = 0.001 rbf_kernel_svm_clf = Pipeline(( ('scaler', StandardScaler()), ('svm_clf', SVC(kernel='rbf', gamma=g, C=C)), )) rbf_kernel_svm_clf.fit(X, y) # visualize resulting decision boundary x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 h = .02 # step size in the mesh xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = rbf_kernel_svm_clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8) # plot the original data neg_indexes = np.where(y==0)[0] pos_indexes = np.where(y==1)[0] plt.plot(X[neg_indexes, 0], X[neg_indexes, 1], 'yo', label='negative class') plt.plot(X[pos_indexes, 0], X[pos_indexes, 1], 'r^', label='positive class') plt.legend(); plt.title('C = %f, gamma=%f' % (C, g)) ###Output _____no_output_____
module-1/Intro-Pandas/your-code/main.ipynb
###Markdown Introduction to Pandas Complete the following set of exercises to solidify your knowledge of Pandas fundamentals. 1. Import Numpy and Pandas and alias them to `np` and `pd` respectively. ###Code # your code here import numpy as np import pandas as pd ###Output _____no_output_____ ###Markdown 2. Create a Pandas Series containing the elements of the list below. ###Code lst = [5.7, 75.2, 74.4, 84.0, 66.5, 66.3, 55.8, 75.7, 29.1, 43.7] # your code here lstseries = pd.Series(lst) print(lstseries) ###Output 0 5.7 1 75.2 2 74.4 3 84.0 4 66.5 5 66.3 6 55.8 7 75.7 8 29.1 9 43.7 dtype: float64 ###Markdown 3. Use indexing to return the third value in the Series above. *Hint: Remember that indexing begins at 0.* ###Code # your code here print(lstseries[2]) ###Output 74.4 ###Markdown 4. Create a Pandas DataFrame from the list of lists below. Each sublist should be represented as a row. ###Code b = [[53.1, 95.0, 67.5, 35.0, 78.4], [61.3, 40.8, 30.8, 37.8, 87.6], [20.6, 73.2, 44.2, 14.6, 91.8], [57.4, 0.1, 96.1, 4.2, 69.5], [83.6, 20.5, 85.4, 22.8, 35.9], [49.0, 69.0, 0.1, 31.8, 89.1], [23.3, 40.7, 95.0, 83.8, 26.9], [27.6, 26.4, 53.8, 88.8, 68.5], [96.6, 96.4, 53.4, 72.4, 50.1], [73.7, 39.0, 43.2, 81.6, 34.7]] # your code here bdf = pd.DataFrame(b) print(bdf) ###Output 0 1 2 3 4 0 53.1 95.0 67.5 35.0 78.4 1 61.3 40.8 30.8 37.8 87.6 2 20.6 73.2 44.2 14.6 91.8 3 57.4 0.1 96.1 4.2 69.5 4 83.6 20.5 85.4 22.8 35.9 5 49.0 69.0 0.1 31.8 89.1 6 23.3 40.7 95.0 83.8 26.9 7 27.6 26.4 53.8 88.8 68.5 8 96.6 96.4 53.4 72.4 50.1 9 73.7 39.0 43.2 81.6 34.7 ###Markdown 5. Rename the data frame columns based on the names in the list below. 
###Code colnames = ['Score_1', 'Score_2', 'Score_3', 'Score_4', 'Score_5'] # your code here bdf.columns = colnames print(bdf) ###Output Score_1 Score_2 Score_3 Score_4 Score_5 0 53.1 95.0 67.5 35.0 78.4 1 61.3 40.8 30.8 37.8 87.6 2 20.6 73.2 44.2 14.6 91.8 3 57.4 0.1 96.1 4.2 69.5 4 83.6 20.5 85.4 22.8 35.9 5 49.0 69.0 0.1 31.8 89.1 6 23.3 40.7 95.0 83.8 26.9 7 27.6 26.4 53.8 88.8 68.5 8 96.6 96.4 53.4 72.4 50.1 9 73.7 39.0 43.2 81.6 34.7 ###Markdown 6. Create a subset of this data frame that contains only the Score 1, 3, and 5 columns. ###Code # your code here newbdf = bdf[['Score_1','Score_3','Score_5']] print(newbdf) ###Output Score_1 Score_3 Score_5 0 53.1 67.5 78.4 1 61.3 30.8 87.6 2 20.6 44.2 91.8 3 57.4 96.1 69.5 4 83.6 85.4 35.9 5 49.0 0.1 89.1 6 23.3 95.0 26.9 7 27.6 53.8 68.5 8 96.6 53.4 50.1 9 73.7 43.2 34.7 ###Markdown 7. From the original data frame, calculate the average Score_3 value. ###Code # your code here bdf['Score_3'].mean() ###Output _____no_output_____ ###Markdown 8. From the original data frame, calculate the maximum Score_4 value. ###Code # your code here bdf['Score_4'].max() ###Output _____no_output_____ ###Markdown 9. From the original data frame, calculate the median Score 2 value. ###Code # your code here bdf['Score_2'].median() ###Output _____no_output_____ ###Markdown 10. Create a Pandas DataFrame from the dictionary of product orders below. 
###Code orders = {'Description': ['LUNCH BAG APPLE DESIGN', 'SET OF 60 VINTAGE LEAF CAKE CASES ', 'RIBBON REEL STRIPES DESIGN ', 'WORLD WAR 2 GLIDERS ASSTD DESIGNS', 'PLAYING CARDS JUBILEE UNION JACK', 'POPCORN HOLDER', 'BOX OF VINTAGE ALPHABET BLOCKS', 'PARTY BUNTING', 'JAZZ HEARTS ADDRESS BOOK', 'SET OF 4 SANTA PLACE SETTINGS'], 'Quantity': [1, 24, 1, 2880, 2, 7, 1, 4, 10, 48], 'UnitPrice': [1.65, 0.55, 1.65, 0.18, 1.25, 0.85, 11.95, 4.95, 0.19, 1.25], 'Revenue': [1.65, 13.2, 1.65, 518.4, 2.5, 5.95, 11.95, 19.8, 1.9, 60.0]} # your code here ordersdf = pd.DataFrame(orders) # x.head() print(ordersdf) ###Output Description Quantity UnitPrice Revenue 0 LUNCH BAG APPLE DESIGN 1 1.65 1.65 1 SET OF 60 VINTAGE LEAF CAKE CASES 24 0.55 13.20 2 RIBBON REEL STRIPES DESIGN 1 1.65 1.65 3 WORLD WAR 2 GLIDERS ASSTD DESIGNS 2880 0.18 518.40 4 PLAYING CARDS JUBILEE UNION JACK 2 1.25 2.50 5 POPCORN HOLDER 7 0.85 5.95 6 BOX OF VINTAGE ALPHABET BLOCKS 1 11.95 11.95 7 PARTY BUNTING 4 4.95 19.80 8 JAZZ HEARTS ADDRESS BOOK 10 0.19 1.90 9 SET OF 4 SANTA PLACE SETTINGS 48 1.25 60.00 ###Markdown 11. Calculate the total quantity ordered and revenue generated from these orders. ###Code # your code here total_qty = ordersdf['Quantity'].sum() print(total_qty) total_revenue = ordersdf['Revenue'].sum() print(total_revenue) ###Output 2978 637.0 ###Markdown 12. Obtain the prices of the most expensive and least expensive items ordered and print the difference. ###Code # your code here expensive = ordersdf['UnitPrice'].max() print(expensive) affordable = ordersdf['UnitPrice'].min() print(affordable) diff = expensive - affordable print(diff) ###Output 11.95 0.18 11.77 ###Markdown Introduction to PandasComplete the following set of exercises to solidify your knowledge of Pandas fundamentals. 1. Import Numpy and Pandas and alias them to `np` and `pd` respectively. ###Code # your code here ###Output _____no_output_____ ###Markdown 2. Create a Pandas Series containing the elements of the list below. 
###Code lst = [5.7, 75.2, 74.4, 84.0, 66.5, 66.3, 55.8, 75.7, 29.1, 43.7] # your code here ###Output _____no_output_____ ###Markdown 3. Use indexing to return the third value in the Series above.*Hint: Remember that indexing begins at 0.* ###Code # your code here ###Output _____no_output_____ ###Markdown 4. Create a Pandas DataFrame from the list of lists below. Each sublist should be represented as a row. ###Code b = [[53.1, 95.0, 67.5, 35.0, 78.4], [61.3, 40.8, 30.8, 37.8, 87.6], [20.6, 73.2, 44.2, 14.6, 91.8], [57.4, 0.1, 96.1, 4.2, 69.5], [83.6, 20.5, 85.4, 22.8, 35.9], [49.0, 69.0, 0.1, 31.8, 89.1], [23.3, 40.7, 95.0, 83.8, 26.9], [27.6, 26.4, 53.8, 88.8, 68.5], [96.6, 96.4, 53.4, 72.4, 50.1], [73.7, 39.0, 43.2, 81.6, 34.7]] # your code here ###Output _____no_output_____ ###Markdown 5. Rename the data frame columns based on the names in the list below. ###Code colnames = ['Score_1', 'Score_2', 'Score_3', 'Score_4', 'Score_5'] # your code here ###Output _____no_output_____ ###Markdown 6. Create a subset of this data frame that contains only the Score 1, 3, and 5 columns. ###Code # your code here ###Output _____no_output_____ ###Markdown 7. From the original data frame, calculate the average Score_3 value. ###Code # your code here ###Output _____no_output_____ ###Markdown 8. From the original data frame, calculate the maximum Score_4 value. ###Code # your code here ###Output _____no_output_____ ###Markdown 9. From the original data frame, calculate the median Score 2 value. ###Code # your code here ###Output _____no_output_____ ###Markdown 10. Create a Pandas DataFrame from the dictionary of product orders below. 
###Code orders = {'Description': ['LUNCH BAG APPLE DESIGN', 'SET OF 60 VINTAGE LEAF CAKE CASES ', 'RIBBON REEL STRIPES DESIGN ', 'WORLD WAR 2 GLIDERS ASSTD DESIGNS', 'PLAYING CARDS JUBILEE UNION JACK', 'POPCORN HOLDER', 'BOX OF VINTAGE ALPHABET BLOCKS', 'PARTY BUNTING', 'JAZZ HEARTS ADDRESS BOOK', 'SET OF 4 SANTA PLACE SETTINGS'], 'Quantity': [1, 24, 1, 2880, 2, 7, 1, 4, 10, 48], 'UnitPrice': [1.65, 0.55, 1.65, 0.18, 1.25, 0.85, 11.95, 4.95, 0.19, 1.25], 'Revenue': [1.65, 13.2, 1.65, 518.4, 2.5, 5.95, 11.95, 19.8, 1.9, 60.0]} # your code here ###Output _____no_output_____ ###Markdown 11. Calculate the total quantity ordered and revenue generated from these orders. ###Code # your code here ###Output _____no_output_____ ###Markdown 12. Obtain the prices of the most expensive and least expensive items ordered and print the difference. ###Code # your code here ###Output _____no_output_____ ###Markdown Introduction to PandasComplete the following set of exercises to solidify your knowledge of Pandas fundamentals. 1. Import Numpy and Pandas and alias them to `np` and `pd` respectively. ###Code # your code here import numpy as np import pandas as pd ###Output _____no_output_____ ###Markdown 2. Create a Pandas Series containing the elements of the list below. ###Code lst = [5.7, 75.2, 74.4, 84.0, 66.5, 66.3, 55.8, 75.7, 29.1, 43.7] # your code here a = pd.Series(lst) print(a) type(a) ###Output _____no_output_____ ###Markdown 3. Use indexing to return the third value in the Series above.*Hint: Remember that indexing begins at 0.* ###Code # your code here a[2] #### 4. Create a Pandas DataFrame from the list of lists below. Each sublist should be represented as a row. 
b = [[53.1, 95.0, 67.5, 35.0, 78.4], [61.3, 40.8, 30.8, 37.8, 87.6], [20.6, 73.2, 44.2, 14.6, 91.8], [57.4, 0.1, 96.1, 4.2, 69.5], [83.6, 20.5, 85.4, 22.8, 35.9], [49.0, 69.0, 0.1, 31.8, 89.1], [23.3, 40.7, 95.0, 83.8, 26.9], [27.6, 26.4, 53.8, 88.8, 68.5], [96.6, 96.4, 53.4, 72.4, 50.1], [73.7, 39.0, 43.2, 81.6, 34.7]] # your code here df = pd.DataFrame(b) df type(df) ###Output _____no_output_____ ###Markdown 5. Rename the data frame columns based on the names in the list below. ###Code colnames = ['Score_1', 'Score_2', 'Score_3', 'Score_4', 'Score_5'] # your code here df = pd.DataFrame(b, columns=colnames) df ###Output _____no_output_____ ###Markdown 6. Create a subset of this data frame that contains only the Score 1, 3, and 5 columns. ###Code # your code here df[['Score_1','Score_3','Score_5']] ###Output _____no_output_____ ###Markdown 7. From the original data frame, calculate the average Score_3 value. ###Code # your code here df["Score_3"].mean() ###Output _____no_output_____ ###Markdown 8. From the original data frame, calculate the maximum Score_4 value. ###Code # your code here df["Score_4"].max() ###Output _____no_output_____ ###Markdown 9. From the original data frame, calculate the median Score 2 value. ###Code # your code here df["Score_2"].median() ###Output _____no_output_____ ###Markdown 10. Create a Pandas DataFrame from the dictionary of product orders below. 
###Code orders = {'Description': ['LUNCH BAG APPLE DESIGN', 'SET OF 60 VINTAGE LEAF CAKE CASES ', 'RIBBON REEL STRIPES DESIGN ', 'WORLD WAR 2 GLIDERS ASSTD DESIGNS', 'PLAYING CARDS JUBILEE UNION JACK', 'POPCORN HOLDER', 'BOX OF VINTAGE ALPHABET BLOCKS', 'PARTY BUNTING', 'JAZZ HEARTS ADDRESS BOOK', 'SET OF 4 SANTA PLACE SETTINGS'], 'Quantity': [1, 24, 1, 2880, 2, 7, 1, 4, 10, 48], 'UnitPrice': [1.65, 0.55, 1.65, 0.18, 1.25, 0.85, 11.95, 4.95, 0.19, 1.25], 'Revenue': [1.65, 13.2, 1.65, 518.4, 2.5, 5.95, 11.95, 19.8, 1.9, 60.0]} # your code here x = pd.DataFrame(orders) x.head() x.head(10) ###Output _____no_output_____ ###Markdown 11. Calculate the total quantity ordered and revenue generated from these orders. ###Code # your code here x["Quantity"].sum() x["Revenue"].sum() ###Output _____no_output_____ ###Markdown 12. Obtain the prices of the most expensive and least expensive items ordered and print the difference. ###Code # your code here x["UnitPrice"].max() x["UnitPrice"].min() x["UnitPrice"].max() - x["UnitPrice"].min() ###Output _____no_output_____ ###Markdown Introduction to PandasComplete the following set of exercises to solidify your knowledge of Pandas fundamentals. 1. Import Numpy and Pandas and alias them to `np` and `pd` respectively. ###Code # your code here import numpy as np import pandas as pd ###Output _____no_output_____ ###Markdown 2. Create a Pandas Series containing the elements of the list below. ###Code lst = [5.7, 75.2, 74.4, 84.0, 66.5, 66.3, 55.8, 75.7, 29.1, 43.7] # your code here s = pd.Series(lst) print(s) ###Output 0 5.7 1 75.2 2 74.4 3 84.0 4 66.5 5 66.3 6 55.8 7 75.7 8 29.1 9 43.7 dtype: float64 ###Markdown 3. Use indexing to return the third value in the Series above.*Hint: Remember that indexing begins at 0.* ###Code # your code here print(s[2]) ###Output 74.4 ###Markdown 4. Create a Pandas DataFrame from the list of lists below. Each sublist should be represented as a row. 
###Code b = [[53.1, 95.0, 67.5, 35.0, 78.4], [61.3, 40.8, 30.8, 37.8, 87.6], [20.6, 73.2, 44.2, 14.6, 91.8], [57.4, 0.1, 96.1, 4.2, 69.5], [83.6, 20.5, 85.4, 22.8, 35.9], [49.0, 69.0, 0.1, 31.8, 89.1], [23.3, 40.7, 95.0, 83.8, 26.9], [27.6, 26.4, 53.8, 88.8, 68.5], [96.6, 96.4, 53.4, 72.4, 50.1], [73.7, 39.0, 43.2, 81.6, 34.7]] # your code here df = pd.DataFrame(b) ###Output _____no_output_____ ###Markdown 5. Rename the data frame columns based on the names in the list below. ###Code colnames = ['Score_1', 'Score_2', 'Score_3', 'Score_4', 'Score_5'] # your code here updated_df = pd.DataFrame(b, columns = colnames) print(updated_df) ###Output Score_1 Score_2 Score_3 Score_4 Score_5 0 53.1 95.0 67.5 35.0 78.4 1 61.3 40.8 30.8 37.8 87.6 2 20.6 73.2 44.2 14.6 91.8 3 57.4 0.1 96.1 4.2 69.5 4 83.6 20.5 85.4 22.8 35.9 5 49.0 69.0 0.1 31.8 89.1 6 23.3 40.7 95.0 83.8 26.9 7 27.6 26.4 53.8 88.8 68.5 8 96.6 96.4 53.4 72.4 50.1 9 73.7 39.0 43.2 81.6 34.7 ###Markdown 6. Create a subset of this data frame that contains only the Score 1, 3, and 5 columns. ###Code # your code here print(updated_df[['Score_1', 'Score_3', 'Score_5']]) ###Output Score_1 Score_3 Score_5 0 53.1 67.5 78.4 1 61.3 30.8 87.6 2 20.6 44.2 91.8 3 57.4 96.1 69.5 4 83.6 85.4 35.9 5 49.0 0.1 89.1 6 23.3 95.0 26.9 7 27.6 53.8 68.5 8 96.6 53.4 50.1 9 73.7 43.2 34.7 ###Markdown 7. From the original data frame, calculate the average Score_3 value. ###Code # your code here print(updated_df['Score_3'].mean()) ###Output 56.95000000000001 ###Markdown 8. From the original data frame, calculate the maximum Score_4 value. ###Code # your code here print(updated_df['Score_4'].max()) ###Output 88.8 ###Markdown 9. From the original data frame, calculate the median Score 2 value. ###Code # your code here print(updated_df['Score_2'].median()) ###Output 40.75 ###Markdown 10. Create a Pandas DataFrame from the dictionary of product orders below. 
###Code orders = {'Description': ['LUNCH BAG APPLE DESIGN', 'SET OF 60 VINTAGE LEAF CAKE CASES ', 'RIBBON REEL STRIPES DESIGN ', 'WORLD WAR 2 GLIDERS ASSTD DESIGNS', 'PLAYING CARDS JUBILEE UNION JACK', 'POPCORN HOLDER', 'BOX OF VINTAGE ALPHABET BLOCKS', 'PARTY BUNTING', 'JAZZ HEARTS ADDRESS BOOK', 'SET OF 4 SANTA PLACE SETTINGS'], 'Quantity': [1, 24, 1, 2880, 2, 7, 1, 4, 10, 48], 'UnitPrice': [1.65, 0.55, 1.65, 0.18, 1.25, 0.85, 11.95, 4.95, 0.19, 1.25], 'Revenue': [1.65, 13.2, 1.65, 518.4, 2.5, 5.95, 11.95, 19.8, 1.9, 60.0]} # your code here orders_df = pd.DataFrame(orders) print(orders_df) ###Output Description Quantity UnitPrice Revenue 0 LUNCH BAG APPLE DESIGN 1 1.65 1.65 1 SET OF 60 VINTAGE LEAF CAKE CASES 24 0.55 13.20 2 RIBBON REEL STRIPES DESIGN 1 1.65 1.65 3 WORLD WAR 2 GLIDERS ASSTD DESIGNS 2880 0.18 518.40 4 PLAYING CARDS JUBILEE UNION JACK 2 1.25 2.50 5 POPCORN HOLDER 7 0.85 5.95 6 BOX OF VINTAGE ALPHABET BLOCKS 1 11.95 11.95 7 PARTY BUNTING 4 4.95 19.80 8 JAZZ HEARTS ADDRESS BOOK 10 0.19 1.90 9 SET OF 4 SANTA PLACE SETTINGS 48 1.25 60.00 ###Markdown 11. Calculate the total quantity ordered and revenue generated from these orders. ###Code # your code here total_quantity_ordered = orders_df['Quantity'].sum() print(total_quantity_ordered) revenue_generated = orders_df['Revenue'].sum() print(revenue_generated) ###Output 2978 637.0 ###Markdown 12. Obtain the prices of the most expensive and least expensive items ordered and print the difference. ###Code # your code here most_expensive_item = orders_df['UnitPrice'].max() least_expensive_item = orders_df['UnitPrice'].min() difference = most_expensive_item - least_expensive_item print(difference) ###Output 11.77
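As a side note on questions 11 and 12 above, the aggregates can also be pulled in a single pass with `DataFrame.agg`; a minimal sketch on the same orders data (the variable names here are illustrative):

```python
import pandas as pd

orders = {'Quantity': [1, 24, 1, 2880, 2, 7, 1, 4, 10, 48],
          'UnitPrice': [1.65, 0.55, 1.65, 0.18, 1.25, 0.85, 11.95, 4.95, 0.19, 1.25],
          'Revenue': [1.65, 13.2, 1.65, 518.4, 2.5, 5.95, 11.95, 19.8, 1.9, 60.0]}
orders_df = pd.DataFrame(orders)

# Sum both totals in one call; agg with a dict returns a Series
totals = orders_df.agg({'Quantity': 'sum', 'Revenue': 'sum'})

# Price extremes and their difference in one pass over UnitPrice
price_range = orders_df['UnitPrice'].agg(['min', 'max'])
diff = price_range['max'] - price_range['min']

print(totals['Quantity'], totals['Revenue'], diff)
```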
Discussion Section Notes/Discussion Week 1.ipynb
###Markdown Question 1 ###Code string = " Hello World" print(string) string*3 ###Output Hello World ###Markdown Question 2 ###Code looper = [1,2,3,4,6,8] for thing in looper: print(thing) for thing in string: print(thing) string_n = "123468" for thing in string_n: print(thing) ###Output 1 2 3 4 6 8 H e l l o W o r l d 1 2 3 4 6 8 ###Markdown Question 3 ###Code def f(x): y = x**2 return y f(4) ###Output _____no_output_____ ###Markdown Question 4 ###Code a = [1,2,3] #f(a) #I tried to plug a list into my f function, it didn't work so I wrote a loop and hopefully that will work z = [] for thing in a: thing_sq = thing**2 z.append(thing_sq) print(z) def f_list(x): y = [] for thing in x: thing_sq = thing**2 y.append(thing_sq) return y print(f_list(a)) ###Output [1, 4, 9] [1, 4, 9] ###Markdown Question 5 ###Code b = [thing**2 for thing in a] print(b) c = [f(thing) for thing in a] print(c) ###Output [1, 4, 9] [1, 4, 9] ###Markdown Question 6 ###Code import numpy as np a_np = np.array(a) print(a) print(a_np) a_np**2 ###Output [1, 2, 3] [1 2 3] ###Markdown Question 7 ###Code list = [1,7,100] np_list = np.array(list) np_list.mean() np_list.var() ###Output _____no_output_____
PyTorch/.ipynb_checkpoints/Matrizes_Arrays_Tensores-checkpoint.ipynb
###Markdown Matrices, Arrays, Tensors References - Official PyTorch tensor documentation http://pytorch.org/docs/master/tensors.html- PyTorch for NumPy users: https://github.com/torch/torch7/wiki/Torch-for-Numpy-users NumPy array ###Code import numpy as np a = np.array([[2., 8., 3.], [0.,-1., 5.]]) a a.shape a.dtype ###Output _____no_output_____ ###Markdown PyTorch tensor PyTorch tensors can only be float, float32, or float64 ###Code import torch ###Output _____no_output_____ ###Markdown Converting a NumPy array to a PyTorch tensor ###Code b = torch.Tensor(np.zeros((3,4))) b ###Output _____no_output_____ ###Markdown Creating constant arrays and tensors ###Code c = np.ones((2,4)); c d = torch.ones((2,4)); d ###Output _____no_output_____ ###Markdown Creating random arrays and tensors ###Code e = np.random.rand(2,4); e f = torch.rand(2,4); f ###Output _____no_output_____ ###Markdown Random arrays with a seed, to reproduce the same pseudo-random sequence ###Code np.random.seed(1234) e = np.random.rand(2,4);e torch.manual_seed(1234) f = torch.rand(2,4); f ###Output _____no_output_____ ###Markdown Torch seed is different for GPU ###Code if torch.cuda.is_available(): torch.cuda.torch.manual_seed(1234) g = torch.cuda.torch.rand(2,4) print(g) ###Output 0.0290 0.4019 0.2598 0.3666 0.0583 0.7006 0.0518 0.4681 [torch.FloatTensor of size 2x4] ###Markdown Conversions between NumPy and PyTorch tensors NumPy to PyTorch tensor using `.from_numpy()` - CAUTION Not every NumPy array element type can be converted to a PyTorch tensor.
Below is a program that builds a table of equivalences between the NumPy types and the PyTorch tensor types: ###Code import pandas as pd dtypes = [np.uint8, np.int32, np.int64, np.float32, np.float64, np.double] table = np.empty((2, len(dtypes)),dtype=np.object) for i,t in enumerate(dtypes): a = np.array([1],dtype=t) ta = torch.from_numpy(a) table[0,i] = a.dtype.name table[1,i] = type(ta).__name__ pd.DataFrame(table) ###Output _____no_output_____ ###Markdown NumPy to tensor using `torch.FloatTensor()` - recommended method An important caution applies when converting NumPy matrices to PyTorch tensors: PyTorch's neural-network functions use the FloatTensor type, while NumPy defaults to float64, which triggers an automatic conversion to PyTorch's DoubleTensor and consequently produces an error.The recommendation is to use `torch.FloatTensor` to convert NumPy arrays to PyTorch tensors: ###Code a = np.ones((2,5)) a_t = torch.FloatTensor(a) a_t ###Output _____no_output_____ ###Markdown PyTorch tensor to NumPy array ###Code ta = torch.ones(2,3) ta a = ta.numpy() a ###Output _____no_output_____ ###Markdown Tensors on the CPU and on the GPU ###Code ta_cpu = torch.ones(2,3); ta_cpu if torch.cuda.is_available(): ta_gpu = ta_cpu.cuda() print(ta_gpu) ###Output 1 1 1 1 1 1 [torch.cuda.FloatTensor of size 2x3 (GPU 0)] ###Markdown Operations on tensors Creating a tensor and viewing its shape ###Code a = torch.eye(4); a a.size() ###Output _____no_output_____ ###Markdown Reshaping is done with `view` in PyTorch ###Code b = a.view(2,8); b ###Output _____no_output_____ ###Markdown Here is an example that creates a sequential one-dimensional tensor from 0 to 23 and then reshapes it so that the tensor has 4 rows and 6 columns ###Code a = torch.arange(0,24).view(4,6);a ###Output _____no_output_____ ###Markdown Element-wise addition using operators ###Code c = a + a; c d = a - c ; d ###Output _____no_output_____ ###Markdown functional
form ###Code d = a.sub(c); d ###Output _____no_output_____ ###Markdown In-place operation ###Code a.sub_(c); a ###Output _____no_output_____ ###Markdown Element-wise multiplication ###Code d = a * c; d d = a.mul(c); d a.mul_(c); a ###Output _____no_output_____ ###Markdown Mean of tensors ###Code a = torch.arange(0,24).view(4,6); a u = a.mean(); u uu = a.sum()/a.nelement(); uu ###Output _____no_output_____ ###Markdown Mean with axis reduction ###Code u_row = a.mean(dim=1); u_row u_col = a.mean(dim=0); u_col ###Output _____no_output_____ ###Markdown Standard deviation ###Code std = a.std(); std std_row = a.std(dim=1); std_row ###Output _____no_output_____ ###Markdown CPU vs. GPU speedup comparison ###Code a_numpy_cpu = np.ones((1000,1000)) %timeit b = 2 * a_numpy_cpu a_torch_cpu = torch.ones(1000,1000) %timeit b = 2 * a_torch_cpu if torch.cuda.is_available(): a_torch_gpu = a_torch_cpu.cuda() %timeit b = 2 * a_torch_gpu ###Output 41.7 µs ± 41.8 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) ###Markdown Running the code below on a GTX1080: speedup of 15.5- 888 µs ± 43.4 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)- 57.1 µs ± 22.7 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)Running on the MacBook:- numpy: 1000 loops, best of 3: 449 µs per loop- torch: 1000 loops, best of 3: 1.6 ms per loop ###Code %timeit b1 = a_numpy_cpu.mean() %timeit b2 = a_torch_cpu.mean() if torch.cuda.is_available(): %timeit c = a_torch_gpu.mean() ###Output 433 µs ± 9.54 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 792 µs ± 11.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 80.7 µs ± 1.82 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
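One more note on the float64 caveat discussed in this notebook: the mismatch can also be avoided on the NumPy side by downcasting before conversion. A minimal NumPy-only sketch (the torch call itself is left out so the snippet runs even without PyTorch installed):

```python
import numpy as np

# NumPy arrays default to float64, which maps to torch.DoubleTensor and
# clashes with layers that expect FloatTensor (float32).
a = np.ones((2, 5))
assert a.dtype == np.float64  # the silent source of the mismatch

# Downcast explicitly; the result is then safe for torch.from_numpy
a32 = a.astype(np.float32)
```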
problem_code.ipynb
###Markdown Problem Implement the **k-means**, **SLIC**, and **Ratio Cut** algorithms for segmenting a given bioimage into multiple segments. Use the attached image or any other bioimage to show the segmentation results of your algorithms. Utility Functions ###Code def visualize_clusters(image, labels, n_clusters, subp): # convert to the shape of a vector of pixel values masked_image = np.copy(image) masked_image = masked_image.reshape((-1, 3)) labels = labels.flatten() for i in range(n_clusters): # color (i.e. cluster) to disable cluster = i masked_image[labels == cluster] = [255-150*i, 255-60*i, 155+3*i] # convert back to original shape masked_image = masked_image.reshape(image.shape) # show the image plt.subplot(subp).imshow(masked_image) plt.axis('off') def plot_segmented_image( img, labels, num_clusters, subp): labels = labels.reshape( img.shape[:2] ) plt.subplot(subp).imshow(img) plt.axis('off') for l in range( num_clusters ): try: plt.subplot(subp).contour( labels == l, levels=1, colors=[plt.get_cmap('coolwarm')( l / float( num_clusters ))] ) except ValueError: # raised if `y` is empty pass ###Output _____no_output_____ ###Markdown K-Means ###Code import numpy as np from PIL import Image import matplotlib.pyplot as plt class KMeansClustering: def runKMeans(self, intensities: np.ndarray, n_clusters: int, n_iterations: int = 20) -> (list, np.array): ''' The KMeans clustering algorithm. Returns: cluster_labels: list of labels for each point. ''' self.n_clusters = n_clusters self.init_centroids(intensities) print('Running KMeans...') for i in range(n_iterations): cluster_int, cluster_ind = self.allocate(intensities) self.update_centroids(cluster_int) labels = np.empty((intensities.shape[0])) for i in range(n_clusters): labels[cluster_ind[i]] = i return labels, self.centroids def init_centroids(self, intensities: np.ndarray): ''' Initialize centroids with random examples (or points) from the dataset.
''' # Number of examples l = intensities.shape[0] # Initialize the centroids array with points from intensities at random indices chosen from 0 to the number of examples rng = np.random.default_rng() self.centroids = intensities[rng.choice(l, size=self.n_clusters, replace=False)] self.centroids = self.centroids.astype(np.float32) def allocate(self, intensities): ''' This function forms new clusters from the centroids updated in the previous iterations. ''' # Allocate each point to the closest cluster # Calculate the differences between the centroids and the intensities using a broadcast subtract res = self.centroids - intensities[:, np.newaxis] # Find the Manhattan distance of each point to every centroid dist = np.absolute(res) # Find the closest centroid for each point; reuse res to keep only unique row indices res = np.where(dist == dist.min(axis=1)[:, np.newaxis]) # res[0] holds the row indices for the column indices in res[1] min_indices = res[1][np.unique(res[0])] indices = [[] for i in range(self.n_clusters)] for i, c in enumerate(min_indices): if not c == -1: # add the point to the corresponding cluster indices[c].append(i) return [intensities[indices[i]] for i in range(self.n_clusters)], indices def update_centroids(self, cluster_int): ''' This function updates the centroids based on the updated clusters.
''' #Make a rough copy centroids = self.centroids #Find mean for every cluster for i in range(self.n_clusters): if len(cluster_int[i]) > 0: centroids[i] = np.mean(cluster_int[i]) #Update fair copy self.centroids = centroids if __name__ == '__main__': img = Image.open('f1.png') plt.figure(figsize=(10,20)) plt.subplot(121).imshow(img) plt.axis('off') img = np.array(img) print(f'img.shape: {img.shape}') X = [] intensities = [] for i in range(img.shape[0]): for j in range(img.shape[1]): X.append([i, j]) intensities.append(np.average(img[i][j])) X = np.array(X) intensities = np.array(intensities) k = 3 KMC = KMeansClustering() labels, centroids = KMC.runKMeans(intensities, k, 10) visualize_clusters(img, labels, k, 122) plt.show() ###Output img.shape: (493, 559, 3) Running KMeans... ###Markdown SLIC ###Code import numpy as np from numpy import linalg as la from PIL import Image import sys import matplotlib.pyplot as plt from time import perf_counter class SLIC: def runSlic(self, X: np.ndarray, intensities: np.ndarray, n_clusters: int, n_iterations: int, lmbda: float) -> list: ''' The SLIC clustering algorithm. Returns: cluster_labels: list of labels for each point. ''' self.n_clusters = n_clusters self.init_centroids(X, intensities) for i in range(n_iterations): cluster_int, cluster_loc, indices = self.allocate(X, intensities, lmbda) self.update_centroids(cluster_int, cluster_loc) labels = np.empty((X.shape[0])) for i in range(n_clusters): labels[indices[i]] = i return labels def init_centroids(self, X, intensities: np.ndarray): ''' Initialize centroids with random examples (or points) from the dataset. 
''' #Number of examples l = intensities.shape[0] #Initialize centroids array with points from intensities with random indices chosen from 0 to number of examples rng = np.random.default_rng() indices = rng.choice(l, size=self.n_clusters, replace=False) self.centroids_c = X[indices] self.centroids_i = intensities[indices] self.centroids_i.astype(np.float32) def allocate(self, X: np.ndarray, intensities, lmbda): ''' This function forms new clusters from the centroids updated in the previous iterations. ''' # Allocate the points to the closest clusters #Calculate the differences in the features between centroids and X using broadcast subtract dist = np.absolute(self.centroids_i - intensities[:, np.newaxis]) + lmbda * la.norm(self.centroids_c - X[:, np.newaxis], axis=2) #Find the closest centroid from each point. # Find unique indices of the closest points. Using res again for optimization #not unique indices res = np.where(dist == dist.min(axis=1)[:, np.newaxis]) #res[0] is used as indices for row-wise indices in res[1] min_indices = res[1][np.unique(res[0])] indices = [[] for i in range(self.n_clusters)] for i, c in enumerate(min_indices): if not c == -1: indices[c].append(i) return [intensities[indices[i]] for i in range(self.n_clusters)], \ [X[indices[i]] for i in range(self.n_clusters)], indices def update_centroids(self, cluster_int, cluster_loc): ''' This function updates the centroids based on the updated clusters. 
''' #Make a rough copy centroids_c = self.centroids_c centroids_i = self.centroids_i #Find mean for every cluster for i in range(self.n_clusters): if len(cluster_int[i]) > 0: centroids_i[i] = np.mean(cluster_int[i]) centroids_c[i] = np.mean(cluster_loc[i], axis=0) #Update fair copy self.centroids_i = centroids_i self.centroids_c = centroids_c if __name__ == '__main__': img = Image.open('f1.png') plt.figure(figsize=(10, 20)) plt.subplot('121').imshow(img) plt.axis('off') img = np.array(img) print(f'img.shape: {img.shape}') X = [] intensities = [] for i in range(img.shape[0]): for j in range(img.shape[1]): X.append([i, j]) intensities.append(np.average(img[i][j])) X = np.array(X) intensities = np.array(intensities) k = 25 slic = SLIC() labels = slic.runSlic(X, intensities, k, 20, 0.25) visualize_clusters(img, labels, k, 122) # plot_segmented_image(img, labels, k) plt.show() ###Output img.shape: (493, 559, 3) ###Markdown Ratio Cut ###Code import numpy as np from google.colab.patches import cv2_imshow from PIL import Image from numpy import linalg as la import scipy.cluster.vq as vq import matplotlib.pyplot as plt import warnings import math warnings.simplefilter('ignore') class Spectralclustering: def run(self, img, k, LOAD=True, lmbda=0.25, sigma=1): if not LOAD: print('Constructing Laplacian matrix...') L = self.construct_L(img, lmbda, sigma) print('Performing Eigen Value Decomposition of L...') l, V = la.eigh( L ) with open('array.npy', 'wb') as fp: np.save(fp, V, allow_pickle=True) else: V = np.load('array.npy') # First K columns of V need to be clustered H = V[:,0:k] if( k==2 ): # In this case clustering on the Fiedler vector which gives very close approximation f = H[:,1] labels = np.ravel( np.sign( f ) ) k=2 else: # Run K-Means on eigenvector matrix centroids, labels = vq.kmeans2( H[:,:k], k ) print(f'kmeans2 labels: {labels}') return labels def construct_L(self, img: np.ndarray, lmbda: int, sigma: int): try: h, w = img.shape[:2] except AttributeError: 
raise TypeError('img should be a numpy array.') L = np.zeros((h*w, h*w)) D = np.zeros((h*w,)) for i in range(h): for j in range(w): # i - 1, j - 1 if i - 1 >= 0 and j - 1 >= 0: L[(i - 1) * w + (j - 1)][i * w + j] = L[i * w + j][(i - 1) * w + (j - 1)] = -self.sim(img[i][j], i, j, img[i-1][j-1], i-1, j-1, lmbda, sigma) D[i * w + j] += 1 D[(i - 1) * w + (j - 1)] += 1 # i - 1, j if i - 1 >= 0: L[(i - 1) * w + j][i * w + j] = L[i * w + j][(i - 1) * w + j] = -self.sim(img[i][j], i, j, img[i-1][j], i-1, j, lmbda, sigma) D[(i - 1) * w + j] += 1 D[i * w + j] += 1 # i - 1, j + 1 if i - 1 >= 0 and j + 1 < w: L[(i - 1) * w + (j + 1)][i * w + j] = L[i * w + j][(i - 1) * w + (j + 1)] = -self.sim(img[i][j], i, j, img[i-1][j+1], i-1, j+1, lmbda, sigma) D[(i - 1) * w + (j + 1)] += 1 D[i * w + j] += 1 # i, j - 1 if j - 1 >= 0: L[i * w + (j - 1)][i * w + j] = L[i * w + j][i * w + (j - 1)] = -self.sim(img[i][j], i, j, img[i][j-1], i, j-1, lmbda, sigma) D[i * w + (j - 1)] += 1 D[i * w + j] += 1 for i in range(h): for j in range(w): L[i * w + j][i * w + j] = D[i * w + j] return L def sim(self, x1, i1, j1, x2, i2, j2, lmbda = 0.25, sigma = 1): dist = np.linalg.norm([x1 - x2]) + lmbda * np.linalg.norm([i1 - i2, j1 - j2]) return math.exp(-(dist/sigma**2)) if __name__ == '__main__': img = Image.open('/content/f1.png') k = 10 LOAD = False # ''' # -------------------------------------- # CODE TO RESIZE ARRAY TO LOWER SIZE # ORIGINAL IMAGE WAS EXCEEDING MEMORY # -------------------------------------- basewidth = 100 wpercent = (basewidth/float(img.size[0])) hsize = int((float(img.size[1])*float(wpercent))) img = img.resize((basewidth,hsize), Image.ANTIALIAS) # Convert image to grayscale gray = img.convert('L') # Normalise image intensities to [0,1] values gray = np.asarray(gray).astype(float)/255.0 # gray = np.array([[0, 1, 0], [1,0,1], [0,1,0]], dtype=float) s = Spectralclustering() labels = s.run(gray, k, LOAD=LOAD, lmbda=0.25, sigma=1) # labels = labels.reshape( gray.shape ) # plot_segmented_image(
img, labels, k, None, 'Spectral Clustering' ) img = np.array(img) plt.figure(figsize=(10, 30)) plt.subplot(131).imshow(img) plt.axis('off') visualize_clusters(img, labels, k,132) plot_segmented_image(img, labels, k, 133) plt.show() ###Output Constructing Laplacian matrix... Performing Eigen Value Decomposition of L... kmeans2 labels: [1 1 1 ... 1 1 1] ###Markdown Library Function ###Code # import kmeans import numpy as np from google.colab.patches import cv2_imshow from PIL import Image from numpy import linalg as la import scipy.cluster.vq as vq import matplotlib.pyplot as plt import warnings import math import logging from sklearn.cluster import SpectralClustering warnings.simplefilter('ignore') def sim(x1, i1, j1, x2, i2, j2, lmbda = 0.25, sigma = 1): dist = np.linalg.norm([x1 - x2]) + lmbda * np.linalg.norm([i1 - i2, j1 - j2]) return math.exp(-(dist/sigma**2)) def construct_W(img: np.ndarray, lmbda: float, sigma: float): try: h, w = img.shape[:2] except AttributeError: raise('img should be numpy array.') L = np.zeros((h*w, h*w)) D = np.zeros((h*w,)) for i in range(h): for j in range(w): # i - 1, j - 1 if i - 1 >= 0 and j - 1 >= 0: L[(i - 1) * w + (j - 1)][i * w + j] = L[i * w + j][(i - 1) * w + (j - 1)] = sim(img[i][j], i, j, img[i-1][j-1], i-1, j-1, lmbda, sigma) # i - 1, j if i - 1 >= 0: L[(i - 1) * w + j][i * w + j] = L[i * w + j][(i - 1) * w + j] = sim(img[i][j], i, j, img[i-1][j], i-1, j, lmbda, sigma) # i - 1, j + 1 if i - 1 >= 0 and j + 1 < w: L[(i - 1) * w + (j + 1)][i * w + j] = L[i * w + j][(i - 1) * w + (j + 1)] = sim(img[i][j], i, j, img[i-1][j+1], i-1, j+1, lmbda, sigma) # i, j - 1 if j - 1 >= 0: L[i * w + (j - 1)][i * w + j] = L[i * w + j][i * w + (j - 1)] = sim(img[i][j], i, j, img[i][j-1], i, j-1, lmbda, sigma) return L def visualize_clusters_r(image, labels, n_clusters, subp): # convert to the shape of a vector of pixel values masked_image = np.copy(image) labels = labels.flatten() masked_image = masked_image.reshape(-1) for i in 
range(n_clusters): # color (i.e. cluster) to disable cluster = i masked_image[labels == cluster] = 255-20*i # convert back to original shape masked_image = masked_image.reshape(image.shape) # show the image plt.subplot(subp).imshow(masked_image) plt.axis('off') if __name__ == '__main__': img = Image.open('/content/f1.png') k = 5 # -------------------------------------- # CODE TO RESIZE ARRAY TO LOWER SIZE # ORIGINAL IMAGE WAS EXCEEDING MEMORY # -------------------------------------- basewidth = 100 wpercent = (basewidth/float(img.size[0])) hsize = int((float(img.size[1])*float(wpercent))) img = img.resize((basewidth,hsize), Image.LANCZOS) plt.figure(figsize=(10, 30)) plt.subplot(131).imshow(img) plt.axis('off') # Convert image to grayscale img = img.convert('L') # Normalise image intensities to [0,1] values img = np.asarray(img).astype(float)/255.0 # img = np.array([[0, 1, 0], [1,0,1], [0,1,0]], dtype=float) logging.debug(f'img:{img}\nimg.shape: {img.shape}') W = construct_W(img, 0, 1) sc = SpectralClustering(k, affinity='precomputed', n_init=10, assign_labels='kmeans') labels = sc.fit_predict(W) visualize_clusters_r(img, labels, k, 132) plot_segmented_image(img, labels, k, 133) plt.show() ###Output _____no_output_____
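Aside (not part of the original notebooks): the neighbour-loop construction above can be sanity-checked against the defining properties of the unnormalised Laplacian L = D − W, namely that W is symmetric and every row of L sums to zero. A minimal self-contained sketch, with the helper name `build_laplacian` and the 2×2 test image chosen purely for illustration:

```python
import numpy as np

def sim(x1, p1, x2, p2, lmbda=0.25, sigma=1.0):
    # Same flavour of similarity as the notebook's sim(): intensity distance
    # plus a lambda-weighted spatial distance, squashed through exp(-d/sigma^2).
    dist = abs(x1 - x2) + lmbda * np.linalg.norm(np.subtract(p1, p2))
    return np.exp(-dist / sigma**2)

def build_laplacian(img, lmbda=0.25, sigma=1.0):
    # Visit each pixel's "upper" neighbours once, as in the nested loops above,
    # filling W symmetrically; the weighted degree goes on the diagonal of D.
    h, w = img.shape
    n = h * w
    W = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            for di, dj in [(-1, -1), (-1, 0), (-1, 1), (0, -1)]:
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    s = sim(img[i, j], (i, j), img[ni, nj], (ni, nj), lmbda, sigma)
                    W[i * w + j, ni * w + nj] = W[ni * w + nj, i * w + j] = s
    D = np.diag(W.sum(axis=1))
    return D - W, W

img = np.array([[0.0, 1.0], [1.0, 0.0]])
L, W = build_laplacian(img)
```

The eigendecomposition step then works on `L` exactly as in the notebook; checking `np.allclose(L.sum(axis=1), 0)` is a cheap guard before the expensive step.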
intro-to-pytorch/Part 1 - Tensors in PyTorch (Exercises).ipynb
###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y_hat = activation(torch.mm(features, weights.T) + bias) print(y_hat) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here y_hat = activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) print(y_hat) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
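An illustrative aside on the Numpy-to-Torch memory sharing demonstrated above (not part of the original exercise): when you do not want the array and the tensor linked, make an explicit copy on one side, e.g. with `torch.tensor()`, which always copies, instead of `torch.from_numpy()`:

```python
import numpy as np
import torch

a = np.ones((2, 2))
shared = torch.from_numpy(a)   # shares a's memory
copied = torch.tensor(a)       # torch.tensor() always copies the data

shared.mul_(2)                 # in-place change is visible through a ...
print(a[0, 0])                 # 2.0
print(copied[0, 0].item())     # ... but the copy is untouched: 1.0
```

Calling `.clone()` on the tensor, or `np.copy()` on the array, achieves the same decoupling after the fact.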
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = (torch.sum(features * weights)+ bias) activation (y) y1 = torch.mm(features, weights)+bias activation(y1) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. 
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. 
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights.resize_(5,1) y2 = torch.mm(features, weights)+bias activation(y2) weights.view(5,1) y3 = torch.mm(features, weights)+bias activation(y3) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code h = activation(torch.mm(features, W1)+B1) output = activation(torch.mm(h, W2)+B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors.
To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a # The memory is shared between the Numpy array and Torch tensor!!! ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features*weights)+bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features,weights.view(5,1))+bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) y = activation((features * weights).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features*weights)+bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1))+ bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
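One conversion detail worth knowing: `torch.from_numpy()` preserves the array's dtype, and NumPy defaults to `float64` while most PyTorch code works in `float32`, so an explicit cast is often needed. A minimal sketch (variable names are illustrative):

```python
import numpy as np
import torch

a = np.random.rand(4, 3)   # NumPy defaults to float64
b = torch.from_numpy(a)    # the dtype carries over
print(b.dtype)             # torch.float64

b32 = b.float()            # cast to float32; casting copies, so no memory sharing
b32.mul_(2)                # in-place change on the cast tensor does not touch `a`
print(b32.dtype)           # torch.float32
```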
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors out1 = activation((features*weights).sum() + bias) out2 = activation(torch.matmul(features, weights.t()) + bias).item() assert out1.item() == out2, 'Two methods are not equal' ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication #see above ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here out_oneliner = activation(activation(features.mm(W1) + B1).mm(W2) + B2) out1 = activation(torch.mm(features, W1) + B1) out2 = activation(torch.mm(out1, W2) + B2) assert out_oneliner == out2, 'Oops...' print(out_oneliner.item()) ###Output 0.3170831501483917 ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3).astype(np.float32) a b = torch.from_numpy(a) b, b.dtype b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) 
and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
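The tensor ranks just described can be illustrated directly in PyTorch (a supplementary sketch, not part of the original notebook; the variable names are made up for illustration):

```python
import torch

# One example tensor per rank described above
vector = torch.randn(5)          # 1-dimensional tensor: shape (5,)
matrix = torch.randn(3, 4)       # 2-dimensional tensor: shape (3, 4)
image = torch.randn(3, 28, 28)   # 3-dimensional tensor, e.g. an RGB image

# .ndim gives the number of dimensions, .shape the size along each one
print(vector.ndim, matrix.ndim, image.ndim)  # 1 2 3
print(matrix.shape)                          # torch.Size([3, 4])
```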
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights = weights.view(5,1) activation(torch.mm(features,weights) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. 
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here activation_1 = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(activation_1, W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
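One caveat worth noting here (a supplementary sketch, not part of the original notebook): NumPy defaults to 64-bit floats, and `torch.from_numpy()` preserves that dtype, while PyTorch models typically work in `float32`, so an explicit conversion is sometimes needed:

```python
import numpy as np
import torch

a = np.random.rand(4, 3)          # NumPy default dtype: float64
b = torch.from_numpy(a)           # dtype is preserved by the conversion
print(b.dtype)                    # torch.float64

# Convert on the NumPy side (or call b.float()) to get PyTorch's usual float32
b32 = torch.from_numpy(a.astype(np.float32))
print(b32.dtype)                  # torch.float32
```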
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors h = torch.sum(features*weights)+bias y = activation(h) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1))+bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden_outputs = activation(torch.mm(features, W1)+B1) y = activation(torch.mm(hidden_outputs, W2)+B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ ![tensor_examples.svg](attachment:3cc27866-bfe7-44f6-9137-4744bcce107b.svg) TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) print(features) # True weights for our data, random normal variables again weights = torch.randn_like(features) print(weights) # and a true bias term bias = torch.randn((1, 1)) print (bias) ###Output tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) tensor([[0.3177]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors print (activation(torch.sum(features * weights) + bias)) print (activation((features * weights).sum() + bias)) print (activation(torch.mm(features, weights.view(5,1)) + bias)) ###Output tensor([[0.1595]]) tensor([[0.1595]]) tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. 
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. 
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication print (activation(torch.mm(features, weights.view(5,1)) + bias)) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h1 = activation(torch.mm(features, W1) + B1) h2 = activation(torch.mm(h1, W2) + B2) print(h1) print(h2) ###Output tensor([[0.6813, 0.4355]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! 
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code features weights features*weights ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features*weights)+ bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5, 1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here output1 = activation(torch.mm(features, W1) + B1) output2 = activation(torch.mm(output1, W2) + B2) output2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors outs = activation(torch.sum(features * weights) + bias) outs ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication outs = activation(torch.mm(features, weights.view(5,-1)) + bias) print(outs) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here A1 = activation(torch.mm(features, W1) + B1) outs = activation(torch.mm(A1, W2) + B2) print(outs) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural Networks. Deep Learning is based on artificial neural networks, which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) and then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i + b \right)\end{align}$$ With vectors this is the dot/inner product of two vectors: $$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors. It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, and an array with three indices is a 3-dimensional tensor (RGB color images, for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data.
Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and a standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits, though, such as GPU acceleration, which we'll get to later. For now, use the generated data to calculate the output of this simple single-layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033``` As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view). * `weights.reshape(a, b)` will sometimes return a new tensor with the same data as `weights` with size `(a, b)`, and sometimes a clone, as in it copies the data to another part of memory. * `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that the method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch. * `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, torch.t(weights)) + bias) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. The first layer, shown on the bottom here, contains the inputs and is understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated as $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \\ \vdots & \vdots \\ w_{n1} & w_{n2}\end{bmatrix}$$ The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply as $$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weight and bias parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back. Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4, 3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and the Torch tensor, so if you change the values in place on one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation((features * weights).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features,weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation((torch.mm(features,W1) + B1)) output = activation(torch.mm(h,W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation((features * weights).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden = activation(torch.mm(features, W1) + bias) activation(torch.mm(hidden, W2) + ) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons". Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.dot(features[0], weights[0]) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights = weights.reshape(5, 1) output = activation(torch.mm(features, weights) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here output2 = activation(torch.mm(activation(torch.mm(features,W1)+B1),W2) + B2) output2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors h1 = torch.mm(features, weights.T) b1 = torch.sum(h1 + bias) activation(b1) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication h1 = torch.mm(features, weights.T) b1 = torch.sum(h1 + bias) activation(b1) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here print(features.shape) print(W1.shape, B1.shape) h1 = activation(torch.mm(features, W1) + B1) print(h1.shape) print(W2.shape, B2.shape) h2 = activation(torch.mm(h1, W2) + B2) print(h2) ###Output torch.Size([1, 3]) torch.Size([3, 2]) torch.Size([1, 2]) torch.Size([1, 2]) torch.Size([2, 1]) torch.Size([1, 1]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y_hat = activation(torch.sum(features * weights) + bias) print(y_hat) y_hat = activation((features * weights).sum() + bias) print(y_hat) ###Output tensor([[0.1595]]) tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. 
Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
###Code ## Calculate the output of this network using matrix multiplication y_hat = activation(torch.mm(features, weights.view(5, 1)) + bias) print(y_hat) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1)+B1) # hidden layer y_hat = activation(torch.mm(h, W2) + B2) # output layer print(y_hat) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
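Before tackling the exercise, here is a minimal sketch (with small made-up tensors) of the Numpy-style elementwise arithmetic mentioned above:

```python
import torch

x = torch.tensor([[1.0, 2.0, 3.0]])
w = torch.tensor([[0.5, 0.5, 0.5]])

print(x + w)                 # elementwise addition
print(x * w)                 # elementwise (Hadamard) product
print((x * w).sum().item())  # sum of the products: 3.0
```

The `*` operator multiplies element by element, so `(x * w).sum()` is the same linear combination a single neuron computes before the bias and activation are applied.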
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5, 1)) + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here L1 = activation(torch.matmul(features, W1) + B1) L2 = activation(torch.matmul(L1, W2) + B2) print(L2) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code weights ## Calculate the output of this network using the weights and bias tensors Wx = torch.sum(features*weights) + bias output = activation(Wx) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication Wx = torch.mm(features, weights.view(5, 1)) + bias activation(Wx).squeeze() ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here W1x = torch.mm(features, W1) f1 = activation(W1x + B1) W2x = torch.mm(f1, W2) f2 = activation(W2x + B2) f2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
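One detail worth knowing before converting arrays: `torch.from_numpy()` keeps the Numpy array's dtype, so a default Numpy array of doubles becomes a `float64` tensor rather than PyTorch's usual `float32`. A small sketch (with hypothetical values):

```python
import numpy as np
import torch

a = np.zeros(3)              # Numpy defaults to float64
b = torch.from_numpy(a)      # the dtype is preserved across the conversion

print(b.dtype)               # torch.float64
print(torch.zeros(3).dtype)  # torch.float32 (PyTorch's default)
```

When mixing converted arrays with tensors created in PyTorch, a mismatch like this is a common source of dtype errors; `b.float()` would cast the converted tensor to `float32` (at the cost of no longer sharing memory with `a`).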
###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation( torch.sum( features * weights ) + bias ) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation( torch.matmul( features, weights.T ) + bias ) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h1 = activation( torch.mm( features, W1 ) + B1 ) activation( torch.mm( h1, W2 ) + B2 ) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
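The in-place behavior described above can be contrasted with out-of-place operations, which allocate a new tensor and leave the shared buffer untouched. A small sketch with made-up values:

```python
import numpy as np
import torch

a = np.ones(3)
b = torch.from_numpy(a)  # b shares memory with a

c = b * 2                # out-of-place: returns a new tensor, a is unchanged
print(a)                 # [1. 1. 1.]

b.add_(1)                # in-place (note the underscore): writes through to a
print(a)                 # [2. 2. 2.]
```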
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) y = activation((features * weights).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication print(features.shape, weights.shape) y = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output torch.Size([1, 5]) torch.Size([1, 5]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
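As a small sketch of the three reshaping options discussed above (the tensors here are made up for illustration; only the shapes matter):

```python
import torch

w = torch.arange(6.)             # tensor([0., 1., 2., 3., 4., 5.])

# .view() and .reshape() both present the same six values as (2, 3)
print(w.view(2, 3).shape)        # torch.Size([2, 3])
print(w.reshape(2, 3).shape)     # torch.Size([2, 3])

# .resize_() modifies the tensor itself; the trailing underscore marks it as in-place
v = torch.arange(6.)
v.resize_(3, 2)
print(v.shape)                   # torch.Size([3, 2])
```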
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) # or, using the tensor's .sum() method: y = activation((features * weights).sum() + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. 
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. 
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features,W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! 
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch !pip install torch torchvision import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(weights*features) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features,weights.view(5,1))+bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here y = activation(torch.mm(activation(torch.mm(features,W1)+B1),W2)+B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
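For instance, a quick sketch of a tensor of each rank (the shapes here are chosen arbitrarily for illustration):

```python
import torch

vector = torch.randn(5)          # 1-D tensor: 5 values
matrix = torch.randn(3, 4)       # 2-D tensor: 3 rows, 4 columns
image = torch.randn(3, 28, 28)   # 3-D tensor: e.g. a 3-channel 28x28 image

print(vector.dim(), matrix.dim(), image.dim())  # → 1 2 3
```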
The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`.
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors Y = activation(torch.sum(features*weights) + bias) print(Y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication Y = activation(torch.mm(features,weights.view(5,1)) + bias) print(Y) ###Output tensor([[0.1595]]) ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) print(n_input, W1, W2, B1, B2) ###Output 3 tensor([[-1.1143, 1.6908], [-0.8948, -0.3556], [ 1.2324, 0.1382]]) tensor([[-1.6822], [ 0.3177]]) tensor([[0.1328, 0.1373]]) tensor([[0.2405]]) ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here Y = activation(torch.mm(activation(torch.mm(features,W1) + B1), W2) + B2) x = torch.tensor([0.7598]) print(activation(x),torch.mm(features,W1) + B1, activation(torch.mm(features,W1) + B1), Y) h = activation(torch.mm(features,W1) + B1) Y = activation(torch.mm(h, W2) + B2) Y ###Output tensor([0.6813]) tensor([[ 0.7598, -0.2596]]) tensor([[0.6813, 0.4355]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks.
It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
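To make the rank and shape terminology above concrete, here is a minimal sketch (assuming only that PyTorch is installed) creating one tensor of each rank mentioned:

```python
import torch

vector = torch.randn(5)          # 1-D tensor (a vector), shape (5,)
matrix = torch.randn(2, 3)       # 2-D tensor (a matrix), shape (2, 3)
image = torch.randn(3, 28, 28)   # 3-D tensor, e.g. a 3-channel 28x28 image

# .shape (a torch.Size, which behaves like a tuple) reports the dimensions
print(vector.shape, matrix.shape, image.shape)
```

The number of entries in `.shape` is the tensor's rank, which is exactly what separates the vector, matrix, and image examples in the paragraph above.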
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
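Before tackling the exercise, a quick toy sketch (with a made-up tensor, not the exercise data) of the two sum forms mentioned in the hint:

```python
import torch

t = torch.tensor([[1., 2., 3.]])

print(torch.sum(t))    # function form: sums all elements
print(t.sum())         # method form: identical result
print(t.sum(dim=1))    # sum along a dimension, here across the row
```

Both forms reduce the tensor to a total of 6.0; the `dim` argument is only needed when you want per-row or per-column sums rather than a grand total.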
###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) y = activation((features * weights).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
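Before writing a solution, it can help to sanity-check that the shapes chain correctly. This sketch re-creates the tensors from the cell above (same seed and sizes) and verifies each matrix multiplication step:

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 3))
W1 = torch.randn(3, 2)   # n_input x n_hidden
W2 = torch.randn(2, 1)   # n_hidden x n_output

# (1, 3) x (3, 2) -> (1, 2): columns of the left operand match rows of the right
h = torch.mm(features, W1)
# (1, 2) x (2, 1) -> (1, 1): the hidden activations feed the single output unit
out = torch.mm(h, W2)
print(h.shape, out.shape)
```

If either pairing were mismatched, `torch.mm` would raise the same size-mismatch `RuntimeError` shown earlier, so checking shapes first is a cheap way to catch wiring mistakes.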
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.31708315]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks.
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
###Code y = activation(torch.sum(features * weights) + bias); y ###Output _____no_output_____ ###Markdown Calculate the output of this network using the weights and bias tensors You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code y = activation(torch.mm(features,weights.T) + bias); y ###Output _____no_output_____ ###Markdown Calculate the output of this network using matrix multiplication Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here a1 = activation((features @ W1) + B1) output = activation((a1 @ W2) + B2); output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.reshape(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(hidden, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn(features.shape) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn(features.shape)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
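Before the exercise, here is a tiny standalone sketch (with hypothetical tensors) of the Numpy-style elementwise arithmetic just mentioned:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0])
w = torch.tensor([10.0, 20.0, 30.0])

print(x + w)          # elementwise sum: tensor([11., 22., 33.])
print(x * w)          # elementwise product: tensor([10., 40., 90.])
print((x * w).sum())  # multiply then sum, i.e. a dot product: tensor(140.)
```

The multiply-then-sum pattern at the end is exactly the linear combination the network computes for each unit.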
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features*weights)+bias) print(output) ###Output 0.1595 [torch.FloatTensor of size 1x1] ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(-1,1))+bias) print(output) ###Output 0.1595 [torch.FloatTensor of size 1x1] ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here layer1 = activation(torch.mm(features, W1)+B1) y=activation(torch.mm(layer1, W2)+B2) print(y) ###Output 0.3171 [torch.FloatTensor of size 1x1] ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a a[0,0]=0 print(b) a=np.random.rand(4,2) id(a) a=a*2 id(a) torch_a=torch.from_numpy(a) id(torch_a) torch_a=torch_a*2 id(torch_a) ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks.
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
###Code ## Calculate the output of this network using the weights and bias tensors y = activation((features * weights).sum() + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(features.mm(weights.view(5, 1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here hidden = activation(features.mm(W1) + B1) y = activation(hidden.mm(W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
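The `torch.manual_seed(7)` comment ("so things are predictable") can be verified directly; this small sketch (independent of the cell above) shows that re-seeding the generator reproduces the exact same random tensor:

```python
import torch

torch.manual_seed(7)
a = torch.randn((1, 5))

torch.manual_seed(7)   # reset the generator to the same state
b = torch.randn((1, 5))

print(torch.equal(a, b))  # the two draws are identical element for element
```

This is why everyone running this notebook with seed 7 sees the same numbers in the outputs below.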
Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) y = activation((features * weights).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error ```python >> torch.mm(features, weights) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) in () ----> 1 torch.mm(features, weights) RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033 ``` As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view). * `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`: sometimes a view of the same memory, and sometimes a clone that copies the data to another part of memory. * `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.reshape(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here y1 = activation(torch.mm(features, W1) + B1) y2 = activation(torch.mm(y1, W2) + B2) y2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables # this are the input variables (x1 to x5) features = torch.randn((1, 5)) print(f"inputs(x) {features}") # True weights for our data, random normal variables again weights = torch.randn_like(features) print(f"weights(y) {weights}") # and a true bias term bias = torch.randn((1, 1)) print(f"bias {bias}") ###Output inputs(x) tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) weights(y) tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) bias tensor([[0.3177]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. 
For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors print(activation((features * weights).sum() + bias)) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error ```python >> torch.mm(features, weights) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) in () ----> 1 torch.mm(features, weights) RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033 ``` As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication.
Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view). * `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`: sometimes a view of the same memory, and sometimes a clone that copies the data to another part of memory. * `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch. * `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication.
###Code ## Calculate the output of this network using matrix multiplication prod = torch.mm(features, weights.view(5,1)) print(activation(torch.add(prod, bias))) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden_in = torch.mm(features, W1) + B1 # linear combination feeding the hidden layer print(hidden_in) output_in = torch.mm(activation(hidden_in), W2) + B2 # linear combination feeding the output unit print(output_in) output = activation(output_in) print(output) ###Output tensor([[ 0.7598, -0.2596]]) tensor([[-0.7672]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,2) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights)+bias) ##y = torch.mm(features, weights) y x = features.shape z = weights.shape print("Shape of x =",x) print("Shape of z =",z) ###Output Shape of x = torch.Size([1, 5]) Shape of z = torch.Size([1, 5]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights.
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape.
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1))+bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
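As a side note on the reshaping methods discussed earlier, NumPy's `reshape` has the same view-when-possible behavior as `torch.reshape`, which makes the shared-memory caveat easy to demonstrate (a minimal NumPy sketch; the array values are illustrative and not part of the original notebook):

```python
import numpy as np

w = np.arange(5.0)       # shape (5,), contiguous in memory
col = w.reshape(5, 1)    # contiguous case: a view sharing w's buffer

col[0, 0] = 99.0         # writing through the view...
print(w[0])              # ...is visible in the original array -> 99.0
```

For a non-contiguous array, `reshape` would silently return a copy instead, which is exactly the "sometimes a clone" caveat mentioned for `torch.reshape`.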
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1)+B1) output = activation(torch.mm(h, W2)+B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section!
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs.
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = torch.sum(features * weights) + bias print(features) print(weights.view(5, 1)) print(bias) print(output) y = activation(output) print(y) ###Output tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) tensor([[-0.8948], [-0.3556], [ 1.2324], [ 0.1382], [-1.6822]]) tensor([[0.3177]]) tensor([[-1.6619]]) tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.resize_(5, 1)) + bias) print(y) y = activation(torch.matmul(features, weights.reshape(5, 1)) + bias) print(y) ###Output tensor([[0.1595]]) tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
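As a side note on the trailing-underscore in-place convention mentioned earlier, NumPy draws the same distinction between out-of-place and in-place arithmetic: `a * 2` allocates a new array, while `a *= 2` mutates the existing buffer, analogous to torch's `mul()` versus `mul_()` (a small illustrative sketch, not part of the original notebook):

```python
import numpy as np

a = np.ones(3)
b = a * 2          # out-of-place: allocates a new array, a is unchanged
a *= 2             # in-place: mutates a's own buffer, like torch's mul_()

print(a[0], b[0])  # -> 2.0 2.0
```

In-place updates matter for the Numpy-to-Torch bridge shown later: only mutations of the shared buffer are visible through both objects.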
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section!
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
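The broadcasting that `torch.matmul()` supports works over leading batch dimensions. NumPy's `matmul` follows the same rules, so the effect can be sketched without the notebook's seeded tensors (the shapes below are illustrative):

```python
import numpy as np

batch = np.random.rand(3, 1, 5)  # three (1, 5) row vectors stacked in a batch
mat = np.random.rand(5, 2)       # one (5, 2) matrix, broadcast across the batch

out = np.matmul(batch, mat)      # each (1, 5) row is multiplied by the same matrix
print(out.shape)                 # -> (3, 1, 2)
```

`mm`, by contrast, only accepts plain 2-D operands, which is why it fails outright on mismatched shapes instead of broadcasting.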
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.t()) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here activation( activation(features.mm(W1) + B1).mm(W2) + B2 ) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`.
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation((weights * features).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
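Checking `.shape` before an operation is the quickest way to catch a size mismatch; the NumPy analogue (a sketch, not the PyTorch call itself) behaves the same way:

```python
import numpy as np

features = np.random.randn(1, 5)
weights = np.random.randn(1, 5)
print(features.shape, weights.shape)  # (1, 5) (1, 5) -- incompatible for a matrix product

# Reshaping the weights into a (5, 1) column makes the product well-defined
column = weights.reshape(5, 1)
print((features @ column).shape)  # (1, 1)
```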
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(weights, features.T) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here activation(torch.mm(activation(torch.mm(features, W1)+B1), W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) features.shape[1] ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.mm(features, weights.reshape(features.shape[1], 1)) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights.
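The claim that a matrix multiplication reproduces the multiply-and-sum result can be checked directly; a small NumPy sketch (the torch version is analogous):

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((1, 5))
weights = rng.standard_normal((1, 5))

# Elementwise multiply, then sum...
elementwise = (features * weights).sum()
# ...equals the (1, 5) x (5, 1) matrix product
matmul = (features @ weights.reshape(5, 1)).item()

print(np.isclose(elementwise, matmul))  # True
```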
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note: To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.**There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape.
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.reshape(features.shape[1], 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation.
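That layer-by-layer computation can be sketched end to end in plain NumPy (a sketch; the sizes match the example network below, and `sigmoid` stands in for the `activation` helper):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(7)
x = rng.standard_normal((1, 3))    # one sample, three input features
W1 = rng.standard_normal((3, 2))   # input -> hidden weights
b1 = rng.standard_normal((1, 2))
W2 = rng.standard_normal((2, 1))   # hidden -> output weights
b2 = rng.standard_normal((1, 1))

h = sigmoid(x @ W1 + b1)           # hidden layer activations, shape (1, 2)
y = sigmoid(h @ W2 + b2)           # network output, shape (1, 1)
print(h.shape, y.shape)
```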
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h=activation(torch.mm(features,W1)+B1) output=activation(torch.mm(h,W2)+B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section!
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown **The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.** ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch !pip3 install torch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation((features*weights).sum()+bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features,weights.view(5,1))+bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features,W1)+B1) output = activation(torch.mm(h,W2)+B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section!
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features*weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
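Returning to the `.view()` / `.reshape()` distinction above, a small sketch (my addition) showing that `.view()` returns a tensor sharing the same underlying storage, while `.reshape()` on a non-contiguous tensor (such as a transpose) has to copy the data:

```python
import torch

w = torch.zeros(1, 6)
v = w.view(6, 1)       # new shape, same underlying storage
v[0, 0] = 42.0         # writing through the view...
print(w[0, 0])         # ...is visible in the original tensor

t = w.view(2, 3).t()   # a transpose is non-contiguous...
c = t.reshape(6)       # ...so here .reshape() must copy the data
c[0] = -1.0
print(w[0, 0])         # the original is unaffected by writes to the copy
```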
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
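Before writing the solution, it helps to check that the shapes line up for the two matrix multiplications. A sketch of the shape bookkeeping (my addition; it only verifies dimensions, without the activation function):

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 3))
W1 = torch.randn(3, 2)   # input -> hidden weights
W2 = torch.randn(2, 1)   # hidden -> output weights

h = torch.mm(features, W1)   # (1, 3) @ (3, 2) -> (1, 2)
y = torch.mm(h, W2)          # (1, 2) @ (2, 1) -> (1, 1)
print(h.shape, y.shape)
```

If either `mm` call raises a size-mismatch error, one of the weight matrices is transposed relative to what the layer expects.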
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
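To illustrate the broadcasting that `torch.matmul()` supports (a sketch I've added, not part of the original lesson): with a batch of inputs, the same weight matrix is applied to every element of the batch, whereas `torch.mm()` only accepts strictly 2-D arguments:

```python
import torch

batch = torch.randn(4, 1, 5)        # a batch of four (1, 5) feature rows
weights = torch.randn(5, 1)

out = torch.matmul(batch, weights)  # weights broadcast across the batch dimension
print(out.shape)                    # one (1, 1) result per batch element

# torch.mm() accepts only strictly 2-D tensors, so it cannot do this directly
```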
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like:$$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example).
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later.
For now, use the generated data to calculate the output of this simple single layer network.> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor.
Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication.
###Code ## Calculate the output of this network using matrix multiplication print(features.shape) print(weights.shape) y = activation(torch.mm(features, weights.view(5,1)) + bias) print(y) ###Output torch.Size([1, 5]) torch.Size([1, 5]) tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features print(n_input) n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output 3 ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs.
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors h = torch.sum(features*weights)+bias output = activation(h) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication print(weights) features.shape[1] weights_s = weights.reshape(features.shape[1], features.shape[0]) print(weights_s) h_s = torch.mm(features, weights_s) print(h_s) output_s = activation(h_s+bias) print(output_s) y=activation(torch.mm(features, weights.view(5,1))+bias) y ###Output tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) tensor([[-0.8948], [-0.3556], [ 1.2324], [ 0.1382], [-1.6822]]) tensor([[-1.9796]]) tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \\ \vdots & \vdots \\ w_{n1} & w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) #print(features) #print(W1.view(-1, features.shape[0])) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h_h = activation(torch.mm(features, W1) + B1) #print(h_h) output_h = activation(torch.mm(h_h, W2) + B2) output_h ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section!
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
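The shapes just described matter for everything that follows, so it can help to verify them directly; a small sketch repeating the generation code with the same seed:

```python
import torch

torch.manual_seed(7)            # same seed as above, so the values match
features = torch.randn((1, 5))  # one row, five columns
weights = torch.randn_like(features)
bias = torch.randn((1, 1))

print(features.shape)  # torch.Size([1, 5])
print(weights.shape)   # torch.Size([1, 5]) -- randn_like copies the shape
print(bias.shape)      # torch.Size([1, 1])
```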
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) # y = activation((features * weights).sum() + bias) #features #weights #bias #y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a tensor with the same data as `weights` and size `(a, b)`; sometimes this is a view of the same memory, and sometimes a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors h = torch.sum(features * weights) + bias y = activation(h) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.t()) + bias) h = activation(torch.mm(features, weights.view(5, 1)) + bias) print(y) print(h) ###Output tensor([[0.1595]]) tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h1 = torch.mm(features, W1) + B1 y1 = activation(h1) print(y1) h2 = torch.mm(y1, W2) + B2 output = activation(h2) print(output) ###Output tensor([[0.6813, 0.4355]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors h = torch.sum(features * weights) output = activation(h + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication yhat = activation(torch.mm(features, weights.t()) + bias) print(yhat) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h_output = activation(torch.mm(features, W1) + B1) yh = activation(torch.mm(h_output, W2) + B2) print(yh) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
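The intro paragraph above mentions that PyTorch makes it simple to move tensors to GPUs. As a minimal sketch (not part of the original notebook), this is typically done with `Tensor.to()`, falling back to the CPU when no GPU is visible:

```python
import torch

# Tensors are created on the CPU by default
t = torch.randn(2, 3)

# Pick a GPU if one is available, otherwise stay on the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# .to() returns a copy of the tensor on the target device;
# the original tensor stays where it was
t_dev = t.to(device)
print(t_dev.device)
```

The same `.to(device)` pattern works for whole models later on, which is why it is the idiomatic way to write device-agnostic code.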
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
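As a quick standalone check of the `activation` function defined in the cell above (the snippet redefines it so it runs on its own), the sigmoid squashes any real input into the interval (0, 1) and is monotonically increasing:

```python
import torch

def activation(x):
    """Sigmoid activation function, same as the cell above."""
    return 1 / (1 + torch.exp(-x))

# Probe the sigmoid at a few points from strongly negative to strongly positive
x = torch.tensor([-10.0, -1.0, 0.0, 1.0, 10.0])
out = activation(x)
print(out)  # values increase and stay strictly between 0 and 1
```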
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output torch.Size([1, 3]) ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here y1 = activation(torch.mm(features, W1) + B1) activation(torch.mm(y1, W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(weights * features) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.matmul(features, weights.transpose(0,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here activation((activation(features.matmul(W1) + B1)).matmul(W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a M = torch.Tensor([[[1, 2, 3], [4, 5, 6]],[[10, 20, 30], [40, 50, 60]]]) print(M) M.transpose(1,0) ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks.
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(weights * features) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = torch.mm(features, W1) + B1 y = activation(torch.mm(activation(h), W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
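That round trip, and the memory sharing it implies, can be sketched in a few lines (a minimal sketch; the array and tensor names here are my own):

```python
import numpy as np
import torch

# NumPy array -> Torch tensor: from_numpy shares memory with the array
a = np.ones((2, 3))
t = torch.from_numpy(a)

# An in-place tensor operation is therefore visible through the array
t.mul_(3)

# Torch tensor -> NumPy array: .numpy() also shares memory
back = t.numpy()
print(a[0, 0], back[0, 0])
```

Both printed values are 3.0: the array, the tensor, and the converted-back array are all views of the same buffer.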
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together **(a linear combination)** then passed through an **activation function** to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors**It turns out neural network computations are just a bunch of linear algebra operations** on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code print(features) print(weights) print(bias) ## Calculate the output of this network using the weights and bias tensors activation((features*weights).sum()+bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use **matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs**. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
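The difference between the two is mostly about dimensionality. A small sketch (the shapes here are mine, chosen to mirror the data above):

```python
import torch

m = torch.randn(1, 5)
w = torch.randn(5, 1)

# torch.mm is strictly 2-D: both arguments must be matrices
out_mm = torch.mm(m, w)                # shape (1, 1)

# torch.matmul additionally broadcasts batch dimensions
batch = torch.randn(4, 1, 5)           # four (1, 5) matrices stacked
out_batched = torch.matmul(batch, w)   # w is broadcast over the batch -> (4, 1, 1)

print(out_mm.shape, out_batched.shape)
```

For plain 2-D tensors like ours the two calls are interchangeable; `torch.mm` just fails loudly if the inputs aren't matrices, which is useful while learning.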
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in **any framework, you'll see this often**. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.add(torch.matmul(features, weights.view(5, 1)), bias)) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. **The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons**. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code torch.mm(features, W1)+B1 ## Your solution here activation(torch.mm(activation(torch.add(torch.mm(features, W1), B1)), W2)+B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, **the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.** Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(weights * features) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here hidden_layer = activation(torch.mm(features, W1) + B1) output_layer = activation(torch.mm(hidden_layer, W2) + B2) output_layer ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks.
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features*weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5,1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons.
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(weights*features) + bias) print(y) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
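Since `torch.matmul` is described above as supporting broadcasting, here is a quick illustrative sketch of what that buys you over `torch.mm` (the batch shapes below are made up for demonstration and do not appear in the notebook):

```python
import torch

# torch.mm only accepts two 2-D tensors, while torch.matmul
# broadcasts over leading (batch) dimensions.
batch = torch.randn(10, 1, 5)   # a batch of ten (1, 5) row vectors
mat = torch.randn(5, 3)

# mat is broadcast across all ten batch entries
out = torch.matmul(batch, mat)
print(out.shape)                # torch.Size([10, 1, 3])
```

With 2-D inputs, `torch.matmul` behaves exactly like `torch.mm`, so either works for the exercises in this notebook.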
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(features.matmul(weights.view(5,1))+bias) print(y) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here print(features.shape) print(W1.shape) H1 = activation(features.matmul(W1)+B1) print(H1) print(W2.shape) H2 = activation(H1.matmul(W2)+B2) print(H2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features*weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view) and [`torch.transpose(weights,0,1)`](https://pytorch.org/docs/master/generated/torch.transpose.html).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.* `torch.transpose(weights,0,1)` will return the transposed `weights` tensor, i.e. a transposed version of the input tensor along dim 0 and dim 1. This is convenient since we do not need to specify the actual dimensions of `weights`. I usually use `.view()`, but any of these methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. One more approach is to use `.t()` to transpose the vector of weights, in our case from `(1, 5)` to `(5, 1)` shape.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.matmul(features,torch.transpose(weights,0,1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation.
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.matmul(features,W1).add_(B1)) output = activation(torch.matmul(h,W2).add_(B2)) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section!
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code y = activation(torch.sum(features*weights)+bias) y ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code y = activation(torch.mm(features, weights.view(5, 1)) + bias) y ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code one = activation(torch.mm(features, W1) + B1) two = activation(torch.mm(one, W2) + B2) print(two) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structures for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`.
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.mm(features, torch.transpose(weights, 0, 1)) + bias) ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.reshape(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code layer1 = activation(torch.mm(features, W1) + B1) activation( torch.mm(layer1, W2) + B2 ) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
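One related caveat worth noting (not covered in the text above): `torch.from_numpy()` also preserves the array's dtype, and Numpy defaults to 64-bit floats while PyTorch usually works in 32-bit. Converting with `.float()` returns a copy, which also breaks the memory sharing:

```python
import numpy as np
import torch

a = np.random.rand(2, 2)          # numpy defaults to float64
t = torch.from_numpy(a)
print(t.dtype)                    # torch.float64 -- dtype is preserved

t32 = t.float()                   # .float() returns a float32 copy...
t32.mul_(2)                       # ...so mutating the copy
print(np.allclose(a, t.numpy()))  # True -- leaves the original array untouched
```

This is worth keeping in mind when feeding Numpy data into a network whose weights are float32.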
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
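A further caveat, beyond what the text covers: a tensor that requires gradients cannot be converted with `.numpy()` directly; you have to `detach()` it first, and the detached view still shares storage with the original tensor. A minimal sketch:

```python
import torch

t = torch.ones(3, requires_grad=True)
# t.numpy() would raise a RuntimeError here; detach first:
arr = t.detach().numpy()  # detach() drops the autograd graph but shares storage
arr[0] = 5.0              # editing the array...
print(t)                  # ...shows up in the tensor's data
```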
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(weights*features) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here y = activation(torch.matmul(activation(torch.matmul(features, W1) + B1), W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
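As a quick aside (an addition, not part of the original notebook): the hand-rolled `activation` function above is exactly the logistic sigmoid, so it should agree with PyTorch's built-in `torch.sigmoid`. A minimal sanity check:

```python
import torch

def activation(x):
    """Sigmoid activation function, as defined in the cell above."""
    return 1 / (1 + torch.exp(-x))

# Compare against PyTorch's built-in sigmoid on a range of values
x = torch.linspace(-5, 5, steps=11)
assert torch.allclose(activation(x), torch.sigmoid(x))
```

If this assertion passes, the manual implementation and the built-in agree to within floating-point tolerance.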
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation((features*weights).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
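To make that distinction concrete, here is a small sketch (an addition, not from the original notebook) of what "supports broadcasting" means in practice: `torch.matmul` happily handles batched inputs, while `torch.mm` only accepts two 2-D tensors.

```python
import torch

a = torch.randn(2, 1, 5)  # a stack of two (1, 5) matrices
b = torch.randn(5, 1)

# torch.matmul broadcasts over the leading batch dimension
out = torch.matmul(a, b)
assert out.shape == (2, 1, 1)

# torch.mm, in contrast, strictly requires two 2-D tensors
try:
    torch.mm(a, b)
except RuntimeError:
    print("torch.mm rejects tensors with more than 2 dimensions")
```

For the plain 2-D case in this notebook the two functions behave identically, so either works.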
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weight and bias parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication print(weights) y = activation(torch.mm(features, weights.view(5, 1)) + bias) print(y) ###Output tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. 
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weight and bias parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
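A short sketch (an addition, not from the original notebook) of what `torch.manual_seed` buys us: re-seeding the generator reproduces the same "random" draws, which is why the seed of 7 above makes every run of this notebook produce identical numbers.

```python
import torch

# Seed, draw, re-seed, draw again: the two draws are identical
torch.manual_seed(7)
first = torch.randn((1, 5))

torch.manual_seed(7)
second = torch.randn((1, 5))

assert torch.equal(first, second)
```

Without the re-seed in between, consecutive calls to `torch.randn` would of course return different values.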
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias); y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias); y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here layer1_output = activation(torch.mm(features, W1) + B1) layer2_output = activation(torch.mm(layer1_output, W2) + B2) layer2_output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weight and bias parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
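As a quick sanity check that `activation` behaves like a sigmoid should (a minimal, self-contained sketch — it redefines the function so the cell runs on its own):

```python
import torch

def activation(x):
    """ Sigmoid activation function """
    return 1/(1+torch.exp(-x))

# Sigmoid maps 0 to exactly 0.5 and squashes large inputs toward 0 or 1
print(activation(torch.tensor([-5.0, 0.0, 5.0])))
```

Any smooth squashing function would work as an activation here; the sigmoid is just the classic choice for this introduction.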
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code features * weights + bias ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
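As a small aside on what "supports broadcasting" means for `torch.matmul` (a sketch, separate from the exercise): leading dimensions are treated as batch dimensions, something the strictly 2-D `torch.mm` does not do.

```python
import torch

batch = torch.randn(10, 3, 4)  # ten 3x4 matrices
mat = torch.randn(4, 5)        # one 4x5 matrix

# matmul broadcasts `mat` across the leading batch dimension
out = torch.matmul(batch, mat)
print(out.shape)  # torch.Size([10, 3, 5])
```

For the plain 2-D case in this notebook, the two functions behave the same.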
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.t()) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code W1.shape ## Your solution here H1 = activation(torch.mm(features, W1) + B1) activation(torch.mm(H1, W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
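The converse is worth noting too: out-of-place operations allocate a new tensor, so they leave the Numpy array untouched (a small self-contained sketch):

```python
import numpy as np
import torch

a = np.ones(3)
b = torch.from_numpy(a)

# In-place op: memory is shared, so `a` sees the change
b.add_(1)
print(a)          # [2. 2. 2.]

# Out-of-place op: `c` gets fresh memory, `a` is unaffected
c = b * 2
print(a)          # still [2. 2. 2.]
print(c.numpy())  # [4. 4. 4.]
```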
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`.
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here hidden_output = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(hidden_output, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
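Before moving on to matrix multiplication, the multiply-and-sum above can be checked against a true dot product. A small sketch, assuming 1-D views via `flatten()` (an approach the notebook itself doesn't use):

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)

a = torch.sum(features * weights)                     # element-wise multiply, then sum
b = torch.dot(features.flatten(), weights.flatten())  # dot product of 1-D views

# Both compute the same linear combination of inputs and weights
print(torch.allclose(a, b))  # True
```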
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5, 1)) + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! 
\left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here H = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(H, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. Altogether, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. Altogether, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. 
The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
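A quick sketch of what `torch.randn_like` gives you: a tensor matching the shape and dtype of its argument, filled with fresh standard-normal samples.

```python
import torch

features = torch.randn((1, 5))
weights = torch.randn_like(features)

# Shape and dtype are copied from the argument; values are new random draws
print(weights.shape)                    # torch.Size([1, 5])
print(weights.dtype == features.dtype)  # True
```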
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features*weights) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
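To illustrate the broadcasting remark, a minimal sketch (the shapes here are chosen purely for illustration, not taken from the notebook): `torch.matmul` accepts batched inputs and broadcasts the second operand across the batch dimension, while `torch.mm` is strictly 2-D.

```python
import torch

batch = torch.randn(10, 1, 5)  # a batch of ten (1, 5) matrices
mat = torch.randn(5, 3)        # one (5, 3) matrix

# matmul broadcasts mat across the leading batch dimension
out = torch.matmul(batch, mat)
print(out.shape)  # torch.Size([10, 1, 3])

# mm only accepts plain 2-D matrices and rejects batched input
try:
    torch.mm(batch, mat)
except RuntimeError as err:
    print("torch.mm failed:", err)
```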
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.matmul(weights, features.view(5,1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here output1 = activation(torch.matmul(features, W1) + B1) output = activation(torch.matmul(output1, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. Altogether, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors print('features:\n{}\n\nweights:\n{}\n\nbias:\n{}\n\n'.format(features[0],weights[0],bias[0])) #output = torch.dot(features[0],weights[0])+bias[0] output = torch.sum(features*weights)+bias prediction = activation(output) print('output:\t\t{}\nprediction:\t{}'.format(output[0],prediction[0])) ###Output features: tensor([-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]) weights: tensor([-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]) bias: tensor([0.3177]) output: tensor([-1.6619]) prediction: tensor([0.1595]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication print('features:\n{}\n\n'.format(features)) print('weights:\n{}:\n{}\n'.format(weights.shape,weights)) weights_reshaped = weights.view(5,1) print('weights reshaped:\n{}:\n{}\n\n'.format(weights_reshaped.shape,weights_reshaped)) print('bias:\n{}\n\n'.format(bias)) output = torch.mm(features,weights_reshaped)+bias prediction = activation(output) print('output:\t\t{}\nprediction:\t{}'.format(output,prediction)) ###Output features: tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) weights: torch.Size([1, 5]): tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) weights reshaped: torch.Size([5, 1]): tensor([[-0.8948], [-0.3556], [ 1.2324], [ 0.1382], [-1.6822]]) bias: tensor([[0.3177]]) output: tensor([[-1.6619]]) prediction: tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. 
The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here print('--- FEATURES ---') print('* inputs\n{}\n{}\n'.format(features.shape,features)) print('--- HIDDEN LAYER ---') print('* weights\n{}\n{}'.format(W1.shape,W1)) print('* bias\n{}\n{}\n'.format(B1.shape,B1)) # hidden layer calculation hidden_layer_result = activation( torch.mm(features,W1)+B1 ) print('* output\n{}\n{}\n'.format(hidden_layer_result.shape,hidden_layer_result)) print('--- OUTPUT LAYER ---') print('* weights\n{}\n{}'.format(W2.shape,W2)) print('* bias\n{}\n{}'.format(B2.shape,B2)) # output layer calculation output_layer_result = activation(torch.mm(hidden_layer_result,W2)+B2) print('* output\n{}\n{}\n'.format(output_layer_result.shape,output_layer_result)) print('--- RESULT ---') print(output_layer_result[0]) ###Output --- FEATURES --- * inputs torch.Size([1, 3]) tensor([[-0.1468, 0.7861, 0.9468]]) --- HIDDEN LAYER --- * weights torch.Size([3, 2]) tensor([[-1.1143, 1.6908], [-0.8948, -0.3556], [ 1.2324, 0.1382]]) * bias torch.Size([1, 2]) tensor([[0.1328, 0.1373]]) * output torch.Size([1, 2]) tensor([[0.6813, 0.4355]]) --- OUTPUT LAYER --- * weights torch.Size([2, 1]) tensor([[-1.6822], [ 0.3177]]) * bias torch.Size([1, 1]) tensor([[0.2405]]) * output torch.Size([1, 1]) tensor([[0.3171]]) --- RESULT --- tensor([0.3171]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch # !pip install torch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
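Multiplying element-wise and then summing is the same dot product as multiplying a `(1, 5)` row by a `(5, 1)` column. A dependency-free sketch with plain lists (the values are illustrative, and the naive `matmul` helper is my own, not a PyTorch function):

```python
def matmul(A, B):
    """Naive matrix multiply for 2-D lists: (n, k) @ (k, m) -> (n, m)."""
    n, k, m = len(A), len(B), len(B[0])
    assert len(A[0]) == k, "inner dimensions must match"
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

features = [[0.2, -1.0, 0.5, 0.3, -0.7]]   # shape (1, 5)
weights  = [[1.1,  0.4, -0.6, 0.9,  0.2]]  # shape (1, 5)

# Reshape weights from (1, 5) to (5, 1), like weights.view(5, 1)
weights_col = [[w] for w in weights[0]]

# Element-wise multiply then sum, vs. the (1, 5) x (5, 1) matrix product
elementwise = sum(f * w for f, w in zip(features[0], weights[0]))
product = matmul(features, weights_col)[0][0]
assert abs(elementwise - product) < 1e-12
```

The same equivalence is why `torch.sum(features * weights)` and `torch.mm(features, weights.view(5, 1))` give the same answer in the exercises below.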
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y_hat = activation( torch.mm(weights, features.T) + bias ) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y_hat = activation( torch.mm(features, weights.T) + bias ) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output torch.Size([3, 2]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch import numpy def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) print(y) ###Output tensor([[ 0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) print(y) ###Output tensor([[ 0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden = activation(torch.mm(features, W1) +B1) y = activation(torch.mm(hidden, W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch # http://pytorch.org/ from os.path import exists from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag()) cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/' accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu' !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data. Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays.
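As a quick illustration of that point (a sketch with made-up values, not one of the notebook's exercises), elementwise arithmetic on tensors mirrors Numpy, including scalar broadcasting:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([10.0, 20.0, 30.0])

print(x + y)   # elementwise sum: tensor([11., 22., 33.])
print(x * y)   # elementwise product: tensor([10., 40., 90.])
print(x - 1)   # broadcasting a scalar: tensor([0., 1., 2.])
```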
In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(weights * features) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error ```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often.
What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be checking shapes often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`, sometimes sharing memory with the original tensor and sometimes as a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication.
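Before writing the solution, it can help to sanity-check the shapes involved. This is a small self-contained sketch that re-creates `features` and `weights` with the same seed used earlier in the notebook:

```python
import torch

torch.manual_seed(7)                   # same seed as in the notebook
features = torch.randn((1, 5))
weights = torch.randn_like(features)

print(features.shape)                                # torch.Size([1, 5])
print(weights.view(5, 1).shape)                      # torch.Size([5, 1])
print(torch.mm(features, weights.view(5, 1)).shape)  # torch.Size([1, 1])
```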
###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$ The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights)+bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.matmul(features,weights.view((5,1)))+bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. 
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.matmul(features, W1)+B1) y = activation(torch.matmul(h,W2)+B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors result=torch.sum(torch.matmul(weights.view(5,1),features) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication result=torch.matmul(weights.view(5,1),features) result ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) print(weights) print(bias) ###Output tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) tensor([[0.3177]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code # LOIS # Calculate the output of this network using the weights and bias tensors # multiply the features and the weights result = torch.matmul(weights, features.t()) # add the bias result = result + bias # use activation function to get the output print(activation(result)) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
###Code # LOIS # trial of the three options print("weights.reshape(a, b)") print(weights.reshape(5,1)) print("\nweights.resize_(a, b)") print(weights.clone().resize_(5,1)) # clone first so weights itself keeps its (1, 5) shape print("\nweights.view(a, b)") print(weights.view(5,1)) ## Calculate the output of this network using matrix multiplication # multiply the features and the weights result = torch.matmul(weights, features.view(5,1)) # add the bias result = result + bias # use activation function to get the output print(activation(result)) ###Output tensor([[0.1595]]) ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (above) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # 1 x 3 # Define the size of each layer in our network # Number of input units, must match number of input features # features.shape returns a torch.Size - [1, 3] # so we can get the number of input units with the second element thus n_input = features.shape[1] n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # size = [3, 2] # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # size = [2, 1] # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code features.shape ## Your solution here h1 = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h1, W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features*weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structures for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features*weights)+bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1))+bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden = activation(torch.mm(features, W1)+B1) output = activation(torch.mm(hidden, W2)+B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`, sometimes as a view and sometimes as a clone (that is, it copies the data to another part of memory).* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.T) + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here y1 = activation(torch.mm(features, W1) + B1) y2 = activation(torch.mm(y1, W2) + B2) print(y2) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
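If you ever want an independent copy rather than a shared view, make the copy explicit. A minimal sketch (the variable names `shared` and `copied` are my own, not from the notebook):

```python
import numpy as np
import torch

a = np.random.rand(4, 3)

shared = torch.from_numpy(a)  # shares memory with `a`
copied = torch.tensor(a)      # copies the data into new memory

copied.mul_(2)  # modify only the copy, in place

# `a` is untouched; only `shared` still points at the same buffer
print(np.shares_memory(a, shared.numpy()))  # True
print(np.shares_memory(a, copied.numpy()))  # False
```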
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`, sometimes as a view and sometimes as a clone (that is, it copies the data to another part of memory).* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights = weights.reshape(5, 1) output = activation(torch.mm(features, weights) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons.
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here l0 = activation(torch.mm(features, W1) + B1) l1 = activation(torch.mm(l0, W2) + B2) l1 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
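The trailing underscore is the part that makes the write visible on the Numpy side; ordinary operators allocate a new tensor and leave the array alone. A small sketch contrasting the two (my own example, not from the notebook):

```python
import numpy as np
import torch

a = np.ones((2, 2))
b = torch.from_numpy(a)  # `b` shares memory with `a`

c = b * 2   # out-of-place: allocates a new tensor, `a` is unchanged
b.add_(1)   # in-place: writes through the shared buffer into `a`

print(a)                               # all elements now 2.0
print(np.shares_memory(a, c.numpy()))  # False: `c` has its own storage
```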
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`, sometimes as a view and sometimes as a clone (that is, it copies the data to another part of memory).* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
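One round-trip detail worth knowing (a quick check I've added, not part of the original exercise): `torch.from_numpy` preserves Numpy's default 64-bit float dtype, whereas tensors created directly in PyTorch default to 32-bit floats.

```python
import numpy as np
import torch

a = np.random.rand(4, 3)        # Numpy defaults to float64
b = torch.from_numpy(a)         # the dtype carries over unchanged

print(a.dtype)                  # float64
print(b.dtype)                  # torch.float64
print(torch.randn(2, 2).dtype)  # torch.float32, PyTorch's default
```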
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is tensors, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`.
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) output print(f"Is CUDA supported by this system? {torch.cuda.is_available()}") print(f"CUDA version: {torch.version.cuda}") # Storing ID of current CUDA device cuda_id = torch.cuda.current_device() print(f"ID of current CUDA device: {torch.cuda.current_device()}") print(f"Name of current CUDA device: {torch.cuda.get_device_name(cuda_id)}") ###Output Is CUDA supported by this system? True CUDA version: 10.2 ID of current CUDA device: 0 Name of current CUDA device: NVIDIA GeForce RTX 2060 ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. 
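A quick way to diagnose such size-mismatch errors is to print the shapes before multiplying — a small sketch, reusing the tensors generated above:

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)

# Inspecting shapes before calling torch.mm() makes the mismatch obvious:
# (1, 5) x (1, 5) is not a valid matrix product.
print(features.shape)  # torch.Size([1, 5])
print(weights.shape)   # torch.Size([1, 5])
```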
What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will sometimes return a new tensor with the same data as `weights` with size `(a, b)`, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5,1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden_in = torch.mm(features, W1) hidden_out = activation(hidden_in + B1) output_in = torch.mm(hidden_out, W2) output_out = activation(output_in + B2) output_out ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(weights * features) + bias) features features.view(5, 1) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code hidden_output = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(hidden_output, W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here first_activation = activation(torch.mm(features, W1) + B1) second_activation = activation(torch.mm(first_activation, W2) + B2) print(first_activation) print(second_activation) ###Output tensor([[0.6813, 0.4355]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code
## Calculate the output of this network using the weights and bias tensors
w_sum = torch.sum(features * weights)
y = activation(w_sum + bias)
y
###Output
_____no_output_____
###Markdown
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and are accelerated by modern libraries and high-performance computing on GPUs.

Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error

```python
>> torch.mm(features, weights)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
 in ()
----> 1 torch.mm(features, weights)

RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```

As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.

**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
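The shape rule above can be checked directly. This sketch uses NumPy arrays, whose `.shape` attribute and matrix-multiplication rule match PyTorch's, so the same mismatch shows up:

```python
# Demonstrate the matrix-multiplication shape rule: (1, 5) @ (1, 5) fails,
# while (1, 5) @ (5, 1) produces a (1, 1) result. Shown with NumPy, which
# follows the same columns-must-match-rows rule as torch.mm().
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((1, 5))
weights = rng.standard_normal((1, 5))

print(features.shape, weights.shape)    # both (1, 5)

try:
    features @ weights                  # columns of first (5) != rows of second (1)
except ValueError as err:
    print("shape mismatch:", err)

out = features @ weights.reshape(5, 1)  # (1, 5) @ (5, 1) -> (1, 1)
print(out.shape)
```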
If you're building neural networks, you'll be using this method often.

There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).

* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.

I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.

> **Exercise**: Calculate the output of our little network using matrix multiplication.
###Code
## Calculate the output of this network using matrix multiplication
y = activation(torch.mm(features, weights.view(5, 1)) + bias)
y
###Output
_____no_output_____
###Markdown
## Stack them up!

That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.

The first layer, shown on the bottom here, consists of the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated

$$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \\ \vdots & \vdots \\ w_{n1} & w_{n2}\end{bmatrix}$$

The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply as

$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$
###Code
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable

# Features are 3 random normal variables
features = torch.randn((1, 3))

# Define the size of each layer in our network
n_input = features.shape[1]   # Number of input units, must match number of input features
n_hidden = 2                  # Number of hidden units
n_output = 1                  # Number of output units

# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)

# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
###Output
_____no_output_____
###Markdown
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
###Code
## Your solution here
res_input = activation(torch.mm(features, W1) + B1)
res_output = activation(torch.mm(res_input, W2) + B2)
res_output
###Output
_____no_output_____
###Markdown
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.

The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.

## Numpy to Torch and back

Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code
import numpy as np
a = np.random.rand(4,3)
a
b = torch.from_numpy(a)
b
b.numpy()
###Output
_____no_output_____
###Markdown
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)
# Numpy array matches new values from Tensor
a
###Output
_____no_output_____
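When that coupling is unwanted, an independent copy breaks it. The sharing behavior can be sketched entirely within NumPy, since a NumPy view shares memory with its source the same way `torch.from_numpy()` does (on the PyTorch side, `tensor.clone()` plays the analogous role):

```python
# A view shares memory with its source, so in-place changes propagate;
# .copy() makes an independent array that does not.
# (With PyTorch tensors, tensor.clone() likewise breaks the coupling.)
import numpy as np

a = np.arange(6, dtype=np.float64).reshape(2, 3)

shared = a.view()   # shares memory with a, like torch.from_numpy(a)
copied = a.copy()   # independent storage

shared *= 2         # in-place change is visible through a ...
print(a)

copied *= 10        # ... but changes to the copy are not
print(a)
print(copied)
```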
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors wx = features * weights y = wx.sum() + bias print(activation(y)) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication wx = torch.matmul(features, weights.view(5,1)) y = wx + bias print(activation(y)) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! 
\left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code import numpy as np 1/(1+np.exp(-2)) # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation((features * weights).sum() + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features,weights.view(features.shape[1], features.shape[0])) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! 
\left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) features.shape, W1.shape ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here H1 = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(H1, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code !pip install torch # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features*weights)+bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1))+bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
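The three reshaping options discussed above can be compared on a small tensor — a sketch, not from the original notebook; for a contiguous tensor, `view` (and here `reshape`) share storage with the original, while `resize_` mutates the tensor itself:

```python
import torch

w = torch.arange(10.)   # tensor([0., 1., ..., 9.])

v = w.view(2, 5)        # new tensor object, same underlying data
r = w.reshape(5, 2)     # same data here (w is contiguous); may copy in general

v[0, 0] = 99.           # writing through the view...
print(w[0])             # ...is visible in the original: tensor(99.)

# resize_ changes w itself (trailing underscore = in-place)
w.resize_(2, 5)
print(w.shape)          # torch.Size([2, 5])
```

Because the element count never changes here, no data is dropped or left uninitialized by `resize_`.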
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here W1.shape,W2.shape, B1.shape, B2.shape h = activation(torch.mm(features, W1)+B1) y_pred = activation(torch.mm(h, W2)+B2) y_pred ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks.
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation((features * weights).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) activation(torch.mm(h, W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation((features * weights).sum() + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.T) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here Z1 = torch.mm(features, W1) + B1 A1 = activation(Z1) Z2 = torch.mm(A1, W2) + B2 A2 = activation(Z2) A2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
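The single-layer computation just described — a weighted sum of the inputs plus a bias, passed through the sigmoid — can be sketched framework-agnostically in Numpy. The small numbers below are hypothetical, not the `torch.randn` data above; the torch version is exactly what the exercise asks for:

```python
import numpy as np

def sigmoid(x):
    # Same formula as the `activation` function above: 1 / (1 + exp(-x))
    return 1 / (1 + np.exp(-x))

# Hypothetical inputs, weights, and bias for illustration only
x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -0.5, 0.25])
b = 0.1

# Weighted sum of inputs plus bias, then the activation
y = sigmoid(np.sum(x * w) + b)
print(y)  # a value between 0 and 1
```

The torch solution has the same shape: an elementwise product, a sum, an added bias, and `activation()` around the whole thing.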
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
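The shapes produced by the generation code above can be illustrated with Numpy's standard-normal sampler (a stand-in for `torch.randn` / `torch.randn_like`; the seed value here is arbitrary and only shows the idea of making the run reproducible, as `torch.manual_seed(7)` does above):

```python
import numpy as np

rng = np.random.default_rng(7)  # fixed seed, analogous to torch.manual_seed(7)

features = rng.standard_normal((1, 5))          # one row, five columns, mean 0 / std 1
weights = rng.standard_normal(features.shape)   # same shape, like randn_like(features)
bias = rng.standard_normal((1, 1))              # a single value

print(features.shape, weights.shape, bias.shape)  # (1, 5) (1, 5) (1, 1)
```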
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
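The vector/matrix/3-D-array terminology just introduced maps directly onto array dimensionality. A quick Numpy illustration (the shapes are arbitrary examples; `ndim` plays the same role for torch tensors via `tensor.dim()`):

```python
import numpy as np

vector = np.zeros(3)           # 1-dimensional tensor
matrix = np.zeros((3, 4))      # 2-dimensional tensor
image = np.zeros((32, 32, 3))  # 3-dimensional tensor, e.g. an RGB image

print(vector.ndim, matrix.ndim, image.ndim)  # 1 2 3
```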
The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`.
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation_inputs = torch.matmul(features, weights.T) + bias activation(activation_inputs) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons.
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here hidden_inputs = torch.mm(features, W1) + B1 hidden_outputs = activation(hidden_inputs) final_inputs = torch.mm(hidden_outputs, W2) + B2 activation(final_inputs) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
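Before the PyTorch version, the single-unit equation above can be sketched in plain NumPy. This is a standalone illustration, not part of the original notebook, and the input, weight, and bias values are made up:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real number into the interval (0, 1)
    return 1 / (1 + np.exp(-x))

x = np.array([0.5, -1.2, 3.0])   # example inputs (made up)
w = np.array([0.1, 0.4, -0.2])   # example weights (made up)
b = 0.05                         # example bias (made up)

# y = f(sum_i w_i x_i + b), the equation from the Neural Networks section
y = sigmoid(np.dot(w, x) + b)
```

Because the sigmoid is monotonic, a negative weighted sum lands the output below 0.5, and a positive one above it.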
###Code # http://pytorch.org/ from os.path import exists from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag()) cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/' accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu' !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
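The role of `torch.manual_seed(7)` above can be illustrated with a NumPy analogue: two generators created from the same seed produce identical draws, which is what makes the notebook's random tensors reproducible. (`np.random.default_rng` is NumPy's generator API, used here only for illustration.)

```python
import numpy as np

# Two generators seeded identically produce identical draws
rng_a = np.random.default_rng(7)
rng_b = np.random.default_rng(7)

draw_a = rng_a.standard_normal((1, 5))
draw_b = rng_b.standard_normal((1, 5))

same = np.array_equal(draw_a, draw_b)  # True: same seed, same values
```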
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes it shares the underlying data, and sometimes it returns a clone, copying the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights = weights.view(5, 1) y = activation(torch.sum(torch.mm(features, weights)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes it shares the underlying data, and sometimes it returns a clone, copying the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights_T = weights.view(weights.shape[1], 1) sigmoid_output = activation(torch.mm(features, weights_T) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. 
The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here W1_T = torch.t(W1) B1_T = torch.t(B1) W2_T = torch.t(W2) B2_T = torch.t(B2) features_c = torch.t(features) Output = activation(torch.mm(W2_T, activation(torch.mm(W1_T,features_c) + B1_T)) + B2_T ) Output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a b = torch.from_numpy(a) b ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes it shares the underlying data, and sometimes it returns a clone, copying the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features*weights) + bias) # Element-wise multiplication print(output) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features,weights.view(5,1)) + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here layer_1 = activation(torch.mm(features,W1) + B1) out = activation(torch.mm(layer_1, W2) + B2) print(out) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
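The Numpy/Torch memory sharing shown in the bonus section above can be demonstrated end to end. Here is a minimal, self-contained sketch (not part of the original notebook; it assumes standard `numpy` and `torch` installs):

```python
import numpy as np
import torch

a = np.ones(3)                  # Numpy array: [1. 1. 1.]
b = torch.from_numpy(a)         # Torch tensor sharing a's memory
b.mul_(2)                       # in-place multiply on the tensor...
print(a)                        # ...is visible in the Numpy array: [2. 2. 2.]

c = torch.from_numpy(a.copy())  # copy first when sharing is unwanted
c.mul_(10)
print(a)                        # unchanged: still [2. 2. 2.]
```

Only the in-place (trailing-underscore) operations behave this way; an out-of-place call like `b * 2` allocates a new tensor and leaves the shared buffer untouched.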
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) print(y) mat = torch.mm(features, weights.reshape(5,1)) y = activation(torch.sum(mat) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. 
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. 
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication mat = torch.mm(features, weights.reshape(5,1)) y = activation(torch.sum(mat) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. 
To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) features # True weights for our data, random normal variables again weights = torch.randn_like(features) weights # and a true bias term bias = torch.randn((1, 1)) bias ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features*weights) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation( torch.mm(features, weights.view(5,1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden_layer = activation(torch.mm(features,W1)+B1) output = activation(torch.mm(hidden_layer,W2)+B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
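One nuance worth knowing up front: `torch.from_numpy()` shares memory with the source array rather than copying it. A minimal sketch of how to opt out of that sharing when you need an independent copy (the variable names here are illustrative):

```python
import numpy as np
import torch

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = torch.from_numpy(a)          # shares memory with `a`

# Two ways to get an independent copy instead:
c = b.clone()                    # copy on the Torch side
d = torch.from_numpy(a.copy())   # copy on the Numpy side

b.mul_(2)                        # in-place change propagates to `a` ...
print(a[0, 0])                   # 2.0
print(c[0, 0].item())            # ... but not to the copies: still 1.0
```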
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) weights ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
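The claim that tensors support Numpy-style arithmetic is easy to check directly (a minimal sketch with hand-picked values, separate from the randomly generated data above):

```python
import torch

x = torch.tensor([[1.0, 2.0, 3.0]])
w = torch.tensor([[0.5, 0.5, 0.5]])

print(x + w)          # element-wise addition: [[1.5, 2.5, 3.5]]
print(x * w)          # element-wise multiplication: [[0.5, 1.0, 1.5]]
print((x * w).sum())  # sum of the products: 3.0
```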
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code weights.shape features.shape ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. 
What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
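A quick way to validate a matrix-multiplication solution is to compare it against the element-wise multiply-and-sum version from earlier; with the reshape applied, the two agree (a minimal sketch reusing the seed from above):

```python
import torch

def activation(x):
    """Sigmoid activation, as defined earlier in the notebook."""
    return 1 / (1 + torch.exp(-x))

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)
bias = torch.randn((1, 1))

# Element-wise multiply-and-sum vs. reshaped matrix multiplication
y_sum = activation(torch.sum(features * weights) + bias)
y_mm = activation(torch.mm(features, weights.view(5, 1)) + bias)

print(torch.allclose(y_sum, y_mm))  # True
```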
###Code ## Calculate the output of this network using matrix multiplication z = torch.mm(features, weights.view(5, 1)) + bias a = activation(z) a ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here z1 = torch.mm(features, W1) + B1 a1 = activation(z1) z2 = torch.mm(a1, W2) + B2 a2 = activation(z2) a2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
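The `activation` function above is the logistic sigmoid, which PyTorch also ships as a built-in; a quick consistency check (a minimal sketch):

```python
import torch

def activation(x):
    """Sigmoid activation: 1 / (1 + exp(-x))."""
    return 1 / (1 + torch.exp(-x))

x = torch.linspace(-5, 5, steps=11)
print(torch.allclose(activation(x), torch.sigmoid(x)))  # True
print(activation(torch.tensor(0.0)).item())             # 0.5 -- sigmoid is centered at 0
```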
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors h = torch.sum(features*weights, axis=1) + bias y = activation(h) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication h = torch.mm(features, weights.view(5, 1)) + bias y = activation(h) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) W1 ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
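Before checking the value, it helps to confirm the shapes line up at each step; tracking them through the two matrix multiplications is a useful habit (a minimal sketch; activations are omitted here since they do not change shapes, and the weights are generated independently of the cells above):

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 3))
W1, B1 = torch.randn(3, 2), torch.randn((1, 2))
W2, B2 = torch.randn(2, 1), torch.randn((1, 1))

h = torch.mm(features, W1) + B1  # (1, 3) @ (3, 2) -> (1, 2)
y = torch.mm(h, W2) + B2         # (1, 2) @ (2, 1) -> (1, 1)
print(h.shape, y.shape)
```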
###Code ## Your solution here h1 = torch.mm(features, W1) + B1 y1 = activation(h1) h2 = torch.mm(y1, W2) + B2 y2 = activation(h2) y2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) features, weights, bias ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) B1 ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code W1 torch.mm(features, W1)+B1 ## Your solution here L1_output = activation(torch.mm(features,W1) + B1) L2_output = activation(torch.mm(L1_output,W2) + B2) L2_output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
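One practical aside (not part of the original lesson): `torch.from_numpy()` preserves the array's dtype. NumPy creates 64-bit floats by default, while PyTorch tensors default to `float32`, so it often pays to cast before converting. A minimal sketch:

```python
import numpy as np
import torch

a = np.random.rand(4, 3)                     # NumPy defaults to float64
b = torch.from_numpy(a)                      # dtype is preserved: torch.float64
c = torch.from_numpy(a.astype(np.float32))   # cast first when float32 is wanted

print(b.dtype, c.dtype)
```

Mixing `float64` data with `float32` weights is a common source of dtype errors later on, which is why the cast is worth doing up front.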
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code torch.set_printoptions(precision=10) b # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a atensor =torch.tensor([1,2,3]) atensor nparray = np.array([1,2,3]) nparray np.multiply(nparray,atensor) torch.mul(torch.from_numpy(nparray),atensor) %pastebin 67-82 ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch import pandas as pd import numpy as np def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code y = activation(torch.mm(features,weights.view(5,1))+bias) y ## Calculate the output of this network using the weights and bias tensors print("feats: ",features.shape) print("weights: ", weights.shape) test = np.matmul(features,weights.T) output = activation(torch.mm(features,weights.T)+bias) v = output.numpy()[0] v ###Output feats: torch.Size([1, 5]) weights: torch.Size([1, 5]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
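The broadcasting that `torch.matmul` supports follows the same rules as NumPy's `matmul`: leading "batch" dimensions are broadcast while the last two dimensions are matrix-multiplied. A minimal NumPy sketch (the shapes here are just illustrative):

```python
import numpy as np

batch = np.ones((4, 2, 3))   # a "batch" of four 2x3 matrices
single = np.ones((3, 2))     # one 3x2 matrix

# matmul broadcasts `single` across the batch dimension;
# a strict 2-D multiply like torch.mm would reject these shapes.
out = np.matmul(batch, single)
print(out.shape)  # (4, 2, 2)
```

For plain 2-D tensors, as in this lesson, `torch.mm` and `torch.matmul` behave the same way.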
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code print("feats: ",features.shape) print("W1: ", W1.shape) output_layer_1 = activation(torch.mm(features,W1)+B1) output_layer_2 = activation(torch.mm(output_layer_1,W2)+B2) v = output_layer_2.numpy()[0] v output_layer_2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs.
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features*weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
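The claim above — that multiplying elementwise and summing gives the same result as one matrix multiplication — is easy to check numerically. A small sketch using NumPy with made-up data (not the tensors from the lesson):

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((1, 5))
weights = rng.standard_normal((1, 5))

elementwise = (features * weights).sum()   # multiply then sum
as_matmul = (features @ weights.T)[0, 0]   # single matrix multiplication

print(np.isclose(elementwise, as_matmul))  # True
```

The `(1, 5) @ (5, 1)` product is exactly the dot product of the two row vectors, which is why the two numbers agree up to floating-point rounding.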
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view) and [`torch.transpose(weights,0,1)`](https://pytorch.org/docs/master/generated/torch.transpose.html).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.* `torch.transpose(weights,0,1)` will return the transposed `weights` tensor. This returns a transposed version of the input tensor along dim 0 and dim 1. This is convenient since we do not need to specify the actual dimensions of `weights`.I usually use `.view()`, but any of these methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.One more approach is to use `.t()` to transpose the vector of weights, in our case from (1,5) to (5,1) shape.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.matmul(features,torch.transpose(weights,0,1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation.
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.matmul(features,W1).add_(B1)) output = activation(torch.matmul(h,W2).add_(B2)) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section!
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs.
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(weights*features)+bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication a,b = weights.shape y = activation(torch.mm(features, weights.view(b,a))+ bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! 
\left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden_layer = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(hidden_layer, W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
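One caveat worth illustrating before the conversion code: as noted further on, `torch.from_numpy()` shares memory with the source array. A minimal sketch (the array values here are chosen just for the demonstration) of how to get an independent tensor with `.clone()` when you don't want that sharing:

```python
import numpy as np
import torch

a = np.ones((2, 3))
b = torch.from_numpy(a)          # shares memory with `a`
c = torch.from_numpy(a).clone()  # independent copy of the data

b.mul_(2)        # in-place change on the tensor is visible through `a`
print(a[0, 0])   # the numpy array now holds 2.0
print(c[0, 0])   # the clone still holds the original 1.0
```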
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
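As a quick sanity check of the `activation` function defined above: it computes the same values as PyTorch's built-in `torch.sigmoid()`, so the hand-written version is just for transparency. A small sketch:

```python
import torch

def activation(x):
    """Sigmoid activation function, as defined above."""
    return 1 / (1 + torch.exp(-x))

x = torch.tensor([-1.0, 0.0, 1.0])
print(activation(x))       # hand-written sigmoid
print(torch.sigmoid(x))    # built-in equivalent, same values
```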
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors print('features', features) print('weights', weights) print('bias', bias) print() print('weighted', torch.mul(features, weights)) print('summed', torch.mul(features, weights).sum()) print('biased', torch.mul(features, weights).sum() + bias) print() print('activated', activation(torch.mul(features, weights).sum() + bias)) ###Output features tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) weights tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) bias tensor([[0.3177]]) weighted tensor([[ 0.1314, -0.2796, 1.1668, -0.1540, -2.8442]]) summed tensor(-1.9796) biased tensor([[-1.6619]]) activated tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
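To make "supports broadcasting" concrete, here is a small sketch (reusing the seeded `features` and `weights` from above) contrasting the two functions: `torch.matmul()` accepts a 1-D second argument and performs a matrix-vector product, while `torch.mm()` only accepts 2-D tensors with compatible shapes, so it needs an explicit reshape.

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)

# matmul broadcasts: (1, 5) matrix times a 1-D (5,) vector -> shape (1,)
result = torch.matmul(features, weights.reshape(-1))
print(result.shape)

# mm requires strictly 2-D operands: (1, 5) @ (5, 1) -> shape (1, 1)
result_mm = torch.mm(features, weights.reshape(5, 1))
print(result_mm.shape)
```

Both calls compute the same weighted sum; only the output shape differs.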
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`: sometimes it shares memory with the original, and sometimes it is a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication print('features', features) print('weights', weights.view(-1, 1)) print('bias', bias) print() print('matmul', torch.mm(features, weights.view(-1, 1))) print('biased', torch.mm(features, weights.view(-1, 1)) + bias) print() print('activated', activation(torch.mm(features, weights.view(-1, 1)) + bias)) ###Output features tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) weights tensor([[-0.8948], [-0.3556], [ 1.2324], [ 0.1382], [-1.6822]]) bias tensor([[0.3177]]) matmul tensor([[-1.9796]]) biased tensor([[-1.6619]]) activated tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. 
We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here print('features') print(features) print('weight 1') print(W1) print('bias 1') print(B1) print('\n') a1 = activation(torch.mm(features, W1) + B1) print('layer 1 activation') print(a1) print('\n') print('weight 2') print(W2) print('bias 2') print(B2) print('\n') a2 = activation(torch.mm(a1, W2) + B2) print('output') print(a2) ###Output features tensor([[-0.1468, 0.7861, 0.9468]]) weight 1 tensor([[-1.1143, 1.6908], [-0.8948, -0.3556], [ 1.2324, 0.1382]]) bias 1 tensor([[0.1328, 0.1373]]) layer 1 activation tensor([[0.6813, 0.4355]]) weight 2 tensor([[-1.6822], [ 0.3177]]) bias 2 tensor([[0.2405]]) output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. 
PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch torch.exp?? 
def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) torch.randn_like? ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code weights weights.size() weights.view(5,1).size() weights.view(5,1) ## Calculate the output of this network using the weights and bias tensors activation(torch.matmul(features, weights.view(5,1)) + bias) y = activation(torch.matmul(features, weights.view(5,1)) + bias) y.size() ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`: sometimes it shares memory with the original, and sometimes it is a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`: sometimes it shares memory with the original, and sometimes it is a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5,1)) + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! 
\left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here output = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(output, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
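As a quick illustration of this Numpy-like behavior, here is a minimal sketch with hand-picked values (separate from the random `features` and `weights` tensors above):

```python
import torch

# Two small tensors with made-up values, just for illustration
x = torch.tensor([[1.0, 2.0, 3.0]])
y = torch.tensor([[10.0, 20.0, 30.0]])

print(x + y)          # elementwise addition: tensor([[11., 22., 33.]])
print(x * y)          # elementwise multiplication: tensor([[10., 40., 90.]])
print((x * y).sum())  # sum of all elements: tensor(140.)
print(x.shape)        # torch.Size([1, 3])
```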
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(weights*features) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5,1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structures for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors sum0 = torch.zeros((features.shape[0], features.shape[1] + 1)) for i in range(features.shape[1]): sum0[0][i] = (features[0][i] * weights[0][i]) sum0[0][features.shape[1]] = bias print(sum0.sum()) print(sum0.shape) ## Calculate the output of this network using the weights and bias tensors sum0 = torch.zeros_like(features) sum0 = torch.mul(features, weights) #sum0 = features.mul(weights) # another option print(torch.add(sum0.sum(), bias)) ###Output tensor([[-1.66190410]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. 
In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication sum = torch.add(torch.matmul(features, weights.view(5,1)), bias) #sum = torch.matmul(features, weights.T) # another solution print(sum) ###Output tensor([[-1.66190398]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here Z1 = activation(torch.add(torch.matmul(features, W1), B1)) #(1, n_hidden) Z2 = activation(torch.add(torch.matmul(Z1, W2), B2)) # (1, n_output) print(Z1, Z2) ###Output tensor([[0.68130499, 0.43545660]]) tensor([[0.31708315]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) 
and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) features weights bias ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code features.shape ## Calculate the output of this network using matrix multiplication activation(torch.matmul(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code features.shape ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here hidden = activation(torch.matmul(features, W1) + B1) output = activation(torch.matmul(hidden, W2) + B2) output hidden.shape ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1 / (1 + torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
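To make the mismatch concrete, here is a small illustrative sketch (using fresh local tensors, not the ones defined earlier in the notebook) showing the failing shapes and the `.view()` fix:

```python
import torch

# Both tensors are (1, 5): the inner dimensions (5 vs 1) don't line up,
# so torch.mm(features, weights) raises the size-mismatch error shown above.
features = torch.randn(1, 5)
weights = torch.randn_like(features)
print(features.shape, weights.shape)  # torch.Size([1, 5]) torch.Size([1, 5])

# Reshaping weights to (5, 1) makes the inner dimensions agree:
# (1, 5) x (5, 1) -> (1, 1)
out = torch.mm(features, weights.view(5, 1))
print(out.shape)  # torch.Size([1, 1])
```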
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y2 = activation(torch.mm(features, weights.view(5, 1)) + bias) y, y2 ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
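The two-layer equation above can also be wrapped in a small helper function. This is a sketch, not part of the notebook: the name `forward` is ours, and `torch.sigmoid` stands in for the hand-written `activation` (they compute the same function). Reusing the notebook's seed, with `randn` calls in the same order, should reproduce the expected result.

```python
import torch

def forward(x, w1, b1, w2, b2):
    """Two-layer forward pass: y = f2(f1(x W1 + B1) W2 + B2)."""
    hidden = torch.sigmoid(torch.mm(x, w1) + b1)
    return torch.sigmoid(torch.mm(hidden, w2) + b2)

# Reproduce the notebook's setup: same seed, same order of randn calls.
torch.manual_seed(7)
x = torch.randn(1, 3)
w1 = torch.randn(3, 2)
w2 = torch.randn(2, 1)
b1 = torch.randn(1, 2)
b2 = torch.randn(1, 1)

print(forward(x, w1, b1, w2, b2))  # the notebook's stated output is tensor([[0.3171]])
```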
###Code features.shape, W1.shape, W2.shape ## Your solution here internal_out = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(internal_out, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code

## Calculate the output of this network using the weights and bias tensors
y = activation(torch.sum(features * weights) + bias)

###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.

Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is more general and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error

```python
>> torch.mm(features, weights)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
 in ()
----> 1 torch.mm(features, weights)

RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```

As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.

**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
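Both formulations compute the same quantity; a quick check (with local tensors) confirms that the elementwise multiply-and-sum matches the reshaped matrix multiplication:

```python
import torch

torch.manual_seed(7)
features = torch.randn(1, 5)
weights = torch.randn_like(features)
bias = torch.randn(1, 1)

# Elementwise multiply, then sum over all elements...
a = torch.sum(features * weights) + bias
# ...equals a (1, 5) x (5, 1) matrix multiplication.
b = torch.mm(features, weights.view(5, 1)) + bias

print(torch.allclose(a, b))  # True
```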
If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).

* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes it is a view of the original tensor, and sometimes a clone that copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.

I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.

> **Exercise**: Calculate the output of our little network using matrix multiplication.

###Code

## Calculate the output of this network using matrix multiplication
y = activation(torch.mm(features, weights.view(5, 1)) + bias)

###Output _____no_output_____ ###Markdown Stack them up!

That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code

## Your solution here
h = activation(torch.mm(features, W1) + B1)
output = activation(torch.mm(h, W2) + B2)
output

###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.

The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back

Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.

###Code

import numpy as np
a = np.random.rand(4,3)
a

b = torch.from_numpy(a)
b

b.numpy()

###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.

###Code

# Multiply PyTorch Tensor by 2, in place
b.mul_(2)

# Numpy array matches new values from Tensor
a

###Output _____no_output_____ ###Markdown Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
###Code def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
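As a quick sanity check (an aside, not the exercise solution), the hand-written `activation` agrees with PyTorch's built-in `torch.sigmoid`:

```python
import torch

def activation(x):
    """Sigmoid, as defined in the cell above."""
    return 1 / (1 + torch.exp(-x))

x = torch.linspace(-5, 5, steps=11)
print(torch.allclose(activation(x), torch.sigmoid(x)))  # True
print(activation(torch.tensor(0.)))  # sigmoid(0) = 0.5
```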
###Code

## Calculate the output of this network using the weights and bias tensors
y = activation(torch.sum(features * weights) + bias)

###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.

Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is more general and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error

```python
>> torch.mm(features, weights)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
 in ()
----> 1 torch.mm(features, weights)

RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```

As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.

**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
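Returning to the "Numpy to Torch and back" section above: if you need a tensor that does *not* share memory with the source array, `torch.tensor()` copies the data. A small sketch of both behaviors:

```python
import numpy as np
import torch

a = np.ones((2, 2))

b = torch.from_numpy(a)   # shares memory with `a`
b.mul_(2)
print(a[0, 0])            # 2.0 -- changing b changed a too

c = torch.tensor(a)       # copies the data, breaking the link
c.mul_(10)
print(a[0, 0])            # still 2.0
```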
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors print(features.shape) print(weights.shape) output = features * weights print(output) output = torch.sum(output) + bias print(output) output = activation(output) print(output) ###Output torch.Size([1, 5]) torch.Size([1, 5]) tensor([[ 0.1314, -0.2796, 1.1668, -0.1540, -2.8442]]) tensor([[-1.6619]]) tensor([[ 0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. 
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. 
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = torch.mm(features, weights.view(-1, 1)) + bias print(output) output = activation(output) print(output) ###Output tensor([[-1.6619]]) tensor([[ 0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here output = activation(torch.mm(features, W1) + B1) print(output) print(output.shape) output = activation(torch.mm(output, W2) + B2) print(output) print(output.shape) ###Output tensor([[ 0.6813, 0.4355]]) torch.Size([1, 2]) tensor([[ 0.3171]]) torch.Size([1, 1]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. 
As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(h): """ Sigmoid activation function Arguments --------- h: torch.Tensor """ return 1/(1+torch.exp(-h)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) print('features:', features) # True weights for our data, random normal variables again weights = torch.randn_like(features) print('weights:', weights) # and a true bias term bias = torch.randn((1, 1)) print('bias:', bias) ###Output features: tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) weights: tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) bias: tensor([[0.3177]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors. print(features.shape) print(weights.shape) print(bias.shape) ###Output torch.Size([1, 5]) torch.Size([1, 5]) torch.Size([1, 1]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code # torch.mm is like W * x # weights.view(5,1) # torch.mm(features, weights.view(5,1)) torch.mm(features, weights.t()) + bias ## Calculate the output of this network using matrix multiplication test = torch.randn_like(features) print(test.shape) print(test) #t1 = test.view(6,2) #print(test) #print(test.shape) #print(t1.shape) #t2 = test.reshape(6,2) #print(test) #print(test.shape) #print(t2) #print(t2.shape) t3 = test.resize_(6,2) print(test) print(test.shape) print(t3.shape) print(t3) ###Output torch.Size([1, 5]) tensor([[ 1.8317, -0.5535, 1.0395, -1.2601, 0.4156]]) tensor([[ 1.8317, -0.5535], [ 1.0395, -1.2601], [ 0.4156, 0.0000], [ -15486880.0000, 0.0000], [-404982784.0000, 0.0000], [ 0.0000, 0.0000]]) torch.Size([6, 2]) torch.Size([6, 2]) tensor([[ 1.8317, -0.5535], [ 1.0395, -1.2601], [ 0.4156, 0.0000], [ -15486880.0000, 0.0000], [-404982784.0000, 0.0000], [ 0.0000, 0.0000]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code features = torch.randn((1, 3)) weights = torch.randn((3, 1)) torch.mm(features, weights) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # 3 x 2 # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # 2 x 1 # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) # 1 x 2 B2 = torch.randn((1, n_output)) # 1 x 1 ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h1 = activation(torch.mm(features, W1) + B1) print(h1) output = activation(torch.mm(h1, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h1 = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h1, W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`.
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features*weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output2 = activation(torch.mm(features, weights.view(5,1))+bias) output2 ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons.
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
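Before writing the solution, it can help to trace the shapes through the network. This sketch (re-creating the tensors with the same seed, and checking shapes only, without the activation) shows how each matrix multiplication lines up:

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 3))
W1, W2 = torch.randn(3, 2), torch.randn(2, 1)
B1, B2 = torch.randn((1, 2)), torch.randn((1, 1))

h = torch.mm(features, W1) + B1  # (1, 3) @ (3, 2) -> (1, 2)
y = torch.mm(h, W2) + B2         # (1, 2) @ (2, 1) -> (1, 1)
print(h.shape, y.shape)          # torch.Size([1, 2]) torch.Size([1, 1])
```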
###Code ## Your solution here hidden_layer = activation(torch.mm(features, W1) + B1) output_layer = activation(torch.mm(hidden_layer, W2) + B2) output_layer ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks.
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
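As a quick aside, the hand-rolled `activation` function should agree with PyTorch's built-in `torch.sigmoid`; a minimal self-contained check:

```python
import torch

def activation(x):
    """ Sigmoid activation function (same formula as above) """
    return 1/(1+torch.exp(-x))

x = torch.randn((1, 5))
print(torch.allclose(activation(x), torch.sigmoid(x)))  # True
```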
###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication reshaped_weights = weights.view(5, 1) activation(torch.mm(features, reshaped_weights) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons.
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch print(torch.__version__) def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors print(weights.shape) print(features.shape) print("Output:", activation(torch.sum(weights * features) + bias)) ###Output torch.Size([1, 5]) torch.Size([1, 5]) Output: tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights.
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape.
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication print("output:", activation(torch.matmul(features, weights.view(5,1)) + bias)) ###Output output: tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation.
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden = activation(torch.matmul(features, W1) + B1) output = activation(torch.matmul(hidden, W2) + B2) print("Output:", output) ###Output Output: tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section!
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b # I'm not creating a new tensor here: b points to the same memory as a b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs.
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) print(bias) ###Output tensor([[ 0.3177]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.mm(features, weights.reshape(5,1)) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
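As a quick aside (my addition, not part of the original notebook), the difference between `torch.mm()` and `torch.matmul()` can be sketched like this: `mm` insists on 2-D matrices, while `matmul` also handles 1-D vectors and broadcasts over batch dimensions.

```python
import torch

x = torch.ones(1, 5)  # 2-D: one row, five columns
w = torch.ones(5, 1)  # 2-D: five rows, one column

# torch.mm works only on 2-D matrices
print(torch.mm(x, w).shape)  # torch.Size([1, 1])

# torch.matmul also accepts 1-D tensors: a dot product returning a 0-D tensor
v = torch.tensor([1.0, 2.0, 3.0])
u = torch.tensor([4.0, 5.0, 6.0])
print(torch.matmul(v, u))  # tensor(32.)

# ...and broadcasts over leading batch dimensions
batch = torch.ones(10, 1, 5)  # a batch of ten (1, 5) matrices
print(torch.matmul(batch, w).shape)  # torch.Size([10, 1, 1])
```

In this notebook every tensor is a plain 2-D matrix, so `mm` and `matmul` are interchangeable here.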
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.reshape(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) ## (3,2) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) ## (2,1) print(W1, W2) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) print(B1, B2) ###Output tensor([[-1.1143, 1.6908], [-0.8948, -0.3556], [ 1.2324, 0.1382]]) tensor([[-1.6822], [ 0.3177]]) tensor([[ 0.1328, 0.1373]]) tensor([[ 0.2405]]) ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h1 = activation(torch.mm(features, W1.reshape(n_input, n_hidden)) + B1) output = activation(torch.mm(h1, W2.reshape(n_hidden, n_output)) + B2) print(output) ###Output tensor([[ 0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
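The "just like Numpy arrays" claim above can be checked directly. A short sketch (my addition, not part of the original notebook) of elementwise arithmetic and the two equivalent forms of summation:

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([10.0, 20.0, 30.0])

# Elementwise arithmetic, same semantics as Numpy arrays
print(a + b)   # tensor([11., 22., 33.])
print(a * b)   # tensor([10., 40., 90.])
print(b - a)   # tensor([ 9., 18., 27.])

# Summing: the torch.sum() function and the .sum() method are equivalent
print(torch.sum(a))  # tensor(6.)
print(a.sum())       # tensor(6.)
```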
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features*weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5, 1)) + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! 
\left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here layer_1 = activation(torch.mm(features, W1) + B1) # 1x2 layer_2 = activation(torch.mm(layer_1, W2) + B2) # 1x1 print(layer_2) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) print(output) ###Output tensor([[ 0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights = weights.view(5, 1) output = activation(torch.mm(features, weights) + bias) print(output) ###Output tensor([[ 0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here output = activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) print(output) ###Output tensor([[ 0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
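The two-layer forward pass from the exercise above can also be written with NumPy, which makes the shapes easy to check. This is an illustrative sketch with made-up fixed weights, not the notebook's random data:

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation, mirroring the notebook's `activation` function."""
    return 1 / (1 + np.exp(-x))

x = np.array([[1.0, -1.0, 0.5]])    # input, shape (1, 3)
W1 = np.full((3, 2), 0.1)           # input-to-hidden weights, shape (3, 2)
B1 = np.zeros((1, 2))               # hidden-layer bias, shape (1, 2)
W2 = np.full((2, 1), 0.1)           # hidden-to-output weights, shape (2, 1)
B2 = np.zeros((1, 1))               # output-layer bias, shape (1, 1)

hidden = sigmoid(x @ W1 + B1)       # shape (1, 2)
output = sigmoid(hidden @ W2 + B2)  # shape (1, 1)
print(hidden.shape, output.shape)
```

Each layer is the same pattern — matrix-multiply, add the bias, apply the activation — and only the weight shapes change from layer to layer.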
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
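Before the PyTorch version, the single unit described above — a weighted sum passed through an activation — can be sketched in plain Python; the numbers here are made up for illustration:

```python
import math

def sigmoid(x):
    # f(x) = 1 / (1 + e^(-x)), squashes any real number into (0, 1)
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights, bias):
    # y = f(sum_i w_i * x_i + b)
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

y = neuron(inputs=[0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
print(y)  # sigmoid(0.5*0.8 - 1.0*0.2 + 0.1) = sigmoid(0.3)
```

The tensor version below does exactly this, just vectorized: the loop over `w_i * x_i` becomes one multiply-and-sum over a whole row of weights at once.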
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) print("features = ", features) print("features_type = ", type(features), "\n") print("weights = ", weights) print("bias = ", bias) ###Output features = tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) features_type = <class 'torch.Tensor'> weights = tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) bias = tensor([[0.3177]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`.
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code print(torch.sum(features * weights)) print(torch.sum(features * weights) + bias) ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code torch.mm(features, weights.view(5,1)) ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5,1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) W1 ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h1h2 = torch.mm(features, W1) + B1 # no torch sum as we need a 1x2 (2D tensor) output f_h1h2 = activation(h1h2) h3 = torch.mm(f_h1h2, W2) + B2 f_h3 = activation(h3) # This is the output for this multilayer network print(f_h3) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) 
and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
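The dot-product form of the weighted sum shown above is easy to verify numerically. This NumPy sketch uses arbitrary values and also shows the row-times-column shape convention used throughout the notebook:

```python
import numpy as np

x = np.array([[0.2, -0.5, 1.0]])      # inputs as a row vector, shape (1, 3)
w = np.array([[0.4], [0.1], [-0.3]])  # weights as a column vector, shape (3, 1)

as_dot = x @ w            # matrix product of (1, 3) and (3, 1), shape (1, 1)
as_sum = (x * w.T).sum()  # element-wise multiply, then sum: sum_i w_i * x_i

print(as_dot[0, 0], as_sum)
```

Both expressions compute the same number; the matrix-product form is the one that generalizes to whole layers of units at once.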
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors torch.sum(activation(features @ weights.reshape(5,1) + bias)) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication torch.sum(activation(torch.mm(features, weights.view(5,1)) + bias)) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here layer1 = activation(torch.mm(features, W1) + B1) layer2 = activation(torch.mm(layer1, W2) + B2) layer2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
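The memory-sharing behavior described above for `torch.from_numpy()` has a NumPy-only analogue that is easy to try without PyTorch installed: a reshaped view shares memory with its base array, so an in-place change shows through both objects. This is an illustrative sketch:

```python
import numpy as np

a = np.ones((4, 3))
b = a.reshape(12)  # a view onto a's memory, not a copy

b *= 2             # in-place update through the view...
print(a[0, 0])     # ...is visible through the original array
```

`np.shares_memory(a, b)` confirms the two objects point at the same buffer, just as the Numpy array and the Torch tensor do after `torch.from_numpy()`.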
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
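As a quick sanity check (my addition, not part of the original notebook), the element-wise multiply-and-sum and the matrix-multiplication approaches give the same single-neuron output:

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)
bias = torch.randn((1, 1))

# Element-wise product followed by a sum...
out_sum = torch.sum(features * weights) + bias
# ...equals a (1 x 5) @ (5 x 1) matrix multiplication.
out_mm = torch.mm(features, weights.view(5, 1)) + bias
```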
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication h = torch.mm(features, weights.view(5, 1)) + bias activation(h) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) activation(torch.mm(h, W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features*weights) + bias ) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
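The broadcasting that `torch.matmul()` supports (and `torch.mm()` does not) can be sketched with a hypothetical batch dimension; the shapes below are illustrative additions of mine, not data from the notebook:

```python
import torch

batch = torch.randn(10, 1, 5)  # ten (1 x 5) feature rows stacked in a batch
w_col = torch.randn(5, 1)      # a single (5 x 1) weight column

# matmul broadcasts w_col across the batch dimension;
# torch.mm would raise an error here because it only accepts 2-D tensors.
out = torch.matmul(batch, w_col)
print(out.shape)  # torch.Size([10, 1, 1])
```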
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.matmul(features,weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.matmul(features, W1) + B1) y = activation(torch.matmul(h, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is tensors, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
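As a self-contained sketch of the shape check and fix described in the note above (re-creating the same `(1, 5)` `features` and `weights` tensors generated earlier with seed 7):

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 5))        # shape (1, 5)
weights = torch.randn_like(features)  # shape (1, 5)

# torch.mm((1, 5), (1, 5)) fails: the inner dimensions don't match.
# Checking shapes first makes the mismatch obvious:
print(features.shape, weights.shape)  # torch.Size([1, 5]) torch.Size([1, 5])

# Reshaping weights to (5, 1) makes the matrix multiplication valid:
out = torch.mm(features, weights.view(5, 1))
print(out.shape)  # torch.Size([1, 1])
```

Checking `tensor.shape` before a `torch.mm()` call is usually the fastest way to diagnose the size-mismatch error shown above.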
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.matmul(features, weights.view(5,1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. 
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here output = activation(torch.matmul(activation(torch.matmul(features, W1) + B1), W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. 
PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.mm(features, weights.view(5,1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication w_transpose = weights.view(5,1) intermediate = torch.mm(features,w_transpose) + bias result = activation(intermediate) print(result) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. 
The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hid1 = activation(torch.mm(features,W1) + B1) output = activation(torch.mm(hid1,W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.t()) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here activation(torch.mm(activation(torch.mm(features,W1) + B1), W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) b = torch.from_numpy(a) c = b.numpy() b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # creates a 2-d tensor with 1 row, 5 columns (row vector) # True weights for our data, random normal variables again weights = torch.randn_like(features) # takes another tensor with same shape as features # and a true bias term bias = torch.randn((1, 1)) # creates a 1x1 tensor -- bias term.
###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors temp = features * weights y = activation(temp.sum() + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights.
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape.
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! 
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) print(a, 'numpy version') torch.set_printoptions(precision=8) b = torch.from_numpy(a) print(b, 'pytorch version') b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs.
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
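The same kind of memory sharing exists within NumPy itself, where reshaping a contiguous array returns a view of the same buffer. As a quick pure-NumPy sketch of the idea (no PyTorch needed; the names `a` and `b` here are illustrative):

```python
import numpy as np

a = np.arange(6, dtype=float)   # [0, 1, 2, 3, 4, 5]
b = a.reshape(2, 3)             # b is a *view* of a's memory, not a copy

b *= 2                          # in-place multiply through the view...
print(a)                        # ...changes the original: a is now [0, 2, 4, 6, 8, 10]
```

`torch.from_numpy()` behaves analogously: the tensor and the array wrap the same memory, which is why the in-place `b.mul_(2)` below also changes `a`.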
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features*weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view) and [`torch.transpose(weights,0,1)`](https://pytorch.org/docs/master/generated/torch.transpose.html).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.* `torch.transpose(weights,0,1)` will return the `weights` tensor transposed along dim 0 and dim 1. This is convenient since we do not need to specify the actual dimensions of `weights`.I usually use `.view()`, but any of these methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.One more approach is to use `.t()` to transpose the weights tensor, in our case from `(1, 5)` to `(5, 1)` shape.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.matmul(features,torch.transpose(weights,0,1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.matmul(features,W1).add_(B1)) output = activation(torch.matmul(h,W2).add_(B2)) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) features.shape weights.shape torch.mm(features,weights.view(5,1))+bias ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ds = torch.mm(features, weights.view(5,1)) + bias ds.shape ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code activation(ds) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code h=activation(torch.mm(features,W1)+B1) h.shape,W2.shape outs=activation(torch.mm(h,W2)+B2) outs ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
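The shared-memory behavior described above can be sketched with explicit checks in both directions (a small illustration, not part of the original notebook; the variable names are chosen for the example):

```python
import numpy as np
import torch

arr = np.ones(3)
t = torch.from_numpy(arr)   # t shares memory with arr

t.mul_(2)                   # in-place multiply on the tensor...
print(arr)                  # ...is visible in the Numpy array: [2. 2. 2.]

arr += 1                    # and an in-place change to the array...
print(t)                    # ...shows up in the tensor as well
```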
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation (torch.sum(features * weights) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation (torch.mm(features, weights.view(5,1))+ bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation (torch.mm(features, W1)+B1) output = activation (torch.mm(h, W2)+B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features*weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
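As a quick sketch of that difference (an illustration under assumed shapes, not part of the original notebook): `torch.mm()` only accepts 2-D tensors, while `torch.matmul()` also broadcasts batch dimensions.

```python
import torch

a = torch.randn(1, 5)
b = torch.randn(5, 1)

# Both work for a plain 2-D matrix multiplication:
print(torch.mm(a, b).shape)      # torch.Size([1, 1])
print(torch.matmul(a, b).shape)  # torch.Size([1, 1])

# matmul broadcasts a batch dimension; mm would raise an error here:
batch = torch.randn(10, 1, 5)
print(torch.matmul(batch, b).shape)  # torch.Size([10, 1, 1])
```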
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view) and [`torch.transpose(weights,0,1)`](https://pytorch.org/docs/master/generated/torch.transpose.html).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.* `torch.transpose(weights,0,1)` will return a transposed `weights` tensor. This transposes the input tensor along dim 0 and dim 1. This is convenient since we do not need to specify the actual dimensions of `weights`.I usually use `.view()`, but any of these methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.One more approach is to use `.t()` to transpose the vector of weights, in our case from (1,5) to (5,1) shape.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.matmul(features,torch.transpose(weights,0,1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.matmul(features,W1).add_(B1)) output = activation(torch.matmul(h,W2).add_(B2)) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! 
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features*weights)+bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
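To illustrate the GPU point from the paragraph above (a small sketch of my own, not part of the original notebook), moving a tensor between devices is a one-line `.to()` call:

```python
import torch

x = torch.ones((2, 3))                  # tensors are created on the CPU by default
device = "cuda" if torch.cuda.is_available() else "cpu"
x = x.to(device)                        # copies to the GPU when one is present, no-op otherwise
print(x.device.type)
```

The same pattern works for whole models later on, which is why it is worth picking up early.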
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) print(y) y = activation((features * weights).sum() + bias) print(y) ###Output tensor([[0.1595]]) tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. 
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error

```python
>> torch.mm(features, weights)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
 in ()
----> 1 torch.mm(features, weights)

RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```

As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.

**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.

There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).

* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape.
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.

I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.

> **Exercise**: Calculate the output of our little network using matrix multiplication.

###Code
## Calculate the output of this network using matrix multiplication
y = activation(torch.sum(torch.mm(features, weights.reshape([5, 1]))) + bias)
print(y)
###Output
tensor([[0.1595]])
###Markdown
## Stack them up!

That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.

The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation.
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated

$$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$

The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply

$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$

###Code
### Generate some data
torch.manual_seed(7)  # Set the random seed so things are predictable

# Features are 3 random normal variables
features = torch.randn((1, 3))

# Define the size of each layer in our network
n_input = features.shape[1]     # Number of input units, must match number of input features
n_hidden = 2                    # Number of hidden units
n_output = 1                    # Number of output units

# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)

# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
###Output
_____no_output_____
###Markdown
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.

###Code
## Your solution here
# Note: don't sum over the hidden units -- torch.mm already produces the
# (1, n_hidden) linear combinations; summing them collapses the hidden layer
O1 = activation(torch.mm(features, W1) + B1)
O2 = activation(torch.mm(O1, W2) + B2)
print(O2)
###Output
tensor([[0.3171]])
###Markdown
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.

The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.

## Numpy to Torch and back

Special bonus section!
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.

###Code
import numpy as np

a = np.random.rand(4,3)
a

b = torch.from_numpy(a)
b

b.numpy()
###Output
_____no_output_____
###Markdown
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.

###Code
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)

# Numpy array matches new values from Tensor
a
###Output
_____no_output_____
###Markdown
# Introduction to Deep Learning with PyTorch

In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.

## Neural Networks

Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs.
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.

Mathematically this looks like:

$$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$

With vectors this is the dot/inner product of two vectors:

$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$

## Tensors

It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, and an array with three indices is a 3-dimensional tensor (RGB color images, for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.

With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.

###Code
# First, import PyTorch
import torch

def activation(x):
    """ Sigmoid activation function

        Arguments
        ---------
        x: torch.Tensor
    """
    return 1/(1+torch.exp(-x))

### Generate some data
torch.manual_seed(7)  # Set the random seed so things are predictable

# Features are 5 random normal variables
features = torch.randn((1, 5))
print(features)
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
print(weights)
# and a true bias term
bias = torch.randn((1, 1))
print(bias)
###Output
tensor([[-0.1468,  0.7861,  0.9468, -1.1143,  1.6908]])
tensor([[-0.8948, -0.3556,  1.2324,  0.1382, -1.6822]])
tensor([[0.3177]])
###Markdown
Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data.
Going through each relevant line:

`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.

`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.

Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.

PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though, such as GPU acceleration, which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.

> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.

###Code
print(activation(torch.sum(features*weights)+bias))
###Output
tensor([[0.1595]])
###Markdown
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.

Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting.
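As a side note, the broadcasting that `torch.matmul()` supports (and `torch.mm()` does not) can be sketched as follows; the shapes here are illustrative examples of mine, not part of the exercise:

```python
import torch

# torch.mm() only accepts 2-D matrices, but torch.matmul() broadcasts
# leading "batch" dimensions over an ordinary matrix multiply
batch = torch.randn(3, 1, 4, 5)    # a stack of (4, 5) matrices
matrix = torch.randn(5, 2)         # a single (5, 2) matrix

out = torch.matmul(batch, matrix)  # matrix is applied to every (4, 5) slice
print(out.shape)                   # torch.Size([3, 1, 4, 2])
```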
If we try to do it with `features` and `weights` as they are, we'll get an error

```python
>> torch.mm(features, weights)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
 in ()
----> 1 torch.mm(features, weights)

RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```

As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.

**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.

There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).

* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.

I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.

> **Exercise**: Calculate the output of our little network using matrix multiplication.

###Code
print(activation(torch.mm(features, weights.reshape(5,1))+bias))
###Output
tensor([[0.1595]])
###Markdown
## Stack them up!

That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.

The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated

$$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$

The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply

$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$

###Code
### Generate some data
torch.manual_seed(7)  # Set the random seed so things are predictable

# Features are 3 random normal variables
features = torch.randn((1, 3))

# Define the size of each layer in our network
n_input = features.shape[1]     # Number of input units, must match number of input features
n_hidden = 2                    # Number of hidden units
n_output = 1                    # Number of output units

# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)

# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
###Output
_____no_output_____
###Markdown
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.

###Code
f1 = activation(torch.mm(features, W1)+B1)
y = activation(torch.mm(f1, W2) + B2)
print(y)
###Output
tensor([[0.3171]])
###Markdown
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.

The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.

## Numpy to Torch and back

Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.

###Code
import numpy as np

a = np.random.rand(4,3)
a

b = torch.from_numpy(a)
b

b.numpy()
###Output
_____no_output_____
###Markdown
# Introduction to Deep Learning with PyTorch

In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks.
PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code
# First, import PyTorch
import torch

def activation(x):
    """ Sigmoid activation function

        Arguments
        ---------
        x: torch.Tensor
    """
    return 1/(1+torch.exp(-x))

### Generate some data
torch.manual_seed(7)  # Set the random seed so things are predictable

# Features are 5 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
###Output
_____no_output_____
###Markdown
Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data.

Going through each relevant line:

`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.

`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.

Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.

PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though, such as GPU acceleration, which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.

> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
###Code
## Calculate the output of this network using the weights and bias tensors
sum_layer_input = torch.sum(features * weights) + bias
sum_layer_output = activation(sum_layer_input)
sum_layer_output
###Output
_____no_output_____
###Markdown
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.

Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error

```python
>> torch.mm(features, weights)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
 in ()
----> 1 torch.mm(features, weights)

RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```

As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.

**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.

There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).

* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.

I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.

> **Exercise**: Calculate the output of our little network using matrix multiplication.

###Code
## Calculate the output of this network using matrix multiplication
mm_layer_input = torch.mm(features, weights.view(5, 1)) + bias
mm_layer_output = activation(mm_layer_input)
mm_layer_output
###Output
_____no_output_____
###Markdown
## Stack them up!

That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons.
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code
## Your solution here
mlp_layer_1_input = torch.mm(features, W1) + B1
mlp_layer_1_output = activation(mlp_layer_1_input)

mlp_layer_2_input = torch.mm(mlp_layer_1_output, W2) + B2
mlp_layer_2_output = activation(mlp_layer_2_input)
mlp_layer_2_output
###Output
_____no_output_____
###Markdown
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.

The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.

## Numpy to Torch and back

Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.

###Code
import numpy as np

a = np.random.rand(4,3)
a

b = torch.from_numpy(a)
b

b.numpy()
###Output
_____no_output_____
###Markdown
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.

###Code
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)

# Numpy array matches new values from Tensor
a
###Output
_____no_output_____
###Markdown
# Introduction to Deep Learning with PyTorch

In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks.
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code
# First, import PyTorch
import torch

def activation(x):
    """ Sigmoid activation function

        Arguments
        ---------
        x: torch.Tensor
    """
    return 1/(1+torch.exp(-x))

### Generate some data
torch.manual_seed(7)  # Set the random seed so things are predictable

# Features are 5 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
###Output
_____no_output_____
###Markdown
Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data.

Going through each relevant line:

`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.

`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.

Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.

PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though, such as GPU acceleration, which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.

> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
###Code
## Calculate the output of this network using the weights and bias tensors
y = activation(torch.sum(features * weights) + bias)

y = activation((features * weights).sum() + bias)
###Output
_____no_output_____
###Markdown
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.

Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error

```python
>> torch.mm(features, weights)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
 in ()
----> 1 torch.mm(features, weights)

RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```

As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.

**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
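That stacking idea can be sketched generically — a hedged illustration with made-up layer sizes, not the network the notebook builds: every layer is just a matrix multiply plus a bias passed through the activation, and the loop feeds each layer's output into the next.

```python
import torch

def sigmoid(x):
    return 1 / (1 + torch.exp(-x))

torch.manual_seed(0)

# Illustrative sizes: 3 inputs -> 4 hidden -> 2 hidden -> 1 output
sizes = [3, 4, 2, 1]
weights = [torch.randn(a, b) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [torch.randn(1, b) for b in sizes[1:]]

x = torch.randn(1, sizes[0])
h = x
for W, B in zip(weights, biases):
    # The output of one layer becomes the input of the next
    h = sigmoid(torch.mm(h, W) + B)

print(h.shape)
```

Note how the inner dimensions line up down the whole chain: `(1, 3) x (3, 4) -> (1, 4)`, then `(1, 4) x (4, 2) -> (1, 2)`, then `(1, 2) x (2, 1) -> (1, 1)`.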
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. Altogether, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
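As a quick taste of the GPU support mentioned above, here is a minimal sketch; it falls back to the CPU when no GPU is visible, so it runs anywhere.

```python
import torch

# Pick a device: CUDA if a GPU is visible, otherwise plain CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.ones(2, 3)
x = x.to(device)          # move the tensor to the chosen device
y = (x * 2).to("cpu")     # computation runs on `device`; bring the result back

print(device, y)
```

The same `.to(device)` pattern works for whole models later on, which is what makes device-agnostic code in PyTorch so convenient.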
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5, 1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here a1 = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(a1, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) print(y) y = activation((features * weights).sum() + bias) print(y) ###Output tensor([[0.1595]]) tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights = weights.view(5,1) y = activation(torch.mm(features, weights) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,5) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. Altogether, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
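The activation used throughout this notebook is the sigmoid; as a standalone sanity check of its behavior (the function here is re-defined just for this snippet), it maps 0 to exactly 0.5 and saturates toward 0 and 1 for large negative and positive inputs.

```python
import torch

def sigmoid(x):
    """Sigmoid: squashes any real input into the open interval (0, 1)."""
    return 1 / (1 + torch.exp(-x))

x = torch.tensor([-100.0, 0.0, 100.0])
out = sigmoid(x)

# sigmoid(0) is exactly 0.5; extreme inputs saturate to (numerically) 0 and 1
print(out)
```

This squashing behavior is exactly what makes it a reasonable way to turn a linear combination into a neuron's output.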
###Code # First, import PyTorch import torch torch.__version__ print(torch.cuda.is_available()) print(torch.backends.cudnn.enabled) print(torch.cuda._initialized) torch.cuda._lazy_init() print(torch.cuda._initialized) a = torch.rand(10) print(a) a = a.cuda() print(a) def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) features weights bias activation(features) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single-layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
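The three reshaping options above behave slightly differently; a small sketch of the differences, using illustrative values rather than the lesson's tensors:

```python
import torch

w = torch.arange(10.)     # tensor([0., 1., ..., 9.])

v = w.view(5, 2)          # same underlying data, new shape
r = w.reshape(5, 2)       # like view, but may copy when it has to

# view (and reshape, in this contiguous case) share memory with w:
# an in-place edit of w shows up in v
w[0] = 100.
print(v[0, 0])            # tensor(100.)

# resize_ mutates its tensor in place (note the trailing underscore)
w2 = torch.arange(10.)
w2.resize_(2, 5)
print(w2.shape)           # torch.Size([2, 5])
```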
###Code # torch.mm(features, weights) + bias  # would raise RuntimeError: shapes are (1, 5) x (1, 5) print(features.shape) print(weights.shape) print(bias.shape) print(torch.sum(features * weights) + bias) print((features * weights).sum() + bias) # print(torch.mm(features, weights) + bias)  # also fails before the reshape below weights = weights.view(5, 1) print(weights.shape) print(torch.mm(features, weights) + bias) # Going through torch.sum or using sum() is now different print(torch.sum(features * weights) + bias) print((features * weights).sum() + bias) print(activation(torch.sum(features * weights) + bias)) ###Output tensor([[-3.0605]]) tensor([[-3.0605]]) tensor([[0.0448]]) ###Markdown Using torch.mm now gives the same result as the original activation(torch.sum(features * weights) + bias) ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights) + bias) y ###Output _____no_output_____ ###Markdown **Approach in example solution is to use .view without reassigning...** reset everything ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. 
The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units features n_input # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) print(features, features.shape) print(W1, W1.shape) print(W2, W2.shape) print(B1, B1.shape) print(B2, B2.shape) ###Output tensor([[-0.1468, 0.7861, 0.9468]]) torch.Size([1, 3]) tensor([[-1.1143, 1.6908], [-0.8948, -0.3556], [ 1.2324, 0.1382]]) torch.Size([3, 2]) tensor([[-1.6822], [ 0.3177]]) torch.Size([2, 1]) tensor([[0.1328, 0.1373]]) torch.Size([1, 2]) tensor([[0.2405]]) torch.Size([1, 1]) ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here activation(torch.mm( activation(torch.mm(features, W1) + B1), W2) + B2) ### Official Solution h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) h torch.sigmoid(torch.mm(features, W1) + B1) torch.sigmoid(torch.mm(h, W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) 
and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
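The vector/matrix/3-d tensor hierarchy described above can be checked directly: `.dim()` reports how many indices (axes) a tensor has. A quick sketch:

```python
import torch

vector = torch.randn(3)          # 1-dimensional tensor
matrix = torch.randn(3, 4)       # 2-dimensional tensor
image = torch.randn(3, 32, 32)   # 3-dimensional tensor, e.g. an RGB image

# .dim() gives the number of axes; .shape gives the size along each axis
print(vector.dim(), matrix.dim(), image.dim())  # 1 2 3
print(image.shape)                              # torch.Size([3, 32, 32])
```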
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
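As an aside, PyTorch also ships this activation as the built-in `torch.sigmoid`; a quick check that it matches the hand-written `activation` above:

```python
import torch

def activation(x):
    """Sigmoid activation, as defined above."""
    return 1 / (1 + torch.exp(-x))

x = torch.linspace(-5, 5, steps=11)

# The hand-written sigmoid and the built-in agree to floating-point tolerance
print(torch.allclose(activation(x), torch.sigmoid(x)))  # True
```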
###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features*weights) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
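Before filling in the solution, it can help to verify that the shapes chain correctly through the two layers; a sketch with the activation omitted so only the shapes are exercised (weights re-created here so it runs standalone):

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 3))
W1, W2 = torch.randn(3, 2), torch.randn(2, 1)
B1, B2 = torch.randn((1, 2)), torch.randn((1, 1))

h = torch.mm(features, W1) + B1   # (1, 3) @ (3, 2) -> (1, 2)
y = torch.mm(h, W2) + B2          # (1, 2) @ (2, 1) -> (1, 1)
print(h.shape, y.shape)
```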
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
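The Numpy/Torch memory sharing demonstrated above works in both directions; a small sketch:

```python
import numpy as np
import torch

a = np.ones((2, 3))
b = torch.from_numpy(a)    # b shares a's memory (no copy)

# An in-place change on the tensor side is visible from Numpy...
b.mul_(2)
print(a[0, 0])             # 2.0

# ...and an in-place change on the Numpy side is visible from the tensor
np.add(a, 1, out=a)
print(b[0, 0])             # tensor(3., dtype=torch.float64)
```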
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
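As a side note, the `torch.manual_seed(7)` call above is what makes these random draws repeatable; resetting the seed replays exactly the same values:

```python
import torch

torch.manual_seed(7)
first = torch.randn((1, 5))

torch.manual_seed(7)      # reset the generator to the same state
second = torch.randn((1, 5))

# The two seeded draws are identical element for element
print(torch.equal(first, second))  # True
```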
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors score = torch.sum(features * weights) + bias output = activation(score) print(score) print(output) ###Output tensor([[-1.6619]]) tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. 
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. 
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output2 = activation(torch.mm(features, weights.view(5, 1)) + bias) print(output2) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here output1 = activation(torch.mm(features, W1) + B1) output2 = activation(torch.mm(output1, W2) + B2) print(output1) print(output2) ###Output tensor([[0.6813, 0.4355]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! 
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`.
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
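Both of those points, moving tensors to a GPU and automatic gradient calculation, can be seen in a few lines. This is a minimal sketch, not part of the lesson; the shapes are arbitrary, and it falls back to the CPU when no GPU is present:

```python
import torch

# Move work to the GPU when one is available, otherwise stay on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn((1, 3), device=device)

# Autograd: mark a tensor as requiring gradients, run a computation,
# and call .backward() to have the gradient filled in automatically.
w = torch.randn((3, 1), device=device, requires_grad=True)
y = torch.matmul(x, w).sum()
y.backward()
print(w.grad.shape)  # the gradient has the same shape as w: torch.Size([3, 1])
```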
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors print(features,weights) output = (features*weights).sum() out = activation(output+bias) print(out) ###Output tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) tensor([[-0.8948], [-0.3556], [ 1.2324], [ 0.1382], [-1.6822]]) tensor([[0.0448]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights.
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape.
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights = weights.view(5,1) out = activation(torch.matmul(features,weights)+bias) print(out) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here out = activation(torch.matmul(activation(torch.matmul(features,W1)+B1),W2)+B2) print(out) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section!
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors # GK: The following multiplication works just as numpy arrays for tensors as well. # The first element of the features(1*5) is multiplied with the first element of # weights(1*5) and so on. Then we have a need for torch.sum(). We could have just used # matrix multiplication instead. Z = features*weights Y = activation(torch.sum(features*weights) + bias) print(Y) Z.shape ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights.
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape.
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication O = activation(torch.matmul(features, weights.view(5, 1)) + bias) print(O) ###Output tensor([[0.1595]]) ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. The first layer, shown on the bottom here, contains the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation.
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here input_to_hidden = torch.matmul(features, W1) + B1 o_input_to_hidden = activation(input_to_hidden) hidden_to_output = torch.matmul(o_input_to_hidden, W2) + B2 o_hidden_to_output = activation(hidden_to_output) print(o_hidden_to_output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters.
As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. Altogether, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs.
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` and size `(a, b)`; sometimes it shares memory with `weights`, and sometimes it is a clone, meaning the data is copied to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(features.shape[1], -1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. The first layer, shown on the bottom here, contains the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h1 = activation(torch.matmul(features, W1) + B1) h2 = activation(torch.matmul(h1, W2) + B2) print(h2) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
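A minimal sketch of this sharing behavior, plus `tensor.clone()` as one way to get an independent copy when you do not want the sharing (the variable names here are illustrative, not part of the notebook's exercises):

```python
import numpy as np
import torch

a = np.ones(3)
b = torch.from_numpy(a)   # b shares memory with a

b.mul_(2)                 # in-place change to b...
print(a)                  # ...shows up in a: [2. 2. 2.]

# clone() copies the data, so the copy no longer affects a
c = torch.from_numpy(a).clone()
c.mul_(10)
print(a)                  # still [2. 2. 2.]
```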
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. Altogether, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example).
The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`.
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation((features * weights).sum() + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` and size `(a, b)`; sometimes it shares memory with `weights`, and sometimes it is a clone, meaning the data is copied to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(features.mm(weights.view(5, 1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix. The first layer, shown on the bottom here, contains the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
###Code ## Your solution here h = activation(features.mm(W1) + B1) y = activation(h.mm(W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. Altogether, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
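On the GPU point above, a minimal sketch of the usual device-placement pattern (this is an aside, not something the rest of the notebook requires; it falls back to CPU when no GPU is present):

```python
import torch

# Pick a device: CUDA when available, otherwise CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(3, 3)
x = x.to(device)          # move the tensor (a no-op on CPU-only machines)
y = (x @ x).to("cpu")     # compute on the device, bring the result back
print(y.shape)            # torch.Size([3, 3])
```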
Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` and size `(a, b)`; sometimes it shares memory with `weights`, and sometimes it is a clone, meaning the data is copied to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication # output2 = activation(torch.mm(features, weights.resize_(5, 1)) + bias) output2 = activation(torch.mm(features, weights.view(5, 1)) + bias) print(output2) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here def calc_output(Features, Weights, Biases): # This only works on 1- or 2-dimensional tensors if Features.shape == Weights.shape: print("Reshaped Weights") return activation(torch.mm(Features, Weights.view(Weights.shape[1], Weights.shape[0])) + Biases) elif Features.shape[1] == Weights.shape[0]: return activation(torch.mm(Features, Weights) + Biases) else: print("There seems to be a shape mismatch between features and weights") multilayer_output1 = calc_output(features, W1, B1) print(multilayer_output1) multilayer_output2 = calc_output(multilayer_output1, W2, B2) print(multilayer_output2) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. 
These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
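Before diving into the network code, a minimal sketch of the tensor ranks just described (a 1-dimensional vector, a 2-dimensional matrix, a 3-dimensional array), assuming PyTorch is installed:

```python
import torch

vector = torch.tensor([1., 2., 3.])  # 1-dimensional tensor (a vector)
matrix = torch.ones((2, 3))          # 2-dimensional tensor (a matrix)
image = torch.zeros((3, 28, 28))     # 3-dimensional tensor, e.g. an RGB image

# .shape gives the size along each dimension, .dim() the number of dimensions
print(vector.shape, matrix.shape, image.shape)
print(vector.dim(), matrix.dim(), image.dim())
```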
###Code # First, import PyTorch import torch # Sigmoid activation function def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables # Random data - creates a tensor with shape (1, 5) - one row and five columns features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors # Make labels from our data and true weights y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication # sigmoid # torch.mm is matrix multiplication # weights.view = changes weights so that it matches the required size same as features y = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code # Calculate first value for hidden layer = use features Hidden_layer = activation(torch.mm(features, W1.view(3,2)) + B1) # Calculate output layer based on inputs from hidden layer as features -> using weights and bias y = activation(torch.mm(Hidden_layer, W2.view(2,1)) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. 
It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) print(B1) ###Output tensor([[0.1328, 0.1373]]) ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code # Your solution here def output(x, w1, w2, b1, b2): """ Return the output of the neural network Parameters: x (tensor): Input for the network w1 (tensor): Weights for inputs to hidden layer w2 (tensor): Weights for hidden layer to output layer b1 (tensor): Bias for input layer to hidden layer b2 (tensor): Bias for hidden layer to output layer Return: (tensor): Returns the output of the neural network """ h = activation(torch.mm(x, w1) + b1) return activation(torch.mm(h, w2) + b2) print(output(features, W1, W2, B1, B2)) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. 
PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` and size `(a, b)`; sometimes it shares memory with the original tensor, and sometimes it returns a clone, meaning it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
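One way to chain the two layers, written as a self-contained sketch that rebuilds the same seeded tensors as the cell above:

```python
import torch

def activation(x):
    """Sigmoid activation function."""
    return 1 / (1 + torch.exp(-x))

torch.manual_seed(7)                  # same seed as above, so the numbers match
features = torch.randn((1, 3))

n_input = features.shape[1]           # 3 input units
n_hidden = 2                          # 2 hidden units
n_output = 1                          # 1 output unit

W1 = torch.randn(n_input, n_hidden)   # input -> hidden weights
W2 = torch.randn(n_hidden, n_output)  # hidden -> output weights
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))

# The hidden layer's output becomes the input of the output layer
h = activation(torch.mm(features, W1) + B1)
y = activation(torch.mm(h, W2) + B2)
print(y)                              # tensor([[0.3171]]) in this notebook's runs
```

Note that `h` is fed to the second layer exactly as `features` was fed to the first; that is all "stacking" means here.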
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values of one object in place, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
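The shared-memory behavior can also be checked programmatically. This sketch uses NumPy's `np.shares_memory` helper, which is not used elsewhere in this notebook:

```python
import numpy as np
import torch

a = np.random.rand(4, 3)
b = torch.from_numpy(a)            # wraps the same underlying buffer, no copy

# NumPy can confirm the array and the tensor point at the same memory
print(np.shares_memory(a, b.numpy()))

before = a.copy()                  # snapshot for comparison
b.mul_(2)                          # in-place multiply on the tensor...
print(np.allclose(a, 2 * before))  # ...is visible through the array
```

Both checks print `True`: the tensor is a view over the array's buffer, so in-place tensor operations show up in `a`.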
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error:```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` and size `(a, b)`; sometimes it shares memory with the original tensor, and sometimes it returns a clone, meaning it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here features2 = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(features2, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) +bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. 
For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error:```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` and size `(a, b)`; sometimes it shares memory with the original tensor, and sometimes it returns a clone, meaning it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication reshaped_weights = weights.view(5, 1) y = activation(torch.mm(features, reshaped_weights) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up! That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here first_output = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(first_output, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values of one object in place, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
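If the sharing is not wanted, making a copy breaks the link. A small sketch, using `torch.tensor()` (which copies its input) in place of `torch.from_numpy()` (which does not):

```python
import numpy as np
import torch

a = np.random.rand(4, 3)
original = a.copy()                     # snapshot of the starting values

shared = torch.from_numpy(a)            # shares memory with `a`
copied = torch.tensor(a)                # torch.tensor() always copies its input

copied.mul_(2)                          # in-place change on the copy...
unchanged = np.allclose(a, original)    # ...leaves `a` untouched

shared.mul_(2)                          # in-place change on the shared tensor...
changed = np.allclose(a, 2 * original)  # ...doubles `a` as well
print(unchanged, changed)
```

Both flags print `True`: the copied tensor has its own storage, while the `from_numpy` tensor still writes through to `a`.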
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code features weights torch.sum(features * weights) + bias ## Calculate the output of this network using the weights and bias tensors #features.shape #torch.mm(features, weights.resize_(5,1))+bias y_hat = activation(torch.sum(features * weights) + bias) y_hat ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. 
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error:```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` and size `(a, b)`; sometimes it shares memory with the original tensor, and sometimes it returns a clone, meaning it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. 
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights_t = weights.view(5,1) y_hat_mm = activation(torch.mm(features, weights_t) + bias) y_hat_mm ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here l2 = activation(torch.mm(features, W1)+B1) y = activation(torch.mm(l2, W2)+B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors.
To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch # def activation(x): # """ Sigmoid activation function # Arguments # --------- # x: torch.Tensor # """ # return 1/(1+torch.exp(-x)) activation = lambda x: 1 / (1 + torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation((features * weights).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1.view(n_input, n_hidden)) + B1) out = activation(torch.mm(h, W2.view(n_hidden, n_output)) + B2) out ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
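Before attempting the exercise below, it can help to sanity-check the shapes of the tensors generated above. This short sketch (reproducing the same seeded setup, so no new names are assumed) just inspects shapes without giving away the answer:

```python
import torch

torch.manual_seed(7)  # same seed as above, so the tensors match
features = torch.randn((1, 5))
weights = torch.randn_like(features)
bias = torch.randn((1, 1))

# All three are 2-D tensors; elementwise ops broadcast the (1, 1) bias
print(features.shape)  # torch.Size([1, 5])
print(weights.shape)   # torch.Size([1, 5])
print(bias.shape)      # torch.Size([1, 1])
```

Checking `.shape` like this before combining tensors catches most shape-mismatch errors early.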
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights = weights.view(5,1) y = activation(torch.matmul(features, weights)) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.matmul(features, W1) + B1) y = activation(torch.matmul(h, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
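As a quick sketch of that sharing in both directions (using fresh toy names, `arr` and `t`, so as not to clash with the notebook's `a` and `b`): an in-place change on either side shows up on the other.

```python
import numpy as np
import torch

arr = np.ones((2, 2))
t = torch.from_numpy(arr)   # t shares the same memory as arr

# Mutating the Numpy array in place is visible through the tensor...
arr += 1
assert t[0, 0].item() == 2.0

# ...and an in-place Torch operation is visible through the array
t.add_(1)
assert arr[0, 0] == 3.0
```

Note that only *in-place* operations share this behavior; `arr = arr + 1` would rebind `arr` to a new array and leave the tensor untouched.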
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
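Those dimension counts can be checked directly; a tiny sketch using the tensor `ndim` attribute (the shapes below are arbitrary examples):

```python
import torch

vector = torch.randn(7)          # 1-D tensor
matrix = torch.randn(3, 4)       # 2-D tensor
image = torch.randn(3, 32, 32)   # 3-D tensor, e.g. channels x height x width for an RGB image

assert vector.ndim == 1
assert matrix.ndim == 2
assert image.ndim == 3
```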
The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this attribute often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view). * `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes it shares the underlying data with `weights`, and sometimes it returns a clone, copying the data to another part of memory. * `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch. * `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) print(h) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.6813, 0.4355]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors h = torch.sum(weights * features) + bias nnet = activation(h) nnet ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this attribute often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view). * `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes it shares the underlying data with `weights`, and sometimes it returns a clone, copying the data to another part of memory. * `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch. * `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication neural_net = activation(torch.mm(features, weights.view(5, 1)) + bias) neural_net ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. 
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here hidden_layer = activation(torch.mm(features, W1) + B1) output_layer = activation(torch.mm(hidden_layer, W2) + B2) output_layer ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this attribute often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view). * `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes it shares the underlying data with `weights`, and sometimes it returns a clone, copying the data to another part of memory. * `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch. * `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
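As a quick sanity check on the `activation` function defined above (a sketch I'm adding, using `torch.sigmoid` as the reference implementation), the hand-written sigmoid should agree with PyTorch's built-in to floating-point precision:

```python
import torch

def activation(x):
    """Sigmoid activation function (same definition as above)."""
    return 1 / (1 + torch.exp(-x))

torch.manual_seed(7)
x = torch.randn((1, 5))

# The manual formula and the built-in should match, and sigmoid outputs stay in (0, 1)
manual = activation(x)
builtin = torch.sigmoid(x)
print(torch.allclose(manual, builtin))
```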
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y1 = activation(torch.sum(features*weights)+bias) y2 = activation((features*weights).sum()+bias) print(y1==y2) ###Output tensor([[True]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. 
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. 
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
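One way to double-check the matrix-multiplication solution above (my own sketch, not from the notebook): for a single unit, `torch.mm` with the reshaped weights should give the same linear combination as the elementwise multiply-and-sum from the earlier exercise:

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)
bias = torch.randn((1, 1))

# Elementwise multiply then sum, versus a (1, 5) @ (5, 1) matrix product
elementwise = torch.sum(features * weights) + bias
matmul = torch.mm(features, weights.view(5, 1)) + bias

print(torch.allclose(elementwise, matmul))
```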
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden_output = activation(torch.mm(features, W1)+B1) output = activation(torch.mm(hidden_output, W2)+B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! 
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) print(features) # True weights for our data, random normal variables again weights = torch.randn_like(features) print(weights) # and a true bias term bias = torch.randn((1, 1)) print(bias) ###Output tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) tensor([[0.3177]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y_hat = activation(torch.sum(features * weights) + bias) print(y_hat) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
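The broadcasting support mentioned for `torch.matmul()` can be shown with a quick sketch (added here for illustration): `torch.mm` only accepts 2-D tensors, while `torch.matmul` broadcasts a shared weight matrix across a batch dimension:

```python
import torch

batch = torch.randn(4, 1, 5)   # a batch of four (1, 5) row vectors
w = torch.randn(5, 1)          # one weight column shared across the batch

# torch.mm is strictly 2-D and rejects the batched input
try:
    torch.mm(batch, w)
except RuntimeError as err:
    print("mm rejects 3-D input:", err)

# torch.matmul broadcasts w across the batch: (4, 1, 5) @ (5, 1) -> (4, 1, 1)
out = torch.matmul(batch, w)
print(out.shape)
```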
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y_hat = activation(torch.mm(features, weights.view(5, 1)) + bias) print(y_hat) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here ## Hidden layer h = activation(torch.mm(features, W1) + B1) ## Second layer output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch[Video](https://youtu.be/6Z7WntXays8)In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
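Since `torch.randn` is described above as drawing from a standard normal distribution, a short empirical check (my addition) makes the mean-zero, unit-standard-deviation claim concrete:

```python
import torch

torch.manual_seed(0)
sample = torch.randn(100_000)

# With this many draws, the sample statistics should sit close to 0 and 1
print(sample.mean().item(), sample.std().item())
```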
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features*weights)+bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features,weights.view(5,1))+bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features,W1)+B1) y = activation(torch.mm(h,W2)+B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
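One caveat worth adding here as an aside (not covered above): `.numpy()` only works on CPU tensors that are detached from the autograd graph, so a tensor created with `requires_grad=True` must be detached first. A minimal sketch:

```python
import torch

# Calling .numpy() directly on this tensor would raise a RuntimeError,
# because it still tracks gradients.
t = torch.ones(2, 3, requires_grad=True)

# Detach from the graph first (add .cpu() as well if the tensor lives on a GPU).
arr = t.detach().numpy()
print(type(arr).__name__, arr.shape)  # ndarray (2, 3)
```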
###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors h = torch.sum(features * weights) + bias # h = (features * weights).sum() + bias y = activation(h) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
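As a brief aside on that broadcasting support, here is a minimal sketch (with illustrative shapes, not the notebook's variables) of what `torch.matmul()` can do that `torch.mm()` cannot:

```python
import torch

# torch.mm() only accepts 2-D tensors; torch.matmul() broadcasts batch dims.
batch = torch.randn(10, 1, 5)  # a batch of ten (1, 5) row vectors
w = torch.randn(5, 1)          # one (5, 1) weight column shared by all

out = torch.matmul(batch, w)   # w is broadcast across the batch dimension
print(out.shape)  # torch.Size([10, 1, 1])
```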
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes this is a view of the same data, and sometimes a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
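One dtype detail worth flagging as an aside (the names below are illustrative, not the notebook's): `torch.from_numpy()` preserves NumPy's default 64-bit floats, while most PyTorch code expects `float32`, so an explicit cast is often needed:

```python
import numpy as np
import torch

arr = np.random.rand(4, 3)   # NumPy defaults to float64
t64 = torch.from_numpy(arr)
print(t64.dtype)             # torch.float64

# Cast when float32 is required; note this copies, ending the memory sharing.
t32 = t64.float()
print(t32.dtype)             # torch.float32
```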
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5, -1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code h = activation(torch.mm(features, W1) + B1) activation(torch.mm(h, W2) + B2) ###Output _____no_output_____ ###Markdown The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
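A quick sanity check of that shape rule, sketched with throwaway tensors (the names here are illustrative): the number of columns of the first operand has to match the number of rows of the second, which is exactly what goes wrong in the error shown next.

```python
import torch

m1 = torch.randn(1, 5)
m2 = torch.randn(1, 5)

# Inner dimensions must agree for a matrix multiply.
print(m1.shape[1] == m2.shape[0])  # False -> torch.mm(m1, m2) raises

# Reshaping the second operand satisfies the rule: (1, 5) @ (5, 1) -> (1, 1)
out = torch.mm(m1, m2.view(5, 1))
print(out.shape)  # torch.Size([1, 1])
```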
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes this is a view of the same data, and sometimes a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
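When that sharing is not wanted, one way to get an independent copy instead (a small sketch with illustrative names): build the tensor with `torch.tensor()`, which always copies the array's data.

```python
import numpy as np
import torch

src = np.ones(3)
shared = torch.from_numpy(src)  # shares memory with src
copied = torch.tensor(src)      # copies the data

src *= 10                       # in-place change to the NumPy array
print(shared[0].item())  # 10.0 -- the shared tensor sees the change
print(copied[0].item())  # 1.0  -- the copy is unaffected
```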
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features*weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view) and [`torch.transpose(weights,0,1)`](https://pytorch.org/docs/master/generated/torch.transpose.html).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.* `torch.transpose(weights,0,1)` will return a transposed view of the `weights` tensor, with dim 0 and dim 1 swapped. This is convenient since we do not need to specify the actual dimensions of `weights`.I usually use `.view()`, but any of these methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.One more approach is to use `.t()` to transpose the weights, in our case from shape `(1, 5)` to `(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication.
###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.matmul(features,torch.transpose(weights,0,1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.matmul(features, W1).add_(B1)) output = activation(torch.matmul(h, W2).add_(B2)) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
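Note that `torch.from_numpy()` wraps the same underlying memory rather than copying it; if you want an independent copy instead, `torch.tensor()` (or `.clone()`) copies the data. A small sketch of the difference, using standard PyTorch calls:

```python
import numpy as np
import torch

arr = np.ones(3)
shared = torch.from_numpy(arr)   # wraps arr's memory, no copy
copied = torch.tensor(arr)       # torch.tensor() always copies

copied.mul_(2)                   # in-place change on the copy...
print(arr)                       # ...leaves arr untouched: [1. 1. 1.]

shared.mul_(2)                   # in-place change on the shared tensor...
print(arr)                       # ...changes arr too: [2. 2. 2.]
```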
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output_two = activation(torch.sum(torch.mm(features, weights.reshape(5, 1))) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here output_hidden = activation(torch.mm(features, W1) + B1) output_final = activation(torch.mm(output_hidden, W2) + B2) print(output_final) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks.
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
###Code activation(torch.sum(features * weights) + bias) y = activation(torch.mm(features, weights.view(5,1)) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code weights = weights.reshape(5, 1) mul = torch.mm(features, weights) x = mul + bias activation(x) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural Networks. Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors. It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features*weights)+bias) print(y) ###Output tensor([[ 0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting.
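A quick sketch (not from the original notebook) of what that broadcasting support means in practice: `torch.mm()` accepts only 2-D tensors, while `torch.matmul()` also broadcasts leading batch dimensions, multiplying one matrix against a whole batch at once:

```python
import torch

torch.manual_seed(0)

# A "batch" of three 2x4 matrices, and a single 4x5 matrix
batch = torch.randn(3, 2, 4)
mat = torch.randn(4, 5)

# matmul broadcasts `mat` across the batch dimension;
# torch.mm(batch, mat) would raise an error because batch is 3-D.
out = torch.matmul(batch, mat)
print(out.shape)  # torch.Size([3, 2, 5])
```

For the strictly 2-D case in this notebook, the two functions behave the same.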
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features,weights.view(5,1))+bias) print(y) ###Output tensor([[ 0.1595]]) ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here y = activation(torch.mm(activation(torch.mm(features,W1)+B1),W2)+B2) print(y) ###Output tensor([[ 0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back. Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch. In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural Networks. Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors. It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example).
The fundamental data structure for neural networks is tensors, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`.
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back. Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch. In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
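As a tiny sketch of that Numpy likeness (added here for illustration, not part of the original notebook), the same elementwise expression works identically on a Torch tensor and a Numpy array:

```python
import numpy as np
import torch

# Elementwise arithmetic and method names mirror Numpy closely
t = torch.tensor([1.0, 2.0, 3.0])
n = np.array([1.0, 2.0, 3.0])

print((t * 2 + 1).tolist())  # [3.0, 5.0, 7.0]
print((n * 2 + 1).tolist())  # [3.0, 5.0, 7.0]
```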
Neural Networks. Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors. It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation((features * weights).sum() + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting.
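That equivalence is easy to check directly. The following sketch (using freshly generated values, not the notebook's seeded ones) confirms that the elementwise multiply-and-sum matches a `(1, 5) x (5, 1)` matrix multiplication:

```python
import torch

torch.manual_seed(0)
features = torch.randn((1, 5))
weights = torch.randn_like(features)

# Elementwise multiply then sum...
a = torch.sum(features * weights)

# ...vs. reshaping weights to a column and matrix-multiplying
b = torch.mm(features, weights.view(5, 1))

# Both compute the same dot product (up to floating-point rounding)
print(torch.allclose(a, b.squeeze()))
```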
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here output = activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back. Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a features = torch.randn((1, 5)) features featview = features.view((5,1)) featview featview[2] = 5 features ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch. In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural Networks. Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors. It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example).
The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) print('features: {}'.format(features)) print('weights: {}'.format(weights)) ###Output features: tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) weights: tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using real data. Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
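Before tackling the exercise, a quick sanity check of the `activation` function defined above (a small addition, not part of the original notebook): a sigmoid maps 0 to exactly 0.5 and squashes large magnitudes toward 0 or 1, and it applies element-wise to a tensor:

```python
import torch

def activation(x):
    """Sigmoid activation function, applied element-wise."""
    return 1 / (1 + torch.exp(-x))

x = torch.tensor([-10.0, 0.0, 10.0])
out = activation(x)
print(out)  # approximately tensor([0.0000, 0.5000, 1.0000])
```

This boundedness is what makes the sigmoid usable as a "squashing" function for unit outputs.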
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) torch.mm(features, weights) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error ```python >> torch.mm(features, weights) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) in () ----> 1 torch.mm(features, weights) RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033 ``` As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view). * `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory. * `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch. * `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights_t = weights.view(5,1) activation(torch.mm(features, weights_t) + bias) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here out1 = activation(torch.mm(features, W1) + B1) out1 out2 = activation(torch.mm(out1, W2) + B2) out2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____
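One practical caveat worth adding to the Numpy interop discussion above (my note, not from the original notebook): `torch.from_numpy()` preserves the array's dtype, and NumPy defaults to 64-bit floats while PyTorch defaults to 32-bit, so you'll often want an explicit cast before feeding converted data into a model:

```python
import numpy as np
import torch

a = np.random.rand(4, 3)   # NumPy arrays default to float64
b = torch.from_numpy(a)
print(b.dtype)             # torch.float64 -- the dtype is preserved

b32 = b.float()            # cast to float32; this copies, so memory is no longer shared
print(b32.dtype)           # torch.float32
```

Because the cast allocates new storage, in-place changes to `b32` will no longer show up in `a`.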
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) print (y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) print(h) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.6813, 0.4355]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors h = torch.sum(features * weights) + bias activation(h) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication h = torch.mm(features, weights.view(5, 1)) + bias activation(h) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation((features * weights).sum() + bias) print(y) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
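The broadcasting behaviour of `torch.matmul()` is worth a quick illustration. This is a hedged sketch with made-up shapes (a batch of ten inputs against one shared weight column), not the notebook's data:

```python
import torch

# torch.mm() only accepts 2-D tensors, but torch.matmul() broadcasts
# batch dimensions: a stack of ten (1 x 5) inputs multiplied by a single
# shared (5 x 1) weight column yields ten (1 x 1) results.
batch = torch.randn(10, 1, 5)
weights_col = torch.randn(5, 1)

out = torch.matmul(batch, weights_col)
print(out.shape)  # torch.Size([10, 1, 1])
```

For the strictly 2-D case in this notebook, `torch.mm()` and `torch.matmul()` give the same result.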
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`: sometimes a view sharing memory with the original, and sometimes a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.reshape(5,1)) + bias) print(y) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
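The sharing works in both directions. As a small sketch (with illustrative values, not the notebook's random `a`), an in-place change on the Numpy side shows up in the tensor too:

```python
import numpy as np
import torch

a = np.ones((2, 2))
b = torch.from_numpy(a)

# An in-place Numpy operation writes into the shared buffer...
a *= 3

# ...so the tensor sees the new values as well.
print(b)  # every entry is now 3
```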
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/x-cloud/deep-learning-v2-pytorch/blob/master/intro-to-pytorch/Part%201%20-%20Tensors%20in%20PyTorch%20%28Exercises%29.ipynb) Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using sum of dot products y = activation(torch.sum(features * weights) + bias) print(y) y = activation((features * weights).sum() + bias) print(y) ###Output tensor([[0.1595]]) tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm), or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor.
Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory, but you never know beforehand.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. This means that no error is raised even when the new shape holds a different number of elements than the original. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`, which shares the storage with the original tensor.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication.
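Before reshaping `weights` in the exercise, here is a small illustrative sketch (with a fresh tensor, not the notebook's `weights`) contrasting the three options:

```python
import torch

w = torch.randn(1, 5)

# view() and reshape() both produce a (5, 1) tensor over the same 5 elements.
v = w.view(5, 1)
r = w.reshape(5, 1)
assert v.shape == (5, 1) and r.shape == (5, 1)

# view() shares storage, so an in-place edit through the view shows up in w.
v[0, 0] = 42.0
assert w[0, 0].item() == 42.0

# resize_() mutates the tensor itself (note the trailing underscore);
# shrinking silently keeps only the first elements in row-major order.
w2 = torch.arange(6.)
w2.resize_(2, 2)
print(w2)  # tensor([[0., 1.], [2., 3.]])
```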
###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.permute(1,0)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`.
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) y = activation((features * weights).sum() + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`: sometimes a view sharing memory with the original, and sometimes a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(weights, features.view(5,1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. Altogether, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination), then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = torch.sum(features * weights) + bias output = activation(output) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting.
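To make the `torch.mm()` vs `torch.matmul()` distinction concrete, a small sketch (the shapes here are chosen arbitrarily for illustration):

```python
import torch

a = torch.randn(1, 5)
b = torch.randn(5, 1)

# torch.mm: strict 2-D matrix multiply, (1x5) @ (5x1) -> (1x1)
print(torch.mm(a, b).shape)          # torch.Size([1, 1])

# torch.matmul: also broadcasts over leading batch dimensions
batch = torch.randn(3, 1, 5)         # a "stack" of three 1x5 matrices
print(torch.matmul(batch, b).shape)  # torch.Size([3, 1, 1])
```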
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view). * `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory. * `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch. * `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights = weights.view(5,1) output = torch.mm(features, weights) + bias output = activation(output) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. The first layer, shown on the bottom here, consists of the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$ The output for this small network is found by treating the hidden layer as inputs for the output unit.
The network output is expressed simply: $$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h1 = activation(torch.mm(features, W1) + B1) h2 = activation(torch.mm(h1, W2) + B2) print(h2) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back: Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values of one object in place, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch: In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural Networks: Deep Learning is based on artificial neural networks, which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$ With vectors this is the dot/inner product of two vectors: $$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors: It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices.
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data. Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view). * `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory. * `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch. * `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up! That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. The first layer, shown on the bottom here, consists of the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$ The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply: $$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) out = activation(torch.mm(h, W2) + B2) print(out) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back: Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values of one object in place, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch: In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks.
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural Networks: Deep Learning is based on artificial neural networks, which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$ With vectors this is the dot/inner product of two vectors: $$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors: It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
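A minimal illustration of the tensor ranks just described (the values here are arbitrary):

```python
import torch

v = torch.tensor([1., 2., 3.])           # 1-D tensor (vector)
m = torch.tensor([[1., 2.], [3., 4.]])   # 2-D tensor (matrix)
img = torch.zeros(3, 32, 32)             # 3-D tensor (e.g. a 3-channel image)

print(v.ndim, m.ndim, img.ndim)  # 1 2 3
```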
###Code # First, import PyTorch import torch print(torch.__version__) def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) print(features) print(weights) print(bias) ###Output tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) tensor([[0.3177]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data. Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums.
Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features*weights)+bias) activation((features*weights).sum()+bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view). * `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory. * `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch. * `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features,weights.view(5,1))+bias) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix. The first layer, shown on the bottom here, consists of the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$ The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply: $$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) print("features : ",features) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units print("n_input : ",n_input) # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) print("W1 : ",W1) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) print("W2 : ",W2) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) print("B1 : ",B1) B2 = torch.randn((1, n_output)) print("B2 : ",B2) ###Output features : tensor([[-0.1468, 0.7861, 0.9468]]) n_input : 3 W1 : tensor([[-1.1143, 1.6908], [-0.8948, -0.3556], [ 1.2324, 0.1382]]) W2 : tensor([[-1.6822], [ 0.3177]]) B1 : tensor([[0.1328, 0.1373]]) B2 : tensor([[0.2405]]) ###Markdown > **Exercise:** Calculate the output
for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h1 = activation(torch.mm(features,W1.view(3,2))+B1) output = activation(torch.mm(h1,W2.view(2,1))+B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back: Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values of one object in place, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch: In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks.
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural Networks: Deep Learning is based on artificial neural networks, which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$ With vectors this is the dot/inner product of two vectors: $$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors: It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
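A quick numeric check of the sigmoid used as `activation` below: it squashes any real input into the interval (0, 1), with 0 mapping to exactly 0.5 (a minimal sketch):

```python
import torch

def activation(x):
    """Sigmoid activation function."""
    return 1 / (1 + torch.exp(-x))

# Large negative inputs approach 0, large positive inputs approach 1
x = torch.tensor([-10., 0., 10.])
print(activation(x))
```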
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data. Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
###Code ## Calculate the output of this network using the weights and bias tensors out = torch.matmul(features, weights.view(5,1)) print(out) ###Output tensor([[-1.9796]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(out + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
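NumPy follows the same matrix-multiplication shape rule, so the reshape fix can be sketched without PyTorch at all (random made-up data; in the notebook the equivalent is `torch.mm(features, weights.view(5, 1))`):

```python
import numpy as np

rng = np.random.default_rng(7)
features = rng.standard_normal((1, 5))
weights = rng.standard_normal((1, 5))

# (1, 5) @ (1, 5) fails: inner dimensions (5 and 1) don't match.
# Reshaping the weights to (5, 1) gives (1, 5) @ (5, 1) -> (1, 1).
out = features @ weights.reshape(5, 1)
print(out.shape)  # (1, 1)

# The matrix multiplication equals the element-wise product summed up
assert np.isclose(out.item(), (features * weights).sum())
```

The final assertion is the point of the earlier exercise: a matmul of a row vector with a column vector is exactly the weighted sum computed in one fused, hardware-friendly operation.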
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
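The two-layer formula above is mostly shape bookkeeping, which can be sketched in NumPy with made-up random data (mirroring the `W1`/`W2`/`B1`/`B2` setup, but not the exercise's torch tensors or values):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(7)
x  = rng.standard_normal((1, 3))   # one sample with 3 input features
W1 = rng.standard_normal((3, 2))   # input layer -> hidden layer
b1 = rng.standard_normal((1, 2))
W2 = rng.standard_normal((2, 1))   # hidden layer -> output layer
b2 = rng.standard_normal((1, 1))

h = sigmoid(x @ W1 + b1)   # (1, 3) @ (3, 2) -> (1, 2)
y = sigmoid(h @ W2 + b2)   # (1, 2) @ (2, 1) -> (1, 1)
print(h.shape, y.shape)
```

Note how each weight matrix is shaped `(units in, units out)`, so the hidden activations can be fed straight into the next layer's multiplication.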
###Code ## Your solution here middle_output = activation(torch.mm(features, W1) + B1) final_output = activation(torch.mm(middle_output, W2) + B2) print(final_output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks.
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) print(features) # True weights for our data, random normal variables again weights = torch.randn_like(features) print(weights) # and a true bias term bias = torch.randn((1, 1)) print(bias) ###Output tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) tensor([[0.3177]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums.
Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors print(features.shape) sum_network = (features * weights).sum() + bias ac = activation(sum_network) print(ac) ###Output torch.Size([1, 5]) tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication res = activation( torch.mm(features, weights.view(5,1)) + bias ) # print(res) # test = torch.tensor([[ 1, 2, 3]]) # test2 = torch.tensor([[ 4, 5, 6]]).view(3,1) # print(test.shape) # print(test2.shape) # print(torch.mm(test, test2)) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) print(features) print(W1) print(W2) print(B2) print(B2) ###Output tensor([[-0.1468, 0.7861, 0.9468]]) tensor([[-1.1143, 1.6908], [-0.8948, -0.3556], [ 1.2324, 0.1382]]) tensor([[-1.6822], [ 0.3177]]) tensor([[0.2405]]) tensor([[0.2405]]) ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden = activation( torch.mm(features, W1) + B1 ) print(hidden) out = activation( torch.mm(hidden, W2) + B2 ) print(out) ###Output tensor([[0.6813, 0.4355]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) print(features.size()) print(weights.size()) print(bias) ###Output torch.Size([1, 5]) torch.Size([1, 5]) tensor([[0.3177]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. 
For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors # y=activation(torch.sum(weights*features)+bias) y=activation(torch.sum(torch.matmul(weights,features.reshape(5,1)))+bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. 
Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication.
###Code ## Calculate the output of this network using matrix multiplication y=activation(torch.mm(features,weights.view(5,1))+bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features print(n_input) n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) print(W1) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) print(B1) B2 = torch.randn((1, n_output)) ###Output 3 tensor([[-1.1143, 1.6908], [-0.8948, -0.3556], [ 1.2324, 0.1382]]) tensor([[0.1328, 0.1373]]) ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here first_layer_output=activation(torch.mm(features,W1)+B1) y=activation(torch.mm(first_layer_output,W2)+B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) type(a) b = torch.from_numpy(a) type(b) b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors def output(features, weights, bias): return activation(torch.sum(weights * features) + bias) output(features, weights, bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(weights.mm(features.view((5, 1))) + bias) print(y) y = activation(torch.mm(features, weights.view(5,1)) + bias) print(y) # This doesn't check assumptions about input like view() does: y = activation(weights.flatten().dot(features.flatten()) + bias) print(y) ###Output tensor([[0.1595]]) tensor([[0.1595]]) tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here l1 = activation(features.mm(W1) + B1) l2 = activation(l1.mm(W2) + B2) l2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. 
To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # http://pytorch.org/ from os.path import exists from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag()) cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/' accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu' !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) print(features) print(weights) print(bias) ###Output tensor([[-0.1468, 0.7861, 0.9468, 
-1.1143, 1.6908]]) tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) tensor([[0.3177]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum (features * weights) + bias) print(y) y = activation((features * weights).sum() + bias) print(y) ###Output tensor([[0.1595]]) tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. 
In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features*weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view((5,1))) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here layer1 = activation(torch.mm(features, W1) + B1) layer2 = activation(torch.mm(layer1, W2) + B2) print(layer2) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
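As a reference for the weighted-sum formula itself, here it is sketched in plain NumPy with arbitrary illustrative values (not the torch tensors generated above, so it can be run standalone):

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation, the same formula as the `activation` function above
    return 1 / (1 + np.exp(-z))

# Arbitrary values standing in for features, weights, and bias
x = np.array([[1.0, 2.0, 3.0]])    # inputs, shape (1, 3)
w = np.array([[0.5, -0.25, 0.1]])  # weights, shape (1, 3)
b = 0.2                            # bias

# y = f(sum_i w_i x_i + b)
y = sigmoid(np.sum(w * x) + b)
print(y)  # ~0.6225, since the weighted sum plus bias is 0.5
```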
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor of size `(a, b)`: sometimes it shares the same data as `weights`, and sometimes it is a clone, meaning the data is copied to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
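Before stacking units into layers, the shape fix above can be seen in a framework-agnostic NumPy sketch, where the `@` operator plays the role of `torch.mm`. This is an illustration of the size-mismatch rule, not the course's solution:

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((1, 5))
weights = rng.standard_normal((1, 5))

# (1, 5) @ (1, 5): the inner dimensions (5 and 1) don't agree, so this
# raises — the NumPy analogue of the torch.mm size-mismatch error above.
try:
    features @ weights
    failed = False
except ValueError:
    failed = True

# Reshaping weights to (5, 1) makes the inner dimensions agree:
# (1, 5) @ (5, 1) -> (1, 1)
out = features @ weights.reshape(5, 1)
print(failed, out.shape)  # True (1, 1)
```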
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) weights print(torch.randn(1,5)) print(torch.rand(1,5)) ###Output tensor([[0.1328, 0.1373, 0.2405, 1.3955, 1.3470]]) tensor([[0.6267, 0.5691, 0.7437, 0.9592, 0.3887]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. 
Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features,weights.reshape(5,1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. 
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
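The shape bookkeeping of such a two-layer forward pass can be sketched in NumPy, with `@` standing in for `torch.mm`. The weights here are freshly drawn with names of my choosing, so this illustrates the shapes only — it will not reproduce the `0.3171` target from the exercise:

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation, mirroring the `activation` function above."""
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(7)
x = rng.standard_normal((1, 3))    # 1 sample, 3 input features

W1 = rng.standard_normal((3, 2))   # input -> hidden: (n_input, n_hidden)
B1 = rng.standard_normal((1, 2))
W2 = rng.standard_normal((2, 1))   # hidden -> output: (n_hidden, n_output)
B2 = rng.standard_normal((1, 1))

h = sigmoid(x @ W1 + B1)           # (1, 3) @ (3, 2) -> (1, 2)
y = sigmoid(h @ W2 + B2)           # (1, 2) @ (2, 1) -> (1, 1)
print(h.shape, y.shape)  # (1, 2) (1, 1)
```

Note how the hidden layer's output becomes the next layer's input: the column count of each weight matrix sets the row count required of the next one.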
###Code ## Your solution here h = activation(torch.mm(features, W1.view(3,2)) + B1) y = activation(torch.mm(h, W2.view(2,1)) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a / 2 ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function This is used when we want the output of the neural network to be a probability Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 5)) #one row and five cols # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. 
This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) print(y) # every neural network has the output written in the following way: # multiply weights with features # add the result to bias # then pass it through the activation function # the difference between the two logs ###Output tensor([[0.0448]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. 
In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights = weights.view(5,1) logits = torch.mm(features, weights) print(logits) result = activation(logits + bias) ###Output tensor([[-1.9796]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. 
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) features.shape[1] ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
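A quick complementary sketch (assuming nothing beyond Numpy and PyTorch themselves): the sharing also works in the other direction, so an in-place change on the Numpy side shows up in the tensor:

```python
import numpy as np
import torch

a = np.ones(3)            # a float64 Numpy array
t = torch.from_numpy(a)   # t is a view of a's memory, not a copy

a += 1                    # in-place update on the Numpy side
print(t)                  # tensor([2., 2., 2.], dtype=torch.float64)
```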
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors print(features.shape) print(weights.shape) print(activation(torch.sum(torch.mm(features, weights.T) + bias))) print(activation(torch.sum(features*weights) + bias)) y = activation(torch.mm(features, weights.view(5,1)) + bias) print(y, y.shape) ###Output tensor([[0.1595]]) torch.Size([1, 1]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
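As a small, hedged illustration of the sharing behavior described above (tensor values are arbitrary): `view` never copies, so writes through the view are visible in the original, while `reshape` may or may not copy depending on the tensor's memory layout:

```python
import torch

w = torch.arange(10.).view(2, 5)   # a contiguous 2x5 tensor

v = w.view(5, 2)        # same underlying memory, new shape
v[0, 0] = 99.           # writing through the view...
print(w[0, 0])          # ...changes the original: tensor(99.)

r = w.reshape(5, 2)     # here w is contiguous, so this is also a view;
                        # on a non-contiguous tensor it would copy instead
```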
###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) print(y, y.shape) ###Output tensor([[0.1595]]) torch.Size([1, 1]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here print(features.shape) print(W1.shape) print(W2.shape) H1 = activation(torch.mm(features, W1) + B1) Y = activation(torch.mm(H1, W2) + B2) print(Y) ###Output torch.Size([1, 3]) torch.Size([3, 2]) torch.Size([2, 1]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a # A Numpy Array b = torch.from_numpy(a) print(b) # A Torch Tensor b.numpy() ###Output _____no_output_____ ###Markdown __Important :__ The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features*weights)+bias) # Also correct # activation((features*weights).sum()+bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1))+bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here activation(torch.mm(activation(torch.mm(features, W1)+B1),W2)+B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors h1 = activation(torch.sum(features*weights)+bias) h2 = activation((features*weights).sum()+bias) print(h1,h2) ###Output tensor([[0.1595]]) tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights.shape features.shape h = activation(torch.mm(weights,features.view(5,1))+bias) h ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here W1.shape h1 = activation(torch.mm(features,W1)+B1) h2 = activation(torch.mm(h1,W2)+B2) h2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
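Before walking through those lines, here is a quick sanity check of the `activation` function defined above (a small sketch; the input values are arbitrary). Sigmoid maps 0 to exactly 0.5 and squashes every input into the interval (0, 1):

```python
import torch

def activation(x):
    """Sigmoid activation, as defined in the cell above."""
    return 1 / (1 + torch.exp(-x))

x = torch.tensor([-2.0, 0.0, 2.0])
out = activation(x)
print(out)  # the middle entry, sigmoid(0), is exactly 0.5
```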
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
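A minimal sketch of what that broadcasting support means in practice (the shapes here are illustrative, not from the exercise): `torch.mm()` only accepts two 2-D matrices, while `torch.matmul()` also broadcasts over leading batch dimensions:

```python
import torch

a = torch.randn(1, 5)
b = torch.randn(5, 1)

# torch.mm is a strict 2-D matrix multiply
print(torch.mm(a, b).shape)           # (1, 1)

# torch.matmul also broadcasts over leading batch dimensions
batch = torch.randn(10, 1, 5)         # a batch of ten (1, 5) matrices
print(torch.matmul(batch, b).shape)   # b is broadcast across the batch -> (10, 1, 1)
```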
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.reshape(5, 1)) + bias) print (y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here y_1 = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(y_1, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
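For instance, the sharing works in both directions; here is a small sketch with arbitrary values, this time modifying the Numpy array in place and watching the tensor follow:

```python
import numpy as np
import torch

a = np.ones((2, 2))
b = torch.from_numpy(a)   # b shares a's memory, no copy is made

a *= 10                   # in-place change on the Numpy side
print(b)                  # the tensor sees the new values too
```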
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
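For instance, a quick sketch of tensors of different ranks (the shapes are arbitrary):

```python
import torch

vector = torch.randn(5)           # 1-D tensor: shape (5,)
matrix = torch.randn(3, 4)        # 2-D tensor: shape (3, 4)
image  = torch.randn(3, 28, 28)   # 3-D tensor, e.g. channels x height x width

print(vector.dim(), matrix.dim(), image.dim())
```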
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. 
The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5,1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
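A related detail worth a quick sketch (the variable names here are mine, not the notebook's): if you do *not* want this sharing, copy the array before converting, or call `.clone()` on the tensor.

```python
import numpy as np
import torch

a = np.ones(3)
shared = torch.from_numpy(a)              # shares memory with a
independent = torch.from_numpy(a.copy())  # copies the data first

shared.mul_(2)        # a changes too
independent.mul_(10)  # a is unaffected

print(a)  # [2. 2. 2.]
```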
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) #copy shape # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features*weights) + bias) print(y) y = activation((features*weights).sum() + bias) print(y) ###Output tensor([[0.1595]]) tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
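As an aside on the "sometimes a clone" point above: `view` never copies, so it refuses to work on non-contiguous tensors, while `reshape` silently falls back to copying when it has to. A minimal sketch with a throwaway tensor (unrelated to `weights`):

```python
import torch

t = torch.arange(6).view(2, 3)  # contiguous: [[0, 1, 2], [3, 4, 5]]
tt = t.t()                      # transpose is a non-contiguous view

# tt.view(6) raises a RuntimeError, since view cannot copy;
# reshape handles the non-contiguous case by copying the data.
flat = tt.reshape(6)
print(flat)  # tensor([0, 3, 1, 4, 2, 5])
```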
###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features,weights.reshape(5,1))+bias) print(y) y = activation(torch.mm(features,weights.resize_(5,1))+bias) print(y) y = activation(torch.mm(features,weights.view(5,1))+bias) print(y) ###Output tensor([[0.1595]]) tensor([[0.1595]]) tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here layer1 = activation(torch.mm(features,W1)+B1) layer2 = activation(torch.mm(layer1,W2)+B2) layer2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
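One practical wrinkle with this conversion, noted here as general PyTorch behavior rather than anything specific to this notebook: `torch.from_numpy` keeps NumPy's default `float64` dtype, while most PyTorch code expects `float32`. A short sketch:

```python
import numpy as np
import torch

a = np.random.rand(4, 3)   # NumPy arrays default to float64
b = torch.from_numpy(a)
print(b.dtype)             # torch.float64

b32 = b.float()            # cast to float32; this copies the data,
                           # so b32 no longer shares memory with a
print(b32.dtype)           # torch.float32
```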
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # in-place ops modify the values directly, without a memory copy (e.g. +=, *=, /=) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
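Before filling in the solution cell, it can help to sanity-check that the shapes line up at each step. A self-contained sketch that re-creates tensors with the same shapes (freshly random, not the seeded values above), using `torch.sigmoid` in place of the `activation` helper:

```python
import torch

features = torch.randn((1, 3))
W1, B1 = torch.randn(3, 2), torch.randn(1, 2)
W2, B2 = torch.randn(2, 1), torch.randn(1, 1)

h = torch.sigmoid(torch.mm(features, W1) + B1)  # (1, 3) @ (3, 2) -> (1, 2)
y = torch.sigmoid(torch.mm(h, W2) + B2)         # (1, 2) @ (2, 1) -> (1, 1)
print(h.shape, y.shape)  # torch.Size([1, 2]) torch.Size([1, 1])
```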
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y=activation((features*weights).sum()+bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a tensor with the same data as `weights` and size `(a, b)`; sometimes this is a view of the same memory, and sometimes it is a clone, meaning the data is copied to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
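As a quick sketch of this sharing (note: the `.clone()` escape hatch at the end is my addition beyond the notebook, though both calls are standard PyTorch API):

```python
import numpy as np
import torch

a = np.ones((2, 2))
b = torch.from_numpy(a)   # b shares a's memory, no copy is made
b.add_(1)                 # in-place op on the tensor...
print(a)                  # ...is visible in the Numpy array (all 2s now)

c = b.clone()             # clone() copies the data, breaking the link
c.add_(10)                # this no longer touches a
print(a)                  # still all 2s
```

If you want a tensor that is independent of the source array, copy first (`torch.from_numpy(a.copy())` or `.clone()` afterwards).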
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(weights * features) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5,1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. 
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here layer2 = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(layer2, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code output = activation(torch.sum(features * weights) + bias) output ## Calculate the output of this network using the weights and bias tensors output = activation((features * weights).sum() + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. 
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. 
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights_t = weights.view(weights.shape[1],weights.shape[0]) output = activation(torch.mm(features, weights_t).sum() + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code print(features.shape) print(W1.shape) print(W2.shape) ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! 
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation((features * weights).sum() + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
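Since `torch.matmul` broadcasts while `torch.mm` only accepts plain 2-D matrices, here is a small side sketch of the difference (my own illustration, not part of the original notebook):

```python
import torch

a = torch.randn(1, 5)
b = torch.randn(5, 1)

# torch.mm: strictly 2-D matrix multiplication
print(torch.mm(a, b).shape)          # torch.Size([1, 1])

# torch.matmul: also handles batched inputs via broadcasting;
# the (5, 1) matrix is applied to each of the 10 batch entries
batch = torch.randn(10, 1, 5)
print(torch.matmul(batch, b).shape)  # torch.Size([10, 1, 1])
```

`torch.mm(batch, b)` would raise an error here, since `batch` is 3-dimensional.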
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.T).sum() + bias) print(y) y = activation(torch.mm(features, weights.view(5,1)) + bias) print(y) ###Output tensor([[0.1595]]) tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here print(features.shape, W1.shape, B1.shape) h_1 = torch.mm(features, W1) + B1 a_1 = activation(h_1) print(a_1.shape, W2.shape) h_2 = torch.mm(a_1, W2) + B2 a_2 = activation(h_2) y = a_2 y ###Output torch.Size([1, 3]) torch.Size([3, 2]) torch.Size([1, 2]) torch.Size([1, 2]) torch.Size([2, 1]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. 
As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like:$$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot\begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
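Since the text notes that `torch.matmul()` supports broadcasting while `torch.mm()` is strictly two-dimensional, here is a small sketch of the difference (an illustration added here, not part of the original exercise; the tensor names are made up):

```python
import torch

torch.manual_seed(0)

a = torch.randn(1, 5)            # 2-D tensor: one row, five columns
b = torch.randn(5, 1)            # 2-D tensor: five rows, one column

# torch.mm accepts only 2-D tensors with matching inner dimensions
out_mm = torch.mm(a, b)          # shape (1, 1)

# torch.matmul additionally broadcasts over leading (batch) dimensions
batch = torch.randn(10, 1, 5)    # a stack of ten (1, 5) matrices
out_batched = torch.matmul(batch, b)  # b is broadcast across the batch

print(out_mm.shape)              # torch.Size([1, 1])
print(out_batched.shape)         # torch.Size([10, 1, 1])
```

For the plain 2-D case in this notebook the two functions behave identically, which is why either works once `weights` is reshaped.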
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated$$\vec{h} = [h_1 \, h_2] =\begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot\begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) features W1 ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here z1 = torch.mm(features, W1) + B1 a1 = activation(z1) z2 = torch.mm(a1, W2) + B2 a2 = activation(z2) print(a2) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors a1 = activation( torch.mm( features, weights.view(5,1) ) + bias ) # Equivalent a2 = activation( torch.sum( features*weights ) + bias ) print( a1 ) print( a2 ) ###Output tensor([[0.1595]]) tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication a1 = activation( torch.mm( features, weights.view(5,1) ) + bias ) print( a1 ) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code l1_output = activation( torch.mm(features, W1 ) + B1 ) l2_output = activation( torch.mm( l1_output, W2 ) + B2 ) print( l2_output ) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. **PyTorch in a lot of ways behaves like the arrays you love from Numpy**. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and **makes it simple to move them to GPUs for the faster processing needed when training neural networks**. **It also provides a module that automatically calculates gradients (for backpropagation!)** and another module specifically for building neural networks. All together, **PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow** and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together **(a linear combination)** then passed through an **activation function** to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors**It turns out neural network computations are just a bunch of linear algebra operations** on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code print(features) print(weights) print(bias) ## Calculate the output of this network using the weights and bias tensors activation((features*weights).sum()+bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use **matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs**.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in **any framework, you'll see this often**. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.add(torch.matmul(features, weights.view(5, 1)), bias)) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. **The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons**. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code torch.mm(features, W1)+B1 ## Your solution here activation(torch.mm(activation(torch.add(torch.mm(features, W1), B1)), W2)+B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, **the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.** Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structures for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) print("features: {}".format(features)) print("weights: {}".format(weights)) print("bias: {}".format(bias)) ###Output features: tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) weights: tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) bias: tensor([[0.3177]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays.
They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication.
Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication.
###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer, shown on the bottom here, contains the inputs and is understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) print("hidden_in: {}".format(h)) output = activation(torch.mm(h, W2) + B2) print("hidden_out: {}".format(output)) ###Output hidden_in: tensor([[0.6813, 0.4355]]) hidden_out: tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structures for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
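As a quick sanity check before the exercise, here is the same weighted-sum-plus-bias pattern on tiny hand-picked values (the tensors below are illustrative placeholders, not the notebook's random data):

```python
import torch

def activation(x):
    """Sigmoid activation, matching the definition above."""
    return 1 / (1 + torch.exp(-x))

# Illustrative hand-picked values, not the random data generated above
x = torch.tensor([[1.0, 2.0]])
w = torch.tensor([[0.5, -0.5]])
b = torch.tensor([[0.5]])

# Weighted sum of inputs plus bias, passed through the sigmoid
y = activation(torch.sum(x * w) + b)
print(y)  # 1*0.5 + 2*(-0.5) + 0.5 = 0, and sigmoid(0) = 0.5
```

With the weighted sum landing exactly at zero, the sigmoid returns 0.5, which makes the arithmetic easy to verify by hand.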
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
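Before stacking layers, it's worth confirming numerically that the matrix-multiply form really does compute the same thing as the element-wise multiply-and-sum from earlier. A quick self-contained check:

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)

# Element-wise multiply then sum...
elementwise = torch.sum(features * weights)
# ...versus a (1, 5) x (5, 1) matrix multiplication
matmul = torch.mm(features, weights.view(5, 1))

# Both produce the same scalar, up to floating-point rounding
print(abs(elementwise.item() - matmul.item()) < 1e-6)
```

The two expressions are the same dot product; the matrix form is simply the one that scales to whole layers.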
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer, shown on the bottom here, contains the inputs and is understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units print(n_input) # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) print(W1.shape, W2.shape) ###Output 3 torch.Size([3, 2]) torch.Size([2, 1]) ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here y = activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structures for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
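The `activation` function defined above squashes any real input into the interval (0, 1). A few spot checks on hand-picked inputs (illustrative values only):

```python
import torch

def activation(x):
    """Sigmoid activation, matching the definition above."""
    return 1 / (1 + torch.exp(-x))

print(activation(torch.tensor(0.0)))    # exactly 0.5 at zero
print(activation(torch.tensor(10.0)))   # close to 1 for large positive inputs
print(activation(torch.tensor(-10.0)))  # close to 0 for large negative inputs
```

This squashing is what turns an unbounded linear combination into something that can be read as a probability-like output.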
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(weights*features)+bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features,weights.reshape(5,1))+bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
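One detail worth verifying about the reshaping options discussed above: `.view()` returns a tensor that shares storage with the original, so writing through the view also changes the original tensor. A small sketch (the values are illustrative):

```python
import torch

weights = torch.ones((1, 5))

# view returns a (5, 1) tensor over the same underlying storage
v = weights.view(5, 1)
print(v.shape)  # torch.Size([5, 1])

# Writing through the view mutates the original tensor as well
v[0, 0] = 42.0
print(weights[0, 0].item())  # 42.0
```

This is why `.view()` is cheap: no data is copied, only the shape metadata changes.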
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer, shown on the bottom here, contains the inputs and is understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
###Code ## Your solution here a1 = torch.mm(features,W1)+B1 a2 = torch.mm(activation(a1),W2)+B2 output = activation(a2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks.
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5, 1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output tensor([[-0.1468, 0.7861, 0.9468]]) tensor([[-1.1143, 1.6908], [-0.8948, -0.3556], [ 1.2324, 0.1382]]) ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights)+bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
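To make the broadcasting difference concrete, here is a minimal sketch (the batch shapes below are illustrative assumptions, not tensors defined in the cells above): `torch.matmul` will broadcast one shared weight matrix across a leading batch dimension, while `torch.mm` only accepts strictly 2-D operands.

```python
import torch

# Illustrative shapes, assumed for this sketch only:
batch = torch.randn(4, 1, 5)   # four (1, 5) row vectors stacked along dim 0
w = torch.randn(5, 1)          # one shared (5, 1) weight column

# torch.matmul broadcasts w across the batch dimension, giving shape (4, 1, 1)
out = torch.matmul(batch, w)
print(out.shape)

# torch.mm handles a single (1, 5) x (5, 1) product fine...
torch.mm(batch[0], w)

# ...but rejects the 3-D batch outright:
try:
    torch.mm(batch, w)
except RuntimeError as e:
    print("torch.mm rejected the 3-D input:", e)
```

This is the "somewhat more complicated" part of `torch.matmul`: for plain 2-D inputs the two functions behave the same.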
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \!
\left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1)+B1) y = activation(torch.mm(h,W2)+B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = y = activation((features * weights).sum() + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5,1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. 
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here hidden_output = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(hidden_output, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward, we'll start using normal data.
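As a quick sanity check on the `activation` function defined in the cell above (this snippet is an editorial sketch, not part of the original notebook): the sigmoid returns exactly 0.5 at zero and saturates toward 0 and 1 for large-magnitude inputs.

```python
import torch

def activation(x):
    """Sigmoid activation: maps any real input into the open interval (0, 1)."""
    return 1 / (1 + torch.exp(-x))

# sigmoid(0) is exactly 0.5; large-magnitude inputs saturate toward 0 or 1
mid = activation(torch.tensor(0.0))
high = activation(torch.tensor(10.0))
low = activation(torch.tensor(-10.0))
print(mid, high, low)
```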
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
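To make the `torch.mm()` / `torch.matmul()` distinction concrete, here is a small sketch (the tensor names are illustrative, not from the notebook): `torch.mm()` accepts only strictly 2-D inputs, while `torch.matmul()` also broadcasts leading batch dimensions.

```python
import torch

a = torch.randn(3, 4)         # a single 3x4 matrix
b = torch.randn(4, 2)         # a single 4x2 matrix
batch = torch.randn(5, 3, 4)  # a batch of five 3x4 matrices

# torch.mm: plain 2-D matrix multiply, no broadcasting
print(torch.mm(a, b).shape)          # torch.Size([3, 2])

# torch.matmul: b is broadcast across the leading batch dimension of `batch`
print(torch.matmul(batch, b).shape)  # torch.Size([5, 3, 2])
```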
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(features @ weights.view(5, 1) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(features@W1 + B1) output = activation(h@W2 + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
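If that sharing is not what you want, you can copy instead of wrapping; a minimal sketch (an editorial addition, not part of the original notebook): `torch.from_numpy()` wraps the same buffer, while `torch.tensor()` makes an independent copy.

```python
import numpy as np
import torch

a = np.ones((2, 2))
shared = torch.from_numpy(a)  # wraps the same memory as `a`
copied = torch.tensor(a)      # copies the data into a new buffer

shared.mul_(3)                # in-place multiply is visible through `a`
print(a[0, 0])                # 3.0 -- the Numpy array changed too
print(copied[0, 0])           # the copy is untouched, still 1.0
```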
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. Altogether, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example).
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
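The three reshaping options discussed above (`view`, `reshape`, `resize_`) can be compared directly. A minimal sketch of the shapes involved (variable names are ours):

```python
import torch

weights = torch.randn(1, 5)
v = weights.view(5, 1)      # new tensor over the same data, shape (5, 1)
r = weights.reshape(5, 1)   # same data when possible, otherwise a copy
weights.resize_(5, 1)       # in-place: weights itself now has shape (5, 1)
```

Note that only `resize_` mutates `weights` itself; `view` and `reshape` leave the original tensor's shape untouched.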
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation((weights*features).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
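The stacking described above can be wrapped in a small forward-pass helper. A sketch (the `forward`/`sigmoid` names are ours, and the order of random draws differs from the notebook cell, so the output value won't match the `0.3171` shown for the exercise):

```python
import torch

def sigmoid(x):
    return 1 / (1 + torch.exp(-x))

def forward(x, W1, b1, W2, b2):
    """One hidden layer followed by an output layer, both with sigmoid activations."""
    h = sigmoid(torch.mm(x, W1) + b1)
    return sigmoid(torch.mm(h, W2) + b2)

torch.manual_seed(7)
x = torch.randn(1, 3)
W1, b1 = torch.randn(3, 2), torch.randn(1, 2)
W2, b2 = torch.randn(2, 1), torch.randn(1, 1)
y = forward(x, W1, b1, W2, b2)   # shape (1, 1), value in (0, 1)
```

Adding more layers is then just more `sigmoid(torch.mm(...))` steps chained together.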
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
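The progression described above (vector → matrix → 3-dimensional tensor) can be checked directly with the tensor's `dim()` method. A minimal sketch:

```python
import torch

vector = torch.randn(4)           # 1-D tensor
matrix = torch.randn(3, 4)        # 2-D tensor
image = torch.randn(3, 32, 32)    # 3-D tensor, e.g. one RGB image (channels, height, width)
```

Here `vector.dim()`, `matrix.dim()`, and `image.dim()` return 1, 2, and 3 respectively.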
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`.
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
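As noted above, `torch.matmul` supports broadcasting while `torch.mm` is strict about 2-D shapes. A quick sketch of the difference (variable names are ours):

```python
import torch

features = torch.randn(1, 5)
weights = torch.randn(1, 5)

# torch.mm is strict: it needs (1, 5) @ (5, 1) and returns a (1, 1) matrix
out_mm = torch.mm(features, weights.view(5, 1))

# torch.matmul accepts 1-D inputs: a 1-D @ 1-D product is a 0-d dot product
out_dot = torch.matmul(features.squeeze(), weights.squeeze())
```

Both expressions compute the same dot product; only the result's shape differs.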
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here hidden_output = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(hidden_output, W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
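As a sanity check on the hand-written sigmoid above, it should agree numerically with PyTorch's built-in `torch.sigmoid`. A quick sketch:

```python
import torch

def activation(x):
    """Hand-written sigmoid, matching the cell above."""
    return 1 / (1 + torch.exp(-x))

x = torch.linspace(-5, 5, steps=11)
matches = torch.allclose(activation(x), torch.sigmoid(x))
```

In practice you'd usually use the built-in; writing it out here just makes the math explicit.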
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
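As a side illustration (not from the original notebook), here is the main practical difference between the two: `torch.mm()` accepts only 2-D tensors, while `torch.matmul()` broadcasts leading batch dimensions:

```python
import torch

# torch.mm() requires strictly 2-D inputs; torch.matmul() broadcasts
# leading batch dimensions (a small sketch, values are arbitrary).
a = torch.randn(10, 3, 4)  # a batch of ten 3x4 matrices
b = torch.randn(4, 5)      # a single 4x5 matrix
out = torch.matmul(a, b)   # b is broadcast across the batch
print(out.shape)           # torch.Size([10, 3, 5])
```

For the strictly 2-D case in this notebook, the two behave the same, which is why either works below.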
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
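As a side note (not from the original notebook), if you ever want a tensor that does *not* share memory with the Numpy array, copy the data first with `.clone()`:

```python
import numpy as np
import torch

a = np.ones(3)
b = torch.from_numpy(a).clone()  # clone() copies the underlying data
b.mul_(2)                        # modifies only the clone
print(a)                         # [1. 1. 1.] -- the Numpy array is unchanged
```

Without the `.clone()`, the in-place `mul_(2)` would double the values in `a` as well, which the demonstration that follows shows.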
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
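As a quick illustration (not in the original notebook), the tensor ranks just described look like this in PyTorch:

```python
import torch

# Tensors of increasing rank (a minimal sketch)
v = torch.randn(3)         # 1-D tensor (a vector)
m = torch.randn(3, 4)      # 2-D tensor (a matrix)
t = torch.randn(2, 3, 4)   # 3-D tensor (e.g. a small image stack)
print(v.ndim, m.ndim, t.ndim)  # 1 2 3
```

The `.ndim` attribute reports the rank, and `.shape` the size along each dimension, both of which come up constantly below.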
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors y = activation((features*weights).sum()+bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1))+bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here y1 = activation(torch.mm(features,W1)+B1) y2 = activation(torch.mm(y1,W2)+B2) print(y2) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
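To make the shape bookkeeping concrete, here is a small standalone sketch (recreating the same random tensors) that inspects the shapes with `tensor.shape` and fixes the mismatch by turning the weights into a column vector:

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 5))        # shape (1, 5): one row, five columns
weights = torch.randn_like(features)  # same shape, (1, 5)

# Inner dimensions must agree: (1, 5) @ (5, 1) -> (1, 1)
print(features.shape, weights.shape)  # torch.Size([1, 5]) torch.Size([1, 5])

# (1, 5) @ (1, 5) would raise a size-mismatch error,
# so reshape the weights into a (5, 1) column first
h = torch.mm(features, weights.view(5, 1))
print(h.shape)  # torch.Size([1, 1])
```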
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication h = torch.mm(features, weights.t()) + bias y = activation(h) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here # 1 x n_hidden h1 = torch.mm(features, W1) + B1 ah1 = activation(h1) # 1 x n_output h2 = torch.mm(ah1, W2) + B2 y = activation(h2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____
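A related caveat worth knowing about the memory sharing described above (easy to verify yourself): `torch.from_numpy()` shares memory with the array, while `torch.tensor()` makes a copy, so the latter is the safer choice when you don't want in-place changes to propagate. A minimal sketch:

```python
import numpy as np
import torch

a = np.ones(3)
shared = torch.from_numpy(a)  # shares memory with `a`
copied = torch.tensor(a)      # copies the data into a new tensor

a *= 5  # in-place change to the Numpy array

print(shared)  # reflects the change: values are now 5.0
print(copied)  # unaffected: values are still 1.0
```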
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here y = activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(weights*features)+bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes it shares memory with the original, and sometimes it is a clone, meaning the data is copied to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights = weights.view(5, 1) output = activation(torch.mm(features, weights) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons.
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here hidden_output = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(hidden_output, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(weights * features) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
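The shape rules for `torch.mm()` versus `torch.matmul()` can be checked directly. A small sketch (the `batch` tensor here is a made-up example, not part of the notebook's data):

```python
import torch

# torch.mm is strict: both arguments must be 2-D, (n, m) @ (m, p) -> (n, p)
a = torch.ones(1, 5)
b = torch.ones(5, 1)
mm_out = torch.mm(a, b)              # shape (1, 1)

# torch.matmul additionally broadcasts leading (batch) dimensions
batch = torch.ones(3, 1, 5)          # three stacked (1, 5) matrices
w = torch.ones(5, 2)                 # one shared (5, 2) weight matrix
matmul_out = torch.matmul(batch, w)  # shape (3, 1, 2)
```

Calling `torch.mm(batch, w)` here would raise an error, since `mm` does not accept a 3-D argument.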
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.matmul(weights, features.t()) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # http://pytorch.org/ from os.path import exists from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag()) cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/' accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu' !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision import torch # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays.
In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often.
What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication.
###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits, though, such as GPU acceleration, which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`.
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation((features * weights).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(features.mm(weights.T) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h1 = activation(features.mm(W1) + B1) out = activation(h1.mm(W2) + B2) print(out) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.mm(features, weights.t()) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code bias_input = torch.Tensor([[1]]) bias_input.shape # Compute act_fn(act_fn(features * W1 + b1)*W2 + b2) print(features.shape) print(W1.shape) print(B1.shape) print(B2.shape) print("Old way: {0}".format(activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2))) hidden_layer = activation(torch.mm(torch.cat((features, bias_input), 1), torch.cat((W1, B1), 0))) op = activation(torch.mm(torch.cat((hidden_layer, bias_input), 1), torch.cat((W2, B2), 0))) print("Matrix Way: {0}".format(op)) torch.cat((W1, B1), 0) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. 
PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
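The difference between the reshaping options discussed above can be seen in a short sketch; the small tensors here are illustrative, not part of the network:

```python
import torch

w = torch.arange(6.)          # shape (6,)
v = w.view(2, 3)              # new tensor, same underlying data
r = w.reshape(3, 2)           # same data when the tensor is contiguous

# .view() shares memory: mutating w shows up in v as well
w[0] = 100.0
print(v[0, 0])                # reflects the change

# resize_ mutates the tensor itself (trailing underscore = in-place)
w2 = torch.arange(6.)
w2.resize_(2, 3)
print(w2.shape)
```

The in-place `resize_` changes the tensor you call it on, while `view` and `reshape` hand back a differently shaped window onto the same numbers.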
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
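One way to organize the forward pass for this exercise is a small helper function; `forward` is my name for it, not something defined in the notebook:

```python
import torch

def activation(x):
    """Sigmoid activation, as defined above."""
    return 1 / (1 + torch.exp(-x))

torch.manual_seed(7)                      # same seed and draw order as the notebook
features = torch.randn((1, 3))
W1 = torch.randn(3, 2)                    # input -> hidden weights
W2 = torch.randn(2, 1)                    # hidden -> output weights
B1 = torch.randn((1, 2))
B2 = torch.randn((1, 1))

def forward(x, W1, B1, W2, B2):
    """Sigmoid MLP forward pass: hidden layer, then output layer."""
    h = activation(torch.mm(x, W1) + B1)
    return activation(torch.mm(h, W2) + B2)

y = forward(features, W1, B1, W2, B2)
print(y)  # the notebook reports tensor([[0.3171]]) for this seed
```

Wrapping the two matrix multiplications in one function makes the layer structure explicit: each layer is just `activation(input @ weights + bias)` applied in sequence.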
###Code hidden_out = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(hidden_out, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. Altogether, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
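Since `torch.manual_seed(7)` is used throughout, the random tensors are reproducible; a quick sketch of what seeding guarantees:

```python
import torch

torch.manual_seed(7)
a = torch.randn((1, 5))

torch.manual_seed(7)          # reset the seed...
b = torch.randn((1, 5))       # ...and the same draw repeats exactly

print(torch.equal(a, b))      # True: identical tensors
```

This is why the expected outputs quoted later in the notebook can be checked against your own runs.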
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.matmul(weights, features.T) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.matmul(weights, features.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
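One way to verify the shared memory described above is `np.shares_memory`, a NumPy utility not used in the notebook itself:

```python
import numpy as np
import torch

a = np.random.rand(4, 3)
b = torch.from_numpy(a)                # no copy: the tensor wraps the numpy buffer

b.mul_(2)                              # in-place multiply on the tensor...
print(np.allclose(a, b.numpy()))       # ...is visible through the array: True
print(np.shares_memory(a, b.numpy()))  # True: same underlying buffer
```

An out-of-place operation such as `b * 2` would allocate a fresh tensor instead, leaving `a` untouched.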
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors def output_sum(features, weights, bias): return activation(torch.sum(features * weights) + bias) output_sum(features, weights, bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
###Code ## Calculate the output of this network using matrix multiplication def output(features, weights, bias): return activation(torch.mm(features, weights.view(5,1)) + bias) output(features, weights, bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here y = activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights.resize_(5,1) activation(torch.mm(features,weights)+bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) print(features.shape) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output torch.Size([1, 3]) ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here layer1 = activation(torch.mm(features, W1) + B1) layer2 = activation(torch.mm(layer1, W2) + B2) print(layer2) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation((features * weights).sum() + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
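As a quick illustration of that broadcasting, `torch.matmul()` will happily multiply a batch of matrices by a single matrix, where `torch.mm()` only accepts exactly two 2-D tensors. This small sketch uses throwaway tensors, separate from the tutorial's `features` and `weights`:

```python
import torch

# torch.mm is strictly 2-D; torch.matmul also handles batched inputs
a = torch.randn(10, 3, 4)  # a batch of ten 3x4 matrices
b = torch.randn(4, 5)      # a single 4x5 matrix, broadcast across the batch

out = torch.matmul(a, b)
print(out.shape)  # torch.Size([10, 3, 5])
```

With `torch.mm`, though, the two shapes must line up exactly, which brings us back to our `(1, 5)` tensors `features` and `weights`.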
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights_reshape = weights.reshape((5, 1)) y = activation(torch.mm(features, weights_reshape) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. 
The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden_output = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(hidden_output, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
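Before the exercise, it can help to sanity-check the `activation` function itself. This is a small aside; `activation` is re-defined here so the snippet is self-contained:

```python
import torch

def activation(x):
    """Sigmoid activation function."""
    return 1 / (1 + torch.exp(-x))

# The sigmoid squashes any real number into (0, 1), with 0 mapping to 0.5
print(activation(torch.tensor(0.0)))    # tensor(0.5000)
print(activation(torch.tensor(10.0)))   # close to 1
print(activation(torch.tensor(-10.0)))  # close to 0
```

Because the output always lies between 0 and 1, the sigmoid is a convenient choice when the unit's output should behave like a probability.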
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
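Before stacking layers, here is one more look at the three reshaping options discussed above, shown side by side. This is a minimal sketch on a throwaway tensor, not the tutorial's `weights`:

```python
import torch

w = torch.randn(1, 5)

print(w.view(5, 1).shape)     # torch.Size([5, 1]) -- new view, shares the same data
print(w.reshape(5, 1).shape)  # torch.Size([5, 1]) -- a view when possible, else a copy
w.resize_(5, 1)               # in-place: the trailing underscore mutates w itself
print(w.shape)                # torch.Size([5, 1])
```

The key practical difference is that `view` and `reshape` leave the original tensor's shape alone, while `resize_` changes it for every later use of the tensor.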
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit.
The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here input_hidden = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(input_hidden, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
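One detail worth flagging before the cells below (this is an added note, not one of the original cells): `torch.from_numpy()` preserves the NumPy dtype, so a default NumPy array (`float64`) becomes a `float64` tensor rather than PyTorch's usual `float32`. A minimal sketch:

```python
import numpy as np
import torch

a = np.random.rand(2, 2)     # NumPy defaults to float64
t = torch.from_numpy(a)      # dtype is preserved: torch.float64
t32 = t.float()              # .float() returns a float32 copy (no longer shares memory)
print(t.dtype, t32.dtype)
```

This matters later when mixing converted arrays with `float32` model weights.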
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation((features * weights).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.T) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here output1 = activation(torch.mm(features, W1) + B1) activation(torch.mm(output1, W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation((features * weights).sum() + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5, 1)) + bias) print(output) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \!
\left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h_hidden = activation(torch.matmul(features, W1) + B1) h_output = activation(torch.matmul(h_hidden, W2) + B2) print(h_output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
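A quick sketch (added here, not one of the original cells) of why the in-place `mul_` used below matters: an out-of-place operation allocates a new tensor, so the NumPy array stops tracking it, while in-place operations write through the shared buffer.

```python
import numpy as np
import torch

a = np.ones(3)
b = torch.from_numpy(a)   # b shares memory with a

b.mul_(2)                 # in-place: a sees the change
print(a)                  # [2. 2. 2.]

b = b * 2                 # out-of-place: b now points at new memory
print(a)                  # still [2. 2. 2.]; the original array is untouched
```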
###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
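The sharing works in the other direction too: an in-place change on the Numpy side shows up in the tensor. A minimal sketch (the all-ones array here is just illustrative, not part of the lesson's data):

```python
import numpy as np
import torch

a = np.ones((2, 2))
b = torch.from_numpy(a)   # b shares a's underlying memory buffer

a *= 3                    # in-place update through the Numpy array
print(b)                  # every element of the tensor is now 3.0
```

Note that operations that allocate a new array, such as `a = a * 3`, rebind the name instead of writing into the shared buffer, so the tensor would not change in that case.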
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors print(features.shape) print(weights.reshape(weights.shape[1],weights.shape[0]).shape) print(bias.shape) mm = torch.mm(features, weights.reshape(weights.shape[1],weights.shape[0])) print(mm.shape) print(mm, bias) h = mm + bias output = activation(h) print(output) ###Output torch.Size([1, 5]) torch.Size([5, 1]) torch.Size([1, 1]) torch.Size([1, 1]) tensor([[-1.9796]]) tensor([[0.3177]]) tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. 
Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes it shares memory with the original, and sometimes it returns a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
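Before attempting the exercise, it can help to confirm that the reshaped sizes line up for `torch.mm`. A quick shape check, re-declaring the tensors so the sketch runs on its own:

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)

print(weights.shape)             # torch.Size([1, 5])
print(weights.view(5, 1).shape)  # torch.Size([5, 1])
# columns of the first operand (5) now match rows of the second (5)
print(torch.mm(features, weights.view(5, 1)).shape)  # torch.Size([1, 1])
```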
###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) print(h) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
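When sharing is not what you want, copying breaks the link: `tensor.clone()` on the Torch side (or `np.copy()` on the Numpy side) allocates fresh memory. A small sketch with deliberately simple values:

```python
import numpy as np
import torch

a = np.ones((4, 3))
b = torch.from_numpy(a).clone()  # clone() copies the data instead of sharing it

b.mul_(2)                        # in-place change touches only the copy
print(a[0, 0], b[0, 0].item())   # 1.0 2.0 -> the original array is untouched
```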
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes it shares memory with the original, and sometimes it returns a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
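One way to approach the exercise is to first trace the shapes through the two layers. The sketch below re-declares tensors with the sizes defined above so it runs on its own; it checks shapes only and deliberately leaves out the `activation` calls the exercise asks for:

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 3))
W1, W2 = torch.randn(3, 2), torch.randn(2, 1)
B1, B2 = torch.randn((1, 2)), torch.randn((1, 1))

hidden = torch.mm(features, W1) + B1  # (1, 3) x (3, 2) -> (1, 2), matches B1
output = torch.mm(hidden, W2) + B2    # (1, 2) x (2, 1) -> (1, 1), matches B2
print(hidden.shape, output.shape)
```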
###Code ## Your solution here lay1 = activation(torch.mm(features, W1)+B1) output = activation(torch.mm(lay1, W2)+B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch # use fastai venv to run since pytorch installed import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) # activation(3.2) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1,5)) # True weights for our data, random normal variables again weights = torch.randn((1,5)) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. 
This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors weights.shape activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. 
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes it shares memory with the original, and sometimes it returns a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. 
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code weights.shape ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1))+bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) features features.shape # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) W1 # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) W2 # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) B1 B2 ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here H1 = activation((torch.mm(features, W1))+B1) y = activation(torch.mm(H1,W2)+B2) H1 y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! 
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place, underscore means in place operation b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
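As a quick aside (not part of the original notebook), the hand-written `activation` function above should agree with PyTorch's built-in `torch.sigmoid`:

```python
import torch

x = torch.linspace(-5.0, 5.0, steps=11)
manual = 1 / (1 + torch.exp(-x))   # same formula as activation() above
assert torch.allclose(manual, torch.sigmoid(x))
```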
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) print(f'features * weights : {(features * weights).shape}') y = activation(torch.sum(features * weights) + bias) y = activation((features * weights).sum() + bias) ###Output features * weights : torch.Size([1, 5]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
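To make the broadcasting remark concrete, here is a small sketch (the shapes are arbitrary examples, not from the notebook): `torch.mm()` only accepts 2-D matrices, while `torch.matmul()` broadcasts leading batch dimensions.

```python
import torch

a = torch.randn(3, 1, 4, 5)   # a (3, 1) batch of 4x5 matrices
b = torch.randn(2, 5, 6)      # a (2,) batch of 5x6 matrices

out = torch.matmul(a, b)      # batch dims broadcast to (3, 2)
assert out.shape == (3, 2, 4, 6)
# torch.mm(a, b) would raise an error: mm is strictly for 2-D tensors
```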
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication # Let's use weights.view(a, b) y = activation(torch.mm(features, weights.view(5,1)) + bias) print(f'y : {y}') # print(f'y.shape {y.shape}') # print(f'weights.shape {weights.shape}') # print(f'weights.view(5,1).shape {weights.view(5,1).shape}') # Using Transpose # torch.transpose(x, 0, 1) y = activation(torch.mm(features, torch.transpose(weights, 0, 1)) + bias) print(f'y : {y}') ###Output y : tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here H1 = activation(torch.mm(features, W1) + B1) print(f'H1 SHAPE : {H1.shape}') H2 = activation(torch.mm(H1, W2) + B2) print(f'H2 SHAPE : {H2.shape}') print(H1) print(H2) ###Output H1 SHAPE : torch.Size([1, 2]) H2 SHAPE : torch.Size([1, 1]) tensor([[0.6813, 0.4355]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. 
As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a type(a) type(b) a = torch.from_numpy(a) type(a) type(b) a.mul_(4) b ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) # y = activation((features * weights).sum() + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
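One more difference worth knowing (an aside, with made-up tensors): `torch.matmul()` also handles 1-D vectors, returning their dot product, whereas `torch.mm()` requires both arguments to be 2-D.

```python
import torch

v = torch.ones(5)
w = torch.arange(5.0)        # tensor([0., 1., 2., 3., 4.])

dot = torch.matmul(v, w)     # 1-D x 1-D -> 0-dim scalar tensor
assert dot.dim() == 0
assert dot.item() == 10.0    # 0 + 1 + 2 + 3 + 4
# torch.mm(v, w) would raise an error: mm expects 2-D matrices
```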
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code weights.view(5,1) weights.shape ## Calculate the output of this network using matrix multiplication # With matrix multiplication we do the multiplication and the sum in one go. # We do this: y = activation(torch.mm(features, weights.view(5, 1)) + bias) # Instead of: # y = activation((features * weights).sum() + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. 
To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
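As an aside, the two ways of computing the output — elementwise multiply-then-sum versus a matrix multiplication with a reshaped weight tensor — can be checked against each other (a small sketch reusing the notebook's seed):

```python
import torch

torch.manual_seed(7)
features = torch.randn(1, 5)
weights = torch.randn_like(features)

via_mm = torch.mm(features, weights.view(5, 1))      # matrix multiply
via_sum = (features * weights).sum().reshape(1, 1)   # multiply then sum
assert torch.allclose(via_mm, via_sum)
```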
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation( torch.mm(features, weights.T) + bias ) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here output = activation(torch.matmul(activation(torch.matmul(features, W1) + B1), W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = torch.sum(weights * features) + bias output = activation(output) print("Output =", output) ###Output Output = tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes this shares memory with the original, and sometimes it is a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = torch.mm(features, weights.view(5, 1)) + bias output = activation(output) print("Output =", output) ###Output Output = tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here print("Features shape :", features.shape) print("W1 shape :", W1.shape) output = torch.mm(features, W1) + B1 output = activation(output) print("Output L1 shape :", output.shape) print("W2 shape :", W2.shape) output = torch.mm(output, W2) + B2 output = activation(output) print("Output L2 shape :", output.shape) print("Output :", output) ###Output Features shape : torch.Size([1, 3]) W1 shape : torch.Size([3, 2]) Output L1 shape : torch.Size([1, 2]) W2 shape : torch.Size([2, 1]) Output L2 shape : torch.Size([1, 1]) Output : tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors.
PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
###Code # Added by Viv print('feature shape - ',features.shape) print('weights shape - ',weights.shape) print(features*weights) # element-wise product, not a matrix multiplication - use torch.mm() or torch.matmul() ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(torch.mm(features,weights.view(5,1))+bias)) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes this shares memory with the original, and sometimes it is a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h1 = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h1, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) print('features:\t', features) # True weights for our data, random normal variables again weights = torch.randn_like(features) print('weights:\t', weights) # and a true bias term bias = torch.randn((1, 1)) print('bias:\t', bias) ###Output features: tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) weights: tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) bias: tensor([[0.3177]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation((features * weights).sum() + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes this shares memory with the original, and sometimes it is a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)).sum() + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons.
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) print(features) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) print(W1) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) print(W2) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) print(B1) B2 = torch.randn((1, n_output)) print(B2) ###Output tensor([[-0.1468, 0.7861, 0.9468]]) tensor([[-1.1143, 1.6908], [-0.8948, -0.3556], [ 1.2324, 0.1382]]) tensor([[-1.6822], [ 0.3177]]) tensor([[0.1328, 0.1373]]) tensor([[0.2405]]) ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights 
`W1` & `W2`, and the biases, `B1` & `B2`.

###Code

## Your solution here
hidden_output = activation(torch.mm(features, W1) + B1)
final_output = activation(torch.mm(hidden_output, W2) + B2)
print(final_output)

###Output

tensor([[0.3171]])

###Markdown

If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.

###Code

import numpy as np
a = np.random.rand(4,3)
a

b = torch.from_numpy(a)
b

b.numpy()

###Output

_____no_output_____

###Markdown

The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.

###Code

# Multiply PyTorch Tensor by 2, in place
b.mul_(2)

# Numpy array matches new values from Tensor
a

###Output

_____no_output_____

###Markdown

Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks.
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots  x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
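As a quick, self-contained sketch of the tensor ranks described above (written with NumPy, since `torch` is only imported in the next cell; torch tensors expose the same `.ndim` and `.shape` attributes):

```python
import numpy as np

# Rank-1 tensor (a vector), rank-2 (a matrix), rank-3 (e.g. an RGB image)
vector = np.zeros(5)
matrix = np.zeros((3, 4))
image = np.zeros((32, 32, 3))

print(vector.ndim, vector.shape)  # 1 (5,)
print(matrix.ndim, matrix.shape)  # 2 (3, 4)
print(image.ndim, image.shape)    # 3 (32, 32, 3)
```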
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) features ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code

## Calculate the output of this network using the weights and bias tensors
print ((weights * features).sum())
print (torch.mul(weights, features).sum())

y = activation(torch.sum(weights * features) + bias)
print(y)

y = activation(torch.sum(torch.mul(weights, features)) + bias)
print(y)

###Output

_____no_output_____

###Markdown

You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError                              Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code weights.shape, features.shape ## Calculate the output of this network using matrix multiplication y = activation(torch.sum(torch.mm(weights, features.reshape(5, 1)) + bias)) print(y) ###Output tensor(0.1595) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. 
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
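The exercise is mostly shape bookkeeping; here is a hedged NumPy sketch of the same two-layer forward pass. The names mirror the torch tensors above, but these are freshly generated NumPy arrays, so the numbers will differ from the notebook's:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal((1, 3))    # input
W1 = rng.standard_normal((3, 2))   # input -> hidden weights
B1 = rng.standard_normal((1, 2))
W2 = rng.standard_normal((2, 1))   # hidden -> output weights
B2 = rng.standard_normal((1, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

h = sigmoid(x @ W1 + B1)   # (1, 3) @ (3, 2) -> (1, 2)
y = sigmoid(h @ W2 + B2)   # (1, 2) @ (2, 1) -> (1, 1)
print(h.shape, y.shape)
```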
###Code

## Your solution here
print (W1.shape, features.shape)
h = activation(torch.mm(features, W1)+B1)
print (h.shape, W2.shape)
y = activation(torch.mm(h, W2) + B2)
print(y)

###Output

tensor([[0.3171]])

###Markdown

If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.

###Code

import numpy as np
a = np.random.rand(4,3)
a

b = torch.from_numpy(a)
b

b.numpy()

###Output

_____no_output_____

###Markdown

The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.

###Code

# Multiply PyTorch Tensor by 2, in place
b.mul_(2)

# Numpy array matches new values from Tensor
a

###Output

_____no_output_____

###Markdown

Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks.
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots  x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
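The neuron equation above can be checked with nothing but the standard library. A minimal sketch, using made-up inputs, weights, and bias rather than the values generated below:

```python
import math

def sigmoid(z):
    # f(z) = 1 / (1 + e^(-z)), the activation used throughout this notebook
    return 1 / (1 + math.exp(-z))

# Made-up illustration values for a two-input neuron
x = [0.5, -1.0]   # inputs x1, x2
w = [0.2, 0.4]    # weights w1, w2
b = 0.1           # bias

# y = f(w1*x1 + w2*x2 + b) = sigmoid(-0.2)
y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
print(y)
```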
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code

## Calculate the output of this network using the weights and bias tensors
print(f'features:\t{features}\nweights:\t{weights}\nbias:\t\t{bias}')

output = features * weights
output = torch.sum(output) + bias
output = activation(output)

print(f'result:\t\t{output}')

###Output

features: tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]])
weights: tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]])
bias: tensor([[0.3177]])
result: tensor([[0.1595]])

###Markdown

You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError                              Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
###Code

## Calculate the output of this network using matrix multiplication
print(f'features:\t{features.shape}:{features}\nweights:\t{weights.shape}:{weights}\nbias:\t\t{bias.shape}:{bias}')

weights = weights.view(5,1)
output = torch.mm(features, weights)
output = output + bias
output = activation(output)

print(f'result:\t\t{output.shape}:{output}')

###Output

features: torch.Size([1, 5]):tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]])
weights: torch.Size([1, 5]):tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]])
bias: torch.Size([1, 1]):tensor([[0.3177]])
result: torch.Size([1, 1]):tensor([[0.1595]])

###Markdown

Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$

###Code

### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable

# Features are 3 random normal variables
features = torch.randn((1, 3))

# Define the size of each layer in our network
n_input = features.shape[1]     # Number of input units, must match number of input features
n_hidden = 2                    # Number of hidden units
n_output = 1                    # Number of output units

# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)

# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))

###Output

_____no_output_____

###Markdown

> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.

###Code

## Your solution here
print('-- LAYER 1 --')
print(f'features shape:\t\t{features.shape}')
print(f'W1 shape:\t\t{W1.shape}')
print(f'B1 shape:\t\t{B1.shape}')
print()

layer1_in = torch.mm(features, W1) + B1
layer1_out = activation(layer1_in)

print('-- LAYER 2 --')
print(f'layer 1 output shape:\t{layer1_out.shape}')
print(f'W2 shape:\t\t{W2.shape}')
print(f'B2 shape:\t\t{B2.shape}')
print()

layer2_in = torch.mm(layer1_out, W2) + B2
output = activation(layer2_in)

print('-- OUTPUT --')
print(f'output shape:\t\t{output.shape}')
print(f'output:\t\t\t{output}')

###Output

-- LAYER 1 --
features shape: torch.Size([1, 3])
W1 shape: torch.Size([3, 2])
B1 shape: torch.Size([1, 2])

-- LAYER 2 --
layer 1 output shape: torch.Size([1, 2])
W2 shape: torch.Size([2, 1])
B2 shape: torch.Size([1, 1])

-- OUTPUT --
output shape: torch.Size([1, 1])
output: tensor([[0.3171]])

###Markdown

If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters.
As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots  x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.

###Code

# First, import PyTorch
import torch

def activation(x):
    """ Sigmoid activation function

        Arguments
        ---------
        x: torch.Tensor
    """
    return 1/(1+torch.exp(-x))

### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable

# Features are 5 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))

###Output

_____no_output_____

###Markdown

Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features*weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError                              Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view) and [`torch.transpose(weights,0,1)`](https://pytorch.org/docs/master/generated/torch.transpose.html).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.* `torch.transpose(weights,0,1)` will return a transposed weights tensor. This returns a transposed version of the input tensor along dim 0 and dim 1. This is efficient since we do not need to specify the actual dimensions of `weights`.I usually use `.view()`, but any of these methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.One more approach is to use `.t()` to transpose the vector of weights, in our case from (1,5) to (5,1) shape.> **Exercise**: Calculate the output of our little network using matrix multiplication.

###Code

## Calculate the output of this network using matrix multiplication
output = activation(torch.matmul(features,torch.transpose(weights,0,1)) + bias)
output

###Output

_____no_output_____

###Markdown

Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation.
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.matmul(features,W1).add_(B1)) output = activation(torch.matmul(h,W2).add_(B2)) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! 
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) features ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(weights * features) + bias) ##or ##y = activation((weights * features).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
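As a quick aside (not part of the original exercise), the broadcasting that `torch.matmul` supports and `torch.mm` lacks can be seen with batched inputs — a minimal sketch:

```python
import torch

a = torch.randn(10, 1, 5)  # a batch of ten 1x5 matrices
b = torch.randn(5, 1)

# torch.matmul broadcasts b across the batch dimension
print(torch.matmul(a, b).shape)  # torch.Size([10, 1, 1])

# torch.mm is strictly 2-D and rejects the 3-D input
try:
    torch.mm(a, b)
except RuntimeError as err:
    print("torch.mm failed:", err)
```

So `torch.mm` is a good choice when you want a shape mismatch to fail loudly, while `torch.matmul` is more flexible for batched data.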
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias ) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) features ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h,W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
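Before the exercise, it can help to sanity-check the `activation` function defined above: the sigmoid squashes any real input into (0, 1), mapping 0 to exactly 0.5 and saturating toward 0 or 1 for large negative or positive inputs. A minimal sketch (the function is redefined here so the snippet is self-contained):

```python
import torch

def activation(x):
    """Sigmoid activation: squashes inputs into the range (0, 1)."""
    return 1 / (1 + torch.exp(-x))

x = torch.tensor([-5.0, 0.0, 5.0])
print(activation(x))  # roughly [0.0067, 0.5000, 0.9933]
```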
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation((features * weights).sum() + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) outp = activation(torch.mm(h, W2) + B2) print(outp) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.matmul(features, weights.reshape(weights.shape[::-1])) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.matmul(features, weights.reshape(weights.shape[::-1])) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! 
\left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here # features has shape 1 x 3; W1 has shape 3 x 2 a1 = 1 / (1 + torch.exp(-(torch.matmul(features, W1) + B1))) # a1 has shape 1 x 2; W2 has shape 2 x 1 a2 = activation(torch.matmul(a1, W2) + B2) out = a2 out ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
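Worth knowing alongside this: `torch.from_numpy()` shares memory with the Numpy array, while `tensor.clone()` gives you an independent copy. A small sketch, with array values chosen purely for illustration:

```python
import numpy as np
import torch

a = np.ones((4, 3))
b = torch.from_numpy(a)   # b shares memory with a
c = b.clone()             # c is an independent copy

b.mul_(2)                 # in-place change to b...

print(a[0, 0])            # 2.0 -- the shared Numpy array changed too
print(float(c[0, 0]))     # 1.0 -- the clone is unaffected
```

So if you need to keep the original Numpy values around, clone before mutating.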
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a weights.reshape(weights.shape[::-1]) weights.shape weights.reshape(weights.shape[::-1]).shape ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
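As an aside, here is a small sketch of what that broadcasting buys you; the shapes are chosen purely for illustration:

```python
import torch

# torch.mm() requires two 2-D matrices. torch.matmul() additionally
# broadcasts leading (batch) dimensions:
batch = torch.randn(10, 1, 5)   # a batch of ten (1, 5) row vectors
w = torch.randn(5, 1)           # one (5, 1) column vector

out = torch.matmul(batch, w)    # w is reused across the whole batch
print(out.shape)                # torch.Size([10, 1, 1])
```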
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a tensor with the same data as `weights` and size `(a, b)`; sometimes this is a view of the same memory, and sometimes a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) print(features) print(weights) print(features * weights) print((features*weights).sum()) print(bias) print((features * weights) + bias) print(features[0][0]*weights[0][0] ) ###Output tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) tensor([[ 0.1314, -0.2796, 1.1668, -0.1540, -2.8442]]) tensor(-1.9796) tensor([[0.3177]]) tensor([[ 0.4490, 0.0381, 1.4845, 0.1637, -2.5266]]) tensor(0.1314) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. 
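For instance, elementwise arithmetic reads exactly like Numpy; the values here are chosen purely for illustration:

```python
import torch

x = torch.tensor([[1., 2., 3.]])
y = torch.tensor([[10., 20., 30.]])

print(x + y)          # tensor([[11., 22., 33.]])
print(x * y)          # tensor([[10., 40., 90.]])
print((x * y).sum())  # tensor(140.)
```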
In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output1 = activation( (features*weights).sum() + bias) print(activation(torch.sum(features*weights)+bias)) output1 ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. 
What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a tensor with the same data as `weights` and size `(a, b)`; sometimes this is a view of the same memory, and sometimes a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
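Before reaching for one of these, a quick sketch of how they differ in practice; the tensors here are freshly drawn and purely illustrative:

```python
import torch

w = torch.randn(1, 5)

v = w.view(5, 1)      # a new view on the same underlying storage
r = w.reshape(5, 1)   # same data when possible, otherwise a copy

w[0, 0] = 42.0        # changing the original...
print(float(v[0, 0])) # 42.0 -- ...shows up through the view

w2 = torch.randn(1, 5)
w2.resize_(5, 1)      # in-place: the tensor itself changes shape
print(w2.shape)       # torch.Size([5, 1])
```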
###Code ## Calculate the output of this network using matrix multiplication mul = torch.mm(features, weights.T) # matmul (1,5)x(5,1) -> (1,1) mul2 = torch.mm(features, weights.view(5,1)) output2 = activation(mul + bias) print(weights) print(weights.view(5,1)) #output2.view() ###Output tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) tensor([[-0.8948], [-0.3556], [ 1.2324], [ 0.1382], [-1.6822]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden = activation(torch.mm(features, W1.view(n_input, n_hidden)) + B1) output = activation(torch.mm(hidden, W2.view(n_hidden, n_output)) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
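To make the formula concrete before you try it on the exercise's tensors, here is the same computation on tiny hand-picked numbers; these values are illustrative and are not the exercise's `features`, `weights`, or `bias`:

```python
import torch

def sigmoid(x):
    return 1 / (1 + torch.exp(-x))

# y = f(w1*x1 + w2*x2 + b), with hand-picked illustrative values
x = torch.tensor([[1.0, 2.0]])
w = torch.tensor([[0.5, -0.5]])
b = torch.tensor([[0.5]])

y = sigmoid((x * w).sum() + b)   # 0.5*1 + (-0.5)*2 + 0.5 = 0, and sigmoid(0) = 0.5
print(float(y))                  # 0.5
```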
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a tensor with the same data as `weights` and size `(a, b)`; sometimes this is a view of the same memory, and sometimes a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
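If you want to see the shape bookkeeping for such a two-layer pass, here is a sketch on freshly drawn weights; it uses an illustrative seed rather than the exercise's, so the numbers will not match `tensor([[ 0.3171]])`:

```python
import torch

def sigmoid(x):
    return 1 / (1 + torch.exp(-x))

torch.manual_seed(0)          # illustrative seed, not the exercise's seed 7
x = torch.randn(1, 3)
W1_demo = torch.randn(3, 2)
W2_demo = torch.randn(2, 1)
B1_demo = torch.randn(1, 2)
B2_demo = torch.randn(1, 1)

h = sigmoid(torch.mm(x, W1_demo) + B1_demo)   # hidden layer, shape (1, 2)
y = sigmoid(torch.mm(h, W2_demo) + B2_demo)   # output layer, shape (1, 1)
print(h.shape, y.shape)
```

The key point is that each layer's weight matrix maps the previous layer's width to the next layer's width, so the shapes chain together: (1, 3) x (3, 2) gives (1, 2), then (1, 2) x (2, 1) gives (1, 1).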
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
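Before moving on to the matrix-multiplication discussion, here is one way the exercise above can be completed. This is a sketch that recreates the seeded data so it runs standalone; a solved copy later in this file uses the same `torch.sum` expression.

```python
import torch

def activation(x):
    """Sigmoid activation function."""
    return 1 / (1 + torch.exp(-x))

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)
bias = torch.randn((1, 1))

# Element-wise multiply, sum the products, add the bias, then apply the sigmoid
y = activation(torch.sum(features * weights) + bias)
# Equivalent, using the tensor method instead of the function
y_method = activation((features * weights).sum() + bias)
print(y)
```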
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks.
It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights)+bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y=activation(torch.mm(features, weights.view(5,1))+bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h=activation(torch.mm(features,W1)+B1) output = activation(torch.mm(h,W2)+B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias); print(y) #tensor([[0.1595]]) y = activation((features * weights).sum() + bias); print(y) #tensor([[0.1595]]) ###Output tensor([[0.1595]]) tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights.
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape.
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication # y = activation(torch.mm(features,weights.view(5,1)).sum() + bias) y = activation(torch.mm(features,weights.view(5,1)) + bias) print(y) # tensor([[0.1595]]) # the matrix multiplication does the multiply and the sum in one step, so .sum() is no longer needed ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**.
We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h,W2) + B2) print(output) # tensor([[0.3171]]) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters.
As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
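As a side note on the broadcasting just mentioned, here is a minimal sketch of the difference (the batch shapes below are illustrative, not part of the lesson): `torch.mm()` only accepts 2-D tensors, while `torch.matmul()` broadcasts extra batch dimensions.

```python
import torch

torch.manual_seed(0)

batch = torch.randn(10, 1, 5)  # a batch of 10 row vectors
w_col = torch.randn(5, 1)      # one shared weight column

# torch.mm(batch, w_col) would raise an error: mm only handles 2-D tensors.
# torch.matmul broadcasts w_col across the batch dimension instead.
out = torch.matmul(batch, w_col)
print(out.shape)  # torch.Size([10, 1, 1])
```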
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5,1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h1 = activation(torch.mm(features, W1) + B1) o = activation(torch.mm(h1, W2) + B2) o ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
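To make the in-place distinction concrete, a small sketch of this sharing (assuming the same `torch.from_numpy` setup): an out-of-place operation like `b * 2` allocates a new tensor and leaves the Numpy array alone, while the in-place `b.mul_(2)` writes through to the shared buffer.

```python
import numpy as np
import torch

a = np.ones((2, 2))
b = torch.from_numpy(a)   # b shares memory with a

c = b * 2                 # out-of-place: a new tensor, a is untouched
print(a[0, 0])            # 1.0

b.mul_(2)                 # in-place: mutates the shared buffer
print(a[0, 0])            # 2.0
```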
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like:$$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot\begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features*weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view) and [`torch.transpose(weights,0,1)`](https://pytorch.org/docs/master/generated/torch.transpose.html).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.* `torch.transpose(weights, 0, 1)` will return the transposed `weights` tensor, swapping dim 0 and dim 1. This is convenient since we do not need to spell out the actual dimensions of `weights`.I usually use `.view()`, but any of these methods will work. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.One more approach is to use `.t()` to transpose the vector of weights, in our case from (1,5) to (5,1) shape.> **Exercise**: Calculate the output of our little network using matrix multiplication.
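The reshaping options listed above all give the same `(5, 1)` result for our row of weights; a quick sketch comparing them, re-creating the tensors with the same seed as the lesson:

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)

v = weights.view(5, 1)
r = weights.reshape(5, 1)
t = weights.t()                      # transpose of a (1, 5) tensor
u = torch.transpose(weights, 0, 1)

# All four are (5, 1) tensors holding the same values
print(v.shape)                                                  # torch.Size([5, 1])
print(torch.equal(v, r), torch.equal(v, t), torch.equal(v, u))  # True True True
```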
###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.matmul(features,torch.transpose(weights,0,1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated$$\vec{h} = [h_1 \, h_2] =\begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot\begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.matmul(features,W1).add_(B1)) output = activation(torch.matmul(h,W2).add_(B2)) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. Altogether, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs.
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights)+ bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1))+bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features*weights)+bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.matmul(weights, features.reshape(5, 1)) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons.
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here Z1 = activation(torch.matmul(features, W1) + B1) Z2 = activation(torch.matmul(Z1, W2) + B2) Z2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch import numpy as np def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors weighted_input = torch.sum(features * weights) + bias h = activation(weighted_input) h x = np.arange(-10., 10., 0.2) sig = 1/(1+np.exp(-x)) import matplotlib.pyplot as plt %matplotlib inline plt.plot(x, sig) plt.plot(weighted_input[0][0], h[0][0], 'g*') torch.mm(features, weights) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights.
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape.
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code weights, weights.view(weights.shape), weights.reshape(5, 1), weights.view(5, 1) activation(torch.sum(features * weights) + bias) ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.reshape(tuple(reversed(weights.shape)))) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation.
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden_layer = activation(torch.mm(features, W1) + B1) print(hidden_layer) output_layer = activation(torch.mm(hidden_layer, W2) + B2) print(output_layer) ###Output tensor([[0.6813, 0.4355]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.
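The same two-layer computation can also be written as a loop over (weight, bias) pairs, which is how the idea scales to deeper stacks of layers. This is just a sketch restating the cells above (same seed, tensors created in the same order), not new notebook code:

```python
import torch

def activation(x):
    """Sigmoid activation function."""
    return 1 / (1 + torch.exp(-x))

torch.manual_seed(7)                 # same seed as above
features = torch.randn((1, 3))

# Same weights and biases, created in the same order as the cell above
W1 = torch.randn(3, 2)
W2 = torch.randn(2, 1)
B1 = torch.randn((1, 2))
B2 = torch.randn((1, 1))

x = features
for W, B in [(W1, B1), (W2, B2)]:    # one pass per layer
    x = activation(torch.mm(x, W) + B)

print(x)  # tensor([[0.3171]]), matching the hand-computed result
```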
Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors # torch.mm(features, weights.t_()) weights features weightsnew = weights.view(-1, 1) weightsnew activation(torch.mm(features, weightsnew) + bias) activation(torch.sum(features * weights) + bias) features * weights ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weightsnew) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here print(features, end='\n\n') print(W1, end='\n\n') print(W2) h = activation(torch.mm(features, W1) + B1) h out = activation(torch.mm(h, W2) + B2) out ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$ With vectors this is the dot/inner product of two vectors: $$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices.
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structures for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data. Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration, which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
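As a quick illustration of that Numpy-like behavior, here is a small cell (not part of the original notebook; the tiny tensors `x` and `w` are made up for demonstration) showing elementwise addition, multiplication, and summing:

```python
import torch

x = torch.tensor([[1.0, 2.0, 3.0]])
w = torch.tensor([[0.5, -1.0, 2.0]])

print(x + w)             # elementwise sum: 1.5, 1.0, 5.0
print(x * w)             # elementwise product: 0.5, -2.0, 6.0
print((x * w).sum())     # scalar tensor holding 4.5
print(torch.sum(x * w))  # same result via the function form
```

Both the `.sum()` method and the `torch.sum()` function reduce a tensor to a scalar tensor here; `.item()` would pull out the plain Python number.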
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code activation((features * weights).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error: ```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. The first layer shown on the bottom here contains the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$ The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply as $$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
###Code L1 = activation(torch.mm(features, W1) + B1) out = activation(torch.mm(L1, W2) + B2) out ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
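Since moving tensors to GPUs comes up here, a minimal sketch of how that is typically done with the standard `torch` device API may help (this cell is an addition, not from the original notebook; it falls back to the CPU when no GPU is available):

```python
import torch

# Pick a device: CUDA if a GPU is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn((1, 5))
x_dev = x.to(device)   # .to() returns a tensor on the target device

# prints cuda:0 on a GPU machine, cpu otherwise
print(x_dev.device)

# Move back to the CPU, e.g. before converting to Numpy
x_cpu = x_dev.to("cpu")
```

The same `.to()` call is also used later in practice to move whole models, so the guard shown here is a common pattern for writing device-agnostic code.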
Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$ With vectors this is the dot/inner product of two vectors: $$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structures for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) print(features) print(weights) print(bias) ###Output tensor([[0.3177]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data.
Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration, which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ### Solution # Now, make our labels from our data and true weights y = activation(torch.sum(features * weights) + bias) print(y) y = activation((features * weights).sum() + bias) print(y) torch.mm(features, weights) weights.shape ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights.
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error: ```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape.
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. The first layer shown on the bottom here contains the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation.
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$ The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply as $$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) print(features) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features print(n_input) n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) print(W1) print(W2) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) print(B1) print(B2) ###Output tensor([[-0.1468, 0.7861, 0.9468]]) 3 tensor([[-1.1143, 1.6908], [-0.8948, -0.3556], [ 1.2324, 0.1382]]) tensor([[-1.6822], [ 0.3177]]) tensor([[0.1328, 0.1373]]) tensor([[0.2405]]) ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) print(h) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.6813, 0.4355]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters.
As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs.
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$ With vectors this is the dot/inner product of two vectors: $$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structures for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data. Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration, which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error: ```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. The first layer shown on the bottom here contains the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$ The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply as $$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
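The cell below demonstrates that sharing in place. As a side note not in the original notebook: when you do *not* want the sharing, `torch.tensor(a)` and `.clone()` both produce independent copies, so in-place changes no longer propagate. A small sketch:

```python
import numpy as np
import torch

a = np.ones((2, 2))
shared = torch.from_numpy(a)   # shares memory with `a`
copied = torch.tensor(a)       # always copies the data instead

shared.mul_(2)                 # in-place: `a` now holds 2s as well
print(a[0, 0])                 # 2.0 -- the Numpy array changed too
print(copied[0, 0].item())     # 1.0 -- the copy is unaffected
```

`shared.clone()` taken before the `mul_` call would have worked just as well as `torch.tensor(a)` for getting an independent copy.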
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
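As a quick sketch of the shape check described above (a minimal illustration, reusing the seed-7 `features` and `weights` tensors from earlier):

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)

# Both tensors are 1 x 5: the first tensor's five columns don't match the
# second tensor's single row, which is why torch.mm(features, weights)
# raises the size-mismatch RuntimeError shown above
print(features.shape)  # torch.Size([1, 5])
print(weights.shape)   # torch.Size([1, 5])
```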
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch # http://pytorch.org/ from os.path import exists from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag()) cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/' accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu' !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
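The Numpy-like arithmetic just mentioned can be sketched as follows (a minimal illustration, separate from the exercise below):

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)
bias = torch.randn((1, 1))

# Elementwise arithmetic works just like Numpy arrays and keeps the shape
summed = features + weights
product = features * weights
print(summed.shape, product.shape)  # torch.Size([1, 5]) torch.Size([1, 5])

# The (1, 1) bias broadcasts against the (1, 5) tensor
shifted = product + bias
print(shifted.shape)  # torch.Size([1, 5])
```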
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features*weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(weights*features) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
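A small sketch of the difference just mentioned: `torch.mm()` only accepts 2-D tensors, while `torch.matmul()` broadcasts extra batch dimensions (the shapes here are chosen purely for illustration):

```python
import torch

a = torch.randn(3, 2, 4)  # a batch of three 2x4 matrices
b = torch.randn(4, 5)     # a single 4x5 matrix

# torch.matmul broadcasts b across the batch dimension of a,
# multiplying each 2x4 matrix by the same 4x5 matrix
out = torch.matmul(a, b)
print(out.shape)  # torch.Size([3, 2, 5])

# torch.mm(a, b) would raise a RuntimeError here, since it requires
# strictly 2-D inputs and does no broadcasting
```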
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) print(f"features: {features}") # True weights for our data, random normal variables again weights = torch.randn_like(features) print(f"weights: {weights}") # and a true bias term bias = torch.randn((1, 1)) print(f"bias: {bias}") ###Output features: tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) weights: tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) bias: tensor([[0.3177]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
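The numpy-like behaviour mentioned above is easy to check by hand. This tiny sketch uses fixed values chosen for illustration (not the random tensors generated above):

```python
import torch

x = torch.tensor([[1.0, 2.0, 3.0]])
w = torch.tensor([[0.5, 0.5, 0.5]])

print(x + w)          # elementwise addition, just like numpy
print(x * w)          # elementwise multiplication: 0.5, 1.0, 1.5
print((x * w).sum())  # the .sum() method: 3.0 -- same as torch.sum(x * w)
```

Multiply-then-sum on hand-picked values like these is exactly the weighted-sum pattern the exercise below asks for.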
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features*weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error:```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.matmul(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1 ) + B1) #lay2 = activation(torch.matmul(features, W2 ) + B2) y = activation(torch.mm(h, W2 ) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
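Before diving into the cells below, here is a small standalone sanity check of the sigmoid that will serve as the activation function. The helper name `sigmoid` is mine (the notebook calls it `activation`), and the check points are values where the function's behaviour is known exactly:

```python
import torch

def sigmoid(x):
    """Sigmoid: squashes any real input into the interval (0, 1)."""
    return 1 / (1 + torch.exp(-x))

z = torch.tensor([-100.0, 0.0, 100.0])
print(sigmoid(z))                                    # ~0 at -100, exactly 0.5 at 0, ~1 at 100
print(torch.allclose(sigmoid(z), torch.sigmoid(z)))  # agrees with PyTorch's built-in: True
```

That squashing into (0, 1) is what lets us read the unit's output as something like an activation probability.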
###Code # First, import PyTorch import torch ###Output _____no_output_____ ###Markdown Defining sigmoid activation function $$\sigma(x) = \frac{1}{1+e^{-x}}$$ ###Code def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums.
Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features*weights)+bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features,weights.view(5,1))+bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h1 = activation(torch.mm(features,W1)+B1) y = activation(torch.mm(h1,W2)+B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
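The point above about moving tensors to GPUs can be sketched device-agnostically. This is a general PyTorch pattern rather than code from this notebook, and it falls back to the CPU when no GPU is present:

```python
import torch

# Pick a device: CUDA GPU if one is visible, otherwise the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(3, 3).to(device)  # .to() copies the tensor onto the device
y = (x @ x).sum()                 # the matmul runs wherever x lives
print(device, y.item())           # .item() brings the scalar result back to Python
```

Writing code against a `device` variable like this keeps the same script runnable on both CPU-only and GPU machines.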
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors h = torch.sum((features*weights)) + bias y = activation(h) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication h = torch.mm(features, weights.view(5,1)) + bias y = activation(h) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) features ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation((features*weights).sum()+bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will sometimes return a new tensor with the same data as `weights` with size `(a, b)`, and sometimes a clone, meaning it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features,weights.view(5,1))+bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here contains the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
###Code ## Your solution here activation(activation(torch.mm(features,W1)+B1).mm(W2)+B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors print(activation((features*weights).sum()+bias)) y_hat = activation(torch.sum(features*weights)+bias) y_hat ###Output tensor([[ 0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. 
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will sometimes return a new tensor with the same data as `weights` with size `(a, b)`, and sometimes a clone, meaning it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape.
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication print (weights.shape) print (weights.reshape(5,1).shape) y_hat = activation(torch.mm(features, weights.reshape(5,1))+bias) y_hat ###Output torch.Size([1, 5]) torch.Size([5, 1]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here contains the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation.
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code # Look at the dimensions of the weights and features! print(features.shape) print(W1.shape) # so shape of features * W1 is [1,2] hidden = (activation(torch.mm(features, W1)+B1)) print(hidden.shape) print(W2.shape) # so shape of hidden * shape is [1,1] output = (activation(torch.mm(hidden, W2)+B2)) output ###Output torch.Size([1, 3]) torch.Size([3, 2]) torch.Size([1, 2]) torch.Size([2, 1]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters.
As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation((features * weights).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will sometimes return a new tensor with the same data as `weights` with size `(a, b)`, and sometimes a clone, meaning it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.matmul(features, weights.reshape(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here contains the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here y1 = activation(torch.matmul(features, W1) + B1) y2 = activation(torch.matmul(y1, W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors print(features) print(weights) print(bias) print(features * weights) output = torch.sum(features * weights) + bias print(activation(output)) ###Output tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]]) tensor([[0.3177]]) tensor([[ 0.1314, -0.2796, 1.1668, -0.1540, -2.8442]]) tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. 
Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
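As a small aside before the exercise (a sketch with assumed shapes, not part of the original course material), the practical difference between the three reshaping methods above can be demonstrated directly:

```python
# Compare the three ways to turn a (1, 5) tensor into (5, 1).
import torch

w = torch.randn((1, 5))

print(w.view(5, 1).shape)     # new view of the same underlying data
print(w.reshape(5, 1).shape)  # a view when possible, otherwise a copy
w.resize_(5, 1)               # in-place: the trailing underscore mutates w
print(w.shape)
```

All three print `torch.Size([5, 1])` here; the difference only matters for memory sharing and in-place mutation.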
###Code ## Calculate the output of this network using matrix multiplication output = torch.mm(features, weights.view(5,1)) + bias print(activation(output)) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code print(features) print(n_input) print(W1) print(W2) print(B1) print(B2) print(W1.shape[0], W1.shape[1]) ## Your solution here output1 = activation(torch.mm(features, W1) + B1) output2 = activation(torch.mm(output1, W2) + B2) print(output2) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code !pip install -q torch # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
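As a quick, self-contained illustration (separate from the exercise, with made-up values), elementwise arithmetic on tensors really does mirror Numpy:

```python
# Elementwise tensor arithmetic behaves just like Numpy array arithmetic.
import torch

x = torch.tensor([1.0, 2.0, 3.0])
w = torch.tensor([0.5, 0.5, 0.5])

print(x + w)          # elementwise addition
print(x * w)          # elementwise (Hadamard) product
print((x * w).sum())  # scalar sum of the products
```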
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(weights*features) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(-1, 1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here a = activation(torch.mm(features, W1) + B1) b = activation(torch.mm(a, W2) + B2) print(a) print(b) ###Output tensor([[0.6813, 0.4355]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
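Before tackling the exercise, it can help to confirm the shapes of the tensors involved; a quick sketch that re-creates the tensors from the cell above with the same seed (the variable names mirror that cell):

```python
import torch

torch.manual_seed(7)  # same seed as above, so the values match
features = torch.randn((1, 5))
weights = torch.randn_like(features)
bias = torch.randn((1, 1))

# features and weights are both one row by five columns; bias is a single value
print(features.shape, weights.shape, bias.shape)
```

Knowing these shapes up front makes it clear why an elementwise product plus `torch.sum()` works here, while a direct matrix multiplication will not.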
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(weights * features) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.reshape(5, 1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
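One way to avoid shape errors in the solution is to trace the dimensions layer by layer first; a sketch that re-creates the tensors above with the same seed (activations are omitted since they don't change shapes):

```python
import torch

torch.manual_seed(7)  # same seed as the cell above
features = torch.randn((1, 3))
W1 = torch.randn(3, 2)   # input -> hidden
W2 = torch.randn(2, 1)   # hidden -> output
B1 = torch.randn((1, 2))
B2 = torch.randn((1, 1))

# (1, 3) @ (3, 2) -> (1, 2): one row of hidden-layer values
hidden = torch.mm(features, W1) + B1
# (1, 2) @ (2, 1) -> (1, 1): a single output value
out = torch.mm(hidden, W2) + B2
print(hidden.shape, out.shape)
```

Once the shapes line up like this, wrapping each matrix multiplication in `activation` gives the full forward pass.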
###Code ## Your solution here hidden_layer = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(hidden_layer, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.mm(features, torch.t(weights)) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, torch.t(weights)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
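The cell that follows demonstrates this with an in-place multiply. If you instead want a tensor with its own copy of the data, `torch.tensor()` copies a Numpy array rather than wrapping its memory; a small sketch:

```python
import numpy as np
import torch

a = np.ones(3)
shared = torch.from_numpy(a)   # shares memory with a
copied = torch.tensor(a)       # copies the data into a new tensor

a *= 5                         # in-place change to the Numpy array
print(shared)  # reflects the change
print(copied)  # unaffected, still all ones
```

Use whichever behavior you need: sharing avoids a copy for large arrays, while copying protects you from surprises like the one below.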
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y=activation(torch.mm(features,weights.view(5,1))+bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here y = activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structures for neural networks are tensors, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
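As a quick sanity check (my sketch, not part of the original notebook), the `activation` function defined above can be probed on a few hand-picked values to confirm that the sigmoid squashes any real input into the interval (0, 1):

```python
import torch

def activation(x):
    """Sigmoid activation function, as defined above."""
    return 1 / (1 + torch.exp(-x))

# Large negative inputs approach 0, zero maps to exactly 0.5,
# and large positive inputs approach 1
x = torch.tensor([-100.0, 0.0, 100.0])
print(activation(x))
```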
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
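To make the `torch.mm()` / `torch.matmul()` distinction concrete, here is a small illustrative sketch (my addition, not from the original notebook) of the shapes each accepts:

```python
import torch

a = torch.randn(1, 5)
b = torch.randn(5, 1)

# torch.mm() is strictly 2-D: the inner dimensions must already match
print(torch.mm(a, b).shape)          # shape (1, 1)

# torch.matmul() also handles 1-D operands, treated as vectors
v = torch.randn(5)
print(torch.matmul(a, v).shape)      # shape (1,)

# ...and broadcasts batch dimensions
batch = torch.randn(3, 1, 5)         # a batch of three (1, 5) matrices
print(torch.matmul(batch, b).shape)  # b is broadcast across the batch: (3, 1, 1)
```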
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) # W2 already has shape (n_hidden, n_output), so no reshape is needed output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weight and bias parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structures for neural networks are tensors, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
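Before attempting the exercise, it can help to confirm the shapes involved. This quick sketch (added for illustration, not part of the original notebook) inspects the tensors generated above with `.shape`:

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)  # randn_like copies the shape of its argument
bias = torch.randn((1, 1))

print(features.shape)  # torch.Size([1, 5])
print(weights.shape)   # torch.Size([1, 5])
print(bias.shape)      # torch.Size([1, 1])
```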
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code weights.view(5, 1) ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) print(h) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.6813, 0.4355]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weight and bias parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks.
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structures for neural networks are tensors, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weight and bias parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s.
The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structures for neural networks are tensors, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
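A short aside (my sketch, not part of the original notebook) on what `torch.manual_seed(7)` buys us: reseeding the generator restarts it from the same state, so the "random" draws are reproducible, which is why the outputs in this notebook are predictable:

```python
import torch

torch.manual_seed(7)
first = torch.randn((1, 5))

torch.manual_seed(7)  # reseeding restarts the generator at the same state
second = torch.randn((1, 5))

print(torch.equal(first, second))  # True: identical draws after reseeding
```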
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features*weights)+bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication print(features.shape) print(weights.shape) weights = weights.reshape((5,1)) print(weights.shape) y = activation(torch.mm(features, weights)+bias) y ###Output torch.Size([1, 5]) torch.Size([5, 1]) torch.Size([5, 1]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
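To make the "weights as a matrix" idea concrete, here is a small sketch. It uses NumPy rather than PyTorch purely for illustration (the shapes mirror the tensors used in this notebook); the numbers in `x`, `W`, and `b` are made up for the example:

```python
import numpy as np

def sigmoid(x):
    # Same sigmoid activation as the torch version above
    return 1 / (1 + np.exp(-x))

x = np.array([[1.0, 2.0, 3.0]])   # one sample, three input features: shape (1, 3)
W = np.array([[0.1, 0.2],
              [0.3, 0.4],
              [0.5, 0.6]])        # three inputs -> two hidden units: shape (3, 2)
b = np.array([[0.01, 0.02]])      # one bias per hidden unit: shape (1, 2)

# One matrix multiplication computes the whole hidden layer at once
h = sigmoid(x @ W + b)
print(h.shape)                    # one activation per hidden unit
```

Each column of `W` holds the weights of one hidden unit, so `x @ W` produces every unit's linear combination in a single operation.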
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here features.shape y = activation(torch.mm(activation(torch.mm(features, W1)+B1), W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! 
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
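The matrix-multiplication shape rule (the number of columns of the first operand must match the number of rows of the second) can be checked quickly. This sketch uses NumPy's `@` operator for illustration; `torch.mm` enforces the same rule:

```python
import numpy as np

row = np.random.rand(1, 5)    # shape (1, 5), like `features`
col = np.random.rand(5, 1)    # shape (5, 1), like `weights.view(5, 1)`

# Inner dimensions match: (1, 5) @ (5, 1) -> (1, 1)
out = row @ col
print(out.shape)

# Inner dimensions don't match: (1, 5) @ (1, 5) raises an error
try:
    row @ row
except ValueError as err:
    print("shape mismatch:", err)
```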
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(-1, 1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here l1_y = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(l1_y, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
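The layer-stacking idea can be sketched end to end. A minimal NumPy version (NumPy used only for illustration; the sizes match the network built in this notebook: 3 inputs, 2 hidden units, 1 output, with randomly drawn weights):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(7)
x  = rng.standard_normal((1, 3))   # one sample, three features
W1 = rng.standard_normal((3, 2))   # input -> hidden weights
b1 = rng.standard_normal((1, 2))   # hidden-layer biases
W2 = rng.standard_normal((2, 1))   # hidden -> output weights
b2 = rng.standard_normal((1, 1))   # output bias

h = sigmoid(x @ W1 + b1)           # hidden layer activations, shape (1, 2)
y = sigmoid(h @ W2 + b2)           # network output, shape (1, 1)
print(y.shape)
```

The output of the first layer (`h`) is fed straight into the second matrix multiplication, which is exactly the stacking described above.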
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(weights*features) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error

```python
>> torch.mm(features, weights)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
in ()
----> 1 torch.mm(features, weights)

RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```

As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.

**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.

There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).

* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.

* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.

I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.

> **Exercise**: Calculate the output of our little network using matrix multiplication.

###Code
## Calculate the output of this network using matrix multiplication
activation(torch.matmul(features, weights.view(5, 1)) + bias)
###Output
_____no_output_____
###Markdown
Stack them up!

That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.

The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated

$$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$

The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply

$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$

###Code
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable

# Features are 3 random normal variables
features = torch.randn((1, 3))

# Define the size of each layer in our network
n_input = features.shape[1]  # Number of input units, must match number of input features
n_hidden = 2                 # Number of hidden units
n_output = 1                 # Number of output units

# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)

# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
###Output
_____no_output_____
###Markdown
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.

###Code
## Your solution here
activation(torch.matmul(activation(torch.matmul(features, W1) + B1), W2) + B2)
###Output
_____no_output_____
###Markdown
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.

The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.

Numpy to Torch and back

Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.

###Code
import numpy as np
a = np.random.rand(4,3)
a

b = torch.from_numpy(a)
b

b.numpy()
###Output
_____no_output_____
###Markdown
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)

# Numpy array matches new values from Tensor
a
###Output
_____no_output_____
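Because `torch.from_numpy()` shares the underlying buffer, in-place edits propagate in both directions. A small self-contained sketch (with its own throwaway arrays, separate from the cells above) of how to opt out of the sharing when you need an independent copy:

```python
import numpy as np
import torch

arr = np.random.rand(4, 3)
t = torch.from_numpy(arr)                     # t shares memory with arr

t.mul_(2)                                     # in-place on the tensor...
print(np.shares_memory(arr, t.numpy()))       # True: arr was doubled too

# To keep the Numpy array untouched, copy the array before converting
# (or clone the tensor afterwards: torch.from_numpy(arr).clone())
t_safe = torch.from_numpy(arr.copy())
t_safe.mul_(2)
print(np.shares_memory(arr, t_safe.numpy()))  # False: arr is unaffected
```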
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor use this when you want the output to be an activation function. """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # _like means it has teh same shape as tensor # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors f = activation(torch.sum(features*weights)+bias) # output is dot product plus offset or bias f ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calcula te the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1))+bias) # you still have to add bias ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h1 = activation(torch.mm(features,W1)+B1) # you have to activate each layer activation(torch.mm(h1, W2)+B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.reshape(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden = activation(torch.mm(features, W1.reshape(n_input, n_hidden)) + B1) output = activation(torch.mm(hidden, W2.reshape(n_hidden, n_output)) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.matmul(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here hidden_layer_output = activation(torch.matmul(features, W1) + B1) y = activation(torch.matmul(hidden_layer_output, W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output2 = activation(torch.mm(features, weights.view(5, 1)) + bias) output2 ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. 
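To make the three reshaping options discussed above concrete, here is a minimal sketch (the tensor `w` stands in for `weights`; the values are just illustrative):

```python
import torch

w = torch.randn(1, 5)

# .view returns a tensor of the new shape that shares data with w
v = w.view(5, 1)

# .reshape does the same when the memory layout allows it, otherwise it copies
r = w.reshape(5, 1)

# .resize_ changes the shape in place; the trailing underscore marks in-place ops
w2 = w.clone()
w2.resize_(5, 1)

print(v.shape, r.shape, w2.shape)

# Because .view shares memory, writing through v changes w too
v[0, 0] = 42.0
print(w[0, 0])  # tensor(42.)
```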
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here output3 = activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) output3 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors output = torch.sum(weights*features)+bias output = activation(output) print(output) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication w_x=torch.matmul(features,weights.view(5,1)) output = activation(w_x+bias) print(output) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. 
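The layer stacking described here repeats one pattern — a linear combination followed by the activation — so it can be written once as a helper. A minimal sketch (the `dense` name and the shapes are illustrative, not from the notebook):

```python
import torch

def sigmoid(x):
    """Sigmoid activation, same formula as the notebook's activation()."""
    return 1 / (1 + torch.exp(-x))

def dense(x, W, b):
    """One fully connected layer: sigmoid(x @ W + b)."""
    return sigmoid(torch.mm(x, W) + b)

torch.manual_seed(0)
x = torch.randn(1, 3)   # one sample, three input features
W = torch.randn(3, 2)   # weights: 3 inputs -> 2 hidden units
b = torch.randn(1, 2)   # one bias per hidden unit

h = dense(x, W, b)
print(h.shape)  # torch.Size([1, 2])
```

Stacking layers is then just calling `dense` again on `h` with the next layer's weights and biases.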
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here H = activation(torch.matmul(features, W1)+B1) O = activation(torch.matmul(H, W2)+B2) print(O) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
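The GPU point in the paragraph above can be sketched like this; a minimal example that falls back to the CPU when no GPU is present (the tensor is just illustrative):

```python
import torch

x = torch.randn(2, 3)

# Pick the GPU if one is available, otherwise stay on the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
x = x.to(device)

print(x.device)
```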
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors #activation(torch.sum((features*weights)+bias)) print(activation(torch.sum((features*weights))+bias)) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that the method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch. * `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.matmul(features, weights.view(5, 1)) + bias) # A reusable single-layer version; reshapes w based on its own shape def layer_cal(x, w, b): return activation(torch.matmul(x, w.view(w.shape[1], w.shape[0])) + b) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation.
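One more note on reshaping before we stack layers, as a quick check of my own rather than part of the notebook: `.view()` doesn't copy data, so the new tensor shares memory with the original.

```python
import torch

w = torch.zeros(1, 5)
v = w.view(5, 1)

# Writing through the view changes the original tensor too,
# since both tensors point at the same underlying storage
v[0, 0] = 42.0
print(w[0, 0])  # tensor(42.)
```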
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated as $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$ The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply as $$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h1 = activation(torch.mm(features, W1) + B1) y_prime = activation(torch.mm(h1, W2) + B2) y_prime ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section!
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4, 3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and the Torch tensor, so if you change the values of one object in place, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. Altogether, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural Networks Deep Learning is based on artificial neural networks, which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs.
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i + b \right)\end{align}$$ With vectors this is the dot/inner product of two vectors: $$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, and an array with three indices is a 3-dimensional tensor (RGB color images, for example). Tensors are the fundamental data structure for neural networks, and PyTorch (as well as pretty much every other deep learning framework) is built around them. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data.
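Pulling the pieces together, here's a self-contained sketch (my own recap, reusing the same seed, shapes, and `activation` function as the cells above) of the full two-layer forward pass:

```python
import torch

def activation(x):
    """Sigmoid activation, as defined earlier in the notebook."""
    return 1 / (1 + torch.exp(-x))

torch.manual_seed(7)                 # same seed as the notebook
features = torch.randn(1, 3)

W1 = torch.randn(3, 2)               # input -> hidden weights
W2 = torch.randn(2, 1)               # hidden -> output weights
B1 = torch.randn(1, 2)               # hidden-layer bias
B2 = torch.randn(1, 1)               # output-layer bias

h = activation(torch.mm(features, W1) + B1)  # hidden layer, shape (1, 2)
y = activation(torch.mm(h, W2) + B2)         # output, shape (1, 1)
print(y)  # the notebook reports tensor([[0.3171]]) for this seed
```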
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(weights * features) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h1 = torch.mm(features, W1) + B1 activation(torch.mm(h1, W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(weights * features) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5,1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code weights_T = weights.view(5, 1) output = activation(torch.mm(features, weights_T) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code output_h = activation(torch.mm(features, W1) + B1) output_o = activation(torch.mm(output_h, W2) + B2) output_o ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors # features.shape # t2 = torch.mm(features, weights) weights.resize_(5,1) h = torch.mm(features, weights) h=h+bias h ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication op = activation(h) op ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here T1=torch.mm(features, W1) h=T1+B1 y1=activation(h) T2=torch.mm(y1,W2) h2=T2+B2 y2=activation(h2) print(y2) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
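As a quick aside on the broadcasting that `torch.matmul()` supports, here is a small sketch; the shapes are illustrative and not part of this lesson's data:

```python
import torch

# torch.mm() only accepts 2-D tensors; torch.matmul() also broadcasts
# leading (batch) dimensions, so a single (5, 3) matrix can be applied
# to a whole batch of (2, 5) matrices in one call.
batch = torch.randn(10, 2, 5)   # a batch of 10 matrices, each (2, 5)
mat = torch.randn(5, 3)         # one shared (5, 3) matrix

out = torch.matmul(batch, mat)  # mat is broadcast across the batch
print(out.shape)                # torch.Size([10, 2, 3])
```

Each `out[i]` equals `batch[i] @ mat`, which is why `matmul` is the tool of choice once you move beyond single-sample, 2-D inputs.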
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y=activation(torch.sum(features*weights)+bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes it shares memory with the original, and sometimes it returns a clone, copying the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y=activation(torch.mm(features,weights.reshape(5,1))+bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h=activation(torch.mm(features,W1)+B1) output=activation(torch.mm(h,W2)+B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
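A small check of that shared-memory behavior; the `.clone()` call is an addition to the lesson, one way to get an independent copy when you don't want the sharing:

```python
import numpy as np
import torch

a = np.ones((2, 2))
b = torch.from_numpy(a)   # b shares memory with a
c = b.clone()             # c is an independent copy of the data

b.add_(1)                 # in-place add through the tensor
print(a[0, 0])            # the Numpy array sees the change: 2.0
print(c[0, 0])            # the clone is unaffected and still holds 1
```

So if you need a tensor that can be modified without touching the original array, clone it right after converting.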
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) features.shape #weights.shape ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`.
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) print(output) ###Output tensor([[ 0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code features.shape weights.shape ## Calculate the output of this network using matrix multiplication output = activation(torch.matmul(features, weights.view(5, 1)) + bias) print(output) ###Output tensor([[ 0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons.
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) print('feature shape ={}'.format(features.shape)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) print('W1 shape ={}, W2 shape = {}'.format(W1.shape, W2.shape)) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output feature shape =torch.Size([1, 3]) W1 shape =torch.Size([3, 2]), W2 shape = torch.Size([2, 1]) ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the 
biases, `B1` & `B2`. ###Code ## Your solution here output = activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) print(output) ###Output tensor([[ 0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation((features * weights).sum() + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
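As a quick sketch of the broadcasting `torch.matmul()` supports (the shapes here are just illustrative):

```python
import torch

a = torch.randn(10, 3, 4)  # a batch of ten 3x4 matrices
b = torch.randn(4, 5)      # a single 4x5 matrix, broadcast across the batch
out = torch.matmul(a, b)   # torch.mm(a, b) would fail: mm is strictly 2-D
print(out.shape)           # torch.Size([10, 3, 5])
```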
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code new_shape = list(weights.shape) new_shape.reverse() new_shape weights.reshape(*new_shape) ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.reshape(*new_shape)).sum() + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. 
To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
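One example of the extra flexibility in `torch.matmul()` is matrix–vector products with a 1-D tensor (the shapes here are just illustrative):

```python
import torch

m = torch.randn(1, 5)
v = torch.randn(5)        # a 1-D tensor
out = torch.matmul(m, v)  # v is treated as a column vector; torch.mm would reject it
print(out.shape)          # torch.Size([1])
```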
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
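If you don't want the array and the tensor coupled like this, copy the data when converting — a minimal sketch (`torch.tensor()` copies its input, unlike `torch.from_numpy()`):

```python
import numpy as np
import torch

a = np.ones((2, 2))
b = torch.tensor(a)  # torch.tensor() copies the data instead of sharing it

b.mul_(2)            # in-place change on the tensor...
print(a)             # ...leaves the NumPy array untouched (still all ones)
```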
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(weights * features) + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
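As a quick illustration of that note, `.shape` returns a `torch.Size`, which behaves like a tuple and can be indexed or compared directly (a minimal sketch, not part of the exercise):

```python
import torch

tensor = torch.randn((1, 5))
print(tensor.shape)     # torch.Size([1, 5])

# torch.Size behaves like a tuple, so individual dimensions can be indexed
print(tensor.shape[1])  # 5
```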
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here y = activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation((features*weights).sum() + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
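Since `torch.matmul` is mentioned above as supporting broadcasting while `torch.mm` is strictly two-dimensional, here is a small sketch of the difference (the shapes below are arbitrary, chosen only for illustration):

```python
import torch

batch = torch.randn(10, 3, 4)  # a batch of ten 3x4 matrices
mat = torch.randn(4, 5)        # a single 4x5 matrix

# torch.matmul broadcasts `mat` across the batch dimension
out = torch.matmul(batch, mat)
print(out.shape)               # torch.Size([10, 3, 5])

# torch.mm only accepts 2-D tensors, so this would raise a RuntimeError:
# torch.mm(batch, mat)
```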
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here O1 = activation(torch.mm(features, W1) + B1) O2 = activation(torch.mm(O1, W2) + B2) print(O2) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! 
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.T) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(weights, features.t()) + bias) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code # Hidden layer output h = activation(torch.mm(features, W1) + B1) # Network output output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
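If the shared-memory behaviour described below isn't what you want, copying the data explicitly breaks the link between array and tensor. A small sketch (using `.clone()`; `torch.tensor(a)` would also copy):

```python
import numpy as np
import torch

a = np.random.rand(4, 3)

b = torch.from_numpy(a)          # shares memory with `a`
c = torch.from_numpy(a).clone()  # .clone() copies, so `c` is independent

b.mul_(2)                        # in-place doubling also changes `a`

assert np.allclose(a, b.numpy())      # b still shares a's buffer
assert np.allclose(a, 2 * c.numpy())  # c kept the original values
```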
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor output will be squashed into the range (0, 1) """ return 1 / (1 + torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
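As an aside, the `torch.manual_seed(7)` call above is what makes these "random" tensors reproducible: reseeding restarts the generator, so the same seed produces the same draws. A quick sketch:

```python
import torch

torch.manual_seed(7)
x1 = torch.randn((1, 5))

torch.manual_seed(7)   # reseeding restarts the random number generator
x2 = torch.randn((1, 5))

assert torch.equal(x1, x2)  # identical draws from identical seeds
```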
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors # element-wise multiplication y = activation(torch.sum(features * weights) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
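The three reshaping options above can be compared on a tiny tensor. A small sketch (here `resize_` is only used with a matching number of elements, so nothing is dropped or left uninitialized):

```python
import torch

w = torch.arange(5.)      # tensor([0., 1., 2., 3., 4.]), shape (5,)

v = w.view(5, 1)          # new tensor, same underlying storage
r = w.reshape(5, 1)       # same data here; may copy for non-contiguous tensors
assert v.shape == (5, 1) and r.shape == (5, 1)

w[0] = 99.                # both see the change, since memory is shared
assert v[0, 0].item() == 99. and r[0, 0].item() == 99.

w2 = torch.arange(5.)
w2.resize_(5, 1)          # in-place: reshapes w2 itself
assert w2.shape == (5, 1)
```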
###Code ## Calculate the output of this network using matrix multiplication # features.shape, weights.shape # y = activation(torch.matmul(features, weights.view(5, 1)) + bias) y = activation(torch.mm(features, weights.view(5, 1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here y1 = torch.mm(features, W1) + B1 h = activation(y1) y2 = torch.mm(h, W2) + B2 output = activation(y2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) print(features) # True weights for our data, random normal variables again weights = torch.randn_like(features) print(features*weights) # and a true bias term bias = torch.randn((1, 1)) ###Output tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]]) tensor([[ 0.1314, -0.2796, 1.1668, -0.1540, -2.8442]]) ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. 
For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) y = activation((features * weights).sum() + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. 
Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
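The trailing-underscore convention is easy to check directly: `mul_` mutates the tensor, while `mul` returns a new one. A quick sketch:

```python
import torch

t = torch.ones(3)

u = t.mul(2)     # out-of-place: returns a new tensor, t unchanged
assert torch.equal(t, torch.ones(3))
assert torch.equal(u, 2 * torch.ones(3))

t.mul_(2)        # in-place (trailing underscore): t itself is modified
assert torch.equal(t, 2 * torch.ones(3))
```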
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation((features * weights).sum() + bias) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.T) + bias) y ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
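The intro above mentions two PyTorch features only in passing: automatic gradient computation and moving tensors to a GPU. As a minimal, hedged sketch of both (assuming nothing beyond a standard PyTorch install; the variable names here are made up for illustration):

```python
import torch

# Autograd: mark a tensor as requiring gradients, build a computation,
# then call .backward() to have d(output)/d(input) filled into .grad.
x = torch.ones(3, requires_grad=True)
y = (x * 2).sum()          # y = 2*x1 + 2*x2 + 2*x3
y.backward()               # dy/dx_i = 2 for every element
print(x.grad)              # tensor([2., 2., 2.])

# GPU transfer: tensors move between devices with .to(); fall back to
# the CPU when no CUDA device is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
x_on_device = x.detach().to(device)
```

Autograd is covered in depth later in this series; this is only meant to make the claims above concrete.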
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
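As a brief aside on the broadcasting behaviour just mentioned, here is a small sketch (the shapes are made up for illustration) of what `torch.matmul` accepts that `torch.mm` does not:

```python
import torch

torch.manual_seed(0)
batch = torch.randn(10, 1, 5)   # a batch of ten (1, 5) row vectors
mat = torch.randn(5, 3)         # a single (5, 3) matrix

# torch.matmul broadcasts `mat` over the leading batch dimension,
# multiplying each (1, 5) row vector by the same (5, 3) matrix.
out = torch.matmul(batch, mat)
print(out.shape)                # torch.Size([10, 1, 3])

# torch.mm, by contrast, only accepts plain 2-D tensors.
```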
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.sum(torch.mm(features, weights.view(5, 1))) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! 
\left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here o1_vector = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(o1_vector, W2) + B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors def metwork_op(features, weights, bias): return activation(torch.sum(features * weights) + bias) metwork_op(features, weights, bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. 
###Code ## Calculate the output of this network using matrix multiplication # Reshape `weights` (not `features`) to (5, 1) so the shapes line up y = activation(torch.mm(features, weights.view(5, 1)) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here print(features.shape) print(W1.shape) print(W2.shape) print(B1.shape) c=activation(torch.add(torch.mm(features,W1),B1)) d=activation(torch.add(torch.mm(c,W2),B2)) print(c) print(d) ###Output torch.Size([1, 3]) torch.Size([3, 2]) torch.Size([2, 1]) torch.Size([1, 2]) tensor([[0.6813, 0.4355]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
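Before moving on, it's worth sanity-checking the hand-rolled `activation` above against PyTorch's built-in `torch.sigmoid` (a standard function); they should agree to floating-point precision:

```python
import torch

def activation(x):
    """Sigmoid activation function, as defined above."""
    return 1 / (1 + torch.exp(-x))

x = torch.tensor([-2.0, 0.0, 2.0])
manual = activation(x)
builtin = torch.sigmoid(x)          # PyTorch's built-in sigmoid
assert torch.allclose(manual, builtin)
print(manual)                       # sigmoid(0) is exactly 0.5
```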
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
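A quick shape check before writing the solution (my own sketch, not part of the original exercise): the dimensions must chain as $(1,3)\times(3,2)\to(1,2)$ and $(1,2)\times(2,1)\to(1,1)$. Biases and the activation are deliberately omitted here so the exercise is not spoiled:

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 3))
W1 = torch.randn(3, 2)
W2 = torch.randn(2, 1)

h = torch.mm(features, W1)   # (1, 3) x (3, 2) -> (1, 2)
y = torch.mm(h, W2)          # (1, 2) x (2, 1) -> (1, 1)
print(h.shape, y.shape)
```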
###Code ## Your solution here hidden = activation(torch.matmul(features, W1) + B1) output = activation(torch.matmul(hidden, W2) + B2) print(output) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data.
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron.
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here hidden = activation(torch.matmul(features, W1) + B1) output = activation(torch.matmul(hidden, W2) + B2) print(output) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
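One dtype caveat worth knowing before the conversion demo (an aside, not from the original notebook): NumPy arrays default to `float64`, and `torch.from_numpy()` preserves that dtype, whereas PyTorch code usually works in `float32`. Converting with `.float()` changes the dtype, and because that conversion copies, it also breaks the memory link with the array:

```python
import numpy as np
import torch

a = np.random.rand(2, 2)    # NumPy arrays default to float64
t = torch.from_numpy(a)
print(t.dtype)              # torch.float64, not PyTorch's usual float32

t32 = t.float()             # converts to float32; this makes a copy,
a *= 0                      # so changes to `a` no longer affect t32
print(t32.dtype)            # torch.float32
```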
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
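The claim above that tensors can be added, multiplied, and summed just like NumPy arrays is easy to verify with a tiny sketch (values chosen so the arithmetic is exact):

```python
import torch

x = torch.tensor([1., 2., 3.])
y = torch.tensor([10., 20., 30.])

print(x + y)          # elementwise addition: tensor([11., 22., 33.])
print(x * y)          # elementwise product:  tensor([10., 40., 90.])
print((x * y).sum())  # 10 + 40 + 90 = tensor(140.)
```

Note that `(x * y).sum()` is exactly the weighted sum from the single-neuron formula, computed elementwise then reduced.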
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron.
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here hidden = activation(torch.matmul(features, W1) + B1) output = activation(torch.matmul(hidden, W2) + B2) print(output) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
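The claim that PyTorch tensors behave like NumPy arrays can be made concrete with one subtlety (an aside, not from the original notebook): `torch.from_numpy()` returns a view onto the array's memory, while `torch.tensor()` always copies its input. Both are standard PyTorch functions:

```python
import numpy as np
import torch

a = np.zeros(3)
shared = torch.from_numpy(a)   # a view onto the same memory as `a`
copied = torch.tensor(a)       # torch.tensor() always copies its input

a += 1                         # mutate the NumPy array in place
print(shared)  # reflects the change
print(copied)  # still all zeros
```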
Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
###Code # First, import PyTorch import torch print(torch.__version__) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print('Using device:', device) print() # Additional info when using CUDA; these queries raise an error on CPU-only machines, so keep them inside the guard if device.type == 'cuda': print('Device count:', torch.cuda.device_count()) print(torch.cuda.get_device_name(0)) print('Memory Usage:') print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3, 1), 'GB') print('Reserved: ', round(torch.cuda.memory_reserved(0)/1024**3, 1), 'GB') # memory_cached() was renamed memory_reserved() def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later.
For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. 
Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a tensor with the same data as `weights` with size `(a, b)`; sometimes it shares memory with the original, and sometimes it returns a clone, copying the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication ###Output _____no_output_____ ###Markdown Stack them up! That's how you can calculate the output for a single neuron. 
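As a quick sanity check on the two formulations above, here is a sketch (redefining the sigmoid `activation` so the snippet is self-contained) confirming that the elementwise multiply-and-sum and the reshaped matrix multiply give the same single-neuron output:

```python
import torch

def activation(x):
    """Sigmoid activation, matching the definition used in the notebook."""
    return 1 / (1 + torch.exp(-x))

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)
bias = torch.randn((1, 1))

# Elementwise multiply-and-sum vs. matrix multiply with a reshaped weight column
y_sum = activation(torch.sum(features * weights) + bias)
y_mm = activation(torch.mm(features, weights.view(5, 1)) + bias)

print(torch.allclose(y_sum, y_mm))  # True
```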
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here def output_func(x, W, B): return activation(torch.matmul(x, W) + B) y = output_func(output_func(features, W1, B1), W2, B2) print(y) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication transposed_weights = weights.view(5, 1) h = torch.mm(features, transposed_weights) + bias y = activation(h) print(y) ###Output tensor([[ 0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. 
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output tensor([[ 0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a tensor with the same data as `weights` with size `(a, b)`; sometimes it shares memory with the original, and sometimes it returns a clone, copying the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! 
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here L1 = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(L1, W2) + B2) print(y) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
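To make the shape check concrete, here is a small sketch (repeating the seed and tensor definitions from the cells above so it runs on its own):

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)

# Both tensors are one row by five columns
print(features.shape)            # torch.Size([1, 5])
print(weights.shape)             # torch.Size([1, 5])

# Reshaped to five rows and one column, ready for torch.mm(features, ...)
print(weights.view(5, 1).shape)  # torch.Size([5, 1])
```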
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights_reshaped = weights.view(5, 1) output = activation(torch.mm(features, weights_reshaped) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons.
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here Z1 = torch.mm(features, W1) + B1 H1 = activation(Z1) Z2 = torch.mm(H1, W2) + B2 output = activation(Z2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) b = torch.from_numpy(a) print(b) b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) #print(features) # True weights for our data, random normal variables again weights = torch.randn_like(features) #print(weights) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network.
This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors network = activation(torch.sum(features * weights) + bias) print(network) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. 
For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape.
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication network = activation(torch.matmul(features,weights.view(5,1)).add(bias)) print(network) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. 
For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here layer1 = activation(torch.matmul(features,W1).add(B1)) layer2 = activation(torch.matmul(layer1,W2).add(B2)) print(layer2) ###Output tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section!
PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation((features * weights).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting.
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(-1,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here fc1 = activation(torch.mm(features, W1) + B1) fc2 = activation(torch.mm(fc1, W2) + B2) fc2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
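If you ever need a tensor that does *not* share memory with the source array, one option is to copy the data with `torch.tensor()` instead of `torch.from_numpy()` — a small self-contained sketch (not from the original notebook, using its own throwaway array):

```python
import numpy as np
import torch

a = np.ones((2, 3))

shared = torch.from_numpy(a)  # shares memory with `a`
copied = torch.tensor(a)      # copies the data into a fresh tensor

copied.mul_(2)                # in-place multiply on the copy
print(a[0, 0])                # still 1.0 -- `a` is untouched

shared.mul_(2)                # in-place multiply on the shared tensor
print(a[0, 0])                # now 2.0 -- `a` changed too
```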
###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). 
The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. 
Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors output = activation(torch.sum(weights * features) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication output = activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h_in = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h_in, W2) + B2) print("output={}".format(output)) ###Output output=tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors x = activation(torch.sum(features * weights) + bias) print(x) ###Output tensor([[ 0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication x = activation(torch.mm(features, weights.view(5, 1)) + bias) print(x) ###Output tensor([[ 0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. 
The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here h = activation(torch.mm(features, W1) + B1) y = activation(torch.mm(h, W2) + B2) print(y) ###Output tensor([[ 0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. 
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structures for neural networks are tensors, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features*weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication y = activation(torch.matmul(features, weights.view(5,1)) + bias) y = activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. 
The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code B2.size() ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) #features - 1, 3 W1 - 3, 2 h1 = 1, 2 B1 = 1,2 #h1 - 1,2 W2 - 2,1 B2 = 1,1 ###Output tensor([[ 0.7598, -0.2596]]) tensor([[-0.7672]]) tensor([[0.3171]]) ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
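One caveat worth keeping in mind with `torch.from_numpy()`: NumPy arrays default to 64-bit floats, and the conversion preserves that dtype, while PyTorch's usual default is `float32`. A quick sketch:

```python
import numpy as np
import torch

a = np.random.rand(4, 3)      # numpy defaults to float64
b = torch.from_numpy(a)
print(b.dtype)                # torch.float64, not torch's default float32

c = b.float()                 # cast to float32 (this makes a copy)
print(c.dtype)                # torch.float32
```

Note that the cast produces a new tensor, so `c` no longer shares memory with the original array.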
###Code import numpy as np np.set_printoptions(precision=8) a = np.random.rand(4,3) a torch.set_printoptions(precision=8) b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. 
These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors compute = torch.sum(weights * features) + bias # add the bias once, after the sum y = activation(compute) print (y) ###Output tensor([[0.1595]]) ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. 
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights = weights.view(5, 1) # (1, 5) matrix-multiplied by (5, 1) gives a (1, 1) result y = activation(torch.mm(features, weights) + bias) print(y) ###Output tensor([[0.1595]]) ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. 
The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) features.shape ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here y = activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2) y ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. 
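As an aside (a NumPy-only sketch, not from the original notebook): the view-versus-copy distinction discussed above for `reshape`/`view` also exists inside NumPy itself, which is worth keeping in mind once arrays and tensors start sharing memory.

```python
import numpy as np

a = np.zeros((2, 3))
v = a.reshape(6)   # reshaping a contiguous array returns a view: memory is shared
v[0] = 5.0
print(a[0, 0])     # 5.0 -- the change shows through to the original array

c = a.copy()       # an explicit copy owns its own memory
c[0, 0] = -1.0
print(a[0, 0])     # still 5.0 -- the original is untouched
```

The same caution applies to `torch.from_numpy()` / `.numpy()` below: they share memory rather than copying.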
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors # Element-wise product, summed, plus the bias, passed through the sigmoid activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. 
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication weights = weights.view(5,1) activation(torch.mm(features, weights) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
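Before looking at a solution, the per-layer computation described above can be sketched in NumPy (the names `sigmoid` and `layer` here are illustrative, not from the notebook): each layer is just a matrix multiplication, a bias addition, and an activation, and stacking layers means feeding one layer's output into the next.

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation, mirroring the notebook's `activation`."""
    return 1 / (1 + np.exp(-z))

def layer(x, W, b):
    """One fully connected layer: linear combination, then activation."""
    return sigmoid(x @ W + b)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 3))                                   # one sample, 3 features
W1, b1 = rng.standard_normal((3, 2)), rng.standard_normal((1, 2)) # input -> hidden
W2, b2 = rng.standard_normal((2, 1)), rng.standard_normal((1, 1)) # hidden -> output

y = layer(layer(x, W1, b1), W2, b2)  # hidden layer output feeds the output layer
print(y.shape)                       # (1, 1)
```

The shapes chain together: `(1, 3) @ (3, 2)` gives `(1, 2)`, then `(1, 2) @ (2, 1)` gives `(1, 1)`, exactly as in the torch version below.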
###Code ## Your solution here input_to_hidden = activation(torch.mm(features, W1) + B1) hidden_to_output = activation(torch.mm(input_to_hidden, W2) + B2) hidden_to_output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) features weights ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors # One possible solution: weighted sum of the inputs plus bias, through the sigmoid activation(torch.sum(features * weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication # Don't forget the bias term and the activation function activation(torch.mm(features, weights.view(5, 1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here step1 = torch.mm(features, W1) step1 = torch.add(step1, B1) step1 = activation(step1) step1 step2 = torch.mm(step1, W2) step2 = torch.add(step2, B2) output = activation(step2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. 
All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
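The data-generation cell that follows calls `torch.manual_seed(7)` so that the random draws are reproducible. The same idea in NumPy (a sketch for illustration): two generators created with the same seed produce the same stream of numbers.

```python
import numpy as np

rng1 = np.random.default_rng(7)  # same seed ...
rng2 = np.random.default_rng(7)  # ... same stream of random numbers
a = rng1.standard_normal(5)
b = rng2.standard_normal(5)
print(np.allclose(a, b))         # True -- seeded generators are reproducible
```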
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors activation(torch.sum(features*weights) + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. 
If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.mm(features, weights.view(5,1)) + bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. 
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here hidden = activation(torch.mm(features, W1)+B1) activation(torch.mm(hidden, W2) + B2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weight and bias parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in place on one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!)
and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
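Before switching to tensors, the single-unit formula above can be sketched in plain Python: a weighted sum of the inputs plus a bias, passed through the sigmoid. All values below are illustrative, not the notebook's random data.

```python
import math

def sigmoid(x):
    # Sigmoid activation: 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

# One unit: weighted sum of the inputs plus a bias, then the activation.
features = [0.5, -1.2, 3.0]
weights = [0.1, 0.4, -0.2]
bias = 0.05

linear = sum(w * x for w, x in zip(weights, features)) + bias
output = sigmoid(linear)
```

The tensor version in the next cell computes exactly this, just vectorized.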
###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
###Code ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum((features*weights))+bias) y y = activation(((features*weights).sum()+bias)) y ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplication, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`.
If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a tensor with the same data as `weights` with size `(a, b)`; sometimes this is a view of the original tensor, and sometimes a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation(torch.matmul(features,weights.view(5,1))+bias) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
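How the shapes chain when layers are stacked can be sketched with Numpy's `@` (matrix-multiply) operator; the sizes here are illustrative, and activations are omitted for brevity:

```python
import numpy as np

# Illustrative sizes: one example with three features,
# two hidden units, and one output unit.
x = np.random.randn(1, 3)    # input row vector
W1 = np.random.randn(3, 2)   # input -> hidden weights
W2 = np.random.randn(2, 1)   # hidden -> output weights

h = x @ W1                   # (1, 3) @ (3, 2) -> (1, 2)
y = h @ W2                   # (1, 2) @ (2, 1) -> (1, 1)
```

Each layer's output shape must match the next layer's input shape, which is exactly the column/row rule from the matrix-multiplication error above.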
With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
###Code ## Your solution here input_layer = activation(torch.matmul(features,W1)+B1) output = activation(torch.matmul(input_layer,W2)+B2) output ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weight and bias parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in place on one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
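The point that Numpy arrays are just tensors can be checked directly: an array's `ndim` attribute is the number of indices the tensor has. A quick Numpy sketch (sizes are illustrative):

```python
import numpy as np

vector = np.array([1.0, 2.0, 3.0])            # 1-dimensional tensor
matrix = np.array([[1.0, 2.0], [3.0, 4.0]])   # 2-dimensional tensor
image = np.zeros((32, 32, 3))                 # 3-dimensional tensor (e.g. an RGB image)
```

PyTorch tensors expose the same idea through `tensor.dim()` and `tensor.shape`.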
Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. 
Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors # print(features[0], weights[0]) h = torch.mm(features, weights) output = activation(h + bias) output ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplication, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).* `weights.reshape(a, b)` will return a tensor with the same data as `weights` with size `(a, b)`; sometimes this is a view of the original tensor, and sometimes a clone that copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication h = torch.mm(features, weights.view(5,1)) h += bias output = activation(h) output ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! 
\left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ###Code ## Your solution here h1 = torch.mm(features, W1) + B1 f_h1 = activation(h1) h2 = torch.mm(f_h1, W2) + B2 f_h2 = activation(h2) f_h2 ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weight and bias parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____ ###Markdown Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. 
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ###Code # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ###Output _____no_output_____ ###Markdown Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. 
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ###Code ## Calculate the output of this network using the weights and bias tensors activation((features * weights).sum() + bias) ###Output _____no_output_____ ###Markdown You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplication, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication. ###Code ## Calculate the output of this network using matrix multiplication activation((torch.mm(features, weights.reshape(5, 1))[0] + bias)) ###Output _____no_output_____ ###Markdown Stack them up!That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer, shown on the bottom here, contains the inputs and is understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated as $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply as$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$ ###Code ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ###Output _____no_output_____ ###Markdown > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
###Code ## Your solution here out1 = activation(torch.mm(features, W1) + B1) # print(out1.shape) out2 = torch.mm(out1, W2) + B2 # print(out2.shape) activation(out2) ###Output _____no_output_____ ###Markdown If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ###Code import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ###Output _____no_output_____ ###Markdown The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ###Code # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ###Output _____no_output_____
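If you do not want this memory sharing, make an explicit copy before mutating. A small sketch using NumPy alone to illustrate the difference between a shared buffer and an independent copy (in PyTorch, `torch.tensor(a)` or `tensor.clone()` similarly produces an independent copy; the names here are illustrative):

```python
import numpy as np

arr = np.ones((2, 3))
view = arr            # plain assignment: same buffer, like torch.from_numpy(arr)
copied = arr.copy()   # independent buffer: safe to mutate separately

arr *= 2              # in-place change propagates to the view...
print(view[0, 0])     # 2.0
print(copied[0, 0])   # ...but not to the copy: 1.0
```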
notebooks/eXate_samples/pysyft/duet_multi/.ipynb_checkpoints/5.0-mg-central-aggregator-checkpoint.ipynb
###Markdown Syft Duet for Federated Learning - Central Aggregator SetupFirst we need to install syft 0.3.0 because for every other syft project in this repo we have used syft 0.2.9. However, a recent update has removed a lot of the old features and replaced them with this new 'Duet' function. To do this go into your terminal and cd into the repo directory and run:> pip uninstall syftThen confirm with 'y' and hit enter.> pip install syft==0.3.0NOTE: Make sure that you uninstall syft 0.3.0 and reinstall syft 0.2.9 if you want to run any of the other projects in this repo. Unfortunately, when PySyft updated from 0.2.9 to 0.3.0, it removed all of the previously implemented functionality for FL, DP, and HE. ###Code # Double check you are using syft 0.3.0 not 0.2.9 # !pip show syft import syft as sy import pandas as pd import torch ###Output _____no_output_____ ###Markdown Initialising the Duets ###Code portuguese_bank_duet = sy.duet("317e830fd06779d42237bcee6483427b") ###Output _____no_output_____ ###Markdown >If the connection is established then there should be a green message above saying 'CONNECTED!'. Ensure the first bank is connected before attempting to connect to the second bank. ###Code american_bank_duet = sy.duet("d9de11127f79d32c62aa0566d5807342") ###Output _____no_output_____ ###Markdown >If the connection is established then there should be a green message above saying 'CONNECTED!'. Ensure the first and second banks are connected before attempting to connect to the third bank. ###Code australian_bank_duet = sy.duet("a33531fab99dc31aa53c97e3166b5922") ###Output _____no_output_____ ###Markdown >If the connection is established then there should be a green message above saying 'CONNECTED!'. This should mean that you have connected three separate duets to three different 'banks' around the world!
Check the data exists in each duet ###Code portuguese_bank_duet.store.pandas american_bank_duet.store.pandas australian_bank_duet.store.pandas ###Output _____no_output_____ ###Markdown >As a proof of concept for the security of this federated learning method: if you want to see/access the data from this side of the connection, you can't without permission. To try this, run;```pythonname_bank_duet.store["tag"].get()```>Where you replace 'name' with the specific bank's name and 'tag' with the data tag. This should throw a permissions error and recommend that you request the data from that 'bank'. From here you should run;```pythonname_bank_duet.store["tag"].request()# or:name_bank_duet.store["tag"].get(request_block=True)```>Now you have sent a request to the 'bank' side of the connection - now you must wait until they see this request on their end and type the code;```pythonduet.requests[0].accept()```>Once they accept the request, you can freely get the data on this end - however, for federated learning this should never be explicitly done on data. Only results of computation.
Import Test data ###Code test_data = pd.read_csv('datasets/test-data.csv', sep = ',') test_target = pd.read_csv('datasets/test-target.csv', sep = ',') test_data.head() test_data = torch.tensor(test_data.values).float() test_data test_target = torch.tensor(test_target.values).float() test_target from sklearn.preprocessing import StandardScaler sc_X = StandardScaler() test_data = sc_X.fit_transform(test_data) test_data = torch.tensor(test_data).float() test_data ###Output _____no_output_____ ###Markdown Initialise the local Model ###Code class LR(sy.Module): def __init__(self, n_features, torch_ref): super(LR, self).__init__(torch_ref=torch_ref) self.lr = torch_ref.nn.Linear(n_features, 1) def forward(self, x): out = self.torch_ref.sigmoid(self.lr(x)) return out local_model = LR(test_data.shape[1], torch) ###Output > Creating local model ###Markdown Send the Model to each connection ###Code portuguese_bank_model = local_model.send(portuguese_bank_duet) american_bank_model = local_model.send(american_bank_duet) australian_bank_model = local_model.send(australian_bank_duet) ###Output > Sending local model > Creating remote model Sending local layer: lr > Finished sending local model < ###Markdown Get the parameters for each model ###Code portuguese_bank_parameters = portuguese_bank_model.parameters() american_bank_parameters = american_bank_model.parameters() australian_bank_parameters = australian_bank_model.parameters() ###Output _____no_output_____ ###Markdown Create Local torch of the connections 'remote' torch ###Code portuguese_bank_remote_torch = portuguese_bank_duet.torch american_bank_remote_torch = american_bank_duet.torch australian_bank_remote_torch = australian_bank_duet.torch ###Output _____no_output_____ ###Markdown Define each banks optimiser with 'remote' torchs ###Code portuguese_bank_optimiser = portuguese_bank_remote_torch.optim.SGD(portuguese_bank_parameters, lr=1) american_bank_optimiser = 
american_bank_remote_torch.optim.SGD(american_bank_parameters, lr=1) australian_bank_optimiser = australian_bank_remote_torch.optim.SGD(australian_bank_parameters, lr=1) ###Output _____no_output_____ ###Markdown Finally, define the loss criterion for each ###Code portuguese_bank_criterion = portuguese_bank_remote_torch.nn.BCELoss() american_bank_criterion = american_bank_remote_torch.nn.BCELoss() australian_bank_criterion = australian_bank_remote_torch.nn.BCELoss() criterions = [portuguese_bank_criterion, american_bank_criterion, australian_bank_criterion] ###Output _____no_output_____ ###Markdown Train the Models ###Code EPOCHS = 25 def train(criterion, epochs=EPOCHS): for e in range(1, epochs + 1): # Train Portuguese Bank's Model portuguese_bank_model.train() portuguese_bank_optimiser.zero_grad() portuguese_bank_pred = portuguese_bank_model(portuguese_bank_duet.store[0]) portuguese_bank_loss = criterion[0](portuguese_bank_pred, portuguese_bank_duet.store[1]) portuguese_bank_loss.backward() portuguese_bank_optimiser.step() local_portuguese_bank_loss = None local_portuguese_bank_loss = portuguese_bank_loss.get( name="loss", reason="To evaluate training progress", request_block=True, timeout_secs=5 ) if local_portuguese_bank_loss is not None: print("Epoch {}:".format(e)) print("Portuguese Bank Loss: {:.4}".format(local_portuguese_bank_loss)) else: print("Epoch {}:".format(e)) print("Portuguese Bank Loss: HIDDEN") # Train American Bank's Model american_bank_model.train() american_bank_optimiser.zero_grad() american_bank_pred = american_bank_model(american_bank_duet.store[0]) american_bank_loss = criterion[1](american_bank_pred, american_bank_duet.store[1]) american_bank_loss.backward() american_bank_optimiser.step() local_american_bank_loss = None local_american_bank_loss = american_bank_loss.get( name="loss", reason="To evaluate training progress", request_block=True, timeout_secs=5 ) if local_american_bank_loss is not None: print("American Bank Loss: 
{:.4}".format(local_american_bank_loss)) else: print("American Bank Loss: HIDDEN") # Train Australian Bank's Model australian_bank_model.train() australian_bank_optimiser.zero_grad() australian_bank_pred = australian_bank_model(australian_bank_duet.store[0]) australian_bank_loss = criterion[2](australian_bank_pred, australian_bank_duet.store[1]) australian_bank_loss.backward() australian_bank_optimiser.step() local_australian_bank_loss = None local_australian_bank_loss = australian_bank_loss.get( name="loss", reason="To evaluate training progress", request_block=True, timeout_secs=5 ) if local_australian_bank_loss is not None: print("Australian Bank Loss: {:.4}".format(local_australian_bank_loss)) else: print("Australian Bank Loss: HIDDEN") return ([portuguese_bank_model, american_bank_model, australian_bank_model]) models = train(criterions) ###Output Epoch 1: Portuguese Bank Loss: 0.7693 American Bank Loss: HIDDEN Australian Bank Loss: 0.7693 Epoch 2: Portuguese Bank Loss: 0.5677 American Bank Loss: HIDDEN Australian Bank Loss: 0.5712 Epoch 3: Portuguese Bank Loss: 0.4772 American Bank Loss: HIDDEN Australian Bank Loss: 0.4799 Epoch 4: Portuguese Bank Loss: 0.4215 American Bank Loss: HIDDEN Australian Bank Loss: 0.4237 Epoch 5: Portuguese Bank Loss: 0.3855 American Bank Loss: HIDDEN Australian Bank Loss: 0.3875 Epoch 6: Portuguese Bank Loss: 0.3613 American Bank Loss: HIDDEN Australian Bank Loss: 0.3632 Epoch 7: Portuguese Bank Loss: 0.3442 American Bank Loss: HIDDEN Australian Bank Loss: 0.346 Epoch 8: Portuguese Bank Loss: 0.3318 American Bank Loss: HIDDEN Australian Bank Loss: 0.3335 Epoch 9: Portuguese Bank Loss: 0.3224 American Bank Loss: HIDDEN Australian Bank Loss: 0.3241 Epoch 10: Portuguese Bank Loss: 0.3152 American Bank Loss: HIDDEN Australian Bank Loss: 0.3169 Epoch 11: Portuguese Bank Loss: 0.3096 American Bank Loss: HIDDEN Australian Bank Loss: 0.3113 Epoch 12: Portuguese Bank Loss: 0.305 American Bank Loss: HIDDEN Australian Bank Loss: 0.3068 Epoch 
13: Portuguese Bank Loss: 0.3014 American Bank Loss: HIDDEN Australian Bank Loss: 0.3031 Epoch 14: Portuguese Bank Loss: 0.2983 American Bank Loss: HIDDEN Australian Bank Loss: 0.3001 Epoch 15: Portuguese Bank Loss: 0.2958 American Bank Loss: HIDDEN Australian Bank Loss: 0.2976 Epoch 16: Portuguese Bank Loss: 0.2937 American Bank Loss: HIDDEN Australian Bank Loss: 0.2955 Epoch 17: Portuguese Bank Loss: 0.292 American Bank Loss: HIDDEN Australian Bank Loss: 0.2937 Epoch 18: Portuguese Bank Loss: 0.2904 American Bank Loss: HIDDEN Australian Bank Loss: 0.2922 Epoch 19: Portuguese Bank Loss: 0.2891 American Bank Loss: HIDDEN Australian Bank Loss: 0.2909 Epoch 20: Portuguese Bank Loss: 0.288 American Bank Loss: HIDDEN Australian Bank Loss: 0.2897 Epoch 21: Portuguese Bank Loss: 0.287 American Bank Loss: HIDDEN Australian Bank Loss: 0.2888 Epoch 22: Portuguese Bank Loss: 0.2862 American Bank Loss: HIDDEN Australian Bank Loss: 0.2879 Epoch 23: Portuguese Bank Loss: 0.2854 American Bank Loss: HIDDEN Australian Bank Loss: 0.2872 Epoch 24: Portuguese Bank Loss: 0.2848 American Bank Loss: HIDDEN Australian Bank Loss: 0.2865 Epoch 25: Portuguese Bank Loss: 0.2842 American Bank Loss: HIDDEN Australian Bank Loss: 0.2859 ###Markdown Localise the models again ###Code # As you can see they are all still remote models local_portuguese_bank_model = models[0].get( request_block=True, name="model_download", reason="test evaluation", timeout_secs=5 ) local_american_bank_model = models[1].get( request_block=True, name="model_download", reason="test evaluation", timeout_secs=5 ) local_australian_bank_model = models[2].get( request_block=True, name="model_download", reason="test evaluation", timeout_secs=5 ) ###Output > Downloading remote model > Creating local model Downloading remote layer: lr > Finished downloading remote model < ###Markdown Average the three models into on local model ###Code with torch.no_grad(): local_model.lr.weight.set_(((local_portuguese_bank_model.lr.weight.data 
+ local_american_bank_model.lr.weight.data + local_australian_bank_model.lr.weight.data) / 3)) local_model.lr.bias.set_(((local_portuguese_bank_model.lr.bias.data + local_american_bank_model.lr.bias.data + local_australian_bank_model.lr.bias.data) / 3)) ###Output _____no_output_____ ###Markdown Test the accuracy on the test set ###Code def accuracy(model, x, y): out = model(x) correct = torch.abs(y - out) < 0.5 return correct.float().mean() plain_accuracy = accuracy(local_model, test_data, test_target) print(f"Accuracy on plain test_set: {plain_accuracy}") ###Output Accuracy on plain test_set: 0.8976250290870667
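The manual weight averaging performed above is the core of federated averaging (commonly called FedAvg): each party trains locally, and only parameters are combined centrally. A framework-free sketch of that combining step, with hypothetical per-bank parameter vectors (these names and numbers are illustrative, not the syft API):

```python
def federated_average(param_sets):
    """Element-wise mean of several parties' flat parameter vectors."""
    if not param_sets:
        raise ValueError("need at least one set of parameters")
    n = len(param_sets)
    length = len(param_sets[0])
    if any(len(p) != length for p in param_sets):
        raise ValueError("all parameter vectors must have the same length")
    return [sum(p[i] for p in param_sets) / n for i in range(length)]

# Three hypothetical banks' weights for the same linear model
portuguese = [0.2, -0.4, 1.0]
american   = [0.4, -0.2, 0.8]
australian = [0.6,  0.0, 1.2]

print(federated_average([portuguese, american, australian]))  # approximately [0.4, -0.2, 1.0]
```

This unweighted mean matches the division by 3 above; when parties hold different amounts of data, FedAvg typically weights each contribution by its dataset size instead.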
pyscal/part3/05_distinguishing_solid_liquid.ipynb
###Markdown Distinction of solid liquid atoms and clustering In this example, we will take one snapshot from a molecular dynamics simulation which has a solid cluster in liquid. The task is to identify solid atoms and cluster them. More details about the method can be found [here](https://pyscal.readthedocs.io/en/latest/solidliquid.html).The first step is, of course, importing all the necessary modules. For visualisation, we will use [Ovito](https://www.ovito.org/). ![alt text](system1.png "original system") The above image shows a visualisation of the system using Ovito. Importing modules, ###Code import pyscal.core as pc ###Output _____no_output_____ ###Markdown Now we will set up a System with this input file, and calculate neighbors. Here we will use a cutoff method to find neighbors. More details about finding neighbors can be found [here](https://pyscal.readthedocs.io/en/latest/nearestneighbormethods.html). ###Code sys = pc.System() sys.read_inputfile('cluster.dump') sys.find_neighbors(method='cutoff', cutoff=3.63) ###Output _____no_output_____ ###Markdown Once we compute the neighbors, the next step is to find solid atoms. This can be done using the [System.find_solids](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.find_solids) method. There are a few parameters that can be set, which can be found in detail [here](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.find_solids). ###Code sys.find_solids(bonds=6, threshold=0.5, avgthreshold=0.6, cluster=False) ###Output _____no_output_____ ###Markdown The above statement found all the solid atoms. Solid atoms can be identified by the value of the `solid` attribute. For that we first get the atom objects and select those with `solid` value as True. ###Code atoms = sys.atoms solids = [atom for atom in atoms if atom.solid] len(solids) ###Output _____no_output_____ ###Markdown There are 202 solid atoms in the system.
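The cutoff method used in `find_neighbors` above is conceptually just a pairwise distance threshold: any two atoms closer than the cutoff are neighbors. A pure-Python sketch on a few hypothetical coordinates (this ignores periodic boundary conditions, which pyscal does handle internally):

```python
import math

def cutoff_neighbors(positions, cutoff):
    """Return, for each atom index, the indices of atoms within `cutoff`."""
    n = len(positions)
    neighbors = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            # Euclidean distance between atoms i and j
            if math.dist(positions[i], positions[j]) <= cutoff:
                neighbors[i].append(j)
                neighbors[j].append(i)
    return neighbors

pos = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
print(cutoff_neighbors(pos, cutoff=3.63))  # [[1], [0], []]
```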
In order to visualise in Ovito, we need to first write it out to a trajectory file. This can be done with the help of the [to_file](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.to_file) method of System. This method can save any attribute of the atom or any Steinhardt parameter value. ###Code sys.to_file('sys.solid.dat', custom = ['solid']) ###Output _____no_output_____ ###Markdown We can now visualise this file in Ovito. After opening the file in Ovito, the modifier [compute property](https://ovito.org/manual/particles.modifiers.compute_property.html) can be selected. The `Output property` should be `selection` and in the expression field, `solid==0` can be entered to select all the non-solid atoms. A [delete selected particles](https://ovito.org/manual/particles.modifiers.delete_selected_particles.html) modifier can then be applied to delete all the non-solid particles. The system after removing all the liquid atoms is shown below. ![alt text](system2.png "system with only solid") Clustering algorithm You can see that there is a cluster of atoms. The clustering functions that pyscal offers help in this regard. If you call `find_solids` with `cluster=True`, the clustering is carried out. Since we used `cluster=False` above, we will rerun the function ###Code sys.find_solids(bonds=6, threshold=0.5, avgthreshold=0.6, cluster=True) ###Output _____no_output_____ ###Markdown You can see that the above function call returned the number of atoms belonging to the largest cluster as an output. In order to extract atoms that belong to the largest cluster, we can use the `largest_cluster` attribute of the atom. ###Code atoms = sys.atoms largest_cluster = [atom for atom in atoms if atom.largest_cluster] len(largest_cluster) ###Output _____no_output_____ ###Markdown The value matches that given by the function. Once again we will save this information to a file and visualise it in Ovito.
###Code sys.to_file('sys.cluster.dat', custom = ['solid', 'largest_cluster']) ###Output _____no_output_____ ###Markdown The system visualised in Ovito following similar steps as above is shown below. ![alt text](system3.png "system with only largest solid cluster") It is clear from the image that the largest cluster of solid atoms was successfully identified. Clustering can be done over any property. The following example with the same system will illustrate this. Clustering based on a custom property In pyscal, clustering can be done based on any property. The following example illustrates this. To find the clusters based on a custom property, the [System.cluster_atoms](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.cluster_atoms) method has to be used. The simulation box shown above has the centre roughly at (25, 25, 25). For the custom clustering, we will cluster all atoms within a distance of 10 from the rough centre of the box at (25, 25, 25). Let us define a function that checks the above condition. ###Code def check_distance(atom): #get position of atom pos = atom.pos #calculate distance from (25, 25, 25) dist = ((pos[0]-25)**2 + (pos[1]-25)**2 + (pos[2]-25)**2)**0.5 #check if dist < 10 return (dist <= 10) ###Output _____no_output_____ ###Markdown The above function returns True or False depending on a condition and takes the Atom as an argument. These are the two important conditions to be satisfied. Now we can pass this function to cluster. First, set up the system and find the neighbors. ###Code sys = pc.System() sys.read_inputfile('cluster.dump') sys.find_neighbors(method='cutoff', cutoff=3.63) ###Output _____no_output_____ ###Markdown Now cluster ###Code sys.cluster_atoms(check_distance) ###Output _____no_output_____ ###Markdown There are 242 atoms in the cluster! Once again we can check this, save to a file and visualise in Ovito.
###Code atoms = sys.atoms largest_cluster = [atom for atom in atoms if atom.largest_cluster] len(largest_cluster) sys.to_file('sys.dist.dat', custom = ['solid', 'largest_cluster']) ###Output _____no_output_____
Housing Predi/house_pre.ipynb
###Markdown Data Cleaning- Most Machine Learning algorithms cannot work with missing features, so let's create a few functions to take care of them. You noticed earlier that the total_bedrooms attribute has some missing values, so let's fix this. You have three options:- Get rid of the corresponding districts.- Get rid of the whole attribute.- Set the values to some value (zero, the mean, the median, etc.).You can accomplish these easily using DataFrame's dropna(), drop(), and fillna() methods.If you choose option 3, you should compute the median value on the training set, and use it to fill the missing values in the training set, but also don't forget to save the median value that you have computed. You will need it later to replace missing values in the test set when you want to evaluate your system, and also once the system goes live to replace missing values in new data.Scikit-Learn provides a handy class to take care of missing values: SimpleImputer. Here is how to use it. First, you need to create a SimpleImputer instance, specifying that you want to replace each attribute's missing values with the median of that attribute.However, I won't be dealing with sklearn now because we are yet to treat the library ###Code housing.dropna(subset=["total_bedrooms"]) # option 1 housing.drop("total_bedrooms", axis=1) # option 2 median = housing["total_bedrooms"].median() # option 3 housing["total_bedrooms"].fillna(median, inplace=True) from sklearn.impute import SimpleImputer housing["total_bedrooms"].fillna(median, inplace=True) imputer = SimpleImputer(strategy="median") #I made mistake in this code.
I wrote Simpler instead of Simple # imputer = SimplerImputer(strategy= "median") #Since the median can only be computed on numerical attributes, we need to create a #copy of the data without the text attribute ocean_proximity: housing_num = housing.drop("ocean_proximity", axis=1) print(imputer.fit(housing_num)) imputer #I could have treated only the total_bedrooms attribute that has missing values rather than everything. But we can't be so sure of tomorrow's data #so let's apply it everywhere imputer.statistics_ housing_num.median().values #transform the values X = imputer.transform(housing_num) housing_tr = pd.DataFrame(X, columns=housing_num.columns) housing_tr #fit() and transform() what about fit_transform()? #fit_transform() means fit then transform. The fit_transform() method sometimes runs faster. ###Output _____no_output_____ ###Markdown Handling Text and Categorical Attributes ###Code # let us treat the ocean_proximity attribute housing_cat = housing[["ocean_proximity"]] housing_cat.head(15) #check the value counts housing_cat.value_counts(sort=True) ###Output _____no_output_____ ###Markdown To convert a text attribute to numbers (machine learning algorithms tend to work better with numbers), we use- one-hot encoding- Scikit-Learn's OrdinalEncoder class- etc ###Code from sklearn.preprocessing import OrdinalEncoder ordinal_encoder = OrdinalEncoder() ordinal_encoder housing_cat_encoded = ordinal_encoder.fit_transform(housing_cat) housing_cat_encoded ordinal_encoder.categories_ _ ###Output _____no_output_____ ###Markdown Underscore (_) in Python Following are the different places where _ is used in Python: Single underscore: in the interpreter, after a name, before a name. Double underscore: __leading_double_underscore, __before_after__. In the interpreter, _ returns the value of the last executed expression. For ignoring values: many times we do not want return values, and at that time we
assign those values to the underscore; it is used as a throwaway variable. Ignore a value at a specific location/index: for _ in range(10): print("Test"). Ignore a value when unpacking: a, b, _, _ = my_method(var1). After a name: Python has its own default keywords which we cannot use as variable names. To avoid such conflicts between a Python keyword and a variable, we use an underscore after the name.- snake_case vs camelCase vs PascalCase ###Code # One hot encoding # From the previous result 0., 1., ..., 4., our ML algorithm can think values like 0.1 and 0.2 are close # to solve this problem, we use dummy variables. To achieve that, scikit-learn provides us with one-hot encoding from sklearn.preprocessing import OneHotEncoder cat_encoder = OneHotEncoder() housing_cat_1hot = cat_encoder.fit_transform(housing_cat) housing_cat_1hot # Using up tons of memory mostly to store zeros would be very wasteful, so instead a sparse matrix # only stores the location of the non-zero elements (see SciPy's documentation for more details). # You can use it mostly like a normal 2D array, but if you really want to # convert it to a (dense) NumPy array, just call the toarray() method: # get list of categories housing_cat_1hot.toarray() ###Output _____no_output_____ ###Markdown Feature Scaling One of the most important transformations you need to apply to your data is feature scaling. With few exceptions, Machine Learning algorithms don't perform well when the input numerical attributes have very different scales. This is the case for the housing data: the total number of rooms ranges from about 6 to 39,320, while the median incomes only range from 0 to 15. Note that scaling the target values is generally not required.There are two common ways to get all attributes to have the same scale: min-max scaling and standardization.Min-max scaling (many people call this normalization) is quite simple: values are shifted and rescaled so that they end up ranging from 0 to 1.
###Code housing_cat housing housing.info() housing.info() housing["total_rooms"].value_counts().head(100) housing["median_income"].value_counts().head(100) ###Output _____no_output_____ ###Markdown feature scaling types- Min-Max / Normalization- Standardization - Min-Max: the Min-Max scaler subtracts the minimum from all values, thereby marking a scale from min to max, then divides by the difference between min and max. The result is that our values will go from zero to 1.- Standardization is quite different: first it subtracts the mean value (so standardized values always have a zero mean), and then it divides by the standard deviation so that the resulting distribution has unit variance. Unlike min-max scaling, standardization does not bound values to a specific range, which may be a problem for some algorithms (e.g., neural networks often expect an input value ranging from 0 to 1). However, standardization is much less affected by outliers. For example, suppose a district had a median income equal to 100 (by mistake). Min-max scaling would then crush all the other values from 0–15 down to 0–0.15, whereas standardization would not be much affected. Scikit-Learn provides a transformer called StandardScaler for standardization. scikit-learn handling feature scaling: Scikit-Learn provides a transformer called MinMaxScaler for min-max scaling. It has a feature_range hyperparameter that lets you change the range if you don't want 0–1 for some reason.
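Both schemes can be written out by hand to see exactly what the scalers do. A short sketch using only Python's standard library, on toy numbers with one outlier (not the actual housing data):

```python
from statistics import mean, pstdev

values = [6.0, 10.0, 14.0, 39320.0]   # toy "total_rooms"-like column with an outlier

# Min-max scaling: (x - min) / (max - min), bounded to [0, 1]
lo, hi = min(values), max(values)
minmax = [(x - lo) / (hi - lo) for x in values]

# Standardization: (x - mean) / std, zero mean and unit variance, unbounded
mu, sigma = mean(values), pstdev(values)
standardized = [(x - mu) / sigma for x in values]

print(minmax)        # all values land in [0, 1]; the outlier crushes the rest near 0
print(standardized)  # mean of the result is 0 (up to rounding), no fixed bounds
```

Note how the single outlier squeezes the min-max-scaled values toward zero, which is exactly the sensitivity the text above describes.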
Transformation Pipelines ###Code # Note: there is no scikit-learn 2.0 release; to upgrade, use e.g. !pip install -U scikit-learn from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler # CombinedAttributesAdder is the custom transformer defined in the book (Hands-On ML, chapter 2) num_pipeline = Pipeline([ ('imputer', SimpleImputer(strategy="median")), ('attribs_adder', CombinedAttributesAdder()), ('std_scaler', StandardScaler()), ]) from sklearn.compose import ColumnTransformer housing_num num_attribs = list(housing_num) cat_attribs = ["ocean_proximity"] full_pipeline = ColumnTransformer([ ("num", num_pipeline, num_attribs), ("cat", OneHotEncoder(), cat_attribs), ]) housing_prepared = full_pipeline.fit_transform(housing) ###Output _____no_output_____
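Because `CombinedAttributesAdder` is defined elsewhere (it is the custom transformer from the book's chapter 2), the cell above fails if run in isolation. Here is a self-contained sketch of the same `Pipeline` + `ColumnTransformer` pattern on a hypothetical toy frame, with that custom step omitted (assumes scikit-learn and pandas are installed; column names are made up):

```python
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer

toy = pd.DataFrame({
    "total_rooms": [100.0, 200.0, None, 400.0],   # one missing value to impute
    "median_income": [1.5, 3.0, 4.5, 6.0],
    "ocean_proximity": ["INLAND", "NEAR BAY", "INLAND", "ISLAND"],
})

num_attribs = ["total_rooms", "median_income"]
cat_attribs = ["ocean_proximity"]

toy_num_pipeline = Pipeline([
    ("imputer", SimpleImputer(strategy="median")),  # fill the missing room count first
    ("std_scaler", StandardScaler()),               # then standardize
])

toy_pipeline = ColumnTransformer([
    ("num", toy_num_pipeline, num_attribs),
    ("cat", OneHotEncoder(), cat_attribs),
])

prepared = toy_pipeline.fit_transform(toy)
print(prepared.shape)  # (4, 5): 2 scaled numeric columns + 3 one-hot categories
```

The same two-step pattern scales to the full housing frame: one sub-pipeline per column type, composed by `ColumnTransformer`.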
lane_finding/lane_finding.ipynb
###Markdown Self-Driving Car Engineer Nanodegree Project: **Finding Lane Lines on the Road** ***In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the IPython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/322/view) for this project.---Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**--- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection.
You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**--- Your output should look something like this (above) after detecting line segments using the helper functions below Your goal is to connect/average/extrapolate line segments to get output like this **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** Import Packages ###Code #importing some useful packages import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 import math import pdb, os from collections.abc import Mapping %matplotlib inline ###Output _____no_output_____ ###Markdown Read in an Image ###Code #reading in an image image = mpimg.imread('test_images/solidWhiteRight.jpg') #printing out some stats and plotting print('This image is:', type(image), 'with dimensions:', image.shape) plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray') ###Output This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3) ###Markdown Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**`cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images`cv2.cvtColor()` to grayscale or change color`cv2.imwrite()` to output images to file `cv2.bitwise_and()`
to apply a mask to an image**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!** ###Code # Helper Functions def show_img(img_dict): if not isinstance(img_dict, Mapping): plt.imshow(img_dict) return elif len(img_dict) == 1: plt.imshow(list(img_dict.values())[0]) # dict.values() must be materialized before indexing return else: col = 3 row = 1 values_list = list(img_dict.values()) fig, axes = plt.subplots(row, col, figsize = (16, 8)) fig.subplots_adjust(hspace = 0.1, wspace = 0.2) axes = axes.ravel() axes[0].imshow(values_list[0]) axes[1].imshow(values_list[1]) axes[2].imshow(values_list[2]) def grayscale(img): return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) def canny(img, low_threshold, high_threshold): return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): """ Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. `vertices` should be a numpy array of integer points. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e. 3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap, default): """ `img` should be the output of a Canny transform. Returns an image with hough lines drawn. 
""" lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) if default: draw_default_lines(line_img, lines) else: draw_lines(line_img, lines) return line_img # Python 3 has support for cool math symbols. def weighted_img(img, initial_img, α=0.8, β=1., γ=0.): """ `img` is the output of the hough_lines(), An image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. The result image is computed as follows: initial_img * α + img * β + γ NOTE: initial_img and img must be the same shape! """ return cv2.addWeighted(initial_img, α, img, β, γ) def draw_default_lines(img, lines, color=[255, 0, 0], thickness=5): for line in lines: for x1,y1,x2,y2 in line: cv2.line(img, (x1, y1), (x2, y2), color, thickness) def draw_lines(img, lines, color=[255, 0, 0], thickness=10): """ This function draws `lines` with `color` and `thickness`. Lines are drawn on the image inplace (mutates the image). 
If you want to make the lines semi-transparent, think about combining this function with the weighted_img() function below """ # Track gradient and intercept of left and right lane left_slope = [] left_intercept = [] left_y = [] right_slope = [] right_intercept = [] right_y = [] for line in lines: for x1,y1,x2,y2 in line: slope = (y2-y1)/(x2-x1) intercept = y2 - (slope*x2) # right lane if slope > 0.0 and slope < math.inf and abs(slope) > 0.3: right_slope.append(slope) right_intercept.append(intercept) right_y.append(y1) right_y.append(y2) # left lane elif slope < 0.0 and slope > -math.inf and abs(slope) > 0.3: left_slope.append(slope) left_intercept.append(intercept) left_y.append(y1) left_y.append(y2) y_min = min(min(left_y), min(right_y)) + 40 y_max = img.shape[0] l_m = np.mean(left_slope) l_c = np.mean(left_intercept) r_m = np.mean(right_slope) r_c = np.mean(right_intercept) l_x_max = int((y_max - l_c)/l_m) l_x_min = int((y_min - l_c)/l_m) r_x_max = int((y_max - r_c)/r_m) r_x_min = int((y_min - r_c)/r_m) #pdb.set_trace() cv2.line(img, (l_x_max, y_max),(l_x_min, y_min), color, thickness) cv2.line(img, (r_x_max, y_max),(r_x_min, y_min), color, thickness) ###Output _____no_output_____ ###Markdown Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters. 
###Code # This cell creates a dictionary of all of the images in 'test_images' folder def get_images(IMG_PATH = None): if IMG_PATH == None: test_imgs = os.listdir("test_images/") IMG_PATH = "test_images/" else: test_imgs = os.listdir(IMG_PATH) # create an array that contains all test images img_dict = {} for image in test_imgs: img_dict[image] = mpimg.imread(os.path.join(IMG_PATH, image)) return img_dict # outputs lanes for all images in image dictionary def interpolate(lanes, img): # Interpolating lines result = weighted_img(lanes, img) return result def get_lanes(img_dict, default = False): if isinstance(img_dict, Mapping): for image in img_dict.keys(): test_img = img_dict[image] # Converting to grayscale gray_img = grayscale(test_img) blur_img = gaussian_blur(gray_img, kernel_size = 3) # Computing Edges edges = canny(blur_img, low_threshold = 75, high_threshold = 150) # Extracting Region of Interest points = np.array([[130,600],[380,300],[650,300],[900,550]], dtype=np.int32) ROI = region_of_interest(edges, [points]) # Performing Hough Transform and draw lanes lanes = hough_lines(ROI, 2, np.pi/180, 15, 5, 25, default) if default: img_dict[image] = lanes else: res = interpolate(lanes, test_img) img_dict[image] = res return img_dict # from video frames else: gray_img = grayscale(img_dict) blur_img = gaussian_blur(gray_img, kernel_size = 3) # Computing Edges edges = canny(blur_img, low_threshold = 75, high_threshold = 150) # Extracting Region of Interest points = np.array([[130,600],[380,300],[650,300],[900,550]], dtype=np.int32) ROI = region_of_interest(edges, [points]) # Performing Hough Transform and draw lanes lanes = hough_lines(ROI, 2, np.pi/180, 15, 5, 25, default) res = interpolate(lanes, img_dict) return res img_dict = get_images() show_img(img_dict) default_lanes = get_lanes(img_dict, True) show_img(default_lanes) img_dict = get_images() final_output = get_lanes(img_dict) show_img(final_output) # This block will automatically compute and save the lanes to 
the folder 'test_images_output' def compute_test_images(img_dict): # compute the lanes for all test_images lanes_dict = get_lanes(img_dict) # save outputs to 'test_images_output' for image in lanes_dict.keys(): PATH = 'test_images_output/' mpimg.imsave(os.path.join(PATH, image), img_dict[image]) compute_test_images(img_dict) ###Output /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:112: RuntimeWarning: divide by zero encountered in int_scalars ###Markdown Test on VideosYou know what's cooler than drawing lanes over images? Drawing lanes over video!We can test our solution on two provided videos:`solidWhiteRight.mp4``solidYellowLeft.mp4`**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.****If you get an error that looks like this:**```NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download()```**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.** ###Code # Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): # NOTE: The output you return should be a color image (3 channel) for processing video below # TODO: put your pipeline here, # you should return the final output (image where lines are drawn on lanes) result = get_lanes(image) return result ###Output _____no_output_____ ###Markdown Let's try the one with the solid white lane on the right first ... 
###Code white_output = 'test_videos_output/solidWhiteRight.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5) clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! %time white_clip.write_videofile(white_output, audio=False) ###Output [MoviePy] >>>> Building video test_videos_output/solidWhiteRight.mp4 [MoviePy] Writing video test_videos_output/solidWhiteRight.mp4 ###Markdown Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice. ###Code HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(white_output)) ###Output _____no_output_____ ###Markdown Improve the draw_lines() function**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".****Go back and modify your draw_lines function accordingly and try re-running your pipeline. 
The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky! ###Code yellow_output = 'test_videos_output/solidYellowLeft.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5) clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4') yellow_clip = clip2.fl_image(process_image) %time yellow_clip.write_videofile(yellow_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(yellow_output)) ###Output _____no_output_____ ###Markdown Writeup and SubmissionIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. Optional ChallengeTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! 
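One way to make the pipeline more robust on the challenge video is to add a color-selection step before edge detection, using the `cv2.inRange()` idea from the helper-function suggestions at the top. Here is a NumPy-only sketch of that idea; the threshold values are illustrative guesses, not tuned for the project videos:

```python
import numpy as np

def color_mask(img, lower, upper):
    """Boolean mask of pixels whose RGB values fall inside [lower, upper].

    Same spirit as cv2.inRange(): use the mask to keep only white/yellow
    lane-colored pixels before running Canny + Hough.
    """
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    return np.all((img >= lower) & (img <= upper), axis=-1)

# Tiny 1x3 "image": a white pixel, a yellow-ish pixel, a dark road pixel
img = np.array([[[255, 255, 255], [230, 200, 60], [60, 60, 60]]], dtype=np.uint8)
white = color_mask(img, (200, 200, 200), (255, 255, 255))
yellow = color_mask(img, (180, 150, 0), (255, 255, 120))
lane = white | yellow   # keep pixels matching either color
```

In the real pipeline you would apply the combined mask to the frame (e.g. zero out non-lane pixels) before grayscaling, so shadows and pavement-color changes contribute fewer spurious edges.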
###Code challenge_output = 'test_videos_output/challenge.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5) clip3 = VideoFileClip('test_videos/challenge.mp4') challenge_clip = clip3.fl_image(process_image) %time challenge_clip.write_videofile(challenge_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(challenge_output)) ###Output _____no_output_____
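The averaging/extrapolation logic at the heart of `draw_lines()` can be factored into a small, testable helper. This is a sketch in plain NumPy, not the project code itself; the segment coordinates and y-range below are illustrative:

```python
import numpy as np

def average_lane(segments, y_bottom, y_top):
    """Collapse Hough segments (x1, y1, x2, y2) into one extrapolated line.

    Returns (x_bottom, x_top): where the averaged line crosses y_bottom
    and y_top (image coordinates, so y grows downward).
    """
    slopes, intercepts = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:            # skip vertical segments (infinite slope)
            continue
        m = (y2 - y1) / (x2 - x1)
        slopes.append(m)
        intercepts.append(y1 - m * x1)
    m, c = np.mean(slopes), np.mean(intercepts)
    return int((y_bottom - c) / m), int((y_top - c) / m)

# Two segments lying on the line y = 2x + 10
segs = [(0, 10, 5, 20), (10, 30, 20, 50)]
x_bot, x_top = average_lane(segs, y_bottom=110, y_top=10)  # extrapolated endpoints
```

In the full pipeline you would call this once for the left-lane segments (negative slope) and once for the right (positive slope), then draw the two resulting lines with `cv2.line()`.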
notebook/S03A_Scalars_Annotated.ipynb
###Markdown Scalars ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Integers Binary representation of integers ###Code format(16, '032b') ###Output _____no_output_____ ###Markdown Bit shifting ###Code format(16 >> 2, '032b') 16 >> 2 format(16 << 2, '032b') 16 << 2 ###Output _____no_output_____ ###Markdown OverflowIn general, the computer representation of integers has a limited range, and may overflow. The range depends on whether the integer is signed or unsigned.For example, with 8 bits, we can represent at most $2^8 = 256$ integers.- 0 to 255 unsigned- -128 to 127 signed Signed integers ###Code np.arange(130, dtype=np.int8)[-5:] ###Output _____no_output_____ ###Markdown Unsigned integers ###Code np.arange(130, dtype=np.uint8)[-5:] np.arange(260, dtype=np.uint8)[-5:] ###Output _____no_output_____ ###Markdown Integer divisionIn Python 2 or other languages such as C/C++, be very careful when dividing, as the division operator `/` performs integer division when both numerator and denominator are integers. This is rarely what you want. In Python 3 the `/` always performs floating point division, and you use `//` for integer division, removing a common source of bugs in numerical calculations. ###Code %%python2 import numpy as np x = np.arange(10) print(x/10) ###Output [0 0 0 0 0 0 0 0 0 0] ###Markdown Python 3 does the "right" thing. ###Code x = np.arange(10) x/10 ###Output _____no_output_____ ###Markdown Real numbersReal numbers are represented as **floating point** numbers. A floating point number is stored in 3 pieces (sign bit, exponent, mantissa), so that every float is represented as ± mantissa × 2^exponent. 
Because of this, the interval between consecutive numbers is smallest (high precision) for numbers close to 0 and largest for numbers close to the lower and upper bounds.Because exponents have to be signed to represent both small and large numbers, but it is more convenient to use unsigned numbers here, the exponent is stored with an offset (also known as the exponent bias). For example, if the exponent is an unsigned 8-bit number, it can represent the range (0, 255). By using an offset of 128, it will now represent the range (-127, 128).![float1](http://www.dspguide.com/graphics/F_4_2.gif)**Note**: Intervals between consecutive floating point numbers are not constant. In particular, the precision for small numbers is much larger than for large numbers. In fact, approximately half of all floating point numbers lie between -1 and 1 when using the `double` type in C/C++ (also the default for `numpy`).![float2](http://jasss.soc.surrey.ac.uk/9/4/4/fig1.jpg)Because of this, if you are adding many numbers, it is more accurate to first add the small numbers before the large numbers. IEEE 754 32-bit floating point representation![img](https://upload.wikimedia.org/wikipedia/commons/thumb/d/d2/Float_example.svg/590px-Float_example.svg.png)See [Wikipedia](https://en.wikipedia.org/wiki/Single-precision_floating-point_format) for how this binary number is evaluated to 0.15625. ###Code from ctypes import c_int, c_float s = c_int.from_buffer(c_float(0.15625)).value s = format(s, '032b') s rep = { 'sign': s[:1], 'exponent' : s[1:9:], 'fraction' : s[9:] } rep ###Output _____no_output_____ ###Markdown Most base 10 real numbers are approximationsThis is simply because numbers are stored in finite-precision binary format. 
###Code '%.20f' % (0.1 * 0.1 * 100) ###Output _____no_output_____ ###Markdown Never check for equality of floating point numbers ###Code i = 0 loops = 0 while i != 1: i += 0.1 * 0.1 loops += 1 if loops == 1000000: break i i = 0 loops = 0 while np.abs(1 - i) > 1e-6: i += 0.1 * 0.1 loops += 1 if loops == 1000000: break i ###Output _____no_output_____ ###Markdown Associative law does not necessarily hold ###Code 6.022e23 - 6.022e23 + 1 1 + 6.022e23 - 6.022e23 ###Output _____no_output_____ ###Markdown Distributive law does not hold ###Code a = np.exp(1) b = np.pi c = np.sin(1) a*(b+c) a*b + a*c ###Output _____no_output_____ ###Markdown Catastrophic cancellation Consider calculating sample variance$$s^2 = \frac{1}{n(n-1)}\left(n\sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2\right)$$Be careful whenever you calculate the difference of potentially big numbers. ###Code def var(x): """Returns variance of sample data using sum of squares formula.""" n = len(x) return (1.0/(n*(n-1))*(n*np.sum(x**2) - (np.sum(x))**2)) ###Output _____no_output_____ ###Markdown Underflow ###Code np.warnings.filterwarnings('ignore') np.random.seed(4) xs = np.random.random(1000) ys = np.random.random(1000) np.prod(xs)/np.prod(ys) ###Output _____no_output_____ ###Markdown Prevent underflow by staying in log space ###Code x = np.sum(np.log(xs)) y = np.sum(np.log(ys)) np.exp(x - y) ###Output _____no_output_____ ###Markdown Overflow ###Code np.exp(1000) ###Output _____no_output_____ ###Markdown Numerically stable algorithms What is the sample variance for numbers from a normal distribution with variance 1? ###Code np.random.seed(15) x_ = np.random.normal(0, 1, int(1e6)) x = 1e12 + x_ var(x) ###Output _____no_output_____ ###Markdown Use functions from numerical libraries where available ###Code np.var(x) ###Output _____no_output_____ ###Markdown There is also a variance function in the standard library, but it is slower for large arrays. 
###Code import statistics statistics.variance(x) ###Output _____no_output_____ ###Markdown Note that `numpy` does not use the asymptotically unbiased estimator by default. If you want the unbiased variance, set `ddof` to 1. ###Code np.var([1,2,3,4], ddof=1) statistics.variance([1,2,3,4]) ###Output _____no_output_____ ###Markdown Useful numerically stable functions Let's calculate$$\log(e^{1000} + e^{1000})$$Using basic algebra, we get the solution $\log(2) + 1000$.\begin{align}\log(e^{1000} + e^{1000}) &= \log(e^{0}e^{1000} + e^{0}e^{1000}) \\&= \log(e^{1000}(e^{0} + e^{0})) \\&= \log(e^{1000}) + \log(e^{0} + e^{0}) \\&= 1000 + \log(2)\end{align} **logaddexp** ###Code x = np.array([1000, 1000]) np.log(np.sum(np.exp(x))) np.logaddexp(*x) ###Output _____no_output_____ ###Markdown **logsumexp**This function generalizes `logaddexp` to an arbitrary number of addends and is useful in a variety of statistical contexts. Suppose we need to calculate a probability distribution $\pi$ parameterized by a vector $x$$$\pi_i = \frac{e^{x_i}}{\sum_{j=1}^n e^{x_j}}$$Taking logs, we get$$\log(\pi_i) = x_i - \log{\sum_{j=1}^n e^{x_j}}$$ ###Code x = 1e6*np.random.random(100) np.log(np.sum(np.exp(x))) from scipy.special import logsumexp logsumexp(x) ###Output _____no_output_____ ###Markdown **log1p and expm1** ###Code np.exp(np.log(1 + 1e-6)) - 1 np.expm1(np.log1p(1e-6)) ###Output _____no_output_____ ###Markdown **sinc** ###Code x = 1 np.sin(x)/x np.sinc(x/np.pi) # np.sinc is the normalized sinc, sin(pi x)/(pi x), so rescale the argument to match sin(x)/x x = np.linspace(0.01, 2*np.pi, 100) plt.plot(x, np.sinc(x/np.pi), label='Library function') plt.plot(x, np.sin(x)/x, label='DIY function') plt.legend() pass ###Output _____no_output_____
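As a postscript to the catastrophic-cancellation example above: the standard fix is a one-pass update that never subtracts two enormous sums. This sketch of Welford's method is not part of the original notebook:

```python
def welford_variance(xs):
    """One-pass, numerically stable sample variance (ddof=1), Welford's method."""
    n = 0
    mean = 0.0
    m2 = 0.0                       # running sum of squared deviations
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)   # uses both the old and the updated mean
    return m2 / (n - 1)

# The large offset that breaks the sum-of-squares formula is harmless here
data = [1e12 + v for v in (1.0, 2.0, 3.0, 4.0)]
```

`np.var` internally uses a two-pass approach (mean first, then mean of squared deviations), which is one reason it succeeded above where the sum-of-squares `var()` failed.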
5/rode.ipynb
###Markdown Lab 05 Solving a stiff system of differential equations Konks Eric, Б01-818. Problem X.9.7 $$y_1'=-0.04y_1+10^4y_2y_3$$ $$y_2'=0.04y_1-10^4y_2y_3-3*10^7y_2^2$$ $$y_3'=3*10^7y_2^2$$ $$y_1(0)=1,\ y_2(0)=0,\ y_3(0)=0$$ ###Code import unittest import logging import numpy as np import pandas as pd import matplotlib.pyplot as plt #logging.basicConfig(level=logging.DEBUG) class RODE: def __init__(self): self.log = logging.getLogger("RODE") def k_calc_stop(self, k_cur, k_next, delta): if id(k_cur) == id(k_next): return False if np.abs(np.linalg.norm(np.matrix(k_cur)) - np.linalg.norm(np.matrix(k_next))) < delta: return True return False def k_calc(self, stages, c_vec, b_vec, a, f_vec, u_res, h, t_res, delta): k_next = [[0 for _ in range(stages)] for _ in range(len(f_vec))] k_cur = k_next itr = 0 while not self.k_calc_stop(k_cur, k_next, delta): k_tmp = k_next k_next = [k_cur[i][:] for i in range(len(k_cur))] k_cur = k_tmp for s in range(stages): u_k = [u_res[-1][j]+h*sum(a[s][m]*k_cur[j][m] for m in range(stages)) for j in range(len(f_vec))] # implicit method: sum over the full row of A, not just the lower triangle self.log.debug(f"Iter[{itr}]|S[{s}]: u_k: {u_k}") for i in range(len(f_vec)): k_next[i][s] = f_vec[i](t_res[-1]+c_vec[s]*h, u_k) self.log.debug(f"Iter[{itr}]: k: {k_next}") itr = itr + 1 return k_next def solve(self, stages, c_vec, b_vec, a, f_vec, u_init, h, t_range, delta): u_res = [u_init,] t_res = [t_range[0],] while t_res[-1] < t_range[1]: u_cur = [0 for _ in range(len(f_vec))] k = self.k_calc(stages, c_vec, b_vec, a, f_vec, u_res, h, t_res, delta) for i in range(len(f_vec)): u_cur[i] = u_res[-1][i]+h*sum(b_vec[s]*k[i][s] for s in range(stages)) self.log.debug(f"T[{t_res[-1]}]: k: {k}") self.log.debug(f"T[{t_res[-1]}]: u: {u_cur}") u_res.append(u_cur) t_res.append(t_res[-1]+h) return (t_res, u_res) log = logging.getLogger() c_vec = [1/2-np.sqrt(15)/10, 1/2, 1/2+np.sqrt(15)/10] b_vec = [5/18, 4/9, 5/18] a = [[5/36,2/9-np.sqrt(15)/15,5/36-np.sqrt(15)/30], [5/36+np.sqrt(15)/24,2/9,5/36-np.sqrt(15)/24], 
[5/36+np.sqrt(15)/30,2/9+np.sqrt(15)/15,5/36]] #c_vec = [1/3, 1] #b_vec = [3/4, 1/4] #a = [[5/12, -1/12], [3/4, 1/4]] log.debug(f"c={c_vec}") log.debug(f"b={b_vec}") log.debug(f"a={a}") u_init = [1, 0, 0] t_range = (0, 40) delta = 10e-6 h = 0.001 f1 = lambda t, u_vec: -0.04*u_vec[0]+10**4*u_vec[1]*u_vec[2] f2 = lambda t, u_vec: 0.04*u_vec[0]-10**4*u_vec[1]*u_vec[2]-3*10**7*u_vec[1]**2 f3 = lambda t, u_vec: 3*10**7*u_vec[1]**2 f_vec = [f1, f2, f3] rode = RODE() res = rode.solve(len(c_vec), c_vec, b_vec, a, f_vec, u_init, h, t_range, delta) df = pd.DataFrame({"t": res[0], "(y1, y2, y3)": res[1]}) print(df) def mplot(x, y, xlabel, ylabel): plt.plot(x, y, label=f"{ylabel}({xlabel})") plt.grid(True) plt.xlabel(xlabel) plt.ylabel(ylabel) plt.legend() plt.show() mplot(res[0], [j[0] for j in res[1]], 't', 'y1') mplot(res[0], [j[1] for j in res[1]], 't', 'y2') mplot(res[0], [j[2] for j in res[1]], 't', 'y3') ###Output _____no_output_____
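Why go to the trouble of an implicit Runge–Kutta scheme (the tableau above is the three-stage Gauss–Legendre method) at all? Because the Robertson system is stiff: explicit methods are only stable for very small steps. A minimal illustration, separate from the lab code and using backward Euler purely for simplicity: on the scalar test equation y' = −λy, explicit Euler diverges whenever |1 − λh| > 1, while the implicit update is stable for any h > 0.

```python
def explicit_euler(lam, y0, h, steps):
    """Forward Euler for y' = -lam * y."""
    y = y0
    for _ in range(steps):
        y = y + h * (-lam * y)     # y_{n+1} = (1 - lam*h) * y_n
    return y

def implicit_euler(lam, y0, h, steps):
    """Backward Euler: solve y_{n+1} = y_n - h*lam*y_{n+1} exactly."""
    y = y0
    for _ in range(steps):
        y = y / (1 + lam * h)      # y_{n+1} = y_n / (1 + lam*h)
    return y

lam, y0, h, steps = 1000.0, 1.0, 0.01, 100
y_exp = explicit_euler(lam, y0, h, steps)   # |1 - lam*h| = 9: blows up
y_imp = implicit_euler(lam, y0, h, steps)   # decays toward the true solution
```

The true solution e^(−1000t) is essentially zero after t = 0.01, and the implicit step reproduces that decay even at h = 0.01, while the explicit step amplifies the error by a factor of 9 per step.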
week-3/week-3-1-class-empty.ipynb
###Markdown Week 3-1 - Linear Regression - class notebookThis notebook gives three examples of regression, that is, fitting a linear model to our data to find trends. For the finale, we're going to duplicate the analysis behind the Washington Post story on 2016 Trump voters. ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression %matplotlib inline ###Output _____no_output_____ ###Markdown Part 1 - Single variable regressionWe'll start with some simple data on height and weight. ###Code hw = pd.read_csv("week-3/height-weight.csv") hw ###Output _____no_output_____ ###Markdown Let's look at the distribution of each of these variables. ###Code hw.height.hist() hw.weight.hist() ###Output _____no_output_____ ###Markdown Really, the interesting thing is to look at them together. For this we use a scatter plot. ###Code hw.plot(kind='scatter', x='height', y='weight') ###Output _____no_output_____ ###Markdown Clearly there's a trend that relates the two. One measure of the strength of that trend is called "correlation". We can compute the correlation between every pair of columns with `corr()`, though in this case it's really only between one pair. ###Code # Show the correlations! OMG hw.corr() # the closer the correlation is to 1, the closer the values are to a line ###Output _____no_output_____ ###Markdown If you want to get better at knowing what sort of graph a correlation coefficient corresponds to, play the remarkable 8-bit game [Guess the Correlation](http://guessthecorrelation.com/)So far so good. Now suppose we want to know what weight we should guess if we know someone is 60" tall. We don't have anyone of that height in our data, and even if we did, they could be above or below average height. We need to build some sort of *model* which captures the trend, and guesses the average weight at each height.*ENTER THE REGRESSION*. 
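Before reaching for sklearn, note that the one-variable least-squares fit has a closed form: the slope is cov(x, y)/var(x) and the intercept is ȳ − m·x̄. A quick sketch with made-up numbers (not the height/weight data):

```python
import numpy as np

def fit_line(x, y):
    """Ordinary least squares for y = m*x + b, via the closed form."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    m = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    b = y.mean() - m * x.mean()
    return m, b

# Points lying exactly on y = 3x + 2
slope, intercept = fit_line([1, 2, 3, 4], [5, 8, 11, 14])
```

`LinearRegression().fit()` below computes exactly this least-squares line, generalized to any number of variables.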
###Code # convert pandas dataframe to a numpy array, which can be understood by sklearn x = hw[['height']].values y = hw[['weight']].values lm = LinearRegression() lm.fit(x,y) ###Output /Users/km/.pyenv/versions/3.6.5/lib/python3.6/site-packages/sklearn/linear_model/base.py:509: RuntimeWarning: internal gelsd driver lwork query error, required iwork dimension not returned. This is likely the result of LAPACK bug 0038, fixed in LAPACK 3.2.2 (released July 21, 2010). Falling back to 'gelss' driver. linalg.lstsq(X, y) ###Markdown Ok, now we've got a "linear regression." What is it? It's just a line `y=mx+b`, which we can recover like this: ###Code m = lm.coef_[0] m b = lm.intercept_ b ###Output _____no_output_____ ###Markdown We can plot this line `y=mx+b` on top of the scatterplot to see it. ###Code hw.plot(kind='scatter', x='height', y='weight') plt.plot(hw.height, m*hw.height+b, '--') ###Output _____no_output_____ ###Markdown So if we want to figure out the average weight of someone who is 60" tall, we can compute ###Code m*60+b ###Output _____no_output_____ ###Markdown There's a shortcut for this, which will come in handy when we add variables ###Code lm.predict([[60]]) # predict() expects a 2D array: one row per sample, one column per feature ###Output _____no_output_____ ###Markdown Part 2 - Multi-variable regression We can do essentially the same trick with one more independent variable. Then our regression equation is `y = m1*x1 + m2*x2 + b`. We'll use one of the built-in `sklearn` datasets as demonstration data. ###Code from sklearn import datasets from mpl_toolkits.mplot3d import Axes3D diabetes = datasets.load_diabetes() print(diabetes.DESCR) # take a look at the predictive (independent) variables # The variables to be used for prediction df = pd.DataFrame(diabetes.data, columns=['age', 'sex', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']) df.hist() # take a look at the "target" (dependent) variable # fit a regression # Which columns do we want to use to try to predict? 
I’m choosing age and BMI here # (BMI is “body mass index”, it’s a measure of weight compared to height) indices = (0, 2) x = diabetes.data[:, indices] y = diabetes.target lm2 = LinearRegression() lm2.fit(x, y) ###Output _____no_output_____ ###Markdown Ok awesome, we've fit a regression with multiple variables. What did we get? Let's check the coefficients ###Code lm2.coef_ ###Output _____no_output_____ ###Markdown Now we have *two* coefficients. They're both positive, which means that both age and BMI are associated with increased disease progression. We have an intercept too, the predicted value of the target variable when both age and BMI are zero (which never happens, but that's the way the math works) ###Code lm2.intercept_ ###Output _____no_output_____ ###Markdown To really see what's going on here, we're going to plot the whole thing in beautiful 3D. Now instead of a regression line, we have a regression *plane.* Are you ready for this? ###Code # Helpful function that we'll use later for making more 3D regression plots def plot_regression_3d(x, y, z, model, elev=30, azim=30, xlab=None, ylab=None): fig = plt.figure() ax = Axes3D(fig, elev=elev, azim=azim) # This looks gnarly, but we're just taking four points at the corners of the plot, # and using predict() to determine their vertical position xmin = x.min() xmax = x.max() ymin = y.min() ymax = y.max() corners_x = np.array([[xmin, xmin], [xmax, xmax]]) corners_y = np.array([[ymin, ymax], [ymin, ymax]]) corners_z = model.predict(np.array([[xmin, xmin, xmax, xmax], [ymin, ymax, ymin, ymax]]).T).reshape((2, 2)) ax.plot_surface(corners_x, corners_y, corners_z, alpha=0.5) ax.scatter(x, y, z, alpha=0.3) ax.set_xlabel(xlab) ax.set_ylabel(ylab) # Now plot our diabetes data plot_regression_3d(x[:, 0], x[:, 1], y, lm2, elev=20, azim=0, xlab='age', ylab='BMI') ###Output _____no_output_____ ###Markdown Part 3 - Analysis of 2016 votersAside from prediction, we can use regression to attempt explanations. 
The coefficient `m` in the above encodes a guess about the existence and strength of the relationship between `x` and `y`. If it's zero, we guess that they're unrelated. Otherwise, it tells us how they are likely to vary together.In this section we're going to try to understand what motivated people to vote for Trump by looking at the relationship between vote and other variables in the [2016 American National Election Study data](http://electionstudies.org/project/2016-time-series-study/). There were quite a few statistical analyses of this "why did Trump win?" kind after the election, by journalists and researchers. - [Racism motivated Trump voters more than authoritarianism](https://www.washingtonpost.com/news/monkey-cage/wp/2017/04/17/racism-motivated-trump-voters-more-than-authoritarianism-or-income-inequality) - Washington Post- [The Rise of American Authoritarianism](https://www.vox.com/2016/3/1/11127424/trump-authoritarianism) - Vox- [Education, Not Income, Predicted Who Would Vote For Trump](https://fivethirtyeight.com/features/education-not-income-predicted-who-would-vote-for-trump/) - 538- [Why White Americans Voted for Trump – A Research Psychologist’s Analysis](https://techonomy.com/2018/02/white-americans-voted-trump-research-psychologists-analysis/) - Techonomy- [Status threat, not economic hardship, explains the 2016 presidential vote](http://www.pnas.org/content/early/2018/04/18/1718155115) - Diana C. Mutz, PNAS- [Trump thrives in areas that lack traditional news outlets](https://www.politico.com/story/2018/04/08/news-subscriptions-decline-donald-trump-voters-505605) - Politico- [The Five Types of Trump Voters](https://www.voterstudygroup.org/publications/2016-elections/the-five-types-trump-voters) - Voter Study GroupMany of these used regression, but some did not. My favorite is the Voter Study Group analysis which used clustering -- just like we learned last week. 
It has a good discussion of the problems with using a regression to answer this question. We're going to use regression anyway, along the lines of the [Washington Post piece](https://www.washingtonpost.com/news/monkey-cage/wp/2017/04/17/racism-motivated-trump-voters-more-than-authoritarianism-or-income-inequality/?utm_term=.01d9d3764f2c) which also uses ANES data. In particular, a regression on variables representing attitudes about authoritarianism and minorities. ###Code # read 'anes_timeseries_2016_rawdata.csv' anes = pd.read_csv('week-3/anes_timeseries_2016_rawdata.csv') print(anes.shape) anes.head() ###Output /Users/km/.pyenv/versions/3.6.5/lib/python3.6/site-packages/IPython/core/interactiveshell.py:2705: DtypeWarning: Columns (790,1129,1131) have mixed types. Specify dtype option on import or set low_memory=False. interactivity=interactivity, compiler=compiler, result=result) ###Markdown The first thing we need to do is construct indices of "authoritarianism" and "racism" from answers to the survey questions. We're following exactly what the Washington Post did here. Are "authoritarianism" and "racism" accurate and/or useful words for indices constructed of these questions? 
Our choice of words will hugely shape the impression that readers come away with -- even if we do the exact same calculations.We start by dropping everything we don't need: we keep only white voters, only people who voted, and just the cols we want ###Code # drop non-white voters white_col = 'V161310a' anes = anes[anes[white_col] == 1] anes.shape # keep only Trump, Clinton voters voted_col = 'V162034a' # 1=Clinton, 2=Trump, 3=Johnson, 4=Stein, negative numbers = didn't vote or won't say anes = anes[(anes[voted_col] == 1) | (anes[voted_col] == 2)] anes.shape # keep only columns on authoritarian, racial scales authoritarian_cols = ['V162239', 'V162240', 'V162241', 'V162242'] racial_cols = ['V162211', 'V162212', 'V162213', 'V162214'] anes = anes[[voted_col] + authoritarian_cols + racial_cols] anes.head() ###Output _____no_output_____ ###Markdown Now we have to decode these values.For the child-rearing questions, the code book tells us that 1 means the first option and 2 means the second. But 3 means both, and then there are all sorts of codes that mean the question wasn't answered, in different ways. And then there's the issue that the questions have different directions: Option 1 might mean either "more" or "less" authoritarian. So we have a custom translation dictionary for each column. This is the stuff that dreams are made of, people. ###Code # recode the authoritarian variables # These variables are proxies for authoritarian attitudes. Why are these questions about children? # Because that's the only way to get honest answers! It's a long story. 
# See https://www.vox.com/2016/3/1/11127424/trump-authoritarianism
# All authoritarian traits are coded 1 for first option and 2 for second
# We turn this into +1/0/-1 where +1 is the more authoritarian option, and 0 means no data

# Child trait more important: independence or respect
anes['V162239'].replace({1: -1, 2: 1, 3: 0, -6: 0, -7: 0, -8: 0, -9: 0}, inplace=True)

# Child trait more important: curiosity or good manners
anes['V162240'].replace({1: -1, 2: 1, 3: 0, -6: 0, -7: 0, -8: 0, -9: 0}, inplace=True)

# Child trait more important: obedience or self-reliance
anes['V162241'].replace({1: 1, 2: -1, 3: 0, -6: 0, -7: 0, -8: 0, -9: 0}, inplace=True)

# Child trait more important: considerate or well-behaved
anes['V162242'].replace({1: -1, 2: 1, 3: 0, -6: 0, -7: 0, -8: 0, -9: 0}, inplace=True)

# recode the racial variables
# All racial questions are coded on a five-point scale, 1=agree strongly, 5=disagree strongly
# We recode so that least tolerant = +2 and most tolerant = -2

# Agree/disagree: blacks shd work way up w/o special favors
anes['V162211'].replace(
    {1: 2, 2: 1, 3: 0, 4: -1, 5: -2, -6: 0, -7: 0, -8: 0, -9: 0}, inplace=True)

# Agree/disagree: past slavery make more diff for blacks
anes['V162212'].replace(
    {1: -2, 2: -1, 3: 0, 4: 1, 5: 2, -6: 0, -7: 0, -8: 0, -9: 0}, inplace=True)

# Agree/disagree: blacks have gotten less than deserve
anes['V162213'].replace(
    {1: -2, 2: -1, 3: 0, 4: 1, 5: 2, -6: 0, -7: 0, -8: 0, -9: 0}, inplace=True)

# The fourth racial-resentment item, coded in the same direction as V162211
anes['V162214'].replace(
    {1: 2, 2: 1, 3: 0, 4: -1, 5: -2, -6: 0, -7: 0, -8: 0, -9: 0}, inplace=True)

# check the results
anes.head()
###Output _____no_output_____ ###Markdown Finally, add the authority and racial columns together to form the composite indexes. ###Code # sum each group of columns.
End up with vote, authority, racial columns
anes['authority'] = anes[authoritarian_cols].sum(axis=1)
anes['racial'] = anes[racial_cols].sum(axis=1)
anes['vote'] = anes[voted_col]
anes = anes[['vote', 'authority', 'racial']]
anes.head(10)
###Output _____no_output_____ ###Markdown Data prepared at last! Let's first look at the scatter plots ###Code
anes.plot(kind='scatter', x='authority', y='vote')
###Output _____no_output_____ ###Markdown Er, right... all this says is that we've got votes for both candidates at all levels of authoritarianism. To get a sense of how many dots are at each point, we can add some jitter and make the points a bit transparent. ###Code
# add noise to the values in the array
def jitter(arr):
    # pick a standard deviation for the jitter of 2% of the data range
    stdev = .02 * (max(arr) - min(arr))
    return arr + np.random.randn(len(arr)) * stdev

# plot vote vs authoritarian variables with jitter
plt.scatter(x=jitter(anes.authority), y=jitter(anes.vote), alpha=0.05)
###Output _____no_output_____ ###Markdown Note that, generally, as you move to the right (more authoritarian) there are more Trump voters. We can do this same plot with the racial axis. ###Code
# plot vote vs racial variables with jitter, same as the authority plot above
plt.scatter(x=jitter(anes.racial), y=jitter(anes.vote), alpha=0.05)
###Output _____no_output_____ ###Markdown Similar deal. The axis is smoother because we are summing numbers from a five-point agree/disagree scale, rather than just the two-option questions of the authoritarianism subplot. Now in glorious 3D. ###Code
# 3D plot of both sets of vars
###Output _____no_output_____ ###Markdown Same problem: everything is on top of each other. Same solution. ###Code
# jittered 3D plot
###Output _____no_output_____ ###Markdown You can definitely see the change along both axes. But which factor matters more?
Let's get quantitative by fitting a linear model. Regression to the rescue! ###Code
# This is some drudgery to convert the dataframe into the format that sklearn needs:

# This does the actual regression

# call plot_regression_3d
###Output _____no_output_____ ###Markdown Well that looks cool but doesn't really clear it up for me. Let's look at the coefficients. Looks like the coefficient on `racial` is higher. But wait, we chose the numbers that we turned each response into! We could have coded `racial` on a +/-1 scale instead of a +/-2 scale, or a +/-10 scale. So... we could get any number we want just by changing how we convert the data. To fix this, we're going to standardize the values (both dependent and independent) to have mean 0 and standard deviation 1. This gives us [standardized coefficients](https://en.wikipedia.org/wiki/Standardized_coefficient). ###Code
# normalize the columns and take a look

# fit another regression
###Output _____no_output_____ ###Markdown What we have now is the same data, just scaled in each direction ###Code
# call plot_regression_3d
###Output _____no_output_____ ###Markdown Finally, we can compare the coefficients directly. It doesn't matter what range we used to code the survey answers, because we divided it out during normalization. So there we have it. For white voters in the 2016 election, the standardized regression coefficient on racial factors is quite a bit bigger than the standardized coefficient on authoritarianism. But what does this actually mean? ###Code
# what's the new intercept?
###Output _____no_output_____
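###Markdown The regression cells above were left as stubs (and `plot_regression_3d` is a course helper that isn't defined here). As a minimal, NumPy-only sketch of the key point -- that standardizing makes the coefficients comparable no matter what numbers we picked when recoding the survey answers -- here is a toy version on synthetic data standing in for the real `anes` frame; the `standardized_coefs` helper is hypothetical, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the survey data: two predictors on different
# scales (like the -4..4 authority index and the -8..8 racial index),
# plus a continuous "vote" outcome.
n = 1000
authority = rng.integers(-4, 5, size=n).astype(float)
racial = rng.integers(-8, 9, size=n).astype(float)
vote = 1.0 + 0.02 * authority + 0.04 * racial + rng.normal(0, 0.1, n)

def standardized_coefs(X, y):
    # z-score every column (mean 0, std 1), then ordinary least squares
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    coefs, *_ = np.linalg.lstsq(np.c_[np.ones(len(yz)), Xz], yz, rcond=None)
    return coefs[1:]  # drop the intercept

beta = standardized_coefs(np.c_[authority, racial], vote)

# Rescaling a predictor (as if we had coded it on a +/-1 scale instead
# of +/-2) no longer changes its standardized coefficient:
beta_rescaled = standardized_coefs(np.c_[authority, racial / 2.0], vote)
print(np.allclose(beta, beta_rescaled))  # True
```

The arbitrary recoding scale divides out in the z-score, which is exactly why the standardized coefficients in the notebook can be compared directly.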
neural_style_transfer.ipynb
###Markdown Neural style transfer

**Author:** [fchollet](https://twitter.com/fchollet)
**Date created:** 2016/01/11
**Last modified:** 2020/05/02
**Description:** Transferring the style of a reference image to a target image using gradient descent.

Introduction

Style transfer consists of generating an image
with the same "content" as a base image, but with the
"style" of a different picture (typically artistic).

This is achieved through the optimization of a loss function
that has 3 components: "style loss", "content loss",
and "total variation loss":

- The total variation loss imposes local spatial continuity between
the pixels of the combination image, giving it visual coherence.
- The style loss is where the deep learning kicks in --that one is defined
using a deep convolutional neural network. Precisely, it consists of a sum of
L2 distances between the Gram matrices of the representations of
the base image and the style reference image, extracted from
different layers of a convnet (trained on ImageNet). The general idea
is to capture color/texture information at different spatial
scales (fairly large scales --defined by the depth of the layer considered).
- The content loss is an L2 distance between the features of the base
image (extracted from a deep layer) and the features of the combination image,
keeping the generated image close enough to the original one.

**Reference:** [A Neural Algorithm of Artistic Style](http://arxiv.org/abs/1508.06576)

Setup ###Code
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications import vgg19

base_image_path = keras.utils.get_file("paris.jpg", "https://i.imgur.com/F28w3Ac.jpg")
style_reference_image_path = keras.utils.get_file(
    "starry_night.jpg", "https://i.imgur.com/9ooB60I.jpg"
)
result_prefix = "paris_generated"

# Weights of the different loss components
total_variation_weight = 1e-6
style_weight = 1e-6
content_weight = 2.5e-8

# Dimensions of the generated picture.
width, height = keras.preprocessing.image.load_img(base_image_path).size img_nrows = 400 img_ncols = int(width * img_nrows / height) ###Output _____no_output_____ ###Markdown Let's take a look at our base (content) image and our style reference image ###Code from IPython.display import Image, display display(Image(base_image_path)) display(Image(style_reference_image_path)) ###Output _____no_output_____ ###Markdown Image preprocessing / deprocessing utilities ###Code def preprocess_image(image_path): # Util function to open, resize and format pictures into appropriate tensors img = keras.preprocessing.image.load_img( image_path, target_size=(img_nrows, img_ncols) ) img = keras.preprocessing.image.img_to_array(img) img = np.expand_dims(img, axis=0) img = vgg19.preprocess_input(img) return tf.convert_to_tensor(img) def deprocess_image(x): # Util function to convert a tensor into a valid image x = x.reshape((img_nrows, img_ncols, 3)) # Remove zero-center by mean pixel x[:, :, 0] += 103.939 x[:, :, 1] += 116.779 x[:, :, 2] += 123.68 # 'BGR'->'RGB' x = x[:, :, ::-1] x = np.clip(x, 0, 255).astype("uint8") return x ###Output _____no_output_____ ###Markdown Compute the style transfer lossFirst, we need to define 4 utility functions:- `gram_matrix` (used to compute the style loss)- The `style_loss` function, which keeps the generated image close to the local texturesof the style reference image- The `content_loss` function, which keeps the high-level representation of thegenerated image close to that of the base image- The `total_variation_loss` function, a regularization loss which keeps the generatedimage locally-coherent ###Code # The gram matrix of an image tensor (feature-wise outer product) def gram_matrix(x): x = tf.transpose(x, (2, 0, 1)) features = tf.reshape(x, (tf.shape(x)[0], -1)) gram = tf.matmul(features, tf.transpose(features)) return gram # The "style loss" is designed to maintain # the style of the reference image in the generated image. 
# It is based on the gram matrices (which capture style) of # feature maps from the style reference image # and from the generated image def style_loss(style, combination): S = gram_matrix(style) C = gram_matrix(combination) channels = 3 size = img_nrows * img_ncols return tf.reduce_sum(tf.square(S - C)) / (4.0 * (channels ** 2) * (size ** 2)) # An auxiliary loss function # designed to maintain the "content" of the # base image in the generated image def content_loss(base, combination): return tf.reduce_sum(tf.square(combination - base)) # The 3rd loss function, total variation loss, # designed to keep the generated image locally coherent def total_variation_loss(x): a = tf.square( x[:, : img_nrows - 1, : img_ncols - 1, :] - x[:, 1:, : img_ncols - 1, :] ) b = tf.square( x[:, : img_nrows - 1, : img_ncols - 1, :] - x[:, : img_nrows - 1, 1:, :] ) return tf.reduce_sum(tf.pow(a + b, 1.25)) ###Output _____no_output_____ ###Markdown Next, let's create a feature extraction model that retrieves the intermediate activationsof VGG19 (as a dict, by name). ###Code # Build a VGG19 model loaded with pre-trained ImageNet weights model = vgg19.VGG19(weights="imagenet", include_top=False) # Get the symbolic outputs of each "key" layer (we gave them unique names). outputs_dict = dict([(layer.name, layer.output) for layer in model.layers]) # Set up a model that returns the activation values for every layer in # VGG19 (as a dict). feature_extractor = keras.Model(inputs=model.inputs, outputs=outputs_dict) ###Output _____no_output_____ ###Markdown Finally, here's the code that computes the style transfer loss. ###Code # List of layers to use for the style loss. style_layer_names = [ "block1_conv1", "block2_conv1", "block3_conv1", "block4_conv1", "block5_conv1", ] # The layer to use for the content loss. 
content_layer_name = "block5_conv2" def compute_loss(combination_image, base_image, style_reference_image): input_tensor = tf.concat( [base_image, style_reference_image, combination_image], axis=0 ) features = feature_extractor(input_tensor) # Initialize the loss loss = tf.zeros(shape=()) # Add content loss layer_features = features[content_layer_name] base_image_features = layer_features[0, :, :, :] combination_features = layer_features[2, :, :, :] loss = loss + content_weight * content_loss( base_image_features, combination_features ) # Add style loss for layer_name in style_layer_names: layer_features = features[layer_name] style_reference_features = layer_features[1, :, :, :] combination_features = layer_features[2, :, :, :] sl = style_loss(style_reference_features, combination_features) loss += (style_weight / len(style_layer_names)) * sl # Add total variation loss loss += total_variation_weight * total_variation_loss(combination_image) return loss ###Output _____no_output_____ ###Markdown Add a tf.function decorator to loss & gradient computationTo compile it, and thus make it fast. ###Code @tf.function def compute_loss_and_grads(combination_image, base_image, style_reference_image): with tf.GradientTape() as tape: loss = compute_loss(combination_image, base_image, style_reference_image) grads = tape.gradient(loss, combination_image) return loss, grads ###Output _____no_output_____ ###Markdown The training loopRepeatedly run vanilla gradient descent steps to minimize the loss, and save theresulting image every 100 iterations.We decay the learning rate by 0.96 every 100 steps. 
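The schedule in the next cell is `keras.optimizers.schedules.ExponentialDecay` with its default `staircase=False`, so the effective rate is `initial_lr * decay_rate ** (step / decay_steps)`. A framework-free sanity check of the rates the training loop will see (assuming those defaults):

```python
# Effective learning rate under ExponentialDecay with staircase=False
# (the TF default): lr(step) = initial_lr * decay_rate ** (step / decay_steps).
# Plain Python -- no TensorFlow needed to check the arithmetic.
initial_lr, decay_rate, decay_steps = 100.0, 0.96, 100

def lr_at(step):
    return initial_lr * decay_rate ** (step / decay_steps)

print(lr_at(0))     # 100.0 at the start
print(lr_at(100))   # 96.0 after one decay period
print(lr_at(4000))  # roughly 19.5 by the final iteration
```

By the last of the 4000 iterations the rate has fallen to about a fifth of its starting value, which is why the later steps only make fine adjustments to the image.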
###Code optimizer = keras.optimizers.SGD( keras.optimizers.schedules.ExponentialDecay( initial_learning_rate=100.0, decay_steps=100, decay_rate=0.96 ) ) base_image = preprocess_image(base_image_path) style_reference_image = preprocess_image(style_reference_image_path) combination_image = tf.Variable(preprocess_image(base_image_path)) iterations = 4000 for i in range(1, iterations + 1): loss, grads = compute_loss_and_grads( combination_image, base_image, style_reference_image ) optimizer.apply_gradients([(grads, combination_image)]) if i % 100 == 0: print("Iteration %d: loss=%.2f" % (i, loss)) img = deprocess_image(combination_image.numpy()) fname = result_prefix + "_at_iteration_%d.png" % i keras.preprocessing.image.save_img(fname, img) ###Output _____no_output_____ ###Markdown After 4000 iterations, you get the following result: ###Code display(Image(result_prefix + "_at_iteration_4000.png")) ###Output _____no_output_____ ###Markdown ###Code import torch import torch.nn as nn import torch.nn.functional as F import torch .optim as optim from PIL import Image import matplotlib.pyplot as plt import torchvision.transforms as transforms import torchvision.models as models import copy device = torch.device("cuda" if torch.cuda.is_available() else "cpu") imsize = 512 if torch.cuda.is_available() else 128 # We'll use a smaller size of the image if gpu is not available. 
loader = transforms.Compose([ transforms.Resize(imsize), transforms.ToTensor() ]) def image_loader(image_name): image = Image.open(image_name) image = loader(image).unsqueeze(0) return image.to(device, torch.float) #style_img = image_loader("https://github.com/hritikbhandari/Neural-Style-Transfer-with-PyTorch/raw/master/images/picasso.jpg") #content_img = image_loader("https://github.com/hritikbhandari/Neural-Style-Transfer-with-PyTorch/raw/master/images/dancing.jpg") style_img = image_loader("images/picasso.jpg") content_img = image_loader("images/dancing.jpg") assert style_img.size() == content_img.size() unloader = transforms.ToPILImage() # reconvert into PIL image plt.ion() def imshow(tensor, title=None): image = tensor.cpu().clone() # cloning the tensor to not do changes on it image = image.squeeze(0) # removing the fake batch dimension image = unloader(image) plt.imshow(image) if title is not None: plt.title(title) plt.pause(0.001) # pause a bit so that plots are updated plt.figure() imshow(style_img, title='Style Image') plt.figure() imshow(content_img, title='Content Image') class ContentLoss(nn.Module): def __init__(self, target,): super(ContentLoss, self).__init__() self.target = target.detach() def forward(self, input): self.loss = F.mse_loss(input, self.target) return input def gram_matrix(input): a, b, c, d = input.size() # a=batch size(=1) # b=number of feature maps # (c,d)=dimensions of a f. 
map (N=c*d) features = input.view(a * b, c * d) # resise F_XL into \hat F_XL G = torch.mm(features, features.t()) # compute the gram product return G.div(a * b * c * d) class StyleLoss(nn.Module): def __init__(self, target_feature): super(StyleLoss, self).__init__() self.target = gram_matrix(target_feature).detach() def forward(self, input): G = gram_matrix(input) self.loss = F.mse_loss(G, self.target) return input cnn = models.vgg19(pretrained=True).features.to(device).eval() cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).to(device) cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).to(device) # create a module to normalize input image so we can easily put it in a # nn.Sequential class Normalization(nn.Module): def __init__(self, mean, std): super(Normalization, self).__init__() # .view the mean and std to make them [C x 1 x 1] so that they can # directly work with image Tensor of shape [B x C x H x W]. # B is batch size. C is number of channels. H is height and W is width. 
self.mean = torch.tensor(mean).view(-1, 1, 1)
        self.std = torch.tensor(std).view(-1, 1, 1)

    def forward(self, img):
        # normalize img
        return (img - self.mean) / self.std

# desired depth layers to compute style/content losses :
content_layers_default = ['conv_4']
style_layers_default = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5']

def get_style_model_and_losses(cnn, normalization_mean, normalization_std,
                               style_img, content_img,
                               content_layers=content_layers_default,
                               style_layers=style_layers_default):
    cnn = copy.deepcopy(cnn)

    # normalization module
    normalization = Normalization(normalization_mean, normalization_std).to(device)

    # in order to have iterable access to a list of content/style losses
    content_losses = []
    style_losses = []

    # assuming that cnn is a nn.Sequential, we make a new nn.Sequential
    # to put in modules that are supposed to be activated sequentially
    model = nn.Sequential(normalization)

    i = 0  # increment it every time we see a conv
    for layer in cnn.children():
        if isinstance(layer, nn.Conv2d):
            i += 1
            name = 'conv_{}'.format(i)
        elif isinstance(layer, nn.ReLU):
            name = 'relu_{}'.format(i)
            # Replacing with out-of-place
            # ones here.
layer = nn.ReLU(inplace=False) elif isinstance(layer, nn.MaxPool2d): name = 'pool_{}'.format(i) elif isinstance(layer, nn.BatchNorm2d): name = 'bn_{}'.format(i) else: raise RuntimeError('Unrecognized layer: {}'.format(layer.__class__.__name__)) model.add_module(name, layer) if name in content_layers: # adding content loss: target = model(content_img).detach() content_loss = ContentLoss(target) model.add_module("content_loss_{}".format(i), content_loss) content_losses.append(content_loss) if name in style_layers: # adding style loss: target_feature = model(style_img).detach() style_loss = StyleLoss(target_feature) model.add_module("style_loss_{}".format(i), style_loss) style_losses.append(style_loss) # trimming off the layers after the last content and style losses for i in range(len(model) - 1, -1, -1): if isinstance(model[i], ContentLoss) or isinstance(model[i], StyleLoss): break model = model[:(i + 1)] return model, style_losses, content_losses input_img = content_img.clone() # to use white noise instead uncomment the below line: # input_img = torch.randn(content_img.data.size(), device=device) # add the original input image plt.figure() imshow(input_img, title='Input Image') def get_input_optimizer(input_img): # input parameter requires a gradient optimizer = optim.LBFGS([input_img.requires_grad_()]) return optimizer def run_style_transfer(cnn, normalization_mean, normalization_std, content_img, style_img, input_img, num_steps=300, style_weight=1000000, content_weight=1): """Run the style transfer.""" print('Building the style transfer model..') model, style_losses, content_losses = get_style_model_and_losses(cnn, normalization_mean, normalization_std, style_img, content_img) optimizer = get_input_optimizer(input_img) print('Optimizing..') run = [0] while run[0] <= num_steps: def closure(): # correcting the values of updated input image input_img.data.clamp_(0, 1) optimizer.zero_grad() model(input_img) style_score = 0 content_score = 0 for sl in style_losses: 
style_score += sl.loss for cl in content_losses: content_score += cl.loss style_score *= style_weight content_score *= content_weight loss = style_score + content_score loss.backward() run[0] += 1 if run[0] % 50 == 0: print("run {}:".format(run)) print('Style Loss : {:4f} Content Loss: {:4f}'.format( style_score.item(), content_score.item())) print() return style_score + content_score optimizer.step(closure) input_img.data.clamp_(0, 1) return input_img output = run_style_transfer(cnn, cnn_normalization_mean, cnn_normalization_std, content_img, style_img, input_img) plt.figure() imshow(output, title='Output Image') # thumbnail_number = 4 plt.ioff() plt.show() ###Output _____no_output_____ ###Markdown ###Code !rm -r ComputerVision NeuralStyleTransfer !git clone https://github.com/ldfrancis/ComputerVision.git !cp -r ComputerVision/NeuralStyleTransfer . % load https://github.com/ldfrancis/ComputerVision/blob/master/NeuralStyleTransfer/implementNTS.py from google.colab import files files.upload() CONTENT_IMAGE = "content1.jpg" STYLE_IMAGE = "style1.jpg" IMAGE_HEIGHT = 300 IMAGE_WIDTH = 400 ITERATION = 200 path_to_content_image = "/content/"+CONTENT_IMAGE path_to_style_image = "/content/"+STYLE_IMAGE import matplotlib.pyplot as plt c_image = plt.imread(path_to_content_image) s_image = plt.imread(path_to_style_image) print("Content Image of size (height, width) => {0}".format(c_image.shape[:-1])) plt.imshow(c_image) print("Style Image of size (height, width) => {0}".format(s_image.shape[:-1])) plt.imshow(s_image) from NeuralStyleTransfer import implementNTS as NST NST.setImageDim(IMAGE_WIDTH,IMAGE_HEIGHT) NST.run(ITERATION, style_image=path_to_style_image, content_image=path_to_content_image) generated_image_path = "/content/NeuralStyleTransfer/output/generated_image.jpg" image = plt.imread(generated_image_path) plt.imshow(image) files.download("NeuralStyleTransfer/output/generated_image.jpg") ###Output _____no_output_____ ###Markdown **0. 
Import Dependencies and Pretrained Model** ###Code import tensorflow_hub as hub import tensorflow as tf from matplotlib import pyplot as plt import numpy as np import cv2 model = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2') ###Output _____no_output_____ ###Markdown **1. Preprocess Image and Load** ###Code from google.colab import drive drive.mount('/content/gdrive') def load_image(img_path): img = tf.io.read_file(img_path) img = tf.image.decode_image(img, channels=3) img = tf.image.convert_image_dtype(img, tf.float32) img = img[tf.newaxis, :] return img content_image = load_image('/idrees.jpg') style_image = load_image('/monalisa.jpg') ###Output _____no_output_____ ###Markdown **2. Visualize Output** ###Code content_image.shape plt.imshow(np.squeeze(style_image)) plt.show() ###Output _____no_output_____ ###Markdown **3. Stylize Image** ###Code stylized_image = model(tf.constant(content_image), tf.constant(style_image))[0] plt.imshow(np.squeeze(stylized_image)) plt.show() cv2.imwrite('generated_img.jpg', cv2.cvtColor(np.squeeze(stylized_image)*255, cv2.COLOR_BGR2RGB)) ###Output _____no_output_____ ###Markdown > we will use these layers for style loss and content loss- style_layers = ['block1_conv1', 'block2_conv1', 'block3_conv1', 'block4_conv1', 'block5_conv1'] - content_layers = ['block5_conv2'] > Model ###Code style_content_index = [1 , 4 , 7 ,12 ,17 ,18] # indexes of the layers i want NUM_STYLE_LAYERS = 5 NUM_CONTENT_LAYERS = 1 outputs = [vgg_19.get_layer(index = index).output for index in style_content_index] outputs model = tf.keras.Model(inputs = vgg_19.input , outputs = outputs) model.trainable = False # i want to extract features only model.summary() ###Output Model: "model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, None, None, 3)] 0 conv1_0 (Conv2D) (None, None, None, 
64) 1792 conv1_1 (Conv2D) (None, None, None, 64) 36928 maxpool1 (MaxPooling2D) (None, None, None, 64) 0 conv2_0 (Conv2D) (None, None, None, 128) 73856 conv2_1 (Conv2D) (None, None, None, 128) 147584 maxpool2 (MaxPooling2D) (None, None, None, 128) 0 conv3_0 (Conv2D) (None, None, None, 256) 295168 conv3_1 (Conv2D) (None, None, None, 256) 590080 conv3_2 (Conv2D) (None, None, None, 256) 590080 conv3_3 (Conv2D) (None, None, None, 256) 590080 maxpool3 (MaxPooling2D) (None, None, None, 256) 0 conv4_0 (Conv2D) (None, None, None, 512) 1180160 conv4_1 (Conv2D) (None, None, None, 512) 2359808 conv4_2 (Conv2D) (None, None, None, 512) 2359808 conv4_3 (Conv2D) (None, None, None, 512) 2359808 maxpool4 (MaxPooling2D) (None, None, None, 512) 0 conv5_0 (Conv2D) (None, None, None, 512) 2359808 conv5_1 (Conv2D) (None, None, None, 512) 2359808 ================================================================= Total params: 15,304,768 Trainable params: 0 Non-trainable params: 15,304,768 _________________________________________________________________ ###Markdown > losses ###Code def gram_matrix(input_tensor): """ Calculates the gram matrix and divides by the number of locations Args: input_tensor: tensor of shape (batch, height, width, channels) Returns: scaled_gram: gram matrix divided by the number of locations """ # calculate the gram matrix of the input tensor gram = tf.linalg.einsum('bijc,bijd->bcd', input_tensor, input_tensor) # get the height and width of the input tensor input_shape = tf.shape(input_tensor) height = input_shape[1] width = input_shape[2] # get the number of locations (height times width), and cast it as a tf.float32 num_locations = tf.cast(height * width, tf.float32) # scale the gram matrix by dividing by the number of locations scaled_gram = gram / num_locations return scaled_gram def get_style_loss(features, targets): """Expects two images of dimension h, w, c Args: features: tensor with shape: (height, width, channels) targets: tensor with shape: (height, 
width, channels) Returns: style loss (scalar) """ # get the average of the squared errors style_loss = tf.reduce_mean(tf.square(features - targets)) return style_loss def get_content_loss(features, targets): """Expects two images of dimension h, w, c Args: features: tensor with shape: (height, width, channels) targets: tensor with shape: (height, width, channels) Returns: content loss (scalar) """ # get the sum of the squared error multiplied by a scaling factor content_loss = 0.5 * tf.reduce_sum(tf.square(features - targets)) return content_loss ###Output _____no_output_____ ###Markdown > Extract features ###Code def get_style_image_features(image): """ Get the style image features Args: image: an input image Returns: gram_style_features: the style features as gram matrices """ # preprocess the image using the given preprocessing function preprocessed_style_image = preprocess_image(image) # get the outputs from the custom vgg model that you created using vgg_model() outputs = model(preprocessed_style_image) # Get just the style feature layers (exclude the content layer) style_outputs = outputs[:NUM_STYLE_LAYERS] # for each style layer, calculate the gram matrix for that layer and store these results in a list gram_style_features = [gram_matrix(style_layer) for style_layer in style_outputs] # we should get gram becuase the loss of the cost is the difference between the two gram matrix of style and the gen # unlike the content which the cost will be between the activation return gram_style_features def get_content_image_features(image): """ Get the content image features Args: image: an input image Returns: content_outputs: the content features of the image """ # preprocess the image preprocessed_content_image = preprocess_image(image) # get the outputs from the vgg model outputs = model(preprocessed_content_image) # get the content layers of the outputs content_outputs = outputs[NUM_STYLE_LAYERS:] # return the content layer outputs of the content image return 
content_outputs def get_style_content_loss(style_targets, style_outputs, content_targets, content_outputs, style_weight, content_weight): """ Combine the style and content loss Args: style_targets: style features of the style image style_outputs: style features of the generated image content_targets: content features of the content image content_outputs: content features of the generated image style_weight: weight given to the style loss content_weight: weight given to the content loss Returns: total_loss: the combined style and content loss """ # sum of the style losses style_loss = tf.add_n([ get_style_loss(style_output, style_target) for style_output, style_target in zip(style_outputs, style_targets)]) # Sum up the content losses content_loss = tf.add_n([get_content_loss(content_output, content_target) for content_output, content_target in zip(content_outputs, content_targets)]) # scale the style loss by multiplying by the style weight and dividing by the number of style layers style_loss = style_loss * style_weight / NUM_STYLE_LAYERS # scale the content loss by multiplying by the content weight and dividing by the number of content layers content_loss = content_loss * content_weight / NUM_CONTENT_LAYERS # sum up the style and content losses total_loss = style_loss + content_loss return total_loss ###Output _____no_output_____ ###Markdown > Calculate Gradients ###Code def calculate_gradients(image, style_targets, content_targets, style_weight, content_weight, var_weight): """ Calculate the gradients of the loss with respect to the generated image Args: image: generated image style_targets: style features of the style image content_targets: content features of the content image style_weight: weight given to the style loss content_weight: weight given to the content loss var_weight: weight given to the total variation loss Returns: gradients: gradients of the loss with respect to the input image """ with tf.GradientTape() as tape: # get the style image features 
style_features = get_style_image_features(image) # of generated image # get the content image features content_features = get_content_image_features(image) # get the style and content loss loss = get_style_content_loss(style_targets, style_features, content_targets, content_features, style_weight, content_weight) # calculate gradients of loss with respect to the image gradients = tape.gradient(loss, image) return gradients def update_image_with_style(image, style_targets, content_targets, style_weight, var_weight, content_weight, optimizer): """ Args: image: generated image style_targets: style features of the style image content_targets: content features of the content image style_weight: weight given to the style loss content_weight: weight given to the content loss var_weight: weight given to the total variation loss optimizer: optimizer for updating the input image """ # calculate gradients using the function that you just defined. gradients = calculate_gradients(image, style_targets, content_targets, style_weight, content_weight, var_weight) # apply the gradients to the given image optimizer.apply_gradients([(gradients, image)]) # clip the image using the utility clip_image_values() function image.assign(clip_image_values(image, min_value=0.0, max_value=255.0)) def fit_style_transfer(style_image, content_image, style_weight=1e-2, content_weight=1e-4, var_weight=0, optimizer='adam', epochs=1, steps_per_epoch=1): """ Performs neural style transfer. 
Args: style_image: image to get style features from content_image: image to stylize style_targets: style features of the style image content_targets: content features of the content image style_weight: weight given to the style loss content_weight: weight given to the content loss var_weight: weight given to the total variation loss optimizer: optimizer for updating the input image epochs: number of epochs steps_per_epoch = steps per epoch Returns: generated_image: generated image at final epoch images: collection of generated images per epoch """ images = [] step = 0 # get the style image features style_targets = get_style_image_features(style_image) # get the content image features content_targets = get_content_image_features(content_image) # initialize the generated image for updates generated_image = tf.cast(content_image, dtype=tf.float32) generated_image = tf.Variable(generated_image) # collect the image updates starting from the content image images.append(content_image) # incrementally update the content image with the style features for n in range(epochs): for m in range(steps_per_epoch): step += 1 # Update the image with the style using the function that you defined update_image_with_style(generated_image, style_targets, content_targets, style_weight, var_weight, content_weight, optimizer) print(".", end='') if (m + 1) % 10 == 0: images.append(generated_image) # display the current stylized image clear_output(wait=True) display_image = tensor_to_image(generated_image) display_fn(display_image) # append to the image collection for visualization later images.append(generated_image) print("Train step: {}".format(step)) # convert to uint8 (expected dtype for images with pixels in the range [0,255]) generated_image = tf.cast(generated_image, dtype=tf.uint8) return generated_image, images content_image, style_image = load_images(content_path, style_path) # load images with and add additional dimension # define style and content weight style_weight = 2e-2 
content_weight = 1e-2 # define optimizer. learning rate decreases per epoch. adam = tf.optimizers.Adam( tf.keras.optimizers.schedules.ExponentialDecay( initial_learning_rate=20.0, decay_steps=100, decay_rate=0.50 ) ) # start the neural style transfer stylized_image, display_images = fit_style_transfer(style_image=style_image, content_image=content_image, style_weight=style_weight, content_weight=content_weight, var_weight=0, optimizer=adam, epochs=10, steps_per_epoch=100) ###Output _____no_output_____ ###Markdown ***Copyright 2019 Pätzold, Menzel, Zacharias.*** ###Code #Licensed under the Apache License, Version 2.0 (the "License"); #you may not use this file except in compliance with the License. #You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # #Unless required by applicable law or agreed to in writing, software #distributed under the License is distributed on an "AS IS" BASIS, #WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #See the License for the specific language governing permissions and #limitations under the License. ###Output _____no_output_____ ###Markdown **Neural Algorithm of Artistic Style**Here we implement the network which enables us to combine two separate images, namely one image providing the content and the other one adding the style to this content image. For this purpose, we use a convolutional net - the VGG19 - and build a new feature space on top to reconstruct the style of the image.The code partially uses code and structure from the TensorFlow tutorial: Neural Style Transfer (2018) and the ANN is replicated after the study from Gatys et al. (2015). ***Basic idea:***Two images are input to the neural network: A content-image and a style-image. We wish to generate the mixed-image which has the contours of the content-image and the colours and texture of the style-image. 
We do this by creating several loss-functions that can be optimized.The loss-function for the content-image tries to minimize the difference between the features that are activated for the content-image and for the mixed-image, at one or more layers in the network. This causes the contours of the mixed-image to resemble those of the content-image.The loss-function for the style-image is slightly more complicated, because it instead tries to minimize the difference between the so-called Gram-matrices for the style-image and the mixed-image. This is done at one or more layers in the network. The Gram-matrix measures which features are activated simultaneously in a given layer. Changing the mixed-image so that it mimics the activation patterns of the style-image causes the colour and texture to be transferred.We use TensorFlow to automatically derive the gradient for these loss-functions. The gradient is then used to update the mixed-image. This procedure is repeated a number of times until we are satisfied with the resulting image. *Import all necessary packages.* ###Code %tensorflow_version 2.x import tensorflow as tf %matplotlib inline import matplotlib.pyplot as plt import PIL.Image import numpy as np ###Output TensorFlow 2.x selected. ###Markdown *Image manipulation*In order to load new input images, save mixed (output) images with transferred style and to plot the results during the transformation process, we define some important convenience functions. *Loading an image.* ###Code def load_image(filename, filepath): """ filename: The name of the image (behind backslash, before file type) filepath: The URL path of the image. """ # Read in the image. path = tf.keras.utils.get_file(filename, filepath) img = tf.io.read_file(path) # Preprocess image: decode and convert to float. img = tf.image.decode_image(img, channels=3) img = tf.image.convert_image_dtype(img, tf.float32) # Scale the image if any dimension is larger than 512 pixels. 
max_dim = 512 shape = tf.cast(tf.shape(img)[:-1], tf.float32) long_dim = max(shape) scale = max_dim / long_dim new_shape = tf.cast(shape * scale, tf.int32) img = tf.image.resize(img, new_shape) img = img[tf.newaxis, :] return img ###Output _____no_output_____ ###Markdown *Transform tensor into image.* ###Code def tensor_to_image(tensor): """ tensor: Tensor input to be converted to an image. """ # To get the image we turn the tensor into an uint8 array in [0,255]. tensor = tensor*255 tensor = np.array(tensor, dtype=np.uint8) # The image must have no more than 3 dimensions. if np.ndim(tensor)>3: assert tensor.shape[0] == 1 tensor = tensor[0] return PIL.Image.fromarray(tensor) ###Output _____no_output_____ ###Markdown *Plot content and style image.* ###Code def plot_images(content_image, style_image, mixed_image): """ content_image: The image that is providing the contours for the mixed image. style_image: The image that is providing colour and structure for the mixed image. mixed_image: The resulting mixed image. """ # Create figure with sub-plots. fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(15, 15)) images = [content_image, style_image, mixed_image] labels = ["content", "style", "mixed"] # Label and plot every image. for i in range(3): img = images[i] lbl = labels[i] # If shape of the image is too big it cannot be plotted. if len(img.shape) > 3: img = tf.squeeze(img, axis=0) img = tf.image.resize(img, (512,512)) ax[i].imshow(img, interpolation='sinc') ax[i].set_title(lbl) ax[i].axis("off") plt.show() ###Output _____no_output_____ ###Markdown *Save an image.* ###Code def save_image(image, filename='mixed_image.jpeg'): """ image: The image to be saved. filename: The name you want to save your image with. """ # Add .jpeg (in default filename) and save in colab files image.save(filename) ###Output _____no_output_____ ###Markdown *Load the data* ###Code # Load a content and a style image. 
content_image = load_image('YellowLabradorLooking_new','https://upload.wikimedia.org/wikipedia/commons/2/26/YellowLabradorLooking_new.jpg') style_image = load_image('04-Pablo-Picasso-Head-of-Woman-Fernande-1909-56a03c7f5f9b58eba4af7614', 'https://www.thoughtco.com/thmb/DUsvOYMGSDG1LjjME0TfV3XgJO4=/2178x1800/filters:no_upscale():max_bytes(150000):strip_icc()/04-Pablo-Picasso-Head-of-Woman-Fernande-1909-56a03c7f5f9b58eba4af7614.jpg') ###Output _____no_output_____ ###Markdown *Build the model*Here we implement the main model which is used for the optimization process of the mixed image. ###Code class StyleContentModel(tf.keras.models.Model): def __init__(self, style_layers, content_layers): super(StyleContentModel, self).__init__() self.vgg = vgg_layers(style_layers + content_layers) self.style_layers = style_layers self.content_layers = content_layers self.num_style_layers = len(style_layers) # Set the model as non-trainable. self.vgg.trainable = False # Compute forward step. def call(self, inputs): """ Expects float input in [0,1]. """ inputs = inputs*255.0 # Get the outputs of a forward step and save in output lists. preprocessed_input = tf.keras.applications.vgg19.preprocess_input(inputs) outputs = self.vgg(preprocessed_input) style_outputs, content_outputs = (outputs[:self.num_style_layers], outputs[self.num_style_layers:]) # For the style outputs we transform each element into a gram matrix. style_outputs = [gram_matrix(style_output) for style_output in style_outputs] # Create feed dictionaries for content and style. content_dict = {content_name:value for content_name, value in zip(self.content_layers, content_outputs)} style_dict = {style_name:value for style_name, value in zip(self.style_layers, style_outputs)} return {'content':content_dict, 'style':style_dict} ###Output _____no_output_____ ###Markdown The style of an image can be described by the means and correlations across the different feature maps. 
We calculate a Gram matrix that includes this information by taking the outer product of the feature vector with itself at each location, and averaging that outer product over all locations. *Transform tensors for the style layers into gram matrices.* ###Code def gram_matrix(tensor): """ tensor: Tensor input to be transformed into a gram matrix. """ # Get the tensor's shape. shape = tf.shape(tensor) # Get the number of feature channels for the input tensor, # which is assumed to be from a convolutional layer with 4-dim. num_channels = int(shape[3]) # Reshape the tensor so it is a 2-dim matrix. This essentially # flattens the contents of each feature-channel. matrix = tf.reshape(tensor, shape=[-1, num_channels]) # Calculate the Gram-matrix as the matrix-product of # the 2-dim matrix with itself. This calculates the # dot-products of all combinations of the feature-channels. gram = tf.matmul(tf.transpose(matrix), matrix) return gram ###Output _____no_output_____ ###Markdown *Calculate intermediate layer outputs.* ###Code def vgg_layers(layer_names): """ Creates a vgg model that returns a list of intermediate output values. layer_names: The names of the layer for which we compute the outputs. """ # Load our model. Load pretrained VGG, trained on imagenet data. vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet') vgg.trainable = False # Compute the output for each layer. outputs = [vgg.get_layer(name).output for name in layer_names] # Define a model using functional API. model = tf.keras.Model([vgg.input], outputs) return model ###Output _____no_output_____ ###Markdown In order to get the contour features of the content image and the colour and texture of the style image, we need to calculate different losses for each layer. This function calculates the mean squared error between the feature activations of content and mixed image for the chosen layers. 
If the loss between these two images would be zero, the content image would have perfectly transferred the contour features onto the resulting mixed image. *Content and style loss.* ###Code # Define content and style layers from which we pull feature maps. content_layers = ['block5_conv2'] style_layers = ['block1_conv1', 'block2_conv1', 'block3_conv1', 'block4_conv1', 'block5_conv1'] num_content_layers = len(content_layers) num_style_layers = len(style_layers) def style_content_loss(outputs, style_weight=1e-2, content_weight=1e4): """ Calculates the loss between targets and layer outputs. outputs: The list with outputs for each layer. style_weight: The weight of the style image. content_weight: The weight of the content image. """ # Get style and content outputs. style_outputs = outputs['style'] content_outputs = outputs['content'] # Calculate weighted style loss by averaging over all style losses. style_loss = tf.add_n([tf.reduce_mean((style_outputs[name]-style_targets[name])**2) for name in style_outputs.keys()]) style_loss *= style_weight / num_style_layers # Calculate weighted content loss by averaging over all content losses. content_loss = tf.add_n([tf.reduce_mean((content_outputs[name]-content_targets[name])**2) for name in content_outputs.keys()]) content_loss *= content_weight / num_content_layers # Calculate total loss. loss = style_loss + content_loss return loss ###Output _____no_output_____ ###Markdown *Optimization* *Initialization and instantiation.* ###Code # Initialize Adam optimizer. opt = tf.keras.optimizers.Adam(learning_rate=0.02, beta_1=0.99, epsilon=1e-1) # Instantiate the model/extractor. extractor = StyleContentModel(style_layers, content_layers) # Targets. style_targets = extractor(style_image)['style'] content_targets = extractor(content_image)['content'] # Input of same shape as content_image. 
mixed_image = tf.Variable(content_image) ###Output _____no_output_____ ###Markdown *Update function.* ###Code @tf.function() def train_step(image, total_variation_weight=30): """ Computes one training step. image: The mixed image to be optimized. total_variation_weight: Weight of the total variation loss to reduce high frequency artifacts. """ # Start a gradient tape for gradient computation. with tf.GradientTape() as tape: # Get outputs and losses. outputs = extractor(image) loss = style_content_loss(outputs) # Total Variation denoising: # Average of differences in images when they are shifted by a pixel on x- and y-axis. loss += total_variation_weight*tf.image.total_variation(image) # Apply gradients and optimize the image. grad = tape.gradient(loss, image) opt.apply_gradients([(grad, image)]) # Make sure the image values stay in [0,1]. image.assign(tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=1.0)) ###Output _____no_output_____ ###Markdown *Main function to transfer style of an image.* ###Code def transfer_style(image=mixed_image, steps=500): """ The main function to transfer the style onto the content image. image: The resulting mixed image which gets optimized. steps: The number of training steps. At least 500 is recommended. """ # Initialize step counter. i = 0 # Perform 'steps' training steps and plot intermediate results. while i < steps: train_step(image) i += 1 if i%10 == 0 or i==1: if i%50 == 0 or i<=50: print() print() print("Iterations: ", i) plot_images(content_image, style_image, image) print() print() print("Resulting mixed image after ", i, " iteration(s):") # Save the resulting mixed image to colab files. save_image(tensor_to_image(image)) # Plot the resulting mixed image.
plt.figure(figsize=(7,7)) if len(image.shape) > 3: image = tf.squeeze(image, axis=0) plt.imshow(image) plt.axis("off") plt.title("mixed") ###Output _____no_output_____ ###Markdown *Example* ###Code # Hint: Run on GPU (Runtime > Change Runtime Type > Hardware Accelerator > GPU) for faster style transfer! transfer_style(steps=1000) # You can find a downloaded version of the # resulting mixed image in your colab files folder. ###Output _____no_output_____ ###Markdown Deep Learning & Art: Neural Style TransferWelcome to the second assignment of this week. In this assignment, you will learn about Neural Style Transfer. This algorithm was created by Gatys et al. (2015) (https://arxiv.org/abs/1508.06576). **In this assignment, you will:**- Implement the neural style transfer algorithm - Generate novel artistic images using your algorithm Most of the algorithms you've studied optimize a cost function to get a set of parameter values. In Neural Style Transfer, you'll optimize a cost function to get pixel values! ###Code import os import sys import scipy.io import scipy.misc import matplotlib.pyplot as plt from matplotlib.pyplot import imshow from PIL import Image from nst_utils import * import numpy as np import tensorflow as tf %matplotlib inline ###Output _____no_output_____ ###Markdown 1 - Problem StatementNeural Style Transfer (NST) is one of the most fun techniques in deep learning. As seen below, it merges two images, namely, a "content" image (C) and a "style" image (S), to create a "generated" image (G). The generated image G combines the "content" of the image C with the "style" of image S. In this example, you are going to generate an image of the Louvre museum in Paris (content image C), mixed with a painting by Claude Monet, a leader of the impressionist movement (style image S).Let's see how you can do this. 2 - Transfer LearningNeural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. 
The idea of using a network trained on a different task and applying it to a new task is called transfer learning. Following the original NST paper (https://arxiv.org/abs/1508.06576), we will use the VGG network. Specifically, we'll use VGG-19, a 19-layer version of the VGG network. This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low level features (at the earlier layers) and high level features (at the deeper layers). Run the following code to load parameters from the VGG model. This may take a few seconds. ###Code model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat") print(model) ###Output {'input': <tf.Variable 'Variable:0' shape=(1, 300, 400, 3) dtype=float32_ref>, 'conv1_1': <tf.Tensor 'Relu:0' shape=(1, 300, 400, 64) dtype=float32>, 'conv1_2': <tf.Tensor 'Relu_1:0' shape=(1, 300, 400, 64) dtype=float32>, 'avgpool1': <tf.Tensor 'AvgPool:0' shape=(1, 150, 200, 64) dtype=float32>, 'conv2_1': <tf.Tensor 'Relu_2:0' shape=(1, 150, 200, 128) dtype=float32>, 'conv2_2': <tf.Tensor 'Relu_3:0' shape=(1, 150, 200, 128) dtype=float32>, 'avgpool2': <tf.Tensor 'AvgPool_1:0' shape=(1, 75, 100, 128) dtype=float32>, 'conv3_1': <tf.Tensor 'Relu_4:0' shape=(1, 75, 100, 256) dtype=float32>, 'conv3_2': <tf.Tensor 'Relu_5:0' shape=(1, 75, 100, 256) dtype=float32>, 'conv3_3': <tf.Tensor 'Relu_6:0' shape=(1, 75, 100, 256) dtype=float32>, 'conv3_4': <tf.Tensor 'Relu_7:0' shape=(1, 75, 100, 256) dtype=float32>, 'avgpool3': <tf.Tensor 'AvgPool_2:0' shape=(1, 38, 50, 256) dtype=float32>, 'conv4_1': <tf.Tensor 'Relu_8:0' shape=(1, 38, 50, 512) dtype=float32>, 'conv4_2': <tf.Tensor 'Relu_9:0' shape=(1, 38, 50, 512) dtype=float32>, 'conv4_3': <tf.Tensor 'Relu_10:0' shape=(1, 38, 50, 512) dtype=float32>, 'conv4_4': <tf.Tensor 'Relu_11:0' shape=(1, 38, 50, 512) dtype=float32>, 'avgpool4': <tf.Tensor 'AvgPool_3:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_1': <tf.Tensor 'Relu_12:0' shape=(1, 19, 25, 
512) dtype=float32>, 'conv5_2': <tf.Tensor 'Relu_13:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_3': <tf.Tensor 'Relu_14:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_4': <tf.Tensor 'Relu_15:0' shape=(1, 19, 25, 512) dtype=float32>, 'avgpool5': <tf.Tensor 'AvgPool_4:0' shape=(1, 10, 13, 512) dtype=float32>} ###Markdown The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable's value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the [tf.assign](https://www.tensorflow.org/api_docs/python/tf/assign) function. In particular, you will use the assign function like this: ```pythonmodel["input"].assign(image)```This assigns the image as an input to the model. After this, if you want to access the activations of a particular layer, say layer `4_2` when the network is run on this image, you would run a TensorFlow session on the correct tensor `conv4_2`, as follows: ```pythonsess.run(model["conv4_2"])``` 3 - Neural Style Transfer We will build the NST algorithm in three steps:- Build the content cost function $J_{content}(C,G)$- Build the style cost function $J_{style}(S,G)$- Put it together to get $J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$. 3.1 - Computing the content costIn our running example, the content image C will be the picture of the Louvre Museum in Paris. Run the code below to see a picture of the Louvre. 
###Code content_image = scipy.misc.imread("images/louvre.jpg") imshow(content_image) ###Output _____no_output_____ ###Markdown The content image (C) shows the Louvre museum's pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds. **3.1.1 - How do you ensure the generated image G matches the content of the image C?** As we saw in lecture, the earlier (shallower) layers of a ConvNet tend to detect lower-level features such as edges and simple textures, and the later (deeper) layers tend to detect higher-level features such as more complex textures as well as object classes. We would like the "generated" image G to have similar content as the input image C. Suppose you have chosen some layer's activations to represent the content of an image. In practice, you'll get the most visually pleasing results if you choose a layer in the middle of the network--neither too shallow nor too deep. (After you have finished this exercise, feel free to come back and experiment with using different layers, to see how the results vary.) So, suppose you have picked one particular hidden layer to use. Now, set the image C as the input to the pretrained VGG network, and run forward propagation. Let $a^{(C)}$ be the hidden layer activations in the layer you had chosen. (In lecture, we had written this as $a^{[l](C)}$, but here we'll drop the superscript $[l]$ to simplify the notation.) This will be a $n_H \times n_W \times n_C$ tensor. Repeat this process with the image G: Set G as the input, and run forward propagation. Let $a^{(G)}$ be the corresponding hidden layer activation. We will define the content cost function as:$$J_{content}(C,G) = \frac{1}{4 \times n_H \times n_W \times n_C}\sum _{ \text{all entries}} (a^{(C)} - a^{(G)})^2\tag{1} $$Here, $n_H, n_W$ and $n_C$ are the height, width and number of channels of the hidden layer you have chosen, and appear in a normalization term in the cost.
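As a quick numerical sanity check of equation (1), here is a NumPy-only sketch with random activations; the shapes and values are made up for illustration and are separate from the graded TensorFlow exercise below:

```python
import numpy as np

rng = np.random.default_rng(0)
n_H, n_W, n_C = 4, 4, 3

# Hypothetical hidden-layer activations for C and G (batch dimension dropped)
a_C = rng.normal(size=(n_H, n_W, n_C))
a_G = rng.normal(size=(n_H, n_W, n_C))

# Equation (1): sum of squared differences with the 1/(4 * n_H * n_W * n_C) normalization
J_content = np.sum((a_C - a_G) ** 2) / (4 * n_H * n_W * n_C)

print(J_content > 0)  # True for two different random activation volumes

# Identical activations give zero content cost, as expected
print(np.sum((a_C - a_C) ** 2) / (4 * n_H * n_W * n_C))  # 0.0
```

The cost is zero exactly when the two activation volumes agree, which is the behaviour the content term is meant to reward.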
For clarity, note that $a^{(C)}$ and $a^{(G)}$ are the volumes corresponding to a hidden layer's activations. In order to compute the cost $J_{content}(C,G)$, it might also be convenient to unroll these 3D volumes into a 2D matrix, as shown below. (Technically this unrolling step isn't needed to compute $J_{content}$, but it will be good practice for when you do need to carry out a similar operation later for computing the style cost $J_{style}$.)**Exercise:** Compute the "content cost" using TensorFlow. **Instructions**: The 3 steps to implement this function are:1. Retrieve dimensions from a_G: - To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`2. Unroll a_C and a_G as explained in the picture above - If you are stuck, take a look at [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape).3. Compute the content cost: - If you are stuck, take a look at [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract). ###Code # GRADED FUNCTION: compute_content_cost def compute_content_cost(a_C, a_G): """ Computes the content cost Arguments: a_C -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image C a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image G Returns: J_content -- scalar that you compute using equation 1 above.
""" ### START CODE HERE ### # Retrieve dimensions from a_G (≈1 line) m, n_H, n_W, n_C = a_G.get_shape().as_list() # Reshape a_C and a_G (≈2 lines) a_C_unrolled = tf.reshape(a_C, [ n_C, n_W*n_H]) a_G_unrolled = tf.reshape(a_G, [ n_C, n_W*n_H]) # compute the cost with tensorflow (≈1 line) J_content = tf.reduce_sum(tf.squared_difference(a_C_unrolled, a_G_unrolled))/(4*n_W*n_C*n_H) ### END CODE HERE ### return J_content tf.reset_default_graph() with tf.Session() as test: tf.set_random_seed(1) a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4) a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4) J_content = compute_content_cost(a_C, a_G) print("J_content = " + str(J_content.eval())) ###Output J_content = 6.76559 ###Markdown **Expected Output**: **J_content** 6.76559 **What you should remember**:- The content cost takes a hidden layer activation of the neural network, and measures how different $a^{(C)}$ and $a^{(G)}$ are. - When we minimize the content cost later, this will help make sure $G$ has similar content as $C$. 3.2 - Computing the style costFor our running example, we will use the following style image: ###Code style_image = scipy.misc.imread("images/monet_800600.jpg") imshow(style_image) ###Output _____no_output_____ ###Markdown This painting was painted in the style of *[impressionism](https://en.wikipedia.org/wiki/Impressionism)*.Lets see how you can now define a "style" const function $J_{style}(S,G)$. 3.2.1 - Style matrixThe style matrix is also called a "Gram matrix." In linear algebra, the Gram matrix G of a set of vectors $(v_{1},\dots ,v_{n})$ is the matrix of dot products, whose entries are ${\displaystyle G_{ij} = v_{i}^T v_{j} = np.dot(v_{i}, v_{j}) }$. In other words, $G_{ij}$ compares how similar $v_i$ is to $v_j$: If they are highly similar, you would expect them to have a large dot product, and thus for $G_{ij}$ to be large. Note that there is an unfortunate collision in the variable names used here. 
We are following common terminology used in the literature, but $G$ is used to denote the Style matrix (or Gram matrix) as well as to denote the generated image $G$. We will try to make sure which $G$ we are referring to is always clear from the context. In NST, you can compute the Style matrix by multiplying the "unrolled" filter matrix with its transpose:The result is a matrix of dimension $(n_C,n_C)$ where $n_C$ is the number of filters. The value $G_{ij}$ measures how similar the activations of filter $i$ are to the activations of filter $j$. One important property of the Gram matrix is that the diagonal elements such as $G_{ii}$ also measure how active filter $i$ is. For example, suppose filter $i$ is detecting vertical textures in the image. Then $G_{ii}$ measures how common vertical textures are in the image as a whole: If $G_{ii}$ is large, this means that the image has a lot of vertical texture. By capturing the prevalence of different types of features ($G_{ii}$), as well as how much different features occur together ($G_{ij}$), the Style matrix $G$ measures the style of an image. **Exercise**: Using TensorFlow, implement a function that computes the Gram matrix of a matrix A. The formula is: The gram matrix of A is $G_A = AA^T$. If you are stuck, take a look at [Hint 1](https://www.tensorflow.org/api_docs/python/tf/matmul) and [Hint 2](https://www.tensorflow.org/api_docs/python/tf/transpose).
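Before the TensorFlow version, the two properties above — $G_{ij}$ as a similarity score between filters and $G_{ii}$ as a filter's own activation strength — can be illustrated with a small NumPy sketch (the matrix `A` is made up for illustration, with 3 filters and 4 spatial positions):

```python
import numpy as np

# Hypothetical unrolled activations: shape (n_C, n_H*n_W) = (3, 4)
A = np.array([[1., 0., 2., 1.],
              [0., 1., 1., 0.],
              [2., 1., 0., 1.]])

G = A @ A.T  # Gram matrix G_A = A A^T, shape (n_C, n_C)

print(G.shape)                     # (3, 3)
print(np.allclose(G, G.T))         # True: the Gram matrix is symmetric
# Each diagonal entry G_ii equals the squared activation norm of filter i
print(np.allclose(np.diag(G), (A ** 2).sum(axis=1)))  # True
```

Matching the Gram matrices of S and G therefore matches both which features are active and which features co-occur.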
###Code # GRADED FUNCTION: gram_matrix def gram_matrix(A): """ Argument: A -- matrix of shape (n_C, n_H*n_W) Returns: GA -- Gram matrix of A, of shape (n_C, n_C) """ ### START CODE HERE ### (≈1 line) GA = tf.matmul(A, tf.transpose(A)) ### END CODE HERE ### return GA tf.reset_default_graph() with tf.Session() as test: tf.set_random_seed(1) A = tf.random_normal([3, 2*1], mean=1, stddev=4) GA = gram_matrix(A) print("GA = " + str(GA.eval())) ###Output GA = [[ 6.42230511 -4.42912197 -2.09668207] [ -4.42912197 19.46583748 19.56387138] [ -2.09668207 19.56387138 20.6864624 ]] ###Markdown **Expected Output**: **GA** [[ 6.42230511 -4.42912197 -2.09668207] [ -4.42912197 19.46583748 19.56387138] [ -2.09668207 19.56387138 20.6864624 ]] 3.2.2 - Style cost After generating the Style matrix (Gram matrix), your goal will be to minimize the distance between the Gram matrix of the "style" image S and that of the "generated" image G. For now, we are using only a single hidden layer $a^{[l]}$, and the corresponding style cost for this layer is defined as: $$J_{style}^{[l]}(S,G) = \frac{1}{4 \times {n_C}^2 \times (n_H \times n_W)^2} \sum _{i=1}^{n_C}\sum_{j=1}^{n_C}(G^{(S)}_{ij} - G^{(G)}_{ij})^2\tag{2} $$where $G^{(S)}$ and $G^{(G)}$ are respectively the Gram matrices of the "style" image and the "generated" image, computed using the hidden layer activations for a particular hidden layer in the network. **Exercise**: Compute the style cost for a single layer. **Instructions**: The 3 steps to implement this function are:1. Retrieve dimensions from the hidden layer activations a_G: - To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`2. Unroll the hidden layer activations a_S and a_G into 2D matrices, as explained in the picture above. - You may find [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape) useful.3. Compute the Style matrix of the images S and G. 
(Use the function you had previously written.) 4. Compute the Style cost: - You may find [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract) useful. ###Code # GRADED FUNCTION: compute_layer_style_cost def compute_layer_style_cost(a_S, a_G): """ Arguments: a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G Returns: J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2) """ ### START CODE HERE ### # Retrieve dimensions from a_G (≈1 line) m, n_H, n_W, n_C = a_G.get_shape().as_list() # Reshape the images to have them of shape (n_C, n_H*n_W) (≈2 lines) a_S = tf.transpose(tf.reshape(a_S, [n_H*n_W, n_C])) a_G = tf.transpose(tf.reshape(a_G, [n_H*n_W, n_C])) # Computing gram_matrices for both images S and G (≈2 lines) GS = tf.matmul(a_S, tf.transpose(a_S)) GG = tf.matmul(a_G, tf.transpose(a_G)) # Computing the loss (≈1 line) J_style_layer = tf.reduce_sum(tf.squared_difference(GS, GG))/(4*n_C*n_C*n_H*n_H*n_W*n_W) ### END CODE HERE ### return J_style_layer tf.reset_default_graph() with tf.Session() as test: tf.set_random_seed(1) a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4) a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4) J_style_layer = compute_layer_style_cost(a_S, a_G) print("J_style_layer = " + str(J_style_layer.eval())) ###Output J_style_layer = 9.19028 ###Markdown **Expected Output**: **J_style_layer** 9.19028 3.2.3 Style WeightsSo far you have captured the style from only one layer. We'll get better results if we "merge" style costs from several different layers. After completing this exercise, feel free to come back and experiment with different weights to see how it changes the generated image $G$. 
But for now, this is a pretty reasonable default: ###Code STYLE_LAYERS = [ ('conv1_1', 0.2), ('conv2_1', 0.2), ('conv3_1', 0.2), ('conv4_1', 0.2), ('conv5_1', 0.2)] ###Output _____no_output_____ ###Markdown You can combine the style costs for different layers as follows:$$J_{style}(S,G) = \sum_{l} \lambda^{[l]} J^{[l]}_{style}(S,G)$$where the values for $\lambda^{[l]}$ are given in `STYLE_LAYERS`. We've implemented a compute_style_cost(...) function. It simply calls your `compute_layer_style_cost(...)` several times, and weights their results using the values in `STYLE_LAYERS`. Read over it to make sure you understand what it's doing. <!-- 2. Loop over (layer_name, coeff) from STYLE_LAYERS: a. Select the output tensor of the current layer. As an example, to call the tensor from the "conv1_1" layer you would do: out = model["conv1_1"] b. Get the style of the style image from the current layer by running the session on the tensor "out" c. Get a tensor representing the style of the generated image from the current layer. It is just "out". d. Now that you have both styles. Use the function you've implemented above to compute the style_cost for the current layer e. Add (style_cost x coeff) of the current layer to overall style cost (J_style)3. 
Return J_style, which should now be the sum of the (style_cost x coeff) for each layer.!--> ###Code def compute_style_cost(model, STYLE_LAYERS): """ Computes the overall style cost from several chosen layers Arguments: model -- our tensorflow model STYLE_LAYERS -- A python list containing: - the names of the layers we would like to extract style from - a coefficient for each of them Returns: J_style -- tensor representing a scalar value, style cost defined above by equation (2) """ # initialize the overall style cost J_style = 0 for layer_name, coeff in STYLE_LAYERS: # Select the output tensor of the currently selected layer out = model[layer_name] # Set a_S to be the hidden layer activation from the layer we have selected, by running the session on out a_S = sess.run(out) # Set a_G to be the hidden layer activation from same layer. Here, a_G references model[layer_name] # and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that # when we run the session, this will be the activations drawn from the appropriate layer, with G as input. a_G = out # Compute style_cost for the current layer J_style_layer = compute_layer_style_cost(a_S, a_G) # Add coeff * J_style_layer of this layer to overall style cost J_style += coeff * J_style_layer return J_style ###Output _____no_output_____ ###Markdown **Note**: In the inner-loop of the for-loop above, `a_G` is a tensor and hasn't been evaluated yet. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.<!-- How do you choose the coefficients for each layer? The deeper layers capture higher-level concepts, and the features in the deeper layers are less localized in the image relative to each other. So if you want the generated image to softly follow the style image, try choosing larger weights for deeper layers and smaller weights for the first layers. 
In contrast, if you want the generated image to strongly follow the style image, try choosing smaller weights for deeper layers and larger weights for the first layers!-->**What you should remember**:- The style of an image can be represented using the Gram matrix of a hidden layer's activations. However, we get even better results combining this representation from multiple different layers. This is in contrast to the content representation, where usually using just a single hidden layer is sufficient.- Minimizing the style cost will cause the image $G$ to follow the style of the image $S$. 3.3 - Defining the total cost to optimize Finally, let's create a total cost function that combines both the style cost and the content cost; this is the function we will minimize. The formula is: $$J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$$**Exercise**: Implement the total cost function which includes both the content cost and the style cost. ###Code # GRADED FUNCTION: total_cost def total_cost(J_content, J_style, alpha = 10, beta = 40): """ Computes the total cost function Arguments: J_content -- content cost coded above J_style -- style cost coded above alpha -- hyperparameter weighting the importance of the content cost beta -- hyperparameter weighting the importance of the style cost Returns: J -- total cost as defined by the formula above.
""" ### START CODE HERE ### (≈1 line) J = alpha*J_content + beta*J_style ### END CODE HERE ### return J tf.reset_default_graph() with tf.Session() as test: np.random.seed(3) J_content = np.random.randn() J_style = np.random.randn() J = total_cost(J_content, J_style) print("J = " + str(J)) ###Output J = 35.34667875478276 ###Markdown **Expected Output**: **J** 35.34667875478276 **What you should remember**:- The total cost is a linear combination of the content cost $J_{content}(C,G)$ and the style cost $J_{style}(S,G)$- $\alpha$ and $\beta$ are hyperparameters that control the relative weighting between content and style 4 - Solving the optimization problem Finally, let's put everything together to implement Neural Style Transfer!Here's what the program will have to do:1. Create an Interactive Session2. Load the content image 3. Load the style image4. Randomly initialize the image to be generated 5. Load the VGG-19 model6. Build the TensorFlow graph: - Run the content image through the VGG-19 model and compute the content cost - Run the style image through the VGG-19 model and compute the style cost - Compute the total cost - Define the optimizer and the learning rate7. Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step.Let's go through the individual steps in detail. You've previously implemented the overall cost $J(G)$. We'll now set up TensorFlow to optimize this with respect to $G$. To do so, your program has to reset the graph and use an "[Interactive Session](https://www.tensorflow.org/api_docs/python/tf/InteractiveSession)". Unlike a regular session, the "Interactive Session" installs itself as the default session to build a graph. This allows you to run variables without constantly needing to refer to the session object, which simplifies the code. Let's start the interactive session.
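Before wiring this into the graph, the weighting behaviour of the total cost above can be checked with plain numbers (a plain-Python sketch; the cost values are made up for illustration):

```python
def total_cost(J_content, J_style, alpha=10, beta=40):
    # same linear combination as the graded function above
    return alpha * J_content + beta * J_style

# with the default weights, a unit change in style cost moves J four
# times as much as a unit change in content cost (beta / alpha = 4)
print(total_cost(1.0, 1.0))   # 50.0
print(total_cost(2.0, 1.0))   # 60.0 -> doubling content adds alpha = 10
print(total_cost(1.0, 2.0))   # 90.0 -> doubling style adds beta = 40
```

This is why the defaults `alpha = 10, beta = 40` push the optimizer to prioritize style over content.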
###Code # Reset the graph tf.reset_default_graph() # Start interactive session sess = tf.InteractiveSession() ###Output _____no_output_____ ###Markdown Let's load, reshape, and normalize our "content" image (a picture of the Camp Nou stadium): ###Code content_image = scipy.misc.imread("images/camp-nou.jpg") content_image = reshape_and_normalize_image(content_image) ###Output _____no_output_____ ###Markdown Let's load, reshape and normalize our "style" image (Claude Monet's painting): ###Code style_image = scipy.misc.imread("images/monet.jpg") style_image = reshape_and_normalize_image(style_image) ###Output _____no_output_____ ###Markdown Now, we initialize the "generated" image as a noisy image created from the content_image. By initializing the pixels of the generated image to be mostly noise but still slightly correlated with the content image, this will help the content of the "generated" image more rapidly match the content of the "content" image. (Feel free to look in `nst_utils.py` to see the details of `generate_noise_image(...)`; to do so, click "File-->Open..." at the upper-left corner of this Jupyter notebook.) ###Code generated_image = generate_noise_image(content_image) imshow(generated_image[0]) ###Output _____no_output_____ ###Markdown Next, as explained in part (2), let's load the VGG-19 model. ###Code model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat") ###Output _____no_output_____ ###Markdown To get the program to compute the content cost, we will now assign `a_C` and `a_G` to be the appropriate hidden layer activations. We will use layer `conv4_2` to compute the content cost. The code below does the following:1. Assign the content image to be the input to the VGG model.2. Set a_C to be the tensor giving the hidden layer activation for layer "conv4_2".3. Set a_G to be the tensor giving the hidden layer activation for the same layer. 4. Compute the content cost using a_C and a_G.
###Code # Assign the content image to be the input of the VGG model. sess.run(model['input'].assign(content_image)) # Select the output tensor of layer conv4_2 out = model['conv4_2'] # Set a_C to be the hidden layer activation from the layer we have selected a_C = sess.run(out) # Set a_G to be the hidden layer activation from same layer. Here, a_G references model['conv4_2'] # and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that # when we run the session, this will be the activations drawn from the appropriate layer, with G as input. a_G = out # Compute the content cost J_content = compute_content_cost(a_C, a_G) ###Output _____no_output_____ ###Markdown **Note**: At this point, a_G is a tensor and hasn't been evaluated. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below. ###Code # Assign the input of the model to be the "style" image sess.run(model['input'].assign(style_image)) # Compute the style cost J_style = compute_style_cost(model, STYLE_LAYERS) ###Output _____no_output_____ ###Markdown **Exercise**: Now that you have J_content and J_style, compute the total cost J by calling `total_cost()`. Use `alpha = 10` and `beta = 40`. ###Code ### START CODE HERE ### (1 line) J = total_cost(J_content, J_style, 10, 40) ### END CODE HERE ### ###Output _____no_output_____ ###Markdown You'd previously learned how to set up the Adam optimizer in TensorFlow. Let's do that here, using a learning rate of 2.0.
[See reference](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer) ###Code # define optimizer (1 line) optimizer = tf.train.AdamOptimizer(2.0) # define train_step (1 line) train_step = optimizer.minimize(J) ###Output _____no_output_____ ###Markdown **Exercise**: Implement the model_nn() function which initializes the variables of the TensorFlow graph, assigns the input image (initial generated image) as the input of the VGG-19 model and runs the train_step for a large number of steps. ###Code def model_nn(sess, input_image, num_iterations = 200): # Initialize global variables (you need to run the session on the initializer) ### START CODE HERE ### (1 line) sess.run(tf.global_variables_initializer()) ### END CODE HERE ### # Run the noisy input image (initial generated image) through the model. Use assign(). ### START CODE HERE ### (1 line) sess.run(model['input'].assign(input_image)) ### END CODE HERE ### for i in range(num_iterations): # Run the session on the train_step to minimize the total cost ### START CODE HERE ### (1 line) sess.run(train_step) ### END CODE HERE ### # Compute the generated image by running the session on the current model['input'] ### START CODE HERE ### (1 line) generated_image = sess.run(model['input']) ### END CODE HERE ### # Print every 20 iterations. if i%20 == 0: Jt, Jc, Js = sess.run([J, J_content, J_style]) print("Iteration " + str(i) + " :") print("total cost = " + str(Jt)) print("content cost = " + str(Jc)) print("style cost = " + str(Js)) # save current generated image in the "/output" directory save_image("output/" + str(i) + ".png", generated_image) # save last generated image save_image('output/generated_image.jpg', generated_image) return generated_image ###Output _____no_output_____ ###Markdown Run the following cell to generate an artistic image. It should take about 3min on CPU for every 20 iterations but you start observing attractive results after ≈140 iterations.
Neural Style Transfer is generally trained using GPUs. ###Code model_nn(sess, generated_image) ###Output Iteration 0 : total cost = 5.13759e+09 content cost = 6346.35 style cost = 1.28438e+08 Iteration 20 : total cost = 9.5045e+08 content cost = 12815.2 style cost = 2.37581e+07 Iteration 120 : total cost = 1.72724e+08 content cost = 16920.5 style cost = 4.31388e+06 Iteration 140 : total cost = 1.48141e+08 content cost = 17180.2 style cost = 3.69923e+06 Iteration 160 : total cost = 1.28538e+08 content cost = 17382.4 style cost = 3.20912e+06 Iteration 180 : total cost = 1.12631e+08 content cost = 17558.3 style cost = 2.81139e+06
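As a recap of the style representation used throughout this notebook, the Gram matrix from the remember-points above can be reproduced in a few NumPy lines (a toy sketch with a 2-filter activation, not the graded implementation):

```python
import numpy as np

def gram_matrix(A):
    # A has shape (n_C, n_H * n_W): unrolled filter activations.
    # G[i, j] is the correlation (dot product) between filters i and j.
    return A.dot(A.T)

A = np.array([[1., 0., 2.],    # filter 1, unrolled over 3 positions
              [0., 3., 1.]])   # filter 2
G = gram_matrix(A)
print(G)   # [[ 5.  2.] [ 2. 10.]] -- diagonal entries measure each filter's own activity
```

Matching these small matrices between the style image and the generated image, layer by layer, is exactly what the style cost minimized above does.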
cgames/05_sonic/sonic_a2c.ipynb
###Markdown Sonic The Hedgehog 1 with Advantage Actor Critic Step 1: Import the libraries ###Code import time import retro import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt from IPython.display import clear_output import math %matplotlib inline import sys sys.path.append('../../') from algos.agents import A2CAgent from algos.models import ActorCnn, CriticCnn from algos.preprocessing.stack_frame import preprocess_frame, stack_frame ###Output _____no_output_____ ###Markdown Step 2: Create our environment Initialize the environment in the code cell below. ###Code env = retro.make(game='SonicTheHedgehog-Genesis', state='GreenHillZone.Act1', scenario='contest') env.seed(0) # if gpu is to be used device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print("Device: ", device) ###Output _____no_output_____ ###Markdown Step 3: Viewing our Environment ###Code print("The size of frame is: ", env.observation_space.shape) print("No. of Actions: ", env.action_space.n) env.reset() plt.figure() plt.imshow(env.reset()) plt.title('Original Frame') plt.show() possible_actions = { # No Operation 0: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], # Left 1: [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], # Right 2: [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], # Left, Down 3: [0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0], # Right, Down 4: [0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0], # Down 5: [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0], # Down, B 6: [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0], # B 7: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] } ###Output _____no_output_____ ###Markdown Execute the code cell below to play Sonic with a random policy.
###Code def random_play(): score = 0 env.reset() for i in range(200): env.render() action = possible_actions[np.random.randint(len(possible_actions))] state, reward, done, _ = env.step(action) score += reward if done: print("Your Score at end of game is: ", score) break env.reset() env.render(close=True) random_play() ###Output _____no_output_____ ###Markdown Step 4:Preprocessing Frame ###Code plt.figure() plt.imshow(preprocess_frame(env.reset(), (1, -1, -1, 1), 84), cmap="gray") plt.title('Pre Processed image') plt.show() ###Output _____no_output_____ ###Markdown Step 5: Stacking Frame ###Code def stack_frames(frames, state, is_new=False): frame = preprocess_frame(state, (1, -1, -1, 1), 84) frames = stack_frame(frames, frame, is_new) return frames ###Output _____no_output_____ ###Markdown Step 6: Creating our Agent ###Code INPUT_SHAPE = (4, 84, 84) ACTION_SIZE = len(possible_actions) SEED = 0 GAMMA = 0.99 # discount factor ALPHA= 0.0001 # Actor learning rate BETA = 0.0005 # Critic learning rate UPDATE_EVERY = 100 # how often to update the network agent = A2CAgent(INPUT_SHAPE, ACTION_SIZE, SEED, device, GAMMA, ALPHA, BETA, UPDATE_EVERY, ActorCnn, CriticCnn) ###Output _____no_output_____ ###Markdown Step 7: Watching untrained agent play ###Code env.viewer = None # watch an untrained agent state = stack_frames(None, env.reset(), True) for j in range(200): env.render(close=False) action, _, _ = agent.act(state) next_state, reward, done, _ = env.step(possible_actions[action]) state = stack_frames(state, next_state, False) if done: env.reset() break env.render(close=True) ###Output _____no_output_____ ###Markdown Step 8: Loading AgentUncomment line to load a pretrained agent ###Code start_epoch = 0 scores = [] scores_window = deque(maxlen=20) ###Output _____no_output_____ ###Markdown Step 9: Train the Agent with Actor Critic ###Code def train(n_episodes=1000): """ Params ====== n_episodes (int): maximum number of training episodes """ for i_episode in range(start_epoch 
+ 1, n_episodes+1): state = stack_frames(None, env.reset(), True) score = 0 # Punish the agent for not moving forward prev_state = {} steps_stuck = 0 timestamp = 0 while timestamp < 10000: action, log_prob, entropy = agent.act(state) next_state, reward, done, info = env.step(possible_actions[action]) score += reward timestamp += 1 # Punish the agent for standing still for too long. if (prev_state == info): steps_stuck += 1 else: steps_stuck = 0 prev_state = info if (steps_stuck > 20): reward -= 1 next_state = stack_frames(state, next_state, False) agent.step(state, log_prob, entropy, reward, done, next_state) state = next_state if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score clear_output(True) fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") return scores scores = train(1000) fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown Step 10: Watch a Smart Agent! ###Code env.viewer = None # watch the trained agent state = stack_frames(None, env.reset(), True) for j in range(10000): env.render(close=False) action, _, _ = agent.act(state) next_state, reward, done, _ = env.step(possible_actions[action]) state = stack_frames(state, next_state, False) if done: env.reset() break env.render(close=True) ###Output _____no_output_____
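The `stack_frames` helper used throughout this notebook keeps the last four preprocessed frames as the agent's state. A minimal NumPy/deque sketch of that idea (a simplified stand-in for `algos.preprocessing.stack_frame.stack_frame`, not its exact code):

```python
import numpy as np
from collections import deque

def stack_frame_sketch(stacked, frame, is_new):
    # on a new episode, fill the stack with copies of the first frame;
    # otherwise push the newest frame and drop the oldest
    if is_new:
        stacked = deque([frame] * 4, maxlen=4)
    else:
        stacked.append(frame)
    return stacked

s = stack_frame_sketch(None, np.zeros((84, 84)), True)
s = stack_frame_sketch(s, np.ones((84, 84)), False)
state = np.stack(s)
print(state.shape)   # (4, 84, 84) -- matches INPUT_SHAPE above
```

Stacking consecutive frames is what lets a feed-forward CNN infer motion (Sonic's velocity and direction) from otherwise static screenshots.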
use_case/use_case_2.ipynb
###Markdown Use Case PROFAB is a benchmarking platform that aims to fill the gap in datasets about protein functions, with a total of 7656 datasets. In addition to the protein function datasets, ProFAB provides a complete preprocessing-training-evaluation pipeline to speed up machine learning usage in biological studies. Since the workflow is dense, an easy-to-follow use case is prepared here. The difference from use_case_1 is that this notebook shows how a user can import their own dataset and then apply the ProFAB modules. 1. Data Importing ProFAB allows users to import datasets that are not available in ProFAB. To import data, the SelfGet() function does the job: ###Code import sys sys.path.insert(0, '../') from profab.import_dataset import SelfGet data = SelfGet(delimiter = '\t', name = False, label = False).get_data(file_name = "sample.txt") ###Output _____no_output_____ ###Markdown An explanation of the parameters is available in the "import_dataset" section. With these functions, users can manage dataset construction. If a user has the positive set of any term available in ProFAB, the corresponding negative set can be obtained by setting the parameter 'label' = 'negative'. For example, let's say the user has the positive set for EC number 1-2-7 and wants to get the negative set to use in prediction; the following lines can be executed: ###Code from profab.import_dataset import SelfGet, ECNO negative_set = ECNO(label = 'negative').get_data('ecNo_1-2-7') positive_set = SelfGet().get_data('users_1-2-7_positive_set.txt') ###Output _____no_output_____ ###Markdown After loading the datasets, the preprocessing step comes next. 2. PreProcessing Preprocessing is applicable in three sections: featurization, splitting and scaling. a. Featurization Featurization is used to convert a protein FASTA file into numerical feature data with many protein descriptors. A detailed explanation can be found in "model_preprocess". This function is only applicable on Linux and macOS operating systems, and the input file format must be '.fasta'.
The following lines can be run: ###Code from profab.model_preprocess import extract_protein_feature extract_protein_feature('edp', 1, 'directory_folder_input_file', 'sample') ###Output _____no_output_____ ###Markdown After running this function, a new file that holds the numerical features of the proteins will be formed, and it can be imported via the SelfGet() function as shown in the previous section. b. Splitting Another preprocessing module is the splitting module, which prepares train, validation (if needed) and test sets for prediction. Detailed information is available in "model_preprocess", and reading it is highly recommended to see how the function works. If one has X (feature matrix) and y (label matrix), splitting can be done by defining the fraction of the test set: ###Code from profab.model_preprocess import ttv_split X_train,X_test,y_train,y_test = ttv_split(X,y,ratio) ###Output _____no_output_____ ###Markdown Rather than giving all the data at once, the user can choose to feed the 'ttv_split' function with positive and negative sets and obtain the split data that way: ###Code from profab.model_preprocess import ttv_split X_train,X_test,y_train,y_test = ttv_split(X_pos,X_neg,ratio) ###Output _____no_output_____ ###Markdown If the data is for a regression task, then y (the label matrix) must be given. c. Scaling Scaling is a function to rearrange the range of the input points. The reason to do it is to prevent scale-imbalance problems. If the data is already stable, then this function is unnecessary. Like the other preprocessing steps, its detailed introduction can be found in 'model_preprocess'. A use case: ###Code from profab.model_preprocess import scale_methods X_train,scaler = scale_methods(X_train,scale_type = 'standard') X_test = scaler.transform(X_test) ###Output _____no_output_____ ###Markdown The scaling function returns the fitted train data (X_train) and the fitted scaler model, which is used to transform the other sets, as can be seen in the use case. The rest is exactly the same as 'test_file_1'. 3. Training PROFAB can train any type of data.
It provides both classification and regression training. Since our datasets are based on the classification of proteins, a classification method will be shown as an example. After the training session, the outcome of training can be stored at 'model_path' ```if path is not None```. Because this process can last too long, saving the outcome is a time-saver. A stored model can later be imported with 'pickle', a Python package. ###Code from profab.model_learn import classification_methods #Let's define model path where training model will be saved. model_path = 'model_path.txt' model = classification_methods(ml_type = 'logistic_reg', X_train = X_train, y_train = y_train, path = model_path ) ###Output _____no_output_____ ###Markdown 4. Evaluation After the training session is done, evaluation can be performed with the following lines of code. The output of the evaluation is given below the code. a. Get Scores ###Code from profab.model_evaluate import evaluate_score score_train,f_train = evaluate_score(model,X_train,y_train,preds = True) score_test,f_test = evaluate_score(model,X_test,y_test,preds = True) score_validation,f_validation = evaluate_score(model,X_validation,y_validation,preds = True) ###Output _____no_output_____ ###Markdown The train and test scores are given for the 'ecNo_1-2-7' target data. b. Table Formatting To get the data in table format, a dictionary that consists of the scores of the different sets must be given. The following lines of code can be executed to tabularize the results: ###Code #If user wants to see result in a table, following codes can be run: from profab.model_evaluate import form_table score_path = 'score_path.csv' #To save the results. scores = {'train':score_train,'test':score_test,'validation':score_validation} #form_table() function will write scores to score_path. form_table(scores = scores, path = score_path) ###Output _____no_output_____ ###Markdown 5.
Working with Multiple Sets If the user wants to make predictions for multiple classes, ProFAB can handle this with a 'for-loop'. For this case, let's say the user has positive and negative datasets for 2 GO terms, where the names of the files are: - GO_0000018_negative_data.txt - GO_0019935_negative_data.txt - GO_0000018_positive_data.txt - GO_0019935_positive_data.txt All files are tab separated and the proteins are identified by their names. So, this time, using the SelfGet() function with the parameter 'name' = True is the efficient way to load the datasets. ###Code import sys sys.path.insert(0, '../') from profab.import_dataset import GOID, SelfGet from profab.model_preprocess import ttv_split from profab.model_learn import classification_methods from profab.model_evaluate import evaluate_score, multiple_form_table #GO_List: variable includes GO terms GO_list = ['GO_0000018','GO_0019935'] #To hold scores of model performances scores = {} for go_term in GO_list: #User imports his/her negative and positive datasets with SelfGet() function negative_data_name = go_term + '_negative_data.txt' negative_set = SelfGet(name = True).get_data(file_name = negative_data_name) positive_data_name = go_term + '_positive_data.txt' positive_set = SelfGet(name = True).get_data(file_name = positive_data_name) #splitting X_train,X_test,X_validation,y_train,y_test,y_validation = ttv_split(X_pos = positive_set, X_neg = negative_set, ratio = [0.1,0.2]) #prediction model = classification_methods(ml_type = 'SVM', X_train = X_train, X_valid = X_validation, y_train = y_train, y_valid = y_validation) #evaluation score_train = evaluate_score(model,X_train,y_train) score_test = evaluate_score(model,X_test,y_test) set_scores = {'train':score_train,'test': score_test} scores.update({go_term:set_scores}) #tabularizing the scores score_path = 'score_path.csv' multiple_form_table(scores, score_path) print(scores) ###Output {'GO_0000018': {'train': {'Precision': 0.680365296803653, 'Recall': 0.4257142857142857,
'F1-Score': 0.523725834797891, 'F05-Score': 0.6076672104404568, 'Accuracy': 0.7426400759734093, 'MCC': 0.37854022684114785, 'AUC': 0.6630705141231458, 'AUPRC': 0.6484813867005648, 'TP': 149, 'FP': 70, 'TN': 633, 'FN': 201}, 'test': {'Precision': 0.7352941176470589, 'Recall': 0.49019607843137253, 'F1-Score': 0.588235294117647, 'F05-Score': 0.6684491978609626, 'Accuracy': 0.7682119205298014, 'MCC': 0.4531328287625726, 'AUC': 0.7000980392156864, 'AUPRC': 0.6988378132710038, 'TP': 25, 'FP': 9, 'TN': 91, 'FN': 26}}, 'GO_0019935': {'train': {'Precision': 0.765661252900232, 'Recall': 0.6043956043956044, 'F1-Score': 0.67553735926305, 'F05-Score': 0.7268722466960352, 'Accuracy': 0.8022457891453525, 'MCC': 0.543894233191604, 'AUC': 0.7544210756131287, 'AUPRC': 0.7524021030084921, 'TP': 330, 'FP': 101, 'TN': 956, 'FN': 216}, 'test': {'Precision': 0.775, 'Recall': 0.49206349206349204, 'F1-Score': 0.6019417475728155, 'F05-Score': 0.695067264573991, 'Accuracy': 0.8217391304347826, 'MCC': 0.5155438600770936, 'AUC': 0.719085638247315, 'AUPRC': 0.7030969634230503, 'TP': 31, 'FP': 9, 'TN': 158, 'FN': 32}}}
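The training section above notes that a stored model is exported and imported with pickle; a minimal sketch of that round-trip (the dictionary is a stand-in for the estimator returned by `classification_methods`, and the file name is illustrative):

```python
import os
import pickle
import tempfile

# stand-in object for a trained model
model = {'ml_type': 'logistic_reg', 'coef': [0.1, 0.2]}

model_path = os.path.join(tempfile.gettempdir(), 'model_path.pkl')
with open(model_path, 'wb') as f:
    pickle.dump(model, f)          # what saving via `path` boils down to

with open(model_path, 'rb') as f:  # later, in another session
    restored = pickle.load(f)
print(restored == model)   # True
```

This way the long training step runs once, and the saved model can be reloaded for evaluation or new predictions at any time.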
results_ipynb/single_neuron_exploration/cnn_initial_exploration.ipynb
###Markdown This notebook first collects all stats obtained in the initial exploration. It will be a big table, indexed by subset, neuron, structure, optimization. Result: I will use k9cX + k6s2 + vanilla as my basis. ###Code import h5py import numpy as np import os.path from functools import partial from collections import OrderedDict import pandas as pd pd.options.display.max_rows = 100 pd.options.display.max_columns = 100 from scipy.stats import pearsonr # get number of parameters. from tang_jcompneuro import dir_dictionary from tang_jcompneuro.cnn_exploration_pytorch import get_num_params num_param_dict = get_num_params() def print_relevant_models(): for x, y in num_param_dict.items(): if x.startswith('k9c') and 'k6s2max' in x and x.endswith('vanilla'): print(x, y) print_relevant_models() def generic_call_back(name, obj, env): if isinstance(obj, h5py.Dataset): arch, dataset, subset, neuron_idx, opt = name.split('/') assert dataset == 'MkA_Shape' neuron_idx = int(neuron_idx) corr_this = obj.attrs['corr'] if corr_this.dtype != np.float32: # this will get hit by my code.
assert corr_this == 0.0 env['result'].append( { 'subset': subset, 'neuron': neuron_idx, 'arch': arch, 'opt': opt, 'corr': corr_this, 'time': obj.attrs['time'], # 'num_param': num_param_dict[arch], } ) def collect_all_data(): cnn_explore_dir = os.path.join(dir_dictionary['models'], 'cnn_exploration') env = {'result': []} count = 0 for root, dirs, files in os.walk(cnn_explore_dir): for f in files: if f.lower().endswith('.hdf5'): count += 1 if count % 100 == 0: print(count) f_check = os.path.join(root, f) with h5py.File(f_check, 'r') as f_metric: f_metric.visititems(partial(generic_call_back, env=env)) result = pd.DataFrame(env['result'], columns=['subset', 'neuron', 'arch', 'opt', 'corr', 'time']) result = result.set_index(['subset', 'neuron', 'arch', 'opt'], verify_integrity=True) print(count) return result all_data = collect_all_data() # 66 (arch) x 35 (opt) x 2 (subsets) x 14 (neurons per subset) assert all_data.shape == (64680, 2) %matplotlib inline import matplotlib.pyplot as plt def check_run_time(): # check time. as long as it's fast, it's fine. time_all = all_data['time'].values plt.close('all') plt.hist(time_all, bins=100) plt.show() print(time_all.min(), time_all.max(), np.median(time_all), np.mean(time_all)) print(np.sort(time_all)[::-1][:50]) check_run_time() # seems that it's good to check those with more than 100 sec. def check_long_ones(): long_runs = all_data[all_data['time']>=100] return long_runs # typically, long cases are from adam. # I'm not sure whether these numbers are accurate. but maybe let's ignore them for now. check_long_ones() # I think it's easier to analyze per data set. def study_one_subset(df_this_only_corr): # this df_this_only_corr should be a series. # with (neuron, arch, opt) as the (multi) index. # first, I want to know how good my opt approximation is. # # I will show two ways. # first, use my opt approximation to replace the best # one for every combination of neuron and arch.
# show scatter plot, pearsonr, as well as how much performance is lost. # # second, I want to see, if for each neuron I choose the best architecture, # how much performance is lost. # # there are actually two ways to choose best architecture. # a) one is, best one is chosen based on the exact version of loss. # b) another one is, best one is chosen separately. # # by the last plot in _examine_opt (second, b)), you can see that, # given enough architectures to choose from, these optimization methods can achieve near optimal. a = _examine_opt(df_this_only_corr) # ok. then, I'd like to check architectures. # here, I will use these arch's performance on the approx version. _examine_arch(a) def _examine_arch(df_neuron_by_arch): # mark input as tmp_stuff. # then you can run things like # tmp_stuff.T.mean(axis=1).sort_values() # or tmp_stuff.T.median(axis=1).sort_values() # my finding is that k9cX_nobn_k6s2max_vanilla # where X is number of channels often performs best. # essentially, I can remove those k13 stuff. # also, dropout and factored work poorly. # so remove them as well. # k6s2 stuff may not be that evident. # so I will examine that next. print(df_neuron_by_arch.T.mean(axis=1).sort_values(ascending=False).iloc[:10]) print(df_neuron_by_arch.T.median(axis=1).sort_values(ascending=False).iloc[:10]) columns = df_neuron_by_arch.columns columns_to_preserve = [x for x in columns if x.startswith('k9c') and x.endswith('vanilla')] df_neuron_by_arch = df_neuron_by_arch[columns_to_preserve] print(df_neuron_by_arch.T.mean(axis=1).sort_values(ascending=False)) print(df_neuron_by_arch.T.median(axis=1).sort_values(ascending=False)) # just search 'k6s2max' in the output, and see that most of them are on top.
def show_stuff(x1, x2, figsize=(10, 10), title='', xlabel=None, ylabel=None): plt.close('all') plt.figure(figsize=figsize) plt.scatter(x1, x2, s=5) plt.xlim(0,1) plt.ylim(0,1) if xlabel is not None: plt.xlabel(xlabel) if ylabel is not None: plt.ylabel(ylabel) plt.plot([0,1], [0,1], linestyle='--', color='r') plt.title(title + 'corr {:.2f}'.format(pearsonr(x1,x2)[0])) plt.axis('equal') plt.show() def _extract_max_value_from_neuron_by_arch_stuff(neuron_by_arch_stuff: np.ndarray, max_idx=None): assert isinstance(neuron_by_arch_stuff, np.ndarray) n_neuron, n_arch = neuron_by_arch_stuff.shape if max_idx is None: max_idx = np.argmax(neuron_by_arch_stuff, axis=1) assert max_idx.shape == (n_neuron,) best_perf_per_neuron = neuron_by_arch_stuff[np.arange(n_neuron), max_idx] assert best_perf_per_neuron.shape == (n_neuron, ) # OCD, sanity check. for neuron_idx in range(n_neuron): assert best_perf_per_neuron[neuron_idx] == neuron_by_arch_stuff[neuron_idx, max_idx[neuron_idx]] return neuron_by_arch_stuff[np.arange(n_neuron), max_idx], max_idx def _examine_opt(df_this_only_corr): # seems that best opt can be approximated by max(1e-3L2_1e-3L2_adam002_mse, 1e-4L2_1e-3L2_adam002_mse, # '1e-3L2_1e-3L2_sgd_mse', '1e-4L2_1e-3L2_sgd_mse') # let's see how well that goes. # this is by running code like # opt_var = all_data['corr'].xs('OT', level='subset').unstack('arch').unstack('neuron').median(axis=1).sort_values() # where you can replace OT with all, # median with mean. # and check by eye. # notice that mean and median may give pretty different results. opt_approxer = ( '1e-3L2_1e-3L2_adam002_mse', '1e-4L2_1e-3L2_adam002_mse', '1e-3L2_1e-3L2_sgd_mse', '1e-4L2_1e-3L2_sgd_mse' ) opt_in_columns = df_this_only_corr.unstack('opt') opt_best = opt_in_columns.max(axis=1).values assert np.all(opt_best > 0) opt_best_approx = np.asarray([df_this_only_corr.unstack('opt')[x].values for x in opt_approxer]).max(axis=0) assert opt_best.shape == opt_best_approx.shape # compute how much is lost. 
preserved_performance = opt_best_approx.mean()/opt_best.mean() print('preserved performance', preserved_performance) show_stuff(opt_best, opt_best_approx, (8, 8), 'approx vs. exact, all arch, all neurons, ', 'exact', 'approx') both_exact_and_opt = pd.DataFrame(OrderedDict([('exact', opt_best), ('approx', opt_best_approx)]), index = opt_in_columns.index.copy()) both_exact_and_opt.columns.name = 'opt_type' best_arch_performance_exact, max_idx = _extract_max_value_from_neuron_by_arch_stuff(both_exact_and_opt['exact'].unstack('arch').values) best_arch_performance_approx, _ = _extract_max_value_from_neuron_by_arch_stuff(both_exact_and_opt['approx'].unstack('arch').values, max_idx) best_arch_performance_own_idx, _ = _extract_max_value_from_neuron_by_arch_stuff(both_exact_and_opt['approx'].unstack('arch').values) assert best_arch_performance_exact.shape == best_arch_performance_approx.shape #return best_arch_performance_exact, best_arch_performance_approx show_stuff(best_arch_performance_exact, best_arch_performance_approx, (6, 6), 'approx vs. exact, best arch (determined by exact), all neurons, ', 'exact', 'approx') show_stuff(best_arch_performance_exact, best_arch_performance_own_idx, (6, 6), 'approx vs. exact, best arch (determined by each), all neurons, ', 'exact', 'approx') return both_exact_and_opt['approx'].unstack('arch') tmp_stuff = study_one_subset(all_data['corr'].xs('OT', level='subset')) ###Output preserved performance 0.961820467602
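The opt-approximation logic above repeatedly takes a max over the `opt` level of a MultiIndex. A toy pandas sketch of that pattern (made-up correlation numbers, two neurons by two architectures by two optimizers):

```python
import pandas as pd

idx = pd.MultiIndex.from_product(
    [[0, 1], ['archA', 'archB'], ['opt1', 'opt2']],
    names=['neuron', 'arch', 'opt'])
corr = pd.Series([.5, .6, .4, .7, .3, .2, .8, .9], index=idx)

# best corr over all opts for each (neuron, arch),
# mirroring opt_in_columns.max(axis=1) in _examine_opt
best = corr.unstack('opt').max(axis=1)
print(best.loc[(0, 'archB')])   # 0.7
print(best.loc[(1, 'archB')])   # 0.9
```

The same `unstack(level).max(axis=1)` idiom generalizes to any level of the (subset, neuron, arch, opt) index used in `all_data`.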
solutions/4 Deep Learning Intro Exercises Solution.ipynb
###Markdown Deep Learning Intro ###Code %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import numpy as np ###Output _____no_output_____ ###Markdown Exercise 1 The [Pima Indians dataset](https://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes) is a very famous dataset distributed by UCI and originally collected from the National Institute of Diabetes and Digestive and Kidney Diseases. It contains data from clinical exams for women aged 21 and above of Pima Indian origin. The objective is to predict based on diagnostic measurements whether a patient has diabetes.It has the following features:- Pregnancies: Number of times pregnant- Glucose: Plasma glucose concentration at 2 hours in an oral glucose tolerance test- BloodPressure: Diastolic blood pressure (mm Hg)- SkinThickness: Triceps skin fold thickness (mm)- Insulin: 2-Hour serum insulin (mu U/ml)- BMI: Body mass index (weight in kg/(height in m)^2)- DiabetesPedigreeFunction: Diabetes pedigree function- Age: Age (years)The last column is the outcome, and it is a binary variable.In this first exercise we will explore it through the following steps:1. Load the ../data/diabetes.csv dataset, use pandas to explore the range of each feature- For each feature draw a histogram. Bonus points if you draw all the histograms in the same figure.- Explore correlations of features with the outcome column. You can do this in several ways, for example using the `sns.pairplot` we used above or drawing a heatmap of the correlations.- Do features need standardization? If so what standardization technique will you use? MinMax? Standard?- Prepare your final `X` and `y` variables to be used by an ML model. Make sure you define your target variable well. Will you need dummy columns?
###Code df = pd.read_csv('../data/diabetes.csv') df.head() _ = df.hist(figsize=(12, 10)) import seaborn as sns sns.pairplot(df, hue='Outcome') sns.heatmap(df.corr(), annot=True) df.info() df.describe() from sklearn.preprocessing import StandardScaler from tensorflow.keras.utils import to_categorical sc = StandardScaler() X = sc.fit_transform(df.drop('Outcome', axis=1)) y = df['Outcome'].values y_cat = to_categorical(y) X.shape y_cat.shape ###Output _____no_output_____ ###Markdown Exercise 2 Build a fully connected NN model that predicts diabetes. Follow these steps:1. Split your data into a train/test split with a test size of 20% and a `random_state = 22`- define a sequential model with at least one inner layer. You will have to make choices for the following things: - what is the size of the input? - how many nodes will you use in each layer? - what is the size of the output? - what activation functions will you use in the inner layers? - what activation function will you use at output? - what loss function will you use? 
- what optimizer will you use?- fit your model on the training set, using a validation_split of 0.1- test your trained model on the test data from the train/test split- check the accuracy score, the confusion matrix and the classification report ###Code X.shape from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y_cat, random_state=22, test_size=0.2) from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.optimizers import Adam model = Sequential() model.add(Dense(32, input_shape=(8,), activation='relu')) model.add(Dense(32, activation='relu')) model.add(Dense(2, activation='softmax')) model.compile(Adam(learning_rate=0.05), loss='categorical_crossentropy', metrics=['accuracy']) model.summary() 32*8 + 32 model.fit(X_train, y_train, epochs=20, verbose=2, validation_split=0.1) y_pred = model.predict(X_test) y_test_class = np.argmax(y_test, axis=1) y_pred_class = np.argmax(y_pred, axis=1) from sklearn.metrics import accuracy_score from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix pd.Series(y_test_class).value_counts() / len(y_test_class) accuracy_score(y_test_class, y_pred_class) print(classification_report(y_test_class, y_pred_class)) confusion_matrix(y_test_class, y_pred_class) ###Output _____no_output_____ ###Markdown Exercise 3 Compare your work with the results presented in [this notebook](https://www.kaggle.com/futurist/d/uciml/pima-indians-diabetes-database/pima-data-visualisation-and-machine-learning). Are your Neural Network results better or worse than the results obtained by traditional Machine Learning techniques?- Try training a Support Vector Machine or a Random Forest model on the exact same train/test split. Is the performance better or worse?- Try restricting your features to only 4 features like in the suggested notebook. How does model performance change? 
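The last bullet of Exercise 3 (restricting to only 4 features) is not covered by the solution code, so here is a hedged sketch. The four columns chosen are an assumption, not necessarily the ones used in the referenced Kaggle notebook, and a small synthetic frame stands in for ../data/diabetes.csv so the snippet is self-contained:

```python
# Hedged sketch: retrain a classifier on only 4 features.
# The column choice (Glucose, BMI, Age, DiabetesPedigreeFunction) is an
# assumption; the synthetic data below is random and stands in for the
# real ../data/diabetes.csv purely so the sketch runs on its own.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({
    'Glucose': rng.normal(120, 30, 200),
    'BMI': rng.normal(32, 7, 200),
    'Age': rng.integers(21, 70, 200),
    'DiabetesPedigreeFunction': rng.random(200),
    'Outcome': rng.integers(0, 2, 200),
})

features = ['Glucose', 'BMI', 'Age', 'DiabetesPedigreeFunction']
X4 = df[features].values
y = df['Outcome'].values

# Same split parameters as the exercise (test_size=0.2, random_state=22)
X_tr, X_te, y_tr, y_te = train_test_split(X4, y, test_size=0.2, random_state=22)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print("4-feature accuracy:", acc)
```

On the real dataset, replace the synthetic frame with `pd.read_csv('../data/diabetes.csv')` and compare this accuracy against the 8-feature models.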
###Code from sklearn.ensemble import RandomForestClassifier from sklearn.svm import SVC from sklearn.naive_bayes import GaussianNB for mod in [RandomForestClassifier(), SVC(), GaussianNB()]: mod.fit(X_train, y_train[:, 1]) y_pred = mod.predict(X_test) print("="*80) print(mod) print("-"*80) print("Accuracy score: {:0.3}".format(accuracy_score(y_test_class, y_pred))) print("Confusion Matrix:") print(confusion_matrix(y_test_class, y_pred)) print() ###Output _____no_output_____ ###Markdown Deep Learning Intro ###Code %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import numpy as np ###Output _____no_output_____ ###Markdown Exercise 1 The [Pima Indians dataset](https://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes) is a very famous dataset distributed by UCI and originally collected from the National Institute of Diabetes and Digestive and Kidney Diseases. It contains data from clinical exams for women age 21 and above of Pima indian origins. The objective is to predict based on diagnostic measurements whether a patient has diabetes.It has the following features:- Pregnancies: Number of times pregnant- Glucose: Plasma glucose concentration a 2 hours in an oral glucose tolerance test- BloodPressure: Diastolic blood pressure (mm Hg)- SkinThickness: Triceps skin fold thickness (mm)- Insulin: 2-Hour serum insulin (mu U/ml)- BMI: Body mass index (weight in kg/(height in m)^2)- DiabetesPedigreeFunction: Diabetes pedigree function- Age: Age (years)The last colum is the outcome, and it is a binary variable.In this first exercise we will explore it through the following steps:1. Load the ..data/diabetes.csv dataset, use pandas to explore the range of each feature- For each feature draw a histogram. Bonus points if you draw all the histograms in the same figure.- Explore correlations of features with the outcome column. 
You can do this in several ways, for example using the `sns.pairplot` we used above or drawing a heatmap of the correlations.- Do features need standardization? If so what stardardization technique will you use? MinMax? Standard?- Prepare your final `X` and `y` variables to be used by a ML model. Make sure you define your target variable well. Will you need dummy columns? ###Code df = pd.read_csv('../data/diabetes.csv') df.head() _ = df.hist(figsize=(12, 10)) import seaborn as sns sns.pairplot(df, hue='Outcome') sns.heatmap(df.corr(), annot = True) df.info() df.describe() from sklearn.preprocessing import StandardScaler from keras.utils import to_categorical sc = StandardScaler() X = sc.fit_transform(df.drop('Outcome', axis=1)) y = df['Outcome'].values y_cat = to_categorical(y) X.shape y_cat.shape ###Output _____no_output_____ ###Markdown Exercise 2 Build a fully connected NN model that predicts diabetes. Follow these steps:1. Split your data in a train/test with a test size of 20% and a `random_state = 22`- define a sequential model with at least one inner layer. You will have to make choices for the following things: - what is the size of the input? - how many nodes will you use in each layer? - what is the size of the output? - what activation functions will you use in the inner layers? - what activation function will you use at output? - what loss function will you use? 
- what optimizer will you use?- fit your model on the training set, using a validation_split of 0.1- test your trained model on the test data from the train/test split- check the accuracy score, the confusion matrix and the classification report ###Code X.shape from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y_cat, random_state=22, test_size=0.2) from keras.models import Sequential from keras.layers import Dense from keras.optimizers import Adam model = Sequential() model.add(Dense(32, input_shape=(8,), activation='relu')) model.add(Dense(32, activation='relu')) model.add(Dense(2, activation='softmax')) model.compile(Adam(lr=0.05), loss='categorical_crossentropy', metrics=['accuracy']) model.summary() 32*8 + 32 model.fit(X_train, y_train, epochs=20, verbose=2, validation_split=0.1) y_pred = model.predict(X_test) y_test_class = np.argmax(y_test, axis=1) y_pred_class = np.argmax(y_pred, axis=1) from sklearn.metrics import accuracy_score from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix pd.Series(y_test_class).value_counts() / len(y_test_class) accuracy_score(y_test_class, y_pred_class) print(classification_report(y_test_class, y_pred_class)) confusion_matrix(y_test_class, y_pred_class) ###Output _____no_output_____ ###Markdown Exercise 3Compare your work with the results presented in [this notebook](https://www.kaggle.com/futurist/d/uciml/pima-indians-diabetes-database/pima-data-visualisation-and-machine-learning). Are your Neural Network results better or worse than the results obtained by traditional Machine Learning techniques?- Try training a Support Vector Machine or a Random Forest model on the exact same train/test split. Is the performance better or worse?- Try restricting your features to only 4 features like in the suggested notebook. How does model performance change? 
###Code from sklearn.ensemble import RandomForestClassifier from sklearn.svm import SVC from sklearn.naive_bayes import GaussianNB for mod in [RandomForestClassifier(), SVC(), GaussianNB()]: mod.fit(X_train, y_train[:, 1]) y_pred = mod.predict(X_test) print("="*80) print(mod) print("-"*80) print("Accuracy score: {:0.3}".format(accuracy_score(y_test_class, y_pred))) print("Confusion Matrix:") print(confusion_matrix(y_test_class, y_pred)) print() ###Output _____no_output_____ ###Markdown Deep Learning Intro ###Code %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import numpy as np ###Output _____no_output_____ ###Markdown Exercise 1 The [Pima Indians dataset](https://archive.ics.uci.edu/ml/datasets/diabetes) is a very famous dataset distributed by UCI and originally collected from the National Institute of Diabetes and Digestive and Kidney Diseases. It contains data from clinical exams for women age 21 and above of Pima indian origins. The objective is to predict based on diagnostic measurements whether a patient has diabetes.It has the following features:- Pregnancies: Number of times pregnant- Glucose: Plasma glucose concentration a 2 hours in an oral glucose tolerance test- BloodPressure: Diastolic blood pressure (mm Hg)- SkinThickness: Triceps skin fold thickness (mm)- Insulin: 2-Hour serum insulin (mu U/ml)- BMI: Body mass index (weight in kg/(height in m)^2)- DiabetesPedigreeFunction: Diabetes pedigree function- Age: Age (years)The last colum is the outcome, and it is a binary variable.In this first exercise we will explore it through the following steps:1. Load the ..data/diabetes.csv dataset, use pandas to explore the range of each feature- For each feature draw a histogram. Bonus points if you draw all the histograms in the same figure.- Explore correlations of features with the outcome column. 
You can do this in several ways, for example using the `sns.pairplot` we used above or drawing a heatmap of the correlations.- Do features need standardization? If so what stardardization technique will you use? MinMax? Standard?- Prepare your final `X` and `y` variables to be used by a ML model. Make sure you define your target variable well. Will you need dummy columns? ###Code df = pd.read_csv('../data/diabetes.csv') df.head() _ = df.hist(figsize=(12, 10)) import seaborn as sns sns.pairplot(df, hue='Outcome'); sns.heatmap(df.corr(), annot = True) df.info() df.describe() from sklearn.preprocessing import StandardScaler from tensorflow.keras.utils import to_categorical sc = StandardScaler() X = sc.fit_transform(df.drop('Outcome', axis=1)) y = df['Outcome'].values y_cat = to_categorical(y) X.shape y_cat.shape ###Output _____no_output_____ ###Markdown Exercise 2 Build a fully connected NN model that predicts diabetes. Follow these steps:1. Split your data in a train/test with a test size of 20% and a `random_state = 22`- define a sequential model with at least one inner layer. You will have to make choices for the following things: - what is the size of the input? - how many nodes will you use in each layer? - what is the size of the output? - what activation functions will you use in the inner layers? - what activation function will you use at output? - what loss function will you use? 
- what optimizer will you use?- fit your model on the training set, using a validation_split of 0.1- test your trained model on the test data from the train/test split- check the accuracy score, the confusion matrix and the classification report ###Code X.shape from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y_cat, random_state=22, test_size=0.2) from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.optimizers import Adam model = Sequential() model.add(Dense(32, input_shape=(8,), activation='relu')) model.add(Dense(32, activation='relu')) model.add(Dense(2, activation='softmax')) model.compile(Adam(learning_rate=0.05), loss='categorical_crossentropy', metrics=['accuracy']) model.summary() 32*8 + 32 model.fit(X_train, y_train, epochs=20, verbose=2, validation_split=0.1) y_pred = model.predict(X_test) y_test_class = np.argmax(y_test, axis=1) y_pred_class = np.argmax(y_pred, axis=1) from sklearn.metrics import accuracy_score from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix pd.Series(y_test_class).value_counts() / len(y_test_class) accuracy_score(y_test_class, y_pred_class) print(classification_report(y_test_class, y_pred_class)) confusion_matrix(y_test_class, y_pred_class) ###Output _____no_output_____ ###Markdown Exercise 3Compare your work with the results presented in [this notebook](https://www.kaggle.com/sheshu/pima-data-visualisation-and-machine-learning). Are your Neural Network results better or worse than the results obtained by traditional Machine Learning techniques?- Try training a Support Vector Machine or a Random Forest model on the exact same train/test split. Is the performance better or worse?- Try restricting your features to only 4 features like in the suggested notebook. How does model performance change? 
###Code from sklearn.ensemble import RandomForestClassifier from sklearn.svm import SVC from sklearn.naive_bayes import GaussianNB for mod in [RandomForestClassifier(), SVC(), GaussianNB()]: mod.fit(X_train, y_train[:, 1]) y_pred = mod.predict(X_test) print("="*80) print(mod) print("-"*80) print("Accuracy score: {:0.3}".format(accuracy_score(y_test_class, y_pred))) print("Confusion Matrix:") print(confusion_matrix(y_test_class, y_pred)) print() ###Output _____no_output_____ ###Markdown Deep Learning Intro ###Code %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import numpy as np ###Output _____no_output_____ ###Markdown Exercise 1 The [Pima Indians dataset](https://archive.ics.uci.edu/ml/datasets/diabetes) is a very famous dataset distributed by UCI and originally collected from the National Institute of Diabetes and Digestive and Kidney Diseases. It contains data from clinical exams for women age 21 and above of Pima indian origins. The objective is to predict based on diagnostic measurements whether a patient has diabetes.It has the following features:- Pregnancies: Number of times pregnant- Glucose: Plasma glucose concentration a 2 hours in an oral glucose tolerance test- BloodPressure: Diastolic blood pressure (mm Hg)- SkinThickness: Triceps skin fold thickness (mm)- Insulin: 2-Hour serum insulin (mu U/ml)- BMI: Body mass index (weight in kg/(height in m)^2)- DiabetesPedigreeFunction: Diabetes pedigree function- Age: Age (years)The last colum is the outcome, and it is a binary variable.In this first exercise we will explore it through the following steps:1. Load the ..data/diabetes.csv dataset, use pandas to explore the range of each feature- For each feature draw a histogram. Bonus points if you draw all the histograms in the same figure.- Explore correlations of features with the outcome column. 
You can do this in several ways, for example using the `sns.pairplot` we used above or drawing a heatmap of the correlations.- Do features need standardization? If so what stardardization technique will you use? MinMax? Standard?- Prepare your final `X` and `y` variables to be used by a ML model. Make sure you define your target variable well. Will you need dummy columns? ###Code df = pd.read_csv('../data/diabetes.csv') df.head() _ = df.hist(figsize=(12, 10)) import seaborn as sns sns.pairplot(df, hue='Outcome') sns.heatmap(df.corr(), annot = True) df.info() df.describe() from sklearn.preprocessing import StandardScaler from keras.utils import to_categorical sc = StandardScaler() X = sc.fit_transform(df.drop('Outcome', axis=1)) y = df['Outcome'].values y_cat = to_categorical(y) X.shape y_cat.shape ###Output _____no_output_____ ###Markdown Exercise 2 Build a fully connected NN model that predicts diabetes. Follow these steps:1. Split your data in a train/test with a test size of 20% and a `random_state = 22`- define a sequential model with at least one inner layer. You will have to make choices for the following things: - what is the size of the input? - how many nodes will you use in each layer? - what is the size of the output? - what activation functions will you use in the inner layers? - what activation function will you use at output? - what loss function will you use? 
- what optimizer will you use?- fit your model on the training set, using a validation_split of 0.1- test your trained model on the test data from the train/test split- check the accuracy score, the confusion matrix and the classification report ###Code X.shape from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y_cat, random_state=22, test_size=0.2) from keras.models import Sequential from keras.layers import Dense from keras.optimizers import Adam model = Sequential() model.add(Dense(32, input_shape=(8,), activation='relu')) model.add(Dense(32, activation='relu')) model.add(Dense(2, activation='softmax')) model.compile(Adam(lr=0.05), loss='categorical_crossentropy', metrics=['accuracy']) model.summary() 32*8 + 32 model.fit(X_train, y_train, epochs=20, verbose=2, validation_split=0.1) y_pred = model.predict(X_test) y_test_class = np.argmax(y_test, axis=1) y_pred_class = np.argmax(y_pred, axis=1) from sklearn.metrics import accuracy_score from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix pd.Series(y_test_class).value_counts() / len(y_test_class) accuracy_score(y_test_class, y_pred_class) print(classification_report(y_test_class, y_pred_class)) confusion_matrix(y_test_class, y_pred_class) ###Output _____no_output_____ ###Markdown Exercise 3Compare your work with the results presented in [this notebook](https://www.kaggle.com/sheshu/pima-data-visualisation-and-machine-learning). Are your Neural Network results better or worse than the results obtained by traditional Machine Learning techniques?- Try training a Support Vector Machine or a Random Forest model on the exact same train/test split. Is the performance better or worse?- Try restricting your features to only 4 features like in the suggested notebook. How does model performance change? 
###Code from sklearn.ensemble import RandomForestClassifier from sklearn.svm import SVC from sklearn.naive_bayes import GaussianNB for mod in [RandomForestClassifier(), SVC(), GaussianNB()]: mod.fit(X_train, y_train[:, 1]) y_pred = mod.predict(X_test) print("="*80) print(mod) print("-"*80) print("Accuracy score: {:0.3}".format(accuracy_score(y_test_class, y_pred))) print("Confusion Matrix:") print(confusion_matrix(y_test_class, y_pred)) print() ###Output _____no_output_____ ###Markdown Deep Learning Intro ###Code %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import numpy as np ###Output _____no_output_____ ###Markdown Exercise 1 The [Pima Indians dataset](https://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes) is a very famous dataset distributed by UCI and originally collected from the National Institute of Diabetes and Digestive and Kidney Diseases. It contains data from clinical exams for women age 21 and above of Pima indian origins. The objective is to predict based on diagnostic measurements whether a patient has diabetes.It has the following features:- Pregnancies: Number of times pregnant- Glucose: Plasma glucose concentration a 2 hours in an oral glucose tolerance test- BloodPressure: Diastolic blood pressure (mm Hg)- SkinThickness: Triceps skin fold thickness (mm)- Insulin: 2-Hour serum insulin (mu U/ml)- BMI: Body mass index (weight in kg/(height in m)^2)- DiabetesPedigreeFunction: Diabetes pedigree function- Age: Age (years)The last colum is the outcome, and it is a binary variable.In this first exercise we will explore it through the following steps:1. Load the ..data/diabetes.csv dataset, use pandas to explore the range of each feature- For each feature draw a histogram. Bonus points if you draw all the histograms in the same figure.- Explore correlations of features with the outcome column. 
You can do this in several ways, for example using the `sns.pairplot` we used above or drawing a heatmap of the correlations.- Do features need standardization? If so what stardardization technique will you use? MinMax? Standard?- Prepare your final `X` and `y` variables to be used by a ML model. Make sure you define your target variable well. Will you need dummy columns? ###Code df = pd.read_csv('../data/diabetes.csv') df.head() _ = df.hist(figsize=(12, 10)) import seaborn as sns sns.pairplot(df, hue='Outcome') sns.heatmap(df.corr(), annot = True) df.info() df.describe() from sklearn.preprocessing import StandardScaler from keras.utils import to_categorical sc = StandardScaler() X = sc.fit_transform(df.drop('Outcome', axis=1)) y = df['Outcome'].values y_cat = to_categorical(y) X.shape y_cat.shape ###Output _____no_output_____ ###Markdown Exercise 2 Build a fully connected NN model that predicts diabetes. Follow these steps:1. Split your data in a train/test with a test size of 20% and a `random_state = 22`- define a sequential model with at least one inner layer. You will have to make choices for the following things: - what is the size of the input? - how many nodes will you use in each layer? - what is the size of the output? - what activation functions will you use in the inner layers? - what activation function will you use at output? - what loss function will you use? 
- what optimizer will you use?- fit your model on the training set, using a validation_split of 0.1- test your trained model on the test data from the train/test split- check the accuracy score, the confusion matrix and the classification report ###Code X.shape from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y_cat, random_state=22, test_size=0.2) from keras.models import Sequential from keras.layers import Dense from keras.optimizers import Adam model = Sequential() model.add(Dense(32, input_shape=(8,), activation='relu')) model.add(Dense(32, activation='relu')) model.add(Dense(2, activation='softmax')) model.compile(Adam(lr=0.05), loss='categorical_crossentropy', metrics=['accuracy']) model.summary() 32*8 + 32 model.fit(X_train, y_train, epochs=20, verbose=2, validation_split=0.1) y_pred = model.predict(X_test) y_test_class = np.argmax(y_test, axis=1) y_pred_class = np.argmax(y_pred, axis=1) from sklearn.metrics import accuracy_score from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix pd.Series(y_test_class).value_counts() / len(y_test_class) accuracy_score(y_test_class, y_pred_class) print(classification_report(y_test_class, y_pred_class)) confusion_matrix(y_test_class, y_pred_class) ###Output _____no_output_____ ###Markdown Exercise 3Compare your work with the results presented in [this notebook](https://www.kaggle.com/futurist/d/uciml/pima-indians-diabetes-database/pima-data-visualisation-and-machine-learning). Are your Neural Network results better or worse than the results obtained by traditional Machine Learning techniques?- Try training a Support Vector Machine or a Random Forest model on the exact same train/test split. Is the performance better or worse?- Try restricting your features to only 4 features like in the suggested notebook. How does model performance change? 
###Code from sklearn.ensemble import RandomForestClassifier from sklearn.svm import SVC from sklearn.naive_bayes import GaussianNB for mod in [RandomForestClassifier(), SVC(), GaussianNB()]: mod.fit(X_train, y_train[:, 1]) y_pred = mod.predict(X_test) print("="*80) print(mod) print("-"*80) print("Accuracy score: {:0.3}".format(accuracy_score(y_test_class, y_pred))) print("Confusion Matrix:") print(confusion_matrix(y_test_class, y_pred)) print() ###Output _____no_output_____ ###Markdown Deep Learning Intro ###Code %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import numpy as np ###Output _____no_output_____ ###Markdown Exercise 1 The [Pima Indians dataset](https://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes) is a very famous dataset distributed by UCI and originally collected from the National Institute of Diabetes and Digestive and Kidney Diseases. It contains data from clinical exams for women age 21 and above of Pima indian origins. The objective is to predict based on diagnostic measurements whether a patient has diabetes.It has the following features:- Pregnancies: Number of times pregnant- Glucose: Plasma glucose concentration a 2 hours in an oral glucose tolerance test- BloodPressure: Diastolic blood pressure (mm Hg)- SkinThickness: Triceps skin fold thickness (mm)- Insulin: 2-Hour serum insulin (mu U/ml)- BMI: Body mass index (weight in kg/(height in m)^2)- DiabetesPedigreeFunction: Diabetes pedigree function- Age: Age (years)The last colum is the outcome, and it is a binary variable.In this first exercise we will explore it through the following steps:1. Load the ..data/diabetes.csv dataset, use pandas to explore the range of each feature- For each feature draw a histogram. Bonus points if you draw all the histograms in the same figure.- Explore correlations of features with the outcome column. 
You can do this in several ways, for example using the `sns.pairplot` we used above or drawing a heatmap of the correlations.- Do features need standardization? If so what stardardization technique will you use? MinMax? Standard?- Prepare your final `X` and `y` variables to be used by a ML model. Make sure you define your target variable well. Will you need dummy columns? ###Code df = pd.read_csv('../data/diabetes.csv') df.head() _ = df.hist(figsize=(12, 10)) import seaborn as sns sns.pairplot(df, hue='Outcome') sns.heatmap(df.corr(), annot = True) df.info() df.describe() from sklearn.preprocessing import StandardScaler from keras.utils import to_categorical sc = StandardScaler() X = sc.fit_transform(df.drop('Outcome', axis=1)) y = df['Outcome'].values y_cat = to_categorical(y) X.shape y_cat.shape ###Output _____no_output_____ ###Markdown Exercise 2 Build a fully connected NN model that predicts diabetes. Follow these steps:1. Split your data in a train/test with a test size of 20% and a `random_state = 22`- define a sequential model with at least one inner layer. You will have to make choices for the following things: - what is the size of the input? - how many nodes will you use in each layer? - what is the size of the output? - what activation functions will you use in the inner layers? - what activation function will you use at output? - what loss function will you use? 
- what optimizer will you use?- fit your model on the training set, using a validation_split of 0.1- test your trained model on the test data from the train/test split- check the accuracy score, the confusion matrix and the classification report ###Code X.shape from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y_cat, random_state=22, test_size=0.2) from keras.models import Sequential from keras.layers import Dense from keras.optimizers import Adam model = Sequential() model.add(Dense(32, input_shape=(8,), activation='relu')) model.add(Dense(32, activation='relu')) model.add(Dense(2, activation='softmax')) model.compile(Adam(lr=0.05), loss='categorical_crossentropy', metrics=['accuracy']) model.summary() 32*8 + 32 model.fit(X_train, y_train, epochs=20, verbose=2, validation_split=0.1) y_pred = model.predict(X_test) y_test_class = np.argmax(y_test, axis=1) y_pred_class = np.argmax(y_pred, axis=1) from sklearn.metrics import accuracy_score from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix pd.Series(y_test_class).value_counts() / len(y_test_class) accuracy_score(y_test_class, y_pred_class) print(classification_report(y_test_class, y_pred_class)) confusion_matrix(y_test_class, y_pred_class) ###Output _____no_output_____ ###Markdown Exercise 3Compare your work with the results presented in [this notebook](https://www.kaggle.com/futurist/d/uciml/pima-indians-diabetes-database/pima-data-visualisation-and-machine-learning). Are your Neural Network results better or worse than the results obtained by traditional Machine Learning techniques?- Try training a Support Vector Machine or a Random Forest model on the exact same train/test split. Is the performance better or worse?- Try restricting your features to only 4 features like in the suggested notebook. How does model performance change? 
###Code from sklearn.ensemble import RandomForestClassifier from sklearn.svm import SVC from sklearn.naive_bayes import GaussianNB for mod in [RandomForestClassifier(), SVC(), GaussianNB()]: mod.fit(X_train, y_train[:, 1]) y_pred = mod.predict(X_test) print("="*80) print(mod) print("-"*80) print("Accuracy score: {:0.3}".format(accuracy_score(y_test_class, y_pred))) print("Confusion Matrix:") print(confusion_matrix(y_test_class, y_pred)) print() ###Output ================================================================================ RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini', max_depth=None, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start=False) -------------------------------------------------------------------------------- Accuracy score: 0.734 Confusion Matrix: [[94 6] [35 19]] ================================================================================ SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, decision_function_shape='ovr', degree=3, gamma='auto_deprecated', kernel='rbf', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False) -------------------------------------------------------------------------------- Accuracy score: 0.721 Confusion Matrix: [[89 11] [32 22]] ================================================================================ GaussianNB(priors=None, var_smoothing=1e-09) -------------------------------------------------------------------------------- Accuracy score: 0.708 Confusion Matrix: [[87 13] [32 22]] ###Markdown Deep Learning Intro ###Code %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import numpy as np ###Output _____no_output_____ ###Markdown Exercise 1 The [Pima Indians 
dataset](https://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes) is a very famous dataset distributed by UCI and originally collected from the National Institute of Diabetes and Digestive and Kidney Diseases. It contains data from clinical exams for women age 21 and above of Pima indian origins. The objective is to predict based on diagnostic measurements whether a patient has diabetes.It has the following features:- Pregnancies: Number of times pregnant- Glucose: Plasma glucose concentration a 2 hours in an oral glucose tolerance test- BloodPressure: Diastolic blood pressure (mm Hg)- SkinThickness: Triceps skin fold thickness (mm)- Insulin: 2-Hour serum insulin (mu U/ml)- BMI: Body mass index (weight in kg/(height in m)^2)- DiabetesPedigreeFunction: Diabetes pedigree function- Age: Age (years)The last colum is the outcome, and it is a binary variable.In this first exercise we will explore it through the following steps:1. Load the ..data/diabetes.csv dataset, use pandas to explore the range of each feature- For each feature draw a histogram. Bonus points if you draw all the histograms in the same figure.- Explore correlations of features with the outcome column. You can do this in several ways, for example using the `sns.pairplot` we used above or drawing a heatmap of the correlations.- Do features need standardization? If so what stardardization technique will you use? MinMax? Standard?- Prepare your final `X` and `y` variables to be used by a ML model. Make sure you define your target variable well. Will you need dummy columns? 
###Code df = pd.read_csv('../data/diabetes.csv') df.head() _ = df.hist(figsize=(12, 10)) import seaborn as sns sns.pairplot(df, hue='Outcome') sns.heatmap(df.corr(), annot = True) df.info() df.describe() from sklearn.preprocessing import StandardScaler from keras.utils import to_categorical sc = StandardScaler() X = sc.fit_transform(df.drop('Outcome', axis=1)) y = df['Outcome'].values y_cat = to_categorical(y) X.shape y_cat.shape ###Output _____no_output_____ ###Markdown Exercise 2 Build a fully connected NN model that predicts diabetes. Follow these steps:1. Split your data in a train/test with a test size of 20% and a `random_state = 22`- define a sequential model with at least one inner layer. You will have to make choices for the following things: - what is the size of the input? - how many nodes will you use in each layer? - what is the size of the output? - what activation functions will you use in the inner layers? - what activation function will you use at output? - what loss function will you use? 
- what optimizer will you use?- fit your model on the training set, using a validation_split of 0.1- test your trained model on the test data from the train/test split- check the accuracy score, the confusion matrix and the classification report ###Code X.shape from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y_cat, random_state=22, test_size=0.2) from keras.models import Sequential from keras.layers import Dense from keras.optimizers import Adam model = Sequential() model.add(Dense(32, input_shape=(8,), activation='relu')) model.add(Dense(32, activation='relu')) model.add(Dense(2, activation='softmax')) model.compile(Adam(lr=0.05), loss='categorical_crossentropy', metrics=['accuracy']) model.summary() 32*8 + 32 model.fit(X_train, y_train, epochs=20, verbose=2, validation_split=0.1) y_pred = model.predict(X_test) y_test_class = np.argmax(y_test, axis=1) y_pred_class = np.argmax(y_pred, axis=1) from sklearn.metrics import accuracy_score from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix pd.Series(y_test_class).value_counts() / len(y_test_class) accuracy_score(y_test_class, y_pred_class) print(classification_report(y_test_class, y_pred_class)) confusion_matrix(y_test_class, y_pred_class) ###Output _____no_output_____ ###Markdown Exercise 3Compare your work with the results presented in [this notebook](https://www.kaggle.com/futurist/d/uciml/pima-indians-diabetes-database/pima-data-visualisation-and-machine-learning). Are your Neural Network results better or worse than the results obtained by traditional Machine Learning techniques?- Try training a Support Vector Machine or a Random Forest model on the exact same train/test split. Is the performance better or worse?- Try restricting your features to only 4 features like in the suggested notebook. How does model performance change? 
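For the last bullet — restricting to four features — one possible sketch is below. The particular columns named in the usage comment (Glucose, BMI, Age, DiabetesPedigreeFunction) are an assumption for illustration; the referenced notebook may use a different subset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def fit_on_subset(df, feature_cols, target_col="Outcome", random_state=22):
    """Standardize only the chosen columns, refit on the same kind of split,
    and return the test accuracy."""
    X = StandardScaler().fit_transform(df[feature_cols])
    y = df[target_col].values
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=random_state)
    clf = RandomForestClassifier(n_estimators=100, random_state=random_state)
    clf.fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

# hypothetical usage on the notebook's dataframe:
# fit_on_subset(df, ["Glucose", "BMI", "Age", "DiabetesPedigreeFunction"])
```

Comparing this score against the full-feature run gives a direct answer to "how does model performance change?".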
###Code from sklearn.ensemble import RandomForestClassifier from sklearn.svm import SVC from sklearn.naive_bayes import GaussianNB for mod in [RandomForestClassifier(), SVC(), GaussianNB()]: mod.fit(X_train, y_train[:, 1]) y_pred = mod.predict(X_test) print("="*80) print(mod) print("-"*80) print("Accuracy score: {:0.3}".format(accuracy_score(y_test_class, y_pred))) print("Confusion Matrix:") print(confusion_matrix(y_test_class, y_pred)) print() ###Output ================================================================================ RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini', max_depth=None, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1, oob_score=False, random_state=None, verbose=0, warm_start=False) -------------------------------------------------------------------------------- Accuracy score: 0.753 Confusion Matrix: [[95 5] [33 21]] ================================================================================ SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, decision_function_shape='ovr', degree=3, gamma='auto', kernel='rbf', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False) -------------------------------------------------------------------------------- Accuracy score: 0.721 Confusion Matrix: [[89 11] [32 22]] ================================================================================ GaussianNB(priors=None) -------------------------------------------------------------------------------- Accuracy score: 0.708 Confusion Matrix: [[87 13] [32 22]]
14_linear_algebra/14_String_Problem-Students-1.ipynb
###Markdown 14 Linear Algebra: String Problem – Students (1) Motivating problem: Two masses on three stringsTwo masses $M_1$ and $M_2$ are hung from a horizontal rod with length $L$ in such a way that a rope of length $L_1$ connects the left end of the rod to $M_1$, a rope of length $L_2$ connects $M_1$ and $M_2$, and a rope of length $L_3$ connects $M_2$ to the right end of the rod. The system is at rest (in equilibrium under gravity).![Schematic of the 1 rod/2 masses/3 strings problem.](1rod2masses3strings.svg)Find the angles that the ropes make with the rod and the tension forces in the ropes. Theoretical backgroundTreat $\sin\theta_i$ and $\cos\theta_i$ together with $T_i$, $1\leq i \leq 3$, as unknowns that have to simultaneously fulfill the nine equations\begin{align}-T_1 \cos\theta_1 + T_2\cos\theta_2 &= 0\\ T_1 \sin\theta_1 - T_2\sin\theta_2 - W_1 &= 0\\ -T_2\cos\theta_2 + T_3\cos\theta_3 &= 0\\ T_2\sin\theta_2 + T_3\sin\theta_3 - W_2 &= 0\\ L_1\cos\theta_1 + L_2\cos\theta_2 + L_3\cos\theta_3 - L &= 0\\-L_1\sin\theta_1 - L_2\sin\theta_2 + L_3\sin\theta_3 &= 0\\\sin^2\theta_1 + \cos^2\theta_1 - 1 &= 0\\\sin^2\theta_2 + \cos^2\theta_2 - 1 &= 0\\\sin^2\theta_3 + \cos^2\theta_3 - 1 &= 0\end{align}Consider the nine equations a vector function $\mathbf{f}$ that takes a 9-vector $\mathbf{x}$ of the unknowns as argument:\begin{align}\mathbf{f}(\mathbf{x}) &= 0\\\mathbf{x} &= \left(\begin{array}{c}x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \\ x_8\end{array}\right) =\left(\begin{array}{c}\sin\theta_1 \\ \sin\theta_2 \\ \sin\theta_3 \\\cos\theta_1 \\ \cos\theta_2 \\ \cos\theta_3 \\T_1 \\ T_2 \\ T_3\end{array}\right) \\\mathbf{L} &= \left(\begin{array}{c}L \\ L_1 \\ L_2 \\ L_3\end{array}\right), \quad\mathbf{W} = \left(\begin{array}{c}W_1 \\ W_2\end{array}\right)\end{align} Solve with generalized Newton-Raphson:$$\mathsf{J}(\mathbf{x}) \Delta\mathbf{x} = -\mathbf{f}(\mathbf{x})$$and $$\mathbf{x} \leftarrow \mathbf{x} + \Delta\mathbf{x}.$$ Problem setupSet the 
problem parameters and the objective function $\mathbf{f}(\mathbf{x})$ ###Code import numpy as np # problem parameters W = np.array([10, 20]) L = np.array([8, 3, 4, 4]) def f_2masses(x, L, W): return np.array([ -x[6]*x[3] + x[7]*x[4], x[6]*x[0] - x[7]*x[1] - W[0], # Eq 3 # Eq 4 # Eq 5 # Eq 6 x[0]**2 + x[3]**2 - 1, # Eq 8 # Eq 9 ]) def fLW(x): return f_2masses(x, L, W) ###Output _____no_output_____ ###Markdown Initial valuesGuess some initial values (they don't have to fulfill the equations!): ###Code # initial parameters x0 # ... x0 = ###Output _____no_output_____ ###Markdown Check that we can calculate $\mathbf{f}(\mathbf{x}_0)$: ###Code f_2masses(x0, L, W) ###Output _____no_output_____ ###Markdown VisualizationPlot the positions of the 2 masses and the 3 strings for any solution vector $\mathbf{x}$: ###Code import matplotlib import matplotlib.pyplot as plt %matplotlib inline def plot_2masses(x, L, W): r0 = np.array([0, 0]) r1 = r0 + np.array([L[0], 0]) rod = np.transpose([r0, r1]) L1 = r0 + np.array([L[1]*x[3], -L[1]*x[0]]) L2 = L1 + np.array([L[2]*x[4], -L[2]*x[1]]) L3 = L2 + np.array([L[3]*x[5], L[3]*x[2]]) strings = np.transpose([r0, L1, L2, L3]) ax = plt.subplot(111) ax.plot(rod[0], rod[1], color="black", marker="d", linewidth=4) ax.plot(strings[0], strings[1], marker="o", linestyle="-", linewidth=1) ax.set_aspect(1) return ax ###Output _____no_output_____ ###Markdown What does the initial guess look like? ###Code plot_2masses(x0, L, W) ###Output _____no_output_____ ###Markdown Jacobian Write a function `Jacobian(f, x, h=1e-5)` that computes the Jacobian matrix numerically (use the central difference algorithm).
###Code def Jacobian(f, x, h=1e-5): """df_i/dx_j with central difference (f(x+h/2)-f(x-h/2))/h""" J = np.zeros((len(f(x)), len(x)), dtype=np.float64) raise NotImplementedError return J ###Output _____no_output_____ ###Markdown Test Jacobian on $$\mathbf{f}(\mathbf{x}) = \left( \begin{array}{c} x_0^2 - x_1 \\ x_0 \end{array}\right)$$with analytical result (compute it analytically!)$$\mathsf{J} = \frac{\partial f_i}{\partial x_j} = \ ?$$ ###Code def ftest(x): return np.array([ # Eq Test 1 # Eq Test 2 ]) x0test = # choose a simple test vector # run function and print result # compare to analytical result ###Output _____no_output_____ ###Markdown Just testing that it also works for our starting vector: ###Code Jacobian(fLW, x0) ###Output _____no_output_____ ###Markdown n-D Newton-Raphson Root Finding Write a function `newton_raphson(f, x, Nmax=100, tol=1e-8, h=1e-5)` to find a root for a vector function `f(x)=0`. (See also [13 Root-finding by trial-and-error](http://asu-compmethodsphysics-phy494.github.io/ASU-PHY494//2017/03/16/13_Root_finding/) and the _1D Newton-Raphson algorithm_ in [13-Root-finding.ipynb](https://github.com/ASU-CompMethodsPhysics-PHY494/PHY494-resources/blob/master/13_root_finding/13-Root-finding.ipynb).) As a convergence criterion we demand that the length of the vector `f(x)` (the norm --- see `np.linalg.norm`) be less than the tolerance. ###Code def newton_raphson(f, x, Nmax=100, tol=1e-8, h=1e-5): """n-D Newton-Raphson: solves f(x) = 0. Iterate until |f(x)| < tol or nmax steps. 
""" x = x.copy() raise NotImplementedError else: print("Newton-Raphson: no root found after {0} iterations (eps={1}); " "best guess is {2} with error {3}".format(Nmax, tol, x, fx)) return x ###Output _____no_output_____ ###Markdown Solve 2 masses/3 strings problem Solution ###Code # solve the string problem x = ###Output _____no_output_____ ###Markdown Plot the starting configuration and the solution: ###Code plot_2masses(x0, L, W) plot_2masses(x, L, W) ###Output _____no_output_____ ###Markdown Pretty-print the solution (angles in degrees): ###Code def pretty_print(x): theta = np.rad2deg(np.arcsin(x[0:3])) tensions = x[6:] print("theta1 = {0[0]:.1f} \t theta2 = {0[1]:.1f} \t theta3 = {0[2]:.1f}".format(theta)) print("T1 = {0[0]:.1f} \t T2 = {0[1]:.1f} \t T3 = {0[2]:.1f}".format(tensions)) print("Starting values") pretty_print(x0) print() print("Solution") pretty_print(x) ###Output _____no_output_____ ###Markdown Show intermediate stepsCreate a new function `newton_raphson_intermediates()` based on `newtopn_raphson()` that returns *all* trial `x` values including the last one. ###Code def newton_raphson_intermediates(f, x, Nmax=100, tol=1e-8, h=1e-5): """n-D Newton-Raphson: solves f(x) = 0. Iterate until |f(x)| < tol or nmax steps. Returns all intermediates. 
""" intermediates = [] x = x.copy() raise NotImplementedError else: print("Newton-Raphson: no root found after {0} iterations (eps={1}); " "best guess is {2} with error {3}".format(Nmax, tol, x, fx)) return np.array(intermediates) ###Output _____no_output_____ ###Markdown Visualize the intermediate configurations: ###Code x_series = newton_raphson_intermediates(fLW, x0) ax = plt.subplot(111) ax.set_prop_cycle("color", [plt.cm.viridis_r(i) for i in np.linspace(0, 1, len(x_series))]) for x in x_series: plot_2masses(x, L, W) ###Output _____no_output_____ ###Markdown It's convenient to turn the above plotting code into a function that we can reuse: ###Code def plot_series(x_series, L, W): """Plot all N masses/strings solution vectors in x_series (N, 9) array""" ax = plt.subplot(111) ax.set_prop_cycle("color", [plt.cm.viridis_r(i) for i in np.linspace(0, 1, len(x_series))]) for x in x_series: plot_2masses(x, L, W) return ax ###Output _____no_output_____ ###Markdown 14 Linear Algebra: String Problem – Students (1) Motivating problem: Two masses on three stringsTwo masses $M_1$ and $M_2$ are hung from a horizontal rod with length $L$ in such a way that a rope of length $L_1$ connects the left end of the rod to $M_1$, a rope of length $L_2$ connects $M_1$ and $M_2$, and a rope of length $L_3$ connects $M_2$ to the right end of the rod. The system is at rest (in equilibrium under gravity).![Schematic of the 1 rod/2 masses/3 strings problem.](1rod2masses3strings.svg)Find the angles that the ropes make with the rod and the tension forces in the ropes. 
Theoretical backgroundTreat $\sin\theta_i$ and $\cos\theta_i$ together with $T_i$, $1\leq i \leq 3$, as unknowns that have to simultaneously fulfill the nine equations\begin{align}-T_1 \cos\theta_1 + T_2\cos\theta_2 &= 0\\ T_1 \sin\theta_1 - T_2\sin\theta_2 - W_1 &= 0\\ -T_2\cos\theta_2 + T_3\cos\theta_3 &= 0\\ T_2\sin\theta_2 + T_3\sin\theta_3 - W_2 &= 0\\ L_1\cos\theta_1 + L_2\cos\theta_2 + L_3\cos\theta_3 - L &= 0\\-L_1\sin\theta_1 - L_2\sin\theta_2 + L_3\sin\theta_3 &= 0\\\sin^2\theta_1 + \cos^2\theta_1 - 1 &= 0\\\sin^2\theta_2 + \cos^2\theta_2 - 1 &= 0\\\sin^2\theta_3 + \cos^2\theta_3 - 1 &= 0\end{align}Consider the nine equations a vector function $\mathbf{f}$ that takes a 9-vector $\mathbf{x}$ of the unknowns as argument:\begin{align}\mathbf{f}(\mathbf{x}) &= 0\\\mathbf{x} &= \left(\begin{array}{c}x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \\ x_8\end{array}\right) =\left(\begin{array}{c}\sin\theta_1 \\ \sin\theta_2 \\ \sin\theta_3 \\\cos\theta_1 \\ \cos\theta_2 \\ \cos\theta_3 \\T_1 \\ T_2 \\ T_3\end{array}\right) \\\mathbf{L} &= \left(\begin{array}{c}L \\ L_1 \\ L_2 \\ L_3\end{array}\right), \quad\mathbf{W} = \left(\begin{array}{c}W_1 \\ W_2\end{array}\right)\end{align} Solve with generalized Newton-Raphson:$$\mathsf{J}(\mathbf{x}) \Delta\mathbf{x} = -\mathbf{f}(\mathbf{x})$$and $$\mathbf{x} \leftarrow \mathbf{x} + \Delta\mathbf{x}.$$ Problem setupSet the problem parameters and the objective function $\mathbf{f}(\mathbf{x})$ ###Code import numpy as np # problem parameters W = np.array([10, 20]) L = np.array([8, 3, 4, 4]) def f_2masses(x, L, W): return np.array([ -x[6]*x[3] + x[7]*x[4], x[6]*x[0] - x[7]*x[1] - W[0], -x[7]*x[4] + x[8]*x[5], # Eq 4 # Eq 5 -L[1]*x[0] - L[2]*x[1] + L[3]*x[2], x[0]**2 + x[3]**2 - 1, # Eq 8 x[2]**2 + x[5]**2 - 1, ]) def fLW(x): return f_2masses(x, L, W) ###Output _____no_output_____ ###Markdown Initial valuesGuess some initial values (they don't have to fulfill the equations!): ###Code # initial parameters x0 # ...
x0 = ###Output _____no_output_____ ###Markdown Check that we can calculate $\mathbf{f}(\mathbf{x}_0)$: ###Code f_2masses(x0, L, W) ###Output _____no_output_____ ###Markdown VisualizationPlot the positions of the 2 masses and the 3 strings for any solution vector $\mathbf{x}$: ###Code import matplotlib import matplotlib.pyplot as plt %matplotlib inline def plot_2masses(x, L, W): r0 = np.array([0, 0]) r1 = r0 + np.array([L[0], 0]) rod = np.transpose([r0, r1]) L1 = r0 + np.array([L[1]*x[3], -L[1]*x[0]]) L2 = L1 + np.array([L[2]*x[4], -L[2]*x[1]]) L3 = L2 + np.array([L[3]*x[5], L[3]*x[2]]) strings = np.transpose([r0, L1, L2, L3]) ax = plt.subplot(111) ax.plot(rod[0], rod[1], color="black", marker="d", linewidth=4) ax.plot(strings[0], strings[1], marker="o", linestyle="-", linewidth=1) ax.set_aspect(1) return ax ###Output _____no_output_____ ###Markdown What does the initial guess look like? ###Code plot_2masses(x0, L, W) ###Output _____no_output_____ ###Markdown Jacobian Write a function `Jacobian(f, x, h=1e-5)` that computes the Jacobian matrix numerically (use the central difference algorithm).\begin{align}\mathbf{J} &= \frac{\partial \mathbf{f}}{\partial\mathbf{x}} \\J_{ij} &= \frac{\partial f_i(x_1, \dots, x_j, \dots)}{\partial x_j} \\ &\approx \frac{f_i(x_1, \dots, x_j + \frac{h}{2}, \dots) - f_i(x_1, \dots, x_j - \frac{h}{2}, \dots)}{h}\end{align} ###Code def Jacobian(f, x, h=1e-5): """df_i/dx_j with central difference (fi(xj+h/2)-fi(xj-h/2))/h""" J = np.zeros((len(f(x)), len(x)), dtype=np.float64) raise NotImplementedError return J ###Output _____no_output_____ ###Markdown Test Jacobian on $$\mathbf{f}(\mathbf{x}) = \left( \begin{array}{c} x_0^2 - x_1 \\ x_0 \end{array}\right)$$with analytical result (compute it analytically!)$$\mathsf{J} = \frac{\partial f_i}{\partial x_j} = \ ?$$ ###Code def ftest(x): return np.array([ # Eq Test 1 # Eq Test 2 ]) x0test = # choose a simple test vector # run function and print result # compare to analytical result ###Output 
_____no_output_____ ###Markdown Just testing that it also works for our starting vector: ###Code Jacobian(fLW, x0) ###Output _____no_output_____ ###Markdown n-D Newton-Raphson Root Finding Write a function `newton_raphson(f, x, Nmax=100, tol=1e-8, h=1e-5)` to find a root for a vector function `f(x)=0`. (See also [13 Root-finding by trial-and-error](https://asu-compmethodsphysics-phy494.github.io/ASU-PHY494/2019/03/19/13_Root_finding/) and the _1D Newton-Raphson algorithm_ in [13-Root-finding.ipynb](https://github.com/ASU-CompMethodsPhysics-PHY494/PHY494-resources/blob/master/13_root_finding/13-Root-finding.ipynb).) As a convergence criterion we demand that the length of the vector `f(x)` (the norm --- see `np.linalg.norm`) be less than the tolerance. ###Code def newton_raphson(f, x, Nmax=100, tol=1e-8, h=1e-5): """n-D Newton-Raphson: solves f(x) = 0. Iterate until |f(x)| < tol or nmax steps. """ x = x.copy() raise NotImplementedError else: print("Newton-Raphson: no root found after {0} iterations (eps={1}); " "best guess is {2} with error {3}".format(Nmax, tol, x, fx)) return x ###Output _____no_output_____ ###Markdown Solve 2 masses/3 strings problem Solution ###Code # solve the string problem x = ###Output _____no_output_____ ###Markdown Plot the starting configuration and the solution: ###Code plot_2masses(x0, L, W) plot_2masses(x, L, W) ###Output _____no_output_____ ###Markdown Pretty-print the solution (angles in degrees): ###Code def pretty_print(x): theta = np.rad2deg(np.arcsin(x[0:3])) tensions = x[6:] print("theta1 = {0[0]:.1f} \t theta2 = {0[1]:.1f} \t theta3 = {0[2]:.1f}".format(theta)) print("T1 = {0[0]:.1f} \t T2 = {0[1]:.1f} \t T3 = {0[2]:.1f}".format(tensions)) print("Starting values") pretty_print(x0) print() print("Solution") pretty_print(x) ###Output _____no_output_____ ###Markdown Show intermediate stepsCreate a new function `newton_raphson_intermediates()` based on `newton_raphson()` that returns *all* trial `x` values including the last one.
###Code def newton_raphson_intermediates(f, x, Nmax=100, tol=1e-8, h=1e-5): """n-D Newton-Raphson: solves f(x) = 0. Iterate until |f(x)| < tol or nmax steps. Returns all intermediates. """ intermediates = [] x = x.copy() raise NotImplementedError else: print("Newton-Raphson: no root found after {0} iterations (eps={1}); " "best guess is {2} with error {3}".format(Nmax, tol, x, fx)) return np.array(intermediates) ###Output _____no_output_____ ###Markdown Visualize the intermediate configurations: ###Code x_series = newton_raphson_intermediates(fLW, x0) ax = plt.subplot(111) ax.set_prop_cycle("color", [plt.cm.viridis_r(i) for i in np.linspace(0, 1, len(x_series))]) for x in x_series: plot_2masses(x, L, W) ###Output _____no_output_____ ###Markdown It's convenient to turn the above plotting code into a function that we can reuse: ###Code def plot_series(x_series, L, W): """Plot all N masses/strings solution vectors in x_series (N, 9) array""" ax = plt.subplot(111) ax.set_prop_cycle("color", [plt.cm.viridis_r(i) for i in np.linspace(0, 1, len(x_series))]) for x in x_series: plot_2masses(x, L, W) return ax ###Output _____no_output_____ ###Markdown 14 Linear Algebra: String Problem – Students (1) Motivating problem: Two masses on three stringsTwo masses $M_1$ and $M_2$ are hung from a horizontal rod with length $L$ in such a way that a rope of length $L_1$ connects the left end of the rod to $M_1$, a rope of length $L_2$ connects $M_1$ and $M_2$, and a rope of length $L_3$ connects $M_2$ to the right end of the rod. The system is at rest (in equilibrium under gravity).![Schematic of the 1 rod/2 masses/3 strings problem.](1rod2masses3strings.svg)Find the angles that the ropes make with the rod and the tension forces in the ropes. 
Theoretical backgroundTreat $\sin\theta_i$ and $\cos\theta_i$ together with $T_i$, $1\leq i \leq 3$, as unknowns that have to simultaneously fulfill the nine equations\begin{align}-T_1 \cos\theta_1 + T_2\cos\theta_2 &= 0\\ T_1 \sin\theta_1 - T_2\sin\theta_2 - W_1 &= 0\\ -T_2\cos\theta_2 + T_3\cos\theta_3 &= 0\\ T_2\sin\theta_2 + T_3\sin\theta_3 - W_2 &= 0\\ L_1\cos\theta_1 + L_2\cos\theta_2 + L_3\cos\theta_3 - L &= 0\\-L_1\sin\theta_1 - L_2\sin\theta_2 + L_3\sin\theta_3 &= 0\\\sin^2\theta_1 + \cos^2\theta_1 - 1 &= 0\\\sin^2\theta_2 + \cos^2\theta_2 - 1 &= 0\\\sin^2\theta_3 + \cos^2\theta_3 - 1 &= 0\end{align}Consider the nine equations a vector function $\mathbf{f}$ that takes a 9-vector $\mathbf{x}$ of the unknowns as argument:\begin{align}\mathbf{f}(\mathbf{x}) &= 0\\\mathbf{x} &= \begin{pmatrix}x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \\ x_8\end{pmatrix} =\begin{pmatrix}\sin\theta_1 \\ \sin\theta_2 \\ \sin\theta_3 \\\cos\theta_1 \\ \cos\theta_2 \\ \cos\theta_3 \\T_1 \\ T_2 \\ T_3\end{pmatrix} \\\mathbf{L} &= \left(\begin{array}{c}L \\ L_1 \\ L_2 \\ L_3\end{array}\right), \quad\mathbf{W} = \left(\begin{array}{c}W_1 \\ W_2\end{array}\right)\end{align} Using the unknowns from above, our system of 9 coupled equations is:\begin{align}-x_6 x_3 + x_7 x_4 &= 0\\ x_6 x_0 - x_7 x_1 - W_1 &= 0\\-x_7x_4 + x_8 x_5 &= 0\\ x_7x_1 + x_8 x_2 - W_2 &= 0\\ L_1x_3 + L_2 x_4 + L_3 x_5 - L &= 0\\-L_1x_0 - L_2 x_1 + L_3 x_2 &= 0\\x_{0}^{2} + x_{3}^{2} - 1 &= 0\\x_{1}^{2} + x_{4}^{2} - 1 &= 0\\x_{2}^{2} + x_{5}^{2} - 1 &= 0\end{align} Solve the root-finding problem $\mathbf{f}(\mathbf{x}) = 0$ with the **generalized Newton-Raphson** algorithm:$$\mathsf{J}(\mathbf{x}) \Delta\mathbf{x} = -\mathbf{f}(\mathbf{x})$$and $$\mathbf{x} \leftarrow \mathbf{x} + \Delta\mathbf{x}.$$ Problem setupSet the problem parameters and the objective function $\mathbf{f}(\mathbf{x})$ ###Code import numpy as np # problem parameters W = np.array([10, 20]) L = np.array([8, 3, 4, 4]) def f_2masses(x, 
L, W): return np.array([ -x[6]*x[3] + x[7]*x[4], x[6]*x[0] - x[7]*x[1] - W[0], -x[7]*x[4] + x[8]*x[5], # Eq 4 # Eq 5 -L[1]*x[0] - L[2]*x[1] + L[3]*x[2], x[0]**2 + x[3]**2 - 1, # Eq 8 x[2]**2 + x[5]**2 - 1, ]) def fLW(x, L=L, W=W): return f_2masses(x, L, W) ###Output _____no_output_____ ###Markdown Initial valuesGuess some initial values (they don't have to fulfill the equations!): ###Code # initial parameters x0 # ... x0 = ###Output _____no_output_____ ###Markdown Check that we can calculate $\mathbf{f}(\mathbf{x}_0)$: ###Code f_2masses(x0, L, W) ###Output _____no_output_____ ###Markdown VisualizationPlot the positions of the 2 masses and the 3 strings for any solution vector $\mathbf{x}$: ###Code import matplotlib import matplotlib.pyplot as plt %matplotlib inline def plot_2masses(x, L, W, **kwargs): """Plot 2 mass/3 string problem for parameter vector x and parameters L and W""" kwargs.setdefault('linestyle', '-') kwargs.setdefault('marker', 'o') kwargs.setdefault('linewidth', 1) ax = kwargs.pop('ax', None) if ax is None: ax = plt.subplot(111) r0 = np.array([0, 0]) r1 = r0 + np.array([L[0], 0]) rod = np.transpose([r0, r1]) L1 = r0 + np.array([L[1]*x[3], -L[1]*x[0]]) L2 = L1 + np.array([L[2]*x[4], -L[2]*x[1]]) L3 = L2 + np.array([L[3]*x[5], L[3]*x[2]]) strings = np.transpose([r0, L1, L2, L3]) ax.plot(rod[0], rod[1], color="black", marker="d", linewidth=4) ax.plot(strings[0], strings[1], **kwargs) ax.set_aspect(1) return ax ###Output _____no_output_____ ###Markdown What does the initial guess look like?
###Code plot_2masses(x0, L, W) ###Output _____no_output_____ ###Markdown Jacobian Write a function `Jacobian(f, x, h=1e-5)` that computes the Jacobian matrix numerically (use the central difference algorithm).\begin{align}\mathbf{J} &= \frac{\partial \mathbf{f}}{\partial\mathbf{x}} \\J_{ij} &= \frac{\partial f_i(x_1, \dots, x_j, \dots)}{\partial x_j} \\ &\approx \frac{f_i(x_1, \dots, x_j + \frac{h}{2}, \dots) - f_i(x_1, \dots, x_j - \frac{h}{2}, \dots)}{h}\end{align} ###Code def Jacobian(f, x, h=1e-5): """df_i/dx_j with central difference (fi(xj+h/2)-fi(xj-h/2))/h""" J = np.zeros((len(f(x)), len(x)), dtype=np.float64) raise NotImplementedError return J ###Output _____no_output_____ ###Markdown Test `Jacobian()` on $$\mathbf{g}(\mathbf{x}) = \begin{pmatrix} x_0^2 - x_1 \\ x_0 \end{pmatrix}$$with analytical result$$\mathsf{J} = \left[\frac{\partial g_i}{\partial x_j}\right] =\begin{pmatrix} \frac{\partial g_0}{\partial x_0} & \frac{\partial g_0}{\partial x_1}\\ \frac{\partial g_1}{\partial x_0} & \frac{\partial g_1}{\partial x_1}\end{pmatrix}= \begin{pmatrix} 2 x_0 & -1\\ 1 & 0\end{pmatrix}$$ Given a test vector $\mathbf{x}_\text{test} = (1, 0)$, what is the numerical answer for $\mathsf{J}(\mathbf{x}_\text{test})$?$$\mathsf{J}(\mathbf{x}_\text{test}) = \text{?}$$Test your `Jacobian()` function with $\mathbf{x}_\text{test}$ and check that you get the same answer: ###Code def g(x): return np.array([ # Eq Test 1 # Eq Test 2 ]) x_test = # choose a simple test vector # run function and print result # compare to analytical result ###Output _____no_output_____ ###Markdown Just testing that it also works for our starting vector: ###Code Jacobian(fLW, x0) ###Output _____no_output_____ ###Markdown n-D Newton-Raphson Root Finding Write a function `newton_raphson(f, x, Nmax=100, tol=1e-8, h=1e-5)` to find a root for a vector function `f(x)=0`. 
(See also [13 Root-finding by trial-and-error](https://asu-compmethodsphysics-phy494.github.io/ASU-PHY494/2020/03/26/13_Root_finding/) and the _1D Newton-Raphson algorithm_ in [13-Root-finding.ipynb](https://github.com/ASU-CompMethodsPhysics-PHY494/PHY494-resources/blob/master/13_root_finding/13-Root-finding.ipynb).) As a convergence criterion we demand that the length of the vector `f(x)` (the norm --- see `np.linalg.norm`) be less than the tolerance. ###Code def newton_raphson(f, x, Nmax=100, tol=1e-8, h=1e-5): """n-D Newton-Raphson: solves f(x) = 0. Iterate until |f(x)| < tol or nmax steps. """ x = x.copy() raise NotImplementedError else: print("Newton-Raphson: no root found after {0} iterations (eps={1}); " "best guess is {2} with error {3}".format(Nmax, tol, x, fx)) return x ###Output _____no_output_____ ###Markdown Solve 2 masses/3 strings problem Solution ###Code # solve the string problem x = ###Output _____no_output_____ ###Markdown Plot the starting configuration and the solution: ###Code plot_2masses(x0, L, W) plot_2masses(x, L, W) ###Output _____no_output_____ ###Markdown Pretty-print the solution (angles in degrees): ###Code def pretty_print(x): theta = np.rad2deg(np.arcsin(x[0:3])) tensions = x[6:] print("theta1 = {0[0]:.1f} \t theta2 = {0[1]:.1f} \t theta3 = {0[2]:.1f}".format(theta)) print("T1 = {0[0]:.1f} \t T2 = {0[1]:.1f} \t T3 = {0[2]:.1f}".format(tensions)) print("Starting values") pretty_print(x0) print() print("Solution") pretty_print(x) ###Output _____no_output_____ ###Markdown Show intermediate stepsCreate a new function `newton_raphson_intermediates()` based on `newton_raphson()` that returns *all* trial `x` values including the last one. ###Code def newton_raphson_intermediates(f, x, Nmax=100, tol=1e-8, h=1e-5): """n-D Newton-Raphson: solves f(x) = 0. Iterate until |f(x)| < tol or nmax steps. Returns all intermediates.
""" intermediates = [] x = x.copy() raise NotImplementedError else: print("Newton-Raphson: no root found after {0} iterations (eps={1}); " "best guess is {2} with error {3}".format(Nmax, tol, x, fx)) return np.array(intermediates) ###Output _____no_output_____ ###Markdown Visualize the intermediate configurations: ###Code x_series = newton_raphson_intermediates(fLW, x0) ax = plt.subplot(111) ax.set_prop_cycle("color", [plt.cm.viridis_r(i) for i in np.linspace(0, 1, len(x_series))]) for iteration, x in enumerate(x_series): plot_2masses(x, L, W, label=str(iteration), ax=ax) ax.legend(loc="best"); ###Output _____no_output_____ ###Markdown It's convenient to turn the above plotting code into a function that we can reuse: ###Code def plot_series(x_series, L, W): """Plot all N masses/strings solution vectors in x_series (N, 9) array""" ax = plt.subplot(111) ax.set_prop_cycle("color", [plt.cm.viridis_r(i) for i in np.linspace(0, 1, len(x_series))]) for iteration, x in enumerate(x_series): plot_2masses(x, L, W, label=str(iteration), ax=ax) ax.legend(loc="best") return ax ###Output _____no_output_____
CERN - Practical Introduction To Quantum Computing/Lecture 4 Resources/8.-Deutsch-Jozsa And Grover With Aqua[Run In IBM Quantum Experience].ipynb
###Markdown Deutsch-Jozsa and Grover with Aqua The Aqua library in Qiskit implements some common algorithms so that they can be used without needing to program the circuits for each case. In this notebook, we will show how we can use the Deutsch-Jozsa and Grover algorithms. Deutsch-Jozsa To use the Deutsch-Jozsa algorithm, we need to import some extra packages in addition to the ones we have been using. ###Code %matplotlib inline from qiskit import * from qiskit.visualization import * from qiskit.tools.monitor import * from qiskit.aqua import * from qiskit.aqua.components.oracles import * from qiskit.aqua.algorithms import * ###Output _____no_output_____ ###Markdown To specify the elements of the Deutsch-Jozsa algorithm, we must use an oracle (the function that we need to test to see if it is constant or balanced). Aqua offers the possibility of defining this oracle at a high level, without giving the actual quantum gates, with *TruthTableOracle*.*TruthTableOracle* receives a string of zeroes and ones of length $2^n$ that sets the values of the oracle for the $2^n$ binary strings in lexicographical order. For example, with the string 0101 we will have a boolean function that is 0 on 00 and 10 but 1 on 01 and 11 (and, thus, it is balanced). ###Code oracle = TruthTableOracle("0101") oracle.construct_circuit().draw(output='mpl') ###Output _____no_output_____ ###Markdown Once we have defined the oracle, we can easily create an instance of the Deutsch-Jozsa algorithm and draw the circuit. ###Code dj = DeutschJozsa(oracle) dj.construct_circuit(measurement=True).draw(output='mpl') ###Output _____no_output_____ ###Markdown Obviously, we could execute this circuit on any backend.
However, Aqua specifies some extra elements in addition to the circuit, as for instance how the results are to be interpreted.To execute a quantum algorithm in Aqua, we need to pass it a *QuantumInstance* (which includes the backend and possibly other settings) and the algorithm will use it as many times as needed. The result will include information about the execution and, in the case of Deutsch-Jozsa, the final verdict. ###Code backend = Aer.get_backend('qasm_simulator') quantum_instance = QuantumInstance(backend) result = dj.run(quantum_instance) print(result) ###Output {'measurement': {'01': 1024}, 'result': 'balanced'} ###Markdown Let us check that it also works with constant functions. ###Code oracle2 = TruthTableOracle('00000000') dj2 = DeutschJozsa(oracle2) result = dj2.run(quantum_instance) print("The function is",result['result']) ###Output The function is constant ###Markdown Grover As in the case of Deutsch-Jozsa, for the Aqua implementation of Grover's algorithm we need to provide an oracle. We can also specify the number of iterations.
###Code backend = Aer.get_backend('qasm_simulator') oracle3 = TruthTableOracle('0001') g = Grover(oracle3, iterations=1) ###Output _____no_output_____ ###Markdown The execution is also similar to that of Deutsch-Jozsa. ###Code result = g.run(quantum_instance) print(result) ###Output {'measurement': {'11': 1024}, 'top_measurement': '11', 'circuit': <qiskit.circuit.quantumcircuit.QuantumCircuit object at 0x7f24168f9210>, 'assignment': [1, 2], 'oracle_evaluation': True} ###Markdown It can also be interesting to use oracles that we construct from logical expressions ###Code expression = '(x | y) & (~y | z) & (~x | ~z | w) & (~x | y | z | ~w)' oracle4 = LogicalExpressionOracle(expression) g2 = Grover(oracle4, iterations = 3) result = g2.run(quantum_instance) print(result) ###Output {'measurement': {'0000': 28, '0001': 23, '0010': 157, '0011': 26, '0100': 22, '0101': 25, '0110': 22, '0111': 22, '1000': 30, '1001': 24, '1010': 20, '1011': 137, '1100': 161, '1101': 157, '1110': 29, '1111': 141}, 'top_measurement': '1100', 'circuit': <qiskit.circuit.quantumcircuit.QuantumCircuit object at 0x7f2417469e90>, 'assignment': [-1, -2, 3, 4], 'oracle_evaluation': True} ###Markdown If we do not know the number of solutions or if we do not want to specify the number of iterations, we can use the incremental mode, which allows us to find a solution in time $O(\sqrt{N})$.
###Code backend = Aer.get_backend('qasm_simulator') expression2 = '(x & y & z & w) | (~x & ~y & ~z & ~w)' #expression2 = '(x & y) | (~x & ~y)' oracle5 = LogicalExpressionOracle(expression2, optimization = True) g3 = Grover(oracle5, incremental = True) result = g3.run(quantum_instance) print(result) ###Output {'measurement': {'0000': 387, '0001': 17, '0010': 15, '0011': 17, '0100': 13, '0101': 9, '0110': 15, '0111': 11, '1000': 14, '1001': 20, '1010': 11, '1011': 16, '1100': 24, '1101': 16, '1110': 15, '1111': 424}, 'top_measurement': '1111', 'circuit': <qiskit.circuit.quantumcircuit.QuantumCircuit object at 0x7f2438677410>, 'assignment': [1, 2, 3, 4], 'oracle_evaluation': True}
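As an aside on where the iteration counts above come from: for $N = 2^n$ basis states and $M$ marked solutions, the standard optimal iteration count is $k = \lfloor (\pi/4)\sqrt{N/M} \rfloor$, with success probability $\sin^2((2k+1)\theta)$ where $\theta = \arcsin\sqrt{M/N}$. A quick pure-Python check of these formulas (no Qiskit needed; the function names here are just for illustration):

```python
import math

def grover_iterations(n_qubits, n_solutions):
    """Optimal Grover iteration count k = floor((pi/4) * sqrt(N/M))."""
    N = 2 ** n_qubits
    return math.floor(math.pi / 4 * math.sqrt(N / n_solutions))

def grover_success_probability(n_qubits, n_solutions, k):
    """P(success) = sin^2((2k+1) * theta), theta = asin(sqrt(M/N))."""
    theta = math.asin(math.sqrt(n_solutions / 2 ** n_qubits))
    return math.sin((2 * k + 1) * theta) ** 2

# 2-qubit oracle '0001' (one marked state): k = 1, success certain
k2 = grover_iterations(2, 1)
p2 = grover_success_probability(2, 1, k2)
# 4 qubits with a single marked state: k = 3
k4 = grover_iterations(4, 1)
```

For the 2-qubit oracle '0001' this gives exactly 1 iteration with success probability 1, consistent with every shot above landing on '11'; for 4 qubits and a single solution it gives 3 iterations.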
09/Lab12.ipynb
###Markdown Genetic Algorithm ###Code import random as rnd # returns a random array of integers in [lower, upper] def random_arr(lower, upper, size): return [rnd.randrange(lower, upper+1) for _ in range(size)] # cross over between chromosomes def reproduce(x, y): tmp = rnd.randint(0, len(x)-1) return x[:tmp]+y[tmp:] # randomly change the value at one index def mutate(x): inp = rnd.randint(1, len(x)) x[rnd.randrange(0, len(x))] = inp return x # pick a chromosome from the population, weighted by the probabilities (roulette-wheel selection) def random_pick(population, probs): r = rnd.uniform(0, sum(probs)) endpoint = 0 for pop, prob in zip(population, probs): if endpoint+prob >= r: return pop # picked a chromosome endpoint += prob print("Error!") exit() def genetic_algo(population, maxfitness): mutation_prob = 0.85 # mutation 85% new_population = [] # selection probabilities proportional to fitness probs = [fitness(pop)/maxfitness for pop in population] for _ in range(len(population)): x = random_pick(population, probs) # first parent chromosome y = random_pick(population, probs) # second parent chromosome # creating child child = reproduce(x, y) if rnd.random() < mutation_prob: child = mutate(child) # mutate with probability mutation_prob new_population.append(child) if fitness(child) >= maxfitness: break return new_population def fitness(x): # checking the chromosome for fitness horizontal_collisions = sum( [x.count(queen)-1 for queen in x])/2 n = len(x) left_diagonal = [0] * 2*n right_diagonal = [0] * 2*n for i in range(n): left_diagonal[i + x[i] - 1] += 1 right_diagonal[len(x) - i + x[i] - 2] += 1 diagonal_collisions = 0 for i in range(2*n-1): counter = 0 if left_diagonal[i] > 1: counter += left_diagonal[i]-1 if right_diagonal[i] > 1: counter += right_diagonal[i]-1 diagonal_collisions += counter / (n-abs(i-n+1)) # max fitness n(n-1)/2 minus collisions, e.g. for n=8: 28-(2+3)=23 return int((n*(n-1))/2 - (horizontal_collisions + diagonal_collisions)) def print_chromosome(chrom): print(f"Chromosome = {str(chrom)}, Fitness = {fitness(chrom)}") nq = int(input("Enter number of queens: ")) # 
number of queens maxfitness = (nq*(nq-1))/2 population = [random_arr(1, nq, nq) for _ in range(nq*nq)] generation = 1 while not maxfitness in [fitness(chrom) for chrom in population]: population = genetic_algo(population, maxfitness) generation += 1 if generation % 100 == 0: besttill = max([(fitness(n), n) for n in population],key=lambda x:x[0]) print( f"Generation= {generation}, Sol={besttill[1]} Maximum Fitness = {besttill[0]}") print("Solved!!") chrom_out=[] for chrom in population: if fitness(chrom) == maxfitness: chrom_out = chrom print( f"Generation= {generation}, Sol={chrom} Maximum Fitness = {fitness(chrom)}") ###Output Generation= 100, Sol=[2, 3, 4, 7, 5, 2, 6, 1] Maximum Fitness = 26 Generation= 200, Sol=[4, 8, 3, 3, 6, 8, 1, 5] Maximum Fitness = 26 Generation= 300, Sol=[7, 4, 4, 2, 8, 5, 6, 3] Maximum Fitness = 26 Generation= 400, Sol=[3, 6, 4, 5, 1, 8, 2, 7] Maximum Fitness = 27 Generation= 500, Sol=[5, 8, 3, 6, 2, 7, 4, 8] Maximum Fitness = 26 Generation= 600, Sol=[1, 6, 5, 1, 2, 4, 7, 8] Maximum Fitness = 26 Generation= 700, Sol=[5, 4, 7, 2, 6, 5, 1, 3] Maximum Fitness = 26 Generation= 800, Sol=[2, 3, 1, 8, 5, 6, 7, 2] Maximum Fitness = 26 Generation= 900, Sol=[4, 1, 6, 3, 2, 7, 5, 3] Maximum Fitness = 26 Generation= 1000, Sol=[1, 4, 8, 2, 5, 3, 7, 6] Maximum Fitness = 27 Generation= 1100, Sol=[6, 4, 8, 3, 2, 5, 7, 1] Maximum Fitness = 27 Generation= 1200, Sol=[7, 5, 6, 8, 4, 2, 1, 3] Maximum Fitness = 27 Generation= 1300, Sol=[5, 2, 4, 1, 3, 8, 6, 2] Maximum Fitness = 27 Solved!! Generation= 1342, Sol=[8, 3, 1, 6, 2, 5, 7, 4] Maximum Fitness = 28
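As a quick sanity check on the printed result, a chromosome can be tested directly against the n-queens constraints — no two queens in the same row or on the same diagonal. This is a standalone sketch using the same 1-based row convention as the notebook (the function name is illustrative):

```python
def is_nqueens_solution(chrom):
    """chrom[i] is the (1-based) row of the queen in column i.
    Valid iff no two queens share a row or a diagonal."""
    n = len(chrom)
    for i in range(n):
        for j in range(i + 1, n):
            same_row = chrom[i] == chrom[j]
            same_diag = abs(chrom[i] - chrom[j]) == j - i
            if same_row or same_diag:
                return False
    return True
```

The final chromosome [8, 3, 1, 6, 2, 5, 7, 4] with fitness 28 passes this check, while a straight diagonal such as [1, 2, 3, 4, 5, 6, 7, 8] does not.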
Assignments/Assignment04_iris.ipynb
###Markdown Get Data ###Code import os import zipfile import urllib DOWNLOAD_ROOT = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data" IRIS_PATH = os.path.join("datasets", "iris") IRIS_URL = DOWNLOAD_ROOT def extract_iris_data(iris_url=IRIS_URL,iris_path=IRIS_PATH): if not os.path.isdir(iris_path): os.makedirs(iris_path) irisdata_path = os.path.join(iris_path,"iris.data") urllib.request.urlretrieve(iris_url, irisdata_path) extract_iris_data() ###Output _____no_output_____ ###Markdown Data Manipulation and analysis ###Code import pandas as pd def load_iris_train_data(iris_path=IRIS_PATH): csv_path = os.path.join(iris_path, "iris.data") return pd.read_csv(csv_path,names=['SepalLength','SepalWidth','PetalLength','PetalWidth','Class']) datasets=load_iris_train_data() datasets.head() from sklearn.preprocessing import OrdinalEncoder ordinal_encoder = OrdinalEncoder() cat_encoded = ordinal_encoder.fit_transform(datasets[['Class']]) datasets['Class'] = cat_encoded datasets.head() labels = ordinal_encoder.categories_ labels = list(labels) print(labels) import matplotlib.pyplot as plt ax = datasets[datasets.Class==0].plot.scatter(x='SepalLength', y='SepalWidth', color='red', label='Iris-setosa') datasets[datasets.Class==1].plot.scatter(x='SepalLength', y='SepalWidth', color='green', label='Iris-versicolor', ax=ax) datasets[datasets.Class==2].plot.scatter(x='SepalLength', y='SepalWidth', color='blue', label='Iris-virginica', ax=ax) ax.set_title("scatter") ###Output _____no_output_____ ###Markdown The above scatter plot shows three classes, i.e. 'Iris-setosa', 'Iris-versicolor', 'Iris-virginica'. 
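The `OrdinalEncoder` step above simply maps the class strings, taken in sorted order, to the codes 0.0, 1.0, 2.0 — which is why `ordinal_encoder.categories_` lists the three names alphabetically. A minimal pure-Python equivalent (illustrative only, not scikit-learn's actual implementation):

```python
def ordinal_encode(values):
    """Map each category string to its index in the sorted list of unique categories."""
    categories = sorted(set(values))
    mapping = {cat: float(code) for code, cat in enumerate(categories)}
    return [mapping[v] for v in values], categories

codes, cats = ordinal_encode(
    ["Iris-setosa", "Iris-virginica", "Iris-versicolor", "Iris-setosa"])
```

Here `cats` comes out as the alphabetically sorted class names and `codes` as their 0/1/2 positions, mirroring the encoded `Class` column.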
Divide dataset into Train/Test ###Code from sklearn.model_selection import train_test_split X=datasets[['SepalLength','SepalWidth','PetalLength','PetalWidth']] y=datasets['Class'] X_train, X_test, y_train, y_test=train_test_split(X, y, test_size=0.20, random_state=42) ###Output _____no_output_____ ###Markdown Prepare data for ML Standard scaling ###Code from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_train_tr = scaler.fit_transform(X_train) X_test_tr = scaler.transform(X_test) #reuse the train-split statistics; fitting on the test set would leak test information X_train_tr ###Output _____no_output_____ ###Markdown Training the model SVC with linear kernel ###Code from sklearn.svm import SVC svc = SVC(kernel='linear') svc.fit(X_train_tr, y_train) y_pred = svc.predict(X_test_tr) print("C for trained model: ",svc.C) print("gamma for trained model: ",svc.gamma) ###Output C for trained model: 1.0 gamma for trained model: scale ###Markdown RMSE for SVC linear kernel ###Code from sklearn.metrics import mean_squared_error print("Root mean squared error: ",mean_squared_error(y_test, y_pred, squared=False)) ###Output Root mean squared error: 0.31622776601683794 ###Markdown Classification report ###Code from sklearn.metrics import classification_report print("Classification report for SVC linear kernel") print(classification_report(y_test, y_pred, target_names=['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'])) ###Output Classification report for SVC linear kernel precision recall f1-score support Iris-setosa 1.00 1.00 1.00 10 Iris-versicolor 0.80 0.89 0.84 9 Iris-virginica 0.90 0.82 0.86 11 accuracy 0.90 30 macro avg 0.90 0.90 0.90 30 weighted avg 0.90 0.90 0.90 30 ###Markdown SVC with rbf kernel ###Code svc_rbf = SVC(kernel='rbf') svc_rbf.fit(X_train_tr, y_train) y_pred_rbf = svc_rbf.predict(X_test_tr) print("C for trained model: ",svc_rbf.C) print("gamma for trained model: ",svc_rbf.gamma) ###Output C for trained model: 1.0 gamma for trained model: scale ###Markdown RMSE for SVC rbf kernel ###Code print("Root mean squared
error: ",mean_squared_error(y_test, y_pred_rbf, squared=False)) ###Output Root mean squared error: 0.18257418583505536 ###Markdown Classification report ###Code from sklearn.metrics import classification_report print("Classification report for SVC rbf kernel") print(classification_report(y_test, y_pred_rbf, target_names=['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'])) ###Output Classification report for SVC rbf kernel precision recall f1-score support Iris-setosa 1.00 1.00 1.00 10 Iris-versicolor 0.90 1.00 0.95 9 Iris-virginica 1.00 0.91 0.95 11 accuracy 0.97 30 macro avg 0.97 0.97 0.97 30 weighted avg 0.97 0.97 0.97 30 ###Markdown Tuning the SVC rbf kernel model ###Code from sklearn.model_selection import GridSearchCV parameters_svc = {'C':[0.01, 0.1, 1, 10, 100, 1000], 'gamma':[0.01, 0.1, 1, 10, 100, 1000]} clf = GridSearchCV(svc_rbf, parameters_svc) clf.fit(X_train_tr, y_train) print("Best parameters for C and gamma") clf.best_estimator_ ###Output Best parameters for C and gamma ###Markdown Accuracy on test set with tuned SVC rbf kernel ###Code from sklearn.metrics import accuracy_score svc_tuned = clf.best_estimator_ y_pred_svc_tuned = svc_tuned.predict(X_test_tr) print("RMSE: ",mean_squared_error(y_test, y_pred_svc_tuned, squared=False)) print(classification_report(y_test, y_pred_svc_tuned, target_names=['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'])) print("Accuracy: ",accuracy_score(y_test, y_pred_svc_tuned) * 100) ###Output RMSE: 0.18257418583505536 precision recall f1-score support Iris-setosa 1.00 1.00 1.00 10 Iris-versicolor 0.90 1.00 0.95 9 Iris-virginica 1.00 0.91 0.95 11 accuracy 0.97 30 macro avg 0.97 0.97 0.97 30 weighted avg 0.97 0.97 0.97 30 Accuracy: 96.66666666666667 ###Markdown KNN ###Code from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier(n_neighbors=9) knn.fit(X_train_tr, y_train) y_pred_knn = knn.predict(X_test_tr) print("RMSE: ",mean_squared_error(y_test, y_pred_knn, squared=False))
print("Classification report: ",classification_report(y_test, y_pred_knn, target_names=['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'])) print("Accuracy: ", accuracy_score(y_test, y_pred_knn)) ###Output RMSE: 0.31622776601683794 Classification report: precision recall f1-score support Iris-setosa 1.00 1.00 1.00 10 Iris-versicolor 0.80 0.89 0.84 9 Iris-virginica 0.90 0.82 0.86 11 accuracy 0.90 30 macro avg 0.90 0.90 0.90 30 weighted avg 0.90 0.90 0.90 30 Accuracy: 0.9 ###Markdown Tuning KNN ###Code from sklearn.model_selection import RandomizedSearchCV parameters = {'n_neighbors': range(1,11)} clf_randomCV = RandomizedSearchCV(knn, parameters, random_state=0) search = clf_randomCV.fit(X_train_tr, y_train) print("Best parameters: ", search.best_params_) ###Output Best parameters: {'n_neighbors': 3} ###Markdown Accuracy on test set with tuned KNN ###Code knn_tuned = search.best_estimator_ y_pred_knn_tuned = knn_tuned.predict(X_test_tr) print("RMSE: ", mean_squared_error(y_test, y_pred_knn_tuned, squared=False)) print("Classification report: ",classification_report(y_test, y_pred_knn_tuned, target_names=['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'])) print("Accuracy: ",accuracy_score(y_test, y_pred_knn_tuned) * 100) ###Output RMSE: 0.18257418583505536 Classification report: precision recall f1-score support Iris-setosa 1.00 1.00 1.00 10 Iris-versicolor 0.90 1.00 0.95 9 Iris-virginica 1.00 0.91 0.95 11 accuracy 0.97 30 macro avg 0.97 0.97 0.97 30 weighted avg 0.97 0.97 0.97 30 Accuracy: 96.66666666666667
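One detail worth emphasizing from the standard-scaling step in this notebook: the scaler's statistics should be learned on the training split only and then reused on the test split; fitting a second time on the test data leaks test information into preprocessing. A minimal sketch with hypothetical toy data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0], [2.0], [3.0]])
X_test = np.array([[2.0]])

scaler = StandardScaler()
X_train_tr = scaler.fit_transform(X_train)  # fit: learn mean/std from the train split
X_test_tr = scaler.transform(X_test)        # transform only: reuse those statistics

print(X_test_tr)  # 2.0 equals the train mean, so it maps to 0.0
```

With `fit_transform` on both splits, the test point would instead be standardized against its own (here degenerate) statistics, and metrics computed on it would not reflect deployment conditions.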
guides/feature_tour_guide.ipynb
###Markdown 🌋 Quick Feature Tour [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/RelevanceAI/RelevanceAI-readme-docs/blob/v2.0.0/docs/getting-started/_notebooks/RelevanceAI-ReadMe-Quick-Feature-Tour.ipynb) 1. Set up Relevance AI Get started with our RelevanceAI SDK, using [Vectorhub](https://hub.getvectorai.com/)'s [CLIP model](https://hub.getvectorai.com/model/text_image%2Fclip) for encoding. ###Code # remove `!` if running the line in a terminal !pip install -U RelevanceAI[notebook]==2.0.0 # remove `!` if running the line in a terminal !pip install -U vectorhub[clip] ###Output _____no_output_____ ###Markdown Follow the signup flow and get your credentials below; otherwise, you can sign up/login and find your credentials in the settings [here](https://auth.relevance.ai/signup/?callback=https%3A%2F%2Fcloud.relevance.ai%2Flogin%3Fredirect%3Dcli-api)![](https://drive.google.com/uc?id=131M2Kpz5s9GmhNRnqz6b0l0Pw9DHVRWs) ###Code from relevanceai import Client """ You can sign up/login and find your credentials here: https://cloud.relevance.ai/sdk/api Once you have signed up, click on the value under `Activation token` and paste it here """ client = Client() ###Output _____no_output_____ ###Markdown ![](https://drive.google.com/uc?id=1owtvwZKTTcrOHBlgKTjqiMOvrN3DGrF6) 2. Create a dataset and insert data Use one of our sample datasets to upload into your own project! ###Code import pandas as pd from relevanceai.utils.datasets import get_ecommerce_dataset_clean # Retrieve our sample dataset. - This comes in the form of a list of documents. documents = get_ecommerce_dataset_clean() pd.DataFrame.from_dict(documents).head() ds = client.Dataset("quickstart") ds.insert_documents(documents) ###Output _____no_output_____ ###Markdown See your dataset in the dashboard![](https://drive.google.com/uc?id=1nloY4S8R1B8GY2_QWkb0BGY3bLrG-8D-) 3.
Encode data and upload vectors into your new dataset Encode a new product image vector using [Vectorhub's](https://hub.getvectorai.com/) `Clip2Vec` models and update your dataset with the resulting vectors. Please refer to [Vectorhub](https://github.com/RelevanceAI/vectorhub) for more details. ###Code from vectorhub.bi_encoders.text_image.torch import Clip2Vec model = Clip2Vec() # Set the default encode to encoding an image model.encode = model.encode_image documents = model.encode_documents(fields=["product_image"], documents=documents) ds.upsert_documents(documents=documents) ds.schema ###Output _____no_output_____ ###Markdown Monitor your vectors in the dashboard![](https://drive.google.com/uc?id=1d2jhjhwvPucfebUphIiqGVmR1Td2uYzM) 4. Run clustering on your vectors Run clustering on your vectors to better understand your data! You can view your clusters in our clustering dashboard following the link which is provided after the clustering is finished! ###Code from sklearn.cluster import KMeans cluster_model = KMeans(n_clusters=10) ds.cluster(cluster_model, ["product_image_clip_vector_"]) ###Output _____no_output_____ ###Markdown You can see the new `_cluster_` field that is added to your document schema. Clustering results are uploaded back to the dataset as an additional field. The default `alias` of the cluster will be `kmeans_`. ###Code ds.schema ###Output _____no_output_____ ###Markdown See your cluster centers in the dashboard![](https://drive.google.com/uc?id=1P0ZJcTd-Kl7TUwzFHEe3JuJpf_cTTP6J) 5. Run a vector search Encode your query and find your image results! Here our query is just a simple vector query, but our search comes with out of the box support for features such as multi-vector, filters, facets and traditional keyword matching to combine with your vector search.
You can read more about how to construct a multivector query with those features [here](https://docs.relevance.ai/docs/vector-search-prerequisites). See your search results on the dashboard here https://cloud.relevance.ai/sdk/search. ###Code query = "gifts for the holidays" query_vector = model.encode(query) multivector_query = [{"vector": query_vector, "fields": ["product_image_clip_vector_"]}] results = ds.vector_search(multivector_query=multivector_query, page_size=10) ###Output _____no_output_____
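Independent of the Relevance AI SDK, the core of a vector search is ranking stored document vectors by their similarity to the encoded query vector. A minimal cosine-similarity sketch in plain NumPy (hypothetical toy vectors, not the SDK's actual implementation):

```python
import numpy as np

def cosine_top_k(query, docs, k=3):
    # Rank rows of `docs` by cosine similarity to `query`, best first.
    sims = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))
    order = np.argsort(-sims)[:k]
    return order, sims[order]

docs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
idx, scores = cosine_top_k(np.array([1.0, 1.0]), docs, k=2)
print(idx, scores)  # the [1, 1] document matches the query direction exactly
```

Production systems replace the exhaustive `argsort` with an approximate nearest-neighbour index, but the ranking criterion is the same.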
UG_S17/Mba-Kalu_Onwughalu-Brazil.ipynb
###Markdown ![Brazil Flag](http://www.brazil.org.za/brazil-images/brazil-flag.png) **Chukwuemeka Mba-Kalu** **Joseph Onwughalu** **An Analysis of the Brazilian Economy between 2000 and 2012** Final Project In Partial Fulfillment of the Course Requirements [**Data Bootcamp**](http://nyu.data-bootcamp.com/) Stern School of Business, NYU Spring 2017 **May 12, 2017** The Brazilian Economy In this project we examine in detail different complexities of Brazil’s growth between the years 2000-2012. During this period, Brazil set an example for many of the major emerging economies in Latin America, Africa, and Asia. From the years 2000-2012, Brazil was one of the fastest growing major economies in the world. It is the 8th largest economy in the world, with its GDP totalling 2.2 trillion dollars and GDP per Capita being at 10,308 dollars. While designing this project, we were interested to find out more about the main drivers of the Brazilian economy. Specifically, we aim to look at specific trends and indicators that directly affect economic growth, especially in fast-growing countries such as Brazil. Certain trends include household consumption and its effects on the GDP, and bilateral aid and investment flows and their effects on GDP per capita growth. We also aim to view the effects of economic growth on climate change and public health by observing the carbon emissions percentage changes and specific indicators like the mortality rate. We will be looking at generally accepted economic concepts and trends, making some hypotheses, and comparing our hypotheses to the Brazil data we have. Did Brazil follow these trends on its path to economic growth? Methodology - Data Acquisition All the data we are using in this project was acquired from the World Bank and can be accessed and downloaded from the [website](www.WorldBank.org). By going on the website and searching for “World data report” we were given access to information that has to be submitted by the respective countries on the site.
By going on the website and searching for “World data report” we were given access to information that has to be submitted by the respective countries on the site. By clicking “[Brazil](http://databank.worldbank.org/data/reports.aspx?source=2&country=BRA),” we’re shown the information of several economic indicators and their respective data over a time period of 2000-2012 that we downloaded as an excel file. We picked more than 20 metrics to include in our data, such as: * Population* GDP (current US Dollars)* Household final consumption expenditure, etc. (% of GDP)* General government final consumption expenditure (current US Dollars)* Life expectancy at birth, total (years) For all of our analysis and data we will be looking at the 2000-2012 time period and have filtered the spreadsheets accordingly to reflect this information. ###Code # Inportant Packages import pandas as pd import matplotlib.pyplot as plt import sys import datetime as dt print('Python version is:', sys.version) print('Pandas version:', pd.__version__) print('Date:', dt.date.today()) ###Output Python version is: 3.5.2 |Anaconda 4.2.0 (64-bit)| (default, Jul 5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)] Pandas version: 0.18.1 Date: 2017-05-11 ###Markdown Reading in and Cleaning up the DataWe downloaded our [data](http://databank.worldbank.org/data/AjaxDownload/FileDownloadHandler.ashx?filename=67fd49af-3b41-4515-b248-87b045e61886.zip&filetype=CSV&language=en&displayfile=Data_Extract_From_World_Development_Indicators.zip) in xlxs, retained and renamed the important columns, and deleted rows without enough data. We alse transposed the table to make it easier to plot diagrams. 
###Code path = 'C:\\Users\\emeka_000\\Desktop\\Bootcamp_Emeka.xlsx' odata = pd.read_excel(path, usecols = ['Series Name','2000 [YR2000]', '2001 [YR2001]', '2002 [YR2002]', '2003 [YR2003]', '2004 [YR2004]', '2005 [YR2005]', '2006 [YR2006]', '2007 [YR2007]', '2008 [YR2008]', '2009 [YR2009]', '2010 [YR2010]', '2011 [YR2011]', '2012 [YR2012]'] ) #retained only the necessary columns odata.columns = ['Metric', '2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012'] #easier column names odata = odata.drop([20, 21, 22, 23, 24]) ##delete NaN values odata = odata.transpose() #transpose to make diagram easier odata #data with metrics description for the chart below data = pd.read_excel(path, usecols = ['2000 [YR2000]', '2001 [YR2001]', '2002 [YR2002]', '2003 [YR2003]', '2004 [YR2004]', '2005 [YR2005]', '2006 [YR2006]', '2007 [YR2007]', '2008 [YR2008]', '2009 [YR2009]', '2010 [YR2010]', '2011 [YR2011]', '2012 [YR2012]'] ) #same data but modified for pandas edits data.columns = ['2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012'] #all columns are now string data = data.transpose() #data used for the rest of the project ###Output _____no_output_____ ###Markdown GDP Growth and GDP Growth Rate in Brazil To demonstrate Brazil's strong economic growth between 2000 and 2012, here are a few charts illustrating Brazil's GDP growth. Gross domestic product (GDP) is the monetary value of all the finished goods and services produced within a country's borders in a specific time period. Though GDP is usually calculated on an annual basis, it can be calculated on a quarterly basis as well. GDP includes all private and public consumption, government outlays, investments and exports minus imports that occur within a defined territory.
Put simply, GDP is a broad measurement of a nation’s overall economic activity. GDP per Capita is a measure of the total output of a country that takes gross domestic product (GDP) and divides it by the number of people in the country. Read more on [Investopedia](http://www.investopedia.com/terms/g/gdp.asp#ixzz4gjgzo4Ri) ###Code data[4].plot(kind = 'line', #line plot title = 'Brazil Yearly GDP (2000-2012) (current US$)', #title fontsize=15, color='Green', linewidth=4, #width of plot line figsize=(20,5),).title.set_size(20) #set figure size and title size plt.xlabel("Year").set_size(15) plt.ylabel("GDP (current US$) * 1e12").set_size(15) #set x and y axis, with their sizes data[6].plot(kind = 'line', title = 'Brazil Yearly GDP Per Capita (2000-2012) (current US$)', fontsize=15, color='blue', linewidth=4, figsize=(20,5)).title.set_size(20) plt.xlabel("Year").set_size(15) plt.ylabel("GDP per capita (current US$)").set_size(15) data[5].plot(kind = 'line', title = 'Brazil Yearly GDP Growth (2000-2012) (%)', fontsize=15, color='red', linewidth=4, figsize=(20,5)).title.set_size(20) plt.xlabel("Year").set_size(15) plt.ylabel("GDP Growth (%)").set_size(15) ###Output _____no_output_____ ###Markdown GDP Growth vs. GDP Growth Rate While Brazil's GDP was growing quite consistently over the 12 years, its GDP growth rate was not steady, with negative growth during the 2008 financial crisis. Hypothesis: Household Consumption vs. Foreign Aid Our hypothesis is that household consumption is a bigger driver of the Brazilian economy than foreign aid. With their rising incomes, Brazilians are expected to be empowered with larger disposable incomes to spend on goods and services. Foreign aid, on the other hand, might not filter down to the masses for spending. ###Code fig, ax1 = plt.subplots(figsize = (20,5)) y1 = data[8] y2 = data[4] ax2 = ax1.twinx() ax1.plot(y1, 'green') #household consumption ax2.plot(y2, 'blue') #GDP growth plt.title("Household Consumption (% of GDP) vs.
GDP").set_size(20) ###Output _____no_output_____ ###Markdown Actual: Household Consumption GDP comprises household consumption, net investments, government spending and net exports; increases or decreases in any of these areas would affect the overall GDP respectively. The data shows that despite household consumption decreasing as a % of GDP, the GDP was growing. We found this a little strange and difficult to understand. One explanation for this phenomenon could be that as emerging market economies continue to expand, there is an increased shift towards investments and government spending. The blue line represents GDP growth and the green line represents Household Consumption. ###Code fig, ax1 = plt.subplots(figsize = (20, 5)) y1 = data[11] y2 = data[4] ax2 = ax1.twinx() ax1.plot(y1, 'red') #Net official development assistance ax2.plot(y2, 'blue') #GDP growth plt.title("Foreign Aid vs. GDP").set_size(20) ###Output _____no_output_____ ###Markdown Actual: Foreign Aid Regarding foreign aid, it should be the case that with decreases in aid there will be reduced economic growth because many developing countries do rely on that as a crucial resource. The data shows a positive correlation for Brazil. While household spending was not a major driver of Brazil's GDP growth, foreign aid played a big role. We will now explore how foreign direct investment and government spending can affect economic growth. The blue line represents GDP growth and the red line represents Foreign Aid. Hypothesis: Foreign Direct Investment vs. Government Spending For emerging market economies, the general trend is that Governments contribute a significant proportion to the GDP. Given that Brazil experienced growth between the years 2000-2012, it is expected that a consequence was increased foreign direct investment. Naturally, we’d like to compare the increases in Government Spending versus this foreign direct investment and see who generally contributed more to the GDP growth of the country.
Our hypothesis is that the increased foreign direct investment was a bigger contributor to the GDP growth than government spending. With increased globalisation, we expect that many multinationals and investors started business operations in Brazil due to its large, fast-growing market. ###Code fig, ax1 = plt.subplots(figsize = (20, 5)) y1 = data[2] y2 = data[4] ax2 = ax1.twinx() ax1.plot(y1, 'yellow') #foreign direct investment ax2.plot(y2, 'blue') #GDP growth plt.title("Foreign Direct Investment (Inflows) (% of GDP) vs. GDP").set_size(20) ###Output _____no_output_____ ###Markdown Actual: Foreign Direct Investment Contrary to popular belief and economic concepts, increased foreign direct investment did not act as a major contributor to the GDP growth Brazil experienced. There is no clear general trend or correlation between FDI and GDP growth. ###Code fig, ax1 = plt.subplots(figsize = (20, 5)) y1 = data[14] y2 = data[4] ax2 = ax1.twinx() ax1.plot(y1, 'purple') #government spending ax2.plot(y2, 'blue') #GDP growth plt.title("Government Spending vs. GDP").set_size(20) ###Output _____no_output_____ ###Markdown Actual: Government Spending It is clear that government spending is positively correlated with the total GDP growth Brazil experienced. We believe that this was the major driver for Brazil's growth. Hypothesis: Population Growth and GDP per capita Brazil’s population growth continued to increase during this time period of 2000-2012. As mentioned earlier, Brazil’s GDP was also growing during the same time period. Given that GDP per capita is a nice economic indicator to highlight standard of living in a country, we wanted to see if the increasing population was negating the effects of increased economic growth. Our hypothesis is that even though population was growing, the GDP per capita over the years generally increased at a higher rate and, all things equal, we are assured increased living standards in Brazil.
This finding would prove to us that the GDP was growing at a faster rate than the population. ###Code data.plot.scatter(x = 5, y = 0, title = 'Population Growth vs. GDP Growth', figsize=(20,5)).title.set_size(20) plt.xlabel("GDP Growth Rate").set_size(15) plt.ylabel("Population Growth Rate").set_size(15) ###Output _____no_output_____ ###Markdown Actual: Population Growth There is no correlation between the population growth rate and the overall GDP growth rate. The general GDP rate already accounts for population increases and decreases. ###Code data.plot.scatter(x = 6, y = 0, title = 'Population Growth vs. GDP per Capita', figsize=(20,5)).title.set_size(20) plt.xlabel("GDP per Capita").set_size(15) plt.ylabel("Population Growth Rate").set_size(15) ###Output _____no_output_____ ###Markdown Population Growth The population growth rate has a negative correlation with GDP per capita. Our explanation is that, as economies advance, the birth rate is expected to decrease. This generally causes the population growth rate to fall and GDP per Capita to rise. Hypothesis: Renewable Energy Expenditures and CO2 Emissions What one would expect is that as a country’s economy grows, its investments in renewable energy methods would increase as well. Such actions should lead to a decrease in CO2 emissions as cleaner energy processes are being applied. Our hypothesis disagrees with this. We believe that despite there being significant increases in renewable energy expenditures due to increased incomes and a larger, more diversified economy, there will still be more than proportionate increases in CO2 emissions. By testing this hypothesis we will begin to understand certain explanations as to why this may be true or false.
###Code data[15].plot(kind = 'bar', title = 'Renewable energy consumption (% of total) (2000-2012)', fontsize=15, color='green', linewidth=4, figsize=(20,5)).title.set_size(20) data[12].plot(kind = 'bar', title = 'CO2 emissions from liquid fuel consumption (2000-2012)', fontsize=15, color='red', linewidth=4, figsize=(20,5)).title.set_size(20) data[13].plot(kind = 'bar', title = 'CO2 emissions from gaseous fuel consumption (2000-2012)', fontsize=15, color='blue', linewidth=4, figsize=(20,5)).title.set_size(20) ###Output _____no_output_____ ###Markdown Actual: Renewable Energy Consumption vs. CO2 Emissions As countries continue to grow their economies, it is expected that people’s incomes will continue to rise. Increased disposable incomes should cause better energy consumption methods but as our hypothesis states, CO2 emissions still continue to rise. This could be due to the increase in population as more people are using carbon goods and products. Hypothesis: Health Expenditures and Life Expectancy There should be a positive correlation between health expenditures and life expectancy. Naturally, the more a country is spending on healthcare, the higher the life expectancy ought to be. Our hypothesis agrees with this positive statement and we’d like to test it. If it turns out that health expenditure increases positively affect life expectancy, then we can attribute the increase to an improved economy that allows for more health expenditures from individuals, organisations and institutions. ###Code data.plot.scatter(x = 7, y = 19, #scatter plot title = 'Health Expenditures vs. Life Expectancy', figsize=(20,5)).title.set_size(20) plt.xlabel("Health Expenditures").set_size(15) plt.ylabel("Life Expectancy").set_size(15) ###Output _____no_output_____
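The hypotheses in this notebook are judged visually from dual-axis and scatter plots; a numeric check with pandas' built-in Pearson correlation can back up the same comparisons. A minimal sketch with hypothetical stand-in numbers (not the actual World Bank figures):

```python
import pandas as pd

# Hypothetical values standing in for two of the indicator columns above
df = pd.DataFrame({
    "gdp": [655.0, 559.0, 508.0, 558.0, 669.0],
    "gov_spending": [131.0, 111.8, 101.6, 111.6, 133.8],
})
corr = df["gdp"].corr(df["gov_spending"])  # Pearson correlation coefficient
print(corr)
```

A coefficient near +1 supports claims like "government spending is positively correlated with GDP," near 0 supports "no clear correlation," and near -1 a negative relationship such as the population-growth versus GDP-per-capita finding.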
exercises/multiples_of_3_and_5/notebook.ipynb
###Markdown Multiples of 3 and 5 **Problem**: If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. ###Code def multiples(k: int, below_threshold: int): output = [] for i in range(k, below_threshold): if i % k == 0: output.append(i) return output numbers = [] for i in [3, 5]: numbers.extend(multiples(i, 1000)) numbers = set(numbers) #deduplicate numbers divisible by both 3 and 5 sum(numbers) ###Output _____no_output_____
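The brute-force scan above works, but the same answer follows in constant time from inclusion–exclusion on arithmetic series: add the sums of multiples of 3 and of 5, then subtract the multiples of 15 that were counted twice. A sketch:

```python
def sum_multiples_below(limit):
    # Sum of all multiples of 3 or 5 below `limit`, without iterating.
    def s(k):
        n = (limit - 1) // k           # how many multiples of k lie below limit
        return k * n * (n + 1) // 2    # k * (1 + 2 + ... + n)
    return s(3) + s(5) - s(15)

print(sum_multiples_below(10))    # 23, matching the worked example
print(sum_multiples_below(1000))  # 233168
```

The closed form makes the answer cheap even for very large limits, where building and summing an explicit set would be slow.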
source/industry/telecom/notebooks/Ml-Telecom-NaiveBayes.ipynb
###Markdown Machine Learning for Telecom with Naive Bayes Introduction Machine Learning for CallDisconnectReason is a notebook which demonstrates exploration of the dataset and CallDisconnectReason classification with the Spark ML Naive Bayes algorithm. ###Code from pyspark.sql.types import * from pyspark.sql import SparkSession from sagemaker import get_execution_role import sagemaker_pyspark role = get_execution_role() # Configure Spark to use the SageMaker Spark dependency jars jars = sagemaker_pyspark.classpath_jars() classpath = ":".join(jars) spark = SparkSession.builder.config("spark.driver.extraClassPath", classpath)\ .master("local[*]").getOrCreate() ###Output _____no_output_____ ###Markdown S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions. By using S3 Select to retrieve only the data you need, you can achieve drastic performance increases – in many cases as much as a 400% improvement.- _We first read a compressed Parquet version of the CDR dataset using S3 Select, which has already been processed by Glue._ ###Code cdr_start_loc = "<%CDRStartFile%>" cdr_stop_loc = "<%CDRStopFile%>" cdr_start_sample_loc = "<%CDRStartSampleFile%>" cdr_stop_sample_loc = "<%CDRStopSampleFile%>" df = spark.read.format("s3select").parquet(cdr_stop_sample_loc) df.createOrReplaceTempView("cdr") durationDF = spark.sql("SELECT _c13 as CallServiceDuration FROM cdr where _c0 = 'STOP'") durationDF.count() ###Output _____no_output_____ ###Markdown Exploration of Data - _We explore and visualize the dataset used for processing. Here we create a bar chart representation of CallServiceDuration from the CDR dataset._ ###Code import matplotlib.pyplot as plt durationpd = durationDF.toPandas().astype(int) durationpd.plot(kind='bar',stacked=True,width=1) ###Output _____no_output_____ ###Markdown - _We can represent the data and visualize it with a box plot.
The box extends from the lower to upper quartile values of the data, with a line at the median._ ###Code color = dict(boxes='DarkGreen', whiskers='DarkOrange', medians='DarkBlue', caps='Gray') durationpd.plot.box(color=color, sym='r+') from pyspark.sql.functions import col durationDF = durationDF.withColumn("CallServiceDuration", col("CallServiceDuration").cast(DoubleType())) ###Output _____no_output_____ ###Markdown - _We can represent the data and visualize the data with histograms partitioned in different bins._ ###Code import matplotlib.pyplot as plt bins, counts = durationDF.select('CallServiceDuration').rdd.flatMap(lambda x: x).histogram(durationDF.count()) plt.hist(bins[:-1], bins=bins, weights=counts,color=['green']) sqlDF = spark.sql("SELECT _c2 as Accounting_ID, _c19 as Calling_Number,_c20 as Called_Number, _c14 as CallDisconnectReason FROM cdr where _c0 = 'STOP'") sqlDF.show() ###Output +------------------+--------------+-------------+--------------------+ | Accounting_ID|Calling_Number|Called_Number|CallDisconnectReason| +------------------+--------------+-------------+--------------------+ |0x00016E0F5BDACAF7| 9645000046| 3512000046| 16| |0x00016E0F36A4A836| 9645000048| 3512000048| 16| |0x00016E0F4C261126| 9645000050| 3512000050| 16| |0x00016E0F4A446638| 9645000052| 3512000052| 16| |0x00016E0F4040CE81| 9645000054| 3512000054| 16| |0x00016E0F4D522D63| 9645000055| 3512000055| 16| |0x00016E0F5854A088| 9645000057| 3512000057| 16| |0x00016E0F7DFDA482| 9645000060| 3512000060| 16| |0x00016E0F65D65F76| 9645000062| 3512000062| 16| |0x00016E0F2378A4AE| 9645000064| 3512000064| 16| |0x00016E0F5003BC72| 9645000066| 3512000066| 16| | 0x00016E0F44702AB| 9645000067| 3512000067| 16| |0x00016E0F500EED75| 9645000069| 3512000069| 16| |0x00016E0F38D99C7D| 9645000071| 3512000071| 16| |0x00016E0F4D14C078| 9645000074| 3512000074| 16| |0x00016E0F4116E96C| 9645000075| 3512000075| 16| |0x00016E0F1F5CDE40| 9645000077| 3512000077| 16| |0x00016E0F1BFE3E2A| 9645000079| 3512000079| 
16| |0x00016E0F7E203CC9| 9645000081| 3512000081| 16| | 0x00016E0F5B43F12| 9645000084| 3512000084| 16| +------------------+--------------+-------------+--------------------+ only showing top 20 rows ###Markdown Featurization ###Code from pyspark.ml.feature import StringIndexer accountIndexer = StringIndexer(inputCol="Accounting_ID", outputCol="AccountingIDIndex") accountIndexer.setHandleInvalid("skip") tempdf1 = accountIndexer.fit(sqlDF).transform(sqlDF) callingNumberIndexer = StringIndexer(inputCol="Calling_Number", outputCol="Calling_NumberIndex") callingNumberIndexer.setHandleInvalid("skip") tempdf2 = callingNumberIndexer.fit(tempdf1).transform(tempdf1) calledNumberIndexer = StringIndexer(inputCol="Called_Number", outputCol="Called_NumberIndex") calledNumberIndexer.setHandleInvalid("skip") tempdf3 = calledNumberIndexer.fit(tempdf2).transform(tempdf2) from pyspark.ml.feature import StringIndexer # Convert target into numerical categories labelIndexer = StringIndexer(inputCol="CallDisconnectReason", outputCol="label") labelIndexer.setHandleInvalid("skip") from pyspark.sql.functions import rand trainingFraction = 0.75; testingFraction = (1-trainingFraction); seed = 1234; trainData, testData = tempdf3.randomSplit([trainingFraction, testingFraction], seed=seed); # CACHE TRAIN AND TEST DATA trainData.cache() testData.cache() trainData.count(),testData.count() ###Output _____no_output_____ ###Markdown Analyzing the label distribution- We analyze the distribution of our target labels using a histogram where 16 represents Normal_Call_Clearing. 
###Code import matplotlib.pyplot as plt negcount = trainData.filter("CallDisconnectReason != 16").count() poscount = trainData.filter("CallDisconnectReason == 16").count() negfrac = 100*float(negcount)/float(negcount+poscount) posfrac = 100*float(poscount)/float(poscount+negcount) ind = [0.0,1.0] frac = [negfrac,posfrac] width = 0.35 plt.title('Label Distribution') plt.bar(ind, frac, width, color='r') plt.xlabel("CallDisconnectReason") plt.ylabel('Percentage share') plt.xticks(ind,['0.0','1.0']) plt.show() import matplotlib.pyplot as plt negcount = testData.filter("CallDisconnectReason != 16").count() poscount = testData.filter("CallDisconnectReason == 16").count() negfrac = 100*float(negcount)/float(negcount+poscount) posfrac = 100*float(poscount)/float(poscount+negcount) ind = [0.0,1.0] frac = [negfrac,posfrac] width = 0.35 plt.title('Label Distribution') plt.bar(ind, frac, width, color='r') plt.xlabel("CallDisconnectReason") plt.ylabel('Percentage share') plt.xticks(ind,['0.0','1.0']) plt.show() from pyspark.ml.feature import VectorAssembler from pyspark.ml.feature import VectorAssembler vecAssembler = VectorAssembler(inputCols=["AccountingIDIndex","Calling_NumberIndex", "Called_NumberIndex"], outputCol="features") ###Output _____no_output_____ ###Markdown __Spark ML Naive Bayes__: Naive Bayes is a simple multiclass classification algorithm with the assumption of independence between every pair of features. Naive Bayes can be trained very efficiently. 
Within a single pass to the training data, it computes the conditional probability distribution of each feature given label, and then it applies Bayes’ theorem to compute the conditional probability distribution of label given an observation and use it for prediction.- _We use Spark ML Naive Bayes Algorithm and spark Pipeline to train the data set._ ###Code from pyspark.ml.classification import NaiveBayes from pyspark.ml.clustering import KMeans from pyspark.ml import Pipeline # Train a NaiveBayes model nb = NaiveBayes(smoothing=1.0, modelType="multinomial") # Chain labelIndexer, vecAssembler and NBmodel in a pipeline = Pipeline(stages=[labelIndexer,vecAssembler, nb]) # Run stages in pipeline and train model model = pipeline.fit(trainData) # Run inference on the test data and show some results predictions = model.transform(testData) predictions.printSchema() predictions.show() predictiondf = predictions.select("label", "prediction", "probability") pddf_pred = predictions.toPandas() pddf_pred ###Output _____no_output_____ ###Markdown - _We use Scatter plot for visualization and represent the dataset._ ###Code import matplotlib.pyplot as plt import numpy as np # Set the size of the plot plt.figure(figsize=(14,7)) # Create a colormap colormap = np.array(['red', 'lime', 'black']) # Plot CDR plt.subplot(1, 2, 1) plt.scatter(pddf_pred.Calling_NumberIndex, pddf_pred.Called_NumberIndex, c=pddf_pred.prediction) plt.title('CallDetailRecord') plt.show() ###Output _____no_output_____ ###Markdown Evaluation ###Code from pyspark.ml.evaluation import MulticlassClassificationEvaluator evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="accuracy") accuracy = evaluator.evaluate(predictiondf) print(accuracy) ###Output 1.0 ###Markdown Confusion Matrix ###Code from sklearn.metrics import confusion_matrix import pandas as pd import matplotlib.pyplot as plt import seaborn as sn outdataframe = predictiondf.select("prediction", "label") 
pandadf = outdataframe.toPandas() npmat = pandadf.values labels = npmat[:,0] predicted_label = npmat[:,1] cnf_matrix = confusion_matrix(labels, predicted_label) import numpy as np def plot_confusion_matrix(cm, target_names, title='Confusion matrix', cmap=None, normalize=True): import matplotlib.pyplot as plt import numpy as np import itertools accuracy = np.trace(cm) / float(np.sum(cm)) misclass = 1 - accuracy if cmap is None: cmap = plt.get_cmap('Blues') plt.figure(figsize=(8, 6)) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() if target_names is not None: tick_marks = np.arange(len(target_names)) plt.xticks(tick_marks, target_names, rotation=45) plt.yticks(tick_marks, target_names) if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] thresh = cm.max() / 1.5 if normalize else cm.max() / 2 for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): if normalize: plt.text(j, i, "{:0.4f}".format(cm[i, j]), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") else: plt.text(j, i, "{:,}".format(cm[i, j]), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('label') plt.xlabel('Predicted \naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass)) plt.show() plot_confusion_matrix(cnf_matrix, normalize = False, target_names = ['Positive', 'Negative'], title = "Confusion Matrix") from pyspark.mllib.evaluation import MulticlassMetrics # Create (prediction, label) pairs predictionAndLabel = predictiondf.select("prediction", "label").rdd # Generate confusion matrix metrics = MulticlassMetrics(predictionAndLabel) print(metrics.confusionMatrix()) ###Output DenseMatrix([[5469.]]) ###Markdown Cross Validation ###Code from pyspark.ml.tuning import ParamGridBuilder, CrossValidator # Create ParamGrid and Evaluator for Cross Validation paramGrid = ParamGridBuilder().addGrid(nb.smoothing, [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]).build() 
cvEvaluator = MulticlassClassificationEvaluator(metricName="accuracy") # Run Cross-validation cv = CrossValidator(estimator=pipeline, estimatorParamMaps=paramGrid, evaluator=cvEvaluator) cvModel = cv.fit(trainData) # Make predictions on testData. cvModel uses the bestModel. cvPredictions = cvModel.transform(testData) cvPredictions.select("label", "prediction", "probability").show() # Evaluate bestModel found from Cross Validation evaluator.evaluate(cvPredictions) ###Output _____no_output_____
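The accuracy and confusion-matrix computations used throughout the evaluation sections can be reproduced without Spark. A minimal sketch over hypothetical `(prediction, label)` pairs — the `MulticlassClassificationEvaluator` and `MulticlassMetrics` calls above are the real, distributed versions:

```python
from collections import Counter

# Hypothetical (prediction, label) pairs standing in for predictiondf rows.
pairs = [(0, 0), (0, 0), (1, 1), (1, 0), (0, 1), (1, 1)]

# Confusion matrix: counts indexed by (label, prediction).
confusion = Counter((label, pred) for pred, label in pairs)

# Accuracy is the trace of the matrix divided by the total count.
accuracy = sum(n for (label, pred), n in confusion.items() if label == pred) / len(pairs)

print(dict(confusion), accuracy)
```

On the balanced toy pairs above, the diagonal holds four of six samples, so accuracy is 4/6; the perfect 1.0 reported earlier in the notebook reflects a degenerate single-class test set (`DenseMatrix([[5469.]])`).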
examples/import_and_analyse_data.ipynb
###Markdown Interacting with CerebralCortex Data Cerebral Cortex is MD2K's big data cloud tool designed to support population-scale data analysis, visualization, model development, and intervention design for mobile-sensor data. It provides the ability to do machine learning model development on population-scale datasets and provides interoperable interfaces for aggregation of diverse data sources. This page provides an overview of the core Cerebral Cortex operations to familiarize you with how to discover and interact with different sources of data that could be contained within the system. _Note:_ While some of these examples show generated data, they are designed to function on real-world mCerebrum data, and the signal generators were built to facilitate testing and evaluation of the Cerebral Cortex platform by those individuals who are unable to see the original datasets or do not wish to collect data before evaluating the system. Setting Up Environment This notebook does not contain the runtime environments necessary to run Cerebral Cortex. The following commands will download and install these tools, frameworks, and datasets.
###Code import importlib, sys, os from os.path import expanduser sys.path.insert(0, os.path.abspath('..')) DOWNLOAD_USER_DATA=False ALL_USERS=False #this will only work if DOWNLOAD_USER_DATA=True IN_COLAB = 'google.colab' in sys.modules MD2K_JUPYTER_NOTEBOOK = "MD2K_JUPYTER_NOTEBOOK" in os.environ if (get_ipython().__class__.__name__=="ZMQInteractiveShell"): IN_JUPYTER_NOTEBOOK = True JAVA_HOME_DEFINED = "JAVA_HOME" in os.environ SPARK_HOME_DEFINED = "SPARK_HOME" in os.environ PYSPARK_PYTHON_DEFINED = "PYSPARK_PYTHON" in os.environ PYSPARK_DRIVER_PYTHON_DEFINED = "PYSPARK_DRIVER_PYTHON" in os.environ HAVE_CEREBRALCORTEX_KERNEL = importlib.util.find_spec("cerebralcortex") is not None SPARK_VERSION = "3.1.2" SPARK_URL = "https://archive.apache.org/dist/spark/spark-"+SPARK_VERSION+"/spark-"+SPARK_VERSION+"-bin-hadoop2.7.tgz" SPARK_FILE_NAME = "spark-"+SPARK_VERSION+"-bin-hadoop2.7.tgz" CEREBRALCORTEX_KERNEL_VERSION = "3.3.14" DATA_PATH = expanduser("~") if DATA_PATH[:-1]!="/": DATA_PATH+="/" USER_DATA_PATH = DATA_PATH+"cc_data/" if MD2K_JUPYTER_NOTEBOOK: print("Java, Spark, and CerebralCortex-Kernel are installed and paths are already setup.") else: SPARK_PATH = DATA_PATH+"spark-"+SPARK_VERSION+"-bin-hadoop2.7/" if(not HAVE_CEREBRALCORTEX_KERNEL): print("Installing CerebralCortex-Kernel") !pip -q install cerebralcortex-kernel==$CEREBRALCORTEX_KERNEL_VERSION else: print("CerebralCortex-Kernel is already installed.") if not JAVA_HOME_DEFINED: if not os.path.exists("/usr/lib/jvm/java-8-openjdk-amd64/") and not os.path.exists("/usr/lib/jvm/java-11-openjdk-amd64/"): print("\nInstalling/Configuring Java") !sudo apt update !sudo apt-get install -y openjdk-8-jdk-headless os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64/" elif os.path.exists("/usr/lib/jvm/java-8-openjdk-amd64/"): print("\nSetting up Java path") os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64/" elif os.path.exists("/usr/lib/jvm/java-11-openjdk-amd64/"): print("\nSetting up Java path") 
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64/" else: print("JAVA is already installed.") if (IN_COLAB or IN_JUPYTER_NOTEBOOK) and not MD2K_JUPYTER_NOTEBOOK: if SPARK_HOME_DEFINED: print("SPARK is already installed.") elif not os.path.exists(SPARK_PATH): print("\nSetting up Apache Spark ", SPARK_VERSION) !pip -q install findspark import pyspark spark_installation_path = os.path.dirname(pyspark.__file__) import findspark findspark.init(spark_installation_path) if not os.getenv("PYSPARK_PYTHON"): os.environ["PYSPARK_PYTHON"] = os.popen('which python3').read().replace("\n","") if not os.getenv("PYSPARK_DRIVER_PYTHON"): os.environ["PYSPARK_DRIVER_PYTHON"] = os.popen('which python3').read().replace("\n","") else: print("SPARK is already installed.") else: raise SystemExit("Please check your environment configuration at: https://github.com/MD2Korg/CerebralCortex-Kernel/") if DOWNLOAD_USER_DATA: if not os.path.exists(USER_DATA_PATH): if ALL_USERS: print("\nDownloading all users' data.") !rm -rf $USER_DATA_PATH !wget -q http://mhealth.md2k.org/images/datasets/cc_data.tar.bz2 && tar -xf cc_data.tar.bz2 -C $DATA_PATH && rm cc_data.tar.bz2 else: print("\nDownloading a user's data.") !rm -rf $USER_DATA_PATH !wget -q http://mhealth.md2k.org/images/datasets/s2_data.tar.bz2 && tar -xf s2_data.tar.bz2 -C $DATA_PATH && rm s2_data.tar.bz2 else: print("Data already exist. 
Please remove folder", USER_DATA_PATH, "if you want to download the data again") ###Output Installing CerebralCortex-Kernel Building wheel for datascience (setup.py) ... done Building wheel for hdfs3 (setup.py) ... done Building wheel for pyspark (setup.py) ... done Building wheel for gatspy (setup.py) ... done Setting up Java path Setting up Apache Spark 3.1.2 Downloading a user's data. ###Markdown Import Your Own Data mCerebrum is not the only way to collect and load data into *Cerebral Cortex*. It is possible to import your own structured datasets into the platform. This example will demonstrate how to load existing data and subsequently how to read it back from Cerebral Cortex through the same mechanisms you have been utilizing. Additionally, it demonstrates how to write a custom data transformation function to manipulate data and produce a smoothed result which can then be visualized.
Initialize the system ###Code from cerebralcortex.kernel import Kernel CC = Kernel(cc_configs="default", study_name="default", new_study=True) ###Output /usr/local/lib/python3.7/dist-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>. """) ###Markdown Import DataCerebral Cortex provides a set of predefined data import routines that fit typical use cases. The most common is CSV data parser, `csv_data_parser`. These parsers are easy to write and can be extended to support most types of data. Additionally, the data importer, `import_data`, needs to be brought into this notebook so that we can start the data import process.The `import_data` method requires several parameters that are discussed below.- `cc_config`: The path to the configuration files for Cerebral Cortex; this is the same folder that you would utilize for the `Kernel` initialization- `input_data_dir`: The path to where the data to be imported is located; in this example, `sample_data` is available in the file/folder browser on the left and you should explore the files located inside of it- `user_id`: The universally unique identifier (UUID) that owns the data to be imported into the system- `data_file_extension`: The type of files to be considered for import- `data_parser`: The import method or another that defines how to interpret the data samples on a per-line basis- `gen_report`: A simple True/False value that controls if a report is printed to the screen when complete Download sample data ###Code sample_file = DATA_PATH+"data.csv" !wget -q https://raw.githubusercontent.com/MD2Korg/CerebralCortex/master/jupyter_demo/sample_data/data.csv -O $sample_file iot_stream = CC.read_csv(file_path=sample_file, stream_name="some-sample-iot-stream", column_names=["timestamp", 
"some_vals", "version", "user"]) ###Output _____no_output_____ ###Markdown View Imported Data ###Code iot_stream.show(4) ###Output +-------------------+-----------+-------+--------------------+ | timestamp| some_vals|version| user| +-------------------+-----------+-------+--------------------+ |2019-01-09 17:35:00|0.085188727| 1|00000000-afb8-476...| |2019-01-09 17:35:01|0.168675497| 1|00000000-afb8-476...| |2019-01-09 17:35:02|0.740485082| 1|00000000-afb8-476...| |2019-01-09 17:35:03|0.713160997| 1|00000000-afb8-476...| +-------------------+-----------+-------+--------------------+ only showing top 4 rows ###Markdown Document Data ###Code from cerebralcortex.core.metadata_manager.stream.metadata import Metadata, DataDescriptor, ModuleMetadata stream_metadata = Metadata() stream_metadata.set_name("iot-data-stream").set_description("This is randomly generated data for demo purposes.") \ .add_dataDescriptor( DataDescriptor().set_name("timestamp").set_type("datetime").set_attribute("description", "UTC timestamp of data point collection.")) \ .add_dataDescriptor( DataDescriptor().set_name("some_vals").set_type("float").set_attribute("description", \ "Random values").set_attribute("range", \ "Data is between 0 and 1.")) \ .add_dataDescriptor( DataDescriptor().set_name("version").set_type("int").set_attribute("description", "version of the data")) \ .add_dataDescriptor( DataDescriptor().set_name("user").set_type("string").set_attribute("description", "user id")) \ .add_module(ModuleMetadata().set_name("cerebralcortex.data_importer").set_attribute("url", "hhtps://md2k.org").set_author( "Nasir Ali", "[email protected]")) iot_stream.metadata = stream_metadata ###Output _____no_output_____ ###Markdown View Metadata ###Code iot_stream.metadata ###Output _____no_output_____ ###Markdown How to write an algorithmThis section provides an example of how to write a simple smoothing algorithm and apply it to the data that was just imported. 
Import the necessary modules ###Code from pyspark.sql.functions import pandas_udf, PandasUDFType from pyspark.sql.types import StructField, StructType, StringType, FloatType, TimestampType, IntegerType from pyspark.sql.functions import minute, second, mean, window from pyspark.sql import functions as F import numpy as np ###Output _____no_output_____ ###Markdown Define the SchemaThis schema defines what the computation module will return to the execution context for each row or window in the datastream. ###Code # column name and return data type # acceptable data types for schem are - "null", "string", "binary", "boolean", # "date", "timestamp", "decimal", "double", "float", "byte", "integer", # "long", "short", "array", "map", "structfield", "struct" schema="timestamp timestamp, some_vals double, version int, user string, vals_avg double" ###Output _____no_output_____ ###Markdown Write a user defined functionThe user-defined function (UDF) is one of two mechanisms available for distributed data processing within the Apache Spark framework. In this case, we are computing a simple windowed average. ###Code def smooth_algo(key, df): # key contains all the grouped column values # In this example, grouped columns are (userID, version, window{start, end}) # For example, if you want to get the start and end time of a window, you can # get both values by calling key[2]["start"] and key[2]["end"] some_vals_mean = df["some_vals"].mean() df["vals_avg"] = some_vals_mean return df ###Output _____no_output_____ ###Markdown Run the smoothing algorithm on imported dataThe smoothing algorithm is applied to the datastream by calling the `run_algorithm` method and passing the method as a parameter along with which columns, `some_vals`, that should be sent. Finally, the `windowDuration` parameter specified the size of the time windows on which to segment the data before applying the algorithm. Notice that when the next cell is run, the operation completes nearly instantaneously. 
This is due to the lazy evaluation aspects of the Spark framework. When you run the next cell to show the data, the algorithm will be applied to the whole dataset before displaying the results on the screen. ###Code smooth_stream = iot_stream.compute(smooth_algo, schema=schema, windowDuration=10) smooth_stream.show(truncate=False) ###Output +-------------------+-----------+-------+------------------------------------+-------------------+ |timestamp |some_vals |version|user |vals_avg | +-------------------+-----------+-------+------------------------------------+-------------------+ |2019-01-09 17:46:30|0.070952751|1 |00000000-afb8-476e-9872-6472b4e66b68|0.37378515760000003| |2019-01-09 17:46:31|0.279759975|1 |00000000-afb8-476e-9872-6472b4e66b68|0.37378515760000003| |2019-01-09 17:46:32|0.096120952|1 |00000000-afb8-476e-9872-6472b4e66b68|0.37378515760000003| |2019-01-09 17:46:33|0.121091841|1 |00000000-afb8-476e-9872-6472b4e66b68|0.37378515760000003| |2019-01-09 17:46:34|0.356470355|1 |00000000-afb8-476e-9872-6472b4e66b68|0.37378515760000003| |2019-01-09 17:46:35|0.800499717|1 |00000000-afb8-476e-9872-6472b4e66b68|0.37378515760000003| |2019-01-09 17:46:36|0.799160143|1 |00000000-afb8-476e-9872-6472b4e66b68|0.37378515760000003| |2019-01-09 17:46:37|0.372062031|1 |00000000-afb8-476e-9872-6472b4e66b68|0.37378515760000003| |2019-01-09 17:46:38|0.601158405|1 |00000000-afb8-476e-9872-6472b4e66b68|0.37378515760000003| |2019-01-09 17:46:39|0.240575406|1 |00000000-afb8-476e-9872-6472b4e66b68|0.37378515760000003| |2019-01-09 17:39:00|0.073887651|1 |00000000-afb8-476e-9872-6472b4e66b68|0.3879962564 | |2019-01-09 17:39:01|0.45542365 |1 |00000000-afb8-476e-9872-6472b4e66b68|0.3879962564 | |2019-01-09 17:39:02|0.629757033|1 |00000000-afb8-476e-9872-6472b4e66b68|0.3879962564 | |2019-01-09 17:39:03|0.67459103 |1 |00000000-afb8-476e-9872-6472b4e66b68|0.3879962564 | |2019-01-09 17:39:04|0.513277101|1 |00000000-afb8-476e-9872-6472b4e66b68|0.3879962564 | |2019-01-09 
17:39:05|0.131369076|1 |00000000-afb8-476e-9872-6472b4e66b68|0.3879962564 | |2019-01-09 17:39:06|0.604344202|1 |00000000-afb8-476e-9872-6472b4e66b68|0.3879962564 | |2019-01-09 17:39:07|0.12731815 |1 |00000000-afb8-476e-9872-6472b4e66b68|0.3879962564 | |2019-01-09 17:39:08|0.424741964|1 |00000000-afb8-476e-9872-6472b4e66b68|0.3879962564 | |2019-01-09 17:39:09|0.245252707|1 |00000000-afb8-476e-9872-6472b4e66b68|0.3879962564 | +-------------------+-----------+-------+------------------------------------+-------------------+ only showing top 20 rows ###Markdown Visualize dataThese are two plots that show the original and smoothed data to visually check how the algorithm transformed the data. ###Code from cerebralcortex.plotting.basic.plots import plot_timeseries plot_timeseries(iot_stream) plot_timeseries(smooth_stream) ###Output _____no_output_____
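The 10-second windowed average that `compute` applies above can be sketched in plain Python. A minimal version over hypothetical `(epoch_seconds, value)` samples, grouping by `timestamp // windowDuration` the way Spark's time window buckets rows (the pandas-UDF pipeline above is the real, distributed version):

```python
from collections import defaultdict

# Hypothetical (epoch_seconds, value) samples standing in for the iot stream.
samples = [(0, 0.2), (3, 0.4), (9, 0.6), (12, 1.0), (19, 3.0)]
window_duration = 10  # seconds, matching windowDuration=10 above

# Bucket samples into fixed-size windows, then average each bucket,
# mirroring what smooth_algo does per grouped window.
buckets = defaultdict(list)
for ts, val in samples:
    buckets[ts // window_duration].append(val)

vals_avg = {w: sum(vs) / len(vs) for w, vs in buckets.items()}
print(vals_avg)  # window 0 averages the first three samples, window 1 the last two
```

Every row in a window receives the same `vals_avg`, which is why the smoothed output above repeats one average per 10-second block.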
UKIRTdataload.ipynb
###Markdown ###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Read the Kepler light-curve table, skipping the header block
# (rows 0-202 plus rows 204-206)
kplr1 = pd.read_table('kplr002304168-2009131105131_llc_lc.tbl',
                      skiprows=(lambda x: x in np.concatenate((np.arange(0,203), np.array([204,205,206])))),
                      delim_whitespace=True, header=[0])
kplr1

# Plot each light-curve column in turn
for i in range(22):
    plt.plot(kplr1.iloc[:,i])
    plt.title(kplr1.iloc[:,i].name)
    plt.show()
###Output _____no_output_____
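The `skiprows` argument passed to `read_table` above is just a predicate over line numbers. A pure-Python sketch of the same filtering logic, with a hypothetical small header layout instead of the real 203-row header block:

```python
# Hypothetical header layout: skip rows 0-2 and row 4, keep the rest.
skip = lambda x: x in set(range(0, 3)) | {4}

lines = ["h1", "h2", "h3", "colnames", "units", "data1", "data2"]
kept = [line for i, line in enumerate(lines) if not skip(i)]
print(kept)  # -> ['colnames', 'data1', 'data2']
```

pandas applies the callable to each physical row index the same way, which is why the notebook can keep the column-name row (203) while dropping the data-type and unit rows (204-206) around it.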
wrangle_act.ipynb
###Markdown Project: Wrangle and Analyze Data ###Code import pandas as pd import numpy as np from twython import Twython import requests import json import time import matplotlib.pyplot as plt import seaborn as sns from wordcloud import WordCloud, STOPWORDS from PIL import Image import urllib ###Output _____no_output_____ ###Markdown Data Gathering In the cells below, **all** three pieces of data for this project were gathered and loaded into the notebook. 1. Directly download the WeRateDogs Twitter archive data (twitter_archive_enhanced.csv) ###Code # Supplied file archive = pd.read_csv('twitter-archive-enhanced.csv') ###Output _____no_output_____ ###Markdown 2. Use the Requests library to download the tweet image predictions (image_predictions.tsv) ###Code # Requesting tweet image predictions with open('image_predictions.tsv' , 'wb') as file: image_predictions = requests.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv', auth=('user', 'pass')) file.write(image_predictions.content) # Reading image predictions predictions = pd.read_csv('image_predictions.tsv', sep='\t') ###Output _____no_output_____ ###Markdown 3. Use the Tweepy library to query additional data via the Twitter API (tweet_json.txt) ###Code # Use Python's Tweepy library and store each tweet's entire set of JSON data in a file called tweet_json.txt
import tweepy

consumer_key = '--'
consumer_secret = '--'
access_token = '--'
access_secret = '--'

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)

collected = []
not_collected = []
with open('tweet_json.txt', 'w') as file:
    for tweet_id in list(archive['tweet_id']):
        try:
            tweet_status = api.get_status(tweet_id, tweet_mode='extended')
            json.dump(tweet_status._json, file)
            file.write('\n')
            collected.append(tweet_id)
        except Exception as e:
            not_collected.append(tweet_id)

# Reading JSON content as a pandas dataframe
tweets = pd.read_json('tweet_json.txt', lines=True, encoding='utf-8') ###Output _____no_output_____ ###Markdown Assessing Data In this section, **nine (9) quality issues and five (5) tidiness issues** were detected and documented, using **both** visual and programmatic assessment of the data. ###Code # Load the gathered data files archive.head() archive.tail() archive.shape archive.info() archive.describe() # Load the gathered data files predictions.head() predictions.tail() predictions.shape predictions.info() predictions.describe() # Load the gathered data files tweets.head() tweets.tail() tweets.shape tweets.info() tweets.describe() ###Output _____no_output_____ ###Markdown Quality issues

Archive
1. [The timestamp field is in string format (object) and tweet_id is in int64](1)
2. [There are only 181 retweets (retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp)](2)
3. [There are only 78 replies (in_reply_to_status_id, in_reply_to_user_id)](3)
4. [There are missing values in the column expanded_urls](4)
5. [Column name floofer should be spelled 'floof'](5)
6. [Dogs with no name in the description have given names of "a", "an" and "None" instead of "NaN"](6)
7. [In the column rating_denominator there are votes greater than 10](7)
8. [Drop unnecessary columns](8)

Predictions
9.
[The types of dogs in columns p1, p2, and p3 have some lowercase and uppercase letters](9)
10. [The tweet_id field is in int64, should be in string format](10)

Tweets
11. [Rename the column 'id' to 'tweet_id' to facilitate merging](11)
12. [Clean up text column to show only the text](12)

Tidiness issues

Archive
1. [Several columns represent the same category, which is divided into "doggo", "floofer", "pupper", "puppo" columns, but we need only one column to represent these classifications](a)
2. [Merge all tables to realize any analysis](b)

Cleaning Data In this section, **all** of the issues documented while assessing were cleaned up. ###Code # Make copies of original pieces of data archive_clean = archive.copy() predictions_clean = predictions.copy() tweets_clean = tweets.copy() ###Output _____no_output_____ ###Markdown Quality issues Issue 1: Erroneous data types Define: The timestamp field is in string format (object) and tweet_id is in int64 Code ###Code # Change the dtype of column timestamp from object to datetime and tweet_id to string archive_clean.timestamp = archive_clean.timestamp.astype('datetime64') archive_clean.tweet_id = archive_clean.tweet_id.astype(str)
object 15 pupper 2356 non-null object 16 puppo 2356 non-null object dtypes: datetime64[ns](1), float64(4), int64(2), object(10) memory usage: 313.0+ KB ###Markdown Issue 2: Missing records Define: There are only 181 retweets (retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp) Code ###Code #Use drop function to drop the non necessary columns archive_clean = archive_clean.drop(['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code #Check for changes archive_clean.head() ###Output _____no_output_____ ###Markdown Issue 3: Missing records Define: There are only 78 replies (in_reply_to_status_id, in_reply_to_user_id) Code ###Code #Use drop function to drop the non necessary columns archive_clean = archive_clean.drop(['in_reply_to_status_id', 'in_reply_to_user_id'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code #Check for changes archive_clean.head() ###Output _____no_output_____ ###Markdown Issue 4: Missing records Define: There are missing values in the column expanded_urls Code ###Code #Use drop function to drop the expanded_urls. 
We won't use this column for the analysis archive_clean = archive_clean.drop(['expanded_urls'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code #Check for changes archive_clean.head() ###Output _____no_output_____ ###Markdown Issue 5: Correct the column name Define: Column name floofer should be spelled 'floof' Code ###Code # Rename the column 'floofer' archive_clean = archive_clean.rename(columns={'floofer':'floof'}) ###Output _____no_output_____ ###Markdown Test ###Code #Check for changes archive_clean.head() archive_clean.floof = archive_clean['floof'].map({'floofer':'floof'}, na_action=None) archive_clean ###Output _____no_output_____ ###Markdown Issue 6: Different inputs for the same categories Define: Dogs with no name in the description have given names of "a", "an" and "None" instead of "NaN" Code ###Code # Replace the placeholder names 'None', 'a' and 'an' with NaN in the name column archive_clean.name = archive_clean.name.replace(['None', 'a', 'an'], np.nan) ###Output _____no_output_____ ###Markdown Test ###Code #Check for changes archive_clean.name.value_counts() ###Output _____no_output_____ ###Markdown Issue 7: There are no delimitations for the rating denominator Define: In the column rating_denominator there are votes greater than 10 Code ###Code # Keep only the rows where rating_denominator is exactly 10 archive_clean = archive_clean[archive_clean.rating_denominator == 10] ###Output _____no_output_____ ###Markdown Test ###Code #Check for changes archive_clean ###Output _____no_output_____ ###Markdown Issue 8: Unnecessary columns Define: Drop unnecessary columns Code ###Code # Use the drop function to drop the source column archive_clean.drop(columns='source', inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code #Check for change archive_clean.head() ###Output _____no_output_____ ###Markdown Issue 9: Different letter cases Define: The types of dogs in
columns p1, p2, and p3 have some lowercase and uppercase letters Code ###Code #Convert all the dogs names to lowercase letters predictions_clean['p1'] = predictions_clean['p1'].str.lower() predictions_clean['p2'] = predictions_clean['p2'].str.lower() predictions_clean['p3'] = predictions_clean['p3'].str.lower() ###Output _____no_output_____ ###Markdown Test ###Code #Check for changes predictions_clean.p1.head() ###Output _____no_output_____ ###Markdown Issue 10: Differents data type format Define: The tweet_id field is in int64, should be in string format Code ###Code #change the dtype of column tweed_id from int64 to string format predictions_clean.tweet_id = predictions_clean.tweet_id.astype(str) tweets_clean.id = tweets_clean.id.astype(str) ###Output _____no_output_____ ###Markdown Test ###Code #Check for changes predictions_clean.info() #Check for changes tweets_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2327 entries, 0 to 2326 Data columns (total 32 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 created_at 2327 non-null datetime64[ns, UTC] 1 id 2327 non-null object 2 id_str 2327 non-null int64 3 full_text 2327 non-null object 4 truncated 2327 non-null bool 5 display_text_range 2327 non-null object 6 entities 2327 non-null object 7 extended_entities 2057 non-null object 8 source 2327 non-null object 9 in_reply_to_status_id 77 non-null float64 10 in_reply_to_status_id_str 77 non-null float64 11 in_reply_to_user_id 77 non-null float64 12 in_reply_to_user_id_str 77 non-null float64 13 in_reply_to_screen_name 77 non-null object 14 user 2327 non-null object 15 geo 0 non-null float64 16 coordinates 0 non-null float64 17 place 1 non-null object 18 contributors 0 non-null float64 19 is_quote_status 2327 non-null bool 20 retweet_count 2327 non-null int64 21 favorite_count 2327 non-null int64 22 favorited 2327 non-null bool 23 retweeted 2327 non-null bool 24 possibly_sensitive 2195 non-null float64 25 
possibly_sensitive_appealable 2195 non-null float64 26 lang 2327 non-null object 27 retweeted_status 160 non-null object 28 quoted_status_id 26 non-null float64 29 quoted_status_id_str 26 non-null float64 30 quoted_status_permalink 26 non-null object 31 quoted_status 24 non-null object dtypes: bool(4), datetime64[ns, UTC](1), float64(11), int64(3), object(13) memory usage: 518.2+ KB ###Markdown Issue 11: Different column names for the same content Define: Rename the column 'id' to 'tweet_id' to facilitate merging Code ###Code # Use the rename() function to rename the column tweets_clean = tweets_clean.rename(columns={'id':'tweet_id'}) ###Output _____no_output_____ ###Markdown Test ###Code #Check for changes tweets_clean.head() ###Output _____no_output_____ ###Markdown Issue 12: Column text contains multiple variables Define: Clean up text column to show only the text Code ###Code # Remove url links archive_clean['text'] = archive_clean.text.str.replace(r"http\S+", "") archive_clean['text'] = archive_clean.text.str.strip() ###Output <ipython-input-326-1a7db23b574b>:2: FutureWarning: The default value of regex will change from True to False in a future version.
archive_clean['text'] = archive_clean.text.str.replace(r"http\S+", "") ###Markdown Test ###Code archive_clean['text'][0] ###Output _____no_output_____ ###Markdown Tidiness issues Issue 1: Unify the dog classes Define: Several columns represent the same category, which is divided into "doggo", "floofer", "pupper", "puppo" columns, but we need only one column to represent these classifications Code ###Code # Use the loc function to add a new column representing the dog stage archive_clean.loc[archive_clean['doggo'] == 'doggo', 'stage'] = 'doggo' archive_clean.loc[archive_clean['floof'] == 'floof', 'stage'] = 'floof' archive_clean.loc[archive_clean['pupper'] == 'pupper', 'stage'] = 'pupper' archive_clean.loc[archive_clean['puppo'] == 'puppo', 'stage'] = 'puppo' ###Output _____no_output_____ ###Markdown Test ###Code #Check for changes archive_clean.head() ###Output _____no_output_____ ###Markdown Code ###Code # Dropping the columns: doggo, floof, pupper and puppo archive_clean = archive_clean.drop(['doggo', 'floof', 'pupper', 'puppo'], axis = 1) ###Output _____no_output_____ ###Markdown Test ###Code #Check the final change in the dog stages archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 7 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null object 1 timestamp 2356 non-null datetime64[ns] 2 text 2356 non-null object 3 rating_numerator 2356 non-null int64 4 rating_denominator 2333 non-null object 5 name 1549 non-null object 6 stage 380 non-null object dtypes: datetime64[ns](1), int64(1), object(5) memory usage: 129.0+ KB ###Markdown Issue 2: Separated tables Define: Merge all tables to realize any analysis Code ###Code # Merge the archive_clean and tweets_clean tables merge_df = archive_clean.join(tweets_clean.set_index('tweet_id'), on='tweet_id') merge_df.head() ###Output _____no_output_____ ###Markdown Test ###Code #Check the new df merge_df.info()
#Join the merge_df to the predictions_clean table twitter_master = merge_df.join(predictions_clean.set_index('tweet_id'), on='tweet_id') twitter_master.head() twitter_master.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 49 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null object 1 timestamp 2356 non-null datetime64[ns] 2 text 2356 non-null object 3 rating_numerator 2356 non-null int64 4 rating_denominator 2333 non-null object 5 name 1549 non-null object 6 stage 380 non-null object 7 created_at 2327 non-null datetime64[ns, UTC] 8 id_str 2327 non-null float64 9 full_text 2327 non-null object 10 truncated 2327 non-null object 11 display_text_range 2327 non-null object 12 entities 2327 non-null object 13 extended_entities 2057 non-null object 14 source 2327 non-null object 15 in_reply_to_status_id 77 non-null float64 16 in_reply_to_status_id_str 77 non-null float64 17 in_reply_to_user_id 77 non-null float64 18 in_reply_to_user_id_str 77 non-null float64 19 in_reply_to_screen_name 77 non-null object 20 user 2327 non-null object 21 geo 0 non-null float64 22 coordinates 0 non-null float64 23 place 1 non-null object 24 contributors 0 non-null float64 25 is_quote_status 2327 non-null object 26 retweet_count 2327 non-null float64 27 favorite_count 2327 non-null float64 28 favorited 2327 non-null object 29 retweeted 2327 non-null object 30 possibly_sensitive 2195 non-null float64 31 possibly_sensitive_appealable 2195 non-null float64 32 lang 2327 non-null object 33 retweeted_status 160 non-null object 34 quoted_status_id 26 non-null float64 35 quoted_status_id_str 26 non-null float64 36 quoted_status_permalink 26 non-null object 37 quoted_status 24 non-null object 38 jpg_url 2075 non-null object 39 img_num 2075 non-null float64 40 p1 2075 non-null object 41 p1_conf 2075 non-null float64 42 p1_dog 2075 non-null object 43 p2 2075 non-null object 44 p2_conf 2075 
non-null float64 45 p2_dog 2075 non-null object 46 p3 2075 non-null object 47 p3_conf 2075 non-null float64 48 p3_dog 2075 non-null object dtypes: datetime64[ns, UTC](1), datetime64[ns](1), float64(18), int64(1), object(28) memory usage: 902.0+ KB ###Markdown Code ###Code #Filter the columns for further analysis twitter_master_clean = twitter_master.filter(['tweet_id','timestamp','text', 'rating_numerator', 'rating_denominator','name','stage','retweet_count', 'favorite_count', 'jpg_url','img_num', 'p1', 'p1_conf','p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog']) twitter_master_clean.head() ###Output _____no_output_____ ###Markdown Storing Data Save the gathered, assessed, and cleaned master dataset to a CSV file named "twitter_archive_master.csv". ###Code #store data with to_csv function twitter_master_clean.to_csv('twitter_archive_master.csv', index = False) ###Output _____no_output_____ ###Markdown Analyzing and Visualizing Data In this section, analyze and visualize your wrangled data.
You must produce at least **three (3) insights and one (1) visualization.** ###Code #Make a copy rate_dogs = twitter_master_clean.copy() rate_dogs.info() #Select missing values from the merged table to drop later drop = rate_dogs[pd.isnull(rate_dogs['retweet_count'])].index drop #Drop missing data from the merged table rate_dogs.drop(index=drop, inplace=True) #Check the changes rate_dogs.info() #Investigating the time span of the tweets rate_dogs.timestamp.min(), rate_dogs.timestamp.max() #Set the index to the datetime rate_dogs = rate_dogs.set_index('timestamp') #Look at summary statistics rate_dogs.describe() rate_dogs.favorite_count.max()/rate_dogs.retweet_count.max() #See if there are any correlations rate_dogs.corr() #Plot the correlations sns.pairplot(rate_dogs, vars = ['rating_numerator', 'retweet_count', 'favorite_count', 'p1_conf'], diag_kind = 'kde', plot_kws = {'alpha': 0.9}); #Check the most favorited tweet rate_dogs.sort_values(by = 'favorite_count', ascending = False).head(3) #Check the most retweeted tweet rate_dogs.sort_values(by = 'retweet_count', ascending = False).head(3) #Check for the most common dog stages rate_dogs.stage.value_counts() #Check for the most common dog breeds rate_dogs.p1.value_counts().head(10) #Plot the most common dog breeds plt.barh(rate_dogs.p1.value_counts().head(10).index, rate_dogs.p1.value_counts().head(10), color = 'g', alpha=0.9) plt.xlabel('Number of tweets', fontsize = 10) plt.title('Top 10 dog breeds by tweet count', fontsize = 14) plt.gca().invert_yaxis() plt.show(); #Group favorite count by dog breed and see which are the most favorited top10 = rate_dogs.favorite_count.groupby(rate_dogs['p1']).sum().sort_values(ascending = False) top10.head(10) #Plot the most favorited dog breeds plt.barh(top10.head(10).index, top10.head(10), color = 'g', alpha=0.9) plt.xlabel('Favorite count', fontsize = 10) plt.title('Top 10 favorite dog breeds', fontsize = 14) plt.gca().invert_yaxis() plt.show(); #Plot the most favorited dog stages
favorite_count_stages = rate_dogs.groupby('stage').favorite_count.mean().sort_values() favorite_count_stages.plot(x="stage",y='favorite_count',kind='barh',color='g', alpha=0.9) plt.xlabel('Favorite count', fontsize = 10) plt.ylabel('Dog stages', fontsize = 10) plt.title('Average favorite count by dog stage', fontsize = 14) ###Output _____no_output_____ ###Markdown Insights:1. The favorite count is about 2.039 times higher than the retweet count, which shows a preference for simply favoriting a post rather than retweeting it.2. There is a strong correlation between favorite counts and retweet counts; more precisely, the correlation is 0.801345. To illustrate, the most retweeted and favorited dog is a doggo Labrador Retriever that received 72474 retweets and 147742 favorites. Its tweet ID is 744234799360020481.3. The most common dog breeds are golden retriever, labrador retriever and pembroke, respectively. They also receive the most favorite counts. Visualizations ###Code #Plot a scatter plot to verify a possible trend in the amount of favorites over time plt.scatter(rate_dogs.index, rate_dogs['favorite_count']) plt.title('Daily tweets by favorite count', fontsize = 14) plt.xlabel('Days', fontsize = 14) plt.ylabel('Favorite count', fontsize = 14) plt.show(); #Plot a Word Cloud with the texts written tweets = np.array(rate_dogs.text) list1 = [] for tweet in tweets: list1.append(tweet.replace("\n","")) mask = np.array(Image.open(requests.get('https://img.favpng.com/23/21/16/dog-vector-graphics-bengal-cat-illustration-clip-art-png-favpng-RWmY6zWcLaCxWurMaPEpZpARA.jpg', stream=True).raw)) text = list1 def gen_wc(text, mask): word_cloud = WordCloud(width = 700, height = 400, background_color='white', mask=mask).generate(str(text)) plt.figure(figsize=(16,10),facecolor = 'white', edgecolor='red') plt.imshow(word_cloud) plt.axis('off') plt.tight_layout(pad=0) plt.show() gen_wc(text, mask) # The code used above was modeled from this blog on
how to generate a word cloud in python. #https://blog.goodaudience.com/how-to-generate-a-word-cloud-of-any-shape-in-python-7bce27a55f6e ###Output _____no_output_____ ###Markdown Loading Libraries ###Code #import all needed libraries for the whole exercise #my Tweepy account is not approved yet, so I downloaded the file import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns import requests import json import datetime as dt ###Output _____no_output_____ ###Markdown Gathering Data for this Project ###Code #read the first file and view the data twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') twitter_archive.info() # download image tsv file from a website programmatically using requests library url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' resp = requests.get(url) # Save to tsv file with open('image_predictions.tsv', mode='wb') as file: file.write(resp.content) # load second file and view the data image_predictions = pd.read_csv('image_predictions.tsv', sep='\t') image_predictions.info() # extract data to dataframe, and close the file afterward tweets_data = [] tweet_file = open('tweet-json.txt', 'r') for line in tweet_file: try: tweetline = json.loads(line) tweets_data.append(tweetline) except: continue tweet_file.close() # Convert last file to dataframe, including only the requested columns, so data gathering is complete tweet_info = pd.DataFrame() tweet_info['tweet_id'] = list(map(lambda tweet: tweet['id'], tweets_data)) tweet_info['retweet_count'] = list(map(lambda tweet: tweet['retweet_count'], tweets_data)) tweet_info['favorite_count'] = list(map(lambda tweet: tweet['favorite_count'], tweets_data)) # check data structure tweet_info.head() tweet_info.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2354 entries, 0 to 2353 Data columns (total 3 columns): tweet_id 2354 non-null int64 retweet_count 2354 non-null int64
favorite_count 2354 non-null int64 dtypes: int64(3) memory usage: 55.2 KB ###Markdown Assessing Data for this Project ###Code #assessing first file twitter_archive twitter_archive.info() #first thing to notice, there are some columns with missing values like in_reply_to_status_id and in_reply_to_user_id, also the #retweeted columns, some of the urls are missing (2297 values only) twitter_archive.head() #checking for duplicates in tweet_id sum(twitter_archive.tweet_id.duplicated()) #no duplication in tweet id, should be unique, yet it's int, better to be str #timestamp is not date time #four columns for dog type #181 rows are retweets #check the source for errors twitter_archive.source.value_counts() # most of the data is from twitter.com, yet there are other sources like vine.co #checking rating numerator twitter_archive.rating_numerator.value_counts() #too many values, better to have a look at the data description twitter_archive.describe() #there are outliers, as the maximum is 1776 while the 75th percentile is 12 #checking denominator twitter_archive.rating_denominator.value_counts() # two things to notice: there are denominator values other than 10, which need to be unified with the numerator, and the zero needs to be dropped # also data is int, better to be float #checking the name column twitter_archive.name.value_counts() #many dogs have no names, some dogs have wrong names like 'a' with 55 records #assessing second file image_predictions image_predictions.info() #no missing values #checking data head image_predictions.head() #check tweet id for duplication sum(image_predictions.tweet_id.duplicated()) #no duplication image_predictions.img_num.value_counts() #some rows have more than 1 photo #find totally wrong predictions image_predictions[~(image_predictions['p1_dog']==True) & ~(image_predictions['p2_dog']==True) & ~(image_predictions['p3_dog']==True)].count() #there are 324 dogs with all false predictions #checking third dataframe tweet_info tweet_info.info() #no missing values, all data are int
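The "all three predictions are False" check above can also be written with `any(axis=1)`. A small sketch on a made-up frame (only the column names follow the predictions file; the rows are invented):

```python
import pandas as pd

# Toy stand-in for image_predictions (rows are invented)
preds = pd.DataFrame({
    'tweet_id': [1, 2, 3, 4],
    'p1_dog': [True, False, False, True],
    'p2_dog': [False, False, True, True],
    'p3_dog': [False, False, False, True],
})

# A prediction is "totally wrong" when none of the three flags is True
all_false = ~preds[['p1_dog', 'p2_dog', 'p3_dog']].any(axis=1)
print(preds.loc[all_false, 'tweet_id'].tolist())
```

On the real file this boolean mask would flag the same 324 rows counted above, and `all_false.sum()` gives the count directly.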
tweet_info.head() #check ids for duplications sum(tweet_info.tweet_id.duplicated()) #no duplication ###Output _____no_output_____ ###Markdown Assessment Results Quality 1-Convert timestamp to datetime 2-Remove retweets (keep original tweets only) 3-Remove replies (keep original tweets only) 4-Change Source from long HTML code to simple text 5-Fix rating_denominator by removing the "Zeros" 6-Change rating_numerator and rating_denominator to float 7-Fix rating_numerator, by dividing by the denominator and removing outliers 8-Fix wrong dog names 9-Tweet_id datatype is an integer, convert to string 10-Remove duplicate jpg_url 11-Convert confidence levels to a percentage by multiplying by 100 12-Create a new column each for breed and confidence Tidiness 1-Gather dog columns (doggo, puppo, pupper, floofer) into one 2-Rename p1, p2, p3 with more understandable names 3-Create a new column to predict if dog or not (Dog, Maybe Dog, Not Dog) 4-Merge data frames Cleaning Data for this Project ###Code #first create copies from the 3 data sets archive_clean = twitter_archive.copy() image_clean = image_predictions.copy() tweet_clean = tweet_info.copy() ###Output _____no_output_____ ###Markdown Define Timestamp in twitter_archive is object, needs to be changed to datetime Code ###Code archive_clean['timestamp'] = pd.to_datetime(archive_clean.timestamp) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean['timestamp'].sample(5) ###Output _____no_output_____ ###Markdown Define There are retweets in twitter_archive, while we are interested in original tweets only Code ###Code archive_clean = archive_clean[archive_clean.retweeted_status_id.isna()] ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2175 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2175 non-null datetime64[ns] source 2175 non-null object text 2175 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: datetime64[ns](1), float64(4), int64(3), object(9) memory usage: 305.9+ KB ###Markdown Define Remove reply tweets in twitter_archive, as we are only interested in original tweets Code ###Code archive_clean = archive_clean[archive_clean.in_reply_to_status_id.isna()] ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2097 non-null int64 in_reply_to_status_id 0 non-null float64 in_reply_to_user_id 0 non-null float64 timestamp 2097 non-null datetime64[ns] source 2097 non-null object text 2097 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2094 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 2097 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dtypes: datetime64[ns](1), float64(4), int64(3), object(9) memory usage: 294.9+ KB ###Markdown Define Drop retweet-related columns in twitter_archive as they have no value now Code ###Code archive_clean= archive_clean.drop(['retweeted_status_id','retweeted_status_user_id','retweeted_status_timestamp','in_reply_to_status_id','in_reply_to_user_id'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097
entries, 0 to 2355 Data columns (total 12 columns): tweet_id 2097 non-null int64 timestamp 2097 non-null datetime64[ns] source 2097 non-null object text 2097 non-null object expanded_urls 2094 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 2097 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dtypes: datetime64[ns](1), int64(3), object(8) memory usage: 213.0+ KB ###Markdown Define The source HTML code is complicated and hard to read, so it needs to be simplified Code ###Code source_O = {'<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>': 'Twitter for iPhone', '<a href="http://vine.co" rel="nofollow">Vine - Make a Scene</a>': 'Vine', '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>': 'Twitter Web Client', '<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>': 'TweetDeck'} archive_clean["source"].replace(source_O, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.source.value_counts() ###Output _____no_output_____ ###Markdown Define Too many columns for dog stages: gather the dog columns (doggo, puppo, pupper, floofer) into one, call it dog_stage, then drop the extra dog columns Code ###Code archive_clean.doggo= archive_clean.doggo.replace('None', '') archive_clean.floofer= archive_clean.floofer.replace('None', '') archive_clean.pupper= archive_clean.pupper.replace('None', '') archive_clean.puppo= archive_clean.puppo.replace('None', '') archive_clean['dog_stage'] = archive_clean.doggo + archive_clean.floofer + archive_clean.pupper + archive_clean.puppo archive_clean.loc[archive_clean.dog_stage == '', 'dog_stage'] = "None" archive_clean.loc[archive_clean.dog_stage == 'doggopupper', 'dog_stage']= 'Multiple' archive_clean.loc[archive_clean.dog_stage == 'doggopuppo', 'dog_stage']= 'Multiple' archive_clean.loc[archive_clean.dog_stage ==
'doggofloofer', 'dog_stage'] = 'Multiple' archive_clean = archive_clean.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis = 1) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.dog_stage.value_counts() archive_clean.head(2) ###Output _____no_output_____ ###Markdown Define Fix denominator by removing the "Zeros" Code ###Code #apply the filter to the working copy, not the original dataframe archive_clean = archive_clean[archive_clean.rating_denominator != 0] ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.query('rating_denominator == 0') ###Output _____no_output_____ ###Markdown Define Fix denominator and numerator to floats Code ###Code archive_clean.rating_numerator= archive_clean.rating_numerator.astype(float) archive_clean.rating_denominator= archive_clean.rating_denominator.astype(float) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 9 columns): tweet_id 2097 non-null int64 timestamp 2097 non-null datetime64[ns] source 2097 non-null object text 2097 non-null object expanded_urls 2094 non-null object rating_numerator 2097 non-null float64 rating_denominator 2097 non-null float64 name 2097 non-null object dog_stage 2097 non-null object dtypes: datetime64[ns](1), float64(2), int64(1), object(5) memory usage: 163.8+ KB ###Markdown Define Fix rating_numerator, by dividing by the denominator and removing outliers Code ###Code archive_clean['rating_numerator'] = (archive_clean['rating_numerator']/archive_clean['rating_denominator'] )*10 archive_clean = archive_clean[archive_clean.rating_numerator <15] ###Output _____no_output_____ ###Markdown Define Convert confidence level to number out of 100 Code ###Code confidence = ['p1_conf', 'p2_conf', 'p3_conf'] for con in confidence: image_clean[con] =
round(image_clean[con]*100).astype(int) ###Output _____no_output_____ ###Markdown Test ###Code image_clean.head() ###Output _____no_output_____ ###Markdown Define Change p1,p2,p3 to more understandable names Code ###Code image_clean.rename(columns={'p1': 'prediction_1','p2': 'prediction_2','p3': 'prediction_3'}, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code image_clean.head() ###Output _____no_output_____ ###Markdown Define Fix dog name issue in archive_clean, not all names are correct Code ###Code wrong_name = archive_clean.name.str.islower() archive_clean.loc[wrong_name, 'name'] = 'None' ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.name.value_counts() ###Output _____no_output_____ ###Markdown Define Fix URL duplication in image_clean Code ###Code image_clean = image_clean.drop_duplicates(subset=['jpg_url'], keep='last') ###Output _____no_output_____ ###Markdown Test ###Code image_clean.jpg_url.duplicated().value_counts() ###Output _____no_output_____ ###Markdown Define Merge Data Frames for simplicity Code ###Code twitter_archive_master = pd.merge(archive_clean, tweet_clean, on='tweet_id', how = 'inner') twitter_archive_master = pd.merge(twitter_archive_master, image_clean, on='tweet_id') ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_master.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1899 entries, 0 to 1898 Data columns (total 22 columns): tweet_id 1899 non-null int64 timestamp 1899 non-null datetime64[ns] source 1899 non-null object text 1899 non-null object expanded_urls 1899 non-null object rating_numerator 1899 non-null float64 rating_denominator 1899 non-null float64 name 1899 non-null object dog_stage 1899 non-null object retweet_count 1899 non-null int64 favorite_count 1899 non-null int64 jpg_url 1899 non-null object img_num 1899 non-null int64 prediction_1 1899 non-null object p1_conf 1899 non-null int64 p1_dog 1899 non-null bool prediction_2 1899 non-null 
object p2_conf 1899 non-null int64 p2_dog 1899 non-null bool prediction_3 1899 non-null object p3_conf 1899 non-null int64 p3_dog 1899 non-null bool dtypes: bool(3), datetime64[ns](1), float64(2), int64(7), object(9) memory usage: 302.3+ KB ###Markdown Define Change tweet_id to str Code ###Code twitter_archive_master['tweet_id']=twitter_archive_master['tweet_id'].astype(str) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_master.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1899 entries, 0 to 1898 Data columns (total 22 columns): tweet_id 1899 non-null object timestamp 1899 non-null datetime64[ns] source 1899 non-null object text 1899 non-null object expanded_urls 1899 non-null object rating_numerator 1899 non-null float64 rating_denominator 1899 non-null float64 name 1899 non-null object dog_stage 1899 non-null object retweet_count 1899 non-null int64 favorite_count 1899 non-null int64 jpg_url 1899 non-null object img_num 1899 non-null int64 prediction_1 1899 non-null object p1_conf 1899 non-null int64 p1_dog 1899 non-null bool prediction_2 1899 non-null object p2_conf 1899 non-null int64 p2_dog 1899 non-null bool prediction_3 1899 non-null object p3_conf 1899 non-null int64 p3_dog 1899 non-null bool dtypes: bool(3), datetime64[ns](1), float64(2), int64(6), object(10) memory usage: 302.3+ KB ###Markdown Define Find out which is a dog and which is not Code ###Code ps_dogs = ['p1_dog', 'p2_dog', 'p3_dog'] for p in ps_dogs: twitter_archive_master[p] = twitter_archive_master[p].astype(int) # Create a new column to sum the result twitter_archive_master['prediction'] = twitter_archive_master.p1_dog + twitter_archive_master.p2_dog + twitter_archive_master.p3_dog # Replace the number with a defining text string twitter_archive_master['prediction'] = twitter_archive_master['prediction'].replace(3, 'Dog') twitter_archive_master['prediction'] = twitter_archive_master['prediction'].replace(2, 'Maybe a Dog')
twitter_archive_master['prediction'] = twitter_archive_master['prediction'].replace(1, 'Maybe a Dog') twitter_archive_master['prediction'] = twitter_archive_master['prediction'].replace(0, 'Not a Dog') ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_master.prediction.value_counts() ###Output _____no_output_____ ###Markdown Define Merge the breed and confidence in one column each, and drop the extra columns Code ###Code conditions = [(twitter_archive_master['p1_dog'] == True), (twitter_archive_master['p2_dog'] == True), (twitter_archive_master['p3_dog'] == True)] choices_breed = [twitter_archive_master['prediction_1'],twitter_archive_master['prediction_2'],twitter_archive_master['prediction_3']] choices_confidence = [twitter_archive_master['p1_conf'], twitter_archive_master['p2_conf'],twitter_archive_master['p3_conf']] twitter_archive_master['breed'] = np.select(conditions, choices_breed, default = 'none') twitter_archive_master['confidence'] = np.select(conditions, choices_confidence, default = 0) twitter_archive_master= twitter_archive_master.drop(['prediction_1','prediction_2','prediction_3','p1_conf','p2_conf','p3_conf','p1_dog','p2_dog','p3_dog'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_master.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1899 entries, 0 to 1898 Data columns (total 16 columns): tweet_id 1899 non-null object timestamp 1899 non-null datetime64[ns] source 1899 non-null object text 1899 non-null object expanded_urls 1899 non-null object rating_numerator 1899 non-null float64 rating_denominator 1899 non-null float64 name 1899 non-null object dog_stage 1899 non-null object retweet_count 1899 non-null int64 favorite_count 1899 non-null int64 jpg_url 1899 non-null object img_num 1899 non-null int64 prediction 1899 non-null object breed 1899 non-null object confidence 1899 non-null int64 dtypes: datetime64[ns](1), float64(2), int64(4), object(9) memory usage: 252.2+ KB
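The `np.select` pattern above picks, per row, the first prediction whose dog flag is True. A minimal sketch with invented data (the breed strings and the two-prediction frame are illustrative only):

```python
import numpy as np
import pandas as pd

# Invented rows: two candidate predictions per row with dog flags
df = pd.DataFrame({
    'p1': ['golden_retriever', 'paper_towel', 'web_site'],
    'p1_dog': [True, False, False],
    'p2': ['labrador_retriever', 'pug', 'book'],
    'p2_dog': [True, True, False],
})

# Conditions are checked in order; the first True condition wins per row
conditions = [df['p1_dog'], df['p2_dog']]
choices = [df['p1'], df['p2']]
df['breed'] = np.select(conditions, choices, default='none')
print(df['breed'].tolist())
```

Because the conditions are evaluated in order, p1 beats p2 whenever both flags are True, which matches how the master table keeps the most confident dog prediction.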
###Markdown Storing, Analyzing, and Visualizing Data for this Project ###Code #Storing clean data twitter_archive_master.to_csv('twitter_archive_master.csv', index=False, encoding = 'utf-8') twitter_archive_master.rating_numerator.value_counts() twitter_archive_master.describe() #first, find the distribution of dog ratings twitter_archive_master['rating_numerator'].plot(kind = 'hist', bins = 14,color='green', linestyle='dashed'); sns.set_style("darkgrid") plt.xlim(0, 15) plt.ylabel('Number of Tweets', fontsize = 15) plt.xlabel('Rating', fontsize = 15) plt.title('Distribution of Ratings', fontsize = 15); plt.savefig('Distribution rating.png'); ###Output _____no_output_____ ###Markdown The rating distribution is skewed to the left. Most ratings are around 10; as is clear from the data description above, 75% of all ratings are above 10. ###Code #Top 10 favorite tweets twitter_archive_master.sort_values(by = 'favorite_count', ascending = False).head(10) #Top 10 re-tweeted tweets twitter_archive_master.sort_values(by = 'retweet_count', ascending = False).head(10) ###Output _____no_output_____ ###Markdown Most dog stages are not identified, yet the most common stage is pupper, then doggo ###Code plt.pie(twitter_archive_master.query("dog_stage!='none'").dog_stage.value_counts(),shadow=False) plt.legend(twitter_archive_master.query("dog_stage!='none'").dog_stage.value_counts(),labels=["pupper", "doggo", "puppo", "multiple", "floofer"], loc="upper left") plt.title('Most Frequent Dog Stages') plt.tight_layout() plt.savefig('Most frequent Dog.png'); ###Output /opt/conda/lib/python3.6/site-packages/matplotlib/axes/_axes.py:521: UserWarning: You have mixed positional and keyword arguments, some input will be discarded. warnings.warn("You have mixed positional and keyword " ###Markdown The graph above shows that pupper is the top dog stage in tweets, excluding the non-identified dogs ###Code twitter_archive_master[['favorite_count', 'retweet_count']].plot(style ='.', ylim=[0, 50000], figsize=(10,10)) plt.title('Favorites, Retweets relationship', size=15) plt.xlabel('Date', size=12) plt.xticks([], []) plt.ylabel('Count', size=12) plt.legend(ncol=1, loc='upper right') plt.savefig('retweets-favorites-time.png'); ###Output _____no_output_____ ###Markdown The relationship between favorites and retweets shows that people used to favorite tweets more than retweet them, but over time the two counts became similar ###Code twitter_archive_master.plot(x = 'rating_numerator', y = 'favorite_count', style ='.', figsize=(8,8), alpha=.5) plt.title('Ratings v. Favorite Count', size=16) plt.xlabel('Rating', size=12) plt.ylabel('Favorites', size=12); plt.savefig('rating Vs favorite.png'); ###Output _____no_output_____ ###Markdown There is a positive relationship between the dog rating and the favorite count; dogs with higher ratings get more favorites ###Code sns.regplot(x="retweet_count", y="favorite_count", data=twitter_archive_master, scatter_kws={'alpha':0.2}) plt.title('Retweet v.
Favorite Count', size=16) plt.xlabel('Retweets', size=12) plt.ylabel('Favorites', size=12) plt.savefig('retweet-favorite.png'); ###Output _____no_output_____ ###Markdown The graph shows that the more a tweet is retweeted, the more favorites it gets ###Code top_breeds = twitter_archive_master.query("breed!='none'").breed.value_counts()[0:10].sort_values(axis=0, ascending=False) top_breeds.plot(kind = 'barh', color=['steelblue']) plt.title('Top 10 Dog Breeds', size=16) plt.xlabel('Count', size=12) plt.ylabel('Breed', size=12) plt.savefig('top-breeds.png'); ###Output _____no_output_____ ###Markdown The graph above presents the top 10 dog breeds mentioned in tweets ###Code twitter_archive_master['prediction'].value_counts().plot(kind='pie', figsize=(5,5),) plt.title('Dog Image Predictions',fontsize=16); plt.ylabel(''); plt.savefig('dog prediction'); twitter_archive_master.prediction.value_counts() ###Output _____no_output_____ ###Markdown Around 60% of the dogs were identified as dogs by the software, while another 23% might be dogs; the rest could not be identified as dogs ###Code twitter_archive_master.head() twitter_archive_master.name.value_counts()[1:11] ###Output _____no_output_____ ###Markdown Wrangle and Analyze Data **The dataset that was wrangled, analyzed and visualized is the tweet archive of Twitter user [@dog_rates](https://twitter.com/dog_rates), also known as [WeRateDogs](https://en.wikipedia.org/wiki/WeRateDogs). WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog**------------------------------------------------------ Goal**Wrangle WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations**------------------------------------------------------ Project Details:**An Enhanced Twitter Archive has been provided by Udacity to facilitate the wrangling process; the archive contains basic tweet data for all 5000+ of their tweets, but not everything.
One column the archive does contain though: each tweet's text, which was used to extract rating, dog name, and dog "stage" (i.e. doggo, floofer, pupper, and puppo) to make this Twitter archive "enhanced." Of the 5000+ tweets, the archive was filtered for tweets with ratings only (there are 2356).****Additionally, Udacity provided an image predictions file that can classify breeds of dogs. The results: a table full of image predictions (the top three only) alongside each tweet ID, image URL, and the image number that corresponded to the most confident prediction (numbered 1 to 4 since tweets can have up to four images).****It is part of the requirements for this project to gather additional data via the Twitter API [Tweepy](http://www.tweepy.org/) and store the Retweet Count and Favorite Count of the tweets provided in the Enhanced Twitter Archive. The gathering of this data has been performed in this project.** Key Points:**Key points to keep in mind when data wrangling for this project:**>**- We only want original ratings (no retweets) that have images. Though there are 5000+ tweets in the dataset, not all are dog ratings and some are retweets.** >**- Assessing and cleaning the entire dataset completely would require a lot of time, and is not necessary to practice and demonstrate your skills in data wrangling. Therefore, the requirements of this project are only to assess and clean at least 8 quality issues and at least 2 tidiness issues in this dataset.** >**- Cleaning includes merging individual pieces of data according to the rules of tidy data.** >**- The fact that the rating numerators are greater than the denominators does not need to be cleaned. This unique rating system is a big part of the popularity of WeRateDogs.** >**- We do not need to gather the tweets beyond August 1st, 2017.
You can, but note that you won't be able to gather the image predictions for these tweets since you don't have access to the algorithm used.**------------------------------------------------------ Project Tasks**- Data wrangling, which consists of:** * Gathering data * Assessing data * Cleaning data**- Storing, analyzing, and visualizing your wrangled data****- Reporting on:** * Data wrangling efforts * Data analyses and visualizations ------------------------------------------------------ Project Table of Contents> - [Gather](Gather)>> - [Requirements 1: The WeRateDogs Twitter archive](Requirements--1:-The-WeRateDogs-Twitter-archive)>> - [Requirements 2: The Tweet Image Predictions](Requirements--2:-The-Tweet-Image-Predictions)>> - [Requirements 3: Store Tweets JSON Data Using Tweepy](Requirements--3:-Store-Tweets-JSON-Data-Using-Tweepy)> - [Assess](Assess)>> - [Requirements: Detect and document at least (8) quality issues and (2) tidiness issues](Requirements:-Detect-and-document-at-least-8-quality-issues-and-2-tidiness-issues)>>> - [Assessing The WeRateDogs Twitter archive](Assessing-The-WeRateDogs-Twitter-Archive)>>> - [Assessing The Tweet Image Predictions](Assessing-The-Tweet-Image-Predictions)>>> - [Assessing Downloaded Tweets Data using Tweepy](Assessing-Downloaded-Tweets-Data-using-Tweepy)>> - [Assessment Summary](Assessment-Summary)> - [Clean](Clean)>> - [Quality Cleaning](Quality-Cleaning)>> - [Tidiness Cleaning](Tidiness-Cleaning)> - [Storing, Analyzing, and Visualizing](Storing,-Analyzing,-and-Visualizing)>> - [Insight 1 - Highest and Lowest Rated Dog](Insight-1---Highest-and-Lowest-Rated-Dog)>> - [Insight 2 - Top Favorite Count of a Tweet](Insight-2---Top-Favorite-Count-of-a-Tweet)>> - [Insight 3 - Tweet Sources](Insight-3---Tweet-Sources)>> - [Visualization 1 - Top 10 Favoritted Dogs by Favorite Count](Visualization-1---Top-10-Favoritted-Dogs-by-Favorite-Count)>> - [Visualization 2 - Successful Breed Prediction for Algorithm 
P1](Visualization-2---Successful-Breed-Prediction-for-Algorithm-P1)> - [References](References) ###Code import pandas as pd import numpy as np import requests import tweepy import json from tweepy.parsers import JSONParser import time import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns import re ###Output _____no_output_____ ###Markdown Gather Requirements 1: The WeRateDogs Twitter archive **`Reading the twitter provided archive`** ###Code # read the twitter archive in twdf dataset twdf = pd.read_csv('twitter-archive-enhanced.csv') twdf.head() ###Output _____no_output_____ ###Markdown Requirements 2: The Tweet Image Predictions **`Gather the tweet image predictions which is hosted in Udacity's servers`** ###Code #folder_name = 'image_predictions' #if not os.path.exists(folder_name): # os.makedirs(folder_name) url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) # check response code response ###Output _____no_output_____ ###Markdown **`Save the provided image_predictions file`** ###Code with open(url.split('/')[-1], mode='wb') as file: file.write(response.content) ###Output _____no_output_____ ###Markdown **`Read the image-predictions.tsv file`** ###Code image_predictions = pd.read_csv('image-predictions.tsv', sep='\t') image_predictions.head() ###Output _____no_output_____ ###Markdown Requirements 3: Store Tweets JSON Data Using Tweepy **`Authorization`** [Useful Guide](https://www.digitalocean.com/community/tutorials/how-to-authenticate-a-python-application-with-twitter-using-tweepy-on-ubuntu-14-04) ###Code consumer_key = '##############' consumer_secret = '###########' access_token = '##############' access_secret = '#############' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) #api = tweepy.API(auth) api = tweepy.API(auth, parser=JSONParser(), wait_on_rate_limit=True) ###Output 
_____no_output_____ ###Markdown **`Test Reading`** ###Code tweet = api.get_status(887473957103951883,tweet_mode='extended') print(tweet['id']) print(tweet['retweet_count']) print(tweet['favorite_count']) df_list = [] errors = [] tweets = twdf['tweet_id'] now = time.time() for idx, t in enumerate(tweets): try: start = time.time() tweet = api.get_status(t,tweet_mode='extended') end = time.time() print("Reading Tweet {}/{} ({}%) Completed. Elapsed Time = {}".format((idx+1),len(tweets), round(((idx+1)/len(tweets)) * 100,1), round((end - start),2))) retweet_count = tweet['retweet_count'] favorite_count = tweet['favorite_count'] #followers_count = tweet['user']['followers_count'] df_list.append({'tweet_id':int(t), 'retweet_count':int(retweet_count), 'favorite_count':int(favorite_count) #,'followers_count':int(followers_count) }) except Exception as e: print(str(t) + str(e)) errors.append(t) then = time.time() diff = round(then - now) minutes = diff // 60 seconds = diff % 60 print("Reading Process Completed. Elapsed Time is {}:{}".format(minutes, seconds)) ###Output Reading Tweet 1/2356 (0.0%) Completed. Elapsed Time = 1.01 Reading Tweet 2/2356 (0.1%) Completed. Elapsed Time = 1.05 Reading Tweet 3/2356 (0.1%) Completed. Elapsed Time = 1.11 Reading Tweet 4/2356 (0.2%) Completed. Elapsed Time = 1.09 Reading Tweet 5/2356 (0.2%) Completed. Elapsed Time = 1.1 Reading Tweet 6/2356 (0.3%) Completed. Elapsed Time = 1.17 Reading Tweet 7/2356 (0.3%) Completed. Elapsed Time = 1.04 Reading Tweet 8/2356 (0.3%) Completed. Elapsed Time = 1.0 Reading Tweet 9/2356 (0.4%) Completed. Elapsed Time = 1.0 Reading Tweet 10/2356 (0.4%) Completed. Elapsed Time = 0.99 Reading Tweet 11/2356 (0.5%) Completed. Elapsed Time = 0.98 Reading Tweet 12/2356 (0.5%) Completed. Elapsed Time = 0.99 Reading Tweet 13/2356 (0.6%) Completed. Elapsed Time = 1.08 Reading Tweet 14/2356 (0.6%) Completed. Elapsed Time = 1.01 Reading Tweet 15/2356 (0.6%) Completed. 
Elapsed Time = 1.03 Reading Tweet 16/2356 (0.7%) Completed. Elapsed Time = 1.01 Reading Tweet 17/2356 (0.7%) Completed. Elapsed Time = 1.04 Reading Tweet 18/2356 (0.8%) Completed. Elapsed Time = 1.0 Reading Tweet 19/2356 (0.8%) Completed. Elapsed Time = 1.0 888202515573088257[{'code': 144, 'message': 'No status found with that ID.'}] Reading Tweet 21/2356 (0.9%) Completed. Elapsed Time = 0.96 Reading Tweet 22/2356 (0.9%) Completed. Elapsed Time = 1.01 Reading Tweet 23/2356 (1.0%) Completed. Elapsed Time = 1.04 Reading Tweet 24/2356 (1.0%) Completed. Elapsed Time = 1.04 Reading Tweet 25/2356 (1.1%) Completed. Elapsed Time = 0.99 Reading Tweet 26/2356 (1.1%) Completed. Elapsed Time = 1.01 Reading Tweet 27/2356 (1.1%) Completed. Elapsed Time = 1.12 Reading Tweet 28/2356 (1.2%) Completed. Elapsed Time = 1.08 Reading Tweet 29/2356 (1.2%) Completed. Elapsed Time = 1.0 Reading Tweet 30/2356 (1.3%) Completed. Elapsed Time = 1.0 Reading Tweet 31/2356 (1.3%) Completed. Elapsed Time = 1.0 Reading Tweet 32/2356 (1.4%) Completed. Elapsed Time = 1.0 Reading Tweet 33/2356 (1.4%) Completed. Elapsed Time = 1.04 Reading Tweet 34/2356 (1.4%) Completed. Elapsed Time = 0.98 Reading Tweet 35/2356 (1.5%) Completed. Elapsed Time = 1.01 Reading Tweet 36/2356 (1.5%) Completed. Elapsed Time = 1.04 Reading Tweet 37/2356 (1.6%) Completed. Elapsed Time = 1.0 Reading Tweet 38/2356 (1.6%) Completed. Elapsed Time = 0.96 Reading Tweet 39/2356 (1.7%) Completed. Elapsed Time = 1.0 Reading Tweet 40/2356 (1.7%) Completed. Elapsed Time = 1.03 Reading Tweet 41/2356 (1.7%) Completed. Elapsed Time = 1.0 Reading Tweet 42/2356 (1.8%) Completed. Elapsed Time = 1.04 Reading Tweet 43/2356 (1.8%) Completed. Elapsed Time = 1.09 Reading Tweet 44/2356 (1.9%) Completed. Elapsed Time = 0.98 Reading Tweet 45/2356 (1.9%) Completed. Elapsed Time = 0.97 Reading Tweet 46/2356 (2.0%) Completed. Elapsed Time = 1.04 Reading Tweet 47/2356 (2.0%) Completed. Elapsed Time = 0.97 Reading Tweet 48/2356 (2.0%) Completed. 
Elapsed Time = 1.0 Reading Tweet 49/2356 (2.1%) Completed. Elapsed Time = 1.02 Reading Tweet 50/2356 (2.1%) Completed. Elapsed Time = 1.01 Reading Tweet 51/2356 (2.2%) Completed. Elapsed Time = 0.96 Reading Tweet 52/2356 (2.2%) Completed. Elapsed Time = 1.0 Reading Tweet 53/2356 (2.2%) Completed. Elapsed Time = 1.05 Reading Tweet 54/2356 (2.3%) Completed. Elapsed Time = 0.98 Reading Tweet 55/2356 (2.3%) Completed. Elapsed Time = 1.01 Reading Tweet 56/2356 (2.4%) Completed. Elapsed Time = 1.0 Reading Tweet 57/2356 (2.4%) Completed. Elapsed Time = 1.0 Reading Tweet 58/2356 (2.5%) Completed. Elapsed Time = 0.95 Reading Tweet 59/2356 (2.5%) Completed. Elapsed Time = 1.04 Reading Tweet 60/2356 (2.5%) Completed. Elapsed Time = 1.0 Reading Tweet 61/2356 (2.6%) Completed. Elapsed Time = 1.03 Reading Tweet 62/2356 (2.6%) Completed. Elapsed Time = 1.01 Reading Tweet 63/2356 (2.7%) Completed. Elapsed Time = 1.03 Reading Tweet 64/2356 (2.7%) Completed. Elapsed Time = 0.99 Reading Tweet 65/2356 (2.8%) Completed. Elapsed Time = 0.92 Reading Tweet 66/2356 (2.8%) Completed. Elapsed Time = 0.98 Reading Tweet 67/2356 (2.8%) Completed. Elapsed Time = 1.04 Reading Tweet 68/2356 (2.9%) Completed. Elapsed Time = 1.0 Reading Tweet 69/2356 (2.9%) Completed. Elapsed Time = 0.98 Reading Tweet 70/2356 (3.0%) Completed. Elapsed Time = 0.98 Reading Tweet 71/2356 (3.0%) Completed. Elapsed Time = 0.99 Reading Tweet 72/2356 (3.1%) Completed. Elapsed Time = 0.98 Reading Tweet 73/2356 (3.1%) Completed. Elapsed Time = 0.94 Reading Tweet 74/2356 (3.1%) Completed. Elapsed Time = 0.96 Reading Tweet 75/2356 (3.2%) Completed. Elapsed Time = 1.01 Reading Tweet 76/2356 (3.2%) Completed. Elapsed Time = 0.95 Reading Tweet 77/2356 (3.3%) Completed. Elapsed Time = 1.08 Reading Tweet 78/2356 (3.3%) Completed. Elapsed Time = 1.04 Reading Tweet 79/2356 (3.4%) Completed. Elapsed Time = 0.96 Reading Tweet 80/2356 (3.4%) Completed. Elapsed Time = 0.98 Reading Tweet 81/2356 (3.4%) Completed. 
Elapsed Time = 1.0 Reading Tweet 82/2356 (3.5%) Completed. Elapsed Time = 0.96 Reading Tweet 83/2356 (3.5%) Completed. Elapsed Time = 1.01 Reading Tweet 84/2356 (3.6%) Completed. Elapsed Time = 0.97 Reading Tweet 85/2356 (3.6%) Completed. Elapsed Time = 0.95 Reading Tweet 86/2356 (3.7%) Completed. Elapsed Time = 0.96 Reading Tweet 87/2356 (3.7%) Completed. Elapsed Time = 1.06 Reading Tweet 88/2356 (3.7%) Completed. Elapsed Time = 1.05 Reading Tweet 89/2356 (3.8%) Completed. Elapsed Time = 1.03 Reading Tweet 90/2356 (3.8%) Completed. Elapsed Time = 0.96 Reading Tweet 91/2356 (3.9%) Completed. Elapsed Time = 1.02 Reading Tweet 92/2356 (3.9%) Completed. Elapsed Time = 1.0 Reading Tweet 93/2356 (3.9%) Completed. Elapsed Time = 0.95 Reading Tweet 94/2356 (4.0%) Completed. Elapsed Time = 1.04 Reading Tweet 95/2356 (4.0%) Completed. Elapsed Time = 1.0 873697596434513921[{'code': 144, 'message': 'No status found with that ID.'}] Reading Tweet 97/2356 (4.1%) Completed. Elapsed Time = 0.95 Reading Tweet 98/2356 (4.2%) Completed. Elapsed Time = 1.05 Reading Tweet 99/2356 (4.2%) Completed. Elapsed Time = 1.0 Reading Tweet 100/2356 (4.2%) Completed. Elapsed Time = 1.12 Reading Tweet 101/2356 (4.3%) Completed. Elapsed Time = 1.0 Reading Tweet 102/2356 (4.3%) Completed. Elapsed Time = 0.98 Reading Tweet 103/2356 (4.4%) Completed. Elapsed Time = 0.98 Reading Tweet 104/2356 (4.4%) Completed. Elapsed Time = 0.96 Reading Tweet 105/2356 (4.5%) Completed. Elapsed Time = 1.01 Reading Tweet 106/2356 (4.5%) Completed. Elapsed Time = 0.96 Reading Tweet 107/2356 (4.5%) Completed. Elapsed Time = 0.99 Reading Tweet 108/2356 (4.6%) Completed. Elapsed Time = 0.96 Reading Tweet 109/2356 (4.6%) Completed. Elapsed Time = 1.0 Reading Tweet 110/2356 (4.7%) Completed. Elapsed Time = 0.97 Reading Tweet 111/2356 (4.7%) Completed. Elapsed Time = 0.98 Reading Tweet 112/2356 (4.8%) Completed. Elapsed Time = 0.96 Reading Tweet 113/2356 (4.8%) Completed. 
Elapsed Time = 1.0 Reading Tweet 114/2356 (4.8%) Completed. Elapsed Time = 1.0 Reading Tweet 115/2356 (4.9%) Completed. Elapsed Time = 1.04 Reading Tweet 116/2356 (4.9%) Completed. Elapsed Time = 0.99 Reading Tweet 117/2356 (5.0%) Completed. Elapsed Time = 1.06 Reading Tweet 118/2356 (5.0%) Completed. Elapsed Time = 1.0 869988702071779329[{'code': 144, 'message': 'No status found with that ID.'}] Reading Tweet 120/2356 (5.1%) Completed. Elapsed Time = 0.98 Reading Tweet 121/2356 (5.1%) Completed. Elapsed Time = 1.04 Reading Tweet 122/2356 (5.2%) Completed. Elapsed Time = 0.96 Reading Tweet 123/2356 (5.2%) Completed. Elapsed Time = 1.03 Reading Tweet 124/2356 (5.3%) Completed. Elapsed Time = 1.04 Reading Tweet 125/2356 (5.3%) Completed. Elapsed Time = 1.04 Reading Tweet 126/2356 (5.3%) Completed. Elapsed Time = 1.12 Reading Tweet 127/2356 (5.4%) Completed. Elapsed Time = 1.37 Reading Tweet 128/2356 (5.4%) Completed. Elapsed Time = 0.99 Reading Tweet 129/2356 (5.5%) Completed. Elapsed Time = 1.0 Reading Tweet 130/2356 (5.5%) Completed. Elapsed Time = 1.03 Reading Tweet 131/2356 (5.6%) Completed. Elapsed Time = 1.11 Reading Tweet 132/2356 (5.6%) Completed. Elapsed Time = 1.03 866816280283807744[{'code': 144, 'message': 'No status found with that ID.'}] Reading Tweet 134/2356 (5.7%) Completed. Elapsed Time = 1.04 Reading Tweet 135/2356 (5.7%) Completed. Elapsed Time = 1.08 Reading Tweet 136/2356 (5.8%) Completed. Elapsed Time = 1.2 ###Markdown **`Erroneous Tweet IDs `** ###Code errors ###Output _____no_output_____ ###Markdown **`Re-trying Reading the Erroneous Tweet IDs `** ###Code for idx,t in enumerate(errors): try: start = time.time() tweet = api.get_status(t,tweet_mode='extended') end = time.time() print("Reading Tweet {}/{} Completed. 
Elapsed Time = {}".format((idx+1),len(tweets), round((end - start),2))) retweet_count = tweet['retweet_count'] favorite_count = tweet['favorite_count'] #followers_count = tweet['user']['followers_count'] df_list.append({'tweet_id':int(t), 'retweet_count':int(retweet_count), 'favorite_count':int(favorite_count) #,'followers_count':int(followers_count) }) except Exception as e: print(str(t) + str(e)) #errors.append(t) ###Output 888202515573088257[{'code': 144, 'message': 'No status found with that ID.'}] 873697596434513921[{'code': 144, 'message': 'No status found with that ID.'}] 869988702071779329[{'code': 144, 'message': 'No status found with that ID.'}] 866816280283807744[{'code': 144, 'message': 'No status found with that ID.'}] 861769973181624320[{'code': 144, 'message': 'No status found with that ID.'}] 842892208864923648[{'code': 144, 'message': 'No status found with that ID.'}] 837012587749474308[{'code': 34, 'message': 'Sorry, that page does not exist.'}] 827228250799742977[{'code': 144, 'message': 'No status found with that ID.'}] 802247111496568832[{'code': 144, 'message': 'No status found with that ID.'}] 775096608509886464[{'code': 144, 'message': 'No status found with that ID.'}] Reading Tweet 11/2356 Completed. Elapsed Time = 0.66 Reading Tweet 12/2356 Completed. 
Elapsed Time = 0.69 ###Markdown **`Save Downloaded Tweets Data into tweet_json.txt`** ###Code df_json = pd.DataFrame(df_list, columns = ['tweet_id', 'retweet_count', 'favorite_count']) #, 'followers_count']) df_json.to_csv('tweet_json.txt', encoding = 'utf-8') df_json = pd.read_csv('tweet_json.txt', encoding = 'utf-8') #df_json.set_index('tweet_id', inplace = True) df_json.head() ###Output _____no_output_____ ###Markdown Assess **Subjects of assessment:****`1- The WeRateDogs Twitter archive (twdf)`****`2- The Tweet Image Predictions (image_predictions)`****`3- Downloaded Tweets Data using Tweepy (df_json)`** **Assessment Plan:****`To spot quality issues with the subjects, I will run/check the following in every dataset`**```* Dataset info: - Check column names - Validate data types - Number of data entries per column* Duplicates check: - Check for duplicated rows * View (10) samples: - Check validity and quality of entries* Value Counts: - Check validity and quality of entries``` Requirements: Detect and document at least 8 quality issues and 2 tidiness issues Assessing The WeRateDogs Twitter Archive ###Code twdf.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown _**Observations:**_- twdf dataset contains 2356 rows and 17 columns- There are null 
values in some of the columns- timestamp datatype is incorrect- tweet_id datatype is integer while it should be converted to string ###Code twdf[twdf.duplicated()] ###Output _____no_output_____ ###Markdown _**No observations of duplicates**_ ###Code twdf.sample(10) ###Output _____no_output_____ ###Markdown _**Observations:**_- There are missing values (nulls) in these columns: * in_reply_to_status_id * in_reply_to_user_id * retweeted_status_id * retweeted_status_user_id * retweeted_status_timestamp - Invalid names of dogs * none is given for missing dog name * (a) is not a name, probably a missing or incorrect value - Missing values in the dog stage columns, which were filled with 'None'- Unnecessary characters in timestamp (+0000) ###Code twdf.rating_numerator.value_counts().sort_values() ###Output _____no_output_____ ###Markdown _**Observations:**_- Rating numerator values of 0 or 1 might have been mistyped. More investigation below ###Code # list tweets with 0 or 1 numerator twdf[(twdf['rating_numerator'] == 0) | (twdf['rating_numerator'] == 1)] ###Output _____no_output_____ ###Markdown _**Observations:**_Tweet ID|Tweet Text|Observation|:--------|:----------|:-----------|[835152434251116546](https://twitter.com/dog_rates/status/835152434251116546)|When you're so blinded by your systematic plagiarism that you forget what day it is. 0/10 | Not related tweet|[746906459439529985](https://twitter.com/dog_rates/status/746906459439529985)|PUPDATE: can't see any. Even if I could, I couldn't reach them to pet. 0/10 much disappointment| Not related tweet|[798576900688019456](https://twitter.com/dog_rates/status/798576900688019456)|Not familiar with this breed. No tail (weird). Only 2 legs. Doesn't bark. Surprisingly quick. Shits eggs. 1/10 | Not related tweet|[696490539101908992](https://twitter.com/dog_rates/status/696490539101908992)|After reading the comments I may have overestimated this pup. Downgraded to a 1/10. 
Please forgive me|Valid rating |[675153376133427200](https://twitter.com/dog_rates/status/675153376133427200)|What kind of person sends in a picture without a dog in it? 1/10 just because that's a nice table |Not related tweet |[673716320723169284](https://twitter.com/dog_rates/status/673716320723169284)|The millennials have spoken and we've decided to immediately demote to a 1/10. Thank you | Not related tweet|[671550332464455680](https://twitter.com/dog_rates/status/671550332464455680)| After 22 minutes of careful deliberation this dog is being demoted to a 1/10. The longer you look at him the more terrifying he becomes| Not related tweet|[670783437142401025](https://twitter.com/dog_rates/status/670783437142401025)| Flamboyant pup here. Probably poisonous. Won't eat kibble. Doesn't bark. Slow af. Petting doesn't look fun. 1/10| Not related tweet |[667549055577362432](https://twitter.com/dog_rates/status/667549055577362432)|Never seen dog like this. Breathes heavy. Tilts head in a pattern. No bark. Shitty at fetch. Not even cordless. 1/10 | Not related tweet |[666287406224695296](https://twitter.com/dog_rates/status/666287406224695296)|This is an Albanian 3 1/2 legged Episcopalian. Loves well-polished hardwood flooring. Penis on the collar. 9/10 | Change to 9/10 rating|[666104133288665088](https://twitter.com/dog_rates/status/666104133288665088)|Not familiar with this breed. No tail (weird). Only 2 legs. Doesn't bark. Surprisingly quick. Shits eggs. 1/10 |Not related tweet |(10) Tweets to remove:* 835152434251116546* 798576900688019456* 746906459439529985* 675153376133427200* 673716320723169284* 671550332464455680* 670783437142401025* 667549055577362432* 666287406224695296* 666104133288665088 ###Code twdf[twdf['tweet_id']==786709082849828864] ###Output _____no_output_____ ###Markdown _**Observations:**_- The tweet 786709082849828864 rating is not correct. Tweet text "This is Logan, the Chow who lived. He solemnly swears he's up to lots of good. 
H*ckin magical af 9.75/10" ###Code twdf.rating_denominator.value_counts().sort_values() ###Output _____no_output_____ ###Markdown _**Observations:**_- Rating denominator value should always be 10; there are many incorrect denominators ###Code twdf.name.value_counts() ###Output _____no_output_____ ###Markdown _**Observations:**_- Incorrect dog names exist in the name column like none, a, the Assessing The Tweet Image Predictions ###Code image_predictions.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown _**Observations:**_- image_predictions dataset contains 2075 rows and 12 columns ###Code image_predictions[image_predictions.duplicated()] ###Output _____no_output_____ ###Markdown _**No observations of duplicates**_ ###Code image_predictions.sample(10) ###Output _____no_output_____ ###Markdown _**No observations in the sample.**_ ###Code image_predictions.img_num.value_counts() ###Output _____no_output_____ ###Markdown _**No observations in the count.**_ Assessing Downloaded Tweets Data using Tweepy ###Code df_json.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2346 entries, 0 to 2345 Data columns (total 4 columns): Unnamed: 0 2346 non-null int64 tweet_id 2346 non-null int64 retweet_count 2346 non-null int64 favorite_count 2346 non-null int64 dtypes: int64(4) memory usage: 73.4 KB ###Markdown _**Observations:**_- df_json dataset contains 2346 rows and 4 columns- Total tweets in twdf is 2356 while here we have 2346 downloaded.- (Unnamed: 0) column is invalid- (9) missing 
tweets that could not be read: * 888202515573088257 * 873697596434513921 * 869988702071779329 * 866816280283807744 * 861769973181624320 * 842892208864923648 * 827228250799742977 * 802247111496568832 * 775096608509886464 ###Code df_json[df_json.duplicated()] ###Output _____no_output_____ ###Markdown _**No observations in the duplicates.**_ -------------------------- Assessment Summary Quality:No. | Issue | Column(s) | Dataset----- |:------|:--------|:-------1 | none is given for missing dog name | name | twdf2 | Incorrect dog names exist in the name column like none, a, the, an, my | name | twdf3 | timestamp datatype is incorrect | timestamp | twdf4 | Unnecessary characters (+0000) | timestamp | twdf5 | Missing values in the dog stage columns, which were filled with 'None' | doggo, floofer, pupper, puppo | twdf6 | Remove retweets | in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp | twdf7 | Rating numerator values of 0 or 1 might have been mistyped (Remove 10 tweets) | rating_numerator | twdf8 | Rating denominator value should always be 10; there are many incorrect denominators | rating_denominator | twdf9 | (9) missing tweets that could not be read | tweet_id | df_json10 | tweet_id datatype needs to be converted to str | tweet_id | twdf11 | Rating numerator is not reflecting actual rating, need to extract rating from tweet text | rating_numerator | twdf Tidiness:No. 
| Issue | Action | Dataset----- |:------|:--------|:-------1 | Dog stage is split into 4 columns | Melt the 4 stage columns into one column | twdf2 | 'Unnamed: 0' column is not needed | Drop the 'Unnamed: 0' column | df_json3 | Data can be structured into 1 dataset of tweets archive | merge the 3 datasets (enhanced tweet archive, image_predictions, and downloaded JSON data) into 1 and remove unneeded columns | twdf, image_predictions, df_json Clean Quality Cleaning----------------------------------------The cleaning process will be performed in the copied dataframes below ###Code # Make a copy of the datasets before cleaning twdf_clean = twdf.copy() image_predictions_clean = image_predictions.copy() df_json_clean = df_json.copy() # I am saving the datasets to csvs to avoid re-downloading tweets which takes about 40 minutes twdf_clean.to_csv('twdf.clean.csv') image_predictions_clean.to_csv('image_predictions.clean.csv') df_json_clean.to_csv('df_json.clean.csv') df_errors = pd.DataFrame(errors, columns = ['tweet_id']) df_errors.to_csv('df_errors.csv') # Whenever I work on the project, I start from here twdf_clean = pd.read_csv('twdf.clean.csv') image_predictions_clean = pd.read_csv('image_predictions.clean.csv') df_json_clean = pd.read_csv('df_json.clean.csv') df_errors = pd.read_csv('df_errors.csv') ###Output _____no_output_____ ###Markdown Issue 1 & 2 DefineReplace invalid dog names (none, a, the, an, my) with null Code ###Code twdf_clean['name'].replace('the', np.nan, inplace=True) twdf_clean['name'].replace('a', np.nan, inplace=True) twdf_clean['name'].replace('None', np.nan, inplace=True) twdf_clean['name'].replace('an', np.nan, inplace=True) twdf_clean['name'].replace('my', np.nan, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code twdf_clean[ (twdf_clean['name'] == "the") | (twdf_clean['name'] == "a") | (twdf_clean['name'] == "None") | (twdf_clean['name'] == "an") | (twdf_clean['name'] == "my") ] twdf_clean.sample(5) ###Output _____no_output_____ 
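The five `replace` calls above can also be collapsed into a single call, since `Series.replace` accepts a list of values to replace. A minimal sketch on a toy frame (the sample names are invented for illustration):

```python
import numpy as np
import pandas as pd

# Toy stand-in for twdf_clean; only the 'name' column matters here.
df = pd.DataFrame({"name": ["Logan", "a", "None", "the", "an", "my", "Luna"]})

# replace() accepts a list, so all invalid placeholder names go in one call.
invalid_names = ["the", "a", "None", "an", "my"]
df["name"] = df["name"].replace(invalid_names, np.nan)

print(df["name"].isnull().sum())  # 5
```

Either form gives the same result; the list form just avoids repeating the call per value.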
###Markdown Issue 3 & 4 DefineChange the timestamp datatype from object to timestamp Code ###Code twdf_clean['timestamp'] = pd.to_datetime(twdf_clean['timestamp']) ###Output _____no_output_____ ###Markdown Test ###Code twdf_clean.head(3) ###Output _____no_output_____ ###Markdown Issue 5 DefineReplace None values for dog stages with np.nan Code ###Code twdf_clean['doggo'].replace('None', np.nan, inplace=True) twdf_clean['floofer'].replace('None', np.nan, inplace=True) twdf_clean['pupper'].replace('None', np.nan, inplace=True) twdf_clean['puppo'].replace('None', np.nan, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code twdf_clean.head(5) ###Output _____no_output_____ ###Markdown Issue 6 DefineAs we only want original ratings (no retweets) that have images, I will remove the rows that have tweet IDs in the (in_reply_to_status_id and retweeted_status_id) columns, and then drop the reply-related columns.According to [Twitter Dev](https://developer.twitter.com/en/docs/tweets/data-dictionary/overview/tweet-object), **in_reply_to_status_id**: If the represented Tweet is a reply, this field will contain the integer representation of the original Tweet’s ID.It makes sense to have null values in the below columns since they are related to replies.* in_reply_to_status_id* in_reply_to_user_id Code ###Code # Recreate dataset twdf_clean excluding replies and retweets twdf_clean = twdf_clean[twdf_clean['in_reply_to_status_id'].isnull()] twdf_clean = twdf_clean[twdf_clean['retweeted_status_id'].isnull()] ###Output _____no_output_____ ###Markdown Test ###Code twdf_clean[twdf_clean['in_reply_to_status_id'].notnull()] twdf_clean[twdf_clean['retweeted_status_id'].notnull()] ###Output _____no_output_____ ###Markdown DefineDrop the reply-related columns* in_reply_to_status_id* in_reply_to_user_id Code ###Code twdf_clean.drop(['in_reply_to_status_id'], axis=1, inplace = True) twdf_clean.drop(['in_reply_to_user_id'], axis=1, inplace = True) ###Output _____no_output_____ 
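The two `drop` calls above are equivalent to a single call using the `columns` keyword. A minimal sketch on a toy frame (the values are placeholders):

```python
import pandas as pd

# Toy stand-in for twdf_clean with just the columns that matter here.
df = pd.DataFrame({
    "tweet_id": [1, 2],
    "in_reply_to_status_id": [None, None],
    "in_reply_to_user_id": [None, None],
})

# drop(columns=[...]) removes both reply-related columns in one pass.
df = df.drop(columns=["in_reply_to_status_id", "in_reply_to_user_id"])

print(list(df.columns))  # ['tweet_id']
```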
###Markdown Test ###Code twdf_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 16 columns): Unnamed: 0 2097 non-null int64 tweet_id 2097 non-null int64 timestamp 2097 non-null datetime64[ns] source 2097 non-null object text 2097 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2094 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 1424 non-null object doggo 83 non-null object floofer 10 non-null object pupper 230 non-null object puppo 24 non-null object dtypes: datetime64[ns](1), float64(2), int64(4), object(9) memory usage: 278.5+ KB ###Markdown So far, the clean dataset contains 2097 tweets Issue 7 DefineDelete the (10) Tweets that are not related to dog ratings: (Note: removing replies resulted in shrinking this list to 7 tweets only). The tweets to be deleted are:* 835152434251116546* 798576900688019456* 675153376133427200* 670783437142401025* 667549055577362432* 666287406224695296* 666104133288665088 Code ###Code # Store the list of tweets that are subject to removal remove_list = twdf_clean[(twdf_clean['rating_numerator'] == 0) | (twdf_clean['rating_numerator'] == 1)].tweet_id for i in remove_list: print(i) # Recreate dataset twdf_clean excluding invalid rating_numerator values twdf_clean = twdf_clean.loc[~twdf_clean['tweet_id'].isin(remove_list),:] ###Output _____no_output_____ ###Markdown Test ###Code twdf_clean[(twdf_clean['rating_numerator'] == 0) | (twdf_clean['rating_numerator'] == 1)].tweet_id ###Output _____no_output_____ ###Markdown Issue 8 DefineSet the value of the rating denominator to 10 across the dataset Code ###Code # Locate all values with rating_denominator other than 10 and replace them with 10 twdf_clean.loc[twdf_clean['rating_denominator'] != 10, 'rating_denominator'] = 10 ###Output _____no_output_____ ###Markdown Test 
###Code twdf_clean['rating_denominator'].value_counts() ###Output _____no_output_____ ###Markdown Issue 9 DefineRemove the (9) missing tweets that could not be read (stored in df_errors) Code ###Code # Store the tweet IDs to be removed in remove_list remove_list = df_errors.tweet_id # Recreate dataset twdf_clean excluding tweets that could not be read twdf_clean = twdf_clean.loc[~twdf_clean['tweet_id'].isin(remove_list),:] ###Output _____no_output_____ ###Markdown Test ###Code twdf_clean.loc[twdf_clean['tweet_id'].isin(remove_list),:] twdf_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2089 entries, 0 to 2355 Data columns (total 16 columns): Unnamed: 0 2089 non-null int64 tweet_id 2089 non-null int64 timestamp 2089 non-null datetime64[ns] source 2089 non-null object text 2089 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2086 non-null object rating_numerator 2089 non-null int64 rating_denominator 2089 non-null int64 name 1423 non-null object doggo 83 non-null object floofer 10 non-null object pupper 230 non-null object puppo 24 non-null object dtypes: datetime64[ns](1), float64(2), int64(4), object(9) memory usage: 277.4+ KB ###Markdown Issue 10 DefineConvert tweet_id datatype to string Code ###Code twdf_clean.tweet_id = twdf_clean.tweet_id.astype(str) image_predictions_clean.tweet_id = image_predictions_clean.tweet_id.astype(str) df_json_clean.tweet_id = df_json_clean.tweet_id.astype(str) ###Output _____no_output_____ ###Markdown Test ###Code twdf_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2089 entries, 0 to 2355 Data columns (total 16 columns): Unnamed: 0 2089 non-null int64 tweet_id 2089 non-null object timestamp 2089 non-null datetime64[ns] source 2089 non-null object text 2089 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 
retweeted_status_timestamp 0 non-null object expanded_urls 2086 non-null object rating_numerator 2089 non-null int64 rating_denominator 2089 non-null int64 name 1423 non-null object doggo 83 non-null object floofer 10 non-null object pupper 230 non-null object puppo 24 non-null object dtypes: datetime64[ns](1), float64(2), int64(3), object(10) memory usage: 277.4+ KB ###Markdown Issue 11 DefineFix rating numerator by extracting ratings from tweet text, as some ratings contain decimals, which means the datatype of the rating numerator will also be converted to float Code ###Code # first convert the rating_numerator datatype to float twdf_clean.rating_numerator = twdf_clean.rating_numerator.astype(float) ###Output _____no_output_____ ###Markdown Test ###Code twdf_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2089 entries, 0 to 2355 Data columns (total 16 columns): Unnamed: 0 2089 non-null int64 tweet_id 2089 non-null object timestamp 2089 non-null datetime64[ns] source 2089 non-null object text 2089 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2086 non-null object rating_numerator 2089 non-null float64 rating_denominator 2089 non-null int64 name 1423 non-null object doggo 83 non-null object floofer 10 non-null object pupper 230 non-null object puppo 24 non-null object dtypes: datetime64[ns](1), float64(3), int64(2), object(10) memory usage: 277.4+ KB ###Markdown DefineExtract rating numerator values from tweet text and populate the new clean rating numerators Code ###Code # Extract rating and populate numerator #twdf_clean['rating_numerator'] = twdf_clean.text.str.extract('(\d\d*\.?\d\d*/)', expand=False) twdf_clean['rating_numerator'] = twdf_clean.text.str.extract('(\d?\d*\.?\d\d*/)', expand=False) ###Output _____no_output_____ ###Markdown Test ###Code twdf_clean[twdf_clean['tweet_id']=='786709082849828864'] 
twdf_clean[twdf_clean['tweet_id']=='678424312106393600'] twdf_clean[twdf_clean['tweet_id']=='772114945936949249'] ###Output _____no_output_____ ###Markdown DefineClean up the values starting with '.' from the rating_numerator values Code ###Code test3 = twdf_clean.rating_numerator.str.extract('([^\W]\d*.\d*)', expand=False) test3.to_csv('t.txt') twdf_clean['rating_numerator'] = twdf_clean.rating_numerator.str.extract('([^\W]\d*.\d*)', expand=False) ###Output _____no_output_____ ###Markdown Test ###Code twdf_clean[twdf_clean['tweet_id']=='772114945936949249'] ###Output _____no_output_____ ###Markdown DefineClean up the / character from the rating_numerator values Code ###Code twdf_clean['rating_numerator'].replace('/','',regex=True, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code twdf_clean[twdf_clean['tweet_id']=='786709082849828864'] twdf_clean[twdf_clean['tweet_id']=='678424312106393600'] ###Output _____no_output_____ ###Markdown Tidiness Cleaning Issue 1 DefineJoin the 4 dog stage columns into a single (stage) column.
[Source](https://stackoverflow.com/questions/42384118/how-to-get-distinct-value-while-using-apply-join-in-pandas-dataframe) Code ###Code # replace the null values with empty strings to prepare the data for join twdf_clean['doggo'].replace(np.nan,'', inplace=True) twdf_clean['floofer'].replace(np.nan,'', inplace=True) twdf_clean['pupper'].replace(np.nan,'', inplace=True) twdf_clean['puppo'].replace(np.nan, '', inplace=True) # join the stage columns; empty strings contribute nothing, and joining without set() keeps column order deterministic twdf_clean['stage'] = twdf_clean[['doggo', 'floofer','pupper','puppo']].apply(lambda x: ''.join(x.astype(str)), axis=1) ###Output _____no_output_____ ###Markdown TestNow the stage column contains the dog stage ###Code twdf_clean.sample(5) twdf_clean['stage'].value_counts() ###Output _____no_output_____ ###Markdown DefineSome dogs have been assigned 2 stages; separate the stages with - Code ###Code twdf_clean['stage'].replace('doggopupper', 'doggo-pupper', inplace=True) twdf_clean['stage'].replace('doggopuppo', 'doggo-puppo', inplace=True) twdf_clean['stage'].replace('doggofloofer', 'doggo-floofer', inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code twdf_clean['stage'].value_counts() ###Output _____no_output_____ ###Markdown DefineReplace empty values in stage column with np.nan Code ###Code twdf_clean['stage'].replace('', np.nan, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code twdf_clean['stage'].value_counts() ###Output _____no_output_____ ###Markdown DefineConvert the stage column type to categorical Code ###Code twdf_clean.stage = twdf_clean.stage.astype('category') ###Output _____no_output_____ ###Markdown Test ###Code twdf_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2089 entries, 0 to 2355 Data columns (total 17 columns): Unnamed: 0 2089 non-null int64 tweet_id 2089 non-null object timestamp 2089 non-null datetime64[ns] source 2089 non-null object text 2089 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64
retweeted_status_timestamp 0 non-null object expanded_urls 2086 non-null object rating_numerator 2089 non-null object rating_denominator 2089 non-null int64 name 1423 non-null object doggo 2089 non-null object floofer 2089 non-null object pupper 2089 non-null object puppo 2089 non-null object stage 336 non-null category dtypes: category(1), datetime64[ns](1), float64(2), int64(2), object(11) memory usage: 279.9+ KB ###Markdown Issue 2 DefineDrop unneeded columns (Unnamed: 0) from twdf_clean, image_predictions_clean and df_json_clean Code ###Code df_json_clean.info() twdf_clean.drop(['Unnamed: 0'], axis=1, inplace = True) image_predictions_clean.drop(['Unnamed: 0'], axis=1, inplace = True) df_json_clean.drop(['Unnamed: 0'], axis=1, inplace = True) df_json_clean.drop(['Unnamed: 0.1'], axis=1, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code twdf_clean.info() df_json_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2346 entries, 0 to 2345 Data columns (total 3 columns): tweet_id 2346 non-null object retweet_count 2346 non-null int64 favorite_count 2346 non-null int64 dtypes: int64(2), object(1) memory usage: 55.1+ KB ###Markdown Issue 3 DefineCreate a comprehensive dataset by merging the (image_predictions_clean and df_json_clean into twdf_clean) datasets and drop the unneeded columns.The comprehensive dataset will contain the following data:* tweet_id* timestamp* text* rating numerator* rating_denominator* stage* retweet_count* favorite_count* jpg_url* img_num* p1* p1_conf* p1_dog* p2* p2_conf* p2_dog* p3* p3_conf* p3_dogFirst, the twdf_clean will be cleaned by removing these (8) columns* retweeted_status_id* retweeted_status_user_id* retweeted_status_timestamp* expanded_urls* doggo* floofer* pupper* puppo Code ###Code twdf_clean.drop(['retweeted_status_id'], axis=1, inplace = True) twdf_clean.drop(['retweeted_status_user_id'], axis=1, inplace = True) twdf_clean.drop(['retweeted_status_timestamp'], axis=1, inplace = True)
twdf_clean.drop(['expanded_urls'], axis=1, inplace = True) twdf_clean.drop(['doggo'], axis=1, inplace = True) twdf_clean.drop(['floofer'], axis=1, inplace = True) twdf_clean.drop(['pupper'], axis=1, inplace = True) twdf_clean.drop(['puppo'], axis=1, inplace = True) twdf_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2089 entries, 0 to 2355 Data columns (total 8 columns): tweet_id 2089 non-null object timestamp 2089 non-null datetime64[ns] source 2089 non-null object text 2089 non-null object rating_numerator 2089 non-null object rating_denominator 2089 non-null int64 name 1423 non-null object stage 336 non-null category dtypes: category(1), datetime64[ns](1), int64(1), object(5) memory usage: 133.0+ KB ###Markdown DefineMerge twdf_clean with image_predictions Code ###Code tweets_prediction = pd.merge(twdf_clean, image_predictions_clean, how='left', on='tweet_id') ###Output _____no_output_____ ###Markdown Test ###Code tweets_prediction.sample(3) tweets_prediction.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2089 entries, 0 to 2088 Data columns (total 19 columns): tweet_id 2089 non-null object timestamp 2089 non-null datetime64[ns] source 2089 non-null object text 2089 non-null object rating_numerator 2089 non-null object rating_denominator 2089 non-null int64 name 1423 non-null object stage 336 non-null category jpg_url 1963 non-null object img_num 1963 non-null float64 p1 1963 non-null object p1_conf 1963 non-null float64 p1_dog 1963 non-null object p2 1963 non-null object p2_conf 1963 non-null float64 p2_dog 1963 non-null object p3 1963 non-null object p3_conf 1963 non-null float64 p3_dog 1963 non-null object dtypes: category(1), datetime64[ns](1), float64(4), int64(1), object(12) memory usage: 312.5+ KB ###Markdown DefineMerge tweets_prediction with df_json_clean Code ###Code tweets_predictions_all = pd.merge(tweets_prediction, df_json_clean, how='left', on='tweet_id') ###Output _____no_output_____ ###Markdown Test
###Code tweets_predictions_all.sample(3) tweets_predictions_all.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2089 entries, 0 to 2088 Data columns (total 21 columns): tweet_id 2089 non-null object timestamp 2089 non-null datetime64[ns] source 2089 non-null object text 2089 non-null object rating_numerator 2089 non-null object rating_denominator 2089 non-null int64 name 1423 non-null object stage 336 non-null category jpg_url 1963 non-null object img_num 1963 non-null float64 p1 1963 non-null object p1_conf 1963 non-null float64 p1_dog 1963 non-null object p2 1963 non-null object p2_conf 1963 non-null float64 p2_dog 1963 non-null object p3 1963 non-null object p3_conf 1963 non-null float64 p3_dog 1963 non-null object retweet_count 2089 non-null int64 favorite_count 2089 non-null int64 dtypes: category(1), datetime64[ns](1), float64(4), int64(3), object(12) memory usage: 345.1+ KB ###Markdown DefineNote that some tweets were not predicted and do not have images; those tweets need to be removed from the final dataset (tweets_predictions_all) to prepare for analysis Code ###Code # Exclude tweets with no images tweets_predictions_all = tweets_predictions_all[tweets_predictions_all['jpg_url'].notnull()] ###Output _____no_output_____ ###Markdown Test ###Code tweets_predictions_all.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1963 entries, 0 to 2088 Data columns (total 21 columns): tweet_id 1963 non-null object timestamp 1963 non-null datetime64[ns] source 1963 non-null object text 1963 non-null object rating_numerator 1963 non-null object rating_denominator 1963 non-null int64 name 1377 non-null object stage 303 non-null category jpg_url 1963 non-null object img_num 1963 non-null float64 p1 1963 non-null object p1_conf 1963 non-null float64 p1_dog 1963 non-null object p2 1963 non-null object p2_conf 1963 non-null float64 p2_dog 1963 non-null object p3 1963 non-null object p3_conf 1963 non-null float64 p3_dog 1963
non-null object retweet_count 1963 non-null int64 favorite_count 1963 non-null int64 dtypes: category(1), datetime64[ns](1), float64(4), int64(3), object(12) memory usage: 324.3+ KB ###Markdown DefineSet correct data type for columns after merge* p1_dog (bool)* p2_dog (bool)* p3_dog (bool)* img_num (int64)* source (category)* rating_numerator (float) Code ###Code tweets_predictions_all.p1_dog = tweets_predictions_all.p1_dog.astype('bool') tweets_predictions_all.p2_dog = tweets_predictions_all.p2_dog.astype('bool') tweets_predictions_all.p3_dog = tweets_predictions_all.p3_dog.astype('bool') tweets_predictions_all.img_num = tweets_predictions_all.img_num.astype('int64') tweets_predictions_all.source = tweets_predictions_all.source.astype('category') tweets_predictions_all.rating_numerator = tweets_predictions_all.rating_numerator.astype('float') ###Output _____no_output_____ ###Markdown Test ###Code tweets_predictions_all.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1963 entries, 0 to 2088 Data columns (total 21 columns): tweet_id 1963 non-null object timestamp 1963 non-null datetime64[ns] source 1963 non-null category text 1963 non-null object rating_numerator 1963 non-null float64 rating_denominator 1963 non-null int64 name 1377 non-null object stage 303 non-null category jpg_url 1963 non-null object img_num 1963 non-null int64 p1 1963 non-null object p1_conf 1963 non-null float64 p1_dog 1963 non-null bool p2 1963 non-null object p2_conf 1963 non-null float64 p2_dog 1963 non-null bool p3 1963 non-null object p3_conf 1963 non-null float64 p3_dog 1963 non-null bool retweet_count 1963 non-null int64 favorite_count 1963 non-null int64 dtypes: bool(3), category(2), datetime64[ns](1), float64(4), int64(4), object(7) memory usage: 270.8+ KB ###Markdown Storing, Analyzing, and Visualizing Requirements: Store the clean DataFrame(s) in a CSV file with the main one named twitter_archive_master.csv. Analyze and visualize your wrangled data in your
wrangle_act.ipynb Jupyter Notebook. At least three (3) insights and one (1) visualization must be produced. Store ###Code tweets_predictions_all.to_csv('twitter_archive_master.csv') ###Output _____no_output_____ ###Markdown Insight 1 - Highest and Lowest Rated Dog ###Code # Get the highest rating numerator tweets_predictions_all.nlargest(1,'rating_numerator') ###Output _____no_output_____ ###Markdown The tweet with the highest rating is [749981277374128128](https://twitter.com/dog_rates/status/749981277374128128)The Winner:![Highest Rating](https://pbs.twimg.com/media/CmgBZ7kWcAAlzFD.jpg) ###Code tweets_predictions_all.nsmallest(2,'rating_numerator') ###Output _____no_output_____ ###Markdown The Lowest Rating is ![678675843183484930](https://pbs.twimg.com/media/CWskEqnWUAAQZW_.jpg) The image is not related to dogs, which emphasizes the importance of re-iterating the wrangling process. I have picked the second lowest, which also scored 2/10![678424312106393600](https://pbs.twimg.com/media/CWo_T8gW4AAgJNo.jpg) Insight 2 - Top Favorite Count of a Tweet ###Code # Get top favorite count tweets_predictions_all.nlargest(1,'favorite_count') ###Output _____no_output_____ ###Markdown The top favorited tweet is [822872901745569793](https://twitter.com/dog_rates/status/822872901745569793), which was favorited 144175 times Insight 3 - Tweet Sources ###Code # Get count of all sources tweets_predictions_all['source'].value_counts() ###Output _____no_output_____ ###Markdown There are (3) sources of the tweets in our dataset as follows:* 2000 (Twitter for iPhone)* 29 (Twitter Web Client)* 11 (TweetDeck) Visualization 1 - Top 10 Favorited Dogs by Favorite Count ###Code df_top10 = tweets_predictions_all[tweets_predictions_all['name'].notnull()] #df_top10 = tweets_predictions_all.nlargest(10,'favorite_count') # Create a dataset that contains the top 10 favorited dogs df_top10 = df_top10.nlargest(10,'favorite_count') #Plot the top favorited dogs sns.set(style="whitegrid"); sns.set(font_scale=2);
f, ax = plt.subplots(figsize=(9, 15)); ax = sns.barplot(x='favorite_count', y='name', data=df_top10); ax.set(xlim=(70000,130000), ylabel="Dog Name", xlabel="Favorite Count"); plt.title('Top 10 Favorited Dogs by Favorite Count'); ###Output E:\ProgramData\Anaconda3\lib\site-packages\seaborn\categorical.py:1460: FutureWarning: remove_na is deprecated and is a private function. Do not use. stat_data = remove_na(group_data) ###Markdown This visualization shows the top favorited dogs by favorite count for named dogs. For this reason, the result found in Insight 2 - Top Favorite Count of a Tweet is not listed as part of this visualization. The top favorited dog is named Jamesy and he earned 124742 favorite counts Visualization 2 - Successful Breed Prediction for Algorithm P1 ###Code #Filter out failed predictions df_p1 = tweets_predictions_all[tweets_predictions_all['p1_dog'] == True] # Store the count of the successful predictions in successful_p1 successful_p1 = df_p1.groupby('p1').p1_dog.count() # Create a dataset that contains breed and count for the breeds algorithm P1 successfully predicted successful_predictions = pd.DataFrame({'breed':successful_p1.index, 'count':successful_p1.values}) sns.set(style="whitegrid") sns.set(font_scale=1.5) f, ax = plt.subplots(figsize=(10, 30)) sns.set_color_codes("dark") sns.barplot(x="count", y="breed", data=successful_predictions.sort_values("count", ascending=False),label="Total"); ax.set_xlabel("Number of Successful Predictions Using Algorithm #1",fontsize=25) ax.set_ylabel("Dog Breed",fontsize=30) sns.despine(left=True, bottom=True) ###Output E:\ProgramData\Anaconda3\lib\site-packages\seaborn\categorical.py:1460: FutureWarning: remove_na is deprecated and is a private function. Do not use.
stat_data = remove_na(group_data) ###Markdown WeRateDogs Twitter Data Analysis ###Code import pandas as pd import tweepy import numpy as np import json import os import requests import pandas.api.types as ptypes import seaborn as sb import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown Gather DataIn this section we'll gather the data from multiple sources:- WeRateDogs Twitter archive- Tweet image classifications (in the context of dog breeds)- Retweet and favorite counts from Twitter APINote that after gathering each dataset into a dataframe, each dataframe will be copied as "\_unclean" to back up the original uncleaned version of the data. Cleaning operations will not be done to these "unclean" dataframes. WeRateDogs Twitter ArchiveThe data will be loaded from the CSV file `twitter-archive-enhanced.csv`. ###Code twitter_df = pd.read_csv('twitter-archive-enhanced.csv') twitter_df_unclean = twitter_df.copy() # to backup the raw uncleaned version of the data twitter_df.head() ###Output _____no_output_____ ###Markdown Tweet Image PredictionsThe data will be retrieved from the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv ###Code if 'image_predictions.tsv' in os.listdir(): print('"image_predictions.tsv" file already exists, retrieval will be skipped.') else: response = requests.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv') with open('image_predictions.tsv', 'wb') as file: file.write(response.content) image_pred_df = pd.read_csv('image_predictions.tsv', sep='\t') image_pred_df_unclean = image_pred_df.copy() # to backup the raw uncleaned version of the data image_pred_df.head() ###Output _____no_output_____ ###Markdown Get Retweet Counts and Like Counts from TweepyWe'll use Tweepy to get the `retweet_count` and `favorite_count` of every tweet in `twitter_df`.
Note that some of the tweets may no longer exist, and for these the `retweet_count` and `favorite_count` will not be available. ###Code if 'tweet_json.txt' in os.listdir(): print('"tweet_json.txt" file already exists, retrieval will be skipped.') else: tweepy_auth_dir = 'auth/tweepy_auth.json' with open(tweepy_auth_dir, 'r') as file: tweepy_auth_json = json.load(file) key = tweepy_auth_json['key'] secret = tweepy_auth_json['secret'] auth = tweepy.OAuthHandler(key, secret) api = tweepy.API(auth) for tweet_id in twitter_df.tweet_id: try: status = api.get_status(tweet_id, tweet_mode='extended') with open('tweet_json.txt', 'a+') as out_file: json.dump(status._json, out_file) out_file.write('\n') except Exception: pass # skip tweets that are deleted or otherwise inaccessible tweet_infos = [] with open('tweet_json.txt', 'r') as file: for line in file: tweet_json = json.loads(line) tweet_infos.append({'tweet_id': tweet_json['id_str'], 'retweet_count': tweet_json['retweet_count'], 'favorite_count': tweet_json['favorite_count']}) tweet_infos_df = pd.DataFrame(tweet_infos) tweet_infos_df_unclean = tweet_infos_df.copy() # to backup the raw uncleaned version of the data tweet_infos_df.to_csv('tweet_infos.csv', index=False) tweet_infos_df.head() ###Output _____no_output_____ ###Markdown Data Assessment and CleaningFirst and foremost, we'd like to remove all statuses that are retweets/replies, so that they don't hamper the analysis result. This will be done in 2 steps, i.e. to first create the is_retweet and is_reply columns, then dropping all rows that are retweets/replies based on those 2 new columns. Convert retweet and reply information columns in `twitter_df` into `is_retweet` and `is_reply` AssessmentThe following columns in `twitter_df` are deemed unnecessary for analysis:- `in_reply_to_status_id`- `in_reply_to_user_id`- `retweeted_status_id`- `retweeted_status_user_id`- `retweeted_status_timestamp`This is because knowing the exact status ID or User ID that the status is replying to or retweeting from is not useful for the analysis.
However, the information that says "this tweet is a reply" or "this tweet is a retweet" will be useful and can be obtained from these columns, so we'll create those columns instead. Below we'll count the non-null values in these columns. ###Code twitter_df.in_reply_to_status_id.notna().sum() twitter_df.retweeted_status_id.notna().sum() ###Output _____no_output_____ ###Markdown As can be seen above, there are 78 reply and 181 retweet statuses. Below we'll first convert them into `is_reply` and `is_retweet`. CleaningCreate new columns `is_reply` and `is_retweet`, where the value is `True` if the value of `in_reply_to_status_id` and `retweeted_status_id` is non-null respectively. ###Code twitter_df['is_reply'] = twitter_df.in_reply_to_status_id.notna() twitter_df['is_retweet'] = twitter_df.retweeted_status_id.notna() # test assert twitter_df.is_reply.sum() == twitter_df.in_reply_to_status_id.notna().sum() assert twitter_df.is_retweet.sum() == twitter_df.retweeted_status_id.notna().sum() ###Output _____no_output_____ ###Markdown Drop the columns `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id`, `retweeted_status_user_id`, and `retweeted_status_timestamp` from `twitter_df`. ###Code cols_to_drop = ['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'] twitter_df.drop(columns=cols_to_drop, inplace=True) # test assert not twitter_df.columns.isin(cols_to_drop).any() ###Output _____no_output_____ ###Markdown ReassessmentThe output below shows that `twitter_df` is now more concise.
###Code twitter_df.head() twitter_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 14 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 timestamp 2356 non-null object 2 source 2356 non-null object 3 text 2356 non-null object 4 expanded_urls 2297 non-null object 5 rating_numerator 2356 non-null int64 6 rating_denominator 2356 non-null int64 7 name 2356 non-null object 8 doggo 2356 non-null object 9 floofer 2356 non-null object 10 pupper 2356 non-null object 11 puppo 2356 non-null object 12 is_reply 2356 non-null bool 13 is_retweet 2356 non-null bool dtypes: bool(2), int64(3), object(9) memory usage: 225.6+ KB ###Markdown Remove all rows that are retweets or replies AssessmentSome of the tweets are actually just retweets or replies, which may hamper the analysis results, so we'll remove those rows. Below is the number of rows that are either retweets or replies. ###Code n = (twitter_df.is_retweet | twitter_df.is_reply).sum() p = n / twitter_df.shape[0] print(f'Number of retweets or replies are {n}, which is {p*100:.0f}% of the whole twitter_df.') ###Output Number of retweets or replies are 259, which is 11% of the whole twitter_df. ###Markdown Cleaning ###Code twitter_df = twitter_df[(~twitter_df.is_retweet) & (~twitter_df.is_reply)] # test assert twitter_df.is_retweet.sum() == 0 assert twitter_df.is_reply.sum() == 0 ###Output _____no_output_____ ###Markdown ReassessmentBelow we'll confirm that there are no more tweets that are retweets or replies.
###Code print(f'Number of retweets: {twitter_df.is_retweet.sum()}') twitter_df.is_retweet.unique() print(f'Number of replies: {twitter_df.is_reply.sum()}') twitter_df.is_reply.unique() ###Output _____no_output_____ ###Markdown Data Completeness - `twitter_df` Assessment ###Code twitter_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 14 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2097 non-null int64 1 timestamp 2097 non-null object 2 source 2097 non-null object 3 text 2097 non-null object 4 expanded_urls 2094 non-null object 5 rating_numerator 2097 non-null int64 6 rating_denominator 2097 non-null int64 7 name 2097 non-null object 8 doggo 2097 non-null object 9 floofer 2097 non-null object 10 pupper 2097 non-null object 11 puppo 2097 non-null object 12 is_reply 2097 non-null bool 13 is_retweet 2097 non-null bool dtypes: bool(2), int64(3), object(9) memory usage: 217.1+ KB ###Markdown As you can see above, only `expanded_urls` has null values (2094 non-null out of 2097 rows); the `in_reply_...` and `retweeted_...` columns were already dropped in a previous step. Having null values in `expanded_urls` is not an issue, because we most likely won't use this column. So in conclusion, there is no cleaning action to be done relating to missing data.
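The completeness assessments in this notebook read the `info()` output by eye; the same check can be done programmatically. Below is a minimal sketch on a toy dataframe (the column values here are made up, not the notebook's data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "tweet_id": ["1", "2", "3"],
    "expanded_urls": ["http://a", np.nan, "http://c"],
})

null_counts = df.isna().sum()              # missing values per column
incomplete = null_counts[null_counts > 0]  # keep only columns with gaps
print(incomplete.to_dict())                # {'expanded_urls': 1}
```

This gives a small, assertable summary instead of a screenful of `info()` text.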
Data Completeness - `tweet_infos_df` Assessment ###Code tweet_infos_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2043 entries, 0 to 2042 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2043 non-null object 1 retweet_count 2043 non-null int64 2 favorite_count 2043 non-null int64 dtypes: int64(2), object(1) memory usage: 48.0+ KB ###Markdown No missing value found, hence no cleaning needed. Data Completeness - `image_pred_df` Assessment ###Code image_pred_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2075 non-null int64 1 jpg_url 2075 non-null object 2 img_num 2075 non-null int64 3 p1 2075 non-null object 4 p1_conf 2075 non-null float64 5 p1_dog 2075 non-null bool 6 p2 2075 non-null object 7 p2_conf 2075 non-null float64 8 p2_dog 2075 non-null bool 9 p3 2075 non-null object 10 p3_conf 2075 non-null float64 11 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown No missing value found, hence no cleaning needed. Column Data Types - `twitter_df` AssessmentBelow we'll evaluate the data type of the columns in `twitter_df`. 
###Code twitter_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 14 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2097 non-null int64 1 timestamp 2097 non-null object 2 source 2097 non-null object 3 text 2097 non-null object 4 expanded_urls 2094 non-null object 5 rating_numerator 2097 non-null int64 6 rating_denominator 2097 non-null int64 7 name 2097 non-null object 8 doggo 2097 non-null object 9 floofer 2097 non-null object 10 pupper 2097 non-null object 11 puppo 2097 non-null object 12 is_reply 2097 non-null bool 13 is_retweet 2097 non-null bool dtypes: bool(2), int64(3), object(9) memory usage: 217.1+ KB ###Markdown Here are the data type issues found from the `info()` above:- `tweet_id` is of type integer, while it should be string because it's an ID.- `timestamp` is of type string, while it should be datetime. CleaningBelow we'll then convert the data types of the columns as described above. ###Code twitter_df.tweet_id = twitter_df.tweet_id.astype(str) # test assert ptypes.is_string_dtype(twitter_df.tweet_id) str_to_datetime_cols = ['timestamp'] twitter_df[str_to_datetime_cols] = twitter_df[str_to_datetime_cols].applymap(pd.to_datetime) # test for c in str_to_datetime_cols: assert ptypes.is_datetime64_any_dtype(twitter_df[c]) ###Output _____no_output_____ ###Markdown ReassessmentBelow we'll reassess the new data types of the columns mentioned above, and evaluate their new values. ###Code twitter_df.info() twitter_df.tweet_id.sample(5) twitter_df.timestamp[twitter_df.timestamp.notna()].sample(5) ###Output _____no_output_____ ###Markdown By looking at the assessments above, we can confirm that now the data types are correct and the values seem reasonable.You may notice that some of the IDs have shorter lengths.
This is normal because, based on [this documentation from Twitter](https://developer.twitter.com/en/docs/twitter-ids:~:text=Today%2C%20Twitter%20IDs%20are%20unique,number%2C%20and%20a%20sequence%20number.), Twitter represents IDs as 64-bit integers (but recommends storing them as strings to avoid losing accuracy in systems with smaller integer representations). Hence, it is expected for the IDs to have different lengths. No zero padding is required.There will be another data type conversion for the dog stage columns `doggo`, `floofer`, `pupper`, and `puppo` in a later section as well, where we'll convert the columns into boolean type. Column Data Types - `tweet_infos_df` Assessment ###Code tweet_infos_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2043 entries, 0 to 2042 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2043 non-null object 1 retweet_count 2043 non-null int64 2 favorite_count 2043 non-null int64 dtypes: int64(2), object(1) memory usage: 48.0+ KB ###Markdown The data types of the columns above seem to be correct, hence no cleaning needed. Column Data Types - `image_pred_df` Assessment ###Code image_pred_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2075 non-null int64 1 jpg_url 2075 non-null object 2 img_num 2075 non-null int64 3 p1 2075 non-null object 4 p1_conf 2075 non-null float64 5 p1_dog 2075 non-null bool 6 p2 2075 non-null object 7 p2_conf 2075 non-null float64 8 p2_dog 2075 non-null bool 9 p3 2075 non-null object 10 p3_conf 2075 non-null float64 11 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown Data type issues found:- `tweet_id` is integer, while it should be string since it is an ID.
CleaningBelow we'll convert the `tweet_id` column to be string. ###Code image_pred_df.tweet_id = image_pred_df.tweet_id.astype(str) # test assert ptypes.is_string_dtype(image_pred_df.tweet_id) ###Output _____no_output_____ ###Markdown Reassessment ###Code image_pred_df.info() image_pred_df.tweet_id.sample(5) ###Output _____no_output_____ ###Markdown The above assessments show that the data type has been converted correctly. Data Tidiness - `tweet_infos_df` separated from `twitter_df` AssessmentAn obvious data tidiness issue is the `tweet_infos_df` being separated from the `twitter_df`. Both of them should be combined together, because the columns `retweet_count` and `favorite_count` should belong to `twitter_df`. CleaningBelow we'll merge the `tweet_infos_df` into `twitter_df`. ###Code original_shape = twitter_df.shape # for testing # clean twitter_df = twitter_df.merge(tweet_infos_df, left_on='tweet_id', right_on='tweet_id', how='left') # test assert {'retweet_count', 'favorite_count'}.issubset(twitter_df.columns) assert twitter_df.shape == (original_shape[0], original_shape[1]+2) ###Output _____no_output_____ ###Markdown Reassessment ###Code twitter_df.info() twitter_df.head() ###Output _____no_output_____ ###Markdown The above assessment confirms that the merge is successful. Note that you may notice the data type of `retweet_count` and `favorite_count` is converted to float after the merge operation. This is a well-known problem in pandas library, and it is because the column contains NaN values. NaN values cannot be represented by integer, thus this is the reason the data type is converted to float after the merge operation. This is not an issue as the float type still allows numeric operation on those columns, while allowing NaN values in those columns as well. Hence, no cleaning needs to be done for this. 
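If the float upcast described above is ever undesirable, pandas also offers a nullable integer dtype (`Int64`) that keeps a count column integral while still holding missing values. A minimal sketch on toy dataframes (not the notebook's data):

```python
import pandas as pd

left = pd.DataFrame({"tweet_id": ["1", "2", "3"]})
right = pd.DataFrame({"tweet_id": ["1", "3"], "retweet_count": [10, 30]})

merged = left.merge(right, on="tweet_id", how="left")
print(merged["retweet_count"].dtype)   # float64 -- the NaN for id "2" forces the upcast

# converting to the nullable integer dtype restores whole numbers; the gap becomes pd.NA
merged["retweet_count"] = merged["retweet_count"].astype("Int64")
print(merged["retweet_count"].dtype)   # Int64
```

This is optional here, since float still supports the numeric operations the analysis needs.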
Data Quality - Dog Stage Columns "None" String Values AssessmentLet's now assess the `doggo`, `floofer`, `pupper`, and `puppo` columns in `twitter_df`, which represent the various dog "stages". ###Code print(twitter_df.doggo.unique()) print(twitter_df.floofer.unique()) print(twitter_df.pupper.unique()) print(twitter_df.puppo.unique()) ###Output ['None' 'doggo'] ['None' 'floofer'] ['None' 'pupper'] ['None' 'puppo'] ###Markdown As you can see, there are string "None" values in those columns, which are invalid and misleading. This can make it hard for programmatic analysis later on. CleaningWe'll replace the string "None" values to be actual `NaN` values to ease the analysis. ###Code dog_stages_cols = ['doggo', 'floofer', 'pupper', 'puppo'] twitter_df[dog_stages_cols] = twitter_df[dog_stages_cols].replace('None', np.nan) # test - ensure that there is no more 'None' string values assert (twitter_df[dog_stages_cols] == 'None').sum().sum() == 0 ###Output _____no_output_____ ###Markdown Reassessment It is confirmed below that the "None" string values have been replaced with `NaN` values, shown by the `nan` in the list of unique values. ###Code print(twitter_df.doggo.unique()) print(twitter_df.floofer.unique()) print(twitter_df.pupper.unique()) print(twitter_df.puppo.unique()) ###Output [nan 'doggo'] [nan 'floofer'] [nan 'pupper'] [nan 'puppo'] ###Markdown Below we'll reassess the actual number of null values for these dog stages columns. ###Code twitter_df[['doggo', 'floofer', 'pupper', 'puppo']].info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2096 Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 doggo 83 non-null object 1 floofer 10 non-null object 2 pupper 230 non-null object 3 puppo 24 non-null object dtypes: object(4) memory usage: 81.9+ KB ###Markdown As you can see now apparently there are a lot of null values for the dog stages columns. 
However, as we'll see in the next section, these columns actually only represent whether the corresponding dog stage word appears in the tweet text. This means it makes sense to have a lot of null values, because not all tweets mention those dog stage words. Hence, no cleaning action will be taken regarding these missing dog stage values. Column Data Type - Dog Stages Columns in `twitter_df` Assessment
As described in the previous section, the dog stage columns in `twitter_df` actually represent the existence of the dog stage word in the tweet text. These columns are currently represented as strings, and the value representation is redundant: for example, the existence of the word "doggo" in the tweet is represented by the column `doggo` holding the string value "doggo". I would expect their data type to be boolean instead, with value `True` if the corresponding dog stage appears in the tweet text. Cleaning
Convert the data type and values of `doggo`, `floofer`, `pupper`, and `puppo` in `twitter_df` into boolean. For the values, assign `True` if the value is non-null, and `False` if it is null. This is safe to do since, in the section above, we've checked that the unique values of each dog stage column contain only NaN or the dog stage name itself. There is no case where a dog stage column contains a string value representing a different dog stage name. For clarity, below we'll reproduce the unique values again. ###Code dog_stages_cols = ['doggo', 'floofer', 'pupper', 'puppo'] for c in dog_stages_cols: print(f'{c} unique values: {twitter_df[c].unique()}') ###Output doggo unique values: [nan 'doggo'] floofer unique values: [nan 'floofer'] pupper unique values: [nan 'pupper'] puppo unique values: [nan 'puppo'] ###Markdown Below we'll then do the conversion to boolean.
###Code twitter_df[dog_stages_cols] = twitter_df[dog_stages_cols].notna() # non-null means the stage word is present # test for c in dog_stages_cols: assert ptypes.is_bool_dtype(twitter_df[c]) assert twitter_df.doggo.sum() == 83 assert twitter_df.floofer.sum() == 10 assert twitter_df.pupper.sum() == 230 assert twitter_df.puppo.sum() == 24 ###Output _____no_output_____ ###Markdown Reassessment
Below we'll check the data type of the dog stage columns. ###Code twitter_df[dog_stages_cols].info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2096 Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 doggo 2097 non-null bool 1 floofer 2097 non-null bool 2 pupper 2097 non-null bool 3 puppo 2097 non-null bool dtypes: bool(4) memory usage: 24.6 KB ###Markdown We'll also check the number of True values in those columns, which we expect to be the same as the previous non-null value counts of those columns (because after our transformation, null should equal False and non-null should equal True). ###Code twitter_df[dog_stages_cols].sum() ###Output _____no_output_____ ###Markdown As can be seen, the number of True values matches the number of non-null values of the columns before the transformation (refer to the values presented in the previous section). Data Accuracy - Multiple Dog Stages in a Row (while actually there is only one dog) Assessment
Next, let's see whether it is possible for a row to have multiple dog stages. ###Code dog_stages_count = twitter_df[['doggo', 'floofer', 'pupper', 'puppo']].sum(axis=1) print(f'Possible number of dog stages in a row: {dog_stages_count.unique()}') ###Output Possible number of dog stages in a row: [0 1 2] ###Markdown Surprisingly, there are indeed some rows that have 2 dog stages. Since this dog stage information is derived from the corresponding tweet status text, let's now see the corresponding statuses of these rows.
###Code multi_dog_stages_rows = twitter_df[dog_stages_count >= 2] statuses = multi_dog_stages_rows.text.values for s in statuses: print('- ' + s) ###Output
- Here's a puppo participating in the #ScienceMarch. Cleverly disguising her own doggo agenda. 13/10 would keep the planet habitable for https://t.co/cMhq16isel
- At first I thought this was a shy doggo, but it's actually a Rare Canadian Floofer Owl. Amateurs would confuse the two. 11/10 only send dogs https://t.co/TXdT3tmuYk
- This is Dido. She's playing the lead role in "Pupper Stops to Catch Snow Before Resuming Shadow Box with Dried Apple." 13/10 (IG: didodoggo) https://t.co/m7isZrOBX7
- Here we have Burke (pupper) and Dexter (doggo). Pupper wants to be exactly like doggo. Both 12/10 would pet at same time https://t.co/ANBpEYHaho
- This is Bones. He's being haunted by another doggo of roughly the same size. 12/10 deep breaths pupper everything's fine https://t.co/55Dqe0SJNj
- This is Pinot. He's a sophisticated doggo. You can tell by the hat. Also pointier than your average pupper. Still 10/10 would pet cautiously https://t.co/f2wmLZTPHd
- Pupper butt 1, Doggo 0. Both 12/10 https://t.co/WQvcPEpH2u
- Meet Maggie &amp; Lila. Maggie is the doggo, Lila is the pupper. They are sisters. Both 12/10 would pet at the same time https://t.co/MYwR4DQKll
- Please stop sending it pictures that don't even have a doggo or pupper in them. Churlish af. 5/10 neat couch tho https://t.co/u2c9c7qSg8
- This is just downright precious af. 12/10 for both pupper and doggo https://t.co/o5J479bZUC
- Like father (doggo), like son (pupper). Both 12/10 https://t.co/pG2inLaOda
###Markdown Evaluating the list of twitter statuses above (and also actually opening the pictures of the tweets), most of them have multiple dog stages because the status and the picture involve more than one dog, which makes sense. However, some rows actually involve only one dog, while the tweet status mentions multiple dog stages in the text.
For these cases, having multiple dog stages is indeed misleading. Since there are only a few, we'll fix them one by one. Cleaning
From visual observation of the statuses above, below we list the statuses that actually involve only one dog (despite the text misleadingly mentioning multiple dog stages). ###Code idxs_invalid_multi_dog_stages = multi_dog_stages_rows.index[[0, 1, 2, 5, 8]] for s in twitter_df.iloc[idxs_invalid_multi_dog_stages].text.values: print('- ' + s) ###Output
- Here's a puppo participating in the #ScienceMarch. Cleverly disguising her own doggo agenda. 13/10 would keep the planet habitable for https://t.co/cMhq16isel
- At first I thought this was a shy doggo, but it's actually a Rare Canadian Floofer Owl. Amateurs would confuse the two. 11/10 only send dogs https://t.co/TXdT3tmuYk
- This is Dido. She's playing the lead role in "Pupper Stops to Catch Snow Before Resuming Shadow Box with Dried Apple." 13/10 (IG: didodoggo) https://t.co/m7isZrOBX7
- This is Pinot. He's a sophisticated doggo. You can tell by the hat. Also pointier than your average pupper. Still 10/10 would pet cautiously https://t.co/f2wmLZTPHd
- Please stop sending it pictures that don't even have a doggo or pupper in them. Churlish af. 5/10 neat couch tho https://t.co/u2c9c7qSg8
###Markdown Below I list the correct stage for each tweet status, derived from the text above, and then correct the dog stage column values accordingly.
###Code correct_stages = ['puppo', 'floofer', 'pupper', 'pupper', 'doggo'] dog_stages_columns = np.array(['doggo', 'floofer', 'pupper', 'puppo']) for row, col in zip(idxs_invalid_multi_dog_stages, correct_stages): cols_to_false = dog_stages_columns[dog_stages_columns != col] twitter_df.loc[row, cols_to_false] = False # test assert (twitter_df.loc[idxs_invalid_multi_dog_stages, dog_stages_columns].sum(axis=1).unique() == 1).all() ###Output _____no_output_____ ###Markdown Reassessment
Below we'll check again that the rows with invalid multiple dog stages now have only one dog stage per row. ###Code pd.options.display.max_colwidth = 150 # so we can see the whole status text twitter_df.loc[idxs_invalid_multi_dog_stages, np.append('text', dog_stages_columns)] pd.options.display.max_colwidth = 50 # reset display ###Output _____no_output_____ ###Markdown The output above shows that those rows no longer have invalid multiple dog stages. Data Tidiness - Change Index of `twitter_df` and `image_pred_df` to `tweet_id` Assessment
As seen below, the number of unique `tweet_id` values is the same as the total number of rows for both `twitter_df` and `image_pred_df`. This means we can set their indexes to `tweet_id`, which makes more sense and will ease analysis. ###Code print(f'Number of unique ID ({len(twitter_df.tweet_id.unique())}) ' + f'is same as the number of rows of twitter_df ({twitter_df.shape[0]}).') print(f'Number of unique ID ({len(image_pred_df.tweet_id.unique())}) ' + f'is same as the number of rows of image_pred_df ({image_pred_df.shape[0]}).') ###Output Number of unique ID (2075) is same as the number of rows of image_pred_df (2075). ###Markdown Cleaning
Below we'll convert the indexes of `twitter_df` and `image_pred_df` into `tweet_id`.
###Code twitter_df.set_index('tweet_id', inplace=True) image_pred_df.set_index('tweet_id', inplace=True) # test assert twitter_df.index.name == 'tweet_id' assert image_pred_df.index.name == 'tweet_id' ###Output _____no_output_____ ###Markdown Reassessment
Below we'll see that their indexes are now set to `tweet_id`. ###Code twitter_df.head(2) image_pred_df.head(2) ###Output _____no_output_____ ###Markdown Data Accuracy - `rating_numerator` and `rating_denominator` in `twitter_df` Assessment
Let's assess the `describe()` below. ###Code twitter_df.describe() ###Output _____no_output_____ ###Markdown It is odd that `rating_numerator` has a minimum value of 0 and a maximum value of 1776. The same goes for `rating_denominator`, with a minimum value of 0 and a maximum value of 170. This is odd because the [WeRateDogs wikipedia page](https://en.wikipedia.org/wiki/WeRateDogs) describes the rating numerator as generally higher than 10, with the denominator mostly being 10. Below we'll investigate the rows that have these odd numerator and denominator values. ###Code pd.options.display.max_colwidth = 150 # so we can see the whole status text idxs = twitter_df.rating_numerator <= 10 twitter_df.loc[idxs, ['text', 'rating_numerator', 'rating_denominator']].head() ###Output _____no_output_____ ###Markdown As you can see above, one of the rows, for ID "883482846933004288", has a rating that is actually the decimal 13.5/10. However, the numerator was detected as 5. It seems that the original rating extraction didn't take into account the possibility of the rating having a decimal point. Cleaning
Hence, below I will re-extract the rating from the text, taking into account both possibilities of:
- the rating having decimal points
- multiple ratings within a single tweet

I will store the ratings in a separate dataframe from `twitter_df`.
This is also because there can be multiple ratings per tweet, so a separate dataframe is needed to store the data correctly. The new dataframe's name will be `dog_ratings_df`. We'll rename the resulting `match` index to `rating_no` to represent the index of multiple ratings within a single tweet status. ###Code dog_ratings_df = twitter_df.text.str.extractall(r'(((\d+\.)?\d+)/((\d+\.)?\d+))')[[1,3]] dog_ratings_df.columns = ['numerator', 'denominator'] dog_ratings_df.index.rename(['tweet_id', 'rating_no'], inplace=True) # test assert dog_ratings_df.index.names[0] == 'tweet_id' assert dog_ratings_df.index.names[1] == 'rating_no' assert dog_ratings_df.columns[0] == 'numerator' assert dog_ratings_df.columns[1] == 'denominator' ###Output _____no_output_____ ###Markdown Reassessment
Here we check whether the detection of ratings with decimal points is successful. ###Code idx_with_dec = dog_ratings_df.numerator.str.contains(r'\.') ratings_with_dec = dog_ratings_df[idx_with_dec] ids_with_dec = ratings_with_dec.index.get_level_values('tweet_id') ratings_with_dec.join(twitter_df.loc[ids_with_dec].text) # to show together with the tweet status text ###Output _____no_output_____ ###Markdown You can see that the rating detection is effective for decimal numerator values. Since we've ensured that decimal detection was successful, we can now convert the data type of the rating values to float. Cleaning
Below we'll then convert the data type of the rating numerator and denominator to float.
###Code dog_ratings_df = dog_ratings_df.astype(float) dog_ratings_df.head() dog_ratings_df.info() ###Output <class 'pandas.core.frame.DataFrame'> MultiIndex: 2125 entries, ('892420643555336193', 0) to ('666020888022790149', 0) Data columns (total 2 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 numerator 2125 non-null float64 1 denominator 2125 non-null float64 dtypes: float64(2) memory usage: 136.0+ KB ###Markdown Reassessment
We'll evaluate the detected denominator values first, since the expectation for them is clear: the denominator should be exactly 10, and any deviation from that should be investigated further. Below we'll show the tweets whose denominator is not equal to 10, with their corresponding tweet statuses. ###Code dog_ratings_df[dog_ratings_df.denominator != 10].join(twitter_df.text) ###Output _____no_output_____ ###Markdown As can be seen above, some of them were mistakenly extracted from common phrases such as "7/11" and "24/7", while the others are actually valid ratings. However, since this will affect the fairness of rating comparison, we'll simply drop all ratings whose denominator is not 10. Cleaning
Drop all the ratings whose denominator is not 10. ###Code dog_ratings_df = dog_ratings_df[dog_ratings_df.denominator == 10] # test assert (dog_ratings_df.denominator != 10).sum() == 0 ###Output _____no_output_____ ###Markdown Reassessment
From the print of unique values below, we can now be sure that the denominators are all 10. ###Code dog_ratings_df.denominator.unique() ###Output _____no_output_____ ###Markdown Next, we'll evaluate the numerator values. We'll display the unique values of the numerator, sorted from lowest to highest. ###Code unique_nums = dog_ratings_df.numerator.unique() unique_nums.sort() unique_nums.astype(str) ###Output _____no_output_____ ###Markdown There seem to be some oddly high numerator values above, i.e. the values greater than 17.
Also, there are values lower than 10, which is unusual since the site says the numerator should be above 10. For now, let's evaluate the tweets that have a rating numerator > 17. ###Code ratings_num_big = dog_ratings_df[dog_ratings_df.numerator > 17] ratings_num_big.join(twitter_df.text) ###Output _____no_output_____ ###Markdown Again, for a fair comparison of the ratings, we'll remove any numerator ratings greater than 17. Cleaning
Remove any numerator ratings greater than 17 from the ratings dataframe. ###Code dog_ratings_df = dog_ratings_df[dog_ratings_df.numerator <= 17] # test assert (dog_ratings_df.numerator > 17).sum() == 0 ###Output _____no_output_____ ###Markdown Reassessment
The description below shows that the oddly high numerator values are gone, with the maximum now at 17. ###Code dog_ratings_df.numerator.describe() ###Output _____no_output_____ ###Markdown Now that the oddly high values of the numerator are settled, we'll take care of the numerator values that are less than 10. ###Code dog_ratings_df.query('numerator <= 10') ###Output _____no_output_____ ###Markdown There appear to be quite a few, and we assume that the people who gave a numerator <= 10 did not know the convention that the numerator should be greater than 10. Under this assumption, giving a rating of 0 is equivalent to giving a rating of 10. Hence, we'll offset the numerator values by 10 for the rows whose numerators are <= 10. Cleaning
For all the numerator values that are <= 10, increase their value by 10. ###Code dog_ratings_df.loc[(dog_ratings_df.numerator <= 10), 'numerator'] += 10 # test assert dog_ratings_df.numerator.min() >= 10 ###Output _____no_output_____ ###Markdown Reassessment
Below we show that the numerator now ranges from 10 to 20, which are very reasonable values.
###Code dog_ratings_df.numerator.describe() ###Output _____no_output_____ ###Markdown Now we are settled with `dog_ratings_df`, and the only thing left to do is drop the rating columns from `twitter_df`. Cleaning
Drop the rating columns `rating_numerator` and `rating_denominator` from `twitter_df`. ###Code twitter_df.drop(columns=['rating_numerator', 'rating_denominator'], inplace=True) # test assert not twitter_df.columns.isin(['rating_numerator', 'rating_denominator']).any() ###Output _____no_output_____ ###Markdown Final Reassessment
Below we'll do a final reassessment of the result of this cleaning effort for the dog ratings. ###Code dog_ratings_df.head(2) dog_ratings_df.info() dog_ratings_df.describe() ###Output _____no_output_____ ###Markdown The assessments above show that `dog_ratings_df` looks reasonable. Below we'll also see that the rating columns are no longer in `twitter_df`. ###Code pd.options.display.max_colwidth = 50 # reset display twitter_df.head(2) ###Output _____no_output_____ ###Markdown Data Tidiness - Dog Stage and Dog Name Columns should not be in `twitter_df` Assessment
The dog stage columns `doggo`, `floofer`, `pupper`, and `puppo`, and the `name` column, do not seem to belong in `twitter_df`. They need their own table. We'll separate them into a new dataframe named `tweet_dog_info_df`. Cleaning
Separate the columns `doggo`, `floofer`, `pupper`, `puppo`, and `name` into a new dataframe named `tweet_dog_info_df`. ###Code dog_info_cols = ['name', 'doggo', 'floofer', 'pupper', 'puppo'] tweet_dog_info_df = twitter_df[dog_info_cols].copy() ###Output _____no_output_____ ###Markdown Drop the columns from `twitter_df`. ###Code twitter_df.drop(columns=dog_info_cols, inplace=True) # test assert not twitter_df.columns.isin(dog_info_cols).any() ###Output _____no_output_____ ###Markdown Reassessment
Below we'll evaluate the resulting dataframes again.
###Code tweet_dog_info_df.head(2) tweet_dog_info_df.info() twitter_df.head(2) twitter_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Index: 2097 entries, 892420643555336193 to 666020888022790149 Data columns (total 8 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 timestamp 2097 non-null datetime64[ns, UTC] 1 source 2097 non-null object 2 text 2097 non-null object 3 expanded_urls 2094 non-null object 4 is_reply 2097 non-null bool 5 is_retweet 2097 non-null bool 6 retweet_count 1815 non-null float64 7 favorite_count 1815 non-null float64 dtypes: bool(2), datetime64[ns, UTC](1), float64(2), object(3) memory usage: 198.8+ KB ###Markdown It is concluded from the assessments above that the structure of `twitter_df` is now tidy, as it serves a single function: containing the technical information of each tweet. `tweet_dog_info_df` then contains the dog-related information found in each tweet's status. Data Validity - Dog Name in `tweet_dog_info_df` Containing Invalid Names Assessment ###Code names = tweet_dog_info_df.name.unique() names.sort() names ###Output _____no_output_____ ###Markdown From the above list of names, we can find several invalid names:
- 'None'
- 'a'
- 'actually'
- 'all'
- 'an'
- 'by'
- 'getting'
- 'his'
- 'incredibly'
- 'infuriating'
- 'just'
- 'life'
- 'light'
- 'mad'
- 'my'
- 'not'
- 'officially'
- 'old'
- 'one'
- 'quite'
- 'space'
- 'such'
- 'the'
- 'this'
- 'unacceptable'
- 'very'

Most of the invalid names are "None" and the names that start with a lower-case letter. Cleaning
We'll now change the names that are "None" or start with a lower-case letter to the actual `None` value.
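As a quick illustration of how a lower-case test flags such names, here is a sketch on a few made-up values (the real cleaning operates on `tweet_dog_info_df.name`):

```python
import pandas as pd

# made-up sample mixing valid dog names with invalid lower-case extractions
names = pd.Series(["Charlie", "a", "None", "Lucy", "quite"])

# names starting with a lower-case letter are not real dog names here
lower_case = names.str.contains('^[a-z]')
print(names[lower_case].tolist())  # ['a', 'quite']
```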
###Code lower_case_names = tweet_dog_info_df.name.str.contains('^[a-z]') tweet_dog_info_df.loc[lower_case_names, 'name'] = None # test assert tweet_dog_info_df.name.str.contains('^[a-z]').sum() == 0 idxs_none = tweet_dog_info_df.name == 'None' tweet_dog_info_df.loc[idxs_none, 'name'] = None # test assert (tweet_dog_info_df.name == 'None').sum() == 0 ###Output _____no_output_____ ###Markdown Reassessment ###Code names = tweet_dog_info_df.name.astype(str).unique() names.sort() names ###Output _____no_output_____ ###Markdown As can be seen from the names above, all the invalid names we observed before are now removed (replaced with `NaN`). Dog Breed Values Assessment in `image_pred_df` Assessment
Below we'll assess the predicted dog breed names in `image_pred_df`, looking for any ambiguously similar dog breed names. ###Code dog_breeds = pd.concat((image_pred_df.p1, image_pred_df.p2)) dog_breeds = pd.concat((dog_breeds, image_pred_df.p3)) dog_breeds = dog_breeds.unique() dog_breeds.sort() a = dog_breeds[:-1] b = dog_breeds[1:] a_lens = np.array([len(x) for x in a]) b_lens = np.array([len(x) for x in b]) # observe only the dog breed names whose lengths are similar to the previous one (this is a sorted list) len_diffs = np.abs(b_lens - a_lens) idxs = len_diffs <= 2 idxs = np.insert(idxs, 0, True) idxs_final = idxs[:-1] | idxs[1:] idxs_final = np.append(idxs_final, idxs[-1]) dog_breeds[idxs_final] ###Output _____no_output_____ ###Markdown Visually assessing the list of dog breed names above shows that there are no ambiguously similar dog breed names. Convert `source` column values in `twitter_df` into categorical values Assessment
The `source` column in `twitter_df` (shown below) has only 4 unique values. To be more concise, we can convert this column into a categorical column instead.
###Code twitter_df.source.unique() ###Output _____no_output_____ ###Markdown Cleaning
Convert the `source` column in `twitter_df` into a categorical column with the values `iphone`, `webclient`, `vine`, and `tweetdeck`. ###Code def conv_to_source_category(x): if 'iphone' in x: return 'iphone' elif 'Web Client' in x: return 'webclient' elif 'vine' in x: return 'vine' elif 'tweetdeck' in x: return 'tweetdeck' else: return None twitter_df.source = twitter_df.source.apply(conv_to_source_category).astype('category') # test assert ptypes.is_categorical_dtype(twitter_df.source.dtype) assert twitter_df.source.dtype.categories.isin(['iphone', 'tweetdeck', 'vine', 'webclient']).all() ###Output _____no_output_____ ###Markdown Reassessment
We can see below that the `source` column is now categorical. ###Code twitter_df.source.unique() ###Output _____no_output_____ ###Markdown Dropping completely null values from `tweet_dog_info_df` Assessment
After all the cleaning done so far, it turns out that `tweet_dog_info_df` contains a lot of "completely null" rows. By completely null rows, we mean rows in which all the columns are null. For the dog stage columns `doggo`, `floofer`, `pupper`, and `puppo`, if all of them are `False` and the corresponding `name` column is null, then the row is considered completely null as well, because such a row is practically useless. Below we'll see the completely null rows in `tweet_dog_info_df`. ###Code null_rows = tweet_dog_info_df.name.isna() & (tweet_dog_info_df[['doggo', 'floofer', 'pupper', 'puppo']].sum(axis=1) == 0) tweet_dog_info_df.loc[null_rows] ###Output _____no_output_____ ###Markdown Cleaning
Drop all the completely null rows from `tweet_dog_info_df`, i.e. rows whose `name` is null and whose dog stage columns are all `False`.
###Code null_rows = tweet_dog_info_df.name.isna() & (tweet_dog_info_df[['doggo', 'floofer', 'pupper', 'puppo']].sum(axis=1) == 0) tweet_dog_info_df = tweet_dog_info_df.loc[~null_rows].copy() # test null_rows_new = tweet_dog_info_df.name.isna() & (tweet_dog_info_df[['doggo', 'floofer', 'pupper', 'puppo']].sum(axis=1) == 0) assert null_rows_new.sum() == 0 ###Output _____no_output_____ ###Markdown Reassessment
Below we'll see that there are no more rows that are completely null in `tweet_dog_info_df`. ###Code print(f'Number of completely null rows in "tweet_dog_info_df": {null_rows_new.sum()}') ###Output Number of completely null rows in "tweet_dog_info_df": 0 ###Markdown Data Tidiness for `image_pred_df` Assessment
The following columns in `image_pred_df` do not respect the data tidiness rule that each variable should be represented by a single column (each variable is instead spread across 3 columns):
- p1, p1_conf, p1_dog
- p2, p2_conf, p2_dog
- p3, p3_conf, p3_dog

Cleaning
Melt the following columns:
- p1, p1_conf, p1_dog
- p2, p2_conf, p2_dog
- p3, p3_conf, p3_dog

into the following columns:
- pred_level (the value is either 1, 2, or 3)
- pred_confidence
- pred_class
- is_dog

###Code # melt for each variable id_vars = ['tweet_id', 'jpg_url', 'img_num'] pred_class_df = pd.melt(image_pred_df.reset_index(), id_vars=id_vars, value_vars=['p1', 'p2', 'p3'], var_name='pred_level', value_name='pred_class') pred_conf_df = pd.melt(image_pred_df.reset_index(), id_vars=id_vars, value_vars=['p1_conf', 'p2_conf', 'p3_conf'], var_name='pred_level', value_name='pred_confidence') pred_is_dog_df = pd.melt(image_pred_df.reset_index(), id_vars=id_vars, value_vars=['p1_dog', 'p2_dog', 'p3_dog'], var_name='pred_level', value_name='is_dog') # extract the number for the prediction level pred_class_df.pred_level = pred_class_df.pred_level.str.extract('(\d+)').astype(int) pred_conf_df.pred_level = pred_conf_df.pred_level.str.extract('(\d+)').astype(int) pred_is_dog_df.pred_level =
pred_is_dog_df.pred_level.str.extract('(\d+)').astype(int) # join the dataframes on_cols = ['tweet_id', 'jpg_url', 'img_num', 'pred_level'] image_pred_df = pred_class_df.merge(pred_conf_df, on=on_cols, how='outer').merge(pred_is_dog_df, on=on_cols, how='outer') # convert pred_level to be categorical column levels = image_pred_df.pred_level.unique() levels.sort() pred_levels_category = pd.api.types.CategoricalDtype(categories=levels, ordered=True) image_pred_df.pred_level = image_pred_df.pred_level.astype(pred_levels_category) # rename columns image_pred_df.rename(columns={'jpg_url': 'img_url', 'img_num': 'img_idx'}, inplace=True) # set index image_pred_df.set_index(['tweet_id', 'img_idx', 'pred_level'], inplace=True) image_pred_df.loc[:, :, :]['img_url'].drop_duplicates() ###Output _____no_output_____ ###Markdown Reassessment
As can be seen below, the dataframe is now tidy in the sense that each variable is represented by only one column. ###Code image_pred_df.head() image_pred_df.info() ###Output <class 'pandas.core.frame.DataFrame'> MultiIndex: 6225 entries, ('666020888022790149', 1, 1) to ('892420643555336193', 1, 3) Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 img_url 6225 non-null object 1 pred_class 6225 non-null object 2 pred_confidence 6225 non-null float64 3 is_dog 6225 non-null bool dtypes: bool(1), float64(1), object(2) memory usage: 273.0+ KB ###Markdown Feature Engineering - Audience Score of each Tweet
Before we proceed with analysis, we'll create a new column named `audience_score`. The value will be the sum of `favorite_count` and `retweet_count`. This calculation is based on the assumption that sometimes when people like a tweet, they retweet it but forget to click the favorite button, so it is reasonable to assume that people who retweet a status also like the status.
And if the person even remembers to click the favorite button before retweeting, it means the status was especially good! So adding the two together as an "audience score" makes sense. ###Code twitter_df['audience_score'] = twitter_df.retweet_count + twitter_df.favorite_count twitter_df.audience_score.describe() twitter_df[['retweet_count', 'favorite_count', 'audience_score']].head() ###Output _____no_output_____ ###Markdown Conclusion
With the cleaning actions done above, we end up with the following finalized and cleaned dataframes:
- `twitter_df`: contains the tweets' technical information.
- `image_pred_df`: contains dog breed classifications of the image in each tweet.
- `dog_ratings_df`: contains dog ratings found in each tweet.
- `tweet_dog_info_df`: contains the dog stage and dog name detected in each tweet.

Dataframes Preview ###Code # set formatting for floating value display pd.options.display.float_format = '{:.3f}'.format ###Output _____no_output_____ ###Markdown `twitter_df` ###Code twitter_df.head() ###Output _____no_output_____ ###Markdown `image_pred_df` ###Code image_pred_df.head() ###Output _____no_output_____ ###Markdown `dog_ratings_df` ###Code dog_ratings_df.head() ###Output _____no_output_____ ###Markdown `tweet_dog_info_df` ###Code tweet_dog_info_df.head() ###Output _____no_output_____ ###Markdown Columns Preview `twitter_df` ###Code twitter_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Index: 2097 entries, 892420643555336193 to 666020888022790149 Data columns (total 9 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 timestamp 2097 non-null datetime64[ns, UTC] 1 source 2097 non-null category 2 text 2097 non-null object 3 expanded_urls 2094 non-null object 4 is_reply 2097 non-null bool 5 is_retweet 2097 non-null bool 6 retweet_count 1815 non-null float64 7 favorite_count 1815 non-null float64 8 audience_score 1815 non-null float64 dtypes: bool(2), category(1), datetime64[ns, UTC](1), float64(3), object(2) memory usage:
201.0+ KB ###Markdown `image_pred_df` ###Code image_pred_df.info() ###Output <class 'pandas.core.frame.DataFrame'> MultiIndex: 6225 entries, ('666020888022790149', 1, 1) to ('892420643555336193', 1, 3) Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 img_url 6225 non-null object 1 pred_class 6225 non-null object 2 pred_confidence 6225 non-null float64 3 is_dog 6225 non-null bool dtypes: bool(1), float64(1), object(2) memory usage: 273.0+ KB ###Markdown `dog_ratings_df` ###Code dog_ratings_df.info() ###Output <class 'pandas.core.frame.DataFrame'> MultiIndex: 2106 entries, ('892420643555336193', 0) to ('666020888022790149', 0) Data columns (total 2 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 numerator 2106 non-null float64 1 denominator 2106 non-null float64 dtypes: float64(2) memory usage: 135.7+ KB ###Markdown `tweet_dog_info_df` ###Code tweet_dog_info_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Index: 1542 entries, 892420643555336193 to 666418789513326592 Data columns (total 5 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 name 1390 non-null object 1 doggo 1542 non-null bool 2 floofer 1542 non-null bool 3 pupper 1542 non-null bool 4 puppo 1542 non-null bool dtypes: bool(4), object(1) memory usage: 30.1+ KB ###Markdown Numeric Columns Statistics `twitter_df` ###Code twitter_df.describe() ###Output _____no_output_____ ###Markdown `image_pred_df` ###Code image_pred_df.describe() ###Output _____no_output_____ ###Markdown `dog_ratings_df` ###Code dog_ratings_df.describe() ###Output _____no_output_____ ###Markdown `tweet_dog_info_df` ###Code tweet_dog_info_df.describe() ###Output _____no_output_____ ###Markdown Store the Cleaned Data
Below we'll store each final, cleaned dataframe in CSV format.
###Code twitter_df.to_csv('twitter_archive_master.csv') image_pred_df.to_csv('image_predictions_master.csv') dog_ratings_df.to_csv('tweet_dog_ratings_master.csv') tweet_dog_info_df.to_csv('tweet_dog_infos_master.csv') ###Output _____no_output_____ ###Markdown Data Analysis
Questions to be answered:
- How does the dog rating relate to the audience score of the tweet?
- What is the most favored dog stage?
- Which does the public audience favor more, tweets with videos or tweets with photos?

Question: How does the dog rating relate to the audience score of the tweet? Below we calculate the overall correlation between the audience score and the highest dog rating in each tweet. ###Code numerators = dog_ratings_df.groupby('tweet_id').numerator.max() aud_score_dog_rating = twitter_df.join(numerators)[['audience_score', 'numerator']].dropna().rename(columns={'numerator': 'numerator_max'}) c = aud_score_dog_rating.corr().loc['audience_score', 'numerator_max'] print(f'Correlation between audience_score and numerator_max is {c:.2f}.') ###Output Correlation between audience_score and numerator_max is -0.25. ###Markdown Below I'll remove outliers from the data by treating dog rating numerator values with counts < 10 as outliers. This is based on the counts below, where some values occur only once. ###Code counts = aud_score_dog_rating.numerator_max.value_counts() print('Counts of dog rating numerator values:') print(counts) ###Output Counts of dog rating numerator values: 12.000 439 20.000 369 11.000 353 13.000 291 19.000 116 18.000 88 14.000 50 17.000 39 16.000 29 15.000 24 19.750 1 11.270 1 10.000 1 13.500 1 Name: numerator_max, dtype: int64 ###Markdown Below I'll print the numerator values that are considered as outliers.
###Code non_outlier_idxs = counts[counts>=10].index aud_score_dog_rating = aud_score_dog_rating.query('numerator_max in @non_outlier_idxs') print(f'Rating numerator outliers: {counts[~counts.index.isin(non_outlier_idxs)].index.values}') ###Output Rating numerator outliers: [19.75 11.27 10. 13.5 ] ###Markdown Below I'll plot the mean of audience score VS the dog rating numerator value. ###Code plt.figure(figsize=(10,6)) sb.barplot(data=aud_score_dog_rating, x='numerator_max', y='audience_score', color=sb.color_palette()[0]) plt.xticks(rotation=30); plt.ylabel(''); plt.xlabel('Highest Rating Numerator in each Tweet Status') plt.title('Mean Audience Score for each Dog Rating Numerator'); ###Output _____no_output_____ ###Markdown Below I'll calculate the correlation between audience score and the dog rating numerator values for numerator values of 11 up to 14. ###Code # finding correlation for numerator value of 11 to 14 numerators = dog_ratings_df.groupby('tweet_id').numerator.max() aud_score_dog_rating = twitter_df.join(numerators)[['audience_score', 'numerator']].dropna().rename(columns={'numerator': 'numerator_max'}) aud_score_dog_rating_linear = aud_score_dog_rating.query('numerator_max in @non_outlier_idxs and numerator_max <= 14') c = aud_score_dog_rating_linear.corr().loc['audience_score', 'numerator_max'] print(f'Correlation between audience_score and numerator_max for numerator_max value of 11.0 to 14.0 is {c:.2f}.') ###Output Correlation between audience_score and numerator_max for numerator_max value of 11.0 to 14.0 is 0.35. ###Markdown Below I'll re-plot the mean audience score for each dog rating numerator, for numerator values of 11 up to 14. 
###Code plt.figure(figsize=(10,6)) sb.barplot(data=aud_score_dog_rating_linear, x='numerator_max', y='audience_score', color=sb.color_palette()[0]) plt.xticks(rotation=30); plt.ylabel(''); plt.xlabel('Highest Rating Numerator in each Tweet Status') plt.title('Mean Audience Score for each Dog Rating Numerator'); ###Output _____no_output_____ ###Markdown Below I'll calculate the correlation between audience score, retweet count, and favorite count with the dog rating numerator, in general. ###Code numerators = dog_ratings_df.groupby('tweet_id').numerator.max() aud_score_dog_rating = twitter_df.join(numerators)[['audience_score', 'numerator', 'retweet_count', 'favorite_count']].dropna() aud_score_dog_rating.corr() ###Output _____no_output_____ ###Markdown Below I'll produce several plots to visualize the distribution of audience score for each dog rating numerator value. ###Code # aud_score_dog_rating.plot(kind='scatter', x='numerator', y='audience_score', alpha=0.2) sb.regplot(data=aud_score_dog_rating, x=aud_score_dog_rating.numerator.astype(int), y='audience_score', x_jitter=0.3, fit_reg=False, scatter_kws={'alpha':0.1}) aud_score_dog_rating.numerator = aud_score_dog_rating.numerator.astype('category') plt.figure(figsize=(20,10)) sb.violinplot(data=aud_score_dog_rating, x='numerator', y='audience_score') plt.figure(figsize=(20,10)) sb.boxplot(data=aud_score_dog_rating, x='numerator', y='audience_score') ###Output _____no_output_____ ###Markdown Question: What is the most favored dog stage? Answer: In general, floofer is the stage most favored by the audience. All those fluffs are really cute indeed! Below I'll plot the count of each dog stage. 
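The counting cell below first flags tweets that have at least one stage set, by summing the boolean stage columns row-wise. The trick in isolation, on invented data:

```python
import pandas as pd

stages = pd.DataFrame({
    "doggo":   [True, False, False],
    "floofer": [False, False, False],
    "pupper":  [False, True, False],
    "puppo":   [False, True, False],
})

# Booleans sum as 0/1, so a row sum > 0 means "at least one stage is set"
has_stage = stages.sum(axis=1) > 0
print(has_stage.tolist())  # [True, True, False]
```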
###Code dog_stages_cols = ['doggo', 'floofer', 'pupper', 'puppo'] has_dog_stage = tweet_dog_info_df[dog_stages_cols].sum(axis=1) > 0 dog_stages_info = tweet_dog_info_df.loc[has_dog_stage][dog_stages_cols] count = dog_stages_info.sum() count = count.sort_values(ascending=False) count.plot(kind='bar') plt.ylabel('count'); plt.xticks(rotation=0); ###Output _____no_output_____ ###Markdown Below I'll plot the distribution of audience score for each dog stage. ###Code from matplotlib.ticker import FuncFormatter dog_stage_audience_scores = dog_stages_info.join(twitter_df) def convert_to_kilo(x, pos): return f'{int(x / 1000)}' +'k' formatter = FuncFormatter(convert_to_kilo) fig = plt.figure(figsize=(15,5)) fig.suptitle('Histogram of Audience Score for each Dog Stage') for i, stage in enumerate(dog_stages_cols, start=1): plt.subplot(1, 4, i) plt.gca().xaxis.set_major_formatter(formatter) sb.histplot(dog_stage_audience_scores.query(stage).audience_score) plt.title(stage[0].upper() + stage[1:]) plt.xlabel('') plt.ylabel('') plt.tight_layout() ###Output _____no_output_____ ###Markdown **Discussion:** Since the distribution is skewed, we'll use the median instead of the mean to reduce the influence of outliers. Below I'll plot the median audience score for each dog stage. 
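The preference for the median over the mean on skewed data is easy to see on a small made-up sample: a single viral outlier drags the mean far upward but leaves the median untouched.

```python
import pandas as pd

scores = pd.Series([8000, 9000, 10000, 11000, 500000])  # one viral outlier

print(scores.mean())    # 107600.0 -- dominated by the outlier
print(scores.median())  # 10000.0  -- unaffected
```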
###Code results = [] for stage in dog_stages_cols: median_score = dog_stage_audience_scores.query(stage).audience_score.median() results.append({'stage': stage, 'median_audience_score': median_score}) result_df = pd.DataFrame(results) result_df = result_df.sort_values('median_audience_score', ascending=False) plt.figure(figsize=(10,6)) sb.barplot(data=result_df, x='stage', y='median_audience_score', color=sb.color_palette()[0]); plt.xlabel('') plt.xticks(ticks=np.arange(4),labels=['Floofer', 'Puppo', 'Doggo', 'Pupper']) plt.ylabel('') plt.title('Median Audience Score for each Dog Stage'); ###Output _____no_output_____ ###Markdown Question: Which one does the public audience favor more, tweets with videos or photos? Answer: Video has a higher audience score in general. The median is again used because of the highly skewed distribution of `audience_score` for each value of `is_video`. Below I'll create the `is_video` column. ###Code twitter_df['is_video'] = twitter_df.expanded_urls.str.contains('video') twitter_df.head() ###Output _____no_output_____ ###Markdown Below I'll plot the distribution of audience score for tweets with photo and video. ###Code g = sb.FacetGrid(data=twitter_df, col='is_video', sharey=False, height=5, aspect=1.2) plt.suptitle('Histogram of Audience Score for Tweets with Photo vs Video') g.map(sb.histplot, 'audience_score', common_norm=False) axes = g.axes.flatten() axes[0].set_title('Photo') axes[1].set_title('Video'); g.set_xlabels('') for ax in axes: ax.xaxis.set_major_formatter(formatter) ###Output _____no_output_____ ###Markdown **Discussion:** Since the distribution is skewed, we'll use the median instead of the mean to reduce the influence of outliers. Below I'll then plot the median audience score of tweets with photos vs videos. 
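One caveat about the `str.contains('video')` approach used above: if any `expanded_urls` value is missing, `str.contains` propagates `NaN` instead of returning a boolean. A small sketch of the safer form with `na=False` (the URLs are invented):

```python
import pandas as pd

urls = pd.Series([
    "https://twitter.com/dog_rates/status/1/video/1",
    "https://twitter.com/dog_rates/status/2/photo/1",
    None,  # a tweet with no expanded URL
])

# na=False makes missing URLs count as "not a video" rather than NaN
is_video = urls.str.contains("video", na=False)
print(is_video.tolist())  # [True, False, False]
```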
###Code median_score_is_video = twitter_df.groupby('is_video').audience_score.median() median_score_is_video plt.figure(figsize=(10,6)) median_score_is_video.plot(kind='bar') plt.xticks(rotation=0, ticks=range(2), labels=['Photo', 'Video']) plt.xlabel('') plt.title('Median Audience Score for Tweets with Photo vs Video'); ###Output _____no_output_____ ###Markdown Gather ###Code # The WeRateDogs Twitter archive import pandas as pd import requests import tweepy import json from timeit import default_timer as timer import matplotlib.pyplot as plt import numpy as np import re df_archived = pd.read_csv('twitter_archive_enhanced.csv') # The tweets image predictions url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open('image_predictions.tsv', mode='wb') as file: file.write(response.content) file.close() df_image = pd.read_csv('image_predictions.tsv', sep='\t') # Tweets' JSON data from the Twitter API (credentials redacted -- substitute your own keys) consumer_key = 'YOUR_CONSUMER_KEY' consumer_secret = 'YOUR_CONSUMER_SECRET' access_token = 'YOUR_ACCESS_TOKEN' access_secret = 'YOUR_ACCESS_SECRET' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) # Create the tweet_json.txt start = timer() with open('tweet_json.txt', mode='w') as file: for id in df_archived.tweet_id: try: tweet = api.get_status(id, tweet_mode = 'extended') json.dump(tweet._json, file) file.write('\n') except tweepy.TweepError: pass end = timer() print(id, end) file.close() # A sample look at the data in the dictionary data_dict.get('retweet_count'), data_dict.get('favorite_count'), data_dict.get('retweeted'), data_dict.get('id') sum_dict = { "id" : [], "retweet_count" : [], "favorite_count" : [], "retweeted" : [], } with open
('tweet_json.txt') as json_file: line = json_file.readline() data_dict = json.loads(line) while line != '': sum_dict['id'].append(data_dict['id']) sum_dict['retweet_count'].append(data_dict['retweet_count']) sum_dict['favorite_count'].append(data_dict['favorite_count']) sum_dict['retweeted'].append(data_dict['retweeted']) line = json_file.readline() if line !='': data_dict = json.loads(line) json_file.close() df = pd.DataFrame(sum_dict) ###Output _____no_output_____ ###Markdown Assess Visually assessing data ###Code df df_archived df_image ###Output _____no_output_____ ###Markdown Visually assessing data quality and tidiness issues found: - Quality issue 1(Q1) All values in the 'retweeted' column of the `df` table are false, so the column can be dropped. - Q2 In the `df_archived` table, the values of the 'rating_denominator' column should always be 10; check the rows where the denominator is not 10 and see if they are correct. In addition, taking `row 46` as an example, many decimal ratings are not extracted correctly. - Q3 In the `df_archived` table, the 'retweeted_status_user_id', 'retweeted_status_timestamp', 'in_reply_to_user_id' and 'expanded_urls' columns can be derived from the remaining columns, and can therefore all be dropped. - Q4 In the `df_image` table, the 'jpg_url' and 'img_num' columns are redundant given the result of the image predictions; as only the image predictions are necessary for this analysis, the two columns may be dropped. - Q5 In the `df_archived` table, the name column has many invalid values like 'a', 'an', 'the'. - Q6 In all three tables, the tweet_id column should have the same name and the same datatype. - Tidiness issue 1(T1) In the `df_archived` table, the last 4 'dog stage' columns can be merged into one column, as they represent a single variable. 
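As a preview of tidiness issue T1, here is one way to collapse four boolean stage columns into a single column on a toy frame. The actual cleaning later in this notebook uses `melt`; this sketch (with invented values) just shows the idea, including a 'multiple' label for rows that set more than one stage:

```python
import pandas as pd

stages = pd.DataFrame({
    "doggo":   [True,  False, False, True],
    "floofer": [False, False, False, False],
    "pupper":  [False, True,  False, True],
    "puppo":   [False, False, False, False],
})

n_flags = stages.sum(axis=1)
# idxmax returns the (first) flagged column name; rows with no flag become NaN
dog_stage = stages.idxmax(axis=1).where(n_flags > 0)
dog_stage[n_flags > 1] = "multiple"

print(dog_stage.tolist())  # ['doggo', 'pupper', nan, 'multiple']
```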
Programmatically assessing data ###Code df.info() df_archived.info() df_image.info() df_image.describe() df_archived.duplicated().sum() df.duplicated().sum() df_image.duplicated().sum() ###Output _____no_output_____ ###Markdown Programmatically assessing data quality and tidiness issues found: - Q7 In the `df_archived` table, some of the tweets are retweets, which may be dropped, because they add no new data to the analysis. - Q8 In the `df_archived` table, the datatype of each column should be reflective of the values present, like dates, ratings, etc. - T2 In the `df_image` table, the columns from 'p1' all the way to 'p3_dog' can be integrated into one column, with conditions such as only keeping predictions classified as a dog with confidence above a certain threshold. - T3 The three tables can be merged into one, based on the common 'tweet_id' column. Clean ###Code # Make copies of the three tables df_clean = df.copy() df_archived_clean = df_archived.copy() df_image_clean = df_image.copy() ###Output _____no_output_____ ###Markdown quality issue 1 define: drop the 'retweeted' column in the `df` table ###Code #code df_clean.drop(axis=1, columns='retweeted', inplace=True) #test df_clean.head() ###Output _____no_output_____ ###Markdown quality issue 2 define: fix the rows in the `df_archived` table where the 'rating_denominator' value is not 10 and, if possible, also fix the 'rating_numerator' column; drop the 'rating_denominator' column after cleaning. In addition, many decimal ratings should also be fixed. 
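The decimal-rating problem noted above comes down to the extraction pattern: a numerator regex that only matches integers silently truncates '13.5/10' to '5/10'. A minimal illustration with an invented tweet text:

```python
import re

text = "This is Bella. She deserves 13.5/10"

naive = re.search(r"(\d+)/(\d+)", text)
print(naive.group(0))  # '5/10' -- the decimal part is dropped

fixed = re.search(r"(\d+(?:\.\d+)?)/(\d+)", text)
print(fixed.group(1))  # '13.5'
```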
###Code #code df_archived_clean.rating_denominator.value_counts() df_archived_clean.rating_numerator.value_counts() #consider these rows lst = df_archived_clean.query('rating_denominator != 10').index lst = list(lst) # ask pandas to display the full text pd.options.display.max_colwidth = 1000 len(lst) # 1 df_archived_clean.loc[lst[0], ['text']] # rating should be 13/10 df_archived_clean.loc[lst[0], ['rating_denominator']] = 10 df_archived_clean.loc[lst[0], ['rating_numerator']] = 13 #2 df_archived_clean.loc[lst[1], ['text']] # no rate is given, the rate can be set to 12/10, since this is the mode number of rate df_archived_clean.loc[lst[1], ['rating_denominator']] = 10 df_archived_clean.loc[lst[1], ['rating_numerator']] = 12 #3 df_archived_clean.loc[lst[2], ['text']] # set to 12/10 as the original rate was not valid df_archived_clean.loc[lst[2], ['rating_denominator']] = 10 df_archived_clean.loc[lst[2], ['rating_numerator']] = 12 #4 df_archived_clean.loc[lst[3], ['text']] # no rate is given, the rate can be set to 12/10, since this is the mode number of rate df_archived_clean.loc[lst[3], ['rating_denominator']] = 10 df_archived_clean.loc[lst[3], ['rating_numerator']] = 12 #5 df_archived_clean.loc[lst[4], ['text']] df_archived_clean.loc[lst[4], ['rating_denominator']] = 10 df_archived_clean.loc[lst[4], ['rating_numerator']] = 14 #6 df_archived_clean.loc[lst[5], ['text']] # set to 12/10 as the original rate was not valid df_archived_clean.loc[lst[5], ['rating_denominator']] = 10 df_archived_clean.loc[lst[5], ['rating_numerator']] = 12 #7 df_archived_clean.loc[lst[6], ['text']] df_archived_clean.loc[lst[6], ['rating_denominator']] = 10 df_archived_clean.loc[lst[6], ['rating_numerator']] = 14 #8 df_archived_clean.loc[lst[7], ['text']] # set to 12/10 as the original rate was not valid df_archived_clean.loc[lst[7], ['rating_denominator']] = 10 df_archived_clean.loc[lst[7], ['rating_numerator']] = 12 #9 df_archived_clean.loc[lst[8], ['text']] df_archived_clean.loc[lst[8], 
['rating_denominator']] = 10 df_archived_clean.loc[lst[8], ['rating_numerator']] = 13 #10 df_archived_clean.loc[lst[9], ['text']] df_archived_clean.loc[lst[9], ['rating_denominator']] = 10 df_archived_clean.loc[lst[9], ['rating_numerator']] = 11 #11 df_archived_clean.loc[lst[10], ['text']] # set to 12/10 as the original rate was not valid df_archived_clean.loc[lst[10], ['rating_denominator']] = 10 df_archived_clean.loc[lst[10], ['rating_numerator']] = 12 #12 df_archived_clean.loc[lst[11], ['text']] # set to 12/10 as the original rate was not valid df_archived_clean.loc[lst[11], ['rating_denominator']] = 10 df_archived_clean.loc[lst[11], ['rating_numerator']] = 12 #13 df_archived_clean.loc[lst[12], ['text']] # set to 12/10 as the original rate was not valid df_archived_clean.loc[lst[12], ['rating_denominator']] = 10 df_archived_clean.loc[lst[12], ['rating_numerator']] = 12 #14 df_archived_clean.loc[lst[13], ['text']] # set to 12/10 as the original rate was not valid df_archived_clean.loc[lst[13], ['rating_denominator']] = 10 df_archived_clean.loc[lst[13], ['rating_numerator']] = 12 #15 df_archived_clean.loc[lst[14], ['text']] # set to 12/10 as the original rate was not valid df_archived_clean.loc[lst[14], ['rating_denominator']] = 10 df_archived_clean.loc[lst[14], ['rating_numerator']] = 12 #16 df_archived_clean.loc[lst[15], ['text']] # set to 12/10 as the original rate was not valid df_archived_clean.loc[lst[15], ['rating_denominator']] = 10 df_archived_clean.loc[lst[15], ['rating_numerator']] = 12 #17 df_archived_clean.loc[lst[16], ['text']] # set to 12/10 as the original rate was not valid df_archived_clean.loc[lst[16], ['rating_denominator']] = 10 df_archived_clean.loc[lst[16], ['rating_numerator']] = 12 #18 df_archived_clean.loc[lst[17], ['text']] # set to 12/10 as the original rate was not valid df_archived_clean.loc[lst[17], ['rating_denominator']] = 10 df_archived_clean.loc[lst[17], ['rating_numerator']] = 12 #19 df_archived_clean.loc[lst[18], ['text']] 
df_archived_clean.loc[lst[18], ['rating_denominator']] = 10 df_archived_clean.loc[lst[18], ['rating_numerator']] = 10 #20 df_archived_clean.loc[lst[19], ['text']] # set to 12/10 as the original rate was not valid df_archived_clean.loc[lst[19], ['rating_denominator']] = 10 df_archived_clean.loc[lst[19], ['rating_numerator']] = 12 #21 df_archived_clean.loc[lst[20], ['text']] # set to 12/10 as the original rate was not valid df_archived_clean.loc[lst[20], ['rating_denominator']] = 10 df_archived_clean.loc[lst[20], ['rating_numerator']] = 12 #22 df_archived_clean.loc[lst[21], ['text']] # set to 12/10 as the original rate was not valid df_archived_clean.loc[lst[21], ['rating_denominator']] = 10 df_archived_clean.loc[lst[21], ['rating_numerator']] = 12 #23 df_archived_clean.loc[lst[22], ['text']] df_archived_clean.loc[lst[22], ['rating_denominator']] = 10 df_archived_clean.loc[lst[22], ['rating_numerator']] = 9 # to clean decimal ratings: get the indices of all ratings that contain a decimal number by using a regular expression. lst = list(df_archived_clean[df_archived_clean.text.str.contains(r"(\d+\.\d*\/\d+)")][['text', 'rating_numerator']].index) len(lst) #1 df_archived_clean.loc[lst[0], ['text']] df_archived_clean.loc[lst[0], ['rating_numerator']] = 13.5 #2 df_archived_clean.loc[lst[1], ['text']] df_archived_clean.loc[lst[1], ['rating_numerator']] = 9.75 #3 df_archived_clean.loc[lst[2], ['text']] df_archived_clean.loc[lst[2], ['rating_numerator']] = 9.75 #4 df_archived_clean.loc[lst[3], ['text']] df_archived_clean.loc[lst[3], ['rating_numerator']] = 11.27 #5 df_archived_clean.loc[lst[4], ['text']] df_archived_clean.loc[lst[4], ['rating_numerator']] = 9.5 #6 df_archived_clean.loc[lst[5], ['text']] df_archived_clean.loc[lst[5], ['rating_numerator']] = 11.26 #test df_archived_clean.rating_denominator.value_counts() # all denominators are 10 now. 
df_archived_clean[df_archived_clean.text.str.contains(r"(\d+\.\d*\/\d+)")][['text', 'rating_numerator']] # all decimal ratings have been fixed now. #since all denominators are now 10, the column is of no use and can be dropped. df_archived_clean.drop(axis=1, columns='rating_denominator', inplace=True) ###Output _____no_output_____ ###Markdown quality issue 3 define: drop the 'retweeted_status_user_id', 'retweeted_status_timestamp', 'in_reply_to_user_id' and 'expanded_urls' columns in `df_archived`. ###Code #code df_archived_clean.drop(axis=1, columns=['retweeted_status_user_id', 'retweeted_status_timestamp', 'in_reply_to_user_id', 'expanded_urls'], inplace=True) #test df_archived_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 12 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 rating_numerator 2356 non-null float64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(3), int64(1), object(8) memory usage: 221.0+ KB ###Markdown quality issue 4 define: drop the 'jpg_url' and 'img_num' columns in the `df_image` table. 
###Code #code df_image_clean.drop(axis=1, columns=['jpg_url', 'img_num'], inplace=True) #test df_image_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 10 columns): tweet_id 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(1), object(3) memory usage: 119.6+ KB ###Markdown quality issue 5 define: fix the 'name' column in `df_archived`, as it contains many invalid values like 'the', 'a', 'an'. ###Code #code # replace those names that are invalid. df_archived_clean.name = df_archived_clean.name.replace(['the', 'a', 'an', 'None'], np.nan) # use a regular expression to extract the word after 'named ' in each row df_archived_clean['name_to_add'] = df_archived_clean.text.str.extract(r'(?<=named)( [A-Z]\w+)') # set the condition as where the 'name_to_add' column is not null idx = (df_archived_clean['name_to_add'].isnull() == False) #copy the extracted name into the 'name' column lst = list(df_archived_clean[idx].index) for i in lst: df_archived_clean.loc[i, 'name'] = df_archived_clean.loc[i, 'name_to_add'] df_archived_clean.drop(axis=1, columns='name_to_add', inplace=True) #test df_archived_clean.name ###Output _____no_output_____ ###Markdown * The reason I address this tidiness issue after the quality issues is that there are many redundant columns to drop from the dataframe first, which makes the tidiness cleaning much easier to code. 
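For reference, the lookbehind pattern used in the name fix above, `(?<=named)( [A-Z]\w+)`, captures a capitalized word only when it directly follows 'named'. A small demonstration on invented tweet texts:

```python
import pandas as pd

texts = pd.Series([
    "This is a pupper named Charlie. 12/10",
    "Here is a good boy. 13/10",  # no name given
])

# str.extract returns one column per capture group; strip the leading space
names = texts.str.extract(r"(?<=named)( [A-Z]\w+)")[0].str.strip()
print(names.tolist())  # ['Charlie', nan]
```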
quality issue 6 define: change the datatype of the 'tweet_id'/'id' columns into string in all 3 tables, and rename the one in `df` ###Code #code df_clean.rename(columns={'id': 'tweet_id'}, inplace=True) #datatype conversion df_clean.tweet_id = df_clean.tweet_id.astype(str) df_archived_clean.tweet_id = df_archived_clean.tweet_id.astype(str) df_image_clean.tweet_id = df_image_clean.tweet_id.astype(str) #test df_clean.info() df_archived_clean.info() df_image_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 10 columns): tweet_id 2075 non-null object p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), object(4) memory usage: 119.6+ KB ###Markdown tidiness issue 1 define: merge the 4 'dog stage' columns in the 'df_archived' table, and for those that have multiple 'dog stage' values in the original data set, use 'multiple' instead ###Code df_archived_clean.columns lst = list(df_archived_clean.columns) lst.remove('doggo') lst.remove('floofer') lst.remove('pupper') lst.remove('puppo') #code # use the melt method to create a new column holding the value of 'dog stage' df_temp_dog_stage = df_archived_clean.melt(id_vars=lst, var_name='dog', value_name='dog_stage') #drop those rows that have no dog stage value df_temp_dog_stage.drop(axis=0, index=df_temp_dog_stage.query('dog_stage == "None"').index, inplace=True) # make the df_temp_dog_stage table concise lst.remove('tweet_id') lst.append('dog') df_temp_dog_stage.drop(columns=lst, inplace=True) df_temp_dog_stage.head() df_archived_clean.head() #get the tweet ids of the duplicated tweets, i.e. tweets which have more than one 'dog stage' value lst = list(df_temp_dog_stage[df_temp_dog_stage.tweet_id.duplicated() == True].tweet_id.values) 
df_archived_clean = df_archived_clean.merge(df_temp_dog_stage, how='left') # if tweets have more than one 'dog stage' value, they are labelled as 'multiple' here. def changeDogStage(x): if x.iloc[0] in lst: return 'multiple' return x.iloc[12] df_archived_clean.loc[:,'dog_stage'] = df_archived_clean.apply(changeDogStage, axis=1) df_archived_clean.dog_stage.value_counts() df_archived_clean.drop(axis=1, columns=['doggo', 'pupper', 'puppo', 'floofer'], inplace=True) #test df_archived_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2370 entries, 0 to 2369 Data columns (total 9 columns): tweet_id 2370 non-null object in_reply_to_status_id 79 non-null float64 timestamp 2370 non-null object source 2370 non-null object text 2370 non-null object retweeted_status_id 183 non-null float64 rating_numerator 2370 non-null float64 name 1571 non-null object dog_stage 394 non-null object dtypes: float64(3), object(6) memory usage: 185.2+ KB ###Markdown quality issue 7 define: drop those rows that are retweets in the 'df_archived' table. ###Code #code df_archived_clean.drop(axis=0, index=df_archived_clean[df_archived_clean.retweeted_status_id.isnull() == False].index, inplace=True) #test df_archived_clean.query('retweeted_status_id != "NaN"') #therefore, the 'retweeted_status_id' column is of no use now, which can be dropped df_archived_clean.drop(axis=1, columns='retweeted_status_id', inplace=True) ###Output _____no_output_____ ###Markdown quality issue 8 define: change the datatypes of different columns in the `df_archived` table. 
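One payoff of the `category` conversion in the next cell: a low-cardinality string column stored as `category` keeps small integer codes plus one lookup table, instead of a full Python string per row. A quick comparison on synthetic data:

```python
import pandas as pd

stages = pd.Series(["pupper", "doggo", "puppo", "floofer"] * 5000)  # object dtype

as_object = stages.memory_usage(deep=True)
as_category = stages.astype("category").memory_usage(deep=True)

print(as_object, as_category)
assert as_category < as_object  # category is much smaller here
```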
###Code #code df_archived_clean.info() df_archived_clean.timestamp = pd.to_datetime(df_archived_clean.timestamp) df_archived_clean.dog_stage = df_archived_clean.dog_stage.astype('category') #test df_archived_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2187 entries, 0 to 2369 Data columns (total 8 columns): tweet_id 2187 non-null object in_reply_to_status_id 79 non-null float64 timestamp 2187 non-null datetime64[ns, UTC] source 2187 non-null object text 2187 non-null object rating_numerator 2187 non-null float64 name 1454 non-null object dog_stage 356 non-null category dtypes: category(1), datetime64[ns, UTC](1), float64(2), object(4) memory usage: 139.0+ KB ###Markdown tidiness issue 2 define: integrate the columns from 'p1' to 'p3_dog' into 1 column in the `df_image` table ###Code #code df_image_clean.head() df_image_clean.describe() df_image_clean.info() #create a list of 'df_image''s columns to use for the 'apply' and 'drop' method below lst = list(df_image_clean.columns) lst.remove('tweet_id') #this is a sample look at a single x variable below in the 'integrateColumns' function. df_image_clean[lst].iloc[0] #return one of the image predictions if certain conditions are satisfied def integrateColumns(x): # return p1 if p1_dog == true and p1_conf >= 0.3 if x.iloc[2] == True and x.iloc[1] >= 0.3: return x.iloc[0] # return p2 if p2_dog == true and p2_conf >= 0.2 if x.iloc[5] == True and x.iloc[4] >= 0.2: return x.iloc[3] # return p3 if p3_dog == true and p3_conf >= 0.1 if x.iloc[8] == True and x.iloc[7] >= 0.1: return x.iloc[6] # return NaN if none of the image predictions were deemed to succeed return np.nan #create a new column using the above function. df_image_clean['img_predicitons'] = df_image_clean[lst].apply(integrateColumns, axis=1) #drop those image predictions columns that are of no use now. 
df_image_clean.drop(axis=1, columns=lst, inplace=True) #test df_image_clean.head() #tidiness issue 3 #define: merge the 3 tables into 1 #code df_image_clean.head() df_clean.head() df_archived_clean.head() df_archived_clean = df_archived_clean.merge(df_clean, how='left') df_archived_clean = df_archived_clean.merge(df_image_clean, how='left') df_archived_clean.head() twitter_archive_master = df_archived_clean ###Output _____no_output_____ ###Markdown Check and Store ###Code #since there are null values in the two 'count' columns, datatypes have to be float. twitter_archive_master.info() twitter_archive_master.to_csv('twitter_archive_master.csv') ###Output _____no_output_____ ###Markdown Analyze and Visualize Insight 1 - It seems that the golden retriever is the most popular breed among dogs, or alternatively, the golden retriever has a higher prediction rate in the original image prediction work. ###Code twitter_archive_master.img_predicitons.value_counts() ###Output _____no_output_____ ###Markdown Insight 2 - pupper, known as 'a small doggo', was mentioned the most of the 4 dog stages; perhaps people were more inclined to share their cute puppers than the other, more mature dogs. ###Code twitter_archive_master.dog_stage.value_counts() ###Output _____no_output_____ ###Markdown Insight 3 & Visualization 1 - The tweets that have a particularly low rating for their dogs (meaning rating_numerator < 10 here) do not have many favorites and retweets. ###Code df_lowRate = twitter_archive_master.query('rating_numerator < 10') df_lowRate.retweet_count.hist(alpha = 0.8, label = 'low_rating numerator') twitter_archive_master.retweet_count.hist(alpha = 0.4, label= 'all') plt.xlabel('retweets count') plt.ylabel('frequency') plt.title('Low rating\'s retweets VS. 
All rating\'s retweets') plt.savefig('123.png') plt.legend(); df_lowRate.favorite_count.hist(alpha = 0.8, label = 'low_rating numerator') twitter_archive_master.favorite_count.hist(alpha = 0.4, label= 'all') plt.xlabel('favorites count') plt.ylabel('frequency') plt.title('Low rating\'s favs VS. All rating\'s favs') plt.legend(); ###Output _____no_output_____ ###Markdown - As the two histograms above show, the favorites and retweets of the low-rating tweets are significantly lower than those of the other tweets. Insight 4 & Visualization 2 - There is a positive correlation between the favorites count and the retweets count. Generally, a tweet with a higher favorites count will also have a higher retweets count. ###Code plt.scatter(x=twitter_archive_master.retweet_count, y=twitter_archive_master.favorite_count, c='gold',alpha=0.5) plt.xlabel('retweets count') plt.ylabel('favorites count') plt.title('Retweets VS. Favorites'); ###Output _____no_output_____ ###Markdown Project: Wrangle and Analyze Data Introduction Real-world data rarely comes clean. Using Python and its libraries, you will gather data from a variety of sources and in a variety of formats, assess its quality and tidiness, then clean it. This is called data wrangling. You will document your wrangling efforts in a Jupyter Notebook, plus showcase them through analyses and visualizations using Python (and its libraries) and/or SQL. The dataset that you will be wrangling (and analyzing and visualizing) is the tweet archive of Twitter user [@dog_rates](https://twitter.com/dog_rates), also known as [WeRateDogs](https://en.wikipedia.org/wiki/WeRateDogs). WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. These ratings almost always have a denominator of 10. The numerators, though? Almost always greater than 10. 11/10, 12/10, 13/10, etc. Why? Because "[they're good dogs Brent](http://knowyourmeme.com/memes/theyre-good-dogs-brent)." 
WeRateDogs has over 4 million followers and has received international media coverage. WeRateDogs [downloaded their Twitter archive](https://support.twitter.com/articles/20170160) and sent it to Udacity via email exclusively for you to use in this project. This archive contains basic tweet data (tweet ID, timestamp, text, etc.) for all 5000+ of their tweets as they stood on August 1, 2017. More on this soon. What Software Do I Need? The entirety of this project can be completed inside the Udacity classroom on the **Project Workspace: Complete and Submit Project** page using the Jupyter Notebook provided there. (Note: This Project Workspace may not be available in all versions of this project, in which case you should follow the directions below.) If you want to work outside of the Udacity classroom, the following software requirements apply:* You need to be able to work in a Jupyter Notebook on your computer. Please revisit our Jupyter Notebook and Anaconda tutorials earlier in the Nanodegree program for installation instructions.* The following packages (libraries) need to be installed. You can install these packages via conda or pip. Please revisit our Anaconda tutorial earlier in the Nanodegree program for package installation instructions. * pandas * NumPy * requests * tweepy * json* You need to be able to create written documents that contain images and you need to be able to export these documents as PDF files. This task can be done in a Jupyter Notebook, but you might prefer to use a word processor like [Google Docs](https://www.google.com/docs/about/), which is free, or Microsoft Word.* A text editor, like [Sublime](https://www.sublimetext.com/), which is free, will be useful but is not required. Project Motivation Context Your goal: wrangle WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations. The Twitter archive is great, but it only contains very basic tweet information. 
Additional gathering, then assessing and cleaning is required for "Wow!"-worthy analyses and visualizations. The Data Enhanced Twitter Archive The WeRateDogs Twitter archive contains basic tweet data for all 5000+ of their tweets, but not everything. One column the archive does contain though: each tweet's text, which I used to extract rating, dog name, and dog "stage" (i.e. doggo, floofer, pupper, and puppo) to make this Twitter archive "enhanced." Of the 5000+ tweets, I have filtered for tweets with ratings only (there are 2356). I extracted this data programmatically, but I didn't do a very good job. The ratings probably aren't all correct. Same goes for the dog names and probably dog stages (see below for more information on these) too. You'll need to assess and clean these columns if you want to use them for analysis and visualization. The Dogtionary explains the various stages of dog: doggo, pupper, puppo, and floof(er) (via the WeRateDogs book on Amazon). Additional Data via the Twitter API Back to the basic-ness of Twitter archives: retweet count and favorite count are two of the notable column omissions. Fortunately, this additional data can be gathered by anyone from Twitter's API. Well, "anyone" who has access to data for the 3000 most recent tweets, at least. But you, because you have the WeRateDogs Twitter archive and specifically the tweet IDs within it, can gather this data for all 5000+. And guess what? You're going to query Twitter's API to gather this valuable data. Image Predictions File One more cool thing: I ran every image in the WeRateDogs Twitter archive through a [neural network](https://www.youtube.com/watch?v=2-Ol7ZB0MmU) that can classify breeds of dogs*. 
The results: a table full of image predictions (the top three only) alongside each tweet ID, image URL, and the image number that corresponded to the most confident prediction (numbered 1 to 4 since tweets can have up to four images). Tweet image prediction data So for the last row in that table: * tweet_id is the last part of the tweet URL after "status/" → https://twitter.com/dog_rates/status/889531135344209921 * p1 is the algorithm's #1 prediction for the image in the tweet → golden retriever * p1_conf is how confident the algorithm is in its #1 prediction → 95% * p1_dog is whether or not the #1 prediction is a breed of dog → TRUE * p2 is the algorithm's second most likely prediction → Labrador retriever * p2_conf is how confident the algorithm is in its #2 prediction → 1% * p2_dog is whether or not the #2 prediction is a breed of dog → TRUE * etc. And the #1 prediction for the image in that tweet was spot on: A golden retriever named Stuart. So that's all fun and good. But all of this additional data will need to be gathered, assessed, and cleaned. This is where you come in. Key Points Key points to keep in mind when data wrangling for this project: You only want original ratings (no retweets) that have images. Though there are 5000+ tweets in the dataset, not all are dog ratings and some are retweets. Assessing and cleaning the entire dataset completely would require a lot of time, and is not necessary to practice and demonstrate your skills in data wrangling. Therefore, the requirements of this project are only to assess and clean at least 8 quality issues and at least 2 tidiness issues in this dataset. Cleaning includes merging individual pieces of data according to the rules of tidy data. The fact that the rating numerators are greater than the denominators does not need to be cleaned. This unique rating system is a big part of the popularity of WeRateDogs. You do not need to gather the tweets beyond August 1st, 2017. 
You can, but note that you won't be able to gather the image predictions for these tweets since you don't have access to the algorithm used. Project Details Your tasks in this project are as follows: Data wrangling, which consists of: Gathering data (downloadable file in the Resources tab in the leftmost panel of your classroom and linked in step 1 below). Assessing data Cleaning data Storing, analyzing, and visualizing your wrangled data Reporting on 1) your data wrangling efforts and 2) your data analyses and visualizations ###Code # Import statements import pandas as pd import numpy as np import requests import tweepy import os import json import time import re import matplotlib.pyplot as plt import warnings from IPython.display import Image from functools import reduce import seaborn as sns import datetime from jupyterthemes import jtplot jtplot.style(theme='onedork') %matplotlib inline ###Output _____no_output_____ ###Markdown Just Installing jupyterthemes ###Code !pip install jupyterthemes==0.16.1 ###Output Collecting jupyterthemes==0.16.1 Downloading https://files.pythonhosted.org/packages/d5/37/65b28bb0ee5fc510054f4427907f4b5e2e0f776ac6f591074a1ca17d366b/jupyterthemes-0.16.1-py2.py3-none-any.whl (6.0MB) Collecting lesscpy>=0.12.0 (from jupyterthemes==0.16.1) Downloading https://files.pythonhosted.org/packages/f8/d2/665cda6614e3556eaeb7553a3a2963624c2e3bc9636777a1bb654b87b027/lesscpy-0.14.0-py2.py3-none-any.whl (46kB) Requirement already satisfied: jupyter-core in /opt/conda/lib/python3.6/site-packages (from jupyterthemes==0.16.1) (4.4.0) Collecting jupyter (from jupyterthemes==0.16.1) Downloading 
[pip dependency download log truncated] ipython 6.5.0 has requirement prompt-toolkit<2.0.0,>=1.0.15, but you'll have prompt-toolkit 3.0.5 which is incompatible. 
Installing collected packages: ply, lesscpy, qtpy, qtconsole, prompt-toolkit, jupyter-console, jupyter, jupyterthemes, widgetsnbextension Found existing installation: prompt-toolkit 1.0.15 Uninstalling prompt-toolkit-1.0.15: Successfully uninstalled prompt-toolkit-1.0.15 Found existing installation: widgetsnbextension 3.1.0 Uninstalling widgetsnbextension-3.1.0: Successfully uninstalled widgetsnbextension-3.1.0 Successfully installed jupyter-1.0.0 jupyter-console-6.1.0 jupyterthemes-0.16.1 lesscpy-0.14.0 ply-3.11 prompt-toolkit-3.0.5 qtconsole-4.7.4 qtpy-1.9.0 widgetsnbextension-3.0.8 ###Markdown Gather ###Code # Open the csv file df_twitter_archive = pd.read_csv('twitter-archive-enhanced-2.csv') df_twitter_archive.head() ###Output _____no_output_____ ###Markdown **Tweet image prediction** ###Code # Download the image prediction file using the link provided to Udacity students url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' image_request = requests.get(url, allow_redirects=True) open('image_predictions.tsv', 'wb').write(image_request.content) # Showing the data in the image predictions file df_image_predictions = pd.read_csv('image_predictions.tsv', sep = '\t') df_image_predictions.head() ###Output _____no_output_____ ###Markdown **Ref: https://stackoverflow.com/questions/28384588/twitter-api-get-tweets-with-specific-id** ###Code # Credentials are hidden to comply with Twitter's API terms and conditions auth = tweepy.OAuthHandler('HIDDEN', 'HIDDEN') auth.set_access_token('HIDDEN', 'HIDDEN') api = tweepy.API(auth, parser = tweepy.parsers.JSONParser(), wait_on_rate_limit = True, wait_on_rate_limit_notify = True) ###Output _____no_output_____ ###Markdown **Twitter API & JSON** ###Code # Download Tweepy status object based on Tweet ID and store in list list_of_tweets = [] # Tweets that can't be found are saved in the list below: 
cant_find_tweets_for_those_ids = [] for tweet_id in df_twitter_archive['tweet_id']: try: list_of_tweets.append(api.get_status(tweet_id)) except Exception as e: cant_find_tweets_for_those_ids.append(tweet_id) # Print how many tweets were gathered and how many could not be found print("The list of tweets", len(list_of_tweets)) print("The list of tweets not found", len(cant_find_tweets_for_those_ids)) # Then in this code block we isolate the json part of each tweepy # status object that we have downloaded and we add them all into a list my_list_of_dicts = [] for each_json_tweet in list_of_tweets: my_list_of_dicts.append(each_json_tweet) # We write this list into a txt file: with open('tweet_json.txt', 'w') as file: file.write(json.dumps(my_list_of_dicts, indent=4)) # Identify information of interest from JSON dictionaries in txt file # and put it in a dataframe called tweet_json my_demo_list = [] with open('tweet_json.txt', encoding='utf-8') as json_file: all_data = json.load(json_file) for each_dictionary in all_data: tweet_id = each_dictionary['id'] whole_tweet = each_dictionary['text'] only_url = whole_tweet[whole_tweet.find('https'):] favorite_count = each_dictionary['favorite_count'] retweet_count = each_dictionary['retweet_count'] followers_count = each_dictionary['user']['followers_count'] friends_count = each_dictionary['user']['friends_count'] whole_source = each_dictionary['source'] only_device = whole_source[whole_source.find('rel="nofollow">') + 15:-4] source = only_device retweeted_status = each_dictionary.get('retweeted_status', 'Original tweet') if retweeted_status == 'Original tweet': url = only_url else: retweeted_status = 'This is a retweet' url = 'This is a retweet' my_demo_list.append({'tweet_id': str(tweet_id), 'favorite_count': int(favorite_count), 'retweet_count': int(retweet_count), 'followers_count': int(followers_count), 'friends_count': int(friends_count), 'url': url, 'source': source, 'retweeted_status': retweeted_status, }) tweet_json = pd.DataFrame(my_demo_list, 
columns = ['tweet_id', 'favorite_count','retweet_count', 'followers_count', 'friends_count','source', 'retweeted_status', 'url']) tweet_json.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 8 columns): tweet_id 2331 non-null object favorite_count 2331 non-null int64 retweet_count 2331 non-null int64 followers_count 2331 non-null int64 friends_count 2331 non-null int64 source 2331 non-null object retweeted_status 2331 non-null object url 2331 non-null object dtypes: int64(4), object(4) memory usage: 145.8+ KB ###Markdown Assessing data * (**Visual assessment**) Each piece of gathered data is displayed in the Jupyter Notebook for visual assessment purposes. ###Code df_twitter_archive df_image_predictions tweet_json ###Output _____no_output_____ ###Markdown * (**Programmatic assessment**) Pandas' functions and/or methods are used to assess the data. ###Code df_twitter_archive.info() df_image_predictions.info() tweet_json.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 8 columns): tweet_id 2331 non-null object favorite_count 2331 non-null int64 retweet_count 2331 non-null int64 followers_count 2331 non-null int64 friends_count 2331 non-null int64 source 2331 non-null object retweeted_status 2331 non-null object url 2331 non-null object dtypes: int64(4), object(4) memory usage: 145.8+ KB ###Markdown **Archive Dataframe Analysis** ###Code df_twitter_archive.rating_numerator.value_counts() print(df_twitter_archive.loc[df_twitter_archive.rating_numerator == 204, 'text']) print(df_twitter_archive.loc[df_twitter_archive.rating_numerator == 143, 'text']) print(df_twitter_archive.loc[df_twitter_archive.rating_numerator == 666, 'text']) print(df_twitter_archive.loc[df_twitter_archive.rating_numerator == 1176, 'text']) print(df_twitter_archive.loc[df_twitter_archive.rating_numerator == 144, 'text']) #print whole text in order to verify numerators and 
denominators #17 dogs print(df_twitter_archive['text'][1120]) #13 dogs print(df_twitter_archive['text'][1634]) #just a tweet to explain actual ratings, this will be ignored when cleaning data print(df_twitter_archive['text'][313]) #no picture, this will be ignored when cleaning data print(df_twitter_archive['text'][189]) #12 dogs print(df_twitter_archive['text'][1779]) df_twitter_archive.rating_denominator.value_counts() print(df_twitter_archive.loc[df_twitter_archive.rating_denominator == 11, 'text']) print(df_twitter_archive.loc[df_twitter_archive.rating_denominator == 2, 'text']) print(df_twitter_archive.loc[df_twitter_archive.rating_denominator == 16, 'text']) print(df_twitter_archive.loc[df_twitter_archive.rating_denominator == 15, 'text']) print(df_twitter_archive.loc[df_twitter_archive.rating_denominator == 7, 'text']) #retweet - it will be deleted when delete all retweets print(df_twitter_archive['text'][784]) #actual rating 14/10 need to change manually print(df_twitter_archive['text'][1068]) #actual rating 10/10 need to change manually print(df_twitter_archive['text'][1662]) #actual rating 9/10 need to change manually print(df_twitter_archive['text'][2335]) #tweet to explain rating print(df_twitter_archive['text'][1663]) #no rating - delete print(df_twitter_archive['text'][342]) #no rating - delete print(df_twitter_archive['text'][516]) df_twitter_archive['name'].value_counts() df_twitter_archive[df_twitter_archive.tweet_id.duplicated()] df_twitter_archive.describe() ###Output _____no_output_____ ###Markdown **Image Dataframe Analysis** ###Code df_image_predictions.sample(5) # This is an image for tweet_id 856282028240666624 Image(url = 'https://pbs.twimg.com/media/C-If9ZwXoAAfDX2.jpg') df_image_predictions.info() df_image_predictions[df_image_predictions.tweet_id.duplicated()] df_image_predictions['p1'].value_counts() df_image_predictions['p2'].value_counts() df_image_predictions['p3'].value_counts() ###Output _____no_output_____ ###Markdown **Twitter 
Counts Dataframe Analysis** ###Code tweet_json.head() tweet_json.info() tweet_json.describe() ###Output _____no_output_____ ###Markdown Clean This section consists of the cleaning portion of the data wrangling process: * Define * Code * Test ###Code # Make a copy of the tables before cleaning df_twitter_archive_clean = df_twitter_archive.copy() df_image_predictions_clean = df_image_predictions.copy() tweet_json_clean = tweet_json.copy() ###Output _____no_output_____ ###Markdown Define 1. Merge the `clean versions` of `df_twitter_archive`, `df_image_predictions`, and `tweet_json` dataframes 2. Create one column for the various dog types: doggo, floofer, pupper, puppo 3. Delete retweets 4. Remove columns no longer needed: in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, and retweeted_status_timestamp 5. Change tweet_id from an integer to a string 6. Change the timestamp to correct datetime format 7. Correct naming issues 8. Standardize dog ratings 9. 
Creating a new dog_breed column using the image prediction data * Merge the clean versions of df_twitter_archive, df_image_predictions, and tweet_json dataframes Correct the dog types**Code** ###Code # Ref: https://stackoverflow.com/questions/44327999/python-pandas-merge-multiple-dataframes/44338256 dfs = pd.concat([df_twitter_archive_clean, df_image_predictions_clean, tweet_json_clean], join='outer', axis=1) dfs.head() dfs.columns ###Output _____no_output_____ ###Markdown **Test** ###Code dfs.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 37 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object tweet_id 2075 non-null float64 jpg_url 2075 non-null object img_num 2075 non-null float64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null object p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null object p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null object tweet_id 2331 non-null object favorite_count 2331 non-null float64 retweet_count 2331 non-null float64 followers_count 2331 non-null float64 friends_count 2331 non-null float64 source 2331 non-null object retweeted_status 2331 non-null object url 2331 non-null object dtypes: float64(13), int64(3), object(21) memory usage: 681.1+ KB ###Markdown * **Code and Test**: Create one column for the various dog types: doggo, floofer, pupper, puppo ###Code # Extract 
the text from the columns into the new dog_type column dfs['dog_type'] = dfs['text'].str.extract('(doggo|floofer|pupper|puppo)') dfs[['dog_type', 'doggo', 'floofer', 'pupper', 'puppo']].sample(5) dfs.head() dfs.columns dfs.dog_type.value_counts() ###Output _____no_output_____ ###Markdown * **Code and Test**: Delete retweets ###Code dfs = dfs[np.isnan(dfs.retweeted_status_id)] # Verify no retweets remain (retweeted_status_id should have no non-null entries) dfs.info() # Remove the following columns: dfs = dfs.drop(['retweeted_status_id', \ 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1) dfs.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 35 columns): tweet_id 2175 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2175 non-null object source 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object tweet_id 1896 non-null float64 jpg_url 1896 non-null object img_num 1896 non-null float64 p1 1896 non-null object p1_conf 1896 non-null float64 p1_dog 1896 non-null object p2 1896 non-null object p2_conf 1896 non-null float64 p2_dog 1896 non-null object p3 1896 non-null object p3_conf 1896 non-null float64 p3_dog 1896 non-null object tweet_id 2150 non-null object favorite_count 2150 non-null float64 retweet_count 2150 non-null float64 followers_count 2150 non-null float64 friends_count 2150 non-null float64 source 2150 non-null object retweeted_status 2150 non-null object url 2150 non-null object dog_type 364 non-null object dtypes: float64(11), int64(3), object(21) memory usage: 611.7+ KB ###Markdown * **Code and Test**: Remove columns no longer needed ###Code dfs.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'source', 'img_num', 
'friends_count', 'source', 'url', 'followers_count'], axis = 1, inplace=True) # Ref: https://stackoverflow.com/questions/14984119/python-pandas-remove-duplicate-columns dfs = dfs.loc[:,~dfs.columns.duplicated()] dfs.columns dfs.info() dfs.drop(['retweeted_status'], axis = 1, inplace=True) dfs.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 24 columns): tweet_id 2175 non-null int64 timestamp 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object jpg_url 1896 non-null object p1 1896 non-null object p1_conf 1896 non-null float64 p1_dog 1896 non-null object p2 1896 non-null object p2_conf 1896 non-null float64 p2_dog 1896 non-null object p3 1896 non-null object p3_conf 1896 non-null float64 p3_dog 1896 non-null object favorite_count 2150 non-null float64 retweet_count 2150 non-null float64 dog_type 364 non-null object dtypes: float64(5), int64(3), object(16) memory usage: 424.8+ KB ###Markdown * **Code and Test**: Change tweet_id from an integer to a string ###Code dfs['tweet_id'] = dfs['tweet_id'].astype(str) dfs.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 24 columns): tweet_id 2175 non-null object timestamp 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object jpg_url 1896 non-null object p1 1896 non-null object p1_conf 1896 non-null float64 p1_dog 1896 non-null object p2 1896 non-null object p2_conf 1896 non-null float64 p2_dog 1896 non-null object p3 1896 non-null 
object p3_conf 1896 non-null float64 p3_dog 1896 non-null object favorite_count 2150 non-null float64 retweet_count 2150 non-null float64 dog_type 364 non-null object dtypes: float64(5), int64(2), object(17) memory usage: 424.8+ KB ###Markdown * **Code and Test**: Timestamps to datetime format ###Code #Remove the time zone from the 'timestamp' column dfs['timestamp'] = dfs['timestamp'].str.slice(start=0, stop=-6) # Change the 'timestamp' column to a datetime object dfs['timestamp'] = pd.to_datetime(dfs['timestamp'], format = "%Y-%m-%d %H:%M:%S") dfs.head(1) ###Output _____no_output_____ ###Markdown * **Code and Test**: Correct naming issues ###Code dfs.name = dfs.name.str.replace('^[a-z]+', 'None') dfs['name'].value_counts() dfs['name'].sample(10) ###Output _____no_output_____ ###Markdown * **Code and Test**: Standardize dog ratings ###Code dfs['rating_numerator'] = dfs['rating_numerator'].astype(float) dfs['rating_denominator'] = dfs['rating_denominator'].astype(float) dfs.info() # For loop to gather all text, indices, and ratings for tweets that contain a decimal in the numerator of the rating ratings_decimals_text = [] ratings_decimals_index = [] ratings_decimals = [] for i, text in dfs['text'].iteritems(): if bool(re.search('\d+\.\d+\/\d+', text)): ratings_decimals_text.append(text) ratings_decimals_index.append(i) ratings_decimals.append(re.search('\d+\.\d+', text).group()) # Print ratings with decimals ratings_decimals_text # Print the indices of the ratings above (have decimal) ratings_decimals_index #Correctly converting the above decimal ratings to float dfs.loc[ratings_decimals_index[0],'rating_numerator'] = float(ratings_decimals[0]) dfs.loc[ratings_decimals_index[1],'rating_numerator'] = float(ratings_decimals[1]) dfs.loc[ratings_decimals_index[2],'rating_numerator'] = float(ratings_decimals[2]) dfs.loc[ratings_decimals_index[3],'rating_numerator'] = float(ratings_decimals[3]) # Testing the indices dfs.loc[40] Image(url = 
'https://pbs.twimg.com/media/CUCQTpEWEAA7EDz.jpg') # Create a new column called rating, and calculate the value with new, standardized ratings dfs['rating'] = dfs['rating_numerator'] / dfs['rating_denominator'] dfs.sample(10) dfs.loc[30] dfs.rating.head() ###Output _____no_output_____ ###Markdown * **Clean and Test**: Creating a new dog_breed column using the image prediction data ###Code dfs['dog_breed'] = 'None' for i, row in dfs.iterrows(): if row.p1_dog: dfs.at[i, 'dog_breed'] = row.p1 elif row.p2_dog and row.rating_numerator >= 10: dfs.at[i, 'dog_breed'] = row.p2 elif row.p3_dog and row.rating_numerator >= 10: dfs.at[i, 'dog_breed'] = row.p3 else: dfs.at[i, 'dog_breed'] = 'None' dfs.dog_breed.value_counts() ###Output _____no_output_____ ###Markdown Storing, Analyzing, and Visualizing Data This section provides an analysis of the data set, and corresponding visualizations to draw valuable conclusions. 1. Visualizing the total number of tweets over time to see whether that number increases, or decreases, over time. 2. Visualizing the retweet counts, and favorite counts comparison over time. 3. Visualizing the most popular dog breed 4. Visualizing the most popular dog names ###Code # Storing the new twitter_dogs df to a new csv file dfs.to_csv('twitter_archive_master.csv', encoding='utf-8', index=False) ###Output _____no_output_____ ###Markdown * **Analyze and Visualize**: Visualizing the total number of tweets over time to see whether that number increases, or decreases, over time. ###Code dfs.timestamp = pd.to_datetime(dfs['timestamp'], format='%Y-%m-%d %H:%M:%S.%f') monthly_tweets = dfs.groupby(pd.Grouper(key = 'timestamp', freq = "M")).count().reset_index() monthly_tweets = monthly_tweets[['timestamp', 'tweet_id']] monthly_tweets.head() monthly_tweets.sum() # Plotting time vs. 
tweets plt.figure(figsize=(10, 10)); plt.xlim([datetime.date(2015, 11, 30), datetime.date(2017, 7, 30)]); plt.xlabel('Year and Month') plt.ylabel('Tweets Count') plt.plot(monthly_tweets.timestamp, monthly_tweets.tweet_id); plt.title('We Rate Dogs Tweets over Time'); ###Output /opt/conda/lib/python3.6/site-packages/matplotlib/font_manager.py:1316: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans (prop.get_family(), self.defaultFamily[fontext])) /opt/conda/lib/python3.6/site-packages/matplotlib/figure.py:1999: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect. warnings.warn("This figure includes Axes that are not compatible " ###Markdown Tweet volume decreased sharply over time, with spikes in activity in early 2016 (January and March), and generally declined from there. * **Analyze and Visualize**: Visualizing the retweet counts and favorite counts comparison over time. ###Code # Scatterplot of retweets vs favorite count sns.lmplot(x="retweet_count", y="favorite_count", data=dfs, size = 5, aspect=1.3, scatter_kws={'alpha':1/5}); plt.title('Favorite Count vs. Retweet Count'); plt.xlabel('Retweet Count'); plt.ylabel('Favorite Count'); ###Output /opt/conda/lib/python3.6/site-packages/matplotlib/font_manager.py:1316: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans (prop.get_family(), self.defaultFamily[fontext])) /opt/conda/lib/python3.6/site-packages/matplotlib/figure.py:1999: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect. warnings.warn("This figure includes Axes that are not compatible " ###Markdown Favorite counts are positively correlated with retweet counts. 
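The positive relationship seen in the scatterplot can also be quantified. A sketch with invented counts (the real notebook would call `.corr()` on the `retweet_count` and `favorite_count` columns of `dfs` itself):

```python
import pandas as pd

# Toy retweet/favorite counts (invented) to show how the visual trend
# can be summarized with a Pearson correlation coefficient.
toy = pd.DataFrame({
    "retweet_count":  [120, 450, 900, 2000, 5200],
    "favorite_count": [400, 1500, 3100, 7000, 16000],
})

r = toy["retweet_count"].corr(toy["favorite_count"])  # Pearson by default
print(round(r, 3))
```

A Pearson's r close to 1 confirms the strong positive linear association the regression line suggests.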
* **Analyze and Visualize**: Visualizing the most popular dog breed ###Code dfs['dog_breed'].value_counts() ###Output _____no_output_____ ###Markdown The most popular dog breed is a golden retriever, with a labrador retriever coming in as the second most popular breed. ###Code # Histogram to visualize dog breeds dog_breed = dfs.groupby('dog_breed').filter(lambda x: len(x) >= 25) dog_breed['dog_breed'].value_counts().plot(kind = 'barh') plt.title('Most Rated Dog Breed') plt.xlabel('Count') plt.ylabel('Breed of dog'); ###Output /opt/conda/lib/python3.6/site-packages/matplotlib/font_manager.py:1316: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans (prop.get_family(), self.defaultFamily[fontext])) /opt/conda/lib/python3.6/site-packages/matplotlib/figure.py:1999: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect. warnings.warn("This figure includes Axes that are not compatible " ###Markdown * **Analyze and Visualize**: Visualizing the most popular dog names ###Code dfs.name.value_counts()[0:7].plot('barh', figsize=(15,8), title='Most Common Dog Names').set_xlabel("Number of Dogs"); dfs.name.value_counts() ###Output _____no_output_____ ###Markdown Gathering data We need to import our libraries first ###Code #Import our libraries import pandas as pd import numpy as np import requests import json import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown Now we start gathering the data ###Code #Read the csv file and put it in a df enhanced_df =pd.read_csv('twitter-archive-enhanced.csv') enhanced_df #Get the images file and load it into a data frame rqst =requests.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv') with open('images.tsv', mode ='wb') as file: file.write(rqst.content) images_df = pd.read_csv('images.tsv', sep='\t') images_df ###Output 
_____no_output_____ ###Markdown The twitterapi.py file ###Code import tweepy from tweepy import OAuthHandler import json from timeit import default_timer as timer # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # These are hidden to comply with Twitter's API terms and conditions consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) # NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES: # df_1 is a DataFrame with the twitter_archive_enhanced.csv file. You may have to # change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv # NOTE TO REVIEWER: this student had mobile verification issues so the following # Twitter API code was sent to this student from a Udacity instructor # Tweet IDs for which to gather additional data via Twitter's API tweet_ids = enhanced_df.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) #Load the data from the json file then make the needed data into a data frame with open('tweet-json.txt') as json_file: raw_data =json_file.read().splitlines() rowslist=[] for i in raw_data: tempjson =json.loads(i) templist =[tempjson['id'], tempjson['retweet_count'], 
tempjson['favorite_count']] rowslist.append(templist) #Creating the DataFrame, it'll hold the data we filtered twitter_data = pd.DataFrame(rowslist) #Labeling columns twitter_data.columns =['tweet_id', 'retweet_count', 'favorite_count'] twitter_data ###Output _____no_output_____ ###Markdown Assessing data Quality issues 1. In the main data frame: Nondescriptive column: We will replace the HTML links in the source column with the plain name of the app used to tweet, since it is easier to read. 2. In the main data frame: Convert the timestamp column to a datetime object 3. In the main data frame: Change rating_numerator to 13 if it was more than 13 4. In the main data frame: Change the rating_denominator to 10 if it was more than 10 5. In the main data frame: Remove all retweets 6. In the main data frame: Remove all tweets that are replies 7. In the twitter_data data frame: Rename favorite_count to likes_count, because Twitter changed that a while back. 8. In the images data frame: Nondescriptive column headers: We will change the column headers for p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf, p3_dog to names that are more descriptive. Tidiness issues 1. In the main data frame: Combine the columns doggo, floofer, pupper, puppo into one column named dog_stage 2. Join the twitter_data and the main data frame (enhanced_df) together. Data cleaning Copying the datasets We will have to copy the datasets before cleaning ###Code enhanced_copy =enhanced_df.copy() twitter_data_copy =twitter_data.copy() images_copy =images_df.copy() ###Output _____no_output_____ ###Markdown Quality issues Source issue Define: The source column holds an HTML anchor tag containing the link for the app that was used to send the tweet. We'll replace it with a name that's easy to read. 
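Rather than enumerating every anchor tag by hand, the display name can be pulled out with a regular expression; a sketch on sample values, assuming the raw `source` strings look like the ones assessed below:

```python
import pandas as pd

# Sample of the raw HTML anchor tags found in the source column
src = pd.Series([
    '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>',
    '<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>',
])

# Capture the text between ">" and "</a>" instead of hard-coding each variant
clean = src.str.extract(r'>([^<]+)<', expand=False)
print(clean.tolist())
```

This scales to new source values without editing a replacement dictionary.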
###Code enhanced_df['source'].value_counts() ###Output _____no_output_____ ###Markdown Code: ###Code #Replace html text in source column enhanced_df =enhanced_df.replace({'<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>': 'Twitter for iPhone', '<a href="http://vine.co" rel="nofollow">Vine - Make a Scene</a>': 'Vine - Make a Scene', '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>': 'Twitter Web Client', '<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>': 'TweetDeck'}, regex =False) ###Output _____no_output_____ ###Markdown Test: ###Code enhanced_df ###Output _____no_output_____ ###Markdown The timestamp issue Define: We will convert the timestamp column to a datetime object so we can manage the time if need be Check the data type of the timestamp column ###Code enhanced_df['timestamp'].dtypes ###Output _____no_output_____ ###Markdown Code: ###Code #Convert timestamp column to a datetime object enhanced_df['timestamp'] = pd.to_datetime(enhanced_df['timestamp']) ###Output _____no_output_____ ###Markdown Test: ###Code #check again enhanced_df['timestamp'].dtypes ###Output _____no_output_____ ###Markdown rating_numerator issue Define: We will change the rating_numerator to 13 if it was more than 13 ###Code #Check value counts of rating_numerator enhanced_df['rating_numerator'].value_counts() ###Output _____no_output_____ ###Markdown Code:[source for the cell below](https://stackoverflow.com/questions/21608228/conditional-replace-pandas) ###Code #Let's fix that enhanced_df['rating_numerator'] = np.where(enhanced_df['rating_numerator'] > 13, 13, enhanced_df['rating_numerator']) ###Output _____no_output_____ ###Markdown Test: ###Code enhanced_df['rating_numerator'].value_counts() ###Output _____no_output_____ ###Markdown rating_denominator issue Define: We will change the rating_denominator to 10 if it was more than 10 Code: ###Code #Check max value of rating_denominator 
enhanced_df['rating_denominator'].max() ###Output _____no_output_____ ###Markdown Code: ###Code #Let's fix that enhanced_df['rating_denominator'] = np.where(enhanced_df['rating_denominator'] > 10, 10, enhanced_df['rating_denominator']) enhanced_df['rating_denominator'] = np.where(enhanced_df['rating_denominator'] < 10, 10, enhanced_df['rating_denominator']) ###Output _____no_output_____ ###Markdown Test: ###Code enhanced_df ###Output _____no_output_____ ###Markdown Replies issue Define: This user has sent a lot of replies; we don't need them, so we will remove them. Code:[Source for the cell below](https://www.delftstack.com/howto/python-pandas/how-to-delete-dataframe-row-in-pandas-based-on-column-value/) ###Code #We'll remove all the replies reply_ids = enhanced_df[enhanced_df['in_reply_to_status_id'] >0 ].index enhanced_df.drop(reply_ids , inplace=True) ###Output _____no_output_____ ###Markdown Test: ###Code enhanced_df ###Output _____no_output_____ ###Markdown Retweets issue Define: There are a lot of entries that are just retweets; we don't need them. Code:[Source for the cell below](https://www.delftstack.com/howto/python-pandas/how-to-delete-dataframe-row-in-pandas-based-on-column-value/) ###Code #We'll remove all the retweets retweet_ids = enhanced_df[enhanced_df['retweeted_status_id'] >0 ].index enhanced_df.drop(retweet_ids , inplace=True) ###Output _____no_output_____ ###Markdown Test: ###Code enhanced_df ###Output _____no_output_____ ###Markdown Column name issue in the twitter_data data frame Define: Here we'll rename favorite_count to likes_count in the twitter_data data frame ###Code #Check the column names twitter_data.columns ###Output _____no_output_____ ###Markdown Code: ###Code #Let's change the name of favorite_count to likes_count twitter_data =twitter_data.rename(columns ={'favorite_count': 'likes_count'}) ###Output _____no_output_____ ###Markdown Test: ###Code twitter_data ###Output _____no_output_____ ###Markdown Column names 
issue in the images data frame Define: Here we'll change some column names to more descriptive names in the images data frame ###Code #Check the column names images_df.columns ###Output _____no_output_____ ###Markdown Code: ###Code #Names change images_df =images_df.rename(columns ={'p1': 'prediction1', 'p1_conf': 'prediction1_confidence', 'p1_dog': 'prediction1_dog', 'p2': 'prediction2', 'p2_conf': 'prediction2_confidence', 'p2_dog': 'prediction2_dog', 'p3': 'prediction3', 'p3_conf': 'prediction3_confidence', 'p3_dog': 'prediction3_dog'}) ###Output _____no_output_____ ###Markdown Test: ###Code images_df ###Output _____no_output_____ ###Markdown Tidiness issues Combining columns Define: We will combine the four dog stage columns into one Code: ###Code #We first replace the 'None' strings with empty strings enhanced_df[['doggo', 'floofer', 'pupper', 'puppo']] = enhanced_df[['doggo', 'floofer', 'pupper', 'puppo']].replace({'None': ''}) #We'll join the 4 columns into one enhanced_df['dog_stage'] = enhanced_df['doggo'] + enhanced_df['floofer'] + enhanced_df['pupper'] + enhanced_df['puppo'] enhanced_df = enhanced_df.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1) enhanced_df['dog_stage'].value_counts() ###Output _____no_output_____ ###Markdown We can see that sometimes we get dogs that are more than one type. 
Let's fix that ###Code enhanced_df.dog_stage = enhanced_df.dog_stage.replace('doggopupper', 'multiple') enhanced_df.dog_stage = enhanced_df.dog_stage.replace('doggopuppo', 'multiple') enhanced_df.dog_stage = enhanced_df.dog_stage.replace('doggofloofer', 'multiple') ###Output _____no_output_____ ###Markdown Test: ###Code #Let's check again enhanced_df['dog_stage'].value_counts() ###Output _____no_output_____ ###Markdown Joining the enhanced_df data frame with twitter_data data frame Define: We will join the twitter_data data frame with the main enhanced_df data frame Code: ###Code combined_data = pd.merge(enhanced_df, twitter_data, on = 'tweet_id', how = 'inner') # Test: combined_data ###Output _____no_output_____ ###Markdown Saving datasets ###Code #save datasets combined_data.to_csv('twitter_archive_master.csv') images_df.to_csv('images_master.csv') ###Output _____no_output_____ ###Markdown Analysis We'll be answering the following questions: 1. Which dogs got the most likes, and what are their ratings? 2. What's the most common dog stage? 3. What's the most common dog name? 4. Which app did the account use most to send its tweets? Which dogs got the most likes, and what are their ratings? Let's look at the top five dogs that got the most likes ###Code #Top 5 most liked dogs combined_data.sort_values(['likes_count'], ascending=[False])[['name', 'rating_numerator', 'rating_denominator', 'dog_stage', 'likes_count']].head() ###Output _____no_output_____ ###Markdown What's the most common dog stage? ###Code #Most common dog stage combined_data['dog_stage'].value_counts() ###Output _____no_output_____ ###Markdown We can see that, of the stages the algorithm was able to identify, pupper was the most common stage with 221 entries. Unfortunately, it wasn't able to identify a stage for 1,761 entries. Which dogs got the most retweets? Retweets are a form of liking a tweet, or at least agreeing with its content. 
Let's look at the dogs that got the most retweets ###Code #The top 5 retweeted tweets combined_data.sort_values(['retweet_count'], ascending=[False])[['name', 'rating_numerator', 'rating_denominator', 'dog_stage', 'retweet_count']].head() ###Output _____no_output_____ ###Markdown Which app do they use most for their tweets? ###Code combined_data['source'].value_counts() sns.set(style="darkgrid") sns.countplot(data =combined_data, y ='source', order =combined_data['source'].value_counts().index) plt.xticks(rotation =360) plt.xlabel('Count', fontsize=14) plt.ylabel('Source', fontsize=14) plt.title('Sources Distribution',fontsize=16) plt.show(); ###Output _____no_output_____ ###Markdown Assess--- Quality`df`- A number of entries are replies and retweets or don't contain images. - nulls represented as 'None' in [name, stages columns]- some *name* values are incorrect (a,an,by,my,O)- *denominator* values that are not 10- *numerator* values that are relatively high- *numerator* values that are supposed to be decimal numbers- Erroneous datatypes (tweet_id, timestamp, stage)`df_images`- missing records (281) compared to `df`- records that don't have 'dog' predictions- entries should have one 'dog' prediction with the highest 'conf' probability `df_counts`- missing records (2333 instead of 2356) compared to `df` Tidiness`df` - stages are split into 4 columns instead of 1.- [in_reply_to_status_id, in_reply_to_user_id, source, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp] are redundant.- `df_counts` and `df_images` to be merged with master `df` Clean ###Code # make copies of all dataframes df_clean = df.copy() images_clean = df_images.copy() counts_clean = df_counts.copy() ###Output _____no_output_____ ###Markdown - `df_counts` to be merged with master `df`- `df_counts`: missing records (2333 instead of 2356) compared to `df` *Merge the 'likes_count,retweet_count' columns to the `df` table, joining on 'tweet_id'*. 
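Before committing to the inner join, `pd.merge` with `indicator=True` can show exactly which tweet IDs would be dropped; a sketch with toy frames (the column names mirror the real ones, the values are made up):

```python
import pandas as pd

# Toy stand-ins for the archive and the counts table
left = pd.DataFrame({"tweet_id": [1, 2, 3], "text": ["a", "b", "c"]})
right = pd.DataFrame({"tweet_id": [1, 3], "retweet_count": [10, 30]})

# An outer merge with indicator=True labels where each row came from
merged = pd.merge(left, right, on="tweet_id", how="outer", indicator=True)

# Rows present only in the archive are the ones an inner join would drop
missing = merged.loc[merged["_merge"] == "left_only", "tweet_id"].tolist()
print(missing)
```

This makes the 23 missing count records inspectable before they silently disappear in the inner join.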
###Code # we merge using an inner join to solve the missing records of df_counts as well df_clean = pd.merge(df_clean, counts_clean, on= 'tweet_id', how = 'inner') # test df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2333 entries, 0 to 2332 Data columns (total 19 columns): tweet_id 2333 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2333 non-null object source 2333 non-null object text 2333 non-null object retweeted_status_id 165 non-null float64 retweeted_status_user_id 165 non-null float64 retweeted_status_timestamp 165 non-null object expanded_urls 2274 non-null object rating_numerator 1946 non-null object rating_denominator 2333 non-null int64 name 2333 non-null object doggo 2333 non-null object floofer 2333 non-null object pupper 2333 non-null object puppo 2333 non-null object likes_count 2333 non-null int64 retweet_count 2333 non-null int64 dtypes: float64(4), int64(4), object(11) memory usage: 364.5+ KB ###Markdown - `df`: A number of entries are replies and retweets or don't contain images. 
*select a subset of df_clean that doesn't contain any retweets or replies, but must contain an image* ###Code df_clean= df_clean[df_clean.retweeted_status_id.isnull() & df_clean.in_reply_to_status_id.isnull() & df_clean.expanded_urls.notnull()] # test, we now have 0 rows that contain any replies or retweets df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2087 entries, 0 to 2332 Data columns (total 19 columns): tweet_id 2087 non-null int64 in_reply_to_status_id 0 non-null float64 in_reply_to_user_id 0 non-null float64 timestamp 2087 non-null object source 2087 non-null object text 2087 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2087 non-null object rating_numerator 1708 non-null object rating_denominator 2087 non-null int64 name 2087 non-null object doggo 2087 non-null object floofer 2087 non-null object pupper 2087 non-null object puppo 2087 non-null object likes_count 2087 non-null int64 retweet_count 2087 non-null int64 dtypes: float64(4), int64(4), object(11) memory usage: 326.1+ KB ###Markdown - `df`: [in_reply_to_status_id, in_reply_to_user_id, source, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp] are redundant. 
*drop these columns using pd.drop* ###Code df_clean = df_clean.drop(columns= ['in_reply_to_status_id', 'in_reply_to_user_id', 'source', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp']) # check for those columns; they no longer exist df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2087 entries, 0 to 2332 Data columns (total 13 columns): tweet_id 2087 non-null int64 timestamp 2087 non-null object text 2087 non-null object expanded_urls 2087 non-null object rating_numerator 1708 non-null object rating_denominator 2087 non-null int64 name 2087 non-null object doggo 2087 non-null object floofer 2087 non-null object pupper 2087 non-null object puppo 2087 non-null object likes_count 2087 non-null int64 retweet_count 2087 non-null int64 dtypes: int64(4), object(9) memory usage: 228.3+ KB ###Markdown - `df`: stages are split into 4 columns instead of 1. *Derive a single 'stage' column by searching each tweet's text for the stage keywords, convert it to a categorical, then drop the four original stage columns* ###Code # create the new 'stage' column df_clean['stage'] = ['doggo' if 'doggo' in i else 'pupper' if 'pupper' in i else 'floofer' if 'floofer' in i else 'floof' if 'floof' in i else 'puppo' if 'puppo' in i else np.nan for i in df_clean['text'].str.lower()] # turn into categorical df_clean['stage'] = df_clean['stage'].astype('category') # drop other columns df_clean.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1, inplace=True) # test: the original stage columns are gone, replaced with one 'stage' column that includes all variants df_clean.info() # confirm 'stage' values df_clean.stage.value_counts() ###Output _____no_output_____ ###Markdown - `df`: nulls represented as 'None' in [name] *replace 'None' values with NaN* ###Code df_clean = df_clean.replace('None', np.NaN) # check Nan df_clean.info() ###Output <class 
'pandas.core.frame.DataFrame'> Int64Index: 2087 entries, 0 to 2332 Data columns (total 10 columns): tweet_id 2087 non-null int64 timestamp 2087 non-null object text 2087 non-null object expanded_urls 2087 non-null object rating_numerator 1708 non-null object rating_denominator 2087 non-null int64 name 1487 non-null object likes_count 2087 non-null int64 retweet_count 2087 non-null int64 stage 398 non-null category dtypes: category(1), int64(4), object(5) memory usage: 165.3+ KB ###Markdown -`df`: 'denominator' values that are not 10 *Take a subset of df_clean where denominator equals 10 only* ###Code df_clean = df_clean[df_clean.rating_denominator == 10] df_clean.rating_denominator.value_counts() ###Output _____no_output_____ ###Markdown -`df`: 'numerator' values that are relatively high *extract correct numerator values using regex* ###Code df_clean['rating_numerator'] = (df_clean.text.str.extract(r'(\d+(\.\d+)?)/\d+')[0]).astype('float') df_clean.rating_numerator.value_counts() # exclude very low and very high values, as they are not correct ratings for a single dog df_clean = df_clean[~((df_clean.rating_numerator > 15) | (df_clean.rating_numerator < 6))] ###Output _____no_output_____ ###Markdown - `df`:some 'name' values are incorrect (a,an,by,my,O,...) 
*take a subset of df_clean where no names are lowercase* ###Code df_clean = df_clean[~((df_clean.name.notnull()) & (df_clean.name.str.islower()))] # confirm changes df_clean[((df_clean.name.notnull()) & (df_clean.name.str.islower()))] ###Output _____no_output_____ ###Markdown - `df`: Erroneous datatypes (tweet_id, timestamp) *use 'astype' and 'to_datetime' to change data types* ###Code df_clean.tweet_id = df_clean.tweet_id.astype('str') # To datetime df_clean.timestamp = pd.to_datetime(df_clean.timestamp) df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1893 entries, 0 to 2332 Data columns (total 10 columns): tweet_id 1893 non-null object timestamp 1893 non-null datetime64[ns, UTC] text 1893 non-null object expanded_urls 1893 non-null object rating_numerator 1893 non-null float64 rating_denominator 1893 non-null int64 name 1348 non-null object likes_count 1893 non-null int64 retweet_count 1893 non-null int64 stage 368 non-null category dtypes: category(1), datetime64[ns, UTC](1), float64(1), int64(3), object(4) memory usage: 149.9+ KB ###Markdown - `df_images`: records that don't have 'dog' predictions *Take a subset of images_clean that has at least one prediction as 'dog'* ###Code images_clean = images_clean[~((images_clean['p1_dog'] == False) & (images_clean['p2_dog'] == False) & (images_clean['p3_dog'] == False))] images_clean ###Output _____no_output_____ ###Markdown - `df_images`: entries should have one 'dog' prediction with the highest 'conf' probability *keep highest True dog prediction only for each entry using subsets of `images_clean` and concat* ###Code # create 3 dfs where each one contains only one prediction and conf columns # based on which prediction is true for each # concat all 3 dfs together so the prediction column contains the highest # True dog prediction df_1 = images_clean[(images_clean.p1_dog == True) ] df_1['prediction'], df_1['conf'] = df_1['p1'], df_1['p1_conf'] df_1 = df_1.drop(columns= ['p1', 
'p1_conf','p1_dog','p2','p2_conf','p2_dog','p3','p3_conf','p3_dog']) df_2 = images_clean[(images_clean.p1_dog == False) & (images_clean.p2_dog == True)] df_2['prediction'], df_2['conf'] = df_2['p2'], df_2['p2_conf'] df_2 = df_2.drop(columns= ['p1', 'p1_conf','p1_dog','p2','p2_conf','p2_dog','p3','p3_conf','p3_dog']) df_3 = images_clean[(images_clean.p1_dog == False) & (images_clean.p2_dog == False) & (images_clean.p3_dog == True)] df_3['prediction'], df_3['conf'] = df_3['p3'], df_3['p3_conf'] df_3 = df_3.drop(columns= ['p1', 'p1_conf','p1_dog','p2','p2_conf','p2_dog','p3','p3_conf','p3_dog']) images_clean = pd.concat([df_1, df_2, df_3], ignore_index=True) # test images_clean # test, number of entries is the same images_clean.info() # change images_clean.tweet_id dtype in order to be able to merge on it images_clean.tweet_id = images_clean.tweet_id.astype('str') img= images_clean[['tweet_id', 'prediction']] # Finally merge image predictions with df_clean so that we have one master df # containing all observations df_clean= pd.merge(df_clean, img, on= 'tweet_id', how= 'left') df_clean.reset_index(drop= True) # check changes df_clean # convert df_clean into csv file df_clean.to_csv('twitter_archive_master.csv', index=False) df = df_clean ###Output _____no_output_____ ###Markdown Exploratory Data Analysis Research Question 1: What are the most popular dog breeds rated by WeRateDogs? And how high are they rated? 
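Popularity and average rating can also be computed in a single pass with named aggregation; a sketch assuming columns called `prediction` and `rating_numerator` as above (toy data, not real archive values):

```python
import pandas as pd

toy = pd.DataFrame({
    "prediction": ["golden_retriever", "pug", "golden_retriever",
                   "pug", "golden_retriever"],
    "rating_numerator": [13.0, 10.0, 12.0, 11.0, 14.0],
})

# Count tweets and average the rating per predicted breed in one call
summary = (toy.groupby("prediction")["rating_numerator"]
              .agg(popularity="count", mean_rating="mean")
              .sort_values("popularity", ascending=False))
print(summary)
```

This avoids building two separate groupby results and concatenating them afterwards.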
###Code # get top ten value counts for predictions (breeds) and plot it top= (df.prediction.value_counts().head(10)) g= top.plot(legend= False, title= 'Popularity by Breed', kind='bar', figsize=(8,8)); g.set_xlabel('Breeds'); g.set_ylabel('Popularity'); ###Output _____no_output_____ ###Markdown Second part of the question ###Code top = df.groupby('prediction').count().tweet_id rating= df.groupby('prediction').mean().rating_numerator top_rated = pd.concat([top,rating], axis=1) # sorting by most popular breeds top_rated.sort_values(by= 'tweet_id', ascending= False) # sorting by highest rated breeds top_rated.sort_values(by= 'rating_numerator', ascending= False).head(20) # tried to get the highest rated dog and it turns out it's a quality error # change it to 11 df[df.prediction == 'clumber'] mask = df.prediction == 'clumber' df.loc[mask, 'rating_numerator'] = 11 ###Output _____no_output_____ ###Markdown Research Question 2: Based on data collected from WeRateDogs, what is the most common dog gender to be owned as a pet? ###Code # use gender_guesser package to guess genders based on dog names import gender_guesser.detector as gender d = gender.Detector() names= list(df.name.dropna()) genders = [] for name in names: genders.append(d.get_gender(name)) genders = pd.Series(genders) genders.value_counts() # calculate ratio of male to female dogs (526+49) / (211+28) ###Output _____no_output_____ ###Markdown Research Question 3: How did WeRateDogs account activity and interaction increase over time? 
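Monthly totals can also be computed by grouping on a monthly `PeriodIndex`, which is equivalent to the `pd.Grouper(freq='M')` call used below; a sketch with synthetic timestamps:

```python
import pandas as pd

toy = pd.DataFrame(
    {"likes_count": [100, 200, 50, 300]},
    index=pd.to_datetime(["2016-01-05", "2016-01-20",
                          "2016-02-10", "2016-02-28"]),
)

# Sum engagement per calendar month
monthly = toy.groupby(toy.index.to_period("M"))["likes_count"].sum()
print(monthly.tolist())
```

Grouping on periods also sidesteps the timezone warning the notebook hits when converting to a period representation during plotting.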
###Code # make a draft df, set datetime as index and group by month of year # plot likes and retweet counts versus time draft = df.set_index(df.timestamp) draft = draft.groupby(pd.Grouper(freq='M')).sum()[['likes_count', 'retweet_count']] g1= draft.plot(legend= True, title= 'Activity over time', kind='line', figsize=(8,8)); g1.set_xlabel('Month') g1.set_ylabel('Count'); ###Output C:\Users\FARESS\Anaconda3\lib\site-packages\pandas\core\arrays\datetimes.py:1269: UserWarning: Converting to PeriodArray/Index representation will drop timezone information. UserWarning, ###Markdown 1. Data Gathering ###Code # import required libraries import pandas as pd import requests # read in csv file twitter-archive-enhanced.csv df_archive = pd.read_csv('data/twitter-archive-enhanced.csv') # read in csv file image-predictions.tsv # this is only option b; it should ideally be read in with the requests library # this project was conducted in China, and a VPN was necessary to download the file from the internet # due to political conferences, this VPN was switched off df_image = pd.read_csv('data/image-predictions.tsv', sep='\t') # read in csv file image-predictions.tsv with requests library # import io #url = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" #s = requests.get(url).content #df_image1 = pd.read_csv(io.StringIO(s.decode('utf-8')), sep='\t') #df_image1.head() # read in json file tweet-json.txt # this project was conducted in China, and a VPN was necessary to download the file via the Twitter API # due to political conferences, this VPN was switched off df_retweet = pd.read_json('data/tweet-json.txt', lines=True) ###Output _____no_output_____ ###Markdown 2. 
Data Assessment 2.1 Visual Assessment ###Code df_archive.head() df_image.head() df_retweet.head() ###Output _____no_output_____ ###Markdown 2.2 Programmatic Assessment ###Code df_archive.info() # assess starting and ending points df_archive.timestamp.sort_values().head() df_archive.timestamp.sort_values().tail() # get unique values of rating_denominator df_archive.rating_denominator.value_counts() df_image.info() df_retweet.info() # figure out if any zero counts are included df_retweet.retweet_count.sort_values().head() ###Output _____no_output_____ ###Markdown 2.3 Data Quality and Tidiness Issues Important note: The issues initially refer to the original dataframes provided by Udacity. As you can see, these three original dataframes have their respective issues. Within the cleaning process, all three initial datasets have been merged into one master dataframe called 'df_archive_image_retweet'. Key Points from 'Project Motivation' and info from 'Project Details'* df_archive contains original ratings and retweets. In fact, only original ratings (no retweets) are required* some of the original ratings don't have images, but we only want ratings with images * tweets beyond Aug. 
1st 2017 don't have image predictions and we don't want to gather them * only concentrate on retweet and favorite count in df_retweet (see project details) Tidiness Issues df_archive* each variable forms a column: rating_numerator and rating_denominator are actually one variable but separated in two columns * each variable forms a column: for dog stage, there are four columns, they should be merged to one column called 'dog_stage' (datatype: category with values: None, doggo, floofer, pupper, puppo) df_retweet* each variable forms a column: Tweet ID appears in two columns: 'id' and 'id_str' Quality Issues df_retweet* Tweet IDs in columns 'id' and 'id_str' are sometimes not the same* Retweet_count for index 290 is 0 (should not be in this table)* 'tweet_id' should be object and not integer df_retweet and df_image* tweet IDs are listed in columns with different names: tweet_id (df_image) and 'id', 'id_str' (df_retweet) df_image* datatype of p1, p2 and p3 is object and not category df_archive * data type of column 'timestamp' is object and not datetime* Not every value of the column 'rating denominator' has value 10 * dogs sometimes are in two dog stages 3. Data Cleaning Make copies of the dataframes ###Code df_archive_clean = df_archive.copy() df_image_clean = df_image.copy() df_retweet_clean = df_retweet.copy() ###Output _____no_output_____ ###Markdown Issue: df_archive contains original ratings and retweets. In fact, only original ratings (no retweets) are required Define Drop retweets from df_archive_clean. The column `retweeted_status_id` in this dataframe shows us those rows with retweets. I only keep those rows where the value of this column is NaN. Coding ###Code # get rows with no retweets df_archive_clean1 = df_archive_clean[df_archive_clean['retweeted_status_id'].isnull()] ###Output _____no_output_____ ###Markdown Testing ###Code #how many retweets are included in df_archive_clean? 
retweets_len = len(df_archive_clean[df_archive_clean.retweeted_status_id.notnull()]) retweets_len #The sum of retweets_len and df_archive_clean1 should equal the number of #rows of df_archive_clean total = len(df_archive_clean1) + retweets_len total len(df_archive_clean) ###Output _____no_output_____ ###Markdown Issue: some of the original ratings don't have images, but we only want ratings with images Define We know that all tweets and ratings that have pictures are stored in df_image_clean. I will merge the dataframes df_archive_clean1 and df_image_clean and only keep those IDs that have images. Code ###Code # Merge both df's with 'inner'; that means only those tweet_id's and their rows will be kept # that appear in both tables # By doing that, we can guarantee that we only keep ratings or tweets # with images df_merge_archive_image = pd.merge(df_archive_clean1, df_image_clean, how='inner', on='tweet_id') ###Output _____no_output_____ ###Markdown Test ###Code df_merge_archive_image.head(1) # the number of rows of df_merge_archive_image should be smaller than or equal to # the number of rows of df_image_clean, because it can contain only ratings # with pictures len(df_merge_archive_image) # the number of columns of df_merge_archive_image should be the sum of # the number of df_archive_clean1 and df_image_clean, minus 1, because # we only keep one column for tweet_id A = len(list(df_merge_archive_image)) B = len(list(df_archive_clean1)) + len(list(df_image_clean)) - 1 print(A) print(B) ###Output 28 28 ###Markdown Issue: tweets beyond Aug. 
1st 2017, which is logical, because we already merged the dataframes df_archive_clean1 and df_image_clean and only considered tweet ID's for ratings/tweets with pictures. For these pictures, there are also predictions available. Issue: only concentrate on retweet and favorite count in df_retweet (see project details) DefineDrop all columns from df_retweet_clean except 'id', 'id_str', 'retweet_count', 'favorite_count' Code ###Code # from df_retweet_clean, choose columns 'id', 'id_str', 'retweet_count' and 'favorite_count' df_retweet_counts = df_retweet_clean[['id', 'id_str', 'retweet_count', 'favorite_count']] ###Output _____no_output_____ ###Markdown Test ###Code #see if the required columns are kept and inspect rows 288 to 292 df_retweet_counts[288:293] ###Output _____no_output_____ ###Markdown Issue: Retweet_count for index 290 is 0 (should not be in this table) DefineOnly keep rows where value of retweet_count does not equal 0. Code ###Code # Only keep rows where value of retweet_count does not equal 0 df_retweet_counts1 = df_retweet_counts[df_retweet_counts.retweet_count != 0] ###Output _____no_output_____ ###Markdown Test ###Code # Output should be zero. len(df_retweet_counts1[df_retweet_counts1['retweet_count']==0]) ###Output _____no_output_____ ###Markdown Issue: A) Tidiness Issue: each variable forms a column: Tweet ID appears in two columns: 'id' and 'id_str' B) Quality issue: Tweet ID's in columns 'id' and 'id_str' are sometimes not the same DefineCompare the id's of the columns 'tweet_id' (from df_merge_archive_image), 'id' and 'id_str' (df_retweet_counts1) with the following steps. We want to see if the id's from the column 'id' or 'id_str' are equal to the column 'tweet_id'. 1) rename 'id' (from df_retweet_counts1) to 'tweet_id' and merge it with the tweet_id (from df_merge_archive_image) to dataframe df_test1.
2) Get number of rows from df_test13) Rename 'tweet_id' back to 'id' and rename 'id_str' to 'tweet_id' (from df_retweet_counts1) and merge it with the tweet_id (from df_merge_archive_image) to dataframe df_test2.4) Get number of rows from df_test25) Check which df (df_test1 or df_test2) has more rows6) If df_test1 has more rows, delete 'tweet_id' (from df_retweet_counts1) and rename 'id' to 'tweet_id'. Then merge the df's df_retweet_counts1 and df_merge_archive_image on the column 'tweet_id'.7) If df_test2 has more rows, delete 'id' and rename 'id_str' to 'tweet_id'. Then merge the df's df_retweet_counts1 and df_merge_archive_image on the column 'tweet_id'. Code and TestTo solve this issue, coding and testing steps are conducted consecutively. ###Code # rename columns df_retweet_counts1 = df_retweet_counts1.rename(columns={'id':'tweet_id'}) # test if renaming was successful df_retweet_counts1.head(1) # merge df's df_merge_archive_image and df_retweet_counts1 # on 'tweet_id' to dataframe df_test1 df_test1 = pd.merge(df_merge_archive_image, df_retweet_counts1, how='inner', on='tweet_id') # Get number of rows from df_test1 len(df_test1) # Rename 'tweet_id' back to 'id' and rename 'id_str' to 'tweet_id' # (from df_retweet_counts1) df_retweet_counts1.rename(columns={'tweet_id':'id'}, inplace=True) # test if renaming was successful df_retweet_counts1.head(1) # Rename 'id_str' to 'tweet_id' df_retweet_counts1.rename(columns={'id_str':'tweet_id'}, inplace=True) # test if renaming was successful df_retweet_counts1.head(1) # merge df's df_merge_archive_image and df_retweet_counts1 # on 'tweet_id' to dataframe df_test2 df_test2 = pd.merge(df_merge_archive_image, df_retweet_counts1, how='inner', on='tweet_id') # Get number of rows from df_test2 len(df_test2) # df_test1 has more rows, so I delete 'tweet_id' from df_retweet_counts1 # and rename 'id' to 'tweet_id'.
Then merge the df's df_retweet_counts1 and # df_merge_archive_image on the column 'tweet_id' to df_archive_image_retweet df_retweet_counts1.drop(['tweet_id'], axis=1, inplace=True) df_retweet_counts1 = df_retweet_counts1.rename(columns={'id':'tweet_id'}) df_archive_image_retweet = pd.merge(df_merge_archive_image, df_retweet_counts1, how='inner', on='tweet_id') # Test: # the number of columns of df_archive_image_retweet should be the sum of # the number of df_merge_archive_image and df_retweet_counts1, minus 1, because # we only keep one column for tweet_id A = len(list(df_archive_image_retweet)) B = len(list(df_merge_archive_image)) + len(list(df_retweet_counts1)) - 1 print(A) print(B) # Test: Check if the column 'tweet_id' still exists df_archive_image_retweet.head(1) # Test: Get number of rows from df_archive_image_retweet, it should # equal the number of rows from df_test1 (1994) len(df_archive_image_retweet) ###Output _____no_output_____ ###Markdown Issue: Not every value of the column 'rating_denominator' has value 10 DefineCheck which values in the column 'rating_denominator' differ from 10 and change them to 10. Code ###Code # get number of rows where rating_denominator is != 10 len(df_archive_image_retweet[df_archive_image_retweet.rating_denominator != 10]) # if values rating_denominator !=10, change them to 10 for x in df_archive_image_retweet.rating_denominator: if x != 10: df_archive_image_retweet.rating_denominator.replace(x, 10, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code # check if any values !=10 are left in rating_denominator len(df_archive_image_retweet[df_archive_image_retweet.rating_denominator != 10]) ###Output _____no_output_____ ###Markdown Issue: each variable forms a column: rating_numerator and rating_denominator are actually one variable but separated in two columns (should be merged, ideally e.g.
8/10) Define Create a new column 'rating' in df_archive_image_retweet. Divide the values of 'rating_numerator' by 'rating_denominator' and store the result in 'rating'. Delete the columns 'rating_numerator' and 'rating_denominator'. Code and TestTo solve this issue, coding and testing steps are conducted consecutively. ###Code # create new column and divide value x by value y x = df_archive_image_retweet.rating_numerator y = df_archive_image_retweet.rating_denominator df_archive_image_retweet['rating'] = (x / y) # test if new column was created and if it contains the values of x / y df_archive_image_retweet[['rating_numerator', 'rating_denominator', 'rating']].head() df_archive_image_retweet.head(1) # drop columns 'rating_numerator' and 'rating_denominator' df_archive_image_retweet.drop(['rating_numerator', 'rating_denominator'], axis=1, inplace=True) # check if columns 'rating_numerator' and 'rating_denominator' have been dropped list(df_archive_image_retweet) ###Output _____no_output_____ ###Markdown Issue: datatype of p1, p2 and p3 is object and not category DefineChange data type from object to category Code ###Code df_archive_image_retweet.p1 = df_archive_image_retweet.p1.astype('category') df_archive_image_retweet.p2 = df_archive_image_retweet.p2.astype('category') df_archive_image_retweet.p3 = df_archive_image_retweet.p3.astype('category') ###Output _____no_output_____ ###Markdown Test ###Code # check if datatype of columns p1, p2, p3 changed to category df_archive_image_retweet.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 1993 Data columns (total 29 columns): tweet_id 1994 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 1994 non-null object source 1994 non-null object text 1994 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 1994 non-null object name 1994 non-null
object doggo 1994 non-null object floofer 1994 non-null object pupper 1994 non-null object puppo 1994 non-null object jpg_url 1994 non-null object img_num 1994 non-null int64 p1 1994 non-null category p1_conf 1994 non-null float64 p1_dog 1994 non-null bool p2 1994 non-null category p2_conf 1994 non-null float64 p2_dog 1994 non-null bool p3 1994 non-null category p3_conf 1994 non-null float64 p3_dog 1994 non-null bool retweet_count 1994 non-null int64 favorite_count 1994 non-null int64 rating 1994 non-null float64 dtypes: bool(3), category(3), float64(8), int64(4), object(11) memory usage: 450.6+ KB ###Markdown Issue: data type of column 'timestamp' is object and not datetime DefineChange datatype of column timestamp to datetime Code ###Code # Change datatype of column timestamp to datetime df_archive_image_retweet.timestamp = pd.to_datetime(df_archive_image_retweet.timestamp) ###Output _____no_output_____ ###Markdown Test ###Code # check if datatype of column timestamp changed to datetime df_archive_image_retweet.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 1993 Data columns (total 29 columns): tweet_id 1994 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 1994 non-null datetime64[ns] source 1994 non-null object text 1994 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 1994 non-null object name 1994 non-null object doggo 1994 non-null object floofer 1994 non-null object pupper 1994 non-null object puppo 1994 non-null object jpg_url 1994 non-null object img_num 1994 non-null int64 p1 1994 non-null category p1_conf 1994 non-null float64 p1_dog 1994 non-null bool p2 1994 non-null category p2_conf 1994 non-null float64 p2_dog 1994 non-null bool p3 1994 non-null category p3_conf 1994 non-null float64 p3_dog 1994 non-null bool retweet_count 1994 non-null int64
favorite_count 1994 non-null int64 rating 1994 non-null float64 dtypes: bool(3), category(3), datetime64[ns](1), float64(8), int64(4), object(10) memory usage: 450.6+ KB ###Markdown Issue: dogs sometimes are in two dog stages Define:* create new column: 'number_none'* count the 'None' values per row and write the sum to 'number_none'* get those rows where sum is less than 3* correct df accordingly ###Code df_archive_image_retweet[['doggo', 'floofer', 'pupper', 'puppo']].head() ###Output _____no_output_____ ###Markdown Code ###Code # create new column: 'number_none' and # write the per-row count of 'None' values to it a = df_archive_image_retweet.doggo.str.count('None') b = df_archive_image_retweet.floofer.str.count('None') c = df_archive_image_retweet.pupper.str.count('None') d = df_archive_image_retweet.puppo.str.count('None') df_archive_image_retweet['number_none'] = a + b + c + d df_archive_image_retweet['number_none'].head() df_archive_image_retweet.head() # get those rows where sum is less than 3 df_archive_image_retweet[df_archive_image_retweet.number_none < 3][['text', 'doggo', 'floofer', 'pupper', 'puppo', 'number_none']] # look at content of 'text' column in the lines with the above indices and change values #in the columns 'doggo', 'floofer', 'pupper' and 'puppo' accordingly #the next line switches off a chained-assignment warning pd.options.mode.chained_assignment = None df_archive_image_retweet.doggo[148] = df_archive_image_retweet.doggo[148].replace('doggo', 'None') df_archive_image_retweet.doggo[154] = df_archive_image_retweet.doggo[154].replace('doggo', 'None') df_archive_image_retweet.doggo[340] = df_archive_image_retweet.doggo[340].replace('doggo', 'None') df_archive_image_retweet.doggo[397] = df_archive_image_retweet.doggo[397].replace('doggo', 'None') df_archive_image_retweet.pupper[419] = df_archive_image_retweet.pupper[419].replace('pupper', 'None') df_archive_image_retweet.doggo[425] = df_archive_image_retweet.doggo[425].replace('doggo', 'None')
df_archive_image_retweet.pupper[510] = df_archive_image_retweet.pupper[510].replace('pupper', 'None') df_archive_image_retweet.pupper[652] = df_archive_image_retweet.pupper[652].replace('pupper', 'None') df_archive_image_retweet.pupper[704] = df_archive_image_retweet.pupper[704].replace('pupper', 'None') df_archive_image_retweet.doggo[704] = df_archive_image_retweet.doggo[704].replace('doggo', 'None') df_archive_image_retweet.doggo[795] = df_archive_image_retweet.doggo[795].replace('doggo', 'None') df_archive_image_retweet.doggo[841] = df_archive_image_retweet.doggo[841].replace('doggo', 'None') ###Output _____no_output_____ ###Markdown Test ###Code #check if there are still rows where sum is less than 3 (no row except column names should occur) a = df_archive_image_retweet.doggo.str.count('None') b = df_archive_image_retweet.floofer.str.count('None') c = df_archive_image_retweet.pupper.str.count('None') d = df_archive_image_retweet.puppo.str.count('None') df_archive_image_retweet['number_none'] = a + b + c + d df_archive_image_retweet[df_archive_image_retweet.number_none < 3][['text', 'doggo', 'floofer', 'pupper', 'puppo', 'number_none']] ###Output _____no_output_____ ###Markdown Issue: each variable forms a column: for dog stage, there are four columns, they should be merged to one column called 'dog_stage' (datatype: category with values: None, doggo, floofer, pupper, puppo) Define* Create new column called 'dog_stage'* fill 'dog_stage' column with respective values None, doggo, floofer, pupper, puppo* drop columns doggo, floofer, pupper, puppo* change datatype in column 'dog_stage' to category Code ###Code # test preparation: number of values for 'doggo', 'floofer', 'pupper' and 'puppo' in respective columns df_archive_image_retweet['doggo'].value_counts() df_archive_image_retweet['floofer'].value_counts() df_archive_image_retweet['pupper'].value_counts() df_archive_image_retweet['puppo'].value_counts() df_archive_image_retweet[['doggo', 'floofer', 'pupper', 
'puppo']].head(5) # replace None with 0 in columns 'doggo', 'floofer', 'pupper', 'puppo' df_archive_image_retweet.doggo.replace('None', 0, inplace=True) df_archive_image_retweet.floofer.replace('None', 0, inplace=True) df_archive_image_retweet.pupper.replace('None', 0, inplace=True) df_archive_image_retweet.puppo.replace('None', 0, inplace=True) df_archive_image_retweet[['doggo', 'floofer', 'pupper', 'puppo']].head(30) # replace dog stages in columns 'doggo', 'floofer', 'pupper', 'puppo' with numbers 1, 2, 3, 4 df_archive_image_retweet.doggo.replace('doggo', 1, inplace=True) df_archive_image_retweet.floofer.replace('floofer', 2, inplace=True) df_archive_image_retweet.pupper.replace('pupper', 3, inplace=True) df_archive_image_retweet.puppo.replace('puppo', 4, inplace=True) df_archive_image_retweet[['doggo', 'floofer', 'pupper', 'puppo']].head(30) # create new column 'dog_stage' and move numbers (variables) 1, 2, 3 and 4 to this column # (rows without any stage keep the value 0) a = df_archive_image_retweet.doggo b = df_archive_image_retweet.floofer c = df_archive_image_retweet.pupper d = df_archive_image_retweet.puppo df_archive_image_retweet['dog_stage'] = (a + b + c + d) df_archive_image_retweet[['doggo', 'floofer', 'pupper', 'puppo', 'dog_stage']].head(30) # column 'dog_stage': replace numbers (variables) 1, 2, 3 and 4 with dog stages df_archive_image_retweet.dog_stage.replace(1, 'doggo', inplace=True) df_archive_image_retweet.dog_stage.replace(2, 'floofer', inplace=True) df_archive_image_retweet.dog_stage.replace(3, 'pupper', inplace=True) df_archive_image_retweet.dog_stage.replace(4, 'puppo', inplace=True) # drop columns 'doggo', 'floofer', 'pupper', 'puppo' df_archive_image_retweet.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1, inplace=True) # change datatype in column 'dog_stage' to category df_archive_image_retweet.dog_stage = df_archive_image_retweet.dog_stage.astype('category') ###Output _____no_output_____ ###Markdown Test ###Code # see if number of unique values is still the same
df_archive_image_retweet['dog_stage'].value_counts() # see if columns 'doggo', 'floofer', 'pupper', 'puppo' are dropped # and if datatype in column 'dog_stage' is changed to category df_archive_image_retweet.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 1993 Data columns (total 27 columns): tweet_id 1994 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 1994 non-null datetime64[ns] source 1994 non-null object text 1994 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 1994 non-null object name 1994 non-null object jpg_url 1994 non-null object img_num 1994 non-null int64 p1 1994 non-null category p1_conf 1994 non-null float64 p1_dog 1994 non-null bool p2 1994 non-null category p2_conf 1994 non-null float64 p2_dog 1994 non-null bool p3 1994 non-null category p3_conf 1994 non-null float64 p3_dog 1994 non-null bool retweet_count 1994 non-null int64 favorite_count 1994 non-null int64 rating 1994 non-null float64 number_none 1994 non-null int64 dog_stage 1994 non-null category dtypes: bool(3), category(4), datetime64[ns](1), float64(8), int64(5), object(6) memory usage: 486.0+ KB ###Markdown Issue: 'tweet_id' should be object and not integer Define* change datatype to object Code ###Code # change datatype to object df_archive_image_retweet.tweet_id = df_archive_image_retweet.tweet_id.astype(str) ###Output _____no_output_____ ###Markdown Test ###Code df_archive_image_retweet.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 1993 Data columns (total 27 columns): tweet_id 1994 non-null object in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 1994 non-null datetime64[ns] source 1994 non-null object text 1994 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64
retweeted_status_timestamp 0 non-null object expanded_urls 1994 non-null object name 1994 non-null object jpg_url 1994 non-null object img_num 1994 non-null int64 p1 1994 non-null category p1_conf 1994 non-null float64 p1_dog 1994 non-null bool p2 1994 non-null category p2_conf 1994 non-null float64 p2_dog 1994 non-null bool p3 1994 non-null category p3_conf 1994 non-null float64 p3_dog 1994 non-null bool retweet_count 1994 non-null int64 favorite_count 1994 non-null int64 rating 1994 non-null float64 number_none 1994 non-null int64 dog_stage 1994 non-null category dtypes: bool(3), category(4), datetime64[ns](1), float64(8), int64(4), object(7) memory usage: 486.0+ KB ###Markdown 4. Store, Analyze and Visualize Data ###Code #store clean dataframe df_archive_image_retweet.to_csv('twitter_archive_master.csv', index=False) df_archive_image_retweet.head() ###Output _____no_output_____ ###Markdown Questions for the Data Analysis* Regarding the Neural Network: What are the five most frequent p1 dog breed predictions of the algorithm?* What is the maximum, average and minimum rating of these five dog breeds?* What are the 3 months during which the most tweets have been uploaded? Data Visualization* Show the average favorite and retweet count for the dog stages 'doggo', 'floofer', 'pupper', 'puppo' in a bar chart Regarding the Neural Network: What are the five most frequent p1 dog breed predictions of the algorithm? ###Code df_archive_image_retweet['p1'].value_counts().head() ###Output _____no_output_____ ###Markdown What is the maximum, average and minimum rating of these five dog breeds?
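The per-breed describe() calls below could also be collapsed into a single groupby pass. A minimal sketch on hypothetical sample data (the breed names and ratings here are made up; the real frame is df_archive_image_retweet):

```python
import pandas as pd

# Hypothetical stand-in for df_archive_image_retweet with just the
# two columns this sketch needs
sample = pd.DataFrame({
    "p1": ["golden_retriever", "golden_retriever", "pug", "pug", "Pembroke"],
    "rating": [1.3, 1.1, 1.0, 1.2, 1.4],
})

# Restrict to the most frequent breeds, then aggregate all three stats at once
top_breeds = sample["p1"].value_counts().head(5).index
stats = (sample[sample["p1"].isin(top_breeds)]
         .groupby("p1")["rating"]
         .agg(["max", "mean", "min"]))
print(stats)
```

This gives one table with a row per breed instead of five separate describe() outputs.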
###Code #get results for golden_retriever df_archive_image_retweet[df_archive_image_retweet.p1 == 'golden_retriever']['rating'].describe()[['max', 'mean', 'min']] #get results for Labrador_retriever df_archive_image_retweet[df_archive_image_retweet.p1 == 'Labrador_retriever']['rating'].describe()[['max', 'mean', 'min']] #get results for Pembroke df_archive_image_retweet[df_archive_image_retweet.p1 == 'Pembroke']['rating'].describe()[['max', 'mean', 'min']] #get results for Chihuahua df_archive_image_retweet[df_archive_image_retweet.p1 == 'Chihuahua']['rating'].describe()[['max', 'mean', 'min']] #get results for pug df_archive_image_retweet[df_archive_image_retweet.p1 == 'pug']['rating'].describe()[['max', 'mean', 'min']] ###Output _____no_output_____ ###Markdown What are the 3 months during which the most tweets have been uploaded? ###Code df_archive_image_retweet['timestamp'].groupby(df_archive_image_retweet.timestamp.dt.to_period("M")).agg('count').sort_values().tail(3) ###Output _____no_output_____ ###Markdown Show the average favorite and retweet count for the dog stages 'doggo', 'floofer', 'pupper', 'puppo' in a bar chart ###Code # [1:] drops the first row ('None'/no stage) from the averages dog_stage_counts = df_archive_image_retweet.groupby(['dog_stage'])[['favorite_count', 'retweet_count']].mean()[1:] dog_stage_counts import numpy as np import matplotlib.pyplot as plt %matplotlib inline #data to plot dog_stage_counts n_groups = 4 #create plot fig, ax = plt.subplots() index = np.arange(n_groups) bar_width = 0.35 opacity = 0.8 rects1 = plt.bar(index, dog_stage_counts.favorite_count, bar_width, alpha=opacity, color='b', label='Favorite Counts') rects2 = plt.bar(index + bar_width, dog_stage_counts.retweet_count, bar_width, alpha=opacity, color='g', label='Retweet Counts') plt.xlabel('Dog Stages') plt.ylabel('Counts') plt.title('Dog Stages and their Average Favorite and Retweet Counts') plt.xticks(index + bar_width, ('doggo', 'floofer', 'pupper', 'puppo')) plt.legend() plt.tight_layout() plt.show() ###Output
_____no_output_____ ###Markdown Project: Data Wrangling on @dog_rates Twitter account Table of ContentsIntroductionGather DataData WranglingCleaningExploratory Data AnalysisConclusions Introduction Data will be gathered from three different sources and in a variety of formats for the @dog_rates Twitter account also known as WeRateDogs. WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. These ratings almost always have a denominator of 10, but the numerators are often higher than 10. WeRateDogs has over 4 million followers and has received international media coverage. Udacity has provided me with a Twitter archive that contains basic tweet data (tweet ID, timestamp, text, etc.) for all 5000+ of their tweets as they stood on August 1, 2017.I will be data wrangling: assessing the data quality, data tidiness, and cleaning the data. I will be documenting my analyses and visualizations below. Gather Data 1. WeRateDogs Twitter Archive downloaded from Udacity website. 
###Code #Import twitter_archive_enhanced.csv import pandas as pd import numpy as np import matplotlib.pyplot as plt df_tweets = pd.read_csv('twitter-archive-enhanced.csv') df_tweets.head() df_tweets.tail() #Check for missing data df_tweets.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown Only 181 tweets have retweeted_status_id, retweeted_status_user_id, and retweeted_status_timestamp data; these are the retweets. I don't want to neglect the other columns that are missing this data just yet, so I will look at the rest of the data before deciding what to drop. ###Code #Check for duplicates df_tweets.duplicated().sum() ###Output _____no_output_____ ###Markdown 2. Import Image Predictions hosted on Udacity's Servers.
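Since the predictions file is tab-separated, read_csv needs `sep='\t'`. A small self-contained sketch with an in-memory sample (the column names mirror the real file; the rows and URLs are made up):

```python
import io
import pandas as pd

# Two made-up rows in the same tab-separated shape as image-predictions.tsv
sample_tsv = (
    "tweet_id\tjpg_url\tp1\n"
    "1\thttp://example.com/a.jpg\tpug\n"
    "2\thttp://example.com/b.jpg\tbeagle\n"
)

# read_csv parses TSV data once sep is set to a tab
preds = pd.read_csv(io.StringIO(sample_tsv), sep="\t")
print(preds.shape)  # (2, 3)
```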
###Code #Import image_predictions.tsv -- #Requests library and the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv import requests url='https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response=requests.get(url) response #Save response as a file response.content with open ("image-predictions.tsv", mode='wb') as file: file.write(response.content) #load file into memory image_predictions=pd.read_csv('image-predictions.tsv', sep='\t') image_predictions.head() image_predictions.shape image_predictions.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown 3. 
Twitter API to JSON Data ###Code #Import Tweet IDs import tweepy import json consumer_key = '' consumer_secret = ' ' access_token = ' ' access_secret = ' ' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, parser=tweepy.parsers.JSONParser()) api.wait_on_rate_limit = True #for tweet_id in df_tweets['tweet_id'].values: # try: # tweet = api.get_status(tweet_id, tweet_mode='extended') # with open('tweet_json.txt', 'a') as txt: # txt.write(json.dumps(tweet) + '\n') # print('Success') # except tweepy.TweepError: # continue #DataFrame with (at minimum) tweet ID, retweet count, and favorite count df = pd.read_json('tweet_json.txt', lines=True) df.head() df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2833 entries, 0 to 2832 Data columns (total 32 columns): contributors 0 non-null float64 coordinates 0 non-null float64 created_at 2833 non-null datetime64[ns] display_text_range 2833 non-null object entities 2833 non-null object extended_entities 2469 non-null object favorite_count 2833 non-null int64 favorited 2833 non-null bool full_text 2833 non-null object geo 0 non-null float64 id 2833 non-null int64 id_str 2833 non-null int64 in_reply_to_screen_name 101 non-null object in_reply_to_status_id 101 non-null float64 in_reply_to_status_id_str 101 non-null float64 in_reply_to_user_id 101 non-null float64 in_reply_to_user_id_str 101 non-null float64 is_quote_status 2833 non-null bool lang 2833 non-null object place 1 non-null object possibly_sensitive 2632 non-null float64 possibly_sensitive_appealable 2632 non-null float64 quoted_status 38 non-null object quoted_status_id 42 non-null float64 quoted_status_id_str 42 non-null float64 quoted_status_permalink 42 non-null object retweet_count 2833 non-null int64 retweeted 2833 non-null bool retweeted_status 238 non-null object source 2833 non-null object truncated 2833 non-null bool user 2833
non-null object dtypes: bool(4), datetime64[ns](1), float64(11), int64(4), object(12) memory usage: 630.9+ KB ###Markdown Data Wrangling - I only want original ratings (no retweets) from the three datasets. ###Code df_tweets.shape df_tweets.info() #No Tweet ID duplicates df_tweets['tweet_id'].duplicated().sum() df_tweets.sample(5) df_tweets.describe() #How many have a numerator over 10 numerator_over10=df_tweets[df_tweets['rating_numerator']> 10] numerator_over10.shape numerator_over20=df_tweets[df_tweets['rating_numerator']> 20] numerator_over20.shape ###Output _____no_output_____ ###Markdown 24 Entries are very highly rated over 20! The max entry is 1776. ###Code df_tweets[df_tweets['rating_numerator']> 15].shape ###Output _____no_output_____ ###Markdown Tweets with a rating_numerator over 15 should be deleted as they are inconsistent with the other data. ###Code df_tweets.query('rating_numerator == "1776"') num_under10=df_tweets[df_tweets['rating_numerator']< 10] num_under10.shape numerator_over20.expanded_urls.sample(5) #How many have a denominator over 10 denominator_over10=df_tweets[df_tweets['rating_denominator']> 10] denominator_over10.shape denominator_over10.sample(5) #How many have a denominator under 10 denominator_under10=df_tweets[df_tweets['rating_denominator']< 10] denominator_under10.shape denominator_under10 denominator_10=df_tweets[df_tweets['rating_denominator']== 10] denominator_10.shape ###Output _____no_output_____ ###Markdown Tweets with a rating_denominator not equal to 10 should be deleted as they are inconsistent with the other data. I'm going to narrow down this data in the cleaning stage. ###Code image_predictions.shape image_predictions.info() image_predictions.sample(5) image_predictions['p1'].nunique() #Check for duplicates sum(image_predictions.tweet_id.duplicated()) sum(image_predictions.jpg_url.duplicated()) ###Output _____no_output_____ ###Markdown Not sure if I should delete the jpg_url duplicates just yet. 
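One way to inspect every row involved in a duplicated jpg_url, not just the later copies, is `duplicated(keep=False)`. A sketch on hypothetical data (the IDs and filenames are made up):

```python
import pandas as pd

# Hypothetical miniature version of image_predictions
preds = pd.DataFrame({
    "tweet_id": [101, 102, 103, 104],
    "jpg_url": ["a.jpg", "b.jpg", "a.jpg", "c.jpg"],
})

# keep=False flags every member of a duplicate group for inspection
dup_rows = preds[preds["jpg_url"].duplicated(keep=False)]
print(dup_rows)
```

Seeing both members of each pair side by side makes it easier to judge whether the second occurrence is a retweet.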
These could be retweets. ###Code jpg_dup=image_predictions['jpg_url'].value_counts() jpg_dup.head(5) ###Output _____no_output_____ ###Markdown These above definitely have to be retweets because some of the pictures made me laugh! ###Code df.shape df.info() df.sample(4) #Are there any contributors or coordinates in the dataset? df.contributors.nunique() df.coordinates.nunique() #Any retweets? df.describe() ###Output _____no_output_____ ###Markdown Max number of retweets was 79872! The mean is 3208 for retweet_count. ###Code #Any duplicates? df['id'].duplicated().sum() #Check length of tweet ID in each table len(df_tweets.tweet_id) ###Output _____no_output_____ ###Markdown Detect and document at least eight (8) quality issues and two (2) tidiness issuesQuality:df_tweets- Retweets can be deleted and they pertain to these columns: in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, and retweeted_status_timestamp- Drop unnecessary columns from the list above.- Timestamp is an object and it can be changed to datetime- Tweets rating_numerator and rating_denominator should be changed to a float.- Name column sometimes lists None or "a", I would like to research the names and change these both to NaN. Then it will be easier to see the top dog names.- Check Expanded urls for duplicates and delete any found.image_predictions- Column names don't tell much information, rename these columns.- p1 Column is inconsistent as not all dog names are capitalized.- Delete rows with duplicate jpg_urls from image_predictions table.df Twitter- Take out contributors, coordinates, and geo column as they all have a value of 0.Tidiness: - df_Tweets with a rating_denominator not equal to 10 or a rating_numerator over 15 or under 10 should be deleted as they are inconsistent with the other data. - Columns of df_tweets: doggo, floofer, pupper, and puppo could be put into one column as type of dog. - df_tweets has 2356 rows and image_predictions has 2075.
There are 2833 rows in the df Twitter table. These tables can be combined. - There are many columns in the image_predictions table and we only need the jpg_url, p1, p1_conf, and p1_dog columns. - All three tables have tweet_ID or ID in common. When these tables are merged, I will have to make sure this column does not show up 2-3 times in the combined dataset. The counts_clean table will have many of the columns removed as I only want favorite_count, id, retweet_count, full_text, and source. Cleaning ###Code #Make copies of files tweets_clean=df_tweets.copy() images_clean=image_predictions.copy() counts_clean=df.copy() ###Output _____no_output_____ ###Markdown Define - Where in_reply_to_status_id and retweeted_status_user_id are null, we want to keep these values as they are not retweets.-Retweets can be deleted and they pertain to these columns: in_reply_to_user_id, retweeted_status_id, and retweeted_status_timestamp ###Code tweets_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown Code ###Code tweets_clean=tweets_clean[tweets_clean['in_reply_to_user_id'].isnull()] tweets_clean.shape tweets_clean=tweets_clean[tweets_clean['retweeted_status_user_id'].isnull()] tweets_clean.shape #Drop columns
https://stackoverflow.com/questions/28538536/deleting-multiple-columns-based-on-column-names-in-pandas tweets_clean=tweets_clean.drop(["retweeted_status_id", "retweeted_status_user_id", "retweeted_status_timestamp", "in_reply_to_user_id", "in_reply_to_status_id"], axis=1) tweets_clean.head() tweets_clean.shape ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.head() ###Output _____no_output_____ ###Markdown Define - timestamp is an object and it can be changed to datetime ###Code tweets_clean.dtypes ###Output _____no_output_____ ###Markdown Code ###Code from datetime import datetime #https://stackoverflow.com/questions/38333954/converting-object-to-datetime-format-in-python tweets_clean['timestamp'] = pd.to_datetime(tweets_clean['timestamp']) ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.head() tweets_clean.dtypes ###Output _____no_output_____ ###Markdown Define - Tweets rating_numerator and rating_denominator should be changed to a float. Code ###Code #change from integer to float; astype returns a new frame, so assign the result back tweets_clean = tweets_clean.astype({'rating_numerator': 'float', 'rating_denominator': 'float'}) ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.dtypes ###Output _____no_output_____ ###Markdown Define - columns: doggo, floofer, pupper, and puppo need to be changed if they have None values in them to np.nan.
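The replacement described here can be sketched in isolation; the frame below is a toy stand-in for tweets_clean with illustrative values, not the real archive:

```python
import pandas as pd
import numpy as np

# Toy stand-in for tweets_clean (hypothetical values)
toy = pd.DataFrame({
    'doggo': ['None', 'doggo', 'None'],
    'floofer': ['None', 'None', 'floofer'],
})
# Replace the literal string 'None' with a real missing value in one pass
stage_cols = ['doggo', 'floofer']
toy[stage_cols] = toy[stage_cols].replace('None', np.nan)
print(toy['doggo'].isnull().sum())  # 2
```

Using a real NaN instead of the string 'None' lets isnull(), value_counts(), and notnull() counts work as expected downstream.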
Code ###Code tweets_clean.doggo.value_counts() tweets_clean.floofer.value_counts() tweets_clean.pupper.value_counts() tweets_clean.puppo.value_counts() #replace None values in these four columns with np.nan tweets_clean['doggo'] = tweets_clean['doggo'].replace(['None'], np.nan) tweets_clean['floofer'] = tweets_clean['floofer'].replace(['None'], np.nan) tweets_clean['pupper'] = tweets_clean['pupper'].replace(['None'], np.nan) tweets_clean['puppo'] = tweets_clean['puppo'].replace(['None'], np.nan) tweets_clean.doggo.value_counts() ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.sample(3) #after the replace above the stages are NaN, so count non-null stages per row tweets_clean.loc[tweets_clean[['doggo', 'floofer', 'pupper', 'puppo']].notnull().sum(axis=1) > 1] ###Output _____no_output_____ ###Markdown Define - columns: doggo, floofer, pupper, and puppo will be combined into one new column. Some tweet_ids mention two dog stages in them. Code ###Code #https://stackoverflow.com/questions/39291499/how-to-concatenate-multiple-column-values-into-a-single-column-in-panda-datafram cols = ['doggo', 'floofer', 'pupper', 'puppo'] tweets_clean['dog_stages'] = tweets_clean[cols].apply(lambda row: ','.join(row.values.astype(str)), axis=1) tweets_clean.dog_stages.value_counts() tweets_clean['dog_stages'] = tweets_clean['dog_stages'].replace(['nan,nan,nan,nan'], np.nan) tweets_clean['dog_stages'] = tweets_clean['dog_stages'].replace(['nan,nan,pupper,nan'], 'pupper') tweets_clean['dog_stages'] = tweets_clean['dog_stages'].replace(['doggo,nan,nan,nan'], 'doggo') tweets_clean['dog_stages'] = tweets_clean['dog_stages'].replace(['nan,nan,nan,puppo'], 'puppo') tweets_clean['dog_stages'] = tweets_clean['dog_stages'].replace(['nan,floofer,nan,nan'], 'floofer') tweets_clean['dog_stages'] = tweets_clean['dog_stages'].replace(['doggo,nan,pupper,nan'], 'doggo, pupper') tweets_clean['dog_stages'] = tweets_clean['dog_stages'].replace(['doggo,floofer,nan,nan'], 'doggo, floofer') tweets_clean['dog_stages'] =
tweets_clean['dog_stages'].replace(['doggo,nan,nan,puppo'], 'doggo, puppo') ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.dog_stages.value_counts() tweets_clean.head() ###Output _____no_output_____ ###Markdown Define - Name column sometimes lists None; I would like to research the dogs with "none" and "a" and change those to NaN or null. This way I can find out the top dog names from the dataset. Code ###Code low_name=tweets_clean[tweets_clean.name.str.islower().fillna(False)] low_name.shape low_name.name.value_counts() #replace all these lowercase dog names above with np.nan tweets_clean['name'] = tweets_clean['name'].replace(['a', 'the', 'an', 'very', 'one', 'quite', 'just', 'getting', 'not', 'actually', 'my', 'space', 'old', 'all', 'his', 'mad', 'officially', 'by', 'incredibly', 'life', 'light', 'infuriating', 'unacceptable', 'this', 'such', 'None'], np.nan) tweets_clean[tweets_clean.name.str.islower().fillna(False)] tweets_clean.sample(5) from collections import Counter counter = Counter(tweets_clean['name'].tolist()) print(counter.most_common(15)) #Let's find out the top 10 dog names by popularity tweets_clean.groupby('name').count().sort_values(by='tweet_id', ascending=False).iloc[:10, :1] ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.tail(2) ###Output _____no_output_____ ###Markdown Define - Check expanded_urls for duplicates and delete any found.
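One pitfall worth noting for this step: calling drop_duplicates on a single column (even with inplace=True) does not remove the corresponding rows from the DataFrame. A row-level sketch on toy data (hypothetical values) shows the dedupe that actually shrinks the frame:

```python
import pandas as pd

# Toy frame: two rows share the same expanded URL (illustrative values)
toy = pd.DataFrame({'expanded_urls': ['u1', 'u1', 'u2'],
                    'rating_numerator': [12, 13, 11]})
# Deduping the Series alone would return a new Series and leave the frame intact;
# drop_duplicates with subset= removes the duplicate rows themselves
deduped = toy.drop_duplicates(subset=['expanded_urls'], keep='first')
print(toy.shape, deduped.shape)  # (3, 2) (2, 2)
```

Assigning the result back (or using the frame-level call) is what actually deletes the duplicate tweet rows.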
###Code sum(tweets_clean['expanded_urls'].duplicated()) ###Output _____no_output_____ ###Markdown Code ###Code #dropping duplicates on the Series alone would not remove rows, so drop duplicate rows, keeping null urls tweets_clean = tweets_clean[~(tweets_clean['expanded_urls'].duplicated(keep='first') & tweets_clean['expanded_urls'].notnull())] ###Output _____no_output_____ ###Markdown Test ###Code sum(tweets_clean.expanded_urls.duplicated()) tweets_clean.shape sum(tweets_clean.text.duplicated()) ###Output _____no_output_____ ###Markdown Define image_predictions - Column names don't tell much information and I only need the tweet_id, jpg_url, img_num, p1, p1_conf, and p1_dog columns. I'm going to drop the other columns. ###Code images_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown Code ###Code images_clean=images_clean.drop(["p2", "p2_conf", "p2_dog", "p3", "p3_conf", "p3_dog"], axis=1) images_clean.head() ###Output _____no_output_____ ###Markdown Test ###Code images_clean.tail() ###Output _____no_output_____ ###Markdown Define image_predictions - Rename column headings so they make sense.
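Assigning a list to .columns works, but it silently depends on column order. A rename-by-mapping sketch (the toy column names below are illustrative) is order-independent:

```python
import pandas as pd

# Toy columns standing in for the prediction table (names are illustrative)
toy = pd.DataFrame({'p1': ['pug'], 'p1_conf': [0.91], 'p1_dog': [True]})
# rename with a mapping keys off names, not positions, so column order can't bite
toy = toy.rename(columns={'p1': 'type_of_dog',
                          'p1_conf': 'prediction_confidence',
                          'p1_dog': 'prediction_dog'})
print(list(toy.columns))  # ['type_of_dog', 'prediction_confidence', 'prediction_dog']
```

Any column not named in the mapping is left untouched, which also makes the rename safe to re-run.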
Code ###Code images_clean.columns = ['tweet_id', 'jpg_url', 'img_num', 'type_of_dog', '% prediction', 'prediction_dog'] ###Output _____no_output_____ ###Markdown Test ###Code images_clean.head() ###Output _____no_output_____ ###Markdown Define image_predictions - p1 column is inconsistent: not all dog names are capitalized Code ###Code images_clean.dtypes #Capitalize the first letter in the type_of_dog column https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.title.html images_clean['type_of_dog']=images_clean['type_of_dog'].str.title() ###Output _____no_output_____ ###Markdown Test ###Code images_clean.sample(5) #If the dog prediction is False, is the image still a dog? false_dog=images_clean[images_clean['prediction_dog']== False] false_dog.head(5) ###Output _____no_output_____ ###Markdown I viewed some of these images; some that are predicted as not a dog still show a dog, so the prediction flag alone is not fully reliable. Define - Delete rows with duplicate jpg_urls from the image_predictions table. ###Code images_clean.shape ###Output _____no_output_____ ###Markdown Code ###Code #check for jpg url duplicates image_dups=images_clean['jpg_url'].value_counts() image_dups.sample(5) images_clean=images_clean.drop_duplicates(['jpg_url'], keep='first') image_dups2=images_clean['jpg_url'].value_counts() image_dups2.head(5) #check if duplicate jpg_urls were deleted images_clean.shape ###Output _____no_output_____ ###Markdown Test ###Code images_clean.sample(2) ###Output _____no_output_____ ###Markdown Define counts_clean - Take out the contributors, coordinates, and geo columns as they all have a value of 0. Must keep the tweet ID, retweet count, and favorite count columns, and any others necessary.
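Pruning down to a keep-list of columns reduces to plain column indexing; a toy sketch with illustrative values and the all-null columns described here:

```python
import pandas as pd

# Toy frame with the always-empty columns described above (values illustrative)
toy = pd.DataFrame({'id': [1, 2], 'geo': [None, None],
                    'retweet_count': [5, 7], 'favorite_count': [20, 30],
                    'contributors': [None, None]})
# Index with the keep-list; everything else is dropped
keep = ['id', 'retweet_count', 'favorite_count']
toy = toy[keep]
print(list(toy.columns))  # ['id', 'retweet_count', 'favorite_count']
```

Selecting the columns you want is often less error-prone than dropping the ones you don't, since new unwanted columns in a future pull are excluded automatically.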
Code ###Code counts_clean.info() counts_clean.sample(2) col_list=['id', 'retweet_count', 'favorite_count'] counts_clean=counts_clean[col_list] ###Output _____no_output_____ ###Markdown Test ###Code counts_clean.head() #rename id column to tweet_id counts_clean.columns = ['tweet_id', 'retweet_count', 'favorite_count'] counts_clean.sample(2) sum(counts_clean.tweet_id.duplicated()) #dedupe at the row level; dropping duplicates on the Series alone would not remove rows counts_clean = counts_clean.drop_duplicates(subset=['tweet_id'], keep='first') sum(counts_clean.tweet_id.duplicated()) ###Output _____no_output_____ ###Markdown Tidiness: Define - tweets_clean rows with a rating_denominator not equal to 10 or a rating_numerator over 15 or under 10 should be deleted as they are inconsistent with the other data. Code ###Code tweets_clean.shape tweets_clean=tweets_clean[tweets_clean['rating_denominator']==10] tweets_clean.shape tweets_clean[tweets_clean['rating_numerator']<15].shape tweets_clean=tweets_clean[tweets_clean['rating_numerator']<15] tweets_clean.shape tweets_clean[tweets_clean['rating_numerator']>10].shape tweets_clean=tweets_clean[tweets_clean['rating_numerator']>10] tweets_clean.shape ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.sample(5) ###Output _____no_output_____ ###Markdown Define - Delete the doggo, floofer, pupper, and puppo columns since that information was used for a new column. Code ###Code tweets_clean=tweets_clean.drop(["doggo", "floofer", "pupper", "puppo"], axis=1) tweets_clean.tail() tweets_clean.dog_stages.value_counts() #change from object to category; astype returns a new frame, so assign the result back tweets_clean = tweets_clean.astype({'dog_stages': 'category'}) tweets_clean.dtypes ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.head() ###Output _____no_output_____ ###Markdown - df_tweets has 2356 rows and image_predictions has 2075. There are 2333 rows in the df Twitter table. There are many columns in the image_predictions table and I only need the jpg_url, p1, p1_conf, and p1_dog columns.
The counts_clean table will have many of the columns removed as I only want favorite_count, id, retweet_count, full_text, and source. ###Code tweets_clean.sample(2) sum(tweets_clean.tweet_id.duplicated()) #remove source column as I am only concerned with tweet_id, and ratings. col_lists=['tweet_id', 'timestamp', 'text', 'expanded_urls', 'rating_numerator', 'rating_denominator', 'name', 'dog_stages'] tweets_clean=tweets_clean[col_lists] tweets_clean.sample(2) images_clean.sample(2) sum(images_clean.tweet_id.duplicated()) counts_clean.sample(3) sum(counts_clean.tweet_id.duplicated()) ###Output _____no_output_____ ###Markdown Define Merge tweets_clean with counts_clean and then merge with images_clean to make twitter archive master.csv file Code ###Code twitter_archive_master=tweets_clean.merge(counts_clean, on='tweet_id') twitter_archive_master.head(2) sum(twitter_archive_master.tweet_id.duplicated()) twitter_archive_master=twitter_archive_master.merge(images_clean, on='tweet_id') twitter_archive_master.sample(2) sum(twitter_archive_master.duplicated()) twitter_archive_master.shape ###Output _____no_output_____ ###Markdown Test - Save all datasets to CSV ###Code twitter_archive_master.to_csv('twitter_archive_master.csv', index=False) tweets_clean.to_csv('tweets_clean.csv', index=False) images_clean.to_csv('images_clean.csv', index=False) counts_clean.to_csv('counts_clean.csv', index=False) ###Output _____no_output_____ ###Markdown Data Analysis - I am going to focus on finding out the top dog names, types of dogs, and if those had higher favorites on Twitter. I also want to find the top 2 images that had the most favorites. 
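The two lookups described for the analysis — most common names and the most-favorited rows — reduce to value_counts and nlargest. A toy sketch (hypothetical names and counts, not the real tweet data):

```python
import pandas as pd

# Toy frame (hypothetical names/counts, not the real archive)
toy = pd.DataFrame({'name': ['Lucy', 'Charlie', 'Lucy', 'Max'],
                    'favorite_count': [10, 5, 7, 9]})
# Most common name
print(toy['name'].value_counts().idxmax())  # Lucy
# Rows with the two highest favorite counts
print(toy.nlargest(2, 'favorite_count')['favorite_count'].tolist())  # [10, 9]
```

nlargest avoids a full sort when you only need the top few rows, which is handy once the master table gets larger.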
###Code %matplotlib inline import seaborn as sns twitter_archive_master.sample(2) twitter_archive_master['rating_numerator'].plot() plt.ylabel('rating numerator') plt.xlabel('tweet number'); topdogstage=twitter_archive_master['dog_stages'].value_counts() topdogstage topdogstage.plot(kind='bar', figsize=(9,9)) plt.ylabel('# of dogs', color='r', fontsize=14) plt.xticks(rotation='vertical', fontsize=12) plt.title('Dog Stages vs. # of Dogs') plt.legend(); #What are the Top 10 most popular dog names? top_names=twitter_archive_master.groupby('name').count().sort_values(by='tweet_id', ascending=False).iloc[:10, :1] top_names labels = ['Charlie 8', 'Tucker 8', 'Oliver 8', 'Cooper 8', 'Koda 6', 'Bo 6', 'Daisy 6', 'Penny 6', 'Lucy 6', 'Rusty 4'] plt.figure(figsize=(8, 8)) plt.pie(top_names, labels = labels, shadow = True, startangle = 90) plt.title('Top 10 Dog Names'); twitter_archive_master.shape top_types=twitter_archive_master['type_of_dog'].value_counts() top10=top_types.head(10) top10 top10.plot(figsize=(9,9)) plt.ylabel('# of Tweets', color='r', fontsize=14) plt.xticks(rotation='vertical', fontsize=12) plt.title('Number of Tweets for Different Types of Dogs') plt.legend(); top_10favs=twitter_archive_master[twitter_archive_master['favorite_count'] >80000] top_10favs top_10favs2=top_10favs.sort_values(by=['favorite_count','type_of_dog']) top_10favs2 fav =top_10favs2['favorite_count'] dog =top_10favs2['type_of_dog'] plt.plot(dog, fav) plt.ylabel('tweet favorites count', size = 15) plt.xticks(rotation='vertical', size = 15); #Let's find the top 3 dog images from the top_10 favorites, ID 393, 302, and 763 are the top 3 most favorited Tweets. 
top2_images=top_10favs2['jpg_url'] top2_images ###Output _____no_output_____ ###Markdown Gather ###Code # Importing the libraries import pandas as pd import numpy as np import requests import tweepy import json import matplotlib.pyplot as plt # Importing WeRateDogs Twitter archive ta = pd.read_csv('twitter-archive-enhanced.csv') # Importing tweet image predictions from Udacity's servers url = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" # Get file from Udacity's servers r = requests.get(url) # Write file locally with open('image_predictions.tsv', 'wb') as file: file.write(r.content) # Import file to DataFrame tip = pd.read_csv('image_predictions.tsv', sep='\t') # Importing additional tweets data with tweepy library for Twitter API consumer_key = "PUT_YOUR_OWN_KEY" consumer_secret = "PUT_YOUR_OWN_KEY" access_token = "PUT_YOUR_OWN_KEY" access_token_secret = "PUT_YOUR_OWN_KEY" auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) # Authenticate with Twitter API and set wait on rate limit to be able to query more tweets api = tweepy.API(auth, wait_on_rate_limit=True) # Query tweets using Tweepy library and write tweets as JSON objects in a txt file not_found = [] error = [] count = 0 with open("tweet_json.txt", "w") as file: for tweet_id in ta.tweet_id: # Catch and document Exceptions try: tweet_status = api.get_status(tweet_id) tweet_json = tweet_status._json file.write(json.dumps(tweet_json)+"\n") except Exception as e: not_found.append(tweet_id) error.append(str(e)) # Load JSON objects into a Dictionary from text file with open("tweet_json.txt", "r") as file: tweet_dict = {'tweet_id': [], 'retweet_count': [], 'favorite_count': [], 'lang': []} for line in file: json_obj = json.loads(line) tweet_dict['tweet_id'].append(json_obj['id']) tweet_dict['retweet_count'].append(json_obj['retweet_count'])
tweet_dict['favorite_count'].append(json_obj['favorite_count']) tweet_dict['lang'].append(json_obj['lang']) # Convert Dictionary into DataFrame taad = pd.DataFrame(tweet_dict) ###Output _____no_output_____ ###Markdown Assess ###Code ta.sample(7) ta.info() ta.describe() ta.rating_denominator.value_counts() ta.rating_numerator.value_counts() ta.name.value_counts() tip.sample(7) tip.info() tip.p1_dog.value_counts() tip[tip['p1_dog']==True].p1.value_counts().sort_index() taad.sample(3) taad.info() taad.describe() taad[taad['favorite_count'] == 0] not_found ###Output _____no_output_____ ###Markdown Quality `Twitter Archive` table* Reply and retweet tweets are present in the Twitter Archive data* timestamp and retweeted_status_timestamp columns type should be datetime* The rating_denominator can't have 0's, but the describe method shows that it does* In the dog type columns, some entries have more than one dog type* Missing data in the name column should be nulls not "None"* Wrong data in the name column with values like "a", "the", and "an"* Erroneous datatypes (in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, and retweeted_status_user_id columns) `Image Predictions` table* Lowercase and uppercase in the breeds columns (p1, p2, and p3), also '_' instead of spaces `Twitter API` table* 17 entries fewer than the entries from the Twitter Archive* Some entries have 0 favorite_count which seems weird (https://twitter.com/i/web/status/886053434075471873) Tidiness* In the first DataFrame (ta) these 4 columns (doggo, floofer, pupper, and puppo) should be represented by 1 column since they represent 1 variable* The 3 tables should be combined into one Clean ###Code # Make copies of the DataFrames ta_copy = ta.copy() tip_copy = tip.copy() taad_copy = taad.copy() ###Output _____no_output_____ ###Markdown Tidiness Define Merge the 3 tables into 1 DataFrame using the tweet_id column Clean ###Code # Join the Twitter Archive and the Image Prediction tables
using a left join ta_tip = pd.merge(ta_copy, tip_copy, on='tweet_id', how='left') # Join the new table with the Twitter API table using a left join df = pd.merge(ta_tip, taad_copy, on='tweet_id', how='left') ###Output _____no_output_____ ###Markdown Test ###Code # Display the first 3 rows of the DataFrame df.head(3) # Display the information of the DataFrame df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2356 entries, 0 to 2355 Data columns (total 31 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object jpg_url 2075 non-null object img_num 2075 non-null float64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null object p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null object p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null object retweet_count 2335 non-null float64 favorite_count 2335 non-null float64 lang 2335 non-null object dtypes: float64(10), int64(3), object(18) memory usage: 589.0+ KB ###Markdown Define Melt the 4 columns (doggo, floofer, pupper, and puppo) into 1 column since they represent 1 variable, and ignore the ones that have more than one dog type.
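The melt-then-filter idea described here can be sketched on toy data (illustrative values, one stage per tweet at most):

```python
import pandas as pd

# Toy wide frame standing in for the archive (illustrative values)
toy = pd.DataFrame({'tweet_id': [1, 2],
                    'doggo': ['None', 'doggo'],
                    'pupper': ['pupper', 'None']})
# Wide -> long: one row per (tweet_id, candidate stage)
long = pd.melt(toy, id_vars=['tweet_id'], value_vars=['doggo', 'pupper'],
               var_name='dog_type_all', value_name='dog_type')
long = long[long['dog_type'] != 'None']        # keep real stages only
long = long[~long['tweet_id'].duplicated()]    # at most one stage per tweet_id
print(long.sort_values('tweet_id')['dog_type'].tolist())  # ['pupper', 'doggo']
```

The long frame can then be merged back on tweet_id, exactly as the Clean cell below does with the full dataset.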
Clean ###Code # Melt the 4 columns of the dog types and save them to a new DataFrame df2 = pd.melt(df, id_vars=['tweet_id'], value_vars=['doggo', 'floofer', 'pupper', 'puppo'], var_name='dog_type_all', value_name='dog_type') # Remove the rows without any dog type df2 = df2[df2['dog_type'] != "None"] # Remove the entries that have more than one dog type df2 = df2[np.logical_not(df2.tweet_id.duplicated())] # Join the new DataFrame to the original DataFrame using a left join df = pd.merge(df, df2, on='tweet_id', how='left') # Drop old dog type columns df = df.drop(['dog_type_all', 'doggo', 'floofer', 'pupper', 'puppo'], axis = 1) ###Output _____no_output_____ ###Markdown Test ###Code # Display the information of the DataFrame df.info() # Display the value counts for the dog_type column df.dog_type.value_counts() ###Output _____no_output_____ ###Markdown Quality Define Remove the 17 entries that were not found with the Twitter API from the Twitter Archive table Clean ###Code # Drop the rows with tweet_id in the not_found list df = df[np.logical_not(df['tweet_id'].isin(not_found))] ###Output _____no_output_____ ###Markdown Test ###Code # Try to look up the row with one of the not found ids df[df['tweet_id'] == not_found[0]] ###Output _____no_output_____ ###Markdown Define Fix data types for tweet_id, in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, timestamp, retweeted_status_timestamp, img_num, p1_dog, p2_dog, p3_dog, retweet_count, favorite_count, lang, and dog_type ###Code # Display the information of the DataFrame before changing df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2335 entries, 0 to 2355 Data columns (total 28 columns): tweet_id 2335 non-null int64 in_reply_to_status_id 77 non-null float64 in_reply_to_user_id 77 non-null float64 timestamp 2335 non-null object source 2335 non-null object text 2335 non-null object retweeted_status_id 166 non-null float64 retweeted_status_user_id 166 non-null
float64 retweeted_status_timestamp 166 non-null object expanded_urls 2277 non-null object rating_numerator 2335 non-null int64 rating_denominator 2335 non-null int64 name 2335 non-null object jpg_url 2064 non-null object img_num 2064 non-null float64 p1 2064 non-null object p1_conf 2064 non-null float64 p1_dog 2064 non-null object p2 2064 non-null object p2_conf 2064 non-null float64 p2_dog 2064 non-null object p3 2064 non-null object p3_conf 2064 non-null float64 p3_dog 2064 non-null object retweet_count 2335 non-null float64 favorite_count 2335 non-null float64 lang 2335 non-null object dog_type 378 non-null object dtypes: float64(10), int64(3), object(15) memory usage: 529.0+ KB ###Markdown Clean ###Code # Change the data types df = df.astype({"tweet_id": str, "in_reply_to_status_id": str, "in_reply_to_user_id": str, "retweeted_status_id": str, "retweeted_status_user_id": str, "img_num": 'category', "p1_dog": bool, "p2_dog": bool, "p3_dog": bool, "retweet_count": int, "favorite_count": int, "lang": 'category', "dog_type": 'category'}, errors = 'ignore') # Change the time-based columns to datetime df['timestamp'] = pd.to_datetime(df['timestamp']) df['retweeted_status_timestamp'] = pd.to_datetime(df['retweeted_status_timestamp']) ###Output _____no_output_____ ###Markdown Test ###Code # Display the information of the DataFrame after changing the data types df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2335 entries, 0 to 2355 Data columns (total 28 columns): tweet_id 2335 non-null object in_reply_to_status_id 2335 non-null object in_reply_to_user_id 2335 non-null object timestamp 2335 non-null datetime64[ns] source 2335 non-null object text 2335 non-null object retweeted_status_id 2335 non-null object retweeted_status_user_id 2335 non-null object retweeted_status_timestamp 166 non-null datetime64[ns] expanded_urls 2277 non-null object rating_numerator 2335 non-null int64 rating_denominator 2335 non-null int64 name 2335 non-null object jpg_url
2064 non-null object img_num 2064 non-null category p1 2064 non-null object p1_conf 2064 non-null float64 p1_dog 2335 non-null bool p2 2064 non-null object p2_conf 2064 non-null float64 p2_dog 2335 non-null bool p3 2064 non-null object p3_conf 2064 non-null float64 p3_dog 2335 non-null bool retweet_count 2335 non-null int64 favorite_count 2335 non-null int64 lang 2335 non-null category dog_type 378 non-null category dtypes: bool(3), category(3), datetime64[ns](2), float64(3), int64(4), object(13) memory usage: 434.0+ KB ###Markdown Define Remove the reply and retweet tweets from the DataFrame Clean ###Code df['in_reply_to_status_id'].value_counts()['nan'] # Drop the rows with any value in the in_reply_to_status_id and retweeted_status_id columns df = df[df['in_reply_to_status_id'] == "nan"] df = df[df['retweeted_status_id'] == "nan"] ###Output _____no_output_____ ###Markdown Test ###Code # Check the number of non-null values in the in_reply_to_status_id and retweeted_status_id columns len(df[df['in_reply_to_status_id'] != "nan"]), len(df[df['retweeted_status_id'] != "nan"]) # Remove unneeded columns df = df.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis = 1) ###Output _____no_output_____ ###Markdown Define Remove the erroneous names from the name column, including things like "a", "an", and the string "None".
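The lowercase-name heuristic used in the Clean cell below can be sketched on a toy Series (illustrative names): real WeRateDogs names are capitalized, so lowercase tokens and the literal "None" get nulled.

```python
import pandas as pd

# Toy name series (illustrative): real names are capitalized, junk is lowercase
names = pd.Series(['Charlie', 'a', 'None', 'Lucy', 'the'])
cleaned = names.apply(lambda x: None if x.islower() or x == 'None' else x)
print(cleaned.isnull().sum())  # 3
```

Note the heuristic would miss a genuinely lowercase dog name, but none appear in this archive's extraction pattern.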
Clean ###Code # Apply a lambda function that replaces the "None" and the lowercase strings with nulls df['name'] = df['name'].apply(lambda x: None if (x.islower()) | (x == "None") else x) ###Output _____no_output_____ ###Markdown Test ###Code # Check the value counts for the names column df.name.value_counts() # Test some of the values to be sure they are not there df[np.logical_or(df['name'] == 'a', df['name'] == 'None')] ###Output _____no_output_____ ###Markdown Define Fix the formatting for the breeds columns (p1, p2, and p3) and replace '_' with spaces Clean ###Code # Apply a lambda function that replaces the "_" with " " and converts the string to title case df[['p1','p2','p3']] = df[['p1','p2','p3']].apply(lambda x: x.str.replace("_", " ").str.title()) ###Output _____no_output_____ ###Markdown Test ###Code # Display the 3 columns df[['p1','p2','p3']] ###Output _____no_output_____ ###Markdown Define The rating_denominator can't have 0's, but the describe method shows that it does Clean The issue was in the data for replies, so no need to take any further actions Test ###Code # Check if there are any rows with 0's in the rating_denominator column df[df['rating_denominator'] == 0] ###Output _____no_output_____ ###Markdown Define Entries have 0 favorite_count Clean The issue was in the retweeted data, so no need to take any further actions Test ###Code # Check if there are any rows with 0's in the favorite_count column df[df['favorite_count'] == 0] ###Output _____no_output_____ ###Markdown Storing the data ###Code # Store the DataFrame into a csv file df.to_csv('twitter_archive_master.csv', index=False) ###Output _____no_output_____ ###Markdown Analysis and visualization ###Code df.info() df.describe() # Get the value of the first timestamp after sorting df.timestamp.sort_values().iloc[0] # Divide the number of 'en' values by all the non-null values df.lang.value_counts()['en'] / len(df[~df['lang'].isnull()]) # Get the count of the different values
df.name.value_counts() # Divide the number of 'pupper' values by all the non-null values df.dog_type.value_counts()['pupper'] / len(df[~df['dog_type'].isnull()]) # Create a temp Series by dividing the numerator by the denominator for the ratings and getting its max value max(df.rating_numerator / df.rating_denominator) * 100 # Plot a scatter plot between the number of favorites and retweets plt.scatter(x = df.retweet_count, y = df.favorite_count) # Add the title and labels plt.title("The relationship between the number of favorites and retweets") plt.xlabel("Number of retweets") plt.ylabel("Number of favorites") # Save the plot as an image plt.savefig('fig-1.png', bbox_inches='tight') # Calculate Pearson correlation coefficient for the number of favorites and retweets np.corrcoef(df.retweet_count, df.favorite_count) ###Output _____no_output_____ ###Markdown Assessment and Tidying 1. Assessment of the CSV file: * there are many null elements in the name column, but they are filled in as NONE. * the timestamp column is stored as a string when it should be a DATE. 2. Tidying of the CSV file: * I believe the in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, and retweeted_status_timestamp columns are unnecessary for the analysis. ###Code df.head() df.info() df.describe() #check for duplicated data df.duplicated() #check for null data df.isnull() #check the sum of the null data df.isnull().sum() #count of the names df['name'].value_counts() df['source'].value_counts() ###Output _____no_output_____ ###Markdown 1. Assessment of the tsv file: * the csv file shows 2175 tweets, but the tsv file shows 2075 dog images, so 100 image files are missing 2. Tidying of the tsv file: * I don't think three columns are needed to indicate the breed: p1 gives the name, p1_conf the prediction confidence, and p1_dog whether it is true or false. Two columns would be enough.
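The two-column idea for the prediction table can be sketched with `where`: keep the breed only where the classifier actually saw a dog, collapsing p1 and p1_dog into one nullable column (toy values, illustrative):

```python
import pandas as pd

# Toy prediction rows (illustrative values)
toy = pd.DataFrame({'p1': ['pug', 'paper_towel'],
                    'p1_conf': [0.9, 0.8],
                    'p1_dog': [True, False]})
# Keep the breed only where the top prediction is actually a dog
toy['breed'] = toy['p1'].where(toy['p1_dog'])
print(toy['breed'].tolist())
```

A null breed then encodes "top prediction was not a dog", so the boolean flag becomes redundant.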
###Code #function to download a file from the given url def get_file(url): name = url.split('/')[-1] response = requests.get(url) if response.status_code == 200: try: with open(name, mode='wb') as file: file.write(response.content) print('File {} downloaded successfully!'.format(name)) except IOError as err: name = "" print("I/O error: {}".format(err)) else: name = "" print('Invalid URL.') return name #download the predictions file image_url= 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' name = get_file(image_url) #read the tsv file df_im = pd.read_csv(name, delimiter = '\t') df_im.head() df_im.info() df_im.describe() #count the breed names present in the table df_im['p1'].value_counts() #read the tweets' json data from the API import tweepy consumer_key = 'YOUR CONSUMER KEY' consumer_secret = 'YOUR CONSUMER SECRET' access_token = 'YOUR ACCESS TOKEN' access_secret = 'YOUR ACCESS SECRET' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth) #download the tweets' json data and track the errors tweet_dog = 'tweet_json.txt' ids = df.tweet_id.values errors = [] try: with open(tweet_dog, 'w') as file: for i, id in enumerate(ids): clear_output() print('Progress: {:.2f}%\nErrors: {}'.format(float((i+1)*100/len(ids)), len(errors))) try: tweet = api.get_status(id, tweet_mode='extended') file.write(json.dumps(tweet._json)) if i < len(ids) - 1: file.write('\n') except tweepy.TweepError: errors.append(id) clear_output() print('File {} written successfully!'.format(tweet_dog)) except IOError as err: print("I/O error: {}".format(err)) if(len(errors)) > 0: error_json_file = 'id_json_errors.txt' print('\nError fetching ids: {}'.format(errors)) try: with open(error_json_file, 'w') as file:
for id in errors: file.write(str(id) + '\n') print('\nFile {} written successfully!'.format(error_json_file)) except IOError as err: print("\nI/O error: {}".format(err)) #turn the json objects into a list t_list = [] try: with open(tweet_dog) as file: for line in file: current_json = json.loads(line) tweet_id = current_json['id'] retweet_count = current_json['retweet_count'] favorite_count = current_json['favorite_count'] t_list.append({'tweet_id': int(tweet_id), 'retweet_count': int(retweet_count), 'favorite_count': int(favorite_count)}) except FileNotFoundError as err: print("\nI/O error: {}".format(err)) df_tw = pd.DataFrame(t_list, columns = ['tweet_id', 'retweet_count', 'favorite_count']) df_tw.head() df_tw.info() df_tw.describe() ###Output _____no_output_____ ###Markdown Cleaning ###Code arquivo = df.copy() image = df_im.copy() tweet = df_tw.copy() ###Output _____no_output_____ ###Markdown Define: we will drop the records that have no image, using an inner join Code ###Code #drop the tweet ids that have no associated image arquivo = pd.merge(arquivo, tweet, on=['tweet_id']) arquivo = pd.merge(arquivo, image, on=['tweet_id']) ###Output _____no_output_____ ###Markdown Test ###Code arquivo.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1525 entries, 0 to 1524 Data columns (total 30 columns): tweet_id 1525 non-null int64 in_reply_to_status_id 14 non-null float64 in_reply_to_user_id 14 non-null float64 timestamp 1525 non-null object source 1525 non-null object text 1525 non-null object retweeted_status_id 70 non-null float64 retweeted_status_user_id 70 non-null float64 retweeted_status_timestamp 70 non-null object expanded_urls 1525 non-null object rating_numerator 1525 non-null int64 rating_denominator 1525 non-null int64 name 1525 non-null object doggo 1525 non-null object floofer 1525 non-null object pupper 1525 non-null object puppo 1525 non-null object retweet_count 1525 non-null int64
favorite_count 1525 non-null int64 jpg_url 1525 non-null object img_num 1525 non-null int64 p1 1525 non-null object p1_conf 1525 non-null float64 p1_dog 1525 non-null bool p2 1525 non-null object p2_conf 1525 non-null float64 p2_dog 1525 non-null bool p3 1525 non-null object p3_conf 1525 non-null float64 p3_dog 1525 non-null bool dtypes: bool(3), float64(7), int64(6), object(14) memory usage: 338.1+ KB ###Markdown Definition: delete the columns that are unnecessary. Code ###Code #drop the irrelevant columns arquivo.drop(['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis = 1, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code arquivo.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1525 entries, 0 to 1524 Data columns (total 27 columns): tweet_id 1525 non-null int64 in_reply_to_status_id 14 non-null float64 in_reply_to_user_id 14 non-null float64 timestamp 1525 non-null object source 1525 non-null object text 1525 non-null object expanded_urls 1525 non-null object rating_numerator 1525 non-null int64 rating_denominator 1525 non-null int64 name 1525 non-null object doggo 1525 non-null object floofer 1525 non-null object pupper 1525 non-null object puppo 1525 non-null object retweet_count 1525 non-null int64 favorite_count 1525 non-null int64 jpg_url 1525 non-null object img_num 1525 non-null int64 p1 1525 non-null object p1_conf 1525 non-null float64 p1_dog 1525 non-null bool p2 1525 non-null object p2_conf 1525 non-null float64 p2_dog 1525 non-null bool p3 1525 non-null object p3_conf 1525 non-null float64 p3_dog 1525 non-null bool dtypes: bool(3), float64(5), int64(6), object(13) memory usage: 302.3+ KB ###Markdown Definition: convert the timestamp column from string to datetime Code ###Code #convert the timestamp column from string to datetime arquivo.timestamp = pd.to_datetime(arquivo.timestamp) ###Output _____no_output_____ ###Markdown Test ###Code arquivo.info() ###Output
<class 'pandas.core.frame.DataFrame'> Int64Index: 1525 entries, 0 to 1524 Data columns (total 27 columns): tweet_id 1525 non-null int64 in_reply_to_status_id 14 non-null float64 in_reply_to_user_id 14 non-null float64 timestamp 1525 non-null datetime64[ns] source 1525 non-null object text 1525 non-null object expanded_urls 1525 non-null object rating_numerator 1525 non-null int64 rating_denominator 1525 non-null int64 name 1525 non-null object doggo 1525 non-null object floofer 1525 non-null object pupper 1525 non-null object puppo 1525 non-null object retweet_count 1525 non-null int64 favorite_count 1525 non-null int64 jpg_url 1525 non-null object img_num 1525 non-null int64 p1 1525 non-null object p1_conf 1525 non-null float64 p1_dog 1525 non-null bool p2 1525 non-null object p2_conf 1525 non-null float64 p2_dog 1525 non-null bool p3 1525 non-null object p3_conf 1525 non-null float64 p3_dog 1525 non-null bool dtypes: bool(3), datetime64[ns](1), float64(5), int64(6), object(12) memory usage: 302.3+ KB ###Markdown Definition: delete the boolean columns from the image table, since it already has columns indicating the dog breed name as well as its code.
Code ###Code #drop the columns from both tables image.drop(['p1_dog', 'p2_dog', 'p3_dog'], axis = 1, inplace = True) arquivo.drop(['p1_dog', 'p2_dog', 'p3_dog'], axis = 1, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code image.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 9 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p2 2075 non-null object p2_conf 2075 non-null float64 p3 2075 non-null object p3_conf 2075 non-null float64 dtypes: float64(3), int64(2), object(4) memory usage: 146.0+ KB ###Markdown Save clean data ###Code #save the cleaned file arqlimpo = 'twitter_archive_master.csv' arquivo.to_csv(arqlimpo, index= False) df_l = pd.read_csv(arqlimpo) df_l.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1525 entries, 0 to 1524 Data columns (total 24 columns): tweet_id 1525 non-null int64 in_reply_to_status_id 14 non-null float64 in_reply_to_user_id 14 non-null float64 timestamp 1525 non-null object source 1525 non-null object text 1525 non-null object expanded_urls 1525 non-null object rating_numerator 1525 non-null int64 rating_denominator 1525 non-null int64 name 1525 non-null object doggo 1525 non-null object floofer 1525 non-null object pupper 1525 non-null object puppo 1525 non-null object retweet_count 1525 non-null int64 favorite_count 1525 non-null int64 jpg_url 1525 non-null object img_num 1525 non-null int64 p1 1525 non-null object p1_conf 1525 non-null float64 p2 1525 non-null object p2_conf 1525 non-null float64 p3 1525 non-null object p3_conf 1525 non-null float64 dtypes: float64(5), int64(6), object(13) memory usage: 286.0+ KB ###Markdown Project: Wrangle and Analyze WeRateDogs Data Table of Contents Introduction Gather Data Assess Data Clean Data Analyze and Visualize Data Summary Introduction About the data: three data sources are used in
this project: > **1**. The tweet archive of Twitter user @dog_rates, also known as WeRateDogs. This archive contains basic tweet data (tweet ID, timestamp, text, etc.) for all 5000+ of their tweets until Aug 1, 2017. > **2**. Additional data such as likes and retweets extracted via the Twitter API > **3**. Predictions of dog breeds based on their images. This is done by running every image through a neural network classifier built by Udacity. Task: > **1**. Wrangle and analyze the WeRateDogs datasets > **2**. Build at least three insights and create visualizations > **3**. Write two reports. One as an internal document which describes the wrangling maneuver. The other as a magazine post for external use, which communicates the findings. Gather Data ###Code #Import python libs for file downloads and data wrangling and analysis import requests import pandas as pd import os import tweepy import numpy as np import json from timeit import default_timer as timer import matplotlib.pyplot as plt import seaborn as sns #set pandas view option to see the entire text pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) pd.set_option('display.max_colwidth', None) #Download the image prediction file from the web and save it to the folder img_folder='image_pred' if not os.path.exists(img_folder): os.makedirs(img_folder) url='https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' r=requests.get(url) with open(os.path.join(img_folder,url.split('/')[-1]),mode='wb') as f: f.write(r.content) #Load personal API keys consumer_key = '' consumer_secret = '' access_token = '' access_secret = '' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True ) #Load the twitter archive file into the data frame twitter_arch=pd.read_csv('twitter-archive-enhanced.csv') #Load image_predictions
file into the dataframe image_pred=pd.read_csv('image-predictions.tsv',sep="\t") #Add each tweet to a new line of tweet_json.text fails={} start_time=timer() count=0 with open('tweet_json.txt', 'w', encoding='utf8') as f: for tweet_id in twitter_arch['tweet_id']: count+=1 print(str(count)+" : "+str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') json.dump(tweet._json, f) f.write('\n') except tweepy.TweepError as e: fails[tweet_id]=e print('Fail'+str(tweet_id)) end_time=timer() print(count) print(end_time-start_time) print(fails) #Load tweets data into dataframe tweets=pd.read_json('tweet_json.txt',lines='true') ###Output _____no_output_____ ###Markdown Assess Data `image_pred`dataset: ###Code #Check columns,datatypes and null values image_pred.info() #Sample data from image_pred image_pred.sample(10) image_pred.p1.value_counts()[:10] image_pred.p2.value_counts()[:10] image_pred.p3.value_counts()[:10] ###Output _____no_output_____ ###Markdown Quality issue 1: some breeds with the first letter capitalized, others not. Quality issue 2: twitterID should be string not integer Quality issue 3: column names are not informative. ###Code #How many predictions are not dogs? 
image_pred.p1_dog.value_counts() image_pred.p2_dog.value_counts() image_pred.p3_dog.value_counts() ###Output _____no_output_____ ###Markdown `twitter_arch`dataset: ###Code #Check columns,datatypes and null values twitter_arch.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown Quality issue 4:Timestamp and Retweet timestamp should be datetime, not object. All ids should be string not float ###Code #Sample the data and get some feeling about the quality and tidiness twitter_arch.sample(10) ###Output _____no_output_____ ###Markdown Tidiness issue 1: Dog stages should be one column with 5 possible outcomes (4 stages and None). Data type for the new column "stage" should be categorical Quality issue 5: There are retweets which we don't need Quality issue 6: There are many columns irrelevant to the analysis ###Code #Check dog names twitter_arch.name.unique() ###Output _____no_output_____ ###Markdown Quality issue 7: Some names are captured wrong. Get a list of names without the first letter capitalized. They don't look like dog names. ###Code nl=list(twitter_arch.name.unique()) name_error=[] for n in nl: if n[0].islower(): name_error.append(n) print(n) #Double check why the name is wrong. These are tweets without a name specified. 
They should be None instead twitter_arch[twitter_arch.name.isin(name_error)][['name','text']] #Check the distribution of dog ratings twitter_arch.rating_numerator.value_counts() ###Output _____no_output_____ ###Markdown Quality Issue 8: There are errors in numerator column. ###Code twitter_arch.rating_denominator.value_counts() ###Output _____no_output_____ ###Markdown Quality Issue 9: There are errors in denominator column ###Code #check duplicated data twitter_arch.duplicated().sum() #How many dogs with no stage records? len(twitter_arch[(twitter_arch.doggo=='None') & (twitter_arch.floofer=='None') & (twitter_arch.pupper=='None') &(twitter_arch.puppo=='None')]) #source is illegible. Check what information we can find there. twitter_arch.source.value_counts() ###Output _____no_output_____ ###Markdown Quality issue 10: source information needs to be cleaned up. It should show which application/device people used to access twitter `tweets`dataset: ###Code #Check columns,datatypes and null values tweets.info() tweets.sample(3) #Look at the statistics for quantitative variables tweets.describe() #Most of the tweets are in english so there is probably not many insights we can get from this column tweets.lang.value_counts() ###Output _____no_output_____ ###Markdown Quality Issue 11: TwitterId should be string not integer Tidiness issue 2: `favorite_count` and `retweet_count`should be part of the `Twitter_arch` table. Summary:Quality`twitter_arch`- Timestamp and Retweet timestamp should be datetime, not object. All ids should be string not float. - Remove the retweets.- Remove the columns irrelevant to the analysis.- Correct the wrongly captured names and remove those that cannot be fixed.- Correct the wrongly captured numerator and denominator ratings and remove those that cannot be fixed.- Source information needs to be cleaned up. 
It should show which application/device people used to access twitter.- Dog stages are wrongly captured.`image_pred`- Some breeds with the first letter capitalized, others not. Change all breed names to lowercase and remove the "-" dash in between.- tweet_id should be string not integer.- Column names 'p1','p2','p3','p1-dog','p2-dog',and 'p3-dog' are not informative. Rename them.`tweets`- Ids should be string not integer.Tidiness`twitter_arch`- Dog stages should be one column with 5 possible outcomes (4 stages and None). Data type for the new column "stage" should be categorical.- All three datasets should be merged into one on twitter_id Clean Data ###Code #make a copy for each dataset: twitter_arch_clean=twitter_arch.copy() image_pred_clean=image_pred.copy() tweets_clean=tweets.copy() ###Output _____no_output_____ ###Markdown `twitter_arch_clean` **Define:**Remove the retweets**Code:** ###Code twitter_arch_clean=twitter_arch_clean[twitter_arch_clean.in_reply_to_status_id.isnull()] twitter_arch_clean=twitter_arch_clean[~twitter_arch_clean.text.str.contains('RT @')] ###Output _____no_output_____ ###Markdown **Test:** ###Code twitter_arch_clean.in_reply_to_status_id.notnull().sum() len(twitter_arch_clean[twitter_arch_clean.text.str.contains('RT @')]) ###Output _____no_output_____ ###Markdown **Define:**Drop the irrelevant columns**Code:** ###Code #Drop the columns irrelevant to the analysis twitter_arch_clean.drop(['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id','retweeted_status_user_id','retweeted_status_timestamp','expanded_urls'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown **Test:** ###Code #double check the columns list(twitter_arch_clean) ###Output _____no_output_____ ###Markdown **Define:**Change tweet_id datatype to string**Code:** ###Code #Correct the datatypes twitter_arch_clean.tweet_id=twitter_arch_clean.tweet_id.astype(str) ###Output _____no_output_____ ###Markdown **Test:** ###Code twitter_arch_clean.info() 
###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 11 columns): tweet_id 2097 non-null object timestamp 2097 non-null object source 2097 non-null object text 2097 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 2097 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dtypes: int64(2), object(9) memory usage: 196.6+ KB ###Markdown **Define:**Remove the +0000 in timestamp column and change the datatype to datetime**Code:** ###Code #Remove the +0000 in timestamp twitter_arch_clean.timestamp=twitter_arch_clean.timestamp.apply(lambda x: x[:-5]) #Change the datatype to datetime twitter_arch_clean.timestamp=pd.to_datetime(twitter_arch_clean.timestamp,format='%Y/%m/%d %H:%M:%S') ###Output _____no_output_____ ###Markdown **Test:** ###Code twitter_arch_clean.timestamp.sample(5) ###Output _____no_output_____ ###Markdown **Define:**Correct the wrong names. Change those in the error list to 'None'**Code:** ###Code #Correct the wrongly captured names: change those in the error list to None twitter_arch_clean.name=twitter_arch_clean.name.apply(lambda x: 'None' if x in (name_error) else x) ###Output _____no_output_____ ###Markdown **Test:** ###Code #Double check whether erroneous names are changed to None len(twitter_arch_clean[twitter_arch_clean.name.isin(name_error)]) ###Output _____no_output_____ ###Markdown **Define:**Correct the errors in numerators and denominators. Set all denominators to 10 and scale the numerators accordingly. Name the clean rating column as 'rating'.**Code**: ###Code #First look at the non-standardized denominators. i.e.
those not equal to 10 twitter_arch_clean.query('rating_denominator!=10')[['text','rating_numerator', 'rating_denominator']] #Correct the wrongly captured scores and twitter_arch_clean.loc[1202,['rating_numerator','rating_denominator']]=(11,10) twitter_arch_clean.loc[1662,['rating_numerator','rating_denominator']]=(10,10) twitter_arch_clean.loc[2335,['rating_numerator','rating_denominator']]=(9,10) twitter_arch_clean.loc[1068,['rating_numerator','rating_denominator']]=(14,10) twitter_arch_clean.loc[1165,['rating_numerator','rating_denominator']]=(13,10) #Remove the entry without rating twitter_arch_clean=twitter_arch_clean[(twitter_arch_clean.rating_numerator!=24)] #Double check before the standardization twitter_arch_clean.query('rating_denominator!=10')[['text','rating_numerator', 'rating_denominator']] #Standardize the denominator by setting it to 10 and scaling the numerator twitter_arch_clean['rating']=twitter_arch_clean.apply(lambda x: int(10*x.rating_numerator/x.rating_denominator) if (x.rating_denominator!=10) else x.rating_numerator,axis=1) #The standardization looks good twitter_arch_clean.query('rating_denominator!=10')[['rating','rating_numerator','rating_denominator']] #Second, look at rating outliers after standardization: twitter_arch_clean.query('rating>15 |rating<6')[['text','rating']] #Correct the wrongly captured rating and round them to the nearest integer: twitter_arch_clean.loc[45,'rating']=14 twitter_arch_clean.loc[763,'rating']=11 twitter_arch_clean.loc[1712,'rating']=11 twitter_arch_clean.loc[695,'rating']=10 #Remove the symbolic 1776 and the entry without rating. 
twitter_arch_clean.drop(index=979,inplace=True) ###Output _____no_output_____ ###Markdown **Test:** ###Code #Check the rating distribution again twitter_arch_clean.rating.value_counts() ###Output _____no_output_____ ###Markdown **Define:**Drop the redundant rating columns and just keep the clean 'rating' column**Code:** ###Code #Drop the original numerator and denominator columns as rating column alone suffices now: twitter_arch_clean.drop(['rating_numerator','rating_denominator'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown **Test:** ###Code #Double check twitter_arch_clean.sample(5) ###Output _____no_output_____ ###Markdown **Define:**Clean up the source column by extracting the relevant text.**Code:** ###Code #Extract relevant information from the source column and assign it to the source column again twitter_arch_clean.source=twitter_arch_clean.source.str.extract(r'\>(.*?)\<') ###Output _____no_output_____ ###Markdown **Test:** ###Code #Double check the extraction twitter_arch_clean.source.value_counts() ###Output _____no_output_____ ###Markdown **Define:**Create a single column for dog stages**Code:** ###Code #Create a single column for dog stage doggo=twitter_arch_clean.doggo.replace('None','') floofer=twitter_arch_clean.floofer.replace('None','') pupper=twitter_arch_clean.pupper.replace('None','') puppo=twitter_arch_clean.puppo.replace('None','') twitter_arch_clean['stage']=doggo+floofer+pupper+puppo #Check the newly created column 'stage' twitter_arch_clean['stage'].value_counts() #Double check with two stages.
They probably have stages wrongly captured twitter_arch_clean.query('stage ==["doggopupper","doggofloofer","doggopuppo"]') #Fix the wrong stages twitter_arch_clean.loc[191,'stage']='puppo' twitter_arch_clean.loc[200,'stage']='doggo' twitter_arch_clean.loc[460,'stage']='pupper' twitter_arch_clean.loc[575,'stage']='pupper' twitter_arch_clean.loc[705,'stage']='doggo' #Drop the rest which contain more than one dog or don't provide any info twitter_arch_clean.drop(twitter_arch_clean.query('stage ==["doggopupper","doggofloofer","doggopuppo"]').index,inplace=True) #change the whitespace back to "none" twitter_arch_clean.stage.replace('','none',inplace=True); ###Output _____no_output_____ ###Markdown **Test:** ###Code #double check the stage column twitter_arch_clean.stage.value_counts() ###Output _____no_output_____ ###Markdown **Define:**Change the stage column to categorical and drop the redundant columns**Code:** ###Code #Change the stage column datatype to categorical twitter_arch_clean.stage=twitter_arch_clean.stage.astype('category'); #Drop the old dog stage columns twitter_arch_clean.drop(['doggo','floofer','pupper','puppo'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown **Test:** ###Code #Check the result twitter_arch_clean.info() twitter_arch_clean.sample(5) ###Output _____no_output_____ ###Markdown `image_pred_clean` ###Code image_pred_clean=image_pred.copy() ###Output _____no_output_____ ###Markdown **Define:**Standardize the dog breed names by removing the dash and changing all to lowercase**Code:** ###Code #Standardize the dog breed names by removing the dash and changing all to lowercase image_pred_clean[['p1','p2','p3']]=image_pred_clean[['p1','p2', 'p3']].apply(lambda x:x.str.lower().str.replace("_"," "),axis=1) ###Output _____no_output_____ ###Markdown **Test:** ###Code #double check image_pred_clean[['p1','p2','p3']].sample(10) ###Output _____no_output_____ ###Markdown **Define:**Change the tweet_id datatype to str**Code:** ###Code 
#change the tweet_id datatype to str image_pred_clean.tweet_id=image_pred_clean.tweet_id.astype(str) ###Output _____no_output_____ ###Markdown **Test:** ###Code image_pred_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null object jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(1), object(5) memory usage: 152.1+ KB ###Markdown **Define:**Create a new column 'dog_or_not' which synthesizes the information from 3 predictions> If 3 out of 3 predictions point to dog- Yes, it's a dog> If The one or two out of 3 predictions point(s) to dog- Maybe, it's a dog> If None of the predictions points to dog- No, it's not a dogDrop the irrelevant columns and rename the newly created one**Code:** ###Code predict={0:'No',1:'Maybe',2:'Maybe',3:'Yes'} image_pred_clean['predictions']=image_pred_clean.p1_dog*1 + image_pred_clean.p2_dog*1 + image_pred_clean.p3_dog*1 #Mapping from bools to answers image_pred_clean['predictions']=image_pred_clean['predictions'].apply(lambda x: predict[x]) #drop columns about the 2nd and the 3rd predictions and also p1_dog image_pred_clean.drop(['p1_dog','p2','p2_conf','p2_dog','p3','p3_conf','p3_dog'],axis=1,inplace=True) #Rename columns: image_pred_clean.rename(columns={'p1':'pred_breed','p1_conf':'pred_confidence','predictions':'dog_or_not'},inplace=True); ###Output _____no_output_____ ###Markdown **Test:** ###Code image_pred_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 6 columns): tweet_id 2075 non-null object jpg_url 2075 non-null object img_num 2075 non-null int64 pred_breed 2075 non-null object 
pred_confidence 2075 non-null float64 dog_or_not 2075 non-null object dtypes: float64(1), int64(1), object(4) memory usage: 97.4+ KB ###Markdown `tweets_clean` **Define:**Create a new dataframe with just twitter id, retweet_count and favorite_count. Rename the id column and change the datatype to string.**Code:** ###Code #Create a new dataframe with just twitter id, retweet_count and favorite_count df=tweets_clean[['id','retweet_count','favorite_count']] #Change datatype and column name for twitter_id df.id=df.id.astype(str) df.rename(columns={'id':'tweet_id'},inplace=True) ###Output _____no_output_____ ###Markdown **Test:** ###Code #double check df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2330 entries, 0 to 2329 Data columns (total 3 columns): tweet_id 2330 non-null object retweet_count 2330 non-null int64 favorite_count 2330 non-null int64 dtypes: int64(2), object(1) memory usage: 54.7+ KB ###Markdown Merge the three dataframes **Define:**Join the three tables on tweet_id**Code:** ###Code # Merge the three dataframes dog_rating=pd.merge((pd.merge(twitter_arch_clean,image_pred_clean,on='tweet_id',how='left')),df,on='tweet_id',how='left') ###Output _____no_output_____ ###Markdown **Test:** ###Code #Check nulls and datatypes dog_rating.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2089 entries, 0 to 2088 Data columns (total 14 columns): tweet_id 2089 non-null object timestamp 2089 non-null datetime64[ns] source 2089 non-null object text 2089 non-null object name 2089 non-null object rating 2089 non-null int64 stage 2089 non-null category jpg_url 1964 non-null object img_num 1964 non-null float64 pred_breed 1964 non-null object pred_confidence 1964 non-null float64 dog_or_not 1964 non-null object retweet_count 2081 non-null float64 favorite_count 2081 non-null float64 dtypes: category(1), datetime64[ns](1), float64(4), int64(1), object(7) memory usage: 230.7+ KB ###Markdown **Define:**Drop the missing values and reset
the index**Code:** ###Code #Drop nulls dog_rating.dropna(inplace=True) # Reset the index dog_rating.reset_index(drop=True, inplace=True) ###Output _____no_output_____ ###Markdown **Test:** ###Code #A final check! dog_rating.info() #Save the clean data to csv dog_rating.to_csv('twitter_archive_master.csv',index=False) ###Output _____no_output_____ ###Markdown Analyze and Visualize Data 1. What is the distribution of the rankings for dogs recognized by the image predictor? ###Code #Select photos which pass the prediction as dogs dog_r=dog_rating.query('dog_or_not=="Yes"') plt.hist(dog_r['rating']) plt.axvline(dog_r['rating'].median(),color='r',linestyle='--',label='median: '+str(dog_r['rating'].median())) plt.xlabel('Dog Ratings') plt.ylabel('Occurrence') plt.title('Distribution Of Dog Ratings') plt.legend() plt.show(); ###Output _____no_output_____ ###Markdown 2. What are the top 10 breeds based on the image predictor? ###Code top10_breed=dog_rating.pred_breed.value_counts()[:10] top10_breed=top10_breed/len(dog_rating)*100 sns.barplot(x=top10_breed,y=top10_breed.index) plt.xlabel('Breeds predicted by the image processor by %'); ###Output _____no_output_____ ###Markdown 98%+ of users accessed via Twitter for iPhone app, only less than 2% used other apps 3. What are the stages of these dogs? ###Code df1=dog_rating.query('stage==["puppo","doggo","pupper","floofer"]') stage=(df1.stage.value_counts()[:4])/len(df1)*100 stage.sort_values(ascending=False,inplace=True) stage sns.barplot(x=stage,y=list(stage.index)) plt.xlabel('% dog by stage') plt.show(); ###Output _____no_output_____ ###Markdown Among tweets which contain stage information, the majority (68%) concern puppers. 4. What is the relationship between rating and the number of retweets or rating and the number of favorites, with only the dog photos counted? ###Code plt.scatter('rating','retweet_count',data=dog_rating.query('dog_or_not=="Yes"')) plt.title('Number of retweets v.s.
Dog ratings') plt.show(); plt.scatter('rating','favorite_count',data=dog_rating.query('dog_or_not=="Yes"')) plt.title('Number of likes v.s. Dog ratings') plt.show(); ###Output _____no_output_____ ###Markdown 5. How do users access their twitter accounts? ###Code source=dog_rating.groupby(['source']).count()['tweet_id']/len(dog_rating)*100 source ###Output _____no_output_____ ###Markdown 6. When were these tweets posted during the day? ###Code t=(pd.DatetimeIndex(dog_rating.timestamp).hour.value_counts().sort_index())/len(dog_rating)*100 plt.plot(t.index,t) plt.xlabel('Hour of the day') plt.ylabel('% of tweets posted') plt.title('% of tweets posted during the day'); ###Output _____no_output_____ ###Markdown 7. What are the number of tweets per month? ###Code m=pd.DatetimeIndex(dog_rating.timestamp).month.value_counts().sort_index() plt.plot(m.index,m) plt.xlabel('Month') plt.ylabel('Number of tweets posted') plt.title('Number of tweets posted per month'); ###Output _____no_output_____ ###Markdown 8. What are the most popular dog names? 
###Code dog_rating.query('name!="None"').name.value_counts()[:10] ###Output _____no_output_____ ###Markdown Data Wrangling Project: WeRateDogs ###Code # import libraries import warnings import requests import os import tweepy import json import numpy as np import pandas as pd import matplotlib import matplotlib.pyplot as plt import seaborn as sns from textwrap import wrap %matplotlib inline # allow full text to be displayed pd.set_option('display.max_colwidth', None) # suppress warnings warnings.simplefilter('ignore') ###Output _____no_output_____ ###Markdown Gather ###Code # load .csv file archive = pd.read_csv('twitter-archive-enhanced.csv') # save tweet ids to list tweet_ids = list(archive.tweet_id) # specify path folder_name = r'C:\Users\t_gas\data_wrangling' if not os.path.exists(folder_name): os.makedirs(folder_name) # download and save predicted images url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open("image_predictions.tsv", mode = 'wb') as file: file.write(response.content) # load .tsv file images = pd.read_csv('image_predictions.tsv', sep = '\t', encoding = 'utf-8') # save credentials consumer_key = '' consumer_secret = '' access_token = '' access_token_secret = '' # authorize account auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) # set class paramters api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) ###Output _____no_output_____ ###Markdown For this code, I used https://stackoverflow.com/questions/28384588/twitter-api-get-tweets-with-specific-id and https://knowledge.udacity.com/questions/42488 for support. 
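The fetch loop below iterates over the archived tweet IDs and records each failure alongside the successes. That pattern can be factored into a small helper; the sketch below uses a stand-in `fetch` callable instead of the live Twitter API, so `fetch_all` and `fake_fetch` are illustrative names, not part of tweepy.

```python
# Hypothetical sketch of the fetch-and-record-errors pattern used in the
# notebook, with a stand-in fetch function rather than the real Twitter API.
def fetch_all(ids, fetch):
    results, failures = [], []
    for tweet_id in ids:
        try:
            results.append(fetch(tweet_id))
        except Exception as exc:
            # keep the id together with the reason it failed
            failures.append({"tweet_id": tweet_id, "error": str(exc)})
    return results, failures

def fake_fetch(tweet_id):
    # pretend every third id was deleted upstream
    if tweet_id % 3 == 0:
        raise ValueError("No status found with that ID.")
    return {"id": tweet_id}

results, failures = fetch_all(range(1, 7), fake_fetch)
print(len(results), len(failures))  # -> 4 2
```

Separating successes from failures this way lets the error list be inspected (or retried) afterwards without aborting the whole download.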
###Code # create empty list to store tweet ids with errors errors_list = [] with open('tweet_json.txt', 'a', encoding = 'utf8') as outfile: for tweet_id in tweet_ids: try: print(tweet_id) tweet = api.get_status(tweet_id, tweet_mode = 'extended') json.dump(tweet._json, outfile) outfile.write('\n') except Exception as e: code = e.args[0][0]['code'] error = e.args[0][0]['message'] errors_list.append({'tweet_id': tweet_id, 'code': code, 'error': error}) extra_info = pd.DataFrame(columns=['tweet_id', 'text', 'retweet_count', 'fav_count']) with open('tweet_json.txt') as f: for line in f: tweet = json.loads(line) tweet_id = tweet['id_str'] text = tweet['full_text'] retweet_count = tweet['retweet_count'] fav_count = tweet['favorite_count'] extra_info = extra_info.append(pd.DataFrame([[tweet_id, text, retweet_count, fav_count]], columns=['tweet_id', 'text', 'retweet_count', 'fav_count'])) extra_info = extra_info.reset_index(drop=True) # convert errors dict to dataframe errors = pd.DataFrame(errors_list, columns = ['tweet_id', 'code', 'error']) ###Output _____no_output_____ ###Markdown To avoid having to re-run the loop above, I will now save the dataframes as .csv files to be used in the remainder of the .ipynb notebook. ###Code # save dataframes to .csv files extra_info.to_csv(r'extra_info.csv', index=False) errors.to_csv(r'errors.csv', index=False) # import saved dataframes extra_info = pd.read_csv(r'extra_info.csv') errors = pd.read_csv(r'errors.csv') ###Output _____no_output_____ ###Markdown Before assessing my data, I will check the errors dataframe to see why certain tweets were not retrievable by using .value_counts() on the error column. ###Code errors.error.value_counts() ###Output _____no_output_____ ###Markdown It appears that two types of errors occurred. In 24 of the 25 cases, no status was found matching the tweet ID. It's possible that those tweets were deleted or there is a problem with the tweet ID. In the last case, I am not authorized to see the tweet. 
Perhaps someone set their account to private? Either way, I won't be able to solve these errors on my own. I'll have to remove these tweet IDs from the archive and images dataframes when I clean the data. AssessTo assess the dataframes, I will use a combination of visual and programmatic tools - primarily, .head(), .sample(), .info(), .describe(), .duplicated(), value_counts(), and .unique(). Any quality or tidiness issues will be recorded at the end of this section. ###Code archive.head() archive.sample(5) archive.info() archive.describe() archive_dupl = archive['tweet_id'].duplicated() archive[archive_dupl] archive.rating_denominator.value_counts() archive.name.value_counts().index.tolist() images.head() images.sample(5) images.info() images.describe() images_dupl = images['tweet_id'].duplicated() images[images_dupl] sorted(images.p1.unique()) extra_info.head() extra_info.sample(5) extra_info.info() extra_info.describe() ###Output _____no_output_____ ###Markdown Quality `archive` table+ NaNs not recognized (doggo, floofer, pupper and puppo columns)+ Erroneous datatypes (timestamp)+ Missing data + Outliers (rating_numerator and rating_denominator columns) + Incorrect entries in name column (e.g. 'one' index 924)+ Mix of tweets, retweets and replies+ Not all rankings are for dogs `images` table+ Mix of upper- and lowercase string values (p1, p2 and p3 columns) `extra_info` table + Mix of tweets and retweets+ Not all rankings are for dogs Tidiness `archive` table+ Type of dog variable split into four columns (doggo, floofer, pupper and puppo) `extra_info` table + Text column includes dog name and rating variables + Tweet_id, text columns duplicated in `archive` table other + Final merged dataframe of `archive` and `extra_info` tables contains messy data Clean To clean the data, I will first make copies of the three dataframes. Then I will follow the programmatic data cleaning process, as described in Lesson 4 Part 5.
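The define–code–test loop applied below can be kept fully programmatic by asserting on the cleaned frame instead of eyeballing `.info()` output. A minimal toy sketch (not the project data) of one such cycle, converting the sentinel string 'None' to a real missing value:

```python
import numpy as np
import pandas as pd

# Toy frame standing in for the archive: 'None' here is a string, not a NaN
toy = pd.DataFrame({"name": ["Rex", "None", "Luna"]})
assert toy["name"].isna().sum() == 0  # pandas does not treat the string as missing

# Code step: convert the sentinel string to np.nan
toy = toy.replace(to_replace="None", value=np.nan)

# Test step: programmatic check instead of visual inspection
assert toy["name"].isna().sum() == 1
```

Encoding each "Test" as an `assert` means a regression in any cleaning step fails loudly on re-run rather than slipping past a visual check.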
###Code archive_clean = archive.copy() images_clean = images.copy() extra_info_clean = extra_info.copy() ###Output _____no_output_____ ###Markdown Define + Convert NaNs to recognizable datatype Code ###Code archive_clean.replace(to_replace='None', value=np.nan, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code # check that nans are recognized archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null object 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 1611 non-null object 13 doggo 97 non-null object 14 floofer 10 non-null object 15 pupper 257 non-null object 16 puppo 30 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown Define + Convert timestamp column to datetime Code ###Code archive_clean.timestamp = pd.to_datetime(archive_clean.timestamp) ###Output _____no_output_____ ###Markdown Test ###Code # check datatype archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null datetime64[ns, UTC] 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 
non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 1611 non-null object 13 doggo 97 non-null object 14 floofer 10 non-null object 15 pupper 257 non-null object 16 puppo 30 non-null object dtypes: datetime64[ns, UTC](1), float64(4), int64(3), object(9) memory usage: 313.0+ KB ###Markdown Define + Create a list of tweet ids where no extra information could be found and drop rows with matching tweet ids + Merge `archive` and `extra_info` tables after restructuring `extra_info` table Code ###Code # create list of tweet ids remove_ids = list(errors.tweet_id) for ids in remove_ids: archive_clean.drop(archive_clean[archive_clean.tweet_id == ids].index, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code # check that all IDs were removed assert archive.shape[0] - archive_clean.shape[0] == len(remove_ids) ###Output _____no_output_____ ###Markdown Define + Set all rating_denominator column values to 10+ Do not change rating_numerator column Code ###Code archive_clean.rating_denominator = 10 ###Output _____no_output_____ ###Markdown Test ###Code # confirm that all column values equal 10 archive_clean.rating_denominator.value_counts() ###Output _____no_output_____ ###Markdown Define+ Identify all rows where name column value is lowercase and replace with correct name from tweet text Code ###Code # find all rows where name column value is lowercase archive_clean.loc[archive_clean.name.str.match(r'(^[a-z][\w]+)', na=False)] # create dictionary with corrected values new_names = {'one' : 'Grace', 'my' : 'Zoey', 'his' : 'Quizno'} # map dictionary to dataframe column and assign the result back archive_clean.name = archive_clean.name.map(new_names).fillna(archive_clean.name) # replace all remaining values with NaN archive_clean.loc[archive_clean.name.str.match(r'(^[a-z][\w]+)', na=False),'name'] = np.nan # replace name with index #2204 with "Berta"; could not replace this using dictionary archive_clean.loc[2204, 'name'] =
'Berta' ###Output _____no_output_____ ###Markdown After initially testing my results, I found that the regex I used did not pull name column values of "a". I will now check those rows for names, adding the values to a dictionary. Unlike above, the index will serve as the key. ###Code archive_clean.loc[archive_clean.name == 'a'] # create dictionary with indices and names new_a = {1853 : 'Wylie', 1955 : 'Kip', 2034 : 'Jacob', 2066 : 'Rufus', 2116 : 'Spork', 2125 : 'Cherokee', 2128 : 'Hemry', 2146 : 'Alphred', 2161 : 'Alfredo', 2191 : 'Leroi', 2218 : 'Chuk', 2235 : 'Alfonso', 2249 : 'Cheryl', 2255 : 'Jessiga', 2264 : 'Klint', 2273 : 'Kohl', 2287 : 'Daryl', 2304 : 'Pepe', 2311 : 'Octaviath', 2314 : 'Johm'} # replace "a" value with name archive_clean.name.update(pd.Series(new_a)) # replace all remaining values with NaN archive_clean.loc[archive_clean.name == 'a','name'] = np.nan ###Output _____no_output_____ ###Markdown Test ###Code # visually check that no lowercase names remain archive_clean.name.value_counts().index.tolist() ###Output _____no_output_____ ###Markdown Define+ Remove rows with retweets and replies Code ###Code # drop rows where in_reply_to_status_id is not null archive_clean.drop(archive_clean.loc[~archive_clean.in_reply_to_status_id.isnull()].index, inplace=True) # drop rows where retweet_status_id column is not null archive_clean.drop(archive_clean.loc[~archive_clean.retweeted_status_id.isnull()].index, inplace=True) # drop rows where tweet text begins with "RT" archive_clean.drop(archive_clean.loc[archive_clean.text.str.startswith('RT')].index, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code # confirm no retweet rows exist in dataframe assert archive_clean.in_reply_to_status_id.isnull().all() # confirm no retweet rows exist in dataframe assert archive_clean.retweeted_status_id.isnull().all() # should return an empty dataframe archive_clean.loc[archive_clean.text.str.startswith('RT')] ###Output _____no_output_____ ###Markdown 
Define+ Identify and remove rankings that are not for dogs using neural network predictions Code ###Code # list of tweet_ids where first image prediction is not a dog with minimum 95% confidence level not_dog = list(images_clean.loc[(images_clean.p1_conf >= 0.95) & (images_clean.p1_dog == False), 'tweet_id']) # remove tweet ids for ids in not_dog: archive_clean.drop(archive_clean[archive_clean.tweet_id == ids].index, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code # should return an empty dataframe archive_clean.loc[archive_clean.tweet_id.isin(not_dog)] ###Output _____no_output_____ ###Markdown Define + Convert all values in p1, p2 and p3 columns to lowercase Code ###Code images_clean.p1 = images_clean.p1.str.lower() images_clean.p2 = images_clean.p2.str.lower() images_clean.p3 = images_clean.p3.str.lower() ###Output _____no_output_____ ###Markdown Test ###Code # confirm that all values are lowercase with test column (p1) sorted(images_clean.p1.unique()) ###Output _____no_output_____ ###Markdown Define+ Remove rows with retweets Code ###Code # drop rows where tweet text begins with "RT" extra_info_clean.drop(extra_info_clean.loc[extra_info_clean.text.str.startswith('RT')].index, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code # should return an empty dataframe extra_info_clean.loc[extra_info_clean.text.str.startswith('RT')] ###Output _____no_output_____ ###Markdown Define+ Melt dataframe so that dog variable appears in one column Originally, I converted each dog stage column into a boolean and used the boolean values to map each dog stage to a new column, "dog_stages." When testing my code, however, I discovered that not all of the dog stages were correctly mapped. After further inspecting my dataset, I realized that sometimes more than one dog stage was marked as "True". When mapping the values, I used the order: doggo, floofer, pupper, and then puppo.
If a row was marked as both 'doggo' and another dog stage value, 'doggo' would later be replaced by a different dog stage. I will need to take a different approach to ensure that both values are preserved. Code ###Code # remove NaN values from selected columns archive_clean[['doggo', 'floofer', 'pupper', 'puppo']] = archive_clean[['doggo', 'floofer', 'pupper', 'puppo']].fillna("") # join columns archive_clean['dog_stages'] = archive_clean[['doggo', 'floofer', 'pupper', 'puppo']].agg(''.join, axis=1) ###Output _____no_output_____ ###Markdown Test ###Code # visually confirm concatenations archive_clean.sample(10) # total number of original dog stage values total_stages_org = (archive_clean.doggo.value_counts()[1] + archive_clean.floofer.value_counts()[1] + archive_clean.pupper.value_counts()[1] + archive_clean.puppo.value_counts()[1]) # total instances where original dog stage value does not match value in new column no_match = archive_clean.loc[(archive_clean.doggo != archive_clean.dog_stages) & (archive_clean.floofer != archive_clean.dog_stages) & (archive_clean.pupper != archive_clean.dog_stages) & (archive_clean.puppo != archive_clean.dog_stages)].shape[0] # total number of dog stage values in new column total_stages_new = (archive_clean.dog_stages.value_counts()[1] + archive_clean.dog_stages.value_counts()[2] + archive_clean.dog_stages.value_counts()[3] + archive_clean.dog_stages.value_counts()[5]) # total number of combined dog stage values in new column total_stages_comb = (archive_clean.dog_stages.value_counts()[4] + archive_clean.dog_stages.value_counts()[6] + archive_clean.dog_stages.value_counts()[7]) # confirm all dog stages were included assert (total_stages_org - no_match) == (total_stages_new + total_stages_comb) # drop unnecessary columns archive_clean.drop(columns=['doggo', 'floofer', 'pupper', 'puppo'], inplace=True) ###Output _____no_output_____ ###Markdown Define+ Extract dog stages and rating variables from "text" column Code ###Code # create list
of dog stages stages = ['doggo', 'floofer', 'pupper', 'puppo'] # extract dog stages extra_info_clean['dog_stages'] = None extra_info_clean['dog_stages'] = extra_info_clean.text.str.findall(r"|".join(stages)).apply(", ".join) # extract rating numerator extra_info_clean['rating_numerator'] = extra_info_clean.text.str.extract(r'(\d+(?:\.\d+)?)/(\d+(?:\.\d+)?)(?!.*\d+(?:\.\d+)?/\d+(?:\.\d+)?)', expand=True)[0] # set rating denominator to 10 extra_info_clean['rating_denominator'] = 10 # convert both ratings to float extra_info_clean['rating_numerator'] = extra_info_clean['rating_numerator'].astype(float) extra_info_clean['rating_denominator'] = extra_info_clean['rating_denominator'].astype(float) ###Output _____no_output_____ ###Markdown Test ###Code # visually check that all three columns were added correctly extra_info_clean.sample(10) # programmatically check rating datatypes extra_info_clean.dtypes ###Output _____no_output_____ ###Markdown Define+ Merge the retweet_count, fav_count, rating_numerator, rating_denominator, and dog_stage columns from the `extra_info` table to the `archive` table, joining on *tweet id* Code ###Code # merge dataframes final_clean = pd.merge(archive_clean, extra_info_clean, on='tweet_id') ###Output _____no_output_____ ###Markdown Test ###Code # visually check results final_clean.sample(5) ###Output _____no_output_____ ###Markdown Define+ Split merged dataframe into two: tweet_info (timestamp, source and text) and dog_info (ratings and dog stages) + Drop columns unnecessary for analysis (in_reply and retweet status and user IDs)+ Reconcile differences between dog_stage columns Code ###Code # select columns for new dataframes tweet_info = final_clean[['tweet_id', 'timestamp', 'source', 'text_x', 'retweet_count', 'fav_count']] dog_info = final_clean[['tweet_id', 'name', 'dog_stages_x', 'rating_numerator_y', 'rating_denominator_y', 'dog_stages_y']] # rename columns tweet_info.rename(columns={'text_x': 'text'}, inplace=True) 
dog_info.rename(columns={'rating_numerator_y': 'rating_numerator', 'rating_denominator_y': 'rating_denominator'}, inplace=True) ###Output _____no_output_____ ###Markdown To reconcile the two dog stage columns, there are five possibilities: 1. dog_stage_x column is empty and dog_stage_y column contains value 2. dog_stage_x column contains value and dog_stage_y column is empty 3. both dog_stage_x and dog_stage_y columns are empty4. both dog_stage_x and dog_stage_y columns contain same value 5. both dog_stage_x and dog_stage_y columns contain different valuesIn the case of one empty column, the column containing a value will be used. In the case of two empty columns, the column will remain empty. In the case of both columns containing the same value, that value will be used. In the final case, where two contradicting values exist, the text will be re-examined to select the best possible value. ###Code # create column dog_info['dog_stages'] = np.nan # add values mask1 = (dog_info.dog_stages_x != dog_info.dog_stages_y) & (dog_info.dog_stages_y == '') dog_info['dog_stages'][mask1] = dog_info.dog_stages_x mask2 = (dog_info.dog_stages_x != dog_info.dog_stages_y) & (dog_info.dog_stages_x == '') dog_info['dog_stages'][mask2] = dog_info.dog_stages_y mask3 = (dog_info.dog_stages_x == dog_info.dog_stages_y) dog_info['dog_stages'][mask3] = dog_info.dog_stages_y # create mask for last condition mask4 = dog_info.loc[(dog_info.dog_stages_x != dog_info.dog_stages_y) & (dog_info.dog_stages_x != "") & (dog_info.dog_stages_y != "")] # save tweet ids to list check_ids = list(mask4['tweet_id']) # view original tweets tweet_info.loc[tweet_info['tweet_id'].isin(check_ids)] # create index-based dictionary with correct dog stage values correct_stage = {148 : 'doggo', 164 : 'puppo', 171 : 'floofer', 438 : 'doggopupper', 463 : 'doggopupper', 470 : 'doggo', 503 : 'pupper', 563 : 'doggo', 717 : 'doggopupper', 879 : 'doggopupper', 929 : 'doggopupper', 1118 : 'pupperpupper', 1154 : 'pupper', 1181
: 'pupper', 1467 : 'pupperpupper', 1601 : 'pupper', 1641 : 'pupper', 1719 : 'pupper'} # replace values in dataframe dog_info.dog_stages.update(pd.Series(correct_stage)) # drop x and y columns dog_info.drop(columns=['dog_stages_x', 'dog_stages_y'], inplace=True) # convert dog stage column to category dog_info.dog_stages = dog_info.dog_stages.astype('category') ###Output _____no_output_____ ###Markdown Test ###Code tweet_info.shape # visually confirm new dataframes tweet_info.sample(10) dog_info.sample(10) # programmatically confirm dog stages datatype dog_info.dtypes # re-merge dog_info and tweet_info dataframes to save master .csv file master = pd.merge(dog_info, tweet_info, on='tweet_id') # check shape master.shape # save cleaned dataframes master.to_csv(r'twitter_archive_master.csv', index=False) dog_info.to_csv(r'dog_info.csv', index=False) tweet_info.to_csv(r'tweet_info.csv', index=False) images_clean.to_csv(r'images.csv', index=False) ###Output _____no_output_____ ###Markdown AnalysisTo discover interesting insights and create compelling visualizations for my final analysis, the cleaned dataframes require a bit more reshaping. I plan to analyze a few key metrics, including: + Development of ratings over time+ Average rating per dog stage + Average number of shares per post+ Top ten postsBefore calculating the final ratings to be used in the first two metrics, I will have to divide the tweets containing references to multiple dog stages into two or more separate rows. 
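The row-splitting step can also be sketched more generically with `str.findall` and `DataFrame.explode`, which turn each combined stage string into one row per stage. This is a toy illustration under assumed column names, not the append-based approach actually used below:

```python
import pandas as pd

# Toy frame standing in for dog_info rows with combined stage strings
df = pd.DataFrame({
    "tweet_id": [1, 2, 3],
    "dog_stages": ["doggo", "doggopupper", "pupper"],
})

stages = ["doggo", "floofer", "pupper", "puppo"]

# findall pulls each known stage out of the combined string;
# explode then emits one row per extracted stage
df["dog_stages"] = df["dog_stages"].str.findall("|".join(stages))
df = df.explode("dog_stages").reset_index(drop=True)
print(df)  # tweet 2 now has one 'doggo' row and one 'pupper' row
```

The manual append used below achieves the same end while also letting name and rating values be corrected per row.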
###Code # identify relevant tweet ids mask5 = dog_info.loc[(dog_info.dog_stages == 'doggopupper') | (dog_info.dog_stages == 'pupperpupper')] # save ids to list multi_stage_ids = list(mask5['tweet_id']) # view original tweets tweet_info.loc[tweet_info['tweet_id'].isin(multi_stage_ids)] # list of series to append append_list = [pd.Series([808106460588765185, 'Dexter', 12, 10, 'doggo'], index=dog_info.columns), pd.Series([802265048156610565, np.nan, 11, 10, 'pupper'], index=dog_info.columns), pd.Series([781308096455073793, np.nan, 12, 10, 'doggo'], index=dog_info.columns), pd.Series([759793422261743616, 'Lila', 12, 10, 'pupper'], index=dog_info.columns), pd.Series([741067306818797568, np.nan, 12, 10, 'doggo'], index=dog_info.columns), pd.Series([733109485275860992, np.nan, 12, 10, 'pupper'], index=dog_info.columns), pd.Series([707411934438625280, np.nan, 11, 10, 'pupper'], index=dog_info.columns), pd.Series([683462770029932544, np.nan, 8, 10, 'pupper'], index=dog_info.columns)] # append dataframe using list of series dog_info = dog_info.append(append_list, ignore_index=True) # names to replace replace_name = {438 : 'Burke', 717 : 'Maggie'} # replace values in dataframe dog_info.name.update(pd.Series(replace_name)) # stages to replace replace_stage = {378 : 'pupper', 438 : 'pupper', 463 : 'doggo', 589 : 'pupper', 717 : 'doggo', 879 : 'pupper', 929 : 'doggo', 1118: 'pupper', 1467: 'pupper'} # replace values in dataframe dog_info.dog_stages.update(pd.Series(replace_stage)) # calculate final rating dog_info['final_rating'] = (dog_info.rating_numerator / dog_info.rating_denominator) * 100 # remove rows with nan rating dog_info.drop(dog_info.loc[dog_info.final_rating.isnull()].index, inplace=True) ###Output _____no_output_____ ###Markdown Metric: Dog stage average rating ###Code # groupby dog stage and calculate average rating stage_avg_rating = dog_info.groupby('dog_stages')['final_rating'].mean().reset_index() stage_avg_rating = stage_avg_rating[1:] # select font 
matplotlib.rcParams['font.family'] = "sans-serif" # plot bar chart fig, ax = plt.subplots(figsize=(15, 10)) labels = np.array(stage_avg_rating.dog_stages) values = np.array(stage_avg_rating.final_rating) clrs = ['grey' if (x < max(values)) else 'darkblue' for x in values] g = sns.barplot(x = 'dog_stages', y = 'final_rating', data = stage_avg_rating, palette = clrs, ax = ax) plt.title("Doggos enjoy highest average ratings\n%s" % "\n".join(wrap("of all dog stages", width=60)), fontsize = 30, loc = 'left', weight = 'bold') plt.xlabel("") plt.ylabel("Average Rating (Percent)", fontsize = 15) ax.yaxis.set_label_coords(-0.1,.78) plt.xticks(fontsize=20) plt.yticks(fontsize=20) ax.patch.set_alpha(0.0) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.spines['left'].set_visible(False) plt.savefig('stage_ratings.png', transparent = True); ###Output _____no_output_____ ###Markdown Metric: Development of ratings over timeFor this metric, I will need to merge the timestamp column of the tweet_info table with the dog_info table. 
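Before the merge, it may help to see how `pd.Grouper(freq='D')` buckets timestamps: all ratings falling on the same calendar day are averaged into a single daily value. A minimal sketch on toy data (illustrative values only):

```python
import pandas as pd

# Toy ratings with intraday timestamps; grouping by day averages same-day rows
ratings = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2016-01-01 09:00", "2016-01-01 17:30", "2016-01-02 12:00"]
    ),
    "final_rating": [120.0, 100.0, 90.0],
})

daily = ratings.set_index("timestamp").groupby(pd.Grouper(freq="D")).mean()
print(daily)  # 2016-01-01 -> 110.0, 2016-01-02 -> 90.0
```

Grouping by day before plotting smooths out the noise of individual tweets, which is what makes the trend over time readable in the line chart.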
###Code # merge dataframes ratings_time = pd.merge(dog_info, tweet_info, on = 'tweet_id', how = 'left') # select desired columns ratings_time = ratings_time[['final_rating', 'timestamp']] # calculate average ratings by day ratings_time = ratings_time.set_index('timestamp').groupby(pd.Grouper(freq='D')).mean() # drop outliers ratings_time.drop(ratings_time.loc[ratings_time.final_rating > 400].index, inplace=True) # plot line chart pd.plotting.register_matplotlib_converters() fig, ax = plt.subplots(figsize=(15, 10)) sns.lineplot(data = ratings_time, ax = ax) plt.title("Average ratings become less extreme\n%s" % "\n".join(wrap("after early 2016", width=60)), fontsize = 30, loc = 'left', weight = 'bold') plt.xlabel("") plt.ylabel("Average Rating (Percent)", fontsize = 15) ax.yaxis.set_label_coords(-0.1,.75) #plt.xticks(fontsize=15, rotation=45) plt.yticks(fontsize=20) ax.patch.set_alpha(0.0) ax.get_legend().remove() ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.spines['left'].set_visible(False) plt.savefig('avg_ratings.png', transparent = True); ###Output _____no_output_____ ###Markdown Metric: Average number of shares per post ###Code avg_retweet = round(tweet_info.retweet_count.mean(), 0) print('On average, posts are retweeted {} times.'.format(avg_retweet)) avg_fav = round(tweet_info.fav_count.mean(), 0) print('On average, posts are favorited {} times.'.format(avg_fav)) ###Output On average, posts are favorited 9825.0 times. ###Markdown Metric: Top ten posts To measure a post's popularity, two metrics are frequently used: number of retweets and number of times favorited. Although a strong, positive relationship exists between the two variables, as displayed below, the "top ten" list does vary depending on the metric selected. To resolve this issue, the rankings for each metric will be averaged together to create a final "top ten" list. 
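The rank-averaging idea can be sketched on toy data: `rank(ascending=False)` assigns rank 1 to the largest value in each metric, and averaging the two rank columns yields a single combined ranking. The values here are illustrative only:

```python
import pandas as pd

# Toy popularity metrics for four hypothetical posts
posts = pd.DataFrame({
    "tweet_id": ["a", "b", "c", "d"],
    "retweet_count": [500, 400, 300, 100],
    "fav_count": [1000, 600, 900, 50],
})

# rank 1 = most retweeted / most favorited
posts["retweet_rank"] = posts["retweet_count"].rank(ascending=False)
posts["fav_rank"] = posts["fav_count"].rank(ascending=False)

# average the two ranks into one combined score (lower = more popular)
posts["final_rank"] = posts[["retweet_rank", "fav_rank"]].mean(axis=1)
top = posts.sort_values("final_rank")
print(top[["tweet_id", "final_rank"]])
```

Averaging ranks rather than raw counts keeps the two metrics on a comparable scale, so neither retweets nor favorites dominates the combined ordering.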
###Code # plot scatter plot fig, ax = plt.subplots(figsize=(15, 10)) sns.scatterplot(x='retweet_count', y='fav_count', data=tweet_info, ax = ax) plt.title("A strong positive relationship exists\n%s" % "\n".join(wrap("between Retweets and Fav Counts", width=60)), fontsize = 30, loc = 'left', weight = 'bold') plt.xlabel("Total Retweets", fontsize = 15, horizontalalignment = 'left', x=.03, labelpad = 20) plt.ylabel("Total Fav Counts", fontsize = 15) ax.yaxis.set_label_coords(-0.13,.88) plt.xticks(fontsize=20) plt.yticks(fontsize=20) ax.patch.set_alpha(0.0) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.spines['left'].set_visible(False) plt.savefig('retweet_fav_corr.png', transparent = True); # rank columns in descending order tweet_info['retweet_rank'] = tweet_info['retweet_count'].rank(ascending=0) tweet_info['fav_rank'] = tweet_info['fav_count'].rank(ascending=0) # calculate average rank tweet_info['final_rank'] = tweet_info[['retweet_rank', 'fav_rank']].mean(axis=1) # display top ten results top_ten_tweets = tweet_info.sort_values(by='final_rank').head(10) top_ten_tweets = top_ten_tweets[['tweet_id', 'text', 'retweet_count', 'fav_count', 'final_rank']].reset_index() # save to .csv file top_ten_tweets.to_csv(r'top_ten_tweets.csv', index=False) ###Output _____no_output_____ ###Markdown Data Wrangling IntroductionThis project focused on wrangling data from the WeRateDogs Twitter account using Python, documented in a Jupyter Notebook (wrangle_act.ipynb). This Twitter account rates dogs with humorous commentary. The rating denominator is usually 10; however, the numerators are usually greater than 10. [They’re Good Dogs Brent](http://knowyourmeme.com/memes/theyre-good-dogs-brent)
WeRateDogs has over 4 million followers and has received international media coverage.WeRateDogs downloaded their Twitter archive and sent it to Udacity via email exclusively for us to use in this project. This archive contains basic tweet data (tweet ID, timestamp, text, etc.) for all 5000+ of their tweets as they stood on August 1, 2017.The goal of this project is to wrangle the WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations. The challenge lies in the fact that the Twitter archive is great, but it only contains very basic tweet information that comes in JSON format. I needed to gather, assess and clean the Twitter data for a worthy analysis and visualization. The Data Enhanced Twitter ArchiveThe WeRateDogs Twitter archive contains basic tweet data for all 5000+ of their tweets, but not everything. One column the archive does contain though: each tweet's text, which I used to extract rating, dog name, and dog "stage" (i.e. doggo, floofer, pupper, and puppo) to make this Twitter archive "enhanced". We downloaded this file manually by clicking the following link: [twitter_archive_enhanced.csv](https://d17h27t6h515a5.cloudfront.net/topher/2017/August/59a4e958_twitter-archive-enhanced/twitter-archive-enhanced.csv) Additional Data via the Twitter APIBack to the basic-ness of Twitter archives: retweet count and favorite count are two of the notable column omissions. Fortunately, this additional data can be gathered by anyone from Twitter's API. Well, "anyone" who has access to data for the 3000 most recent tweets, at least. But we, because we have the WeRateDogs Twitter archive and specifically the tweet IDs within it, can gather this data for all 5000+. And guess what? We're going to query Twitter's API to gather this valuable data. Image Predictions FileThe tweet image predictions, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network.
This file (image_predictions.tsv) is hosted on Udacity's servers, and we downloaded it programmatically using the Python Requests library from the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv Key PointsKey points to keep in mind when data wrangling for this project:* We only want original ratings (no retweets) that have images. Though there are 5000+ tweets in the dataset, not all are dog ratings and some are retweets.* Fully assessing and cleaning the entire dataset requires exceptional effort so only a subset of its issues (eight (8) quality issues and two (2) tidiness issues at minimum) need to be assessed and cleaned.* Cleaning includes merging individual pieces of data according to the rules of tidy data.* The fact that the rating numerators are greater than the denominators does not need to be cleaned. This unique rating system is a big part of the popularity of WeRateDogs.* We do not need to gather the tweets beyond August 1st, 2017. We can, but note that we won't be able to gather the image predictions for these tweets since we don't have access to the algorithm used.
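The first key point — keeping only original ratings that have images — boils down to two boolean filters. A sketch on a toy frame with assumed column names (`retweeted_status_id` non-null flags a retweet; a null `expanded_urls` flags a tweet without media):

```python
import pandas as pd
import numpy as np

# Toy archive: tweet 2 is a retweet, tweet 3 has no image link
archive = pd.DataFrame({
    "tweet_id": [1, 2, 3],
    "retweeted_status_id": [np.nan, 9.99e17, np.nan],
    "expanded_urls": ["https://t.co/a", "https://t.co/b", None],
})

# keep rows that are NOT retweets AND DO have a media URL
originals_with_images = archive[
    archive["retweeted_status_id"].isnull()
    & archive["expanded_urls"].notnull()
]
print(originals_with_images["tweet_id"].tolist())  # [1]
```

The cleaning sections later apply exactly this kind of null-based filtering to the real archive.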
Project DetailsFully assessing and cleaning the entire dataset would require exceptional effort so only a subset of its issues (eight quality issues and two tidiness issues at minimum) needed to be assessed and cleaned.The tasks for this project were:* Data wrangling, which consists of: * Gathering data * Assessing data * Cleaning data* Storing, analyzing, and visualizing our wrangled data* Reporting on 1) our data wrangling efforts and 2) our data analyses and visualizations ###Code import pandas as pd import numpy as np import requests import tweepy import os import json import time import re import matplotlib.pyplot as plt import warnings ###Output _____no_output_____ ###Markdown Gather ###Code # read csv as a Pandas DataFrame twitter_archive = pd.read_csv('./Data/twitter-archive-enhanced.csv') twitter_archive.head() twitter_archive.info() # Use requests library to download tsv file url="https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" response = requests.get(url) with open('./Data/image_predictions.tsv', 'wb') as file: file.write(response.content) image_predictions = pd.read_csv('./Data/image_predictions.tsv', sep='\t') image_predictions.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown **Query the Twitter API for each tweet's JSON data using Python's Tweepy library and store each tweet's entire set of JSON data in a file.** ###Code CONSUMER_KEY = "" CONSUMER_SECRET = "" OAUTH_TOKEN = "" OAUTH_TOKEN_SECRET = "" auth = 
tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET) auth.set_access_token(OAUTH_TOKEN, OAUTH_TOKEN_SECRET) # rate-limit handling belongs on the API constructor, not on get_status api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) # List of the error tweets error_list = [] # List of tweets df_list = [] # Calculate the time of execution start = time.time() # For loop which will add each available tweet json to df_list for tweet_id in twitter_archive['tweet_id']: try: tweet = api.get_status(tweet_id, tweet_mode='extended')._json favorites = tweet['favorite_count'] # How many favorites the tweet had retweets = tweet['retweet_count'] # Count of the retweet user_followers = tweet['user']['followers_count'] # How many followers the user had user_favourites = tweet['user']['favourites_count'] # How many favorites the user had date_time = tweet['created_at'] # The date and time of the creation df_list.append({'tweet_id': int(tweet_id), 'favorites': int(favorites), 'retweets': int(retweets), 'user_followers': int(user_followers), 'user_favourites': int(user_favourites), 'date_time': pd.to_datetime(date_time)}) except Exception as e: print(str(tweet_id)+ " _ " + str(e)) error_list.append(tweet_id) # Calculate the time of execution end = time.time() print(end - start) # length of the result print("The length of the result", len(df_list)) # The tweet_id of the errors print("The length of the errors", len(error_list)) ###Output The length of the result 2344 The length of the errors 12 ###Markdown From the above results:- We reached the limit of the tweepy API three times, but wait_on_rate_limit automatically waits for rate limits to re-establish and wait_on_rate_limit_notify prints a notification when Tweepy is waiting.- We could get 2344 tweet_ids correctly, with 12 errors- The total time was about 3023 seconds (~ 50.5 min) ###Code print("The length of the result", len(df_list)) # Create DataFrames from list of dictionaries json_tweets = pd.DataFrame(df_list, columns = ['tweet_id', 'favorites', 'retweets', 'user_followers',
'user_favourites', 'date_time']) # Save the dataframe to a file json_tweets.to_csv('tweet_json.txt', encoding = 'utf-8', index=False) # Read the saved tweet_json.txt file into a dataframe tweet_data = pd.read_csv('tweet_json.txt', encoding = 'utf-8') tweet_data tweet_data.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2344 entries, 0 to 2343 Data columns (total 6 columns): tweet_id 2344 non-null int64 favorites 2344 non-null int64 retweets 2344 non-null int64 user_followers 2344 non-null int64 user_favourites 2344 non-null int64 date_time 2344 non-null object dtypes: int64(5), object(1) memory usage: 110.0+ KB ###Markdown Gather: SummaryGathering is the first step in the data wrangling process.- Obtaining data - Getting data from an existing file (twitter-archive-enhanced.csv) Reading from csv file using pandas - Downloading a file from the internet (image-predictions.tsv) Downloading file using requests - Querying an API (tweet_json.txt) Get JSON object of all the tweet_ids using Tweepy- Importing that data into our programming environment (Jupyter Notebook) Assessing ###Code # Print some random examples twitter_archive.sample(10) # Assessing the data programmatically twitter_archive.info() twitter_archive.describe() twitter_archive['rating_numerator'].value_counts() twitter_archive['rating_denominator'].value_counts() twitter_archive['name'].value_counts() # View descriptive statistics of twitter_archive twitter_archive.describe() image_predictions image_predictions.info() image_predictions['jpg_url'].value_counts() image_predictions[image_predictions['jpg_url'] == 'https://pbs.twimg.com/media/DF6hr6BUMAAzZgT.jpg'] # View number of entries for each source twitter_archive.source.value_counts() # For ratings that don't follow the pattern twitter_archive[twitter_archive['rating_numerator'] > 20] # Unusual names twitter_archive[twitter_archive['name'].apply(len) < 3] # Original tweets twitter_archive[twitter_archive['retweeted_status_id'].isnull()] ###Output
_____no_output_____ ###Markdown Quality*Completeness, Validity, Accuracy, Consistency => a.k.a content issues***twitter_archive dataset**- in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id should be integers/strings instead of float.- retweeted_status_timestamp, timestamp should be datetime instead of object (string).- The numerator and denominator columns have invalid values.- In several columns, missing values are stored as the string 'None' instead of NaN.- Name column has invalid names, i.e. 'None', 'a', 'an' and other entries of less than 3 characters.- We only want original ratings (no retweets) that have images.- We may want to change these columns' types (in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id and tweet_id) to string because we don't want any operations on them.- Source values are difficult to read.**image_predictions dataset**- Missing values from images dataset (2075 rows instead of 2356)- Some tweet_ids have the same jpg_url- Some tweets have 2 different tweet_ids, one redirecting to the other (dataset contains retweets)**tweet_data dataset**- This tweet_id (666020888022790149) is duplicated 8 times TidinessUntidy data => a.k.a structural issues- No need for all the information in the images dataset (tweet_id and jpg_url are what matter)- Dog "stage" variable in four columns: doggo, floofer, pupper, puppo- Join 'tweet_info' and 'image_predictions' to 'twitter_archive' CleaningCleaning our data is the third step in data wrangling. It is where we will fix the quality and tidiness issues that we identified in the assess step. ###Code # copy dataframes tweet_data_clean = tweet_data.copy() twitter_archive_clean = twitter_archive.copy() image_predictions_clean= image_predictions.copy() ###Output _____no_output_____ ###Markdown DefineAdd tweet_info and image_predictions to twitter_archive table.
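The `how='inner'` joins used next keep only the tweet_ids present in every source, which is why the combined table ends up smaller than the original archive. A minimal sketch of the semantics on toy frames:

```python
import pandas as pd

# Toy frames: only tweet_ids 2 and 3 appear in both
left = pd.DataFrame({"tweet_id": [1, 2, 3], "text": ["a", "b", "c"]})
right = pd.DataFrame({"tweet_id": [2, 3, 4], "favorites": [10, 20, 30]})

# inner join: rows without a match in the other frame are dropped
merged = pd.merge(left, right, on="tweet_id", how="inner")
print(merged)  # two rows, tweet_ids 2 and 3
```

Rows dropped this way are exactly the tweets missing from the API pull or the image-prediction file, so the inner join doubles as a completeness filter.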
Code ###Code twitter_archive_clean = pd.merge(left=twitter_archive_clean, right=tweet_data_clean, left_on='tweet_id', right_on='tweet_id', how='inner') twitter_archive_clean = twitter_archive_clean.merge(image_predictions_clean, on='tweet_id', how='inner') ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2068 entries, 0 to 2067 Data columns (total 33 columns): tweet_id 2068 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 2068 non-null object source 2068 non-null object text 2068 non-null object retweeted_status_id 75 non-null float64 retweeted_status_user_id 75 non-null float64 retweeted_status_timestamp 75 non-null object expanded_urls 2068 non-null object rating_numerator 2068 non-null int64 rating_denominator 2068 non-null int64 name 2068 non-null object doggo 2068 non-null object floofer 2068 non-null object pupper 2068 non-null object puppo 2068 non-null object favorites 2068 non-null int64 retweets 2068 non-null int64 user_followers 2068 non-null int64 user_favourites 2068 non-null int64 date_time 2068 non-null object jpg_url 2068 non-null object img_num 2068 non-null int64 p1 2068 non-null object p1_conf 2068 non-null float64 p1_dog 2068 non-null bool p2 2068 non-null object p2_conf 2068 non-null float64 p2_dog 2068 non-null bool p3 2068 non-null object p3_conf 2068 non-null float64 p3_dog 2068 non-null bool dtypes: bool(3), float64(7), int64(8), object(15) memory usage: 506.9+ KB ###Markdown Define Melt the 'doggo', 'floofer', 'pupper' and 'puppo' columns into one column 'dog_stage'. 
Code ###Code # Select the columns to melt and to remain MELTS_COLUMNS = ['doggo', 'floofer', 'pupper', 'puppo'] STAY_COLUMNS = [x for x in twitter_archive_clean.columns.tolist() if x not in MELTS_COLUMNS] # Melt the columns into values twitter_archive_clean = pd.melt(twitter_archive_clean, id_vars = STAY_COLUMNS, value_vars = MELTS_COLUMNS, var_name = 'stages', value_name = 'dog_stage') # Delete column 'stages' twitter_archive_clean = twitter_archive_clean.drop('stages', 1) ###Output _____no_output_____ ###Markdown Test ###Code print(twitter_archive_clean.dog_stage.value_counts()) print(len(twitter_archive_clean)) ###Output None 7938 pupper 222 doggo 80 puppo 24 floofer 8 Name: dog_stage, dtype: int64 8272 ###Markdown CleanClean rows and columns that we will not need Code ###Code # Delete the retweets twitter_archive_clean = twitter_archive_clean[pd.isnull(twitter_archive_clean.retweeted_status_id)] # Delete duplicated tweet_id twitter_archive_clean = twitter_archive_clean.drop_duplicates() # Delete tweets with no pictures twitter_archive_clean = twitter_archive_clean.dropna(subset = ['jpg_url']) # small test len(twitter_archive_clean) # Delete columns related to retweet we don't need anymore twitter_archive_clean = twitter_archive_clean.drop('retweeted_status_id', 1) twitter_archive_clean = twitter_archive_clean.drop('retweeted_status_user_id', 1) twitter_archive_clean = twitter_archive_clean.drop('retweeted_status_timestamp', 1) # Delete column date_time we imported from the API, it has the same values as the timestamp column twitter_archive_clean = twitter_archive_clean.drop('date_time', 1) # small test list(twitter_archive_clean) # Delete dog_stage duplicates twitter_archive_clean = twitter_archive_clean.sort_values('dog_stage').drop_duplicates('tweet_id', keep = 'last') ###Output _____no_output_____ ###Markdown Test ###Code print(twitter_archive_clean.dog_stage.value_counts()) print(len(twitter_archive_clean)) ###Output None 1687 pupper 212 doggo 63 puppo 23 
floofer 8 Name: dog_stage, dtype: int64 1993 ###Markdown DefineGet rid of image prediction columns Code ###Code # We will store the first true algorithm with its level of confidence prediction_algorithm = [] confidence_level = [] # Get_prediction_confidence function: # search the first true algorithm and append it to a list with its level of confidence # if false, prediction_algorithm will have a value of NaN def get_prediction_confidence(dataframe): if dataframe['p1_dog'] == True: prediction_algorithm.append(dataframe['p1']) confidence_level.append(dataframe['p1_conf']) elif dataframe['p2_dog'] == True: prediction_algorithm.append(dataframe['p2']) confidence_level.append(dataframe['p2_conf']) elif dataframe['p3_dog'] == True: prediction_algorithm.append(dataframe['p3']) confidence_level.append(dataframe['p3_conf']) else: prediction_algorithm.append('NaN') confidence_level.append(0) twitter_archive_clean.apply(get_prediction_confidence, axis=1) twitter_archive_clean['prediction_algorithm'] = prediction_algorithm twitter_archive_clean['confidence_level'] = confidence_level ###Output _____no_output_____ ###Markdown Test ###Code list(twitter_archive_clean) # Delete the columns of image prediction information twitter_archive_clean = twitter_archive_clean.drop(['img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], 1) list(twitter_archive_clean) # let's concentrate on low values.. 
let's dig more twitter_archive_clean.info() print('in_reply_to_user_id ') print(twitter_archive_clean['in_reply_to_user_id'].value_counts()) print('source ') print(twitter_archive_clean['source'].value_counts()) print('user_favourites ') print(twitter_archive_clean['user_favourites'].value_counts()) ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1993 entries, 0 to 7089 Data columns (total 18 columns): tweet_id 1993 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 1993 non-null object source 1993 non-null object text 1993 non-null object expanded_urls 1993 non-null object rating_numerator 1993 non-null int64 rating_denominator 1993 non-null int64 name 1993 non-null object favorites 1993 non-null int64 retweets 1993 non-null int64 user_followers 1993 non-null int64 user_favourites 1993 non-null int64 jpg_url 1993 non-null object dog_stage 1993 non-null object prediction_algorithm 1993 non-null object confidence_level 1993 non-null float64 dtypes: float64(3), int64(7), object(8) memory usage: 295.8+ KB in_reply_to_user_id 4.196984e+09 23 Name: in_reply_to_user_id, dtype: int64 source <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a> 1954 <a href="http://twitter.com" rel="nofollow">Twitter Web Client</a> 28 <a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a> 11 Name: source, dtype: int64 user_favourites 132918 1973 132916 20 Name: user_favourites, dtype: int64 ###Markdown Notes- Only one value in **in_reply_to_user_id**, so we will delete the reply columns; all of them are replies to @dog_rates.- **source** has 3 types; we will clean that column to keep only the readable label.- **user_favourites** has 2 values and they are close. 
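Before applying the source cleanup planned in the notes above, here is a minimal sketch of the `re.findall(r'>(.*)<', x)` extraction on made-up anchor strings (the sample values below are assumptions mirroring the raw `source` column, not pulled from the data):

```python
import re

# Made-up sample values mirroring the raw 'source' column (assumption, not live data)
sources = [
    '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>',
    '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>',
]

# Greedy '.*' grabs everything between the first '>' and the last '<' of the tag
labels = [re.findall(r'>(.*)<', s)[0] for s in sources]
print(labels)  # ['Twitter for iPhone', 'Twitter Web Client']
```
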
###Code # Drop the following columns: 'in_reply_to_status_id', 'in_reply_to_user_id', 'user_favourites' twitter_archive_clean = twitter_archive_clean.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'user_favourites'], 1) # Clean the content of the source column twitter_archive_clean['source'] = twitter_archive_clean['source'].apply(lambda x: re.findall(r'>(.*)<', x)[0]) # Test twitter_archive_clean ###Output _____no_output_____ ###Markdown DefineFix rating numerators and denominators that are not actually ratings Code ###Code # View all occurrences where there are more than one #/# in the 'text' column text_ratings_to_fix = twitter_archive_clean[twitter_archive_clean.text.str.contains( r"(\d+\.?\d*\/\d+\.?\d*\D+\d+\.?\d*\/\d+\.?\d*)")].text text_ratings_to_fix for entry in text_ratings_to_fix: mask = twitter_archive_clean.text == entry column_name1 = 'rating_numerator' column_name2 = 'rating_denominator' twitter_archive_clean.loc[mask, column_name1] = re.findall(r"\d+\.?\d*\/\d+\.?\d*\D+(\d+\.?\d*)\/\d+\.?\d*", entry) twitter_archive_clean.loc[mask, column_name2] = 10 twitter_archive_clean[twitter_archive_clean.text.isin(text_ratings_to_fix)] ###Output _____no_output_____ ###Markdown DefineFix rating numerators that have decimals. 
Code ###Code # View tweets with decimals in rating in 'text' column twitter_archive_clean[twitter_archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)")] # Set correct numerators for specific tweets twitter_archive_clean.loc[(twitter_archive_clean['tweet_id'] == 883482846933004288) & (twitter_archive_clean['rating_numerator'] == 5), ['rating_numerator']] = 13.5 twitter_archive_clean.loc[(twitter_archive_clean['tweet_id'] == 786709082849828864) & (twitter_archive_clean['rating_numerator'] == 75), ['rating_numerator']] = 9.75 twitter_archive_clean.loc[(twitter_archive_clean['tweet_id'] == 778027034220126208) & (twitter_archive_clean['rating_numerator'] == 27), ['rating_numerator']] = 11.27 twitter_archive_clean.loc[(twitter_archive_clean['tweet_id'] == 680494726643068929) & (twitter_archive_clean['rating_numerator'] == 26), ['rating_numerator']] = 11.26 ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean[twitter_archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)")] ###Output C:\Program Files\Anaconda3\envs\py35\lib\site-packages\ipykernel\__main__.py:1: UserWarning: This pattern has match groups. To actually get the groups, use str.extract. 
if __name__ == '__main__': ###Markdown DefineGet the dog's gender from the text column Code ###Code # Loop over all the texts and check if each has one of the male or female pronouns # and append the result to a list male = ['He', 'he', 'him', 'his', "he's", 'himself'] female = ['She', 'she', 'her', 'hers', 'herself', "she's"] dog_gender = [] for text in twitter_archive_clean['text']: # Male if any(map(lambda v:v in male, text.split())): dog_gender.append('male') # Female elif any(map(lambda v:v in female, text.split())): dog_gender.append('female') # If group or not specified else: dog_gender.append('NaN') # Test len(dog_gender) # Save the result in a new column 'dog_gender' twitter_archive_clean['dog_gender'] = dog_gender ###Output _____no_output_____ ###Markdown Test ###Code print("dog_gender count \n", twitter_archive_clean.dog_gender.value_counts()) ###Output dog_gender count NaN 1131 male 636 female 226 Name: dog_gender, dtype: int64 ###Markdown DefineConvert the null values to None type Code ###Code twitter_archive_clean.loc[twitter_archive_clean['prediction_algorithm'] == 'NaN', 'prediction_algorithm'] = None twitter_archive_clean.loc[twitter_archive_clean['dog_gender'] == 'NaN', 'dog_gender'] = None twitter_archive_clean.loc[twitter_archive_clean['rating_numerator'] == 'NaN', 'rating_numerator'] = 0 #twitter_archive_clean.loc[twitter_archive_clean['rating_denominator'] == 'NaN', 'rating_denominator'] = 0 ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1993 entries, 0 to 7089 Data columns (total 16 columns): tweet_id 1993 non-null int64 timestamp 1993 non-null object source 1993 non-null object text 1993 non-null object expanded_urls 1993 non-null object rating_numerator 1993 non-null object rating_denominator 1993 non-null int64 name 1993 non-null object favorites 1993 non-null int64 retweets 1993 non-null int64 user_followers 1993 non-null int64 jpg_url 1993 non-null 
object dog_stage 1993 non-null object prediction_algorithm 1685 non-null object confidence_level 1993 non-null float64 dog_gender 862 non-null object dtypes: float64(1), int64(5), object(10) memory usage: 264.7+ KB ###Markdown DefineChange datatypes . Code ###Code twitter_archive_clean['tweet_id'] = twitter_archive_clean['tweet_id'].astype(str) twitter_archive_clean['timestamp'] = pd.to_datetime(twitter_archive_clean.timestamp) twitter_archive_clean['source'] = twitter_archive_clean['source'].astype('category') twitter_archive_clean['favorites'] = twitter_archive_clean['favorites'].astype(int) twitter_archive_clean['retweets'] = twitter_archive_clean['retweets'].astype(int) twitter_archive_clean['user_followers'] = twitter_archive_clean['user_followers'].astype(int) twitter_archive_clean['dog_stage'] = twitter_archive_clean['dog_stage'].astype('category') twitter_archive_clean['rating_numerator'] = twitter_archive_clean['rating_numerator'].astype(float) twitter_archive_clean['rating_denominator'] = twitter_archive_clean['rating_denominator'].astype(float) twitter_archive_clean['dog_gender'] = twitter_archive_clean['dog_gender'].astype('category') ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.dtypes ###Output _____no_output_____ ###Markdown Store ###Code # Save clean DataFrame to csv file twitter_archive_clean.drop(twitter_archive_clean.columns[twitter_archive_clean.columns.str.contains('Unnamed',case = False)],axis = 1) twitter_archive_clean.to_csv('./Data/twitter_archive_master.csv', encoding = 'utf-8', index=False) twitter_archive_clean = pd.read_csv('./Data/twitter_archive_master.csv') twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1993 entries, 0 to 1992 Data columns (total 16 columns): tweet_id 1993 non-null int64 timestamp 1993 non-null object source 1993 non-null object text 1993 non-null object expanded_urls 1993 non-null object rating_numerator 1993 non-null float64 rating_denominator 
1993 non-null float64 name 1993 non-null object favorites 1993 non-null int64 retweets 1993 non-null int64 user_followers 1993 non-null int64 jpg_url 1993 non-null object dog_stage 1993 non-null object prediction_algorithm 1685 non-null object confidence_level 1993 non-null float64 dog_gender 862 non-null object dtypes: float64(3), int64(4), object(9) memory usage: 249.2+ KB ###Markdown Data Wrangling and Analysis: WeRateDogs ([@dog_rates][1])> _By Ambar Canonicco_***Welcome to this academic project meant to develop and demonstrate my skills on wrangling and cleaning data using Python code to further perform proper analysis.[1]:https://twitter.com/dog_rates "WeRateDogs Twitter Account" Table of Contents- [Introduction][1]- [Gathering Data][2]- [Assessing Data][3]- [Cleaning Data][4]- [Analyzing Data][5]- [Insights][6]- [Conclusion][7]- [Findings Report][8]- [Data Repository][9][1]:introduction[2]:gather[3]:assess[4]:clean[5]:analysis[6]:insights[7]:conclusion[8]:./data/[9]:./data/ IntroductionReal-world data rarely comes clean. Using Python and its libraries, I will gather data from a variety of sources and in a variety of formats, assess its quality and tidiness, then clean it. This is called data wrangling. I will document my wrangling efforts in this Jupyter Notebook, plus showcase them through analyses and visualizations using Python (and its libraries, you'll see).The dataset that I will be wrangling (and analyzing and visualizing) is the tweet archive of Twitter user @dog_rates, also known as WeRateDogs. WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. These ratings almost always have a denominator of 10. The numerators, though? Almost always greater than 10. 11/10, 12/10, 13/10, etc. Why? ```◖ᵔᴥᵔ◗``` Because "they're good dogs Brent." 
[``` knowyourmeme```][1]> WeRateDogs has over 8 million followers and has received international media coverage.After performing the wrangling phase of the project, I will decide which questions I think could be answered from the final clean data collected with my efforts and share the insightful findings with you.Be ready![1]: https://knowyourmeme.com/memes/theyre-good-dogs-brent ###Code # Import all necessary (or possible) libraries import pandas as pd # DataFrames import numpy as np # Number arrays import matplotlib.pyplot as plt # Fancy Plots import seaborn as sns # Fancier Plots import requests as rqs # "HTTP library for Python, built with ♥" import json # encoder "easy for humans to read and write" from IPython.display import display as dpl # print distant cousin from IPython.display import Image as img # jupyter scrapbook from IPython.display import HTML # More display libraries import matplotlib.dates as mdates # More plot libraries import time # tick tock from datetime import datetime # Time passes import tweepy # Access Twitter API for dummies import re # regex library %matplotlib inline # Time at the beginning of the code to track elapsed time notebk_time = time.time() # Define some functions to show all columns and/or rows when a dataframe is displayed def full(x): pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) dpl(x) pd.reset_option('display.max_columns') pd.reset_option('display.max_rows') def fullcols(x): pd.set_option('display.max_columns', None) dpl(x) pd.reset_option('display.max_columns') ###Output _____no_output_____ ###Markdown Gathering DataTo start this project I will gather all the files collected and import them into the notebook so I can work with the data. We are going to start with the WeRateDogs twitter archive, which has been provided to me as a .csv file. The WeRateDogs Twitter archive contains basic tweet data for all 5000+ of their tweets, but not everything. 
One column the archive does contain though: each tweet's text, which was used to extract rating, dog name, and dog "stage" (i.e. doggo, floofer, pupper, and puppo) to make this Twitter archive "enhanced".I will programmatically download a tweet image prediction file with a URL that I received. The results: a table full of image predictions (the top three only) alongside each tweet ID, image URL, and the image number that corresponded to the most confident prediction (numbered 1 to 4 since tweets can have up to four images).Back to the basic-ness of Twitter archives: retweet count and favorite count are two of the notable column omissions. Fortunately, this additional data can be gathered by anyone from Twitter's API. Well, "anyone" who has access to data for the 3000 most recent tweets, at least. But I, because I have the WeRateDogs Twitter archive and specifically the tweet IDs within it, can gather this data for all 5000+. And guess what? I am going to query Twitter's API to gather this valuable data using the Tweepy library. ###Code # Store available data paths # WeRateDogs twitter archive d_1 = './data/twitter-archive-enhanced.csv' # Tweet image prediction file and where I am going to store it d2_url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' d_2 = './data/image-predictions.tsv' # Where I am going to store the Twitter API collected data d_3 = './data/tweet_json.txt' # Request the tweet image prediction file r = rqs.get(d2_url) # Store the file in the path open(d_2, 'wb').write(r.content); # Read data_1 and data_2 into working variables df1_raw = pd.read_csv(d_1, sep = ',') df2_raw = pd.read_csv(d_2, sep = '\t') # Preview data_1 to confirm that it was properly read df1_raw.head(2) # Preview data_2 to confirm that it was properly read df2_raw.head(2) # DO NOT RUN THIS CELL, PLEASE RUN THE ABOVE AND THEN THE BELOW CELLS. 
Unless you have half an hour to spend ( ͡° ͜ʖ ͡°) # Twitter API # keys and secrets are stored in creds.py import creds # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file auth = tweepy.OAuthHandler(creds.consumer_key, creds.consumer_secret) auth.set_access_token(creds.access_token, creds.access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) # Tweet IDs for which to gather additional data via Twitter's API tweet_ids = df1_raw.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = time.time() # Save each tweet's returned JSON as a new line in a .txt file and prints the log in another with open('./data/tweet_json.txt', 'w') as outfile: with open('./data/twitter-query-log.txt', 'a+') as api_log: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 try: tweet = api.get_status(tweet_id, tweet_mode='extended') api_log.write(str(count) + ": " + str(tweet_id) + " - Success\n") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: api_log.write(str(count) + ": " + str(tweet_id) + " - Fail\n") fails_dict[tweet_id] = e pass print(round((time.time() - start)/60,1),' minutes') print(fails_dict) # Read data into working variables with open(d_3) as f_in : df3_raw = pd.read_json(f_in, lines=True) # Preview data_3 to confirm that it was properly read fullcols(df3_raw.head(2)) ###Output _____no_output_____ ###Markdown Assessing DataDuring this phase I am going to take a closer look at the data to assess quality of the data, which can be expanded into four main dimensions:|      | Completeness | Validity | Accuracy | Consistency || ---------: | :-- | :-- | :-- | :-- || **Issues** | Missing data | Data that makes no sense for the observation | Even though it could be valid and make sense, data that is not true or do not belong to the observation |The data may be 
correct but is not formatted in the same way, is not standardized across the dataset | On the other hand, we can encounter untidy data, which refers more to messy and not-user-friendly data. The information may be complete, valid, accurate and consistent, but it is not readable, cannot be analyzed and/or cannot be matched with other datasets. Meaning, in general, issues with data structure or semantics. We learned that *Each variable must have its own column, each observation must have its own row, and each type of observational unit should be stored in its own table.* > _I was requested to identify and clean at least 8 quality issues and at least 2 tidiness issues in this dataset. I performed this assessment using the following code cells. However, you can read my notes in this section._- Quality issues 1. Columns ```contributors, coordinates, geo, quoted_status, quoted_status_id, quoted_status_id_str, place, quoted_status_permalink``` are empty. 1. Twenty-three tweets are replies. 1. The dog names column has 'none' and letters that do not form words. 1. The ```timestamp``` column is stored as string. 1. Also, the timestamp ends with a '+0000' string after the time. 1. Dog breed image predictions ```p1, p2, p3``` contain predictions of items that are not dog breeds. 1. These predictions are inconsistently in upper, lower or title case. 1. Additionally, they have underscores for spaces when breeds, or other rare stuff, have more than one word. 1. Seventy-two tweets are retweets. 1. Max numerator is 1776. 1. The tweet_id is stored as int. 1. In the ```rating_denominator``` column, though they are usually /10, there are 15 different ratings up to 170. 1. The ```possibly_sensitive``` and ```possibly_sensitive_appealable``` columns are all zeros.- Tidiness issues 1. Columns ```text``` and ```full_text``` may be the same. 1. Our three raw datasets belong to the same type of observational unit "a tweet". 1. 
There is a variable that we will call ```dog_stage``` that is spread in 4 columns ```doggo, floofer, pupper and puppo```. 1. The ```entities``` and ```extended_entities``` columns have a dictionary within each cell. 1. The ```user``` column hides very interesting information on the user. 1. As all of the tweets are from the same user, this information should go in a different table. These observations above were noted after joining all the data together in the same data frame and may have been completed at different times of the assessing process. Do not be scared if they are not treated sequentially through the code! Although you will not get lost while reading the code ```•ᴗ•``` ###Code # Read the log file I've created apilog = pd.read_csv('./data/twitter-query-log.txt','r', header = None) # Parsing the log file apilog = apilog[0].str.split(':', expand = True) apilog = apilog[1].str.split('-', expand = True) apilog.columns = ['id','status'] # Count values in log file to check how many fails apilog.status.value_counts() ###Output _____no_output_____ ###Markdown > _Noted that 25 ids failed during the API activity_ ###Code # set tweet id as index for all data df1_raw = df1_raw.set_index('tweet_id') df2_raw = df2_raw.set_index('tweet_id') df3_raw = df3_raw.set_index('id') dpl(df1_raw.head(1)) dpl(df2_raw.head(1)) dpl(df3_raw.head(1)) # Join all sources into one table df = (df1_raw.join( df2_raw, how = 'inner', lsuffix='_1', rsuffix='_2')).join ( df3_raw, how = 'inner', rsuffix='_3') fullcols(df) # Reformat number display to decimal and not scientific pd.options.display.float_format = '{:.2f}'.format # Check for information on dataframe full(df.info()) full(df.describe()) # Confirm if the unique value in this column can be dropped pd.DataFrame(df['place'].dropna()) # Get dog names counts df['name'].value_counts() # Count how many different denominators there are len(df['rating_denominator'].value_counts()) # Get counts of unique values in predictor columns 
dpl(df['p1'].value_counts()) dpl(df['p2'].value_counts()) dpl(df['p3'].value_counts()) ###Output _____no_output_____ ###Markdown Cleaning DataDuring this section I will address the issues found in the assessment phase and clean any more issues that I encounter in my path. ###Code # Getting rid of fully empty columns and place column df = df.dropna(axis = 1, how = 'all').drop('place', axis=1) fullcols(df.info()) # Getting rid of replies df = df[df['in_reply_to_status_id'].notnull() == False] df.shape # Getting rid of retweets df = df[df['retweeted_status_id'].notnull() == False] df.shape # Getting rid of new full empty columns df = df.dropna(axis = 1, how = 'all') fullcols(df.info()) # Fixing timestamp dtype df['timestamp'] = df['timestamp'].copy().apply(lambda x: x[0:len(x)-6]) df['timestamp'] = pd.to_datetime(df['timestamp']) df['timestamp'].head(2) # Getting rid of possibly_sensitive and possibly_sensitive_appealable columns df = df.drop(['possibly_sensitive','possibly_sensitive_appealable'], axis = 1) df.info() # Fixing predictions format df['p1'] = df['p1'].str.lower().str.replace('_',' ') df['p2'] = df['p2'].str.lower().str.replace('_',' ') df['p3'] = df['p3'].str.lower().str.replace('_',' ') fullcols(df.sample(5)) # Reformat entities column df.entities.head(1) # Reformat entities column (cont.) def fullcell(dataframe,column,index): pd.set_option('display.max_colwidth', len(dataframe[column][0:1])) dpl(dataframe[column][index]) pd.reset_option('display.max_colwidth') # Review the contents of one cell in entities fullcell(df,'entities',892420643555336193) # Reformat entities column (cont.) 
df[['hashtags', 'symbols', 'user_mentions', 'urls', 'media']] = df.entities.apply(pd.Series) df = df.drop('entities', axis=1) fullcols(df.head(3)) # extended_entities contains the same information as the media output from entities, so I am getting rid of this column df = df.drop('extended_entities', axis=1) # As the only relevant data from the media column is the image url, which I already have, I will get rid of that column df = df.drop('media', axis=1) def getDuplicateColumns(fd): # Get a list of duplicate columns. # It will iterate over all the columns in dataframe and find the columns whose contents are duplicate. # :param df: Dataframe object # :return: List of columns whose contents are duplicates. duplicateColumnNames = set() # Iterate over all the columns in dataframe for x in range(fd.shape[1]): # Select column at xth index. col = fd.iloc[:, x] # Iterate over all the columns in DataFrame from (x+1)th index till end for y in range(x + 1, fd.shape[1]): # Select column at yth index. otherCol = fd.iloc[:, y] # Check if two columns at x & y index are equal if col.equals(otherCol): duplicateColumnNames.add(fd.columns.values[y]) return list(duplicateColumnNames) # I am getting rid of the duplicate columns with the weirder name. Follow me on the trip! getDuplicateColumns(df) # Trip (cont.) (df.source == df.source_3).value_counts() # No surprise they are duplicated! # Trip (cont.) df = df.drop('source_3', axis = 1) # Remove duplicate columns # Trip (cont.) df.retweeted.value_counts() # Remove duplicate columns # Trip (cont.) (((df.retweeted == df.is_quote_status) == df.truncated) == df.favorited).value_counts() # Confirm columns are duplicated # I imagined they are different things, still they are all fully false # then it is information not relevant for analysis, they go! # Trip (cont.) df = df.drop(['retweeted','is_quote_status','truncated','favorited'], axis = 1) # Remove duplicate columns # Trip (cont.) 
(df.text == df.full_text).value_counts() # Confirm columns are duplicated # Trip (cont.) df = df.drop('full_text', axis = 1) # Remove duplicate columns # Trip (cont.) (df.timestamp == df.created_at).value_counts() # Confirm columns are duplicated # Trip (cont.) df.created_at.head() # Review datatype and format of created_at ###Output _____no_output_____ ###Markdown > *And just found out that ```created_at``` was already formatted for date and time, even before I had to reformat ```timestamp```, and they are the same data. **ಥ﹏ಥ** I guess those are the wonders of being a data analyst.* ###Code df = df.drop('created_at', axis = 1) # Remove duplicate columns # Let's come back from the trip dpl(df.info()) fullcols(df.head()) # Awesome findings in this column df.user[892420643555336193] ###Output _____no_output_____ ###Markdown > _Maybe, some of the user information changes through the time, while he tweets, I will keep the timestamp and move the user information to another table_ ###Code # Create a new table for user data wrd = df[['timestamp','user']] df = df.drop('user', axis = 1) wrd.head(2) # I will come back to this table later # I will google how to move doggo, floofer, pupper and puppo where they belong! # (as if I did not google anything else for this project above) fullcols(df.head(2)) # Exploring df['dog_stage'] = df['text'].str.extract('(doggo|floofer|pupper|puppo)') fullcols(df.sample(5)) # Turned out that I could reperform that catergorization, I will drop those columns. BB! df = df.drop(['doggo','floofer','pupper','puppo'], axis = 1) fullcols(df.head(2)) # hashtags symbols user_mentions urls Out! These columns look awfully useless for the current analysis # There is no missing data, it is just that its values applies to very few tweets to take them into consideration df = df.drop(['hashtags','symbols','user_mentions','urls'], axis = 1) # Back to original cleaning. 
Let's fix the weird dog names df['name'] = df.name.str.replace('^[a-z]+', 'None') # replace names starting with a lowercase letter with 'None' df['name'].value_counts() # Convert numerators into real ratings float_ratings_txt = [] float_ratings_index = [] float_ratings = [] for i, text in df['text'].iteritems(): if bool(re.search('\d+\.\d+\/\d+', text)): # find strings with the format number.number / number (d is for digit) float_ratings_txt.append(text) float_ratings_index.append(i) float_ratings.append(re.search('\d+\.\d+', text).group()) float_ratings_txt # As they are only four I can make the changes manually one by one df.loc[float_ratings_index[0],'rating_numerator'] = float(float_ratings[0]) df.loc[float_ratings_index[1],'rating_numerator'] = float(float_ratings[1]) df.loc[float_ratings_index[2],'rating_numerator'] = float(float_ratings[2]) df.loc[float_ratings_index[3],'rating_numerator'] = float(float_ratings[3]) # Review fullcols(df.describe()) # Check for maximum rating numerator df[df['rating_numerator'] == max(df['rating_numerator'])] # Checking that 'weird' numbers are correct. What can I say? 'They're good dogs Brent' fullcell(df,'text',749981277374128128) # Check for maximum rating denominator df[df['rating_denominator'] == max(df['rating_denominator'])] # Check if high denominator is ok fullcell(df,'text',731156023742988288) # They are still good dogs... 
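A quick hedged sanity check of the decimal-rating pattern used above (the sample text is invented for illustration, not a tweet from the archive):

```python
# Sketch only: confirm the decimal-rating regex picks up ratings like 13.5/10.
# 'sample' is a made-up text, not data from the archive.
import re

sample = "This is Bella. She is 13.5/10 would pet."
match = re.search(r'\d+\.\d+\/\d+', sample)
print(match.group())  # 13.5/10
```
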
# Create a different rate standard df['rating'] = df['rating_numerator']/df['rating_denominator'] df = df.drop(['rating_numerator','rating_denominator'], axis = 1) fullcols(df.sample(3)) # Drop display_text_range column df = df.drop('display_text_range', axis = 1) # I really do not know what it means fullcols(df.sample(5)) # get the dog breed in one column df['dog_breed'] = 'None' for i, row in df.iterrows(): if row.p1_dog: df.at[i, 'dog_breed'] = row.p1 elif row.p2_dog: df.at[i, 'dog_breed'] = row.p2 elif row.p3_dog: df.at[i, 'dog_breed'] = row.p3 else: df.at[i, 'dog_breed'] = 'None' # Review fullcols(df.sample(10)) # I will drop image prediction now old columns df = df.drop(['p1','p1_conf','p1_dog','p2','p2_conf','p2_dog','p3','p3_conf','p3_dog'], axis = 1) # Checkpoint fullcols(df.sample(5)) # Drop a couple more of useless columns df = df.drop(['source','id_str'], axis = 1) # Random dog image of the day! img(url = df.jpg_url.at[df.index[np.random.randint(0,len(df.index))]]) # Checkpoint fullcols(df.dog_breed.value_counts()) ###Output _____no_output_____ ###Markdown > *I guess that now we have very good dogs ```V•ᴥ•V```* ###Code # Back to the user table, let's checkpoint that wrd.sample(5) # Reformat user column wrd[['id', 'id_str', 'name', 'screen_name', 'location', 'description', 'url', 'entities', 'description' 'protected', 'followers_count', 'friends_count', 'listed_count', 'created_at', 'favourites_count', 'utc_offset', 'time_zone', 'geo_enabled', 'verified', 'statuses_count', 'lang', 'contributors_enabled', 'is_translator', 'is_translation_enabled', 'profile_background_color', 'profile_background_image_url', 'profile_background_image_url_https', 'profile_background_tile', 'profile_image_url', 'profile_image_url_https', 'profile_banner_url', 'profile_link_color', 'profile_sidebar_border_color', 'profile_sidebar_fill_color', 'profile_text_color', 'profile_use_background_image', 'has_extended_profile', 'default_profile', 'default_profile_image', 'following', 
'follow_request_sent', 'notifications', 'translator_type']] = wrd.user.apply(pd.Series) wrd = wrd.copy().drop('user', axis=1) fullcols(wrd.sample(3)) wrd = wrd.drop(getDuplicateColumns(wrd), axis = 1) # drop duplicate columns # Checkpoint fullcols(wrd.sample(5)) # Reset index and order reverse-chronologically wrd = wrd.reset_index() wrd = wrd.sort_values('timestamp', ascending = False) fullcols(wrd.head(10)) wrd.id_str.value_counts() # Confirming that all tweets from the other table are from WeRateDogs # filter columns wrd = wrd[['index','timestamp','name','followers_count']] # Review dpl(wrd.head()) dpl(wrd.shape) # Save down cleaned data frames wrd.to_csv(r'./data/we-rate-dogs-clean.csv', index = False, header = True) df.to_csv(r'./data/df-tweets-clean.csv', index = False, header = True) ###Output _____no_output_____ ###Markdown Analyzing Data I will start by playing with the data. As I said at the beginning of this notebook, after reviewing it a little more, I will decide which questions would be most insightful to answer with this data. ###Code # Do I know how to nest? print(str(round(df.dog_stage.copy().dropna().shape[0]/df.dog_stage.copy().shape[0]*100,2))+'%') # Only 16.29% of tweets mention the dog_stage # Where was I?
fullcols(df.sample(5)) dpl(wrd.sample(5)) # I want to see how the followers count behaves during the period in which the API was running plt.figure(figsize=(10, 8)) plt.xlim([datetime.date(wrd.timestamp.min()), datetime.date(wrd.timestamp.max())]) plt.xlabel('Year and Month') plt.ylabel('Followers Count') plt.plot(wrd.timestamp, wrd.followers_count, color = '#e3276c') plt.grid(color = 'gray', linestyle = '-.', linewidth = '.2') plt.title('WeRateDogs Followers over Time'); plt.savefig('followers-count') plt.show(); # Get followers range dpl(wrd.followers_count.min()) dpl(wrd.followers_count.max()) dpl(wrd.followers_count.max()-wrd.followers_count.min()) ###Output _____no_output_____ ###Markdown > According to the data, the followers count of WeRateDogs fluctuated between 8771383 and 8772134, a range of 751, during the period of twitter data scraping, which took approximately 32 minutes. ###Code # Sum tweet counts on a semi-monthly (roughly two-week) basis fortnightly_tweets = df.reset_index().groupby(pd.Grouper(key = 'timestamp', freq = "SM")).count().reset_index() fortnightly_tweets = fortnightly_tweets[['timestamp', 'index']] fortnightly_tweets['tweets'] = fortnightly_tweets['index'] fortnightly_tweets = fortnightly_tweets.drop('index', axis = 1) fortnightly_tweets.head() # Plot tweets over time plt.figure(figsize=(10, 8)) plt.xlim([datetime.date(fortnightly_tweets.timestamp.min()), datetime.date(fortnightly_tweets.timestamp.max())]) plt.xlabel('Year and Fortnight') plt.ylabel('Tweets Count') plt.plot(fortnightly_tweets.timestamp, fortnightly_tweets.tweets, color = '#e3276c') plt.grid(color = 'gray', linestyle = '-.', linewidth = '.2') plt.title('WeRateDogs Tweets over Time'); plt.savefig('tweets-count') plt.show(); ###Output _____no_output_____ ###Markdown > By the end of 2015, WeRateDogs tweeted over 250 rates in a couple of weeks. That number decreased considerably over the following years, falling to fewer than 25 rates per week by the summer of 2017.
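The `freq = "SM"` passed to `pd.Grouper` above is pandas' semi-month frequency (periods ending on the 15th and at month end), which is what produces the roughly two-week buckets. A minimal sketch of the same grouping on hypothetical toy timestamps (not the real archive):

```python
import pandas as pd

# Toy timestamps spanning about six weeks (hypothetical data, not the real tweet archive)
ts = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2015-11-20", "2015-11-25", "2015-12-02",
        "2015-12-10", "2015-12-20", "2015-12-28",
    ]),
    "tweet_id": range(6),
})

# freq="SM" buckets rows into semi-month periods (mid-month and month-end),
# the same grouping used for the tweets-over-time chart above
fortnightly = ts.groupby(pd.Grouper(key="timestamp", freq="SM")).count()
print(fortnightly["tweet_id"])
```

Every row lands in exactly one semi-month bucket, so the bucket counts always sum back to the original row count.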
###Code # I want to look out for curiosities in the dog names initial_count = df.name[df.name != 'None'].str[0].value_counts() initial_count ###Output _____no_output_____ ###Markdown > For the tweets with dog names, the most common names for dogs start with 'B', 'C' or 'S' ###Code # Drop unknown names, get first letter and Review initials = pd.DataFrame(df.name[df.name != 'None'].str[0]).reset_index().drop('index', axis = 1) initials.head() # Plot dog name initials sns.set_style("whitegrid") sns.axes_style("whitegrid") fig, ax = plt.subplots(figsize = (10,8)) plt.grid(color = 'gray', linestyle = '-.', linewidth = '.2') p = sns.countplot(data = initials, x = 'name', color = '#e3276c', ax = ax) ax.set(xlabel='Name Initial', ylabel='Initial Count') plt.title('Rated Dogs Names Initial') fig = p.get_figure() fig.savefig('initials'); # Separate dog breed and ratings dog_breed = df[['dog_breed','rating']] dog_breed # Sort breeds by rating dog_breed = dog_breed.sort_values(['rating'], ascending = False) dog_breed # Best rated dog! Image recognition cannot do its job because this dog is fully dressed! I think it is a golden retriever # How could it not be? import urllib.request top1name = df.name[749981277374128128] top1tweet = df.text[749981277374128128] dpl(top1name) dpl(img(url = df.jpg_url.at[749981277374128128])) dpl(top1tweet) urllib.request.urlretrieve(df.jpg_url.at[749981277374128128], top1name+'-top1-dog.png'); # Second best rated tweet top2name = df.name[670842764863651840] top2tweet = df.text[670842764863651840] dpl(top2name) dpl(img(url = df.jpg_url.at[670842764863651840])) # Haha, Snoop 'Dog'g is our second best dog dpl(top2tweet) urllib.request.urlretrieve(df.jpg_url.at[670842764863651840], top2name+'-top2-dog.png') # Third best rated dog top3name = df.name[810984652412424192] top3tweet = df.text[810984652412424192] dpl(top3name) dpl(img(url = df.jpg_url.at[810984652412424192])) # Best rated dog among our known breeds! Definitely the golden!
dpl(top3tweet) urllib.request.urlretrieve(df.jpg_url.at[810984652412424192], top3name+'-top3-dog.png') # Get data to create dog breed chart dog_breed_chart = df.copy() dog_breed_chart.dog_breed = dog_breed_chart.dog_breed.astype(str) dog_breed_chart # Get data to create dog breed chart (cont.) dog_breed_chart = df.copy() dog_breed_chart.dog_breed = dog_breed_chart.dog_breed.astype(str) dog_breed_chart = dog_breed_chart.dog_breed.value_counts().rename_axis('breed').to_frame('counts') dog_breed_chart = dog_breed_chart.reset_index() dog_breed_chart.breed[11:] = 'Other' dog_breed_chart.breed = dog_breed_chart.breed.str.title() # Title case the names for the chart labels dog_breed_chart = dog_breed_chart.groupby(['breed'])['counts'].sum().to_frame() dog_breed_chart # Chart breed of rated dogs fig2 = plt.figure(figsize=(10, 8)) colors = ['pink', '#FE7F9C', '#9dc183','#DF5286','hotpink','#fdb9c8','palevioletred', '#FDAB9F', '#F19CBB', 'lightgray','hotpink', '#FDAB9F'] explode = (0,0,0.2,0,0,0,0,0,0,0,0,0) ax = fig2.add_axes([0,0,1,1]) ax.axis('equal') ax.pie(dog_breed_chart.counts, explode=explode, labels = dog_breed_chart.index ,autopct='%1.2f%%', colors = colors) plt.savefig('breeds') plt.show(); ###Output _____no_output_____ ###Markdown > 15.53% of tweets have an unknown breed, and the most recurrent breed by far is the Golden Retriever.
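The unknown-breed share quoted above is just the normalized frequency of the `'None'` value in `dog_breed`. A small sketch of the same `value_counts(normalize=True)` calculation on hypothetical data:

```python
import pandas as pd

# Hypothetical mini version of the dog_breed column (not the real data)
breeds = pd.Series([
    "golden_retriever", "None", "pug", "golden_retriever",
    "labrador_retriever", "None", "golden_retriever", "pug",
])

# normalize=True turns counts into fractions; multiplying by 100 gives percentages
shares = breeds.value_counts(normalize=True) * 100
print(shares.round(2))

# The 'None' share corresponds to tweets whose breed could not be predicted
unknown_pct = shares["None"]
```

`value_counts` sorts by frequency, so the top label is the most common breed without any extra sorting step.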
###Code # Get data for best rated by dog stage dogstage = df[['dog_stage','dog_breed','rating']] dogstage.dog_stage.fillna('unknown', inplace = True) dogstage.sort_values('rating', ascending = False, inplace = True) dogstage.reset_index(inplace = True) dogstage = dogstage[dogstage.dog_stage != 'unknown'] dogstage = pd.pivot_table(dogstage, values='rating', index=['dog_stage','dog_breed']) dogstage # Get dog stage names d = list(set(dogstage.index.get_level_values(0))) d # Get the top rated breeds for each dog stage for x in d: print(x.title()) c = dogstage['rating'][x].to_frame() c = c.sort_values('rating', ascending = False) dpl(c.head(3)) print('---------------------------\n') ###Output Floofer ###Markdown > The top rated breeds for each dog stage are the following:| Doggo | Puppo | Floofer | Pupper || --- | --- | --- | --- || Irish Setter : 1.40 | Rottweiler : 1.40 | Chow : 1.30 | Black-and-tan Coonhound : 1.40 || Pembroke : 1.40 | | Samoyed : 1.30 | | Insights Well, this has been an interesting dataset. After dealing with unreadable data, finding data nested within data, cleaning all of that and taking a deeper look at the real information, I found a couple of interesting things. To begin, I decided to check the behavior of the followers of WeRateDogs while I was running the twitter API. At the starting point, the account had the maximum followers count for the period, with 8,772,134 followers, and this number fluctuated during the following minutes by 751 points, going up and down, reaching its minimum of 8,771,383 followers by the end of this period. The odd thing about this is how many times it went 751 points up and down in so few minutes.
It was like 751 people were playing with the "follow" button to see how many spikes I would have in my chart. Then I went over to check the number of dog rates published every two weeks on the twitter account of WeRateDogs, and it is evident that the account was trying to get as many followers as possible at the beginning, publishing over 250 rates every 2 weeks. After just a couple of months that number dropped considerably and stayed between 20 and 50 rates every 2 weeks for the next few years. This behavior was observed for the period for which we have information, approximately between the winter of 2015 and the summer of 2017. Another random fact that I had fun with was finding the most common initial in the rated dogs' names. There are dog names starting with almost every letter of the alphabet, missing, I think, only the X. But the winners here are B, C and S, with over 120 dogs rated for each letter! More than 120 rated dogs whose names began with the letter B, more than 120 whose names began with the letter C, and the same for S! It may not be critical information from this data, but I found it interesting to tell. ```¯\_(ツ)_/¯``` The best for the end: of course this is data from WeRateDogs, so I think everyone wants to know who the best rated dogs are. I would like to start the awards with the best rated dog breeds in each dog stage, which are 'doggo', 'puppo', 'floofer' and 'pupper'. In the Doggo category we have a tie between the Irish Setter and the Pembroke, both with a rating of 1.4. For the Puppo stage, the winner is the Rottweiler with a rating of 1.4. It is followed by the Black-and-tan Coonhound, rated 1.4 as a Pupper, and finally, in another tie for the Floofer stage with a rating of 1.3, we have the Chow and the Samoyed. So are these the best rated breeds? Actually, no. The issue with this data is that not all the rates contain the dog stage information; in fact, only 16.29% of the rates included the dog stage.
For that matter, the previous award is not that well deserved. Let's keep up the quest to find the best rated breeds then! There are 15.53% of rated dogs whose breeds we do not know; I do not think this changes the result, which says that the most common rated dog breed is the Golden Retriever ```WeLoveGolden```. But wait, I want to know who the best rated dogs are, and we are going to find out! For the top 3 dogs I will go from bottom to top. In third place we have... Sam! "She smiles 24/7", and she is a Golden Retriever. In second place we have... ...wait, what? This is Snoop Dogg, ```☜(⌒▽⌒)☞``` this is a "Good Dogg". I do not think there is a breed for Snoop Dogg, do you? Finally, last but not least, we have the best of the best, the guardian of lost souls, the powerful, the pleasurable, the indestructible... Atticus! "He's quite simply America". Image recognition did not work on him because Atticus is fully dressed, but I bet he is a Golden Retriever! Conclusion What is in plain sight does not need glasses. We can definitely say the best rated dogs are **Golden Retrievers**. Why? ```◖ᵔᴥᵔ◗``` Because "they're good dogs Brent."[Go Back to the top][1][1]:table_of_contents ###Code # Time at the end of the code to track elapsed time elapsed = round((time.time() - notebk_time)/60, 2) print('----- Completed in {} minutes -----'.format(elapsed)) # Convert the notebook into an html file from subprocess import call call(['jupyter', 'nbconvert', '--to', 'html', 'wrangle_act.ipynb']) ###Output _____no_output_____ ###Markdown Data Wrangling Dataset - WeRateDogs&trade; Twitter Archive***By: Kartik Nanduri*****Date: 1st Dec, 2018.** Let's Gather ###Code # importing all the necessary libraries import os import pandas as pd import requests as req ###Output _____no_output_____ ###Markdown ***Important! uncomment the following lines to run the notebook without errors.*** ###Code # resetting the folder structure.
#os.rename('dataset/twitter-archive-enhanced.csv', 'twitter-archive-enhanced.csv') #import shutil #shutil.rmtree('dataset') ###Output _____no_output_____ ###Markdown ***Important! once done, please recomment.*** 1. [x] **The file given at hand `twitter-archive-enhanced.csv`** ###Code # all the required files for this project are in the list files_list files_list = ['twitter-archive-enhanced.csv', 'image-predictions.tsv', 'tweet_json.txt'] # reading the twitter archive file archive = pd.read_csv(files_list[0]) # taking a look at random entries from the archive file archive.sample(2) ###Output _____no_output_____ ###Markdown 2. [x] **Fetching the data from a URL and saving it to the local drive - `image-predictions.tsv`** ###Code # reading the file from the internet using the requests library url = "https://s3.amazonaws.com/video.udacity-data.com/topher/2018/November/5bf60c69_image-predictions-3/image-predictions-3.tsv" res = req.get(url) with open(files_list[1], mode = "wb") as op_file: op_file.write(res.content) # checking if we fetched the data the right way img_pre_test = pd.read_csv(files_list[1], delimiter = "\t", encoding = 'utf-8') img_pre_test.sample(2) # we did it the right way, Yay! it worked. ###Output _____no_output_____ ###Markdown 3.
[x] **Getting data from Twitter&trade;** ###Code # importing all the necessary libraries for accessing Twitter via API import tweepy from tweepy import OAuthHandler import json from timeit import default_timer as timer # setting up all the necessary placeholders for the API consumer_key = 'xxx.xxx.xxx.xxx' consumer_secret = 'xxx.xxx.xxx.xxx' access_token = 'xxx.xxx.xxx.xxx' access_secret = 'xxx.xxx.xxx.xxx' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth_handler = auth, parser = tweepy.parsers.JSONParser(), wait_on_rate_limit = True, wait_on_rate_limit_notify = True) def fetch_and_save(ids, api_ins, one_id = None): ''' This function will fetch the data associated with each id in the ids list ids (List Object): a list of all tweet ids api_ins (Tweepy Object): api object instance, used to query twitter for data one_id (int): use when you want to query only one tweet failed_ids (List Object): a list returned so that this function can be called once again on those ids ''' new_file_name = ''; failed_ids = []; tweet_df = [] # checking if the file exists if os.path.exists(files_list[2]): temp = [s for s in os.listdir() if "tweet_json" in s] new_file_name = files_list[2].split('.')[0] + "_" + str(len(temp)) + ".txt" else: new_file_name = files_list[2] # querying a list of ids if one_id == None: with open(new_file_name, mode = 'w') as outfile: for one_id in ids: try: content = api_ins.get_status(one_id, tweet_mode='extended') json.dump(content, outfile) outfile.write('\n') except Exception as e: print("Error for: " + str(one_id) + " - " + str(e)) failed_ids.append(one_id) # querying a single id else: try: content = api_ins.get_status(one_id, include_entities = True) favorites = content['favorite_count'] retweets = content['retweet_count'] tweet_df.append({'tweet_id': int(one_id), 'favorites': int(favorites), 'retweets': int(retweets)}) return tweet_df except Exception as e: print("Error for: " + str(one_id)
+ " - " + str(e)) failed_ids.append(one_id) return failed_ids # passing the list of ids to the function fetch_and_save() tweet_ids = archive['tweet_id'].tolist() len(tweet_ids) # fetching data # starting the timer start = timer() # querying errors = fetch_and_save(tweet_ids, api) # ending the timer end = timer() # calculating the runtime of fetch_and_save print("That took about {} mins.".format(round((end - start)/60, 1))) # let's save the failed ids into one master list print("Total failed requests are: {}. \n".format(len(errors))) # ids that failed and the ones that passed indi_fail = []; success = [] # for each failed id, let's try to fetch the status individually. for error in errors: temp = fetch_and_save(ids = None, api_ins = api, one_id = error) indi_fail.append(temp[0]) # removing empty elements from the list success = [x for x in indi_fail if not isinstance(x, (int))] indi_fail = [x for x in indi_fail if isinstance(x, (int))] # checking if there is a change print("\nWe were able to retrieve {} records, others failed.".format(len(errors) - len(indi_fail))) # printing the results of success success ###Output _____no_output_____ ###Markdown 4.
[x] **Okay, let's read `retweets` and `favourite count` from `tweet_json.txt`** ###Code # reading tweet_json.txt tweets = pd.read_json(files_list[2], lines = True, encoding = 'utf-8') # let's select only the following columns: retweet_count, favorite_count, id tweets = tweets[['id', 'favorite_count', 'retweet_count']] tweets.info() # renaming the columns tweets.rename(columns={'id': 'tweet_id', 'favorite_count': 'favorites', 'retweet_count': 'retweets'}, inplace = True) # concatenating the dataframes into one success = pd.DataFrame(success, columns = ['tweet_id', 'favorites', 'retweets']) tweet_master = pd.concat([tweets, success], ignore_index = True, sort = True,) tweet_master.info() # making a copy of the archive archive_copy = archive.copy() # marking as 0 the ids that we failed to retrieve in the archive for a_id in indi_fail: archive_copy.loc[archive_copy['tweet_id'] == a_id, ['tweet_id']] = 0 # checking if we did it right len(archive_copy[archive_copy['tweet_id'] == 0]) # appending the new file to our files list files_list.append('archive_copy.csv') # writing the contents of tweet_master to a file import numpy as np tweet_master['tweet_id'] = tweet_master['tweet_id'].astype(np.int64) tweet_master.to_csv('tweet_master.csv', index = False) # saving the updated version of our archived-enhanced.csv archive_copy['tweet_id'] = archive_copy['tweet_id'].astype(np.int64) archive_copy.to_csv(files_list[3], index = False) ###Output _____no_output_____ ###Markdown 5. [x] **Last thing to do is to tidy up our folder, let's get going.** ###Code # moving all data files under one folder - dataset # removing the temporary files, that acted as placeholders # creating the folder folder = 'dataset' if not os.path.exists(folder): os.mkdir(folder) # we know that our master datasets for this project are # 1. twitter-archive-enhanced.csv # 2. image-predictions.tsv # 3. tweet_json.txt # 4.
tweet_master.csv # let us move these files # updating our files_list files_list.append('tweet_master.csv') # moving only required files for file in files_list: if os.path.exists(file): os.rename(file, folder+'/'+file) # listing the current directory os.listdir() # clean and neat, let's get on with assessing and cleaning # renaming files_list for i in range(len(files_list)): files_list[i] = folder + '/'+ files_list[i] ###Output _____no_output_____ ###Markdown Summary - Gathering- We know that gathering is the first step in wrangling.- We were successful in gathering from three different sources with different techniques: - Data given at hand. - Fetched from a flat file stored on a server. - From an API.- There are a total of 14 missing data points; I tried different ways of retrieving them, using the API as well as `twurl` of the `Ruby` package, but they were not to be found, as stated below in the highlighted section.***So let's start with assessing the data.*** ![error](error.png) Assessing ###Code files_list # let's load up the datasets and start assessing them. archive = pd.read_csv(files_list[-2], encoding = 'utf-8') img_pre = pd.read_csv(files_list[1], sep = '\t', encoding = 'utf-8') tweet_master = pd.read_csv(files_list[-1], encoding = 'utf-8') ###Output _____no_output_____ ###Markdown Issues to sort!
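The programmatic checks below all follow the same basic pattern: `.info()`, duplicate counts, and missing-value counts. A minimal, self-contained sketch of that pattern on a hypothetical toy frame (not the real archive):

```python
import pandas as pd

# Toy frame with one duplicated id and one missing rating (hypothetical data)
toy = pd.DataFrame({
    "tweet_id": [1, 2, 2, 3],
    "rating_numerator": [12, 13, 13, None],
})

dupes = toy.tweet_id.duplicated().sum()      # how many ids are repeats of an earlier row
missing = toy.rating_numerator.isna().sum()  # how many ratings are missing
print(dupes, missing)
```

The same two one-liners, applied to each table, drive most of the quality-dimension findings listed in the assessment.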
###Code # printing out archive - visual assessment archive # Programmatic Assessment 1 - Information archive.info() # Programmatic Assessment 2 - Describe archive.describe() # checking for duplicates - tweet_ids # these are the 0's for which the api failed to retrieve data sum(archive.tweet_id.duplicated()) # checking if we have more than one dog class assigned to a dog # the following are the only combinations that are present in the dataset cond_1 = (archive['doggo'] == 'doggo') & (archive['floofer'] == 'floofer') cond_2 = (archive['doggo'] == 'doggo') & (archive['pupper'] == 'pupper') cond_3 = (archive['doggo'] == 'doggo') & (archive['puppo'] == 'puppo') # printing these entries archive[cond_1 | cond_2 | cond_3][['tweet_id', 'text', 'doggo', 'floofer', 'pupper', 'puppo']] len(archive[cond_1 | cond_2 | cond_3]) ###Output _____no_output_____ ###Markdown 1. **`twitter-archive-enhanced.csv`** table***1 Content Issues:*****1.1 Visual Assessment:**- `rating_numerator` : has values such as 1, 3, etc. - **Data Quality Dimension - `Consistency`**.- `rating_denominator` : has values less than 10; for example, the tweet_id - 666287406224695296 has the number 2 as its value - **Data Quality Dimension - `Consistency`**. - We see that Articles - `a`, `an`, `the` - have been used to name dogs, as well as words such as `such` and `quite` - **Data Quality Dimension - `Validity`**.- There are instances where the names of dogs are in lowercase - **Data Quality Dimension - `Consistency`**.**1.2 Programmatic Assessment:**- `rating_numerator` : has a maximum value of 1766 - **Data Quality Dimension - `Consistency`**.
- `rating_denominator` : has a maximum value of 170 - **Data Quality Dimension - `Consistency`**.- All in all, this dataset appears to be clean, except for `expanded_url` - we have about 59 instances missing - **Data Quality Dimension - `Completeness`**.- We can see that there are more than one class assigned to tweets, analyze and assign proper dog class so that melting is easy - **Data Quality Dimension - `Consistency`**.***2 Structural Issues:*****2.1 Visual Assessment:**- we can see that, there are four classes of dogs `doggo`, `floofer`, `puppo`, `pupper`; these should a part of one unit - `dog_class` - **Data Quality Dimension - `Consistency`**.**2.2 Programmatic Assessment:**- `in_reply_to_status_id`, `retweeted_status_id`, `retweeted_status_user_id`, `in_reply_to_user_id` of type float64 must be converted into int - **Data Quality Dimension - `Validity`**.- `timestamp`, `retweeted_status_timestamp` of type object must be converted into datatime - **Data Quality Dimension - `Validity`**. ###Code # assessing img_predictions dataset img_pre # Programmatic Assessment - Information img_pre.info() # checking for duplicates img_pre[img_pre['jpg_url'].duplicated(keep = False)].sort_values(by = 'jpg_url')[['tweet_id', 'jpg_url']] ###Output _____no_output_____ ###Markdown 2. **`image-predictions.tsv`** table***1 Content Issues:*****1.1 Visual Assessment:**- We have few dog breeds that are represented in lowercase.**1.2 Programmatic Assessment:**- We have about 281 images on a whole, that are missing with respect to our `twitter-archive-enhanced.csv` file - **Data Quality Dimension - `Completeness`**.- We can see that, we have about `66` duplicates **OR** a pair of tweets are pointing to same *`jpg_url`* - **Data Quality Dimension - `Accuracy`**.***2 Structural Issues:*****2.1 Visual Assessment:**- None. **2.2 Programmatic Assessment:**- None. 
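The 66 "duplicates" noted above are pairs of tweets pointing at the same `jpg_url`; `duplicated(keep = False)` flags every member of such a pair rather than only the repeats, which is what makes the pairing visible. A toy sketch with hypothetical data:

```python
import pandas as pd

# Hypothetical predictions table: two tweets share the same image URL
img = pd.DataFrame({
    "tweet_id": [10, 11, 12],
    "jpg_url": ["a.jpg", "b.jpg", "a.jpg"],
})

# keep=False marks BOTH rows of a duplicated pair, not just the second occurrence,
# so sorting by jpg_url lines the paired tweets up next to each other
pairs = img[img.jpg_url.duplicated(keep=False)].sort_values("jpg_url")
print(pairs)
```

With the default `keep='first'`, only one row per pair would be flagged and the original tweet of each retweet/original pair would be hidden.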
###Code # assessing the tweet_master dataset tweet_master tweet_master.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2342 entries, 0 to 2341 Data columns (total 3 columns): favorites 2342 non-null int64 retweets 2342 non-null int64 tweet_id 2342 non-null int64 dtypes: int64(3) memory usage: 55.0 KB ###Markdown 3. **`tweet_master.txt`** table***1 Content Issues:*****1.1 Visual Assessment:**- None.**1.2 Programmatic Assessment:**- We have about 14 missing records - **Data Quality Dimension - `Completeness`**.***2 Structural Issues:*****2.1 Visual Assessment:**- None.**2.2 Programmatic Assessment:**- None. Summary - Assessing- Completed the second step.- The following are the insights: - from the `twitter-archive-enhanced.csv` dataset, the rating_numerator and denominator need to be fixed. - the dataset also represents row values as columns, which needs to be fixed. - the dataset also has structural issues such as a wrong datatype assigned to a column. - in the `image-predictions.tsv` dataset, there is a consistency issue with the naming of dog breeds. - the dataset isn't complete when compared to `twitter-archive-enhanced.csv`; we have about 281 missing tweets. - Also, we have `jpg_urls` that point to a pair of tweets. - the `tweet_master.txt` dataset has about 14 missing records. - the dataset alone holds the information about retweets and favourites - a bad form of schema normalization. Cleaning Define- Important! Before we get to cleaning, let's drop rows from image-predictions where p1_dog, p2_dog and p3_dog are all False, as they are not related to our dataset.
Code ###Code # only select rows where at least one prediction is a dog (i.e. not all three are False) img_pre = img_pre[~((img_pre.p1_dog == False) & (img_pre.p2_dog == False) & (img_pre.p3_dog == False))] ###Output _____no_output_____ ###Markdown Test ###Code # asserting the length to be 0 assert len(img_pre[(img_pre.p1_dog == False) & (img_pre.p2_dog == False) & (img_pre.p3_dog == False)]) == 0, "Check" # the master dataset master_set = archive.merge(img_pre, how = 'left', on = ['tweet_id']) master_set = master_set.merge(tweet_master, how = 'left', on = ['tweet_id']) files_list.append('dataset/master_set_raw.csv') master_set.info() # saving the file to local disk. master_set.to_csv(files_list[-1], index = False) # creating a copy of the master set master_set = pd.read_csv(files_list[-1], encoding = 'utf-8') master_copy = master_set.copy() ###Output _____no_output_____ ###Markdown Issues to Clean. 1. Basic cleaning. Define- Assign the proper class for the above 14 tweets before melting.- Delete *retweets* and *any duplicates*, and get rid of *tweets with **no** images*.- Once done, drop the following columns: 1. `retweeted_status_id` 2. `retweeted_status_user_id` 3. `retweeted_status_timestamp` 4. `in_reply_to_status_id` 5.
`in_reply_to_user_id` Code ###Code # setting column width to -1 pd.set_option('display.max_colwidth', -1) cond_1 = (master_copy['doggo'] == 'doggo') & (master_copy['floofer'] == 'floofer') cond_2 = (master_copy['doggo'] == 'doggo') & (master_copy['pupper'] == 'pupper') cond_3 = (master_copy['doggo'] == 'doggo') & (master_copy['puppo'] == 'puppo') print(master_copy[cond_1 | cond_2 | cond_3][['tweet_id', 'text']]) ###Output tweet_id \ 191 855851453814013952 200 854010172552949760 460 817777686764523521 531 808106460588765185 565 802265048156610565 575 801115127852503040 705 785639753186217984 733 781308096455073793 778 775898661951791106 822 770093767776997377 889 759793422261743616 956 751583847268179968 1063 741067306818797568 1113 733109485275860992 text 191 Here's a puppo participating in the #ScienceMarch. Cleverly disguising her own doggo agenda. 13/10 would keep the planet habitable for https://t.co/cMhq16isel 200 At first I thought this was a shy doggo, but it's actually a Rare Canadian Floofer Owl. Amateurs would confuse the two. 11/10 only send dogs https://t.co/TXdT3tmuYk 460 This is Dido. She's playing the lead role in "Pupper Stops to Catch Snow Before Resuming Shadow Box with Dried Apple." 13/10 (IG: didodoggo) https://t.co/m7isZrOBX7 531 Here we have Burke (pupper) and Dexter (doggo). Pupper wants to be exactly like doggo. Both 12/10 would pet at same time https://t.co/ANBpEYHaho 565 Like doggo, like pupper version 2. Both 11/10 https://t.co/9IxWAXFqze 575 This is Bones. He's being haunted by another doggo of roughly the same size. 12/10 deep breaths pupper everything's fine https://t.co/55Dqe0SJNj 705 This is Pinot. He's a sophisticated doggo. You can tell by the hat. Also pointier than your average pupper. Still 10/10 would pet cautiously https://t.co/f2wmLZTPHd 733 Pupper butt 1, Doggo 0. Both 12/10 https://t.co/WQvcPEpH2u 778 RT @dog_rates: Like father (doggo), like son (pupper). 
Both 12/10 https://t.co/pG2inLaOda 822 RT @dog_rates: This is just downright precious af. 12/10 for both pupper and doggo https://t.co/o5J479bZUC 889 Meet Maggie &amp; Lila. Maggie is the doggo, Lila is the pupper. They are sisters. Both 12/10 would pet at the same time https://t.co/MYwR4DQKll 956 Please stop sending it pictures that don't even have a doggo or pupper in them. Churlish af. 5/10 neat couch tho https://t.co/u2c9c7qSg8 1063 This is just downright precious af. 12/10 for both pupper and doggo https://t.co/o5J479bZUC 1113 Like father (doggo), like son (pupper). Both 12/10 https://t.co/pG2inLaOda ###Markdown ***Assign the following:***1. 855851453814013952: puppo2. 854010172552949760: floofer3. 817777686764523521: pupper4. 808106460588765185: pupper5. 802265048156610565: pupper6. 801115127852503040: pupper7. 785639753186217984: pupper8. 781308096455073793: pupper9. 775898661951791106: pupper10. 770093767776997377: pupper11. 759793422261743616: pupper12. 751583847268179968: doggo13. 741067306818797568: doggo14. 733109485275860992: doggo**I like puppies, so for most of the entries it is pupper!** ###Code # assigning values. 
master_copy.loc[master_copy['tweet_id'] == 855851453814013952, ['doggo', 'floofer', 'pupper']] = 'None' master_copy.loc[master_copy['tweet_id'] == 854010172552949760, ['doggo', 'pupper', 'puppo']] = 'None' master_copy.loc[master_copy['tweet_id'] == 817777686764523521, ['doggo', 'floofer', 'puppo']] = 'None' master_copy.loc[master_copy['tweet_id'] == 808106460588765185, ['doggo', 'floofer', 'puppo']] = 'None' master_copy.loc[master_copy['tweet_id'] == 802265048156610565, ['doggo', 'floofer', 'puppo']] = 'None' master_copy.loc[master_copy['tweet_id'] == 801115127852503040, ['doggo', 'floofer', 'puppo']] = 'None' master_copy.loc[master_copy['tweet_id'] == 785639753186217984, ['doggo', 'floofer', 'puppo']] = 'None' master_copy.loc[master_copy['tweet_id'] == 781308096455073793, ['doggo', 'floofer', 'puppo']] = 'None' master_copy.loc[master_copy['tweet_id'] == 775898661951791106, ['doggo', 'floofer', 'puppo']] = 'None' master_copy.loc[master_copy['tweet_id'] == 770093767776997377, ['doggo', 'floofer', 'puppo']] = 'None' master_copy.loc[master_copy['tweet_id'] == 759793422261743616, ['doggo', 'floofer', 'puppo']] = 'None' master_copy.loc[master_copy['tweet_id'] == 751583847268179968, ['pupper', 'floofer', 'puppo']] = 'None' master_copy.loc[master_copy['tweet_id'] == 741067306818797568, ['pupper', 'floofer', 'puppo']] = 'None' master_copy.loc[master_copy['tweet_id'] == 733109485275860992, ['pupper', 'floofer', 'puppo']] = 'None' ###Output _____no_output_____ ###Markdown Test - 1 ###Code # all values have been properly assigned pd.set_option('display.max_colwidth', 50) master_copy[cond_1 | cond_2 | cond_3][['tweet_id', 'doggo', 'floofer', 'pupper', 'puppo']] # keeping only original tweets (those with no retweeted_status_id) master_copy = master_copy[pd.isnull(master_copy['retweeted_status_id'])] # deleting duplicates if any master_copy = master_copy.drop_duplicates() # deleting those tweets with no images.
master_copy = master_copy.dropna(subset = ['jpg_url']) # resetting index master_copy.reset_index(drop=True, inplace=True) # dropping columns master_copy = master_copy.drop(labels = ['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'in_reply_to_status_id', 'in_reply_to_user_id'], axis = 1) ###Output _____no_output_____ ###Markdown Test - 2 ###Code # after dropping the columns, we should have about 25 dimensions/columns master_copy.shape ###Output _____no_output_____ ###Markdown 2. Condense wide-format to long-format Define- Condense `doggo`, `floofer`, `pupper`, `puppo` as `dog_class`. Code ###Code # recording the count of each class so we can verify the melt later doggo = master_copy.doggo.value_counts()['doggo'] floofer = master_copy.floofer.value_counts()['floofer'] pupper = master_copy.pupper.value_counts()['pupper'] puppo = master_copy.puppo.value_counts()['puppo'] # printing count of each class print("Count of Doggo: {}\nCount of Floofer: {}\nCount of Pupper: {}\nCount of Puppo: {}".format(doggo, floofer, pupper, puppo)) # selecting the columns that are to be melted columns_to_melt = ['doggo', 'floofer', 'pupper', 'puppo'] columns_to_stay = [x for x in master_copy.columns.tolist() if x not in columns_to_melt] # melting the columns into values master_copy = pd.melt(master_copy, id_vars = columns_to_stay, value_vars = columns_to_melt, var_name = 'stages', value_name = 'dog_class') # Delete column 'stages' master_copy = master_copy.drop('stages', 1) # dropping duplicates master_copy = master_copy.sort_values('dog_class').drop_duplicates('tweet_id', keep = 'last') master_copy.reset_index(drop=True, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code # let's assert assert doggo == master_copy.dog_class.value_counts()['doggo'], "Some entries are missing" assert floofer == master_copy.dog_class.value_counts()['floofer'], "Some entries are missing" assert pupper == master_copy.dog_class.value_counts()['pupper'], "Some entries are missing" assert puppo ==
master_copy.dog_class.value_counts()['puppo'], "Some entries are missing" ###Output _____no_output_____ ###Markdown 3. Fix all inaccurate data. Define- fix names of dogs.- fix ratings.- check source column. Code ###Code # Checking source column master_copy.source.nunique() ###Output _____no_output_____ ###Markdown **Okay! Only three values, a categorical variable.** ###Code import re # assigning readable values to source master_copy['source'] = master_copy['source'].apply(lambda x: re.findall(r'>(.*)<', x)[0]) ###Output _____no_output_____ ###Markdown Test - 1 ###Code # taking a look at a sample of 5 rows master_copy.sample(5)[['tweet_id', 'source', 'text']] # fixing names non_names = master_copy.name.str.islower() non_names = list(set(master_copy[non_names]['name'].tolist())) # flagging single-character uppercase names (parentheses needed: & binds tighter than ==) flag = (master_copy.name.str.len() == 1) & (master_copy.name.str.isupper()) non_names.append(master_copy[flag][['tweet_id', 'name']]['name'].tolist()[0]) # replacing all garbage names with None; once done, we'll use the text field to extract names for name in master_copy.name: if name in non_names: master_copy.loc[master_copy['name'] == name, ['name']] = 'None' # checking if there are any non_names after the operation assert len(master_copy[(master_copy.name.str.islower()) & (flag)]) == 0, "Check code" ###Output _____no_output_____ ###Markdown ***The following are patterns observed in the `text` field; we shall use these:***- This is [name] ..- Meet [name] ..- Say hello to [name] ..- .. named [name] ..- .. name is [name] ..We will handle these cases to get the names from the text of the tweet ###Code # extracting names using regular expression.
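```python
# Aside: a minimal, self-contained illustration of the first pattern below on an
# invented sample tweet (the text and the name 'Zeus' are hypothetical, not data
# from the archive):
import re

pattern_demo = r'(T|t)his\sis\s([^.|,]*)'  # the "This is [name]" pattern
sample = "This is Zeus. He is a very good boy. 13/10"
match = re.search(pattern_demo, sample)
# group(2) captures everything after "This is " up to the first '.', '|' or ','
print(match.group(2))
# The character class [^.|,]* is what stops the capture at the end of the first
# sentence; the remaining patterns below work the same way with other phrases.
```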
dog_names = [] # assigning patterns pattern_1 = r'(T|t)his\sis\s([^.|,]*)' pattern_2 = r'Meet\s([^.|,]*)' pattern_3 = r'Say\shello\sto\s([^.|,]*)' pattern_4 = r'name\sis\s([^.|,]*)' # looping through text and extracting names for text in master_copy['text']: # Start with 'This is ' if re.search(pattern_1, text): # if our match has alternate name if "(" in re.search(pattern_1, text).group(2): dog_names.append(re.search(pattern_1, text).group(2).split()[0]) # if our match has AKA in it elif "AKA" in re.search(pattern_1, text).group(2): dog_names.append(re.search(pattern_1, text).group(2).split()[0]) # if our name has two dogs elif '&amp;' in re.search(pattern_1, text).group(2): temp = re.search(pattern_1, text).group(2).split() if len(temp) == 1: dog_names.append(temp[0]) elif len(temp) == 3: dog_names.append(temp[0]+"|"+temp[-1]) else: dog_names.append(temp[0]+"|"+temp[-2]) elif 'named' in re.search(pattern_1, text).group(2): temp = re.search(pattern_1, text).group(2).split() dog_names.append(temp[-1]) # just appending the name else: dog_names.append(re.search(pattern_1, text).group(2)) # Start with 'Meet ' elif re.search(pattern_2, text): # if our name has two dogs if '&amp;' in re.search(pattern_2, text).group(1): temp = re.search(pattern_2, text).group(1).split() if len(temp) == 1: dog_names.append(temp[0]) elif len(temp) == 3: dog_names.append(temp[0]+"|"+temp[-1]) else: dog_names.append(temp[0]+"|"+temp[-2]) # if our name has alternatives elif '(' in re.search(pattern_2, text).group(1): dog_names.append(re.search(pattern_2, text).group(1).split()[0]) # just appending the name else: dog_names.append(re.search(pattern_2, text).group(1)) # Start with 'Say hello to ' elif re.search(pattern_3, text): # if our match has alternate name if '(' in re.search(pattern_3, text).group(1): dog_names.append(re.search(pattern_3, text).group(1).split()[0]) # if our name has two dogs elif '&amp;' in re.search(pattern_3, text).group(1): temp = re.search(pattern_3, 
text).group(1).split() if len(temp) == 1: dog_names.append(temp[0]) elif len(temp) == 3: dog_names.append(temp[0]+"|"+temp[-1]) else: dog_names.append(temp[0]+"|"+temp[-2]) else: dog_names.append(re.search(pattern_3, text).group(1)) # contains 'name is' elif re.search(pattern_4, text): if len(re.search(pattern_4, text).group(1).split()) == 1: dog_names.append(re.search(pattern_4, text).group(1)) else: temp = re.search(pattern_4, text).group(1).split() dog_names.append(temp[0]) # No name specified or other style else: dog_names.append('None') # adding this new set of names to our master_copy master_copy['dog_names'] = dog_names # new non-names non_names = [] pattern_4 = r'^[a-z].*' for name in master_copy['dog_names']: if re.search(pattern_4, name): master_copy.loc[master_copy['dog_names'] == name, ['dog_names']] = 'None' non_names.append(name) # replacing ' and ' in dog_names with '|' (vectorised, no loop needed) master_copy['dog_names'] = master_copy['dog_names'].str.replace(pat = r'\sand\s', repl = "|", regex = True) # we need to replace two cells, with names 'Sadie|&amp;', 'Phillippe ...', pd.set_option('display.max_colwidth', None) master_copy[(master_copy['dog_names'] == "Philippe from Soviet Russia") | (master_copy['dog_names'] == "Sadie|&amp;")][['tweet_id', 'text', 'dog_names']] # setting them to correct ones # the following tweet_ids were missed or mishandled by our regexes master_copy.loc[master_copy['dog_names'] == "Philippe from Soviet Russia", ['dog_names']] = 'Phillippe' master_copy.loc[master_copy['dog_names'] == "Sadie|&amp;", ['dog_names']] = 'Sadie|Shebang|Ruffalo' master_copy.loc[master_copy['tweet_id'] == 667509364010450944, ['dog_names']] = 'Tickles' master_copy.loc[master_copy['tweet_id'] == 667546741521195010, ['dog_names']] = 'George' master_copy.loc[master_copy['tweet_id'] == 667073648344346624, ['dog_names']] = 'Dave' master_copy.loc[master_copy['tweet_id'] == 667177989038297088, ['dog_names']] = 'Daryl' master_copy.loc[master_copy['tweet_id']
== 666835007768551424, ['dog_names']] = 'Cupit|Prencer' master_copy.loc[master_copy['tweet_id'] == 668221241640230912, ['dog_names']] = 'Bo|Smittens' master_copy.loc[master_copy['tweet_id'] == 668268907921326080, ['dog_names']] = 'Guss' master_copy.loc[master_copy['tweet_id'] == 666058600524156928, ['dog_names']] = 'Paul Rand' master_copy.loc[master_copy['tweet_id'] == 692142790915014657, ['dog_names']] = 'Teddy' master_copy.loc[master_copy['tweet_id'] == 684097758874210310, ['dog_names']] = 'Lupe' master_copy.loc[master_copy['tweet_id'] == 709198395643068416, ['dog_names']] = 'Cletus|Jerome|Alejandro|Burp|Titson' master_copy.loc[master_copy['tweet_id'] == 671743150407421952, ['dog_names']] = 'Jacob' master_copy.loc[master_copy['tweet_id'] == 669037058363662336, ['dog_names']] = 'Pancho|Peaches' master_copy.loc[master_copy['tweet_id'] == 669363888236994561, ['dog_names']] = 'Zeus' master_copy.loc[master_copy['tweet_id'] == 813217897535406080, ['dog_names']] = 'Atlas' master_copy.loc[master_copy['tweet_id'] == 856526610513747968, ['dog_names']] = 'Charlie' master_copy.loc[master_copy['tweet_id'] == 861288531465048066, ['dog_names']] = 'Boomer' master_copy.loc[master_copy['tweet_id'] == 863079547188785154, ['dog_names']] = 'Pipsy' master_copy.loc[master_copy['tweet_id'] == 844979544864018432, ['dog_names']] = 'Toby' master_copy.loc[master_copy['tweet_id'] == 836001077879255040, ['dog_names']] = 'Atlas' master_copy.loc[master_copy['tweet_id'] == 758041019896193024, ['dog_names']] = 'Teagan' master_copy.loc[master_copy['tweet_id'] == 765395769549590528, ['dog_names']] = 'Zoey' master_copy.loc[master_copy['tweet_id'] == 778408200802557953, ['dog_names']] = 'Loki' master_copy.loc[master_copy['tweet_id'] == 770069151037685760, ['dog_names']] = 'Carbon' # dropping column name master_copy = master_copy.drop(['name'], axis = 1) # printing columns in master_copy master_copy.columns.tolist() ###Output _____no_output_____ ###Markdown Test - 2 ###Code # selecting a dog_names 
that are 'None'. master_copy[master_copy['dog_names'] == 'None'].sample(5)[['tweet_id', 'text', 'dog_names']] ###Output _____no_output_____ ###Markdown Define- Let's clean the ratings. Code ###Code # as we are aware that some texts contain two ratings, let's use a # regex to extract them and replace the wrong ones. ratings = master_copy['text'].apply(lambda x: re.findall(r'(\d+(\.\d+)|(\d+))\/(\d+0)', x)) # let's scale numbers from 0 to 15 def scale_rate(number, mini = 0, maxi = 15): return (number - mini)/(maxi - mini) import math # our temp variables rate_num = [] rate_denom = [] # let's loop over and assign values properly for rate in ratings: # if our regex didn't return any value. if len(rate) == 0: rate_num.append(0) rate_denom.append(0) # if the regex match's length equals one elif len(rate) == 1: temp = float(rate[0][0]) # if the value falls in between [30,100] if 30 < temp < 100: temp_2 = int(rate[0][0]) / int(rate[0][-1][0]) rate_num.append(math.ceil(temp_2)) rate_denom.append(10) # else if our number falls between [100, 200] elif 100 < temp < 200: temp_2 = int(rate[0][0]) / int(rate[0][-1][:2]) rate_num.append(math.ceil(temp_2)) rate_denom.append(10) # else just ceiling the number else: rate_num.append(math.ceil(temp)) rate_denom.append(10) # if our regex returned two ratings elif len(rate) == 2: # restricting our ratings to a max of 15 if int(rate[0][0]) + int(rate[1][0]) > 15: temp = (int(rate[0][0]) + int(rate[1][0]))/2 rate_num.append(math.ceil(temp)) rate_denom.append(10) # if it is < 15 else: rate_num.append(int(rate[0][0]) + int(rate[1][0])) rate_denom.append(10) # all other lengths else: temp_sum = 0 for i in range(len(rate)): temp_sum += int(rate[i][0]) scaled = scale_rate(temp_sum) rate_num.append(math.ceil(scaled)) rate_denom.append(10) # assigning the values to rating_numerator and denominator master_copy['rating_numerator'] = rate_num master_copy['rating_denominator'] = rate_denom ###Output _____no_output_____ ###Markdown Test ###Code
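```python
# Aside: scale_rate above is plain min-max scaling, x' = (x - min) / (max - min).
# A quick self-contained sanity check of that formula (the helper is re-defined
# here only so this snippet runs on its own; it mirrors the definition above):
def scale_rate(number, mini=0, maxi=15):
    return (number - mini) / (maxi - mini)

# endpoints map to 0 and 1, the midpoint to 0.5
print(scale_rate(0), scale_rate(15), scale_rate(7.5))
```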
# length of the newly obtained values equals the number of datapoints in our dataset # i.e. 1685 print(len(master_copy['rating_numerator']), len(master_copy['rating_denominator'])) ###Output 1685 1685 ###Markdown 4. Fix the structure of the table Define- Assign each column an appropriate type Code ###Code # info about master_copy master_copy.info() # importing numpy import numpy as np # assign each column an appropriate type master_copy['tweet_id'] = master_copy['tweet_id'].astype(object) master_copy['timestamp'] = pd.to_datetime(master_copy.timestamp) master_copy['source'] = master_copy['source'].astype('category') master_copy['favorites'] = master_copy['favorites'].astype(np.int64) master_copy['retweets'] = master_copy['retweets'].astype(np.int64) master_copy['dog_class'] = master_copy['dog_class'].astype('category') master_copy['rating_numerator'] = master_copy['rating_numerator'].astype(np.int64) master_copy['rating_denominator'] = master_copy['rating_denominator'].astype(np.int64) ###Output _____no_output_____ ###Markdown Test ###Code # printing information master_copy.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1685 entries, 0 to 1684 Data columns (total 22 columns): tweet_id 1685 non-null object timestamp 1685 non-null datetime64[ns] source 1685 non-null category text 1685 non-null object expanded_urls 1685 non-null object rating_numerator 1685 non-null int64 rating_denominator 1685 non-null int64 jpg_url 1685 non-null object img_num 1685 non-null float64 p1 1685 non-null object p1_conf 1685 non-null float64 p1_dog 1685 non-null object p2 1685 non-null object p2_conf 1685 non-null float64 p2_dog 1685 non-null object p3 1685 non-null object p3_conf 1685 non-null float64 p3_dog 1685 non-null object favorites 1685 non-null int64 retweets 1685 non-null int64 dog_class 1685 non-null category dog_names 1685 non-null object dtypes: category(2), datetime64[ns](1), float64(4), int64(4), object(11) memory usage: 266.9+ KB ###Markdown 5.
Getting rid of predictions and adding final touches Define- Condense the `p_[1|2|3]` to `predicted_dog` and `conf_[1|2|3]` to `accuracy`.- drop columns `img_num`, `p1`, `p1_conf`, `p1_dog`, `p2`, `p2_conf`, `p2_dog`, `p3`, `p3_conf`, `p3_dog`.- rename columns so they are apt for this dataset. Code ###Code # We will store the first true prediction # with its level of confidence predicted_dog_breed = [] accuracy = [] # function for getting the levels def condense_predictions(dataframe): ''' takes in the dataframe and extracts information for the predicted dog breed. dataframe: input to the function ''' if dataframe['p1_dog'] == True: predicted_dog_breed.append(dataframe['p1']) accuracy.append(dataframe['p1_conf']) elif dataframe['p2_dog'] == True: predicted_dog_breed.append(dataframe['p2']) accuracy.append(dataframe['p2_conf']) elif dataframe['p3_dog'] == True: predicted_dog_breed.append(dataframe['p3']) accuracy.append(dataframe['p3_conf']) else: predicted_dog_breed.append('NaN') accuracy.append(0) master_copy.apply(condense_predictions, axis=1) master_copy['dog_breeds'] = predicted_dog_breed master_copy['accuracy'] = accuracy # dropping columns master_copy.drop(['img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], axis = 1, inplace = True) # final touch - renaming master_copy.rename(columns={'source': 'tweet_source', 'text': 'tweet', 'timestamp': 'tweet_date', 'expanded_urls' : 'tweet_urls', 'jpg_url': 'image_url', 'favorites': 'tweet_favorites', 'retweets': 'tweet_retweets'}, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code # printing information master_copy.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1685 entries, 0 to 1684 Data columns (total 14 columns): tweet_id 1685 non-null object tweet_date 1685 non-null datetime64[ns] tweet_source 1685 non-null category tweet 1685 non-null object tweet_urls 1685 non-null object rating_numerator 1685 non-null int64 rating_denominator 1685 non-null
int64 image_url 1685 non-null object tweet_favorites 1685 non-null int64 tweet_retweets 1685 non-null int64 dog_class 1685 non-null category dog_names 1685 non-null object dog_breeds 1685 non-null object accuracy 1685 non-null float64 dtypes: category(2), datetime64[ns](1), float64(1), int64(4), object(6) memory usage: 161.6+ KB ###Markdown Storing **Done with the process of cleaning, let's store and analyse this dataset.** ###Code # saving the file to local disk master_copy.to_csv(folder+'/'+'twitter_archive_master.csv', index=False, encoding = 'utf-8') # listing the dataset folder os.listdir(folder) ###Output _____no_output_____ ###Markdown Predicted animal types will be kept as they are, without any change. Solving Some Tidiness Issues: merging all DataFrames into one DataFrame ###Code df_semi = pd.merge(df_1, df_img, on='tweet_id', how='outer') df_final = pd.merge(df_semi, tweet_df, on="tweet_id", how="outer") #Test: pd.set_option('display.max_columns', None) df_final.head() ### Saving new dataset into new csv file: df_final.to_csv("twitter_archive_master.csv", index = False) ###Output _____no_output_____ ###Markdown Visualisation and some insights: 1- average rating of different stages: ###Code df_new = pd.read_csv("twitter_archive_master.csv") ## Making a new column of rating df_new["rating"] = df_new["rating_numerator"] / df_new["rating_denominator"] ## To exclude None values x1 = ["doggo", "floofer", "pupper", "puppo"] values = [df_new.groupby("stage")["rating"].mean()["doggo"], df_new.groupby("stage")["rating"].mean()["floofer"], df_new.groupby("stage")["rating"].mean()["pupper"], df_new.groupby("stage")["rating"].mean()["puppo"] ] plt.bar(x1, values); ###Output _____no_output_____ ###Markdown Insight 1: as we can see, the average ratings for all of the stages are very close to each other, so the stage has no large effect on the dog ratings.
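As a side note on the analysis code above: the four separate `groupby(...).mean()[...]` lookups recompute the same aggregation each time, while a single `groupby` chain yields all stage means in one pass. A small self-contained sketch with a toy frame (the `stage`/`rating` column names mirror `df_new`, which is assumed to be loaded as above; the toy values are invented):

```python
import pandas as pd

# toy stand-in for df_new with the assumed 'stage' and 'rating' columns
toy = pd.DataFrame({
    "stage": ["doggo", "doggo", "pupper", "puppo"],
    "rating": [1.2, 1.0, 1.1, 1.3],
})

# one aggregation pass instead of four separate lookups
mean_by_stage = toy.groupby("stage")["rating"].mean()
print(mean_by_stage)
```

With this, `mean_by_stage.plot(kind="bar")` would replace the manual list-building and `plt.bar` call.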
2- Percentages of rated dogs for every stage: ###Code x2 = ["doggo", "floofer", "pupper", "puppo"] values_2 = [df_new.groupby("stage")["rating"].count()["doggo"], df_new.groupby("stage")["rating"].count()["floofer"], df_new.groupby("stage")["rating"].count()["pupper"], df_new.groupby("stage")["rating"].count()["puppo"] ] plt.pie(values_2, labels = x2); ###Output _____no_output_____ ###Markdown Insight 2: - Most of the rated dogs are from the pupper stage, ignoring the None-stage tweets. 3 - Does the stage of the dog affect the engagement? ###Code x3 = ["doggo", "pupper", "floofer", "puppo"] favorites = [df_new.groupby("stage")["favorite_count"].mean()["doggo"], df_new.groupby("stage")["favorite_count"].mean()["pupper"], df_new.groupby("stage")["favorite_count"].mean()["floofer"], df_new.groupby("stage")["favorite_count"].mean()["puppo"]] retweets = [df_new.groupby("stage")["retweet_count"].mean()["doggo"], df_new.groupby("stage")["retweet_count"].mean()["pupper"], df_new.groupby("stage")["retweet_count"].mean()["floofer"], df_new.groupby("stage")["retweet_count"].mean()["puppo"]] plt.figure(1) plt.xlabel("Dog Stages") plt.ylabel("Mean Favorites") plt.bar(x3, favorites) plt.figure(2) plt.xlabel("Dog Stages") plt.ylabel("Mean retweets") plt.bar(x3, retweets) ###Output _____no_output_____ ###Markdown Project: Wrangle and Analyze Data Table of ContentsIntroductionData WranglingData AnalysisInsights Introduction Within this project, data from the WeRateDogs Twitter account will be cleaned, analyzed and interesting aspects will be visualized. ###Code # Importing libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import requests import os import json import tweepy ###Output _____no_output_____ ###Markdown Data Wrangling Subsequently, Data Wrangling is performed for the project data. The goal is to generate a basic understanding of the data and to bring it into a state from which further analysis can be conducted. a.
Gather The first step is to access the project data, so the dataset is generated from the provided csv-file. I. Downloaded csv-file ###Code # Create dataframe df1 from downloaded csv-file df1 = pd.read_csv('twitter-archive-enhanced.csv') ###Output _____no_output_____ ###Markdown II. Programmatic download from provided URL ###Code # Create new folder named 'twitter' if it doesn't exist yet folder_name = 'twitter' if not os.path.exists(folder_name): os.makedirs(folder_name) # Request file via specified URL url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) # Save file in created folder with open(os.path.join(folder_name, url.split('/')[7]), mode='wb') as file: file.write(response.content) # Create dataframe df2 from downloaded csv-file df2 = pd.read_csv('twitter/image-predictions.tsv', sep = '\t') ###Output _____no_output_____ ###Markdown III. Scraping additional data from twitter API ###Code # Authentication process consumer_key = '' consumer_secret = '' access_token = '' access_secret = '' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth) # Make sure the rate limit isn't exceeded when requesting data via the tweepy API api = tweepy.API(auth, wait_on_rate_limit=True) # Create list with 'tweet_IDs' from df1 in order to search for via tweepy API id_list = df1['tweet_id'].copy().tolist() # Request additional data 'tweet_text', 'favourite_count' and 'retweet_count' via API df_list = [] exceptions_list = [] for tweet_id in id_list: try: tweet = api.get_status(tweet_id, tweet_mode='extended') # Create json str from tweepy.models.Status json_str = json.dumps(tweet._json) # Create json from json_str json_file = json.loads(json_str) # Extract data from json tweet_favc = json_file['favorite_count'] retw_count = json_file['retweet_count'] # Append data to df_list df_list.append({'tweet_id':tweet_id,
'favourite_count':tweet_favc, 'retweet_count':retw_count}) # If exception write 'tweet_id' into exceptions_list except Exception as e: exceptions_list.append(tweet_id) # Create df3 from df_list df3 = pd.DataFrame(df_list, columns = ['tweet_id', 'favourite_count', 'retweet_count']) ###Output _____no_output_____ ###Markdown IV. Joining df1 and df3 to one single dataframe df1 ###Code df1 = df1.join(df3.set_index('tweet_id'), on = 'tweet_id') ###Output _____no_output_____ ###Markdown b. Assess In order to sufficiently understand the project data, the shape, rows and columns as well as the datatypes and mean values are examined. I. Visual assessment of df1 and df2 ###Code df1.head() df2.head() ###Output _____no_output_____ ###Markdown II. Programmatic assessment of df1 ###Code # General information about columns, rows, datatypes and null-values df1.info() # Checking datatypes of object columns labels_obj = ['timestamp', 'source', 'text', 'retweeted_status_timestamp', 'expanded_urls', 'name', 'doggo','floofer','pupper', 'puppo'] for c in labels_obj: print(c, ':', type(df1[c][0])) # Examining a random sample from df1 df1.sample() # Examining column 'rating_numerator' with quantitative data df1.rating_numerator.describe() # Examining column 'rating_denominator' with quantitative data df1.rating_denominator.describe() # Examining column 'favourite_count' with quantitative data df1.favourite_count.describe() # Examining column 'retweet_count' with quantitative data df1.retweet_count.describe() # Checking for duplicated tweet_ids in df1 df1.tweet_id.duplicated().sum() ###Output _____no_output_____ ###Markdown III.
Programmatic assessment of df2 ###Code # General information about columns, rows, datatypes and null-values df2.info() # Checking datatypes of object columns labels_obj = ['jpg_url', 'p1', 'p2', 'p3'] for c in labels_obj: print(c, ':', type(df2[c][0])) # Examining a random sample from df2 df2.sample() # Checking value counts for column 'img_num' df2.img_num.value_counts() # Checking for duplicated tweet_ids in df2 df2.tweet_id.duplicated().sum() ###Output _____no_output_____ ###Markdown Differentiation of dataframes- **df1** - Basic information about the tweets- **df2** - Information regarding the prediction of dog breeds- **df3** - Information regarding dog name, type and rating Tidiness of data `df1` - There are retweets in df1 while they shouldn't be occurring in df1 -> **Remove retweet rows**- Columns 'retweeted_status_user_id' and 'retweeted_status_timestamp' don't fit the type of observational unit in df1 -> **Remove retweet columns**- Columns 'in_reply_to_status_id' and 'in_reply_to_user_id' don't fit the type of observational unit -> **Remove columns**- Columns 'doggo', 'floofer', 'pupper', 'puppo' are not understandable for the reader -> **Join columns in one single column 'type'**- Columns 'name', 'type', 'rating_numerator' and 'rating_denominator' don't fit the type of observational unit -> **Move columns to df3** `df2`:- Column label names 'img_num' and 'jpg_url' are hard to understand for the reader -> **Change label names**- Columns 'jpg_url' and 'img_num' in df2 hold information regarding the type of observational unit from df1 -> **Move columns to df1** Quality `df1`, `df2`, `df3`- 1, Remove rows with missing values from the dataframes `df1`:- 2, Change 'favourite_count' from float to int- 3, Change 'retweet_count' from float to int- 4, Change 'timestamp' to datatype Datetime- 5, Change 'img_num' to int- 6, Correct entries in 'source' column to 'http://twitter.com' `df3`:- 7, Remove row with unrealistic max values in column 'rating_numerator'- 8, Change
unrealistic max value in column 'rating_denominator' to 10 c. Clean In the Cleaning phase potential problems in the provided dataset are identified and subsequently solved. The data is brought into a state from which further analysis can be conducted. ###Code df1_clean = df1.copy() df2_clean = df2.copy() ###Output _____no_output_____ ###Markdown I. Tidiness of data Define`df1`- 1, Remove retweet rows and subsequently delete columns regarding retweets from df1_clean- 2, Remove 'in_reply_to_status_id' and 'in_reply_to_user_id' from df1_clean- 3, Reorganize columns 'doggo', 'floofer', 'pupper' and 'puppo' in single column 'type'- 4, Create new dataframe df3 with columns 'name', 'type', 'rating_numerator' and 'rating_denominator' and remove columns from df1_clean`df2`- 5, Place columns 'jpg_url' and 'img_num' in df1_clean, eliminate them from df2_clean and change label names Code 1, Remove retweet rows and subsequently delete columns regarding retweets from df1_clean ###Code # Remove the retweet rows from df1_clean df1_clean = df1_clean.query('retweeted_status_id.isnull() == True') # Delete the columns regarding retweets in df1_clean df1_clean.drop(['retweeted_status_id','retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown 2. Remove 'in_reply_to_status_id' and 'in_reply_to_user_id' from df1_clean ###Code # 2, Remove columns 'in_reply_to_status_id' and 'in_reply_to_user_id' from df1_clean df1_clean.drop(['in_reply_to_status_id','in_reply_to_user_id'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown 3.
Reorganize columns 'doggo', 'floofer', 'pupper' and 'puppo' in single column 'type' ###Code # Make sure there are no overlapping entries in columns doggo, floofer, pupper, puppo labels_list = ['doggo', 'floofer', 'pupper', 'puppo'] check = [] for c in labels_list: df_check = df1_clean.query('{} == "{}"'.format(c,c)) list_check = labels_list.copy() list_check.remove(c) for d in list_check: check.append(df_check[d].isnull().count() == df_check[c].count()) # No overlapping entries in any of the columns? check # Replace None with NaNs in df1_clean labels = ['doggo', 'floofer', 'pupper','puppo'] for c in labels: df1_clean[c] = df1_clean[c].replace('None', np.nan) # Combine columns 'doggo', 'floofer', 'pupper' and 'puppo' in column 'doggo' df1_clean['doggo'] = df1_clean.loc[:,'doggo':'puppo'].fillna(method='ffill',axis=1)['puppo'] # Remove redundant columns 'floofer', 'pupper' and 'puppo' df1_clean.drop(['floofer', 'pupper', 'puppo'], axis=1, inplace=True) # Rename column 'doggo' to 'type' df1_clean = df1_clean.rename(columns={'doggo':'type'}) # Replace NaNs with 'None' in column 'type' df1_clean['type'] = df1_clean['type'].replace(np.nan, 'None') ###Output _____no_output_____ ###Markdown 4. Create new dataframe df3 with columns 'name', 'type', 'rating_numerator' and 'rating_denominator' and remove columns from df1_clean ###Code # Create new dataframe df3 df3_clean = df1_clean.copy() # Remove all columns from df3 except for 'tweet_id', 'rating_numerator', 'rating_denominator', 'name', 'type' df3_clean.drop(['timestamp','source', 'text', 'expanded_urls', 'favourite_count', 'retweet_count', 'image_url', 'number_of_images'], axis=1, inplace=True) # Remove columns 'rating_numerator', 'rating_denominator', 'name', 'type' from df1 df1_clean.drop(['rating_numerator', 'rating_denominator', 'name', 'type'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown 5.
Place columns 'jpg_url' and 'img_num' in df1_clean, eliminate them from df2_clean and change label names ###Code # Join columns 'jpg_url' and 'img_num' from df2_clean with df1_clean df1_clean = df1_clean.join(df2_clean.set_index('tweet_id'), on='tweet_id') # Remove columns joined from df2_clean except for 'jpg_url' and 'img_num' df1_clean.drop(['p1','p1_conf','p1_dog','p2','p2_conf','p2_dog','p3','p3_conf','p3_dog'], axis=1, inplace=True) # Remove columns 'img_num' and 'jpg_url' from df2_clean df2_clean.drop(['img_num','jpg_url'], axis=1, inplace=True) # Rename columns 'img_num' and 'jpg_url' df1_clean = df1_clean.rename(columns={'img_num':'number_of_images', 'jpg_url':'image_url'}) ###Output _____no_output_____ ###Markdown Check ###Code # Visual check of changes in df1_clean df1_clean.head(1) # Visual check of changes in df2_clean df2_clean.head(1) # Visual check of changes in df3_clean df3_clean.head(1) ###Output _____no_output_____ ###Markdown II. Quality of data Define- 1, Remove rows with missing values from the dataframes- 2, Change 'favourite_count' from float to int- 3, Change 'retweet_count' from float to int- 4, Change 'timestamp' to datatype Datetime- 5, Change 'img_num' to int- 6, Correct entries in 'source' column to 'http://twitter.com'- 7, Delete row with unrealistic max values in column 'rating_numerator'- 8, Change unrealistic max value (170) in column 'rating_denominator' to 10 Code1. Remove rows with missing values from the dataframes ###Code # Drop all rows with null values df1_clean.dropna(inplace = True) df2_clean.dropna(inplace = True) df3_clean.dropna(inplace = True) ###Output _____no_output_____ ###Markdown 2. Change 'favourite_count' from float to int 3. Change 'retweet_count' from float to int 4. Change 'timestamp' to datatype Datetime 5. 
Change 'img_num' to int ###Code df1_clean['favourite_count'] = df1_clean.favourite_count.astype(int) df1_clean['retweet_count'] = df1_clean.retweet_count.astype(int) df1_clean['timestamp'] = pd.to_datetime(df1_clean['timestamp']) df1_clean['number_of_images'] = df1_clean.number_of_images.astype(int) ###Output _____no_output_____ ###Markdown 6. Correct entries in 'source' column to 'http://twitter.com' ###Code # Change entries in column 'source' df1_clean['source'] = 'http://twitter.com' ###Output _____no_output_____ ###Markdown 7. Delete row with unrealistic max values in column 'rating_numerator' ###Code # Detect and remove rows with unrealistic numerator (>20) while df3_clean.rating_numerator.max() > 20: max_value_index = df3_clean.loc[df3_clean['rating_numerator'] == df3_clean.rating_numerator.max()].index[0] df3_clean.drop(max_value_index, inplace = True) ###Output _____no_output_____ ###Markdown 8. Change unrealistic max values in column 'rating_denominator' to 10 ###Code # Check for how many rows denominator isn't 10 df3_clean.query('rating_denominator != 10').shape # Change denominator of all rows to 10 df3_clean['rating_denominator'] = 10 ###Output _____no_output_____ ###Markdown Check ###Code # Check datatypes of columns in df1_clean visually df1_clean.info() # Check datatypes of columns in df2_clean visually df2_clean.info() # Check datatypes of columns in df3_clean visually df3_clean.info() # No more unrealistic values in df3_clean columns 'rating_numerator' and 'rating_denominator'? if df3_clean.rating_numerator.max() <= 20 and df3_clean.rating_denominator.max() == 10: print(True) else: print(False) ###Output True ###Markdown III.
Additional changes- Restructure columns in df1_clean and df3_clean to improve readability- Reset indices in df1_clean, df2_clean and df3_clean ###Code # Restructure columns in df1 df1_clean = df1_clean[['tweet_id', 'timestamp', 'text', 'favourite_count', 'retweet_count', 'number_of_images', 'source', 'expanded_urls', 'image_url']] # Restructure columns in df3 df3_clean = df3_clean[['tweet_id', 'name', 'type', 'rating_numerator', 'rating_denominator']] # Reset indices in dataframes (reassigning, since reset_index returns a new frame) df1_clean = df1_clean.reset_index(drop=True) df2_clean = df2_clean.reset_index(drop=True) df3_clean = df3_clean.reset_index(drop=True) ###Output _____no_output_____ ###Markdown IV. Saving as csv file- Dataframe df1_clean gets stored as csv named 'twitter_archive_master.csv'- Dataframes df2_clean and df3_clean are stored as csv files named 'twitter_archive_predictions' and 'twitter_archive_dog_ratings' ###Code # Save dataframes as csv df1_clean.to_csv('twitter_archive_master.csv') df2_clean.to_csv('twitter_archive_predictions.csv') df3_clean.to_csv('twitter_archive_dog_ratings.csv') ###Output _____no_output_____ ###Markdown Data Analysis Subsequently, general patterns in the data are detected and relationships in the data are visualized. I. Dataframe `df1` ###Code # Checking values in column 'retweet_count' df1_clean.retweet_count.describe() # Checking values in column 'favourite_count' df1_clean.favourite_count.describe() # Creating scatterplot from 'favourite_count' and 'retweet_count' plt.scatter(df1_clean['favourite_count'], df1_clean['retweet_count']) plt.title('Scatterplot favourite_count vs. retweet_count') plt.xlabel('favourite_count') plt.ylabel('retweet_count'); ###Output _____no_output_____ ###Markdown II.
Dataframe `df3` ###Code # Dog names with frequency >10 in the dataset n = 10 names_list = [] names_list = df3_clean['name'].value_counts()[:n].index.tolist() print(names_list) # Removal of Noname and names occurring due to typing errors or similar typing_errors = ['None', 'a', 'the'] for c in typing_errors: names_list.remove(c) # Frequency of dog names in df3_clean names_count = [] for c in names_list: names_count.append(df3_clean.query('name == "{}"'.format(c)).tweet_id.count()) print(names_count) plt.bar(names_list, names_count) plt.title('Bar chart: Most frequent dog names') plt.xlabel('dog names') plt.ylabel('frequency'); # Creating new column 'rating' in df3_clean df3_clean['rating'] = df3_clean['rating_numerator']/df3_clean['rating_denominator'] # Calculating mean ratings for dog types from dataset dog_types = ['doggo', 'pupper', 'puppo', 'floofer', 'None'] dog_mean_ratings = [] for c in dog_types: dog_mean_ratings.append(df3_clean.query('type == "{}"'.format(c)).rating.mean()) print(dog_mean_ratings) # Creating bar chart with dog types and mean ratings plt.bar(dog_types, dog_mean_ratings) plt.title('Bar chart: Rating according to dog type') plt.xlabel('dog type') plt.ylabel('mean rating'); ###Output _____no_output_____ ###Markdown Wrangle and Analyze Data Project Importing Data ###Code import pandas as pd import requests import numpy as np import json import tweepy import io import time import matplotlib.pyplot as plt df_arch = pd.read_csv('twitter-archive-enhanced.csv') url = requests.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv').content df_image = pd.read_csv(io.StringIO(url.decode('utf-8')),sep='\t') df_image.to_csv('image_predictions.csv') # Twitter API credentials (fill in your own keys) consumer_key = '' consumer_secret = '' access_token = '' access_secret = '' auth =
tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth,wait_on_rate_limit=True,wait_on_rate_limit_notify=True) tweets=[] for i in df_image.tweet_id: try: tweets.append(api.get_status(i, tweet_mode='extended')) time.sleep(.1) except: print("Error for: " + str(i)) json_list = [] for i in tweets: json_list.append(i._json) with open('df_json.txt', 'w') as outfile: json.dump(json_list, outfile) df_json = pd.read_json('df_json.txt') ###Output _____no_output_____ ###Markdown Assess df_image assessment ###Code df_image.info() df_image.head() df_image.describe() df_image.duplicated().sum() df_image.tweet_id.duplicated().sum() df_image.jpg_url.duplicated().sum() df_image[df_image.jpg_url.duplicated()].head() df_image.sample(5) df_image[(df_image.p1_dog == False) & (df_image.p2_dog == False) &(df_image.p3_dog == False)].sample(5) ###Output _____no_output_____ ###Markdown df_arch assessment ###Code df_arch.head() df_arch.info() df_arch.text.sample(5) df_arch.describe() df_arch.rating_numerator[df_arch.rating_numerator > 14] df_arch.rating_denominator[df_arch.rating_denominator > 10] df_arch.doggo.value_counts() df_arch.floofer.value_counts() df_arch.pupper.value_counts() df_arch.puppo.value_counts() ###Output _____no_output_____ ###Markdown df_json assessment ###Code df_json.head() df_json.info() df_json.describe() df_json.id[40] df_json.id_str[40] df_json.favorited.value_counts() df_json.favorite_count.describe() df_json.place.sample(20) df_json.possibly_sensitive_appealable.sum() df_json.retweeted.sum() df_json.is_quote_status.sum() df_json.lang.value_counts() df_json.source[0] df_json.truncated.sum() df_json.user[0] df_json.info() df_json.entities[0] df_json.extended_entities[0] df_json.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2067 entries, 0 to 2066 Data columns (total 28 columns): contributors 0 non-null float64 coordinates 0 non-null float64 created_at 2067 non-null 
datetime64[ns] display_text_range 2067 non-null object entities 2067 non-null object extended_entities 2067 non-null object favorite_count 2067 non-null int64 favorited 2067 non-null bool full_text 2067 non-null object geo 0 non-null float64 id 2067 non-null int64 id_str 2067 non-null int64 in_reply_to_screen_name 23 non-null object in_reply_to_status_id 23 non-null float64 in_reply_to_status_id_str 23 non-null float64 in_reply_to_user_id 23 non-null float64 in_reply_to_user_id_str 23 non-null float64 is_quote_status 2067 non-null bool lang 2067 non-null object place 1 non-null object possibly_sensitive 2067 non-null bool possibly_sensitive_appealable 2067 non-null bool retweet_count 2067 non-null int64 retweeted 2067 non-null bool retweeted_status 75 non-null object source 2067 non-null object truncated 2067 non-null bool user 2067 non-null object dtypes: bool(6), datetime64[ns](1), float64(7), int64(4), object(10) memory usage: 367.5+ KB ###Markdown Tidiness issues 1. "df_image": There are 3 separate columns for whether or not each row is a dog.2. "df_arch": The columns [doggo, floofer, pupper, puppo] can be combined into 1 column.3. "df_json": Columns id and id_str have the same value.4. All 3 datasets can be combined into 1. Quality issues 1. "df_image": Some rows are not dogs.2. "df_arch": Columns rating_numerator and rating_denominator often have values of more than 10. (Won't be fixed based on instructions)3. "df_arch": 5 columns related to reply_ and retweeted_ have fewer than 100 non-null values. We don't want retweets.4. "df_arch": Values in column source are html tags.5. "df_json": All values in favorited, possibly_sensitive_appealable, retweeted, truncated and is_quote_status are False.6. "df_json": Columns contributors, coordinates and geo don't have any values.7. "df_json": Columns in_reply..., place and retweeted_status have very few valid values.8. "df_json": Columns display_text_range, entities, extended_entities seem unnecessary for this analysis.9. 
"df_json": Each value in column user is a dictionary. Cleaning ###Code clean_image = df_image.copy() clean_arch = df_arch.copy() clean_json = df_json.copy() ###Output _____no_output_____ ###Markdown Deleting unnecessary rows Some rows are not dogs; we'll check whether all three predictions agree that the picture is not a dog. ###Code clean_image = clean_image.drop(clean_image.query('p1_dog== False & p2_dog==False & p3_dog==False').index,axis=0) clean_image.query('p1_dog== False & p2_dog==False & p3_dog==False') ###Output _____no_output_____ ###Markdown There are roughly 300 more rows in the df_arch dataset. Since we are looking only for rows with images, we'll drop the extra rows when merging the datasets. ###Code len(df_arch) len(df_image) ###Output _____no_output_____ ###Markdown Deleting unnecessary columns Column img_num won't be of any help, so I'll drop it. ###Code clean_image = clean_image.drop(['img_num'],axis=1) clean_image.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1751 entries, 0 to 2073 Data columns (total 11 columns): tweet_id 1751 non-null int64 jpg_url 1751 non-null object p1 1751 non-null object p1_conf 1751 non-null float64 p1_dog 1751 non-null bool p2 1751 non-null object p2_conf 1751 non-null float64 p2_dog 1751 non-null bool p3 1751 non-null object p3_conf 1751 non-null float64 p3_dog 1751 non-null bool dtypes: bool(3), float64(3), int64(1), object(4) memory usage: 128.2+ KB ###Markdown Also, columns 'in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id','retweeted_status_user_id','retweeted_status_timestamp','source','expanded_urls' won't help us either. 
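A side note on dropping columns in a notebook: rerunning a `drop` cell after the columns are already gone raises a `KeyError`. pandas' `drop` accepts `errors='ignore'`, which makes the call safe to rerun. A small sketch on a toy frame (not the notebook's actual dataframes):

```python
import pandas as pd

# Toy frame with three columns
toy = pd.DataFrame({"a": [1], "b": [2], "c": [3]})

# First drop removes 'b'; errors='ignore' lets the identical call run again safely
toy = toy.drop(columns=["b"], errors="ignore")
toy = toy.drop(columns=["b"], errors="ignore")  # no KeyError on rerun
print(list(toy.columns))  # ['a', 'c']
```

This keeps notebook cells idempotent without restarting the kernel.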
###Code clean_arch = clean_arch.drop(['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id', 'retweeted_status_user_id','retweeted_status_timestamp','source','expanded_urls'],axis=1) clean_arch.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 10 columns): tweet_id 2356 non-null int64 timestamp 2356 non-null object text 2356 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: int64(3), object(7) memory usage: 184.1+ KB ###Markdown And columns 'contributors','coordinates','display_text_range','entities','extended_entities','favorited','geo','id_str','in_reply_to_screen_name','in_reply_to_status_id','in_reply_to_status_id_str','in_reply_to_user_id','in_reply_to_user_id_str','is_quote_status','place','possibly_sensitive','possibly_sensitive_appealable','retweeted','retweeted_status','truncated','created_at' as well. 
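With this many columns to discard, an alternative to a long drop list is selecting only the columns worth keeping. A sketch on a toy frame with illustrative column names:

```python
import pandas as pd

# Toy frame; column names are illustrative, not the notebook's full set
toy = pd.DataFrame({"id": [1, 2], "favorite_count": [5, 9],
                    "geo": [None, None], "place": [None, None]})

keep = ["id", "favorite_count"]
toy = toy[keep]           # equivalent to dropping every other column
print(list(toy.columns))  # ['id', 'favorite_count']
```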
###Code clean_json = clean_json.drop(['contributors','coordinates','display_text_range','entities','extended_entities', 'favorited','geo','id_str','in_reply_to_screen_name','in_reply_to_status_id', 'in_reply_to_status_id_str','in_reply_to_user_id','in_reply_to_user_id_str', 'is_quote_status','place','possibly_sensitive','possibly_sensitive_appealable', 'retweeted','retweeted_status','truncated','created_at'],axis=1) clean_json.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2067 entries, 0 to 2066 Data columns (total 7 columns): favorite_count 2067 non-null int64 full_text 2067 non-null object id 2067 non-null int64 lang 2067 non-null object retweet_count 2067 non-null int64 source 2067 non-null object user 2067 non-null object dtypes: int64(3), object(4) memory usage: 113.1+ KB ###Markdown Combining all 3 datasets To make life easier for cleaning and analyzing the data, I'll merge all of them together using tweet id as key. ###Code clean_mix = clean_image.merge(clean_arch, on='tweet_id') clean_mix = clean_mix.merge(clean_json, left_on='tweet_id',right_on='id') clean_mix.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1744 entries, 0 to 1743 Data columns (total 27 columns): tweet_id 1744 non-null int64 jpg_url 1744 non-null object p1 1744 non-null object p1_conf 1744 non-null float64 p1_dog 1744 non-null bool p2 1744 non-null object p2_conf 1744 non-null float64 p2_dog 1744 non-null bool p3 1744 non-null object p3_conf 1744 non-null float64 p3_dog 1744 non-null bool timestamp 1744 non-null object text 1744 non-null object rating_numerator 1744 non-null int64 rating_denominator 1744 non-null int64 name 1744 non-null object doggo 1744 non-null object floofer 1744 non-null object pupper 1744 non-null object puppo 1744 non-null object favorite_count 1744 non-null int64 full_text 1744 non-null object id 1744 non-null int64 lang 1744 non-null object retweet_count 1744 non-null int64 source 1744 non-null object user 1744 non-null 
object dtypes: bool(3), float64(3), int64(6), object(15) memory usage: 345.7+ KB ###Markdown Now since I don't need column "id" anymore I'll just drop it. ###Code clean_mix = clean_mix.drop(['id','tweet_id'],axis=1) ###Output _____no_output_____ ###Markdown Combining dog type predictions into one column There are 3 predictions for each dog, after visually investigating some of the rows I saw some error where probabilities are low. So I'm going to filter the dataset and exclude the lower quartile of the first prediction probability (p1_conf) and also the first quartile of the difference between p1_conf and p2_conf to exclude the rows where the two probabilities are too close to each other. ###Code clean_mix.p1.value_counts()[:10] clean_mix.p2.value_counts()[:10] are_we_sure = clean_mix.apply(lambda x: x.p1_conf - x.p2_conf,axis=1) are_we_sure.describe() clean_mix.p1_conf.describe() clean_mix[are_we_sure < 0.150744] clean_mix = clean_mix[(are_we_sure > 0.150744) & (clean_mix.p1_conf > 0.377417)] ###Output _____no_output_____ ###Markdown There are still a number of rows where the p1 prediction is not a dog, I'll delete those as well. ###Code clean_mix = clean_mix[clean_mix.p1_dog==True] ###Output _____no_output_____ ###Markdown Now I'm going to delete all the prediction columns and allocate only the p1 prediction to each row. 
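As an aside on the confidence filter above: the quartile thresholds (0.150744 and 0.377417) were copied by hand from `describe()`. They can instead be computed with `quantile`, which keeps the filter correct if the data changes. A sketch on toy confidence values, not the real prediction table:

```python
import pandas as pd

# Toy prediction confidences
toy = pd.DataFrame({
    "p1_conf": [0.95, 0.40, 0.80, 0.30, 0.70, 0.55, 0.90, 0.60],
    "p2_conf": [0.02, 0.35, 0.10, 0.28, 0.15, 0.30, 0.05, 0.20],
})

gap = toy["p1_conf"] - toy["p2_conf"]

# Lower quartiles computed from the data, not hard-coded
gap_q1 = gap.quantile(0.25)
conf_q1 = toy["p1_conf"].quantile(0.25)

# Keep rows where the top prediction is confident and clearly ahead of the runner-up
confident = toy[(gap > gap_q1) & (toy["p1_conf"] > conf_q1)]
print(len(confident))
```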
###Code clean_mix['dog_type'] = clean_mix.p1 clean_mix = clean_mix.drop(['p1','p2','p3','p1_conf','p2_conf','p3_conf','p1_dog','p2_dog','p3_dog'],axis=1) clean_mix.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1094 entries, 0 to 1742 Data columns (total 17 columns): jpg_url 1094 non-null object timestamp 1094 non-null object text 1094 non-null object rating_numerator 1094 non-null int64 rating_denominator 1094 non-null int64 name 1094 non-null object doggo 1094 non-null object floofer 1094 non-null object pupper 1094 non-null object puppo 1094 non-null object favorite_count 1094 non-null int64 full_text 1094 non-null object lang 1094 non-null object retweet_count 1094 non-null int64 source 1094 non-null object user 1094 non-null object dog_type 1094 non-null object dtypes: int64(4), object(13) memory usage: 153.8+ KB ###Markdown Combining dog stage columns into one Having one row for each dog stage won't help us in analysis, I'll make a new column and set the dog type as the value for each row. ###Code clean_mix['dog_stage'] = clean_mix.apply(lambda x : 'doggo' if x.doggo=="doggo" else 'floofer' if x.floofer=='floofer' else 'pupper' if x.pupper=='pupper' else 'puppo' if x.puppo=='puppo' else 'None',axis=1) ###Output _____no_output_____ ###Markdown Now I'll drop the dog stage columns. 
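The nested conditional above works, but the same collapse of the four stage columns can be written as a small helper that scans the stages in the same priority order. A sketch on toy data; the behavior matches the `apply` above:

```python
import pandas as pd

# Toy frame mimicking the four stage columns
toy = pd.DataFrame({
    "doggo":   ["doggo", "None", "None"],
    "floofer": ["None", "None", "None"],
    "pupper":  ["None", "pupper", "None"],
    "puppo":   ["None", "None", "None"],
})

STAGES = ["doggo", "floofer", "pupper", "puppo"]

def first_stage(row):
    # Return the first stage column that is set, else 'None'
    for s in STAGES:
        if row[s] == s:
            return s
    return "None"

toy["dog_stage"] = toy.apply(first_stage, axis=1)
print(toy["dog_stage"].tolist())  # ['doggo', 'pupper', 'None']
```

The helper is easier to extend than a chained ternary if more stages are ever added.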
###Code clean_mix = clean_mix.drop(['doggo','floofer','pupper','puppo'],axis=1) clean_mix.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1094 entries, 0 to 1742 Data columns (total 14 columns): jpg_url 1094 non-null object timestamp 1094 non-null object text 1094 non-null object rating_numerator 1094 non-null int64 rating_denominator 1094 non-null int64 name 1094 non-null object favorite_count 1094 non-null int64 full_text 1094 non-null object lang 1094 non-null object retweet_count 1094 non-null int64 source 1094 non-null object user 1094 non-null object dog_type 1094 non-null object dog_stage 1094 non-null object dtypes: int64(4), object(10) memory usage: 128.2+ KB ###Markdown Fixing the name 'a' Apparently there is a bunch of rows with name value 'a', I'll change those to 'None'. ###Code clean_mix.name.value_counts()[:5] clean_mix.name = clean_mix.name.replace('a','None') ###Output _____no_output_____ ###Markdown Extracting the value from source column's tag This column's values are tags and I only want the actual string. So I'll clean them up. ###Code clean_mix.source[0] clean_mix.source.str[-14:].value_counts() clean_mix.source = clean_mix.source.apply(lambda x: 'iPhone' if x[-14:]=='for iPhone</a>' else 'Web Client' if x[-14:]=='Web Client</a>' else 'TweetDeck' if x[-14:]=='>TweetDeck</a>' else 'None') ###Output _____no_output_____ ###Markdown Dealing with "user" column "user" is the "WeRateDogs" tweeter description and since it's constant for all rows I'll drop it. 
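Before dropping a column on the assumption that it is constant, `nunique` gives a quick confirmation. A sketch on toy data:

```python
import pandas as pd

# Toy frame where 'user' is constant across rows
toy = pd.DataFrame({"user": ["WeRateDogs"] * 4, "score": [1.0, 1.2, 0.9, 1.1]})

# A column with a single distinct value carries no information for analysis
constant_cols = [c for c in toy.columns if toy[c].nunique(dropna=False) == 1]
print(constant_cols)  # ['user']
```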
###Code clean_mix = clean_mix.drop('user',axis=1) clean_mix.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1094 entries, 0 to 1742 Data columns (total 13 columns): jpg_url 1094 non-null object timestamp 1094 non-null object text 1094 non-null object rating_numerator 1094 non-null int64 rating_denominator 1094 non-null int64 name 1094 non-null object favorite_count 1094 non-null int64 full_text 1094 non-null object lang 1094 non-null object retweet_count 1094 non-null int64 source 1094 non-null object dog_type 1094 non-null object dog_stage 1094 non-null object dtypes: int64(4), object(9) memory usage: 159.7+ KB ###Markdown Combining rating_numerator and rating_denominator into 1 column To make the rating system easy to use, I'll divide the two columns to get a single-value score. ###Code clean_mix['score'] = clean_mix.rating_numerator / clean_mix.rating_denominator clean_mix.score.sort_values() ###Output _____no_output_____ ###Markdown To make sure no anomalies affect the average scores in the analysis section, I'll drop the last 4 rows, which have values of more than 1.5. ###Code clean_mix = clean_mix[clean_mix.score < 1.5] clean_mix.score.sort_values()[-5:] ###Output _____no_output_____ ###Markdown Saving the master dataset as csv file ###Code clean_mix.to_csv('twitter_archive_master.csv',index=False) ###Output _____no_output_____ ###Markdown Analysis and Visualizations After having a look at the dataset and the columns we have, a few ideas for analysis came to mind: 1. Does the dog type have anything to do with its name? 2. Is there any correlation between favorite count and dog type? 3. How about score and retweet count? 
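Questions 2 and 3 can also be approached numerically with pairwise Pearson correlations before plotting. A sketch on a small hypothetical sample (the real values come from the master csv built above):

```python
import pandas as pd

# Hypothetical sample standing in for twitter_archive_master.csv
sample = pd.DataFrame({
    "favorite_count": [1200, 3400, 560, 8900, 210],
    "retweet_count":  [300,  900,  150, 2100, 60],
    "score":          [1.0,  1.2,  0.8, 1.3,  0.6],
})

# Pairwise Pearson correlation matrix of the three popularity-related columns
corr = sample[["favorite_count", "retweet_count", "score"]].corr(method="pearson")
print(corr.round(2))
```

A value near 1 between favorite and retweet counts would match what the scatterplots later suggest.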
###Code df = pd.read_csv('twitter_archive_master.csv') ###Output _____no_output_____ ###Markdown Name vs dog type ###Code df[df.name !='None'].name.value_counts()[:20] df[df.name != 'None'].groupby(['name'])['dog_type'].value_counts().sort_values(ascending=False)[:20] ###Output _____no_output_____ ###Markdown It seems that, because there are so many different names, few of them are concentrated in one specific type of dog, so there appears to be little correlation between name and dog type. Favorite count vs Retweet count vs Score and dog_type Favorite_count, retweet_count and score all have values related to popularity, therefore I'm going to make a new dataset and extract the average of these columns for each dog type. Since only a handful of tweets mention some rare types of dogs, and in these tweets one very high score can influence the average score of that type of dog, I'll filter the dog types to only the ones with more than 10 tweets about them. ###Code a = df.groupby('dog_type')['score'].count()[df.groupby('dog_type')['score'].count()>10].index sele = df[df.dog_type.apply(lambda x : x in a)] types = pd.DataFrame() types['dog_type'] = sele.groupby(['dog_type']).favorite_count.mean().index types['favorite_count'] = sele.groupby(['dog_type']).favorite_count.mean().values.astype(int) types['score'] = sele.groupby(['dog_type']).score.mean().values types['retweet_count'] = sele.groupby(['dog_type']).retweet_count.mean().values.astype(int) types.sort_values(['favorite_count'],ascending=False)[:5] types.sort_values(['score'],ascending=False)[:5] types.sort_values(['retweet_count'],ascending=False)[:5] ###Output _____no_output_____ ###Markdown As we can see in the tables above, the 3 features favorite_count, retweet_count and score have different top 5s, but since "Cardigan" is in the top 5 of all three tables, we can confidently name it the most popular dog type in our dataset. 
To be fair favorite count and retweet count are more reliable metrics to measure the popularity of a dog, and we have 3 types of dogs in the top 5 of both these metrics, "French Bulldog", Great Pyrenees" and "Cardigan". These are dogs people like to see! ###Code plt.scatter(types.score,types.favorite_count); plt.xlabel('Score') plt.ylabel('Favorite Count') plt.title('Score vs Favorite Count'); plt.scatter(types.retweet_count,types.favorite_count); plt.xlabel('Retweet Count') plt.ylabel('Favorite Count') plt.title('Favorite vs Retweet Count'); plt.scatter(types.retweet_count,types.score); plt.xlabel('Retweet Count') plt.ylabel('Score') plt.title('Score vs Retweet Count'); ###Output _____no_output_____ ###Markdown Visual Assessment on Twitter archive content which contains basic tweet data -----For this project, key data assessment requirements for twitter archive data include original rating and there should be an image associated with the given rating.- "expanded_urls" is associated with image urls for a given tweet.- It is observed that the "expanded_urls" column does have missing or no values- "rating_numerator" gives us insights into the given dog rating.- "tweet_id" it a unique identifier identifying each unique tweet for each dog.- "timestamp" captures the date and time specifics when a direct message was posted to weratedogs- 2356 records in twitter archive data set. ###Code tweet_av_df ###Output _____no_output_____ ###Markdown Programmatic Assessment on Twitter archive content which contains basic tweet data ###Code tweet_av_df.head() tweet_av_df.tail() tweet_av_df.sample(5) tweet_av_df.info() tweet_av_df.rating_numerator.describe() tweet_av_df[['doggo', 'floofer', 'pupper', 'puppo']].apply(pd.Series.value_counts) ###Output _____no_output_____ ###Markdown **Define - Data Quality Issues** ----------- Completeness: The following columns are incomplete - and have missing values. 
'in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id','retweeted_status_user_id', 'retweeted_status_timestamp','expanded_urls'.- Consistency: 'in_reply_to_user_id', 'retweeted_status_user_id' (status ids are sometimes populated with user_id).- Validity: 'rating_denominator' has 0 values in it. This will result in 0 rating for dogs.- Erroneous data types: 'timestamp', 'retweeted_status_timestamp' has been set as object type.- Erroneous data types: 'in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id','retweeted_status_user_id' should be int type.- Accuracy: 'name' column has inaccurate values. **Code - Code to fix the data quality issues.** -------- ###Code # Fixing the 'timestamp' data quality issue. Converting object type to datetime64 with UTC timezone tweet_av_df['timestamp'] = pd.to_datetime(tweet_av_df['timestamp'], utc=True, errors='coerce') # Fixing 'expanded_urls' data quality issue. Delete rows that have null values since we only want original tweet ratings that have images. tweet_av_df = tweet_av_df.dropna(axis=0, subset=['expanded_urls']) # Fixing 'retweeted_status_id' data quality issue. We delete rows that have a value associated with retweeted_status_id since we only want original tweet ratings. tweet_av_df = tweet_av_df.drop(tweet_av_df.loc[tweet_av_df.retweeted_status_id.notna()].index) tweet_av_df.reset_index() # Continue with 'retweeted_status_id' - since we are intrested with original tweets only, it is safe to drop this column since we are left with null values in this column. tweet_av_df.drop(axis=1, columns=['retweeted_status_id'], inplace=True) # 'retweeted_status_user_id' column can also be dropped since we are also left with null values. Keeping the original question in mind. tweet_av_df.drop(axis=1, columns=['retweeted_status_user_id'], inplace=True) # 'retweeted_status_timestamp' column can also be dropped since we are also left with null values. Keeping the original question in mind. 
tweet_av_df.drop(axis=1, columns=['retweeted_status_timestamp'], inplace=True) # 'in_reply_to_status_id' column can be dropped, since we are not really looking at tweets that were in reply. tweet_av_df.drop(axis=1, columns=['in_reply_to_status_id'], inplace=True) # 'in_reply_to_user_id' column can be dropped too, since we are not really looking at tweets that were in reply. tweet_av_df.drop(axis=1, columns=['in_reply_to_user_id'], inplace=True) ###Output _____no_output_____ ###Markdown **Test** ------ ###Code # Taking a peek at tweet dataframe after fixing the data quality issues. tweet_av_df.info() ###Output _____no_output_____ ###Markdown Assessment Findings on Image Predictions data set ------**Data Quality Issues**- No data quality issues were found with this data set. The dataset is complete, consistent, valid with no accuracy issues. Gather Favourite and Retweet Count using Twitter api -----Before we proceed with data tidy tasks., we will gather favourite and retweet count data using twitter api. ###Code # A function to collect tweet json data using twitter api and save it to json file. def collecttweetdata(tweet_ids): consumer_key = '' consumer_secret = '' auth = tweepy.AppAuthHandler(consumer_key, consumer_secret) api = tweepy.API(auth, wait_on_rate_limit=True) count = 0 fails_dict = {} start = timer() with open('tweet-json.txt', 'w') as outfile: for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweetjson = api.get_status(tweet_id, tweet_mode="extended") print("Success") json.dump(tweetjson._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail", e) fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) # Lets collect tweet data and save to tweets data json file. 
tweet_ids = tweet_av_df.tweet_id.values collecttweetdata(tweet_ids) # Creating dataframe from the tweet json that will have the 'favorite_count' and 'retweet_count' def is_json_key_present(jdict, key): try: buf = jdict[key] except KeyError: return False return True column_names = ["tweet_id", "favorite_count", "retweet_count"] list_vals = [] with open('tweet-json.txt','r') as jfile: for line in jfile: try: myjson = json.loads(line) if (is_json_key_present(myjson,'id') and is_json_key_present(myjson,'favorite_count') and is_json_key_present(myjson,'retweet_count')): vals = [myjson['id'], myjson['favorite_count'], myjson['retweet_count']] list_vals.append(vals) else: print('Skipping row as there is no data') except: pass tweet_data_df = pd.DataFrame(list_vals, columns=column_names) print(tweet_data_df.shape, tweet_data_df.columns) ###Output _____no_output_____ ###Markdown **Define - Tidiness issues - 1** ----------The following tidiness issues were identified.- The retweet and favorite counts belong with the twitter data set, together forming one observational unit (table).- As each variable forms a column, the columns on the twitter data set 'doggo', 'floofer', 'pupper', 'puppo' are identifying various stages of a dog. We fix this by creating a single column 'growth_stage' that captures the dog stage.- The image predictions data can also be combined with the twitter data set to form an observational unit from where the predictions on each tweet can be analyzed. **Code - Code to fix the tidiness issues.** -------- ###Code # Merge tweet archive data set with tweet json data set that has favorite and retweet count. 
twitter_av_favs_df = tweet_av_df.merge(tweet_data_df, on='tweet_id') ###Output _____no_output_____ ###Markdown **Test - the data quality issues.** -------- ###Code twitter_av_favs_df.info() ###Output _____no_output_____ ###Markdown **Define - Tidiness issues - 2** ----------The columns on the twitter archive data set 'doggo', 'floofer', 'pupper', 'puppo' are identifying various stages of a dog. We fix this wide form of data by creating a single column 'growth_stage' that captures the dog stage.I created a function to melt the individual stage columns into a single column that identifies the existing stage of the dog. The highest stage takes precedence when a tweet indicates multiple stages of the dog. **Code - Code to fix the tidiness issues.** -------- ###Code # Custom function to derive the current stage of the dog and create a dataframe to hold the tweet_id and the growth_stage. def dog_stages(twitter_avfavsdf): stages_dict = {} for i in range(len(twitter_avfavsdf)): row = twitter_avfavsdf.iloc[i] stage = "unknown" if (row.floofer == "floofer"): stage = "floofer" elif (row.puppo == "puppo"): stage = "puppo" elif (row.pupper == "pupper"): stage = "pupper" elif (row.doggo == "doggo"): stage = "doggo" stages_dict[row.tweet_id] = stage return stages_dict stage_dictn = dog_stages(twitter_av_favs_df) stage_df = pd.DataFrame(stage_dictn.items(), columns=['tweet_id', 'growth_stage']) # Enriching the twitter data set that has favourite and retweet count with the growth_stage column and creating a new data set. twitter_av_favsstages_df = twitter_av_favs_df.merge(stage_df, on='tweet_id') # We are going to drop the stage columns. twitter_av_favsstages_df.drop(axis=1, columns=['doggo', 'floofer', 'pupper', 'puppo'], inplace=True) # Finally we will save this to our master data frame - 'twitter_archive_master.csv'. 
twitter_av_favsstages_df.to_csv('twitter_archive_master.csv', index=False) ###Output _____no_output_____ ###Markdown **Test - the data quality issues.** -------- ###Code # Let's get the shape and columns print(twitter_av_favsstages_df.shape, twitter_av_favsstages_df.columns) # Getting a sample of our dataset twitter_av_favsstages_df.sample(3) ###Output _____no_output_____ ###Markdown Visualizations to help us get insights on WeRateDogs Twitter data - What was the most favourited dog stage on average?- What was the most retweeted dog stage on average?- Which dog stages are highly rated on average? ###Code twitter_av_master_df = pd.read_csv('twitter_archive_master.csv') # On twitter which dog stage has more favourites on average? plt.figure(figsize=[16, 12]) base_color = sb.color_palette()[0] splot = sb.barplot(x="growth_stage", y="favorite_count", data=twitter_av_master_df, ci=None, order=['doggo', 'pupper', 'puppo', 'floofer'], color=base_color) for p in splot.patches: splot.annotate(format(p.get_height(), '.0f'), (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 10), textcoords = 'offset points') plt.xlabel("Dog Stages") plt.ylabel("Favorite Count") plt.title("Favorites by Dog Stages on Twitter") plt.show() # On twitter which dog stage has high retweets on average? plt.figure(figsize=[16, 12]) base_color = sb.color_palette()[0] splot = sb.barplot(x="growth_stage", y="retweet_count", data=twitter_av_master_df, ci=None, order=['doggo', 'pupper', 'puppo', 'floofer'], color=base_color) for p in splot.patches: splot.annotate(format(p.get_height(), '.0f'), (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 10), textcoords = 'offset points') plt.xlabel("Dog Stages") plt.ylabel("Retweet Count") plt.title("Retweets by Dog Stages on Twitter") plt.show() # How are the dogs rated by their stages? 
plt.figure(figsize=[16, 12]) base_color = sb.color_palette()[0] splot = sb.barplot(x="growth_stage", y="rating_numerator", data=twitter_av_master_df, ci=None, order=['doggo', 'pupper', 'puppo', 'floofer'], color=base_color) for p in splot.patches: splot.annotate(format(p.get_height(), '.0f'), (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 10), textcoords = 'offset points') plt.xlabel("Dog Stages") plt.ylabel("Rating on scale of 10") plt.title("Rating averages by Dog Stages on Twitter") plt.show() ###Output _____no_output_____ ###Markdown WeRateDogs Data Wrangling and Analyzing Table of Contents IntroductionGathering DataAssessing DataCleaning DataStoring, Analyzing and Visualizing IntroductionIn this project, I am applying my data wrangling skill learned from Udacity Data Analyst Nanodegree. The dataset that I will be wrangling is the tweet archive of Twitter user @dog_rates, also known as WeRateDogs. Gathering DataThere are 3 pieces of data that need to be gathered from a variety of sources and in a variety of formats:1. The WeRateDogs Twitter archive given by the instructor in csv format.2. The tweet image predictions, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network. This file (image_predictions.tsv) is hosted on Udacity's servers and should be downloaded programmatically using the Requests library and the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv3. Each tweet's retweet count and favorite ("like") count at minimum, and any additional data you find interesting. Using the tweet IDs in the WeRateDogs Twitter archive, query the Twitter API for each tweet's JSON data using Python's Tweepy library and store each tweet's entire set of JSON data in a file called tweet_json.txt file. 
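Step 3 stores one JSON object per line, a format that can later be parsed back line by line with `json.loads`. A minimal sketch writing and re-reading a two-record stand-in file (the file name and fields are illustrative):

```python
import json
import os
import tempfile

# Hypothetical records standing in for tweet JSON objects
records = [
    {"id": 1, "retweet_count": 10, "favorite_count": 25},
    {"id": 2, "retweet_count": 3, "favorite_count": 7},
]

path = os.path.join(tempfile.gettempdir(), "tweet_json_demo.txt")

# Write one JSON object per line
with open(path, "w", encoding="utf-8") as f:
    for r in records:
        json.dump(r, f)
        f.write("\n")

# Read back line by line
loaded = []
with open(path, "r", encoding="utf-8") as f:
    for line in f:
        loaded.append(json.loads(line))

print(len(loaded), loaded[0]["favorite_count"])  # 2 25
```

One object per line means a partially written file still yields every complete record, which matters when a long API run is interrupted.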
###Code #Import all necessary packages import pandas as pd import requests import tweepy import time import json import re import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 1. Twitter archive file ###Code #Read the twitter archive csv file df = pd.read_csv('twitter-archive-enhanced.csv') ###Output _____no_output_____ ###Markdown 2. Tweet image predictions ###Code #Download the tweet image predictions using Requests library url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' url_data = requests.get(url) with open('image-predictions.tsv', mode='wb') as file: file.write(url_data.content) #Import the tweet image predictions into a dataframe image_data = pd.read_csv('image-predictions.tsv', sep='\t') ###Output _____no_output_____ ###Markdown 3. Twitter API & JSON ###Code #Twitter API keys and tokens (fill in your own; real keys should never be committed) consumer_key = '' consumer_secret = '' access_token = '' access_secret = '' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth_handler=auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) #Query Twitter API for each tweet's JSON using the tweet IDs in the twitter archive file #Start timer start = time.time() with open('getstatus_error.txt', 'w') as error_file: valid_ids = 0 error_ids = 0 tweet_ids = df['tweet_id'] with open('tweet_json.txt', 'w', encoding='utf-8') as outfile: for i, tweet_id in tweet_ids.iteritems(): try: print("%s# %s" % (str(i+1), tweet_id)) #Get tweet data from Twitter API tweet = api.get_status(tweet_id, tweet_mode='extended') json_content = tweet._json #Write each tweet's JSON data to its own line in a file json.dump(json_content, outfile) outfile.write('\n') valid_ids += 1 except 
tweepy.TweepError as e: error_ids += 1 error_str = [] error_str.append(str(tweet_id)) error_str.append(': ') error_str.append(e.response.json()['errors'][0]['message']) error_str.append('\n') error_file.write(''.join(error_str)) print(''.join(error_str)) continue print("%s %s" % ('Valid tweets:', valid_ids)) print("%s %s" % ('Error tweets:', error_ids)) #End timer: time.time() returns seconds, so divide by 60 for minutes end = time.time() print((end - start)/60) #Read tweet's JSON data line by line and store it in a list of dictionaries that is converted to a dataframe json_dict = [] with open('tweet_json.txt', 'r') as json_file: for line in json_file: status = json.loads(line) #Append to the list json_dict.append({'tweet_id': status['id'], 'retweet_count': status['retweet_count'], 'favorite_count': status['favorite_count'], 'display_text_range': status['display_text_range'] }) #Convert the list to a dataframe tweet_df = pd.DataFrame(json_dict, columns = ['tweet_id', 'retweet_count', 'favorite_count', 'display_text_range']) ###Output _____no_output_____ ###Markdown Assessing DataI am using visual and programmatic assessment to identify any Quality (content) and Tidiness (structure) issues in the dataframes. All of the issues will be documented at the end of this section. 
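The programmatic assessment below leans on a handful of pandas calls; on a toy frame they boil down to this sketch:

```python
import pandas as pd

# Toy frame standing in for the gathered tables
toy = pd.DataFrame({"tweet_id": [1, 2, 2], "name": ["Max", None, "Max"]})

toy.info()                                   # dtypes and non-null counts
missing = toy.isnull().sum()                 # missing values per column
dupes = toy["tweet_id"].duplicated().sum()   # duplicate ids
print(missing["name"], dupes)  # 1 1
```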
Visual Assessment
Display all three gathered datasets for visual assessment purposes
###Code
df.sample(10)
image_data.sample(10)
tweet_df.sample(10)
###Output
_____no_output_____
###Markdown
Programmatic Assessment
Use Pandas to assess the data
###Code
#Check the datatypes of the twitter archive table columns
df.info()
#Check the datatypes of the image data table columns
image_data.info()
#Check the datatypes of the twitter API table columns
tweet_df.info()
#Check to see if any tweet_id is a duplicate
sum(df['tweet_id'].duplicated())
#Sort the rating denominator in df table
df['rating_denominator'].value_counts().sort_index()
#Sort the rating numerator in df table
df['rating_numerator'].value_counts().sort_index()
#See all the dog names in df table
df['name'].value_counts().sort_index()
#Check if there are any retweets in Twitter Archive data (we only want original posts)
df[df['retweeted_status_id'].isnull()]
#Check if any dogs have more than 1 stage listed
print(len(df[(df.doggo != 'None') & (df.floofer != 'None')]))
print(len(df[(df.doggo != 'None') & (df.pupper != 'None')]))
print(len(df[(df.doggo != 'None') & (df.puppo != 'None')]))
print(len(df[(df.floofer != 'None') & (df.pupper != 'None')]))
print(len(df[(df.floofer != 'None') & (df.puppo != 'None')]))
print(len(df[(df.pupper != 'None') & (df.puppo != 'None')]))
#Check the image_data table for any duplicates
sum(image_data['jpg_url'].duplicated())
###Output
_____no_output_____
###Markdown
Quality
Completeness, validity, accuracy, consistency (content)
Twitter archive data (df)
- There are retweet entries (we only want original posts)
- Rating denominator column has values other than 10
- Rating numerator column has values less than 10 and unusually large values
- Name column has erroneous entries (such as 'a', 'an', 'actually')
- Erroneous datatypes (in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id columns)
- There are 14 dogs that have more than 1 dog stage
Tweet image predictions
(image_data)
- Missing data (only 2075 rows instead of 2356)
- There are 66 duplicate jpg_url entries
- Image predictions might not be actual dog breeds (example: bow tie)
Tidiness
- Dog stage variable is not in the same column
- Some columns are not needed in this analysis
- All data should be combined into one table
Cleaning Data
In this section, I will fix the issues I identified in the Assessing step using code. After this section, the data should be clean and ready for analysis and visualization.
###Code
#Create a copy of all dataframes to do the cleaning on
df_clean = df.copy()
image_clean = image_data.copy()
tweet_clean = tweet_df.copy()
###Output
_____no_output_____
###Markdown
Quality
1. Twitter archive data: There are retweet entries (we only want original posts)
Define
Remove all entries that are retweets (we only want to keep original tweets) and drop the columns concerning retweet info
Code
###Code
df_clean = df_clean[df_clean['retweeted_status_id'].isnull()]
df_clean.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2175 entries, 0 to 2355
Data columns (total 17 columns):
tweet_id                      2175 non-null int64
in_reply_to_status_id         78 non-null float64
in_reply_to_user_id           78 non-null float64
timestamp                     2175 non-null object
source                        2175 non-null object
text                          2175 non-null object
retweeted_status_id           0 non-null float64
retweeted_status_user_id      0 non-null float64
retweeted_status_timestamp    0 non-null object
expanded_urls                 2117 non-null object
rating_numerator              2175 non-null int64
rating_denominator            2175 non-null int64
name                          2175 non-null object
doggo                         2175 non-null object
floofer                       2175 non-null object
pupper                        2175 non-null object
puppo                         2175 non-null object
dtypes: float64(4), int64(3), object(10)
memory usage: 305.9+ KB
###Markdown
Test
###Code
print(len(df_clean[df_clean['retweeted_status_id'].isnull() == False]))
df_clean.info()
#Drop retweeted_status_id, retweeted_status_user_id and retweeted_status_timestamp columns since they are empty
df_clean =
df_clean.drop(['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1) ###Output _____no_output_____ ###Markdown 2. Tweet image predictions data: missing data DefineKeep only the rows in twitter archive data with existing tweet_id data in tweet image predictions data Code ###Code df_clean = df_clean[df_clean['tweet_id'].isin(image_clean['tweet_id'])] ###Output _____no_output_____ ###Markdown Test ###Code len(df_clean[~df_clean['tweet_id'].isin(image_clean['tweet_id'])]) ###Output _____no_output_____ ###Markdown 3. Twitter archive data: Erroneous datatypes (in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id columns) Define Convert the in_reply_to_status_id & in_reply_to_user_id from float to string using astype.The retweeted_status_id & retweeted_status_user_id columns are already dropped. Code ###Code df_clean['in_reply_to_status_id'] = df_clean['in_reply_to_status_id'].astype('str') df_clean['in_reply_to_user_id'] = df_clean['in_reply_to_user_id'].astype('str') ###Output _____no_output_____ ###Markdown Test ###Code df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 2355 Data columns (total 14 columns): tweet_id 1994 non-null int64 in_reply_to_status_id 1994 non-null object in_reply_to_user_id 1994 non-null object timestamp 1994 non-null object source 1994 non-null object text 1994 non-null object expanded_urls 1994 non-null object rating_numerator 1994 non-null int64 rating_denominator 1994 non-null int64 name 1994 non-null object doggo 1994 non-null object floofer 1994 non-null object pupper 1994 non-null object puppo 1994 non-null object dtypes: int64(3), object(11) memory usage: 233.7+ KB ###Markdown 4. 
Twitter archive data: rating denominators have values other than 10 DefineFix the rating denominators that are not actual ratings (multiple / in the 'text column) Code ###Code #All occurences where there are more than one #/#'s in the text column ratings_multiple = df_clean[df_clean['text'].str.contains( r"(\d+\.?\d*\/\d+\.?\d*\D+\d+\.?\d*\/\d+\.?\d*)")].text ratings_multiple for entry in ratings_multiple: mask = df_clean['text'] == entry column_name1 = 'rating_numerator' column_name2 = 'rating_denominator' df_clean.loc[mask, column_name1] = re.findall(r"\d+\.?\d*\/\d+\.?\d*\D+(\d+\.?\d*)\/\d+\.?\d*", entry) df_clean.loc[mask, column_name2] = 10 ###Output _____no_output_____ ###Markdown Test ###Code df_clean['rating_denominator'].value_counts() ###Output _____no_output_____ ###Markdown DefineFor records whose rating_denominator is greater than 10 and divisible by 10, use the quotient as the divisor to divide the rating_numerator. If the numerator turns out to be divisible (i.e. remainder=0), assign this quotient as the rating_numerator. Code ###Code #Iterate for all records with rating denominator not equal to 10 for i, row in df_clean[df_clean['rating_denominator'] != 10].iterrows(): d = row['rating_denominator'] #If rating denominator is greater than 10 and divisible by 10 if d > 10 and d%10 == 0: # assign divisor as the quotient divisor = d/10 n = row['rating_numerator'] #If rating_numerator is divisible by the divisor if n%divisor == 0: # reassign rating_denominator as 10 df_clean.set_value(i, 'rating_denominator', 10) # reassign rating_numerator as the quotient of rating_numerator by divisor df_clean.set_value(i, 'rating_numerator', int(n/divisor)) ###Output /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:14: FutureWarning: set_value is deprecated and will be removed in a future release. 
Please use .at[] or .iat[] accessors instead /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:16: FutureWarning: set_value is deprecated and will be removed in a future release. Please use .at[] or .iat[] accessors instead app.launch_new_instance() ###Markdown Test ###Code df_clean['rating_denominator'].value_counts() ###Output _____no_output_____ ###Markdown Gather each of the three pieces of data as described below in a Jupyter Notebook titled `wrangle_act.ipynb`:1. The WeRateDogs Twitter archive. I am giving this file to you, so imagine it as a file on hand. Download this file manually by clicking the following link: twitter_archive_enhanced.csv2. The tweet image predictions, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network. This file (image_predictions.tsv) is hosted on Udacity's servers and should be downloaded programmatically using the Requests library and the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv3. Each tweet's retweet count and favorite ("like") count at minimum, and any additional data you find interesting. Using the tweet IDs in the WeRateDogs Twitter archive, query the Twitter API for each tweet's JSON data using Python's Tweepy library and store each tweet's entire set of JSON data in a file called tweet_json.txt file. Each tweet's JSON data should be written to its own line. Then read this .txt file line by line into a pandas DataFrame with (at minimum) tweet ID, retweet count, and favorite count. Note: do not include your Twitter API keys, secrets, and tokens in your project submission. jump to [Assess](Assess) ###Code # Downloading TSV programatically as instructed by nr. 
2 above
import requests
response = requests.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv')
with open('image-predictions.tsv', 'wb') as f:
    f.write(response.content)

import pandas as pd
import numpy as np
import json
import time

df = pd.read_csv('twitter-archive-enhanced.csv')
predictions = pd.read_csv('image-predictions.tsv', sep='\t')
###Output
_____no_output_____
###Markdown
jump to [JSON reading](Reading-JSON-into-DataFrame)
###Code
#
# !! REMEMBER TO remove your keys and tokens !!
#
import tweepy

consumer_key = 'YOUR CONSUMER KEY'
consumer_secret = 'YOUR CONSUMER SECRET'
access_token = 'YOUR ACCESS TOKEN'
access_secret = 'YOUR ACCESS SECRET'

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
df.iloc[:6,[0,3,5]]
# testing the method to obtain a tweet's full text
#tweet = api.get_status(df.iloc[4,0], tweet_mode='extended')
tweet = api.get_status(df.iloc[2,0])
tweet.text
tweet.retweet_count
tweet.id
# how about favorite (like) count?
tweet.favorite_count
list(df.iloc[:20,0].values)
df.shape
#twt_list = list(df.iloc[:20,0].values) # used to test the API gathering code
twt_list = list(df.iloc[:,0].values)
time.asctime() # testing time function
#Gathering twitter data and inserting into JSON file
errors = []
twt_ids_tot = len(twt_list)
start = time.time()
with open('tweet-json.txt', 'w') as file:
    for twt_id in twt_list:
        try:
            # printing index to gauge time remaining
            ranking = twt_list.index(twt_id) + 1
            print('{} of {} : id {}'.format(ranking, twt_ids_tot, twt_id))
            status = api.get_status(twt_id, tweet_mode='extended')
            json.dump(status._json, file)
            file.write('\n')
            print('Wrote to file')
        except tweepy.RateLimitError:
            print(time.asctime())
            print('Waiting 15 min')
            time.sleep(15 * 60)
        except Exception as e:
            print('\t EXCEPTION!
list nr: {} , id: {} \n {}'.format(ranking, twt_id, e)) errors.append(str(ranking) + "_" + str(twt_id)) end = time.time() print(':: {} minutes to get twitter statuses @WeRateDogs'.format((end - start) / 60)) errors ###Output _____no_output_____ ###Markdown jump to [Assess](Assess) Reading JSON into DataFrame ###Code # reading json file created from Twitter via dump popularity = [] with open('tweet-json.txt', 'r') as file: for line in file: try: twt = json.loads(line) #print(twt['id'], twt['favorite_count'], twt['retweet_count'], twt['full_text'] ) popularity.append({'id': str(twt['id']),'favorites': twt['favorite_count'],\ 'retweets': twt['retweet_count']}) #, 'full_text': twt['full_text']}) except Exception as e: print(e) # Make a DataFrame out of a list of dictionaries popularity = pd.DataFrame(popularity, columns=['id','favorites','retweets']) popularity popularity.info() popularity.shape ###Output _____no_output_____ ###Markdown Reading and Copying before cleaning ###Code # copying before cleaning, but just this one so as to show I'm able to do it. # All the others can just be re-read from the original files (json and tsv) if needed archive = df.copy() archive.shape archive.head(10) archive.source.value_counts() archive.expanded_urls[6] archive.info() archive.describe() archive.doggo.value_counts() archive.floofer.value_counts() archive[['tweet_id', 'timestamp', 'rating_numerator']][archive.retweeted_status_id.notna()] popularity.info() predictions.head() predictions.info() predictions.describe() archive.info() archive.loc[0,'floofer'] archive[archive.expanded_urls.duplicated()].shape ###Output _____no_output_____ ###Markdown jump to [JSON reading](Reading-JSON-into-DataFrame) Assess Quality`archive` table1. [retweeted columns](Retweets) with non-null values indicate the rows which are retweets [x]2. [timestamp column](Timestamp) is of `str` type, should be `datetime` [x]2. [column tweet_id](Tweet_id) is of int type, should be str (object) [x]3. 
'None' string values indicating nulls on [dog stage columns](Dog_stage), should be `np.nan` or `None` [x]
4. [rating_denominator](Rating_denominator) max is 170, usually it is 10 [x]
5. [rating_numerator](Rating_numerator) has odd values (1776, 666 ...) [x]
6. ~~source and expanded_url values are truncated, making them useless (drop?)~~ (actually, they just **appear** truncated on pandas)
8. [odd dog names](Odd_dog_names) (a, the, an and lowercase) [x]
9. [dog names null](Null_dog_names) values are the str 'None', should be None (as a null) [x]
10. useless columns for analysis like 'source' and 'in\_reply'

`popularity` table
+ change column name 'id' to 'tweet_id' for consistency with other dfs

`predictions` table
11. ['tweet_id'](Predictions) is int, should be str [x]
12. we are only interested in the [higher confidence](Higher_confidence) breed predictions (p1_conf over 50%)
13. p\*\_dog columns indicate that some of the predictions are [not breeds](Not_breeds) of dogs
14. Dog breeds have [inconsistent format](Breed_names) (some are title case, others are lowercase)

Tidiness
- one variable (dog stage) [split into 4 columns](Melting) (doggo, floofer, pupper and puppo) in `archive` [_]
- same observational unit (twitter status) in [multiple tables](Merging) (merge them?)
[_] Clean Retweets > `archive` retweeted columns with non-null values indicate the rows which are retweets ###Code archive.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown > There are 179 rows that are indicated as retweets. Let's check them out before removing. ###Code archive[archive.retweeted_status_id.notna()] archive = archive.loc[archive.retweeted_status_id.isna()] # removing the no longer used columns archive.drop(columns=['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], inplace=True) # test archive.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 14 columns): tweet_id 2175 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2175 non-null object source 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: float64(2), int64(3), object(9) memory usage: 254.9+ KB ###Markdown [scroll back to Assess](Assess) Timestamp > `archive` : timestamp column is of 
`str` type, should be `datetime` ###Code archive.timestamp = pd.to_datetime(archive.timestamp, infer_datetime_format=True) # TESTing conversion to datetime archive.info() archive.timestamp.head() ###Output _____no_output_____ ###Markdown [scroll back to Assess](Assess) Dog_stage > `archive` 'None' string values indicating nulls on dog stage columns, should be `np.nan` or `None` ###Code archive.iloc[:10,-4:] type(archive.iloc[0,-1]), archive.iloc[0,-1] # replacing str 'None' with np.nan so that null values are recognized as such in pandas archive.iloc[:,-4:] = archive.iloc[:,-4:].replace('None', np.nan) archive.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 14 columns): tweet_id 2175 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2175 non-null datetime64[ns] source 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 87 non-null object floofer 10 non-null object pupper 234 non-null object puppo 25 non-null object dtypes: datetime64[ns](1), float64(2), int64(3), object(8) memory usage: 334.9+ KB ###Markdown [scroll back to Assess](Assess) Rating_denominator > `archive` rating_denominator max is 170, usually it is 10 ###Code archive.rating_denominator.describe() archive[['text', 'rating_numerator', 'rating_denominator']][archive.rating_denominator != 10] wrong = _ wrong.index for n in wrong.index: print(archive.loc[n, 'text']) len(wrong.index) ###Output _____no_output_____ ###Markdown > From the tweets texts, one can observe that a few these ratings are related to more than one dog in the posted picture. Since they're only 22, I'll just drop them all. 
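The drop performed in the next cell follows a general pandas idiom: select the offending rows with a boolean mask, then remove them by their index labels. A minimal self-contained sketch on a toy frame with hypothetical ratings:

```python
import pandas as pd

# Toy frame with hypothetical ratings; denominators other than 10 are treated as invalid
toy = pd.DataFrame({
    "tweet_id": ["1", "2", "3", "4"],
    "rating_numerator": [12, 84, 9, 11],
    "rating_denominator": [10, 70, 10, 10],
})

# A boolean mask selects the offending rows; .index gives their labels
wrong = toy[toy.rating_denominator != 10]

# Dropping by index label removes exactly those rows
toy = toy.drop(wrong.index, axis=0)

print(toy.shape)  # (3, 3)
```

Because `drop` works on index labels rather than positions, the same `wrong.index` can be reused to inspect the rows (e.g. print their `text`) before deleting them, as done above.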
###Code # dropping rows with wrong rating denominator archive.drop(wrong.index, axis=0, inplace=True) # TESTing the drop archive[['text', 'rating_numerator', 'rating_denominator']][archive.rating_denominator != 10] ###Output _____no_output_____ ###Markdown [scroll back to Assess](Assess) Rating_numerator > `archive` rating_numerator odd values (1776, 666 ...) ###Code archive.rating_numerator.describe() archive[['tweet_id', 'text', 'timestamp', 'rating_numerator', 'rating_denominator']][archive.rating_numerator > 15] wrong = _ wrong.index for n in wrong.index: print(archive.loc[n, 'text']) len(wrong.index) ###Output _____no_output_____ ###Markdown > It looks to me that most of these are outliers and two or three are floats, so I'll just drop them all. ###Code # dropping rows with wrong rating numerator archive.drop(wrong.index, axis=0, inplace=True) # TESTing drop archive[['text', 'rating_numerator', 'rating_denominator']][archive.rating_numerator > 15] ###Output _____no_output_____ ###Markdown [scroll back to Assess](Assess) Tweet_id > `archive` : column 'tweet_id' is of int type, should be str ###Code archive.tweet_id = archive.tweet_id.astype('str') archive.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2144 entries, 0 to 2355 Data columns (total 14 columns): tweet_id 2144 non-null object in_reply_to_status_id 69 non-null float64 in_reply_to_user_id 69 non-null float64 timestamp 2144 non-null datetime64[ns] source 2144 non-null object text 2144 non-null object expanded_urls 2094 non-null object rating_numerator 2144 non-null int64 rating_denominator 2144 non-null int64 name 2144 non-null object doggo 87 non-null object floofer 10 non-null object pupper 233 non-null object puppo 25 non-null object dtypes: datetime64[ns](1), float64(2), int64(2), object(9) memory usage: 251.2+ KB ###Markdown [scroll back to Assess](Assess) Odd_dog_names > `archive` lower case odd names (a, the, an) ###Code dog_names = archive.name.value_counts() 
dog_names[dog_names > 5] archive[archive.name.str.islower()] wrong = _ wrong.index for n in wrong.index: print(archive.loc[n, 'text'], archive.loc[n, 'name']) archive.name.head() archive.name.head().str.istitle() archive.name.where(archive.name.str.istitle(), np.nan, inplace=True) ###Output _____no_output_____ ###Markdown [scroll back to Assess](Assess) Null_dog_names `archive` dog names null values are the str 'None', shoud be None (as a null) ###Code archive.name = archive.name.replace('None', np.nan) # TESTING removal of odd names and str 'None' archive.name.str.islower().any() archive.name.value_counts() archive[['name','text']].sample(20) archive.info() 2144-1379 765/2144 ###Output _____no_output_____ ###Markdown > Since we're here, I'll just get rid of columns that seem useless, like in_reply and source columns ###Code archive.drop(columns=['in_reply_to_status_id', 'in_reply_to_user_id', 'source'], inplace=True) # TEST archive.columns ###Output _____no_output_____ ###Markdown > So after cleaning, we have only **1379** non-null names out of 2144 @dogrates records [scroll back to Assess](Assess) Popularity > Change column name for consistency accross different dataframes ###Code popularity.rename(columns={'id': 'tweet_id'}, inplace=True) popularity.columns ###Output _____no_output_____ ###Markdown Predictions > `predictions` 'tweet_id' is int, shoud be str ###Code predictions.info() predictions.sample(10) predictions.tweet_id = predictions.tweet_id.astype('str') # TESTING predictions.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null object jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), 
int64(1), object(5) memory usage: 152.1+ KB ###Markdown [scroll back to Assess](Assess) Higher_confidence > We are only interested in the higher confidence breed predictions (p1_conf over 50%) ###Code predictions = predictions[predictions.p1_conf > 0.50] #TEST predictions[predictions.p1_conf <= 0.50] predictions.shape ###Output _____no_output_____ ###Markdown Not_breeds > p*_dog indicate that some of the predictions are not breeds of dogs ###Code # let's keep only those which are breeds of dogs predictions = predictions[predictions.p1_dog] #TEST predictions.p1_dog.all() predictions[predictions.duplicated(subset='tweet_id')] # now we select only what we are interested in for potential analysis. I say only id and p1 predictions = predictions[['tweet_id', 'p1']] ###Output _____no_output_____ ###Markdown [scroll back to Assess](Assess) Breed_names > Make all breeds lower case, with no spaces ###Code predictions.p1 = predictions.p1.str.strip().str.lower().str.replace(' ', '_') #TESTING predictions.p1.value_counts() ###Output _____no_output_____ ###Markdown [scroll back to Assess](Assess) > Now I´ll just rename the p1 column for better description ###Code predictions.rename(columns={'p1': 'breed_pred'}, inplace=True) predictions.head() ###Output _____no_output_____ ###Markdown Tidiness Melting > + one variable (dog stage) split into 4 columns (doggo, floofler, pupper and puppo) in archive [_] ###Code archive.iloc[20:30,-4:] archive.columns archive.columns[-4:] archive.columns[:-4] # create temporary stage_df DataFrame to hold melted values from archive stage_df = archive.melt(id_vars=['tweet_id'] ,value_vars=archive.columns[-4:], value_name='dog_stage') stage_df.sample(15) # verifying resulting df stage_df.info() # getting rid of null entries on the newly created column dog_stage stage_df.dropna(subset=['dog_stage'], inplace=True) stage_df.shape stage_df[stage_df.duplicated(subset='tweet_id', keep=False)] _.shape ###Output _____no_output_____ ###Markdown > It seems there 
are more than one dog_stage for some tweet_id. Let's check the `archive` df to see if that's where this comes from. ###Code archive[archive.tweet_id == '855851453814013952'][['tweet_id', 'text', 'doggo', 'pupper', 'puppo', 'floofer']] archive[archive.tweet_id == '733109485275860992'][['tweet_id', 'text', 'doggo', 'pupper', 'puppo', 'floofer']] ###Output _____no_output_____ ###Markdown > Since there are 24 of these repeated entries, I'll drop them and only leave those which have a single dog_stage per tweet_id, in order to avoid duplicates when merging `archive` and `stage_df` later ###Code stage_df.drop_duplicates(subset='tweet_id', keep=False, inplace=True) # drop variable column too, created when melting stage_df.drop(columns=['variable'], inplace=True) #TESTING stage_df[stage_df.duplicated(subset='tweet_id', keep=False)] stage_df.shape # Now we merge it with archive archive = pd.merge(archive, stage_df, on=['tweet_id'], how='left') archive # and we drop the previous 4 columns for dog stage archive.drop(columns=['doggo', 'pupper', 'puppo', 'floofer'], inplace=True) # TESTING print(archive.columns) archive.shape ###Output Index(['tweet_id', 'in_reply_to_status_id', 'in_reply_to_user_id', 'timestamp', 'source', 'text', 'expanded_urls', 'rating_numerator', 'rating_denominator', 'name', 'dog_stage'], dtype='object') ###Markdown [scroll back to Assess](Assess) Merging > + same observational unit (twitter status) in multiple tables (merge them?) [_] ###Code predictions.head() popularity.head() popularity.info() archive.head() archive.info() archive = pd.merge(archive, popularity, on='tweet_id', how='left', validate='one_to_one') archive = pd.merge(archive, predictions, on='tweet_id', how='left', validate='1:1') # Now let's see the final result archive.tail() archive.info() archive[['retweets','favorites']].describe() # for some odd reason (maybe because there are null values, favorites and retweets changed back into floats. 
# I'll get them back into integer format that allow for null values archive.retweets = archive.retweets.astype(dtype=pd.Int32Dtype()) archive.favorites = archive.favorites.astype(dtype=pd.Int32Dtype()) archive.info() archive.to_csv('twitter_archive_master.csv', index=False) archive.sample(20) ###Output _____no_output_____ ###Markdown [scroll back to Assess](Assess) Insights ###Code archive.timestamp.describe() ###Output _____no_output_____ ###Markdown > Q: What are the most popular dog names posted on WeRateDogs? > A: Most popular names are Lucy, Charlie, Oliver, Cooper, Tucker and Penny. ###Code archive.name.value_counts() ###Output _____no_output_____ ###Markdown > Q : What were the breeds in the most popular tweets? > A : Labrador retriever, Eskimo dog and Chihuahua. ###Code archive.sort_values(['retweets', 'favorites'], ascending=False) ###Output _____no_output_____ ###Markdown > Average, Median ###Code archive.rating_numerator.describe() archive.breed_pred.value_counts()[:5] %matplotlib inline # Histogram of breed predictions archive.breed_pred.value_counts()[:5].plot.barh() ###Output _____no_output_____ ###Markdown Data analysis of WeRateDogs Twitter account. ###Code import pandas as pd import numpy as np import requests as rq import matplotlib.pyplot as plt import seaborn as sns import matplotlib %matplotlib inline import tweepy import json import re from tweepy import OAuthHandler from timeit import default_timer as timer from functools import reduce ###Output _____no_output_____ ###Markdown 1. Gather data 1. 
The WeRateDogs Twitter archive ###Code #importing csv file df_1 = pd.read_csv("twitter-archive-enhanced.csv") df_1.head() #getting some information about dataframe df_1.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null object 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 doggo 2356 non-null object 14 floofer 2356 non-null object 15 pupper 2356 non-null object 16 puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown 2. The tweet image predictions ###Code #getting the file of Image predictions programatically url = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" response = rq.get(url) with open('image-predictions.tsv', mode = 'wb') as file: file.write(response.content) #reading the downloaded file image_p = pd.read_csv('image-predictions.tsv', sep = '\t') #Retrieving some info image_p.head() ###Output _____no_output_____ ###Markdown 3. Twitter API ###Code # I took this code from my udacity class rooom. Because I could not get the Twitter API token. 
#https://classroom.udacity.com/nanodegrees/nd002-ent/parts/f55ce890-c08c-46a5-b57f-55a06c1cc6ae/modules/a8fcd18c-b9a5-4852-a7ec-6dbb08ebfe5a/lessons/a8085857-3e28-4fc7-aeb8-da64ccbc2e20/concepts/d7e3de1b-d7a1-4ebc-9d58-beba021a7c29 # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # These are hidden to comply with Twitter's API terms and conditions consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) # NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES: # df_1 is a DataFrame with the twitter_archive_enhanced.csv file. You may have to # change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv # NOTE TO REVIEWER: this student had mobile verification issues so the following # Twitter API code was sent to this student from a Udacity instructor # Tweet IDs for which to gather additional data via Twitter's API tweet_ids = df_1.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) ###Output 1: 892420643555336193 Fail 2: 892177421306343426 Fail 3: 891815181378084864 Fail 4: 891689557279858688 Fail 5: 891327558926688256 Fail 6: 891087950875897856 Fail 7: 890971913173991426 Fail 8: 890729181411237888 Fail 9: 
890609185150312448 Fail [... output truncated: tweet IDs 10 through 286 all failed in the same way ...]
Fail 287: 838831947270979586 ###Markdown Looks like the tokens have expired. Thus I downloaded the file called 'tweet-json.txt' from my classroom via the above-mentioned link. ###Code #Extracting the important columns of the JSON text file. Afterwards I am putting the columns into a pandas DataFrame so #that I can see and assess the data. my_list = [] with open('tweet-json.txt', encoding = 'utf-8') as file_json: for each in file_json: json_d = json.loads(each) tweet_id = json_d['id'] favorite_count = json_d['favorite_count'] retweet_count = json_d['retweet_count'] my_list.append({'tweet_id': tweet_id, 'favorite_count': favorite_count, 'retweet_count': retweet_count, }) tweet_json = pd.DataFrame(my_list, columns = ['tweet_id', 'favorite_count', 'retweet_count']) tweet_json.info() tweet_json.head() ###Output _____no_output_____ ###Markdown 2. Assess the data Let us first assess the Twitter archive data ###Code # Gives 10 random samples of the dataframe df_1.sample(10) ###Output _____no_output_____ ###Markdown As you can see, there are some numbers in retweeted_status_id and retweeted_status_user_id instead of NaNs. This means this data contains retweets.
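The retweet observation above can be checked directly. Below is a minimal sketch on a toy frame (the column values are invented for illustration; in the notebook this would run against `df_1`): a row is a retweet or a reply exactly when the corresponding status-ID column is non-null.

```python
import numpy as np
import pandas as pd

# Toy stand-in for the archive frame (df_1 in the notebook); the IDs and
# values below are invented purely for illustration.
df_1 = pd.DataFrame({
    'tweet_id': [1, 2, 3, 4],
    'retweeted_status_id': [np.nan, 8.9e17, np.nan, np.nan],
    'in_reply_to_status_id': [np.nan, np.nan, 8.8e17, np.nan],
})

# Count rows where the retweet/reply ID is present (non-null).
n_retweets = df_1['retweeted_status_id'].notnull().sum()
n_replies = df_1['in_reply_to_status_id'].notnull().sum()
print(n_retweets, n_replies)  # prints: 1 1
```

On the real archive this is the same count that `df_1.info()` reports as the non-null count of those two columns.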
###Code df_1.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null object 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 doggo 2356 non-null object 14 floofer 2356 non-null object 15 pupper 2356 non-null object 16 puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown The timestamp column is a string. So is retweeted_status_timestamp. ###Code #Series containing counts of unique values df_1.source.value_counts() ###Output _____no_output_____ ###Markdown The source column has HTML tags. ###Code #I have also inspected the data visually in Excel, so I can see the name column fully. df_1.name # Selecting name = a OR name = None df_1[(df_1['name'] == 'a') | (df_1['name'] == 'None')] ###Output _____no_output_____ ###Markdown The name column has the string None instead of NaN. ###Code df_1.name.sample(30) #While sampling multiple times I have seen a name called O. Selecting name = O df_1[df_1['name'] == 'O'] ###Output _____no_output_____ ###Markdown There are a lot of inaccurate names in this dataframe, such as a, an, such, the, by, very, O and so on. ###Code df_1.doggo.value_counts() df_1.floofer.value_counts() df_1.pupper.value_counts() df_1.puppo.value_counts() ###Output _____no_output_____ ###Markdown OK, these four columns have None values instead of NaN. ###Code # Looking for uppercase values in the name column.
df_1.loc[df_1['name'].str.isupper()] ###Output _____no_output_____ ###Markdown There is also a name called JD. What a name :( . OK, so I will replace them with O'Malley and Just Dog. ###Code #Let us see the data type of the tweet_id column. df_1.tweet_id.value_counts() ###Output _____no_output_____ ###Markdown OK, I will change tweet_id also. This one should be a string since we are not doing any calculations on it. ###Code df_1.rating_numerator.sample(30) df_1.rating_denominator.sample(30) df_1[df_1['rating_denominator'] == 0] ###Output _____no_output_____ ###Markdown This is weird. The denominator cannot be 0, whereas the numerator is 960. OK, this is going to be fun :) Second, I go with the image predictions data. ###Code image_p.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2075 non-null int64 1 jpg_url 2075 non-null object 2 img_num 2075 non-null int64 3 p1 2075 non-null object 4 p1_conf 2075 non-null float64 5 p1_dog 2075 non-null bool 6 p2 2075 non-null object 7 p2_conf 2075 non-null float64 8 p2_dog 2075 non-null bool 9 p3 2075 non-null object 10 p3_conf 2075 non-null float64 11 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown There are 2075 rows, whereas the df_1 dataframe has 2356. This means that either some data are missing or some tweets do not have images. ###Code image_p.jpg_url.value_counts() image_p.img_num.value_counts() image_p.p1.value_counts() image_p.p2.value_counts() image_p.p3.value_counts() ###Output _____no_output_____ ###Markdown Here we go. You can see the underscores in the p1, p2 and p3 values. In addition, I will create two columns called dog breeds and prediction confidence based on these p columns.
###Code image_p.head() ###Output _____no_output_____ ###Markdown Third is the tweet_json dataframe from the Twitter API ###Code #Let us take some samples tweet_json.sample(10) tweet_json.info() tweet_json.tweet_id.value_counts() tweet_json.favorite_count.value_counts() tweet_json.retweet_count.value_counts() ###Output _____no_output_____ ###Markdown Quality issues: df_1 DataFrame- Data has retweets, since there are some numbers in retweeted_status_id and retweeted_status_user_id instead of NaNs. - timestamp column is string type.- source column has HTML tags with some text.- name column contains values such as the string "None".- name column has awkward names such as a, an, such, the, by, very and so on.- name column has O, JD. They should be O'Malley and Just Dog.- tweet_id is an integer. It should be a string since no calculations are done on it.- Wrong data in rating numerator and rating denominator columns since some tweets include more than one rating or decimal numbers. - Create a fraction from rating numerator and rating denominator. image_p DataFrame- p1, p2 and p3 values contain underscores and inconsistent capitalization. Tidiness issues: df_1 DataFrame- 4 different columns for dog stages: doggo, floofer, pupper, puppo. Combine them into one dog type column. tweet_json DataFrame- Combine tweet_json with df_1 since the data are related to each other. image_p DataFrame- Combine image_p with df_1 since the data of image_p is related to df_1.- Create dog breeds and prediction confidence columns from p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf, p3_dog. Drop the p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf, p3_dog. 3. Clean the data ###Code #Copy the data to new variables so we can work safely on the copies. df_1_copied = df_1.copy() tweet_json_copied = tweet_json.copy() image_p_copied = image_p.copy() ###Output _____no_output_____ ###Markdown Tidiness: First, I will start cleaning with the tidiness issues, since they are more important to fix at the beginning.
I will solve these two merge issues first, since the remaining tidiness issue will be easier to clean afterwards. Define:- Combine tweet_json with df_1 since the data belong together.- Combine image_p with df_1 since the data of image_p is related to df_1. Solving: Merge the DataFrames on the tweet_id column. Code: ###Code #I will merge these three DataFrames on the tweet_id column. #I will make a list of the DataFrames first. df_3 = [df_1_copied, tweet_json_copied, image_p_copied] #I will use a lambda for merging, together with the reduce function. #https://stackoverflow.com/questions/8689184/nameerror-name-reduce-is-not-defined-in-python twitter_full = reduce(lambda left, right: pd.merge(left, right, on = 'tweet_id'), df_3) ###Output _____no_output_____ ###Markdown Test: ###Code twitter_full.head() #There should be 30 columns. twitter_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2073 entries, 0 to 2072 Data columns (total 30 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2073 non-null int64 1 in_reply_to_status_id 23 non-null float64 2 in_reply_to_user_id 23 non-null float64 3 timestamp 2073 non-null object 4 source 2073 non-null object 5 text 2073 non-null object 6 retweeted_status_id 79 non-null float64 7 retweeted_status_user_id 79 non-null float64 8 retweeted_status_timestamp 79 non-null object 9 expanded_urls 2073 non-null object 10 rating_numerator 2073 non-null int64 11 rating_denominator 2073 non-null int64 12 name 2073 non-null object 13 doggo 2073 non-null object 14 floofer 2073 non-null object 15 pupper 2073 non-null object 16 puppo 2073 non-null object 17 favorite_count 2073 non-null int64 18 retweet_count 2073 non-null int64 19 jpg_url 2073 non-null object 20 img_num 2073 non-null int64 21 p1 2073 non-null object 22 p1_conf 2073 non-null float64 23 p1_dog 2073 non-null bool 24 p2 2073 non-null object 25 p2_conf 2073 non-null float64 26 p2_dog 2073 non-null bool 27 p3 2073 non-null object 28 p3_conf 2073 non-null float64 29 p3_dog 2073
non-null bool dtypes: bool(3), float64(7), int64(6), object(14) memory usage: 459.5+ KB ###Markdown Define: - 4 different columns for dog stages: doggo, floofer, pupper, puppo. Combine them into one dog type column. Solving: Take the dog types from the text column and drop doggo, floofer, pupper, puppo. I will use a regex (regular expression). Code: ###Code #creating a new column called dog_type. Extracting doggo, floofer, pupper, puppo from the text column. #https://stackoverflow.com/questions/54343378/pandas-valueerror-pattern-contains-no-capture-groups twitter_full['dog_type'] = twitter_full['text'].str.extract('(doggo|floofer|pupper|puppo)') #Viewing the new column called dog_type twitter_full[['dog_type', 'doggo', 'floofer', 'pupper', 'puppo']].head(10) #Drop the doggo, floofer, pupper, puppo. twitter_full = twitter_full.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis = 1) ###Output _____no_output_____ ###Markdown Test: ###Code #There should be 27 columns twitter_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2073 entries, 0 to 2072 Data columns (total 27 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2073 non-null int64 1 in_reply_to_status_id 23 non-null float64 2 in_reply_to_user_id 23 non-null float64 3 timestamp 2073 non-null object 4 source 2073 non-null object 5 text 2073 non-null object 6 retweeted_status_id 79 non-null float64 7 retweeted_status_user_id 79 non-null float64 8 retweeted_status_timestamp 79 non-null object 9 expanded_urls 2073 non-null object 10 rating_numerator 2073 non-null int64 11 rating_denominator 2073 non-null int64 12 name 2073 non-null object 13 favorite_count 2073 non-null int64 14 retweet_count 2073 non-null int64 15 jpg_url 2073 non-null object 16 img_num 2073 non-null int64 17 p1 2073 non-null object 18 p1_conf 2073 non-null float64 19 p1_dog 2073 non-null bool 20 p2 2073 non-null object 21 p2_conf 2073 non-null float64 22 p2_dog 2073 non-null bool 23 p3 2073 non-null object 24 p3_conf 2073
non-null float64 25 p3_dog 2073 non-null bool 26 dog_type 337 non-null object dtypes: bool(3), float64(7), int64(6), object(11) memory usage: 411.0+ KB ###Markdown Define: - Create dog breeds and prediction confidence columns from p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf, p3_dog. Drop the p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf, p3_dog. Solving: If the first prediction is not a dog breed, take the second, and so on. Code: ###Code #writing a function which extracts dog breeds from the columns. def dog_breed(row): if row['p1_dog']: return(row['p1']) elif row['p2_dog']: return(row['p2']) elif row['p3_dog']: return(row['p3']) else: return(np.NaN) #creating a new column called dog_breed and applying the function to the dataframe. twitter_full['dog_breed'] = twitter_full.apply(lambda row: dog_breed(row), axis = 1) #writing a function which extracts the prediction confidence from the columns. def pred_conf(row): if row['p1_dog']: return(row['p1_conf']) elif row['p2_dog']: return(row['p2_conf']) elif row['p3_dog']: return(row['p3_conf']) else: return(np.NaN) #creating a new column called pred_confidence and applying the function to the dataframe. twitter_full['pred_confidence'] = twitter_full.apply(lambda row: pred_conf(row), axis = 1) #Drop the p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf, p3_dog.
twitter_full = twitter_full.drop(['p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], axis = 1) ###Output _____no_output_____ ###Markdown Test: ###Code twitter_full.head(5) #There should be 20 columns twitter_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2073 entries, 0 to 2072 Data columns (total 20 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2073 non-null int64 1 in_reply_to_status_id 23 non-null float64 2 in_reply_to_user_id 23 non-null float64 3 timestamp 2073 non-null object 4 source 2073 non-null object 5 text 2073 non-null object 6 retweeted_status_id 79 non-null float64 7 retweeted_status_user_id 79 non-null float64 8 retweeted_status_timestamp 79 non-null object 9 expanded_urls 2073 non-null object 10 rating_numerator 2073 non-null int64 11 rating_denominator 2073 non-null int64 12 name 2073 non-null object 13 favorite_count 2073 non-null int64 14 retweet_count 2073 non-null int64 15 jpg_url 2073 non-null object 16 img_num 2073 non-null int64 17 dog_type 337 non-null object 18 dog_breed 1750 non-null object 19 pred_confidence 1750 non-null float64 dtypes: float64(5), int64(6), object(9) memory usage: 340.1+ KB ###Markdown Quality: Define:- Data has retweets, since there are some numbers in retweeted_status_id and retweeted_status_user_id instead of NaNs. Solving: Keep the rows where retweeted_status_id has NaN values. Drop the retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp. Code: ###Code #Keeping only the rows where retweeted_status_id is NaN.
twitter_full = twitter_full[np.isnan(twitter_full.retweeted_status_id)] twitter_full.retweeted_status_id #Dropping retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp twitter_full = twitter_full.drop(['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis = 1) ###Output _____no_output_____ ###Markdown Test: ###Code #There should be 17 columns twitter_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 2072 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1994 non-null int64 1 in_reply_to_status_id 23 non-null float64 2 in_reply_to_user_id 23 non-null float64 3 timestamp 1994 non-null object 4 source 1994 non-null object 5 text 1994 non-null object 6 expanded_urls 1994 non-null object 7 rating_numerator 1994 non-null int64 8 rating_denominator 1994 non-null int64 9 name 1994 non-null object 10 favorite_count 1994 non-null int64 11 retweet_count 1994 non-null int64 12 jpg_url 1994 non-null object 13 img_num 1994 non-null int64 14 dog_type 326 non-null object 15 dog_breed 1686 non-null object 16 pred_confidence 1686 non-null float64 dtypes: float64(3), int64(6), object(8) memory usage: 280.4+ KB ###Markdown Define:- timestamp column is string type. Solving: Convert it into datetime data type. 
Code: ###Code #Get rid of the time zone info from the timestamp column twitter_full['timestamp'] = twitter_full['timestamp'].str.slice(start = 0, stop = -6) #Convert it into datetime data type twitter_full['timestamp'] = pd.to_datetime(twitter_full['timestamp'], format = '%Y-%m-%d %H:%M:%S') ###Output _____no_output_____ ###Markdown Test: ###Code #Check the timestamp data type twitter_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 2072 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1994 non-null int64 1 in_reply_to_status_id 23 non-null float64 2 in_reply_to_user_id 23 non-null float64 3 timestamp 1994 non-null datetime64[ns] 4 source 1994 non-null object 5 text 1994 non-null object 6 expanded_urls 1994 non-null object 7 rating_numerator 1994 non-null int64 8 rating_denominator 1994 non-null int64 9 name 1994 non-null object 10 favorite_count 1994 non-null int64 11 retweet_count 1994 non-null int64 12 jpg_url 1994 non-null object 13 img_num 1994 non-null int64 14 dog_type 326 non-null object 15 dog_breed 1686 non-null object 16 pred_confidence 1686 non-null float64 dtypes: datetime64[ns](1), float64(3), int64(6), object(7) memory usage: 280.4+ KB ###Markdown Define:- source column has HTML tags with some text. Solving: Remove the HTML tags and only leave the name of the device. Code: ###Code twitter_full.source.value_counts() #Extracting only the name, using a regex. twitter_full['source'] = twitter_full['source'].str.extract('(Twitter for iPhone|Twitter Web Client|TweetDeck)') ###Output _____no_output_____ ###Markdown Test: ###Code twitter_full.source.value_counts() ###Output _____no_output_____ ###Markdown Define:- name column contains values such as the string "None".- name column has awkward names such as a, an, such, the, by, very, O, JD and so on.- name column has O, JD. They should be O'Malley and Just Dog.
Solving: Replace the weird names with NaN. Replace O with O'Malley and JD with Just Dog. Code: ###Code #Let us find the lowercase values. Maybe there are lots of weird names lower_name = [] for i in twitter_full.name: if i[0].islower() and i not in lower_name: lower_name.append(i) print(lower_name) #Replacing all lowercase values with NaN twitter_full.name.replace(lower_name, np.nan, inplace=True) #Replacing all 'None' values with NaN twitter_full.name.replace('None', np.nan, inplace=True) #Replacing O with O'Malley and JD with Just Dog. twitter_full.name.replace('O', "O'Malley", inplace=True) twitter_full.name.replace('JD', "Just Dog", inplace=True) ###Output _____no_output_____ ###Markdown Test: ###Code twitter_full.name.value_counts() #Checking for O'Malley and Just Dog. twitter_full[(twitter_full['name'] == "O'Malley") | (twitter_full['name'] == "Just Dog")] ###Output _____no_output_____ ###Markdown Define:- tweet_id is an integer. It should be a string since no calculations are done on it. Solving: Convert it into the string data type. Code: ###Code #Converting it into a string. twitter_full.tweet_id = twitter_full.tweet_id.astype(str) ###Output _____no_output_____ ###Markdown Test: ###Code twitter_full.tweet_id ###Output _____no_output_____ ###Markdown Define:- Wrong data in rating numerator and rating denominator columns since some tweets include more than one rating or decimal numbers. Solving: Find the IDs that have two ratings, correct the rating numerator column, and put the result into a new column. I will drop the old one. Code: ###Code #Finding tweets with more than one rating, using a regex. double_rating_id = twitter_full['tweet_id'][twitter_full.text.str.contains(r'(\d+\.?\d*\/\d+\.?\d*\D+\d+\.?\d*\/\d+\.?\d*)')].tolist() double_rating_id #Correcting the rating numerator column and putting it into a new column. twitter_full['rating_numerator_n'] = twitter_full['rating_numerator'] #Replace the double ratings with NaNs.
twitter_full['rating_numerator_n'] = np.where(twitter_full['tweet_id'].isin(double_rating_id), np.NaN, twitter_full['rating_numerator_n']) #Check for the NaNs twitter_full.rating_numerator_n.isnull().sum() twitter_full.rating_numerator_n.dtype #Finding all tweets with decimal numerators twitter_full[twitter_full.text.str.contains(r'(\d+\.\d+)/(\d+)')] list_id = twitter_full['tweet_id'][twitter_full.text.str.contains(r'(\d+\.\d+)/(\d+)')].tolist() list_id #create a new list with the decimal ratings list_num = twitter_full['text'].str.extract(r'(\d+\.\d+)/(\d+)')[0].dropna().tolist() list_num #create a dictionary from list_id and list_num dict_n = dict(zip(list_id, list_num)) dict_n #Use the dictionary to correct the wrong numerators. twitter_full.loc[twitter_full['tweet_id'].isin(dict_n.keys()), 'rating_numerator_n'] = twitter_full['tweet_id'].map(dict_n) twitter_full.loc[twitter_full['rating_numerator'] != twitter_full['rating_numerator_n']] #Drop the rating numerator column since we do not need it anymore. twitter_full = twitter_full.drop(['rating_numerator'], axis=1) ###Output _____no_output_____ ###Markdown Test: ###Code twitter_full.info() twitter_full[twitter_full.text.str.contains(r'(\d+\.\d+)/(\d+)')] ###Output /Users/fazliddinibodullaev/opt/anaconda3/lib/python3.7/site-packages/pandas/core/strings.py:1954: UserWarning: This pattern has match groups. To actually get the groups, use str.extract. return func(self, *args, **kwargs) ###Markdown Define:- Create a fraction from rating numerator and rating denominator. Solving: Create a new column called fraction. Code: ###Code #measuring the fraction of the two columns.
#https://stackoverflow.com/questions/40353079/pandas-how-to-check-dtype-for-all-columns-in-a-dataframe twitter_full['fraction'] = twitter_full['rating_numerator_n'].astype(float)/twitter_full['rating_denominator'] ###Output _____no_output_____ ###Markdown Test: ###Code twitter_full.fraction.value_counts() ###Output _____no_output_____ ###Markdown Define:- dog_breed values (from p1, p2 and p3) contain underscores and mixed case. Solving: Replace the underscores with spaces and make them all lowercase. Code: ###Code twitter_full.dog_breed = twitter_full.dog_breed.str.replace('_', ' ') twitter_full.dog_breed = twitter_full.dog_breed.str.lower() ###Output _____no_output_____ ###Markdown Test: ###Code twitter_full.dog_breed.value_counts() ###Output _____no_output_____ ###Markdown Store the clean data. ###Code twitter_full.to_csv('twitter_archive_master.csv', encoding='utf-8', index=False) ###Output _____no_output_____ ###Markdown Analyzing and visualizing Data ###Code df = pd.read_csv('twitter_archive_master.csv') df.info() df.describe() ###Output _____no_output_____ ###Markdown Number of tweets in every month? ###Code #Changing the data types of the variables so we can work with the libraries properly df['source'] = df['source'].astype('category') df['tweet_id'] = df['tweet_id'].astype('str') df['timestamp'] = pd.to_datetime(df['timestamp']) plt.rcParams['figure.figsize'] = (14, 7) #For bigger plot sizes picked_data = df['tweet_id'].groupby([df['timestamp'].dt.year, df['timestamp'].dt.month]).count() picked_data.plot(kind = 'line') plt.title('Number of tweets in every month', size = 15) plt.xlabel('Time (Year, Month)') plt.ylabel('Number of tweets') plt.savefig('number_of_tweets_every_month') ###Output _____no_output_____ ###Markdown As you can see, December 2015 has the most tweets. After that period, the number of tweets decreased drastically, and it carried on decreasing until July 2017. The most used source?
###Code df.source.value_counts() sns.countplot(data=df, x='source') plt.title('Sources of Tweets', size=15) plt.savefig('most_used_source_of_tweets') ###Output _____no_output_____ ###Markdown Apparently, the most used source is iPhone. Second and third places are for Twitter Web Client and TweetDeck respectively. How many dogs have ratings greater than 10? ###Code df.rating_numerator_n.value_counts().sort_index().plot(kind = 'bar', color = 'orange') plt.title('Rating numerator distribution', size = 15) plt.xlabel('Rating numerator') plt.ylabel('Number of dogs') plt.savefig('rating_numerator_distribution') #Selecting ratings greater than 10 round(df.rating_numerator_n[df.rating_numerator_n > 10]).count() ###Output _____no_output_____ ###Markdown 1156 dogs have ratings greater than 10. What are the popular dog breeds? ###Code df.dog_breed.value_counts()[0:10].sort_values(ascending = False).plot(kind= 'bar', color = 'maroon') plt.ylabel('Number of dogs') plt.xlabel('Dog breed') plt.title('Popular 10 dog breeds') plt.savefig('top_dog_breeds') ###Output _____no_output_____ ###Markdown The golden retriever is the most popular breed, whereas the malamute is the least popular of the top ten. The most popular dog names? ###Code df.name.value_counts()[0:10].sort_values(ascending = False).plot(kind = 'bar', color = 'green') plt.ylabel('Number of dogs') plt.xlabel('Names of dogs') plt.title('Dog names') plt.savefig('popular_dog_names') ###Output _____no_output_____ ###Markdown Charlie is the most popular name. Meanwhile Lucy, Cooper, Oliver and Tucker are also famous. The most frequent dog stage?
###Code df.dog_type.value_counts() sns.countplot(data=df, x='dog_type') plt.ylabel('Number of dog stages') plt.xlabel('Names of dog stages') plt.title('Dog stages') plt.savefig('frequent_dog_stages') ###Output _____no_output_____ ###Markdown Data Gathering Programmatically downloaded using the Requests library ###Code #import libraries import requests import pandas as pd import json import datetime as dt import matplotlib.pyplot as plt #request the file and check the status code r = requests.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv') code = r.status_code print(code) #save the downloaded file to the local folder open('image-predictions.tsv', 'wb').write(r.content) ###Output _____no_output_____ ###Markdown Download tweet ID, retweet count, and favorite count using the Tweepy API ###Code #read tweet IDs #read the twitter-archive-enhanced.csv file into a dataframe df_1 = pd.read_csv('twitter-archive-enhanced.csv') df_1 #convert the first column of the dataframe (tweet_id) to a list IDlist = df_1['tweet_id'].tolist() IDlist #pull twitter status json from the Twitter API and store it in a list import tweepy # key and token omitted for privacy consumer_key = '*' consumer_secret = '*' access_token = '*' access_secret = '*' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) tweet_list = [] for tid in IDlist: try: tweet = api.get_status(tid, tweet_mode='extended') tweet_list.append(tweet._json) except tweepy.TweepError: print('tweet ' + str(tid) + " doesn't exist") #check if all tweet statuses have been downloaded len(tweet_list) # write json to txt file # type(tweet_json) with open('tweet_json.txt', 'w') as outfile: for tweet_json in tweet_list: json.dump(tweet_json, outfile) outfile.write('\n') #add a newline character at the end of each json # read json from text tweet_df_raw = pd.read_json('tweet_json.txt', lines
= True) tweet_df = tweet_df_raw[['id', 'retweet_count', 'favorite_count']] #display twitter archive archive_df.head(5) #display image data image_df.head(5) #display status dataframe from tweepy API tweet_df_raw.head(5) ###Output _____no_output_____ ###Markdown Data Assessing ###Code #load 3 dataframes from different sources archive_df = pd.read_csv('twitter-archive-enhanced.csv') image_df = pd.read_csv('image-predictions.tsv', sep='\t') status_df = tweet_df.copy() status_df.rename(columns = {"id": "tweet_id"}, inplace = True) archive_df.info() archive_df.sample(5) archive_df.rating_denominator.value_counts() ###Output _____no_output_____ ###Markdown issues in archive dataframe1. Quality issues: - Retweets: some of the tweets in this dataframe are retweets; as mentioned in the project details, these retweets are not supposed to be included in the analysis. - Unnecessary information: text and source are not needed for analysis. retweeted_status_id, retweeted_status_user_id, and retweeted_status_timestamp are not needed after the data cleaning procedure. - The rating_numerator and rating_denominator can be combined into one column in decimal form. - Wrong data type for tweet ID. Since no calculations will be applied to tweet IDs, tweet_id needs to be str instead of int64. - The timestamp column has the wrong data type. 2. Tidiness issues - Dog stages are not in one column; instead, they are divided into 4. - Date and time should be two variables for the purpose of analysis. ###Code image_df.info() image_df.p1.value_counts() ###Output _____no_output_____ ###Markdown issues in image dataframe1. Quality issues: - Non-descriptive column headers: p1, p1_conf, p1_dog, p2, p2_dog, p3, p3_conf, p3_dog etc. - Some of the dog breeds have their first letter capitalized and some do not. - Some of the breed predictions are not dogs at all (i.e. 19 of the entries are 'website') or are intentionally odd (i.e. 'cheeseburger') 2.
Tidiness issues ###Code status_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2348 entries, 0 to 2347 Data columns (total 3 columns): tweet_id 2348 non-null int64 retweet_count 2348 non-null int64 favorite_count 2348 non-null int64 dtypes: int64(3) memory usage: 73.4 KB ###Markdown issues in status dataframe1. Quality issues: - wrong data type for tweet ID2. Tidiness issues SummaryQuality issues: 1. Retweets: some of the tweets in this dataframe are retweets. As mentioned in the project details, these retweets are not supposed to be included in the analysis. 2. Unnecessary information: text and source are not needed for analysis. retweeted_status_id, retweeted_status_user_id, and retweeted_status_timestamp are not needed after the data cleaning procedure. 3. Wrong data type for tweet ID. Since no calculations will be applied to tweet IDs, tweet_id needs to be str instead of int64. 4. The timestamp column has the wrong data type. 5. Non-descriptive column headers: p1, p1_conf, p1_dog, p2, p2_dog, p3, p3_conf, p3_dog etc. 6. Some of the dog breeds have their first letter capitalized and some do not. 7. Many ratings' denominators are not 10. Even though numerators greater than 10 are a signature of this Twitter account, keeping the denominators consistent is vital to later analysis. 8. Date and time are in the same column. This is not necessarily a tidiness issue, since there is nothing wrong with putting the two in one column, but it still needs to be parsed because we will perform analysis around date and time later. Tidiness issues: 1. One observational unit does not form a table. At the least, retweet_count and favorite_count need to be part of the archive dataframe to form a complete observational unit. 2. Dog stages are not in one column; instead, they are divided into 4. Data Cleaning Issue 1 : retweets Define:Remove all tweet rows that have a non-null value in the retweeted_status_id column of the archive dataframe.
Code ###Code # create a copy archive_df_clean = archive_df.copy() # only keep tweets whose retweeted_status_id is NaN archive_df_clean = archive_df_clean[archive_df_clean['retweeted_status_id'].isnull()] ###Output _____no_output_____ ###Markdown Test ###Code archive_df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2175 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2175 non-null object source 2175 non-null object text 2175 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 305.9+ KB ###Markdown Issue 2: Unnecessary information Define:Remove columns: expanded_urls, in_reply_to_status_id, in_reply_to_user_id, text, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp, source.
Code ###Code # Drop unwanted columns archive_df_clean = archive_df_clean.drop(columns=['expanded_urls', 'in_reply_to_status_id', 'in_reply_to_user_id', 'text', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'source']) ###Output _____no_output_____ ###Markdown Test ###Code list(archive_df_clean) ###Output _____no_output_____ ###Markdown Issue 3: Wrong datatype for tweet ID Define:Change data type of tweet ID to string Code ###Code # change the datatype of tweet_id archive_df_clean['tweet_id'] = archive_df_clean['tweet_id'].astype(str) # change the datatype of the tweet_id column in status_df as well status_df['tweet_id'] = status_df['tweet_id'].astype(str) ###Output _____no_output_____ ###Markdown Test ###Code archive_df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 9 columns): tweet_id 2175 non-null object timestamp 2175 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: int64(2), object(7) memory usage: 169.9+ KB ###Markdown Issue 4: Wrong datatype for timestamp column Define:Strip the timezone offset from the timestamp column and convert it to datetime Code ###Code #inspect the timestamp column type(archive_df_clean['timestamp']) archive_df_clean.head(10) # keep only the first 19 characters of the timestamp (drop the timezone offset) archive_df_clean['timestamp'] = archive_df_clean.timestamp.str[:19] archive_df_clean['timestamp'] = pd.to_datetime(archive_df_clean['timestamp'], format = "%Y-%m-%d %H:%M:%S") archive_df_clean.head(10) ###Output _____no_output_____ ###Markdown Test ###Code archive_df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 9 columns): tweet_id 2175 non-null object timestamp 2175 non-null datetime64[ns] rating_numerator 2175 non-null int64
rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: datetime64[ns](1), int64(2), object(6) memory usage: 169.9+ KB ###Markdown Issue 5: Non-descriptive column headers in image dataframe Define:change following column headers:- p1 -> prediction_1- p1_conf -> prediction_1_confidence- p1_dog -> prediction_1_result- p2 -> prediction_2- p2_conf -> prediction_2_confidence- p2_dog -> prediction_2_result- p3 -> prediction_3- p3_conf -> prediction_3_confidence- p3_dog -> prediction_3_result Code ###Code image_df_clean = image_df.copy() image_df_clean.rename(columns={'p1': 'prediction_1', 'p1_conf': 'prediction_1_confidence', 'p1_dog': 'prediction_1_result'}, inplace=True) image_df_clean.rename(columns={'p2': 'prediction_2', 'p2_conf': 'prediction_2_confidence', 'p2_dog': 'prediction_2_result'}, inplace=True) image_df_clean.rename(columns={'p3': 'prediction_3', 'p3_conf': 'prediction_3_confidence', 'p3_dog': 'prediction_3_result'}, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code image_df_clean.sample(10) ###Output _____no_output_____ ###Markdown Issue 6: the first letters of dog breed names are not consistently capitalized.
Define:Capitalize the first letter of all dog breeds Code ###Code image_df_clean.prediction_1 = image_df_clean.prediction_1.str.title() image_df_clean.prediction_2 = image_df_clean.prediction_2.str.title() image_df_clean.prediction_3 = image_df_clean.prediction_3.str.title() ###Output _____no_output_____ ###Markdown Test ###Code image_df_clean.sample(10) ###Output _____no_output_____ ###Markdown Issue 7: Many ratings' denominators are not 10 Define:Scale ratings whose denominators are more or less than 10 to a denominator of 10 Code ###Code archive_df_clean.info() archive_df_clean.rating_denominator.value_counts() for index, row in archive_df_clean.iterrows(): if row['rating_denominator'] == 0: new_denominator = 10 archive_df_clean.loc[index, 'rating_denominator'] = new_denominator elif row['rating_denominator'] != 10: print(row['tweet_id']) new_denominator = 10 new_numerator = round((row['rating_numerator']/row['rating_denominator'])*10, 1) #print(row['rating_denominator']) #print(new_denominator) #print(row['rating_numerator']) #print(new_numerator) # set_value is deprecated; .loc is the supported equivalent archive_df_clean.loc[index, 'rating_denominator'] = new_denominator archive_df_clean.loc[index, 'rating_numerator'] = new_numerator ###Output _____no_output_____ ###Markdown Test ###Code archive_df_clean.rating_denominator.value_counts() ###Output _____no_output_____ ###Markdown Issue 8: date and time are in one column Define:Separate date and time into two columns for easier visualization of the time of day.
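For reference, the same split can be done without a row-wise apply() by using pandas' vectorized .dt.strftime accessor. A minimal sketch on hypothetical data, assuming the column is already datetime64:

```python
# Minimal sketch (hypothetical data): split a datetime64 column into
# separate date and time string columns with the vectorized .dt accessor.
import pandas as pd

toy = pd.DataFrame({"timestamp": pd.to_datetime(
    ["2017-08-01 16:23:56", "2016-07-04 09:05:00"])})

toy["date"] = toy["timestamp"].dt.strftime("%m-%d-%Y")
toy["time"] = toy["timestamp"].dt.strftime("%H:%M")

print(toy[["date", "time"]])
```

The vectorized form avoids a Python-level function call per row, which matters on larger archives.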
Code ###Code archive_df_clean['date'] = archive_df_clean['timestamp'].apply(lambda time: time.strftime('%m-%d-%Y')) archive_df_clean['time'] = archive_df_clean['timestamp'].apply(lambda time: time.strftime('%H:%M')) ###Output _____no_output_____ ###Markdown Test ###Code archive_df_clean.head(10) ###Output _____no_output_____ ###Markdown Tidiness issue 1: dog stages not in one column Define:Combine the doggo, floofer, pupper and puppo columns into one stage column Code ###Code archive_df_clean.head(10) archive_df_clean_stage = archive_df_clean.copy() archive_df_clean_stage["stage"] = "" for index, row in archive_df_clean_stage.iterrows(): if row["doggo"] == "doggo": #print(row["doggo"]) archive_df_clean_stage.loc[index, "stage"] = "doggo" elif row["floofer"] == "floofer": #print(row["floofer"]) archive_df_clean_stage.loc[index, "stage"] = "floofer" elif row["pupper"] == "pupper": archive_df_clean_stage.loc[index, "stage"] = "pupper" elif row["puppo"] == "puppo": archive_df_clean_stage.loc[index, "stage"] = "puppo" else: archive_df_clean_stage.loc[index, "stage"] = "N/A" #drop the 4 original columns which contain the stage of dogs #archive_df_clean_stage.drop(["doggo", "floofer", "pupper", "puppo"], axis = 1) archive_df_clean = archive_df_clean_stage.copy() archive_df_clean = archive_df_clean.drop(["doggo", "floofer", "pupper", "puppo"], axis = 1) ###Output _____no_output_____ ###Markdown Test ###Code archive_df_clean["stage"].sample(10) archive_df_clean.sample(10) ###Output _____no_output_____ ###Markdown Tidiness issue 2: observation units are not in one table Define:Inner join the three dataframes on tweet ID to form the complete observation unit. Code ###Code #twitter_df_final = pd.concat([archive_df_clean, status_df, image_df_clean], axis=1, join='inner') #create a copy of image_df which has only tweet_id and related data on prediction 1.
image_df_tomerge = image_df_clean.loc[:, ['tweet_id', 'prediction_1', 'prediction_1_confidence','prediction_1_result']] image_df_tomerge['tweet_id'] = image_df_tomerge['tweet_id'].astype(str) twitter_df_merged = pd.merge(pd.merge(archive_df_clean, status_df, on='tweet_id'), image_df_tomerge, on='tweet_id') ###Output _____no_output_____ ###Markdown Test ###Code twitter_df_merged.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 1993 Data columns (total 13 columns): tweet_id 1994 non-null object timestamp 1994 non-null datetime64[ns] rating_numerator 1994 non-null int64 rating_denominator 1994 non-null int64 name 1994 non-null object date 1994 non-null object time 1994 non-null object stage 1994 non-null object retweet_count 1994 non-null int64 favorite_count 1994 non-null int64 prediction_1 1994 non-null object prediction_1_confidence 1994 non-null float64 prediction_1_result 1994 non-null bool dtypes: bool(1), datetime64[ns](1), float64(1), int64(4), object(6) memory usage: 204.5+ KB ###Markdown Store data ###Code twitter_df_merged.to_csv('twitter_archive_master.csv') ###Output _____no_output_____ ###Markdown Analysis and Visualization ###Code twitter_df_merged ###Output _____no_output_____ ###Markdown Insights 1 Breeds: Most Popular vs Most LovedAssuming the neural network predicted the breeds correctly, which dog breeds get rated most? Which dog breeds get the most favorites? Are they the same?
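The cell below counts tweets per predicted breed and averages their favorites. The same kind of aggregation can be sketched with a single groupby using named aggregation (available in pandas 0.25 and later); the toy data here is hypothetical:

```python
# Minimal sketch (hypothetical data): tweets per breed and mean favorites,
# ranked by tweet count, via one groupby with named aggregation.
import pandas as pd

toy = pd.DataFrame({
    "prediction_1": ["Golden_Retriever", "Golden_Retriever", "French_Bulldog"],
    "favorite_count": [1000, 3000, 9000],
})

summary = (toy.groupby("prediction_1")
              .agg(tweet_count=("favorite_count", "size"),
                   mean_favorites=("favorite_count", "mean"))
              .sort_values("tweet_count", ascending=False))
print(summary)
```

Collapsing the count and the mean into one groupby avoids building and re-merging separate value_counts frames.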
###Code twitter_df_merged.head(5) #select only predictions where the result is a breed of dogs prediction_true = twitter_df_merged[twitter_df_merged['prediction_1_result'] == True] #find the top 20 most tweeted breeds and return their associated data top20 = prediction_true['prediction_1'].value_counts().head(20).index top20_df = twitter_df_merged.loc[twitter_df_merged.prediction_1.isin(top20)] #find the mean favorites for the top 20 most tweeted dog breeds fav_summary = top20_df.groupby('prediction_1', as_index=False).mean().sort_values('favorite_count') top20_mostfav = fav_summary[['prediction_1', 'favorite_count']] top20_mosttweeted = top20_df['prediction_1'].value_counts() top20_mosttweeted = pd.DataFrame(top20_mosttweeted).reset_index() top20_mosttweeted.columns = ['prediction_1', 'tweet_count'] #sort the two dataframes top20_mostfav = top20_mostfav.sort_values(by='favorite_count', ascending=False).copy() top20_mosttweeted = top20_mosttweeted.sort_values(by='tweet_count', ascending=False).copy() #combine these two dataframes in preparation for plotting top20_merged = pd.merge(top20_mostfav, top20_mosttweeted, on='prediction_1') top20_merged = top20_merged.sort_values('tweet_count', ascending = False).copy() display(top20_merged) top20_merged.to_csv('top20_merged.csv') #Make a bar plot of these two tables #set the ordered y axis according to the most tweeted breeds y = top20_merged['prediction_1'] x = top20_merged['tweet_count'] z = top20_merged['favorite_count'] plt.rc('font', size = 22) f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) ax1.barh(range(len(y)), x) plt.yticks(range(len(y)), y) ax1.set_title('total # of tweets vs dog breeds') ax1.set(xlabel = 'number of tweets') ax2.barh(range(len(y)), z) ax2.set_title('total # of favorites vs dog breeds') ax2.set(xlabel = 'number of favorites') f.set_size_inches(18.5, 10.5) plt.show() ###Output _____no_output_____ ###Markdown From the plot above, we can see that even though the golden retriever has the most tweets, the French bulldog is the most
favored breed. However, most of the dogs on the top-20 most-tweeted list also appear on the most-favorited list. Insights 2 Popular dog namesWhat are the most popular dog names? ###Code twitter_df_merged['name'].value_counts().head(20) #top20_dognames = pd.DataFrame(top20_dognames) #top20_mosttweeted = pd.DataFrame(top20_mosttweeted).reset_index() #top20 = prediction_true['prediction_1'].value_counts().head(20).index #display(top20_dognames) ###Output _____no_output_____ ###Markdown Besides "None" and "a", which are apparently not real dog names, the most popular dog names are Charlie, Lucy, Oliver and Cooper. Insights 3 Dogs rated "0"Knowing that WeRateDogs tends to "spoil" their dogs, did any dog ever receive a 0 rating? ###Code twitter_df_merged[twitter_df_merged['rating_numerator'] == 0] ###Output _____no_output_____ ###Markdown Use markdown to display the images. Apparently, the first golden retriever truly deserves a 10, and the second image isn't even a dog.![alt text](https://pbs.twimg.com/media/C5cOtWVWMAEjO5p.jpg)![alt text](https://pbs.twimg.com/media/Cl2LdofXEAATl7x.jpg) Visualization Number of retweets vs timeIs WeRateDogs still popular through the years? Plot the number of retweets against date to see the trend of this account ###Code retweet_sum = twitter_df_merged.groupby('date', as_index = False).sum() retweet_sum = retweet_sum.sort_values(by = "date", ascending=False) #change type of date column to datetime retweet_sum['date'] = pd.to_datetime(retweet_sum['date'], format = "%m-%d-%Y") #df.groupby(pd.TimeGrouper(freq='M')) retweet_sum = retweet_sum.sort_values(by = "date") retweet_sum.set_index('date', inplace=True) #top20_mosttweeted = top20_mosttweeted.sort_values(by='count', ascending=False) retweet_sum.sample(3) figure_v = plt.figure() retweet_sum['retweet_count'].plot(color = 'red', label='total retweets of the day') retweet_sum['favorite_count'].plot(color = 'blue', label='total favorites of the day') plt.title('Time vs retweets and favorites') plt.xlabel('Month')
plt.ylabel('favorite/retweet count') plt.legend() #plt.savefig('visulization_retweet_favorite_vs_time.pdf') figure_v.set_size_inches(18.5, 10.5) plt.show(figure_v) ###Output _____no_output_____ ###Markdown Data Wrangling IntroductionThe tasks for this project are:* Data wrangling, which consists of: * Gathering data * Assessing data * Cleaning data* Storing, analyzing, and visualizing our wrangled data* Reporting on* 1) our data wrangling efforts* 2) our data analyses and visualizations ###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import time import gc import requests import tweepy import os import json import re import warnings plt.style.use('ggplot') %matplotlib inline ###Output _____no_output_____ ###Markdown Gather ###Code # read csv as a Pandas DataFrame twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') twitter_archive.head() twitter_archive.info() # Use requests library to download tsv file url="https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" response = requests.get(url) with open('./Data/image_predictions.tsv', 'wb') as file: file.write(response.content) image_predictions = pd.read_csv('./Data/image_predictions.tsv', sep='\t') image_predictions.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2075 non-null int64 1 jpg_url 2075 non-null object 2 img_num 2075 non-null int64 3 p1 2075 non-null object 4 p1_conf 2075 non-null float64 5 p1_dog 2075 non-null bool 6 p2 2075 non-null object 7 p2_conf 2075 non-null float64 8 p2_dog 2075 non-null bool 9 p3 2075 non-null object 10 p3_conf 2075 non-null float64 11 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown **Query the Twitter API for each tweet's JSON data using Python's Tweepy 
library and store each tweet's entire set of JSON data in a file.** ###Code # keys and tokens omitted for privacy CONSUMER_KEY = "*" CONSUMER_SECRET = "*" OAUTH_TOKEN = "*" OAUTH_TOKEN_SECRET = "*" auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET) auth.set_access_token(OAUTH_TOKEN, OAUTH_TOKEN_SECRET) api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) # List of the error tweets error_list = [] # List of tweets df_list = [] # Calculate the time of execution start = time.time() i=0 # For loop which will add each available tweet json to df_list for tweet_id in twitter_archive['tweet_id']: i+=1 try: # rate-limit handling is configured on the API object above tweet = api.get_status(tweet_id, tweet_mode='extended')._json favorites = tweet['favorite_count'] # How many favorites the tweet had retweets = tweet['retweet_count'] # Count of the retweets user_followers = tweet['user']['followers_count'] # How many followers the user had user_favourites = tweet['user']['favourites_count'] # How many favorites the user had date_time = tweet['created_at'] # The date and time of the creation df_list.append({'tweet_id': int(tweet_id), 'favorites': int(favorites), 'retweets': int(retweets), 'user_followers': int(user_followers), 'user_favourites': int(user_favourites), 'date_time': pd.to_datetime(date_time)}) except Exception as e: print(str(tweet_id)+ " _ " + str(e)) error_list.append(tweet_id) print(np.round((i/twitter_archive.shape[0])*100, 2),'% done') # Calculate the time of execution end = time.time() print(end - start) # length of the result print("The length of the result", len(df_list)) # The tweet_ids of the errors print("The length of the errors", len(error_list)) ###Output The length of the result 2331 The length of the errors 25 ###Markdown From the above results:- We reached the limit of the tweepy API three
times, but wait_on_rate_limit automatically waits for the rate limit to re-establish and wait_on_rate_limit_notify prints a notification while Tweepy is waiting.- We retrieved 2331 tweets correctly, with 25 errors- The total time was about 3023 seconds (~ 50.5 min) ###Code print("The length of the result", len(df_list)) # Create a DataFrame from the list of dictionaries json_tweets = pd.DataFrame(df_list, columns = ['tweet_id', 'favorites', 'retweets', 'user_followers', 'user_favourites', 'date_time']) # Save the DataFrame to a file json_tweets.to_csv('tweet_json.txt', encoding = 'utf-8', index=False) # Read the saved tweet_json.txt file into a dataframe tweet_data = pd.read_csv('tweet_json.txt', encoding = 'utf-8') tweet_data tweet_data.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 6 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2331 non-null int64 1 favorites 2331 non-null int64 2 retweets 2331 non-null int64 3 user_followers 2331 non-null int64 4 user_favourites 2331 non-null int64 5 date_time 2331 non-null object dtypes: int64(5), object(1) memory usage: 109.4+ KB ###Markdown Gather: SummaryGathering is the first step in the data wrangling process.- Obtaining data - Getting data from an existing file (twitter-archive-enhanced.csv) Reading from csv file using pandas - Downloading a file from the internet (image-predictions.tsv) Downloading file using requests - Querying an API (tweet_json.txt) Get JSON object of all the tweet_ids using Tweepy- Importing that data into our programming environment (Jupyter Notebook) Assessing ###Code # Print some random examples twitter_archive.sample(10) # Assessing the data programmatically twitter_archive.info() twitter_archive.describe() twitter_archive['rating_numerator'].value_counts() twitter_archive['rating_denominator'].value_counts() twitter_archive['name'].value_counts() # View descriptive statistics of twitter_archive
twitter_archive.describe() image_predictions image_predictions.info() image_predictions['jpg_url'].value_counts() image_predictions[image_predictions['jpg_url'] == 'https://pbs.twimg.com/media/DF6hr6BUMAAzZgT.jpg'] # View number of entries for each source twitter_archive.source.value_counts() # Ratings that don't follow the pattern twitter_archive[twitter_archive['rating_numerator'] > 20] # Unusual names twitter_archive[twitter_archive['name'].apply(len) < 3] # Original tweets twitter_archive[twitter_archive['retweeted_status_id'].isnull()] ###Output _____no_output_____ ###Markdown Quality*Completeness, Validity, Accuracy, Consistency => a.k.a content issues***twitter_archive dataset**- in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id should be integers/strings instead of floats.- retweeted_status_timestamp, timestamp should be datetime instead of object (string).- The numerator and denominator columns have invalid values.- In several columns null objects are non-null (None to NaN).- The name column has invalid names, i.e. 'None', 'a', 'an' and other entries shorter than 3 characters.- We only want original ratings (no retweets) that have images.- We may want to change these columns' type (in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id and tweet_id) to string because we don't want any operations on them.- Source strings are difficult to read.**image_predictions dataset**- Missing values in the images dataset (2075 rows instead of 2356)- Some tweet_ids have the same jpg_url- Some tweets have 2 different tweet_ids, one redirecting to the other (the dataset contains retweets)**tweet_data dataset**- This tweet_id (666020888022790149) is duplicated 8 times TidinessUntidy data => a.k.a structural issues- No need for all the information in the images dataset (tweet_id and jpg_url are what matter)- Dog "stage" variable in four columns: doggo, floofer, pupper, puppo- Join 'tweet_info' and 'image_predictions' to 'twitter_archive' CleaningCleaning
our data is the third step in data wrangling. It is where we will fix the quality and tidiness issues that we identified in the assess step. ###Code #copy dataframes tweet_data_clean = tweet_data.copy() twitter_archive_clean = twitter_archive.copy() image_predictions_clean= image_predictions.copy() ###Output _____no_output_____ ###Markdown DefineAdd tweet_info and image_predictions to twitter_archive table. Code ###Code twitter_archive_clean = pd.merge(left=twitter_archive_clean, right=tweet_data_clean, left_on='tweet_id', right_on='tweet_id', how='inner') twitter_archive_clean = twitter_archive_clean.merge(image_predictions_clean, on='tweet_id', how='inner') ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2059 entries, 0 to 2058 Data columns (total 33 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2059 non-null int64 1 in_reply_to_status_id 23 non-null float64 2 in_reply_to_user_id 23 non-null float64 3 timestamp 2059 non-null object 4 source 2059 non-null object 5 text 2059 non-null object 6 retweeted_status_id 72 non-null float64 7 retweeted_status_user_id 72 non-null float64 8 retweeted_status_timestamp 72 non-null object 9 expanded_urls 2059 non-null object 10 rating_numerator 2059 non-null int64 11 rating_denominator 2059 non-null int64 12 name 2059 non-null object 13 doggo 2059 non-null object 14 floofer 2059 non-null object 15 pupper 2059 non-null object 16 puppo 2059 non-null object 17 favorites 2059 non-null int64 18 retweets 2059 non-null int64 19 user_followers 2059 non-null int64 20 user_favourites 2059 non-null int64 21 date_time 2059 non-null object 22 jpg_url 2059 non-null object 23 img_num 2059 non-null int64 24 p1 2059 non-null object 25 p1_conf 2059 non-null float64 26 p1_dog 2059 non-null bool 27 p2 2059 non-null object 28 p2_conf 2059 non-null float64 29 p2_dog 2059 non-null bool 30 p3 2059 non-null object 31 
p3_conf 2059 non-null float64 32 p3_dog 2059 non-null bool dtypes: bool(3), float64(7), int64(8), object(15) memory usage: 504.7+ KB ###Markdown Define Melt the 'doggo', 'floofer', 'pupper' and 'puppo' columns into one column 'dog_stage'. Code ###Code # Select the columns to melt and to remain MELTS_COLUMNS = ['doggo', 'floofer', 'pupper', 'puppo'] STAY_COLUMNS = [x for x in twitter_archive_clean.columns.tolist() if x not in MELTS_COLUMNS] # Melt the the columns into values twitter_archive_clean = pd.melt(twitter_archive_clean, id_vars = STAY_COLUMNS, value_vars = MELTS_COLUMNS, var_name = 'stages', value_name = 'dog_stage') # Delete column 'stages' twitter_archive_clean = twitter_archive_clean.drop('stages', 1) ###Output _____no_output_____ ###Markdown Test ###Code print(twitter_archive_clean.dog_stage.value_counts()) print(len(twitter_archive_clean)) ###Output None 7905 pupper 221 doggo 78 puppo 24 floofer 8 Name: dog_stage, dtype: int64 8236 ###Markdown CleanClean rows and columns that we will not need Code ###Code # Delete the retweets twitter_archive_clean = twitter_archive_clean[pd.isnull(twitter_archive_clean.retweeted_status_id)] # Delete duplicated tweet_id twitter_archive_clean = twitter_archive_clean.drop_duplicates() # Delete tweets with no pictures twitter_archive_clean = twitter_archive_clean.dropna(subset = ['jpg_url']) # small test len(twitter_archive_clean) # Delete columns related to retweet we don't need anymore twitter_archive_clean = twitter_archive_clean.drop('retweeted_status_id', 1) twitter_archive_clean = twitter_archive_clean.drop('retweeted_status_user_id', 1) twitter_archive_clean = twitter_archive_clean.drop('retweeted_status_timestamp', 1) # Delete column date_time we imported from the API, it has the same values as timestamp column twitter_archive_clean = twitter_archive_clean.drop('date_time', 1) # small test list(twitter_archive_clean) #Delete dog_stage duplicates twitter_archive_clean = 
twitter_archive_clean.sort_values('dog_stage').drop_duplicates('tweet_id', keep = 'last') ###Output _____no_output_____ ###Markdown Test ###Code print(twitter_archive_clean.dog_stage.value_counts()) print(len(twitter_archive_clean)) ###Output None 1682 pupper 212 doggo 62 puppo 23 floofer 8 Name: dog_stage, dtype: int64 1987 ###Markdown DefineCondense the image prediction columns into a single prediction and its confidence level Code ###Code # We will store the first true prediction with its level of confidence prediction_algorithm = [] confidence_level = [] # get_prediction_confidence function: # finds the first true prediction and appends it to a list with its level of confidence # if all predictions are false, prediction_algorithm will get the value 'NaN' def get_prediction_confidence(dataframe): if dataframe['p1_dog'] == True: prediction_algorithm.append(dataframe['p1']) confidence_level.append(dataframe['p1_conf']) elif dataframe['p2_dog'] == True: prediction_algorithm.append(dataframe['p2']) confidence_level.append(dataframe['p2_conf']) elif dataframe['p3_dog'] == True: prediction_algorithm.append(dataframe['p3']) confidence_level.append(dataframe['p3_conf']) else: prediction_algorithm.append('NaN') confidence_level.append(0) twitter_archive_clean.apply(get_prediction_confidence, axis=1) twitter_archive_clean['prediction_algorithm'] = prediction_algorithm twitter_archive_clean['confidence_level'] = confidence_level ###Output _____no_output_____ ###Markdown Test ###Code list(twitter_archive_clean) # Delete the columns of image prediction information twitter_archive_clean = twitter_archive_clean.drop(['img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], 1) list(twitter_archive_clean) # let's concentrate on low values..
let's dig more twitter_archive_clean.info() print('in_reply_to_user_id ') print(twitter_archive_clean['in_reply_to_user_id'].value_counts()) print('source ') print(twitter_archive_clean['source'].value_counts()) print('user_favourites ') print(twitter_archive_clean['user_favourites'].value_counts()) ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1987 entries, 0 to 7053 Data columns (total 18 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1987 non-null int64 1 in_reply_to_status_id 23 non-null float64 2 in_reply_to_user_id 23 non-null float64 3 timestamp 1987 non-null object 4 source 1987 non-null object 5 text 1987 non-null object 6 expanded_urls 1987 non-null object 7 rating_numerator 1987 non-null int64 8 rating_denominator 1987 non-null int64 9 name 1987 non-null object 10 favorites 1987 non-null int64 11 retweets 1987 non-null int64 12 user_followers 1987 non-null int64 13 user_favourites 1987 non-null int64 14 jpg_url 1987 non-null object 15 dog_stage 1987 non-null object 16 prediction_algorithm 1987 non-null object 17 confidence_level 1987 non-null float64 dtypes: float64(3), int64(7), object(8) memory usage: 294.9+ KB in_reply_to_user_id 4.196984e+09 23 Name: in_reply_to_user_id, dtype: int64 source <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a> 1949 <a href="http://twitter.com" rel="nofollow">Twitter Web Client</a> 28 <a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a> 10 Name: source, dtype: int64 user_favourites 145808 1278 145809 513 145810 194 145811 2 Name: user_favourites, dtype: int64 ###Markdown Notes- Only one value appears in **in_reply_to_user_id**, so we will delete the reply columns; all of the replies are to @dog_rates.- **source** has 3 types; we will clean that column to make it readable.- **user_favourites** has only a few values, all very close together.
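As a side note, the per-row apply() plus re.findall used in the next cell can also be written with pandas' vectorized str.extract. A minimal sketch on hypothetical source strings:

```python
# Minimal sketch (hypothetical data): pull the anchor text out of the
# HTML 'source' strings with str.extract; expand=False returns a Series.
import pandas as pd

toy = pd.DataFrame({"source": [
    '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>',
    '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>',
]})

toy["source"] = toy["source"].str.extract(r">(.*)<", expand=False)
print(toy["source"].tolist())
```

Unlike apply() with findall, str.extract also yields NaN instead of an IndexError when a row has no match.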
###Code # drop the following columns: 'in_reply_to_status_id', 'in_reply_to_user_id', 'user_favourites' twitter_archive_clean = twitter_archive_clean.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'user_favourites'], axis=1) # Clean the content of the source column twitter_archive_clean['source'] = twitter_archive_clean['source'].apply(lambda x: re.findall(r'>(.*)<', x)[0]) # Test twitter_archive_clean ###Output _____no_output_____ ###Markdown Define Fix rating numerators and denominators that are not actually ratings Code ###Code # View all occurrences where there is more than one #/# pattern in the 'text' column text_ratings_to_fix = twitter_archive_clean[twitter_archive_clean.text.str.contains( r"(\d+\.?\d*\/\d+\.?\d*\D+\d+\.?\d*\/\d+\.?\d*)")].text text_ratings_to_fix for entry in text_ratings_to_fix: mask = twitter_archive_clean.text == entry column_name1 = 'rating_numerator' column_name2 = 'rating_denominator' twitter_archive_clean.loc[mask, column_name1] = re.findall(r"\d+\.?\d*\/\d+\.?\d*\D+(\d+\.?\d*)\/\d+\.?\d*", entry) twitter_archive_clean.loc[mask, column_name2] = 10 twitter_archive_clean[twitter_archive_clean.text.isin(text_ratings_to_fix)] ###Output _____no_output_____ ###Markdown Define Fix rating numerators that have decimals.
Code ###Code # View tweets with decimals in rating in 'text' column twitter_archive_clean[twitter_archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)")] # Set correct numerators for specific tweets twitter_archive_clean.loc[(twitter_archive_clean['tweet_id'] == 883482846933004288) & (twitter_archive_clean['rating_numerator'] == 5), ['rating_numerator']] = 13.5 twitter_archive_clean.loc[(twitter_archive_clean['tweet_id'] == 786709082849828864) & (twitter_archive_clean['rating_numerator'] == 75), ['rating_numerator']] = 9.75 twitter_archive_clean.loc[(twitter_archive_clean['tweet_id'] == 778027034220126208) & (twitter_archive_clean['rating_numerator'] == 27), ['rating_numerator']] = 11.27 twitter_archive_clean.loc[(twitter_archive_clean['tweet_id'] == 680494726643068929) & (twitter_archive_clean['rating_numerator'] == 26), ['rating_numerator']] = 11.26 ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean[twitter_archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)")] ###Output C:\Users\topp\Anaconda3\lib\site-packages\pandas\core\strings.py:1954: UserWarning: This pattern has match groups. To actually get the groups, use str.extract. 
return func(self, *args, **kwargs) ###Markdown Define Get a dog gender column from the text column Code ###Code # Loop over all the texts, check whether each one contains a male or female pronoun, # and append the result to a list male = ['He', 'he', 'him', 'his', "he's", 'himself'] female = ['She', 'she', 'her', 'hers', 'herself', "she's"] dog_gender = [] for text in twitter_archive_clean['text']: # Male if any(map(lambda v:v in male, text.split())): dog_gender.append('male') # Female elif any(map(lambda v:v in female, text.split())): dog_gender.append('female') # If group or not specified else: dog_gender.append('NaN') # Test len(dog_gender) # Save the result in a new column 'dog_gender' twitter_archive_clean['dog_gender'] = dog_gender ###Output _____no_output_____ ###Markdown Test ###Code print("dog_gender count \n", twitter_archive_clean.dog_gender.value_counts()) ###Output dog_gender count NaN 1130 male 633 female 224 Name: dog_gender, dtype: int64 ###Markdown Define Convert the 'NaN' placeholder values to the None type Code ###Code twitter_archive_clean.loc[twitter_archive_clean['prediction_algorithm'] == 'NaN', 'prediction_algorithm'] = None twitter_archive_clean.loc[twitter_archive_clean['dog_gender'] == 'NaN', 'dog_gender'] = None twitter_archive_clean.loc[twitter_archive_clean['rating_numerator'] == 'NaN', 'rating_numerator'] = 0 #twitter_archive_clean.loc[twitter_archive_clean['rating_denominator'] == 'NaN', 'rating_denominator'] = 0 ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1987 entries, 0 to 7053 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1987 non-null int64 1 timestamp 1987 non-null object 2 source 1987 non-null object 3 text 1987 non-null object 4 expanded_urls 1987 non-null object 5 rating_numerator 1987 non-null object 6 rating_denominator 1987 non-null int64 7 name 1987 non-null object 8 favorites 1987 non-null int64
9 retweets 1987 non-null int64 10 user_followers 1987 non-null int64 11 jpg_url 1987 non-null object 12 dog_stage 1987 non-null object 13 prediction_algorithm 1679 non-null object 14 confidence_level 1987 non-null float64 15 dog_gender 857 non-null object dtypes: float64(1), int64(5), object(10) memory usage: 263.9+ KB ###Markdown DefineChange datatypes . Code ###Code twitter_archive_clean['tweet_id'] = twitter_archive_clean['tweet_id'].astype(str) twitter_archive_clean['timestamp'] = pd.to_datetime(twitter_archive_clean.timestamp) twitter_archive_clean['source'] = twitter_archive_clean['source'].astype('category') twitter_archive_clean['favorites'] = twitter_archive_clean['favorites'].astype(int) twitter_archive_clean['retweets'] = twitter_archive_clean['retweets'].astype(int) twitter_archive_clean['user_followers'] = twitter_archive_clean['user_followers'].astype(int) twitter_archive_clean['dog_stage'] = twitter_archive_clean['dog_stage'].astype('category') twitter_archive_clean['rating_numerator'] = twitter_archive_clean['rating_numerator'].astype(float) twitter_archive_clean['rating_denominator'] = twitter_archive_clean['rating_denominator'].astype(float) twitter_archive_clean['dog_gender'] = twitter_archive_clean['dog_gender'].astype('category') ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.dtypes ###Output _____no_output_____ ###Markdown Store ###Code # Save clean DataFrame to csv file twitter_archive_clean.drop(twitter_archive_clean.columns[twitter_archive_clean.columns.str.contains('Unnamed',case = False)],axis = 1) twitter_archive_clean.to_csv('twitter_archive_master.csv', encoding = 'utf-8', index=False) twitter_archive_clean = pd.read_csv('twitter_archive_master.csv') twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1987 entries, 0 to 1986 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1987 non-null int64 1 timestamp 1987 
non-null object 2 source 1987 non-null object 3 text 1987 non-null object 4 expanded_urls 1987 non-null object 5 rating_numerator 1987 non-null float64 6 rating_denominator 1987 non-null float64 7 name 1987 non-null object 8 favorites 1987 non-null int64 9 retweets 1987 non-null int64 10 user_followers 1987 non-null int64 11 jpg_url 1987 non-null object 12 dog_stage 1987 non-null object 13 prediction_algorithm 1679 non-null object 14 confidence_level 1987 non-null float64 15 dog_gender 857 non-null object dtypes: float64(3), int64(4), object(9) memory usage: 248.5+ KB ###Markdown Gathering Data for this project First, we need to install the tweepy library, which we use to analyse people's posts on Twitter. Prefixing a command with '!' runs it as a shell command from within the notebook. ###Code ! pip install tweepy ###Output Requirement already satisfied: tweepy in c:\users\admin\anaconda3\lib\site-packages (3.8.0) Requirement already satisfied: requests-oauthlib>=0.7.0 in c:\users\admin\anaconda3\lib\site-packages (from tweepy) (1.3.0) Requirement already satisfied: PySocks>=1.5.7 in c:\users\admin\anaconda3\lib\site-packages (from tweepy) (1.7.0) Requirement already satisfied: requests>=2.11.1 in c:\users\admin\anaconda3\lib\site-packages (from tweepy) (2.22.0) Requirement already satisfied: six>=1.10.0 in c:\users\admin\anaconda3\lib\site-packages (from tweepy) (1.12.0) Requirement already satisfied: oauthlib>=3.0.0 in c:\users\admin\anaconda3\lib\site-packages (from requests-oauthlib>=0.7.0->tweepy) (3.1.0) Requirement already satisfied: certifi>=2017.4.17 in c:\users\admin\anaconda3\lib\site-packages (from requests>=2.11.1->tweepy) (2019.6.16) Requirement already satisfied: idna<2.9,>=2.5 in c:\users\admin\anaconda3\lib\site-packages (from requests>=2.11.1->tweepy) (2.8) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in c:\users\admin\anaconda3\lib\site-packages (from requests>=2.11.1->tweepy) (1.24.2) Requirement already satisfied: chardet<3.1.0,>=3.0.2 in
c:\users\admin\anaconda3\lib\site-packages (from requests>=2.11.1->tweepy) (3.0.4) ###Markdown Import library ###Code import pandas as pd import numpy as np import datetime ###Output _____no_output_____ ###Markdown 1) Twitter archive enhanced data ###Code # Supplied file twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') twitter_archive.head() ###Output _____no_output_____ ###Markdown 2) Image prediction file We import the requests library to download files from the Internet. The data below was produced by a neural network that can classify breeds of dogs. The results: a table full of image predictions (the top three only) alongside each tweet ID, image URL, and the image number that corresponded to the most confident prediction (numbered 1 to 4 since tweets can have up to four images). ###Code # download the image prediction file and save it to disk import requests url= 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' image_request=requests.get(url,allow_redirects=True) with open('image-predictions.tsv', 'wb') as file: file.write(image_request.content) ###Output _____no_output_____ ###Markdown Because the file is tab-separated, pass sep='\t' to pandas' read_csv to parse the tsv file into a table ###Code predict=pd.read_csv("image-predictions.tsv", sep="\t") predict ###Output _____no_output_____ ###Markdown 3) Tweepy JSON data This JSON data contains the favorite and retweet counts from Twitter; we import the tweepy library and time to collect it. To gather the data, we need Twitter's authorization. First, you need to create a developer account; Twitter will ask why you need access before providing you with the codes. For more instructions, you can go to this link https://www.youtube.com/watch?v=Jl-_dDqSaUQ to learn how to get access.
Each user has a different access key and token, so please keep them secret. After getting your authorization, type the key and token into consumer_key, consumer_secret, access_token, access_secret ###Code # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # These are hidden to comply with Twitter's API terms and conditions import tweepy from tweepy.auth import OAuthHandler import time consumer_key = 'type your code here' consumer_secret = 'type your code here' access_token = 'type your code here ' access_secret = 'type your code here' # use the codes to authorize auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) # set api and authorization api = tweepy.API(auth_handler=auth, parser=tweepy.parsers.JSONParser(), wait_on_rate_limit=True,wait_on_rate_limit_notify=True) ###Output _____no_output_____ ###Markdown Next, we will stream the data down, write it to a txt file, and read it back to use later. This will take about 35 minutes to run ###Code # stream data and write to json import json start_time=time.time() with open('tweet_json.txt','w') as file: for tweet_id in twitter_archive['tweet_id']: try: tweet=api.get_status(tweet_id,tweet_mode='extended') file.write(json.dumps(tweet) + '\n') except Exception as e: print('No tweet found for {} with error message {}'.format(str(tweet_id),str(e))) end_time=time.time() print('whole process finished in {} seconds'.format(end_time-start_time) ) # read json with open("tweet_json.txt",'r') as json_file: for line in json_file: json_data=json.loads(line) print(json.dumps(json_data,indent=2)) break # read the file, selecting only the columns we need selected_attr=[] with open('tweet_json.txt','r') as json_file: for line in json_file: json_data=json.loads(line) selected_attr.append({'tweet_id':json_data["id"], 'favorites':json_data["favorite_count"], 'retweets':json_data['retweet_count'], 'timestamp':json_data["created_at"]})
tweet_selected_attr=pd.DataFrame(selected_attr,columns=["tweet_id","favorites",'retweets','timestamp']) tweet_selected_attr.head() ###Output _____no_output_____ ###Markdown Save the data for further use ###Code tweet_selected_attr.to_csv('tweet_selected_attr.csv', index = False) tweet_selected_attr=pd.read_csv("tweet_selected_attr.csv") tweet_selected_attr ###Output _____no_output_____ ###Markdown After gathering data from 3 different sources, the data is really dirty and messy, with a lot of missing values and redundant, unnecessary content. The next step is to assess the data before cleaning and combining it into a complete version. Assess data Start with the Twitter archive data: there are lots of null values and inappropriate data types. Checking in Excel, we can see that columns like `in_reply_to_user_id/status_id`, `retweeted_status_id/user_id`, `retweeted_status_timestamp`, `doggo`, `puppo`, `pupper`, and `floofer` are missing records ###Code twitter_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown In this step we need to find as many data issues as possible. We can see the column "name" has a lot of strange values that are not dog names. Because pandas truncates long output, we cannot see all the names.
To fix this, open the csv file in Excel, scroll down, and mark the unreal names so they can be replaced in the cleaning steps ###Code # strange names such as 'a', 'an', 'the', 'by' in the data twitter_archive["name"].value_counts() predict.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown We can see how dirty the data is: some breed names are lowercase while others are capitalized, and some have spaces between words while others are joined with underscores ###Code # detect strange names predict["p1"].value_counts() # detect strange names predict["p2"].value_counts() # detect strange names predict['p3'].value_counts() ###Output _____no_output_____ ###Markdown In a nutshell, this is a summary of all the issues that have been found Tidiness Archive data - 1)`rating_numerator`,`rating_denominator` are redundant- 2)`doggo`, `floofer`, `pupper`, `puppo` are redundant Quality Archive data - 1) missing records in `in_reply_to_user_id/status_id`, `retweeted_status_id/user_id`, `retweeted_status_timestamp`- 2) +0000 in the `timestamp` data - 3) `timestamp` is not datetime.- 4)`name` has strange values like 'a', 'an' ,'the' ,'by' , 'quite','actually', 'just','one','his','my','very', 'not'- 5)Oliviér, old, Ralphé, Amélie,Gòrdón,,Frönq,getting, Devón=> change to proper names- 6)`expanded_urls` is NA when the tweet is not a retweet.- 7) `source` content looks confusing. predict data - 8) breed name content is inconsistent in capitalization and in the use of spaces vs "_".
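As an alternative to switching to Excel: the truncation complained about above is only a pandas display setting and can be lifted temporarily. A small sketch — the series here is synthetic, standing in for the real `name` column:

```python
import pandas as pd

# Synthetic stand-in for twitter_archive["name"].
names = pd.Series(["a", "an", "the"] * 30 + ["Charlie"])

# pandas normally elides the middle of long outputs; raising
# display.max_rows (here only inside the context manager) prints
# every row of value_counts(), so no strange name can hide.
with pd.option_context("display.max_rows", None):
    print(names.value_counts())
```

Using `pd.option_context` rather than `pd.set_option` keeps the override scoped to the one cell, so the rest of the notebook keeps the compact default display.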
The final step is to clean the data. To save time, I will only clean archive_data and predict_data Clean Quality Archive data ###Code archive_data=twitter_archive.copy() archive_data.head() ###Output _____no_output_____ ###Markdown Define - 1) drop the columns `in_reply_to_user_id/status_id`, `retweeted_status_id/user_id`, `retweeted_status_timestamp` Code ###Code archive_data=archive_data.drop(['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id','retweeted_status_user_id','retweeted_status_timestamp'],axis=1) archive_data ###Output _____no_output_____ ###Markdown Define - 2) Remove +0000 in `timestamp` Code ###Code # remove +0000 archive_data["timestamp"]=archive_data['timestamp'].str[:-5] archive_data ###Output _____no_output_____ ###Markdown Define - 3) change `timestamp` to datetime Code ###Code # To change time to datetime archive_data.timestamp = pd.to_datetime(archive_data.timestamp) # recheck archive_data.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 12 columns): tweet_id 2356 non-null int64 timestamp 2356 non-null datetime64[ns] source 2356 non-null object text 2356 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: datetime64[ns](1), int64(3), object(8) memory usage: 221.0+ KB ###Markdown Define - 4) replace strange `name` values like 'a', 'an', 'the', 'by', 'quite', 'actually', 'just', 'one', 'his', 'my', 'very', 'old', 'not' with None Code ###Code # create a list of the strange values list_name=["a",'an', 'the', 'by','quite','actually', 'just','one','his','my','very','old' ,'not','getting'] # replace them with 'None' archive_data["name"] = archive_data['name'].replace(list_name,'None') # recheck archive_data ###Output _____no_output_____ ###Markdown Define - 6)change Oliviér , Ralphé,
Amélie,Gòrdón,,Frönq,getting, Devón to proper names Code ###Code # create lists of the misspelled and corrected names list_name=['Oliviér' , 'Ralphé', 'Amélie','Gòrdón','Frönq', 'Devón'] right_name=['Olivar',"Ralpha","Amali",'Gardan',"Fran",'Devan'] # replace with proper names archive_data["name"] = archive_data['name'].replace(list_name,right_name) # recheck archive_data ###Output _____no_output_____ ###Markdown Define - 6) drop `expanded_urls` Code ###Code archive_data.drop('expanded_urls',axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Define - 7) replace the `source` content with 'Twitter for iPhone', 'Vine - Make a Scene', 'Twitter Web Client', and 'TweetDeck'. Code ###Code # find unique values for source archive_data["source"].unique() archive_data['sources']=archive_data["source"].str.extract('(Twitter for iPhone|Twitter Web Client|Vine - Make a Scene|TweetDeck)',expand=True) # drop the old source column to keep the data clean archive_data.drop('source',axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code archive_data.head() ###Output _____no_output_____ ###Markdown predict data ###Code # clone the data predict_data=predict.copy() ###Output _____no_output_____ ###Markdown Define - 8) Make breed-name casing consistent (lowercase) and replace '_' with spaces in p1, p2, p3 Code ###Code # replace '_' with space predict_data["p1"]=predict_data['p1'].str.replace('_',' ') predict_data["p2"]=predict_data['p2'].str.replace('_',' ') predict_data["p3"]=predict_data['p3'].str.replace('_',' ') # change to lowercase predict_data["p1"]=predict_data["p1"].str.lower() predict_data["p2"]=predict_data["p2"].str.lower() predict_data["p3"]=predict_data["p3"].str.lower() predict_data ###Output _____no_output_____ ###Markdown Tidiness Archive data Define - 1)`rating_numerator`,`rating_denominator` are redundant => combine into 1 column named `rating` => need to merge 3 tables Code ###Code # add a column named 'rating' = numerator / denominator
archive_data["rating"]=archive_data["rating_numerator"]/archive_data["rating_denominator"] # drop colum rating numerator and rating denominator archive_data.drop(['rating_numerator','rating_denominator'],axis=1,inplace=True) archive_data ###Output _____no_output_____ ###Markdown Define - 2)`doggo` `floofer`, `pupper`,`puppo` are redundant=> melt data combine to 1 columns Code ###Code # take dog type archive_data['dog_stages'] = archive_data.text.str.extract('(doggo|floofer|pupper|puppo)', expand = True) # drop columns archive_data.drop(['doggo','floofer','pupper','puppo'], axis=1, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code archive_data ###Output _____no_output_____ ###Markdown Merge ###Code # merge 3 file archive_data , predict_data and tweet_selected_attr clean_data=archive_data.merge(predict_data, on='tweet_id') clean_data=clean_data.merge(tweet_selected_attr,on="tweet_id") clean_data ###Output _____no_output_____ ###Markdown Storage after merge all date, we save them to 1 master file in csv ###Code # store the cleaned data in new master csv file clean_data.to_csv('twitter_master.csv', index = False) ###Output _____no_output_____ ###Markdown Visualization and insight ###Code cleanned_data=pd.read_csv("twitter_master.csv") cleanned_data.head() ###Output _____no_output_____ ###Markdown Most talked dogs ###Code cleanned_data1=cleanned_data[cleanned_data['p1_dog']==True] # rank value type_order=cleanned_data1["p1"].value_counts().index # chose base color to avoid distraction import seaborn as sb import matplotlib.pyplot as plt base_color=sb.color_palette()[3] # draw charts sb.countplot(data=cleanned_data1,y="p1",color=base_color,order=type_order[:10]) plt.title("Dog_stages") ###Output _____no_output_____ ###Markdown **Golden retriever and labrador are 2 types of dogs which are posted the most by weratedog** Rating for these dogs ###Code # plotting base_color = sb.color_palette()[4] sb.barplot(data = cleanned_data1, x = 'rating', y = 'p1', 
color = base_color, order =type_order[:10], ci = 'sd') plt.title("Rating of Dog_stages") # plotting base_color = sb.color_palette()[1] sb.barplot(data = cleanned_data1, x = 'favorites', y = 'p1', color = base_color, order =type_order[:10]) plt.title("Favorites of Dog_stages") # plotting base_color = sb.color_palette()[2] sb.barplot(data = cleanned_data1, x = 'retweets', y = 'p1', color = base_color, order =type_order[:10]) plt.title("retweet of Dog_stages") ###Output _____no_output_____ ###Markdown **Even though the golden retriever is the most talked-about breed, the Pomeranian receives the highest ratings, with the biggest range from low to high, and the Samoyed breed is favorited the most** ###Code # rank values type_order=cleanned_data1["rating"].value_counts().index # choose a base color to avoid distraction import seaborn as sb base_color=sb.color_palette()[3] # draw charts sb.countplot(data=cleanned_data1,y="rating",color=base_color,order=type_order[:10]) plt.title("Rating counts") cleanned_data1["rating"].describe() ###Output _____no_output_____ ###Markdown The rating 1.2 is given the most; the average rating is 1.1, the max is 7.5, and the min is 0.2 Sources ###Code # rank values source_order=cleanned_data["sources"].value_counts().index # choose a base color to avoid distraction import seaborn as sb base_color=sb.color_palette()[9] # draw charts sb.countplot(data=cleanned_data,y="sources",color=base_color,order=source_order) plt.title("Sources of twitter # weratedog user") ###Output _____no_output_____ ###Markdown **Most people post from Twitter for iPhone** Common dog stages ###Code # rank values stage_order=cleanned_data["dog_stages"].value_counts().index # choose a base color to avoid distraction import seaborn as sb base_color=sb.color_palette()[6] # draw charts sb.countplot(data=cleanned_data,x="dog_stages",color=base_color,order=stage_order) plt.title('Dog stages in weratedog ') ###Output _____no_output_____ ###Markdown Pupper appears the most across all the posts Correlation of retweets and favorites
###Code # the points overlap a lot, so jitter them import matplotlib.pyplot as plt sb.regplot(data=cleanned_data, x="retweets",y="favorites", x_jitter=0.3, scatter_kws={"alpha":1/20}) plt.xlim(0,10000) plt.ylim(10,50000) plt.title("Relationship between favorites and retweets") ###Output _____no_output_____ ###Markdown There is a strong relationship between favorites and retweets Limitation - Difficulty in understanding regex. The project could be better if I could extract the `text` into 3 separate columns `text`, `rating`, `link`. To learn more about regex: https://stackoverflow.com/a/48769624/3271001 or https://jakevdp.github.io/PythonDataScienceHandbook/03.10-working-with-strings.html- tweepy is a new library to me, so I need to practice more to master it Reference For further wrangling, we can extract strings, numbers, and http links as shown below. Thank you to the mentors at Udacity who gave me the code below ###Code import pandas as pd import numpy as np cleanned_data=pd.read_csv("twitter_master.csv") cleanned_data.head() # to extract a number from text cleanned_data["rating1"]=cleanned_data["text"].str.extract(r'([0-9]+[0-9.]*/[0-9]+[0-9]*)',expand=False) # another way to combine the four stage columns into one # (assumes the doggo/floofer/pupper/puppo columns are present) dog_stages=["doggo",'floofer','pupper','puppo'] cleanned_data[dog_stages]=cleanned_data[dog_stages].replace('None',np.nan) cleanned_data["dogstage"]=cleanned_data[dog_stages].apply(lambda x: ','.join(x[x.notnull()]),axis=1) cleanned_data['dogstage'].value_counts() # remove the http markup from source archive_data["source"]=archive_data["source"].str.split('>').str[1].str[:-3] ###Output _____no_output_____ ###Markdown Table of Contents:1. [Gathering Data](gather)2. [Assessing Data](assisng)3. [Copying DataFrame](copy)4. [Cleaning DataFrames](clean)5.
[Visualisations and insights](vs) 1- Gathering data ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline import requests as rq import json as js ###Output _____no_output_____ ###Markdown A. Enhanced twitter archive ###Code df_1 = pd.read_csv("twitter-archive-enhanced.csv") ###Output _____no_output_____ ###Markdown B. Image predictions ###Code ### Requesting the file hosted on the Udacity server. data = rq.get("https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv") with open("img-predictions.tsv", "wb") as file: file.write(data.content) df_img = pd.read_csv("img-predictions.tsv", sep = "\t") df_img.head() ###Output _____no_output_____ ###Markdown C. Twitter API Data NOTE: I didn't get a Twitter developer account, so I used the provided tweet-json.txt file to complete the task --- Importing the contents of tweet-json.txt into a list --- ###Code jslist = [] with open("tweet-json.txt") as tjs: for l in tjs: jslist.append(js.loads(l)) js_df = pd.DataFrame(jslist, columns = ["id", "favorite_count", "retweet_count"]) js_df = js_df.rename(columns = {"id": "tweet_id"}) js_df.to_csv("json_data.csv", index = False) tweet_df = pd.read_csv("json_data.csv") tweet_df.head() ###Output _____no_output_____ ###Markdown 2- Assessing Data: * Twitter Archive ###Code df_1.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null object 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator
2356 non-null int64 12 name 2356 non-null object 13 doggo 2356 non-null object 14 floofer 2356 non-null object 15 pupper 2356 non-null object 16 puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown A. Quality Issues: - timestamp has an object (string) datatype and it has to be datetime. - a lot of null values are found in in_reply_to_status_id. - a lot of null values are found in in_reply_to_user_id. - retweeted_status_timestamp has an object (string) datatype and it has to be datetime. - a lot of null values are found in retweeted_status_timestamp. - a lot of null values are found in retweeted_status_id. - a lot of null values are found in retweeted_status_user_id. - tweet_id is an integer instead of a string; we don't need to perform any mathematical operations on it. - retweeted_status_id is a float and has to be a string. ###Code df_1.sample(20) ###Output _____no_output_____ ###Markdown - there are some numerator values less than 10 - there are some names not found - there are multiple retweets Now I'm trying to find some pattern to extract the names of the dogs ###Code df_1.iloc[1410]["text"] df_1.iloc[1091]["text"] df_1.iloc[1090]["text"] ###Output _____no_output_____ ###Markdown -- Unfortunately there is no defined pattern for extracting the names of the dogs from the tweet text. ###Code df_1[df_1["rating_denominator"] != 10] ###Output _____no_output_____ ###Markdown - there are some rating denominators not equal to 10 * image prediction dataframe: ###Code df_img.info() df_img.sample(10) ###Output _____no_output_____ ###Markdown A. Quality Issues: - There are some images predicted to be something other than a dog. - Some predictions are capitalized and others are not. - tweet_id is also an integer instead of a string.
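The first two prediction issues listed above (non-dog rows, mixed casing) can be sketched on a couple of made-up rows; this is not the real predictions table, just the shape of the fix:

```python
import pandas as pd

# Made-up rows with the same columns as the image-predictions file.
preds = pd.DataFrame({
    "p1": ["golden_retriever", "Labrador_retriever", "paper_towel"],
    "p1_dog": [True, True, False],
})

# Keep only rows the classifier believes show a dog, then normalise
# the breed labels: lowercase, underscores replaced by spaces.
dogs = preds[preds["p1_dog"]].copy()
dogs["p1"] = dogs["p1"].str.replace("_", " ").str.lower()
print(dogs["p1"].tolist())  # ['golden retriever', 'labrador retriever']
```

The cleaning section below applies the same `str.replace` / `str.lower` combination to the real p1, p2, and p3 columns.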
* Twitter Data ###Code tweet_df.info() tweet_df.sample(10) ###Output _____no_output_____ ###Markdown - Two missing records B. Tidiness Issues: - More than one data frame for a single observational unit - In the Twitter archive data frame there are four dog-stage columns that can be combined into one - tweet_id is an integer here too. 3- Copying the dataFrames: ###Code df_1_copy = df_1.copy() tweet_df_copy = tweet_df.copy() df_img_copy = df_img.copy() ###Output _____no_output_____ ###Markdown 4- Cleaning Data: * Solving quality issues in the archive Data Frame: 1- Removing columns with a lot of null values. The problem: - several columns have a lot of null values, which affects the quality of the data set, such as: * in_reply_to_status_id. * in_reply_to_user_id. * retweeted_status_user_id. * retweeted_status_timestamp. NOTE: I kept the retweeted_status_id col to remove the retweets by it. ###Code ## Code: we will drop all of these useless columns for our analysis. df_1.drop(inplace = True, columns=["in_reply_to_status_id" , "in_reply_to_user_id", "retweeted_status_user_id", "retweeted_status_timestamp"]) ## Testing df_1.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 13 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 timestamp 2356 non-null object 2 source 2356 non-null object 3 text 2356 non-null object 4 retweeted_status_id 181 non-null float64 5 expanded_urls 2297 non-null object 6 rating_numerator 2356 non-null int64 7 rating_denominator 2356 non-null int64 8 name 2356 non-null object 9 doggo 2356 non-null object 10 floofer 2356 non-null object 11 pupper 2356 non-null object 12 puppo 2356 non-null object dtypes: float64(1), int64(3), object(9) memory usage: 239.4+ KB ###Markdown 2- Converting the timestamp column to the datetime datatype. The problem: - the timestamp column in the twitter archive file is a string, and the right type is a datetime object.
###Code ### Code: using the to_datetime method to convert it into the right datatype df_1["timestamp"] = pd.to_datetime(df_1["timestamp"]) ## Testing df_1.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 13 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 timestamp 2356 non-null datetime64[ns, UTC] 2 source 2356 non-null object 3 text 2356 non-null object 4 retweeted_status_id 181 non-null float64 5 expanded_urls 2297 non-null object 6 rating_numerator 2356 non-null int64 7 rating_denominator 2356 non-null int64 8 name 2356 non-null object 9 doggo 2356 non-null object 10 floofer 2356 non-null object 11 pupper 2356 non-null object 12 puppo 2356 non-null object dtypes: datetime64[ns, UTC](1), float64(1), int64(3), object(8) memory usage: 239.4+ KB ###Markdown 4- Editing numerator and denominator: The problem: - The denominator should be 10 per the WeRateDogs standard, but some rows have non-ten values. - the numerator has some unacceptable values.
###Code ## Code: # Finding the factor by which the rating was multiplied: factor = df_1["rating_denominator"]/10 # making the rating denominator = 10 and scaling the numerator by the same factor: df_1["rating_denominator"] = df_1["rating_denominator"]/factor df_1["rating_numerator"] = df_1["rating_numerator"]/factor # Converting the columns into numeric types df_1["rating_denominator"] = pd.to_numeric(df_1["rating_denominator"]) df_1["rating_numerator"] = pd.to_numeric(df_1["rating_numerator"]) # Testing: df_1.info() df_1[df_1["rating_denominator"] != 10] ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 13 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 timestamp 2356 non-null datetime64[ns, UTC] 2 source 2356 non-null object 3 text 2356 non-null object 4 retweeted_status_id 181 non-null float64 5 expanded_urls 2297 non-null object 6 rating_numerator 2355 non-null float64 7 rating_denominator 2355 non-null float64 8 name 2356 non-null object 9 doggo 2356 non-null object 10 floofer 2356 non-null object 11 pupper 2356 non-null object 12 puppo 2356 non-null object dtypes: datetime64[ns, UTC](1), float64(3), int64(1), object(8) memory usage: 239.4+ KB ###Markdown 5- Eliminating the zero-denominator row: The problem: - There is a row where the denominator is 0, which is not acceptable. ###Code ## Code: we are going to remove it. df_1.drop(313, inplace = True) # Testing: df_1[df_1["rating_denominator"] != 10] ###Output _____no_output_____ ###Markdown 6- Names issue: The problem: - there are several rows where we cannot determine the name of the dog. Code: - we won't make any change because it's not important for our analysis. 7- Retweets: The problem: - There are some retweets in the data and they affect the visualisations and insights. Code: - Retweets can be identified by a non-null retweeted_status_id, so we keep only the rows where it is NaN.
###Code df_1 = df_1[df_1["retweeted_status_id"].isnull()] ## Test: df_1.sample(10) ###Output _____no_output_____ ###Markdown 8- tweet_id type: The problem: - the tweet_id column in the three dataframes has to be a string, not an integer, because we are not going to perform any mathematical operations on it. Code: ###Code ## using the map method df_1["tweet_id"] = df_1["tweet_id"].map(str) df_img["tweet_id"] = df_img["tweet_id"].map(str) tweet_df["tweet_id"] = tweet_df["tweet_id"].map(str) ## Test: df_1.info() df_img.info() tweet_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2174 entries, 0 to 2355 Data columns (total 13 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2174 non-null object 1 timestamp 2174 non-null datetime64[ns, UTC] 2 source 2174 non-null object 3 text 2174 non-null object 4 retweeted_status_id 0 non-null float64 5 expanded_urls 2117 non-null object 6 rating_numerator 2174 non-null float64 7 rating_denominator 2174 non-null float64 8 name 2174 non-null object 9 doggo 2174 non-null object 10 floofer 2174 non-null object 11 pupper 2174 non-null object 12 puppo 2174 non-null object dtypes: datetime64[ns, UTC](1), float64(3), object(9) memory usage: 237.8+ KB <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2075 non-null object 1 jpg_url 2075 non-null object 2 img_num 2075 non-null int64 3 p1 2075 non-null object 4 p1_conf 2075 non-null float64 5 p1_dog 2075 non-null bool 6 p2 2075 non-null object 7 p2_conf 2075 non-null float64 8 p2_dog 2075 non-null bool 9 p3 2075 non-null object 10 p3_conf 2075 non-null float64 11 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(1), object(5) memory usage: 152.1+ KB <class 'pandas.core.frame.DataFrame'> RangeIndex: 2354 entries, 0 to 2353 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- -----
0 tweet_id 2354 non-null object 1 favorite_count 2354 non-null int64 2 retweet_count 2354 non-null int64 dtypes: int64(2), object(1) memory usage: 55.3+ KB ###Markdown 9- retweeted_status_id type: The problem: - The retweeted_status_id column is of float type and it has to be a string for the same reason as the tweet_id column. Code: ###Code df_1["retweeted_status_id"] = df_1["retweeted_status_id"].map(str) ## Test: df_1.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2174 entries, 0 to 2355 Data columns (total 13 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2174 non-null object 1 timestamp 2174 non-null datetime64[ns, UTC] 2 source 2174 non-null object 3 text 2174 non-null object 4 retweeted_status_id 2174 non-null object 5 expanded_urls 2117 non-null object 6 rating_numerator 2174 non-null float64 7 rating_denominator 2174 non-null float64 8 name 2174 non-null object 9 doggo 2174 non-null object 10 floofer 2174 non-null object 11 pupper 2174 non-null object 12 puppo 2174 non-null object dtypes: datetime64[ns, UTC](1), float64(2), object(10) memory usage: 237.8+ KB ###Markdown * Solving Tidiness Issues in the archive dataframe: 1- merging dog stages into one column: ###Code ## Making an empty list to hold the stage values stage = [] for index, row in df_1.iterrows(): if row["doggo"] != "None": stage.append(row["doggo"]) elif row["pupper"] != "None": stage.append(row["pupper"]) elif row["puppo"] != "None": stage.append(row["puppo"]) elif row["floofer"] != "None": stage.append(row["floofer"]) else: stage.append(np.nan) ## Making a new column with the values in the stage list df_1["stage"] = stage ## dropping the columns from the original dataframe: df_1.drop(columns = ["doggo", "pupper", "puppo", "floofer"], inplace = True) ###Output _____no_output_____ ###Markdown * image prediction: 1- Solving quality issues: - making all predictions lowercase ###Code df_img.info() df_img["p1"] = df_img["p1"].str.lower()
df_img["p2"] = df_img["p2"].str.lower() df_img["p3"] = df_img["p3"].str.lower() # Testing df_img.head() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2075 non-null object 1 jpg_url 2075 non-null object 2 img_num 2075 non-null int64 3 p1 2075 non-null object 4 p1_conf 2075 non-null float64 5 p1_dog 2075 non-null bool 6 p2 2075 non-null object 7 p2_conf 2075 non-null float64 8 p2_dog 2075 non-null bool 9 p3 2075 non-null object 10 p3_conf 2075 non-null float64 11 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(1), object(5) memory usage: 152.1+ KB ###Markdown ###Code import requests import pandas as pd import numpy as np import json import time import tweepy from tweepy import TweepError from pandas.io.json import json_normalize from scipy import stats from statsmodels import stats as sms import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Data Wrangling Process Gather; Assess; Clean. 1. 
Gather the Data twitter-archive-enhanced.csv ###Code df_twitter_archive = pd.read_csv("data/twitter-archive-enhanced.csv", sep=',') df_twitter_archive.shape df_twitter_archive.info() df_twitter_archive.head() ###Output _____no_output_____ ###Markdown image-predictions.tsv ###Code url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) if (response.status_code==200): with open('data/image-predictions.tsv', 'wb') as file: file.write(response.content) df_image_predictions = pd.read_csv('data/image-predictions.tsv', sep='\t') df_image_predictions.shape df_image_predictions.info() df_image_predictions.head() ###Output _____no_output_____ ###Markdown tweet_json.txt ###Code import twitter_config auth = tweepy.OAuthHandler(twitter_config.consumer_key, twitter_config.consumer_secret) auth.set_access_token(twitter_config.access_token, twitter_config.access_token_secret) api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) # get all unique tweet_ids from the twitter-archive-enhanced.csv dataframe tweet_ids = set(df_twitter_archive['tweet_id'].unique()) with open('all_tweet.json', 'wt') as file: for tweet_id in tweet_ids: print ('Reading tweet {}'.format(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') json.dump(tweet._json, file) file.write('\n') time.sleep(1) except TweepError as e: print ('Error reading tweet {}'.format(tweet_id)) print(e) list_df = [] with open('data/all_tweet.json') as file: for line in file: df_tweet = pd.DataFrame(pd.json_normalize(json.loads(line))) list_df.append(df_tweet) df_all_tweet_info = pd.concat(list_df, axis=0) df_all_tweet_info.head() df_retweet_favorite_info = df_all_tweet_info[['id','retweet_count', 'favorite_count']] df_retweet_favorite_info.shape df_retweet_favorite_info.info() df_retweet_favorite_info.head() ###Output _____no_output_____ ###Markdown 2. 
Assess Quality Issues ###Code mask = df_twitter_archive.name.str.contains('^[a-z]', regex = True) df_twitter_archive[mask].name.value_counts().sort_values(ascending=False) ###Output _____no_output_____ ###Markdown > 1. We found some problems in dog names such as: 'None', 'a', 'the', 'an', 'very', 'all', etc. > 2. After visual inspection we found the following problems in rating_numerator:45 - This is Bella. She hopes her smile made you smile. If not, she is also offering you her favorite monkey. 13.5/10 https://t.co/qjrljjt948784 - RT @dog_rates: After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https:/…1068 - After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https://t.co/XAVDNDaVgQ > 3. After visual inspection, we found the following problems in rating_denominator:342 - @docmisterio account started on 11/15/15784 - RT @dog_rates: After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https:/…1068 - After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https://t.co/XAVDNDaVgQ1662 - This is Darrel. He just robbed a 7/11 and is in a high speed police chase. Was just spotted by the helicopter 10/10 https://t.co/7EsP8LmSp5 ###Code df_twitter_archive[df_twitter_archive[['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp']].notna().any(axis=1)].shape ###Output _____no_output_____ ###Markdown > 4. Programmatically, we identified 181 retweets that should be excluded from the database. ###Code df_twitter_archive[df_twitter_archive[['in_reply_to_status_id', 'in_reply_to_user_id']].notna().any(axis=1)].shape ###Output _____no_output_____ ###Markdown > 5. Programmatically, we identified 78 replies that should be excluded from the database.
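The retweet and reply counts above come from a single boolean-mask pattern: a row is flagged when any of the relevant id columns is populated. A minimal sketch on toy data (the column names mirror the archive; the values are made up):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the archive: only rows with tweet_id 2 and 4 carry retweet ids.
toy = pd.DataFrame({
    "tweet_id": [1, 2, 3, 4],
    "retweeted_status_id": [np.nan, 123.0, np.nan, np.nan],
    "retweeted_status_user_id": [np.nan, 456.0, np.nan, 789.0],
})

# Flag a row if ANY of the id columns is non-null.
mask = toy[["retweeted_status_id", "retweeted_status_user_id"]].notna().any(axis=1)
flagged = toy[mask]
print(flagged["tweet_id"].tolist())  # [2, 4]
```

Inverting the same mask with `~` is the pattern the cleaning section later relies on to keep only original tweets.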
###Code df_twitter_archive['timestamp'].dtype ###Output _____no_output_____ ###Markdown > 6. Programmatically, we identified that the timestamp column has an incorrect data type. ###Code df_twitter_archive['tweet_id'].isin(list(df_image_predictions['tweet_id'])).value_counts() ###Output _____no_output_____ ###Markdown > 7. Programmatically, we identified that 281 observations don't have a corresponding image. ###Code df_twitter_archive.head() ###Output _____no_output_____ ###Markdown > 8. The columns 'in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp' are not relevant to our analysis and should be dropped. ###Code df_image_predictions[['p1', 'p2', 'p3']].head() ###Output _____no_output_____ ###Markdown > 9. Dog breed predictions lack a letter-case standard and use "_" as a word separator. Tidiness Issues > 1. All three DataFrames hold information about the same entity; therefore, they should be combined into a single DataFrame.> 2. The dog stages are in four different columns, but the information could in fact be in only one. 3. Clean Before any coding, let's make a copy of the three datasets: ###Code df_twitter_archive_copy = df_twitter_archive.copy() df_image_predictions_copy = df_image_predictions.copy() df_retweet_favorite_info_copy = df_retweet_favorite_info.copy() ###Output _____no_output_____ ###Markdown Now, let's deal with every issue previously assessed, using the "define", "code" and "test" approach. **Define:** Merge the three datasets **Code:** ###Code _ = pd.merge(df_twitter_archive, df_image_predictions, on='tweet_id', how='inner') df = pd.merge(_, df_retweet_favorite_info, left_on='tweet_id', right_on='id', how='inner') ###Output _____no_output_____ ###Markdown **Test:** ###Code df.head(2) ###Output _____no_output_____ ###Markdown **Define:** Fix dog name issues.
**Code:** ###Code # replace wrong dog names with nan df['name'].replace({key : np.NaN for key in df_twitter_archive[mask].name.unique()}, inplace=True) ###Output _____no_output_____ ###Markdown **Test:** ###Code df['name'].isin(set(df_twitter_archive[mask].name)).any() ###Output _____no_output_____ ###Markdown **Define:** Extract dog stage from text and exclude the unnecessary columns **Code:** ###Code # extract the stage from the text df['stage'] = df['text'].str.extract('(doggo|floofer|pupper|puppo)', expand = True) # drop the separate stage columns df.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown **Test:** ###Code df['stage'].value_counts() df.columns ###Output _____no_output_____ ###Markdown **Define:** Drop reply tweets and reply columns **Code:** ###Code # drop reply rows idx = df[df[['in_reply_to_status_id', 'in_reply_to_user_id']].notna().any(axis=1)].index df.drop(idx, inplace=True) # drop the unnecessary reply columns df.drop(['in_reply_to_status_id', 'in_reply_to_user_id'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown **Test:** ###Code df.columns ###Output _____no_output_____ ###Markdown **Define:** Drop retweeted tweets and retweet columns **Code:** ###Code # drop retweet rows idx = df[df[['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp']].notna().any(axis=1)].index df.drop(idx, inplace=True) # drop the unnecessary retweet columns df.drop(['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown **Test:** ###Code df.columns ###Output _____no_output_____ ###Markdown **Define:** Convert timestamp column from object to date.
**Code:** ###Code # convert the timestamp column to datetime df['timestamp'] = pd.to_datetime(df['timestamp']) ###Output _____no_output_____ ###Markdown **Test:** ###Code df['timestamp'].dtype ###Output _____no_output_____ ###Markdown **Define:** Drop duplicated id columns **Code:** ###Code # drop duplicated id column df.drop(['id'], axis='columns', inplace=True) ###Output _____no_output_____ ###Markdown **Test:** ###Code df.columns ###Output _____no_output_____ ###Markdown **Define:** Fix numerator and denominator issues: **Code:** ###Code # Fix data issues: 13/10 to 13.5/10 idx1 = df['text'].str.contains('This is Bella. She hopes her smile made you smile. If not, she is also offering you her favorite monkey. 13.5/10') df.loc[idx1, ['rating_numerator', 'rating_denominator']] = [13.5, 10] # Fix data issues: 9/11 to 14/10 idx2 = df['text'].str.contains('After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP') df.loc[idx2, ['rating_numerator', 'rating_denominator']] = [14, 10] # Fix data issues: 7/11 to 10/10 idx3 = df['text'].str.contains('This is Darrel. He just robbed a 7/11 and is in a high speed police chase. 
Was just spotted by the helicopter 10/10') df.loc[idx3, ['rating_numerator', 'rating_denominator']] = [10, 10] ###Output _____no_output_____ ###Markdown **Test:** ###Code df.loc[idx1, ['text', 'rating_numerator', 'rating_denominator']] df.loc[idx2, ['text', 'rating_numerator', 'rating_denominator']] df.loc[idx3, ['text', 'rating_numerator', 'rating_denominator']] ###Output _____no_output_____ ###Markdown **Define**: Fix capitalization issues in dog breeds **Code:** ###Code # Fix capitalization issues in dog breeds df['p1'] = df['p1'].str.replace('_', ' ').str.title() df['p2'] = df['p2'].str.replace('_', ' ').str.title() df['p3'] = df['p3'].str.replace('_', ' ').str.title() ###Output _____no_output_____ ###Markdown **Test:** ###Code df[['p1', 'p2', 'p3']].head() ###Output _____no_output_____ ###Markdown Now, lets save the clean DataFrame ###Code df.to_csv("twitter_archive_master.csv") ###Output _____no_output_____ ###Markdown Insights and Visualisations > First, lets see what are the most common breeds: ###Code is_dog = df['p1_dog']==True df[is_dog]['p1'].value_counts().head(10) (df[is_dog]['p1'].value_counts(normalize=True).head(10)*100).plot(kind='bar') plt.xlabel('Dog Breed') plt.ylabel('Percentage (%)') plt.title('Top 10 Common Breeds') plt.show() ###Output _____no_output_____ ###Markdown > Golden Retriever and Labrador Retriever are the two most common breeds in the dataset. But how "likable" are they compared to other breeds? Do people favorite more tweets if it is about a Golden Retriever? What about a Labrador Retriever? To answer these question we're going to use hypothesis testing on the `favorite_count` column. $$ H_0: \mu_{favorite\_breed} \le \mu_{favorite\_all} $$$$ H_1: \mu_{favorite\_breed} \gt \mu_{favorite\_all} $$ ###Code # Create a function to do the "likable test" def test_breed_likeability(breed, pop_mean, alpha=0.05): """ Test if the favorite mean count of the breed is equal to (or less than) the given mean. 
""" # calculate the mean, std and std_error statistics for Golden Retriver n = df[df['p1']==breed].shape[0] mu = df[df['p1']==breed]['favorite_count'].mean() std = df[df['p1']==breed]['favorite_count'].std() std_error = std/np.sqrt(n) # calculate the t-statistic and the pvalue for Golden Retriver t_statistic = (mu-pop_mean)/std_error pvalue = 1 - stats.t.cdf(t_statistic, n-1) t_statistic, pvalue if pvalue < alpha: # Reject the Null Hypothesis return("{} tweets are more likable than other tweets.".format(breed)) else: # Fail to Reject the Null Hypothesis return("{} tweets are NOT more likable than other tweets.".format(breed)) # Since we dont have enough samples for all breeds, we're testing only those breeds with at least 30 samples breeds = ['Golden Retriever', 'Labrador Retriever', 'Pembroke', 'Chihuahua', 'Pug', 'Chow', 'Samoyed', 'Pomeranian', 'Toy Poodle'] alpha = 0.05 print("At the significance level o {} we can state that:\n".format(alpha)) for breed in breeds: result = test_breed_likeability(breed, df['favorite_count'].mean(), alpha) print(result) ###Output At the significance level o 0.05 we can state that: Golden Retriever tweets are more likable than other tweets. Labrador Retriever tweets are NOT more likable than other tweets. Pembroke tweets are more likable than other tweets. Chihuahua tweets are NOT more likable than other tweets. Pug tweets are NOT more likable than other tweets. Chow tweets are NOT more likable than other tweets. Samoyed tweets are more likable than other tweets. Pomeranian tweets are NOT more likable than other tweets. Toy Poodle tweets are NOT more likable than other tweets. ###Markdown > Very interesting results. Golden Retriever, Pembroke and Samoyed are very likable dogs! But what can we say about how popular these breeds are? 
To "measure" that, we're going to do some hypothesis testing on the `retweet_count` column:$$ H_0: \mu_{retweet\_breed} \le \mu_{retweet\_all} $$$$ H_1: \mu_{retweet\_breed} \gt \mu_{retweet\_all} $$ ###Code # Create a function to do the "popularity test" def test_breed_popularity(breed, pop_mean, alpha=0.05): """ Test if the retweet mean count of the breed is equal to (or less than) the given mean. """ # calculate the mean, std and std_error statistics for Golden Retriver n = df[df['p1']==breed].shape[0] mu = df[df['p1']==breed]['retweet_count'].mean() std = df[df['p1']==breed]['retweet_count'].std() std_error = std/np.sqrt(n) # calculate the t-statistic and the pvalue for Golden Retriver t_statistic = (mu-pop_mean)/std_error pvalue = 1 - stats.t.cdf(t_statistic, n-1) t_statistic, pvalue if pvalue < alpha: # Reject the Null Hypothesis return("{} tweets are more popular than other tweets.".format(breed)) else: # Fail to Reject the Null Hypothesis return("{} tweets are NOT more popular than other tweets.".format(breed)) # Since we dont have enough samples for all breeds, we're testing only those breeds with at least 30 samples breeds = ['Golden Retriever', 'Labrador Retriever', 'Pembroke', 'Chihuahua', 'Pug', 'Chow', 'Samoyed', 'Pomeranian', 'Toy Poodle'] alpha = 0.05 print("At the significance level of {} we can state that:\n".format(alpha)) for breed in breeds: result = test_breed_popularity(breed, df['retweet_count'].mean(), alpha) print(result) ###Output At the significance level of 0.05 we can state that: Golden Retriever tweets are more popular than other tweets. Labrador Retriever tweets are NOT more popular than other tweets. Pembroke tweets are NOT more popular than other tweets. Chihuahua tweets are NOT more popular than other tweets. Pug tweets are NOT more popular than other tweets. Chow tweets are NOT more popular than other tweets. Samoyed tweets are more popular than other tweets. Pomeranian tweets are NOT more popular than other tweets. 
Toy Poodle tweets are NOT more popular than other tweets. ###Markdown > Golden Retriever and Samoyed again showed up! Since these two breeds are both likable and popular, lets build a 95% confidence interval for the retweet and favorite count a tweet showing any of this would get. ###Code def get_95_ci(breed, column): sample = df[df['p1']==breed][column] boot_sample_means = [] for _ in range(int(1e4)): boot_sample = sample.sample(sample.shape[0], replace=True) boot_mean = boot_sample.mean() boot_sample_means.append(boot_mean) print ("The 95% confidence interval for the {} {} is between {} and {}.".format( breed, column, int(np.quantile(boot_sample_means, .025)), int(np.quantile(boot_sample_means, .975)))) get_95_ci('Golden Retriever', 'retweet_count') get_95_ci('Golden Retriever', 'favorite_count') get_95_ci('Samoyed', 'retweet_count') get_95_ci('Samoyed', 'favorite_count') ###Output The 95% confidence interval for the Samoyed retweet_count is between 2631 and 5084. The 95% confidence interval for the Samoyed favorite_count is between 8208 and 16128. ###Markdown > As the last part of this analysis, lets build a word cloud with the most significant words used by WeRateDogs tweets. 
###Code %config InlineBackend.figure_format='retina' from wordcloud import WordCloud, STOPWORDS from PIL import Image def generate_dog_cloud(): # Concatenate all text from the `text` field text = df.loc[:, 'text'].str.cat(sep=' ') # Words to exclude from word cloud stopwords = set(STOPWORDS) words = ['https', 'tco', 't', 'h', 'p', 'co', 'af', 'meet', 'ckin'] for word in words: stopwords.add(word) # Instantiate word cloud object wc = WordCloud(background_color='black', stopwords=stopwords, width=800, height=1200) # Generate word cloud wc.generate(text) # Plot the image plt.imshow(wc) plt.axis('off') plt.show() generate_dog_cloud() ###Output _____no_output_____ ###Markdown Wrangling and Analysis project> Authored by: **Abhishek Pandey** [LinkedIn](https://www.linkedin.com/in/abhishekpandeyit/) | [Twitter](https://twitter.com/PandeyJii_) Contents- [Introduction](intro)- [Gathering data](gather)- [Assessing data](assess)- [Cleaning data](clean)- [Storing, Analyzing, and Visualizing](storing) Introduction> This project contains the wrangling and analysis of the archived tweets of [Twitter](https://twitter.com/PandeyJii_). The dataset that I will be wrangling (and analyzing and visualizing) is the tweet archive of Twitter user @dog_rates, also known as WeRateDogs. WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. These ratings almost always have a denominator of 10. The numerators, though? Almost always greater than 10. 11/10, 12/10, 13/10, etc. Why? Because "they're good dogs Brent." WeRateDogs has over 6 million followers and has received international media coverage. Gathering Data> 1. **Twitter archive file:** Loading this [file](twitter_archive_enhanced.csv) into our dataframe using pandas>>2. **The tweet image predictions:** This file contains information such as which breed of dog (or other object, animal, etc.) is present in each tweet.
This file was provided by [Udacity](https://www.udacity.com), and as mentioned in the rubrics I downloaded it using the **Requests library**>>3. **Twitter API & JSON:** Using the tweet IDs in the Twitter archive, the Twitter API is queried for each tweet by creating a developer account at Twitter and using some secret keys. Then this file is read line by line into a pandas DataFrame with (at minimum) tweet ID, retweet count, and favorite count. > This template to gather data from the Twitter API is provided by [Udacity](www.Udacity.com). I am just using that template as mentioned in the Udacity workspace with my secret token and access tokens. Importing all the required libraries ###Code import json import matplotlib.pyplot as plt import numpy as np import os import pandas as pd import re %matplotlib inline import requests import seaborn as sns import tweepy from datetime import datetime from functools import reduce ###Output _____no_output_____ ###Markdown Loading files into dataframes:> - Twitter archive dataset into the twitter1 dataframe> - Downloading the image predictions using the Requests library ###Code twitter1 = pd.read_csv("twitter-archive-enhanced.csv") response = requests.get("https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv") with open(os.path.join('image_predictions.tsv'), mode = 'wb') as file: file.write(response.content) images = pd.read_csv('image_predictions.tsv', sep = '\t') ###Output _____no_output_____ ###Markdown Setting the Twitter API :: This format is provided by Udacity ###Code consumer_key = '****' consumer_secret = '****' access_token = '****' access_secret = '****' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, parser=tweepy.parsers.JSONParser()) # Use Twitter API to collect status data on tweets present in twitter1 dataframe tweet_ids = list(twitter1['tweet_id']) tweet_data = [] tweet_id_success = []
tweet_id_missing = [] for tweet_id in tweet_ids: try: data = api.get_status(tweet_id, tweet_mode='extended', wait_on_rate_limit = True, wait_on_rate_limit_notify = True) tweet_data.append(data) tweet_id_success.append(tweet_id) except: tweet_id_missing.append(tweet_id) print(tweet_id) ###Output 888202515573088257 873697596434513921 869988702071779329 866816280283807744 861769973181624320 845459076796616705 842892208864923648 837012587749474308 827228250799742977 802247111496568832 775096608509886464 ###Markdown Writing the tweet data to a JSON file and loading this data into a new dataframe twitter2 ###Code with open('tweet_json.txt', mode = 'w') as file: json.dump(tweet_data, file) twitter2 = pd.read_json('tweet_json.txt') twitter2['tweet_id'] = tweet_id_success twitter2 = twitter2[['tweet_id', 'favorite_count', 'retweet_count']] ###Output _____no_output_____ ###Markdown Data Assessment ###Code twitter1.sample(3) twitter1.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown Looking at the `value_counts()` of each attribute/column so that we'll get to know our data better.
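`value_counts()` simply tallies each distinct value, which is why it makes oddities (such as lowercase "names" like 'a' or 'the') stand out immediately. A toy example, with made-up values rather than the real archive:

```python
import pandas as pd

# Toy 'name' column: real dog names mixed with article words that
# a naive name extraction might pick up by mistake.
names = pd.Series(["Charlie", "a", "Lucy", "a", "the", "Charlie", "a"])

counts = names.value_counts()
print(counts)  # 'a' appears 3 times, 'Charlie' 2 times

# Lowercase entries are almost certainly not real names.
suspicious = counts[counts.index.str.islower()]
print(sorted(suspicious.index))  # ['a', 'the']
```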
###Code twitter1.tweet_id.value_counts() twitter1.source.value_counts() twitter1.text.value_counts() twitter1.retweeted_status_id.value_counts() twitter1.retweeted_status_user_id.value_counts() twitter1.retweeted_status_timestamp.value_counts() twitter1.expanded_urls.value_counts() twitter1.rating_numerator.value_counts() twitter1.rating_denominator.value_counts() twitter1.name.value_counts() twitter1.doggo.value_counts() twitter1.floofer.value_counts() twitter1.pupper.value_counts() twitter1.puppo.value_counts() twitter1.loc[twitter1['name'].str.isupper()] twitter2.sample(5) twitter2.info() twitter2['tweet_id'].value_counts() twitter2['favorite_count'].value_counts() twitter2['retweet_count'].value_counts() images.sample(5) images.info() images['tweet_id'].value_counts() images['jpg_url'].value_counts() images['img_num'].value_counts() images['p1'].value_counts() images['p1_conf'].value_counts() images['p1_dog'].value_counts() images['p2'].value_counts() images['p2_conf'].value_counts() images['p2_dog'].value_counts() images['p3'].value_counts() images['p3_conf'].value_counts() images['p3_dog'].value_counts() ###Output _____no_output_____ ###Markdown Quality Issues :: After Assessment of our dataframe(s)> - Incorrect data types of columns such as tweet_id, timestamp and retweeted_status_timestamp.> - Data containing retweet information.> - Inaccurate data in the Name column> - Inaccurate and unstandardized ratings having decimal values in the numerator.> - Some unwanted columns that are not required for my analysis.> - In comparison with the twitter1 dataframe, about 11 tweets are missing.> - In comparison between the twitter1 and images dataframes, about 281 records are missing from the images dataframe; the reason could be that not all tweets contain pictures/images. Tidiness Issues> - 4 separate columns for a single variable i.e.
dog_stage>> - twitter1, twitter2 and images data should be combined together since they all contain the same type of information Data Cleaning Creating copies of the dataframes to prevent data loss ###Code twitter1_clean = twitter1.copy() twitter2_clean = twitter2.copy() images_clean = images.copy() ###Output _____no_output_____ ###Markdown Steps performed below:> - Merging the twitter1, twitter2 and images dataframes together as they all contain the same type of data.> - Creating a single dog_stage column instead of having a column for each dog stage [doggo, floofer, pupper, and puppo].> - Dropping the four stage columns once their data is extracted and saved into the new column `dog_stage`. ###Code # Merging dataframes dfs = [twitter1_clean, twitter2_clean, images_clean] twitter = reduce(lambda left,right: pd.merge(left,right,on='tweet_id'), dfs) # Creating a single dog_stage column twitter['dog_stage'] = twitter['text'].str.extract('(doggo|floofer|pupper|puppo)') # Dropping the four stage columns twitter = twitter.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1) """ Confirming changes in the twitter dataframe.
""" twitter.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2069 entries, 0 to 2068 Data columns (total 27 columns): tweet_id 2069 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 2069 non-null object source 2069 non-null object text 2069 non-null object retweeted_status_id 75 non-null float64 retweeted_status_user_id 75 non-null float64 retweeted_status_timestamp 75 non-null object expanded_urls 2069 non-null object rating_numerator 2069 non-null int64 rating_denominator 2069 non-null int64 name 2069 non-null object favorite_count 2069 non-null int64 retweet_count 2069 non-null int64 jpg_url 2069 non-null object img_num 2069 non-null int64 p1 2069 non-null object p1_conf 2069 non-null float64 p1_dog 2069 non-null bool p2 2069 non-null object p2_conf 2069 non-null float64 p2_dog 2069 non-null bool p3 2069 non-null object p3_conf 2069 non-null float64 p3_dog 2069 non-null bool dog_stage 338 non-null object dtypes: bool(3), float64(7), int64(6), object(11) memory usage: 410.2+ KB ###Markdown Steps performed below:> - Removing rows having retweets and dropping `retweeted_status_id`, `retweeted_status_user_id` & `retweeted_status_timestamp`> - casting tweet_id to the string type.> - Removing the time zone information from `timestamp` column & casting the `timestamp` column to a datetime object> - Correcting inaccurate names by replacing it with NaN etc.> - Replacing incorrect ratings with correct ratings.> - Solving issue of unstandardized ratings.> - dropping unwanted columns. ###Code # Extracting only those columns having null values. 
twitter = twitter[np.isnan(twitter.retweeted_status_id)] twitter = twitter.drop(['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1) twitter.info() # changing datatype twitter['tweet_id'] = twitter['tweet_id'].astype(str) # Removing timezone information twitter['timestamp'] = twitter['timestamp'].str.slice(start=0, stop=-6) # Casting 'timestamp' to a datetime object twitter['timestamp'] = pd.to_datetime(twitter['timestamp'], format = "%Y-%m-%d %H:%M:%S") # Confirming Changes twitter.info() # Finding names starting with lowercase letters lowercase_names = [] for row in twitter['name']: if row[0].islower() and row not in lowercase_names: lowercase_names.append(row) print(lowercase_names) ''' Replacing the inaccurate lowercase names and the placeholder 'None' with NaN, and fixing the truncated name 'O' to "O'Malley". ''' twitter['name'].replace(lowercase_names, np.nan, inplace = True) twitter['name'].replace('None', np.nan, inplace = True) twitter['name'].replace('O', "O'Malley", inplace = True) ''' Replacing incorrect ratings (i.e. numerator contains decimal values) with correct values.
''' ratings_with_decimals_text = [] ratings_with_decimals_index = [] ratings_with_decimals = [] for i, text in twitter['text'].iteritems(): if re.search(r'\d+\.\d+/\d+', text): ratings_with_decimals_text.append(text) ratings_with_decimals_index.append(i) ratings_with_decimals.append(re.search(r'\d+\.\d+', text).group()) # Confirming changes ratings_with_decimals_text ''' Checking the indices of texts containing decimal values.''' ratings_with_decimals_index # Changing incorrect values for idx, rating in zip(ratings_with_decimals_index, ratings_with_decimals): twitter.loc[idx, 'rating_numerator'] = float(rating) # Solving the issue of unstandardized ratings twitter['rating'] = twitter['rating_numerator'] / twitter['rating_denominator'] twitter.rename(columns={'rating_numerator': 'numerator', 'rating_denominator': 'denominator'}, inplace=True) twitter.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'source','img_num'], axis=1, inplace=True) pd.set_option('display.max_columns', None) # Confirming Changes twitter.head(1) ###Output _____no_output_____ ###Markdown Storing, Analysis and Visualisation ###Code twitter.to_csv('twitter_archive_master.csv',index=False, encoding = 'utf-8') sns.lmplot(x="retweet_count", y="favorite_count", data=twitter, size = 5, aspect=1.3, scatter_kws={'alpha':1/5}) plt.title('Favorite vs.
Retweet Count') plt.xlabel('Retweet Count') plt.ylabel('Favorite Count'); ###Output _____no_output_____ ###Markdown **Description:**> - There is a strong positive correlation between favorite and retweet counts.>> - Most data points lie below 10K retweets and 40K favorites, so the ratio of retweets to favorites is roughly 1:4 ###Code twitter.groupby('timestamp')['rating'].mean().plot(kind='line') plt.title('Rating over Time') plt.xlabel('Time') plt.ylabel('Standardized Rating') plt.show() twitter.loc[twitter['rating'] > 2] ###Output _____no_output_____ ###Markdown **Description:**> - There were 3 outliers with a rating >= 2.>> - On locating and analysing them, I found that the rating of **24/7** is inaccurate.>> - The joke tweet ratings are accurate. ###Code twitter.groupby('timestamp')['rating'].mean().plot(kind='line') plt.ylim(0, 2) plt.title('Rating over Time') plt.xlabel('Time') plt.ylabel('Standardized Rating') plt.show() ###Output _____no_output_____ ###Markdown **Description:**> - Analysing this visualisation, we can say that over time the frequency of ratings below 1 decreases.>> - After 2017-01 there isn't any rating below 1. ###Code from subprocess import call call(['python', '-m', 'nbconvert', 'wrangle_act.ipynb']) ###Output _____no_output_____ ###Markdown Udacity: Data Analyst Nanodegree - Project 4 Data Wrangling of WeRateDogs Twitter Archive Dataset Gurps Rai ________________________________________________________ Project outline:- **Gather** data from at least three different sources, and in at least three different file formats, importing all initially to separate pandas DataFrames.- **Assess** data visually and programmatically for quality and tidiness. Visual assessment by means of displaying gathered data in Jupyter Notebook. Programmatic assessment by means of pandas’ functions and/or methods. Assessment of dataset to identify at least eight quality issues and two tidiness issues to satisfy the Project Motivation.
Each issue being documented in one to a few sentences.- **Clean** the dataset programmatically. Clearly documenting define, code, and test steps of the process. Keeping the original pieces of data prior to cleaning. All issues identified in the assess phase successfully cleaned (if possible) using Python and pandas.- **Store** and act on wrangled data, to produce insights (e.g. analyses, visualizations). At least one labelled visualisation produced in the Jupyter notebook, from assessed and cleaned data. Gathered, assessed and cleaned master datasets to be stored to a CSV file. - **Report** on wrangling efforts in a concise description of approx. 300-600 words, in a separate PDF document – *wrangle_report.pdf*. Further, report on three or more insights found from the wrangled dataset, in descriptive text of approx. 250 words, with at least one visualisation, in a separate PDF document – *act_report.pdf*. Gather ###Code import pandas as pd import numpy as np import tweepy import json import requests import os import matplotlib.pyplot as plt %matplotlib inline # read 'twitter-archive-enhanced.csv' into a pandas DataFrame, and check the file is correctly formatted in the DF df_twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') df_twitter_archive.head(3) # download the 'image-predictions.tsv' file using the requests library, read the file into a pandas DataFrame, and finally check the DF url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' r = requests.get(url) with open(url.split('/')[-1], mode = 'wb') as f: f.write(r.content) image_predict_df = pd.read_csv('image-predictions.tsv', sep = '\t') image_predict_df.head(3) # *** Note: Accessing data without a twitter account *** # check over a single line of 'tweet-json.txt' with open('tweet-json.txt') as f: for line in f: print(line) break # read 'tweet-json.txt' (tweet ID, retweet count, and favorite count) line by line into a pandas DF tweet_df = pd.DataFrame() with
open('tweet-json.txt') as f: for line in f: data = json.loads(line) tweet_id = data['id'] # get the tweet_id retweet_count = data['retweet_count'] # get the retweet_count favorite_count = data['favorite_count'] # get the favorite_count df = pd.DataFrame([[tweet_id, retweet_count, favorite_count]], columns=['tweet_id', 'retweet_count', 'favorite_count']) tweet_df = tweet_df.append(df) tweet_df.head() ###Output _____no_output_____ ###Markdown Assess **Assess *df_twitter_archive* table both visually and programmatically** ###Code df_twitter_archive df_twitter_archive.info() df_twitter_archive.name.value_counts() df_twitter_archive[df_twitter_archive.tweet_id.duplicated()] ###Output _____no_output_____ ###Markdown **Assess *image_predict_df* table both visually and programmatically** ###Code image_predict_df image_predict_df.info() image_predict_df.describe() image_predict_df[image_predict_df.tweet_id.duplicated()] ###Output _____no_output_____ ###Markdown **Assess *tweet_df* table both visually and programmatically** ###Code tweet_df tweet_df.info() tweet_df.describe() tweet_df[tweet_df.tweet_id.duplicated()] ###Output _____no_output_____ ###Markdown Quality issues ***df_twitter_archive, image_predict_df, tweet_df*** Quality issues determined from the above visual and programmatic assessment:- Record-count mismatch between *df_twitter_archive* and *image_predict_df*: 2356 vs. 2075 respectively- *df_twitter_archive* 'name' column has very likely invalid entries, e.g.
'a', 'an', 'the' etc.- We only want original ratings (no retweets), and *df_twitter_archive* has 181 entries of retweeted data- Dog stages are object (string) types, should be categorical type.- timestamp in *df_twitter_archive* records is in object type, should be in *datetime* type.- 'tweet_id' is in *int* format in all three tables, should be *object* (string) type.- We only need ratings that have images, and not all the ratings have images.- *in_reply_to_status_id*, *in_reply_to_user_id*, *retweeted_status_id*, and *retweeted_status_user_id* are all in *float* format, should be in *object* type. Tidiness issues ***df_twitter_archive, image_predict_df, tweet_df*** Tidiness issues determined from the above visual and programmatic assessment:- *image_predict_df* has three sets of prediction columns, but only a single breed column is required. If we determine the breed of the dog from the predictions, we can drop all of the prediction columns.- The three tables can be merged into a single table, with only the necessary columns included- Dog stages are in separate columns; these should be in a single 'dog_stage' column, with rows containing the observation data (i.e. what stage the dogs are at). Clean ###Code # Make working copies of datasets twitter_archive_clean = df_twitter_archive.copy() image_predict_clean = image_predict_df.copy() tweet_clean = tweet_df.copy() ###Output _____no_output_____ ###Markdown Tidiness Define Retrieve any True dog breed prediction, before dropping the unwanted columns from the image_predict table: *p1*, *p1_conf*, *p1_dog*, *p2*, *p2_conf*, *p2_dog*, *p3*, *p3_conf*, *p3_dog*.
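As a quick illustration of the selection rule just defined, the highest-ranked prediction that is actually a dog can be picked with a small standalone function. This is only a sketch, assuming each row carries the `p1`/`p1_dog` … `p3`/`p3_dog` fields of the image-predictions file; the notebook's own apply-based implementation follows.

```python
# Sketch: prefer p1 over p2 over p3, keeping the first prediction
# that the classifier flagged as an actual dog breed.
def best_dog_breed(row):
    for rank in ('p1', 'p2', 'p3'):
        if row[rank + '_dog']:
            return row[rank]
    return 'None'  # no prediction was a dog

# Hypothetical example row: p1 is not a dog, so p2 wins.
row = {'p1': 'paper_towel', 'p1_dog': False,
       'p2': 'Labrador_retriever', 'p2_dog': True,
       'p3': 'golden_retriever', 'p3_dog': True}
print(best_dog_breed(row))  # Labrador_retriever
```

The same ordering (p1 first, then p2, then p3) matches the confidence ranking produced by the image classifier.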
Code ###Code # use apply on image_predict_clean to determine if the dog breed is known; if it is, append it to the new dog_breed column dog_breed = [] def find_breed(df_row): if df_row['p1_dog'] == True: dog_breed.append(df_row['p1']) elif df_row['p2_dog'] == True: dog_breed.append(df_row['p2']) elif df_row['p3_dog'] == True: dog_breed.append(df_row['p3']) else: dog_breed.append('None') image_predict_clean.apply(find_breed, axis=1) image_predict_clean['dog_breed'] = dog_breed # check to at least the first None value image_predict_clean.head(10) # remove the original prediction columns in a single call image_predict_clean = image_predict_clean.drop(['p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code image_predict_clean.info() image_predict_clean.head() ###Output _____no_output_____ ###Markdown Define Drop retweet-related columns from the twitter_archive table: *retweeted_status_id*, *retweeted_status_user_id*, *retweeted_status_timestamp*, *in_reply_to_status_id*, *in_reply_to_user_id* Code ###Code # keep only original tweets, then drop the retweet and reply columns twitter_archive_clean = twitter_archive_clean[twitter_archive_clean.retweeted_status_id.isnull()] twitter_archive_clean = twitter_archive_clean.drop(['retweeted_status_id','retweeted_status_user_id', 'retweeted_status_timestamp','in_reply_to_status_id', 'in_reply_to_user_id'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 12 columns): tweet_id 2175 non-null int64
timestamp 2175 non-null object source 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: int64(3), object(9) memory usage: 220.9+ KB ###Markdown Define Melt the dog stages into a single column Code ###Code # using the pandas melt function, melt the various dog stage columns into a single dog_stage column twitter_archive_clean = pd.melt(twitter_archive_clean, id_vars=['tweet_id','timestamp','source', 'text', 'expanded_urls', 'rating_numerator', 'rating_denominator', 'name',], value_name='dog_stage') twitter_archive_clean = twitter_archive_clean.drop('variable', axis=1) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.info() twitter_archive_clean.dog_stage.value_counts() # To drop the duplicate tweet_id rows, keeping entries where we have a dog_stage twitter_archive_clean = twitter_archive_clean.sort_values('dog_stage').drop_duplicates('tweet_id', keep='last') twitter_archive_clean.dog_stage.value_counts() twitter_archive_clean[twitter_archive_clean.tweet_id.duplicated()] ###Output _____no_output_____ ###Markdown Define Merge the three tables into a single master table, converting the join key *tweet_id* to *object* (string) type first Code ###Code # change tweet_id to 'string' type in all three tables twitter_archive_clean.tweet_id = twitter_archive_clean.tweet_id.astype(str) image_predict_clean.tweet_id = image_predict_clean.tweet_id.astype(str) tweet_clean.tweet_id = tweet_clean.tweet_id.astype(str) # merge tables on 'tweet_id', with the default 'inner' join method twitter_archive_clean = pd.merge(twitter_archive_clean, image_predict_clean, on='tweet_id') twitter_archive_clean = pd.merge(twitter_archive_clean, tweet_clean, on='tweet_id') ###Output _____no_output_____ ###Markdown Test ###Code
twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 1993 Data columns (total 14 columns): tweet_id 1994 non-null object timestamp 1994 non-null object source 1994 non-null object text 1994 non-null object expanded_urls 1994 non-null object rating_numerator 1994 non-null int64 rating_denominator 1994 non-null int64 name 1994 non-null object dog_stage 1994 non-null object jpg_url 1994 non-null object img_num 1994 non-null int64 dog_breed 1994 non-null object retweet_count 1994 non-null int64 favorite_count 1994 non-null int64 dtypes: int64(5), object(9) memory usage: 233.7+ KB ###Markdown Quality Note: Several of the quality issues have been rectified above to help facilitate the tidiness issues, including:- Record-count difference between *df_twitter_archive* and *image_predict_df*. - 'tweet_id' to *object* type.- Removal of re-tweet and in_reply_to data, which also negated the need to change the datatypes for those columns. Define The *name* column has very likely invalid entries, e.g. 'a', 'an', 'the' etc. As the erroneous entries don’t appear to be capitalised, check all lower case entries, and convert any invalid ones to None Code ###Code # check if lower case 'name' entries are non-valid names def print_lower_case(name): if name[0].islower(): print(name) else: pass twitter_archive_clean['name'].apply(print_lower_case) # convert all lower case 'name' entries to None def convert_lower_case(name): if name[0].isupper(): return name return None twitter_archive_clean['name'] = twitter_archive_clean['name'].apply(convert_lower_case) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.name.value_counts() twitter_archive_clean.name.unique() ###Output _____no_output_____ ###Markdown Define timestamp in *df_twitter_archive* records is in object type, should be in *datetime* type.
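As a minimal sketch, separate from the notebook's own cells, it is worth noting that `pd.to_datetime` parses the archive's timestamp strings directly, trailing UTC offset included, so no manual string handling is needed for this conversion:

```python
import pandas as pd

# Sketch: timestamp strings in the WeRateDogs archive look like
# '2017-08-01 16:23:56 +0000'; pandas parses the offset for us.
s = pd.Series(['2017-08-01 16:23:56 +0000', '2016-06-29 01:23:16 +0000'])
parsed = pd.to_datetime(s)

print(parsed.dt.year.tolist())  # [2017, 2016]
```

Because every offset in the archive is the same (+0000), the result is a timezone-aware datetime column rather than plain strings.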
Code ###Code # convert the timestamp column to datetime type twitter_archive_clean.timestamp = pd.to_datetime(twitter_archive_clean.timestamp) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 1993 Data columns (total 14 columns): tweet_id 1994 non-null object timestamp 1994 non-null datetime64[ns] source 1994 non-null object text 1994 non-null object expanded_urls 1994 non-null object rating_numerator 1994 non-null int64 rating_denominator 1994 non-null int64 name 1896 non-null object dog_stage 1994 non-null object jpg_url 1994 non-null object img_num 1994 non-null int64 dog_breed 1994 non-null object retweet_count 1994 non-null int64 favorite_count 1994 non-null int64 dtypes: datetime64[ns](1), int64(5), object(8) memory usage: 233.7+ KB ###Markdown Define *dog_stage* and *dog_breed* are object (string) types, should be categorical type. Code ###Code # change both dog_stage and dog_breed to category type twitter_archive_clean.dog_stage = twitter_archive_clean.dog_stage.astype('category') twitter_archive_clean.dog_breed = twitter_archive_clean.dog_breed.astype('category') ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 1993 Data columns (total 14 columns): tweet_id 1994 non-null object timestamp 1994 non-null datetime64[ns] source 1994 non-null object text 1994 non-null object expanded_urls 1994 non-null object rating_numerator 1994 non-null int64 rating_denominator 1994 non-null int64 name 1896 non-null object dog_stage 1994 non-null category jpg_url 1994 non-null object img_num 1994 non-null int64 dog_breed 1994 non-null category retweet_count 1994 non-null int64 favorite_count 1994 non-null int64 dtypes: category(2), datetime64[ns](1), int64(5), object(6) memory usage: 212.5+ KB ###Markdown Store ###Code # store the cleaned dataframe as a csv
file twitter_archive_clean.to_csv('twitter_archive_master.csv', encoding='utf-8', index=False) ###Output _____no_output_____ ###Markdown Analyze and Visualise ###Code twitter_archive_clean.describe() twitter_archive_clean.info() twitter_archive_clean # plot bar chart of dog_breed (counting each breed with groupby) against favourite_count - limit bins to 20 dog_breeds twitter_archive_clean.groupby('dog_breed').count()['favorite_count'].sort_values(ascending=False).nlargest(20).plot(kind='bar') # remove dog_breeds with None entries twitter_archive_clean.loc[twitter_archive_clean['dog_breed']=='None','dog_breed'] = None # set plot to a larger size, increase dog_breeds to 30 fig = plt.figure(figsize=(20,10)) twitter_archive_clean.groupby('dog_breed').count()['favorite_count'].sort_values(ascending=False).nlargest(30).plot(kind='bar') # add y-axis label and title plt.ylabel("favorite_count") plt.title('Favourite dogs by breed'); # create scatter plot of retweet_count vs favorite_count fig = plt.figure(figsize=(20,10)) plt.scatter(twitter_archive_clean.retweet_count, twitter_archive_clean.favorite_count) plt.ylabel("favorite_count") plt.xlabel("retweet_count") plt.title('Number of Re-tweets against Favourite count'); # remove dog_stage with None entries twitter_archive_clean.loc[twitter_archive_clean['dog_stage']=='None','dog_stage'] = None fig = plt.figure(figsize=(20,10)) twitter_archive_clean.groupby('dog_stage').count()['favorite_count'].sort_values(ascending=False).nlargest(4).plot(kind='barh') # add x-axis label and title plt.xlabel("favorite_count") plt.title('Favourite dogs by dog stage'); ###Output _____no_output_____ ###Markdown Gathering 1 - Uploading twitter-archive-enhanced.csv to the Jupyter notebook cloud and importing it into a dataframe ###Code twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') twitter_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null
int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown 2-Downloading the image predictions tsv file by using requests library and saving it , and importing it to a data frame ###Code url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open (url.split('/')[-1] ,'wb') as file : file.write(response.content) image_predictions = pd.read_csv('image-predictions.tsv',sep='\t') image_predictions.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown 3-Import Json data from Twitter using Tweepy ###Code consumer_key = '' consumer_secret = '' access_token = '' access_secret = '' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth,wait_on_rate_limit = True , wait_on_rate_limit_notify= True) ###Output _____no_output_____ ###Markdown Experiment to extract one tweet_id 
infotmation ###Code exp_tweet = api.get_status(twitter_archive.tweet_id[1000] ,wait_on_rate_limit=True, wait_on_rate_limit_notify=True, tweet_mode = 'extended') content = exp_tweet._json print(content) ###Output {'created_at': 'Wed Jun 29 01:23:16 +0000 2016', 'id': 747963614829678593, 'id_str': '747963614829678593', 'full_text': 'PUPPER NOOOOO BEHIND YOUUU 10/10 pls keep this pupper in your thoughts https://t.co/ZPfeRtOX0Q', 'truncated': False, 'display_text_range': [0, 70], 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [], 'urls': [], 'media': [{'id': 747963600220917761, 'id_str': '747963600220917761', 'indices': [71, 94], 'media_url': 'http://pbs.twimg.com/media/CmFM7ngXEAEitfh.jpg', 'media_url_https': 'https://pbs.twimg.com/media/CmFM7ngXEAEitfh.jpg', 'url': 'https://t.co/ZPfeRtOX0Q', 'display_url': 'pic.twitter.com/ZPfeRtOX0Q', 'expanded_url': 'https://twitter.com/dog_rates/status/747963614829678593/photo/1', 'type': 'photo', 'sizes': {'thumb': {'w': 150, 'h': 150, 'resize': 'crop'}, 'medium': {'w': 937, 'h': 632, 'resize': 'fit'}, 'small': {'w': 680, 'h': 459, 'resize': 'fit'}, 'large': {'w': 937, 'h': 632, 'resize': 'fit'}}}]}, 'extended_entities': {'media': [{'id': 747963600220917761, 'id_str': '747963600220917761', 'indices': [71, 94], 'media_url': 'http://pbs.twimg.com/media/CmFM7ngXEAEitfh.jpg', 'media_url_https': 'https://pbs.twimg.com/media/CmFM7ngXEAEitfh.jpg', 'url': 'https://t.co/ZPfeRtOX0Q', 'display_url': 'pic.twitter.com/ZPfeRtOX0Q', 'expanded_url': 'https://twitter.com/dog_rates/status/747963614829678593/photo/1', 'type': 'photo', 'sizes': {'thumb': {'w': 150, 'h': 150, 'resize': 'crop'}, 'medium': {'w': 937, 'h': 632, 'resize': 'fit'}, 'small': {'w': 680, 'h': 459, 'resize': 'fit'}, 'large': {'w': 937, 'h': 632, 'resize': 'fit'}}}]}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': None, 'in_reply_to_status_id_str': None, 'in_reply_to_user_id': None, 
'in_reply_to_user_id_str': None, 'in_reply_to_screen_name': None, 'user': {'id': 4196983835, 'id_str': '4196983835', 'name': 'WeRateDogs®', 'screen_name': 'dog_rates', 'location': '「 DM YOUR DOGS 」', 'description': 'Your Only Source For Professional Dog Ratings Instagram and Facebook ➪ WeRateDogs [email protected] ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀', 'url': 'https://t.co/Wrvtpnv7JV', 'entities': {'url': {'urls': [{'url': 'https://t.co/Wrvtpnv7JV', 'expanded_url': 'https://blacklivesmatters.carrd.co', 'display_url': 'blacklivesmatters.carrd.co', 'indices': [0, 23]}]}, 'description': {'urls': []}}, 'protected': False, 'followers_count': 8851693, 'friends_count': 17, 'listed_count': 5833, 'created_at': 'Sun Nov 15 21:41:29 +0000 2015', 'favourites_count': 145910, 'utc_offset': None, 'time_zone': None, 'geo_enabled': True, 'verified': True, 'statuses_count': 12743, 'lang': None, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': '000000', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1300264686013657090/UvFPDe2f_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1300264686013657090/UvFPDe2f_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/4196983835/1591077312', 'profile_link_color': 'F5ABB5', 'profile_sidebar_border_color': '000000', 'profile_sidebar_fill_color': '000000', 'profile_text_color': '000000', 'profile_use_background_image': False, 'has_extended_profile': False, 'default_profile': False, 'default_profile_image': False, 'following': False, 'follow_request_sent': False, 'notifications': False, 'translator_type': 'none'}, 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 2102, 
'favorite_count': 5706, 'favorited': False, 'retweeted': False, 'possibly_sensitive': False, 'possibly_sensitive_appealable': False, 'lang': 'en'} ###Markdown Checking the keys of the test tweet ###Code content['retweet_count'],content['user']['followers_count'],content['favorite_count'] ###Output _____no_output_____ ###Markdown Querying JSON data from the API and saving it in the tweet_json.txt file ###Code errors = [] if not os.path.isfile('tweet_json.txt'): with open('tweet_json.txt' ,'w') as file: # save the start time before querying, to time the code start_time = time.time() for tweet_id in twitter_archive['tweet_id']: try : tweet = api.get_status(tweet_id, wait_on_rate_limit=True, wait_on_rate_limit_notify=True,tweet_mode= 'extended') json.dump(tweet._json,file) file.write('\n') except Exception as e: print("Error on tweet id {}".format(tweet_id) + ";" + str(e)) errors.append(tweet_id) # print each tweet id as it is processed print('The tweet id is ' , tweet_id) # print the time elapsed to query all the data print('time elapsed ' , time.time() - start_time) # Read the json data line by line into a pandas dataframe list_of_tweets = [] with open ('tweet_json.txt' , 'r') as file : for line in file: tweet = json.loads(line) list_of_tweets.append(tweet) api_df = pd.DataFrame.from_dict(list_of_tweets) # make a dataframe with only the desired columns api_df = api_df[['id' , 'retweet_count' , 'favorite_count']] # checking the new dataframe api_df.sample() api_df.shape ###Output _____no_output_____ ###Markdown Assessing the Gathered Data Visual Assessment ###Code twitter_archive image_predictions api_df ###Output _____no_output_____ ###Markdown Tidiness: change the id column to tweet_id to facilitate merging Programmatic Assessment ###Code twitter_archive.info() twitter_archive.describe() twitter_archive.duplicated().unique() twitter_archive['name'].value_counts() twitter_archive['rating_numerator'].unique() twitter_archive['rating_denominator'].unique()
twitter_archive['rating_denominator'].value_counts() twitter_archive['source'][0] twitter_archive['source'].value_counts() twitter_archive[twitter_archive.text.str.contains(r"(\d+\.\d*\/\d+)")][['text', 'rating_numerator']] twitter_archive[twitter_archive.text.str.contains(r"(\d+\.\d*\/\d+)")]['text'].value_counts() image_predictions.info() image_predictions['p1'].value_counts() image_predictions.duplicated().unique() api_df.info() api_df.describe() api_df.duplicated().unique() ###Output _____no_output_____ ###Markdown Assessment summary Quality issues twitter archive :* Extra '+0000' suffix in the timestamp* timestamp data type is object* Dog names are None or invalid, e.g. a, an, the* Many retweets * Many replies to tweets* The source column values include HTML code* Empty values are stored as the string 'None' and need to be changed to NaN* Most dogs are not classified into a stage* denominator column records have weird values* Unnecessary columns like in_reply_to_status_id and retweeted_status_id* Some decimal numerator ratings in the tweet text are extracted incorrectly into the numerator ratings column image_predictions:* Some tweets have weird image predictions like microwave and sliding doors.* Unnecessary columns like image number* Column names are not descriptive Tidiness issues * Merge the twitter_archive dataframe with the api_df data and the image_predictions dataframe on the id column to create one master dataframe* There are four dog classification columns which can be combined into one Cleaning twitter archive ###Code # make a copy of the twitter archive dataframe archive_clean = twitter_archive.copy() archive_clean.sample() ###Output _____no_output_____ ###Markdown Define:Remove the extra string from timestamp by string slicing Code: ###Code # remove the extra timezone chars from the timestamp string archive_clean['timestamp'] = archive_clean['timestamp'].str[0:-5] ###Output _____no_output_____ ###Markdown Test: ###Code archive_clean['timestamp'].sample(5) ###Output _____no_output_____ ###Markdown Define:Change
timestamp data type to datetime using the pandas to_datetime method Code: ###Code # change timestamp type to datetime archive_clean['timestamp'] = pd.to_datetime(archive_clean['timestamp'], format = '%Y-%m-%d %H:%M:%S') ###Output _____no_output_____ ###Markdown Test: ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null datetime64[ns] source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: datetime64[ns](1), float64(4), int64(3), object(9) memory usage: 313.0+ KB ###Markdown Define:Filter the archive dataframe to include only original tweets using the .isnull() function Code: ###Code # remove replies and retweets and keep only original tweets archive_clean = archive_clean[archive_clean['in_reply_to_status_id'].isnull()] archive_clean = archive_clean[archive_clean['retweeted_status_id'].isnull()] ###Output _____no_output_____ ###Markdown Test: ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2097 non-null int64 in_reply_to_status_id 0 non-null float64 in_reply_to_user_id 0 non-null float64 timestamp 2097 non-null datetime64[ns] source 2097 non-null object text 2097 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2094 non-null object rating_numerator 2097
non-null int64 rating_denominator 2097 non-null int64 name 2097 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dtypes: datetime64[ns](1), float64(4), int64(3), object(9) memory usage: 294.9+ KB ###Markdown Define:Filter the archive dataframe to delete tweets with weird dog names, using the isin() function with a list of observed weird names Code: ###Code # remove name records with weird values archive_clean = archive_clean.drop(archive_clean[archive_clean.name.isin(['a','an','the','None','very','my','Bookstore'])].index) ###Output _____no_output_____ ###Markdown Test: ###Code archive_clean.name.value_counts() ###Output _____no_output_____ ###Markdown Define:Extract the source URL from the HTML code in the source column by splitting the string Code: ###Code # remove the HTML code from the source column values archive_clean.source[0] archive_clean.source[0].split("\"")[1] archive_clean['source'] = archive_clean['source'].str.split("\"").apply(lambda x: x[1]) ###Output _____no_output_____ ###Markdown Test: ###Code archive_clean.source.value_counts() ###Output _____no_output_____ ###Markdown Define:Make one dog_class column and drop the other class columns, accounting for tweets with multiple classifications Code: ###Code # handle none values archive_clean.doggo.replace('None', '', inplace=True) archive_clean.floofer.replace('None', '', inplace=True) archive_clean.pupper.replace('None', '', inplace=True) archive_clean.puppo.replace('None', '', inplace=True) # merge into one column archive_clean['dog_class'] = archive_clean.doggo + archive_clean.floofer + archive_clean.pupper + archive_clean.puppo # handle multiple classes archive_clean.loc[archive_clean.dog_class == 'doggopupper', 'dog_class'] = 'doggo, pupper' archive_clean.loc[archive_clean.dog_class == 'doggopuppo', 'dog_class'] = 'doggo, puppo' archive_clean.loc[archive_clean.dog_class == 'doggofloofer', 'dog_class'] = 'doggo, floofer' # handle
missing values archive_clean.loc[archive_clean.dog_class == '', 'dog_class'] = np.nan archive_clean = archive_clean.drop(['doggo','floofer','pupper','puppo'],axis = 1) ###Output _____no_output_____ ###Markdown Test: ###Code archive_clean.sample() archive_clean.dog_class.value_counts() ###Output _____no_output_____ ###Markdown Define:Replace 'None' string values with numpy NaN to facilitate analysis, using the replace() method Code: ###Code archive_clean = archive_clean.replace('None',np.nan) ###Output _____no_output_____ ###Markdown Test: ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1419 entries, 0 to 2326 Data columns (total 14 columns): tweet_id 1419 non-null int64 in_reply_to_status_id 0 non-null float64 in_reply_to_user_id 0 non-null float64 timestamp 1419 non-null datetime64[ns] source 1419 non-null object text 1419 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null float64 expanded_urls 1419 non-null object rating_numerator 1419 non-null int64 rating_denominator 1419 non-null int64 name 1419 non-null object dog_class 192 non-null object dtypes: datetime64[ns](1), float64(5), int64(3), object(5) memory usage: 206.3+ KB ###Markdown Define:Filter the dataframe to remove tweets with weird denominator values Code: ###Code archive_clean['rating_denominator'].value_counts() archive_clean = archive_clean[archive_clean['rating_denominator'] == 10] ###Output _____no_output_____ ###Markdown Test: ###Code archive_clean['rating_denominator'].value_counts() ###Output _____no_output_____ ###Markdown Define:Correct numerator ratings by extracting decimal ratings from the text and assigning them to the numerator ratings column Code: ###Code decimal_df = archive_clean[archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)")]['text'].str.extract(r"(\d+\.\d*\/\d+)") decimal_df.rename( columns={0 :'ratings'}, inplace=True ) decimal_df decimal_df.ratings = decimal_df.ratings.str[:-3]
archive_clean.loc[archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)") ,'rating_numerator'] = decimal_df.ratings.str[:-3].astype('float64') ###Output /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:1: UserWarning: This pattern has match groups. To actually get the groups, use str.extract. """Entry point for launching an IPython kernel. ###Markdown Test: ###Code archive_clean['rating_numerator'].value_counts() ###Output _____no_output_____ ###Markdown Define:Drop unnecessary Columns in the archive Dataframe Code: ###Code archive_clean.info() #remove unnecessary columns archive_clean = archive_clean.drop(['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id' ,'retweeted_status_user_id','retweeted_status_timestamp'],axis = 1) ###Output _____no_output_____ ###Markdown Test: ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1415 entries, 0 to 2326 Data columns (total 9 columns): tweet_id 1415 non-null int64 timestamp 1415 non-null datetime64[ns] source 1415 non-null object text 1415 non-null object expanded_urls 1415 non-null object rating_numerator 1415 non-null float64 rating_denominator 1415 non-null int64 name 1415 non-null object dog_class 192 non-null object dtypes: datetime64[ns](1), float64(1), int64(2), object(5) memory usage: 110.5+ KB ###Markdown image_predictions table ###Code #make copy of the original dataframe predictions_clean = image_predictions.copy() predictions_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB 
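Both cleaning tables start from a `.copy()` of the raw dataframes, so the raw data stays intact if the cleaning has to be re-run. A minimal standalone sketch (toy data, not the project's tables) of why the explicit copy matters:

```python
import pandas as pd

raw = pd.DataFrame({"rating": [12, 13, 14]})

alias = raw          # plain assignment: both names refer to the same object
clean = raw.copy()   # independent copy of the underlying data

clean.loc[0, "rating"] = 99

print(raw.loc[0, "rating"])    # 12 -- the original is untouched
print(clean.loc[0, "rating"])  # 99 -- only the copy was mutated
```

Mutating `alias` would have changed `raw` as well, which is exactly what the copies above guard against.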
###Markdown Define:remove tweets with unreliable predictions, keeping only tweets where all three breed predictions are dogs Code: ###Code predictions_clean = predictions_clean[(predictions_clean.p1_dog == True) & (predictions_clean.p2_dog == True) & (predictions_clean.p3_dog == True )] ###Output _____no_output_____ ###Markdown Test: ###Code predictions_clean.p1.value_counts() ###Output _____no_output_____ ###Markdown Define:Drop the unnecessary img_num column from the dataframe Code: ###Code #delete image number column predictions_clean = predictions_clean.drop('img_num' , axis= 1) ###Output _____no_output_____ ###Markdown Test: ###Code predictions_clean.info() archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1415 entries, 0 to 2326 Data columns (total 9 columns): tweet_id 1415 non-null int64 timestamp 1415 non-null datetime64[ns] source 1415 non-null object text 1415 non-null object expanded_urls 1415 non-null object rating_numerator 1415 non-null float64 rating_denominator 1415 non-null int64 name 1415 non-null object dog_class 192 non-null object dtypes: datetime64[ns](1), float64(1), int64(2), object(5) memory usage: 110.5+ KB ###Markdown Define:create one ratings column from the numerator and denominator columns, then drop those two columns.
Code: ###Code archive_clean['ratings'] = archive_clean['rating_numerator'] / archive_clean['rating_denominator'] archive_clean = archive_clean.drop(['rating_numerator','rating_denominator'], axis = 1) ###Output _____no_output_____ ###Markdown Test: ###Code archive_clean.info() archive_clean.sample() ###Output _____no_output_____ ###Markdown Define:rename the column headers of image_predictions to be descriptive Code: ###Code predictions_clean.info() predictions_clean = predictions_clean.rename(columns = {'jpg_url' : 'image_url','p1' : 'breed_prediction_1' ,'p1_conf' : 'prediction_confidence_1','p1_dog' : 'prediction_Match_1', 'p2' : 'breed_prediction_2' ,'p2_conf' : 'prediction_confidence_2','p2_dog' : 'prediction_Match_2' , 'p3' : 'breed_prediction_3' ,'p3_conf' : 'prediction_confidence_3','p3_dog' : 'prediction_Match_3'}) ###Output _____no_output_____ ###Markdown Test: ###Code predictions_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1243 entries, 0 to 2073 Data columns (total 11 columns): tweet_id 1243 non-null int64 image_url 1243 non-null object breed_prediction_1 1243 non-null object prediction_confidence_1 1243 non-null float64 prediction_Match_1 1243 non-null bool breed_prediction_2 1243 non-null object prediction_confidence_2 1243 non-null float64 prediction_Match_2 1243 non-null bool breed_prediction_3 1243 non-null object prediction_confidence_3 1243 non-null float64 prediction_Match_3 1243 non-null bool dtypes: bool(3), float64(3), int64(1), object(4) memory usage: 91.0+ KB ###Markdown Define:rename the id column in api_df to tweet_id to facilitate merging Code: ###Code #make copy of api_df dataframe api_df_clean = api_df.copy() api_df_clean = api_df_clean.rename(columns = {'id' : 'tweet_id'}) ###Output _____no_output_____ ###Markdown Test: ###Code api_df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 3 columns): tweet_id 2331 non-null int64 retweet_count 2331
non-null int64 favorite_count 2331 non-null int64 dtypes: int64(3) memory usage: 54.7 KB ###Markdown Define:merge the archive table and the api_df table on tweet_id with how='inner' to keep only shared tweet ids Code: ###Code archive_clean = pd.merge(archive_clean , api_df_clean , on = 'tweet_id',how= 'inner') ###Output _____no_output_____ ###Markdown Test: ###Code archive_clean.head() archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1408 entries, 0 to 1407 Data columns (total 10 columns): tweet_id 1408 non-null int64 timestamp 1408 non-null datetime64[ns] source 1408 non-null object text 1408 non-null object expanded_urls 1408 non-null object name 1408 non-null object dog_class 191 non-null object ratings 1408 non-null float64 retweet_count 1408 non-null int64 favorite_count 1408 non-null int64 dtypes: datetime64[ns](1), float64(1), int64(3), object(5) memory usage: 121.0+ KB ###Markdown Define:merge the image predictions dataframe with the archive dataframe on tweet_id with how='inner' to keep only shared tweet ids, remove tweets with no images, and create a single dataframe for all the data Code: ###Code predictions_clean.info() dog_rates_tweets = pd.merge(archive_clean,predictions_clean,on= 'tweet_id' , how= 'inner') ###Output _____no_output_____ ###Markdown Test: ###Code dog_rates_tweets.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 855 entries, 0 to 854 Data columns (total 20 columns): tweet_id 855 non-null int64 timestamp 855 non-null datetime64[ns] source 855 non-null object text 855 non-null object expanded_urls 855 non-null object name 855 non-null object dog_class 113 non-null object ratings 855 non-null float64 retweet_count 855 non-null int64 favorite_count 855 non-null int64 image_url 855 non-null object breed_prediction_1 855 non-null object prediction_confidence_1 855 non-null float64 prediction_Match_1 855 non-null bool breed_prediction_2 855 non-null object
prediction_confidence_2 855 non-null float64 prediction_Match_2 855 non-null bool breed_prediction_3 855 non-null object prediction_confidence_3 855 non-null float64 prediction_Match_3 855 non-null bool dtypes: bool(3), datetime64[ns](1), float64(4), int64(3), object(9) memory usage: 122.7+ KB ###Markdown Define:save the clean dataframe to a csv file using the to_csv() method Code: ###Code #save the clean dataframe into a csv file dog_rates_tweets.to_csv('twitter_archive_master.csv') ###Output _____no_output_____ ###Markdown Test: ###Code main_df = pd.read_csv('twitter_archive_master.csv') main_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 855 entries, 0 to 854 Data columns (total 21 columns): Unnamed: 0 855 non-null int64 tweet_id 855 non-null int64 timestamp 855 non-null object source 855 non-null object text 855 non-null object expanded_urls 855 non-null object name 855 non-null object dog_class 113 non-null object ratings 855 non-null float64 retweet_count 855 non-null int64 favorite_count 855 non-null int64 image_url 855 non-null object breed_prediction_1 855 non-null object prediction_confidence_1 855 non-null float64 prediction_Match_1 855 non-null bool breed_prediction_2 855 non-null object prediction_confidence_2 855 non-null float64 prediction_Match_2 855 non-null bool breed_prediction_3 855 non-null object prediction_confidence_3 855 non-null float64 prediction_Match_3 855 non-null bool dtypes: bool(3), float64(4), int64(4), object(10) memory usage: 122.8+ KB ###Markdown Analyzing and Visualizing What are the most common dog names? ###Code main_df.name.value_counts() ###Output _____no_output_____ ###Markdown Cooper is the most common dog name ------ What is the most retweeted and most favorited tweet?
###Code main_df[main_df['retweet_count'] == main_df['retweet_count'].max()] main_df[main_df['favorite_count'] == main_df['favorite_count'].max()] main_df[main_df['favorite_count'] == main_df['favorite_count'].max()].favorite_count #pd.set_option('display.max_colwidth', -1) main_df[main_df['favorite_count'] == main_df['favorite_count'].max()].expanded_urls ###Output _____no_output_____ ###Markdown The most retweeted and most favorited tweet is the one with id 807106840509214720, for a dog called Stephan ------ What is the most common dog class? ###Code main_df.dog_class.value_counts() ###Output _____no_output_____ ###Markdown Most dogs are not classified, but among classified dogs the most common class is pupper, which means a small dog in the Dogtionary ----- What are the most frequent dog breeds? ###Code main_df['breed_prediction_1'].value_counts() ###Output _____no_output_____ ###Markdown Based on the first prediction algorithm, golden retriever is the most common dog breed ----- Top Average Ratings by Breed of Dog Based on the first prediction algorithm ###Code main_df.groupby('breed_prediction_1')['ratings'].mean().nlargest(15) fig = plt.figure(figsize=(16,8)) main_df.groupby('breed_prediction_1')['ratings'].mean().nlargest(15).plot(kind='bar') plt.title("Top 15 Average Ratings by Breed of Dog",fontsize=22,weight ='bold') plt.ylabel("Average Rating",fontsize = 18,weight ='bold') plt.xlabel("Dog Breed",fontsize = 18,weight ='bold') plt.ylim(0,3); plt.rcParams.update({'font.size': 18}) ###Output _____no_output_____ ###Markdown Lowest Average Ratings by Breed of Dog Based on the first prediction algorithm ###Code main_df.groupby('breed_prediction_1')['ratings'].mean().nsmallest(15) fig = plt.figure(figsize=(16,8)) main_df.groupby('breed_prediction_1')['ratings'].mean().nsmallest(15).plot(kind='bar') plt.title("Lowest 15 Average Ratings by Breed of Dog",fontsize=22,weight ='bold') plt.ylabel("Average Rating",fontsize = 18,weight ='bold') plt.xlabel("Dog Breed",fontsize = 18,weight
='bold') plt.ylim(0,3); plt.rcParams.update({'font.size': 18}) ###Output _____no_output_____ ###Markdown Distribution of dog classes among classified dogs ###Code labels = ['pupper','doggo','puppo','doggo ,pupper','floofer'] sizes = main_df.dog_class.value_counts().values explode=(0.1 ,0,0,0,0) fig1, ax1 = plt.subplots(figsize=(16,8)) ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%', shadow=True, startangle=90) ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle. plt.show() ###Output _____no_output_____ ###Markdown Data Wrangling Project Table of Contents* 1. Introduction* 2. Data Wrangling - 2.1 Gathering - 2.2 Assessment - 2.3 Cleaning* 3. Storing Cleaned Data* 4. Visualizing Data 1.0 IntroductionThe goal of this project is to wrangle **@WeRateDogs** Twitter data to enable trustworthy analysis of the data. In this project, all the steps *(Gather, Assess, Clean)* in the Data Wrangling process are handled. Initially, we have been provided with twitter archive data. This data needs to be assessed further, and additional data gathered where needed. All these data need to be cleaned, so that meaningful insights can be derived from the cleaned data. 2.0 Data Wrangling Data Wrangling is one of the key steps in Data Analysis, as it can take 80% or more of a data analyst's time. Real-world data is often dirty and unstructured, which makes data analysis harder. Fortunately, software advancements like Python, and libraries like Pandas, Numpy, etc., make a data analyst's life easier by making the data wrangling process faster and smoother. At a high level, data wrangling comes in 3 different steps, as mentioned below:* Gather* Assess* CleanLet's dive deeper into each of the steps for the **@WeRateDogs** data to get meaningful insights. 2.1 Gathering In this project, we are initially provided with Twitter Archived Data (*twitter-archive-enhanced.csv*). This archived data contains only tweets that have ratings.
###Code ''' Initialize all libraries ''' import numpy as np import pandas as pd import matplotlib.pyplot as plt import json import tweepy import os import sys import requests import re from pandas.io.json import json_normalize import sqlite3 import seaborn as sns %matplotlib inline ###Output _____no_output_____ ###Markdown Parse the given twitter archive enhanced data ###Code ''' Read the Twitter Archived Data ''' df_tweet_archive = pd.read_csv('./data/provided_data/twitter-archive-enhanced.csv') df_tweet_archive.head(3) ###Output _____no_output_____ ###Markdown Twitter Archive Enhanced Field Details:* `tweet_id`: ID of each tweet* `in_reply_to_status_id`: If the represented Tweet is a reply, this field will contain the integer representation of the original Tweet’s ID* `in_reply_to_user_id`: If the represented Tweet is a reply, this field will contain the integer representation of the original Tweet’s author ID. This will not necessarily always be the user directly mentioned in the Tweet.* `timestamp`: Tweet Created Time* `source`: Source of the Tweet, e.g. iPhone, Vine, etc.* `text`: Original Text of the Tweet* `retweeted_status_id`: Retweet Status ID* `retweeted_status_user_id` : Retweet User ID* `retweeted_status_timestamp`: Retweet Timestamp* `expanded_urls`: Tweet URL* `rating_numerator`: Dog Rating Numerator.* `rating_denominator`: Dog Rating Denominator. It's always 10.* `name`: Name of the dog* `doggo`: Stage of the dog* `floofer`: Stage of the dog* `pupper`: Stage of the dog* `puppo`: Stage of the dog Get additional details on the tweets via Twitter API CallThe above data is missing some key information, like the retweet count and favorite count for each tweet. These additional data can be gathered using the Twitter API. In this project, the 'tweepy' library is used to get the tweet details.Although we can get each individual tweet's status with the `get_status` API call, that would require 2356 API calls. Instead, we can get tweets in bulk using the `statuses_lookup` API call.
A single `statuses_lookup` API call can fetch up to 100 tweets. Also, we have to make sure the tweet_mode is set to `extended`, so that tweets are not truncated. ###Code ''' This function initializes the Twitter API Secret needed for further API calls''' def initialize_twitter_secrets(): with open('twitter_secrets.txt', 'r') as content_file: twitter_secrets = json.loads( content_file.read()) return twitter_secrets ''' This function authenticates the application with Twitter and returns an API object which can be used for further API Calls''' def get_twitter_api_handler(twitter_secrets={}): auth = tweepy.OAuthHandler(twitter_secrets['consumer_api_key'], twitter_secrets['consumer_api_secret']) auth.set_access_token(twitter_secrets['access_token'],twitter_secrets['access_token_secret']) api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) return api ''' The below function gets all tweet details for a given list and stores them in a file in JSON format ''' def get_tweet_details_for_given_list(tweet_list=None, tweet_api = None, split_size=100, file_name=None): try: # Check for Max Split Size if split_size > 100: print('Twitter API can handle only 100 tweets per API at the Max.
So switching split size to 100') split_size = 100 # Check for Incorrect Split Size if split_size <= 0: print('Incorrect split size') return -1 #Check if tweet list is empty if tweet_list is None or len(tweet_list) <= 0: print('tweet list is empty') return -1 else: ''' Below Code splits the whole tweet list supplied into smaller chunks and get their details''' max_loop_index = (len(tweet_list)/split_size) + 1 tweets_json_list = [] for i in np.arange(max_loop_index): start_index = (int) (i * split_size) end_index = min( (int) ((i+1) * split_size), len(tweet_list)) '''Get the small chunk tweet id list ''' sub_array = tweet_list[start_index:end_index] '''Check if the small chunk has tweet ids''' if len(sub_array) > 0: ''' API Call made to get the data and tweet_mode is set to Extended mode for getting the full tweet ''' tweets = tweet_api.statuses_lookup(id_=sub_array, tweet_mode='extended') '''Store all tweets in the list''' for tweet in tweets: tweets_json_list.append(tweet._json) full_file_name = os.path.join("./data", "collected_data", file_name) with open(full_file_name,'w+b') as tf: for tweet in tweets_json_list: '''Add EOL(\n) for every json stored''' jsonstr = (json.dumps(tweet, separators=(',', ': ')) + '\n').encode('UTF-8') tf.write(jsonstr) return 0 except: print('Error in getting tweets via API') return -1 #Initialize Secrets twitter_secrets = initialize_twitter_secrets() #Get Twitter API Handler twitter_api = get_twitter_api_handler(twitter_secrets) #Get all tweets and store them in a file get_tweet_details_for_given_list(tweet_list=df_tweet_archive.tweet_id.values.tolist(), \ file_name='tweet_json.txt', split_size=80, tweet_api=twitter_api) ''' Check if the file has all data in JSON - one tweet per line''' open('./data/collected_data/tweet_json.txt', 'r').readline().encode('UTF-8') ###Output _____no_output_____ ###Markdown Twitter Object Field Details.* Details of the full Tweet Object can be found 
[here](https://developer.twitter.com/en/docs/tweets/data-dictionary/overview/tweet-object) ###Code df_tweet_details = pd.read_json(path_or_buf='./data/collected_data/tweet_json.txt', \ encoding='utf-8', orient='records', lines=True) df_tweet_details.head(5) df_tweet_details.columns ###Output _____no_output_____ ###Markdown Get Image Predictions file via Requests LibraryAdditional data for Dog Breed Prediction is provided and is available from here: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsvWe can use the `requests` library to get this Tab Separated File, as shown below: ###Code ''' The below function downloads a file from the web server for a given URL and File Name''' def download_file_from_url(file_url=None, file_name=None): try: req = requests.get(file_url) with open(file_name, 'wb') as fs: fs.write(req.content) return 0 except: print('Error downloading file. Error Message: {0}'.format(sys.exc_info()[0])) return -1 #initialize variables. df_image_prediction = None file_url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' image_prediction_file_name = './data/collected_data/image-predictions.tsv' #Download the image prediction file. download_file_from_url(file_url=file_url, file_name=image_prediction_file_name) #Check if file Exists. if os.path.isfile(image_prediction_file_name): df_image_prediction = pd.read_csv(image_prediction_file_name, sep='\t') else: raise Exception('No file Exists') df_image_prediction.head(5) df_image_prediction.columns ###Output _____no_output_____ ###Markdown Image Prediction Data Fields* `tweet_id` : Tweet ID* `jpg_url`: Image URL* `img_num`: Image Number. Since Twitter supports up to 4 images per tweet.
This column contains the index of the image being predicted* `p1`: Dog Breed - Prediction 1* `p1_conf`: Prediction 1 - Confidence Score* `p1_dog`: Is the Prediction a Dog or some other animal/object - Prediction 1* `p2`: Dog Breed - Prediction 2* `p2_conf`: Prediction 2 - Confidence Score* `p2_dog`: Is the Prediction a Dog or some other animal/object - Prediction 2* `p3`: Dog Breed - Prediction 3* `p3_conf`: Prediction 3 - Confidence Score* `p3_dog`: Is the Prediction a Dog or some other animal/object - Prediction 3 2.2 Assess Since we have gathered all the data for our data analysis, let's focus on the major step **Assess**. Here we are looking for two things: 1. Quality Issues2. Structural IssuesThese issues can be detected using either Visual Assessment or Programmatic Assessment. Let's identify the data issues for all the data we have collected so far. Visual assessment for all the data has been done by opening the files in the Visual Studio Code editor/Excel. Analyzing the Twitter archive enhanced data frame ###Code df_tweet_archive.info() df_tweet_archive.describe() df_tweet_archive.name.str.len().value_counts() df_tweet_archive[df_tweet_archive.name.str.len() < 3].name.value_counts() df_tweet_archive[df_tweet_archive.rating_denominator > 10].shape df_tweet_archive[df_tweet_archive.rating_numerator > 20].shape ###Output _____no_output_____ ###Markdown Twitter Archive DataFrame Issues:Dirty Data Issues:* `rating_denominator` - About 20 records have a Rating Denominator greater than 10. As per [Wiki](https://en.wikipedia.org/wiki/WeRateDogs), the rating scale is one to ten.* `rating_numerator` - About 24 records have a Rating Numerator greater than 20. This is unusual.
We need to check why this is happening* `name` - Some dog names have come up as 'a', 'an', 'this', 'the', etc.* `dog stage` - Not all tweets have dog stage names* `timestamp` - Tweet Created Time is not in datetime type* `Missing Values` - retweet count and favorite count values are missing for each tweet.Messy Data Issues:* `dog stage` - 'puppo', 'doggo', 'floofer', 'pupper' - these are different dog stages. In other words, these are values. These need to be tracked under one variable 'dog_stage' Analysing Twitter Data collected via API ###Code df_tweet_details.info() df_tweet_details.describe() df_tweet_details[df_tweet_details.retweeted] df_tweet_details.columns len(set(df_tweet_archive['tweet_id']).intersection(set(df_tweet_details['id']))) len(set(df_tweet_archive['tweet_id']).intersection(set(df_tweet_details['id_str']))) ###Output _____no_output_____ ###Markdown Twitter API Details DataFrame Issues:Dirty Data Issues:* `Missing Values` - Originally we queried twitter for 2356 Tweets, but we have only 2342 Tweet Details. We are missing about 14 Tweet Details.* `contributors`, `coordinates`,`geo`, `in_reply_to_screen_name`,`in_reply_to_status_id`, `in_reply_to_status_id_str`,`in_reply_to_user_id`, `in_reply_to_user_id_str`, `is_quote_status`, `possibly_sensitive`, `quoted_status`, `quoted_status_id`, `quoted_status_id_str`, `quoted_status_permalink`, `truncated`,`user`, `retweeted_status` - Remove these columns as we are not planning to use them.* `id`, `id_str` - These are duplicate columns. The `id_str` column value does not match all the tweet ids in the twitter archive data, so we need to remove the 'id_str' column Tidy Data Issues:* `extended_entities` - This column contains the data in JSON format. This needs to be parsed. Also, a tweet may contain up to 4 images. Each image is an observation and needs to be a row in the dataset.* `entities` - Need to parse this column to get the hashtags from each tweet.
Analyzing the Image Prediction Data Frame ###Code df_image_prediction.info() df_image_prediction.describe() len(df_image_prediction.tweet_id.unique()) ###Output _____no_output_____ ###Markdown Twitter Image Prediction Data Issues.Dirty Data Issues:* `Missing Data` - We have 2356 tweets in the twitter archive data, but image prediction is available only for 2075 tweets.Tidy Data Issues* `p1`, `p2`, `p3`,`p1_conf`, `p2_conf`, `p3_conf`, `p1_dog`, `p2_dog`, `p3_dog` - These are just column names. Ideally, they should have been tracked in 4 variables (Prediction Number, Breed Prediction, Prediction Confidence Score, Is Prediction a dog?) * Finally, we can reduce the three dataframes into 2 data frames. One for tweet details and one for image predictions Summary of all Issues Found. Dirty Data Issues.**Twitter Archive Data Frame**1. `rating_denominator` - About 20 records have a Rating Denominator greater than 10. As per [Wiki](https://en.wikipedia.org/wiki/WeRateDogs), the rating scale is one to ten.2. `rating_numerator` - About 24 records have a Rating Numerator greater than 20. This is unusual. We need to check why this is happening3. `name` - Some dog names have come up as 'a', 'an', 'this', 'the', etc.4. `dog stage` - Not all tweets have dog stage names5. `timestamp` - Tweet Created Time is not in datetime type6. `Missing Values` - retweet count and favorite count values are missing for each tweet. This will be fixed when we query the tweet data using the Twitter API. 7. `puppo, doggo, floofer, pupper` - Have 'None' in place of NaN values.**All Twitter Details Data Frame**8. `Missing Values` - Originally we queried twitter for 2356 Tweets, but we have only 2342 Tweet Details. We are missing about 14 Tweet Details.9.
`contributors`, `coordinates`,`geo`, `in_reply_to_screen_name`,`in_reply_to_status_id`, `in_reply_to_status_id_str`,`in_reply_to_user_id`, `in_reply_to_user_id_str`, `is_quote_status`, `possibly_sensitive`, `quoted_status`, `quoted_status_id`, `quoted_status_id_str`, `quoted_status_permalink`, `truncated`,`user`, `retweeted_status` - Unwanted columns in the dataframe.10. `id`, `id_str` - These are duplicate columns. One of them can be removed. The `id_str` column does not match all the tweet ids from the archive data**Twitter Image Prediction Data Frame**11. `Missing Data` - We have 2356 tweets in the twitter archive data, but image prediction is available only for 2075 tweets. Tidy Data Issues.**Twitter Archive Data Frame*** `dog stage` - 'puppo', 'doggo', 'floofer', 'pupper' - these are different dog stages. In other words, these are values. These need to be tracked under one variable 'dog_stage'**All Twitter Details Data Frame*** `extended_entities` - This column contains the data in JSON format. This needs to be parsed. Also, a tweet may contain up to 4 images. Each image is an observation and needs to be a row in the dataset.* `entities` - Need to parse this column to get the hashtags from each tweet.**Twitter Image Prediction Data Frame*** `p1`, `p2`, `p3`,`p1_conf`, `p2_conf`, `p3_conf`, `p1_dog`, `p2_dog`, `p3_dog` - These are just column names. Ideally, they should have been tracked in 4 variables (Prediction Number, Breed Prediction, Prediction Confidence Score, Is Prediction a dog?) * Finally, we can reduce the three dataframes into 2 data frames. One for tweet details and one for image predictions 2.3 Cleaning Before we start any cleaning, first let's make a **copy** of all the dataframes. ###Code df_tweet_archive_clean = df_tweet_archive.copy() df_image_prediction_clean = df_image_prediction.copy() df_tweet_details_clean = df_tweet_details.copy() ###Output _____no_output_____ ###Markdown First, let's clean the twitter archive enhanced data.
Define* Drop unused columns ('in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp').* Using the Pandas melt function, transform the columns ('puppo', 'doggo', 'floofer', 'pupper') into one single variable named 'dog stage', and remove the columns ('puppo', 'doggo', 'floofer', 'pupper') once transformed.* Some of the dog stage values have 'None'. Need to replace 'None' with np.NaN.* Some of the dog names have come as 'a', 'an', 'this', 'the'. Use a regular expression to identify the name. If the name is not provided (some of the dog names are just breed names, not real names), then return np.NaN. Also, it seems some of the dog names follow the convention 'named &lt;value&gt;' * Convert 'timestamp's object type to datetime type.* Identify all the rating pattern matches in the tweet text. If the denominator is greater than 10, check if it's a multiple of ten, as a rating can be given to a pack of dogs. If it's a multiple of 10, then normalize the numerator & denominator values, i.e. (numerator * 10) / denominator. The resulting value can be used as the numerator, and the denominator can be set to 10. If the denominator is not a multiple of 10, then it's highly likely a rating is not provided. For these cases, we will use np.NaN.* Some of the rating numerators are greater than 20, i.e. we have ratings like 182/10, 1776/10. Let's assign the value 20.0 for these ratings, so that our analysis does not skew.
Code ###Code #Drop unnecessary columns df_tweet_archive_clean.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', \ 'retweeted_status_user_id', 'retweeted_status_timestamp'], \ axis=1, inplace=True) #Filter selected columns and replace None with np.NaN df_dog_stage_filtered = df_tweet_archive_clean[['tweet_id', 'doggo', 'floofer', 'pupper', 'puppo']].copy() df_dog_stage_filtered.doggo.replace('None', np.NAN, inplace=True) df_dog_stage_filtered.floofer.replace('None', np.NAN, inplace=True) df_dog_stage_filtered.pupper.replace('None', np.NAN, inplace=True) df_dog_stage_filtered.puppo.replace('None', np.NAN, inplace=True) #Using Melt function, get dog_stage variable df_dog_stage_melt = df_dog_stage_filtered.melt(id_vars=['tweet_id'], var_name='dog_stage') #Drop any null values df_dog_stage_melt.dropna(inplace=True) df_dog_stage_melt.info() #remove duplicate column, as dog_stage already has this value. df_dog_stage_melt.drop(['value'], inplace=True, axis=1) #Merge Dataframes df_tweet_archive_clean = df_tweet_archive_clean.merge(df_dog_stage_melt, how='left', left_on=['tweet_id'], right_on=['tweet_id'], suffixes=['','_l'] ) #Remove unwanted columns df_tweet_archive_clean.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1, inplace=True) #convert timestamp to datetime type df_tweet_archive_clean['timestamp'] = pd.to_datetime(df_tweet_archive_clean['timestamp']) ''' This function corrects some of the dog names which are listed as a/an/the/this.
Some names are followed by "named <value>" ''' def fix_dog_names(dataframe=None): dog_name = dataframe['name'] full_txt = dataframe['text'] if dog_name == 'a' or dog_name == 'an' or dog_name == 'the' or dog_name == 'this': m = re.search(r'named (\w+)', full_txt ) if m is not None and len(m.groups()) > 0: return m.groups()[0] else: return np.NaN elif dog_name == 'None': return np.NaN else: return dog_name df_tweet_archive_clean['name'] = df_tweet_archive_clean.apply(fix_dog_names, axis=1) #Reference Link: https://stackoverflow.com/questions/4703390/how-to-extract-a-floating-number-from-a-string ''' This function gets all the rating scores, and normalizes them if needed (some ratings are given for a pack of dogs) ''' def get_score_per_tweet(dataframe=None): text = dataframe['text'] matches = re.findall(r'(?:(?:\d*\.\d+)|(?:\d+\.?))/(?:(?:\d*\.\d+)|(?:\d+\.?))', text) for m in matches: str_split = m.split('/') if str_split is not None and len(str_split) == 2: denominator = (float) (str_split[1]) numerator = (float) (str_split[0]) if (denominator % 10) == 0 and denominator != 0: return pd.Series([ (int) ((numerator * 10) / denominator), (int)(10) ]) else: continue return pd.Series([np.NaN, np.NaN]) df_tweet_archive_clean[['rating_numerator', 'rating_denominator']] = df_tweet_archive_clean.apply(get_score_per_tweet, axis=1) #Fixing outliers. 
df_tweet_archive_clean['rating_numerator'] = df_tweet_archive_clean['rating_numerator'].apply(lambda x: 20 if x > 20 else x) ###Output _____no_output_____ ###Markdown Test ###Code df_tweet_archive_clean.info() df_tweet_archive_clean.dog_stage.value_counts() # checking for incorrect dog names invalid_dog_name = ( (df_tweet_archive_clean.name == 'a') | (df_tweet_archive_clean.name == 'an' ) \ | (df_tweet_archive_clean.name == 'the') | (df_tweet_archive_clean.name == 'this') ) assert len(df_tweet_archive_clean[invalid_dog_name]) == 0 #Check for incorrect rating denominator assert sum(df_tweet_archive_clean.rating_denominator > 10) == 0 assert sum(df_tweet_archive_clean.rating_denominator < 10) == 0 #check for incorrect rating numerator assert sum(df_tweet_archive_clean.rating_numerator > 20) == 0 ###Output _____no_output_____ ###Markdown **From the above, we can see that the unwanted columns have been removed.*** `timestamp` - is a datetime column.* `dog_stage` - New variable column added to show the dog stage. It has only 394 values. * `name` - Invalid dog names such as 'a', 'an', 'this', 'the' have been removed.* `rating_numerator` - We made sure there are no numerator values greater than 20.0.* `rating_denominator` - The rating denominator is always 10, except for the tweets which have no rating. Second, let's clean the twitter API data frame. ###Code df_tweet_details_clean.columns ###Output _____no_output_____ ###Markdown Define* Drop unnecessary columns 'contributors', 'coordinates', 'created_at', 'display_text_range', 'geo', 'id_str', 'in_reply_to_screen_name', 'in_reply_to_status_id', 'in_reply_to_status_id_str', 'in_reply_to_user_id', 'in_reply_to_user_id_str', 'is_quote_status','place', 'possibly_sensitive', 'quoted_status','quoted_status_id', 'quoted_status_id_str', 'quoted_status_permalink', 'retweeted_status','truncated', 'user', 'source'* Rename the 'id' column to 'tweet_id'* Expand the entities object, and extract hashtags & user_mentions. 
* Expand the extended_entities object and extract media information like image URLs and media type. Store each expanded extended_entities object in a dataframe, collect the dataframes in a list, then concatenate them into one dataframe. Also, remove unnecessary fields in the extended_entities object like 'display_url', 'media_url', 'url', 'source_status_id', 'source_status_id_str', 'source_user_id', 'source_user_id_str', 'additional_media_info','video_info'. Rename 'id_str' in the media object to 'media_id'. Merge the expanded dataframe with the original twitter details dataframe.* Merge the Twitter API details with the twitter archive information.* Extract source information from the HTML tags. ###Code #Drop unnecessary columns df_tweet_details_clean.drop(['contributors', 'coordinates', 'created_at', 'display_text_range', 'geo', \ 'id_str', 'in_reply_to_screen_name', 'in_reply_to_status_id', 'in_reply_to_status_id_str', \ 'in_reply_to_user_id', 'in_reply_to_user_id_str', 'is_quote_status','place', 'possibly_sensitive', \ 'quoted_status','quoted_status_id', 'quoted_status_id_str', 'quoted_status_permalink', \ 'retweeted_status','truncated', 'user', 'source'], axis=1, inplace=True) #rename id column name -> tweet_id df_tweet_details_clean.rename(index=str, columns={'id': 'tweet_id'}, inplace=True) ''' This function parses the source information inside the HTML tag ''' def extract_source_from_html(dataframe=None): sourcetxt = dataframe['source'] if sourcetxt is np.NaN: return np.NaN if sourcetxt is not None: r = re.match(r'<[a][^>]*>(.+?)</[a]>', sourcetxt) if r is not None and len(r.groups()) > 0: return r.groups()[0] return np.NaN df_tweet_archive_clean['source'] = df_tweet_archive_clean.apply(extract_source_from_html, axis=1) ''' This function extracts hashtags and user mention values from the entities object ''' def get_hash_tags_and_user_mention_screen_mentions_from_entities_obj(dataFrame = None): entities_obj = dataFrame['entities'] hashtags_array = [] user_mentions_screen_name_array = [] for key, value 
in entities_obj.items(): if key == 'hashtags': for hashvalue in value: hashtags_array.append( hashvalue['text']) if key == 'user_mentions': for user_mention_value in value: user_mentions_screen_name_array.append(user_mention_value['screen_name']) return pd.Series([ hashtags_array, user_mentions_screen_name_array]) df_tweet_details_clean[['hashtags','user_mentions']] = df_tweet_details_clean.apply( \ get_hash_tags_and_user_mention_screen_mentions_from_entities_obj, axis=1) #drop 'entities' column as it's already parsed df_tweet_details_clean.drop(['entities'], inplace=True, axis=1) #filter tweet_id and the extended_entities object df_tweet_details_ex_entities_filtered = df_tweet_details_clean[['tweet_id', 'extended_entities']] ''' This function parses the extended entities object into a dataframe and appends it to the dataframe list ''' def parse_extended_entities(original_dataframe=None, data_collection_dataframe_list=None): tweet_id = original_dataframe['tweet_id'] extended_entities_obj = original_dataframe['extended_entities'] extended_entities_df = json_normalize(extended_entities_obj, 'media') extended_entities_df['tweet_id'] = tweet_id extended_entities_df.drop(['id', 'indices', 'sizes'], axis=1, inplace=True) extended_entities_df['img_num'] = extended_entities_df.index + 1 data_collection_dataframe_list.append(extended_entities_df) return data_collection_dataframe_list = [] df_tweet_details_ex_entities_filtered[~df_tweet_details_ex_entities_filtered['extended_entities'].isnull()].apply( \ parse_extended_entities, \ data_collection_dataframe_list=data_collection_dataframe_list , axis=1) #concat all extended entities dataframe list df_tweet_extended_media_details = pd.concat(data_collection_dataframe_list, ignore_index=True, sort=False) #Drop unnecessary columns from the extended entities object extended_entities_unwanted_cols = ['display_url', 'media_url', 'url', 'source_status_id', 'source_status_id_str', \ 'source_user_id', 'source_user_id_str', 
'additional_media_info','video_info'] df_tweet_extended_media_details.drop(extended_entities_unwanted_cols, inplace=True, axis=1) #Rename 'id_str' to media_id df_tweet_extended_media_details.rename(index=str, columns={"id_str": "media_id"}, inplace=True) #Merge Extended Entities dataframe with tweet details dataframe. df_tweet_details_clean = df_tweet_details_clean.merge(df_tweet_extended_media_details, how='left', \ left_on='tweet_id', right_on='tweet_id') #Drop 'extended_entities' column, as it's already parsed df_tweet_details_clean.drop(['extended_entities'], inplace=True, axis=1) #combine twitter archive data with tweet details dataframe df_tweet_archive_clean = df_tweet_archive_clean.merge(df_tweet_details_clean, how='left',\ left_on='tweet_id', right_on='tweet_id') #drop 'full_text' column as we have this information in 'text' column df_tweet_archive_clean.drop(['full_text'], inplace=True, axis=1) ###Output _____no_output_____ ###Markdown Test ###Code #Visual check to see if there are unnecessary columns df_tweet_archive_clean.info() #check to see if the hashtags values are parsed correctly df_tweet_archive_clean.hashtags.value_counts() #Check to see if user_mentions are parsed correctly df_tweet_archive_clean.user_mentions.value_counts() #check to see if the media in extended objects are parsed correctly df_tweet_archive_clean.type.value_counts() #Check if the tweet count matches with original data source count. 
assert df_tweet_archive_clean.tweet_id.nunique() == df_tweet_archive.tweet_id.nunique() # check if the tweet count in archive table matches with original source (tweet details dataframe) count assert df_tweet_archive_clean[~df_tweet_archive_clean.favorited.isnull()].tweet_id.nunique() == df_tweet_details.id.nunique() ###Output _____no_output_____ ###Markdown In the final tweet information dataframe, we can see the following:- All unnecessary columns have been removed, including duplicate columns ('full_text' from tweet API details).- Hashtags/user mentions/media information have been extracted.- Missing tweet details (14 tweets missing from the tweet API details) have been found from the original twitter archive information. Finally, let's clean the image prediction results. Define * There are 3 prediction results in the dataframe. Select each prediction's data into a separate dataframe, and rename the columns appropriately, i.e. p1 -> 'predicted_breed', p1_conf -> 'confidence_score', p1_dog -> 'is_dog' and so on. 
* Include a new variable called 'prediction_number' to identify whether it is the p1/p2/p3 result.* Combine each of the prediction group results into one single dataframe ###Code #Filter the interesting columns for p1 df_image_prediction_p1_filtered = df_image_prediction_clean[['tweet_id', 'jpg_url', 'img_num', 'p1', 'p1_conf', 'p1_dog']].copy() #Assign the prediction number df_image_prediction_p1_filtered['prediction_number'] = 'p1' #rename column names with appropriate names df_image_prediction_p1_filtered.rename(index=str, columns={'p1': 'predicted_breed', 'p1_conf': 'confidence_score', 'p1_dog': 'is_dog'}, inplace=True) #Filter the interesting columns for p2 df_image_prediction_p2_filtered = df_image_prediction_clean[['tweet_id', 'jpg_url', 'img_num', 'p2', 'p2_conf', 'p2_dog']].copy() #Assign the prediction number df_image_prediction_p2_filtered['prediction_number'] = 'p2' #rename column names with appropriate names df_image_prediction_p2_filtered.rename(index=str, columns={'p2': 'predicted_breed', 'p2_conf': 'confidence_score', 'p2_dog': 'is_dog'}, inplace=True) #Filter the interesting columns for p3 df_image_prediction_p3_filtered = df_image_prediction_clean[['tweet_id', 'jpg_url', 'img_num', 'p3', 'p3_conf', 'p3_dog']].copy() #Assign the prediction number df_image_prediction_p3_filtered['prediction_number'] = 'p3' #rename column names with appropriate names df_image_prediction_p3_filtered.rename(index=str, columns={'p3': 'predicted_breed', 'p3_conf': 'confidence_score', 'p3_dog': 'is_dog'}, inplace=True) #concat all 3 prediction dataframes df_image_prediction_clean = pd.concat([df_image_prediction_p1_filtered, df_image_prediction_p2_filtered, df_image_prediction_p3_filtered], ignore_index=True, sort=False) ###Output _____no_output_____ ###Markdown Test ###Code #Visual check to see if the column names come out correctly. df_image_prediction_clean.info() #Make sure the count matches. 
assert df_image_prediction_clean.shape[0] == df_image_prediction.shape[0]*3 ###Output _____no_output_____ ###Markdown 3.0 Storing Cleaned Data.For this project, I have stored the data in Sqlite file 'twitter_archive_master.sqlite' ###Code #initialize SQLite connection sqlite_file_name = 'twitter_archive_master.sqlite' conn = sqlite3.connect(sqlite_file_name) #Convert Array Objects to String, so that it can be stored in Sqlite database. df_tweet_archive_clean.hashtags = df_tweet_archive_clean.hashtags.astype(str) df_tweet_archive_clean.user_mentions = df_tweet_archive_clean.user_mentions.astype(str) #Store the dataframes in sqlite file df_image_prediction_clean.to_sql('twitter_image_predictions', conn, if_exists='replace', index=False) df_tweet_archive_clean.to_sql('twitter_archive_details', conn, if_exists='replace', index=False) #Check if the twitter image prediction results are stored correctly df_image_prediction_clean = pd.read_sql_query("select * from twitter_image_predictions;", conn) #Check if the twitter archive results are stored correctly df_tweet_archive_clean = pd.read_sql_query("select * from twitter_archive_details;", conn) df_tweet_archive_clean.info() df_tweet_archive_clean.timestamp = pd.to_datetime(df_tweet_archive_clean.timestamp) df_image_prediction_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 6225 entries, 0 to 6224 Data columns (total 7 columns): tweet_id 6225 non-null int64 jpg_url 6225 non-null object img_num 6225 non-null int64 predicted_breed 6225 non-null object confidence_score 6225 non-null float64 is_dog 6225 non-null int64 prediction_number 6225 non-null object dtypes: float64(1), int64(3), object(3) memory usage: 340.5+ KB ###Markdown 4.0 Visualization Tweet Trend Analysis ###Code #Filter the necessary columns df_tweet_archive_date = df_tweet_archive_clean[['tweet_id', 'timestamp']].copy() #drop duplicates df_tweet_archive_date_no_dup = df_tweet_archive_date.drop_duplicates() #gather year, month, yearmonth 
information df_tweet_archive_date_no_dup['Year']= df_tweet_archive_date_no_dup['timestamp'].apply(lambda x: x.year) df_tweet_archive_date_no_dup['Month']= df_tweet_archive_date_no_dup['timestamp'].apply(lambda x: x.month) df_tweet_archive_date_no_dup['YearMonth'] = df_tweet_archive_date_no_dup['timestamp'].map(lambda x: 100*x.year + x.month) #get tweet count by group with 'Yearmonth' df_tweets_by_month = df_tweet_archive_date_no_dup.groupby(['YearMonth', 'Year', 'Month']).count().reset_index()[['YearMonth', 'Year', 'Month', 'tweet_id']] df_tweets_by_month.rename(index=str, columns={'tweet_id':'tweet_count'}, inplace=True) df_tweets_by_month['YearMonth'] = df_tweets_by_month['YearMonth'].astype('str') #Plot the tweet count trend sns.set() f = plt.figure(figsize=(20,3)) ax = f.add_subplot(1, 1, 1) sns.relplot(x="YearMonth", y="tweet_count", kind="line", data=df_tweets_by_month, ax=ax); ###Output C:\Users\Karthick\Anaconda3\envs\my_env\lib\site-packages\ipykernel\__main__.py:8: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy C:\Users\Karthick\Anaconda3\envs\my_env\lib\site-packages\ipykernel\__main__.py:9: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy C:\Users\Karthick\Anaconda3\envs\my_env\lib\site-packages\ipykernel\__main__.py:10: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. 
Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy ###Markdown Rating Insights ###Code #group tweet_id, rating to remove duplicates df_rating_numerator_details = df_tweet_archive_clean.groupby(['tweet_id', 'rating_numerator']).count() \ .reset_index()[['tweet_id', 'rating_numerator']] #Plot the histogram of rating scores plt.figure(figsize=(10,5)) df_rating_numerator_details.rating_numerator.hist(); df_rating_numerator_details.rating_numerator.value_counts() ###Output _____no_output_____ ###Markdown Tweet Source Insight ###Code #plot the distribution of the tweet source plt.figure(figsize=(10,10)) df_tweet_archive_clean.source.value_counts().plot(kind='pie', autopct='%.1f%%'); ###Output _____no_output_____ ###Markdown Predictor Conflicts Let's identify the conflicts between p1, p2, and p3, i.e. check whether all three predictions agree that the image is (or is not) a dog, or whether they conflict, with some saying it is a dog and some saying it is not. ###Code df_pred_result = df_image_prediction_clean.groupby(['tweet_id', 'prediction_number']).sum() \ .reset_index()[['tweet_id', 'prediction_number', 'is_dog']] df_pred_result = df_pred_result.groupby(['tweet_id']).sum().reset_index() df_pred_overall_result = df_pred_result.is_dog.value_counts().reset_index() df_pred_overall_result.rename(index=str, columns={'index':'pred_combined'}, inplace=True) ''' classifies whether the predictors are in conflict or non-conflict ''' def classify_conflict(dataframe=None): if dataframe['pred_combined'] == 3 or dataframe['pred_combined'] == 0: return 'Non-Conflict' else: return 'Conflict' df_pred_overall_result['conflict_classification'] = df_pred_overall_result.apply(classify_conflict, axis=1) df_conflict_result = df_pred_overall_result.groupby(['conflict_classification']).sum() \ .reset_index()[['conflict_classification', 'is_dog']] df_conflict_result.rename(index=str, 
columns={'is_dog': 'count'}) ###Output _____no_output_____ ###Markdown Top 3 favorite dog stages ###Code #Get top 3 favorite counts top_3_favorites = df_tweet_archive_clean.favorite_count.sort_values(ascending=False).head(3).reset_index() top_3_favorites.rename(index=str, columns={'index':'row_number'}, inplace=True) #get top 3 favorite count's tweet id and dog stage top_3_favorites.merge(df_tweet_archive_clean, how='inner', left_on='row_number', right_index=True)[['tweet_id','dog_stage', 'favorite_count_x']] from wordcloud import WordCloud, STOPWORDS import matplotlib.pyplot as plt text= df_image_prediction_clean[df_image_prediction_clean.is_dog == 1]['predicted_breed'].values text = np.append(text, df_tweet_archive_clean[~df_tweet_archive_clean.dog_stage.isnull()]['dog_stage'].values) wordcloud = WordCloud(max_font_size=50, max_words=100, background_color="black").generate(' '.join(text)) plt.figure(figsize=(15,15)) plt.imshow(wordcloud, interpolation="bilinear") plt.axis("off") plt.show() ###Output _____no_output_____ ###Markdown Project DetailsThe tasks in this project are as follows:* Data wrangling, which consists of: * Gathering data (downloadable file in the Resources tab in the left most panel of your classroom and linked in step 1 below). * Assessing data * Cleaning data * Storing, analyzing, and visualizing your wrangled data * Reporting on 1) your data wrangling efforts and 2) your data analyses and visualizations 1. Gathering Data for this Project: * The WeRateDogs Twitter archive. Download twitter_archive_enhanced.csv file.* The tweet image predictions, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network. 
This file (image_predictions.tsv) is hosted on Udacity's servers and should be downloaded programmatically using the Requests library and the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv* Each tweet's retweet count and favorite ("like") count at minimum, and any additional data you find interesting. Using the tweet IDs in the WeRateDogs Twitter archive, query the Twitter API for each tweet's JSON data using Python's Tweepy library and store each tweet's entire set of JSON data in a file called tweet_json.txt file. Each tweet's JSON data should be written to its own line. Then read this .txt file line by line into a pandas DataFrame with (at minimum) tweet ID, retweet count, and favorite count. ###Code import pandas as pd import numpy as np import requests import matplotlib.pyplot as plt import tweepy from tweepy import OAuthHandler import json from timeit import default_timer as timer ###Output _____no_output_____ ###Markdown 1.1 The WeRateDogs Twitter archive ###Code # Read Twitter Archive Data twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') ###Output _____no_output_____ ###Markdown 1.2 Use Requests to get image predictions data ###Code # Download by using Requests and save the data as image_predictions.tsv url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' r = requests.get(url) open('image_predictions.tsv', 'wb').write(r.content) # Read Image Predictions Data image_predictions = pd.read_csv('image_predictions.tsv', sep = '\t') ###Output _____no_output_____ ###Markdown 1.3 Use Twitter API and Save JSON ###Code df_1 = pd.read_csv('twitter-archive-enhanced.csv') #### Query Twitter API for each tweet in the Twitter archive and save JSON in a text file #### These are hidden to comply with Twitter's API terms and conditions consumer_key = 'xxxxxxxxxxxxxxx' consumer_secret = 'xxxxxxxxxxxxxxx' access_token = 
'xxxxxxxxxxxxxxx' access_secret = 'xxxxxxxxxxxxxxx' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) #### NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES: #### df_1 is a DataFrame with the twitter_archive_enhanced.csv file. You may have to #### change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv #### NOTE TO REVIEWER: this student had mobile verification issues so the following #### Twitter API code was sent to this student from a Udacity instructor #### Tweet IDs for which to gather additional data via Twitter's API tweet_ids = df_1.tweet_id.values len(tweet_ids) #### Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() #### Save each tweet's returned JSON as a new line in a .txt file with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) ###Output _____no_output_____ ###Markdown 2. Assessing Data for this Project Key PointsKey points to keep in mind when data wrangling for this project:You only want original ratings (no retweets) that have images. Though there are 5000+ tweets in the dataset, not all are dog ratings and some are retweets.Assessing and cleaning the entire dataset completely would require a lot of time, and is not necessary to practice and demonstrate your skills in data wrangling. 
Therefore, the requirements of this project are only to assess and clean at least 8 quality issues and at least 2 tidiness issues in this dataset.Cleaning includes merging individual pieces of data according to the rules of tidy data.The fact that the rating numerators are greater than the denominators does not need to be cleaned. This unique rating system is a big part of the popularity of WeRateDogs.You do not need to gather the tweets beyond August 1st, 2017. You can, but note that you won't be able to gather the image predictions for these tweets since you don't have access to the algorithm used. 2.1.1 Twitter archive data: ###Code twitter_archive.info() twitter_archive.head() twitter_archive.isnull().sum() # we don't want retweets, so 181 retweets with retweeted_status_id shall removed # also, 78 tweet replies need to be removed twitter_archive.notnull().sum() #in_reply_to_status_id,in_reply_to_user_id,retweeted_status_id,retweeted_status_user_id shall be string, not float twitter_archive.dtypes # numerator of rating <10 shall be removed twitter_archive.rating_numerator.value_counts() #Denominator of not 10 shall be removed twitter_archive.rating_denominator.value_counts() twitter_archive.groupby('source').count() pd.set_option('display.max_colwidth', 1000) # check content in text column to see if there are any rating numerators or denominators contain decimals twitter_archive[twitter_archive.text.str.contains(r'\d+\.\d+\/(?:\d+\.)?\d+|(?:\d+\.)\d+\/\d+\.\d+')] # we can see that the rating_numerators are not correct for the text with decimals, we need to include the score # on the left hand side of the decimal as well pd.set_option('display.max_colwidth', 24) ###Output _____no_output_____ ###Markdown 2.1.2 Assessment for Quality Issues 1. There are "<a href=" and other characters in the source column, they shall be removed2. "+0000" in the timestamp column does not provide any value, shall be removed3. 
tweet_id, in_reply_to_status_id,in_reply_to_user_id,retweeted_status_id,retweeted_status_user_id shall be string, not float.4. In the text, there are ratings with decimal numbers; right now only numbers after the decimal points are captured and loaded into the rating_numerator column.5. As we don’t want retweets and replies, “retweeted_status_timestamp” and other retweet- and reply-related columns become irrelevant; these columns shall be removed.6. Rows with 'None' in the name column have no dog name in the text 2.1.3 Assessment for Tidiness Issues 1. There are 181 retweets and 78 tweet replies, those rows of entries shall be removed to ensure no duplication of the same tweet. This is following the rule of tidy data, i.e. each observation forms a row. Therefore, we want each row to only represent a unique entry. 2. Dog types are currently in the form of wide columns, i.e. "doggo", "floofer", "pupper", and "puppo" columns, we shall combine these dog types into one single column as they are one type of variable. 2.2.1 Image prediction data: ###Code image_predictions.info() image_predictions.head() image_predictions.img_num.value_counts() image_predictions.isnull().sum() image_predictions.dtypes ###Output _____no_output_____ ###Markdown 2.2.2 Assessment for Quality Issues 7. Entries with a False “P1_dog” value will be removed, as the model has no confidence to determine the type of dog. 8. “P1_dog” column will be removed as it does not provide information after the False “P1_dog” entries are removed. 2.3.1 Twitter API's JSON file (Retweet vs. 
Favorite Count Data) ###Code # saving json data to the dataframe tweets_list =[] with open('tweet_json.txt') as json_file: for line in json_file: t_dict = {} t_json = json.loads(line) #to handle exceptions, as some of id is na try: t_dict['tweet_id'] = t_json['extended_entities']['media'][0]['id'] except: t_dict['tweet_id'] = 'na' t_dict['retweet_count'] = t_json['retweet_count'] t_dict['favorite_count'] = t_json['favorite_count'] tweets_list.append(t_dict) tweets_df = pd.DataFrame(tweets_list) tweets_df.head() tweets_df.info() tweets_df.isnull().sum() ###Output _____no_output_____ ###Markdown 3. Cleaning Data for this Project 3.1 Twitter Archive data: ###Code twitter_archive_cleaned = twitter_archive.copy() ###Output _____no_output_____ ###Markdown Tidiness Issues: Recap:1. There are 181 retweets and 78 tweet replies, those rows of entries shall be removed to ensure no duplication of the same tweet. This is following the rule of tidy data, i.e. Each observation forms a row. Therefore, we want each row to only represent an unique entry.2. Dog types are currently in the form of wide columns, i.e. "doggo", "floofer", "pupper", and "puppo" columns, we shall combine these dog types into one single column as they are one type of variables. Define: 1. There are 181 retweets and 78 tweet replies, those rows of entries shall be removed to ensure no duplication of the same tweet. This is following the rule of tidy data, i.e. Each observation forms a row. Therefore, we want each row to only represent an unique entry. Code: ###Code twitter_archive_cleaned.drop(twitter_archive_cleaned[twitter_archive_cleaned.retweeted_status_id.notnull()].index, inplace = True) ###Output _____no_output_____ ###Markdown Test: ###Code # Test: twitter_archive_cleaned.retweeted_status_id.notnull().sum() ###Output _____no_output_____ ###Markdown Define: 2.Dog types are currently in the form of wide columns, i.e. 
"doggo", "floofer", "pupper", and "puppo" columns, we shall combine these dog types into one single column as they are one type of variables. Code: ###Code # First, use lambda to join all types into one value twitter_archive_cleaned['dogStage'] = twitter_archive_cleaned[['doggo', 'floofer','pupper','puppo']].apply(lambda x: ''.join(x), axis=1) # single stage twitter_archive_cleaned['dogStage'].replace("NoneNoneNoneNone","None ", inplace=True) twitter_archive_cleaned['dogStage'].replace("doggoNoneNoneNone","doggo", inplace=True) twitter_archive_cleaned['dogStage'].replace("NoneflooferNoneNone","floofer", inplace=True) twitter_archive_cleaned['dogStage'].replace("NoneNonepupperNone","pupper", inplace=True) twitter_archive_cleaned['dogStage'].replace("NoneNoneNonepuppo","puppo", inplace=True) # mix stage twitter_archive_cleaned['dogStage'].replace("doggoNonepupperNone","multiple", inplace=True) twitter_archive_cleaned['dogStage'].replace("doggoNoneNonepuppo","multiple", inplace=True) twitter_archive_cleaned['dogStage'].replace("doggoflooferNoneNone","multiple", inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code # Test: twitter_archive_cleaned.head() # Test: twitter_archive_cleaned.dogStage.value_counts() # To drop old stage columns, they are no longer useful ToDrop = ['doggo','floofer','pupper','puppo',] twitter_archive_cleaned.drop(ToDrop,inplace=True,axis=1) twitter_archive_cleaned.columns ###Output _____no_output_____ ###Markdown Quality Issues: Recap all the quality issues:1. There are "<a href=" and other characters in the source column, they shall be removed2. "+0000" in the timestamp column does not provide any value, shall be removed3. tweet_id, in_reply_to_status_id,in_reply_to_user_id,retweeted_status_id,retweeted_status_user_id shall be string, not float.4. In the text, there are ratings with decimal numbers, right now only numbers after the decimal points are captured and loaded into the rating_numerator column.5. 
As we don’t want retweets and replies, “retweeted_status_timestamp” and other retweet- and reply-related columns become irrelevant; these columns shall be removed.6. Rows with 'None' in the name column have no dog name in the text Define: 1. Remove the HTML tag markup in the source column using a regular expression Code ###Code twitter_archive_cleaned.source = twitter_archive_cleaned.source.str.replace(r'<[^>]*>', '') ###Output _____no_output_____ ###Markdown Test: ###Code twitter_archive_cleaned.head() ###Output _____no_output_____ ###Markdown Define: 2. "+0000" in the timestamp column does not provide any value, shall be removed Code: ###Code twitter_archive_cleaned.timestamp=twitter_archive_cleaned.timestamp.str.rstrip('+0000') ###Output _____no_output_____ ###Markdown Test: ###Code # Test: twitter_archive_cleaned.head() ###Output _____no_output_____ ###Markdown Define: 3.tweet_id, in_reply_to_status_id,in_reply_to_user_id,retweeted_status_id,retweeted_status_user_id shall be string, not float. Code: ###Code # Code: ListOfID= ['tweet_id', 'in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id','retweeted_status_user_id'] twitter_archive_cleaned[ListOfID] = twitter_archive_cleaned[ListOfID].astype(str) # Test: twitter_archive_cleaned.dtypes ###Output _____no_output_____ ###Markdown Define: 4. In the text, there are ratings with decimal numbers; right now only numbers after the decimal points are captured and loaded into the rating_numerator column. We can see that the rating_numerators are not correct for the text with decimals; we need to include the score on the left-hand side of the decimal as well. 
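The decimal-aware pattern used for the re-extraction below can be sanity-checked on a couple of sample strings (hypothetical tweet text, not taken from the archive):

```python
import re

# Decimal-aware rating pattern: optionally capture the digits before the decimal point
pattern = r'((?:\d+\.)?\d+)\/\d+'

# Hypothetical sample tweets (not from the real archive)
assert re.search(pattern, "This is Bella. She deserves a 13.5/10").group(1) == "13.5"
assert re.search(pattern, "Good dog. 12/10 would pet again").group(1) == "12"
```

Without the optional `(?:\d+\.)?` prefix, only the digits after the decimal point would be captured (e.g. 5 instead of 13.5), which is exactly the quality issue described above.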
So let's re-extract the rating numerator with the correct regular expression Code: ###Code twitter_archive_cleaned.rating_numerator = twitter_archive_cleaned.text.str.extract(r'((?:\d+\.)?\d+)\/\d+', expand=True).astype('float') ###Output _____no_output_____ ###Markdown Test: ###Code twitter_archive_cleaned.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 14 columns): tweet_id 2175 non-null object in_reply_to_status_id 2175 non-null object in_reply_to_user_id 2175 non-null object timestamp 2175 non-null object source 2175 non-null object text 2175 non-null object retweeted_status_id 2175 non-null object retweeted_status_user_id 2175 non-null object retweeted_status_timestamp 0 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null float64 rating_denominator 2175 non-null int64 name 2175 non-null object dogStage 2175 non-null object dtypes: float64(1), int64(1), object(12) memory usage: 254.9+ KB ###Markdown Define: 5. all columns relate to retweets and replies will be removed Code: ###Code ToDrop = ['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id','retweeted_status_user_id','retweeted_status_timestamp'] twitter_archive_cleaned.drop(ToDrop,inplace=True,axis=1) ###Output _____no_output_____ ###Markdown Test: ###Code #Test: twitter_archive_cleaned.head() ###Output _____no_output_____ ###Markdown Define: 6. Rows with 'None' in the name column has no dog name in the text Code: ###Code # Code: twitter_archive_cleaned.name = twitter_archive_cleaned.name.replace('None', np.nan) ###Output _____no_output_____ ###Markdown Test: ###Code # Test: twitter_archive_cleaned.query('name == "None"') ###Output _____no_output_____ ###Markdown 3.2 Image Prediction Data: ###Code image_preds_cleaned = image_predictions.copy() ###Output _____no_output_____ ###Markdown Quality Issues: 7. 
Entries with a False “P1_dog” value will be removed, as the model has no confidence to determine the type of dog.8. “P1_dog” column will be removed as it does not provide information after the False “P1_dog” entries are removed. ###Code image_preds_cleaned.p1_dog.value_counts() ###Output _____no_output_____ ###Markdown Define: 7. Entries with a False “P1_dog” value will be removed, as the model has no confidence to determine the type of dog. Code: ###Code image_preds_cleaned.drop(image_preds_cleaned[image_preds_cleaned.p1_dog == False].index, inplace=True) ###Output _____no_output_____ ###Markdown Test: ###Code # Test: image_preds_cleaned.p1_dog.value_counts() ###Output _____no_output_____ ###Markdown Define: 8. “P1_dog” column will be removed as it does not provide information after the False “P1_dog” entries are removed. Code: ###Code Drop_C = ['p1_dog'] image_preds_cleaned.drop(Drop_C, inplace=True, axis=1) ###Output _____no_output_____ ###Markdown Test: ###Code # Test image_preds_cleaned.head() image_preds_cleaned.info() image_preds_cleaned['tweet_id'] = image_preds_cleaned['tweet_id'].astype(str) image_preds_cleaned.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1532 entries, 0 to 2073 Data columns (total 11 columns): tweet_id 1532 non-null object jpg_url 1532 non-null object img_num 1532 non-null int64 p1 1532 non-null object p1_conf 1532 non-null float64 p2 1532 non-null object p2_conf 1532 non-null float64 p2_dog 1532 non-null bool p3 1532 non-null object p3_conf 1532 non-null float64 p3_dog 1532 non-null bool dtypes: bool(2), float64(3), int64(1), object(5) memory usage: 122.7+ KB ###Markdown 3.3 Retweet vs Fav. Data ###Code tweets_df_cleaned=tweets_df.copy() tweets_df_cleaned.info() tweets_df_cleaned.describe() ###Output _____no_output_____ ###Markdown Quality and Tidiness seem Ok 4. 
Storing, Analyzing, and Visualizing Data for this Project 4.1 Combining and Storing ###Code
twitter_archive_master = pd.merge(twitter_archive_cleaned, image_preds_cleaned, on='tweet_id', how='left')
twitter_archive_master = pd.merge(twitter_archive_master, tweets_df_cleaned, on='tweet_id', how='left')
twitter_archive_master.head()
twitter_archive_master.to_csv('twitter_archive_master.csv')
twitter_archive_cleaned.to_csv('twitter_archive_cleaned.csv')
image_preds_cleaned.to_csv('image_preds_cleaned.csv')
tweets_df_cleaned.to_csv('tweets_df_cleaned.csv')
###Output
_____no_output_____
###Markdown
4.2 Analyzing ###Code
twitter_archive_master.dogStage.value_counts()
import seaborn as sns
# Plot a bar graph of the number of occurrences of each dog stage
dog_count = twitter_archive_master.dogStage.value_counts()
dog_count = dog_count[1:]
plt.figure(figsize=(10,5))
sns.barplot(dog_count.index, dog_count.values, alpha=0.8)
plt.title('Most popular dog stages in twitter data')
plt.ylabel('Number of dogs in the same stage', fontsize=12)
plt.xlabel('dog stage', fontsize=12)
plt.show()
# Pupper is the most popular stage of dogs
###Output
_____no_output_____
###Markdown
Insight 1: Pupper is the most popular dog type ###Code
valueCounts = twitter_archive_master.p1.value_counts().reset_index()
valueCounts.rename(columns={'p1':'counts','index':'type'}, inplace=True)
# Create dogRating, which is numerator/denominator*10 (score out of 10)
twitter_archive_master['dogRating'] = twitter_archive_master['rating_numerator']/twitter_archive_master['rating_denominator']*10
twitter_archive_master.head()
rating = twitter_archive_master.groupby('p1').dogRating.mean().reset_index()
# Assign the sorted result back (sort_values does not sort in place by default)
rating = rating.sort_values('dogRating', ascending=False)
rating.rename(columns={'p1':'type'}, inplace=True)
valueCounts.head()
valueCounts.info()
rating.info()
rate_table = pd.merge(valueCounts, rating)
# Golden Retriever is the most popular breed
rate_table.head(20)
###Output
_____no_output_____
###Markdown
Insight 2:
Golden Retriever is the most popular breed with 139 counts ###Code
# Golden Retriever has the highest rating among the top 20 most popular breeds
rate_table[rate_table['counts']>20].sort_values('dogRating', ascending=False)
###Output
_____no_output_____
###Markdown
Install Packages ###Code
import tweepy
import pandas as pd
import numpy as np
import requests
import json
import io
from bs4 import BeautifulSoup
import os
import time
import re
###Output
_____no_output_____
###Markdown
Gather the Data ###Code
# Download twitter archive
twit_arch = pd.read_csv("twitter-archive-enhanced.csv", encoding='utf-8')
# Side note: you can read a csv file by feeding the url straight into the function
twit_arch.head(3)
# Get the image predictions
url = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv"
response = requests.get(url)
# 'wb' = open for binary writing, truncating the file first
with open("image-predictions.tsv", mode="wb") as file:
    file.write(response.content)
img_prd = pd.read_csv("image-predictions.tsv", sep="\t", encoding='utf-8')
img_prd.head(3)
consumer_key = 'Walk it, like I talk it'
consumer_secret = 'Walk it, like I talk it'
access_token = 'Walk it, like I talk it'
access_secret = 'Walk it, like I talk it'
# Create an OAuthHandler instance
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
# API instance
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser(), wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
# Create a list to save the data
tweet_data = []
# Keep track of errors
error_list = []
for tweet_id in twit_arch['tweet_id']:
    # Use a try/except construction to control errors
    try:
        # Return a single status specified by the ID parameter
        # Get the full untruncated Tweet text
        content = api.get_status(tweet_id, tweet_mode='extended')
        # Get time of tweet
        tweet_time = content['created_at']
        # Get retweet count
        tweet_retweet = content['retweet_count']
        # Get favorite count
        tweet_fav = content['favorite_count']
        # Get the amount of followers at time of tweet
        followers_count = content['user']['followers_count']
        # Get how many favorites the user had
        user_fav = content['user']['favourites_count']
        # Append the whole thing to the list
        tweet_data.append({'tweet_id': int(tweet_id),
                           'tweet_time': pd.to_datetime(tweet_time),
                           'retweet': int(tweet_retweet),
                           'favorites': int(tweet_fav),
                           'follows': int(followers_count),
                           'user_fav': int(user_fav)})
    except Exception as e:
        # In case of error add the tweet ID
        print(str(tweet_id) + " " + str(e))
        error_list.append(tweet_id)
# See length of successful results
print(len(tweet_data))
# See number of errors
print(len(error_list))
# Repeat the tweet gathering for the error list
error_tweet = []
for tweet_id in error_list:
    try:
        # Re-request the tweet that failed the first time
        content = api.get_status(tweet_id, tweet_mode='extended')
        # Get time of tweet
        tweet_time = content['created_at']
        # Get retweet count
        tweet_retweet = content['retweet_count']
        # Get favorite count
        tweet_fav = content['favorite_count']
        # Get the amount of followers at time of tweet
        followers_count = content['user']['followers_count']
        # Get how many favorites the user had
        user_fav = content['user']['favourites_count']
        # Append the whole thing to the list
        tweet_data.append({'tweet_id': int(tweet_id),
                           'tweet_time': pd.to_datetime(tweet_time),
                           'retweet': int(tweet_retweet),
                           'favorites': int(tweet_fav),
                           'follows': int(followers_count),
                           'user_fav': int(user_fav)})
    except Exception as e:
        # In case of error add the tweet ID
        print(str(tweet_id) + " " + str(e))
        error_tweet.append(tweet_id)
# Create the dataframe
json_tweets = pd.DataFrame(tweet_data, columns=['tweet_id', 'tweet_time', 'retweet', 'favorites', 'follows'])
# Save the dataFrame in a file
json_tweets.to_csv('tweet_json.txt', encoding='utf-8', index=False)
json_tweets.head(4)
json_tweets.tail(4)
###Output
_____no_output_____
###Markdown
Assess ###Code
# Take a look at the archive
twit_arch.head(10)
twit_arch.tail(10)
# Check size of dataframe
twit_arch.shape
twit_arch.info()
twit_arch['rating_numerator'].min()
twit_arch['rating_numerator'].max()
twit_arch['rating_denominator'].min()
twit_arch['rating_denominator'].max()
###Output
_____no_output_____
###Markdown
Quality issues from Twitter Archive:
1. in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, and retweeted_status_user_id should be integers.
2. timestamp should have a datetime type
3. Numerator/Denominator values look incorrect.
4. The dog stage columns should have NaN instead of None
5. Some tweets have no dog names (a, the) and might be retweets ###Code
# Look at the image predictions dataframe
img_prd.head(20)
img_prd.info()
# Note how the number of images does not match the number of tweets; some tweets don't have images.
img_prd.shape
# Check if all tweet ids are unique
img_prd['tweet_id'].nunique()
# Check if all jpegs are unique
img_prd['jpg_url'].nunique()
img_prd['img_num'].max()
###Output
_____no_output_____
###Markdown
Quality issues from Image Predictions
6. 2075 rows in images versus 2356 in the twitter archive
7. Some jpgs are repeated ###Code
json_tweets.head(20)
json_tweets.tail(20)
json_tweets.info()
# See if there are any duplicates in the json file:
json_tweets.shape
json_tweets['tweet_id'].nunique()
###Output
_____no_output_____
###Markdown
Quality issues in JSON tweets
8. Some tweets are repeated. Overall Tidiness
1. All tables should be made into one dataset
2.
Dog stages should be in one column Clean Create Master Dataframe ###Code
# Merge the dataframes into one
# Use tweet_id as the primary key
# Combine twitter archive with image prediction
df_master = pd.merge(twit_arch, img_prd, how='left', on=['tweet_id'])
# Combine master dataframe with JSON
df_master = pd.merge(df_master, json_tweets, how='left', on=['tweet_id'])
df_master.head(3)
df_master.info()
df_master.shape
###Output
_____no_output_____
###Markdown
Drop Duplicates and Tweets with No Pictures ###Code
# Drop the duplicates
df_master = df_master.drop_duplicates()
# Delete tweets without dog pictures
df_master = df_master.dropna(subset=['jpg_url'])
# Tweet time is in datetime format already, so we can drop the redundant column:
df_master = df_master.drop('timestamp', 1)
# Drop unnecessary columns
df_master = df_master.drop('retweeted_status_id', 1)
df_master = df_master.drop('retweeted_status_user_id', 1)
df_master = df_master.drop('retweeted_status_timestamp', 1)
list(df_master)
###Output
_____no_output_____
###Markdown
Melt Dog Names ###Code
# Columns to melt
melt_col = ['doggo', 'floofer', 'pupper', 'puppo']
# Create a list using a list comprehension: [ expression for item in list if conditional ]
stay_col = [x for x in df_master.columns.tolist() if x not in melt_col]
print(stay_col)
# Melt the dog stages
df_master = pd.melt(df_master, id_vars=stay_col, value_vars=melt_col, var_name='stages', value_name='dog_stage')
df_master.tail()
# Drop duplicates
df_master = df_master.sort_values('dog_stage').drop_duplicates('tweet_id', keep='last')
# Drop dog_stage column
df_master = df_master.drop('dog_stage', 1)
# Check that we only have 4 dog stages
print(np.unique(df_master['stages']))
# Define single column for first true dog prediction
dog_prediction = []
# Define column for confidence level
conf_level = []
# Define a function to apply on the dataframe
def dog_prediction_melt(df):
    if df['p1_dog'] == True:
        dog_prediction.append(df['p1'])
        conf_level.append(df['p1_conf'])
elif df['p2_dog'] == True: dog_prediction.append(df['p2']) conf_level.append(df['p2_conf']) elif df['p3_dog'] == True: dog_prediction.append(df['p3']) conf_level.append(df['p3_conf']) else: dog_prediction.append('MysteryDog') conf_level.append(0) df_master.apply(dog_prediction_melt, axis = 1) df_master['dog_prediction'] = dog_prediction df_master['conf_level'] = conf_level # Test list(df_master) #Delete the unused columns df_master = df_master.drop(['img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], 1) list(df_master) #We can drop additional columns that we don't need df_master = df_master.drop(['in_reply_to_status_id', 'in_reply_to_user_id'], 1) ###Output _____no_output_____ ###Markdown Tweet Source ###Code #Use the findall from regular expressions to clean up the source column #The example below shows that the tweet was sent from an iphone df_master['source'][5] #The source is found between the shape >...< df_master['source'] = df_master['source'].apply(lambda x: re.findall(r'>(.*)<', x)[0]) df_master.head(3) ###Output _____no_output_____ ###Markdown Dog Score ###Code #Take a random tweet to see the dog rating format in the text print(df_master['text'][10]) #Extract dog ratings ratings = df_master['text'].apply(lambda x: re.findall(r'(\d+(\.\d+)|(\d+))\/(\d+0)', x)) #Print a random rating to see format print(ratings[2]) #Column for numerator numerator = [] #Column for denominator denominator = [] #Column for how many dogs dog_count = [] for rating in ratings: #Start with rateless tweets if len(rating) == 0: numerator.append('NaN') denominator.append('NaN') #Assume there is a picture to the tweet dog_count.append(1) # Tweets with one rate elif len(rating) == 1: numerator.append((float(rating[0][0]) / (float(rating[0][-1])/10))) denominator.append(float(rating[0][-1])) dog_count.append(float(rating[0][-1]) / 10) elif len(rating) > 1 and rating[0][-1] == '10': rating_total = 0 for i in range(len(rating)): rating_total = 
rating_total + float(rating[i][0]) total_avg = (rating_total / len(rating)) numerator.append(total_avg) denominator.append(10) dog_count.append(len(rating)) else: #Dump errors numerator.append('error') denominator.append('error') dog_count.append('error') #Add the arrays as new columns df_master['new_numerator'] = numerator df_master['new_denominator'] = denominator df_master['dog_count'] = dog_count #List the values df_master['new_numerator'].value_counts() #Print out the errors print(df_master[df_master.new_numerator == 'error']['text']) #Take a closer look at the errors print(df_master['text'][3044]) print(df_master['text'][3078]) ###Output Happy 4/20 from the squad! 13/10 for all https://t.co/eV1diwds8a This is Bluebert. He just saw that both #FinalFur match ups are split 50/50. Amazed af. 11/10 https://t.co/Kky1DPG4iq ###Markdown We can see that the algorithm got high on 420, and that more than one fraction causes an error.At this stage it is easier to fix the issue manuallyuse the iloc function to avoid chain indexing ###Code #Manually check entries print(df_master['new_numerator'][3044]) print(df_master['new_denominator'][3044]) print(df_master['dog_count'][3044]) print(df_master['new_numerator'][3078]) print(df_master['new_denominator'][3078]) print(df_master['dog_count'][3078]) tweet_3044 = df_master[df_master.new_numerator == 'error']['tweet_id'][3044] tweet_3078 = df_master[df_master.new_numerator == 'error']['tweet_id'][3078] df_master.loc[df_master['tweet_id'] == tweet_3044, 'new_numerator'] = 13 df_master.loc[df_master['tweet_id'] == tweet_3044, 'new_denominator'] = 10 df_master.loc[df_master['tweet_id'] == tweet_3044, 'dog_count'] = 4 df_master.loc[df_master['tweet_id'] == tweet_3078, 'new_numerator'] = 11 df_master.loc[df_master['tweet_id'] == tweet_3078, 'new_denominator'] = 10 df_master.loc[df_master['tweet_id'] == tweet_3078, 'dog_count'] = 1 print(df_master['new_numerator'][3044]) print(df_master['new_denominator'][3044]) 
print(df_master['dog_count'][3044]) print(df_master['new_numerator'][3078]) print(df_master['new_denominator'][3078]) print(df_master['dog_count'][3078]) #Drop the old columns now that we don't need them df_master = df_master.drop(['rating_numerator', 'rating_denominator'], 1) #Rename the columns to keep things tidy df_master.rename(columns = {'new_numerator': 'rating_numerator', 'new_denominator': 'rating_denominator'}, inplace = True) ###Output _____no_output_____ ###Markdown Get Dog Names ###Code names = [] #We are assuming that the tweets begin with a capital case letter for text in df_master['text']: #This is "Dog", so the name will be the third word if text.startswith('This is ') and re.match(r'[A-Z].*', text.split()[2]): names.append(text.split()[2].strip(',').strip('.')) elif text.startswith('Here is ') and re.match(r'[A-Z].*', text.split()[2]): names.append(text.split()[2].strip(',').strip('.')) #Meet "Dog", so the name will be the second word elif text.startswith('Meet ') and re.match(r'[A-Z].*', text.split()[1]): names.append(text.split()[1].strip(',').strip('.')) #Say hello to "Dog", so name will be fourth word elif text.startswith('Say hello to ') and re.match(r'[A-Z].*', text.split()[3]): names.append(text.split()[3].strip(',').strip('.')) #Also fourth word elif text.startswith('Here we have ') and re.match(r'[A-Z].*', text.split()[3]): names.append(text.split()[3].strip(',').strip('.')) #The dog name should come right after named elif 'named' in text and re.match(r'[A-Z].*', text.split()[text.split().index('named') + 1]): names.append(text.split()[text.split().index('named') + 1].strip(',').strip('.')) # No name specified or other style else: names.append('NaN') df_master['dog_name'] = names #Take a look at the dog names we failed to extract df_master[df_master.dog_name == 'NaN'] df_master[df_master['tweet_id'] == 816829038950027264] ###Output _____no_output_____ ###Markdown Looking at the results above, most of the names were caught correctly. 
However, the algorithm doesn't account for all tweet types, for example the one above that begins with "RT" ###Code
# Drop the old name column
df_master = df_master.drop(['name'], 1)
list(df_master)
###Output
_____no_output_____
###Markdown
Dog Gender Take all the possible gender pronouns from here: https://en.wikipedia.org/wiki/English_personal_pronouns Omit neuter/epicene genders. ###Code
# Male dogs
male = ['He', 'he', 'Him', 'him', 'His', 'his', 'Himself', 'himself', "He's", "he's", 'boy']
# Female dogs
female = ['She', 'she', 'Her', 'her', 'Hers', 'hers', 'Herself', 'herself', "She's", "she's", 'girl']
dog_gender = []
for text in df_master['text']:
    if any(map(lambda v: v in male, text.split())):
        dog_gender.append('male')
    elif any(map(lambda v: v in female, text.split())):
        dog_gender.append('female')
    else:
        dog_gender.append('NaN')
df_master['dog_gender'] = dog_gender
###Output
_____no_output_____
###Markdown
Data types ###Code
df_master.info()
df_master.dtypes
# Export to csv file
df_master.to_csv('twitter_archive_master.csv', index=False, encoding='utf-8')
###Output
_____no_output_____
###Markdown
Dogs on Twitter Table of Contents
Introduction
Gathering Data
Assess and Clean for Tidiness Issues
Assess and Clean for Quality Issues
Merge and Save
Visualization and Analysis
Introduction In this project I will analyze data about dogs on Twitter by using data from three different sources:
The WeRateDogs Twitter archive, which is included in this folder as a csv file
Image Predictions provided by Udacity's servers and downloaded as a tsv file using the Requests library
Retweet and Favorite counts pulled from the Twitter API using Python's Tweepy library
After pulling the data, I will try to make the data tidier, and then assess and clean the data for its quality using both visual and programmatic assessments.
Once we have data that is easy to work with, I will do a brief analysis to look at the most common names of dogs, the breeds that get the most votes, and what ways people introduce their dogs on Twitter. ###Code import pandas as pd import requests import tweepy import json ###Output _____no_output_____ ###Markdown GatherLet's take a look at the information we have imported directly from files. ###Code twitter_arc = pd.read_csv('twitter-archive-enhanced.csv') twitter_arc.head(3) ###Output _____no_output_____ ###Markdown Now we also want to pull image-predictions from a website, so we are going to need the requests library to bring that in, and then we will save it to a tsv file that we can read to check that everything is working properly ###Code r = requests.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv') r.headers['content-type'] with open('image_predictions.tsv', 'wb') as file: file.write(r.content) image_pred = pd.read_csv('image_predictions.tsv', sep='\t') image_pred.head(3) ###Output _____no_output_____ ###Markdown This looks okay, so finally let's try to pull some data from Twitter directly using the Tweepy API. 
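Before wiring up Tweepy, it is worth keeping credentials out of the notebook itself. A minimal sketch of loading them from environment variables — the variable names below are my own assumption, not an official convention:

```python
import os

def load_twitter_keys():
    """Read Twitter API credentials from the environment.

    The four variable names are hypothetical; use whatever names
    match your own setup. Raises if any of them is unset.
    """
    names = ["TWITTER_CONSUMER_KEY", "TWITTER_CONSUMER_SECRET",
             "TWITTER_ACCESS_TOKEN", "TWITTER_ACCESS_SECRET"]
    keys = {name: os.environ.get(name) for name in names}
    missing = [name for name, value in keys.items() if not value]
    if missing:
        raise RuntimeError("Missing credentials: " + ", ".join(missing))
    return keys
```

The returned dict can then feed `tweepy.OAuthHandler(...)` in place of hardcoded strings.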
###Code # Authentication Details: load personal API keys (replaced with placeholders) #deleted for security # variables for Twitter API connection auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit = True) #let's make sure this is correctly identifying me api.verify_credentials(include_email=True)._json['screen_name'] i = 0 with open('tweet_json.txt', 'w', encoding='utf8') as f: for tweet_id in twitter_arc['tweet_id']: i += 1 try: tweet = api.get_status(tweet_id, tweet_mode='extended') json.dump(tweet._json, f) f.write('\n') except: print('exception : ', tweet_id) continue print('done') tweets_data = [] tweet_file = 'tweet_json.txt' with open(tweet_file, "r") as tweet_file: for line in tweet_file: try: tweets_data.append(json.loads(line)) except: continue tweet_file.close() print('done') tweet_info = pd.DataFrame() tweet_info["tweet_id"] = list(map(lambda tweet: tweet["id"], tweets_data )) tweet_info['retweet_count'] = list(map(lambda tweet: tweet["retweet_count"], tweets_data)) tweet_info["favorite_count"] = list(map(lambda tweet: tweet["favorite_count"], tweets_data)) tweet_info.head(3) ###Output _____no_output_____ ###Markdown Assess We have data coming from three places now-- tweet_info, image_pred, and tweet_json. Let's start by a visual assessment of each. ###Code tweet_info image_pred twitter_arc ###Output _____no_output_____ ###Markdown Let's do some **programatic assessment** as well ###Code print("Twitter Archive Rows:", twitter_arc.shape[0], " \nImage Prediction Rows:", image_pred.shape[0], " \nTweet Info Rows:", tweet_info.shape[0]) ###Output Twitter Archive Rows: 2356 Image Prediction Rows: 2075 Tweet Info Rows: 2327 ###Markdown These should all be the same, so it looks like there are some discrepencies in our data- likely some duplicates somewhere. 
###Code twitter_arc['rating_denominator'].value_counts() ###Output _____no_output_____ ###Markdown It looks like the denominator is 10 in the vast majority of cases ###Code twitter_arc['rating_numerator'].describe() ###Output _____no_output_____ ###Markdown It looks like numerators lie mostly between 10 and 12 have very similar results, mostly staying between 10 and 12 with a few outliers. In this case, it might be interesting to group these into below 10, between 10 and 12, and above 12 with category names of "low", "average", and "high" to see if anything interesting is happening there. ###Code twitter_arc.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null object 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 doggo 2356 non-null object 14 floofer 2356 non-null object 15 pupper 2356 non-null object 16 puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown I checked the info of Twitter Archives and found that in_reply_to_status_id and in_reply_to_user_id both only had 73 rows of 2333 populated, so those don't seem useful. ###Code twitter_arc['source'].value_counts() ###Output _____no_output_____ ###Markdown Assess and Clean for Tidiness IssuesI will include the results of the assessment and how I cleaned the data, followed by the code that implements this. For scope, I will limit the issues to the required assessment number of two. 
Before we begin I will make a copy of each dataset. Tidiness Issues Dog Categories in Image Prediction Dog Stages in Twitter Archive ###Code twitter_arc_clean = twitter_arc.copy() image_pred_clean = image_pred.copy() tweet_info_clean = tweet_info.copy() ###Output _____no_output_____ ###Markdown Dog Categories in Image PredictionInstead of having six columns with the three most likely dog types and their confidence levels in different columns, let's pare this down to the most likely dog and the confidence level we have for it being that dog separated into two columns. This fulfills the tidiness criteron of having one variable for one column- I want one confidence result, not six. ###Code # For each row find the breed with the highest confidence (p1_dog, p2_dog, p3_dog are true false values) # Append confidence score and breed to arrays breed = [] confidence = [] def get_breeds(data): if data.p1_dog: breed.append(data.p1) confidence.append(data.p1_conf) elif data.p2_dog: breed.append(data.p2) confidence.append(data.p2_conf) elif data.p3_dog: breed.append(data.p3) confidence.append(data.p3_conf) else: breed.append('None') confidence.append(0) image_pred_clean.apply(get_breeds,axis =1) # Add the new rows and drop the excess image_pred_clean['dog_breed'] = breed image_pred_clean['confidence'] = confidence image_pred_clean.drop(columns = ['p1', 'p1_dog', 'p1_conf' , 'p2', 'p2_dog', 'p2_conf' , 'p3', 'p3_dog', 'p3_conf'],axis=1, inplace =True) image_pred_clean.head(3) ###Output _____no_output_____ ###Markdown Dog stages in Twitter ArchiveThese dogs are divided into 'doggo,' 'pupper,' 'floofer,' and 'puppo.' Instead of having this information be split into multiple columns, let's make this one column that is called dog_stage. 
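`pd.melt` can be confusing the first time; a toy example with invented data shows the wide-to-long reshape that the next cell performs on the real archive:

```python
import pandas as pd

# Two fake tweets, each with one-hot style stage columns (invented data)
toy = pd.DataFrame({
    "tweet_id": [1, 2],
    "doggo": ["doggo", "None"],
    "pupper": ["None", "pupper"],
})

# melt turns the stage columns into (variable, value) pairs: one row per
# tweet per stage column, so 2 tweets x 2 stage columns = 4 rows
long_df = pd.melt(toy, id_vars=["tweet_id"], value_vars=["doggo", "pupper"],
                  var_name="stages", value_name="dog_stage")
print(long_df.shape)  # (4, 3)
```

After the melt, dropping the rows whose `dog_stage` is "None" (keeping one row per tweet) yields the single-column representation we want.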
###Code STAGES = ['doggo', 'pupper','floofer', 'puppo'] COLUMNS = [i for i in twitter_arc_clean.columns.tolist() if i not in STAGES] twitter_arc_clean = pd.melt(twitter_arc_clean, id_vars = COLUMNS, value_vars = STAGES, var_name = 'stages', value_name = 'dog_stage') ###Output _____no_output_____ ###Markdown Now let's clean up that last column ###Code twitter_arc_clean = twitter_arc_clean.drop('stages', 1) ###Output _____no_output_____ ###Markdown Assess and Clean for Quality IssuesAs I assess, I will add to the following list of quality issues immediately following this header. Each bullet item will link to the later assessment where I will include the results of the assessment and how I cleaned the data, followed by the code that implements it all. For scope, I will limit the issues to the required number of eight. Quality Issues Ratings are difficult to assess(2 changes) Discrepancies in Dataframes In Reply To Statuses Almost Empty in Twitter Archive Retweeted Statuses Almost Empty in Twitter Archive Dog Breeds in Image Prediction Sources in Twitter Archive Names in Twitter Archive Text in Twitter Archive (2 changes) Rating columns in Twitter ArchiveAlmost all of the values in the rating_denominator column were 10 so we assumed that the outliers were typos or misunderstandings, and just dropped them. Then we looked at the quartiles in value counts of the numerator and found that almost all of them ranged between 10 and 12, so we just split this group into rating categories of "low," "average," and "high." We will drop the denominator column altogether and only use this numerator category as we proceed. ###Code twitter_arc_clean['rating_denominator'].value_counts() ###Output _____no_output_____ ###Markdown It looks like the denominator is 10 in the vast majority of cases, so we can just drop all of the outliers and assume there was some typo or misreading in the scraping for those. 
###Code indices = twitter_arc[twitter_arc['rating_denominator'] != 10].index twitter_arc_clean.drop(indices, inplace = True) twitter_arc_clean['rating_denominator'].value_counts() ###Output _____no_output_____ ###Markdown It looks like numerators lie mostly between 10 and 12 have very similar results, mostly staying between 10 and 12 with a few outliers. In this case, it might be interesting to group these into below 10, between 10 and 12, and above 12 with category names of "low", "average", and "high" to see if anything interesting is happening there. ###Code ratings_bins = [-1, 10, 12, 1777] ratings_names = ['low', 'average', 'high'] twitter_arc_clean['rating_bin'] = pd.cut(twitter_arc['rating_numerator'], ratings_bins, labels=ratings_names) twitter_arc_clean['rating_bin'].value_counts() ###Output _____no_output_____ ###Markdown This makes it very clear that there are diffrerences in dog rating. Before our dog ratings were basically unreadable, but this gives us a good picture of where ratings fall in comparison with the other pups. Now that we have the column we want, let's drop the two extra columns. ###Code cols = ['rating_denominator', 'rating_numerator'] twitter_arc_clean.drop(cols, axis=1, inplace = True) twitter_arc_clean.head(3) ###Output _____no_output_____ ###Markdown Discrepencies in DataframesI checked to see if each of the three dataframes were the same size, but they were not which means there are some discrepencies in our data frames. I'm mostly worried about duplicates of import information so I looked at the images that are the same in image_predictions (there were 66), text duplications in twitter_archives (none) and tweet ids from Tweepy (also none). With this I decided to just delete the 66 duplicates from the image predicions. 
###Code print("Twitter Archive Rows:", twitter_arc_clean.shape[0], " \nImage Prediction Rows:", image_pred_clean.shape[0], " \nTweet Info Rows:", tweet_info_clean.shape[0]) image_pred_clean[image_pred_clean['jpg_url'].duplicated()==True]['jpg_url'].value_counts() twitter_arc_clean[twitter_arc_clean['text'].duplicated()==True]['text'].value_counts() tweet_info_clean[tweet_info_clean['tweet_id'].duplicated()==True]['tweet_id'].value_counts() #clear it out image_pred_clean = image_pred_clean[image_pred_clean['jpg_url'].duplicated() ==False] image_pred_clean[image_pred_clean['jpg_url'].duplicated()==True]['jpg_url'].value_counts() ###Output _____no_output_____ ###Markdown In Reply To Statuses Almost EmptyI checked the info of Twitter Archives and found that in_reply_to_status_id and in_reply_to_user_id both only had 73 rows of 2333 populated, so I decided to drop them ###Code twitter_arc_clean.info() twitter_arc_clean.drop(columns = ['in_reply_to_status_id', 'in_reply_to_user_id'],axis=1, inplace =True) twitter_arc_clean.head(3) ###Output _____no_output_____ ###Markdown Retweeted Statuses Almost Empty in Twitter ArchiveI checked the info of Twitter Archives and found that retweet statuses had only 180 rows of 2333 populated, so I decided to drop them all as well. ###Code twitter_arc_clean.drop(columns = ['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'],axis=1, inplace =True) twitter_arc_clean.head(3) ###Output _____no_output_____ ###Markdown Dog Breeds in Image PredictionThis one is mostly my own fault. I created this tidier column to show the dog breeds, but I didn't normalize the names. So here I went in and took out all of the dashes in between the breed names and made them all title cased. I also noticed that there are a couple of types of retrievers and terriers in the breeds I could see so I checked to see if there were anymore larger-type breeds we could split them out into. 
It looks like Poodles and possibly Spaniels can also be split into subcategories. ###Code image_pred_clean.head(3) breeds = image_pred_clean.dog_breed.value_counts() breeds.head(30) image_pred_clean['dog_breed_proper'] = image_pred_clean.dog_breed.str.replace('_', ' ').str.title() image_pred_clean['dog_breed_proper'].value_counts().head(10) image_pred_clean['dog_breed_proper_last_word'] = image_pred_clean.dog_breed_proper.str.split(' ').str[-1] image_pred_clean['dog_breed_proper_last_word'].value_counts().head(10) ###Output _____no_output_____ ###Markdown Sources in Twitter ArchiveThe sources looked a bit odd visually, so I did some programmatic assessment and found that there are four sources for these tweets: Twitter for iPhone, Vine, the Twitter web client, and TweetDeck (which appears to be a social media dashboard client for managing Twitter accounts). I decided to put them into more readable values in our dataframe: iPhone, Vine, Web, and TweetDeck. ###Code twitter_arc_clean['source'].value_counts() ###Output _____no_output_____ ###Markdown It looks like there are only four sources so let's put them into a more readable form in our dataframe. 
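The row-wise mapping can also be done in a vectorized way with `Series.map` over a dict of the raw source strings, with `fillna` playing the role of the `else` branch. A self-contained sketch (anchor strings taken from the archive; short labels chosen here for readability):

```python
import pandas as pd

# Map the raw <a href=...> source strings to short labels in one pass.
# Unmatched values become NaN, which fillna turns into an "Error" marker.
source_map = {
    '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>': "iPhone",
    '<a href="http://vine.co" rel="nofollow">Vine - Make a Scene</a>': "Vine",
    '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>': "Web",
    '<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>': "TweetDeck",
}
demo = pd.DataFrame({"source": list(source_map) + ["<a>something else</a>"]})
demo["sources"] = demo["source"].map(source_map).fillna("Error")
print(demo["sources"].tolist())
```

This avoids a Python-level loop over rows and keeps the mapping in one place.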
###Code # Map each tweet's raw source string to a short, readable label sources = [] def get_sources(data): if data.source == '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>': sources.append("iPhone") elif data.source == '<a href="http://vine.co" rel="nofollow">Vine - Make a Scene</a>': sources.append("Vine") elif data.source == '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>': sources.append("Web") elif data.source == '<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>': sources.append("TweetDeck") else: sources.append('Error') twitter_arc_clean.apply(get_sources,axis =1) # Add the new column and drop the original twitter_arc_clean['sources'] = sources twitter_arc_clean.drop(columns = ['source'],axis=1, inplace =True) twitter_arc_clean.head(3) ###Output _____no_output_____ ###Markdown Names in Twitter ArchiveLet's make all of the names lowercase and see if we can't do a better assessment on common names. ###Code twitter_arc_clean['normalized_name'] = twitter_arc_clean['name'].str.lower() names = twitter_arc_clean['normalized_name'].value_counts() names.head(30) ###Output _____no_output_____ ###Markdown It looks like this might almost be an exponential drop in how common names are, which is interesting. I'm definitely interested in plotting that out to see it visually. I'm a little concerned with our top name being "a" and also seeing "the" and "an" in the list. It makes me worry that if we look at the bottom few names we'll just get a list of random words that were scraped incorrectly. Let's take a look: ###Code names = twitter_arc_clean['normalized_name'].value_counts(ascending=True) names.head(20) ###Output _____no_output_____ ###Markdown This looks fine to me. There's only one "word" that doesn't look like a name ("incredibly") which seems like an okay rate of error. 
Let's drop the original names column and move on. ###Code import numpy as np cols = ['name'] twitter_arc_clean.drop(cols, axis=1, inplace = True) twitter_arc_clean.head(3) ###Output _____no_output_____ ###Markdown Text in Twitter ArchiveI sampled a few rows of text from the Twitter Archive dataframe so that I could get an idea of similarities. It looked like I was seeing a lot of tweets starting with "This is..." so I decided to check that percentage and found that it was almost half (49%). I also saw many starting with "Here.." but it turned out to only be about 4% of starters. I realized I could just check the value counts of first words in the text column to figure out which are the most frequent. My first discovery ("This") is by far the most frequent with 1200 counts, with my second guess ("Here") coming in at 5th with only 49 counts. I decided to also check the second to last words (the last is usually the image or a link) and found that almost all of them are ratings, with 10/10, 10/11, 10/12, and 10/13 all in the top five most frequent endings. Our only outlier in the top five is "af" which comes in as the second most popular ending. 
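The share of tweets starting with a given phrase can also be computed directly as a boolean mean, since `str.match` returns True/False per row. A toy sketch with made-up tweet texts:

```python
import pandas as pd

# str.match anchors the pattern at the start of each string; the mean of
# the resulting boolean Series is the fraction of matching rows.
texts = pd.Series([
    "This is Doug. 12/10",
    "This is Penny. 13/10",
    "Meet Max. 11/10",
    "Here we have a pupper. 10/10",
])
share = texts.str.match("This is").mean()
print(f"{share:.0%}")  # 50%
```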
###Code twitter_arc_clean.text.sample(20) this_is = twitter_arc_clean.text[twitter_arc_clean.text.str.match('This is.*')] print(this_is.shape[0], " / ", twitter_arc_clean.text.shape[0], " which is a fraction of ", this_is.shape[0]/twitter_arc_clean.text.shape[0] ) here = twitter_arc_clean.text[twitter_arc_clean.text.str.match('Here.*')] print(here.shape[0], " / ", twitter_arc_clean.text.shape[0], " which is a fraction of ", here.shape[0]/twitter_arc_clean.text.shape[0] ) meet = twitter_arc_clean.text[twitter_arc_clean.text.str.match('Meet.*')] print(meet.shape[0], " / ", twitter_arc_clean.text.shape[0], " which is a fraction of ", meet.shape[0]/twitter_arc_clean.text.shape[0] ) ###Output 858 / 9309 which is a fraction of 0.09216886883660973 ###Markdown At about this point I realized I could just check the value counts of first words in the text column to figure out which are the most frequent. Turns out my first discovery ("This") is by far the most frequent with 1200 counts, with my second guess ("Here") coming in at 5th with only 49 counts. ###Code twitter_arc_clean['first_word'] = twitter_arc_clean.text.str.split(' ').str[0] twitter_arc_clean['first_word'].value_counts() twitter_arc_clean['last_word'] = twitter_arc_clean.text.str.split(' ').str[-2] twitter_arc_clean['last_word'].value_counts() ###Output _____no_output_____ ###Markdown Merge and Save ###Code twitter_archive_clean = pd.merge(twitter_arc_clean, tweet_info_clean , how = 'left' , on = 'tweet_id') master_dataset = pd.merge(twitter_archive_clean, image_pred_clean , how = 'inner' , on = 'tweet_id') master_dataset.info() master_dataset.to_csv("twitter_archive_master.csv", encoding='utf-8') print("Saved-- twitter_archive_master.csv") ###Output Saved-- twitter_archive_master.csv ###Markdown Visualization and Analysis ###Code import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown What are the most common dog names on Twitter? 
The most common name is Charlie, and in the top 20 names there are almost twice as many male names as female names. ###Code # rank the names frequency in a descending order, take out the dumb ones (none, the, an , and a) names = master_dataset.query('normalized_name != "none" & normalized_name != "the" & normalized_name != "a" & normalized_name != "an"').normalized_name.value_counts().sort_values(ascending=False)[:20] fig, ax = plt.subplots(figsize = (6,6)) ax = plt.axes() ax.set_title("Top Dog Names", fontsize=20) plt.grid(color='w', linestyle='solid') names.plot(kind="bar",label="number of dogs", width=.85,color=['#ffa600'], ylim=[0,50]) ###Output _____no_output_____ ###Markdown It looks like the most popular dog name is Charlie, followed closely by Lucy, Cooper, and Oliver. In the top 20 it looks like there are almost twice as many male names as female names (note: I didn't count Koda in this assessment as I am not sure which gender it belongs to). I'm interested in where this curve bottoms out. ###Code names = master_dataset.query('normalized_name != "none" & normalized_name != "the" & normalized_name != "a" & normalized_name != "an"').normalized_name.value_counts().sort_values(ascending=False)[:500] fig, ax = plt.subplots(figsize = (6,6)) ax = plt.axes(facecolor='white') ax.set_title("Dog Name Distribution", fontsize=20) plt.grid(color='w', linestyle='solid') names.plot(kind="line", color=['purple'], ylim=[0,50]) ###Output _____no_output_____ ###Markdown How do people introduce their dog?In our exploration we learned that "This" (as in "This is...") accounts for almost half of the introductions to people's dogs. I'm taking that out as an outlier so that we can better visualize how the rest of the Twitter population introduces their pups. It looks like the rest often say "Meet" or "Say" (likely "Say hello"). 
There are also a large number of people who say "I" or "We" and then there are groups that either plead with the audience to greet their dog ("Please") or maybe tell a quick story about the photo ("When"). ###Code # rank the names frequency in a descending order, take out the top one for scale names = master_dataset.query('first_word != "This"').first_word.value_counts().sort_values(ascending=False)[:10] fig, ax = plt.subplots(figsize = (6,6)) ax = plt.axes() ax.set_title("Top Dog Intros", fontsize=20) plt.grid(color='w', linestyle='solid') names.plot(kind="bar", width=.85,color=['#ffa600'], ylim=[0,1000]) ###Output _____no_output_____ ###Markdown Which types of dogs are the highest and lowest rated?For this one I dug into the ratings bins we made and compared that to the breeds of dogs. In our top rated dogs and average rated dogs, the most common breed was the Golden Retriever, but that breed didn't even show up in the least rated dogs. People must love Golden Retrievers. Our most common breed in the bottom rated dogs was the Chihuahua which only got about 9% of the vote in both the top rated and average rated groups. It definitely looks like breed has some influence on the rating of the dogs on Twitter. 
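The breed-versus-rating comparison can also be done in a single table with `pd.crosstab`, normalising within each rating bin so the breed shares are directly comparable across bins. A sketch on toy data only (the real columns are `rating_bin` and `dog_breed_proper`):

```python
import pandas as pd

# crosstab with normalize="columns" gives, for each rating bin, the
# fraction of dogs of each breed in that bin.
demo = pd.DataFrame({
    "rating_bin": ["high", "high", "high", "low", "low", "average"],
    "dog_breed":  ["Golden Retriever", "Golden Retriever", "Chihuahua",
                   "Chihuahua", "Pug", "Golden Retriever"],
})
table = pd.crosstab(demo["dog_breed"], demo["rating_bin"], normalize="columns")
print(table.round(2))
```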
###Code # rank the names frequency in a descending order, take out the top one for scale names = master_dataset.query('rating_bin == "high" & dog_breed_proper != "None"').dog_breed_proper.value_counts().sort_values(ascending=False)[:10] fig, ax = plt.subplots(figsize = (6,6)) ax = plt.axes() ax.set_title("Favorite Dog Breeds on Twitter", fontsize=20) plt.grid(color='white', linestyle='solid') ax.set_ylabel('') plt.axis('off') names.plot(kind="pie",autopct='%1.1f%%', legend=False, explode=(names == max(names)) * 0.1, colors=['#86C1C4', '#ffc6ff', '#bdb2ff', '#a0c4ff', '#9bf6ff', '#caffbf', '#fdffb6', '#ffd6a5', '#f59ae0', '#ffadad']) # rank the names frequency in a descending order, take out the top one for scale names = master_dataset.query('rating_bin == "low" & dog_breed_proper != "None"').dog_breed_proper.value_counts().sort_values(ascending=False)[:10] fig, ax = plt.subplots(figsize = (6,6)) ax = plt.axes() ax.set_title("Least Favorite Dog Breeds on Twitter", fontsize=20) plt.grid(color='w', linestyle='solid') ax.set_ylabel('') plt.axis('off') names.plot(kind="pie",autopct='%1.1f%%', legend=False, explode=(names == max(names)) * 0.1, colors=['#86C1C4', '#ffc6ff', '#bdb2ff', '#a0c4ff', '#9bf6ff', '#caffbf', '#fdffb6', '#ffd6a5', '#f59ae0', '#ffadad']) # rank the names frequency in a descending order, take out the top one for scale names = master_dataset.query('rating_bin == "average" & dog_breed_proper != "None"').dog_breed_proper.value_counts().sort_values(ascending=False)[:10] fig, ax = plt.subplots(figsize = (6,6)) ax = plt.axes() ax.set_title("Dog Types in Average Dogs", fontsize=20) plt.grid(color='w', linestyle='solid') ax.set_ylabel('') plt.axis('off') names.plot(kind="pie",autopct='%1.1f%%', legend=False, explode=(names == max(names)) * 0.1, colors=['#86C1C4', '#ffc6ff', '#bdb2ff', '#a0c4ff', '#9bf6ff', '#caffbf', '#fdffb6', '#ffd6a5', '#f59ae0', '#ffadad']) ###Output _____no_output_____ ###Markdown Project: Wrangling and Analyze Data Register with 
Twitter and apply for enhanced developer rightsEstablish connectivity with Twitter ###Code import pandas as pd import numpy as np import tweepy from tweepy import OAuthHandler import json from timeit import default_timer as timer import datetime import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns sns.set(style="darkgrid") consumer_key = 'XXXX' consumer_secret = 'XXXX' access_token = 'XXX-XXX' access_secret = 'XXXX' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) try: api.verify_credentials() print("Authentication OK") except Exception as e: print("Error during authentication", e) ###Output Authentication OK ###Markdown Gather Data Read the tweets archive data file & image prediction file from provided URLAfter writing the data to csv file, read the file as dataframe and then pull the tweets data (as json) from twitter using tweepy (Twitter API). Use tweet_id to pull the data using tweepy and write to tweet_json.txt file. 
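Since tweet_json.txt is written as JSON Lines (one tweet object per line), it can be read back either with `pd.read_json(..., lines=True)` or line by line with `json.loads`; the latter makes it easy to skip a malformed record instead of aborting the whole load. A small sketch on an in-memory sample (only the fields kept later are shown):

```python
import io
import json
import pandas as pd

# Parse a JSON Lines stream record by record, skipping unparseable lines.
raw = io.StringIO(
    '{"id": 1, "retweet_count": 5, "favorite_count": 9}\n'
    'not valid json\n'
    '{"id": 2, "retweet_count": 7, "favorite_count": 11}\n'
)
records = []
for line in raw:
    try:
        records.append(json.loads(line))
    except json.JSONDecodeError:
        continue  # skip the broken line
df = pd.DataFrame(records)
print(df.shape)  # (2, 3)
```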
###Code pd.options.mode.chained_assignment = None url_tweet = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/59a4e958_twitter-archive-enhanced/twitter-archive-enhanced.csv" url_image = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" import requests response_tweet = requests.get(url_tweet) response_image = requests.get(url_image) with open ('twitter-archive-enhanced.csv', mode ='wb') as file: file.write(response_tweet.content) with open ('image-prediction.tsv', mode ='wb') as file: file.write(response_image.content) df_twitter_archive = pd.read_csv("twitter-archive-enhanced.csv") df_twitter_archive.head() df_twitter_archive.info() df_image = pd.read_csv("image-prediction.tsv", sep='\t') df_image.head() df_image.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2075 non-null int64 1 jpg_url 2075 non-null object 2 img_num 2075 non-null int64 3 p1 2075 non-null object 4 p1_conf 2075 non-null float64 5 p1_dog 2075 non-null bool 6 p2 2075 non-null object 7 p2_conf 2075 non-null float64 8 p2_dog 2075 non-null bool 9 p3 2075 non-null object 10 p3_conf 2075 non-null float64 11 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown From the Udacity project: So for the last row in that table: tweet_id is the last part of the tweet URL after "status/" → https://twitter.com/dog_rates/status/889531135344209921 p1 is the algorithm's #1 prediction for the image in the tweet → golden retriever p1_conf is how confident the algorithm is in its #1 prediction → 95% p1_dog is whether or not the #1 prediction is a breed of dog → TRUE p2 is the algorithm's second most likely prediction → Labrador retriever p2_conf is how confident the algorithm is in its #2 prediction → 1% p2_dog is whether or not the #2 prediction is a breed of dog →
TRUE ###Code # NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES: # df_1 is a DataFrame with the twitter_archive_enhanced.csv file. You may have to # change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv # NOTE TO REVIEWER: this student had mobile verification issues so the following # Twitter API code was sent to this student from a Udacity instructor # Tweet IDs for which to gather additional data via Twitter's API tweet_ids = df_twitter_archive.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except Exception as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) # Read the json data from tweet_json.txt df_tweets = pd.read_json("tweet_json.txt", lines=True) df_tweets.head() df_tweets.info() # From all the columns only id, retweet_count, favorite_count are looking interesting with quantitative value. 
# Copy dataframe before making change df_tweets_copy = df_tweets.copy() # Extract these columns into another dataframe df_tweets_sub = df_tweets_copy[['id', 'retweet_count', 'favorite_count']] # Rename column(s) to align them with other DF df_tweets_sub.rename(columns={'id':'tweet_id'}, inplace=True) df_tweets_sub.head() df_tweets_sub.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2327 entries, 0 to 2326 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2327 non-null int64 1 retweet_count 2327 non-null int64 2 favorite_count 2327 non-null int64 dtypes: int64(3) memory usage: 54.7 KB ###Markdown Assessing DataIn this section, detect and document at least **eight (8) quality issues and two (2) tidiness issues**. You must use **both** visual assessment and programmatic assessment to assess the data. **Note:** pay attention to the following key points when you assess the data. * You only want original ratings (no retweets) that have images. Though there are 5000+ tweets in the dataset, not all are dog ratings and some are retweets. * Assessing and cleaning the entire dataset completely would require a lot of time, and is not necessary to practice and demonstrate your skills in data wrangling. Therefore, the requirements of this project are only to assess and clean at least 8 quality issues and at least 2 tidiness issues in this dataset. * The fact that the rating numerators are greater than the denominators does not need to be cleaned. This [unique rating system](http://knowyourmeme.com/memes/theyre-good-dogs-brent) is a big part of the popularity of WeRateDogs. * You do not need to gather the tweets beyond August 1st, 2017. You can, but note that you won't be able to gather the image predictions for these tweets since you don't have access to the algorithm used. 
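One quick way to assess which archive tweets lack a matching row in another table (e.g. no image prediction) is a left merge with `indicator=True`: rows marked "left_only" exist only in the archive. A sketch with toy IDs:

```python
import pandas as pd

# indicator=True adds a _merge column telling where each row came from.
archive = pd.DataFrame({"tweet_id": [1, 2, 3, 4]})
images = pd.DataFrame({"tweet_id": [1, 3], "jpg_url": ["a.jpg", "b.jpg"]})
merged = archive.merge(images, on="tweet_id", how="left", indicator=True)
missing = merged.loc[merged["_merge"] == "left_only", "tweet_id"].tolist()
print(missing)  # [2, 4]
```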
###Code # Check if there are any duplicate tweets in all the gathered data print("Are there any duplicate Tweet ID in archived tweets data file (df_twitter_archive)? {}". format(df_twitter_archive.duplicated(subset=['tweet_id']).any())) print("Are there any duplicate Tweet ID in image prediction data (df_image)? {}". format(df_image.duplicated(subset=['tweet_id']).any())) print("Are there any duplicate Tweet ID in downloaded tweets JSON data (df_tweets_sub)? {}". format(df_tweets_sub.duplicated(subset=['tweet_id']).any())) # Number of Tweets missing URL in Archive Dataset df_twitter_archive.expanded_urls.isna().sum() # Number of tweets with missing urls that are replies or retweets sum(df_twitter_archive.expanded_urls.isna() & \ (df_twitter_archive.in_reply_to_status_id.notnull() | \ df_twitter_archive.retweeted_status_id.notnull())) df_twitter_archive.info() # Get a sample record df_twitter_archive.iloc[5] # How many tweets are original tweets, and not replies or retweets? print("Total number of original tweets are {} out of {}. Number of retweets are {}. Number of replies are {}."
.format( sum(df_twitter_archive.retweeted_status_id.isna() & df_twitter_archive.in_reply_to_status_id.isna()), df_twitter_archive.shape[0], df_twitter_archive.shape[0] - df_twitter_archive.retweeted_status_id.isna().sum(), df_twitter_archive.shape[0] - df_twitter_archive.in_reply_to_status_id.isna().sum())) df_twitter_archive[df_twitter_archive.retweeted_status_id.isna() & df_twitter_archive.in_reply_to_status_id.isna()].rating_denominator.value_counts() df_image.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2075 non-null int64 1 jpg_url 2075 non-null object 2 img_num 2075 non-null int64 3 p1 2075 non-null object 4 p1_conf 2075 non-null float64 5 p1_dog 2075 non-null bool 6 p2 2075 non-null object 7 p2_conf 2075 non-null float64 8 p2_dog 2075 non-null bool 9 p3 2075 non-null object 10 p3_conf 2075 non-null float64 11 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown Quality issues1. Twitter Archive: Timestamp column is of object datatype instead of datetime or timestamp. 2. Twitter Archive: We are only interested in Original Tweets, which are only 2097 out of 2356. 181 tweets are retweets and 78 are replies. 3. Twitter Archive: Rating Denominator should be 10, but out of 2097 original tweets, 17 have a denominator that is not 10. 4. Twitter Archive: Rating Numerator should be greater than 10, but there are 855 original tweets where it is not. 5. Twitter Archive: There are 4 columns for displaying the dog stage - doggo, floofer, pupper, and puppo. These can be part of only one column as this is categorical data. 6. Twitter Archive: Some of the dogs have invalid names like "a", "None", etc. 7. Image Prediction: Total number of tweets in the dataframe is 2075, which is 281 tweets fewer than the twitter archive. 8. 
Json Data: 23 of all the tweets provided in Archive are deleted (Tweepy exception: 22 File Not Found errors, and one tweet I am not authorized to download) Tidiness issues1. Twitter Archive: The following 4 columns can be combined into a single categorical column - doggo, floofer, pupper, and puppo 2. Twitter Archive: Rating Denominator can be dropped since its value is always 10 3. Twitter Archive: We only need original tweets, and the following columns can be dropped as they don't provide any value as such - in_reply_to_status_id, in_reply_to_user_id, source, text, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp, and expanded_urls 4. Twitter Archive: change the timestamp column datatype, and add Year and Month as separate columns which can be used for various analyses 5. Image Prediction: Get the final dog_breed using p1_dog, p2_dog and p3_dog. Choose the breed based on the first True value of these 3 columns 6. Json Data: This can be combined with Twitter Archive with the following columns - retweet_count, favorite_count. Key used to combine is id (tweet_id) Cleaning DataIn this section, clean **all** of the issues you documented while assessing. **Note:** Make a copy of the original data before cleaning. Cleaning includes merging individual pieces of data according to the rules of [tidy data](https://cran.r-project.org/web/packages/tidyr/vignettes/tidy-data.html). The result should be a high-quality and tidy master pandas DataFrame (or DataFrames, if appropriate). Issue 1: Twitter Archive Dataframe copied to df_twitter_archive_copy. Following issues to be addressed:1. Twitter Archive: The following 4 columns can be combined into a single categorical column - doggo, floofer, pupper, and puppo 2. Twitter Archive: Rating Denominator can be dropped since its value is always 10 3. 
Twitter Archive: We only need original tweets, and following columns can be dropped as they don't provide any value as such - in_reply_to_status_id, in_reply_to_user_id, source, text, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp, and expanded_urls4. Twitter Archive: timestamp column datatype change and we can add Year and Month as separate column which can be used for various analysis Define: Code ###Code # Copy the twitter archive dataframe df_twitter_archive_copy = df_twitter_archive.copy() # Tweet IDs where rating denominator is not 10 df_twitter_archive_copy = df_twitter_archive_copy[df_twitter_archive.retweeted_status_id.isna() & df_twitter_archive_copy.in_reply_to_status_id.isna()] df_twitter_archive_copy[df_twitter_archive_copy.rating_denominator != 10].tweet_id # Tweet IDs where rating numerator is not greater than 10 print("Number of tweets having numerator is less than or equal to 10: {}".format(df_twitter_archive_copy[df_twitter_archive_copy.rating_numerator <= 10].shape[0])) df_twitter_archive_copy[df_twitter_archive_copy.rating_numerator <= 10].tweet_id df_twitter_archive_copy.name df_twitter_archive_copy.drop(['in_reply_to_status_id', 'retweeted_status_timestamp', 'in_reply_to_user_id', 'source', 'text', 'retweeted_status_id', 'retweeted_status_user_id', 'expanded_urls'], inplace=True, axis=1) df_twitter_archive_copy["doggo"] = df_twitter_archive_copy["doggo"].str.replace('None', '') df_twitter_archive_copy["floofer"] = df_twitter_archive_copy["floofer"].str.replace('None', '') df_twitter_archive_copy["pupper"] = df_twitter_archive_copy["pupper"].str.replace('None', '') df_twitter_archive_copy["puppo"] = df_twitter_archive_copy["puppo"].str.replace('None', '') df_twitter_archive_copy.info() # Concat all the values of 4 columns into one new column with name dog_stage df_twitter_archive_copy['dog_stage'] = df_twitter_archive_copy[['doggo', 'floofer', 'pupper', 'puppo']].apply(lambda x: ''.join(x), axis= 1) 
df_twitter_archive_copy.dog_stage.value_counts() # Drop the 4 columns as they are redundant df_twitter_archive_copy.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1, inplace=True) # Convert timestamp datatype from string to datetime df_twitter_archive_copy['timestamp'] = pd.to_datetime(df_twitter_archive_copy['timestamp']) # Enrich dateframe with year and month information df_twitter_archive_copy['year'] = df_twitter_archive_copy['timestamp'].dt.year df_twitter_archive_copy['month'] = df_twitter_archive_copy['timestamp'].dt.month df_twitter_archive_copy['month_year'] = pd.to_datetime(df_twitter_archive_copy['timestamp']).dt.to_period('M') df_twitter_archive_copy['day'] = df_twitter_archive_copy['timestamp'].dt.day #Drop the timestamp column df_twitter_archive_copy.drop(['timestamp'], axis=1, inplace=True) # Drop all the rows where rating denominator is not 10 and rating numerator is not greater than 10. indexNames = df_twitter_archive_copy[(df_twitter_archive_copy['rating_denominator'] != 10) | (df_twitter_archive_copy['rating_numerator'] < 11) ].index df_twitter_archive_copy.drop(indexNames , inplace=True) # Since rating denominator value is always going to be 10, it can be dropped df_twitter_archive_copy.drop(['rating_denominator'], axis=1, inplace=True) df_twitter_archive_copy.info() df_twitter_archive_copy.shape ###Output _____no_output_____ ###Markdown Test ###Code df_twitter_archive_copy.head() ###Output _____no_output_____ ###Markdown Issue 2: Image Prediction: Choose the final dog breed based on 3 different predictionTwitter JSON: Take only relevant data Define: Get the final dog_breed using p1_dog, p2_dog and p3_dog. 
Choose the breed based on the first True value of these 3 columns Code ###Code # Create a copy of image prediction df_image_copy = df_image.copy() # Define a function to get the final dog breed and its prediction value dog_breed = [] dog_prediction_score = [] def get_final_dog_value(df_image_copy): if df_image_copy['p1_dog'] == True: dog_breed.append(df_image_copy['p1']) dog_prediction_score.append(df_image_copy['p1_conf']) elif df_image_copy['p2_dog'] == True: dog_breed.append(df_image_copy['p2']) dog_prediction_score.append(df_image_copy['p2_conf']) elif df_image_copy['p3_dog'] == True: dog_breed.append(df_image_copy['p3']) dog_prediction_score.append(df_image_copy['p3_conf']) else: dog_breed.append('Unknown') dog_prediction_score.append(0.0) # Add 2 new columns df_image_copy.apply(get_final_dog_value, axis=1) df_image_copy['dog_breed'] = dog_breed df_image_copy['dog_prediction_score'] = dog_prediction_score # Drop p1*, p2* and p3* columns df_image_copy.drop(['p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog', 'img_num'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code df_tweets_sub.head() df_image_copy.head() ###Output _____no_output_____ ###Markdown Storing DataSave gathered, assessed, and cleaned master dataset to a CSV file named "twitter_archive_master.csv". ###Code # Join Twitter Archive with Json Data (sub) df_twitter_archive_copy = df_twitter_archive_copy.merge(df_tweets_sub, on='tweet_id') # Now join Twitter Archive with Cleaned up Image Prediction rate_dogs = df_twitter_archive_copy.merge(df_image_copy, on='tweet_id') rate_dogs.head() rate_dogs.to_csv('twitter_archive_master.csv', index=False, encoding = 'utf-8') ###Output _____no_output_____ ###Markdown Analyzing and Visualizing DataIn this section, analyze and visualize your wrangled data. You must produce at least **three (3) insights and one (1) visualization.** ###Code rate_dogs.info() # Set default display parameter for plots. 
plt.rcParams['figure.figsize'] = (8, 5) top10_dog_breeds_count = rate_dogs[rate_dogs.dog_breed != 'Unknown'].dog_breed.value_counts().head(10) print("Breed and number of tweets") print("--------------------------") print(top10_dog_breeds_count) top10_per = round(sum(top10_dog_breeds_count) * 100 / rate_dogs.shape[0]) print("\nTop 10 dog breeds make up {}% of all tweets".format(top10_per)) plt.barh(top10_dog_breeds_count.index, top10_dog_breeds_count) plt.xlabel('Number of Tweets', fontsize = 14) plt.title('Top 10 Dog Breeds by Tweet Count', fontsize = 16) plt.gca().invert_yaxis() plt.show(); fav_counts_dog_breed = rate_dogs[rate_dogs.dog_breed != 'Unknown'] fav_counts_dog_breed = fav_counts_dog_breed.groupby(['dog_breed']) fav_counts_dog_breed = fav_counts_dog_breed['favorite_count'].sum() fav_counts_dog_breed = fav_counts_dog_breed.sort_values(ascending = False) top10_dog_breeds_fav_counts = fav_counts_dog_breed.head(10) top10_dog_breeds_fav_counts plt.barh(top10_dog_breeds_fav_counts.index, top10_dog_breeds_fav_counts, color = 'g') plt.xlabel('Aggregate Favorite Count', fontsize = 14) plt.title('Top 10 Dog Breeds by Aggregate Favorite Count', fontsize = 16) plt.gca().invert_yaxis() plt.show(); # Draw multiple pairwise bivariate distributions for numeric columns in rate_dogs sns.pairplot(rate_dogs, vars = ['rating_numerator', 'retweet_count', 'favorite_count', 'dog_prediction_score'], diag_kind = 'kde'); ###Output _____no_output_____ ###Markdown Insights:1. Golden Retriever received the maximum amount of attention in terms of tweets 2. Golden Retriever is again the most favoured dog among all the dog breeds, whereas Pugs are least favoured 3. 
Most important factors in determining the famous or favourite are number of retweets, and aggregated favourite count ###Code !!jupyter nbconvert *.ipynb ###Output _____no_output_____ ###Markdown Gather ###Code import pandas as pd from pandas import DataFrame import requests import io import numpy as np import matplotlib.pyplot as plt import seaborn as sns % matplotlib inline #importing the WeRateDogs Twitter archive (CSV file) twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') #double check if the operation was succesfull twitter_archive.head() #downloading programmatically (with request libary) the tweet image predictions url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url).content tweet_images= pd.read_csv(io.StringIO(response.decode('utf-8')), delimiter='\t') #double check if the operation was succesfull tweet_images.head() import tweepy from tweepy import OAuthHandler import json from timeit import default_timer as timer # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # These are hidden to comply with Twitter's API terms and conditions consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) # NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES: # df_1 is a DataFrame with the twitter_archive_enhanced.csv file. 
You may have to # change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv # NOTE TO REVIEWER: this student had mobile verification issues so the following # Twitter API code was sent to this student from a Udacity instructor # Tweet IDs for which to gather additional data via Twitter's API tweet_ids = twitter_archive.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) ###Output 1: 892420643555336193 Fail 2: 892177421306343426 Fail 3: 891815181378084864 Fail 4: 891689557279858688 Fail 5: 891327558926688256 Fail 6: 891087950875897856 Fail 7: 890971913173991426 Fail 8: 890729181411237888 Fail 9: 890609185150312448 Fail 10: 890240255349198849 Fail 11: 890006608113172480 Fail 12: 889880896479866881 Fail 13: 889665388333682689 Fail 14: 889638837579907072 Fail 15: 889531135344209921 Fail 16: 889278841981685760 Fail 17: 888917238123831296 Fail 18: 888804989199671297 Fail 19: 888554962724278272 Fail 20: 888202515573088257 Fail 21: 888078434458587136 Fail 22: 887705289381826560 Fail 23: 887517139158093824 Fail 24: 887473957103951883 Fail 25: 887343217045368832 Fail 26: 887101392804085760 Fail 27: 886983233522544640 Fail 28: 886736880519319552 Fail 29: 886680336477933568 Fail 30: 886366144734445568 Fail 31: 886267009285017600 Fail 32: 886258384151887873 Fail 33: 886054160059072513 Fail 34: 885984800019947520 Fail 35: 885528943205470208 
Fail [... identical "Fail" output for tweet IDs 36 through 287 truncated ...] 288: 838561493054533637 ###Markdown Now I will read the tweet_json.txt file line by line into a pandas DataFrame with tweet ID, retweet count, and favorite count as outcome. Sources: https://discuss.analyticsvidhya.com/t/reading-a-text-file-in-python/18515 https://stackoverflow.com/questions/3277503/how-to-read-a-file-line-by-line-into-a-list ###Code # reading the tweet_json.txt file line by line into a pandas DataFrame # with tweet ID, retweet count, and favorite count # I noticed in the cleaning part that I also need to add additional columns in order # to access all data.
We have in total 67 rows where the favorite_count and retweet_count are # in the columns retweeted and favorited tweet_list = [] with open('tweet-json.txt') as f: for line in f: # note: the original extra f.readline() call here silently skipped every other line, so it was removed parts = [part.strip() for part in line.split(',')] tweet_id = parts[1] favorite_count = parts[-6] retweet_count = parts[-7] retweeted = parts[-4] favorited = parts[-5] lang = parts[-1] tweet_list.append({'tweet_id': tweet_id, 'favorite_count': favorite_count, 'retweet_count' : retweet_count, 'retweeted': retweeted, 'favorited' : favorited, }) tweet_measurements = pd.DataFrame(tweet_list, columns = ['tweet_id', 'favorite_count', 'retweet_count', 'retweeted','favorited']) # double check if the operation was successful tweet_measurements.head() ###Output _____no_output_____ ###Markdown ASSESS In the first step I will do a visual assessment of the three tables: `twitter_archive`, `tweet_images` & `tweet_measurements` and collect my findings ###Code twitter_archive tweet_images tweet_measurements ###Output _____no_output_____ ###Markdown In the next step I will further investigate the tables programmatically ###Code twitter_archive.info() # since we just would like to have tweets till the first of August, # I need to check this (twitter_archive.timestamp.min(), twitter_archive.timestamp.max()) tweet_images.info() tweet_measurements.info() # I will check how many of the tweets are retweets in table tweet_measurements tweet_measurements[tweet_measurements['retweeted'] == True] # I will check the tweets which are replies in table twitter_archive twitter_archive.in_reply_to_status_id.min() twitter_archive[twitter_archive['in_reply_to_status_id'] == 6.658146967007232e+17] ###Output _____no_output_____ ###Markdown Replies to other tweets might include ratings, so I will not exclude them ###Code # in the next step I will
check if there are column duplicates (other than tweet_id) in the tables all_columns = pd.Series(list(twitter_archive) + list(tweet_images) + list(tweet_measurements)) all_columns[all_columns.duplicated()] ###Output _____no_output_____ ###Markdown There are no column duplicates within the three tables Now I will further investigate the values that are stored in the columns of all three tables to identify faulty values ###Code twitter_archive.describe() twitter_archive.rating_numerator.value_counts() # In the next step I will have a closer look at the faulty values twitter_archive[twitter_archive['rating_numerator'] == 0].text ###Output _____no_output_____ ###Markdown The null values are part of the unique rating system of WeRateDogs ###Code # In the next step I will have a closer look at the # tweets to see if the high values are correct pd.options.display.max_colwidth = 200 twitter_archive[twitter_archive['rating_numerator'] == 666].text twitter_archive[twitter_archive['rating_numerator'] == 204].text ###Output _____no_output_____ ###Markdown The high values are part of the unique rating system of WeRateDogs ###Code twitter_archive.rating_denominator.value_counts() twitter_archive[twitter_archive['rating_denominator'] == 0].text twitter_archive.name.value_counts() # In the next step I will have a closer look at the # tweets to see if the values are correct twitter_archive[twitter_archive['name'] == 'a'].text tweet_images.describe() tweet_measurements.describe() tweet_measurements.favorite_count.value_counts() tweet_measurements.retweet_count.value_counts() tweet_images.p1.value_counts() ###Output _____no_output_____ ###Markdown Now I also check for duplicates in the 3 tables ###Code twitter_archive[twitter_archive.duplicated()] tweet_images[tweet_images.duplicated()] tweet_measurements[tweet_measurements.duplicated()] ###Output _____no_output_____ ###Markdown The tables have no duplicates Quality `General issues:` all 3 tables have a different number of entries/rows
`twitter_archive` table- missing values in columns "in_reply_to_status_id", "retweeted_status_id", "retweeted_status_timestamp"- missing and wrong values in the name column: - None (680x) - a (55x) - an (6x)- Erroneous datatypes (doggo, floofer, pupper, puppo, timestamp, tweet_id, in_reply_to_status_id, in_reply_to_user_id)- wrong value in the rating_denominator column (0 in row 313)- 181 tweets are retweets `tweet_images` table- wrong values in the p1 column: "shopping_cart", "convertible"- non-descriptive column names in this table- different formats of the breed values for the dogs- Erroneous datatypes (tweet_id should be str; p1, p2, p3 should be category) `tweet_measurements` table- the JSON key names appear at the beginning of all values in all columns of this table- wrong values in the favorite_count and retweet_count columns: "is_quote_status": false" (67 entries total), "contributors": null" (25 entries total) and "lang": "en"} (41 entries total) instead of favorite count and retweet count, e.g. in rows 27, 1148, 1168. In total 67 entries.- Erroneous datatypes (retweeted, favorited) Tidiness- tweet_measurements and the results of the image predictions should be part of the twitter_archive table- rating_numerator & rating_denominator should be combined into one rating column- the dog stages floofer, pupper, puppo, doggo are spread over 4 columns; they should be merged into one. I will now choose 8 of the Quality and 2 of the Tidiness issues and clean them in the next step. My choice is based on the upcoming investigation and on the severity of each issue. My goal is to have data that helps me investigate whether the popularity of a tweet (favorites & retweets) is related to the rating or the breed, and whether some breeds get rated better than others (does WeRateDogs have a favorite breed)?
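Quality issues like the placeholder names and the zero denominator listed above can be quantified with simple boolean masks before cleaning. A minimal sketch on a hypothetical toy DataFrame (the values below are made up for illustration, not taken from the real archive):

```python
import pandas as pd

# Hypothetical toy archive illustrating two of the quality issues:
# placeholder dog names and a zero rating denominator.
archive = pd.DataFrame({
    'tweet_id': ['1', '2', '3', '4'],
    'name': ['Charlie', 'None', 'a', 'an'],
    'rating_denominator': [10, 0, 10, 10],
})

# Lowercase articles and the literal string "None" are not real dog names
bad_names = archive['name'].isin(['None', 'a', 'an', 'the'])

# A denominator of 0 would break the numerator/denominator ratio later on
zero_denominators = archive['rating_denominator'].eq(0)

print(bad_names.sum(), zero_denominators.sum())
```

Counting the affected rows this way makes it easier to judge the severity of each issue before deciding which ones to clean.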
Clean In the first step I will create copies of the tables twitter_archive, tweet_images, tweet_measurements ###Code twitter_archive_clean = twitter_archive.copy() tweet_images_clean = tweet_images.copy() tweet_measurements_clean = tweet_measurements.copy() # double check if the operation was successful twitter_archive_clean.head(2) # double check if the operation was successful tweet_images_clean.head(2) # double check if the operation was successful tweet_measurements_clean.head(2) ###Output _____no_output_____ ###Markdown 1. `tweet_measurements` table: the JSON key names appear at the beginning of all values in all columns of this table Define Cutting off the key text at the beginning of every value in all columns of table tweet_measurements: tweet_id, favorite_count, retweet_count, retweeted and favorited Code ###Code tweet_measurements_clean.tweet_id = tweet_measurements_clean.tweet_id.str.replace('"id":','').str.strip() tweet_measurements_clean.favorite_count = tweet_measurements_clean.favorite_count.str.replace('"favorite_count":','').str.strip() tweet_measurements_clean.retweet_count = tweet_measurements_clean.retweet_count.str.replace('"retweet_count":','').str.strip() tweet_measurements_clean.retweeted = tweet_measurements_clean.retweeted.str.replace('"retweeted":','').str.strip() tweet_measurements_clean.favorited = tweet_measurements_clean.favorited.str.replace('"favorited":','').str.strip() ###Output _____no_output_____ ###Markdown Test ###Code # double check if the operation was successful tweet_measurements_clean.head() ###Output _____no_output_____ ###Markdown 2.
`tweet_measurements` table: wrong values in the favorite_count and retweet_count column Define identifying the wrong values and filling the missing values with the information from the txt file Iteration: This showed me that I need to get more columns from the txt file in order to get the missing information; so I will go back to the Gather part to add the columns retweeted and favorited to the dataframe tweet_measurements Code ###Code # identifying the wrong values and storing them in a new data frame called wrong_values wrong_values = tweet_measurements_clean[tweet_measurements_clean['favorite_count'] == '"is_quote_status": false'] wrong_values.head() # dropping the columns favorite_count and retweet_count wrong_values = wrong_values.drop(['favorite_count', 'retweet_count'], axis=1) # renaming the columns retweeted and favorited to favorite_count and retweet_count wrong_values = wrong_values.rename(columns={"retweeted" : "favorite_count", "favorited": "retweet_count"}) # cutting off the text at the beginning of every row wrong_values.favorite_count = wrong_values.favorite_count.str.replace('"favorite_count":','').str.strip() wrong_values.retweet_count = wrong_values.retweet_count.str.replace('"retweet_count":','').str.strip() # double checking the operation wrong_values.head() # dropping the rows with the wrong values from tweet_measurements tweet_measurements_clean = tweet_measurements_clean[tweet_measurements_clean.favorite_count !='"is_quote_status": false'] # double checking if all rows with wrong values are gone tweet_measurements_clean[tweet_measurements_clean['favorite_count'] == '"is_quote_status": false'] # appending the fixed values (wrong_values table) to the df tweet_measurements_clean = tweet_measurements_clean.append(wrong_values) # dropping the columns I just needed for the cleaning process tweet_measurements_clean =
tweet_measurements_clean.drop(['favorited','retweeted'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code # checking if the operations were succesful tweet_measurements_clean.head() tweet_measurements_clean.info() tweet_measurements_clean.favorite_count = tweet_measurements_clean.favorite_count.astype(int) tweet_measurements_clean.retweet_count = tweet_measurements_clean.retweet_count .astype(int) tweet_measurements_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1177 entries, 0 to 1148 Data columns (total 3 columns): favorite_count 1177 non-null int64 retweet_count 1177 non-null int64 tweet_id 1177 non-null object dtypes: int64(2), object(1) memory usage: 36.8+ KB ###Markdown 3. `twitter_archive ` table: 181 tweets are retweets (we just want to have original tweets) Define Deleting/excluding the rows/tweets that are retweets from twitter_archive and then deleting the columns concerning the retweets from the table twitter_archive Code ###Code # deleting/excluding the rows/tweets that are retweets from twitter_archive twitter_archive_clean = twitter_archive_clean[twitter_archive_clean.retweeted_status_id.isnull()] #deleting the columns concerning the retweets from the table twitter_archive twitter_archive_clean = twitter_archive_clean.drop(['retweeted_status_id', 'retweeted_status_user_id','retweeted_status_timestamp'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code # double checking the operations twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 14 columns): tweet_id 2175 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2175 non-null object source 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 
non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: float64(2), int64(3), object(9) memory usage: 254.9+ KB ###Markdown 4. `twitter_archive` table: Erroneous datatypes (doggo, floofer, pupper, puppo, timestamp, tweet_id, in_reply_to_status_id, in_reply_to_user_id) Define Convert doggo, floofer, pupper and puppo to the category data type. Convert timestamp to datetime and tweet_id, in_reply_to_status_id and in_reply_to_user_id to the string data type. ###Code # to category twitter_archive_clean.doggo = twitter_archive_clean.doggo.astype('category') twitter_archive_clean.floofer = twitter_archive_clean.floofer.astype('category') twitter_archive_clean.pupper = twitter_archive_clean.pupper.astype('category') twitter_archive_clean.puppo = twitter_archive_clean.puppo.astype('category') # to datetime twitter_archive_clean.timestamp = pd.to_datetime(twitter_archive_clean.timestamp) # to string twitter_archive_clean.tweet_id = twitter_archive_clean.tweet_id.astype(str) twitter_archive_clean.in_reply_to_status_id = twitter_archive_clean.in_reply_to_status_id.astype(str) twitter_archive_clean.in_reply_to_user_id = twitter_archive_clean.in_reply_to_user_id.astype(str) twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 14 columns): tweet_id 2175 non-null object in_reply_to_status_id 2175 non-null object in_reply_to_user_id 2175 non-null object timestamp 2175 non-null datetime64[ns] source 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null category floofer 2175 non-null category pupper 2175 non-null category puppo 2175 non-null category dtypes: category(4), datetime64[ns](1), int64(2), object(7) memory usage: 195.8+ KB ###Markdown 5.
`tweet_images` table: Erroneous datatypes (tweet_id should be str; p1, p2, p3 should be category) Define Convert p1, p2, p3 to the category data type. Convert tweet_id to the string data type Code ###Code # to category (on the _clean copy, which the original cell missed) tweet_images_clean.p1 = tweet_images_clean.p1.astype('category') tweet_images_clean.p2 = tweet_images_clean.p2.astype('category') tweet_images_clean.p3 = tweet_images_clean.p3.astype('category') # to string (the original line assigned p3 to a misspelled teet_id column, leaving tweet_id unconverted) tweet_images_clean.tweet_id = tweet_images_clean.tweet_id.astype(str) tweet_images_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null object jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null category p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null category p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null category p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), category(3), float64(3), int64(1), object(2) memory usage: 174.9+ KB ###Markdown 6. `twitter_archive` table: wrong value in the rating_denominator column (0 in row 313) Define Changing the 0 value in the denominator column (row 313) to 10 (the correct denominator) Code ###Code # Changing the 0 values in the denominator column to 10. twitter_archive_clean.rating_denominator = twitter_archive_clean.rating_denominator.replace(0,10) ###Output _____no_output_____ ###Markdown Test ###Code # checking if the operation was successful twitter_archive_clean[twitter_archive_clean['rating_denominator'] == 0] ###Output _____no_output_____ ###Markdown 7.
`tweet_images` table: - non-descriptive column names in this table Define Change the following columns:- p1 = prediction_1- p2 = prediction_2- p3 = prediction_3- p1_conf = confidence_prediction_1- p2_conf = confidence_prediction_2- p3_conf = confidence_prediction_3- p1_dog = breed_predicted_1- p2_dog = breed_predicted_2- p3_dog = breed_predicted_3 Code ###Code tweet_images_clean = tweet_images_clean.rename(columns={ 'p1': 'prediction_1', 'p2': 'prediction_2', 'p3': 'prediction_3', 'p1_conf': 'confidence_prediction_1', 'p2_conf': 'confidence_prediction_2', 'p3_conf': 'confidence_prediction_3', 'p1_dog': 'breed_predicted_1', 'p2_dog': 'breed_predicted_2', 'p3_dog': 'breed_predicted_3'}) ###Output _____no_output_____ ###Markdown Test ###Code tweet_images_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 prediction_1 2075 non-null object confidence_prediction_1 2075 non-null float64 breed_predicted_1 2075 non-null bool prediction_2 2075 non-null object confidence_prediction_2 2075 non-null float64 breed_predicted_2 2075 non-null bool prediction_3 2075 non-null object confidence_prediction_3 2075 non-null float64 breed_predicted_3 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown 8.
`tweet_images` table: different formats of the breed values for the dogs Define Change all breed names to lower case and remove the underscores in prediction_1, prediction_2 and prediction_3 Code ###Code # change all breed names to lower case and remove the underscores tweet_images_clean['prediction_1'] = tweet_images_clean['prediction_1'].str.lower().str.replace('_', ' ') tweet_images_clean['prediction_2'] = tweet_images_clean['prediction_2'].str.lower().str.replace('_', ' ') tweet_images_clean['prediction_3'] = tweet_images_clean['prediction_3'].str.lower().str.replace('_', ' ') ###Output _____no_output_____ ###Markdown Test ###Code tweet_images_clean['prediction_1'].sample(5) tweet_images_clean['prediction_2'].sample(5) tweet_images_clean['prediction_3'].sample(5) ###Output _____no_output_____ ###Markdown Tidiness 1. The results of the image predictions and tweet_measurements should be in the twitter_archive table Define Integrate the columns 'prediction_1', 'breed_predicted_1', etc.
into the twitter_archive_clean table by merging on tweet_id. Merge the favorite_count and retweet_count columns into the twitter_archive_clean table, joining on tweet_id Code ###Code # first I need to change the data type of column tweet_id to str (so it matches the data type from table twitter_archive) tweet_images_clean.tweet_id = tweet_images_clean.tweet_id.astype(str) # isolate the tweet_id and prediction columns in the tweet_images_clean table id_breed = tweet_images_clean[['prediction_1','breed_predicted_1','prediction_2','breed_predicted_2', 'prediction_3', 'breed_predicted_3','tweet_id']] # merging id_breed with twitter_archive_clean twitter_archive_clean = pd.merge(twitter_archive_clean, id_breed, on=['tweet_id'], how='left') # merge the favorite_count and retweet_count columns to the twitter_archive_clean table, joining on tweet_id twitter_archive_clean = pd.merge(twitter_archive_clean, tweet_measurements_clean, on=['tweet_id'], how='left') ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.info() twitter_archive_clean.sample(3) tweet_images_clean[tweet_images_clean['tweet_id'] == '667470559035432960'] tweet_images_clean[tweet_images_clean['tweet_id'] == '666337882303524864'] tweet_images_clean[tweet_images_clean['tweet_id'] == '671122204919246848'] ###Output _____no_output_____ ###Markdown 2. `twitter_archive`: the dog stages floofer, pupper, puppo, doggo are spread over 4 columns. They should be merged into one.
Define Creating a new dataframe with the dog stages floofer, pupper, puppo, doggo in order to merge the 4 columns with a lambda function into one column named dog_stages. Then merging the dog_stages column into the twitter_archive_clean table and dropping the columns floofer, pupper, puppo, doggo from the table ###Code # creating a new dataframe with the dog stages df_dog_stages = twitter_archive_clean[['tweet_id','doggo','floofer','pupper','puppo']] # replacing all None with NaN in order to drop them in the lambda function df_dog_stages = df_dog_stages.replace('None', np.nan) # merge the 4 columns with a lambda function into one column named dog_stages df_dog_stages['dog_stages'] = df_dog_stages[df_dog_stages.columns[1:]].apply( lambda x: ', '.join(x.dropna()), axis=1) # replacing all empty values with NaN df_dog_stages = df_dog_stages.replace('', np.nan) df_dog_stages = df_dog_stages.drop(['doggo','floofer','pupper', 'puppo'],axis=1) # merge the dog_stages to the twitter_archive_clean table, joining on tweet_id, and drop the old columns twitter_archive_clean = pd.merge(twitter_archive_clean, df_dog_stages, on=['tweet_id'], how='left') twitter_archive_clean = twitter_archive_clean.drop(['doggo','floofer','pupper', 'puppo'],axis=1) twitter_archive_clean.sample(2) # storing the clean DataFrame in a CSV file named twitter_archive_master.csv twitter_archive_clean.to_csv('twitter_archive_master.csv', index=False) ###Output _____no_output_____ ###Markdown Please find further information about the wrangle process in the Wrangle Report. Explore Dataset - Questions I would like to know if the success of a tweet in terms of the number of retweets and favorites is related to the rating from WeRateDogs (do higher-rated dogs get more retweets & favorites?). Furthermore, it would be interesting to investigate whether the creators of WeRateDogs rate some breeds better than others. Can we see from the data if the owners have a favorite breed?
In contrast, do the users retweet and favourite some breeds more than others? Can we see from the data if the users have a favorite breed? ###Code # creating a copy of the twitter_archive_clean for the investigation tweet_archive = twitter_archive_clean.copy() tweet_archive.head(2) tweet_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2174 Data columns (total 19 columns): tweet_id 2175 non-null object in_reply_to_status_id 2175 non-null object in_reply_to_user_id 2175 non-null object timestamp 2175 non-null datetime64[ns] source 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object prediction_1 1994 non-null object breed_predicted_1 1994 non-null object prediction_2 1994 non-null object breed_predicted_2 1994 non-null object prediction_3 1994 non-null object breed_predicted_3 1994 non-null object favorite_count 1090 non-null float64 retweet_count 1090 non-null float64 dog_stages 344 non-null object dtypes: datetime64[ns](1), float64(2), int64(2), object(14) memory usage: 339.8+ KB ###Markdown Since I won't need all columns for my investigation I will drop the following columns from the data frame: - in_reply_to_status_id - in_reply_to_user_id - timestamp - source - expanded_urls - dog_stages ###Code # dropping columns from the df tweet_archive = tweet_archive.drop(['in_reply_to_status_id', 'in_reply_to_user_id','timestamp', 'source', 'expanded_urls', 'dog_stages'], axis=1) # double checking if the operation was successful tweet_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2174 Data columns (total 13 columns): tweet_id 2175 non-null object text 2175 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object prediction_1 1994 non-null object breed_predicted_1 1994
non-null object prediction_2 1994 non-null object breed_predicted_2 1994 non-null object prediction_3 1994 non-null object breed_predicted_3 1994 non-null object favorite_count 1090 non-null float64 retweet_count 1090 non-null float64 dtypes: float64(2), int64(2), object(9) memory usage: 237.9+ KB ###Markdown Saving the result of the image predictions in a new column called breed. By doing so, I make sure that I can investigate the data in the best way. Writing a function that takes the breed from prediction 1 if it is True; otherwise it goes to the second prediction and checks if that prediction is True (and takes this value if True). If the second prediction isn't True either, it goes to prediction 3 and takes the value from there (if True) ###Code def get_breed(tweet_archive): if tweet_archive['breed_predicted_1'] == True: return tweet_archive['prediction_1'] elif (tweet_archive['breed_predicted_1'] == False) & (tweet_archive['breed_predicted_2'] == True): return tweet_archive['prediction_2'] elif (tweet_archive['breed_predicted_1'] == False) & (tweet_archive['breed_predicted_2'] == False) & (tweet_archive['breed_predicted_3'] == True): return tweet_archive['prediction_3'] else: return 'no breed predicted' tweet_archive['breed'] = tweet_archive.apply(get_breed, axis = 1) # changing the data type of breed to categorical tweet_archive.breed = tweet_archive.breed.astype('category') # testing if the operation was successful tweet_archive['breed'].value_counts() # dropping prediction_1, breed_predicted_1, prediction_2, breed_predicted_2, prediction_3, breed_predicted_3 tweet_archive = tweet_archive.drop(['prediction_1','breed_predicted_1','prediction_2','breed_predicted_2','prediction_3','breed_predicted_3'], axis= 1) tweet_archive.sample(2) ###Output _____no_output_____ ###Markdown Another problem that I see is that we do not have the following values for every tweet: breed, favorite_count, retweet_count: ###Code tweet_archive.breed[tweet_archive['breed'] != "no breed predicted"].count()
tweet_archive.breed = tweet_archive.breed[tweet_archive['breed'] != "no breed predicted"] tweet_archive.favorite_count.isnull().sum() tweet_archive.retweet_count.isnull().sum() # dropping the rows with null values (the breed nulls were created above by masking out 'no breed predicted'); dropna with inplace=True returns None, so no assignment is needed tweet_archive.dropna(axis=0, inplace=True) tweet_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 837 entries, 2 to 2173 Data columns (total 8 columns): tweet_id 837 non-null object text 837 non-null object rating_numerator 837 non-null int64 rating_denominator 837 non-null int64 name 837 non-null object favorite_count 837 non-null float64 retweet_count 837 non-null float64 breed 837 non-null category dtypes: category(1), float64(2), int64(2), object(3) memory usage: 59.0+ KB ###Markdown The second problem is to make the rating from WeRateDogs more comparable by dividing the rating_numerator by the rating_denominator, since both contain many outliers ###Code # adding a new column called rating_new which divides the numerator by the denominator tweet_archive['rating_new'] = tweet_archive.rating_numerator/ tweet_archive.rating_denominator tweet_archive['rating_new'].value_counts() tweet_archive.breed.value_counts() ###Output _____no_output_____ ###Markdown Even though the ratings have some outliers, I decided to keep them, since the rating system is what makes WeRateDogs so special Histograms for various features - for a first overview of the factors ###Code # creating the subplot for the following columns: favorite_count, retweet_count, rating_new fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(18,30)) ax0, ax1, ax2= axes.flatten() n_bins = 3 plt.setp(axes.flat, ylabel='Count') ax0.hist(tweet_archive.rating_new,n_bins) ax0.set(title = 'Distribution Rating', xlabel='Rating') ax0.set_xticks([0,1,2,3,4,5,6,7,8]) ax1.hist(tweet_archive.favorite_count,n_bins) ax1.set(title = 'Distributions Favorite Count', xlabel='Favorite Count')
ax1.set_xticks([0,10000,20000,30000,40000,50000,60000,70000]) ax2.hist(tweet_archive.retweet_count, n_bins) ax2.set(title = 'Distributions Retweet Count', xlabel='Retweet Count') ax2.set_xticks([0,10000,20000,30000,40000,50000,60000,70000]) fig.tight_layout(); ###Output _____no_output_____ ###Markdown The histograms show that the rating, retweet and favorite data are right-skewed. (source: http://asq.org/learn-about-quality/data-collection-analysis-tools/overview/histogram2.html) Question 1: Do the creators of WeRateDogs rate some breeds better than others? Can we see from the data if the owners have a favorite breed? ###Code # in the first step I identify the 10 highest rated breeds on average top_ten_breeds_creators = tweet_archive.groupby('breed').rating_new.mean().sort_values(ascending=False).head(10) top_ten_breeds_creators # plotting the 10 highest rated breeds on average ax = top_ten_breeds_creators.plot(kind='bar', fontsize=12,legend=True) ax.legend(['Rating']) plt.xlabel('Breed', fontsize=15) plt.title('Average Rating- Top 10',fontsize=20); ###Output _____no_output_____ ###Markdown The finding shows that the creators of the Twitter account WeRateDogs have their favorites when you look at the average rating of the different breeds. Maybe the creators have one of these breeds as a pet and are therefore biased? Question 2: Do the users retweet and favourite some breeds more than others? Can we see from the data if the users have a favorite breed?
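The per-breed rankings used for both questions follow the same groupby pattern: average a metric per group, sort descending, keep the top entries. A toy sketch with hypothetical data:

```python
import pandas as pd

# hypothetical data; the real analysis uses tweet_archive and its metrics
toy = pd.DataFrame({
    'breed': ['pug', 'pug', 'corgi', 'beagle'],
    'retweet_count': [100, 300, 500, 50],
})
# mean per breed, sorted descending, top 2
top = toy.groupby('breed')['retweet_count'].mean().sort_values(ascending=False).head(2)
# -> corgi 500.0, pug 200.0
```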
###Code # identifying the 10 most retweeted breeds on average top_ten_retweeted_breeds = tweet_archive.groupby('breed').retweet_count.mean().sort_values(ascending=False).head(10) top_ten_retweeted_breeds # identifying the 10 most favorited breeds on average top_ten_faved_breeds = tweet_archive.groupby('breed').favorite_count.mean().sort_values(ascending=False).head(10) top_ten_faved_breeds ###Output _____no_output_____ ###Markdown The findings show that the users and creators don't have exactly the same favorites, but there are 2 breeds in the Top 10 rated list which have a lot of retweets and/or favorites: the black-and-tan coonhound and the bouvier des flandres. It's also interesting to see that most of the top retweeted and faved breeds are not the same: the bedlington terrier is the most retweeted and the black-and-tan coonhound is the most faved breed. It would be interesting to further investigate when the users retweeted a tweet and when they faved a tweet. All in all, it seems like the black-and-tan coonhound is the most popular breed among the creators of WeRateDogs (2 of 10) and among the users (1 of 10 in faved tweets and 3 of 10 in retweets) Question 3: I would like to know if the success of a tweet, in terms of the number of retweets and favorites, is related to the rating from WeRateDogs (do higher rated dogs get more retweets and more favorites?) I will fit two linear regression models with the statsmodels library: one to predict the retweet count based on the rating and one to predict the favorite count based on the rating.
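The manual intercept column used in the fits works because OLS estimates one coefficient per column of the design matrix, so a constant column's coefficient is the intercept. A minimal sketch of the same idea with plain numpy on synthetic data (statsmodels does the equivalent fit and adds the summary statistics on top):

```python
import numpy as np

# synthetic stand-ins for rating_new and a count metric
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.5 + 3.0 * x + rng.normal(scale=0.1, size=200)
X = np.column_stack([np.ones_like(x), x])      # [intercept, rating] design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = beta                        # recovers roughly 1.5 and 3.0
```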
We define our alpha as 0.05 ###Code import statsmodels.api as sm # add an intercept tweet_archive['intercept'] = 1 # model to predict the retweet count based on the rating lm = sm.OLS(tweet_archive['retweet_count'], tweet_archive[['intercept', 'rating_new']]) results = lm.fit() results.summary() # model to predict the favorite count based on the rating lm = sm.OLS(tweet_archive['favorite_count'], tweet_archive[['intercept', 'rating_new']]) results = lm.fit() results.summary() # plotting a scatterplot to visualize the relationship plt.scatter(tweet_archive['favorite_count'], tweet_archive['rating_new']); plt.xlabel('Count of faved Tweets'); plt.ylabel('Rating'); plt.title('Rating of Breed vs. Count of faved Tweets'); # plotting a scatterplot to visualize the relationship plt.scatter(tweet_archive['retweet_count'], tweet_archive['rating_new']); plt.xlabel('Retweets'); plt.ylabel('Rating'); plt.title('Rating of Breed vs. Retweets'); ###Output _____no_output_____ ###Markdown Data wrangling - WeRateDogs Table of Contents- [Introduction](introduction)- [Part 1 - Gathering data](gatheringdata)- [Part 2 - Assessing data](assessingdata) + [Part 2-1 - Quality](quality) + [Part 2-2 - Tidiness](tidiness)- [Part 3 - Cleaning data](cleaningdata)- [Part 4 - Insights and Visualization](insightsandvisualization) + [Part 4-1 - First Insight](firstinsight) + [Part 4-2 - Second Insight](secondinsight) + [Part 4-3 - Third Insight](thirdinsight) Introduction The purpose of this project is to apply what I learned in the data wrangling lesson of the Udacity Data Analysis Nanodegree program. The dataset which will be wrangled is the tweet archive of Twitter user @dog_rates, also known as WeRateDogs. WeRateDogs downloaded their Twitter archive and sent it to Udacity via email exclusively for us to use in this project. This archive contains basic tweet data (tweet ID, timestamp, text, etc.) for all 5000+ of their tweets as they stood on August 1, 2017.
The goal of this project is to wrangle the WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations. The challenge lies in the fact that the Twitter archive is amazing, but it only contains very basic tweet information that comes in JSON format. So I need to gather, assess and clean the Twitter data for a worthy analysis and visualization. Key Points- We only want original ratings **(no retweets) that have images**. Though there are 5000+ tweets in the dataset, **not all are dog ratings and some are retweets**.- **We do not need to gather the tweets beyond August 1st, 2017**. We can, but note that we won't be able to gather the image predictions for these tweets since we don't have access to the algorithm used.- Fully assessing and cleaning the entire dataset requires exceptional effort, so only a subset of its issues **(eight (8) quality issues and two (2) tidiness issues at minimum) need to be assessed and cleaned.**- Cleaning includes **merging individual pieces of data** according to the rules of tidy data.- The fact that the rating numerators are greater than the denominators **does not need to be cleaned**. This unique rating system is a big part of the popularity of WeRateDogs. Part 1 - Gathering data **1. Twitter archive file:** download this file manually by clicking the following link: twitter_archive_enhanced.csv**2. Tweet image predictions**, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network. This file (image_predictions.tsv) is hosted on Udacity's servers and should be downloaded programmatically using the Requests library and the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv**3. Twitter API & JSON**: Each tweet's retweet count and favorite ("like") count at minimum, and any additional data you find interesting.
Using the tweet IDs in the WeRateDogs Twitter archive, query the Twitter API for each tweet's JSON data using Python's Tweepy library and store each tweet's entire set of JSON data in a file called tweet_json.txt. Each tweet's JSON data should be written to its own line. Then read this .txt file line by line into a pandas DataFrame with (at minimum) tweet ID, retweet count, and favorite count. 1. Twitter Archive File: ###Code ##Import all the necessary libraries that will be used in this project import pandas as pd import numpy as np import requests import tweepy import json import matplotlib.pyplot as plt %matplotlib inline ##Load CSV File twitter_archive_df = pd.read_csv('twitter-archive-enhanced.csv') ##View the data frame and ensure that it does not go beyond the first of August 2017 twitter_archive_df.sort_values('timestamp') twitter_archive_df.head(10) ##Show detailed info about the dataframe twitter_archive_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown 2.
Tweet image predictions: ###Code ##Download the file programmatically url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' ##Get the content of the url into the response variable response = requests.get(url) ##Write the content of the response into the file 'image-predictions.tsv' with open('image-predictions.tsv', mode ='wb') as file: file.write(response.content) ##Load TSV File image_predictions_df = pd.read_csv('image-predictions.tsv',sep = '\t') ##View the data frame image_predictions_df.head(10) ##Show detailed info about the data frame image_predictions_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown 3.
Twitter API & JSON: ###Code ##Set up Twitter API access (credentials redacted - insert your own keys here) CONSUMER_KEY = "YOUR_CONSUMER_KEY" CONSUMER_SECRET = "YOUR_CONSUMER_SECRET" OAUTH_TOKEN = "YOUR_ACCESS_TOKEN" OAUTH_TOKEN_SECRET = "YOUR_ACCESS_TOKEN_SECRET" auth = tweepy.OAuthHandler(CONSUMER_KEY,CONSUMER_SECRET) auth.set_access_token(OAUTH_TOKEN, OAUTH_TOKEN_SECRET) api = tweepy.API(auth, parser = tweepy.parsers.JSONParser(), wait_on_rate_limit = True, wait_on_rate_limit_notify = True) ##Fetch tweets from the Twitter API and store them in the list 'all_tweets' all_tweets = [] ##Tweets that can't be found are saved in the list 'not_found_tweets' not_found_tweets = [] ##Loop over the whole dataframe and fill the two mentioned lists with the proper values for tweet_id in twitter_archive_df['tweet_id']: try: all_tweets.append(api.get_status(tweet_id)) except Exception as e: not_found_tweets.append(tweet_id) ##Test the list and check the number of elements in it print(len(all_tweets)) ##Test the list and check the number of elements in it print(len(not_found_tweets)) ##Separate each JSON block from the list 'all_tweets' and store them in the list 'json_tweet_dict' json_tweet_dict = [] for json_block in all_tweets: json_tweet_dict.append(json_block) ##Loop over each block and write it to a new .txt file called 'json_tweets.txt' with open('json_tweets.txt', 'w') as file: file.write(json.dumps(json_tweet_dict, indent = 4)) ##Select specific information from the JSON dictionaries in the txt file ##and put it in a dataframe called json_tweets selected_info = [] with open('json_tweets.txt', encoding='utf-8') as json_file: all_data = json.load(json_file) for dicti in all_data: tweet_id = dicti['id'] favorite_count = dicti['favorite_count'] retweet_count = dicti['retweet_count'] whole_source = dicti['source'] source = whole_source[whole_source.find('rel="nofollow">') + 15:-4] text = dicti['text'] retweeted_status =
dicti.get('retweeted_status', 'Tweet') if retweeted_status == 'Tweet': short_url = text[text.find('https'):] else: retweeted_status = 'Retweet' short_url = 'Retweet' selected_info.append({'tweet_id': str(tweet_id),'favorite_count': int(favorite_count),'retweet_count': int(retweet_count),'url': short_url, 'source': source, 'retweeted_status': retweeted_status, }) ##Build the json_tweets dataframe json_tweets_df = pd.DataFrame(selected_info, columns = ['tweet_id', 'favorite_count','retweet_count','source', 'retweeted_status', 'url']) ##Create a new file called 'json_tweets_cleaned.txt' with all cleaned json blocks json_tweets_df.to_csv('json_tweets_cleaned.txt', encoding = 'utf-8', index=False) ##Show the data frame json_tweets_df.sample(10) ##View detailed info about the data frame json_tweets_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2337 entries, 0 to 2336 Data columns (total 6 columns): tweet_id 2337 non-null object favorite_count 2337 non-null int64 retweet_count 2337 non-null int64 source 2337 non-null object retweeted_status 2337 non-null object url 2337 non-null object dtypes: int64(2), object(4) memory usage: 109.6+ KB ###Markdown Part 2 - Assessing data Visual Assessment: In this assessment phase, we inspect the displayed data by eye to see whether any quality or tidiness issues can be spotted visually. ###Code twitter_archive_df image_predictions_df json_tweets_df ###Output _____no_output_____ ###Markdown Programmatic Assessment: In this assessment phase, we take a programmatic approach and use powerful Python libraries like pandas and numpy to identify issues to be cleaned.
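The programmatic checks that follow lean on a handful of pandas helpers; a compact reminder of what each returns, shown on a tiny hypothetical frame:

```python
import pandas as pd

# hypothetical frame: one duplicated id, one true NaN, one 'None' string
df = pd.DataFrame({'tweet_id': [1, 2, 2, 3],
                   'name': ['a', None, 'None', 'b']})
n_dupes = int(df['tweet_id'].duplicated().sum())   # repeated ids -> 1
n_missing = int(df['name'].isnull().sum())         # true NaN values -> 1
counts = df['name'].value_counts()                 # the string 'None' is still counted
```

Note the distinction value_counts() makes: it skips real NaN values but happily counts the literal string 'None', which is exactly why the archive's 'None' names must be converted to NaN during cleaning.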
###Code ##Show detailed info about the twitter_archive dataframe twitter_archive_df.info() ##View descriptive statistics of the twitter_archive dataframe twitter_archive_df.describe() ##Get the number of duplicates with respect to the values in the tweet_id column sum(twitter_archive_df['tweet_id'].duplicated()) ##Get the counts of each numerator value twitter_archive_df.rating_numerator.value_counts() twitter_archive_df['name'].value_counts() ##Get the width of columns in dataframes pd.options.display.max_colwidth ##Maximize the display width of columns in dataframes pd.options.display.max_colwidth = 500 ##Check the suspected values of rating_numerator to identify if this value is mentioned exactly in the text of the tweet (as a rating) or not. ##By checking the url mentioned in the text, I found that the mentioned rating is not for one puppy, it's for 17 dogs ##in the SAME picture. That's why the rating is 204/170 twitter_archive_df[twitter_archive_df.rating_numerator ==204] ##By reading the text mentioned below, I found that this is a retweet explaining ##the rating system used here in the WeRateDogs account. twitter_archive_df[twitter_archive_df.rating_numerator ==960] ##By checking the url mentioned in the text, I found that the mentioned rating is not for one puppy, it's for 14 dogs ##in the SAME picture. That's why the rating is 143/130 twitter_archive_df[twitter_archive_df.rating_numerator ==143] ###This tweet has no picture, so it should be ignored in the cleaning steps due to the instructions. twitter_archive_df[twitter_archive_df.rating_numerator ==666] ##No problem to be mentioned in this tweet. twitter_archive_df[twitter_archive_df.rating_numerator ==1776] ##Get the counts of each denominator value twitter_archive_df.rating_denominator.value_counts() ##Check the suspected values of rating_denominator to identify if this value is mentioned exactly in the text of the tweet (as a rating) or not.
##By checking the url mentioned in the text, I found that the mentioned rating is not correct. ##For example, in the first record, the mentioned rating is not 9/11, but 14/10. So the denominator should be corrected to 10 ##For those three records, the denominator's value should be corrected twitter_archive_df[twitter_archive_df.rating_denominator ==11] ##By checking the url mentioned in the text, I found that the mentioned rating is not 1/2, but 9/10. ##So the denominator should be corrected to 10 twitter_archive_df[twitter_archive_df.rating_denominator ==2] ##By checking the url mentioned in the text, I found that this tweet is just explaining ##how the rating system in WeRateDogs works. twitter_archive_df[twitter_archive_df.rating_denominator ==16] ##By checking the url mentioned in the text, I found that this tweet has no rating in the correct format, so this record should ##be deleted in the cleaning phase. twitter_archive_df[twitter_archive_df.rating_denominator ==15] ##By checking the url mentioned in the text, I found that this tweet has no rating in the correct format, so this record should ##be deleted in the cleaning phase.
twitter_archive_df[twitter_archive_df.rating_denominator ==7] ##By checking the url mentioned in the text, no problem to be mentioned here as ##the rating is correctly populated through the record twitter_archive_df[twitter_archive_df.rating_denominator ==90] ##Search for fractional numerators which are not shown due to the wrong cell format in the populated Excel sheet ##If found, they should be cleaned properly and the rating values corrected for either the numerators or denominators twitter_archive_df[twitter_archive_df['text'].str.contains(r"(\d+\.\d*\/\d+)")] image_predictions_df.sample(10) ##View descriptive statistics of the image_predictions dataframe image_predictions_df.describe() ##View info about the image_predictions dataframe image_predictions_df.info() ##Search for multiple tweets referring to the same puppy/dogs sum(image_predictions_df['jpg_url'].duplicated()) ##Get the exact duplicate records. The expected number is 132 because of 66*2 df_duplicates = image_predictions_df[image_predictions_df.jpg_url.duplicated(keep=False)] df_duplicates.sort_values('jpg_url') json_tweets_df.source.value_counts() json_tweets_df.retweeted_status.value_counts() ###Output _____no_output_____ ###Markdown Assessing Conclusion Quality Issues: Based on the concepts of good data quality (completeness, validity, accuracy, consistency), this dataset should be cleaned of the issues mentioned below. Twitter_Archive_df 1- Delete all tweets that have no images or actual ratings. 2- Correct the records with value 'None' to be null. 3- Correct the rating system due to fractional numerators/denominators. 4- Delete columns which we will not use. 5- Correct the numerators/denominators due to pictures which have more than one puppy and the denominator is not 10 due to unknown conditions. 6- Separate the full timestamp provided into three columns (Day-Month-Year) which will be helpful for other teams to query through this dataset.
Image_Predictions_df 1- Delete all duplicated **jpg_url** tweets. 2- Delete columns which we will not use. Tweet_json 1- Delete all tweets except for 'Tweet'. Tidiness Issues: Issues here are more related to the structure of the data and the datatypes used, and to making the whole dataset easy to use. 1- Change the datatype of the tweet_id column in the json_tweets table to **int64** instead of **object** in order to be able to merge the three tables together. 2- Melt the columns (doggo, floofer, pupper and puppo) into new **dog** and **dogs_blood** columns. **(This issue will be handled under the table's section.)** 3- Concatenate all tables together into one dataframe. Cleaning Data ###Code ##First of all, create a backup copy of each dataframe we have twitter_archive_df_clean = twitter_archive_df.copy() image_prediction_df_clean = image_predictions_df.copy() json_tweets_df_clean = json_tweets_df.copy() ###Output _____no_output_____ ###Markdown Quality Issues: Twitter_Archive_df issues: 1- Delete all retweets and all tweets that have no images or actual ratings. Definition There are 181 records with a value in retweeted_status_id, plus tweets which do not have an actual rating. Delete the retweets and those tweets.
The Code ###Code ##Delete retweets by only selecting the NaN records twitter_archive_df_clean = twitter_archive_df_clean[pd.isnull(twitter_archive_df_clean['retweeted_status_user_id'])] twitter_archive_df_clean = twitter_archive_df_clean[twitter_archive_df_clean['tweet_id'] != 832088576586297345] twitter_archive_df_clean = twitter_archive_df_clean[twitter_archive_df_clean['tweet_id'] != 810984652412424192] twitter_archive_df_clean = twitter_archive_df_clean[twitter_archive_df_clean['tweet_id'] != 682808988178739200] twitter_archive_df_clean = twitter_archive_df_clean[twitter_archive_df_clean['tweet_id'] != 835246439529840640] twitter_archive_df_clean = twitter_archive_df_clean[twitter_archive_df_clean['tweet_id'] != 686035780142297088] ###Output _____no_output_____ ###Markdown The Test ###Code ##Testing the cleaning code print(sum(twitter_archive_df_clean.retweeted_status_user_id.value_counts())) twitter_archive_df_clean[twitter_archive_df_clean['tweet_id'] == 832088576586297345] twitter_archive_df_clean[twitter_archive_df_clean['tweet_id'] == 810984652412424192] twitter_archive_df_clean[twitter_archive_df_clean['tweet_id'] == 682808988178739200] twitter_archive_df_clean[twitter_archive_df_clean['tweet_id'] == 835246439529840640] twitter_archive_df_clean[twitter_archive_df_clean['tweet_id'] == 686035780142297088] ###Output _____no_output_____ ###Markdown 2- Correct the records with name 'None' to be 'NaN'. The Definition During the assessment phase of this table, I noticed that 745 records have the value 'None' in the name column. So, we should correct those records to be null. The Code ###Code ##Change the name value of these specific records twitter_archive_df_clean['name'].replace('None',np.nan,inplace = True) ###Output _____no_output_____ ###Markdown The Test ###Code twitter_archive_df_clean.name.value_counts() ###Output _____no_output_____ ###Markdown 3- Correct the rating system due to fractional numerators/denominators.
Definition I found 6 records with a fractional rating in the body of the text. However, their numerator/denominator values are not correct. The Code ###Code ##Convert the datatype of the columns 'rating_numerator', 'rating_denominator' to float so they accept float numbers. twitter_archive_df_clean[['rating_numerator', 'rating_denominator']] = twitter_archive_df_clean[['rating_numerator', 'rating_denominator']].astype(float) ##Fix the values of rating_numerator and rating_denominator manually, because those records were caused by unknown reasons. twitter_archive_df_clean.loc[(twitter_archive_df_clean.tweet_id == 883482846933004288), 'rating_numerator'] = 13.5 twitter_archive_df_clean.loc[(twitter_archive_df_clean.tweet_id == 681340665377193984), 'rating_numerator'] = 9.5 twitter_archive_df_clean.loc[(twitter_archive_df_clean.tweet_id == 778027034220126208), 'rating_numerator'] = 11.27 twitter_archive_df_clean.loc[(twitter_archive_df_clean.tweet_id == 680494726643068929), 'rating_numerator'] = 11.26 twitter_archive_df_clean.loc[(twitter_archive_df_clean.tweet_id == 786709082849828864), 'rating_numerator'] = 9.75 ###Output _____no_output_____ ###Markdown The Test ###Code with pd.option_context('max_colwidth', 500): display(twitter_archive_df_clean[twitter_archive_df_clean['text'].str.contains(r"(\d+\.\d*\/\d+)")] [['tweet_id', 'text', 'rating_numerator', 'rating_denominator','name']]) ###Output /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:2: UserWarning: This pattern has match groups. To actually get the groups, use str.extract. ###Markdown 4- Delete columns which will not be helpful in our analysis. Definition Delete all columns which will not be helpful in our analysis.
The Code ###Code ##Get column names of twitter_archive_clean print(list(twitter_archive_df_clean)) ##Delete the columns that are not needed twitter_archive_df_clean = twitter_archive_df_clean.drop(['source', 'in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'expanded_urls'], 1) ###Output ['tweet_id', 'in_reply_to_status_id', 'in_reply_to_user_id', 'timestamp', 'source', 'text', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'expanded_urls', 'rating_numerator', 'rating_denominator', 'name', 'doggo', 'floofer', 'pupper', 'puppo'] ###Markdown The Test ###Code print(list(twitter_archive_df_clean)) ###Output ['tweet_id', 'timestamp', 'text', 'rating_numerator', 'rating_denominator', 'name', 'doggo', 'floofer', 'pupper', 'puppo'] ###Markdown 5- Melt the columns (doggo, floofer, pupper and puppo) into new dog and dogs_blood columns Definition Melt the doggo, floofer, pupper and puppo columns into dog and dogs_blood columns. Then drop the dog column. Sort by dogs_blood in order to then drop duplicates based on tweet_id, keeping the last occurrence.
The Code ###Code ##First, melt the doggo, floofer, pupper and puppo columns into dog and dogs_blood columns twitter_archive_df_clean = pd.melt(twitter_archive_df_clean, id_vars=['tweet_id', 'timestamp','text','rating_numerator', 'rating_denominator','name'], var_name='dog', value_name='dogs_blood') ##Second, drop the dog column twitter_archive_df_clean = twitter_archive_df_clean.drop('dog', 1) ##Third, sort by the dogs_blood column and then drop duplicates based on tweet_id, keeping the last occurrence twitter_archive_df_clean = twitter_archive_df_clean.sort_values('dogs_blood').drop_duplicates(subset='tweet_id',keep='last') ###Output _____no_output_____ ###Markdown The Test ###Code twitter_archive_df_clean['dogs_blood'].value_counts() ###Output _____no_output_____ ###Markdown 6- Correct the numerators/denominators due to pictures which have more than one puppy and the denominator is not 10 due to unknown conditions. The Definition Firstly, there are 5 tweets with a denominator not equal to 10 for unknown reasons. We need to update these records manually. Secondly, some tweets with a denominator not equal to 10 show multiple dogs. **For example, tweet_id 697463031882764288** has a numerator/denominator of 44/40 because there are 4 dogs in the picture.
So, I decided to create a new column called 'rating', whose value equals 10 * numerator / denominator The Code ###Code ## First, update the 5 records which have an incorrect denominator twitter_archive_df_clean.loc[(twitter_archive_df_clean.tweet_id == 740373189193256964), 'rating_numerator'] = 14 twitter_archive_df_clean.loc[(twitter_archive_df_clean.tweet_id == 740373189193256964), 'rating_denominator'] = 10 twitter_archive_df_clean.loc[(twitter_archive_df_clean.tweet_id == 682962037429899265), 'rating_numerator'] = 10 twitter_archive_df_clean.loc[(twitter_archive_df_clean.tweet_id == 682962037429899265), 'rating_denominator'] = 10 twitter_archive_df_clean.loc[(twitter_archive_df_clean.tweet_id == 666287406224695296), 'rating_numerator'] = 9 twitter_archive_df_clean.loc[(twitter_archive_df_clean.tweet_id == 666287406224695296), 'rating_denominator'] = 10 twitter_archive_df_clean.loc[(twitter_archive_df_clean.tweet_id == 722974582966214656), 'rating_numerator'] = 13 twitter_archive_df_clean.loc[(twitter_archive_df_clean.tweet_id == 722974582966214656), 'rating_denominator'] = 10 twitter_archive_df_clean.loc[(twitter_archive_df_clean.tweet_id == 716439118184652801), 'rating_numerator'] = 13.5 twitter_archive_df_clean.loc[(twitter_archive_df_clean.tweet_id == 716439118184652801), 'rating_denominator'] = 10 ## Second, create a new column called rating with float type twitter_archive_df_clean['rating'] = 10 * twitter_archive_df_clean['rating_numerator'] / twitter_archive_df_clean['rating_denominator'].astype(float) ###Output _____no_output_____ ###Markdown The Test ###Code with pd.option_context('max_colwidth', 200): display(twitter_archive_df_clean[twitter_archive_df_clean['rating_denominator'] != 10][['tweet_id','text', 'rating_numerator', 'rating_denominator']]) ###Output _____no_output_____ ###Markdown 7- Separate the full timestamp provided into three columns (Day-Month-Year) which will be helpful for other teams to query through this dataset.
Definition Firstly, convert the timestamp column to datetime datatype. Extract year, month and day to new columns. Finally drop the timestamp column. The Code ###Code ##Convert the datatype of the column to datetime twitter_archive_df_clean['timestamp'] = pd.to_datetime(twitter_archive_df_clean['timestamp']) #Extract year, month and day to new columns twitter_archive_df_clean['year'] = twitter_archive_df_clean['timestamp'].dt.year twitter_archive_df_clean['month'] = twitter_archive_df_clean['timestamp'].dt.month twitter_archive_df_clean['day'] = twitter_archive_df_clean['timestamp'].dt.day #Finally drop timestamp column twitter_archive_df_clean = twitter_archive_df_clean.drop('timestamp', 1) ###Output _____no_output_____ ###Markdown The Test ###Code list(twitter_archive_df_clean) ###Output _____no_output_____ ###Markdown Image_Predictions_df: 1- Delete all duplicated jpg_url tweets. Definition Delete all the duplicated jpg_url tweets, which are 66 records. The Code ###Code ##Delete duplicated jpg_url image_prediction_df_clean = image_prediction_df_clean.drop_duplicates(subset=['jpg_url'], keep='last') ###Output _____no_output_____ ###Markdown The Test ###Code sum(image_prediction_df_clean['jpg_url'].duplicated()) ###Output _____no_output_____ ###Markdown 2- Delete columns which will not be helpful in our analysis. Definition Delete columns which will not be helpful in our analysis. The Code ###Code print(list(image_prediction_df_clean)) image_prediction_df_clean = image_prediction_df_clean.drop(['img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], 1) ###Output ['tweet_id', 'jpg_url', 'img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'] ###Markdown The Test ###Code print(list(image_prediction_df_clean)) ###Output ['tweet_id', 'jpg_url'] ###Markdown Twitter_json: 1- Delete all tweets except for 'Tweet'. Definition Deleting all tweets except for the 'Tweet' records.
The Code ###Code json_tweets_df_clean = json_tweets_df_clean[json_tweets_df_clean['retweeted_status'] == 'Tweet'] ###Output _____no_output_____ ###Markdown The Test ###Code json_tweets_df_clean['retweeted_status'].value_counts() ###Output _____no_output_____ ###Markdown Tidiness Issues: 1- Change tweet_id to type int64 in order to merge with the other 2 tables. Definition Change tweet_id to type int64 in order to merge with the other 2 tables The Code ###Code ##Change tweet_id from str to int json_tweets_df_clean['tweet_id'] = json_tweets_df_clean['tweet_id'].astype(int) ###Output _____no_output_____ ###Markdown The Test ###Code json_tweets_df_clean['tweet_id'].dtypes ###Output _____no_output_____ ###Markdown 3- Concatenate all tables together into one dataset Definition All tables should be part of one dataset The Code ###Code ##Join the tables with a left join on the tweet_id column twitter_df = pd.merge(twitter_archive_df_clean,image_prediction_df_clean, how = 'left', on = ['tweet_id']) # Only keeping rows that have pictures (jpg_url) twitter_df = twitter_df[twitter_df['jpg_url'].notnull()] ##Create a new dataframe that merges twitter_df and json_tweets_df_clean full_twitter_df = pd.merge(twitter_df, json_tweets_df_clean, how = 'left', on = ['tweet_id']) ###Output _____no_output_____ ###Markdown The Test ###Code twitter_df.sample() twitter_df.info() full_twitter_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1927 entries, 0 to 1926 Data columns (total 16 columns): tweet_id 1927 non-null int64 text 1927 non-null object rating_numerator 1927 non-null float64 rating_denominator 1927 non-null float64 name 1398 non-null object dogs_blood 1927 non-null object rating 1927 non-null float64 year 1927 non-null int64 month 1927 non-null int64 day 1927 non-null int64 jpg_url 1927 non-null object favorite_count 1923 non-null float64 retweet_count 1923 non-null float64 source 1923 non-null object retweeted_status 1923 non-null object url 1923 non-null
object dtypes: float64(5), int64(4), object(7) memory usage: 255.9+ KB ###Markdown Storing, Analyzing, and Visualizing Data ###Code ##Store the clean DataFrame in a CSV file full_twitter_df.to_csv('twitter_archive_master.csv',index=False, encoding = 'utf-8') ###Output _____no_output_____ ###Markdown The First Insight: Most common device used to post dogs' Tweets. ###Code ##Count tweets per source (device) device_type_count_df = full_twitter_df.groupby('source').count() device_type_count_df['tweet_id'] ##Create new dataframe with different sources and their counts plotted_df = pd.DataFrame() plotted_df= full_twitter_df.groupby("source")["tweet_id"].count().reset_index(name="count") ##Drop NaN values plotted_df = plotted_df[pd.notnull(plotted_df['source'])] plotted_df.plot(x="source", y="count", kind="bar") plt.title('Most used devices for dog posts') plt.xlabel('The devices') plt.ylabel('The counts') fig = plt.gcf() fig.savefig('First_Insight_plot.png',bbox_inches='tight'); ###Output _____no_output_____ ###Markdown The Second Insight: Comparison of Retweets vs Favourites (likes) ###Code full_twitter_df.plot(kind='scatter',x='retweet_count',y='favorite_count', alpha = 0.3) plt.xlabel('Retweets') plt.ylabel('Favourites') plt.title('Retweets and favorites Scatter plot') fig = plt.gcf() fig.savefig('Second_Insight_plot.png'); ###Output _____no_output_____ ###Markdown The Third Insight: The highest ratings do not always come with the highest retweets ###Code full_twitter_df.plot(x='retweet_count', y='rating', kind='scatter') plt.xlabel('Retweet Counts') plt.ylabel('Ratings') plt.title('Scatter Plot of Ratings VS Retweet Counts') fig = plt.gcf() fig.savefig('Third_Insight_plot.png'); ###Output _____no_output_____ ###Markdown 
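The Second Insight's scatter plot suggests retweets and favorites move together; that relationship can also be quantified with the standard pandas `.corr()` (Pearson by default). A sketch with invented counts:

```python
import pandas as pd

# Invented retweet/favorite counts for illustration only.
df = pd.DataFrame({"retweet_count": [100, 250, 400, 800, 1600],
                   "favorite_count": [300, 700, 1200, 2500, 5000]})

# Pearson correlation coefficient between the two count columns
r = df["retweet_count"].corr(df["favorite_count"])
print(round(r, 3))
```

A value close to 1 backs up the "strong positive correlation" read off the scatter plot.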
Gather ###Code import pandas as pd import numpy as np import requests import os import json import time import datetime import random import matplotlib.pyplot as plt import seaborn as sns import math %matplotlib inline # Display full (untruncated) column contents in pandas output pd.set_option('display.max_colwidth', None) # Load the WeRateDogs Twitter archive WeRateDogs_twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') ###Output _____no_output_____ ###Markdown Tweet Image Prediction File Below, the tweet image predictions, which represent what breed of dog, other subject or animal is present in each tweet (according to a neural network), will be downloaded programmatically, using the [Requests](https://pypi.org/project/requests/) library and the [URL](https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv) provided by Udacity. Then they'll be viewed and saved as a pandas DataFrame to the computer. ###Code # Download the image predictions with the provided url. url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) response ###Output _____no_output_____ ###Markdown A success status response code indicates that the request has succeeded. ###Code # Save the file to the computer image_predictions = url.split('/')[-1] with open(os.path.join('./', image_predictions), mode='wb') as file: file.write(response.content) # Read the image predictions file into the image_predictions_df dataframe image_predictions_df = pd.read_csv(image_predictions, sep='\t') ###Output _____no_output_____ ###Markdown Tweet JSON Data Below, the tweet data stored in JSON for each of the tweets in the WeRateDogs Twitter archive will be queried using Python's Tweepy library. 
Each tweet's JSON data will be written to its own line in a file called tweet_json.txt file and then read line by line into a pandas DataFrame with the required fields. tweet ID, retweet count, and favorite count. ###Code import tweepy consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) # Create Twitter API object and setting rate limit parameters api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) # get the tweet status status = api.get_status(WeRateDogs_twitter_archive.tweet_id[1], tweet_mode='extended') # Create tweet_json.txt file if it doesn't already exist file = 'tweet_json.txt' if not os.path.isfile(file): open(file, 'w', encoding = 'UTF-8') # Get a list of tweet_ids from the enhanced twitter archive to use for downloading with the Twitter API tweet_ids = WeRateDogs_twitter_archive.tweet_id.values print("# of tweet_ids: " + str(len(tweet_ids))) tweet_errors = [] # get start time of query print("Start time:", datetime.datetime.now().time()) start = time.time() # write JSON to .txt file with open('tweet_json.txt', 'w', encoding = 'UTF-8') as file: for tweet_id in tweet_ids: try: tweet = api.get_status(tweet_id, tweet_mode = 'extended') json.dump(tweet._json, file) file.write('\n') except Exception as e: print("Error in Tweet ID:", tweet_id, "Time:", datetime.datetime.now().time()) tweet_errors.append(tweet_id) # get end time of query end = time.time() print("End time:", datetime.datetime.now().time()) # display runtime print("Runtime: ", end - start) # display IDs of tweets with errors tweet_errors # Extract JSON data from the text file, and save it to a DataFrame tweets_json_data = [] with open('tweet_json.txt') as json_file: for line in json_file: data = json.loads(line) tweets_json_data.append({'tweet_id': data['id'], 'retweet_count': data['retweet_count'], 
'favorite_count': data['favorite_count'] }) # Create DataFrame from tweets_json_data dictionary tweets_api_df = pd.DataFrame(tweets_json_data, columns = ['tweet_id', 'retweet_count', 'favorite_count']) tweets_api_df.head() # Save the DataFrame object into a csv file without the preceding indices of each row tweets_api_df.to_csv('tweets_api.csv', index=False) ###Output _____no_output_____ ###Markdown Assess Visual Assessment ###Code # Display the WeRateDogs_twitter_archive table WeRateDogs_twitter_archive # Display the image_predictions table image_predictions_df # Read and display the Tweepy tweets table tweets_api = pd.read_csv('tweets_api.csv') tweets_api ###Output _____no_output_____ ###Markdown `WeRateDogs_twitter_archive`- Dog names have missing values- text column includes several data types (URLs, ratings and strings) `tweets_api`- columns can be merged with those of the `WeRateDogs_twitter_archive` table Programmatic Assessment ###Code WeRateDogs_twitter_archive.sample(10) image_predictions_df.sample(10) tweets_api.sample(10) WeRateDogs_twitter_archive.tail() WeRateDogs_twitter_archive.info() WeRateDogs_twitter_archive.rating_numerator.value_counts() WeRateDogs_twitter_archive.rating_denominator.value_counts() WeRateDogs_twitter_archive.source.value_counts() image_predictions_df.tail() image_predictions_df.info() tweets_api.tail() tweets_api.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2331 non-null int64 1 retweet_count 2331 non-null int64 2 favorite_count 2331 non-null int64 dtypes: int64(3) memory usage: 54.8 KB ###Markdown Quality `WeRateDogs_twitter_archive` - *retweeted_status_id* is a float and not an integer- Multiple formats for *retweeted_status_id*- *retweeted_status_user_id* is a float and not an integer- Multiple formats for *retweeted_status_user_id*- *retweeted_status_timestamp* is a string and 
not a datetime object- Missing records in (*in_reply_to_status_id, in_reply_to_user_id, retweeted_user_id, retweeted_status_user_id, retweeted_status_timestamp, expanded_urls* columns)(**can't clean**)- Odd values for *rating_numerator* and some erroneous values for *rating_denominator*- There are 4 categorical values in the `source` column. Twitter for iPhone, Vine - Make a Scene, Twitter Web Client, TweetDeck `image_predictions_df` - Lower case *p1* names sometimes, upper case other times- Lower case *p2* names sometimes, upper case other times- Lower case *p3* names sometimes, upper case other times- Erroneous/unrelated information where *p1_dog*, *p2_dog* and *p3_dog* are all False - Missing dog name information for *name* column Tidiness `WeRateDogs_twitter_archive` - *doggo*, *floofer*, *pupper* and *puppo* columns can be merged into one column (*dog_stages*) and the data type for the *dog_stages* needs to be categorical- Two variables in the text column should be split into text and short_urls- The key points in the project description indicate that we are only interested in original tweets and not in retweets. The columns `retweeted_status_id`, `retweeted_status_user_id` and `retweeted_status_timestamp` can be removed to make the table cleaner. Same goes for `in_reply_to_status_id` and `in_reply_to_user_id` Clean ###Code # First make a copy of all the data that you want to clean WeRateDogs_twitter_arch_clean = WeRateDogs_twitter_archive.copy() image_predictions_clean = image_predictions_df.copy() tweets_api_clean = tweets_api.copy() ###Output _____no_output_____ ###Markdown Missing Data Define As shown in the cells below, there's a great amount of missing and erroneus (as Python keeps converting the tweet's large numerical id's to scientific notation when we try to write it to a CSV file). 
data in the WeRateDogs_twitter_arch_clean table columns: in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id and retweeted_status_timestamp. We don't have any resource to collect and clean this data, so we will not be able to draw conclusions from these columns. We can remove these columns to make our table cleaner. ###Code WeRateDogs_twitter_arch_clean['in_reply_to_status_id'].value_counts() WeRateDogs_twitter_arch_clean['in_reply_to_user_id'].value_counts() WeRateDogs_twitter_arch_clean['retweeted_status_id'].value_counts() WeRateDogs_twitter_arch_clean['retweeted_status_user_id'].value_counts() WeRateDogs_twitter_arch_clean['retweeted_status_timestamp'].value_counts() ###Output _____no_output_____ ###Markdown Code ###Code WeRateDogs_twitter_arch_clean.drop(columns=['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code # Confirm the columns are gone list(WeRateDogs_twitter_arch_clean) ###Output _____no_output_____ ###Markdown Define Convert the timestamp column to the datetime data type Code ###Code WeRateDogs_twitter_arch_clean.timestamp = pd.to_datetime(WeRateDogs_twitter_arch_clean.timestamp) ###Output _____no_output_____ ###Markdown Test ###Code # Confirm data types WeRateDogs_twitter_arch_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 timestamp 2356 non-null datetime64[ns, UTC] 2 source 2356 non-null object 3 text 2356 non-null object 4 expanded_urls 2297 non-null object 5 rating_numerator 2356 non-null int64 6 rating_denominator 2356 non-null int64 7 name 2356 non-null object 8 doggo 2356 non-null object 9 floofer 2356 non-null object 10 pupper 2356 non-null object 11 puppo 2356 non-null object dtypes: 
datetime64[ns, UTC](1), int64(3), object(8) memory usage: 221.0+ KB ###Markdown Define Replace the `source` column with the display portion of itself. Extract the string between ` and ` and change the data type to categorical. Code ###Code WeRateDogs_twitter_arch_clean['source'] = WeRateDogs_twitter_arch_clean['source'].str.extract('^<a.+>(.+)</a>$') ###Output _____no_output_____ ###Markdown Test ###Code WeRateDogs_twitter_arch_clean.source.value_counts() # Change the data type to categorical WeRateDogs_twitter_arch_clean['source'] = WeRateDogs_twitter_arch_clean.source.astype('category') ###Output _____no_output_____ ###Markdown Test ###Code WeRateDogs_twitter_arch_clean.dtypes ###Output _____no_output_____ ###Markdown Define Convert names to upper case in WeRateDogs_twitter_arch_clean table. Replace the erroneous names like "a" and "None" with NaN (missing value). Repeat the same thing for dog stages in image_predictions_clean table Code ###Code import numpy as np # check the dog names to look for any unrelated/worng name name_check = WeRateDogs_twitter_arch_clean.name.value_counts().to_frame() name_check_transposed = name_check.T name_check_transposed.head() WeRateDogs_twitter_arch_clean.name.replace(to_replace=[None], value=np.nan, inplace=True) WeRateDogs_twitter_arch_clean.name.replace(['a', 'Not','The'], np.nan, inplace=True) image_predictions_clean.p1 = image_predictions_clean.p1.str.capitalize() image_predictions_clean.p2 = image_predictions_clean.p2.str.capitalize() image_predictions_clean.p3 = image_predictions_clean.p3.str.capitalize() ###Output _____no_output_____ ###Markdown Test ###Code WeRateDogs_twitter_arch_clean.head() columns = [image_predictions_clean.p1, image_predictions_clean.p2, image_predictions_clean.p3] columns image_predictions_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2075 
non-null int64 1 jpg_url 2075 non-null object 2 img_num 2075 non-null int64 3 p1 2075 non-null object 4 p1_conf 2075 non-null float64 5 p1_dog 2075 non-null bool 6 p2 2075 non-null object 7 p2_conf 2075 non-null float64 8 p2_dog 2075 non-null bool 9 p3 2075 non-null object 10 p3_conf 2075 non-null float64 11 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown Define Remove the rows from `image_predictions_clean` table where the values in columns 'p1_dog', 'p2_dog', 'p3_dog' are all False. Code ###Code image_predictions_clean.drop(image_predictions_clean.loc[(image_predictions_clean['p1_dog']== False) & (image_predictions_clean['p2_dog']== False) & (image_predictions_clean['p3_dog']== False)].index, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code [image_predictions_clean.p1_dog, image_predictions_clean.p2_dog, image_predictions_clean.p3_dog] == False ###Output _____no_output_____ ###Markdown Define Extract the urls form `WeRateDogs_twitter_arch_clean` text column and move it to a new column called `short_urls` Code ###Code # Get length of short url short_url_length = len(WeRateDogs_twitter_arch_clean['text'][0].split()[-1])+1 short_url_length # New column with short url WeRateDogs_twitter_arch_clean['short_urls'] = WeRateDogs_twitter_arch_clean['text'].apply(lambda row: row[-short_url_length:]) # Remove the short urls from text column WeRateDogs_twitter_arch_clean['text'] = WeRateDogs_twitter_arch_clean['text'].apply(lambda row: row[:-short_url_length]) ###Output _____no_output_____ ###Markdown Test ###Code WeRateDogs_twitter_arch_clean.head(3) ###Output _____no_output_____ ###Markdown Define Some values for *rating_numerator* and *rating_denominator* are incorrect Code ###Code # Find the numerator and denominator values and save them in the correct columns (*rating_numerator* and # *rating_denominator*) # Correct format of ratings string match corr_format = 
WeRateDogs_twitter_arch_clean.text.str.contains('\d+/10') # Remove rows that do not match the correct format WeRateDogs_twitter_arch_clean = WeRateDogs_twitter_arch_clean[corr_format].copy() # Extract numerators and denominators extract = WeRateDogs_twitter_arch_clean.text.str.extract('(\d+/10)', expand = False).copy() WeRateDogs_twitter_arch_clean['rating_numerator'] = extract.apply(lambda x: int(str(x)[:-3])) WeRateDogs_twitter_arch_clean['rating_denominator'] = 10 ###Output _____no_output_____ ###Markdown Test ###Code WeRateDogs_twitter_arch_clean.rating_numerator.describe() WeRateDogs_twitter_arch_clean.rating_denominator.describe() WeRateDogs_twitter_arch_clean.sample(2) ###Output _____no_output_____ ###Markdown Define Merge the `doggo`, `floofer`, `pupper`, `puppo` columns into a `dog_stages` column and change the data type to categorical. ###Code WeRateDogs_twitter_arch_clean[['doggo', 'floofer', 'pupper', 'puppo']].describe() ###Output _____no_output_____ ###Markdown Code ###Code # replace the stage name with 1, and 'None' with 0, like a dummy variable dummy = lambda x: 0 if x == 'None' else 1 WeRateDogs_twitter_arch_clean.doggo = WeRateDogs_twitter_arch_clean.doggo.apply(dummy) WeRateDogs_twitter_arch_clean.floofer = WeRateDogs_twitter_arch_clean.floofer.apply(dummy) WeRateDogs_twitter_arch_clean.pupper = WeRateDogs_twitter_arch_clean.pupper.apply(dummy) WeRateDogs_twitter_arch_clean.puppo = WeRateDogs_twitter_arch_clean.puppo.apply(dummy) # by adding the stage columns, we can see how many are 'none' and how many stages are set WeRateDogs_twitter_arch_clean['none'] = WeRateDogs_twitter_arch_clean['doggo'] + \ WeRateDogs_twitter_arch_clean['floofer'] + WeRateDogs_twitter_arch_clean['pupper'] + \ WeRateDogs_twitter_arch_clean['puppo'] # let's have a look at what we have before we continue... WeRateDogs_twitter_arch_clean['none'].value_counts() ###Output _____no_output_____ ###Markdown In the above cell, we can see an interesting finding. 
There are 14 tweets that have 2 dog stages. These must be tweets about more than one dog. Since there are 14 entries, I'll select the dog stages in increasing order of floofer, puppo, doggo and pupper to avoid losing too much information. ###Code # if there are no stages specified then set to 1 stage_none = lambda x: 1 if x == 0 else 0 # reset values in 'none' WeRateDogs_twitter_arch_clean['none'] = WeRateDogs_twitter_arch_clean['none'].apply(stage_none) # Stages in increasing count order: floofer, puppo, doggo and pupper stage = ['floofer', 'puppo', 'doggo', 'pupper', 'none'] # Set the conditions for selecting the dog stage based on count order conditions = [ (WeRateDogs_twitter_arch_clean[stage[0]] == 1), (WeRateDogs_twitter_arch_clean[stage[1]] == 1), (WeRateDogs_twitter_arch_clean[stage[2]] == 1), (WeRateDogs_twitter_arch_clean[stage[3]] == 1), (WeRateDogs_twitter_arch_clean[stage[4]] == 1)] # select the dog stage based on the first successful condition; stage[4] is 'None' WeRateDogs_twitter_arch_clean['stage'] = np.select(conditions, stage, default = stage[4]) # drop the original 4 stage columns, and the temporary 'none' WeRateDogs_twitter_arch_clean.drop(stage, axis = 1, inplace = True) # Set the 'stage' column data type to category WeRateDogs_twitter_arch_clean['stage'] = WeRateDogs_twitter_arch_clean.stage.astype('category') ###Output _____no_output_____ ###Markdown Test ###Code WeRateDogs_twitter_arch_clean.sample(10) WeRateDogs_twitter_arch_clean.stage.value_counts() ###Output _____no_output_____ ###Markdown Define The best dog breed prediction and associated confidence levels can be combined with the `WeRateDogs_twitter_arch_clean` table to provide meaningful information and analysis opprotunity about the dog in the tweets based on tweets' images. 
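The selection logic used here hinges on `np.select` returning the label of the *first* condition that matches, which is why the conditions are ordered by increasing stage count. A toy illustration with invented dummy-variable rows:

```python
import numpy as np
import pandas as pd

# Invented rows: row 0 is only a doggo; row 1 is tagged both puppo and doggo.
df = pd.DataFrame({"floofer": [0, 0], "puppo": [0, 1],
                   "doggo": [1, 1], "pupper": [1, 0]})

labels = ["floofer", "puppo", "doggo", "pupper"]
conditions = [df[s] == 1 for s in labels]

# np.select picks the first matching condition per row
df["stage"] = np.select(conditions, labels, default="none")
print(df["stage"].tolist())
```

Row 1 carries two stage flags, but because `puppo` is tested before `doggo`, the earlier label wins, mirroring how the 14 double-tagged tweets are resolved above.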
Code ###Code # Create columns in the image_predictions_clean table # Conditions for selection conditions = [image_predictions_clean['p1_dog'], image_predictions_clean['p2_dog'], image_predictions_clean['p3_dog']] # set up the breeds order breeds = [image_predictions_clean['p1'], image_predictions_clean['p2'], image_predictions_clean['p3']] # set up the choice confidence order for confidence level based on the selection conditions confidence_choice = [image_predictions_clean['p1_conf'], image_predictions_clean['p2_conf'], image_predictions_clean['p3_conf']] # Select predicted breed based on first successful condition image_predictions_clean['breed'] = np.select(conditions, breeds, default = 'none') # select the predicted confidence level based on the first successful condition image_predictions_clean['confidence'] = np.select(conditions, confidence_choice, default = 0) ###Output _____no_output_____ ###Markdown Test ###Code image_predictions_clean.head(3) ###Output _____no_output_____ ###Markdown Define Merge the derived `breed` and `confidence` columns into the `WeRateDogs_twitter_arch_clean` table (joining on `tweet_id`), then drop them from `image_predictions_clean` so each variable lives in one table. 
Code ###Code # Merge the breed and confidence columns to archive mask = ['tweet_id', 'breed', 'confidence'] WeRateDogs_twitter_arch_clean = pd.merge(WeRateDogs_twitter_arch_clean, image_predictions_clean[mask], on = 'tweet_id', how = 'inner') # Drop the merged comlumns from image_predictions_clean table image_predictions_clean.drop(['breed', 'confidence'], axis = 1, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code WeRateDogs_twitter_arch_clean.info() WeRateDogs_twitter_arch_clean.sample(5) ###Output _____no_output_____ ###Markdown Define Merge the `tweets_api` retweet_count and favorite_count columns to the `WeRateDogs_twitter_arch_clean` ###Code # Merge the retweet_count and favorite_count columns to archive mask2 = ['tweet_id', 'retweet_count', 'favorite_count'] WeRateDogs_twitter_arch_clean = pd.merge(WeRateDogs_twitter_arch_clean, tweets_api[mask2], on = 'tweet_id', how = 'inner') WeRateDogs_twitter_arch_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1725 entries, 0 to 1724 Data columns (total 14 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1725 non-null int64 1 timestamp 1725 non-null datetime64[ns, UTC] 2 source 1725 non-null category 3 text 1725 non-null object 4 expanded_urls 1725 non-null object 5 rating_numerator 1725 non-null int64 6 rating_denominator 1725 non-null int64 7 name 1680 non-null object 8 short_urls 1725 non-null object 9 stage 1725 non-null category 10 breed 1725 non-null object 11 confidence 1725 non-null float64 12 retweet_count 1725 non-null int64 13 favorite_count 1725 non-null int64 dtypes: category(2), datetime64[ns, UTC](1), float64(1), int64(5), object(5) memory usage: 178.9+ KB ###Markdown Define Change the erroneous rating_numerator values in `WeRateDogs_twitter_arch_clean` ###Code # Check the top 10 highest rated tweets are WeRateDogs_twitter_arch_clean.sort_values(by = 'rating_numerator', ascending= False).head(10) ###Output _____no_output_____ 
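The merges above use `how='inner'`, so only tweet IDs present in both tables survive; that is why the row count shrinks after each join. A minimal sketch with invented toy frames:

```python
import pandas as pd

# Invented data: one archive row (id 1) has no API counts, one count row (id 4)
# has no archive entry; an inner join keeps only the overlap.
archive = pd.DataFrame({"tweet_id": [1, 2, 3],
                        "rating_numerator": [12, 13, 10]})
counts = pd.DataFrame({"tweet_id": [2, 3, 4],
                       "retweet_count": [50, 20, 5],
                       "favorite_count": [200, 90, 30]})

merged = pd.merge(archive, counts, on="tweet_id", how="inner")
print(merged["tweet_id"].tolist())
```

Only IDs 2 and 3 remain, each row now carrying columns from both source tables.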
###Markdown Code ###Code # Change the erroneous rating_numerator values, targeting only that column so # unrelated occurrences of 75, 27 and 26 elsewhere in the table are untouched WeRateDogs_twitter_arch_clean['rating_numerator'] = WeRateDogs_twitter_arch_clean['rating_numerator'].replace({75: 10, 27: 11, 26: 11}) ###Output _____no_output_____ ###Markdown I changed the top 3 ratings as they were wrong. The erroneous values were the decimal part of the rating (seen in the text column). They were probably mishandled during data migration. Test ###Code WeRateDogs_twitter_arch_clean.sort_values(by = 'rating_numerator', ascending= False).head(10) # let's change the confidence to percent WeRateDogs_twitter_arch_clean['confidence_percent'] = WeRateDogs_twitter_arch_clean['confidence']*100 WeRateDogs_twitter_arch_clean.drop(columns=['confidence'], inplace=True) WeRateDogs_twitter_arch_clean.head(2) # Store the clean DataFrame(s) in a CSV file with the main one named twitter_archive_master.csv WeRateDogs_twitter_arch_clean.to_csv('twitter_archive_master.csv', index=False) ls *.csv ###Output Volume in drive C is Windows Volume Serial Number is D885-94BA Directory of C:\Users\zariped\Desktop\Data_Science\Udacity\data_wrangling\data_wrangling_project 01/03/2021 12:29 AM 68,640 tweets_api.csv 01/07/2021 07:48 PM 582,695 twitter_archive_master.csv 12/30/2020 03:54 PM 915,692 twitter-archive-enhanced.csv 3 File(s) 1,567,027 bytes 0 Dir(s) 214,227,546,112 bytes free ###Markdown Data Analysis & Visualization ###Code WeRateDogs_twitter_arch_clean.info() WeRateDogs_twitter_arch_clean.duplicated().sum() ###Output _____no_output_____ ###Markdown Let's first visualize the univariate distribution of all variables in the dataset along with all of their pairwise relationships to get an overall view. ###Code pd.options.display.max_rows = 4000 sns.pairplot(WeRateDogs_twitter_arch_clean); ###Output _____no_output_____ ###Markdown We can see that confidence and rating_numerator are skewed to the left. 
This observation indicates that a significant fraction of high confidence data is above 50%, and the majority of dog ratings are in the 10-12 rating range.favorite_count and retweet_count clearly show a big skew to the right indicating the majority of the data ~75% fall below 10,000 favorite_count and ~75% fall below 2,900 retweet_count. ###Code WeRateDogs_twitter_arch_clean['rating_numerator'].describe() # Let's have a look at the rating_numerator histogram in more detail import seaborn as sns sns.histplot(WeRateDogs_twitter_arch_clean['rating_numerator'], discrete=True) ###Output _____no_output_____ ###Markdown The distribution of ratings is skewed to the left. From the descriptive statistics above we see that more than half of all ratings are between 10 and 12 inclusive. ###Code # Let's have a look at the confidence_percent histogram in more detail import seaborn as sns sns.histplot(WeRateDogs_twitter_arch_clean['confidence_percent'], discrete=True) WeRateDogs_twitter_arch_clean['confidence_percent'].describe() ###Output _____no_output_____ ###Markdown As shown in the confidence_percent histogram and the descriptive statistics, more than half of all confidence levels are above 54%. ###Code # Let's have a look at the favorite_count histogram in more detail import seaborn as sns sns.histplot(WeRateDogs_twitter_arch_clean['favorite_count'], bins=50) # Let's have a look at the retweet_count histogram in more detail import seaborn as sns sns.histplot(WeRateDogs_twitter_arch_clean['retweet_count'], bins=50) ###Output _____no_output_____ ###Markdown favorite_count and retweet_count histograms clearly show a big skew to the right indicating the majority of the data ~75% fall below 10,000 favorite_count and ~75% fall below 2,900 retweet_countLet's investigate this further... 
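The skew read off the histograms above can be checked numerically with the standard `Series.skew()` (positive means a long right tail, as in favorite_count and retweet_count). A toy illustration with invented values:

```python
import pandas as pd

# Invented data with one large outlier, producing a long right tail
s = pd.Series([1, 2, 2, 3, 3, 4, 50])

# Sample skewness: > 0 for right-skewed, < 0 for left-skewed
print(s.skew() > 0)
```

For the real columns, a strongly positive skew value would confirm the "~75% below the mean-dragging outliers" pattern described above.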
###Code [WeRateDogs_twitter_arch_clean['favorite_count'].describe(), WeRateDogs_twitter_arch_clean['retweet_count'].describe()] # Let's see which tweets got the highest favorite_counts WeRateDogs_twitter_arch_clean.sort_values(by = 'favorite_count', ascending= False).head(10) # Let's see which tweets got the highest retweet_counts WeRateDogs_twitter_arch_clean.sort_values(by = 'retweet_count', ascending= False).head(10) plt.figure(figsize=(12,5)) sns.scatterplot(data=WeRateDogs_twitter_arch_clean, x="favorite_count", y="retweet_count", hue="rating_numerator", size="rating_numerator", palette="deep", sizes=(20, 200), style="rating_numerator", legend="full", alpha= 0.7); ###Output _____no_output_____ ###Markdown From the scatter plot above, we can conclude that the number of retweets has a strong positive correlation with the number of favorites a tweet receives. Both of these variables are also positively correlated with the rating that the dogs have received (although not as strong of a correlation). ###Code # Let's see which breed of dogs have the highest ratings popular_breeds = WeRateDogs_twitter_arch_clean.groupby(['breed', 'rating_numerator'], as_index = True).mean().sort_values('rating_numerator', ascending=False) popular_breeds.head(20).index ###Output _____no_output_____ ###Markdown As seen above, the highest rated dog breeds are *Rottweiler*, *French_bulldog*, *Old_english_sheepdog*, *Golden_retriever*, *Bloodhound*, *Samoyed*, *Chihuahua*, *Irish_setter*, *Pembroke*, *Standard_poodle*, *tan_coonhound*, *Labrador_retriever*, *Pomeranian*, *Pomeranian*, *Lakeland_terrier*, *Eskimo_dog*, *Bedlington_terrier* and *Gordon_setter*. 
###Code stages = WeRateDogs_twitter_arch_clean['stage'].value_counts().index[1:] sns.set(style="darkgrid") sns.countplot(data = WeRateDogs_twitter_arch_clean, y = 'stage', order = stages, orient = 'v') # plt.xticks(rotation = 360) plt.xlabel('Count', fontsize=14) plt.ylabel('Dog Stages', fontsize=14) plt.title('Distribution of Dog Stages',fontsize=16) WeRateDogs_twitter_arch_clean['stage'].value_counts() ###Output _____no_output_____ ###Markdown Wrangle and Analyze Data: We Rate Dogs ([@dog_rates]("https://twitter.com/dog_rates"))A data analysis project focused on data wrangling efforts. Table of Contents- [Introduction](intro)- [Gather](gather)- [Assess](assess) - Detect and document at least **eight (8) quality issues** and **two (2) tidiness issues**- [Clean](clean)- [Storing, Analyzing and Visualizing Data](storing_analyzing_and_visualizing) - At least **three (3) insights** and **one (1) visualization** must be produced- [Wrangling Efforts Report](wranglingeffortsreport)- [Communicate Findings Report](communicatefindingsreport)- [References](references) IntroductionReal-world data rarely comes clean. Using Python and its libraries, I will gather data from a variety of sources and in a variety of formats, assess its quality and tidiness, then clean it. This is called data wrangling. I will document my wrangling efforts in a Jupyter Notebook, plus showcase them through analyses and visualizations using Python (and its libraries) and/or SQL.The dataset that I will be wrangling (and analyzing and visualizing) is the tweet archive of Twitter user [@dog_rates]("https://twitter.com/dog_rates"), also known as [WeRateDogs]("https://twitter.com/dog_rates"). WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. These ratings almost always have a denominator of 10. The numerators, though? Almost always greater than 10. 11/10, 12/10, 13/10, etc. Why? Because "they're good dogs Brent." 
WeRateDogs has over 4 million followers and has received international media coverage.WeRateDogs [downloaded their Twitter archive]("https://help.twitter.com/en/managing-your-account/how-to-download-your-twitter-archive") and sent it to Udacity via email exclusively for you to use in this project. This archive contains basic tweet data (tweet ID, timestamp, text, etc.) for all 5000+ of their tweets as they stood on August 1, 2017 GatherI will be gathering data from these three resources: 1. The [WeRateDogs]("https://twitter.com/dog_rates") Twitter archive. The *twitter_archive_enhanced.csv* file was given. 2. The tweet image predictions, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network. This file was provided. 3. Twitter API and Python's Tweepy library to gather each tweet's retweet count and favorite or like count at minimum, and any additional data I find interesting. I will be generating this using my Twitter API key, secrets, and tokens. 
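Step 3 above ends with each tweet's JSON on its own line of a `tweet_json.txt` file; reading such a line-delimited file back is one `json.loads` per line. A self-contained sketch using an in-memory stand-in for the file (field names mirror the Twitter payload; the values are invented):

```python
import io
import json

import pandas as pd

# Stand-in for tweet_json.txt: one JSON object per line, invented values
fake_file = io.StringIO(
    '{"id": 1, "retweet_count": 5, "favorite_count": 9}\n'
    '{"id": 2, "retweet_count": 3, "favorite_count": 4}\n'
)

# Parse each line independently, then build the DataFrame in one shot
records = [json.loads(line) for line in fake_file]
df = pd.DataFrame(records).rename(columns={"id": "tweet_id"})
print(df.shape)
```

Parsing line by line means one malformed tweet can be skipped without losing the rest of the file, which is why the dump step writes one object per line rather than a single JSON array.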
###Code # import necessary libaries import pandas as pd import numpy as np # loading the WeRateDogs twitter archive data archive = pd.read_csv('twitter-archive-enhanced.csv') archive.head() # downloading the image prediction data programmatically import requests predicted_breeds_url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(predicted_breeds_url) with open('image_predictions.tsv', 'wb') as f: f.write(response.content) # load image prediction data image_predictions = pd.read_csv("image_predictions.tsv", sep='\t') image_predictions.head() import tweepy consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' # this secures my authentification codes above auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) #this connects me to Twitter API api = tweepy.API(auth) api.me() api = tweepy.API(auth, parser=tweepy.parsers.JSONParser()) import json import time start = time.time() tweet_ids = archive.tweet_id.values tweets_data = [] tweet_success = [] tweet_failure = [] for tweet_id in tweet_ids: try: data = api.get_status(tweet_id, tweet_mode='extended', wait_on_rate_limit = True, wait_on_rate_limit_notify = True) tweets_data.append(data) tweet_success.append(tweet_id) except: tweet_failure.append(tweet_id) print(tweet_id) end = time.time() print(end - start) # storing data to tweet_json.txt with open('tweet_json.txt', mode = 'w') as file: json.dump(tweets_data, file) # reading the API data stored in json file df3 = pd.read_json('tweet_json.txt') df3['tweet_id'] = tweet_success df3 = df3[['tweet_id', 'retweet_count', 'favorite_count']] df3.head() ###Output _____no_output_____ ###Markdown Assess ###Code archive image_predictions df3 archive.info() image_predictions.info() df3.info() archive.describe() image_predictions.describe() df3.describe() all_columns = pd.Series(list(archive) + 
list(image_predictions) + list(df3)) all_columns[all_columns.duplicated()] list(archive) list(image_predictions) archive[archive['retweeted_status_user_id'].isnull()] archive.name.value_counts() archive.rating_numerator.value_counts() archive.rating_denominator.value_counts() archive.sample(50) image_predictions.p1.value_counts() image_predictions.p2.value_counts() image_predictions.p3.value_counts() archive.sample(5) image_predictions.sample(5) from IPython.display import Image Image(url = 'https://pbs.twimg.com/media/CU8v-rdXIAId12Z.jpg') sum(archive.rating_numerator.isnull()) sum(archive.rating_denominator.isnull()) sum(archive.timestamp.isnull()) ###Output _____no_output_____ ###Markdown Quality `archive` table- Missing data in (in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp, expanded_urls) columns- Timestamp has +0000 at the end. Not in the right format- Erroneous datatype (timestamp and retweeted status timestamp should be datetime instead of string)- Tweet id is integer instead of string- Lowercase dog names such as *a, an, the, quite, my, such, etc.* are unusual; 
all such non-dog names appear in lowercase - Text shows there are retweets and replies in the data- Extraction of the rating numerator and denominator is incorrect `image_predictions` table- Tweet id is integer instead of string- Names in the algorithm's predictions p1, p2, and p3 are sentence case sometimes, lowercase other times - Compound names in the p1, p2 and p3 columns have underscores sometimes, hyphens other times `df3` table- Tweet id is integer instead of string Tidiness- One variable in four columns in `archive` table (doggo, floofer, pupper and puppo)- df3 should be part of the `archive` table Clean Now it's time to fix the quality and tidiness issues spotted above ###Code archive_clean = archive.copy() image_predictions_clean = image_predictions.copy() df3_clean = df3.copy() ###Output _____no_output_____ ###Markdown Quality Erroneous Data Format**`archive`**: Timestamp has +0000 at the end, so pandas records it as object. Not in the right format Define Change timestamp to datetime format Code ###Code # remove +0000 from timestamp archive_clean['timestamp'] = archive_clean['timestamp'].str.slice(start=0, stop=-6) # convert timestamp to datetime archive_clean['timestamp'] = pd.to_datetime(archive_clean['timestamp'], format = "%Y-%m-%d %H:%M:%S") ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() archive_clean.head(2) ###Output _____no_output_____ ###Markdown Erroneous Datatype**`archive`: Tweet id is integer instead of string****`image_predictions`: Tweet id is integer instead of string****`df3`: Tweet id is integer instead of string** Define Change the tweet id datatype to string Code ###Code archive_clean['tweet_id'] = archive_clean['tweet_id'].astype(str) image_predictions_clean['tweet_id'] = image_predictions_clean['tweet_id'].astype(str) df3_clean['tweet_id'] = df3_clean['tweet_id'].astype(str) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() image_predictions_clean.info() df3_clean.info() archive_clean.head(1) ###Output 
_____no_output_____ ###Markdown Inconsistency In Name**`archive`**: Names such as a, an, the, quite, my, such, etc. are unusual; all such non-dog names are in lowercase Define Convert all non-dog names to np.nan. All non-dog names start with a lowercase letter Code ###Code mask = archive_clean.name.str.islower() column_name = 'name' archive_clean.loc[mask, column_name] = np.nan archive_clean.name.value_counts() archive_clean[archive_clean.name.isnull()] ###Output _____no_output_____ ###Markdown `archive` table: **There are retweets and replies in the dataset** Define Keep only the rows where the retweet- and reply-related columns are null, then verify. Code ###Code archive_clean = archive_clean[archive_clean.retweeted_status_id.isnull()] archive_clean = archive_clean[archive_clean.retweeted_status_user_id.isnull()] archive_clean = archive_clean[archive_clean.retweeted_status_timestamp.isnull()] archive_clean = archive_clean[archive_clean.in_reply_to_status_id.isnull()] archive_clean = archive_clean[archive_clean.in_reply_to_user_id.isnull()] ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.retweeted_status_id.notnull().sum() archive_clean.retweeted_status_user_id.notnull().sum() archive_clean.retweeted_status_timestamp.notnull().sum() archive_clean.in_reply_to_status_id.notnull().sum() archive_clean.in_reply_to_user_id.notnull().sum() ###Output _____no_output_____ ###Markdown `archive` table: **The extraction of the numerator and denominator is incorrect** Define The original extraction of the numerator and denominator did not work correctly: decimal ratings were not captured. The extraction will be executed again with a pattern that allows decimals, and the results stored back in the DataFrame. 
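Before re-running the extraction on the full table, the decimal-aware pattern can be sanity-checked on a couple of sample strings. This is an illustrative sketch only; the sample texts below are made up in the style of WeRateDogs tweets, not taken from the archive:

```python
import pandas as pd

# hypothetical sample texts (made up for illustration)
samples = pd.Series([
    "This is Bella. She hopes her smile made you smile. 13.5/10",
    "This is Logan. He solemnly swears he's up to lots of good. 9.75/10",
])

# the same pattern used in the cleaning cell: an optional decimal part before the slash
extracted = samples.str.extract(r'((?:\d+\.)?\d+)\/(\d+)', expand=True)
extracted.columns = ['rating_numerator', 'rating_denominator']

# str.extract returns strings, so the numerator still needs a float conversion
extracted['rating_numerator'] = extracted['rating_numerator'].astype(float)
print(extracted)
```

With the optional `(?:\d+\.)?` group, the decimal numerators 13.5 and 9.75 survive intact instead of being truncated at the decimal point.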
Code ###Code # re-extract the ratings, allowing a decimal numerator, and store them back rating = archive_clean.text.str.extract(r'((?:\d+\.)?\d+)\/(\d+)', expand = True) rating.columns = ['rating_numerator', 'rating_denominator'] archive_clean['rating_numerator'] = rating['rating_numerator'].astype(float) archive_clean['rating_denominator'] = rating['rating_denominator'].astype(int) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.rating_numerator.value_counts() archive_clean.info() archive_clean.head(3) ###Output _____no_output_____ ###Markdown `image_predictions`: **Names in the algorithm's predictions p1, p2, and p3 are sentence case sometimes, lowercase other times** Define Change the breed names to the same format for consistency Code ###Code # change the image predictions' breed names to the same format for consistency image_predictions_clean[["p1", "p2", "p3"]] = image_predictions_clean[["p1", "p2", "p3"]].apply(lambda x: x.str.lower(), axis=1) image_predictions_clean[["p1", "p2", "p3"]] ###Output _____no_output_____ ###Markdown Test ###Code image_predictions_clean.p1.describe() image_predictions_clean.p2.describe() image_predictions_clean.p3.describe() ###Output _____no_output_____ ###Markdown `image_predictions`: **Compound names in the p1, p2 and p3 columns have underscores sometimes, hyphens other times** Define Replace hyphens with underscores in the prediction columns so compound names use a single, consistent separator Code ###Code image_predictions_clean[["p1", "p2", "p3"]] = image_predictions_clean[["p1", "p2", "p3"]].apply(lambda x: x.str.replace("-", "_"), axis=1) image_predictions_clean[["p1", "p2", "p3"]] ###Output _____no_output_____ ###Markdown Test ###Code image_predictions_clean[["p1", "p2", "p3"]].describe() ###Output _____no_output_____ ###Markdown Missing Data**`archive`: Missing data in (in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp, 
expanded_urls) columns** Define Drop the columns with missing data from the `archive` table, since they are not going to be used in my analysis. Code ###Code # remove the columns in archive with missing data archive_clean = archive_clean.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'expanded_urls'], axis = 1) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 11 columns): tweet_id 2097 non-null object timestamp 2097 non-null datetime64[ns] source 2097 non-null object text 2097 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 1993 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dtypes: datetime64[ns](1), int64(2), object(8) memory usage: 196.6+ KB ###Markdown Tidiness **Several columns in the archive table contain the same variable** Define Replace the *None* values in the dog stage columns, combine all stages into one column, separate the joined multiple stages, and convert the empty values to NaN. Finally, drop the individual stage columns from the archive table. 
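The combine-then-separate idea described above can be previewed on a tiny toy frame before touching the real data. This is an illustrative sketch with made-up rows and only two of the four stage columns, not the project data:

```python
import numpy as np
import pandas as pd

# toy stand-in for the stage columns (made-up rows, not the real archive)
toy = pd.DataFrame({
    'doggo':  ['doggo', 'None', 'doggo', 'None'],
    'pupper': ['None', 'pupper', 'pupper', 'None'],
})

# step 1: replace the 'None' placeholders with empty strings
for col in ['doggo', 'pupper']:
    toy[col] = toy[col].replace('None', '')

# step 2: concatenate the stage columns into a single column
toy['dog_stage'] = toy.doggo + toy.pupper

# step 3: separate joined multi-stage values and convert empties to NaN
toy.loc[toy.dog_stage == 'doggopupper', 'dog_stage'] = 'doggo, pupper'
toy.loc[toy.dog_stage == '', 'dog_stage'] = np.nan
print(toy.dog_stage.tolist())
```

The row with both stages becomes the readable "doggo, pupper", and the row with neither becomes NaN instead of an empty string.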
Code ###Code # handle None variables in the dog stage columns archive_clean.doggo.replace('None', '', inplace=True) archive_clean.floofer.replace('None', '', inplace=True) archive_clean.pupper.replace('None', '', inplace=True) archive_clean.puppo.replace('None', '', inplace=True) # merge all stages into one column archive_clean['dog_stage'] = archive_clean.doggo + archive_clean.floofer + archive_clean.pupper + archive_clean.puppo # drop individual stage columns archive_clean = archive_clean.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis = 1) # handle tweets with multiple stages archive_clean.loc[archive_clean.dog_stage == 'doggopupper', 'dog_stage'] = 'doggo, pupper' archive_clean.loc[archive_clean.dog_stage == 'doggopuppo', 'dog_stage'] = 'doggo, puppo' archive_clean.loc[archive_clean.dog_stage == 'doggofloofer', 'dog_stage'] = 'doggo, floofer' # handle missing values archive_clean.loc[archive_clean.dog_stage == '', 'dog_stage'] = np.nan ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() archive_clean.dog_stage.value_counts() ###Output _____no_output_____ ###Markdown **Dog breed predictions are spread across three columns and should be a single variable** Define Write a loop that selects the first prediction that is actually a dog breed and stores it in a new predictions column Code ###Code # writing a loop to select the dog breed dog_prediction = [] for i in range (len(image_predictions_clean)): if image_predictions_clean['p1_dog'][i] == True: dog_prediction.append(image_predictions_clean.p1[i]) elif image_predictions_clean['p2_dog'][i] == True: dog_prediction.append(image_predictions_clean.p2[i]) elif image_predictions_clean['p3_dog'][i] == True: dog_prediction.append(image_predictions_clean.p3[i]) else: dog_prediction.append("no prediction") # create a new column from dog prediction list image_predictions_clean['predictions'] = dog_prediction ###Output _____no_output_____ ###Markdown Test ###Code # check prediction image_predictions_clean[['predictions', 'p1_dog', 'p1']] image_predictions_clean.info() ###Output <class 
'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 13 columns): tweet_id 2075 non-null object jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool predictions 2075 non-null object dtypes: bool(3), float64(3), int64(1), object(6) memory usage: 168.3+ KB ###Markdown **All the tables should be merged into one** Define Merge df3_clean with archive_clean Code ###Code # check if the DataFrames have duplicates first archive_clean['tweet_id'].duplicated().sum() image_predictions_clean['tweet_id'].duplicated().sum() df3_clean['tweet_id'].duplicated().sum() # good! no duplicates. now we can combine the DataFrames we_rate_dogs = pd.merge(archive_clean, df3_clean, on = 'tweet_id', how = 'left') ###Output _____no_output_____ ###Markdown Test ###Code we_rate_dogs.head() we_rate_dogs.info() # checking for duplicates again we_rate_dogs['tweet_id'].duplicated().sum() ###Output _____no_output_____ ###Markdown **All the tables should be merged into one** Define Merge image_predictions_clean with we_rate_dogs Code ###Code we_rate_dogs2 = pd.merge(we_rate_dogs, image_predictions_clean, on = 'tweet_id', how = 'left') ###Output _____no_output_____ ###Markdown Test ###Code we_rate_dogs2.head(2) we_rate_dogs2.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2096 Data columns (total 22 columns): tweet_id 2097 non-null object timestamp 2097 non-null datetime64[ns] source 2097 non-null object text 2097 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 1993 non-null object dog_stage 336 non-null object retweet_count 2088 non-null float64 favorite_count 2088 non-null float64 jpg_url 1971 non-null object img_num 1971 non-null float64 p1 1971 
non-null object p1_conf 1971 non-null float64 p1_dog 1971 non-null object p2 1971 non-null object p2_conf 1971 non-null float64 p2_dog 1971 non-null object p3 1971 non-null object p3_conf 1971 non-null float64 p3_dog 1971 non-null object predictions 1971 non-null object dtypes: datetime64[ns](1), float64(6), int64(2), object(13) memory usage: 376.8+ KB ###Markdown Storing, Analyzing and Visualizing Data Store the clean DataFrame in a csv file named `twitter_archive_master.csv` Analyze and visualize my wrangled data. At least **three (3) insights and one (1) visualization** must be produced Drawing conclusions and creating visuals to communicate results. The following questions are addressed **Q1:** What are the features that influence retweet count and favorite count? **Q2:** Is rating influenced by dog stage? What are the dog stages with the highest rating? **Q3:** What is the most popular dog name? ###Code # store the clean DataFrame in a csv file named twitter_archive_master.csv we_rate_dogs2.to_csv('twitter_archive_master.csv', encoding='utf-8', index=False) # import necessary libraries import datetime import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns # load the clean data df = pd.read_csv('twitter_archive_master.csv') ###Output _____no_output_____ ###Markdown Exploring with Visuals ###Code # create rating column df['rating'] = df.rating_numerator/df.rating_denominator # create new column for dog breeds substituting single breeds for "other" # value counts for all the dog breeds vc = df.predictions.value_counts() # get the breeds that have a value count of less than 10 singles = vc[vc<10].index.tolist() # new column for dog breeds df['breed_group'] = df['predictions'] # replace strings in single list with string 'other' df['breed_group'].replace(singles, 'other', inplace = True) # get the average rating only for each breed rating2 = df.groupby('breed_group').rating.mean() rating2 # the index for this series is the breed # the value is the average 
rating # plotting these directly in pandas: # plot average rating of breed group and dog stage fig, axes = plt.subplots(1,2) rating = df.groupby('breed_group').rating.mean() rating.plot.barh(ax = axes[1], align = 'center', color = ['lavender', 'lightsteelblue', 'cornflowerblue', 'royalblue', 'midnightblue'], figsize = [5,11]) rating2 = df.groupby('dog_stage').rating.mean() rating2.plot.barh(ax = axes[0], align = 'center', color = ['lavender', 'lightsteelblue', 'cornflowerblue', 'royalblue', 'midnightblue']) plt.subplots_adjust(wspace = 0.4) fig.set_figwidth(25) plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown The **no prediction** group, where no dog breed exists, has the highest average rating. The average rating for dog breeds is below 1.4 ###Code # plot average favorite count for dog stage and breed group fig, axes = plt.subplots(1,2, figsize = [5,11]) rating = df.groupby('dog_stage').favorite_count.mean() rating.plot.barh(ax = axes[0],align = 'center', color = ['lavender', 'lightsteelblue', 'cornflowerblue', 'royalblue', 'midnightblue']) rating2 = df.groupby('breed_group').favorite_count.mean() rating2.plot.barh(ax = axes[1],align = 'center', color = ['lavender', 'lightsteelblue', 'cornflowerblue', 'royalblue', 'midnightblue']) plt.subplots_adjust(wspace =0.4) fig.set_figwidth(25) plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown There is a big difference in average favorite count for the combined dog stage **doggo, puppo** compared to the other stages. 
This factor warrants further investigation ###Code # plot average retweet count for dog stage and breed group fig, axes = plt.subplots(1,2, figsize = [5,11]) rating = df.groupby('dog_stage').retweet_count.mean() rating.plot.barh(ax = axes[0],align = 'center', color = ['lavender', 'lightsteelblue', 'cornflowerblue', 'royalblue', 'midnightblue']) rating2 = df.groupby('breed_group').retweet_count.mean() rating2.plot.barh(ax = axes[1],align = 'center', color = ['lavender', 'lightsteelblue', 'cornflowerblue', 'royalblue', 'midnightblue']) plt.subplots_adjust(wspace =0.4) fig.set_figwidth(25) plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown There is a big difference in average retweet count for the combined dog stage **doggo, puppo** compared to the other stages. This factor will also be examined further in our analysis **Q1:** What are the features that influence retweet count and favorite count? ###Code p_breed_retweet = df[~df['breed_group'].isin(['no prediction'])].retweet_count.mean() p_breed_favorite = df[~df['breed_group'].isin(['no prediction'])].favorite_count.mean() print('average retweet count for all dog breeds except no prediction = %f' % p_breed_retweet) print('average favorite count for all dog breeds except no prediction = %f' % p_breed_favorite) print('average rating for all dog breeds under no prediction = %f' %df[df['predictions'].isin(['no prediction'])].rating.mean()) print('average rating for all dog breeds under other = %f' %df[df['predictions'].isin(singles)].rating.mean()) # correlation between retweet count and favorite count plt.scatter(df.retweet_count, df.favorite_count) plt.title('Correlation Between Retweet and Favourite') plt.xlabel('Retweet Count') plt.ylabel('Favorite Count') plt.show(); ###Output _____no_output_____ ###Markdown 
There is a positive correlation between retweet count and favorite count ###Code df = df.astype({'dog_stage': 'category'}) # install a pip package in the current Jupyter kernel to update the seaborn library import sys !{sys.executable} -m pip install seaborn -U sns.scatterplot(data = df, x='retweet_count', y='favorite_count', hue = df.dog_stage.tolist()); ###Output _____no_output_____ ###Markdown Here, we can see that there is a positive correlation between retweet count and favorite count across the dog stages, with **doggo** having the strongest. Therefore, dog stage influences the retweet count and favorite count **Q2:** Is rating influenced by dog stage? What are the dog stages with the highest rating? ###Code sns.scatterplot(data = df, x = 'rating', y = 'dog_stage'); ###Output _____no_output_____ ###Markdown Now let's check the relationship between rating and each dog stage excluding the outliers ###Code # filter out the outliers in rating rating_within = df[df['rating']<2.5] sns.scatterplot( data = rating_within, x = 'rating' , y = 'dog_stage'); ###Output _____no_output_____ ###Markdown This visual shows that **doggo**, **pupper** and **puppo** have the highest ratings **Q3:** What are the most popular dog names and dog breeds? ###Code # df.name.value_counts() df.name.value_counts()[1:10].plot('barh', figsize=(15,8), color = 'cornflowerblue', title='Most Popular Dog Name').set_xlabel("Dog Count"); ###Output _____no_output_____ ###Markdown Lucy and Charlie are the most popular dog names, followed by Cooper and Oliver. 
###Code df.breed_group.value_counts()[2:12].plot('barh', figsize=(15,8), color = 'cornflowerblue', title='Most Popular Breed').set_xlabel("Dog Count"); ###Output _____no_output_____ ###Markdown Data Wrangling Project - Case WeRateDogs Table of Contents Introduction Gathering Data Assessing Data Cleaning Data Storing, Analyzing, and Visualizing Data Conclusion Introduction In this project, we will wrangle WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations, through the Data Wrangling steps (Gather, Assess, and Clean). Gathering Data In this step, we will gather data from different resources.- **Enhanced Twitter Archive**: file downloaded from this link: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/59a4e958_twitter-archive-enhanced/twitter-archive-enhanced.csv. This dataset has the following information: rating, dog name, and dog stage.- **Additional Data via the Twitter API**: The tweets stored in the previous dataset are missing some information, such as retweet count and favorite count. To gather this information, we will use the Twitter API and search by tweet_id (first column of the "Enhanced Twitter Archive" dataset).- **Image Predictions File**: file downloaded from this link: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv. This dataset has the following information: image predictions, tweet id, image URL, and the image number that corresponded to the most confident prediction (numbered 1 to 4 since tweets can have up to four images). 1. 
Enhanced Twitter Archive ###Code # import all necessary packages import numpy as np import pandas as pd import os import requests import tweepy import json from tweepy import OAuthHandler from timeit import default_timer as timer import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline # load csv df_tweets = pd.read_csv('twitter-archive-enhanced.csv') df_tweets.head() ###Output _____no_output_____ ###Markdown 2. Additional Data via the Twitter API Reference: https://stackoverflow.com/questions/28384588/twitter-api-get-tweets-with-specific-id ###Code # Get data from the Twitter API import tweepy consumer_key = 'CONSUMER_KEY' consumer_secret = 'CONSUMER_SECRET' access_token = 'ACCESS_TOKEN' access_secret = 'ACCESS_SECRET' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) # Get json data tweets_metrics = [] not_tweets_metrics = [] with open('tweet_json.txt', 'w') as output: for tweet_id in df_tweets['tweet_id']: try: tweet = api.get_status(tweet_id, tweet_mode='extended') json.dump(tweet._json, output) output.write('\n') except tweepy.TweepError: not_tweets_metrics.append(tweet_id) print(len(not_tweets_metrics)) # Transform json file to dataframe tweetlist = [] with open('tweet_json.txt') as json_file: for line in json_file: tweets_dict = {} tweets_json = json.loads(line) try: tweets_dict['tweet_id'] = tweets_json['id'] except: tweets_dict['tweet_id'] = 'na' tweets_dict['retweet_count'] = tweets_json['retweet_count'] tweets_dict['favorite_count'] = tweets_json['favorite_count'] tweetlist.append(tweets_dict) tweets_fav_rt_df = pd.DataFrame(tweetlist) tweets_fav_rt_df.head() # export twitter data to csv tweets_fav_rt_df.to_csv('tweets_fav_rt_df.csv', index=False) ###Output _____no_output_____ ###Markdown 3. 
Image Predictions File ###Code # Download programmatically using the Requests library url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' req = requests.get(url) with open('image_predictions.tsv', 'wb') as f: f.write(req.content) df_image_predictions = pd.read_csv('image_predictions.tsv', sep = '\t') df_image_predictions.head() ###Output _____no_output_____ ###Markdown Assessing Data For assessing the data, we will analyze it in two ways: first visually, and then programmatically. First, we will look at all the dataframes. ###Code df_tweets.head() tweets_fav_rt_df = pd.read_csv('tweets_fav_rt_df.csv') tweets_fav_rt_df.head() df_image_predictions.head() ###Output _____no_output_____ ###Markdown Visual Assessment - In the Enhanced Twitter Archive dataframe, the "timestamp" column could be written in a better format.- In the Enhanced Twitter Archive dataframe, the "source" column could be more readable.- Column headers should be more descriptive. Programmatic Assessment Analyzing the datatypes ###Code df_tweets.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown In the columns "doggo", "floofer", "pupper", and "puppo", the "None" is treated as a non-null value. 
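The observation above can be verified programmatically: `info()` counts these cells as non-null because 'None' is a literal string, not a missing value. A hedged sketch on a toy stand-in frame (made-up rows, not the real df_tweets):

```python
import pandas as pd

# toy stand-in for the stage columns (made-up rows for illustration)
toy = pd.DataFrame({
    'doggo':  ['None', 'doggo', 'None'],
    'pupper': ['pupper', 'None', 'None'],
})

# every cell is non-null from pandas' point of view...
assert toy.notnull().all().all()

# ...yet counting the literal 'None' strings reveals the hidden missing values
placeholder_counts = (toy == 'None').sum()
print(placeholder_counts)
```

Running the same comparison on the real stage columns would show how much of each column is actually placeholder text rather than data.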
###Code tweets_fav_rt_df.info() df_image_predictions.info() df_tweets['expanded_urls'].duplicated().sum() df_tweets['expanded_urls'].isnull().sum() # duplicates + missing values df_tweets['expanded_urls'].duplicated().sum() + df_tweets['expanded_urls'].isnull().sum() ###Output _____no_output_____ ###Markdown - Column "expanded_urls" in the Enhanced Twitter Archive dataframe has missing values and duplicates. Enhanced Twitter Archive- Column tweet_id as int64- Column in_reply_to_status_id has missing values- Column in_reply_to_user_id has missing values- Column timestamp as object- Column retweeted_status_id as float64- Column retweeted_status_user_id as float64- Column retweeted_status_timestamp as object- Column expanded_urls has missing values and duplicates ###Code df_tweets.columns ###Output _____no_output_____ ###Markdown Tidiness problems- The columns "doggo", "floofer", "pupper", and "puppo" can be combined into one column.- The "tweets_fav_rt_df" is related to "df_tweets". To sum up, the problems are divided into quality and tidiness problems. Quality- Column tweet_id as int64- Column timestamp as object- In the Enhanced Twitter Archive dataframe, the "timestamp" column could be written in a better format- Column in_reply_to_status_id has missing values- Column in_reply_to_user_id has missing values- Column retweeted_status_id as float64 and has missing values- Column retweeted_status_user_id as float64 and has missing values- Column retweeted_status_timestamp as object and has missing values- Column expanded_urls has missing values and duplicates- In the Enhanced Twitter Archive dataframe, the "source" column could be more readable.- Column headers should be more descriptive.- In the columns "doggo", "floofer", "pupper", and "puppo", the "None" is treated as a non-null value.- We only want original dog ratings. 
Tidy- The columns "doggo", "floofer", "pupper", and "puppo" can be combined into one column- The "tweets_fav_rt_df" is related to "df_tweets" Data Cleaning ###Code df_tweets_archive_clean = df_tweets.copy() tweets_fav_rt_df_clean = tweets_fav_rt_df.copy() df_image_predictions_clean = df_image_predictions.copy() ###Output _____no_output_____ ###Markdown Define 1. Remove retweets, as we want original dog ratings Code ###Code df_tweets_archive_clean = df_tweets_archive_clean[df_tweets_archive_clean.retweeted_status_id.isnull()] df_tweets_archive_clean = df_tweets_archive_clean[df_tweets_archive_clean.retweeted_status_user_id.isnull()] df_tweets_archive_clean = df_tweets_archive_clean[df_tweets_archive_clean.retweeted_status_timestamp.isnull()] ###Output _____no_output_____ ###Markdown Test ###Code df_tweets_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2175 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2175 non-null object source 2175 non-null object text 2175 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 305.9+ KB ###Markdown Define 1. 
Remove html tags from source to make it more readable Code ###Code # https://stackoverflow.com/questions/9662346/python-code-to-remove-html-tags-from-a-string # Function to remove html tags import re def cleanhtml(raw_html): cleanr = re.compile('<.*?>') cleantext = re.sub(cleanr, '', raw_html) return cleantext # applying it to the dataframe column (vectorized, instead of iterrows) df_tweets_archive_clean['source'] = df_tweets_archive_clean['source'].apply(cleanhtml) ###Output _____no_output_____ ###Markdown Test ###Code df_tweets_archive_clean['source'].value_counts() ###Output _____no_output_____ ###Markdown Define 1. Convert "None" in the columns "doggo", "floofer", "pupper", and "puppo" to null. Code ###Code df_tweets_archive_clean['doggo'] = df_tweets_archive_clean['doggo'].replace('None', np.nan) df_tweets_archive_clean['floofer'] = df_tweets_archive_clean['floofer'].replace('None', np.nan) df_tweets_archive_clean['pupper'] = df_tweets_archive_clean['pupper'].replace('None', np.nan) df_tweets_archive_clean['puppo'] = df_tweets_archive_clean['puppo'].replace('None', np.nan) ###Output _____no_output_____ ###Markdown Test ###Code df_tweets_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2175 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2175 non-null object source 2175 non-null object text 2175 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 87 non-null object floofer 10 non-null object pupper 234 non-null object puppo 25 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 385.9+ KB ###Markdown Define 1. 
Change column names to be more readable Code ###Code df_image_predictions_clean.head() name_list = list(df_image_predictions_clean.columns) name_list df_image_predictions_clean.columns = ['tweet_id', 'jpg_url', 'Number of Images', 'First Prediction', 'Confidence of First Prediction', 'First prediction is a breed of dog', 'Second Prediction', 'Confidence of Second Prediction', 'Second prediction is a breed of dog', 'Third Prediction', 'Confidence of Third Prediction', 'Third prediction is a breed of dog'] ###Output _____no_output_____ ###Markdown Test ###Code df_image_predictions_clean.columns ###Output _____no_output_____ ###Markdown Define 1. Change the tweet_id datatype to object Code ###Code df_tweets_archive_clean.tweet_id = df_tweets_archive_clean.tweet_id.astype(object) ###Output _____no_output_____ ###Markdown Test ###Code df_tweets_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2175 non-null object in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2175 non-null object source 2175 non-null object text 2175 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 87 non-null object floofer 10 non-null object pupper 234 non-null object puppo 25 non-null object dtypes: float64(4), int64(2), object(11) memory usage: 385.9+ KB ###Markdown Define 1. Change the column timestamp to datetime 2. 
Change the format to YYYY-mm-dd Code ###Code df_tweets_archive_clean.timestamp = pd.to_datetime(df_tweets_archive_clean.timestamp) df_tweets_archive_clean.timestamp = df_tweets_archive_clean.timestamp.dt.strftime('%Y-%m-%d') df_tweets_archive_clean.timestamp = pd.to_datetime(df_tweets_archive_clean.timestamp) ###Output _____no_output_____ ###Markdown Test ###Code df_tweets_archive_clean.info() df_tweets_archive_clean.head() ###Output _____no_output_____ ###Markdown Define 1. The columns "in_reply_to_status_id", "in_reply_to_user_id", "retweeted_status_id", "retweeted_status_user_id", and "retweeted_status_timestamp" have missing values, so we have to drop these columns. Code ###Code df_tweets_archive_clean = df_tweets_archive_clean.drop(columns = ['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp']) ###Output _____no_output_____ ###Markdown Test ###Code df_tweets_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 12 columns): tweet_id 2175 non-null object timestamp 2175 non-null datetime64[ns] source 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 87 non-null object floofer 10 non-null object pupper 234 non-null object puppo 25 non-null object dtypes: datetime64[ns](1), int64(2), object(9) memory usage: 300.9+ KB ###Markdown Define 1. Drop missing values of column "expanded_urls". 2. 
Drop duplicate values of column "expanded_urls" Code ###Code df_tweets_archive_clean = df_tweets_archive_clean.dropna(subset=['expanded_urls']) df_tweets_archive_clean = df_tweets_archive_clean.drop_duplicates(subset=['expanded_urls']) ###Output _____no_output_____ ###Markdown Test ###Code df_tweets_archive_clean['expanded_urls'].isnull().sum() df_tweets_archive_clean['expanded_urls'].duplicated().sum() ###Output _____no_output_____ ###Markdown Define1. Merge the columns "doggo", "floofer", "pupper", and "puppo" into one column2. Convert data type to categorical Code ###Code dog_list = ['doggo', 'floofer', 'pupper', 'puppo'] # fill the list in each cell of column 'dog_type' with values from the stage columns def d_type(archive_clean): for i in range(archive_clean.shape[0]): for x in dog_list: if x in archive_clean.loc[i,['doggo', 'floofer', 'pupper', 'puppo']].tolist(): archive_clean.loc[i,'dog_type'].append(x) else: continue # convert list into string archive_clean.loc[i,'dog_type'] = ", ".join(archive_clean.loc[i,'dog_type']) # replace empty strings with NaN archive_clean.dog_type = archive_clean.dog_type.replace('',np.nan) df_tweets_archive_clean['dog_type'] = np.empty((df_tweets_archive_clean.shape[0], 0)).tolist() df_tweets_archive_clean.shape df_tweets_archive_clean.reset_index(drop = True, inplace = True) d_type(df_tweets_archive_clean) df_tweets_archive_clean.head() df_tweets_archive_clean['dog_type']=df_tweets_archive_clean['dog_type'].astype('category') #Drop columns 'doggo', 'floofer', 'pupper', and 'puppo' df_tweets_archive_clean=df_tweets_archive_clean.drop(columns=['doggo', 'floofer', 'pupper', 'puppo']) ###Output _____no_output_____ ###Markdown Test ###Code df_tweets_archive_clean.info() df_tweets_archive_clean.dog_type.value_counts() ###Output _____no_output_____ ###Markdown Define 1.
Merge the columns "favorite_count" and "retweet_count" from tweets_fav_rt_df_clean into df_tweets_archive_clean Code ###Code tweets_fav_rt_df_clean.tweet_id = tweets_fav_rt_df_clean.tweet_id.astype('object') tweets_fav_rt_df_clean.info() df_tweets_archive_clean = df_tweets_archive_clean.merge(tweets_fav_rt_df_clean, how = 'left', on = 'tweet_id') df_tweets_archive_clean.info() df_tweets_archive_clean['favorite_count'].isnull().sum() df_tweets_archive_clean['retweet_count'].isnull().sum() #Remove missing values df_tweets_archive_clean = df_tweets_archive_clean.dropna(subset=['favorite_count', 'retweet_count']) ###Output _____no_output_____ ###Markdown Test ###Code df_tweets_archive_clean.info() df_tweets_archive_clean['favorite_count'].isnull().sum() df_tweets_archive_clean['retweet_count'].isnull().sum() ###Output _____no_output_____ ###Markdown Storing, Analyzing, and Visualizing Data StoringNow, we need to store the clean dataframe in one file named 'twitter_archive_master.csv'.
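A note on the merge-then-drop pattern used above: a left merge followed by dropping rows whose favorite_count or retweet_count is missing keeps exactly the rows present in both frames, which is what a single inner merge does. A minimal sketch with made-up toy frames (the real dataframes are df_tweets_archive_clean and tweets_fav_rt_df_clean):

```python
import pandas as pd

# Toy stand-ins for the archive and the favorites/retweets frames.
archive = pd.DataFrame({"tweet_id": [1, 2, 3],
                        "name": ["Phineas", "Tilly", "Archie"]})
counts = pd.DataFrame({"tweet_id": [1, 3],
                       "favorite_count": [100, 250],
                       "retweet_count": [10, 25]})

# Left merge, then drop rows with no counts (the approach used above)...
left_then_drop = archive.merge(counts, how="left", on="tweet_id") \
                        .dropna(subset=["favorite_count", "retweet_count"])

# ...is equivalent to an inner merge in one step.
inner = archive.merge(counts, how="inner", on="tweet_id")

print(inner["tweet_id"].tolist())  # tweet 2 has no counts, so it is dropped
```

Both approaches yield the same rows; the inner merge just avoids the intermediate NaN-filled columns.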
However, we first need to merge the two remaining clean dataframes: df_tweets_archive_clean and df_image_predictions_clean ###Code df_image_predictions_clean.info() #convert tweet_id to object df_image_predictions_clean.tweet_id = df_image_predictions_clean.tweet_id.astype('object') df_image_predictions_clean.info() #merge df_tweets_archive_clean = df_tweets_archive_clean.merge(df_image_predictions_clean, how = 'left', on = 'tweet_id') df_tweets_archive_clean.info() #drop rows with missing values in any of the image-prediction columns df_tweets_archive_clean = df_tweets_archive_clean.dropna(subset=['jpg_url', 'Number of Images', 'First Prediction', 'Confidence of First Prediction', 'First prediction is a breed of dog', 'Second Prediction', 'Confidence of Second Prediction', 'Second prediction is a breed of dog', 'Third Prediction', 'Confidence of Third Prediction', 'Third prediction is a breed of dog']) df_tweets_archive_clean.info() #store dataframe into csv file df_tweets_archive_clean.to_csv('twitter_archive_master.csv', index=False) ###Output _____no_output_____ ###Markdown Analyzing ###Code #Get data from csv df_analysis = pd.read_csv('twitter_archive_master.csv') df_analysis.head() df_analysis.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1987 entries, 0 to 1986 Data columns (total 22 columns): tweet_id 1987 non-null int64 timestamp
1987 non-null object source 1987 non-null object text 1987 non-null object expanded_urls 1987 non-null object rating_numerator 1987 non-null int64 rating_denominator 1987 non-null int64 name 1987 non-null object dog_type 305 non-null object favorite_count 1987 non-null float64 retweet_count 1987 non-null float64 jpg_url 1987 non-null object Number of Images 1987 non-null float64 First Prediction 1987 non-null object Confidence of First Prediction 1987 non-null float64 First prediction is a breed of dog 1987 non-null bool Second Prediction 1987 non-null object Confidence of Second Prediction 1987 non-null float64 Second prediction is a breed of dog 1987 non-null bool Third Prediction 1987 non-null object Confidence of Third Prediction 1987 non-null float64 Third prediction is a breed of dog 1987 non-null bool dtypes: bool(3), float64(6), int64(3), object(10) memory usage: 300.8+ KB ###Markdown QuestionsThese are the three questions we want to answer:1. Which type of dog has the most favorites and retweets?2. Which dog breeds appear most often in the predictions?3. What is the average confidence of the algorithm? Which type of dog has the most favorites and retweets? ###Code #Grouping data with dog type df_analysis_group = df_analysis.groupby(['dog_type']).sum() #Sort by favorite count df3 = df_analysis_group.sort_values(by=['favorite_count'],ascending=False) df3 = df3.reset_index() df3 #Sort by Retweet Count df_analysis_group.sort_values(by=['retweet_count'],ascending=False) ###Output _____no_output_____ ###Markdown The dog type "pupper" has the highest favorite and retweet counts. Which dog breeds appear most often in the predictions?
###Code df_analysis_breed = df_analysis.groupby(['First Prediction']).count() df_analysis_breed.sort_values(by=['retweet_count'],ascending=False) ###Output _____no_output_____ ###Markdown With this result, we can conclude that the golden retriever, labrador retriever, and Pembroke are the most frequently predicted breeds. One hypothesis is that these breeds are more popular among this group of people. What's the average confidence of the algorithm? ###Code df_analysis.shape df_analysis_confidence = df_analysis['Confidence of First Prediction'].mean() df_analysis_confidence df_analysis_confidence_median = df_analysis['Confidence of First Prediction'].median() df_analysis_confidence_median ###Output _____no_output_____ ###Markdown Taking the mean of the first prediction's confidence, we see that the average confidence of the algorithm is about 59%. Visualizing Data ###Code df3.plot(x="dog_type", y="favorite_count", kind="bar", title = "Which type of dog has the most favorites and retweets?", figsize = (8,5)); ###Output _____no_output_____ ###Markdown Wrangle and Analyze DataReal-world data rarely comes clean. Using Python and its libraries, we will gather data from a variety of sources and in a variety of formats, assess its quality and tidiness, then clean it for this project. The dataset that you will be wrangling (and analyzing and visualizing) is the tweet archive of Twitter user [@dog_rates](https://twitter.com/dog_rates), also known as [WeRateDogs](https://en.wikipedia.org/wiki/WeRateDogs). WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. These ratings almost always have a denominator of 10. The numerators, though? Almost always greater than 10. 11/10, 12/10, 13/10, etc. The goal is to wrangle WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations.
Table of Contents- [Introduction](intro)- [Gathering Data](gather)- [Assessing Data](assess)- [Cleaning Data](clean)- [Storing Data](store)- [Analyzing & Visualizing Data](visual)- [Webliography](web) Introduction ###Code import pandas as pd import numpy as np import requests import tweepy import os import json import matplotlib import matplotlib.pyplot as plt import seaborn as sns import time from IPython.display import display import re %matplotlib inline matplotlib.style.use('ggplot') ###Output _____no_output_____ ###Markdown Gathering Data ###Code pd.set_option('display.max_columns', None) # Manually downloading the Twitter Archive df_twitter_archive = pd.read_csv('twitter_archive_enhanced.csv') df_twitter_archive.head() # Downloading data programmatically url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' data = requests.get(url) with open(url.split('/')[-1], mode = 'wb') as file: file.write(data.content) # Showing the data in the image predictions file df_image_predictions = pd.read_csv('image-predictions.tsv', sep = '\t') df_image_predictions.head() consumer_key = '' consumer_secret = '' access_token = '' access_secret = '' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) # Rate-limit handling is configured on the API object, not on individual get_status calls api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) ###Output _____no_output_____ ###Markdown Reference: [StackOverflow](https://stackoverflow.com/questions/28384588/twitter-api-get-tweets-with-specific-id) ###Code df_list = [] error_list = [] start = time.time() # Will add each available tweet json to df_list for tweet_id in df_twitter_archive['tweet_id']: try: tweet = api.get_status(tweet_id, tweet_mode='extended')._json favorites = tweet['favorite_count'] # tweet's favorites retweets = tweet['retweet_count'] user_followers = tweet['user']['followers_count'] user_favourites = tweet['user']['favourites_count'] # user's favorites
date_time = tweet['created_at'] # The date and time of the creation df_list.append({'tweet_id': int(tweet_id), 'favorites': int(favorites), 'retweets': int(retweets), 'user_followers': int(user_followers), 'user_favourites': int(user_favourites), 'date_time': pd.to_datetime(date_time)}) except Exception as e: print(str(tweet_id)+ " __ " + str(e)) error_list.append(tweet_id) end = time.time() print(end - start) ###Output 888202515573088257 __ [{'code': 144, 'message': 'No status found with that ID.'}] 873697596434513921 __ [{'code': 144, 'message': 'No status found with that ID.'}] 872668790621863937 __ [{'code': 144, 'message': 'No status found with that ID.'}] 872261713294495745 __ [{'code': 144, 'message': 'No status found with that ID.'}] 869988702071779329 __ [{'code': 144, 'message': 'No status found with that ID.'}] 866816280283807744 __ [{'code': 144, 'message': 'No status found with that ID.'}] 861769973181624320 __ [{'code': 144, 'message': 'No status found with that ID.'}] 856602993587888130 __ [{'code': 144, 'message': 'No status found with that ID.'}] 851953902622658560 __ [{'code': 144, 'message': 'No status found with that ID.'}] 845459076796616705 __ [{'code': 144, 'message': 'No status found with that ID.'}] 844704788403113984 __ [{'code': 144, 'message': 'No status found with that ID.'}] 842892208864923648 __ [{'code': 144, 'message': 'No status found with that ID.'}] 837366284874571778 __ [{'code': 144, 'message': 'No status found with that ID.'}] 837012587749474308 __ [{'code': 144, 'message': 'No status found with that ID.'}] 829374341691346946 __ [{'code': 144, 'message': 'No status found with that ID.'}] 827228250799742977 __ [{'code': 144, 'message': 'No status found with that ID.'}] 812747805718642688 __ [{'code': 144, 'message': 'No status found with that ID.'}] 802247111496568832 __ [{'code': 144, 'message': 'No status found with that ID.'}] 779123168116150273 __ [{'code': 144, 'message': 'No status found with that ID.'}] 
775096608509886464 __ [{'code': 144, 'message': 'No status found with that ID.'}] 771004394259247104 __ [{'code': 179, 'message': 'Sorry, you are not authorized to see this status.'}] 770743923962707968 __ [{'code': 144, 'message': 'No status found with that ID.'}] 759566828574212096 __ [{'code': 144, 'message': 'No status found with that ID.'}] 754011816964026368 __ [{'code': 144, 'message': 'No status found with that ID.'}] 680055455951884288 __ [{'code': 144, 'message': 'No status found with that ID.'}] 3310.2324521541595 ###Markdown _The total time was about 3310 seconds, roughly 55 minutes._ ###Code print("The list of tweets", len(df_list)) print("The list of tweets not found", len(error_list)) # Creating DataFrame for the tweets retrieved json_tweets = pd.DataFrame(df_list, columns = ['tweet_id', 'favorites', 'retweets', 'user_followers', 'user_favourites', 'date_time']) # saving JSON data json_tweets.to_csv('tweet_json.txt', encoding = 'utf-8', index = False) tweet_data = pd.read_csv('tweet_json.txt', encoding = 'utf-8') ###Output _____no_output_____ ###Markdown __In the gathering part, we imported data from an existing CSV file, downloaded a TSV file programmatically, and used Tweepy to query Twitter's API for additional data.
Finally, we loaded all the data into the workbook.__ Assessing Data __Visual Assessment__ ###Code df_twitter_archive df_image_predictions tweet_data ###Output _____no_output_____ ###Markdown __Programmatic Assessment__ ###Code df_twitter_archive.info() df_twitter_archive.describe() df_image_predictions.info() df_image_predictions.describe() tweet_data.info() tweet_data.describe() df_image_predictions['jpg_url'].value_counts() df_twitter_archive.source.value_counts() df_twitter_archive.rating_numerator.value_counts() df_twitter_archive.rating_denominator.value_counts() df_twitter_archive[df_twitter_archive['rating_numerator'] > 20] df_twitter_archive[df_twitter_archive.tweet_id.duplicated()] df_image_predictions[df_image_predictions.tweet_id.duplicated()] df_twitter_archive[df_twitter_archive['name'].apply(len) <= 2] # Original tweets df_twitter_archive[df_twitter_archive['retweeted_status_id'].isnull()] df_image_predictions['p1'].value_counts() df_image_predictions['p2'].value_counts() df_image_predictions['p3'].value_counts() ###Output _____no_output_____ ###Markdown Quality `df_twitter_archive` table- _Source_ column format is bad and cannot be read easily.- retweeted_status_timestamp, timestamp columns should be datetime instead of object (string).- We may want to change these columns' types (in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id) to string because we don't want any numeric operations on them.- The *ratings_numerator* and *ratings_denominator* columns have invalid values.- There are invalid names (a, an and less than 3 characters) in the *name* column.- There are retweeted tweets, and we do not want them.- Change datatypes for columns tweet_id, timestamp, source, favorites, retweets, numerator & denominator. `df_image_predictions` table- Missing values from images dataset (2075 rows instead of 2356).- Some images appear under 2 different tweet_ids; those are retweets.
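The last bullet (the same image appearing under two different tweet_ids because of a retweet) can be surfaced directly with duplicated(). A small sketch on invented data, not the real predictions table:

```python
import pandas as pd

# Invented stand-in for df_image_predictions: a retweet repeats the
# original tweet's image URL under a different tweet_id.
preds = pd.DataFrame({
    "tweet_id": [111, 222, 333],
    "jpg_url": ["a.jpg", "b.jpg", "a.jpg"],
})

# keep=False flags every member of a duplicate group, so both the
# original and the retweet show up for inspection.
dupes = preds[preds["jpg_url"].duplicated(keep=False)]
print(dupes["tweet_id"].tolist())  # [111, 333]
```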
`tweet_data` table Tidiness- Dog stage is in 4 columns in `df_twitter_archive` (doggo, floofer, pupper, puppo), move into one column- Merge `df_image_predictions` & `tweet_data` into `df_twitter_archive` table Cleaning Data ###Code # Create a copy of DataFrames for cleaning df_twitter_archive_clean = df_twitter_archive.copy() df_image_predictions_clean = df_image_predictions.copy() tweet_data_clean = tweet_data.copy() ###Output _____no_output_____ ###Markdown Tidiness DefineMerge df_image_predictions & tweet_data into df_twitter_archive table. Code ###Code df_twitter_archive_clean = pd.merge(left=df_twitter_archive_clean, right=tweet_data_clean, left_on='tweet_id', right_on='tweet_id', how='inner') df_twitter_archive_clean = df_twitter_archive_clean.merge(df_image_predictions_clean, on='tweet_id', how='inner') ###Output _____no_output_____ ###Markdown Test ###Code df_twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2059 entries, 0 to 2058 Data columns (total 33 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2059 non-null int64 1 in_reply_to_status_id 23 non-null float64 2 in_reply_to_user_id 23 non-null float64 3 timestamp 2059 non-null object 4 source 2059 non-null object 5 text 2059 non-null object 6 retweeted_status_id 72 non-null float64 7 retweeted_status_user_id 72 non-null float64 8 retweeted_status_timestamp 72 non-null object 9 expanded_urls 2059 non-null object 10 rating_numerator 2059 non-null int64 11 rating_denominator 2059 non-null int64 12 name 2059 non-null object 13 doggo 2059 non-null object 14 floofer 2059 non-null object 15 pupper 2059 non-null object 16 puppo 2059 non-null object 17 favorites 2059 non-null int64 18 retweets 2059 non-null int64 19 user_followers 2059 non-null int64 20 user_favourites 2059 non-null int64 21 date_time 2059 non-null object 22 jpg_url 2059 non-null object 23 img_num 2059 non-null int64 24 p1 2059 non-null object 25 p1_conf 2059 non-null 
float64 26 p1_dog 2059 non-null bool 27 p2 2059 non-null object 28 p2_conf 2059 non-null float64 29 p2_dog 2059 non-null bool 30 p3 2059 non-null object 31 p3_conf 2059 non-null float64 32 p3_dog 2059 non-null bool dtypes: bool(3), float64(7), int64(8), object(15) memory usage: 504.7+ KB ###Markdown DefineDog stage is in 4 columns. Move doggo, floofer, pupper, puppo into *dog_stage* column Code ###Code #Some dogs have multiple stages, concatenate them. df_twitter_archive_clean.loc[df_twitter_archive_clean.doggo == 'None', 'doggo'] = '' df_twitter_archive_clean.loc[df_twitter_archive_clean.floofer == 'None', 'floofer'] = '' df_twitter_archive_clean.loc[df_twitter_archive_clean.pupper == 'None', 'pupper'] = '' df_twitter_archive_clean.loc[df_twitter_archive_clean.puppo == 'None', 'puppo'] = '' df_twitter_archive_clean.groupby(["doggo", "floofer", "pupper", "puppo"]).size().reset_index().rename(columns={0: "count"}) df_twitter_archive_clean['dog_stage'] = df_twitter_archive_clean.doggo + df_twitter_archive_clean.floofer + df_twitter_archive_clean.pupper + df_twitter_archive_clean.puppo df_twitter_archive_clean.loc[df_twitter_archive_clean.dog_stage == 'doggopupper', 'dog_stage'] = 'doggo,pupper' df_twitter_archive_clean.loc[df_twitter_archive_clean.dog_stage == 'doggopuppo', 'dog_stage'] = 'doggo,puppo' df_twitter_archive_clean.loc[df_twitter_archive_clean.dog_stage == 'doggofloofer', 'dog_stage'] = 'doggo,floofer' df_twitter_archive_clean.loc[df_twitter_archive_clean.dog_stage == '', 'dog_stage'] = 'None' ###Output _____no_output_____ ###Markdown Test ###Code df_twitter_archive_clean.dog_stage.value_counts() ###Output _____no_output_____ ###Markdown Quality Define_Source_ column format is bad and can not be read easily. Make it clean and readable. 
Code ###Code df_twitter_archive_clean['source'] = df_twitter_archive_clean['source'].apply(lambda x : re.findall(r'>(.*)<', x)[0]) ###Output _____no_output_____ ###Markdown Test ###Code df_twitter_archive_clean.head() ###Output _____no_output_____ ###Markdown DefineThe numerator and denominator columns have invalid values. Fix the ones which are not ratings Code ###Code tmp_rating = df_twitter_archive_clean[df_twitter_archive_clean.text.str.contains( r"(\d+\.?\d*\/\d+\.?\d*\D+\d+\.?\d*\/\d+\.?\d*)")].text for i in tmp_rating: x = df_twitter_archive_clean.text == i column_1 = 'rating_numerator' column_2 = 'rating_denominator' df_twitter_archive_clean.loc[x, column_1] = re.findall(r"\d+\.?\d*\/\d+\.?\d*\D+(\d+\.?\d*)\/\d+\.?\d*", i) df_twitter_archive_clean.loc[x, column_2] = 10 ###Output D:\Empty\lib\site-packages\pandas\core\strings.py:2001: UserWarning: This pattern has match groups. To actually get the groups, use str.extract. return func(self, *args, **kwargs) ###Markdown Test ###Code df_twitter_archive_clean[df_twitter_archive_clean.text.isin(tmp_rating)] ###Output _____no_output_____ ###Markdown DefineClean decimal values in *rating_numerator*. Code ###Code # View tweets with decimal rating in 'text' column df_twitter_archive_clean[df_twitter_archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)")] ratings = df_twitter_archive_clean.text.str.extract('((?:\d+\.)?\d+)\/(\d+)', expand=True) ratings df_twitter_archive_clean['rating_numerator'] = ratings[0] ###Output _____no_output_____ ###Markdown Test ###Code df_twitter_archive_clean[df_twitter_archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)")] ###Output D:\Empty\lib\site-packages\pandas\core\strings.py:2001: UserWarning: This pattern has match groups. To actually get the groups, use str.extract. 
return func(self, *args, **kwargs) ###Markdown DefineFix Datatypes Code ###Code df_twitter_archive_clean['tweet_id'] = df_twitter_archive_clean['tweet_id'].astype(str) df_twitter_archive_clean['timestamp'] = pd.to_datetime(df_twitter_archive_clean.timestamp) df_twitter_archive_clean['source'] = df_twitter_archive_clean['source'].astype('category') df_twitter_archive_clean['favorites'] = df_twitter_archive_clean['favorites'].astype(int) df_twitter_archive_clean['retweets'] = df_twitter_archive_clean['retweets'].astype(int) df_twitter_archive_clean['user_followers'] = df_twitter_archive_clean['user_followers'].astype(int) df_twitter_archive_clean['dog_stage'] = df_twitter_archive_clean['dog_stage'].astype('category') df_twitter_archive_clean['rating_numerator'] = df_twitter_archive_clean['rating_numerator'].astype(float) df_twitter_archive_clean['rating_denominator'] = df_twitter_archive_clean['rating_denominator'].astype(float) ###Output _____no_output_____ ###Markdown Test ###Code df_twitter_archive_clean.dtypes ###Output _____no_output_____ ###Markdown DefineDelete columns no longer required. 
Code ###Code # Retweets ID: df_twitter_archive_clean = df_twitter_archive_clean[pd.isnull(df_twitter_archive_clean.retweeted_status_id)] #Duplicated tweet_id: df_twitter_archive_clean = df_twitter_archive_clean.drop_duplicates() # Without pictures: df_twitter_archive_clean = df_twitter_archive_clean.dropna(subset = ['jpg_url']) #Useless columns: df_twitter_archive_clean = df_twitter_archive_clean.drop(columns = ['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp']) #Since we have timestamp column, we do not need date_time: df_twitter_archive_clean = df_twitter_archive_clean.drop(columns = ['date_time']) #Duplicates in dog_stage: df_twitter_archive_clean = df_twitter_archive_clean.sort_values('dog_stage').drop_duplicates('tweet_id', keep = 'last') ###Output _____no_output_____ ###Markdown Test ###Code list(df_twitter_archive_clean) print(len(df_twitter_archive_clean)) print(df_twitter_archive_clean.dog_stage.value_counts()) ###Output None 1682 pupper 203 doggo 62 puppo 22 doggo,pupper 9 floofer 7 doggo,puppo 1 doggo,floofer 1 Name: dog_stage, dtype: int64 ###Markdown DefineDelete image prediction columns. Also drop the *in_reply_to_status_id, in_reply_to_user_id, user_favourites* columns Code ###Code # Append the first True prediction to the list 'predictions' and its confidence to the list 'confidence_level'; # otherwise, append NaN.
predictions = [] confidence_level = [] def prediction_func(dataframe): if dataframe['p1_dog'] == True: predictions.append(dataframe['p1']) confidence_level.append(dataframe['p1_conf']) elif dataframe['p2_dog'] == True: predictions.append(dataframe['p2']) confidence_level.append(dataframe['p2_conf']) elif dataframe['p3_dog'] == True: predictions.append(dataframe['p3']) confidence_level.append(dataframe['p3_conf']) else: predictions.append('NaN') confidence_level.append(0) df_twitter_archive_clean.apply(prediction_func, axis=1) df_twitter_archive_clean['prediction'] = predictions df_twitter_archive_clean['confidence_level'] = confidence_level # Delete columns df_twitter_archive_clean = df_twitter_archive_clean.drop(['img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], 1) df_twitter_archive_clean = df_twitter_archive_clean.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'user_favourites'], 1) ###Output _____no_output_____ ###Markdown Test ###Code list(df_twitter_archive_clean) ###Output _____no_output_____ ###Markdown DefineFix issues in name column Code ###Code df_twitter_archive_clean.name = df_twitter_archive_clean.name.str.replace('^[a-z]+', 'None') ###Output _____no_output_____ ###Markdown Test ###Code df_twitter_archive_clean.name.value_counts() df_twitter_archive_clean.name.sample(10) ###Output _____no_output_____ ###Markdown DefineConvert the Null values to None type Code ###Code df_twitter_archive_clean.loc[df_twitter_archive_clean['prediction'] == 'NaN', 'prediction'] = None df_twitter_archive_clean.loc[df_twitter_archive_clean['rating_numerator'] == 'NaN', 'rating_numerator'] = 0 ###Output _____no_output_____ ###Markdown Test ###Code df_twitter_archive_clean.info() df_twitter_archive_clean ###Output _____no_output_____ ###Markdown Storing DataStore the clean DataFrame in a CSV file with the main one named `twitter_archive_master.csv`. 
###Code # Drop any leftover index columns before storing (assign the result back; otherwise the drop is a no-op) df_twitter_archive_clean = df_twitter_archive_clean.drop(df_twitter_archive_clean.columns[df_twitter_archive_clean.columns.str.contains('Unnamed',case = False)],axis = 1) # Storing df_twitter_archive_clean.to_csv('twitter_archive_master.csv', encoding = 'utf-8', index=False) df_twitter_archive_clean = pd.read_csv('twitter_archive_master.csv') df_twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1987 entries, 0 to 1986 Data columns (total 19 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1987 non-null int64 1 timestamp 1987 non-null object 2 source 1987 non-null object 3 text 1987 non-null object 4 expanded_urls 1987 non-null object 5 rating_numerator 1987 non-null float64 6 rating_denominator 1987 non-null float64 7 name 1987 non-null object 8 doggo 73 non-null object 9 floofer 8 non-null object 10 pupper 212 non-null object 11 puppo 23 non-null object 12 favorites 1987 non-null int64 13 retweets 1987 non-null int64 14 user_followers 1987 non-null int64 15 jpg_url 1987 non-null object 16 dog_stage 1987 non-null object 17 prediction 1679 non-null object 18 confidence_level 1987 non-null float64 dtypes: float64(3), int64(4), object(12) memory usage: 295.1+ KB ###Markdown Analyzing and Visualizing Data ###Code df = pd.read_csv('twitter_archive_master.csv') df.info() df.head() # Convert columns to their appropriate types and set the timestamp as an index df['tweet_id'] = df['tweet_id'].astype(object) df['timestamp'] = pd.to_datetime(df.timestamp) df['source'] = df['source'].astype('category') df['dog_stage'] = df['dog_stage'].astype('category') df.set_index('timestamp', inplace=True) df.info() df.describe() ###Output _____no_output_____ ###Markdown Questions 1) What is the most common dog type?
###Code x = np.char.array(['Pupper', 'Doggo', 'Puppo', 'Doggo, Pupper', 'Floofer', 'Doggo, Puppo', 'Doggo, Floofer']) y = np.array(list(df[df['dog_stage'] != 'None']['dog_stage'].value_counts())[0:7]) colors = ['#ff9999','#66b3ff','#99ff99','#ffcc99','#E580E8','#FF684F','#DCDCDD'] porcent = 100.*y/y.sum() patches, texts = plt.pie(y, colors=colors, startangle=90, radius=1.8) labels = ['{0} - {1:1.2f} %'.format(i,j) for i,j in zip(x, porcent)] plt.legend(patches, labels, bbox_to_anchor=(-0.1, 1.), fontsize=8) plt.axis('equal') # Save the visualization as PNG file plt.savefig('Most_common_dog.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown **From the above visualization, Pupper is the most common dog in the dataset** 2) What is the most common dog rating? ###Code df_integer_ratings_14 = df[(df.rating_numerator <= 14) & (df.rating_numerator.apply(float.is_integer))] subset_rating_counts = df_integer_ratings_14.groupby(['rating_numerator']).count()['tweet_id'] plt.bar(np.arange(15), subset_rating_counts, color=('#ff9999','#66b3ff','#99ff99','#ffcc99')) plt.xticks(np.arange(15)) plt.xlabel('Rating Numerator') plt.ylabel('Frequency') plt.title('Distribution of Rating Numerators'); # Save the visualization as PNG file plt.savefig('Most_common_rates.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown **The most tweets are with a ratinig between 10-13** 3) What is the relation between favorites & retweets? ###Code sns.lmplot(x="retweets", y="favorites", data=df, size = 5, aspect=1.3, scatter_kws={'alpha':1/5}); plt.title('Favorite Count vs. Retweet Count'); plt.xlabel('Retweet Count'); plt.ylabel('Favorite Count'); ###Output D:\Empty\lib\site-packages\seaborn\regression.py:580: UserWarning: The `size` parameter has been renamed to `height`; please update your code. warnings.warn(msg, UserWarning) ###Markdown **There is a positive correlation between retweets & likes(favorites)** 4) Which are the most popular dog names? 
###Code df.name.value_counts()[0:7].plot(kind='barh', figsize=(15,8), title='Most Common Dog Names').set_xlabel("Number of Dogs"); df.name.value_counts() ###Output _____no_output_____ ###Markdown Data Wrangling Gathering Data ###Code # Loading my necessary libraries import pandas as pd import numpy as np import requests import io import tweepy import json # Loading the main twitter archive into a DataFrame df_main = pd.read_csv('twitter_archive.csv') # Using the requests library to get data from the provided url and load it into a DataFrame. url = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" r = requests.get(url).content df_img = pd.read_csv(io.StringIO(r.decode('utf-8')), sep="\t") # The last source of data comes from twitter itself. Here I use my own twitter app keys and tokens, provided by twitter, # and the tweepy library to set up an API that will allow me to query tweet info. consumer_key = "" consumer_secret = "" access_token = "" access_token_secret = "" auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth_handler=auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) # Here is the list of tweet id's I will use for my tweet queries. id_list = df_main['tweet_id'].tolist() error_list = [] # With a for loop, and a try/except block that will notify me if there's a tweepy specific error and will add that tweet id # to a list, this block of code will query each tweet's information in JSON format, then write it to a text file, add a new line, # and move on to the next tweet.
with open('json_data.txt', 'w') as outfile: for id in id_list: try: tweet = api.get_status(id, tweet_mode='extended') except tweepy.TweepError: print(f"tweet {id} experienced an error") error_list.append(id) continue # skip to the next id so the failed tweet is not written json.dump(tweet._json, outfile) outfile.write('\n') # Now I create my third DataFrame using the json based text file I just created. with open('json_data.txt') as f: df_json = pd.DataFrame(json.loads(line) for line in f) ###Output _____no_output_____ ###Markdown With this I've gathered all my data from three sources into three DataFrames, df_main, df_img, and df_json. Assessing Data ###Code # Looking at df_main's structure and visual appearance df_main.info() df_main.head() # Here are the value counts for each of the dog types. Most entries don't have a type and have values of None for each column. print(df_main['doggo'].value_counts()) print(df_main['floofer'].value_counts()) print(df_main['pupper'].value_counts()) print(df_main['puppo'].value_counts()) # I noticed the timestamp column included a part for the timezone so I just wanted to check if there are any timezones besides # +0000 df_main['timestamp'].str[19:].value_counts() # Filtering out the other columns and then downloading the DataFrame as a csv file so I can get a better # look at the text (note: df_denom is created in a later cell; the cells were re-run out of order). df_denom = df_denom.loc[:,['text','rating_numerator','rating_denominator']] df_denom.info() df_denom.to_csv('ratings.csv') # Looking more closely at the 23 entries that have non-null values for in_reply_to_status_id and # in_reply_to_user_id df_reply = df_main.query('in_reply_to_status_id == in_reply_to_status_id') df_reply.to_csv('reply.csv') # Looking at the name column df_main['name'].value_counts() # Looking at the values for the rating columns print(df_main['rating_numerator'].value_counts()) print(df_main['rating_denominator'].value_counts()) # Creating a DataFrame to look more closely at what's going on with the suspicious ratings.
df_denom = df_main.query('rating_denominator != 10') df_denom.info() df_denom.reset_index(drop=True, inplace=True) ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 23 entries, 313 to 2335 Data columns (total 17 columns): tweet_id 23 non-null int64 in_reply_to_status_id 5 non-null float64 in_reply_to_user_id 5 non-null float64 timestamp 23 non-null object source 23 non-null object text 23 non-null object retweeted_status_id 1 non-null float64 retweeted_status_user_id 1 non-null float64 retweeted_status_timestamp 1 non-null object expanded_urls 19 non-null object rating_numerator 23 non-null int64 rating_denominator 23 non-null int64 name 23 non-null object doggo 23 non-null object floofer 23 non-null object pupper 23 non-null object puppo 23 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 3.2+ KB ###Markdown df_main is the DataFrame created from the downloaded twitter archive data. From the initial check, I see there are 2356 tweets. 78 seem to be replies, 181 seem to be retweets, and there are also around 60 missing values under the expanded_urls column. The doggo, floofer, pupper, and puppo columns are also stored as strings, but I can either convert these columns to boolean values, or I can merge these 4 columns into a single dog_type column that has a value of doggo, floofer, pupper, puppo, or none. The timestamp column looks pretty clean, but it's stored as a string instead of a datetime, so I could also change that for easier use as well. I need to get rid of the retweeted tweets, and then I can also get rid of the columns that deal with retweet status. I looked into the reply columns and those tweets looked normal. They had images and normal ratings, so I can keep them but I can get rid of the columns that deal only with reply status. The name column has quite a few entries with 'None' as a string as well as several that are just normal words without capitalization.
The 'None' strings should also be changed, just to make sure they don't get confused with the None datatype. Some of the ratings looked strange, especially the denominator ratings that weren't 10. Looking into them, some of these ratings are okay, but a lot of them are inaccurate due to multiple '/' characters being present in the text of the tweet. The tweet_id column has its entries stored as integers, but they should probably be stored as strings because they are used as an identifier and should not be numerically manipulated. So far my assessment of cleaning needed is as follows:1. Clean - Get rid of retweet entries.2. Tidy - Get rid of retweet columns.3. Tidy - Get rid of the reply columns.4. Tidy - Streamline the dog type columns (doggo, floofer, pupper, puppo).5. Clean - Change the timestamp column values to datetimes.6. Clean - Change 'None' string values in the name column to NaN.7. Clean - Change entries in the name column that are just lowercase words.8. Clean - Fix inaccurate numerator and denominator ratings.9. Clean - Change the tweet_id column from int datatype to a string. ###Code # Basic check on the structure and visual appearance of df_img df_img.info() df_img.head() # Checking the value counts of p1, p2, and p3. print(df_img['p1'].value_counts()) print(df_img['p2'].value_counts()) print(df_img['p3'].value_counts()) ###Output golden_retriever 150 Labrador_retriever 100 Pembroke 89 Chihuahua 83 pug 57 chow 44 Samoyed 43 toy_poodle 39 Pomeranian 38 cocker_spaniel 30 malamute 30 French_bulldog 26 Chesapeake_Bay_retriever 23 miniature_pinscher 23 seat_belt 22 Siberian_husky 20 German_shepherd 20 Staffordshire_bullterrier 20 web_site 19 Cardigan 19 Shetland_sheepdog 18 teddy 18 Eskimo_dog 18 Maltese_dog 18 beagle 18 Lakeland_terrier 17 Shih-Tzu 17 Rottweiler 17 Italian_greyhound 16 kuvasz 16 ...
military_uniform 1 harp 1 maillot 1 handkerchief 1 giant_panda 1 tailed_frog 1 maze 1 slug 1 tiger_shark 1 ocarina 1 microwave 1 carton 1 pillow 1 envelope 1 snowmobile 1 sandbar 1 mud_turtle 1 alp 1 jersey 1 EntleBucher 1 loupe 1 platypus 1 earthstar 1 silky_terrier 1 quilt 1 rotisserie 1 water_buffalo 1 electric_fan 1 pool_table 1 revolver 1 Name: p1, Length: 378, dtype: int64 Labrador_retriever 104 golden_retriever 92 Cardigan 73 Chihuahua 44 Pomeranian 42 Chesapeake_Bay_retriever 41 French_bulldog 41 toy_poodle 37 cocker_spaniel 34 Siberian_husky 33 miniature_poodle 33 beagle 28 Eskimo_dog 27 collie 27 Pembroke 27 kuvasz 26 Italian_greyhound 22 American_Staffordshire_terrier 21 Pekinese 21 malinois 20 miniature_pinscher 20 Samoyed 20 toy_terrier 20 chow 20 Norwegian_elkhound 19 Boston_bull 19 Staffordshire_bullterrier 18 Irish_terrier 17 pug 17 kelpie 16 ... racket 1 accordion 1 bucket 1 oxygen_mask 1 toucan 1 breakwater 1 purse 1 bathing_cap 1 hand-held_computer 1 window_shade 1 Madagascar_cat 1 handkerchief 1 sulphur_butterfly 1 spotlight 1 lighter 1 china_cabinet 1 komondor 1 nail 1 basketball 1 assault_rifle 1 water_bottle 1 pickup 1 quail 1 cannon 1 bib 1 African_hunting_dog 1 polecat 1 giant_panda 1 knee_pad 1 umbrella 1 Name: p2, Length: 405, dtype: int64 Labrador_retriever 79 Chihuahua 58 golden_retriever 48 Eskimo_dog 38 kelpie 35 kuvasz 34 Staffordshire_bullterrier 32 chow 32 beagle 31 cocker_spaniel 31 toy_poodle 29 Pekinese 29 Pomeranian 29 Pembroke 27 Chesapeake_Bay_retriever 27 Great_Pyrenees 27 French_bulldog 26 malamute 26 American_Staffordshire_terrier 24 Cardigan 23 pug 23 basenji 21 bull_mastiff 20 toy_terrier 20 Siberian_husky 19 Boston_bull 17 Shetland_sheepdog 17 doormat 16 boxer 16 Lakeland_terrier 16 .. 
croquet_ball 1 hatchet 1 grey_fox 1 partridge 1 barbell 1 Sussex_spaniel 1 beach_wagon 1 French_horn 1 switch 1 cowboy_boot 1 Windsor_tie 1 space_shuttle 1 hammerhead 1 pretzel 1 pajama 1 canoe 1 Indian_elephant 1 prairie_chicken 1 broccoli 1 drumstick 1 African_grey 1 rhinoceros_beetle 1 great_grey_owl 1 toyshop 1 black_swan 1 cup 1 nipple 1 rock_crab 1 shovel 1 grocery_store 1 Name: p3, Length: 408, dtype: int64 ###Markdown df_img is a DataFrame created by pulling the data directly from the Udacity supplied website. It contains dog predictions for a neural network based on pictures of the dog from the tweet. Based on the initial check, so far it looks relatively clean. There are 2075 entries and no missing values in any of the other columns.The columns also all seem to use good datatypes. Columns with numbers are all actually numeric datatypes, p1_dog, p2_dog, and p3_dog use boolean values instead of something less useful like a string. The exception is that tweet_id is again stored as an integer instead of as a string.There is also an issue with some of the entries in p1, p2, and p3. Some of the dog names are capitalized and some are lowercase, and an underscore is used instead of a space.For df_img, my cleaning steps are:1. Clean - Change tweet_id from an integer column to a string column.2. Clean - Standardize the way the strings are stored in p1, p2, and p3. ###Code # Assessing df_json visually and programmatically. 
df_json.info() df_json.head() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 32 columns): contributors 0 non-null object coordinates 0 non-null object created_at 2356 non-null object display_text_range 2356 non-null object entities 2356 non-null object extended_entities 2081 non-null object favorite_count 2356 non-null int64 favorited 2356 non-null bool full_text 2356 non-null object geo 0 non-null object id 2356 non-null int64 id_str 2356 non-null object in_reply_to_screen_name 78 non-null object in_reply_to_status_id 78 non-null float64 in_reply_to_status_id_str 78 non-null object in_reply_to_user_id 78 non-null float64 in_reply_to_user_id_str 78 non-null object is_quote_status 2356 non-null bool lang 2356 non-null object place 1 non-null object possibly_sensitive 2220 non-null object possibly_sensitive_appealable 2220 non-null object quoted_status 24 non-null object quoted_status_id 26 non-null float64 quoted_status_id_str 26 non-null object quoted_status_permalink 26 non-null object retweet_count 2356 non-null int64 retweeted 2356 non-null bool retweeted_status 168 non-null object source 2356 non-null object truncated 2356 non-null bool user 2356 non-null object dtypes: bool(4), float64(3), int64(3), object(22) memory usage: 524.7+ KB ###Markdown For df_json, there are a lot of columns, many of which I don't need. The only columns I really want are favorite_count and retweet_count. I'll also need the id column to be able to match the counts with my other DataFrames. That id column is again stored as an integer instead of as a string. With all three DataFrames assessed, I can also determine that merging them all into one DataFrame is the preferable way to go. I only want to analyze tweets that actually have an image that was run through the neural network, so by merging all three DataFrames I will eliminate any tweets that were not run through the neural network. My cleaning steps are:1.
Tidy - Only take the three columns I want, id, favorite_count, and retweet_count.2. Tidy - Rename id to tweet_id so it matches the other DataFrames.3. Clean - Change the tweet_id column from integer datatype to string.4. Tidy - Merge all DataFrames together into one master DataFrame. Cleaning Data ###Code # Creating a copy of df_main before I start cleaning. df_maincopy = df_main.copy() # Creating a copy of the original df_img before I start cleaning. # Saving the original file downloaded programmatically as image_predictions.tsv df_imgcopy = df_img.copy() df_img.to_csv("image_predictions.tsv", sep="\t") # Creating a copy of df_json before I start cleaning. df_jsoncopy = df_json.copy() # Querying for only NaN values in the retweeted_status_id column to get rid of all retweets. df_main = df_main.query('retweeted_status_id != retweeted_status_id') # Testing the result to make sure all the retweet columns now have 0 entries. df_main.info() # Resetting the index. df_main.reset_index(drop=True, inplace=True) df_main.info() # Dropping the now empty retweet columns. df_main = df_main.drop(['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1) # Testing to make sure those columns are now gone. df_main.info() # Replacing all the 'None' string entries with NaN. df_main['doggo'].replace('None', np.nan, inplace=True) df_main['floofer'].replace('None', np.nan, inplace=True) df_main['pupper'].replace('None', np.nan, inplace=True) df_main['puppo'].replace('None', np.nan, inplace=True) # Testing to see that there are now missing values in the 4 dog type columns. df_main.info() # Now I join the last 4 columns together into one, and make sure to dropna because I have a lot of NaN values now. 
df_main['doggo_type'] = df_main[['doggo', 'floofer', 'pupper', 'puppo']].apply(lambda x: ','.join(x.dropna()), axis=1) # Checking the new column I just made and its unique value counts df_main.info() df_main['doggo_type'].value_counts() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2175 entries, 0 to 2174 Data columns (total 15 columns): tweet_id 2175 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2175 non-null object source 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 87 non-null object floofer 10 non-null object pupper 234 non-null object puppo 25 non-null object doggo_type 2175 non-null object dtypes: float64(2), int64(3), object(10) memory usage: 255.0+ KB ###Markdown I almost have what I want, except where I had only NaN values before, I now just have empty entries, but these don't show up as missing values in the doggo_type column so I need to convert the blank strings to NaN values. ###Code # Replacing the empty string values in my doggo_type column with NaNs. df_main['doggo_type'].replace('', np.nan, inplace=True) # Checking the results to see that doggo_type no longer has empty strings and actually has the missing values I want. df_main.info() df_main['doggo_type'].value_counts() # After merging the doggo, floofer, pupper, and puppo columns' info into one column, I can now drop them from the table. df_main = df_main.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1) # Testing to make sure the doggo, floofer, pupper, and puppo columns are now gone. df_main.info() # There was no other timezone besides +0000 so I don't need to worry about it and I can just use pandas to_datetime method # Even if there was another timezone, because the data was formatted pretty well already, to_datetime may have been able to
I don't know its complete default behavior. Either way, the timestamp column is now in # an easier to use data type in case I want to use it later in my analysis. df_main['timestamp'] = pd.to_datetime(df_main['timestamp']) # Testing the resulting datatype of the timestamp column type(df_main['timestamp'][1]) # Standardizing p1, p2, and p3 so characters are lowercase and replacing underscores with spaces. df_img['p1'] = df_img['p1'].apply(lambda x: x.lower().replace('_',' ')) df_img['p2'] = df_img['p2'].apply(lambda x: x.lower().replace('_',' ')) df_img['p3'] = df_img['p3'].apply(lambda x: x.lower().replace('_',' ')) # Making sure the pick names are now all lowercase and with spaces instead of underscores. df_img['p1'].value_counts() # Merging df_main with df_img eliminates the tweets that were either not in the original twitter archive provided or those that # do not have an image that was run through the neural network. There are still 1994 valid entries. df_c = pd.merge(df_main, df_img, how='inner', on=['tweet_id']) # Looking at the resulting combined DataFrame. df_c.info() # I only want to add favorite_count and retweet_count to my final DataFrame, so using the filter() method I filter out # everything except those 2 columns, and the id column for when I merge. # Changing the id's column name to match my other DataFrame will make it easier to merge. df_json = df_json.filter(['id','favorite_count', 'retweet_count'], axis=1) df_json.rename(index=str, columns={'id':'tweet_id'}, inplace=True) # Looking at the new df_json to make sure its only got the three columns I want df_json.info() df_json.head() # Now I can merge the json DataFrame with my previous one so I'll have everything in one place. df_master = pd.merge(df_c, df_json, how='inner', on=['tweet_id']) # Checking the structure of my new DataFrame. 
df_master.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2006 entries, 0 to 2005 Data columns (total 24 columns): tweet_id 2006 non-null int64 in_reply_to_status_id 24 non-null float64 in_reply_to_user_id 24 non-null float64 timestamp 2006 non-null datetime64[ns] source 2006 non-null object text 2006 non-null object expanded_urls 2006 non-null object rating_numerator 2006 non-null int64 rating_denominator 2006 non-null int64 name 2006 non-null object doggo_type 370 non-null object jpg_url 2006 non-null object img_num 2006 non-null int64 p1 2006 non-null object p1_conf 2006 non-null float64 p1_dog 2006 non-null bool p2 2006 non-null object p2_conf 2006 non-null float64 p2_dog 2006 non-null bool p3 2006 non-null object p3_conf 2006 non-null float64 p3_dog 2006 non-null bool favorite_count 2006 non-null int64 retweet_count 2006 non-null int64 dtypes: bool(3), datetime64[ns](1), float64(5), int64(6), object(9) memory usage: 350.7+ KB ###Markdown Now I have all three of my original DataFrames combined into one, df_master. ###Code # Changing the tweet_id column from integer to string; the result has to be assigned back for the conversion to stick df_master['tweet_id'] = df_master['tweet_id'].astype(str) # Checking the column datatype, I see it is no longer an integer. df_master.info() # Creating a new column to look at the number of / characters in all of the text entries. df_master['count'] = df_master['text'].apply(lambda x: x.count('/')) # Looking at the unique values of this new column df_master['count'].value_counts() ###Output _____no_output_____ ###Markdown The vast majority of entries have 4 / characters. 3 are from the image url, and every tweet should have an image url, so those entries with 4 / characters should be reliable. There are 57 entries with more than 4 / characters which either need to be dropped or examined closely. For simplicity's sake I'm just going to drop those 57 entries.
There is probably a way to use regex patterns to programmatically fix these, or I could just go through them one by one but both would take significant amounts of time and only save 57 out of 1994 entries. ###Code # Dropping those entries and then the count column. df_master = df_master.query('count == 4') df_master.drop(['count'], axis=1, inplace=True) # Checking to make sure I've lost those 57 entries as well as the count column I created. df_master.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1949 entries, 0 to 2005 Data columns (total 24 columns): tweet_id 1949 non-null int64 in_reply_to_status_id 24 non-null float64 in_reply_to_user_id 24 non-null float64 timestamp 1949 non-null datetime64[ns] source 1949 non-null object text 1949 non-null object expanded_urls 1949 non-null object rating_numerator 1949 non-null int64 rating_denominator 1949 non-null int64 name 1949 non-null object doggo_type 359 non-null object jpg_url 1949 non-null object img_num 1949 non-null int64 p1 1949 non-null object p1_conf 1949 non-null float64 p1_dog 1949 non-null bool p2 1949 non-null object p2_conf 1949 non-null float64 p2_dog 1949 non-null bool p3 1949 non-null object p3_conf 1949 non-null float64 p3_dog 1949 non-null bool favorite_count 1949 non-null int64 retweet_count 1949 non-null int64 dtypes: bool(3), datetime64[ns](1), float64(5), int64(6), object(9) memory usage: 340.7+ KB ###Markdown The 23 reply entries all look normal with accurate ratings so I can keep them in my DataFrame, but I don't need the two columns. ###Code # Dropping the two reply columns df_master.drop(['in_reply_to_status_id', 'in_reply_to_user_id'], axis=1, inplace=True) df_master.reset_index(drop=True, inplace=True) # Checking df_master's structure to make sure the two reply columns are no longer there. 
df_master.info() # Changing names in the name column that did not start with an uppercase letter to NaN to indicate that the name is missing df_master['name'] = df_master['name'].apply(lambda x: x if x[0].isupper() else np.nan) df_master['name'] = df_master['name'].replace('None', np.nan) # Checking the DataFrame and the unique values of the name column to make sure the change worked df_master.info() df_master['name'].value_counts() # Saving my final DataFrame as a csv file. df_master.to_csv('twitter_archive_master.csv') ###Output _____no_output_____ ###Markdown In the name column, most entries had a dog's name, but 500 had 'None' and others had random words that were not capitalized. Using the case of the first letter as a condition, I replaced all those words, as well as all the 'None' entries, with NaN. I now have my final DataFrame, clean and ready for analysis. Analysis and Visualization ###Code import matplotlib.pyplot as plt %matplotlib inline # Looking at the distributions of favorite count and retweet count plt.hist(df_master['favorite_count'], bins=30); plt.hist(df_master['retweet_count'], bins=30, color='red'); # Favorite count and retweet count distribution with a log transformation plt.hist(df_master['favorite_count'], bins=30, log=True); plt.hist(df_master['retweet_count'], bins=30, log=True, color='red'); # Limiting the x-axis to get a better view of the data plt.hist(df_master['favorite_count'], range=(0,25000), bins=30, log=True); plt.hist(df_master['retweet_count'], range=(0,25000), bins=30, log=True, color='red'); ###Output _____no_output_____ ###Markdown I can see that both the favorite (blue) and retweet (red) distributions are long-tailed, with the bulk of tweets getting less than 25,000 favorites/retweets, but some tweets getting as many as 6 times that.
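A quick pair of summary statistics makes that long tail concrete. The sketch below uses a synthetic log-normal series as a stand-in for favorite_count (an assumption for illustration only; the real counts are not reproduced here): in a long-tailed distribution the mean sits well above the median, and the top percentile dwarfs the typical value.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for favorite_count: log-normal draws are long-tailed,
# qualitatively like the histograms above (assumption for illustration).
rng = np.random.default_rng(0)
favorites = pd.Series(rng.lognormal(mean=8.5, sigma=1.0, size=2000))

# The mean is dragged upward by the tail, so it lands above the median,
# and the 99th percentile towers over the bulk of the distribution.
print(favorites.mean(), favorites.median())
print(favorites.quantile([0.5, 0.9, 0.99]))
```

On the real data, something like `df_master['favorite_count'].agg(['mean', 'median'])` would show the same mean-above-median pattern.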
###Code # Looking at numerator rating vs favorite count plt.scatter(df_master['rating_numerator'], df_master['favorite_count']); # Limiting the x-axis to get rid of outliers and examine the bulk of the data. plt.scatter(df_master['rating_numerator'], df_master['favorite_count'], alpha=0.01); plt.xlim(0,15) # A scatter plot with numerator rating has vertical lines, so a bar graph might be a better visualization plt.bar(df_master['rating_numerator'], df_master['favorite_count']); plt.xticks(range(0,16)) plt.xlim(0,16) ###Output _____no_output_____ ###Markdown Looking at the numerator rating vs favorite count, the first graph is not very useful due to the inclusion of some extreme outliers. Focusing in on just the numerator ratings between 0 and 15, and using a low alpha value to help against overplotting, there does seem to be an increase in favorite count at higher ratings. And that observation holds true in a bar graph of the same data. ###Code # Correlation matrix of the numeric columns of my final DataFrame, dropping the tweet_id row and column if still present. df_master.corr().drop(index='tweet_id', columns='tweet_id', errors='ignore') ###Output _____no_output_____ ###Markdown Looking at the correlation matrix of all the numerical columns except for tweet_id, I see that the pick confidences are negatively correlated. The correlation coefficient between p1_conf and p2_conf is -0.51, and between p1_conf and p3_conf it's -0.71. This makes sense: if the neural network is very confident in one pick, it will have to be much less confident in the other picks. There's also a positive association between p1_dog, p2_dog, and p3_dog. If one pick is of a dog, it's more likely that the other picks are also dogs. There is a very strong direct relationship between favorite_count and retweet_count.
plt.scatter(df_master['favorite_count'], df_master['retweet_count'], alpha=0.1); plt.xlim(0,25000) plt.ylim(0,10000) plt.savefig('fav_vs_ret.png') ###Output _____no_output_____ ###Markdown Udacity Data Analyst Nanodegree - Project 2 Wrangle and Analyze Data (WeRateDogs) Table of Contents- [Introduction](intro)- [Gathering data](gather)- [Assessing data](assess) - [Quality](quality) - [Tidiness](tidiness)- [Cleaning data](clean)- [Storing, Analyzing, and Visualizing](storing) - [Insights](insights)- [References](references) Introduction In this project, we are going to Wrangle the dataset. The dataset is the tweet archive of Twitter user [@dog_rates](https://twitter.com/dog_rates), also known as WeRateDogs. After Wrangling, we are going to analyze the dataset to create interesting and trustworthy analyses and visualizations. Gathering data - **The WeRateDogs Twitter archive:** This is provided by Udacity and is available in the same directory as `twitter_archive_enhanced.csv` - **The tweet image predictions**, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network. This file `image_predictions.tsv` is hosted on Udacity's servers and should be downloaded programmatically using the Requests library and the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv - **Twitter API & JSON:** Each tweet's retweet count and favorite ("like") count at minimum, and any additional data we may find interesting. Using the tweet IDs in the WeRateDogs Twitter archive, we will query the Twitter API for each tweet's JSON data using Python's Tweepy library and store each tweet's entire set of JSON data in a file called `tweet_json.txt` file. Each tweet's JSON data should be written to its own line. Then we will read this .txt file line by line into a pandas DataFrame with (at minimum) tweet ID, retweet count, and favorite count. **1. 
Reading `twitter_archive_enhanced.csv`** ###Code #Import all packages needed import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline import requests import tweepy import json from IPython.display import HTML, display, Video import seaborn as sns #Read and view CSV file twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') twitter_archive.sort_values('timestamp') twitter_archive.head() #Print a concise summary of a twitter_archive DataFrame. twitter_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown **2. 
The tweet image predictions** ###Code url = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" response = requests.get(url) with open('image-predictions.tsv', mode ='wb') as file: file.write(response.content) #Read TSV file image_prediction = pd.read_csv('image-predictions.tsv', sep='\t' ) image_prediction.head() CONSUMER_KEY = 'LhRxITEU8Onf6XRnZVUVNFMxB' CONSUMER_SECRET = '5Zs6aT471NBIstAQAfeWMoWQR4xcZtfzw7fR3BMIIhMFfRNvl7' OAUTH_TOKEN = '1420033664-LYHIZf9hzESWsWj72S5ZoGYhqR5cviaZHCerNKP' OAUTH_TOKEN_SECRET = 'URo8CpP0kCx8i1U3iBdqKHMmR8xMcQYhfiFHYgQILKTEm' # authorization of consumer key and consumer secret auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET) # set access to user's access key and access secret auth.set_access_token(OAUTH_TOKEN, OAUTH_TOKEN_SECRET) api = tweepy.API(auth, parser = tweepy.parsers.JSONParser(), wait_on_rate_limit = True, wait_on_rate_limit_notify = True) ###Output _____no_output_____ ###Markdown **3. 
Twitter API and JSON** ###Code # Download Tweepy status object based on Tweet ID and store in list list_of_tweets = [] tweets_not_found = [] for tweet_id in twitter_archive['tweet_id']: try: list_of_tweets.append(api.get_status(tweet_id)) except Exception as e: tweets_not_found.append(tweet_id) print(len(list_of_tweets)) print(len(tweets_not_found)) # The API was set up with a JSON parser, so each status is already a plain dict; collect them in a list list_of_dicts = [] for each_json_tweet in list_of_tweets: list_of_dicts.append(each_json_tweet) with open('tweet_json.txt', 'w') as file: file.write(json.dumps(list_of_dicts, indent=4)) # Retrieve all the necessary information from the JSON dictionaries # and put it in a tweet_df dataframe tweet_data = [] with open('tweet_json.txt', encoding='utf8') as json_file: all_data = json.load(json_file) for every_dict in all_data: tweet_id = every_dict['id'] tweet_text = every_dict['text'] only_url = tweet_text[tweet_text.find('https'):] favorite_count = every_dict['favorite_count'] retweet_count = every_dict['retweet_count'] followers_count = every_dict['user']['followers_count'] friends_count = every_dict['user']['friends_count'] whole_source = every_dict['source'] only_device = whole_source[whole_source.find('rel="nofollow">') + 15:-4] source = only_device retweeted_status = every_dict.get('retweeted_status', 'Original tweet') if retweeted_status == 'Original tweet': url = only_url else: retweeted_status = 'This is a retweet' url = 'This is a retweet' tweet_data.append({'tweet_id': str(tweet_id), 'favorite_count': int(favorite_count), 'retweet_count': int(retweet_count), 'followers_count': int(followers_count), 'friends_count': int(friends_count), 'url': url, 'source': source, 'retweeted_status': retweeted_status, }) tweet_df = pd.DataFrame(tweet_data, columns=['tweet_id', 'favorite_count','retweet_count', 'followers_count', 'friends_count','source', 'retweeted_status', 'url']) tweet_df.head() tweet_df.info() ###Output <class
'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 8 columns): tweet_id 2331 non-null object favorite_count 2331 non-null int64 retweet_count 2331 non-null int64 followers_count 2331 non-null int64 friends_count 2331 non-null int64 source 2331 non-null object retweeted_status 2331 non-null object url 2331 non-null object dtypes: int64(4), object(4) memory usage: 145.8+ KB ###Markdown Assessing data Visual assessment ###Code twitter_archive image_prediction tweet_df ###Output _____no_output_____ ###Markdown Programmatic assessment ###Code twitter_archive.info() sum(twitter_archive['tweet_id'].duplicated()) twitter_archive.rating_numerator.value_counts() twitter_archive.rating_denominator.value_counts() image_prediction.sample(10) image_prediction.info() sum(image_prediction.jpg_url.duplicated()) sum(image_prediction.tweet_id.duplicated()) tweet_df.sample(10) tweet_df.info() tweet_df.retweeted_status.value_counts() sum(tweet_df.tweet_id.duplicated()) sum(tweet_df.url.duplicated()) ###Output _____no_output_____ ###Markdown Quality*Completeness, validity, accuracy, consistency (content issues)* **`twitter_archive` DataFrame**1. Only keep original ratings. Remove all the retweets that have images2. Delete unnecessary columns which will not be used for analysis3. Change the datatype of timestamp column to date time object4. Correct the numerator having decimal values5. Correct and assess the denominators other than 106. After merging 4 columns into dog_stage (Tidiness issue), fill out the missing dog stage values7. Change the dog_stage type to category **`image_prediction` DataFrame**1. Rename the column names by displaying their full forms for better understanding2. Drop duplicated jpg_url rows **`tweet_df` DataFrame**1. Change the datatype of `tweet_id` to int64 Tidiness 1. `twitter_archive`: Merge 4 columns (dogger, floofer, pupper and puppo) into 1 column (dog stage)2. 
Merging all three datasets into one dataset Cleaning Data ###Code # Make a copy of all the dataframes twitter_archive_clean = twitter_archive.copy() image_prediction_clean = image_prediction.copy() tweet_df_clean = tweet_df.copy() ###Output _____no_output_____ ###Markdown `twitter_archive` DataFrame Quality **1. Only keep original ratings. Remove all the retweets that have images** Delete retweets by keeping only the rows where retweeted_status_user_id is NaN ###Code # Code twitter_archive_clean = twitter_archive_clean[pd.isnull(twitter_archive_clean['retweeted_status_user_id'])] # Test print(sum(twitter_archive_clean.retweeted_status_user_id.value_counts())) ###Output 0 ###Markdown **2. Delete unnecessary columns which will not be used for analysis** ###Code # Code twitter_archive_clean = twitter_archive_clean.drop(['source', 'in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'expanded_urls'], axis=1) # Test twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 10 columns): tweet_id 2175 non-null int64 timestamp 2175 non-null object text 2175 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: int64(3), object(7) memory usage: 186.9+ KB ###Markdown **3.
Change the datatype of the timestamp column to a datetime object** ###Code # Code twitter_archive_clean.timestamp = pd.to_datetime(twitter_archive_clean.timestamp) # Test twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 10 columns): tweet_id 2175 non-null int64 timestamp 2175 non-null datetime64[ns] text 2175 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: datetime64[ns](1), int64(3), object(6) memory usage: 186.9+ KB ###Markdown **4. Correct the numerator having decimal values** (brought to attention by Udacity reviewer) ###Code # Change the numerator and denominator columns from int to float # Code twitter_archive_clean[['rating_numerator', 'rating_denominator']] = twitter_archive_clean[['rating_numerator','rating_denominator']].astype(float) # Test twitter_archive_clean.info() # Let us display all the tweet texts where decimals are present with pd.option_context('max_colwidth', 200): display(twitter_archive_clean[twitter_archive_clean['text'].str.contains(r"(\d+\.\d*\/\d+)")] [['tweet_id', 'text', 'rating_numerator', 'rating_denominator']]) ###Output /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:3: UserWarning: This pattern has match groups. To actually get the groups, use str.extract. This is separate from the ipykernel package so we can avoid doing imports until ###Markdown Looks like we have to change rating_numerator for the above 5 rows manually according to the tweet text.
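As an aside, the manual edits below could also be done programmatically: a single regex that allows a decimal part recovers both halves of the rating in one pass. This is only a sketch on a hypothetical toy frame (the texts are invented for illustration; only the embedded ratings matter), not the approach taken in this notebook:

```python
import pandas as pd

# Hypothetical toy frame standing in for the archive; texts are made up.
toy = pd.DataFrame({'text': ['Great pupper. 13.5/10 would pet',
                             'An absolute unit. 9.75/10',
                             'Classic doggo. 12/10']})

# One decimal-aware pattern captures numerator and denominator at once.
rating = toy['text'].str.extract(r'(\d+(?:\.\d+)?)/(\d+)')
toy['rating_numerator'] = rating[0].astype(float)
toy['rating_denominator'] = rating[1].astype(float)
print(toy[['rating_numerator', 'rating_denominator']])
```

Note that `str.extract` keeps only the first match per row, so on the real data it would still misfire on texts with several x/y fragments (dates, fractions), which is exactly why the affected rows get inspected by hand here.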
###Code # Change numerators twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 883482846933004288), 'rating_numerator'] = 13.5 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 786709082849828864), 'rating_numerator'] = 9.75 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 778027034220126208), 'rating_numerator'] = 11.27 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 681340665377193984), 'rating_numerator'] = 9.5 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 680494726643068929), 'rating_numerator'] = 11.26 # Test with pd.option_context('max_colwidth', 200): display(twitter_archive_clean[twitter_archive_clean['text'].str.contains(r"(\d+\.\d*\/\d+)")] [['tweet_id', 'text', 'rating_numerator', 'rating_denominator']]) ###Output /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:10: UserWarning: This pattern has match groups. To actually get the groups, use str.extract. # Remove the CWD from sys.path while we load stuff. ###Markdown **5. 
Correct and assess the denominators other than 10** ###Code # Assess twitter_archive_clean.rating_denominator.value_counts() # NOTE: Denominators that are multiples of 10 represent multiple dogs # Let's have a look at tweet texts whose rating_denominators are not 10 and also not a multiple of 10 print(twitter_archive_clean.loc[twitter_archive_clean.rating_denominator == 11.0, 'text']) print(twitter_archive_clean.loc[twitter_archive_clean.rating_denominator == 2.0, 'text']) print(twitter_archive_clean.loc[twitter_archive_clean.rating_denominator == 16.0, 'text']) print(twitter_archive_clean.loc[twitter_archive_clean.rating_denominator == 15.0, 'text']) print(twitter_archive_clean.loc[twitter_archive_clean.rating_denominator == 7.0, 'text']) print(twitter_archive_clean['text'][1068]) print(twitter_archive_clean['text'][1662]) print(twitter_archive_clean['text'][2335]) print(twitter_archive_clean['text'][1663]) print(twitter_archive_clean['text'][342]) print(twitter_archive_clean['text'][516]) # Manually updating Numerator and Denominator according to tweet texts above twitter_archive_clean['rating_numerator'][1068] = 14.0 twitter_archive_clean['rating_denominator'][1068] = 10.0 twitter_archive_clean['rating_numerator'][1662] = 10.0 twitter_archive_clean['rating_denominator'][1662] = 10.0 twitter_archive_clean['rating_numerator'][2335] = 9.0 twitter_archive_clean['rating_denominator'][2335] = 10.0 twitter_archive_clean['rating_numerator'][1663] = np.nan # Rating not given twitter_archive_clean['rating_denominator'][1663] = np.nan twitter_archive_clean['rating_numerator'][342] = np.nan # Rating not given twitter_archive_clean['rating_denominator'][342] = np.nan twitter_archive_clean['rating_numerator'][516] = np.nan # Rating not given twitter_archive_clean['rating_denominator'][516] = np.nan ###Output /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: 
http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:3: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy This is separate from the ipykernel package so we can avoid doing imports until /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:5: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy """ /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:6: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:8: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:9: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy if __name__ == '__main__': /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:11: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy # This is added back by InteractiveShellApp.init_path() /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:12: SettingWithCopyWarning: A 
value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy if sys.path[0] == '': /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:14: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:15: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy from ipykernel import kernelapp as app /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:17: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:18: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy ###Markdown **Cleaning Numerators and Denominators for rows as pointed out by Udacity reviewer** ###Code twitter_archive_clean[twitter_archive_clean['tweet_id'] == 835246439529840640] twitter_archive_clean[twitter_archive_clean['tweet_id'] == 810984652412424192] ###Output _____no_output_____ ###Markdown Already cleaned for the above row ###Code twitter_archive_clean[twitter_archive_clean['tweet_id'] == 820690176645140481] ###Output _____no_output_____ ###Markdown The above row seems to be correct. There are a total of 7 dogs in the picture. 
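As a side note, the flood of SettingWithCopyWarning messages above comes from chained indexing of the form `df['col'][row] = value`. A minimal sketch on a toy frame (the index labels mirror two of the rows above, but the data is hypothetical) showing the single `.loc` assignment that pandas recommends instead:

```python
import numpy as np
import pandas as pd

# Toy stand-in for twitter_archive_clean with two of the indices above.
ratings = pd.DataFrame(
    {"rating_numerator": [24.0, 1.0], "rating_denominator": [7.0, 2.0]},
    index=[516, 1068],
)

# Chained form that warns: ratings["rating_numerator"][1068] = 14.0
# One label-based .loc call sets both columns in a single step, no warning.
ratings.loc[1068, ["rating_numerator", "rating_denominator"]] = [14.0, 10.0]
ratings.loc[516, ["rating_numerator", "rating_denominator"]] = np.nan  # rating not given

print(ratings.loc[1068, "rating_numerator"])  # → 14.0
```

The `.loc` form operates directly on the original frame, so the assignment is guaranteed to stick, whereas chained indexing may silently write to a temporary copy.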
###Code # Let's clean the row number 313 print(twitter_archive_clean['text'][313]) twitter_archive_clean['rating_numerator'][313] = 13.0 twitter_archive_clean['rating_denominator'][313] = 10.0 # Test twitter_archive_clean.rating_denominator.value_counts() ###Output _____no_output_____ ###Markdown Tidiness **1. `twitter_archive`: Merge 4 columns (doggo, floofer, pupper and puppo) into 1 column (dog stage)** ###Code # Create new column dog_stage for doggo, floofer, pupper and puppo # Code twitter_archive_clean['dog_stage'] = twitter_archive_clean[['doggo', 'floofer','pupper','puppo']].apply(lambda x: ''.join(x), axis=1) twitter_archive_clean['dog_stage'].replace("NoneNoneNoneNone", 'None', inplace=True) twitter_archive_clean['dog_stage'].replace("doggoNoneNoneNone", "doggo", inplace=True) twitter_archive_clean['dog_stage'].replace("NoneNonepupperNone", "pupper", inplace=True) twitter_archive_clean['dog_stage'].replace("NoneNoneNonepuppo", "puppo", inplace=True) twitter_archive_clean['dog_stage'].replace("NoneflooferNoneNone", "floofer", inplace=True) # removing doggo, floofer, pupper and puppo columns twitter_archive_clean.drop(['doggo','floofer', 'pupper','puppo'], axis=1, inplace=True) twitter_archive_clean.dog_stage.value_counts() ###Output _____no_output_____ ###Markdown **Replacing doggoNonepupperNone, doggoNoneNonepuppo, and doggoflooferNoneNone with 'Multiple'** (Pointed out by Udacity review) ###Code # Code twitter_archive_clean['dog_stage'].replace("doggoNonepupperNone", "multiple", inplace=True) twitter_archive_clean['dog_stage'].replace("doggoNoneNonepuppo", "multiple", inplace=True) twitter_archive_clean['dog_stage'].replace("doggoflooferNoneNone", "multiple", inplace=True) # Test twitter_archive_clean.dog_stage.value_counts() ###Output _____no_output_____ ###Markdown **6. 
Filling out missing dog_stage values** ###Code # # Extracting the dog_stage from tweet text where dog stage is None # # Code for i in range(len(twitter_archive_clean)): if twitter_archive_clean.dog_stage.iloc[i] == 'None': if 'doggo' in twitter_archive_clean.text.iloc[i]: twitter_archive_clean.dog_stage.iloc[i] = 'doggo' elif 'pupper' in twitter_archive_clean.text.iloc[i]: twitter_archive_clean.dog_stage.iloc[i] = 'pupper' elif 'puppo' in twitter_archive_clean.text.iloc[i]: twitter_archive_clean.dog_stage.iloc[i] = 'puppo' elif 'floofer' in twitter_archive_clean.text.iloc[i]: twitter_archive_clean.dog_stage.iloc[i] = 'floofer' else: twitter_archive_clean.dog_stage.iloc[i] = np.NaN # Test twitter_archive_clean.dog_stage.value_counts() ###Output _____no_output_____ ###Markdown **7. Change the dog_stage type to category** ###Code # Code twitter_archive_clean['dog_stage'] = twitter_archive_clean['dog_stage'].astype('category') # Test twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 7 columns): tweet_id 2175 non-null int64 timestamp 2175 non-null datetime64[ns] text 2175 non-null object rating_numerator 2172 non-null float64 rating_denominator 2172 non-null float64 name 2175 non-null object dog_stage 383 non-null category dtypes: category(1), datetime64[ns](1), float64(2), int64(1), object(2) memory usage: 201.3+ KB ###Markdown `image_prediction` DataFrame Quality **1. 
Rename the column names by displaying their full forms for better understanding** ###Code # Code image_prediction_clean = image_prediction_clean.rename(columns={"p1":"prediction1", "p1_conf":"prediction1_confidence", "p1_dog":"prediction1_result", "p2":"prediction2", "p2_conf":"prediction2_confidence", "p2_dog":"prediction2_result", "p3":"prediction3", "p3_conf":"prediction3_confidence", "p3_dog":"prediction3_result"}) # Test image_prediction_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 prediction1 2075 non-null object prediction1_confidence 2075 non-null float64 prediction1_result 2075 non-null bool prediction2 2075 non-null object prediction2_confidence 2075 non-null float64 prediction2_result 2075 non-null bool prediction3 2075 non-null object prediction3_confidence 2075 non-null float64 prediction3_result 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown **2. Drop duplicated jpg_url rows** ###Code # Code image_prediction_clean = image_prediction_clean.drop_duplicates(subset=['jpg_url'], keep='last') # Test sum(image_prediction_clean['jpg_url'].duplicated()) ###Output _____no_output_____ ###Markdown `tweet_df` DataFrame **1. Change the datatype of `tweet_id` to int64** ###Code # Code tweet_df_clean['tweet_id'] = tweet_df_clean['tweet_id'].astype(int) # Test tweet_df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 8 columns): tweet_id 2331 non-null int64 favorite_count 2331 non-null int64 retweet_count 2331 non-null int64 followers_count 2331 non-null int64 friends_count 2331 non-null int64 source 2331 non-null object retweeted_status 2331 non-null object url 2331 non-null object dtypes: int64(5), object(3) memory usage: 145.8+ KB ###Markdown Tidiness **2. 
Merging all three datasets into one dataset** ###Code # Code twitter_df1 = pd.merge(twitter_archive_clean, image_prediction_clean, how='left', on=['tweet_id']) twitter_df = pd.merge(twitter_df1, tweet_df_clean, how='left', on=['tweet_id']) # Test twitter_df.head() ###Output _____no_output_____ ###Markdown Storing, Analyzing, and Visualizing Data ###Code # Store the clean DataFrame in a CSV file twitter_df.to_csv('twitter_archive_master.csv', index=False, encoding = 'utf-8') df = pd.read_csv('twitter_archive_master.csv') df.head() ###Output _____no_output_____ ###Markdown Descriptive Analysis ###Code df.info() df[['retweet_count','favorite_count','rating_numerator', 'prediction1_confidence']].describe() ###Output _____no_output_____ ###Markdown **Dog with the highest favorite count** ###Code df.loc[df['favorite_count'].idxmax()] ###Output _____no_output_____ ###Markdown The image prediction algorithm predicts that the dog with the highest favorite count is a Labrador Retriever. ###Code # Let's see the image for the dog dog_jpg_url = df.loc[df['favorite_count'].idxmax()].jpg_url dog_jpg_url display(HTML('<img src="https://pbs.twimg.com/ext_tw_video_thumb/744234667679821824/pu/img/1GaWmtJtdqzZV7jy.jpg" />')) # Let's see what that tweet text has to say about this dog df.loc[df['favorite_count'].idxmax()].text ###Output _____no_output_____ ###Markdown Let's embed this complete tweet ###Code %%HTML <blockquote class="twitter-tweet"><p lang="en" dir="ltr">Here&#39;s a doggo realizing you can stand in a pool. 
13/10 enlightened af (vid by Tina Conrad) <a href="https://t.co/7wE9LTEXC4">pic.twitter.com/7wE9LTEXC4</a></p>&mdash; WeRateDogs® (@dog_rates) <a href="https://twitter.com/dog_rates/status/744234799360020481?ref_src=twsrc%5Etfw">June 18, 2016</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> ###Output _____no_output_____ ###Markdown **Distribution of Retweets and Favorites based on Ratings** ###Code # Set the style sns.set(style="darkgrid") df.groupby('rating_numerator')['retweet_count','favorite_count'].mean().plot(kind='bar', figsize=(8,5)) plt.title('Distribution of Retweets and Favorites based on Ratings', fontsize=15) plt.xlabel('Rating', fontsize=15) plt.ylabel('Count', fontsize=15) ###Output _____no_output_____ ###Markdown - The favorite counts are higher than the retweet counts for each rating- The dogs with the rating 13.5 are most retweeted and most liked. It seems like there is a positive correlation between retweet counts and favorite counts based on ratings. ###Code df[['rating_numerator','retweet_count','favorite_count']].corr(method = 'pearson') ###Output _____no_output_____ ###Markdown Pearson correlation coefficients show that there is indeed a positive correlation between retweet counts and favorite counts. Surprisingly, there is no correlation between ratings and retweet counts, as well as ratings and favorite counts. **Most tweeted dog breeds** ###Code df.prediction1.value_counts() ###Output _____no_output_____ ###Markdown Take the dog breeds which are tweeted more than 19 times and exclude the category 'None' ###Code hot_breeds = df.groupby('prediction1').filter(lambda x: len(x) > 19) hot_breeds['prediction1'].value_counts().plot(kind = 'bar', figsize=(8,5)) plt.title('Most Tweeted Dog Breeds', fontsize=15) plt.ylabel('Count', fontsize=15) plt.xlabel('Breed', fontsize=15) ###Output _____no_output_____ ###Markdown **Dog Stage Distribution** For most of the rows, dog_stage is not known. 
Let's just consider those rows for which we have information. ###Code df.dog_stage.value_counts() df.dog_stage.value_counts().plot(kind = 'bar', figsize=(8,5)) plt.title('Dog Stage Distribution', fontsize=15) plt.ylabel('Count', fontsize=15) plt.xlabel('Dog Stage', fontsize=15) ###Output _____no_output_____ ###Markdown **Dog Stage vs Average Favorite Count** ###Code dog_stages = df.groupby('dog_stage').filter(lambda x: len(x) < 250) dog_stages.groupby('dog_stage')['favorite_count'].mean().plot(kind='bar') plt.title('Dog Stage vs Average Favorite Count Analysis', fontsize=15) plt.xlabel('Dog Stage', fontsize=15) plt.ylabel('Average Favorite Count', fontsize=15) ###Output _____no_output_____ ###Markdown Wrangle & Analyze Data (WeRateDogs Twitter Archive) IntroductionData Wrangling Gathering Data Assessing Data Cleaning Data Storing, Analyzing and Visualizing Data Introduction> The dataset which we will be wrangling (and analyzing and visualizing) is the tweet archive of Twitter user @dog_rates, also known as WeRateDogs. WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. I will use Python (and its libraries) to analyze and visualize the dataset in a Jupyter notebook. ###Code #import library import pandas as pd import numpy as np import requests import os import time import matplotlib.pyplot as plt from matplotlib import cm %matplotlib inline import seaborn as sns ###Output _____no_output_____ ###Markdown Data Wrangling Gathering Data> In this part, we will gather three parts of data The WeRateDogs Twitter archive. Downloading the file named twitter-archive-enhanced.csv The tweet image predictions, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network. 
This file (image_predictions.tsv) is hosted on Udacity's servers and should be downloaded programmatically using the Requests library and the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv Each tweet's retweet count and favorite ("like") count at minimum. Using the tweet IDs in the WeRateDogs Twitter archive, query the Twitter API for each tweet's JSON data using Python's Tweepy library and store each tweet's entire set of JSON data in a file called tweet_json.txt. Each tweet's JSON data should be written to its own line. Then read this .txt file line by line into a pandas DataFrame with (at minimum) tweet ID, retweet count, and favorite count. Note: do not include your Twitter API keys, secrets, and tokens in your project submission. Step 1. Importing Twitter Archive File ###Code #import csv file df_archive = pd.read_csv('twitter-archive-enhanced.csv') df_archive.head() ###Output _____no_output_____ ###Markdown Step 2. Programmatically Download Tweet Image Predictions TSV ###Code #using requests library to download tweet image prediction tsv and store it as image_predictions.tsv url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open('image_predictions.tsv', mode='wb') as file: file.write(response.content) # Import the tweet image predictions TSV file into a DataFrame df_img = pd.read_csv('image_predictions.tsv', sep='\t') df_img.head() ###Output _____no_output_____ ###Markdown Step 3. 
Downloading Tweet JSON Data ###Code import tweepy from tweepy import OAuthHandler import json from timeit import default_timer as timer # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # These are hidden to comply with Twitter's API terms and conditions consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) # NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES: # NOTE TO REVIEWER: this student had mobile verification issues so the following # Twitter API code was sent to this student from a Udacity instructor # Tweet IDs for which to gather additional data via Twitter's API tweet_ids = df_archive.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) # read json txt file and save as a df tw_json = [] with open('tweet-json.txt', 'r') as json_data: #make a loop to read file line = json_data.readline() while line: status = json.loads(line) # extract variable status_id = status['id'] status_ret_count = status['retweet_count'] status_fav_count = status['favorite_count'] # make a dictionary json_file = {'tweet_id': status_id, 'retweet_count': status_ret_count, 'favorite_count': status_fav_count } tw_json.append(json_file) # read next line 
line = json_data.readline() #convert the dictionary list to a df df_json = pd.DataFrame(tw_json, columns = ['tweet_id', 'retweet_count', 'favorite_count']) df_json.head() ###Output _____no_output_____ ###Markdown Assessing Data> After gathering each of the above pieces of data, assess them visually and programmatically for quality and tidiness issues. >Using two types of assessment: >1. Visual assessment: scrolling through the data in your preferred software application (Google Sheets, Excel, a text editor, etc.).>2. Programmatic assessment: using code to view specific portions and summaries of the data (pandas' head, tail, and info methods, for example).Quality Issues -- issues with content. Low quality data is also known as dirty data.Tidiness Issues -- issues with structure that prevent easy analysis. Untidy data is also known as messy data. Tidy data requirements: Each variable forms a column. Each observation forms a row. Each type of observational unit forms a table. Step 1. Assessing Twitter Archive File ###Code #print out the head() df_archive.head() df_archive.shape df_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null object 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 doggo 2356 non-null object 14 floofer 2356 non-null object 15 pupper 2356 non-null object 16 puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown There are 2356 
rows and 17 columns. From the info above, we found that six columns have missing values including 'in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'. ###Code #check with in_reply_to_status_id column df_archive.in_reply_to_status_id.value_counts() type(df_archive['tweet_id'][0]) type(df_archive['in_reply_to_status_id'][1]) type(df_archive['in_reply_to_user_id'][1]) type(df_archive['retweeted_status_id'][1]) type(df_archive['retweeted_status_user_id'][1]) ###Output _____no_output_____ ###Markdown Problem 1: The id columns have the wrong format and should be changed to integer. ###Code #check column rating_numerator df_archive.rating_numerator.describe() df_archive.rating_numerator.value_counts().sort_index() #check column rating_denominator df_archive.rating_denominator.describe() df_archive.rating_denominator.value_counts().sort_index() ###Output _____no_output_____ ###Markdown Problem 2: The rating denominator should always be 10; other values are invalid. ###Code #check timestamp column df_archive['timestamp'].value_counts() type(df_archive['timestamp'][0]) df_archive['retweeted_status_timestamp'].value_counts() type(df_archive['retweeted_status_timestamp'][0]) ###Output _____no_output_____ ###Markdown Problem 3: When we explore the type of the two timestamp columns, one is string, the other one is float. The format of timestamp should be changed to datetime. ###Code df_archive.name.value_counts().sort_index(ascending = True) #collect all error names error_name = df_archive.name.str.contains('^[a-z]', regex = True) df_archive[error_name].name.value_counts().sort_index() len(df_archive[error_name]) ###Output _____no_output_____ ###Markdown Problem 4: There are 109 invalid names not starting with a capital letter. 
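The lowercase "names" flagged here are article-like words ("a", "the", "quite") left behind by the archive's name extraction. One way such names could be cleaned later (a sketch on toy data, not a step this notebook actually performs) is to mask every name starting with a lowercase letter to NaN:

```python
import numpy as np
import pandas as pd

# Toy stand-in for df_archive.name (hypothetical values).
names = pd.Series(["Charlie", "a", "quite", "Lucy", "the"])

# Same pattern used above to flag error names, applied as a mask.
mask = names.str.match(r"^[a-z]")
cleaned = names.mask(mask, np.nan)
print(cleaned.isnull().sum())  # → 3
```

`Series.mask` keeps valid names untouched and turns the flagged entries into proper missing values, so they no longer inflate name counts.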
###Code #check with four columns of dogs' stage df_archive.doggo.value_counts() df_archive.floofer.value_counts() df_archive.pupper.value_counts() df_archive.puppo.value_counts() ###Output _____no_output_____ ###Markdown Problem 5: The "None" value should be changed to "NaN" in these four columns. ###Code #show the number of retweet df_archive.retweeted_status_id.isnull().value_counts() df_archive.retweeted_status_user_id.isnull().value_counts() ###Output _____no_output_____ ###Markdown Problem 6: Since we do not want duplicated information, we will remove the retweet rows. ###Code df_archive.source.value_counts() ###Output _____no_output_____ ###Markdown Problem 7: The source values should be cleaned and the attached link removed. Step 2. Tweet Image Predictions ###Code df_img.head() df_img.shape df_img.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2075 non-null int64 1 jpg_url 2075 non-null object 2 img_num 2075 non-null int64 3 p1 2075 non-null object 4 p1_conf 2075 non-null float64 5 p1_dog 2075 non-null bool 6 p2 2075 non-null object 7 p2_conf 2075 non-null float64 8 p2_dog 2075 non-null bool 9 p3 2075 non-null object 10 p3_conf 2075 non-null float64 11 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown Problem 8: There are 2075 rows and 12 columns. Based on the Twitter Archive file (2356 rows), we know some pictures are missing. ###Code #check if id duplicate df_img.tweet_id.duplicated().value_counts() ##check if jpg duplicate df_img.jpg_url.duplicated().value_counts() ###Output _____no_output_____ ###Markdown Problem 9: There are 66 duplicated jpg_url values. ###Code #check the image number column df_img.img_num.value_counts() ###Output _____no_output_____ ###Markdown Some people have posted more than one picture. Step 3. 
Checking JSON File ###Code df_json.head() df_json.shape df_json.info() df_json.describe() df_json.tweet_id.duplicated().value_counts() df_json['tweet_id'].nunique() ###Output _____no_output_____ ###Markdown There are 2354 rows and 3 columns. No null variables. No duplicated tweet id. ###Code type(df_json['tweet_id'][0]) type(df_json['retweet_count'][0]) type(df_json['favorite_count'][0]) ###Output _____no_output_____ ###Markdown The formats of the variables are correct. Summary Quality IssueTwitter Archive File The id columns have the wrong format and should be changed to integer. The rating denominator should always be 10; other values are invalid. When we explore the type of the two timestamp columns, one is string, the other one is float. The format of timestamp should be changed to datetime. There are 109 invalid names not starting with a capital letter. In the four columns of dog's stage, the "None" value should be changed to "NaN". Since we do not want duplicated information, we will remove the retweet rows based on retweet id. Change the value for the source column.Tweet Image Predictions TSV There are 2075 rows in the prediction file, 2354 rows in the JSON data. Based on the Twitter Archive file (2356 rows), we know some rows are not matching. There are 66 duplicated jpg_url values.JSON File No quality issues for the JSON file. Tidiness Issue df_archive could drop mostly empty columns of reply/retweet information like 'in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'. The four columns of dog's stage should be merged into one column. Merging JSON file, df_archive dataframe and img file into one. Cleaning Data> Store the clean DataFrame(s) in a CSV file with the main one named twitter_archive_master.csv. Analyze and visualize your wrangled data in your wrangle_act.ipynb Jupyter Notebook.>The issues that satisfy the Project Motivation must be cleaned:>1. 
Cleaning includes merging individual pieces of data according to the rules of tidy data.>2. The fact that the rating numerators are greater than the denominators does not need to be cleaned. This unique rating system is a big part of the popularity of WeRateDogs. Dealing with tidiness issue 1. Drop the columns we do not use ###Code #make a copy of three data files df_archive_clean = df_archive.copy() df_img_clean = df_img.copy() df_json_clean = df_json.copy() #drop useless columns df_archive_clean = df_archive_clean.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1) #fill the null url df_archive_clean.expanded_urls.head() #the website url should be 'https://twitter.com/dog_rates/status/' plus their id, so we could fill it df_archive_clean.expanded_urls = 'https://twitter.com/dog_rates/status/' + df_archive_clean.tweet_id.astype(str) #check with the df see if everything is fixed df_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 13 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 timestamp 2356 non-null object 2 source 2356 non-null object 3 text 2356 non-null object 4 retweeted_status_id 181 non-null float64 5 expanded_urls 2356 non-null object 6 rating_numerator 2356 non-null int64 7 rating_denominator 2356 non-null int64 8 name 2356 non-null object 9 doggo 2356 non-null object 10 floofer 2356 non-null object 11 pupper 2356 non-null object 12 puppo 2356 non-null object dtypes: float64(1), int64(3), object(9) memory usage: 239.4+ KB ###Markdown 2. 
Merge four columns of dog's stage into one ###Code #replace 'None' to '' df_archive_clean[['doggo', 'floofer', 'pupper', 'puppo']] = df_archive_clean[['doggo', 'floofer', 'pupper', 'puppo']].replace('None', '') df_archive_clean.head() #combine four columns to stage df_archive_clean['dog_stage'] = df_archive_clean['doggo'] + df_archive_clean['floofer'] + df_archive_clean['pupper'] + df_archive_clean['puppo'] #drop other four stages columns df_archive_clean = df_archive_clean.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1) df_archive_clean.dog_stage.value_counts() #replace the null value and multiple stage df_archive_clean['dog_stage'] = df_archive_clean['dog_stage'].replace('', np.nan) df_archive_clean['dog_stage'] = df_archive_clean['dog_stage'].replace('doggopupper', 'multiple') df_archive_clean['dog_stage'] = df_archive_clean['dog_stage'].replace('doggofloofer', 'multiple') df_archive_clean['dog_stage'] = df_archive_clean['dog_stage'].replace('doggopuppo', 'multiple') #double check with our df df_archive_clean.dog_stage.value_counts() ###Output _____no_output_____ ###Markdown 3. 
Merge three files into one ###Code master_clean = pd.merge(df_archive_clean, df_img_clean, on = 'tweet_id', how = 'inner') master_clean = pd.merge(master_clean, df_json_clean, on = 'tweet_id', how = 'inner') master_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2073 entries, 0 to 2072 Data columns (total 23 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2073 non-null int64 1 timestamp 2073 non-null object 2 source 2073 non-null object 3 text 2073 non-null object 4 retweeted_status_id 79 non-null float64 5 expanded_urls 2073 non-null object 6 rating_numerator 2073 non-null int64 7 rating_denominator 2073 non-null int64 8 name 2073 non-null object 9 dog_stage 320 non-null object 10 jpg_url 2073 non-null object 11 img_num 2073 non-null int64 12 p1 2073 non-null object 13 p1_conf 2073 non-null float64 14 p1_dog 2073 non-null bool 15 p2 2073 non-null object 16 p2_conf 2073 non-null float64 17 p2_dog 2073 non-null bool 18 p3 2073 non-null object 19 p3_conf 2073 non-null float64 20 p3_dog 2073 non-null bool 21 retweet_count 2073 non-null int64 22 favorite_count 2073 non-null int64 dtypes: bool(3), float64(4), int64(6), object(10) memory usage: 346.2+ KB ###Markdown Dealing with quality issue 1. 
change data format and drop the retweet rows ###Code #change timestamp format from string to datetime master_clean.timestamp = pd.to_datetime(master_clean.timestamp) #change id format from float to int id_clean = master_clean.retweeted_status_id id_clean = id_clean.dropna() id_clean = id_clean.astype('int64') #drop the retweet rows master_clean = master_clean.drop(master_clean[master_clean.retweeted_status_id.apply(lambda x : x in id_clean.values)].index.values, axis = 0) master_clean = master_clean.drop('retweeted_status_id', axis=1) master_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 2072 Data columns (total 22 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1994 non-null int64 1 timestamp 1994 non-null datetime64[ns, UTC] 2 source 1994 non-null object 3 text 1994 non-null object 4 expanded_urls 1994 non-null object 5 rating_numerator 1994 non-null int64 6 rating_denominator 1994 non-null int64 7 name 1994 non-null object 8 dog_stage 306 non-null object 9 jpg_url 1994 non-null object 10 img_num 1994 non-null int64 11 p1 1994 non-null object 12 p1_conf 1994 non-null float64 13 p1_dog 1994 non-null bool 14 p2 1994 non-null object 15 p2_conf 1994 non-null float64 16 p2_dog 1994 non-null bool 17 p3 1994 non-null object 18 p3_conf 1994 non-null float64 19 p3_dog 1994 non-null bool 20 retweet_count 1994 non-null int64 21 favorite_count 1994 non-null int64 dtypes: bool(3), datetime64[ns, UTC](1), float64(3), int64(6), object(9) memory usage: 317.4+ KB ###Markdown 2. fix the rating part ###Code # rating_denominator should be 10. 
master_clean.rating_denominator.value_counts() # check the rows where rating_denominator is not equal to 10 pd.set_option('display.max_colwidth', 150) master_clean[['tweet_id', 'text', 'rating_numerator', 'rating_denominator']].query('rating_denominator != 10') master_clean.query('rating_denominator != 10').shape[0] # find the mis-parsed ratings and fix them # 740373189193256964(14/10);722974582966214656(13/10);716439118184652801(11/10);666287406224695296(9/10);682962037429899265(10/10); master_clean.loc[master_clean.tweet_id == 740373189193256964, 'rating_numerator':'rating_denominator'] = [14, 10] master_clean.loc[master_clean.tweet_id == 722974582966214656, 'rating_numerator':'rating_denominator'] = [13, 10] master_clean.loc[master_clean.tweet_id == 716439118184652801, 'rating_numerator':'rating_denominator'] = [11, 10] master_clean.loc[master_clean.tweet_id == 666287406224695296, 'rating_numerator':'rating_denominator'] = [9, 10] master_clean.loc[master_clean.tweet_id == 682962037429899265, 'rating_numerator':'rating_denominator'] = [10, 10] # number of rows where rating_denominator is still not equal to 10 master_clean.query('rating_denominator != 10').shape[0] master_clean[['text', 'rating_numerator', 'rating_denominator']].query('rating_denominator != 10') # drop the remaining rows listed above master_clean = master_clean.drop([345, 415, 734, 924, 1022, 1047, 1065, 1131, 1207, 1379, 1380, 1512, 1571], axis = 0) master_clean.query('rating_denominator != 10').shape[0] # confirm the rows are dropped master_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1981 entries, 0 to 2072 Data columns (total 22 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1981 non-null int64 1 timestamp 1981 non-null datetime64[ns, UTC] 2 source 1981 non-null object 3 text 1981 non-null object 4 expanded_urls 1981 non-null object 5 rating_numerator 1981 non-null int64 6 rating_denominator 1981 non-null int64 7 name 1981 non-null object 8 dog_stage 306 non-null object 9 jpg_url
1981 non-null object 10 img_num 1981 non-null int64 11 p1 1981 non-null object 12 p1_conf 1981 non-null float64 13 p1_dog 1981 non-null bool 14 p2 1981 non-null object 15 p2_conf 1981 non-null float64 16 p2_dog 1981 non-null bool 17 p3 1981 non-null object 18 p3_conf 1981 non-null float64 19 p3_dog 1981 non-null bool 20 retweet_count 1981 non-null int64 21 favorite_count 1981 non-null int64 dtypes: bool(3), datetime64[ns, UTC](1), float64(3), int64(6), object(9) memory usage: 315.3+ KB ###Markdown 3. replace those invalid names with 'None' ###Code master_clean.reset_index(drop=True, inplace=True) error_name = master_clean.name.str.contains('^[a-z]', regex = True) master_clean[error_name].name.value_counts().sort_index() # replace the flagged names with 'None' (vectorized, no loop needed) master_clean['name'] = master_clean['name'].mask(error_name, 'None') master_clean.name.value_counts() ###Output _____no_output_____ ###Markdown 4. create a new column called 'breed': if p1 confidence >= 95% and p1_dog is True, or p2 confidence <= 1% and p2_dog is True, put the predicted breed into the breed column ###Code #create breed column master_clean['breed'] = 'None' master_clean.breed.value_counts() # put every qualifying prediction into breed, otherwise 'Unsure' conditions = [(master_clean.p1_conf >= 0.95) & master_clean.p1_dog, (master_clean.p2_conf <= 0.01) & master_clean.p2_dog] master_clean['breed'] = np.select(conditions, [master_clean.p1, master_clean.p2], default='Unsure') #format the breed names master_clean['breed'] = master_clean.breed.str.capitalize().str.replace('_',' ') master_clean.breed.value_counts() ###Output _____no_output_____ ###Markdown 5.
drop p1/p2/p3 columns ###Code # drop the p1/p2/p3 prediction columns master_clean = master_clean.drop(['p1','p1_conf','p1_dog','p2','p2_conf','p2_dog','p3','p3_conf','p3_dog'], axis = 1) master_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1981 entries, 0 to 1980 Data columns (total 14 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1981 non-null int64 1 timestamp 1981 non-null datetime64[ns, UTC] 2 source 1981 non-null object 3 text 1981 non-null object 4 expanded_urls 1981 non-null object 5 rating_numerator 1981 non-null int64 6 rating_denominator 1981 non-null int64 7 name 1981 non-null object 8 dog_stage 306 non-null object 9 jpg_url 1981 non-null object 10 img_num 1981 non-null int64 11 retweet_count 1981 non-null int64 12 favorite_count 1981 non-null int64 13 breed 1981 non-null object dtypes: datetime64[ns, UTC](1), int64(6), object(7) memory usage: 216.8+ KB ###Markdown 6. clean up the source column ###Code master_clean.source.value_counts() # note: master_clean3 is an alias of master_clean (not a copy), so the cleanup below applies to both master_clean3 = master_clean master_clean3['source'] = master_clean3.source.replace({'<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>' : 'Twitter for iPhone', '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>' : 'Twitter Web Client', '<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>' : 'TweetDeck' }) master_clean3.source.value_counts() ###Output _____no_output_____ ###Markdown Storing, Analyzing and Visualizing Data> Clean each of the issues you documented while assessing. Perform this cleaning in wrangle_act.ipynb as well. The result should be a high quality and tidy master pandas DataFrame (or DataFrames, if appropriate). Then analyze and visualize the wrangled data.
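Before moving on, one aside on the source cleanup above: the hard-coded replacement dictionary only covers the three values present in this archive. The readable label can also be pulled out of the anchor tag generically with a regex; a minimal sketch on illustrative values (not the notebook's actual code):

```python
import pandas as pd

# Toy values mimicking the raw source column (illustrative)
src = pd.Series([
    '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>',
    '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>',
])

# Capture the text between the '>' closing the opening tag and '</a>'
labels = src.str.extract(r'>([^<]+)</a>', expand=False)
print(labels.tolist())
```

This handles any source value with the same anchor structure, so no dictionary maintenance is needed if a new client appears.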
Insights Visualization Storing ###Code # write without the index so reloading does not add an 'Unnamed: 0' column master_clean.to_csv('twitter_archive_master.csv', index=False) twitter_archive_master = pd.read_csv('twitter_archive_master.csv') twitter_archive_master.head() ###Output _____no_output_____ ###Markdown Insights Q1: Which dog has the highest retweet count? ###Code twitter_archive_master.sort_values(by = 'retweet_count', ascending = False).iloc[0] ###Output _____no_output_____ ###Markdown This dog has the highest retweet count. Its stage is doggo. The post was retweeted 79515 times and received 131075 favorites. Q2: Which dog received the most favorites? ###Code twitter_archive_master.sort_values(by = 'favorite_count', ascending = False).iloc[0] ###Output _____no_output_____ ###Markdown This dog received the most favorites; its stage is puppo. The post was retweeted 48265 times and received 132810 favorites. Q3: Which platform is most used to post WeRateDogs tweets? ###Code twitter_archive_master.source.value_counts() # calculate the proportion twitter_archive_master.source.value_counts().iloc[0]/twitter_archive_master.shape[0] ###Output _____no_output_____ ###Markdown 98% of the WeRateDogs tweets were posted via Twitter for iPhone. Q4: What is the relation between retweet counts and favorite counts? ###Code twitter_archive_master.describe() twitter_archive_master.retweet_count.mean() twitter_archive_master.favorite_count.mean() twitter_archive_master.favorite_count.mean()/twitter_archive_master.retweet_count.mean() ###Output _____no_output_____ ###Markdown The mean retweet count is 8925.8 and the mean favorite count is 27775.6, so posts receive roughly three times as many favorites as retweets. This suggests that people readily give a post a thumbs up, but are less likely to retweet it. Visualization 1. What is the most popular dog breed?
###Code # see the breed value counts twitter_archive_master.breed.value_counts() # total number of dog breeds (excluding 'Unsure') twitter_archive_master.breed.value_counts().shape[0] - 1 # create a bar chart of the popular dog breeds plt.figure(figsize = (9, 9)) breed_filter=twitter_archive_master.groupby('breed').filter(lambda x: len(x) >= 3 and len(x) <= 100) breed_filter['breed'].value_counts(ascending=True).plot(kind = 'barh', alpha = 0.8, color = 'pink') plt.title('The Most Popular Dog Breed', fontsize=20) plt.xlabel('Counts',fontsize=18) plt.ylabel('Dog Breed',fontsize=18); ###Output _____no_output_____ ###Markdown According to this chart, the most popular dog breed is the Pug with 21 counts. Pembroke and Samoyed come second with 19 counts each, and Golden retriever is third with 18 counts. There are 50 identified dog breeds in total. 2. What is the proportion of each dog stage? And what is the relation between favorite counts and dog stage? ###Code twitter_archive_master.dog_stage.value_counts() # create a pie chart plt.figure(figsize=(12,9)) sns.set(style='darkgrid') name = twitter_archive_master['dog_stage'].value_counts() explode = (0, 0, 0, 0, 0.1) plt.pie(name, explode, labels = name.index, shadow=True, textprops={'fontsize': 20}, autopct='%1.1f%%', startangle = 230) plt.axis('equal') plt.title('The Proportion of Dog Stage', fontsize=35) plt.legend(); # calculate the average favorite count per dog stage avg_fav = twitter_archive_master.groupby('dog_stage').favorite_count.mean() avg_fav # create a bar chart of the relation between favorite counts and dog stage plt.figure(figsize = (9, 9)) plt.bar(avg_fav.index.values, avg_fav, color = 'orange', alpha = 0.8) plt.title('The Relation between Favorite Counts and Dog Stage', fontsize=18) plt.xlabel('Dog Stage', fontsize=15) plt.ylabel('Favorite Counts', fontsize=15); ###Output _____no_output_____ ###Markdown Wrangling and Analyzing Data 1.
Introduction Data preparation is often the hardest part of a data analyst's workflow. In this project we apply data wrangling skills to pull real-world data from Twitter, clean it, and do some analysis. We will use the original Twitter data of the user @dog_rates, along with an image prediction dataset, to build our analysis. WeRateDogs is a popular Twitter account: as the name suggests, people rate dogs with a denominator of 10, and the numerator is usually higher than 10 to show how lovely the dog is. 2. Gathering Data ###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import os import json import requests import tweepy import re from tweepy import OAuthHandler from timeit import default_timer as timer %matplotlib inline ###Output _____no_output_____ ###Markdown 2.1 Import the on-hand Twitter data ###Code twt_df1 = pd.read_csv('twitter-archive-enhanced.csv') twt_df1.head() ###Output _____no_output_____ ###Markdown 2.2 Scrape data from the web ###Code url = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" response = requests.get(url) with open('image_prediction.tsv', mode='wb') as file: file.write(response.content) img_df = pd.read_csv('image_prediction.tsv', sep='\t') img_df.head() ###Output _____no_output_____ ###Markdown I tried to get data from the Twitter API by registering a Twitter developer account, but my application was rejected by Twitter, so I used the data sent by my Udacity instructor.
###Code # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # These are hidden to comply with Twitter's API terms and conditions consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) # NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES: # df_1 is a DataFrame with the twitter_archive_enhanced.csv file. You may have to # change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv # NOTE TO REVIEWER: this student had mobile verification issues so the following # Twitter API code was sent to this student from a Udacity instructor # Tweet IDs for which to gather additional data via Twitter's API tweet_ids = twt_df1.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file with open('tweet-json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) ###Output _____no_output_____ ###Markdown Store the JSON file in a dataframe. 
###Code df2_list = [] with open('tweet-json.txt', 'r', encoding='utf8') as file: for line in file: lines = json.loads(line) df2_list.append({'tweet_id': lines['id'], 'favorites': lines['favorite_count'], 'retweets': lines['retweet_count'], 'timestamp': lines['created_at']}) twt_df2 = pd.DataFrame(df2_list, columns=['tweet_id','timestamp','favorites','retweets']) twt_df2.head() ###Output _____no_output_____ ###Markdown 3. Assessing Data Now, it's time to check the data we gathered in part 2. 3.1 Assessment twt_df1 ###Code twt_df1.head() twt_df1.tail() twt_df1.info() twt_df1.describe() twt_df1.sort_values('timestamp') ###Output _____no_output_____ ###Markdown There are many missing values in the **in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, and retweeted_status_timestamp** columns. Let's try to find a connection among them: ###Code twt_df1[twt_df1.in_reply_to_status_id.notna()].head() twt_df1[twt_df1.retweeted_status_id.notna()].head() ###Output _____no_output_____ ###Markdown Twt_df1 columns:* **tweet_id**: the unique identifier for each tweet* **in_reply_to_status_id**: if the tweet is a reply, this column holds the id of the original tweet* **in_reply_to_user_id**: if the tweet is a reply, this column holds the user id of the original tweet's author* **timestamp**: date and time of the tweet* **source**: utility used to post the tweet* **text**: content of the tweet * **retweeted_status_id**: if the tweet is a retweet, this column holds the id of the original tweet* **retweeted_status_user_id**: if the tweet is a retweet, this column holds the user id of the original tweet's author* **retweeted_status_timestamp**: if the tweet is a retweet, this column holds the timestamp of the original tweet* **expanded_urls**: URL of the tweet* **rating_numerator**: rating numerator of the dog mentioned in the tweet* **rating_denominator**: rating denominator of the dog mentioned in the tweet* **name**: the name of the dog*
**doggo**/ **floofer**/ **pupper**/ **puppo**: nicknames from the WeRateDogs lexicon for dogs at different life stages. img_df ###Code img_df.head() img_df.tail() img_df.info() img_df.describe() ###Output _____no_output_____ ###Markdown img_df columns:* **tweet_id**: the unique identifier of the tweet* **jpg_url**: the URL of the image* **img_num**: image number of the tweet* **p1**: the first prediction of the image with the most prediction confidence* **p1_conf**: how confident the algorithm is in the first prediction* **p1_dog**: whether or not the first prediction is a dog* **p2**: the second prediction of the image with the second prediction confidence* **p2_conf**: how confident the algorithm is in the second prediction* **p2_dog**: whether or not the second prediction is a dog* **p3**: the third prediction of the image with the third prediction confidence* **p3_conf**: how confident the algorithm is in the third prediction* **p3_dog**: whether or not the third prediction is a dog twt_df2 ###Code twt_df2.head() twt_df2.tail() twt_df2.info() twt_df2.describe() ###Output _____no_output_____ ###Markdown twt_df2 columns:* **tweet_id**: the unique identifier of the tweet* **timestamp**: the created time of the tweet* **favorites**: favorite counts of the tweet* **retweets**: retweet counts of the tweet 3.2 Quality and tidiness problems twt_df1 Let's first check if our unique identifier is truly unique or not: ###Code twt_df1.tweet_id.duplicated().sum() ###Output _____no_output_____ ###Markdown Have a look at the first several rows: ###Code twt_df1.head() twt_df1.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64
retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown * **tweet_id**: this column should be string instead of int* **timestamp**: this column should be date-time format instead of string* **expanded_urls**: this column has multiple missing values ###Code twt_df1.describe() twt_df1.tweet_id.duplicated().sum() twt_df1.loc[twt_df1.expanded_urls.isnull()] ###Output _____no_output_____ ###Markdown The text part is not fully displayed, we may need to see that full text content: ###Code #https://stackoverflow.com/questions/25351968/how-to-display-full-non-truncated-dataframe-information-in-html-when-convertin def print_full(x): pd.set_option('display.max_rows', len(x)) pd.set_option('display.max_columns', None) pd.set_option('display.width', 2000) pd.set_option('display.float_format', '{:20,.2f}'.format) pd.set_option('display.max_colwidth', -1) print(x) pd.reset_option('display.max_rows') pd.reset_option('display.max_columns') pd.reset_option('display.width') pd.reset_option('display.float_format') pd.reset_option('display.max_colwidth') print_full(twt_df1.head()[['text', 'rating_numerator', 'rating_denominator']]) print_full(twt_df1.tail()[['text', 'rating_numerator', 'rating_denominator']]) print_full(twt_df1.sample(5)[['text', 'rating_numerator', 'rating_denominator']]) ###Output text rating_numerator rating_denominator 2329 Those are sunglasses and a jean jacket. 11/10 dog cool af https://t.co/uHXrPkUEyl 11 10 31 This is Waffles. His doggles are pupside down. Unsure how to fix. 13/10 someone assist Waffles https://t.co/xZDA9Qsq1O 13 10 2227 Here we have an Azerbaijani Buttermilk named Guss. He sees a demon baby Hitler behind his owner. 
10/10 stays alert https://t.co/aeZykWwiJN 10 10 1618 For those who claim this is a goat, u are wrong. It is not the Greatest Of All Time. The rating of 5/10 should have made that clear. Thank u 5 10 28 This is Derek. He's late for a dog meeting. 13/10 pet...al to the metal https://t.co/BCoWue0abA 13 10 ###Markdown So the rating of the dog appears at the end of the text content. Let's check the full range by looking at the value counts of the ratings: ###Code twt_df1.rating_denominator.value_counts() ###Output _____no_output_____ ###Markdown Some rating denominator values are not 10. Let's dig into them by checking the text content and denominator of these rows: ###Code check_denominator = twt_df1.query("rating_denominator > 10")[['text', 'rating_numerator', 'rating_denominator']] check_denominator print_full(check_denominator) ###Output text rating_numerator rating_denominator 342 @docmisterio account started on 11/15/15 11 15 433 The floofs have been released I repeat the floofs have been released. 84/70 https://t.co/NIYC820tmd 84 70 784 RT @dog_rates: After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https:/… 9 11 902 Why does this never happen at my front door... 165/150 https://t.co/HmwrdfEfUE 165 150 1068 After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https://t.co/XAVDNDaVgQ 9 11 1120 Say hello to this unbelievably well behaved squad of doggos. 204/170 would try to pet all at once https://t.co/yGQI3He3xv 204 170 1165 Happy 4/20 from the squad! 13/10 for all https://t.co/eV1diwds8a 4 20 1202 This is Bluebert. He just saw that both #FinalFur match ups are split 50/50. Amazed af. 11/10 https://t.co/Kky1DPG4iq 50 50 1228 Happy Saturday here's 9 puppers on a bench. 99/90 good work everybody https://t.co/mpvaVxKmc1 99 90 1254 Here's a brigade of puppers. All look very prepared for whatever happens next.
80/80 https://t.co/0eb7R1Om12 80 80 1274 From left to right:\nCletus, Jerome, Alejandro, Burp, &amp; Titson\nNone know where camera is. 45/50 would hug all at once https://t.co/sedre1ivTK 45 50 1351 Here is a whole flock of puppers. 60/50 I'll take the lot https://t.co/9dpcw6MdWa 60 50 1433 Happy Wednesday here's a bucket of pups. 44/40 would pet all at once https://t.co/HppvrYuamZ 44 40 1598 Yes I do realize a rating of 4/20 would've been fitting. However, it would be unjust to give these cooperative pups that low of a rating 4 20 1634 Two sneaky puppers were not initially seen, moving the rating to 143/130. Please forgive us. Thank you https://t.co/kRK51Y5ac3 143 130 1635 Someone help the girl is being mugged. Several are distracting her while two steal her shoes. Clever puppers 121/110 https://t.co/1zfnTJLt55 121 110 1662 This is Darrel. He just robbed a 7/11 and is in a high speed police chase. Was just spotted by the helicopter 10/10 https://t.co/7EsP8LmSp5 7 11 1663 I'm aware that I could've said 20/16, but here at WeRateDogs we are very professional. An inconsistent rating scale is simply irresponsible 20 16 1779 IT'S PUPPERGEDDON. Total of 144/120 ...I think https://t.co/ZanVtAtvIq 144 120 1843 Here we have an entire platoon of puppers. Total score: 88/80 would pet all at once https://t.co/y93p6FLvVw 88 80 ###Markdown So some tweets contain a second number/number expression in the text, and the wrong one was recorded as the rating. Other ratings are simply unusual, with large denominators. I will delete the rows with genuinely strange ratings and keep (and fix) the ones that were wrongly parsed but do have a real rating at the end of the text.
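One way to quantify how often this mis-parsing can happen is to count how many number/number patterns each text contains; anything above one is ambiguous for the parser. A minimal sketch on two tweet texts taken from the table above:

```python
import pandas as pd

texts = pd.Series([
    "After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP",
    "This is Waffles. His doggles are pupside down. 13/10 someone assist Waffles",
])

# Count the d+/d+ patterns per tweet; >1 means the first match may not be the rating
n_patterns = texts.str.count(r'\d+/\d+')
print(n_patterns.tolist())
```

Filtering on `n_patterns > 1` would give exactly the ambiguous rows worth inspecting by hand.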
###Code twt_df1.rating_numerator.value_counts() check_numerator = twt_df1.query("rating_numerator > 20")[['text', 'rating_numerator', 'rating_denominator']] check_numerator print_full(check_numerator) ###Output text rating_numerator rating_denominator 188 @dhmontgomery We also gave snoop dogg a 420/10 but I think that predated your research 420 10 189 @s8n You tried very hard to portray this good boy as not so good, but you have ultimately failed. His goodness shines through. 666/10 666 10 290 @markhoppus 182/10 182 10 313 @jonnysun @Lin_Manuel ok jomny I know you're excited but 960/00 isn't a valid rating, 13/10 is tho 960 0 340 RT @dog_rates: This is Logan, the Chow who lived. He solemnly swears he's up to lots of good. H*ckin magical af 9.75/10 https://t.co/yBO5wu… 75 10 433 The floofs have been released I repeat the floofs have been released. 84/70 https://t.co/NIYC820tmd 84 70 516 Meet Sam. She smiles 24/7 &amp; secretly aspires to be a reindeer. \nKeep Sam smiling by clicking and sharing this link:\nhttps://t.co/98tB8y7y7t https://t.co/LouL5vdvxx 24 7 695 This is Logan, the Chow who lived. He solemnly swears he's up to lots of good. H*ckin magical af 9.75/10 https://t.co/yBO5wuqaPS 75 10 763 This is Sophie. She's a Jubilant Bush Pupper. Super h*ckin rare. Appears at random just to smile at the locals. 11.27/10 would smile back https://t.co/QFaUiIHxHq 27 10 902 Why does this never happen at my front door... 165/150 https://t.co/HmwrdfEfUE 165 150 979 This is Atticus. He's quite simply America af. 1776/10 https://t.co/GRXwMxLBkh 1776 10 1120 Say hello to this unbelievably well behaved squad of doggos. 204/170 would try to pet all at once https://t.co/yGQI3He3xv 204 170 1202 This is Bluebert. He just saw that both #FinalFur match ups are split 50/50. Amazed af. 11/10 https://t.co/Kky1DPG4iq 50 50 1228 Happy Saturday here's 9 puppers on a bench. 99/90 good work everybody https://t.co/mpvaVxKmc1 99 90 1254 Here's a brigade of puppers. 
All look very prepared for whatever happens next. 80/80 https://t.co/0eb7R1Om12 80 80 1274 From left to right:\nCletus, Jerome, Alejandro, Burp, &amp; Titson\nNone know where camera is. 45/50 would hug all at once https://t.co/sedre1ivTK 45 50 1351 Here is a whole flock of puppers. 60/50 I'll take the lot https://t.co/9dpcw6MdWa 60 50 1433 Happy Wednesday here's a bucket of pups. 44/40 would pet all at once https://t.co/HppvrYuamZ 44 40 1634 Two sneaky puppers were not initially seen, moving the rating to 143/130. Please forgive us. Thank you https://t.co/kRK51Y5ac3 143 130 1635 Someone help the girl is being mugged. Several are distracting her while two steal her shoes. Clever puppers 121/110 https://t.co/1zfnTJLt55 121 110 1712 Here we have uncovered an entire battalion of holiday puppers. Average of 11.26/10 https://t.co/eNm2S6p9BD 26 10 1779 IT'S PUPPERGEDDON. Total of 144/120 ...I think https://t.co/ZanVtAtvIq 144 120 1843 Here we have an entire platoon of puppers. Total score: 88/80 would pet all at once https://t.co/y93p6FLvVw 88 80 2074 After so many requests... here you go.\n\nGood dogg. 420/10 https://t.co/yfAAo1gdeY 420 10 ###Markdown * **rating_denominator**: Some denominators are not 10. One reason is that some texts contain more than one number/number pattern, and only the first match was parsed into the rating columns, even when it was not the rating but something like a date. Another reason is that some posted images contain more than one dog, so the author rates, for example, 10 dogs against a denominator of 100. * **rating_numerator**: Some numerators are very large. Besides the reasons listed for the denominator, there is another one: sometimes people love a dog so much that they give it an extremely high rating. So if we solve the denominator problems, we don't need to worry about the numerators.
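For the legitimate multi-dog posts (e.g. 204/170 or 88/80), an alternative the notebook does not take — it drops those rows during cleaning instead — would be to rescale every rating onto a common /10 base. A hedged sketch on toy values (the `rating_per_10` column name is illustrative):

```python
import pandas as pd

# Toy ratings: one normal, two multi-dog group ratings (illustrative values)
df = pd.DataFrame({
    'rating_numerator': [13, 88, 204],
    'rating_denominator': [10, 80, 170],
})

# Rescale each rating onto a denominator of 10
df['rating_per_10'] = 10 * df.rating_numerator / df.rating_denominator
print(df.rating_per_10.tolist())
```

This keeps the group posts comparable with single-dog ratings instead of discarding them, at the cost of treating a group average as one dog's score.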
Check the source column, which is not fully displayed: ###Code print_full(twt_df1.source.sample(5)) ###Output 776 <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a> 1731 <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a> 1202 <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a> 44 <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a> 1541 <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a> Name: source, dtype: object ###Markdown * **source**: this column has an HTML structure that can be simplified Let's have a look at the dog name column: ###Code twt_df1.name.head() twt_df1.name.value_counts() ###Output _____no_output_____ ###Markdown Some values in this column do not look like real names: a, an, the, very, and so on. They are all lowercase, so we can use that feature to flag the abnormal values. ###Code twt_df1.loc[(twt_df1.name.str.islower())].name.value_counts() twt_df1.loc[(twt_df1.name.str.islower())].name.value_counts().index ###Output _____no_output_____ ###Markdown The list above confirms the hypothesis: lowercase strings are not real dog names.* **name**: this column has some missing values, and some of the names are not real dog names but articles or adjectives. We can also have a look at the last four columns of this dataset: doggo, floofer, pupper, and puppo. This is a tidiness problem: the columns themselves are values of a variable. The variable name for these four columns should be something like 'dog stage'. ###Code twt_df1.groupby(["doggo", "floofer", "pupper", "puppo"]).size().reset_index().rename(columns={0: "count"}) ###Output _____no_output_____ ###Markdown * **doggo, floofer, pupper, puppo**: tidiness problem: the columns themselves are values of a variable. And some of the dogs have multiple dog stages.
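The tidiness problem just noted — doggo/floofer/pupper/puppo being values of one 'dog stage' variable — is exactly what `pd.melt` addresses. A minimal sketch on toy data (the frame, values, and join step here are illustrative, not the notebook's code):

```python
import pandas as pd

# Toy frame mimicking the four stage columns (illustrative data)
df = pd.DataFrame({
    'tweet_id': [1, 2, 3],
    'doggo':   ['doggo', 'None', 'None'],
    'floofer': ['None',  'None', 'None'],
    'pupper':  ['None',  'None', 'pupper'],
    'puppo':   ['None',  'None', 'None'],
})

# Melt the four columns into one 'dog_stage' variable, keep non-'None' entries
stages = (df.melt(id_vars='tweet_id', value_name='dog_stage')
            .query("dog_stage != 'None'")
            .drop(columns='variable'))

# Re-attach; tweets with no stage get NaN automatically via the left join
tidy = df[['tweet_id']].merge(stages, on='tweet_id', how='left')
print(tidy)
```

A dog tagged with two stages would simply yield two rows here, which also surfaces the multiple-stage cases explicitly instead of concatenating them into strings like 'doggopupper'.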
img_df ###Code img_df.head() img_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown * **tweet_id**: this column should be string instead of int ###Code img_df.describe() ###Output _____no_output_____ ###Markdown Start by checking the unique identifiers: ###Code img_df.tweet_id.duplicated().sum() ###Output _____no_output_____ ###Markdown This dataset looks pretty clean. The only problem is that the strings in the **p1**, **p2**, and **p3** columns are not consistently cased.* **p1**, **p2**, **p3**: dog breed names are not consistently lowercase twt_df2 ###Code twt_df2.head() twt_df2.sample(5) twt_df2.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2354 entries, 0 to 2353 Data columns (total 4 columns): tweet_id 2354 non-null int64 timestamp 2354 non-null object favorites 2354 non-null int64 retweets 2354 non-null int64 dtypes: int64(3), object(1) memory usage: 73.6+ KB ###Markdown * **tweet_id**: this column should be string instead of int* **timestamp**: this column should be date-time instead of string ###Code twt_df2.describe() twt_df2.tweet_id.duplicated().sum() ###Output _____no_output_____ ###Markdown We can summarize the data quality and tidiness problems now: **Quality problems**:* **tweet_id**, **timestamp**: wrong data types* **source**: this column has a useless HTML structure that can be simplified* **retweets**: there are some retweets that are essentially duplicates of the actual tweets* **expanded_urls**: multiple missing values* **rating_denominator**: Some of the
denominators are not 10. One reason is that some texts contain more than one number/number pattern, and only the first match was parsed into the rating columns, even when it was not the rating but something like a date. Another reason is that some posted images contain more than one dog, so the author rates, for example, 10 dogs against a denominator of 100. * **rating_numerator**: Some numerators are very large. Besides the reasons listed for the denominator, there is another one: sometimes people love a dog so much that they give it an extremely high rating. So if we solve the denominator problems, we don't need to worry about the numerators* **name**: this column has some missing values, and some of the names are not real dog names but articles or adjectives* **p1**, **p2**, **p3**: dog breed names are not consistently lowercase* **unnecessary columns to be deleted**: delete unnecessary columns to make the final dataset neat and tidy **Tidiness problems**:* **doggo, floofer, pupper, puppo**: the columns themselves are values of one variable. Note: some of the dogs have multiple dog stages.* **need to merge all the datasets**: merge the three datasets into one using an inner join on tweet_id 4.
Cleaning Data ###Code df1_clean = twt_df1.copy() df2_clean = twt_df2.copy() df3_clean = img_df.copy() ###Output _____no_output_____ ###Markdown * **need to merge all the datasets**: merge the three datasets into one using an inner join on tweet_id **Define** We can start by combining all three datasets on the unique identifier *tweet_id* **Code** ###Code df4 = pd.merge(df1_clean, df2_clean, on='tweet_id', how='inner') df5 = pd.merge(df4, df3_clean, on='tweet_id', how='inner') df5_clean = df5 ###Output _____no_output_____ ###Markdown **Test** ###Code df5_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2073 entries, 0 to 2072 Data columns (total 31 columns): tweet_id 2073 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp_x 2073 non-null object source 2073 non-null object text 2073 non-null object retweeted_status_id 79 non-null float64 retweeted_status_user_id 79 non-null float64 retweeted_status_timestamp 79 non-null object expanded_urls 2073 non-null object rating_numerator 2073 non-null int64 rating_denominator 2073 non-null int64 name 2073 non-null object doggo 2073 non-null object floofer 2073 non-null object pupper 2073 non-null object puppo 2073 non-null object timestamp_y 2073 non-null object favorites 2073 non-null int64 retweets 2073 non-null int64 jpg_url 2073 non-null object img_num 2073 non-null int64 p1 2073 non-null object p1_conf 2073 non-null float64 p1_dog 2073 non-null bool p2 2073 non-null object p2_conf 2073 non-null float64 p2_dog 2073 non-null bool p3 2073 non-null object p3_conf 2073 non-null float64 p3_dog 2073 non-null bool dtypes: bool(3), float64(7), int64(6), object(15) memory usage: 475.7+ KB ###Markdown In general, I will fix the tidiness and quality problems first, then delete the unnecessary columns. 
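Since *tweet_id* is the unique key for all three tables, inner joins like the one above can be guarded with pandas' `validate` argument, which raises a `MergeError` if either side unexpectedly contains duplicate keys. A minimal sketch with made-up toy frames (the column names below are illustrative, not the notebook's real data):

```python
import pandas as pd

# Toy stand-ins for two of the tables being merged (hypothetical data).
left = pd.DataFrame({"tweet_id": [1, 2, 3], "text": ["a", "b", "c"]})
right = pd.DataFrame({"tweet_id": [2, 3, 4], "favorites": [10, 20, 30]})

# validate="one_to_one" raises pandas.errors.MergeError if tweet_id is
# duplicated on either side; indicator=True can additionally show which
# rows matched when debugging a join.
merged = pd.merge(left, right, on="tweet_id", how="inner", validate="one_to_one")
print(merged)  # rows for tweet_id 2 and 3 only
```

Because the duplicated-id checks earlier returned 0, `validate` would pass silently here; it simply turns a silent data problem into a loud error.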
* **source**: this column has useless HTML markup that can be simplified **Define** Strip the HTML tags using a regular expression **Code** ###Code href_tags = re.compile(r'<[^>]+>') def remove_tags(text): return href_tags.sub('', text) df5_clean['source'] = df5_clean['source'].apply(remove_tags) ###Output _____no_output_____ ###Markdown **Test** ###Code df5_clean.source.sample(5) df5_clean.source.value_counts() ###Output _____no_output_____ ###Markdown * **rating_denominator**: Some of the denominators are not 10. One reason is that some tweet texts contain more than one number/number pattern, and the extraction took the first number/number it found as the rating numerator and denominator, even when that pattern is not a rating but something like a date or time. Another reason is that some of the posted images contain more than one dog, so the account rates, say, 10 dogs against a denominator of 100. * **rating_numerator**: Some of the numerators are too big. Besides the reasons listed for the denominators, there is another reason: people sometimes love a dog so much that they give it an exceptionally high rating. So if we solve the problems with the denominators, we don't need to worry about the numerators. **Define** Use a regular expression to extract every number/number pattern in the tweet text. Assign the first extracted value to a rating column; then, for tweets that contain two number/number patterns, update the rating column with the second one. After that, calculate the true rating by extracting the denominator and numerator. 
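As a toy illustration of the `extractall` approach just described, independent of the notebook's data (the example texts below are invented): `extractall` returns one row per regex match, with a `match` level in the resulting index, and `xs(1, level='match')` selects only the second match per tweet.

```python
import pandas as pd

# Invented tweet texts; the second contains a date-like 9/11 before the rating.
texts = pd.Series([
    "This is Bella. 13/10 would pet",
    "The last surviving 9/11 search dog. 14/10 RIP",
])

# One row per match; the MultiIndex has levels (original index, match number).
matches = texts.str.extractall(r'(\d+\.?\d*/\d+)')
print(matches)

# Only tweets with a second number/number pattern survive this selection.
second = matches.xs(1, level='match')
print(second)  # the 14/10 that should override the spurious 9/11
```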
**Code** ###Code rating = df5_clean.text.str.extractall(r'(\d+\.?\d*\/{1}\d+)') rating.head() rating.xs(1, level='match').head() match1 = rating.xs(1, level='match') match1_index = match1.index match1_index = np.array(match1_index) match1_index match1.columns = match1.columns.astype(str) match1.rename(columns={"0":"rating"}, inplace=True) df5_clean['rating'] = rating.xs(0, level='match') df5_clean.update(match1) ###Output _____no_output_____ ###Markdown **Test** ###Code df5_clean.rating ###Output _____no_output_____ ###Markdown Here is another method that does the same thing using regular expression matching: **Code** ###Code regex1 = r'(\d+\.?\d*\/{1}\d+)' regex2 = r'(\.{1}\d+)' rating_new = df5_clean.text.tolist() df5_clean['rating'] = [re.sub(regex2, '', re.findall(regex1, x)[-1]) for x in rating_new] ###Output _____no_output_____ ###Markdown **Test** ###Code df5_clean.rating ###Output _____no_output_____ ###Markdown This adjustment is not finished yet: since all the ratings are now based on a denominator of 10, we can keep just the numerator to represent the rating. Besides, some ratings are decimal values, so we should use the float data type for them. 
**Code** ###Code rating_df = df5_clean.rating.str.extract(r'(\d+\.?\d*\/)') rating_scores = rating_df[0].str.strip('/') df5_clean['rating'] = rating_scores.astype(float) ###Output _____no_output_____ ###Markdown **Test** ###Code df5_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2073 entries, 0 to 2072 Data columns (total 32 columns): tweet_id 2073 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp_x 2073 non-null object source 2073 non-null object text 2073 non-null object retweeted_status_id 79 non-null float64 retweeted_status_user_id 79 non-null float64 retweeted_status_timestamp 79 non-null object expanded_urls 2073 non-null object rating_numerator 2073 non-null int64 rating_denominator 2073 non-null int64 name 2073 non-null object doggo 2073 non-null object floofer 2073 non-null object pupper 2073 non-null object puppo 2073 non-null object timestamp_y 2073 non-null object favorites 2073 non-null int64 retweets 2073 non-null int64 jpg_url 2073 non-null object img_num 2073 non-null int64 p1 2073 non-null object p1_conf 2073 non-null float64 p1_dog 2073 non-null bool p2 2073 non-null object p2_conf 2073 non-null float64 p2_dog 2073 non-null bool p3 2073 non-null object p3_conf 2073 non-null float64 p3_dog 2073 non-null bool rating 2073 non-null float64 dtypes: bool(3), float64(8), int64(6), object(15) memory usage: 491.9+ KB ###Markdown * **expanded_urls**: multiple missing values **Define** Remove the missing values in the expanded_urls column using .dropna() **Code** ###Code df5_clean.dropna(subset=['expanded_urls'], inplace=True) ###Output _____no_output_____ ###Markdown **Test** ###Code df5_clean.expanded_urls.isnull().sum() ###Output _____no_output_____ ###Markdown * **p1**, **p2**, **p3**: dog breed names are not all lowercase **Define** Convert them to lowercase **Code** ###Code df5_clean['p1'] = df5_clean['p1'].str.lower() df5_clean['p2'] = df5_clean['p2'].str.lower() 
df5_clean['p3'] = df5_clean['p3'].str.lower() ###Output _____no_output_____ ###Markdown **Test** ###Code df5_clean.sample(5) ###Output _____no_output_____ ###Markdown * **name**: this column has some missing values, and some of the names are not real dog names but articles or adjectives. **Define** Replace the names that are not real dog names with None **Code** ###Code not_dog_names = df5_clean.loc[(df5_clean.name.str.islower())].name.value_counts().index.tolist() not_dog_names.append('None') not_dog_names for name in not_dog_names: df5_clean.loc[df5_clean.name == name, 'name'] = None ###Output _____no_output_____ ###Markdown **Test** ###Code df5_clean.name.value_counts() ###Output _____no_output_____ ###Markdown * **doggo, floofer, pupper, puppo**: tidiness problem: the columns themselves are values of a variable **Define** Combine those 4 columns into one column **Code** First, we need to convert the 'None' strings and np.NaN values to the empty string '' in all four columns ###Code df5_clean.doggo.replace('None', '', inplace=True) df5_clean.doggo.replace(np.NaN, '', inplace=True) df5_clean.floofer.replace('None', '', inplace=True) df5_clean.floofer.replace(np.NaN, '', inplace=True) df5_clean.pupper.replace('None', '', inplace=True) df5_clean.pupper.replace(np.NaN, '', inplace=True) df5_clean.puppo.replace('None', '', inplace=True) df5_clean.puppo.replace(np.NaN, '', inplace=True) ###Output _____no_output_____ ###Markdown Then we combine the columns ###Code df5_clean['dog_stages'] = df5_clean.text.str.extract('(doggo|floofer|pupper|puppo)', expand = True) ###Output _____no_output_____ ###Markdown Notice: some dogs have multiple stages ###Code df5_clean['dog_stages'] = df5_clean.doggo + df5_clean.floofer + df5_clean.pupper + df5_clean.puppo df5_clean.loc[df5_clean.dog_stages == 'doggopupper', 'dog_stages'] = 'doggo, pupper' df5_clean.loc[df5_clean.dog_stages == 'doggopuppo', 'dog_stages'] = 'doggo, puppo' df5_clean.loc[df5_clean.dog_stages == 'doggofloofer', 'dog_stages'] = 'doggo, floofer' 
###Output _____no_output_____ ###Markdown Now we can delete the four useless columns ###Code df5_clean.drop(['doggo','floofer','pupper','puppo'], axis=1, inplace = True) ###Output _____no_output_____ ###Markdown **Test** ###Code df5_clean.sample(5) df5_clean.dog_stages.value_counts() ###Output _____no_output_____ ###Markdown * **retweets**: there are some retweets that are essentially duplicates of the actual tweets **Define** Remove the retweets from the dataset, since they are duplicates of actual tweets **Code** ###Code df5_clean = df5_clean.loc[~df5_clean['text'].str.startswith('RT')] ###Output _____no_output_____ ###Markdown **Test** ###Code df5_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1993 entries, 0 to 2072 Data columns (total 29 columns): tweet_id 1993 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp_x 1993 non-null object source 1993 non-null object text 1993 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 1993 non-null object rating_numerator 1993 non-null int64 rating_denominator 1993 non-null int64 name 1350 non-null object timestamp_y 1993 non-null object favorites 1993 non-null int64 retweets 1993 non-null int64 jpg_url 1993 non-null object img_num 1993 non-null int64 p1 1993 non-null object p1_conf 1993 non-null float64 p1_dog 1993 non-null bool p2 1993 non-null object p2_conf 1993 non-null float64 p2_dog 1993 non-null bool p3 1993 non-null object p3_conf 1993 non-null float64 p3_dog 1993 non-null bool rating 1993 non-null float64 dog_stages 1993 non-null object dtypes: bool(3), float64(8), int64(6), object(12) memory usage: 426.2+ KB ###Markdown * **unnecessary columns to be deleted**: delete unnecessary columns to make the final dataset neater and tidier. 
**Define** Delete unnecessary columns by using df.drop **Code** ###Code drop_cols = ['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id', 'retweeted_status_user_id','retweeted_status_timestamp', 'rating_numerator', 'rating_denominator', 'p1_conf','p1_dog', 'p2_conf','p2_dog', 'p3_conf','p3_dog'] df5_clean.drop(drop_cols, axis=1, inplace=True) ###Output /Users/zhenghaoxiao/anaconda3/envs/my_env/lib/python3.7/site-packages/pandas/core/frame.py:3940: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy errors=errors) ###Markdown **Test** ###Code df5_clean.head() df5_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1993 entries, 0 to 2072 Data columns (total 16 columns): tweet_id 1993 non-null int64 timestamp_x 1993 non-null object source 1993 non-null object text 1993 non-null object expanded_urls 1993 non-null object name 1350 non-null object timestamp_y 1993 non-null object favorites 1993 non-null int64 retweets 1993 non-null int64 jpg_url 1993 non-null object img_num 1993 non-null int64 p1 1993 non-null object p2 1993 non-null object p3 1993 non-null object rating 1993 non-null float64 dog_stages 1993 non-null object dtypes: float64(1), int64(4), object(11) memory usage: 264.7+ KB ###Markdown There is a new problem in the dataset: we have two timestamp columns, timestamp_x and timestamp_y. They hold the same content in different formats, so we should drop one of them. **Define** Drop the unnecessary timestamp_y column, and rename the timestamp_x column to timestamp **Code** ###Code df5_clean.drop(['timestamp_y'], axis=1, inplace=True) df5_clean.rename(columns={'timestamp_x': 'timestamp'}, inplace=True) ###Output /Users/zhenghaoxiao/anaconda3/envs/my_env/lib/python3.7/site-packages/pandas/core/frame.py:4025: SettingWithCopyWarning: A value is trying to be set on a copy of a 
slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy return super(DataFrame, self).rename(**kwargs) ###Markdown **Test** ###Code df5_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1993 entries, 0 to 2072 Data columns (total 15 columns): tweet_id 1993 non-null int64 timestamp 1993 non-null object source 1993 non-null object text 1993 non-null object expanded_urls 1993 non-null object name 1350 non-null object favorites 1993 non-null int64 retweets 1993 non-null int64 jpg_url 1993 non-null object img_num 1993 non-null int64 p1 1993 non-null object p2 1993 non-null object p3 1993 non-null object rating 1993 non-null float64 dog_stages 1993 non-null object dtypes: float64(1), int64(4), object(10) memory usage: 249.1+ KB ###Markdown * **tweet_id**, **timestamp**: wrong data types **Define** Correct all the wrong data types in the dataset, including changing source, img_num, and dog_stages to the category data type for future analysis **Code** ###Code df5_clean['tweet_id'] = df5_clean['tweet_id'].astype(str) df5_clean['timestamp'] = pd.to_datetime(df5_clean['timestamp']) df5_clean['source'] = df5_clean['source'].astype('category') df5_clean['img_num'] = df5_clean['img_num'].astype('category') df5_clean['dog_stages'] = df5_clean['dog_stages'].astype('category') ###Output /Users/zhenghaoxiao/anaconda3/envs/my_env/lib/python3.7/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy """Entry point for launching an IPython kernel. /Users/zhenghaoxiao/anaconda3/envs/my_env/lib/python3.7/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. 
Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy /Users/zhenghaoxiao/anaconda3/envs/my_env/lib/python3.7/site-packages/ipykernel_launcher.py:3: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy This is separate from the ipykernel package so we can avoid doing imports until /Users/zhenghaoxiao/anaconda3/envs/my_env/lib/python3.7/site-packages/ipykernel_launcher.py:4: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy after removing the cwd from sys.path. /Users/zhenghaoxiao/anaconda3/envs/my_env/lib/python3.7/site-packages/ipykernel_launcher.py:5: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. 
Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy """ ###Markdown **Test** ###Code df5_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1993 entries, 0 to 2072 Data columns (total 15 columns): tweet_id 1993 non-null object timestamp 1993 non-null datetime64[ns, UTC] source 1993 non-null category text 1993 non-null object expanded_urls 1993 non-null object name 1350 non-null object favorites 1993 non-null int64 retweets 1993 non-null int64 jpg_url 1993 non-null object img_num 1993 non-null category p1 1993 non-null object p2 1993 non-null object p3 1993 non-null object rating 1993 non-null float64 dog_stages 1993 non-null category dtypes: category(3), datetime64[ns, UTC](1), float64(1), int64(2), object(8) memory usage: 208.9+ KB ###Markdown 5. Storing, Analyzing, and Visualizing Data Storing Data Now that our dataset is cleaned, we can save it for future use. ###Code df5_clean.to_csv('twitter.csv', index=False) ###Output _____no_output_____ ###Markdown Analyzing & Visualizing Data ###Code df = pd.read_csv('twitter.csv') ###Output _____no_output_____ ###Markdown * Which source are people using the most? ###Code df.source.value_counts() ###Output _____no_output_____ ###Markdown The answer is quite straightforward: most people use Twitter for iPhone. * What does the bar plot of dog stages look like? ###Code from matplotlib import rcParams rcParams.update({'figure.autolayout': True}) plt.figure(figsize=(8,6)) df.dog_stages.value_counts().sort_values(ascending=False).plot.bar() plt.title("Popular dog stages") plt.xticks(rotation=45) plt.xlabel("Dog stages") plt.ylabel("Number of dogs"); #save pic plt.savefig('dog_stages.png') ###Output _____no_output_____ ###Markdown * What is the most popular dog name? 
###Code name_count = df.name.value_counts() name_list = name_count.index.tolist() from subprocess import check_output from wordcloud import WordCloud, ImageColorGenerator from PIL import Image import locale locale.setlocale(locale.LC_ALL, '') rcParams['figure.figsize']=(8.0,6.0) rcParams['savefig.dpi']=100 rcParams['font.size']=12 rcParams['figure.subplot.bottom']=.1 round_mask = np.array(Image.open("mask.png")) wordcloud = WordCloud(background_color='white', mask=round_mask, max_words=50, max_font_size=70, random_state=23, ).generate(' '.join(name_list)) #save the wordcloud wordcloud.to_file(os.path.join("dog_names.png")) plt.imshow(wordcloud, interpolation='bilinear') plt.axis("off") plt.figure() plt.axis("off") plt.show(); ###Output _____no_output_____ ###Markdown * Which breed of dog do people love the most? ###Code dog_fav = df.groupby('p1')['favorites'].sum().sort_values(ascending=False).head(6) dog_fav dog_ret = df.groupby('p1')['retweets'].sum().sort_values(ascending=False).head(6) dog_ret fig, ax1 = plt.subplots() ax2 = ax1.twinx() dog_fav.plot(figsize = (10,6), kind='bar', color='orange', ax=ax1, width=0.4, position=1, title='Popular Breeds: Likes vs. Retweets') dog_ret.plot(figsize = (10,6), kind='bar', color='yellow', ax=ax2, width=0.4, position=0) ax1.set_ylabel('Likes') ax2.set_ylabel('Retweets') ax1.set_xticklabels(dog_fav.index, rotation=30) h1, l1 = ax1.get_legend_handles_labels() h2, l2 = ax2.get_legend_handles_labels() plt.legend(h1+h2, l1+l2, loc=1) #save pic plt.savefig('popular_dogs.png', dpi=100) plt.show(); ###Output _____no_output_____ ###Markdown * Which breed of those popular dogs has the highest average rating? 
###Code rate1 = df.query('p1 == "golden_retriever"').rating.mean() rate2 = df.query('p1 == "labrador_retriever"').rating.mean() rate3 = df.query('p1 == "pembroke"').rating.mean() rate4 = df.query('p1 == "chihuahua"').rating.mean() rate5 = df.query('p1 == "samoyed"').rating.mean() rate6 = df.query('p1 == "french_bulldog"').rating.mean() breeds = ['golden_retriever', 'labrador_retriever', 'pembroke', 'chihuahua', 'samoyed', 'french_bulldog'] rates = [rate1, rate2, rate3, rate4, rate5, rate6] y_position = np.arange(len(breeds)) plt.bar(y_position, rates, align='center', alpha=0.5) plt.xticks(y_position, breeds) plt.xticks(rotation=30) plt.ylabel("Average rating") plt.title("Average rating of popular dogs") plt.savefig('dog_ratings.png') plt.show() ###Output _____no_output_____ ###Markdown This data is gathered from the Twitter API for a page called "WeRateDogs(R)" on Twitter with the handle @dog_rates. I will be working with three datasets:* `tweet_information`: API data that was sent to Udacity upon request from WeRateDogs, because I couldn't gain access to the Twitter API.* `image_predictions`: made by reading the `image-predictions.tsv` file downloaded from the course resources. The instructor used a machine learning model to produce these predictions.* `enhanced_archive`: made by reading the `twitter-archive-enhanced.csv` that was provided by the course instructors. Dogs are categorized into four groups: doggos, floofers, puppers, and puppos, according to age and characteristics. Gathering, assessing, cleaning, then analyzing and visualizing the data will be documented below. The next cell does the first step of data gathering. It collects the JSON string from the `'tweet-json.txt'` file, then dumps it into another file, `"tweet_clean.json"`, for future use. After that, it loads the JSON file into the variable `data`. 
###Code import json import pandas as pd import numpy as np import re import os # First we read the tweet-json.txt file and save it into 'tweet_clean.json' data = [json.loads(line) for line in open('tweet-json.txt', 'r')] with open('tweet_clean.json', 'w') as f: json.dump(data, f) #After that, we read the json data back from 'tweet_clean.json' with open('tweet_clean.json', 'r') as f: data = json.load(f) #To display all text for every variable in a dataframe pd.set_option('display.max_colwidth', None) ###Output _____no_output_____ ###Markdown This code is to discover where each piece of data lies inside the Twitter API response. ###Code # This code is to discover where each piece of data lies. print("This is the tweet id; it is in the id_str key: \n", data[0]['id_str'], "\n") print("These are the data keys: \n", data[0].keys(), "\n") print("These are the keys in the entities dictionary: \n", data[0]['entities'].keys(), "\n") print("These are the keys in the urls dictionary in the entities dictionary: \n", data[200]['entities']['urls'], "\n") print("The source key has this data: \n", data[0]['source'], "\n") print("The favorited key has this data: \n", data[0]['favorited'], "\n") print("This is how many times the tweet was liked: \n", data[0]['favorite_count'], "\n") print("This is how many times the tweet was retweeted: \n", data[0]['retweet_count'], "\n") print("This is the IMAGE id; it is in the entities media: \n", data[0]['entities']['media'][0]['id'], "\n") print("This is the image url for the tweet: \n", data[0]['entities']['media'][0]['media_url'], "\n") #print(type(data[0]['entities'])) print("The url to the tweet is produced like this: \n", ("https://twitter.com/dog_rates/status/" + data[0]['id_str']).strip(" "), "\n") # If we needed to extract media_urls, we can use the following code: # Create line_list to append the media_urls to # for line in data: #line_list.append(line['entities']['media'][0]['media_url']) # Note that there is a separate column for entities in the api 
data gathered by the instructor. # In case we want to access that data, we can use a for loop: # for element in enhanced_archive: media_url is inside element['media'][0]['media_url'] ###Output _____no_output_____ ###Markdown * `id_str`: This is the `tweet_id` in string form* `created_at`: This is the timestamp variable in the `twitter_archive_enhanced.csv` file.* `source`: This is the source variable, don't know what it is for* `full_text`: This is the full text of the tweet* `expanded_urls`: the tweet url: this is found in the `entities` key in the json file.* `media_url`: this is the image url; it is found in `line_of_data['entities']['media'][0]['media_url']`* `place`: place of the tweet or dog?* `retweet_count`: how many times the tweet was retweeted* `favorite_count`: how many times the tweet was liked* `favorited`: whether the tweet was favorited or not Next, we read in the data from our 3 sources. ###Code #I downloaded the data gathered from the twitter api via the link in the supporting material section. #Then extracted a json object from it and named it 'tweet_clean.json' tweet_information = pd.read_json('tweet_clean.json') #Read the contents of the tsv file into a dataframe image_predictions = pd.read_csv('image-predictions.tsv', sep='\t') enhanced_archive = pd.read_csv('twitter-archive-enhanced.csv') ###Output _____no_output_____ ###Markdown Assessment (This was written after assessment using the methods below)* Unwanted columns (1) 4 `dog_type` columns could be presented in one. (2) 9 columns in the predictions table; also can be in one column.* `id` in `tweet_information`. Wrong name. Should be `tweet_id`.* `tweet_id` in all tables should be a string.* Media_url not found for some pictures. * Source not extracted * Some rows don't have jpg_url (called `media_url` in the entities column data)* Source column is hard to read* `dog_stage` column (to be produced) has empty strings that need to be converted into null values. 
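For the last bullet, empty strings are not counted as missing by pandas, so `isna()` will overlook them until they are replaced with `np.nan`. A minimal sketch with made-up values (the actual cleaning step later may differ):

```python
import numpy as np
import pandas as pd

# Hypothetical dog_stage column where missing stages ended up as empty strings.
stages = pd.Series(["pupper", "", "doggo", ""])
print(stages.isna().sum())   # 0 -- the empty strings are invisible to isna()

# Replacing '' with np.nan makes the missing values visible to pandas.
cleaned = stages.replace("", np.nan)
print(cleaned.isna().sum())  # 2
```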
Notes:* Gather media_urls from the entities column by creating an empty list first, then appending inside a loop.* Extract the source (e.g. 'Twitter for iPhone') using the regex capture `"(\w+)$"`, or with an if statement inside a loop that uses the `df.itertuples()` function. Assess using pandas DataFrame functions: `head`, `info`, `duplicated`, etc. ###Code tweet_information.head(3) image_predictions.head(10) enhanced_archive.head() enhanced_archive.info() tweet_information.info() image_predictions.info() list(tweet_information) enhanced_archive[enhanced_archive.tweet_id.duplicated()] image_predictions[image_predictions.tweet_id.duplicated()] tweet_information[tweet_information.id.duplicated()] enhanced_archive.rating_denominator.value_counts(ascending=False) # Then we want to find all denominators that are not equal to 10 and put them in a separate dataframe unusual_denominators = enhanced_archive[enhanced_archive.rating_denominator != 10] # To see the text inside each of the tweets that have ratings with unusual denominators, we use a for loop line_number = 0 for tweet in unusual_denominators['text']: line_number += 1 print(line_number, ') ', tweet, '\n', sep='') ###Output 1) @jonnysun @Lin_Manuel ok jomny I know you're excited but 960/00 isn't a valid rating, 13/10 is tho 2) @docmisterio account started on 11/15/15 3) The floofs have been released I repeat the floofs have been released. 84/70 https://t.co/NIYC820tmd 4) Meet Sam. She smiles 24/7 &amp; secretly aspires to be a reindeer. Keep Sam smiling by clicking and sharing this link: https://t.co/98tB8y7y7t https://t.co/LouL5vdvxx 5) RT @dog_rates: After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https:/… 6) Why does this never happen at my front door... 165/150 https://t.co/HmwrdfEfUE 7) After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. 
RIP https://t.co/XAVDNDaVgQ 8) Say hello to this unbelievably well behaved squad of doggos. 204/170 would try to pet all at once https://t.co/yGQI3He3xv 9) Happy 4/20 from the squad! 13/10 for all https://t.co/eV1diwds8a 10) This is Bluebert. He just saw that both #FinalFur match ups are split 50/50. Amazed af. 11/10 https://t.co/Kky1DPG4iq 11) Happy Saturday here's 9 puppers on a bench. 99/90 good work everybody https://t.co/mpvaVxKmc1 12) Here's a brigade of puppers. All look very prepared for whatever happens next. 80/80 https://t.co/0eb7R1Om12 13) From left to right: Cletus, Jerome, Alejandro, Burp, &amp; Titson None know where camera is. 45/50 would hug all at once https://t.co/sedre1ivTK 14) Here is a whole flock of puppers. 60/50 I'll take the lot https://t.co/9dpcw6MdWa 15) Happy Wednesday here's a bucket of pups. 44/40 would pet all at once https://t.co/HppvrYuamZ 16) Yes I do realize a rating of 4/20 would've been fitting. However, it would be unjust to give these cooperative pups that low of a rating 17) Two sneaky puppers were not initially seen, moving the rating to 143/130. Please forgive us. Thank you https://t.co/kRK51Y5ac3 18) Someone help the girl is being mugged. Several are distracting her while two steal her shoes. Clever puppers 121/110 https://t.co/1zfnTJLt55 19) This is Darrel. He just robbed a 7/11 and is in a high speed police chase. Was just spotted by the helicopter 10/10 https://t.co/7EsP8LmSp5 20) I'm aware that I could've said 20/16, but here at WeRateDogs we are very professional. An inconsistent rating scale is simply irresponsible 21) IT'S PUPPERGEDDON. Total of 144/120 ...I think https://t.co/ZanVtAtvIq 22) Here we have an entire platoon of puppers. Total score: 88/80 would pet all at once https://t.co/y93p6FLvVw 23) This is an Albanian 3 1/2 legged Episcopalian. Loves well-polished hardwood flooring. Penis on the collar. 9/10 https://t.co/d9NcXFKwLv ###Markdown * I noticed that tweet number 2 has no ratings. 
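The tweets above show why naively taking the first number/number match fails: patterns like 24/7, 9/11, or 960/00 are not ratings. One possible heuristic (an illustration only, not necessarily the approach used later in this notebook) is to prefer the first fraction whose denominator is a positive multiple of 10:

```python
import re

def extract_rating(text):
    """Return (numerator, denominator) for the first fraction whose
    denominator is a positive multiple of 10, else None.
    A hypothetical heuristic sketch, not WeRateDogs' official logic."""
    for num, den in re.findall(r'(\d+\.?\d*)/(\d+)', text):
        if int(den) > 0 and int(den) % 10 == 0:
            return float(num), int(den)
    return None

print(extract_rating("She smiles 24/7 and aspires to be a reindeer. 13/10"))  # (13.0, 10)
print(extract_rating("@docmisterio account started on 11/15/15"))             # None
```

It still cannot disambiguate a genuine 50/50 split from a rating, so manual review of the unusual rows above remains necessary.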
* **We clean the URLs issue by doing the following:** ###Code # Next, we check if the urls for some tweets are missing; if the url is not present, we add the # indices into a list and print some details no_url_list = list() for tweetid in enhanced_archive.tweet_id: url = enhanced_archive.loc[enhanced_archive['tweet_id'] == tweetid].expanded_urls.to_string() if 'http' not in url: no_url_list.append(url) print(no_url_list) print("\nThere are {} tweets with no url.".format(len(no_url_list))) print("Notice that each item of the list has the index of the row it is in.") # We can access information about any of the tweets that has no url by using the index # written before NaNs enhanced_archive.loc[[30, 55, 64, 113]] ###Output _____no_output_____ ###Markdown * We can fix the missing urls by creating a new url using the `tweet_ids` by making a string like this: `new_url = 'https://twitter.com/dog_rates/status/' + tweet_id`. ###Code # After that, let's find out how many values are present in the source column print("The counts of the values in the source column are:\n", enhanced_archive.source.value_counts()) print("\n\n* We notice there are only four sources, which makes it easier for us to write a for loop to change these values.") # To check for unusual or incorrect dog names, we do the following: print(enhanced_archive.name.unique()) print("\nWe notice there are incorrect names like 'all', 'this', 'unacceptable', 'a', and others. All of them start with a lowercase letter.") print("\nThe incorrect names are:\n", list(enhanced_archive[enhanced_archive.name.str.islower()].name.unique())) print("\nThe number of rows with incorrect names is {}.".format(len(enhanced_archive[enhanced_archive.name.str.islower()].name))) # Let's look up the text of the tweet whose name is assigned to 'mad' enhanced_archive_mad = enhanced_archive[enhanced_archive['name'] == 'mad'] print(enhanced_archive_mad.text) print("\nLooks like the text is duplicated. 
Let's check other variables.") print("\nThese are the tweet ids for these tweets:\n{}".format(enhanced_archive_mad.tweet_id)) print("\nMaybe one of these was a retweet.") ###Output 682 RT @dog_rates: Say hello to mad pupper. You know what you did. 13/10 would pet until no longer furustrated https://t.co/u1ulQ5heLX 1095 Say hello to mad pupper. You know what you did. 13/10 would pet until no longer furustrated https://t.co/u1ulQ5heLX Name: text, dtype: object Looks like the text is duplicated. Let's check other variables. These are the tweet ids for these tweets: 682 788552643979468800 1095 736392552031657984 Name: tweet_id, dtype: int64 Maybe one of these was a retweet. ###Markdown Let's find the NAs count for each dataframe. ###Code # Now, let's calculate the number of NA values in each column nas_count = len(enhanced_archive) - enhanced_archive.count() print("The numbers of NAs for each column in enhanced_archive is:\n {}".format(nas_count)) nas_count = len(image_predictions) - image_predictions.count() print("The numbers of NAs for each column in image_predictions is:\n {}".format(nas_count)) nas_count = len(tweet_information) - tweet_information.count() print("The numbers of NAs for each column in tweet_information is:\n {}".format(nas_count)) ###Output The numbers of NAs for each column in tweet_information is: created_at 0 id 0 id_str 0 full_text 0 truncated 0 display_text_range 0 entities 0 extended_entities 281 source 0 in_reply_to_status_id 2276 in_reply_to_status_id_str 2276 in_reply_to_user_id 2276 in_reply_to_user_id_str 2276 in_reply_to_screen_name 2276 user 0 geo 2354 coordinates 2354 place 2353 contributors 2354 is_quote_status 0 retweet_count 0 favorite_count 0 favorited 0 retweeted 0 possibly_sensitive 143 possibly_sensitive_appealable 143 lang 0 retweeted_status 2175 quoted_status_id 2325 quoted_status_id_str 2325 quoted_status 2326 dtype: int64 ###Markdown Issues to fix * Dataframes need to be joined * Incorrect data type in the three tables 
(`tweet_id`). This is a quality issue. Tidiness Issues* `enhanced_archive` table has unneeded columns.* `enhanced_archive` table has several columns for dog stages where only one should suffice.* `tweet_information` table has unneeded columns.* `image_predictions` table has 9 columns predicting the dog breed; one is enough. Quality Issues `enhanced_archive` table* Some rows have no picture URL (media_url/jpg_url)* One tweet is recorded as having a dog rating when it actually doesn't* Some rating numerators contain decimals in the actual tweet* Source column is very hard to read* Some dogs don't have data for dog_stage, i.e. puppo, floofer, etc.* Erroneous data type for the timestamp column (needs to be a datetime object). `image_predictions` table* Dog breeds sometimes start with a capital letter and other times with a lowercase letter. `tweet_information` table* Invalid column name `id`: should be `tweet_id` Clean Before cleaning, we create copies of the original dataframes. ###Code enhanced_archive_clean = enhanced_archive.copy() image_predictions_clean = image_predictions.copy() tweet_info_clean = tweet_information.copy() ###Output _____no_output_____ ###Markdown Tidiness issues first* **Unneeded columns in Enhanced Twitter Archive****Define**Remove unnecessary columns: `['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp']` **Code** ###Code enhanced_archive_clean.drop(columns=['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], inplace=True) ###Output _____no_output_____ ###Markdown Gathering Data ###Code import requests import os import pandas as pd df1=pd.read_csv('twitter-archive-enhanced.csv') df1.describe() df1.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 
in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown Gather image prediction file ###Code folder_name='image' if not os.path.exists(folder_name): os.makedirs(folder_name) url='https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response=requests.get(url) response with open(os.path.join(folder_name,'image_predictions.tsv'),mode='wb') as file: file.write(response.content) os.listdir(folder_name) import pandas as pd image=pd.read_csv('image/image_predictions.tsv',sep='\t') image.head() image.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown Gather data thru API ###Code import tweepy ###Output _____no_output_____ ###Markdown consumer_key = ''consumer_secret = ''access_token = ''access_secret = '' auth = tweepy.OAuthHandler(consumer_key, consumer_secret)auth.set_access_token(access_token, access_secret)api = tweepy.API(auth,wait_on_rate_limit=True, wait_on_rate_limit_notify=True) ###Code id_of_tweet=df1.tweet_id 
id_of_tweet=id_of_tweet.tolist() id_of_tweet[0] ###Output _____no_output_____ ###Markdown good=[]id=892420643555336193tweet = api.get_status(id,tweet_mode='extended')tweet._json with open('data.txt','w') as f: json.dump(tweet._json,f) with open('data.txt','a') as f: f.write('\n') Writing JSON file ###Code import json ###Output _____no_output_____ ###Markdown good_id=[]error_id=[]tweet = api.get_status(id_of_tweet[0],tweet_mode='extended')with open('tweet_json.txt','w') as f: json.dump(tweet._json,f)good_id.append(id_of_tweet[0])for tweet_id in id_of_tweet[1:]: "try-except" block try: tweet = api.get_status(tweet_id,tweet_mode='extended') with open('tweet_json.txt','a') as f: json.dump(tweet._json,f) f.write('\n') good_id.append(tweet_id) except Exception as e: catch *all* exceptions with open('tweet_json.txt','a') as f: f.write('\n') error_id.append(tweet_id) Reading JSON file and convert it to data frame Columns: 'id' 'created_at' 'retweet_count' 'favorite_count' ###Code with open('tweet_json.txt') as f: lines=f.readlines() df = pd.DataFrame(columns=['id','created_at','full_text','retweet_count','favorite_count']) error_id=[] for line in lines: try: line_j=json.loads(line) df = df.append({'id': line_j['id'],'created_at':line_j['created_at'],'full_text':line_j['full_text'], 'retweet_count':line_j['retweet_count'],'favorite_count':line_j['favorite_count']},ignore_index=True) except Exception as e: error_id.append(line_j['id']) df.to_csv('extend_info.csv',index=False) ###Output _____no_output_____ ###Markdown Assessing Data (Quality) Assessing df1-basic dataVisual assess:- There are many retweets(starts from 'RT' in the text)- There are missing data for dogs' name.- There are several ratings extracted wrongly from the text.id: 835246439529840000, 666287406224695000 Programming assess: ###Code df1.info() df1.describe() df1.sample(10) df1[df1['tweet_id'].duplicated()] ###Output _____no_output_____ ###Markdown After checking the number of values, the statistics and the 
data types, here are some findings:- 'rating_denominator' can never be 0.- timestamp should be datetime type instead of string.- dogs' names cannot be 'None', 'a', 'an' or 'the'. Assessing df-data from API Based on the same process, here is what I found:- There are many retweets.- 'created_at' should be datetime type.- All three datasets contain a tweet id, but under different names: df['id'], df1['tweet_id'], image['tweet_id']. ###Code df=pd.read_csv('extend_info.csv') df.info() df.head() df[df['id'].duplicated()] image.info() image[image.p1.isnull()] image[image['tweet_id'].duplicated()] ###Output _____no_output_____ ###Markdown Assessing Data (Tidiness) - df1 has several columns for dog stage.- We don't need three separate tables. Assessing Summary: Assessing df1-basic data- There are many retweets ('RT' in the text)- Retweet-related columns are not necessary.- There are missing data for dogs' names. And dogs' names cannot be 'None', 'a', 'an' or 'the'.- There are several ratings extracted wrongly from the text. id: 835246439529840000, 666287406224695000- 'rating_denominator' can never be 0.- Timestamp should be datetime type instead of string.- `None` in the doggo, floofer, pupper, puppo columns is treated as a non-null value. This should be converted to null `np.nan`.- All 'tweet_id' columns should be string type. Assessing df-data from API- There are many retweets.- 'created_at' should be datetime.- All three datasets contain a tweet id, but under different names: df['id'], df1['tweet_id'], image['tweet_id']. Tidiness- df1 has several columns for dog stage.- We don't need three separate tables. Cleaning Data ###Code df1_clean=df1.copy() df_clean=df.copy() image_clean=image.copy() ###Output _____no_output_____ ###Markdown 1. Retweets in `df1` and `df` Define Select the tweets that do not contain 'RT' in the tweet text. 
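A side note on this filter: `str.contains('RT')` matches 'RT' anywhere in the text, so an ordinary tweet that merely contains those two capital letters would be dropped as well. Since retweets start with 'RT @', anchoring the check at the start of the string is stricter. A minimal self-contained sketch on made-up tweets (not the real archive):

```python
import pandas as pd

# Toy data: a normal tweet that happens to contain "RT", a real retweet, and a plain tweet.
toy = pd.DataFrame({
    "text": [
        "This pupper is an expert at START lines. 12/10",
        "RT @dog_rates: Say hello to mad pupper. 13/10",
        "Meet Cooper. He is a very good boy. 13/10",
    ]
})

# Substring matching over-flags: "START" contains "RT" too.
flagged = toy["text"].str.contains("RT")

# Anchored check: keep only rows whose text does NOT begin with "RT @".
kept = toy[~toy["text"].str.startswith("RT @")]
print(kept["text"].tolist())
```

With the substring check, the first (perfectly valid) tweet would be lost; the anchored version drops only the actual retweet.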
Code ###Code df1_clean=df1_clean[~df1_clean.text.str.contains('RT')] df_clean=df_clean[~df_clean.full_text.str.contains('RT')] ###Output _____no_output_____ ###Markdown Test ###Code df_clean[df_clean.full_text.str.contains('RT')] df1_clean[df1_clean.text.str.contains('RT')] ###Output _____no_output_____ ###Markdown 2. Improper data type in `df1` and `df` Define Convert date from string to datetime Code ###Code import datetime df_clean['created_at']=pd.to_datetime(df['created_at']) df1_clean['timestamp']=pd.to_datetime(df1['timestamp']) ###Output _____no_output_____ ###Markdown Test ###Code print(df_clean.info()) print(df1_clean.info()) ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2157 entries, 0 to 2330 Data columns (total 5 columns): id 2157 non-null int64 created_at 2157 non-null datetime64[ns] full_text 2157 non-null object retweet_count 2157 non-null int64 favorite_count 2157 non-null int64 dtypes: datetime64[ns](1), int64(3), object(1) memory usage: 101.1+ KB None <class 'pandas.core.frame.DataFrame'> Int64Index: 2164 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2164 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2164 non-null datetime64[ns] source 2164 non-null object text 2164 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2106 non-null object rating_numerator 2164 non-null int64 rating_denominator 2164 non-null int64 name 2164 non-null object doggo 2164 non-null object floofer 2164 non-null object pupper 2164 non-null object puppo 2164 non-null object dtypes: datetime64[ns](1), float64(4), int64(3), object(9) memory usage: 304.3+ KB None ###Markdown 3. Wrong extrated ratings in `df1` Define Filter those ids whose rating_denominator is 0 and rewrite the ratings manually based on the ratings mentioned in full_text. 
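Manual rewrites like the one below are fine for a handful of rows; for the decimal-rating typos spotted during assessment (e.g. 9.75 read as 75), the rating can also be re-extracted from the text with a regex that allows a decimal numerator. A self-contained sketch on made-up tweets (the column names mirror the archive, the data does not):

```python
import pandas as pd

# Toy tweets reproducing the decimal-numerator problem.
toy = pd.DataFrame({
    "text": [
        "This is Logan. 9.75/10 would pet",
        "Say hello to Sam. 13/10 good boy",
    ]
})

# Capture an optionally-decimal numerator and an integer denominator.
ratings = toy["text"].str.extract(r"(\d+(?:\.\d+)?)/(\d+)")
toy["rating_numerator"] = ratings[0].astype(float)
toy["rating_denominator"] = ratings[1].astype(int)
print(toy[["rating_numerator", "rating_denominator"]])
```

This keeps 9.75 intact instead of picking up only the digits after the decimal point.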
Code ###Code df1_clean[(df1_clean.tweet_id==666287406224695000) | (df1_clean.tweet_id==835246439529840000)] ###Output _____no_output_____ ###Markdown Seems these two id are retweet and have already been dropped. ###Code df1_clean[df1_clean['rating_denominator']==0] ###Output _____no_output_____ ###Markdown tweet_0 = api.get_status(835246439529840640,tweet_mode='extended')tweet_0._json['full_text'] ###Code df1_clean.loc[df1_clean.tweet_id==835246439529840640,['rating_denominator']]=10 df1_clean.loc[df1_clean.tweet_id==835246439529840640,['rating_numerator']]=13 ###Output _____no_output_____ ###Markdown Test ###Code df1_clean.loc[df1_clean.tweet_id==835246439529840640] ###Output _____no_output_____ ###Markdown 4. Retweet related columns in `df1_clean` are not necessary any more. Define Drop several columns and keep 'tweet_id','rating_numerator','rating_denominator','name','doggo','floofer','pupper','puppo' Code ###Code df1_clean_core=df1_clean[['tweet_id','rating_numerator','rating_denominator','name','doggo','floofer','pupper','puppo']] ###Output _____no_output_____ ###Markdown Test ###Code df1_clean_core.head() ###Output _____no_output_____ ###Markdown 5. `None` in dog stage columns Define `None` in doggo, floofer, pupper, puppo columns is treated as a non-null value. This should be converted to null `np.nan`. Code ###Code import numpy as np df1_clean_core.replace('None', np.nan, inplace=True) ###Output /opt/conda/lib/python3.6/site-packages/pandas/core/frame.py:3798: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy method=method) ###Markdown Test ###Code df1_clean_core.head() ###Output _____no_output_____ ###Markdown 6. Dog stage in `df1` is untidy Define Melt the several columns related to dog stage into one column called 'stage', and drop the duplicates. 
Code ###Code # Melt the dog stage df1_clean_core = pd.melt(df1_clean, id_vars=['tweet_id', 'rating_numerator', 'rating_denominator', 'name'], value_vars=['doggo','floofer','pupper','puppo'], value_name='stage') df1_clean_core.head() df1_clean_core.drop('variable',axis=1,inplace=True) df1_clean_core=df1_clean_core.drop_duplicates() ###Output _____no_output_____ ###Markdown Test ###Code df1_clean_core[df1_clean_core.duplicated()] df1_clean_core.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2518 entries, 0 to 7393 Data columns (total 5 columns): tweet_id 2518 non-null int64 rating_numerator 2518 non-null int64 rating_denominator 2518 non-null int64 name 2518 non-null object stage 2518 non-null object dtypes: int64(3), object(2) memory usage: 118.0+ KB ###Markdown 7. Keep consistency on tweet id Define Rename 'id' in `df_clean` to 'tweet_id' Code ###Code df_clean=df_clean.rename(columns={'id':'tweet_id'}) ###Output _____no_output_____ ###Markdown Test ###Code df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2157 entries, 0 to 2330 Data columns (total 5 columns): tweet_id 2157 non-null int64 created_at 2157 non-null datetime64[ns] full_text 2157 non-null object retweet_count 2157 non-null int64 favorite_count 2157 non-null int64 dtypes: datetime64[ns](1), int64(3), object(1) memory usage: 101.1+ KB ###Markdown 8. Dog name shouldn't be 'a''an' in `df1_clean_core` Define Drop the rows that dog names are lower case letter Code ###Code df1_clean_core=df1_clean_core[df1_clean_core['name'].str.match('^[A-Z][a-z]')== True] df1_clean_core.replace('None', np.nan, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code df1_clean_core.name ###Output _____no_output_____ ###Markdown 9. 
id datatype Define Convert 'tweet_id' from int to string type Code ###Code df1_clean_core['tweet_id']=df1_clean_core['tweet_id'].astype('str') df_clean['tweet_id']=df_clean['tweet_id'].astype('str') image_clean['tweet_id']=image_clean['tweet_id'].astype('str') ###Output _____no_output_____ ###Markdown Test ###Code print(df1_clean_core.info()) print(df_clean.info()) print(image_clean.info()) ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2396 entries, 0 to 7393 Data columns (total 5 columns): tweet_id 2396 non-null object rating_numerator 2396 non-null int64 rating_denominator 2396 non-null int64 name 1570 non-null object stage 339 non-null object dtypes: int64(2), object(3) memory usage: 112.3+ KB None <class 'pandas.core.frame.DataFrame'> Int64Index: 2157 entries, 0 to 2330 Data columns (total 5 columns): tweet_id 2157 non-null object created_at 2157 non-null datetime64[ns] full_text 2157 non-null object retweet_count 2157 non-null int64 favorite_count 2157 non-null int64 dtypes: datetime64[ns](1), int64(2), object(2) memory usage: 101.1+ KB None <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null object jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(1), object(5) memory usage: 152.1+ KB None ###Markdown 10. 
Reorganize the three dataset based on 'tweet_id' Define Merge the three dataset into one dataset called 'df_master2' Code ###Code df_master=df_clean.merge(df1_clean_core,on='tweet_id',how='inner') df_master.info() df_master2=df_master.merge(image_clean,on='tweet_id',how='inner') df_master2.to_csv('twitter_archive_master.csv',index=False) ###Output _____no_output_____ ###Markdown Test ###Code df_master2.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2180 entries, 0 to 2179 Data columns (total 20 columns): tweet_id 2180 non-null object created_at 2180 non-null datetime64[ns] full_text 2180 non-null object retweet_count 2180 non-null int64 favorite_count 2180 non-null int64 rating_numerator 2180 non-null int64 rating_denominator 2180 non-null int64 name 1516 non-null object stage 302 non-null object jpg_url 2180 non-null object img_num 2180 non-null int64 p1 2180 non-null object p1_conf 2180 non-null float64 p1_dog 2180 non-null bool p2 2180 non-null object p2_conf 2180 non-null float64 p2_dog 2180 non-null bool p3 2180 non-null object p3_conf 2180 non-null float64 p3_dog 2180 non-null bool dtypes: bool(3), datetime64[ns](1), float64(3), int64(5), object(8) memory usage: 312.9+ KB ###Markdown Analyzing and Visualizing Data ###Code df_m=pd.read_csv('twitter_archive_master.csv') df_m.head() df_m.describe() import datetime df_m['created_at']=pd.to_datetime(df_m['created_at']) df_m.info() import matplotlib.pyplot as plt import seaborn as sb import numpy as np %matplotlib inline df_m_stats=['retweet_count','favorite_count','rating_denominator','rating_numerator','p1_conf'] g=sb.PairGrid(data=df_m,vars=df_m_stats) g.map(plt.scatter,alpha=0.4); plt.scatter(data=df_m,x='rating_denominator',y='rating_numerator',c='favorite_count',cmap='mako_r'); plt.xlabel('rating_denominator') plt.ylabel('rating_numerator') plt.colorbar(label='favorite_count') plt.title('favorite_count and ratings') xbin_edges=np.arange(df_m['favorite_count'].min(),40000,2000) 
ybin_edges=np.arange(df_m['retweet_count'].min(),20000,500) xbin_idxs = pd.cut(df_m['favorite_count'], xbin_edges, right = False, include_lowest = True, labels = False) ybin_idxs = pd.cut(df_m['retweet_count'], ybin_edges, right = False, include_lowest = True, labels = False) plt.figure(figsize=(12,6)) plt.hist2d(data = df_m, x = 'favorite_count', y = 'retweet_count', bins = [xbin_edges, ybin_edges], cmap = 'viridis_r', cmin = 0.5); plt.colorbar(label = 'count'); plt.legend() plt.xlabel('favorite_count') plt.ylabel('retweet_count') plt.title('number of favorite_count and retweet_count') # use boxplot to show how spread the fav and retweet count are base_color=sb.color_palette()[0] sb.boxplot(data=df_m,x='p1_dog',y='favorite_count',color=base_color).set_title('prediction result and favorite_count') ###Output _____no_output_____ ###Markdown Retweet and favorite count have a positive relationship. ###Code plt.figure(figsize=(12,6)) plt.subplot(1,2,1) sb.regplot(data=df_m,x='rating_numerator',y='retweet_count',x_jitter=0.3,fit_reg=False); plt.xlim([0,500]) plt.title('rating_numerator and retweet_count') plt.subplot(1,2,2) sb.regplot(data=df_m,x='rating_numerator',y='favorite_count',x_jitter=0.3,fit_reg=False); plt.xlim([0,500]) plt.title('rating_numerator and favorite_count') plt.figure(figsize=(12,6)) plt.subplot(1,2,1) sb.regplot(data=df_m,x='rating_numerator',y='retweet_count',x_jitter=0.3,scatter_kws={'alpha':0.4},fit_reg=False); plt.xlim([0,20]) plt.title('rating_numerator under 20 and retweet_count') plt.subplot(1,2,2) sb.regplot(data=df_m,x='rating_numerator',y='favorite_count',x_jitter=0.3,scatter_kws={'alpha':0.4},fit_reg=False); plt.xlim([0,20]) plt.title('rating_numerator under 20 and favorite_count') ###Output _____no_output_____ ###Markdown retweet_count surges when the numerator goes up. But when the numerator is bigger than 15, the trend disappear. 
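The 'positive relationship' read off these plots can also be quantified instead of eyeballed. A self-contained sketch of the check using Pearson correlation (toy numbers stand in for `df_m`, which is not reproduced here):

```python
import pandas as pd

# Toy stand-in for df_m: favourite and retweet counts that rise together.
toy = pd.DataFrame({
    "favorite_count": [100, 400, 900, 1600, 2500],
    "retweet_count": [30, 120, 260, 510, 800],
})

# Pearson correlation: values near +1 support a "positive relationship" claim.
r = toy["favorite_count"].corr(toy["retweet_count"])
print(round(r, 3))
```

On the real merged dataset the same one-liner (`df_m['favorite_count'].corr(df_m['retweet_count'])`) puts a number on the scatter plots above.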
###Code #prediction confidence base_color=sb.color_palette()[0] sb.boxplot(data=df_m,x='p1_dog',y='p1_conf',color=base_color).set_title('dog prediction result and confidence level') ###Output _____no_output_____ ###Markdown 'True' predictions have a higher confidence level than 'false' predictions. ###Code # year, favorite_count and p1_dog df_m['year'], df_m['month'] = df_m['created_at'].dt.year, df_m['created_at'].dt.month df_m['year']=df_m['year'].astype('category') df_m[df_m['favorite_count'].isnull()] ax = sb.pointplot(data = df_m, x = 'p1_dog', y = 'favorite_count', hue = 'year', dodge = 0.3, linestyles = "").set_title('favorite_count on different dog prediction result comparison over years') ###Output _____no_output_____ ###Markdown The mean of favorite_count increased significantly over the past three years, from 2015 to 2017. There's no big difference between false and true predictions, but the 'false' predictions show a larger spread around the mean: in those cases the picture is usually not focused on a dog but on something else (an orange, for example), and some people don't like that kind of picture. 
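The observation that 'false' predictions vary more around the mean can be backed up numerically with a grouped aggregate rather than read off a plot. A self-contained sketch (toy values in place of `df_m`):

```python
import pandas as pd

# Toy stand-in for df_m: tight favourite counts for dog pictures,
# widely scattered ones for non-dog ("false") predictions.
toy = pd.DataFrame({
    "p1_dog": [True, True, True, True, False, False, False, False],
    "favorite_count": [5000, 5200, 4800, 5100, 1000, 9000, 200, 12000],
})

# Mean and standard deviation of favourites per prediction outcome.
spread = toy.groupby("p1_dog")["favorite_count"].agg(["mean", "std"])
print(spread)
```

Run against the real data (`df_m.groupby('p1_dog')['favorite_count'].agg(['mean', 'std'])`), a larger `std` for the False group would confirm the deviation claim.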
###Code g=sb.FacetGrid(data = df_m, hue = 'p1_dog', size = 5) g.map(plt.scatter,'p1_conf','favorite_count',alpha=0.4) g.add_legend() # No trend ###Output _____no_output_____ ###Markdown Data Wrangling Table of Contents* [Gather](Gather)* [Assess](Assess)* [Notes](Notes)* [Clean](Clean) ###Code # import the needed libraries import pandas as pd import numpy as np import requests import zipfile import matplotlib.pyplot as plt from pandas.plotting import register_matplotlib_converters register_matplotlib_converters() import seaborn as sns plt.style.use('bmh') from PIL import Image from io import BytesIO from wordcloud import WordCloud, STOPWORDS %matplotlib inline ###Output _____no_output_____ ###Markdown Gather ###Code # read the provided twitter-archive-enhanced csv file (file on hand) twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') # read the first 5 rows for data inspection twitter_archive.head() # get the image prediction file programmatically using the given url url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) # save to .tsv file with open('image_predictions.tsv', 'wb') as file: file.write(response.content) # read the image prediction file into a pandas DataFrame image_pred = pd.read_csv('image_predictions.tsv',sep='\t') # check the top 5 rows image_pred.head() ###Output _____no_output_____ ###Markdown _I tried to set up a twitter developer account, but my application was **not approved**_.- The following code is the Twitter API code provided by Udacity.- So, I will _comment_ it out as a matter of reproducibility when rerunning all the code cells in this Jupyter notebook. 
###Code # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # These are hidden to comply with Twitter's API terms and conditions #consumer_key = 'HIDDEN' #consumer_secret = 'HIDDEN' #access_token = 'HIDDEN' #access_secret = 'HIDDEN' #auth = OAuthHandler(consumer_key, consumer_secret) #auth.set_access_token(access_token, access_secret) #api = tweepy.API(auth, wait_on_rate_limit=True) # NOTE TO REVIEWER: this student had mobile verification issues so the following # Twitter API code was sent to this student from a Udacity instructor # Tweet IDs for which to gather additional data via Twitter's API #tweet_ids = twitter_archive.tweet_id.values #len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive #count = 0 #fails_dict = {} #start = timer() # Save each tweet's returned JSON as a new line in a .txt file #with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit # for tweet_id in tweet_ids: # count += 1 # print(str(count) + ": " + str(tweet_id)) # try: # tweet = api.get_status(tweet_id, tweet_mode='extended') # print("Success") # json.dump(tweet._json, outfile) # outfile.write('\n') # except tweepy.TweepError as e: # print("Fail") # fails_dict[tweet_id] = e # pass #end = timer() #print(end - start) #print(fails_dict) ###Output _____no_output_____ ###Markdown The data that should be gathered by the previous code is supported in the project resources by Udacity as _**zip file**_. 
###Code # extract the file from the zipfile with open('tweet-json.zip','rb') as f: z_tweets = zipfile.ZipFile(f) z_tweets.extractall() # check for the extracted file z_tweets.namelist() # read the file in DataFrame with open('tweet-json copy', 'r') as f: tweet_json = pd.read_json(f, lines= True, encoding = 'utf-8') # check the data tweet_json.head(3) # check for the columns names tweet_json.columns # select the columns of interest : 'id', 'favorite_count','retweet_count' tweet_json = tweet_json.loc[:,['id','favorite_count','retweet_count']] # check for the top 5 rows tweet_json.head() ###Output _____no_output_____ ###Markdown Assess- So, Now we have Three datasets `twitter_archive` , `img_pred` and `tweet_json`- First let's display one by one for visual assessing ###Code # display twitter archive twitter_archive # display image_pred image_pred # display tweet_json tweet_json ###Output _____no_output_____ ###Markdown More deep - Let's dive in deeper - Assessing of the data programmatically ###Code # twitter_archive data info twitter_archive.info() # statistic description of twitter archive twitter_archive.describe() # data sample twitter_archive.sample(5) # check for source column twitter_archive.source.value_counts() # check for the dog's name written style twitter_archive.name.str.istitle().value_counts() # check for those written as lowercase lowers = twitter_archive.name.loc[twitter_archive.name.str.islower()].unique() lowers # check for the unique values of those non titled untitled = twitter_archive.name.loc[twitter_archive.name.str.istitle() == False].unique() untitled # check for those mis-written untitled_unlowers = [i for i in untitled if i not in lowers] untitled_unlowers ###Output _____no_output_____ ###Markdown As we are interested in this project with the rating of Dogs so Let's focus more on the columns related to rating i.e `rating_numerator` and `rating_denominator` ###Code # check for denominator values below 10 
pd.set_option('display.max_colwidth',-1) twitter_archive.loc[twitter_archive.rating_denominator <10 , ['text','rating_numerator','rating_denominator']] # check for rating denominator values > 10 twitter_archive.loc[twitter_archive.rating_denominator >10 ,['text','rating_numerator','rating_denominator']] ###Output _____no_output_____ ###Markdown - Note : the account started on 11/15/15 ###Code # check for rating_numerator < 10 twitter_archive.loc[twitter_archive.rating_numerator < 10,['text','rating_numerator','rating_denominator']] # check for rating_numerator values > 14 twitter_archive.loc[twitter_archive.rating_numerator > 14,['text','rating_numerator','rating_denominator']] ###Output _____no_output_____ ###Markdown Important points here:* The account has its own rating system, and that is quite clear here, especially since a rating of 14/10 looks normal* From the above I found some outlier scores (1776, 420) that need more investigation to decide whether to include or drop them* Also, the ratings above 100 seem to be related to photos with more than one dog* Some typos, e.g. 75 instead of 9.75, 26 instead of 11.26, 27 instead of 11.27, 11/10 instead of 50/50* Sometimes float numbers are used* I will collect these observations for further cleaning ###Code # check for the text twitter_archive.text.sample(5).tolist() ###Output _____no_output_____ ###Markdown - While checking samples from the text column I noticed the sentence `We only rate dogs`. Let's investigate this. ###Code # check inside the text values for non dog related tweets twitter_archive.text[twitter_archive.text.str.match('.*only rate dogs')] # check the expanded urls column twitter_archive.expanded_urls.sample(5) ###Output _____no_output_____ ###Markdown - So this sentence is used by the account's admin to point out that a picture doesn't contain a dog! ###Code # check how many times this issue occurs 
len(twitter_archive.text[twitter_archive.text.str.match('.*only rate dogs')]) # image_pred data info image_pred.info() # statistic description of image_pred image_pred.describe() # data sample image_pred.sample(5) # number of dog breeds image_pred.p1.value_counts() # tweet_json data info tweet_json.info() # tweet_json statistics tweet_json.describe() # data sample tweet_json.sample(5) # check for datasets shape and completeness twitter_archive.shape[0], tweet_json.shape[0] , image_pred.shape[0] # duplicate columns in the three datasets all_columns = pd.Series(list(twitter_archive ) + list(tweet_json) +list(image_pred)) all_columns[all_columns.duplicated()] ###Output _____no_output_____ ###Markdown Notes Quality- `twitter_archive` * **Missing Values** : * in_reply_to_status_id, in_reply_to_user_id : 78 instead of 2356 * retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp : 181 instead of 2356 * expanded_urls : 2297 instead of 2356 * We are interested in the tweet ONLY, not the retweet * We are interested in the tweet ONLY, not the reply to the original tweet * **tweet_id** is saved as int datatype; better to be string (object) * **timestamp, retweeted_status_timestamp** are saved as object datatype (str) instead of datetime * **source** column is written in HTML, containing anchor tags * column **name** : * some values are not titled `untitled_unlowers` ('BeBe', 'DonDon', 'CeCe', 'JD', 'DayZ') * some are inaccurate values : `lowers` `['such', 'a', 'quite', 'not', 'one', 'incredibly', 'mad','an', 'very', 'just', 'my', 'his', 'actually', 'getting','this', 'unacceptable', 'all', 'old', 'infuriating', 'the','by', 'officially', 'life', 'light', 'space']` * **rating_numerator & rating_denominator**: - datatype for rating_numerator should be float instead of int - fix: - @45 13.5/10 instead of 5/10 - @ 313 13/10 instead of 960/0 - @ 2335 : 9/10 instead of 1/2 - @ 1068 : 14/10 instead of 9/11 - @1165: 13/10 instead of 4/20 - @ 1202 : 11/10 instead 
of 50/50 - @ 1662 10/10 instead of 7/11 - @ 695 : 9.75/10 instead of 75/10 - @763 : 11.27/10 instead of 27/10 - @1712 :11.26/10 instead of 26/10 - drop: - @ 516 no rating - @342 inaccurate (account start date) - invistigate(outliers): - @ 315 https://t.co/YbEJPkg4Ag 0/10 - @979 1776/10 - @ 1634 : https://t.co/kRK51Y5ac3 143/130 - @2074 :420/10 - @1274 names * columns **doggo,floofer,pupper, puppo** has None values instead of Null. * We are interested in dogs , **text** column reveals the truth about that some tweets are not related to dogs * expanded_urls is too bulky we are interested in tweet link only.************************************************************- `image_pred` * some images are not for dogs * tweet_id is saved as _int_ datatype instead of _object_ datatype * replace the underscore in breeds values with space and title all breeds values (p1 &p2& p3)************************************************************- `twitter_json` - column **id** is saved as _int_ datatype instead of _object_ datatype & rename as tweet_id***********************************************************- `All_datasets` - we have completeness issue not all the datasets have the same number of observation************************************************************ Tidiness- `twitter_archive` * text column has two variables text and short urls,create short_urls column, drop expanded_urls * The values of four columns (doggo,floofer,pupper,puppo) in `twitter_archive` dataset should be in one column dog_stage with a category datatype. * rating_numerator and rating_denominator columns in `twitter_archive` dataset should form one column dog_rating normalized out of 10. 
* make new columns for `day_name` and `month` from the timstamp column***********************************************************- `image_pred` * Columns p1, p1_dog, p1_conf , p2, p2_dog, p2_conf , p3, p3_dog, p3_conf could be condenced to two columns `dog_breed` and `confidence`***********************************************************- `All datasets` * tweet_id is present in two datasets and after renaming it will appear in all datasets * `tweet_json` and `image_pred` datasets should be part of our main dataset `twitter_archive`. Clean ###Code # make a copy of the datasets twitter_archive_clean = twitter_archive.copy() image_pred_clean = image_pred.copy() tweet_json_clean = tweet_json.copy() ###Output _____no_output_____ ###Markdown **First things first** * Let's start with the missing values `1` Missing Values`twitter_archive` * Missing Values : - in_reply_to_status_id, in_reply_to_user_id : 78 instead of 2356 - retweeted_status_id,retweeted_status_user_id,retweeted_status_timestamp 181 instead of 2356 - expanded_urls : 2297 instead of 2356 (to be fixed later) Define* in the `twitter_archive` dataset we will keep only recodrs that: - `1` Are not associated with retweets. - `2` Are not associated with reply to the original tweet. 
- _i.e. we will keep the NaN values for these columns and drop the non-NaN rows_* Drop columns: - in_reply_to_status_id - in_reply_to_user_id - retweeted_status_id - retweeted_status_user_id - retweeted_status_timestamp** Code ###Code twitter_archive_clean = twitter_archive_clean.query('in_reply_to_status_id == "NaN" &\ in_reply_to_user_id == "NaN" &\ retweeted_status_id == "NaN" &\ retweeted_status_user_id == "NaN"') # drop columns xcolumns = ['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'] twitter_archive_clean = twitter_archive_clean.drop(columns = xcolumns, axis=1) ###Output _____no_output_____ ###Markdown Test ###Code # check for Null values in the twitter_archive clean versions twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 12 columns): tweet_id 2097 non-null int64 timestamp 2097 non-null object source 2097 non-null object text 2097 non-null object expanded_urls 2094 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 2097 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dtypes: int64(3), object(9) memory usage: 213.0+ KB ###Markdown Tidiness`1` `twitter_archive` * text column has two variables, text and short URLs; create a short_urls column, drop expanded_urls Define * use the split method by `' '` over the text column, and apply over each row * create the `short_urls` column * drop the `expanded_urls` column * split the text column by `https:` and assign its value to the same column name Code ###Code # create the short_urls column by using the split method over the text column, applied over each row twitter_archive_clean['short_urls'] = twitter_archive_clean.text.apply(lambda x :x.strip().split(' ')[-1]) # drop the expanded_urls twitter_archive_clean.drop('expanded_urls', axis =1, inplace=True) # split the
text column by `https:` and assign its value to the same column name twitter_archive_clean.text = twitter_archive_clean.text.apply(lambda x:x.split('https:')[0]) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.sample() # check for column dropping assert 'expanded_urls' not in twitter_archive_clean.columns ###Output _____no_output_____ ###Markdown Tidiness`2` `twitter_archive` * The values of four columns (doggo, floofer, pupper, puppo) in the `twitter_archive` dataset should be in one dog_stage column with a category datatype. Define * select the last 4 columns related to the different dog stages * replace the 'None' string with np.nan in the selected columns * create a dog_stage column joining all the values in the selected columns, dropping nan and converting to str * convert the dog_stage column type to categorical * drop the columns related to the previous 4 stages Code ###Code # select the dog stages columns from the dataset all_dogs_type = ['doggo', 'floofer', 'pupper', 'puppo'] # replace the 'None' string with np.nan twitter_archive_clean[all_dogs_type] = twitter_archive_clean[all_dogs_type].replace('None', np.nan) # create the dog_stage column by joining the four columns into one; values are comma-joined when more than one stage applies twitter_archive_clean['dog_stage'] = twitter_archive_clean[all_dogs_type].\ apply(lambda x: ', '.join(x.dropna().astype(str)),axis =1) # replace the empty string with nan and change datatype to category twitter_archive_clean.dog_stage = twitter_archive_clean.dog_stage.replace('', np.nan).astype('category') # drop the 4 columns twitter_archive_clean = twitter_archive_clean.drop(columns = all_dogs_type, axis =1) ###Output _____no_output_____ ###Markdown Test ###Code # check for the data columns and datatype twitter_archive_clean.info() # check for the values of the new column twitter_archive_clean.dog_stage.value_counts() ###Output _____no_output_____ ###Markdown Quality`2` **rating_numerator & rating_denominator**: - datatype for
rating_numerator should be float instead of int Define * convert the datatype of rating_numerator to float by astype('float') Code ###Code # convert the datatype of rating_numerator to float by astype('float') twitter_archive_clean.rating_numerator = twitter_archive_clean.rating_numerator.astype('float') ###Output _____no_output_____ ###Markdown Test ###Code # check for the datatype twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 9 columns): tweet_id 2097 non-null int64 timestamp 2097 non-null object source 2097 non-null object text 2097 non-null object rating_numerator 2097 non-null float64 rating_denominator 2097 non-null int64 name 2097 non-null object short_urls 2097 non-null object dog_stage 336 non-null category dtypes: category(1), float64(1), int64(2), object(5) memory usage: 149.9+ KB ###Markdown Quality`3` **rating_numerator & rating_denominator**: - fix: - @45 13.5/10 instead of 5/10 - @ 313 13/10 instead of 960/0 - @ 2335 : 9/10 instead of 1/2 - @ 1068 : 14/10 instead of 9/11 - @1165: 13/10 instead of 4/20 - @ 1202 : 11/10 instead of 50/50 - @ 1662 10/10 instead of 7/11 - @ 695 : 9.75/10 instead of 75/10 - @763 : 11.27/10 instead of 27/10 - @1712 : 11.26/10 instead of 26/10*********************************************`4` **rating_numerator & rating_denominator**: - drop: - @ 516 no rating - @342 inaccurate (account start date) retweets & replies are already dropped Define * check whether the above indices exist * get a list of the indices of the errors after the check * set a list of the correct values relative to those indices * loop through the two lists and assign each index its new correct value * drop `[516]` by index Code ###Code # check whether each index exists indices = [45,313,2335,1068,1165,1202,1662,695,763,1712,516,342] for i in indices: if i in list(twitter_archive_clean.index): print('yes') else: print(f'No : {i} ') # get a list of the indices of the errors after the check
indices = [45,2335,1068,1165,1202,1662,695,763,1712] # set a list of the correct values relative to those indices vals = [13.5,9,14,13,11,10,9.75,11.27,11.26] # loop through the two lists and assign each index its new correct value for i,val in zip(indices,vals): twitter_archive_clean.loc[i, 'rating_numerator'] = val twitter_archive_clean.loc[i, 'rating_denominator'] = 10 # drop the index: 516 twitter_archive_clean.drop(index=516,inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code # test the value for one of the entries assert twitter_archive_clean.loc[1712,'rating_numerator'] == 11.26 # test for dropping index=516 assert 516 not in list(twitter_archive_clean.index) twitter_archive_clean.info() # check for the rating_denominator values twitter_archive_clean.rating_denominator.value_counts() ###Output _____no_output_____ ###Markdown Tidiness`3` `twitter_archive` * rating_numerator and rating_denominator columns in the `twitter_archive` dataset should form one dog_rating column normalized out of 10.
Define * divide the rating_numerator by the rating_denominator and then multiply by 10 * make a dog_score column * drop the rating_numerator & rating_denominator columns Code ###Code # divide the rating_numerator by the rating_denominator, then multiply by 10 & make the dog_score column twitter_archive_clean['dog_score'] = 10 * twitter_archive_clean.rating_numerator / twitter_archive_clean.rating_denominator # drop the rating_numerator & rating_denominator columns twitter_archive_clean.drop(['rating_numerator','rating_denominator'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code # check for values in the dog_score column twitter_archive_clean.dog_score.value_counts() # check for the twitter_archive_clean data twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2096 entries, 0 to 2355 Data columns (total 8 columns): tweet_id 2096 non-null int64 timestamp 2096 non-null object source 2096 non-null object text 2096 non-null object name 2096 non-null object short_urls 2096 non-null object dog_stage 336 non-null category dog_score 2096 non-null float64 dtypes: category(1), float64(1), int64(1), object(5) memory usage: 213.4+ KB ###Markdown Quality`5` fix the tweet_id columns in all datasets Define * rename the `id` column in `twitter_json` to `tweet_id` * change the datatype to str(object) for the tweet_id column in all datasets Code ###Code # rename the id column in twitter_json to tweet_id tweet_json_clean.columns = ['tweet_id', 'favorite_count', 'retweet_count'] # change the datatype to str(object) in all datasets datasets = [twitter_archive_clean,image_pred_clean,tweet_json_clean] for i in datasets: i.tweet_id = i.tweet_id.astype('object') ###Output _____no_output_____ ###Markdown Test ###Code # check the datatypes for tweet_id in all datasets for i in datasets: assert i.tweet_id.dtypes == 'object' ###Output _____no_output_____ ###Markdown Tidiness`4` `image_pred` dataset: condense the columns
p1, p1_dog, p1_conf, etc. to dog_breed, confidence * we are interested in images of dogs only * we are going to select those that have at least one dog prediction among the top three predictions Define * define a dog_breed_confidence function to extract the dog_breed and confidence from the top 3 predictions * apply the function row-wise * assign the new column names `dog_breed` and `confidence` * drop the now-unneeded columns Quality issues now: Define * replace the 'No breed' values with np.nan * replace the underscore with a space and title all breed values Code ###Code breed = [] confidence = [] # define the function def dog_breed_confidence(data): if data.p1_dog: breed.append(data.p1) confidence.append(data.p1_conf) elif data.p2_dog: breed.append(data.p2) confidence.append(data.p2_conf) elif data.p3_dog : breed.append(data.p3) confidence.append(data.p3_conf) else: breed.append('No breed') confidence.append(0) # apply the function row-wise image_pred_clean.apply(dog_breed_confidence,axis =1) # assign the new column names image_pred_clean['dog_breed'] = breed image_pred_clean['confidence'] = confidence # drop the now-unneeded columns image_pred_clean.drop(columns = ['p1', 'p1_dog', 'p1_conf' , 'p2', 'p2_dog', 'p2_conf' , 'p3', 'p3_dog', 'p3_conf'],axis=1, inplace =True) # replace the 'No breed' values with np.nan image_pred_clean.replace('No breed',np.nan, inplace=True) # replace the underscore with a space and title all breed values image_pred_clean.dog_breed = image_pred_clean.dog_breed.str.replace('_',' ').str.title() ###Output _____no_output_____ ###Markdown Test ###Code # check the top 5 rows in image_pred_clean image_pred_clean.head() ###Output _____no_output_____ ###Markdown Tidiness`5` `tweet_json` and `image_pred` datasets should be part of our main dataset `twitter_archive`.
* we are interested in the retweet_count and favorite_count from `tweet_json` while keeping the original data * we are interested in the tweets that have images Define * use the merge function to merge `twitter_archive_clean` and `tweet_json_clean` on the tweet_id column (left join) * use the merge function to merge `twitter_archive_clean` and `image_pred_clean` on the tweet_id column (inner join) * make the master dataset Code ###Code # use the merge function to merge twitter_archive_clean and tweet_json_clean on the tweet_id column (left join) twitter_archive_clean = pd.merge(twitter_archive_clean, tweet_json_clean , how = 'left' , on = 'tweet_id') # use the merge function to merge twitter_archive_clean and image_pred_clean on the tweet_id column (inner join) # and make the master dataset master_dataset = pd.merge(twitter_archive_clean, image_pred_clean , how = 'inner' , on = 'tweet_id') ###Output _____no_output_____ ###Markdown Test ###Code # check the new dataset after the merge master_dataset.info() # check that all records have an image master_dataset.jpg_url.isnull().sum() ###Output _____no_output_____ ###Markdown Quality`6` **source** column is written in HTML containing `` tags Define * check the unique values in the source column to know how to extract the needed value * make a fix_source function which extracts the string between tags `(>text<)` * use the apply function to fix the source column row-wise Code ###Code # check for the unique values master_dataset.source.unique() # make a fix_source function which extracts the string between tags def fix_source(i): 'i is an html string from the source column in twitter_archive_clean dataset' # find the first closing tag > x = i.find('>') + 1 # find the first opening tag after the previous < y = i[x:].find('<') # extract the text in between return i[x:][:y] # use the apply function to fix the source column row-wise master_dataset.source = master_dataset.source.apply(lambda x: fix_source(x)) ###Output _____no_output_____ ###Markdown Test ###Code # check
the resulting values in the source column master_dataset.source.value_counts() ###Output _____no_output_____ ###Markdown - The Vine records are lost in merging the datasets as they have no jpg_url; they are video links that were not passed to the model. Quality `7` **timestamp** is saved as object datatype (str) instead of date/timestamp Define * change the datatype of the `timestamp` column to datetime Code ###Code # change the datatype of the timestamp column to datetime master_dataset.timestamp = pd.to_datetime(master_dataset.timestamp) ###Output _____no_output_____ ###Markdown Test ###Code # check for the datatype master_dataset.timestamp.dtype ###Output _____no_output_____ ###Markdown Tidiness `6` make new columns for `day_name` and `month` for more analysis Define * extract the month name from the `timestamp` column * extract the day name from the `timestamp` column Code ###Code # extract the month name master_dataset['month'] = master_dataset.timestamp.apply(lambda x: x.month_name()) # extract the day_name master_dataset['day_name'] = master_dataset.timestamp.apply(lambda x: x.day_name()) ###Output _____no_output_____ ###Markdown Test ###Code # check the top 5 rows in the timestamp, day_name and month columns master_dataset.loc[:5,['timestamp','day_name','month']] # check for the datatypes master_dataset.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1970 entries, 0 to 1969 Data columns (total 16 columns): tweet_id 1970 non-null object timestamp 1970 non-null datetime64[ns, UTC] source 1970 non-null object text 1970 non-null object name 1970 non-null object short_urls 1970 non-null object dog_stage 303 non-null category dog_score 1970 non-null float64 favorite_count 1970 non-null int64 retweet_count 1970 non-null int64 jpg_url 1970 non-null object img_num 1970 non-null int64 dog_breed 1665 non-null object confidence 1970 non-null float64 month 1970 non-null object day_name 1970 non-null object dtypes: category(1), datetime64[ns, UTC](1), float64(2),
int64(3), object(9) memory usage: 328.5+ KB ###Markdown Quality`7` column **name** : * rename to dog_name * some values are not titled `untitled_unlowers` ('BeBe','DonDon','CeCe', 'JD', 'DayZ') * some are inaccurate values : `lowers` `['such', 'a', 'quite', 'not', 'one', 'incredibly', 'mad','an', 'very', 'just', 'my', 'his', 'actually', 'getting','this', 'unacceptable', 'all', 'old', 'infuriating', 'the','by', 'officially', 'life', 'light', 'space']` Define * rename the column to dog_name * convert lowercase names to np.nan * make all values titled * replace 'None' values with np.nan values Code ###Code # rename the name column to dog_name master_dataset.rename(columns={'name':'dog_name'},inplace=True) # convert lowercase names to np.nan (via the 'None' placeholder) lowers = master_dataset.dog_name.str.islower() master_dataset.loc[lowers,'dog_name'] = 'None' # make all values titled master_dataset.dog_name = master_dataset.dog_name.apply(lambda x: x.title()) # replace 'None' with np.nan values master_dataset.dog_name.replace('None', np.nan, inplace= True) ###Output _____no_output_____ ###Markdown Test ###Code # check that all values are titled master_dataset.dog_name.str.istitle().value_counts() # assert for our work assert [i.title() in master_dataset.dog_name.unique() for i in untitled_unlowers] assert [i in master_dataset.dog_name.unique() for i in lowers] assert 'dog_name' in master_dataset.columns # check for dog_name frequencies master_dataset.dog_name.value_counts() # check for data names master_dataset.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1970 entries, 0 to 1969 Data columns (total 16 columns): tweet_id 1970 non-null object timestamp 1970 non-null datetime64[ns, UTC] source 1970 non-null object text 1970 non-null object dog_name 1348 non-null object short_urls 1970 non-null object dog_stage 303 non-null category dog_score 1970 non-null float64 favorite_count 1970 non-null int64 retweet_count 1970 non-null int64 jpg_url 1970 non-null object img_num 1970 non-null int64
dog_breed 1665 non-null object confidence 1970 non-null float64 month 1970 non-null object day_name 1970 non-null object dtypes: category(1), datetime64[ns, UTC](1), float64(2), int64(3), object(9) memory usage: 248.5+ KB ###Markdown Quality`8` We are interested in dogs; the `text` column reveals that some tweets are not related to dogs Define * search the text column for `only rate dogs`, as it is used by the account admin to indicate that the photo is not a dog * confirm there is no name in the dog_name column * drop the rows that contain this text using their indices Code ###Code # check the text column for 'only rate dogs' in text and a null value for dog_name not_dogs = master_dataset.loc[master_dataset.dog_name.isnull()& master_dataset.text.str.match('.*only rate dogs')] # check for number of records len(not_dogs) # explore data not_dogs # drop the rows using their indices master_dataset.drop(not_dogs.index,axis= 0,inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code # check whether 'only rate dogs' still exists len(master_dataset.loc[master_dataset.dog_name.isnull()& master_dataset.text.str.match('.*only rate dogs')]) # check the new shape master_dataset.shape # check for anything else master_dataset.loc[master_dataset.text.str.match('.*only rate dogs'),['text','dog_name']] # So success! as above, the text ensures that this is a real dog!
###Output _____no_output_____ ###Markdown Check Outliers ###Code master_dataset[master_dataset.dog_score >14] ###Output _____no_output_____ ###Markdown - It seems that we have funny outliers here: **Atticus** gets the highest score in celebration of Independence Day, so the score here relates to the occasion and his dress- The second one is also a funny joke, as this is not a pic of a real dog; this singer's nickname is **Snoop Dogg** Define- So we are going to drop these outliers Code ###Code # drop outliers outliers = master_dataset[master_dataset.dog_score >14].index.tolist() master_dataset.drop(outliers,axis = 0, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code # check for the master data shape master_dataset.shape master_dataset[master_dataset.dog_score>14] ###Output _____no_output_____ ###Markdown Final Check of the Tidy Master Dataset ###Code # check the final master master_dataset.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1915 entries, 0 to 1969 Data columns (total 16 columns): tweet_id 1915 non-null object timestamp 1915 non-null datetime64[ns, UTC] source 1915 non-null object text 1915 non-null object dog_name 1347 non-null object short_urls 1915 non-null object dog_stage 303 non-null category dog_score 1915 non-null float64 favorite_count 1915 non-null int64 retweet_count 1915 non-null int64 jpg_url 1915 non-null object img_num 1915 non-null int64 dog_breed 1617 non-null object confidence 1915 non-null float64 month 1915 non-null object day_name 1915 non-null object dtypes: category(1), datetime64[ns, UTC](1), float64(2), int64(3), object(9) memory usage: 241.6+ KB ###Markdown Store Data ###Code # Store the data after combining and cleaning master_dataset.to_csv('twitter_archive_master.csv',encoding='utf-8',index=False) ###Output _____no_output_____ ###Markdown Visualization & Analysis `1` What's the most common source used by followers to share their dog's nice photo?
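As a side note, the shares plotted and computed below can also be obtained in one step with `value_counts(normalize=True)`. A minimal, self-contained sketch on a toy Series (the values are illustrative stand-ins, not the real `master_dataset.source` column):

```python
import pandas as pd

# toy stand-in for the master_dataset.source column
source = pd.Series(['iPhone', 'iPhone', 'iPhone', 'Web Client', 'TweetDeck'])

# normalize=True returns relative frequencies that sum to 1
shares = source.value_counts(normalize=True)
print(shares)
```

This avoids the manual division by `value_counts().sum()` used further down.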
###Code plt.title("Distribution of Tweets' Source") master_dataset.source.value_counts().sort_values().plot(kind ='barh') plt.xlabel('Total Tweets') plt.ylabel('Source'); # percentage of sources master_dataset.source.value_counts() / master_dataset.source.value_counts().sum() ###Output _____no_output_____ ###Markdown It is clear from the above that the Twitter app on iPhone has the largest share, __98%__, which is best explained by: > * The ease of taking a shot of a dog from the app > * The high resolution of the cameras.* Nice to mention here that around 91 records from the **Vine** source were omitted in the cleaning process as they contain no jpg_url, and those account for only around _5%_, so the above insight is still valid. `2` Which is the most popular day/month to post a dog photo? ###Code master_dataset.day_name.value_counts().plot('bar') plt.title("Distribution of Tweets over Days") plt.xlabel('Week Days') plt.ylabel('Frequency'); master_dataset.month.value_counts().plot('bar') plt.title("Distribution of Tweets over Months") plt.xlabel('Month') plt.ylabel('Frequency'); ###Output _____no_output_____ ###Markdown - It is quite clear that people tend to post their dog photos on Monday / in December> - Interestingly, the top day is Monday, which may indicate that most followers are out of stress (perhaps not workers) - The top month being December may be interpreted as the time of Christmas and New Year vacations, so people tend to go out with their dogs and take shots - These interpretations need more data and investigation to be confirmed ###Code # select the month and day from the timestamp, e.g. 01/07 will be 107 master_dataset.timestamp.apply(lambda x: x.day*100 + x.month ).value_counts().sort_values(ascending =False)[:15].plot('bar') plt.title("Distribution of Tweets over Day/Month") plt.xlabel('Day/Month "ddmm"') plt.xticks(rotation = 90) plt.ylabel('Frequency'); ###Output _____no_output_____ ###Markdown > - Voila!
It's quite clear now: the most common Day/Month in our sample dataset indicates that most posts relate to the end of November, which matches the **Thanksgiving** holidays, as it falls on the fourth Thursday of November, followed by **Black Friday**, which is also a holiday `3` Which is the most common dog name? ###Code # rank the name frequencies in descending order master_dataset.dog_name.value_counts().sort_values(ascending =False)[:10].plot('barh') plt.title("Most Common Dogs' Names") plt.xlabel('Frequency') plt.ylabel("Dog's Name"); ###Output _____no_output_____ ###Markdown - It is obvious here that the most used dog name in our sample is Charlie, followed by Oliver and Lucy!> - It may be that people tend to use real names for their dogs `4` What is the most common dog stage? ###Code # select the dog_stage frequencies master_dataset.dog_stage.value_counts().plot('bar') plt.title("Distribution of Dog Stages") plt.xlabel('Dog Stage') plt.ylabel('Frequency'); ###Output _____no_output_____ ###Markdown - As per the Dogtionary, a Pupper is a small doggo; here Pupper is the most common dog stage in our dataset- This may be because dogs at this stage/age are preferred by owners.> **Caveats:** We need to note that some values for the dog stage are missing, perhaps because they were not known by the account itself and/or the dog's owner `5` How does the @WeRateDogs account rate dogs?
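One way to quantify the "scores above 10" tendency discussed after the histogram is the proportion of scores strictly greater than 10, which is just the mean of a boolean mask. A hedged, self-contained sketch on made-up toy scores (not the real `master_dataset.dog_score` column):

```python
import pandas as pd

# toy stand-in for master_dataset.dog_score (values are made up)
scores = pd.Series([12.0, 13.0, 11.0, 10.0, 9.0, 12.0, 14.0, 8.0])

# the mean of a boolean mask is the proportion of True values
above_10_share = (scores > 10).mean()
print(f"mean score: {scores.mean():.2f}, share above 10: {above_10_share:.2%}")
```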
###Code # histogram for the dog score master_dataset.dog_score.hist(bins=15) plt.title('Distribution of Dog Scores') plt.xlabel('Scores') plt.ylabel('Count') plt.axvline(x= master_dataset.dog_score.mean(),color='orange', linestyle='--',label ='Mean') plt.xticks(np.arange(15)) plt.legend(loc=0); # descriptive stats master_dataset.dog_score.describe() ###Output _____no_output_____ ###Markdown - We can notice from the above plot that the most frequent score is around 12, and the maximum is 14 - Although the rating system for the account is out of 10, the average rating is actually 10.55!- One of the main causes of this account's popularity is that they tend to give higher scores, i.e. above 10; **Brent** was right! `6` Which is the most common breed? - Note : I needed to iterate here over the cleaning process to fix the breed names (remove the underscores and title the breed values) ###Code # frequency for dog breeds master_dataset.dog_breed.value_counts()[:10].plot('bar') plt.title('Distribution of Dog Breeds') plt.xlabel('Breeds') plt.ylabel('Count'); ###Output _____no_output_____ ###Markdown - Here, the most common breed in our sample is the **Golden Retriever**> **Caveats:** - The breed data contains a lot of null values - Also take into consideration that this data is given by a neural network model `7` What is the account performance over time?
- Here I need to answer the question of whether this Twitter account is gaining popularity over time or followers' interest is going to decline- So, I used a 30-day rolling average to get more insight about performance ###Code # set a 30 days rolling average for favorite count y1= master_dataset.favorite_count.rolling(window = 30).mean() # set a 30 days rolling average for retweet count y2= master_dataset.retweet_count.rolling(window = 30).mean() x = master_dataset.timestamp plt.plot(x,y1) plt.plot(x,y2) plt.xticks(rotation = 90) plt.title('30 days Rolling Average Account Performance') plt.xlabel('Dates') plt.ylabel('30 days average') plt.legend(loc=0); ###Output _____no_output_____ ###Markdown It's quite clear here that the @WeRateDogs account is getting more popular over time, as seen in the increasing number of likes (favorite_count)- Also it is quite clear that followers tend to like more than retweet `8` Who is the top retweeted and/or favorite dog? ###Code def get_photo(param): """ get the photo and numbers for the top of param after sorting descendingly.
INPUT: param as one of our dataset columns ---------------------------------------------- OUTPUT: image saved from the jpg_url link print out the numbers for the top """ winner = master_dataset.loc[master_dataset[param].sort_values(ascending =False).index[0]] r = requests.get(winner['jpg_url']) i =Image.open(BytesIO(r.content)) i.save('.\images/'+f'top_of_{param}.jpg') print(f'Top {param} is: {winner[param]}') get_photo('favorite_count') get_photo('retweet_count') ###Output Top retweet_count is: 79515 ###Markdown Top Favorite count winner with 132,810 likes Top Retweet count winner with 79,515 retweets ###Code # final winner # get the winner who has the largest retweet_count and favorite_count max_retweet , max_favorite = master_dataset.favorite_count.groupby(master_dataset['retweet_count']).value_counts().\ sort_values(ascending=False).index[0] winner = master_dataset.query('favorite_count == @max_favorite & retweet_count == @max_retweet') r = requests.get(winner['jpg_url'].item()) i =Image.open(BytesIO(r.content)) i.save('.\images\winner.jpg') # print the final result print(f"No of retweets is : {winner['retweet_count'].item()}, \nNo of favorite_count is {winner['favorite_count'].item()}") # winner dog-score winner['dog_score'] ###Output _____no_output_____ ###Markdown This awesome dog proved his capability to swim in a pool, so he caught the attention of followers, whether by a retweet or a like `9` How does the @WeRateDogs account write their posts?
(DogCloud) ###Code text = master_dataset.text.to_string(index =False).replace('/','').strip() # select the text def wordzcloud(text): #text =df_sql.review[0] # choose the mask from a Google dog picture url = 'https://cdn.pixabay.com/photo/2013/11/28/11/32/dog-220324_960_720.jpg' r = requests.get(url) mask = np.array(Image.open(BytesIO(r.content))) # set stopwords stopwords = ('This','and','is','the','to')#set(STOPWORDS) # set other parameters wc = WordCloud(background_color= 'white', mask = mask, stopwords=stopwords, max_words=100, contour_color='blue') # generate the word cloud wc.generate(text) # show the image wc.to_file('.\images\dog_cloud.jpg') return wc.to_image() # generate the word cloud from the text wordzcloud(text) ###Output _____no_output_____ ###Markdown We Rate Dogs Data Wrangling Project This project represents a student example of completing an analysis with emphasis on the data wrangling step. Rather than completing a tandem data wrangling and exploratory data analysis, data wrangling is completed as an important step to illustrate understanding and mastery. Table of Contents Project Details Gathering Data Step 1: Read in File 1 (.csv file) Step 2: Read in File 2 (.tsv file from the web) Step 3: Read in File 3 (.json file) Assessing Data Tidiness Issues Exploring Quality Issues Archive Data Set Step 1: Search for Missing Data Points Archive Data Set Step 2: Search for Duplicates Predictions Data Set Step 1: Search for Missing Data Points Predictions Data Set Step 2: Search for Duplicates Predictions Data Set Step 3: Search for Incorrect Data Missing Data Set Step 1: Search for Missing Data Points Missing Data Set Step 2: Search for Duplicates Missing Data Set Step 3: Search for Incorrect Data Data Quality Issues Cleaning Data 1. Remove records where archive['retweet_status_id'] is not null 2. Remove the records where archive['in_reply_to_status_id'] is not null 3. Update the data type of archive['timestamp'] to datetime 5.
Remove the missing['place'] column 8. Update the missing['lang'] column values to human readable values 6. Correct the missing['retweeted'] where missing['retweet_count'] > 0 values to True 7. Correct the missing['favorited'] where missing['favorite_count'] > 0 values to True 4. Remove the records from predictions where none of the prediction values are a dog 1. Combine all three DataFrames into one DataFrame 2. Create a dog_stage column and unpivot the doggo, floofer, pupper, and puppo columns into this new column. Storing, Analyzing, and Visualizing Data Storing the Data to .csv File Analyzing the Data Exploration: Do people prefer younger dogs over older dogs? Exploration: Which dog breed was retweeted the most? Exploration: Which language users have the highest rating? Exploration: Which dog breeds were easiest for the AI to identify? Telling the Data Story with Visualizations Project Wrap Up Back to Top Project Details Udacity Step Objectives: Data wrangling, which consists of: Gathering data (downloadable file in the Resources tab in the left most panel of your classroom and linked in step 1 below) Assessing data Cleaning data Storing, analyzing, and visualizing your wrangled data Reporting on 1) your data wrangling efforts and 2) your data analyses and visualizations Student Comments: The team at Udacity has provided a data set across a few different data sources from the Twitter user @dog_rates. This tweet information includes tweets that have since been archived from the beginning of their activity until August 1, 2017. The We Rate Dogs team rates people's dogs and provides a comment about them using a rating system that is unique to their fanbase. They also give dogs a category, which they define as a stage, based on the age of the dog and/or other characteristics of the dog. For this project I will focus primarily on the data wrangling process but a short analysis is also provided.
###Code # import typical python libraries import pandas as pd import numpy as np %matplotlib inline ###Output _____no_output_____ ###Markdown Back to Top Gathering Data Udacity Step Objectives: Gather each of the three pieces of data as described below in a Jupyter Notebook titled wrangle_act.ipynb: The WeRateDogs Twitter archive. I am giving this file to you, so imagine it as a file on hand. Download this file manually by clicking the following link: twitter_archive_enhanced.csv The tweet image predictions, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network. This file (image_predictions.tsv) is hosted on Udacity's servers and should be downloaded programmatically using the Requests library and the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv Each tweet's retweet count and favorite ("like") count at minimum, and any additional data you find interesting. Using the tweet IDs in the WeRateDogs Twitter archive, query the Twitter API for each tweet's JSON data using Python's Tweepy library and store each tweet's entire set of JSON data in a file called tweet_json.txt file. Each tweet's JSON data should be written to its own line. Then read this .txt file line by line into a pandas DataFrame with (at minimum) tweet ID, retweet count, and favorite count. Note: do not include your Twitter API keys, secrets, and tokens in your project submission. If you decide to complete your project in the Project Workspace, note that you can upload files to the Jupyter Notebook Workspace by clicking the "Upload" button in the top righthand corner of the dashboard. 
Student Comments: Per the requirements of the project, data is being gathered from 3 different locations: A previously prepared .csv file with the We Rate Dogs twitter archive information An image predictions .tsv file hosted on the Udacity website Additional tweet information to be pulled from the Twitter API using Tweepy As a side note, the info can't be pulled from the Twitter API as pulling full archive data is only available to users with a paid premium or enterprise account. In lieu of grabbing the data using Tweepy, I will instead illustrate my understanding of using said API but use the tweet-json.txt file provided by Udacity. Back to Top Step 1: Read in File 1 (.csv file) ###Code archive = pd.read_csv('twitter-archive-enhanced-2.csv') archive.head(1) ###Output _____no_output_____ ###Markdown Back to Top Step 2: Read in File 2 (.tsv file from the web) ###Code # import the requests & io libraries needed to read in the .tsv file from a website import requests import io url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url).content predictions = pd.read_csv(io.StringIO(response.decode('utf-8')), sep = '\t') predictions.head(1) ###Output _____no_output_____ ###Markdown Back to Top Step 3: Read in File 3 (.json file) The .json file should be read in using Tweepy, but recently Twitter has updated their account types and only users with a premium developer account can access historical tweets. Since I have a standard account, I will illustrate that I know how to use Tweepy but use the file provided by Udacity to read in the .json file. 
###Code # import the tweepy library import tweepy # Authenticate to Twitter auth = tweepy.OAuthHandler('*', '*') auth.set_access_token('*', '*') # Create an API object api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) # Double check successful authentication try: api.verify_credentials() print("Authentication OK", end = '\n\n') except: print("Error during authentication") # Get some We Rate Dogs Tweets statuses = api.user_timeline('dog_rates', count = 3) for status in statuses: print(status.text, end = '\n\n') extra = pd.read_json('tweet-json.txt', lines = True) extra.head(1) # Make a copy of the json data with only the columns we're interested in missing = extra.filter(['id', 'retweet_count', 'retweeted', 'favorite_count', 'favorited', 'lang', 'place'], axis = 1) missing.head(1) ###Output _____no_output_____ ###Markdown Back to Top Assessing Data Now that the data is gathered it needs to be assessed and documented for issues. Udacity Step Objectives: After gathering each of the above pieces of data, assess them visually and programmatically for quality and tidiness issues. Detect and document at least eight (8) quality issues and two (2) tidiness issues in your wrangle_act.ipynb Jupyter Notebook. To meet specifications, the issues that satisfy the Project Motivation (see the Key Points header on the previous page) must be assessed. Student Comments: Let's define the issues: Tidiness Issues Quality Issues Back to Top Tidiness Issues The following tidiness issues have been observed during the Data Gathering process above: The most glaring tidiness issue is the fact that the data is spread across 3 tables. According to tidiness rules, each observational unit forms a table. There really isn't much data here, and the observational unit is the tweet. The data in each of the sources provided are all pieces of information about a particular tweet. Even the prediction is technically a piece of info about the tweet; i.e. 
the breed of dog who is the star of the tweet - the tweet's subject. Another tidiness issue that was observed while gathering the data is that the dog stage is spread across four columns. According to tidiness rules, each variable forms a column. The four dog stages were observed as being their own column headers. Tidiness Clean Up Recommendations Combine all three DataFrames into one DataFrame Create a dog_stage column and unpivot the doggo, floofer, pupper, and puppo columns into this new column Back to Top Exploring Quality Issues Let's start exploring the data to identify any quality issues. Data quality exploration will focus on identifying missing data, removing duplicates, data type mismatches, and correcting or removing incorrect information. Back to Top Archive Data Set Step 1: Search for Missing Data Points ###Code archive.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null object 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 doggo 2356 non-null object 14 floofer 2356 non-null object 15 pupper 2356 non-null object 16 puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown Back to Top Student Comments: Archive has 2356 rows but some of the columns are showing fewer than 2356 non-null values. 
The columns that are of interest are tweet_id, timestamp, maybe source, rating_numerator, rating_denominator, name, doggo, floofer, pupper, and puppo. Each of these columns has 2356 values so no need to drop values yet. Back to Top Archive Data Set Step 2: Search for Duplicates ###Code sum(archive.duplicated()) ###Output _____no_output_____ ###Markdown Back to Top Student Comments: There aren't any duplicates, but in previous analysis I've done I've noted that sometimes the same observational unit is represented with multiple records with distinct ID values. Even though there don't appear to be any duplicates, we should reduce the observational units down to the minimum distinct columns that define them to really search for duplicates. Let's look for duplicates by reducing the number of columns to review. ###Code archive2 = archive.filter(['timestamp', 'source', 'text', 'name'], axis = 1) sum(archive2.duplicated()) ###Output _____no_output_____ ###Markdown Back to Top Student Comments: OK, so there really aren't any duplicates. In this case let's see how many unique values each column has and see if we notice anything off. ###Code archive.nunique() ###Output _____no_output_____ ###Markdown Back to Top Student Comments: It looks like the expanded_urls column only has 2218 unique values but there were a total of 2297 expanded_url entries? What is the expanded_url? I thought it was the URL where the tweet is located - so does that mean these are retweets? Upon further investigation on the internet I've found that retweets can be identified by the retweeted_status_id. If a tweet has a retweeted_status_id it is a retweet, and if it doesn't it is not (Stack Overflow). For sure we'll need to drop the retweets, i.e., the items with a retweeted_status_id, and there are 181 of them that need to be removed. In the same vein, if a retweeted_status_id means a tweet is a retweet, the in_reply_to_status_id must mean that the tweet is a reply. 
Technically speaking, a reply isn't the original tweet that we want to review, and these can also be removed. ###Code eurl = archive['expanded_urls'] archive[eurl.isin(eurl[eurl.duplicated()])] ###Output _____no_output_____ ###Markdown Back to Top Student Comments: Some of them are retweets but some of them have NaN as the value, which means they are null. Since we aren't concentrating on expanded_url as a value of interest, the NaNs don't matter as long as the columns of interest don't have NaN as a value. I think this is a good start for now on the archive information, and we may find later that we need to iterate over this step after a little combining and cleaning. But before wrapping up, let's notate any data type issues: timestamp is an object and should be a datetime Back to Top Predictions Data Set Step 1: Search for Missing Data Points ###Code predictions.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2075 non-null int64 1 jpg_url 2075 non-null object 2 img_num 2075 non-null int64 3 p1 2075 non-null object 4 p1_conf 2075 non-null float64 5 p1_dog 2075 non-null bool 6 p2 2075 non-null object 7 p2_conf 2075 non-null float64 8 p2_dog 2075 non-null bool 9 p3 2075 non-null object 10 p3_conf 2075 non-null float64 11 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown Back to Top Predictions Data Set Step 2: Search for Duplicates ###Code sum(predictions.duplicated()) ###Output _____no_output_____ ###Markdown Back to Top Student Comments: This data set is pretty much already pared down to the minimum distinct columns needed to define the observation type, so there's no need to go further to look for duplicates. Since there doesn't appear to be any missing data or duplicate data, I'll take a look at a sampling of data to see if I see anything of note. 
Back to Top Predictions Data Set Step 3: Search for Incorrect Data ###Code predictions.sample(20) ###Output _____no_output_____ ###Markdown Back to Top Student Comments: The Predictions data set doesn't have any duplicates and doesn't appear to have any missing data. Also, all the data types look to be correct. The issue here is that some of the items aren't dogs. I see some items up here that returned pencil_box, suit, and tub for example. These things aren't dogs. The items that aren't dogs are going to need to be eliminated, but I wonder how to figure that out? Based on the columns available, the strategy I want to go with is as follows: Use the pX_dog column to decipher if the prediction is in fact a dog If all 3 predictions for any given record are all False eliminate it Use the pX_conf column to figure out which prediction has the highest confidence rating Use the highest value that has a pX_dog value of True as the final dog breed prediction From the onset of this project we were given the dog breed prediction data and don't have access to the AI that was used to create it. Even though I can click on these links and look at the photos myself, the only way to fix any incorrect data on this set is to follow through with hours of manual tedium, which isn't particularly the point of this project. So rather than correcting incorrect data here, I'll just drop those records that don't meet the requirement of being a dog. 
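The four-step strategy above can be checked on a toy predictions table first. This is only a sketch — the breed names and confidence values are invented, and it uses `nanargmax` to pick the best dog-flagged prediction rather than the chained comparisons used later in the notebook:

```python
import numpy as np
import pandas as pd

# Toy predictions table; all names and confidences here are made up.
preds = pd.DataFrame({
    "p1": ["golden_retriever", "pencil_box", "pug", "tub"],
    "p1_conf": [0.90, 0.80, 0.40, 0.70],
    "p1_dog": [True, False, True, False],
    "p2": ["labrador", "suit", "beagle", "suit"],
    "p2_conf": [0.05, 0.10, 0.55, 0.20],
    "p2_dog": [True, False, True, False],
    "p3": ["tub", "pug", "tub", "pencil_box"],
    "p3_conf": [0.02, 0.05, 0.03, 0.05],
    "p3_dog": [False, True, False, False],
})

# Steps 1-2: keep only rows where at least one prediction is a dog.
dog_flags = preds[["p1_dog", "p2_dog", "p3_dog"]].to_numpy()
dogs = preds[dog_flags.any(axis=1)].copy()

# Steps 3-4: mask out non-dog confidences, then take the best remaining one.
flags = dogs[["p1_dog", "p2_dog", "p3_dog"]].to_numpy()
conf = np.where(flags, dogs[["p1_conf", "p2_conf", "p3_conf"]].to_numpy(), np.nan)
best = np.nanargmax(conf, axis=1)  # 0, 1 or 2 -> p1, p2, p3
names = dogs[["p1", "p2", "p3"]].to_numpy()
dogs["dog_breed"] = names[np.arange(len(dogs)), best]
print(dogs["dog_breed"].tolist())
```

The fourth toy row has no dog prediction at all, so it is dropped, and the second row resolves to its only dog-flagged guess even though a non-dog guess scored higher.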
Back to Top Missing Data Set Step 1: Search for Missing Data Points ###Code missing.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2354 entries, 0 to 2353 Data columns (total 7 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 id 2354 non-null int64 1 retweet_count 2354 non-null int64 2 retweeted 2354 non-null bool 3 favorite_count 2354 non-null int64 4 favorited 2354 non-null bool 5 lang 2354 non-null object 6 place 1 non-null object dtypes: bool(2), int64(3), object(2) memory usage: 96.7+ KB ###Markdown Back to Top Student Comments: Even though it's technically a cleaning task, the ID column in this data set isn't properly labeled so I'll start by fixing that. The other 2 data sets ID column is labeled tweet_id and this data set should get in line in order to make it easier to merge the 3 data sets during the cleaning process. ###Code missing.rename(columns = {'id': 'tweet_id'}, inplace = True) missing.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2354 entries, 0 to 2353 Data columns (total 6 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2354 non-null int64 1 retweet_count 2354 non-null int64 2 retweeted 2354 non-null bool 3 favorite_count 2354 non-null int64 4 favorited 2354 non-null bool 5 lang 2354 non-null object dtypes: bool(2), int64(3), object(1) memory usage: 78.3+ KB ###Markdown Back to Top Student Comments: I can already see that there are no missing data and also that there aren't any incorrect data types. Back to Top Missing Data Set Step 2: Search for Duplicates ###Code missing['tweet_id'].nunique() ###Output _____no_output_____ ###Markdown Back to Top Student Comments: There aren't any duplicates. Back to Top Missing Data Set Step 3: Search for Incorrect Data ###Code missing.sample(20) ###Output _____no_output_____ ###Markdown Back to Top Student Comments: Looking at a sampling of the data this place column looks useless actually. 
Let's hone in to get a better idea. ###Code missing[missing['place'].isnull()] ###Output _____no_output_____ ###Markdown Back to Top Student Comments: Most of the records don't actually have place data, so we can remove this column. The most important columns from this dataframe are retweet_count and favorite_count, so let's make sure all the data is there. ###Code missing[missing['retweet_count'].isnull()] missing[missing['favorite_count'].isnull()] ###Output _____no_output_____ ###Markdown Back to Top Student Comments: Neither of these columns is missing data, but the retweeted and favorited columns appear to have incorrect data in them. How can you have a retweet_count but the retweeted value be False? The same is true for the favorited column. Unless the corresponding count is 0, that value should really be True. I suppose technically speaking we don't really need either one of those columns and could remove them, but if we wanted to get a general count of how many records were retweeted vs how many weren't and/or how many were favorited vs how many weren't, this value would/should be accurate. In that case, maybe we should just correct this incorrect data. Let's also see if the language column is of value. If all the values are en (English) then it's really not worth having. ###Code missing['lang'].nunique() missing['lang'].unique() ###Output _____no_output_____ ###Markdown Back to Top Student Comments: It looks like there are 9 values represented here, but I'm not quite sure what they all mean. Most likely they are HTML lang values. It would be helpful to translate these into their actual language value if possible, so perhaps we should add a column with human readable language values. Back to Top Data Quality Issues The following data quality issues were revealed: The archive dataframe has retweets that need to be removed. The archive dataframe has replies that need to be removed. The timestamp column in the archive data frame is the wrong data type. 
The predictions dataframe has items in it that are some object other than a dog. The missing dataframe place column doesn't have any useful information in it. The missing dataframe retweeted column has incorrect values in it. The missing dataframe favorited column has incorrect values in it. The lang column values aren't particularly human readable, so I can't tell if they are correct or not. Data Quality Clean Up Recommendations Remove the records where archive['retweeted_status_id'] is not null Remove the records where archive['in_reply_to_status_id'] is not null Update the data type of archive['timestamp'] to datetime Remove the records from predictions where none of the prediction values are a dog Remove the missing['place'] column Correct the missing['retweeted'] where missing['retweet_count'] > 0 values to True Correct the missing['favorited'] where missing['favorite_count'] > 0 values to True Update the missing['lang'] column values to human readable values Back to Top Cleaning Data Udacity step objectives: Clean each of the issues you documented while assessing. Perform this cleaning in wrangle_act.ipynb as well. The result should be a high quality and tidy master pandas DataFrame (or DataFrames, if appropriate). Again, the issues that satisfy the Project Motivation must be cleaned. Student Comments: In order to get this dataset cleaned up we'll start with the quality issues, then tackle the tidiness issues. Back to Top 1. Remove records where archive['retweeted_status_id'] is not null Rather than dropping these values from the dataset, we can actually create a new dataset without them. 
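A minimal sketch of this keep-the-NaN-rows filtering on a toy frame — the tweet_ids and status ids below are invented, and the same mask pattern covers both the retweet and the reply removal:

```python
import numpy as np
import pandas as pd

# Toy archive: rows 2 and 3 stand in for a retweet and a reply.
toy = pd.DataFrame({
    "tweet_id": [1, 2, 3, 4],
    "retweeted_status_id": [np.nan, 111.0, np.nan, np.nan],
    "in_reply_to_status_id": [np.nan, np.nan, 222.0, np.nan],
})

# Keeping only the rows where the status id is NaN drops the retweets;
# repeating the pattern on in_reply_to_status_id drops the replies.
originals = toy[toy["retweeted_status_id"].isna()]
originals = originals[originals["in_reply_to_status_id"].isna()]
print(originals["tweet_id"].tolist())
```

Because each step builds a new, smaller frame rather than mutating in place, the raw archive stays available for re-checking.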
###Code archive3 = archive[archive['retweeted_status_id'].isna()] archive3['retweeted_status_id'].isnull().sum() archive3.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2175 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2175 non-null object 4 source 2175 non-null object 5 text 2175 non-null object 6 retweeted_status_id 0 non-null float64 7 retweeted_status_user_id 0 non-null float64 8 retweeted_status_timestamp 0 non-null object 9 expanded_urls 2117 non-null object 10 rating_numerator 2175 non-null int64 11 rating_denominator 2175 non-null int64 12 name 2175 non-null object 13 doggo 2175 non-null object 14 floofer 2175 non-null object 15 pupper 2175 non-null object 16 puppo 2175 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 305.9+ KB ###Markdown Back to Top Test: In the test above we can see that the code worked, since the retweeted_status_id column now has 0 non-null values. Back to Top 2. Remove the records where archive['in_reply_to_status_id'] is not null Again, we'll create a new dataframe and just exclude those values. 
###Code archive4 = archive3[archive3['in_reply_to_status_id'].isna()] archive4['in_reply_to_status_id'].isnull().sum() archive4.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2097 non-null int64 1 in_reply_to_status_id 0 non-null float64 2 in_reply_to_user_id 0 non-null float64 3 timestamp 2097 non-null object 4 source 2097 non-null object 5 text 2097 non-null object 6 retweeted_status_id 0 non-null float64 7 retweeted_status_user_id 0 non-null float64 8 retweeted_status_timestamp 0 non-null object 9 expanded_urls 2094 non-null object 10 rating_numerator 2097 non-null int64 11 rating_denominator 2097 non-null int64 12 name 2097 non-null object 13 doggo 2097 non-null object 14 floofer 2097 non-null object 15 pupper 2097 non-null object 16 puppo 2097 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 294.9+ KB ###Markdown Back to Top Test: In the test above we can see that the code worked since the in_reply_to_status_id column now has 0 records. Back to Top 3. 
Update the data type of archive['timestamp'] to datetime ###Code archive4 = archive4.copy() archive4['timestamp'] = pd.to_datetime(archive4['timestamp']) archive4.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2097 non-null int64 1 in_reply_to_status_id 0 non-null float64 2 in_reply_to_user_id 0 non-null float64 3 timestamp 2097 non-null datetime64[ns, UTC] 4 source 2097 non-null object 5 text 2097 non-null object 6 retweeted_status_id 0 non-null float64 7 retweeted_status_user_id 0 non-null float64 8 retweeted_status_timestamp 0 non-null object 9 expanded_urls 2094 non-null object 10 rating_numerator 2097 non-null int64 11 rating_denominator 2097 non-null int64 12 name 2097 non-null object 13 doggo 2097 non-null object 14 floofer 2097 non-null object 15 pupper 2097 non-null object 16 puppo 2097 non-null object dtypes: datetime64[ns, UTC](1), float64(4), int64(3), object(9) memory usage: 294.9+ KB ###Markdown Back to Top Test: In the test above we can see that the code worked since the timestamp column now has the datetime64 data type. Back to Top Student Comments: It wasn't listed in the quality issues, but it makes sense to drop the unneeded columns with 0 non-null values. 
###Code archive4.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis = 1, inplace = True) archive4.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2097 non-null int64 1 timestamp 2097 non-null datetime64[ns, UTC] 2 source 2097 non-null object 3 text 2097 non-null object 4 expanded_urls 2094 non-null object 5 rating_numerator 2097 non-null int64 6 rating_denominator 2097 non-null int64 7 name 2097 non-null object 8 doggo 2097 non-null object 9 floofer 2097 non-null object 10 pupper 2097 non-null object 11 puppo 2097 non-null object dtypes: datetime64[ns, UTC](1), int64(3), object(8) memory usage: 213.0+ KB ###Markdown Back to Top Test: In the test above we can see that the code worked since all the selected columns are no longer a part of the data set. Back to Top 5. Remove the missing['place'] column ###Code missing.drop(['place'], axis = 1, inplace = True) missing.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2354 entries, 0 to 2353 Data columns (total 6 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2354 non-null int64 1 retweet_count 2354 non-null int64 2 retweeted 2354 non-null bool 3 favorite_count 2354 non-null int64 4 favorited 2354 non-null bool 5 lang 2354 non-null object dtypes: bool(2), int64(3), object(1) memory usage: 78.3+ KB ###Markdown Back to Top Test: In the test above we can see that the code worked since the selected column is no longer a part of the data set. Back to Top 8. Update the missing['lang'] column values to human readable values First I need to get the html lang ISO values. These values are all listed out on the W3Schools website. 
I can actually get these values programmatically using the Beautiful Soup Python library, but there are only 9 of them, so let's just create a dataframe. And actually while looking these up it looks like und is not on the list, but with further research I see it means undefined. ###Code lang_vals = [{'lang':'en', 'language':'English'} ,{'lang':'und', 'language':'Undefined'} ,{'lang':'in', 'language':'Indonesian'} ,{'lang':'eu', 'language':'Basque'} ,{'lang':'es', 'language':'Spanish'} ,{'lang':'nl', 'language':'Dutch'} ,{'lang':'tl', 'language':'Tagalog'} ,{'lang':'ro', 'language':'Romanian'} ,{'lang':'et', 'language':'Estonian'} ] lang_languages = pd.DataFrame(lang_vals, columns = ['lang','language']) lang_languages.info() missing2 = pd.merge(missing, lang_languages, on = 'lang', sort = False) missing2.head(2) ###Output _____no_output_____ ###Markdown Back to Top Test: In the test above we can see that the code worked since the language column is now a part of the data set where previously it wasn't. Back to Top 6. Correct the missing['retweeted'] where missing['retweet_count'] > 0 values to True ###Code missing2.loc[missing2.retweet_count > 0, 'retweeted'] = True missing2.head(2) missing2.query('retweeted == False') ###Output _____no_output_____ ###Markdown Back to Top Test: In the test above we can see that the code worked since the rows with a retweet_count greater than 0 now have retweeted set to True, and the ones with a count of 0 still read as False. Back to Top 7. Correct the missing['favorited'] where missing['favorite_count'] > 0 values to True ###Code missing2.loc[missing2.favorite_count > 0, 'favorited'] = True missing2.head(2) missing2.query('favorited == False') ###Output _____no_output_____ ###Markdown Back to Top Test: In the test above we can see that the code worked since the rows with a favorite_count greater than 0 now have favorited set to True, and the ones with a count of 0 still read as False. Back to Top 4. 
Remove the records from predictions where none of the prediction values are a dog I've saved this (4.) as the last task because to me it's perceived to be the trickiest of the tasks for a few different reasons. In some cases by viewing the picture we can see that it's a dog, but from the onset of the project we don't have access to the AI that made the predictions to make corrections and refine the predictions, which means we should just cut our losses and remove the values where a dog breed prediction could not be made. Also, the strategy that was explained above will take several steps. ###Code predictions.query('tweet_id == 671547767500775424')['jpg_url'] ###Output _____no_output_____ ###Markdown Back to Top Student Comments: As mentioned, there actually is a dog in this image, but the dog is in the background. The AI obviously focused on the item in the foreground and spent its computing power on trying to identify the shoe that the dog chewed up. None of the predictions for this item, tweet_id 671547767500775424, are a dog so we might as well drop this record. ###Code predictions.query('tweet_id == 668623201287675904')['jpg_url'] ###Output _____no_output_____ ###Markdown Back to Top Student Comments: In other cases the dog is the only item in the image, so for images like tweet_id 668623201287675904 it was a little bit easier for the AI to try to identify the dog breed. 
###Code predictions2 = predictions.query('p1_dog == True or p2_dog == True or p3_dog == True').copy() predictions2.head(2) predictions2.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1751 entries, 0 to 2073 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1751 non-null int64 1 jpg_url 1751 non-null object 2 img_num 1751 non-null int64 3 p1 1751 non-null object 4 p1_conf 1751 non-null float64 5 p1_dog 1751 non-null bool 6 p2 1751 non-null object 7 p2_conf 1751 non-null float64 8 p2_dog 1751 non-null bool 9 p3 1751 non-null object 10 p3_conf 1751 non-null float64 11 p3_dog 1751 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 141.9+ KB ###Markdown Back to Top Student Comments: Now let's see if we can determine which dog it actually is. Of the 3 predictions which one is the highest value? Let's go with that as the actual breed. ###Code predictions2.loc[(predictions2['p1_dog'] == True) & (predictions2['p1_conf'] > predictions2['p2_conf']) & (predictions2['p1_conf'] > predictions2['p3_conf']), 'highest_prediction_value'] = predictions2['p1_conf'] predictions2.loc[(predictions2['p2_dog'] == True) & (predictions2['p2_conf'] > predictions2['p1_conf']) & (predictions2['p2_conf'] > predictions2['p3_conf']), 'highest_prediction_value'] = predictions2['p2_conf'] predictions2.loc[(predictions2['p3_dog'] == True) & (predictions2['p3_conf'] > predictions2['p1_conf']) & (predictions2['p3_conf'] > predictions2['p2_conf']), 'highest_prediction_value'] = predictions2['p3_conf'] predictions2.head() predictions2.loc[(predictions2['p1_conf'] == predictions2['highest_prediction_value']), 'dog_breed'] = predictions2['p1'] predictions2.loc[(predictions2['p2_conf'] == predictions2['highest_prediction_value']), 'dog_breed'] = predictions2['p2'] predictions2.loc[(predictions2['p3_conf'] == predictions2['highest_prediction_value']), 'dog_breed'] = predictions2['p3'] predictions2.head() 
###Output _____no_output_____ ###Markdown Back to Top Test: In the test above we can see that the code worked since the highest_prediction_value and dog_breed columns now exist with populated values. Back to Top 1. Combine all three DataFrames into one DataFrame Now that all these items have been cleaned up let's start on the tidiness issues. ###Code df = pd.merge(archive4, missing2, on = 'tweet_id', sort = False) df = pd.merge(df, predictions2, on = 'tweet_id', sort = False) df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1666 entries, 0 to 1665 Data columns (total 31 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1666 non-null int64 1 timestamp 1666 non-null datetime64[ns, UTC] 2 source 1666 non-null object 3 text 1666 non-null object 4 expanded_urls 1666 non-null object 5 rating_numerator 1666 non-null int64 6 rating_denominator 1666 non-null int64 7 name 1666 non-null object 8 doggo 1666 non-null object 9 floofer 1666 non-null object 10 pupper 1666 non-null object 11 puppo 1666 non-null object 12 retweet_count 1666 non-null int64 13 retweeted 1666 non-null bool 14 favorite_count 1666 non-null int64 15 favorited 1666 non-null bool 16 lang 1666 non-null object 17 language 1666 non-null object 18 jpg_url 1666 non-null object 19 img_num 1666 non-null int64 20 p1 1666 non-null object 21 p1_conf 1666 non-null float64 22 p1_dog 1666 non-null bool 23 p2 1666 non-null object 24 p2_conf 1666 non-null float64 25 p2_dog 1666 non-null bool 26 p3 1666 non-null object 27 p3_conf 1666 non-null float64 28 p3_dog 1666 non-null bool 29 highest_prediction_value 1463 non-null float64 30 dog_breed 1463 non-null object dtypes: bool(5), datetime64[ns, UTC](1), float64(4), int64(6), object(15) memory usage: 359.6+ KB ###Markdown Back to Top Student Comments: Looks like some of our dog_breed predictions values are null. Let's take a look at them. There are about 203 records. 
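The drop to 1666 combined rows comes from pd.merge defaulting to an inner join, so only tweet_ids present in all three frames survive. A toy sketch with invented values:

```python
import pandas as pd

# Three toy frames sharing a tweet_id key; only id 2 appears in all three.
a = pd.DataFrame({"tweet_id": [1, 2, 3], "name": ["Rex", "Fido", "Lulu"]})
b = pd.DataFrame({"tweet_id": [1, 2], "retweet_count": [10, 3]})
c = pd.DataFrame({"tweet_id": [2, 3], "dog_breed": ["pug", "beagle"]})

# pd.merge defaults to an inner join, so each merge keeps only the
# tweet_ids present in both frames being merged.
df = pd.merge(pd.merge(a, b, on="tweet_id"), c, on="tweet_id")
print(df["tweet_id"].tolist())
```

Passing `how="left"` or `how="outer"` instead would keep the unmatched rows, at the cost of NaNs in the columns coming from the other frames.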
###Code df[df['dog_breed'].isnull()][['p1_conf', 'p1_dog', 'p1', 'p2_conf', 'p2_dog', 'p2', 'p3_conf', 'p3_dog', 'p3', 'highest_prediction_value', 'dog_breed']] ###Output _____no_output_____ ###Markdown Back to Top Student Comments: The highest prediction value wasn't set on these but i see not all of the predictions are dogs. Let's fix this. ###Code df.loc[(df['p1_dog'] == True) & (df['p2_dog'] == False) & (df['p3_dog'] == False), 'highest_prediction_value'] = df['p1_conf'] df.loc[(df['p2_dog'] == True) & (df['p1_dog'] == False) & (df['p3_dog'] == False), 'highest_prediction_value'] = df['p2_conf'] df.loc[(df['p3_dog'] == True) & (df['p1_dog'] == False) & (df['p2_dog'] == False), 'highest_prediction_value'] = df['p3_conf'] df[df['highest_prediction_value'].isnull()][['tweet_id', 'p1_conf', 'p1_dog', 'p1', 'p2_conf', 'p2_dog', 'p2', 'p3_conf', 'p3_dog', 'p3', 'highest_prediction_value', 'dog_breed']] df.loc[(df['p1_dog'] == True) & (df['p2_dog'] == True) & (df['p1_conf'] > df['p2_conf']) & (df['p3_dog'] == False), 'highest_prediction_value'] = df['p1_conf'] df.loc[(df['p1_dog'] == True) & (df['p3_dog'] == True) & (df['p1_conf'] > df['p3_conf']) & (df['p2_dog'] == False), 'highest_prediction_value'] = df['p1_conf'] df.loc[(df['p2_dog'] == True) & (df['p1_dog'] == True) & (df['p2_conf'] > df['p1_conf']) & (df['p3_dog'] == False), 'highest_prediction_value'] = df['p2_conf'] df.loc[(df['p2_dog'] == True) & (df['p3_dog'] == True) & (df['p2_conf'] > df['p3_conf']) & (df['p1_dog'] == False), 'highest_prediction_value'] = df['p2_conf'] df.loc[(df['p3_dog'] == True) & (df['p1_dog'] == True) & (df['p3_conf'] > df['p1_conf']) & (df['p2_dog'] == False), 'highest_prediction_value'] = df['p3_conf'] df.loc[(df['p3_dog'] == True) & (df['p2_dog'] == True) & (df['p3_conf'] > df['p2_conf']) & (df['p1_dog'] == False), 'highest_prediction_value'] = df['p3_conf'] df[df['highest_prediction_value'].isnull()][['tweet_id', 'p1_conf', 'p1_dog', 'p1', 'p2_conf', 'p2_dog', 'p2', 'p3_conf', 
'p3_dog', 'p3', 'highest_prediction_value', 'dog_breed']] df.loc[(df['p1_conf'] == df['highest_prediction_value']), 'dog_breed'] = df['p1'] df.loc[(df['p2_conf'] == df['highest_prediction_value']), 'dog_breed'] = df['p2'] df.loc[(df['p3_conf'] == df['highest_prediction_value']), 'dog_breed'] = df['p3'] df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1666 entries, 0 to 1665 Data columns (total 31 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1666 non-null int64 1 timestamp 1666 non-null datetime64[ns, UTC] 2 source 1666 non-null object 3 text 1666 non-null object 4 expanded_urls 1666 non-null object 5 rating_numerator 1666 non-null int64 6 rating_denominator 1666 non-null int64 7 name 1666 non-null object 8 doggo 1666 non-null object 9 floofer 1666 non-null object 10 pupper 1666 non-null object 11 puppo 1666 non-null object 12 retweet_count 1666 non-null int64 13 retweeted 1666 non-null bool 14 favorite_count 1666 non-null int64 15 favorited 1666 non-null bool 16 lang 1666 non-null object 17 language 1666 non-null object 18 jpg_url 1666 non-null object 19 img_num 1666 non-null int64 20 p1 1666 non-null object 21 p1_conf 1666 non-null float64 22 p1_dog 1666 non-null bool 23 p2 1666 non-null object 24 p2_conf 1666 non-null float64 25 p2_dog 1666 non-null bool 26 p3 1666 non-null object 27 p3_conf 1666 non-null float64 28 p3_dog 1666 non-null bool 29 highest_prediction_value 1666 non-null float64 30 dog_breed 1666 non-null object dtypes: bool(5), datetime64[ns, UTC](1), float64(4), int64(6), object(15) memory usage: 439.6+ KB ###Markdown Back to Top Test: In the test above we can see that the code worked since we now have 1 DataFrame with all of the columns of interest from the 3 previous DataFrames. Back to Top 2. Create a dog_stage column and unpivot the doggo, floofer, pupper, and puppo columns into this new column. 
Back to Top Student Comments: First let's see if there are any records that have values in more than 1 of these columns, i.e., have multiple stages. Let's start off by creating some int columns so we can tally it up. ###Code df.loc[(df['doggo'] != "None"), 'is_doggo'] = 1 df.loc[(df['floofer'] != "None"), 'is_floofer'] = 1 df.loc[(df['pupper'] != "None"), 'is_pupper'] = 1 df.loc[(df['puppo'] != "None"), 'is_puppo'] = 1 df.loc[(df['doggo'] == "None"), 'is_doggo'] = 0 df.loc[(df['floofer'] == "None"), 'is_floofer'] = 0 df.loc[(df['pupper'] == "None"), 'is_pupper'] = 0 df.loc[(df['puppo'] == "None"), 'is_puppo'] = 0 df['dog_rates_categories_count'] = df['is_doggo'] + df['is_floofer'] + df['is_pupper'] + df['is_puppo'] df.info() df.query('dog_rates_categories_count > 1') ###Output _____no_output_____ ###Markdown Back to Top Student Comments: It looks like 9 of them have more than 1 stage. Based on the stage definitions the different stages appear to be pretty linear. A doggo is older. A pupper is younger. A puppo is between a doggo and a pupper and is the equivalent of a teenager. A floofer on the other hand is any dog really and is a generic name. I get that you can be old but also be young at heart, but you can't actually be both young and old. In this case, there should really be only one stage per dog. The only exception is floofer. If the duplicate stage is floofer, which is a generic stage, we can just keep the stage that's not floofer. On the other ones though, we might as well drop them because we have no way of verifying the right stage. 
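As an aside, the eight `.loc` assignments that built the `is_*` indicator columns can be collapsed into one vectorized expression. A minimal sketch on toy rows (the values below are invented for illustration, not taken from the archive):

```python
import pandas as pd

# Toy rows using the notebook's "None"-as-string convention (values invented).
df = pd.DataFrame({
    'doggo':   ['doggo', 'None', 'doggo'],
    'floofer': ['None',  'None', 'floofer'],
    'pupper':  ['None',  'pupper', 'None'],
    'puppo':   ['None',  'None', 'None'],
})

stage_cols = ['doggo', 'floofer', 'pupper', 'puppo']
# True wherever the cell holds a real stage rather than the literal string
# "None", summed row-wise -- one expression instead of eight .loc assignments.
df['dog_rates_categories_count'] = df[stage_cols].ne('None').sum(axis=1)
print(df['dog_rates_categories_count'].tolist())  # → [1, 1, 2]
```

The same boolean frame also answers the multi-stage question directly: rows where the sum exceeds 1 are exactly the records flagged above.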
###Code df.loc[(df['tweet_id'] == 854010172552949760), 'is_floofer'] = 0 df.loc[(df['tweet_id'] == 854010172552949760), 'dog_rates_categories_count'] = 1 df.query('dog_rates_categories_count > 1') df.drop(df[df['dog_rates_categories_count'] > 1].index, inplace = True) df.query('dog_rates_categories_count > 1') ###Output _____no_output_____ ###Markdown Back to Top Student Comments: And now let's see if we have some with no stage at all. ###Code df.query('dog_rates_categories_count < 1') ###Output _____no_output_____ ###Markdown Back to Top Student Comments: Well, it's more than half of them, but as far as I'm concerned there should be none with no stage at all. Again based on the stage definitions, floofer is generic so if they don't have a stage they can be considered a floofer. ###Code df.loc[(df['dog_rates_categories_count'] < 1), 'is_floofer'] = 1 df['dog_rates_categories_count'] = df['is_doggo'] + df['is_floofer'] + df['is_pupper'] + df['is_puppo'] df.query('dog_rates_categories_count < 1') ###Output _____no_output_____ ###Markdown Back to Top Student Comments: And finally let's create the dog_stage column. ###Code df.loc[df['is_doggo'] == 1, 'dog_stage'] = 'doggo' df.loc[df['is_floofer'] == 1, 'dog_stage'] = 'floofer' df.loc[df['is_pupper'] == 1, 'dog_stage'] = 'pupper' df.loc[df['is_puppo'] == 1, 'dog_stage'] = 'puppo' df['dog_stage'].isnull().any() ###Output _____no_output_____ ###Markdown Back to Top Test: In the test above we can see that the code worked since the dog_stage column exists and there are no null values. Back to Top Student Comments: Lastly let's remind ourselves of what columns we have and drop the ones we don't need. 
###Code df.info() df_clean = df.filter(['tweet_id', 'timestamp', 'source', 'text', 'expanded_urls', 'rating_numerator', 'rating_denominator', 'name', 'retweet_count', 'retweeted', 'favorite_count', 'favorited', 'language', 'jpg_url', 'highest_prediction_value', 'dog_breed', 'dog_stage'], axis = 1).copy() df_clean.info() df_clean.head() ###Output _____no_output_____ ###Markdown Back to Top Student Comments: One thing I didn't think to explore before was the source column. I vaguely remember that this column only had a few values and it looks like they are probably the device. It wasn't listed as one of the cleaning steps, but it doesn't hurt to create a human readable column for this value so that we can get device information more easily. ###Code df_clean['source'].nunique() df_clean['source'].unique() ###Output _____no_output_____ ###Markdown Back to Top Student Comments: There are only 3 sources represented here: iPhone, Web Client, and TweetDeck. ###Code df_clean.loc[df['source'] == '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'source_device'] = 'iPhone' df_clean.loc[df['source'] == '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>', 'source_device'] = 'Web Client' df_clean.loc[df['source'] == '<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>', 'source_device'] = 'TweetDeck' df_clean['source_device'].isnull().any() ###Output _____no_output_____ ###Markdown Back to Top Test: In the test above we can see that the code worked since the source_device column exists and there are no null values. Back to Top Student Comments: Now that this is done we don't actually need the source column and we can drop it. 
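As an aside before dropping the column: the three hard-coded `.loc` mappings used to build `source_device` could be generalized with a single regex extract, which keeps working if a new source value ever appears. A minimal sketch on made-up source strings shaped like the archive's anchor tags:

```python
import pandas as pd

# Made-up source strings shaped like the archive's anchor tags.
df = pd.DataFrame({'source': [
    '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>',
    '<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>',
]})

# One regex pulls the human-readable label out of any anchor variant,
# with no per-value .loc assignment needed.
df['source_device'] = df['source'].str.extract(r'>([^<]+)<', expand=False)
print(df['source_device'].tolist())  # → ['Twitter for iPhone', 'TweetDeck']
```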
###Code df_clean.drop(['source'], axis = 1, inplace = True) df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1658 entries, 0 to 1665 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1658 non-null int64 1 timestamp 1658 non-null datetime64[ns, UTC] 2 text 1658 non-null object 3 expanded_urls 1658 non-null object 4 rating_numerator 1658 non-null int64 5 rating_denominator 1658 non-null int64 6 name 1658 non-null object 7 retweet_count 1658 non-null int64 8 retweeted 1658 non-null bool 9 favorite_count 1658 non-null int64 10 favorited 1658 non-null bool 11 language 1658 non-null object 12 jpg_url 1658 non-null object 13 highest_prediction_value 1658 non-null float64 14 dog_breed 1658 non-null object 15 dog_stage 1658 non-null object 16 source_device 1658 non-null object dtypes: bool(2), datetime64[ns, UTC](1), float64(1), int64(5), object(8) memory usage: 290.5+ KB ###Markdown Back to Top Storing, Analyzing, and Visualizing Data Udacity Step Objectives: Store the clean DataFrame(s) in a CSV file with the main one named twitter_archive_master.csv. If additional files exist because multiple tables are required for tidiness, name these files appropriately. Additionally, you may store the cleaned data in a SQLite database (which is to be submitted as well if you do). Analyze and visualize your wrangled data in your wrangle_act.ipynb Jupyter Notebook. At least three (3) insights and one (1) visualization must be produced. Student Comments: I'll break this up into 3 steps. Back to Top Storing the Data to .csv File ###Code df_clean.to_csv('twitter_archive_master.csv', index = False) ###Output _____no_output_____ ###Markdown Back to Top Analyzing the Data ###Code df_clean.sample(3) ###Output _____no_output_____ ###Markdown Back to Top Student Comments: Now that the data set is all cleaned up what can we figure out from it? I can think of a couple of things from the columns available. 
Which dog stages have the highest rating - i.e. inferred as more popular? This seems like an obvious one but honestly it's a little fruitless. Since I cleaned the data I already know that a big chunk of the dogs are categorized as floofers so maybe if we exclude the floofer stage we can get info on the other linear categories. Maybe we can see if people really prefer younger dogs over older dogs? That's a much more targeted question to ask. Which language users have the highest rating? Which dog breed was retweeted the most? Which dog breed was favorited the most? Which dog breed has the highest rating? Which dog breeds were easiest for the AI to identify? Which source device users have the highest rating? We only need to produce 3 insights for the purposes of this project but some of these might not have enough data to even be considered significant so let's look at a couple and see how strong the correlations are before deciding on which one to visualize. Back to Top Exploration: Do people prefer younger dogs over older dogs? For this exploration we'll look specifically at dogs in the dog_stage of doggo or pupper. Doggo is an old dog and pupper is a young dog. How do we define prefer? We have a few options. We could go with the dog that has the highest average rating, the dog that has the highest average favorites, or the dog that has the highest average retweets. Let's take a look at these values and see if we can find any significance in those three features. Firstly I'm curious to see if all the rating denominators are 10. ###Code df_clean['rating_denominator'].unique() ###Output _____no_output_____ ###Markdown Back to Top Student Comments: There are several different denominators here. These need to be equalized. We'll make a new column for the equalized_numerator. 
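One vectorized way to build such an equalized numerator, as a sketch on toy ratings (the values below are invented, including a multi-dog /70 score):

```python
import pandas as pd

# Toy ratings including a multi-dog /70 score (values invented).
df = pd.DataFrame({'rating_numerator': [12, 84, 9],
                   'rating_denominator': [10, 70, 10]})

# One vectorized expression handles every row at once: rescale each rating
# onto a /10 denominator, then round to the nearest integer.
df['equalized_numerator'] = (
    df['rating_numerator'] * 10 / df['rating_denominator']
).round().astype(int)
print(df['equalized_numerator'].tolist())  # → [12, 12, 9]
```

A per-tweet loop produces the same numbers, but the columnar form avoids repeated row lookups.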
###Code for x in df_clean['tweet_id']: y = int(round(df_clean.query('tweet_id == @x')['rating_numerator'] * 10 / df_clean.query('tweet_id == @x')['rating_denominator'])) df_clean.loc[df['tweet_id'] == x, 'equalized_numerator'] = y df_clean.query('rating_denominator != 10')[['tweet_id', 'rating_numerator', 'rating_denominator','equalized_numerator']] df_clean.query('dog_stage == "doggo" or dog_stage == "pupper"').groupby('dog_stage')['equalized_numerator'].mean() df_clean.query('dog_stage == "doggo" or dog_stage == "pupper"').groupby('dog_stage')['favorite_count'].mean() df_clean.query('dog_stage == "doggo" or dog_stage == "pupper"').groupby('dog_stage')['retweet_count'].mean() ###Output _____no_output_____ ###Markdown Back to Top Student Comments: Based on the averages I'm seeing here it looks like people prefer doggos over puppers. On average the doggos have a higher rating. They also have higher average favorite counts and higher average retweet counts. Not only do the users at We Rate Dogs rate them more highly, but other twitter users approve of their photos/descriptions more often, and they also share those tweets with other people more often. Back to Top Exploration: Which dog breed was retweeted the most? For this exploration we'll use the dog_breed feature and see which breeds were shared with others the most. ###Code breed_retweet = df_clean.groupby(['dog_breed'], sort = True)['retweet_count'].sum().reset_index() breed_retweet.sort_values(by = ['retweet_count'], ascending = False).head(20) ###Output _____no_output_____ ###Markdown Back to Top Student Comments: It probably would have been beneficial to make sure the AI that predicted these breeds consistently used the same breed names/formatting but since an AI created this data set I didn't think it was necessary. 
Even though I don't know how the predictions were conducted, it doesn't make sense for an AI to use both golden_retriever and GoldenRetriever in the data so it's probably safe to assume that the breeds are consistent. But let's just check one anyway. ###Code df_clean[df_clean['dog_breed'].str.contains('retriever', regex = False)]['dog_breed'].unique() ###Output _____no_output_____ ###Markdown Back to Top Student Comments: It looks like my assumption was right and that the golden_retriever was the most retweeted breed. Perhaps it'd be interesting to see how many of the top 20 retweeted breeds are also among the top 20 favorited breeds. ###Code breed_favorite = df_clean.groupby(['dog_breed'], sort = True)['favorite_count'].sum().reset_index() breed_favorite.sort_values(by = ['favorite_count'], ascending = False).head(20) ###Output _____no_output_____ ###Markdown Back to Top Student Comments: There are a few differences but I wonder if there'd be a way to visualize these together in the same chart maybe? Back to Top Exploration: Which language users have the highest rating? For this exploration we'll look specifically at the language which we had translated from the html ISO value and see on average which language users have the highest rated dogs. ###Code df_clean.groupby(['language'])[['equalized_numerator', 'favorite_count','retweet_count']].mean() ###Output _____no_output_____ ###Markdown Back to Top Student Comments: This is kind of interesting. The favorite_count and the retweet_count for English speakers on average is pretty high, and surely much higher than that of the other language speakers, but is that due to access? Both Dutch and Estonian speakers on average had higher equalized_numerator values and therefore had dogs that were more well loved by the We Rate Dogs user. I wonder what kind of dogs these users had and how easy it was for the AI to identify these dogs. 
It could be that these language speakers tend toward older dogs maybe, have better quality photos that are easier for the AI to identify, or maybe it could be something else? ###Code df_clean.groupby(['language'])['highest_prediction_value'].mean() df_clean.groupby(['language','dog_breed'], sort = True)['rating_numerator'].mean() ###Output _____no_output_____ ###Markdown Back to Top Student Comments: I'd say my assumption about access is probably right on the money. Either users who speak these languages have limited access, or we as the researcher had limited access to data from non-English speaking users. I also can't rule out that the lang values themselves are erroneous although that's unlikely. At any rate there are too few records for the other language speakers to consider this data of value. Back to Top Exploration: Which dog breeds were easiest for the AI to identify? For this exploration we'll look at the prediction confidence values and see which breeds the AI identified with the highest confidence. ###Code df_clean['highest_prediction_value'].describe() df_clean.query('highest_prediction_value == 0.999956') ###Output _____no_output_____ ###Markdown Back to Top Student Comments: Well I can see why the AI was able to identify this breed with such a high confidence level. This Komondor breed is a pretty distinctive looking dog. I was able to visually verify with a Google images search that this is indeed the correct breed designation. Only this one dog was identified with such a high confidence level though. Let's see if there were other Komondor photos in the data set and how high their confidence level is, and/or if other dogs were identified more consistently. 
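When comparing several statistics per breed (mean, max, sample size), pandas named aggregation can collect them in a single groupby pass instead of one pass per statistic. A sketch with made-up confidence values:

```python
import pandas as pd

# Made-up confidence values for two breeds.
df = pd.DataFrame({'dog_breed': ['komondor', 'komondor', 'samoyed'],
                   'highest_prediction_value': [0.99, 0.98, 0.90]})

# Named aggregation gathers mean, max, and sample size in one groupby pass,
# so the statistics for each breed line up in a single frame.
summary = df.groupby('dog_breed')['highest_prediction_value'].agg(
    mean_conf='mean', max_conf='max', n='count').reset_index()
print(summary['n'].tolist())  # → [2, 1]
```

Having the sample size alongside the average also makes it obvious when a high mean rests on only a handful of records, which is exactly the Komondor caveat discussed below.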
###Code df_clean.query('dog_breed == "komondor"')['highest_prediction_value'] easy_identify = df_clean.groupby(['dog_breed'], sort = True)['highest_prediction_value'].mean().reset_index() easy_identify.sort_values(by = ['highest_prediction_value'], ascending = False).head(20) easy_identify2 = df_clean.groupby(['dog_breed'], sort = True)['highest_prediction_value'].max().reset_index() easy_identify2.sort_values(by = ['highest_prediction_value'], ascending = False).head(20) easy_identify3 = df_clean.groupby(['dog_breed'], sort = True)['highest_prediction_value'].agg(lambda x:x.value_counts().index[0]).reset_index() easy_identify3.sort_values(by = ['highest_prediction_value'], ascending = False).head(20) ###Output _____no_output_____ ###Markdown Back to Top Student Comments: To preface this info, there are only 3 Komondor records. But in all 3 cases the confidence level is very high. That being said, by virtue of having fewer data points to aggregate, the average, max, and mode are understandably high. Across the average, mode, and max though I do see some other breeds repeated including Bernese Mountain Dog, Keeshond, and Samoyed. Let's take a look and see how many of these breeds are amongst the records. ###Code df.query('dog_breed == "Bernese_mountain_dog" or dog_breed == "keeshond" or dog_breed == "Samoyed"')['dog_breed'].value_counts() df_clean['dog_breed'].value_counts().describe() ###Output _____no_output_____ ###Markdown Back to Top Student Comments: Based on these values I'd say the Samoyed was the easiest dog for the AI to identify. Looks like there are 113 distinct breeds in the set and on average a breed is listed about 14 times in total among the records. The dog that shows up the most shows up 152 times, but in the upper (3rd) quartile we're looking at 16 occurrences. Given that the Samoyed has a total of 42 occurrences in the data set, that's pretty high compared to the other dogs. 
You'd think that if the AI saw the dog a bunch of times it'd have a better chance of identifying the dog based on previous identifications, but again, there are other factors including picture quality. The Samoyed does have more photos than others though, and it has a much higher identification confidence level so I feel pretty confident saying based on all these factors it was the easiest dog for the AI to recognize. I feel like we've already answered a lot of questions here and that this AI identification insight is pretty interesting compared to the others but I don't think it's the best candidate for a visualization because it's going to require a few different visualizations to get the point across. Instead let's look at the spread of old dogs vs. young dogs. Back to Top Telling the Data Story with Visualizations ###Code df_clean.query('dog_stage == "doggo" or dog_stage == "pupper"').groupby(['dog_stage'])['retweet_count'].mean().plot(kind = 'bar', x = 'retweet_count', figsize = (12,8), title = 'Average Retweet Count per Dog Stage'); df_clean.query('dog_stage == "doggo" or dog_stage == "pupper"').groupby(['dog_stage'])['retweet_count'].sum().plot(kind = 'bar', x = 'retweet_count', figsize = (12,8), title = 'Total Retweet Count Across Data Set'); ###Output _____no_output_____ ###Markdown Back to Top Student Comments: I think the 2 visualizations above really drive home just how much people prefer older dogs (doggos) to younger dogs (puppers) when you look at the bar chart of average retweets. The doggo bar is so much bigger than the pupper bar. Across the board amongst the values available in the data set (the sum of retweets) the puppers had slightly more retweets in general and had the opportunity to surpass the doggos or at least get close to them in average value, but no such luck. They lag pretty far behind. 
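For reference, the two bar charts above can also be produced from a single aggregation pass. A sketch with invented retweet counts (the real numbers come from the cleaned archive):

```python
import pandas as pd

# Toy retweet counts for the two linear stages (values invented).
df = pd.DataFrame({'dog_stage': ['doggo', 'doggo', 'pupper', 'pupper', 'pupper'],
                   'retweet_count': [600, 400, 200, 150, 250]})

# One agg call yields both the average and the total the two charts draw;
# summary.plot(kind='bar', subplots=True) would then render them side by side.
summary = df.groupby('dog_stage')['retweet_count'].agg(['mean', 'sum'])
print(summary['mean'].tolist(), summary['sum'].tolist())  # → [500.0, 200.0] [1000, 600]
```

Keeping both statistics in one frame also makes the average-versus-total contrast discussed above easy to read off a single table.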
Back to Top Project Wrap Up Student Comments: The We Rate Dogs tweet archive provided the opportunity to combine and massage data from a few different sources all with varying levels of significance. During this process it became very clear just how iterative this process can be. It also provided a nice framework for understanding how to clean a data set methodically: reduce duplicates - not only across all available columns in the set, but more specifically across the defining features of the observation. verify data types figure out a good way to look for and correct incorrect/corrupted data remove unfixable observations and fix fixable ones address structural issues as early as possible to help make the analysis easier/doable Udacity Required Attachments Reporting for this Project: Create a 300-600 word written report called wrangle_report.pdf or wrangle_report.html that briefly describes your wrangling efforts. This is to be framed as an internal document. Attached: See - wrangle_report.pdf Create a 250-word-minimum written report called act_report.pdf or act_report.html that communicates the insights and displays the visualization(s) produced from your wrangled data. This is to be framed as an external document, like a blog post or magazine article, for example. 
Attached: See - act_report.pdf ###Code !jupyter nbconvert --to html wrangle_act.ipynb ###Output [NbConvertApp] Converting notebook wrangle_act.ipynb to html [NbConvertApp] Writing 933547 bytes to wrangle_act.html ###Markdown WeRateDogs - Data Wrangling *Andreina Tirado* Gather ###Code import pandas as pd import numpy as np import seaborn as sb import requests import tweepy import json import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown 1- "WeRateDogs" tweet archiveIn this case, the history of tweets to be used for this analysis is given in a csv file, so the first step will be to import the file as a dataframe ###Code tweet_df = pd.read_csv('data/twitter-archive-enhanced.csv', sep=',') tweet_df.head() ###Output _____no_output_____ ###Markdown 2- "WeRateDogs" imagesAs the second source for our analysis, there is information available at one of Udacity's endpoints. To access it, it's necessary to use the requests library and then store the response content in a tsv file ###Code #get image prediction file url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' r = requests.get(url) filename = 'data/image_predictions.tsv' with open(filename, 'wb') as f: f.write(r.content) images = pd.read_csv('data/image_predictions.tsv', sep='\t') images ###Output _____no_output_____ ###Markdown 3 "WeRateDogs" Retweets & LikesThe third and last source of data for this analysis is the Twitter API itself; to gain access, the tweepy library was used. After reviewing the tweepy documentation, the function used to query for retweets and likes was: api.get_status(tweet_id,tweet_mode = 'extended'). 
Here, tweet_id iterates over the unique IDs collected from the archive file. ###Code tweet_ids = tweet_df['tweet_id'].unique() #establish connection to twitters API to collect information about each tweet ''' consumer_key = '' consumer_secret = '' access_token = '' access_secret = '' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth,wait_on_rate_limit=True,wait_on_rate_limit_notify=True) ''' #retrieve total retweets and likes from twitter's API ''' with open('data/tweet_json.txt', mode = 'w') as file: for tweet_id in tweet_ids: try: tweet_info = api.get_status(tweet_id,tweet_mode = 'extended') tweet_info = tweet_info._json file.write(str(tweet_info['id'])+','+str(tweet_info['retweet_count']) +','+str(tweet_info['favorite_count'])+'\n') except: print("ERROR Tweet not found {}".format(tweet_id)) continue file.close() ''' ###Output _____no_output_____ ###Markdown At this point, beyond flagging the missing (not found) tweets, nothing additional will be done, because this will be resolved during the cleaning steps. ###Code cols = ['tweet_id', 'count_retweet', 'count_likes'] tweet_additional = pd.read_csv('data/tweet_json.txt', sep = ',', header = None, names = cols, skip_blank_lines = True ) tweet_additional.tail() ###Output _____no_output_____ ###Markdown AssessAssessing this data will require both visual and programmatic efforts; this means that each of the dataframes will be carefully reviewed in order to decide on the next cleaning steps. 
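Before the visual review, a compact programmatic pass can summarize missingness and duplicate IDs across all the gathered frames at once. A minimal sketch on toy stand-ins (the frame names and values below are invented; the real frames come from the files loaded above):

```python
import pandas as pd

# Toy stand-ins for the gathered frames (the real ones come from the files above).
frames = {
    'archive': pd.DataFrame({'tweet_id': [1, 2], 'name': ['Doug', None]}),
    'images': pd.DataFrame({'tweet_id': [1, 2], 'jpg_url': ['a.jpg', 'b.jpg']}),
}

# One loop reports null cells and duplicated IDs for every frame at once,
# complementing the cell-by-cell visual inspection that follows.
for label, frame in frames.items():
    report = (label, int(frame.isnull().sum().sum()),
              int(frame['tweet_id'].duplicated().sum()))
    print(report)
```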
###Code tweet_df ###Output _____no_output_____ ###Markdown RetweetsGiven that one of our requirements is to work with original tweets and **NOT** retweets (RTs), let's identify how many tweets are RTs ###Code tweet_df[tweet_df.text.str.contains("^RT") == True]['tweet_id'].count() tweet_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown Ratings (Numerator and denominator)In this case, after visual assessment it seems like most of the tweets have a score whose numerator is an integer and whose denominator is 10, so let's get a better idea of how many differ from those characteristics. 
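As a sketch of how both sides of such a score can be pulled out of the tweet text in one call (the regex and sample texts here are illustrative, not the notebook's exact pattern):

```python
import pandas as pd

# Invented tweet texts carrying ratings, including a /70 pack score.
s = pd.Series(["This is Doug. 12/10 would pet",
               "Pack of pups, 84/70 good dogs"])

# Named groups capture numerator and denominator in a single extract call;
# str.extract returns the first match per row as strings.
ratings = s.str.extract(r'(?P<num>\d+(?:\.\d+)?)/(?P<den>\d+)')
print(ratings['num'].tolist(), ratings['den'].tolist())  # → ['12', '84'] ['10', '70']
```

Casting with `.astype(float)` would make the extracted strings usable for arithmetic, e.g. for flagging denominators other than 10.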
*Denominators != 10* ###Code tweet_df.query('rating_denominator != 10')['rating_denominator'].value_counts() tweet_df[(tweet_df['rating_denominator']== 120) | (tweet_df['rating_denominator']== 0 )] score = tweet_df.text.str.extract('((?:\d+\.)?\d+)\/(?:\d+)', expand=True) score #420 tweet_df[tweet_df['text'].str.contains('420|11.26|1776') == True] ###Output _____no_output_____ ###Markdown In the denominators case, there are some false positives (/120), especially if there are multiple dogs in the picture, but still, given the low number of tweets affected by this and given that this is not the focus of the investigation, we'll flag these cases for deletion.In the numerators case, we'll update the column with the correct values NamesEach tweet usually contains the dog's name, so could we have cases where the tweet has an incorrect name or no name at all? ###Code tweet_df.query('name == "a" or name == "an" or name == "the" or name == None or name == ""')['tweet_id'].count() ###Output _____no_output_____ ###Markdown Dog StageEach puppy/dog is categorized by a stage; in this case this information is organized in 4 different columns, so let's check if all dogs have a stage assigned ###Code tweet_df.groupby(["doggo", "floofer", "pupper", "puppo"]).size().reset_index().rename(columns={0: "count"}) tweet_df.groupby('doggo')['doggo'].count() tweet_df.groupby('floofer')['floofer'].count() tweet_df.groupby('pupper')['pupper'].count() tweet_df.groupby('puppo')['puppo'].count() ###Output _____no_output_____ ###Markdown For the following 2 data sets, the same process will be followed: checking data types, missing information, untidy data, duplicated information, etc. 
###Code images.info() images[images['tweet_id'].duplicated() == True]['tweet_id'].count() images[images['jpg_url'].duplicated() == True]['tweet_id'].count() images[~images.jpg_url.str.contains("jpg$") == True] tweet_additional.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2339 entries, 0 to 2338 Data columns (total 3 columns): tweet_id 2339 non-null int64 count_retweet 2339 non-null int64 count_likes 2339 non-null int64 dtypes: int64(3) memory usage: 54.9 KB ###Markdown Quality `tweet_df` table* unnecessary columns in the data set* the timestamp column has the incorrect data type* there is missing information in at least 5 columns NaN* "None" instead of NaN is used to designate null values in some of the columns* there are RTs in the tweet list* there are dogs that don't belong to any stage and are classified as None, some even have been incorrectly assigned more than one stage* incorrect names --> https://twitter.com/dog_rates/status/666407126856765440* retweeted_status_id incorrect data type* incorrect numerators * incorrect denominators `images` table* Duplicated image URL for different tweets Tidiness* create one table or column with dogs type* group all tables into one Copying the files prior to cleaning ###Code tweet_df_copy = tweet_df.copy() tweet_additional_copy = tweet_additional.copy() images_copy = images.copy() ###Output _____no_output_____ ###Markdown CleanIn order to start this process, let's solve the tidiness issues first so that we can focus on the quality issues directly in the final table. In the following steps we will follow the "Define - Code - Test" structure to describe the steps taken.* Create one table with all observations from the 3 dataframes* delete unnecessary columns* Create 1 column for the doggos* remove RTs* changing data types Tidiness **Define**Merge the three tables into one single table to fulfill tidiness principles. 
In this step, we will focus first on merging tweet_df and tweet_additional **Code** ###Code full_tweet = pd.merge(tweet_df, tweet_additional, how='left', on='tweet_id',left_index=False, indicator=False) full_tweet.head() ###Output _____no_output_____ ###Markdown **Test** ###Code full_tweet.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2356 entries, 0 to 2355 Data columns (total 19 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object count_retweet 2339 non-null float64 count_likes 2339 non-null float64 dtypes: float64(6), int64(3), object(10) memory usage: 368.1+ KB ###Markdown **Define**Merge tables full_tweet and images. Part AIn order to fulfill this step, it's necessary to also group the image predictions data:* Get the dog breed prediction with the highest confidence interval* Get the prediction on whether the image belongs to a dog or not* Get the confidence interval associated with the prediction **Code** ###Code breed = [] is_dog = [] ci = [] def process_image_pred (df): if df['p1_dog'] == True: breed.append(df['p1']) is_dog.append(True) ci.append(df['p1_conf']) elif df['p2_dog'] == True: breed.append(df['p2']) is_dog.append(True) ci.append(df['p2_conf']) elif df['p3_dog'] == True: breed.append(df['p3']) is_dog.append(True) ci.append(df['p3_conf']) else: breed.append(np.NaN) is_dog.append(False) ci.append(0) images.apply(process_image_pred, axis=1) ###Output _____no_output_____ ###Markdown **Test** ###Code 
images['breed'] = breed images['is_dog'] = is_dog images['conf_interval'] = ci images.head() ###Output _____no_output_____ ###Markdown **Define**Merge tables full_tweet and images.Part BIn this part we will complete the merge between full_tweet and images **Code** ###Code full_tweet = pd.merge(full_tweet, images, how='left', on='tweet_id',left_index=False, indicator=False) full_tweet.head() ###Output _____no_output_____ ###Markdown **Test** ###Code full_tweet.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2356 entries, 0 to 2355 Data columns (total 33 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object count_retweet 2339 non-null float64 count_likes 2339 non-null float64 jpg_url 2075 non-null object img_num 2075 non-null float64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null object p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null object p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null object breed 1751 non-null object is_dog 2075 non-null object conf_interval 2075 non-null float64 dtypes: float64(11), int64(3), object(19) memory usage: 625.8+ KB ###Markdown **Define**Replace the "None" with "" and combine content of columns 'doggo', 'floofer', 'pupper', 'puppo' into one column. 
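A minimal sketch of that replace-and-concatenate idea on toy stage columns (the rows below are invented; the real columns come from the merged archive):

```python
import pandas as pd

# Toy stage columns using the archive's "None"-as-string convention (invented rows).
df = pd.DataFrame({'doggo': ['doggo', 'None'], 'floofer': ['None', 'None'],
                   'pupper': ['None', 'pupper'], 'puppo': ['None', 'None']})

# Swapping "None" for "" lets plain string concatenation produce the stage;
# rows carrying two stages would surface as fused strings like 'doggopupper'.
cols = ['doggo', 'floofer', 'pupper', 'puppo']
stage = df[cols].replace('None', '').sum(axis=1)
print(stage.tolist())  # → ['doggo', 'pupper']
```

The fused strings are handy: they make the conflicting multi-stage rows easy to spot and filter, which is exactly what the cell below does with the 'delete' marker.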
**Code** ###Code cols_to_remove = ['doggo', 'floofer', 'pupper', 'puppo'] for i in range (len(cols_to_remove)): full_tweet[cols_to_remove[i]].replace('None', "", inplace=True) full_tweet['stage'] = full_tweet.doggo + full_tweet.floofer + full_tweet.pupper + full_tweet.puppo full_tweet.loc[full_tweet.stage == 'doggopupper', 'stage'] = 'delete' full_tweet.loc[full_tweet.stage == 'doggopuppo', 'stage'] = 'delete' full_tweet.loc[full_tweet.stage == 'doggofloofer', 'stage'] = 'delete' full_tweet = full_tweet[full_tweet['stage'] != 'delete'] ###Output _____no_output_____ ###Markdown **Test** ###Code # verify the creation of the new column full_tweet.head() full_tweet.groupby('stage')['stage'].count() full_tweet.shape ###Output _____no_output_____ ###Markdown Quality **Define*** Remove unnecessary columns **Code** ###Code columns_to_drop = ['retweeted_status_timestamp','p1', 'p1_conf', 'p2', 'p2_conf', 'p3', 'p3_conf', 'p1_dog', 'p2_dog', 'p3_dog','doggo', 'floofer', 'pupper', 'puppo'] full_tweet.drop(columns_to_drop, axis = 1, inplace = True, errors = 'raise') ###Output _____no_output_____ ###Markdown **Test** ###Code full_tweet.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2342 entries, 0 to 2355 Data columns (total 20 columns): tweet_id 2342 non-null int64 in_reply_to_status_id 77 non-null float64 in_reply_to_user_id 77 non-null float64 timestamp 2342 non-null object source 2342 non-null object text 2342 non-null object retweeted_status_id 179 non-null float64 retweeted_status_user_id 179 non-null float64 expanded_urls 2283 non-null object rating_numerator 2342 non-null int64 rating_denominator 2342 non-null int64 name 2342 non-null object count_retweet 2325 non-null float64 count_likes 2325 non-null float64 jpg_url 2062 non-null object img_num 2062 non-null float64 breed 1739 non-null object is_dog 2062 non-null object conf_interval 2062 non-null float64 stage 2342 non-null object dtypes: float64(8), int64(3), object(9) memory usage: 384.2+ KB 
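A side note on integer columns with missing values: instead of `fillna(0).astype(int)`, pandas' nullable `Int64` dtype keeps the missingness visible rather than overwriting it with a fake zero. A sketch on toy values:

```python
import numpy as np
import pandas as pd

# A count column with a gap, as after a left merge (toy values).
s = pd.Series([8853.0, np.nan, 120.0])

# The nullable Int64 dtype stores the missing entry as <NA> instead of
# forcing a placeholder 0 the way fillna(0).astype(int) does.
counts = s.astype('Int64')
print(int(counts.isna().sum()))  # → 1
```

The trade-off is that a real 0 and a missing value stay distinguishable, at the cost of downstream code needing to handle `<NA>`.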
###Markdown **Define*** Fix incorrect data types **Code** ###Code full_tweet['timestamp'] = pd.to_datetime(full_tweet['timestamp']) full_tweet['count_retweet'] = full_tweet['count_retweet'].fillna(0).astype(int) full_tweet['count_likes'] = full_tweet['count_likes'].fillna(0).astype(int) full_tweet['in_reply_to_status_id'] = full_tweet['in_reply_to_status_id'].fillna(0).astype(int) full_tweet['in_reply_to_user_id'] = full_tweet['in_reply_to_user_id'].fillna(0).astype(int) full_tweet['img_num'] = full_tweet['img_num'].fillna(0).astype(int) full_tweet['stage'] = full_tweet['stage'].astype('category') ###Output _____no_output_____ ###Markdown **Test** ###Code full_tweet.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2342 entries, 0 to 2355 Data columns (total 20 columns): tweet_id 2342 non-null int64 in_reply_to_status_id 2342 non-null int64 in_reply_to_user_id 2342 non-null int64 timestamp 2342 non-null datetime64[ns] source 2342 non-null object text 2342 non-null object retweeted_status_id 179 non-null float64 retweeted_status_user_id 179 non-null float64 expanded_urls 2283 non-null object rating_numerator 2342 non-null int64 rating_denominator 2342 non-null int64 name 2342 non-null object count_retweet 2342 non-null int64 count_likes 2342 non-null int64 jpg_url 2062 non-null object img_num 2342 non-null int64 breed 1739 non-null object is_dog 2062 non-null object conf_interval 2062 non-null float64 stage 2342 non-null category dtypes: category(1), datetime64[ns](1), float64(3), int64(8), object(7) memory usage: 368.4+ KB ###Markdown **Define*** Change None in the entire df and "" in stage to NaN **Code** ###Code #name, stage full_tweet.replace('None', pd.np.nan, inplace=True) full_tweet['stage'].replace((''), pd.np.nan, inplace=True) ###Output _____no_output_____ ###Markdown **Define*** Remove RTs from the data set **Code** ###Code # 4 RTs in the tweet list full_tweet.drop(full_tweet[full_tweet.text.str.contains("^RT") == True].index, axis = 0, 
inplace = True) ###Output _____no_output_____ ###Markdown **Test** (test for the previous 2 steps) ###Code full_tweet.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2161 entries, 0 to 2355 Data columns (total 20 columns): tweet_id 2161 non-null int64 in_reply_to_status_id 2161 non-null int64 in_reply_to_user_id 2161 non-null int64 timestamp 2161 non-null datetime64[ns] source 2161 non-null object text 2161 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 expanded_urls 2103 non-null object rating_numerator 2161 non-null int64 rating_denominator 2161 non-null int64 name 1490 non-null object count_retweet 2161 non-null int64 count_likes 2161 non-null int64 jpg_url 1982 non-null object img_num 2161 non-null int64 breed 1675 non-null object is_dog 1982 non-null object conf_interval 1982 non-null float64 stage 332 non-null category dtypes: category(1), datetime64[ns](1), float64(3), int64(8), object(7) memory usage: 340.0+ KB ###Markdown **Define*** Remove incorrect names from the column 'name' **Code** ###Code full_tweet['name'].value_counts() #Example --> https://twitter.com/dog_rates/status/666407126856765440 # target only the 'name' column, so these words are not replaced elsewhere in the frame full_tweet['name'].replace(['a','an','the'], pd.np.nan, inplace=True) ###Output _____no_output_____ ###Markdown **Test** ###Code full_tweet['name'].value_counts() ###Output _____no_output_____ ###Markdown **Define*** Get the tweet source in a separate column, rather than embedded in an HTML tag **Code** ###Code full_tweet['source'].value_counts() full_tweet['tw_source'] = full_tweet['source'].str.extract(r'^(?:.+>)([^\<\>]+)(?:<.+)$', expand=True) ###Output _____no_output_____ ###Markdown **Test** ###Code full_tweet['tw_source'].value_counts() ###Output _____no_output_____ ###Markdown **Define*** Fix syntax and standardize the dog breed naming convention: convert to lower case and use "_" instead of "-" **Code** ###Code #### Dog breed syntax inconsistencies #### #curly-coated_retriever |
black-and-tan_coonhound | Greater_Swiss_Mountain_dog full_tweet['breed'].value_counts().sort_values() full_tweet['breed'] = full_tweet['breed'].str.lower() full_tweet['breed'] = full_tweet['breed'].str.replace('-','_') ###Output _____no_output_____ ###Markdown **Test** ###Code # the breed names are lower-cased now, so look this one up by its new spelling full_tweet[full_tweet['breed'] == 'greater_swiss_mountain_dog'] ###Output _____no_output_____ ###Markdown **Define** * Rename columns to get better insights on the content **Code** ###Code full_tweet.rename(columns={'timestamp': 'tw_creation_date', 'source': 'tw_device_source','text': 'tw_text','stage': 'dog_stage','rating_numerator': 'numerator','rating_denominator': 'denominator'}, inplace=True) ###Output _____no_output_____ ###Markdown **Test** ###Code full_tweet.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2161 entries, 0 to 2355 Data columns (total 21 columns): tweet_id 2161 non-null int64 in_reply_to_status_id 2161 non-null int64 in_reply_to_user_id 2161 non-null int64 tw_creation_date 2161 non-null datetime64[ns] tw_device_source 2161 non-null object tw_text 2161 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 expanded_urls 2103 non-null object numerator 2161 non-null int64 denominator 2161 non-null int64 name 1421 non-null object count_retweet 2161 non-null int64 count_likes 2161 non-null int64 jpg_url 1982 non-null object img_num 2161 non-null int64 breed 1675 non-null object is_dog 1982 non-null object conf_interval 1982 non-null float64 dog_stage 332 non-null category tw_source 2161 non-null object dtypes: category(1), datetime64[ns](1), float64(3), int64(8), object(8) memory usage: 356.8+ KB ###Markdown **Define** * Fix the content of the numerator column so decimal ratings (e.g. 13.5) are captured in full, not just the digits after the decimal point **Code** ###Code full_tweet['numerator'] = full_tweet['tw_text'].str.extract('((?:\d+\.)?\d+)\/(?:\d+)', expand=True) ###Output _____no_output_____ ###Markdown **Test** ###Code
full_tweet['numerator'].value_counts() ###Output _____no_output_____ ###Markdown **Define** Drop rows with denominators different from 10 **Code** ###Code full_tweet = full_tweet[full_tweet['denominator'] == 10] ###Output _____no_output_____ ###Markdown **Test** ###Code full_tweet.query('denominator != 10')['denominator'].value_counts() #store as csv full_tweet.to_csv('data/twitter_archive_master.csv', index = False) ###Output _____no_output_____ ###Markdown Analysis Now, after combining all of the data sources, let's use this dataset to collect some insights about WeRateDogs, its dogs and its followers Most retweeted & liked Dog ###Code full_tweet.describe() ###Output _____no_output_____ ###Markdown From the information above, we know that the highest number of retweets is **82727** and the highest number of likes is **162612**, so let's identify which dog(s) are the ones with such high numbers ###Code #Get max retweet information full_tweet.loc[full_tweet['count_retweet'].idxmax()] #Get max likes information full_tweet.loc[full_tweet['count_likes'].idxmax()] ###Output _____no_output_____ ###Markdown Both maxima belong to the same tweet, the one with id 744234799360020481: an unnamed labrador retriever whose dog stage is **doggo** Retweets vs Likes ###Code fig = plt.figure(figsize=(10,5)) fig.suptitle('Retweet vs Likes') plt.scatter(data=full_tweet, y='count_retweet', x= 'count_likes', alpha = 0.4); ###Output _____no_output_____ ###Markdown From before we could identify that the dog with the highest number of retweets was also the one with the most likes, and according to this chart there is clearly a positive correlation between retweets and likes Most predicted Dog Breed From the image prediction files, we got a list of potential predictions, so let's list the ones that are most recurrent ###Code ax = full_tweet['breed'].value_counts().head().plot.barh(color = ['grey']) ax.set_title('Top 5 Predicted Dog Breed') ax.set_xlabel("Number of
Tweets") ax.set_ylabel("Breed") ax.set_title("Most Popular Dog Breeds (top 5)"); plt.show() ###Output _____no_output_____ ###Markdown In this case golden retrievers are the clear winners with more than 150 occurrences, the 2nd place belongs to labrador retrievers with a bit more than 100 matches. Image Predictions: Is that a dog? ###Code #Total identified breeds total_breeds = len(full_tweet['breed'].unique())+1 print("There are a total of {} breeds identified".format(total_breeds)) fig = plt.figure(figsize=(10,5)) fig.suptitle('Distribution of trips per Age Group') sb.countplot(data=full_tweet, x='is_dog') plt.ylabel('Number of trips') plt.xlabel('Age group (years)'); ###Output _____no_output_____ ###Markdown Earlier we determined what was the most predicted breed, and based on the previous chart we know that more than 1000 tweets were identified a positive for having dogs and assigned a corresponding breed Most common dog stage If you follow WeRateDogs, you know already that there is a stage (associated to the dog age) where each dog is placed. So let's see all the data to verify what is the most common stage among all: ###Code fig = plt.figure(figsize=(10,5)) fig.suptitle('Dog Stage Distribution') sb.countplot(data=full_tweet, x='dog_stage') plt.ylabel('Tweets') plt.xlabel('Dog (stage)'); ###Output _____no_output_____ ###Markdown Puppers are the 1 stage in this data set Twitter Engagement: Mobile vs Desktop? 
###Code fig = plt.figure(figsize=(10,5)) fig.suptitle('Distribution of tweets per source') sb.countplot(data=full_tweet, x='tw_source') plt.ylabel('Number of tweets') plt.xlabel('Source'); ###Output _____no_output_____ ###Markdown Gathering Data ###Code import tweepy import requests import pandas as pd import json import time import math ###Output _____no_output_____ ###Markdown Given tweet data stored in `df` ###Code # Import the Twitter archive CSV file into a DataFrame df = pd.read_csv('twitter-archive-enhanced.csv') ###Output _____no_output_____ ###Markdown Data gathered via requests library - Downloaded via given url ###Code url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open('image_predictions.tsv', mode='wb') as file: file.write(response.content) # Import the tweet image predictions TSV file into a DataFrame img_df = pd.read_csv('image_predictions.tsv', sep='\t') # Declare Twitter API keys and access tokens # credentials redacted -- never commit real keys; load them from a local, untracked file consumer_key = 'REDACTED' consumer_secret = 'REDACTED' access_token = 'REDACTED' access_secret = 'REDACTED' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth_handler=auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) ###Output _____no_output_____ ###Markdown Data gathering via Twitter API. Tweet data is stored in `tweet_json.txt` and error/no-response tweet IDs are stored in `getstatus_error.txt` ###Code # Using the tweet IDs in the Twitter archive, query the Twitter API for each tweet's JSON start = time.time() # start timer with open('getstatus_error.txt', 'w') as errfile: valid_ids = 0 err_ids = 0 tweet_ids = df.tweet_id with open('tweet_json.txt', 'w', encoding='utf-8') as outfile: for i, tweet_id in tweet_ids.iteritems(): try:
print("%s# %s" % (str(i+1), tweet_id)) # Get tweet data using Twitter API tweet = api.get_status(tweet_id, tweet_mode='extended') json_content = tweet._json # Write each tweet's JSON data to its own line in a file json.dump(json_content, outfile) outfile.write('\n') valid_ids += 1 except tweepy.TweepError as e: err_ids += 1 err_str = [] err_str.append(str(tweet_id)) err_str.append(': ') err_str.append(e.response.json()['errors'][0]['message']) err_str.append('\n') errfile.write(''.join(err_str)) print(''.join(err_str)) continue print("%s %s" % ('Valid tweets:', valid_ids)) print("%s %s" % ('Error tweets:', err_ids)) end = time.time() # end timer print((end - start)/60) # time.time() returns seconds, so divide by 60 for elapsed minutes # List of dictionaries to read tweet's JSON data line by line and later convert to a DataFrame df_list = [] with open('tweet_json.txt', 'r') as json_file: for line in json_file: status = json.loads(line) # Append to list of dictionaries df_list.append({'tweet_id': status['id'], 'retweet_count': status['retweet_count'], 'favorite_count': status['favorite_count'], 'display_text_range': status['display_text_range'] }) # Create a DataFrame with tweet ID, retweet count, favorite count and display_text_range status_df = pd.DataFrame(df_list, columns = ['tweet_id', 'retweet_count', 'favorite_count', 'display_text_range']) ###Output _____no_output_____ ###Markdown --- Assessing Data Visual and Programmatic Assessment Quality and Tidiness issues mentioned below are obtained from both Visual and Programmatic assessment ###Code df.sample(5) df.info() # Checking for retweets len(df[df.retweeted_status_id.isnull() == False]) # Denominator values df.rating_denominator.value_counts().sort_index() # Numerator values df.rating_numerator.value_counts().sort_index() df.name.value_counts().sort_index(ascending=False) img_df.sample(5) img_df.info() status_df.sample(5) status_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2333 entries, 0 to 2332 Data columns (total 4 columns): tweet_id 2333
non-null int64 retweet_count 2333 non-null int64 favorite_count 2333 non-null int64 display_text_range 2333 non-null object dtypes: int64(3), object(1) memory usage: 73.0+ KB ###Markdown Quality `df` (Twitter archive) table- contains retweets and therefore, duplicates- many *tweet_id*(s) of `df` table are missing in `img_df` (image predictions) table- change in datatypes (*in_reply_to_status_id, in_reply_to_user_id and timestamp* columns)- unnecessary html tags in *source* column in place of actual source name- *rating_denominator* column has values other than 10- replace dog names starting with lowercase characters (e.g. a, an, actually, by)- some records have more than one dog stage Tidiness- `df` without any duplicates (i.e. retweets) will have empty *retweeted_status_id, retweeted_status_user_id* and *retweeted_status_timestamp* columns, which can be dropped- *doggo, floofer, pupper* and *puppo* columns in `df` table should be merged into one column named *"stage"*- *retweet_count* and *favorite_count* columns from `status_df` (tweet status) table should be joined with `df` table --- Cleaning Data ###Code # Take a copy of df on which the cleaning tasks will be performed archive_clean = df.copy() ###Output _____no_output_____ ###Markdown Quality `df`: contains retweets and therefore, duplicates DefineKeep only those rows in `df` table that are original tweets and not retweets Code ###Code archive_clean = archive_clean[archive_clean.retweeted_status_id.isnull()] ###Output _____no_output_____ ###Markdown Test ###Code len(archive_clean[archive_clean.retweeted_status_id.isnull() == False]) ###Output _____no_output_____ ###Markdown `df`: many *tweet_id*(s) of `df` table are missing in `img_df` (image predictions) table DefineKeep only those records in `df` table whose *tweet_id* exists in `img_df` table Code ###Code archive_clean = archive_clean[archive_clean.tweet_id.isin(img_df.tweet_id)] ###Output _____no_output_____ ###Markdown Test ###Code 
len(archive_clean[~archive_clean.tweet_id.isin(img_df.tweet_id)]) ###Output _____no_output_____ ###Markdown Tidiness `df` table without any duplicates (i.e. retweets) have empty *retweeted_status_id, retweeted_status_user_id* and *retweeted_status_timestamp* columns, which can be dropped ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 1994 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 1994 non-null object source 1994 non-null object text 1994 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 1994 non-null object rating_numerator 1994 non-null int64 rating_denominator 1994 non-null int64 name 1994 non-null object doggo 1994 non-null object floofer 1994 non-null object pupper 1994 non-null object puppo 1994 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 280.4+ KB ###Markdown DefineDrop *retweeted_status_id, retweeted_status_user_id* and *retweeted_status_timestamp* columns from `df` table Code ###Code archive_clean.drop(['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 2355 Data columns (total 14 columns): tweet_id 1994 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 1994 non-null object source 1994 non-null object text 1994 non-null object expanded_urls 1994 non-null object rating_numerator 1994 non-null int64 rating_denominator 1994 non-null int64 name 1994 non-null object doggo 1994 non-null object floofer 1994 non-null object pupper 1994 non-null object puppo 1994 non-null object dtypes: float64(2), 
int64(3), object(9) memory usage: 233.7+ KB ###Markdown Quality `df`: Correction of datatypes (*in_reply_to_status_id, in_reply_to_user_id* and *timestamp* columns) Define Convert *in_reply_to_status_id* and *in_reply_to_user_id* to data type integer. Convert *timestamp* to datetime data type ###Code import numpy as np ###Output _____no_output_____ ###Markdown Code ###Code archive_clean.in_reply_to_status_id = archive_clean.in_reply_to_status_id.fillna(0) archive_clean.in_reply_to_user_id = archive_clean.in_reply_to_user_id.fillna(0) archive_clean.in_reply_to_status_id = archive_clean.in_reply_to_status_id.astype(np.int64) archive_clean.in_reply_to_user_id = archive_clean.in_reply_to_user_id.astype(np.int64) archive_clean.timestamp = pd.to_datetime(archive_clean.timestamp) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 2355 Data columns (total 14 columns): tweet_id 1994 non-null int64 in_reply_to_status_id 1994 non-null int64 in_reply_to_user_id 1994 non-null int64 timestamp 1994 non-null datetime64[ns, UTC] source 1994 non-null object text 1994 non-null object expanded_urls 1994 non-null object rating_numerator 1994 non-null int64 rating_denominator 1994 non-null int64 name 1994 non-null object doggo 1994 non-null object floofer 1994 non-null object pupper 1994 non-null object puppo 1994 non-null object dtypes: datetime64[ns, UTC](1), int64(5), object(8) memory usage: 233.7+ KB ###Markdown `df`: Cleanup of source column - Removing html tags Define Strip all html anchor tags (i.e. `<a ...>` and `</a>`) in the *source* column and retain just the text in between the tags.
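The stripping pattern can be exercised on a standalone string first; the sample value below mirrors Twitter's source HTML format and is illustrative only:

```python
import re

# same pattern as the cleaning step: matches an opening <a ...> tag or a closing </a> tag
pattern = r'<(?:a\b[^>]*>|/a>)'

sample = '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>'
cleaned = re.sub(pattern, '', sample)
print(cleaned)  # Twitter for iPhone
```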
Code ###Code archive_clean.source = archive_clean.source.str.replace(r'<(?:a\b[^>]*>|/a>)', '') archive_clean.source = archive_clean.source.astype('category') ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.source.value_counts() ###Output _____no_output_____ ###Markdown `df`: *rating_denominator* column has values other than 10 DefineFor records whose *rating_denominator* is greater than 10 and divisible by 10, use the quotient as the divisor to divide the *rating_numerator*. If the numerator turns out to be divisible (i.e. remainder=0), assign this quotient as the *rating_numerator*.For the remaining records, check if the *text* column contains any fraction whose denominator is 10. If it does, update the *rating_denominator* to 10. Additionally, update the *rating_numerator* with the numerator value of this fraction. Code ###Code import re # regex to match fractions pattern = "\s*(\d+([.]\d+)?([/]\d+))" # function which will match the above pattern and return an array of fractions, if any def tokens(x): return [m.group(1) for m in re.finditer(pattern, x)] # iterate through all those records whose rating_denominator is not 10 for i, row in archive_clean[archive_clean.rating_denominator != 10].iterrows(): d = row.rating_denominator # if rating_denominator is greater than 10 and divisible by 10 if d > 10 and d%10 == 0: # assign divisor as the quotient divisor = d/10 n = row.rating_numerator # if rating_numerator is greater than 10 and divisible by the divisor if n%divisor == 0: # reassign rating_denominator as 10 archive_clean.set_value(i, 'rating_denominator', 10) # reassign rating_numerator as the quotient of rating_numerator by divisor archive_clean.set_value(i, 'rating_numerator', int(n/divisor)) # for all those records whose rating_denominator is either less than 10 or not divisible by 10 else: # extract all fractions(ratings) from text using tokens function ratings = tokens(row.text) # iterate through all the fractions for rating in 
ratings: # if denominator of any such fraction is equal to 10 if rating.split('/')[1] == '10': # reassign rating_denominator as 10 archive_clean.set_value(i, 'rating_denominator', 10) # reassign rating_numerator as the numerator value of this fraction archive_clean.set_value(i, 'rating_numerator', int(round(float(rating.split('/')[0])))) break ###Output /anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:14: FutureWarning: set_value is deprecated and will be removed in a future release. Please use .at[] or .iat[] accessors instead /anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:16: FutureWarning: set_value is deprecated and will be removed in a future release. Please use .at[] or .iat[] accessors instead app.launch_new_instance() /anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:27: FutureWarning: set_value is deprecated and will be removed in a future release. Please use .at[] or .iat[] accessors instead /anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:29: FutureWarning: set_value is deprecated and will be removed in a future release. Please use .at[] or .iat[] accessors instead ###Markdown Test ###Code archive_clean.rating_denominator.value_counts() ###Output _____no_output_____ ###Markdown `df`: erroneous dog names starting with lowercase characters (e.g. a, an, actually, by) DefineReplace all lowercase values of *name* column with None Code ###Code archive_clean['name'][archive_clean['name'].str.match('[a-z]+')] = 'None' ###Output /anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy """Entry point for launching an IPython kernel. 
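The lowercase check used in the cell above can be verified in isolation on a toy Series of names (sample values only); the `.loc` form shown here also avoids the SettingWithCopyWarning seen above:

```python
import pandas as pd

names = pd.Series(['Charlie', 'a', 'quite', 'Lucy', 'an'])

# str.match anchors at the start, so '[a-z]+' flags names beginning with a lowercase letter
mask = names.str.match('[a-z]+')
flagged = names[mask].tolist()
print(flagged)  # ['a', 'quite', 'an']

# idiomatic in-place replacement via .loc
names.loc[mask] = 'None'
print(names.tolist())  # ['Charlie', 'None', 'None', 'Lucy', 'None']
```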
###Markdown Test ###Code archive_clean.name[archive_clean.name == 'None'].value_counts() # Sort ascending by name to check if there are more names starting with a lowercase alphabet archive_clean.name.value_counts().sort_index(ascending=False) ###Output _____no_output_____ ###Markdown `df`: some records have more than one dog stage ###Code print(len(archive_clean[(archive_clean.doggo != 'None') & (archive_clean.floofer != 'None')])) print(len(archive_clean[(archive_clean.doggo != 'None') & (archive_clean.puppo != 'None')])) print(len(archive_clean[(archive_clean.doggo != 'None') & (archive_clean.pupper != 'None')])) ###Output 1 1 9 ###Markdown DefineThere is one record that has both *doggo* and *floofer* and another record that has both *doggo* and *puppo*. For these 2 records, take a look at the text manually to decide one dog stage for each of them. For ambiguous texts, set both the column values as None.There are 9 records which have both *doggo* and *pupper*. As per the dogtionary, *doggo* and *pupper* are sometimes used interchangeably. Therefore, set *pupper* column as None for these 9 records. 
Code ###Code for i, row in archive_clean[((archive_clean.doggo != 'None') & (archive_clean.floofer != 'None')) | ((archive_clean.doggo != 'None') & (archive_clean.puppo != 'None'))].iterrows(): print('%s %s\n'%(row.tweet_id, row.text)) # based on the above texts, doggo should be set as None for both the records archive_clean['doggo'][archive_clean.tweet_id.isin([855851453814013952, 854010172552949760])] = 'None' # set pupper column as None for records which have both doggo and pupper archive_clean['pupper'][(archive_clean.doggo != 'None') & (archive_clean.pupper != 'None')] = 'None' ###Output /anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy ###Markdown Test ###Code len(archive_clean[((archive_clean.doggo != 'None') & (archive_clean.pupper != 'None')) | ((archive_clean.doggo != 'None') & (archive_clean.floofer != 'None')) | ((archive_clean.doggo != 'None') & (archive_clean.puppo != 'None'))]) ###Output _____no_output_____ ###Markdown Tidiness doggo, floofer, pupper and puppo columns in `df` table should be merged into one column named "stage" ###Code archive_clean.doggo.value_counts() archive_clean.floofer.value_counts() archive_clean.pupper.value_counts() archive_clean.puppo.value_counts() ###Output _____no_output_____ ###Markdown DefineMerge the *doggo*, *floofer*, *pupper* and *puppo* columns to a *stage* column. Convert the datatype from string to categorical as it helps with analysis and visualization and saves memory on disk.Drop the *doggo*, *floofer*, *pupper* and *puppo* columns. 
Code ###Code # merge the doggo, floofer, pupper and puppo columns to a stage column archive_clean['stage'] = archive_clean[['doggo', 'floofer', 'pupper', 'puppo']].max(axis=1) # convert the datatype from string to categorical archive_clean.stage = archive_clean.stage.astype('category') # drop the doggo, floofer, pupper and puppo columns archive_clean.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.stage.value_counts() status_df.head() archive_clean1 = pd.merge(archive_clean, status_df, on=['tweet_id','tweet_id']) archive_clean1.head() archive_clean1.info() archive_clean1.favorite_count.mean() ###Output _____no_output_____ ###Markdown --- Storing Data ###Code archive_clean1.to_csv('twitter_archive_master.csv', encoding='utf-8', index=False) ###Output _____no_output_____ ###Markdown Analysing Data Number of tweets over time ###Code df['timestamp'].apply(lambda x: x.strftime('%Y-%m')).value_counts().sort_index() ###Output _____no_output_____ ###Markdown Most used Twitter source ###Code df['source'].value_counts() ###Output _____no_output_____ ###Markdown Majority of tweets were posted from an iPhone. 
Most common rating given (Numerators) ###Code df['rating_numerator'].value_counts().sort_index() df['rating_numerator'][df['rating_numerator'] > 10].value_counts().sum() ###Output _____no_output_____ ###Markdown Retweet and favorite counts ###Code print('%s\t%s' % ('Mean Retweet Count', round(archive_clean1.retweet_count.mean()))) print('%s\t%s' % ('Mean Favorite Count', round(archive_clean1.favorite_count.mean()))) ###Output Mean Retweet Count 2530 Mean Favorite Count 8479 ###Markdown Famous Dogs (Rating > 10) ###Code print('%s\t%s' % ('Mean Retweet Count', round(archive_clean1.retweet_count[archive_clean1.rating_numerator > 10].mean()))) print('%s\t%s' % ('Mean Favorite Count', round(archive_clean1.favorite_count[archive_clean1.rating_numerator > 10].mean()))) ###Output Mean Retweet Count 3536 Mean Favorite Count 12259 ###Markdown Famous Categories ###Code print('Doggo') print('%s\t%s' % ('Mean Retweet Count', round(archive_clean1.retweet_count[archive_clean1.stage == 'doggo'].mean()))) print('%s\t%s' % ('Mean Favorite Count', round(archive_clean1.favorite_count[archive_clean1.stage == 'doggo'].mean()))) print('Floofer') print('%s\t%s' % ('Mean Retweet Count', round(archive_clean1.retweet_count[archive_clean1.stage == 'floofer'].mean()))) print('%s\t%s' % ('Mean Favorite Count', round(archive_clean1.favorite_count[archive_clean1.stage == 'floofer'].mean()))) print('Pupper') print('%s\t%s' % ('Mean Retweet Count', round(archive_clean1.retweet_count[archive_clean1.stage == 'pupper'].mean()))) print('%s\t%s' % ('Mean Favorite Count', round(archive_clean1.favorite_count[archive_clean1.stage == 'pupper'].mean()))) print('Puppo') print('%s\t%s' % ('Mean Retweet Count', round(archive_clean1.retweet_count[archive_clean1.stage == 'puppo'].mean()))) print('%s\t%s' % ('Mean Favorite Count', round(archive_clean1.favorite_count[archive_clean1.stage == 'puppo'].mean()))) ###Output Doggo Mean Retweet Count 6367 Mean Favorite Count 18520 Floofer Mean Retweet Count 4322 Mean 
Favorite Count 12850 Pupper Mean Retweet Count 2155 Mean Favorite Count 6840 Puppo Mean Retweet Count 6512 Mean Favorite Count 22318 ###Markdown Commonly used Names ###Code archive_clean1.name.value_counts() ###Output _____no_output_____ ###Markdown ---- Visualizing Data ###Code import matplotlib.pyplot as plt %matplotlib inline plt.rcParams["figure.figsize"] = [12, 9] ax = archive_clean1.rating_numerator.value_counts().sort_index().plot('bar', title = 'Dog Rating distribution') ax.set_xlabel("Rating out of 10") ax.set_ylabel("Number of Dogs") ax.set_yticks([0, 50, 100, 150, 200, 250, 300, 350, 400, 450]) plt.savefig('rating_dist') df.name.value_counts()[1:7].plot('line', figsize=(11,5), title='Most used dog names').set_xlabel("Number of Dogs") plt.savefig('dog_names') ###Output _____no_output_____ ###Markdown Gather ###Code import pandas as pd import requests import tweepy import json import re import matplotlib.pyplot as plt import numpy as np import statsmodels.api as sm import random %matplotlib inline random.seed(42) # read in the twitter data from file on hand twitter_arch = pd.read_csv('twitter-archive-enhanced.csv') # download the image predictions data url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' r = requests.get(url) with open(url.split('/')[-1],'wb') as file: file.write(r.content) # read in the image predictions data image_preds = pd.read_csv('image-predictions.tsv',sep='\t') # access the twitter api with open('twitter_dev_keys.txt','r') as file: keys = file.readlines() keys = [key.rstrip('\n') for key in keys] consumer_key = keys[1] consumer_secret = keys[3] access_token = keys[7] access_token_secret = keys[9] auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) # download the tweets & write them with open('tweet_json.txt','a') as file: for i in 
twitter_arch['tweet_id'].items(): # in case tweet has been deleted try: tweet = api.get_status(i[1], tweet_mode='extended') json.dump(tweet._json, file) file.write('\n') except tweepy.TweepError: print('id {} not available'.format(i[1])) # read json data into dataframe # note here this is a solution from Stack Overflow; json.load(file) fails, apparently if have multiple objects with open('tweet_json.txt') as jfile: jtweet = [json.loads(line) for line in jfile] ext_tweet = pd.DataFrame.from_dict(jtweet) # just keep the relevant columns ext_tweet = ext_tweet[['id','id_str','retweet_count','favorite_count']] ###Output _____no_output_____ ###Markdown Assess ###Code display(twitter_arch.head()) display(twitter_arch.tail()) display(twitter_arch.sample(20)) display(twitter_arch.info()) display(twitter_arch.describe()) display(twitter_arch['rating_numerator'].value_counts()) display(twitter_arch['rating_denominator'].value_counts()) display(twitter_arch[twitter_arch['rating_denominator'] != 10]) display(twitter_arch['text'][313]) display(twitter_arch['text'][342]) display(twitter_arch['text'][433]) display(twitter_arch['text'][516]) display(twitter_arch['text'][902]) display(twitter_arch['text'][1068]) display(twitter_arch['text'][1120]) display(twitter_arch['text'][1165]) display(twitter_arch['text'][1202]) display(twitter_arch['text'][1228]) display(twitter_arch['text'][1254]) display(twitter_arch['text'][1274]) display(twitter_arch['text'][1351]) display(twitter_arch['text'][1433]) display(twitter_arch['text'][1598]) display(twitter_arch['text'][1634]) display(twitter_arch['text'][1635]) display(twitter_arch['text'][1662]) display(twitter_arch['text'][1663]) display(twitter_arch['text'][1779]) display(twitter_arch['text'][1843]) display(twitter_arch['text'][2335]) display(twitter_arch[(twitter_arch['rating_numerator'] > 20) & (twitter_arch['rating_denominator'] == 10)]) display(twitter_arch['text'][188]) display(twitter_arch['text'][189]) 
display(twitter_arch['text'][290]) display(twitter_arch['text'][340]) display(twitter_arch['text'][695]) display(twitter_arch['text'][763]) display(twitter_arch['text'][979]) display(twitter_arch['text'][1712]) display(twitter_arch['text'][2074]) display(twitter_arch['name'].value_counts()[0:59]) display(twitter_arch['name'].value_counts()[60:119]) # non-names appear to be lowercase. not_names = twitter_arch['name'][twitter_arch['name'].str.match(r'^[a-z]')==True].value_counts() display(not_names) # see if some of the names are discernible from text display(twitter_arch['text'][twitter_arch['name'].str.match(r'^[a-z]')==True]) display(twitter_arch['text'][twitter_arch['name'].str.match(r'^[a-z]')==True][22]) display(twitter_arch['text'][twitter_arch['name'].str.match(r'^[a-z]')==True][56]) display(twitter_arch['text'][twitter_arch['name'].str.match(r'^[a-z]')==True][118]) display(twitter_arch['text'][twitter_arch['name'].str.match(r'^[a-z]')==True][2354]) display(twitter_arch['text'][twitter_arch['name'].str.match(r'^[a-z]')==True][2353]) display(twitter_arch['text'][twitter_arch['name'].str.match(r'^[a-z]')==True][2352]) # see if consistent tagging of non-dog entries # see if some of the names are discernible from text display(twitter_arch[twitter_arch['text'].str.match(r'.+[Ww]e only rate dogs.+')==True]) # most of those look like non-dogs display(twitter_arch['text'][25]) display(twitter_arch['doggo'].value_counts()) display(twitter_arch['floofer'].value_counts()) display(twitter_arch['pupper'].value_counts()) display(twitter_arch['puppo'].value_counts()) # these fields should be exclusive, but in tidying found some have multiple stage entries twitter_arch[(twitter_arch['doggo'] != "None") & (twitter_arch['floofer']!="None")] # this appears to be a non-dog display(twitter_arch['text'][200]) twitter_arch[(twitter_arch['doggo'] != "None") & (twitter_arch['puppo']!="None")] # this appears to be parsed incorrectly due to "doggo" also being part of the tweet 
display(twitter_arch['text'][191]) # based on the texts of these, most appear to be multiple or non-dog twitter_arch[(twitter_arch['doggo'] != "None") & (twitter_arch['pupper']!="None")] # look at individual entries for those not obvious from snippets above display(twitter_arch['text'][460]) display(twitter_arch['text'][575]) display(twitter_arch['text'][705]) display(twitter_arch['text'][889]) display(twitter_arch['text'][1063]) twitter_arch[(twitter_arch['puppo'] != "None") & (twitter_arch['pupper']!="None")] twitter_arch[(twitter_arch['floofer'] != "None") & (twitter_arch['pupper']!="None")] twitter_arch[(twitter_arch['puppo'] != "None") & (twitter_arch['floofer']!="None")] display(image_preds.head()) display(image_preds.tail()) display(image_preds.sample(20)) display(image_preds.info()) display(image_preds.describe()) display(image_preds['tweet_id'].nunique()) display(image_preds['p1'].value_counts()) display(ext_tweet.head()) display(ext_tweet.tail()) display(ext_tweet.info()) display(ext_tweet.describe()) ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 id 2331 non-null int64 1 id_str 2331 non-null object 2 retweet_count 2331 non-null int64 3 favorite_count 2331 non-null int64 dtypes: int64(3), object(1) memory usage: 73.0+ KB ###Markdown Clean Quality`twitter_arch` contains retweets, which we want to exclude DefineDelete retweet entries Code ###Code # back up data first twitter_archbu = twitter_arch.copy() twitter_archbu = twitter_archbu[twitter_archbu['retweeted_status_id'].isnull()] ###Output _____no_output_____ ###Markdown Test ###Code twitter_archbu['retweeted_status_id'].value_counts() twitter_archbu.sample(10) ###Output _____no_output_____ ###Markdown Code ###Code # the retweeted columns are now empty, so delete them
twitter_archbu.drop(columns=['retweeted_status_id','retweeted_status_user_id','retweeted_status_timestamp'], inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archbu.head() # looks good, copy back twitter_arch = twitter_archbu.copy() ###Output _____no_output_____ ###Markdown `twitter_arch` contains tweets about non-dog things, which we want to exclude DefineDelete the non-dog entries. Since [@dograte's](https://twitter.com/dog_rates) replies to these consistently include "we only rate dogs", it is assumed any entry with this phrase in `text` is a non-dog entry. Code ###Code twitter_archbu = twitter_archbu[twitter_archbu['text'].str.match(r'.+[Ww]e only rate dogs.+')==False] ###Output _____no_output_____ ###Markdown Test ###Code display(twitter_archbu[twitter_archbu['text'].str.match(r'.+[Ww]e only rate dogs.+')==True]) # looks good, so copy back twitter_arch = twitter_archbu.copy() ###Output _____no_output_____ ###Markdown `twitter_arch` contains ratings data that aren't ratings DefineReplace the erroneous ratings with actual ratings from `text` or delete if no rating was found in `text`. 
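The fixes below were applied by hand after reading each flagged tweet. The same extraction can also be done programmatically with a regex; a minimal sketch on hypothetical tweet texts (not taken from the archive):

```python
import pandas as pd

# Hypothetical tweet texts (not from the archive) illustrating the extraction.
toy = pd.DataFrame({"text": [
    "This is Bella. She hopes her smile made you smile. 13.5/10",
    "Meet Sam. She smiles 24/7 and aspires to be a reindeer. 11/10",
]})

# Capture the LAST numerator/denominator pair in each tweet; earlier
# fractions (like 24/7) are usually not ratings.
pattern = r"(\d+(?:\.\d+)?)/(\d+)(?!.*\d+(?:\.\d+)?/\d+)"
extracted = toy["text"].str.extract(pattern).astype(float)
extracted.columns = ["rating_numerator", "rating_denominator"]
print(extracted)
```

Manual inspection stays the safer route for the handful of ambiguous tweets, which is the approach taken below.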
Code ###Code # these are the entries that appear to be parsing errors and had correct-appearing ratings in the tweet rating_fix = [{'index': 313, 'numerator':13},{'index':1068, 'numerator':14},{'index':1165, 'numerator':13},\ {'index':1202, 'numerator':11},{'index':1662, 'numerator':10}, {'index':2335, 'numerator':9},\ {'index':695, 'numerator':10},{'index':763, 'numerator':11}] rate_fix = pd.DataFrame(rating_fix) rate_fix.head() twitter_archbu = twitter_arch.copy() for i in rate_fix['index']: twitter_archbu.loc[i,'rating_numerator'] = rate_fix.loc[(rate_fix['index']==i),'numerator'].values twitter_archbu.loc[i,'rating_denominator'] = 10 ###Output _____no_output_____ ###Markdown Test ###Code display(twitter_archbu.loc[rate_fix['index']]) # looks good, copy back twitter_arch = twitter_archbu.copy() ###Output _____no_output_____ ###Markdown Code ###Code # these did not have obvious ratings in the text, so delete the entries rate_del = [342, 516, 1598, 1663, 1712] twitter_archbu.drop(rate_del, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code try: display(twitter_archbu.loc[rate_del]) except KeyError: print('values not found') # looks good, copy back twitter_arch = twitter_archbu.copy() ###Output _____no_output_____ ###Markdown `twitter_arch` contains ratings for multiple-dog groups DefineRemove the multi-dog entries, which have denominators other than 10 Code ###Code twitter_archbu = twitter_archbu[twitter_archbu['rating_denominator'] == 10] ###Output _____no_output_____ ###Markdown Test ###Code display(twitter_archbu[twitter_archbu['rating_denominator'] != 10]) display(twitter_archbu.head()) # looks good, copy back twitter_arch = twitter_archbu.copy() ###Output _____no_output_____ ###Markdown `twitter_arch` contains non-name entries in `name` DefineReplace the lowercase non-name entries in `name` with 'None' Code ###Code no_names = twitter_archbu['name'][twitter_archbu['name'].str.match(r'^[a-z]')==True].index twitter_archbu.loc[no_names,'name'] = 'None'
###Output _____no_output_____ ###Markdown Test ###Code display(twitter_archbu['name'][twitter_archbu['name'].str.match(r'^[a-z]')==True].value_counts()) # check one of the entries to make sure it's set correctly twitter_archbu.loc[56,'name'] # looks good, copy back twitter_arch = twitter_archbu.copy() ###Output _____no_output_____ ###Markdown `twitter_arch` fields `timestamp` and `retweeted_status_timestamp` are strings, not datetime DefineConvert `timestamp` to datetime. `retweeted_status_timestamp` has been deleted. Code ###Code twitter_archbu['timestamp'] = pd.to_datetime(twitter_archbu['timestamp']) ###Output _____no_output_____ ###Markdown Test ###Code display(twitter_archbu.info()) display(twitter_archbu.head()) # looks good, copy back twitter_arch = twitter_archbu.copy() ###Output _____no_output_____ ###Markdown `twitter_arch` fields `in_reply_to_status_id`,`in_reply_to_user_id`, `retweeted_status_id` and `retweeted_user_id` are floats, not ints DefineConvert `in_reply_to_status_id` and `in_reply_to_user_id` to int. `retweeted_status_id` and `retweeted_user_id` have been deleted. Code ###Code # the nullable pd.Int64Dtype() is used to get around an error due to float NaNs twitter_archbu['in_reply_to_status_id'] = twitter_archbu['in_reply_to_status_id'].astype(pd.Int64Dtype()) twitter_archbu['in_reply_to_user_id'] = twitter_archbu['in_reply_to_user_id'].astype(pd.Int64Dtype()) ###Output _____no_output_____ ###Markdown Test ###Code display(twitter_archbu.info()) display(twitter_archbu.head()) # looks good, copy back twitter_arch = twitter_archbu.copy() ###Output _____no_output_____ ###Markdown `twitter_arch` dog-stage fields `floofer`,`pupper`, `puppo` and `doggo` should be mutually exclusive, but some entries have more than one stage DefineDelete the entries that appear to be non-dog or multi-dog. Correct the ones that appear to be parsed incorrectly. Code ###Code # These two are parsed incorrectly.
Set doggo field to none twitter_archbu.loc[191,'doggo'] = "None" twitter_archbu.loc[460,'doggo'] = "None" # remove the other entries as are multi-dog or non-dog twitter_archbu = twitter_archbu[~((twitter_archbu['doggo'] != "None") & (twitter_archbu['floofer']!="None"))] twitter_archbu = twitter_archbu[~((twitter_archbu['doggo'] != "None") & (twitter_archbu['pupper']!="None"))] ###Output _____no_output_____ ###Markdown Test ###Code twitter_archbu[(twitter_archbu['doggo'] != "None") & (twitter_archbu['floofer']!="None")] twitter_archbu[(twitter_archbu['doggo'] != "None") & (twitter_archbu['pupper']!="None")] # looks good, copy back twitter_arch = twitter_archbu.copy() ###Output _____no_output_____ ###Markdown `image_preds` fields `p1`, `p2`, and `p3` are inconsistently capitalized DefineMake all entries in `p1`, `p2`, and `p3` lowercase for consistency. Code ###Code image_predsbu = image_preds.copy() image_predsbu['p1'] = image_predsbu['p1'].str.lower() image_predsbu['p2'] = image_predsbu['p2'].str.lower() image_predsbu['p3'] = image_predsbu['p3'].str.lower() ###Output _____no_output_____ ###Markdown Test ###Code image_predsbu.sample(10) # looks good, copy back image_preds = image_predsbu.copy() ###Output _____no_output_____ ###Markdown `image_preds` contains fewer entries than the twitter archive. DefineThis means some tweets have no images. This will be addressed in "Tidiness" when the tables are joined. `ext_tweet` contains fewer entries than the twitter archive. DefineThis means some tweets were deleted or are otherwise not accessible. This will be addressed in "Tidiness" when the tables are joined. Tidiness`twitter_arch` fields `doggo`, `floofer`, `pupper` and `puppo` are mutually exclusive descriptions of dog stage DefineMake a single categorical field `stage` containing each stage name or "none". 
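The melt-and-deduplicate approach defined above can be sketched on a toy frame (hypothetical tweet ids; the real cell below operates on `twitter_archbu` with all four stage columns):

```python
import pandas as pd

# Toy frame mimicking two of the mutually exclusive stage columns.
toy = pd.DataFrame({
    "tweet_id": [1, 2, 3],
    "doggo":  ["doggo", "None", "None"],
    "pupper": ["None", "pupper", "None"],
})

melted = pd.melt(toy, id_vars=["tweet_id"], value_vars=["doggo", "pupper"],
                 value_name="stage")
# Drop duplicate (tweet_id, stage) pairs, then drop the leftover "None"
# rows for tweets that do have a stage entry.
melted = melted.drop_duplicates(subset=["tweet_id", "stage"])
redundant = melted.duplicated(subset=["tweet_id"], keep=False) & (melted["stage"] == "None")
melted = melted[~redundant].drop(columns=["variable"])
print(melted.sort_values("tweet_id"))
```

Each tweet ends up with exactly one row: its stage if one was tagged, otherwise "None".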
Code ###Code twitter_archmelt = pd.melt(twitter_archbu, id_vars=['tweet_id','in_reply_to_status_id','in_reply_to_user_id',\ 'timestamp','source','expanded_urls','rating_numerator',\ 'rating_denominator','name'],value_vars=['doggo','floofer',\ 'pupper','puppo']) # first get rid of all the duplicate "None" entries twitter_archmelt.drop_duplicates(subset=['tweet_id','value'],inplace=True) twitter_archmelt.info() twitter_archmelt['value'].value_counts() twitter_archmelt['drop'] = twitter_archmelt.duplicated(subset=['tweet_id'], keep=False) twitter_archmelt['drop'].value_counts() # find the "None" entries that are duplicates of those with a stage entry drop_rows = twitter_archmelt[(twitter_archmelt['drop']==True) & (twitter_archmelt['value']=="None")].index len(drop_rows) # drop those rows twitter_archmelt.drop(drop_rows,inplace=True) # clean up the columns & names twitter_archmelt.drop(columns=['variable','drop'], inplace=True) twitter_archmelt.rename(columns={'value':'stage'}, inplace=True) twitter_archmelt['stage'] = twitter_archmelt['stage'].astype('category') ###Output _____no_output_____ ###Markdown Test ###Code twitter_archmelt['stage'].value_counts() twitter_archmelt.duplicated(subset=['tweet_id'],keep=False).sum() display(twitter_archmelt.head()) display(twitter_archmelt.info()) #looks good, copy back twitter_archbu = twitter_archmelt.copy() twitter_arch = twitter_archmelt.copy() ###Output _____no_output_____ ###Markdown `image_preds`should be combined with the main archive table since there is one entry per tweet DefineMerge `image_preds` with `twitter_arch`. Inner join so non-image-containing tweets are eliminated. 
Code ###Code twitter_archbu = twitter_archbu.merge(image_predsbu,on='tweet_id') ###Output _____no_output_____ ###Markdown Test ###Code display(twitter_archbu.head()) display(twitter_archbu.info()) #looks good, copy back twitter_arch = twitter_archbu.copy() ###Output _____no_output_____ ###Markdown `ext_tweets`should be combined with the main archive table since it has additional data about each tweet DefineMerge `ext_tweet` with `twitter_arch`. Inner join so no-longer-accessible tweets are eliminated. Code ###Code twitter_archbu = twitter_archbu.merge(ext_tweet,left_on='tweet_id',right_on='id') # drop extra columns twitter_archbu.drop(columns=['id','id_str'], inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code display(twitter_archbu.head()) display(twitter_archbu.info()) #looks good, copy back twitter_arch = twitter_archbu.copy() # store the cleaned data twitter_arch.to_csv('twitter_archive_master.csv') ###Output _____no_output_____ ###Markdown AnalyzeAre dogs with higher ratings more frequently favorited and retweeted? 
###Code twitter_arch['rating_numerator'].describe() twitter_arch['retweet_count'].describe() twitter_arch['favorite_count'].describe() twitter_arch.plot(x='rating_numerator', y='retweet_count', kind='scatter') plt.xlabel('Rating') plt.ylabel('Retweets') plt.title('Retweet Count as Function of Rating'); # remove the 2 outliers rate_nooutl = twitter_arch[twitter_arch['rating_numerator']< 250] rate_nooutl.plot(x='rating_numerator', y='retweet_count', kind='scatter') plt.xlabel('Rating') plt.ylabel('Retweets') plt.title('Retweet Count as Function of Rating'); rate_nooutl.plot(x='rating_numerator', y='favorite_count', kind='scatter') plt.xlabel('Rating') plt.ylabel('Favorites') plt.title('Favorite Count as Function of Rating'); # split ratings into < 10 vs >= 10 rate_nooutl['rate_hilo'] = rate_nooutl.apply(lambda row: row['rating_numerator']>=10, axis=1) rate_nooutl.head() rate_avg = rate_nooutl.groupby(['rate_hilo'], as_index=False)[['favorite_count','retweet_count']].mean() rate_avg.head() ind = pd.array([1.0, 2.0]) plt.bar(ind, rate_avg['favorite_count']) plt.xticks(ind, ['< 10', '>= 10']) plt.xlabel('Rating') plt.ylabel('Favorites') plt.title('Figure 1. Mean Favorites by Low vs. 
High Rating'); ###Output _____no_output_____ ###Markdown Higher-rated dogs appear to be favorited more frequently. Check if this is statistically significant: ###Code obs_diff_favs = rate_avg.loc[1,'favorite_count'] - rate_avg.loc[0,'favorite_count'] obs_diff_favs # create sampling distribution of difference in favorites # with bootstrapping diffs = np.empty(10000, dtype=float) size = rate_nooutl.shape[0] for x in range(10000): smplx = rate_nooutl.sample(size,replace=True) lo_mn = smplx.query('rate_hilo == False').favorite_count.mean() hi_mn = smplx.query('rate_hilo == True').favorite_count.mean() diffs[x] = hi_mn - lo_mn plt.hist(diffs); np.std(diffs) # simulate distribution under the null hypothesis null_vals = np.random.normal(0, np.std(diffs), 10000) # plot null distribution plt.hist(null_vals) # plot line for observed statistic plt.axvline(obs_diff_favs,color='red',lw=2); # compute p value pval = (null_vals > obs_diff_favs).mean() pval ###Output _____no_output_____ ###Markdown The difference is highly significant. ###Code ind = pd.array([1.0, 2.0]) plt.bar(ind, rate_avg['retweet_count']) plt.xticks(ind, ['< 10', '>= 10']) plt.xlabel('Rating') plt.ylabel('Retweets') plt.title('Figure 2. Mean Retweets by Low vs.
High Rating'); ###Output _____no_output_____ ###Markdown Higher-rated dogs appear to be retweeted more frequently. Check if this is statistically significant: ###Code obs_diff_rts = rate_avg.loc[1,'retweet_count'] - rate_avg.loc[0,'retweet_count'] obs_diff_rts # create sampling distribution of difference in retweets # with bootstrapping diffs = np.empty(10000, dtype=float) size = rate_nooutl.shape[0] for x in range(10000): smplx = rate_nooutl.sample(size,replace=True) lo_mn = smplx.query('rate_hilo == False').retweet_count.mean() hi_mn = smplx.query('rate_hilo == True').retweet_count.mean() diffs[x] = hi_mn - lo_mn plt.hist(diffs); np.std(diffs) # simulate distribution under the null hypothesis null_vals = np.random.normal(0, np.std(diffs), 10000) # plot null distribution plt.hist(null_vals) # plot line for observed statistic plt.axvline(obs_diff_rts,color='red',lw=2); # compute p value pval = (null_vals > obs_diff_rts).mean() pval ###Output _____no_output_____ ###Markdown Again, the difference is highly significant. Do dog ratings differ by stage? ###Code rate_stg = rate_nooutl.groupby(['stage'], as_index=False)[['rating_numerator']].mean() rate_stg ind2 = pd.array([1.0, 2.0, 3.0, 4.0, 5.0]) plt.bar(ind2, rate_stg['rating_numerator']) plt.xticks(ind2, ['none','doggo','floofer','pupper','puppo']) plt.xlabel('Stage') plt.ylabel('Rating') plt.title('Figure 3. Mean Rating by Dog Stage'); ###Output _____no_output_____ ###Markdown Dog ratings do seem to vary slightly by stage.
Dogs staged as `doggo`, `floofer`, and `puppo` may have slightly higher ratings than dogs with no stage or dogs staged as `pupper`. To see if there are significant differences, perform a linear regression on ratings as a function of stage: ###Code # mucked up dummy cols, restart rate_nooutl.drop(columns=['doggo','floofer','none','pupper','puppo','None'],inplace=True) display(rate_nooutl.head()) # make dummy variables rate_nooutl[['None','doggo','floofer','pupper','puppo']]=pd.get_dummies(rate_nooutl['stage']) rate_nooutl.head() # perform linear regression on ratings as fxn of stage rate_nooutl['intercept'] = 1 mdl_stg = sm.OLS(rate_nooutl['rating_numerator'],rate_nooutl[['intercept','doggo','floofer','pupper','puppo']]) stg_res = mdl_stg.fit() stg_res.summary() ###Output <ipython-input-633-af1e145d6e5f>:2: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy rate_nooutl['intercept'] = 1 ###Markdown The linear regression does show some significant differences, although the R-squared value is low (0.02), indicating the model does not explain much of the variation in ratings. Compared to dogs with no stage classification, those classified as `doggo` and `puppo` have significantly higher ratings. The confidence interval for `pupper` does not overlap with those of `doggo` or `puppo`, indicating that puppers have significantly lower ratings (based on the coefficients) than doggos and puppos. The confidence intervals for `doggo`, `floofer`, and `puppo` all overlap, indicating no significant difference between these groups. This is also true for `floofer` and `pupper`. What are dogs most frequently mis-classified as by the neural network?
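As an aside on the regression just performed: the dummy-variable OLS pattern reduces to ordinary least squares, where the intercept recovers the baseline group's mean and each dummy coefficient recovers the difference in group means. A minimal sketch with NumPy on toy data (not the statsmodels call used above):

```python
import numpy as np

# Toy ratings for a baseline group (0) and a "doggo" dummy group (1).
stage = np.array([0, 0, 0, 1, 1, 1])
rating = np.array([10.0, 11.0, 10.0, 12.0, 13.0, 12.0])

# Design matrix: intercept column plus one dummy column.
X = np.column_stack([np.ones_like(rating), stage])
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)

# Intercept = baseline group mean; coefficient = difference in group means.
print(coef)
```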
###Code top10_miscl = rate_nooutl[rate_nooutl['p1_dog']==False]['p1'].value_counts()[0:10] top10_miscl ind3 = range(1,11) plt.bar(ind3,top10_miscl) plt.xticks(ind3, top10_miscl.index, rotation='vertical') plt.xlabel('Thing') plt.ylabel('Frequency') plt.title('Figure 4. Top 10 Misclassifications of Dogs'); ###Output _____no_output_____ ###Markdown Dogs are most frequently mis-classified as seat belts (?). Possibly due to presence of collar or leash.Other furry (teddy, hamster) and very dog-like (dingo) animals are also common.Does the neural network's confidence in its classification of the dogs as dogs correlate with dog rating? ###Code corr_class = rate_nooutl[rate_nooutl['p1_dog']==True] corr_class.plot(x='rating_numerator', y='p1_conf', kind='scatter') plt.xlabel('Rating') plt.ylabel('Pred. 1 Conf.') plt.title('Figure 5. Prediction 1 Confidence for Dog Predictions as Function of Rating'); ###Output _____no_output_____ ###Markdown There doesn't seem to be an obvious relationship between rating and the neural network's confidence in its first prediction when it predicts that the picture is a dog, although there may be more predictions near a confidence of 1.0 at higher ratings.Check this with linear regression: ###Code mdl_ccl = sm.OLS(corr_class['p1_conf'],corr_class[['intercept','rating_numerator']]) ccl_res = mdl_ccl.fit() ccl_res.summary() ###Output _____no_output_____ ###Markdown 5. 
Twitter archive data: rating numerator has values less than 10 and implausibly large values DefineFor rating numerator values that are lower than 10, or higher than 10 but considered large (greater than 14), check if the text has a fraction and if it does, extract the numerator value from that fraction as the rating numerator Code ###Code # regex to match fractions pattern = "\s*(\d+([.]\d+)?([/]\d+))" # function which will match the above pattern and return an array of fractions, if any def tokens(x): return [m.group(1) for m in re.finditer(pattern, x)] #Convert rating numerator column to int df_clean['rating_numerator'] = df_clean['rating_numerator'].astype('int') df_clean.info() for i, row in df_clean[(df_clean.rating_numerator <= 10) | (df_clean.rating_numerator > 14)].iterrows(): ratings = tokens(row.text) for rating in ratings: if rating.split('/')[1] == '10': n = int(round(float(rating.split('/')[0]))) if (row.rating_numerator == 10 and n > 10) or (row.rating_numerator != 10 and n >= 10): df_clean.at[i, 'rating_numerator'] = n # .at replaces the removed DataFrame.set_value break #Remove outliers from rating_numerator column for analysis df_clean = df_clean[(df_clean['rating_numerator'] < 15)] ###Output _____no_output_____ ###Markdown Test ###Code df_clean['rating_numerator'].value_counts() ###Output _____no_output_____ ###Markdown 6. Twitter archive data: erroneous name columns (for example: 'a', 'an', 'actually') DefineReplace all lowercase names with 'None' Code ###Code df_clean['name'][df_clean['name'].str.match('[a-z]+')] = 'None' ###Output _____no_output_____ ###Markdown Test ###Code df_clean['name'].value_counts().sort_index() ###Output _____no_output_____ ###Markdown 7.
Twitter Archive data: there are some dogs that have more than 1 stage ###Code print(len(df_clean[(df_clean.doggo != 'None') & (df_clean.floofer != 'None')])) print(len(df_clean[(df_clean.doggo != 'None') & (df_clean.puppo != 'None')])) print(len(df_clean[(df_clean.doggo != 'None') & (df_clean.pupper != 'None')])) ###Output 1 1 9 ###Markdown DefineFor the 1 record that has both doggo and floofer stages and the 1 record that has both doggo and puppo stages, I will take a look at the text manually and decide which stage is more appropriate.For the 9 records that have both doggo and pupper stages, I will convert the pupper column to 'None' since the dogtionary mentions that doggo and pupper stages are interchangeable. Code ###Code #Check the texts of the 2 records for i, row in df_clean[((df_clean.doggo != 'None') & (df_clean.floofer != 'None')) | ((df_clean.doggo != 'None') & (df_clean.puppo != 'None'))].iterrows(): print('%s %s\n'%(row.tweet_id, row.text)) ###Output 855851453814013952 Here's a puppo participating in the #ScienceMarch. Cleverly disguising her own doggo agenda. 13/10 would keep the planet habitable for https://t.co/cMhq16isel 854010172552949760 At first I thought this was a shy doggo, but it's actually a Rare Canadian Floofer Owl. Amateurs would confuse the two. 11/10 only send dogs https://t.co/TXdT3tmuYk ###Markdown From the texts, both of them are not considered to be doggo, so I will convert the doggo column to None for these 2. 
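The next cells assign via chained indexing (`df_clean['doggo'][mask] = ...`), which can trigger `SettingWithCopyWarning` and may not modify the original frame. A minimal sketch of the equivalent single-step `.loc` assignment on toy data:

```python
import pandas as pd

toy = pd.DataFrame({"tweet_id": [1, 2, 3], "doggo": ["doggo", "doggo", "None"]})

# Chained indexing (toy["doggo"][mask] = ...) may operate on a copy and
# silently leave the frame unchanged; .loc assigns in a single step.
toy.loc[toy["tweet_id"].isin([1, 2]), "doggo"] = "None"
print(toy["doggo"].tolist())
```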
###Code df_clean['doggo'][df_clean['tweet_id'].isin([855851453814013952, 854010172552949760])] = 'None' #Convert pupper column to None for 9 records which have both doggo and pupper df_clean['pupper'][(df_clean.doggo != 'None') & (df_clean.pupper != 'None')] = 'None' ###Output _____no_output_____ ###Markdown Test ###Code print(len(df_clean[(df_clean.doggo != 'None') & (df_clean.floofer != 'None')])) print(len(df_clean[(df_clean.doggo != 'None') & (df_clean.puppo != 'None')])) print(len(df_clean[(df_clean.doggo != 'None') & (df_clean.pupper != 'None')])) ###Output 0 0 0 ###Markdown 8. Image_data table: duplicate jpg_url DefineDrop all the jpg_url duplicates in image table Code ###Code image_clean = image_clean.drop_duplicates(subset=['jpg_url'], keep='last') ###Output _____no_output_____ ###Markdown Test ###Code sum(image_clean['jpg_url'].duplicated()) ###Output _____no_output_____ ###Markdown 9. Image data: the first dog breed prediction might not be a dog (for example, bow tie) DefineExtract the first 'True' predictions from the dog type and store in a new dog_breed column Code ###Code #Create a new list for storing dog breed dog_breed = [] #Create a function to capture the dog breed from the first 'True' prediction def breed(image_clean): if image_clean['p1_dog'] == True: dog_breed.append(image_clean['p1']) elif image_clean['p2_dog'] == True: dog_breed.append(image_clean['p2']) elif image_clean['p3_dog'] == True: dog_breed.append(image_clean['p3']) else: dog_breed.append('Error') #Apply the function on the dataframe image_clean.apply(breed, axis=1) #Create new column image_clean['dog_breed'] = dog_breed #Drop rows that has prediction_list 'error' image_clean = image_clean[image_clean['dog_breed'] != 'Error'] ###Output _____no_output_____ ###Markdown Test ###Code image_clean.head() ###Output _____no_output_____ ###Markdown Tidiness 1. 
Twitter archive data: dog stages should be in one column DefineMelt the doggo, floofer, pupper and puppo columns to dogs and dogs_stage column. Then drop dogs, doggo, floofer, pupper, and puppo columns. Code ###Code df_clean = pd.melt(df_clean, id_vars=['tweet_id', 'in_reply_to_status_id', 'in_reply_to_user_id', 'timestamp', 'source', 'text', 'expanded_urls', 'rating_numerator', 'rating_denominator', 'name'], var_name='dogs', value_name='dogs_stage') df_clean = df_clean.drop('dogs', axis=1) df_clean = df_clean.sort_values('dogs_stage').drop_duplicates(subset='tweet_id', keep='last') ###Output _____no_output_____ ###Markdown Test ###Code df_clean.info() df_clean['dogs_stage'].value_counts() ###Output _____no_output_____ ###Markdown 2. Many unnecessary columns in all 3 dataframes and they should be combined into 1 dataframe for analysis DefineDrop unnecessary columns from the dataframes and merge all 3 dataframes into 1. Columns to be dropped from twitter archive data: in_reply_to_status_id, in_reply_to_user_id, timestamp, source, expanded_urls. Columns to be dropped from image data: jpg_url, img_num, p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf, p3_dog. Columns to be dropped from API data: display_text_range. Code ###Code #Drop columns in twitter archive data df_clean = df_clean.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'timestamp', 'source', 'expanded_urls'], axis=1) #Drop columns in image data image_clean = image_clean.drop(['jpg_url', 'img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], axis=1) #Drop columns in API data tweet_clean = tweet_clean.drop(['display_text_range'], axis=1) #Merge twitter archive data and image data df_clean = pd.merge(df_clean, image_clean, how = 'left', on = ['tweet_id']) #Merge twitter API into the main dataframe df_clean = pd.merge(df_clean, tweet_clean, how = 'left', on = ['tweet_id']) ###Output _____no_output_____ ###Markdown Test ###Code df_clean.sample(5) df_clean.info() ###Output <class
'pandas.core.frame.DataFrame'> Int64Index: 1992 entries, 0 to 1991 Data columns (total 9 columns): tweet_id 1992 non-null int64 text 1992 non-null object rating_numerator 1992 non-null int64 rating_denominator 1992 non-null int64 name 1992 non-null object dogs_stage 1992 non-null object dog_breed 1626 non-null object retweet_count 1989 non-null float64 favorite_count 1989 non-null float64 dtypes: float64(2), int64(3), object(4) memory usage: 155.6+ KB ###Markdown Storing, Analyzing, and Visualizing Data ###Code #Store the clean and merge dataFrame in a CSV file df_clean.to_csv('twitter_archive_master.csv', index=False, encoding = 'utf-8') ###Output _____no_output_____ ###Markdown Insight 1 - The most common dog breed in data ###Code #Find the most common dog breed from the breed column df_clean['dog_breed'].value_counts() #Plot the 10 most common dog breeds in the data most_breed = df_clean.groupby('dog_breed').filter(lambda x: len(x) >= 32) most_breed['dog_breed'].value_counts().plot(kind = 'bar') plt.title('Most Common Dog Breed Rated') plt.xlabel('Dog Breed') plt.ylabel('Count'); ###Output _____no_output_____ ###Markdown Insight 2 - Dog breed with the lowest mean rating ###Code breed_rating = df_clean.groupby('dog_breed')['rating_numerator'].mean() breed_rating.sort_values() ###Output _____no_output_____ ###Markdown Insight 3 - The dog breed with the most retweet count ###Code breed_retweet = df_clean.groupby('dog_breed')['retweet_count'].mean() breed_retweet.sort_values(ascending = False) ###Output _____no_output_____ ###Markdown Insight 4 - The relationship between ratings and favorites count ###Code df_clean.plot(x='favorite_count', y='rating_numerator', kind='scatter') plt.xlabel('Favorite Counts') plt.ylabel('Ratings') plt.title('Favorite Counts by Ratings Scatter Plot'); ###Output _____no_output_____ ###Markdown Insight 5 - The relationship between ratings and retweet count ###Code df_clean.plot(x='retweet_count', y='rating_numerator', kind='scatter') 
plt.xlabel('Retweet Counts') plt.ylabel('Ratings') plt.title('Retweet Counts by Ratings Scatter Plot') #Check to see which dog post receives the most favorites and retweets df_clean['favorite_count'].describe() #Check which dog receives the max favorite count df_clean[df_clean['favorite_count'] == 162866.000000] #Dog with the lowest average rating df_clean[df_clean['dog_breed'] == 'Japanese_spaniel'] #Dog with the highest average retweet count df_clean[df_clean['dog_breed'] == 'Bedlington_terrier'] ###Output _____no_output_____ ###Markdown A Twitter Success Story___**Context** This notebook is an analysis of the success of the Twitter account whose stated purpose is to rate dogs. This account has been able to gain 7.85 million followers as of March 2019 and has monetized its popularity by selling card games made out of the tweets, selling mugs, hats and similar merchandise with a funny and kind message about dogs. ###Code # Essential Data Analysis Ecosystem Libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt # From Python Standard Library import os, shutil import json # encoding and decoding json data from html.parser import HTMLParser # Wrangling Tools import requests # Python HTTP for Humans! - Version: 2.21.0 import tweepy # Popular Twitter library for python. - Version: 3.5.0 # set plots to be embedded inline %matplotlib inline # suppress warnings from final output import warnings warnings.simplefilter("ignore") # Hosted flat text file's url on Udacity servers containing the tweet image predictions, import image_predictions_url ###Output _____no_output_____ ###Markdown Gathering Data **Data Wrangling Process**We are given access to a few thousand tweets from the WeRateDogs Twitter account, from November 2015 to the end of August 2017, along with the predictions of a neural network image classifier that identifies dogs and their breed in the images of each tweet. We will gather more data points, i.e.
retweet counts and favorite counts, via the Twitter API, to see if we can find patterns or trends leading to the success of this account. After we gather these data from 3 different sources, we will assess the data for tidiness and quality issues. We will make a list of problems to tackle in the cleaning step. In the cleaning step, we programmatically make our dataset tidy, address missing and inconsistent values, and add necessary value columns for analysis.`Resources on Hand`- tweets archive data in csv, provided by the owner of the account.- - This file is missing retweet and favorite counts for tweets.- Predictions of a neural network algorithm on this account's tweet images to see if an image is a dog or not and, if so, what the breed of the dog is. We have the URL of this image prediction data to gather this data. `Process`- - Gathering retweet counts and favorite counts via the Twitter API- - retweet and favorite counts are among the most valuable features of each tweet and can reveal patterns about the level of engagement and account popularity.- - Assessing tidiness and quality issues and cleaning our messy data into a tidy structure- - Execute cleaning **Gathering Data, Part 1: Tweets' Archive File** ###Code # Make a data directory (if it doesn't already exist) to store our gathered and cleaned dataset # Verify our expected tweets archive data exists and move it to the data directory if not already there.
datafolder = "input" tweets_archivefile = 'twitter-archive-enhanced.csv' if not os.path.exists(datafolder): os.mkdir(datafolder) print(f'Directory `./{datafolder}` is created to store gathered data files.') path_in_datafolder = os.path.join(datafolder, tweets_archivefile) if os.path.isfile(tweets_archivefile): shutil.move(tweets_archivefile, path_in_datafolder) elif os.path.isfile(path_in_datafolder): pass else: print('ERROR: Expected tweets archive csv file is not found.') dogs = pd.read_csv(path_in_datafolder) dogs.iloc[np.random.randint(0, dogs.shape[0]-1)] ###Output _____no_output_____ ###Markdown **Gathering Data, Part 2: Neural Network Algorithm Image Classifier Data** ###Code # Download `image-predictions.tsv` file content that is the result of neural network image classification. try: http_resp = requests.get(image_predictions_url.url) http_resp.raise_for_status() except Exception as e: print("ERROR: Accessing image predictions from hosted server failed.") print(e) # Get file name from the source url img_fname = http_resp.url.split('/')[-1] # Save downloaded file content path_in_datafolder = os.path.join(datafolder, img_fname) try: with open(path_in_datafolder, 'w') as file: file.write(http_resp.text) except Exception as e: print("\nERROR: Saving downloaded image-predictions data was not successful.") print(e) imgs = pd.read_csv(path_in_datafolder, delimiter='\t') imgs.sample(3) ###Output _____no_output_____ ###Markdown **Gathering Data, Part 3: Additional Data via Twitter API**Having tweet ids in the tweets archive data enables us to query the Twitter API (via the Tweepy library) to gather additional tweet data, including `retweet counts` and `favorite counts` for each tweet in json format. You may skip running the next 2 cells if you want to use the data previously queried and saved from the Twitter API. The following cell takes a minimum of 30 minutes to regenerate the data.
To connect to the API, you need to load Twitter credentials from the environment. ###Code # Setting file name and path to later store gathered data from the Twitter API api_data_fname = "tweet_json.txt" api_fpath = os.path.join(datafolder, api_data_fname) # Setting API Credentials consumer_key = os.environ['CONSUMER_KEY'] consumer_secret = os.environ['CONSUMER_SECRET'] access_token = os.environ['ACCESS_TOKEN'] access_secret = os.environ['ACCESS_SECRET'] auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) # Make a Tweepy model class instance that will contain the data returned from Twitter api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True, parser=tweepy.parsers.JSONParser()) %%time # Send queries to the Twitter API and save the returned json data tweet_ids_not_found = [] tweet_ids = dogs['tweet_id'] # Returns index of tweet_id to print API progress to stdout series_value_idx = lambda x: tweet_ids[tweet_ids == x].index[0] + 1 with open(api_fpath, 'w') as file: print(f"{tweet_ids.shape[0]} TWEET QUERIES TO TWITTER API IN PROGRESS...\n") for tweet_id in tweet_ids: try: tweet_data = api.get_status(tweet_id, tweet_mode='extended') json.dump(tweet_data, file) file.write('\n') # Print progress every 100 queried tweet_ids if not series_value_idx(tweet_id) % 100: print(f"\t{series_value_idx(tweet_id)}\ttweets queried so far...") except tweepy.TweepError: tweet_ids_not_found.append(tweet_id) print(f"\n{len(tweet_ids_not_found)} tweet queries returned no data.") print(f"{tweet_ids.shape[0] - len(tweet_ids_not_found)} tweets json data successfully saved as {api_data_fname} in {datafolder} folder.") # Convert queried data (json data file saved in tweet_json.txt) to a DataFrame tweets_list = [] with open(api_fpath) as file: for line in file: row = json.loads(line) tweets_list.append({'tweet_id': row['id_str'], 'retweet_count': int(row['retweet_count']), 'favorite_count': int(row['favorite_count']), }) apis = pd.DataFrame(tweets_list, columns=['tweet_id', 'retweet_count', 'favorite_count']) apis.sample(3) ###Output _____no_output_____ ###Markdown ______ Assessing Data **We gathered our dataset from 3 different sources and converted them to 3 DataFrames/tables.**- dogs: Tweets archive file of the WeRateDogs Twitter account, provided to us.- imgs: Results of a neural network that classifies breeds of dogs. We gathered (downloaded) this csv data from its source. Here is how this image classifier works:- - It predicts whether a picture is a dog or some other object- - If a picture is a dog, it predicts the breed of the dog. - apis: Additional tweet data (retweet_count and favorite_count) that we gathered via the Twitter API by using the tweet_ids in the dogs table.___ dogs table**Key points for this table:**- We only want original ratings (no retweets) that have images. Not all tweets are dog ratings, and some are retweets. ###Code dogs.sample() dogs.info() dogs.describe() # Discovering outliers dogs.rating_numerator.value_counts() dogs.hist(column=['rating_denominator', 'rating_numerator'], figsize=(16, 4)); ###Output _____no_output_____ ###Markdown - Five id columns are numeric; `tweet_id`, `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id` and `retweeted_status_user_id`- Extreme min and max outliers in `rating_numerator` and `rating_denominator` ###Code dogs.name.value_counts()[:16] ###Output _____no_output_____ ###Markdown - None is used as the missing value for dogs' names. Should be NaN.- `the`, `a` and `an` are not dog names. Since all dogs have names, these are missing values. ###Code dogs.doggo.value_counts() dogs.floofer.value_counts() dogs.pupper.value_counts() dogs.puppo.value_counts() ###Output _____no_output_____ ###Markdown - `pupper`, `floofer`, `puppo` and `doggo` are booleans.
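The last bullet can be illustrated with a tiny sketch (the frame `df` and its rows here are made up, not the notebook's data) of how such name-or-'None' stage columns reduce to booleans:

```python
import pandas as pd

# Hypothetical miniature of two stage columns: each holds either
# its own column name or the string 'None'.
df = pd.DataFrame({'doggo': ['doggo', 'None', 'None'],
                   'pupper': ['None', 'pupper', 'None']})

# Comparing each column to its own name yields the boolean form directly.
for col in ['doggo', 'pupper']:
    df[col] = df[col] == col

print(df.doggo.tolist())   # [True, False, False]
print(df.pupper.tolist())  # [False, True, False]
```

This vectorized comparison is one alternative to the pair of `.loc` assignments per column used later in the cleaning step.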
imgs table**About this table:**- p1 is the algorithm's #1 prediction for the image in the tweet → golden retriever- p1_conf is how confident the algorithm is in its #1 prediction → 95%- p1_dog is whether or not the #1 prediction is a breed of dog → TRUE- p2 is the algorithm's second most likely prediction → Labrador retriever- p2_conf is how confident the algorithm is in its #2 prediction → 1%- p2_dog is whether or not the #2 prediction is a breed of dog → TRUE**Key points:**- Image predictions in this table are only available for tweets up to August 1st, 2017. ###Code imgs.sample(3) imgs.info() imgs.describe() # A quick look at prediction confidence distributions imgs.hist(column=['p1_conf', 'p2_conf', 'p3_conf'], figsize=(16, 3), layout=(1, 3)); imgs.img_num.value_counts() ###Output _____no_output_____ ###Markdown - It is not entirely clear what img_num is (it is not documented in the ML classifier notes), but the cell above shows it only takes 4 values, so this variable could be categorical. apis table**Key points:**- We do not need to gather the tweets beyond August 1st, 2017. We could, but we won't be able to gather the image predictions for those tweets since we don't have access to the ML image classifier algorithm used. ###Code apis.sample(3) apis.info() apis.describe() ###Output _____no_output_____ ###Markdown **Since the apis table is gathered by tweet_ids in the dogs table, we check to see if there is any tweet_id in the dogs table that doesn't exist in the apis table** ###Code dogs.tweet_id[~dogs.tweet_id.isin(apis.tweet_id)].shape ###Output _____no_output_____ ###Markdown - So there are 17 tweets in the dogs table that we don't have the retweet_count and favorite_count data for. (That data is not in the apis table.) This is most likely because the Twitter API didn't return results for those tweets. ** apis.describe shows there are one or more retweet counts with a 0.00 value. We will look at this.
** ###Code apis.retweet_count.value_counts().iloc[-5:] ###Output _____no_output_____ ###Markdown - The result above shows there is one tweet with no retweets. Given the millions of followers of the WeRateDogs Twitter account, we can safely say this is incorrect data. Tidiness Issues: Number of observational units in the dataset and number of tables:- The retweet_count and favorite_count columns in the apis table are naturally part of the tweets archive data in the dogs table. So, these two tables are the same observational unit and should be combined. Note that tweet_id should be the identifier across all tables. Each variable forms `a` column?- We should have `1` rating for each tweet. But with our current dataset, this seems to be best addressed if we proceed to `Data Cleaning` and then `Reassess` it for the best solution. Each observation forms `a` row?- we are good here! Quality Issues: dogs table: Erroneous Data Type and Noise:- Five id columns are numeric, not strings: `tweet_id`, `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id` and `retweeted_status_user_id`- `timestamp` type is not datetime.- `source` contains a formatting HTML tag. It's noise.- `pupper`, `floofer`, `puppo` and `doggo` are booleans. Filtering what we don't need:- Only tweets with original ratings that have images matter; not tweets without ML image data; not retweets. - Only tweets up to 08/01/2017 matter; not any tweets beyond this date.- We don't need retweets. Tweets whose text contains 'RT @dog_rates:' should be dropped. Imputing and Addressing Missing Values:- `17` tweets in the dogs table have no corresponding values (retweet and favorite counts) in the apis table.- Missing dog names are represented with `None`, not NaN.- `the`, `a` and `an` are not dog names. Since all dogs have names, these are missing values. Missing Values, Outliers and Incorrect Values:- The tweet with a 0.00 retweet_count is an incorrect row.
Same for favorite_count, if any.- Extreme min and max outliers for `rating_denominator` exist. They seem inconsistent with the WeRateDogs rating system.- Extreme min and max outliers for `rating_numerator` exist. Rearranging the Table:- The name of the `expanded_urls` column can be more descriptive. imgs table:- `tweet_id` is numeric; should be string.- `img_num` has only 4 possible values. It can be a categorical variable. apis table:- `tweet_id` is numeric; should be string. ______ Cleaning Data ** Making copies of the original dataset before cleaning **All of the cleaning operations will be conducted on the copies so we will be able to view the original dirty and/or messy dataset later. ###Code dogs_clean = dogs.copy() imgs_clean = imgs.copy() apis_clean = apis.copy() ###Output _____no_output_____ ###Markdown DATA TIDINESS:- The retweet_count and favorite_count columns in the apis table are naturally part of the tweets archive data in the dogs table. So, these two tables are the same observational unit and should be combined. Note that tweet_id should be the identifier across all tables. *Define:* - Ensure the tweet_id column is of type str in both DataFrames. We are joining on the column `tweet_id`, which is the main identifier in both DataFrames. For that, they must both be of the same data type.- With pandas.DataFrame.merge, join the retweet_count and favorite_count columns to the dogs_clean table. *Code:* ###Code dogs_clean.tweet_id = dogs_clean.tweet_id.astype('str') apis_clean.tweet_id = apis_clean.tweet_id.astype('str') # how='outer' because we don't want missing tweet_ids in apis_clean to remove the same tweet_ids in dogs_clean dogs_clean = dogs_clean.merge(apis_clean, how='outer', on=['tweet_id']) ###Output _____no_output_____ ###Markdown *Test:*- If a randomly sampled retweet_count from either table matches the one for the same tweet_id in the other table, the test is successful.
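As a toy illustration of the `how='outer'` choice in the merge above (the frames `archive` and `api_data` and their values are made-up stand-ins for dogs_clean and apis_clean):

```python
import pandas as pd

archive = pd.DataFrame({'tweet_id': ['1', '2', '3']})
api_data = pd.DataFrame({'tweet_id': ['1', '2'], 'retweet_count': [5, 7]})

# how='outer' keeps tweet_id '3' even though the API returned nothing
# for it; its retweet_count becomes NaN instead of the row being lost.
outer = archive.merge(api_data, how='outer', on='tweet_id')
inner = archive.merge(api_data, how='inner', on='tweet_id')

print(len(outer), len(inner))  # 3 2
```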
###Code apis_sample = apis_clean.sample() apis_sample apis_sample.retweet_count.values dogs_sample = dogs_clean[dogs_clean.tweet_id == apis_sample.tweet_id.values[0]] dogs_sample.retweet_count.values[0] 'Test Passed.' if dogs_sample.retweet_count.values[0] == apis_sample.retweet_count.values[0] else 'Test Failed.' ###Output _____no_output_____ ###Markdown ___ DATA QUALITY Erroneous Data Type and Noise:dogs_clean:- Five id columns are numeric, not strings: `tweet_id`, `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id` and `retweeted_status_user_id`- `timestamp` type is not datetime.- `source` contains a formatting HTML tag. It's noise.- `pupper`, `floofer`, `puppo` and `doggo` should be booleans.imgs_clean:- `tweet_id` is numeric; should be string.apis_clean:- `tweet_id` is numeric; should be string. *Define:* - Cast numeric id columns to string with pandas.DataFrame.astype- Convert timestamp to datetime with pandas.to_datetime- Cast img_num to categorical type with pandas.DataFrame.astype- Remove the formatting HTML tag from `source` using pandas.DataFrame.apply and an HTML tag stripping script.
*Code:* ###Code # dogs_clean table dogs_clean.in_reply_to_user_id = dogs_clean.in_reply_to_user_id.astype('str') dogs_clean.in_reply_to_status_id = dogs_clean.in_reply_to_status_id.astype('str') dogs_clean.retweeted_status_id = dogs_clean.retweeted_status_id.astype('str') dogs_clean.retweeted_status_user_id = dogs_clean.retweeted_status_user_id.astype('str') dogs_clean.timestamp = pd.to_datetime(dogs_clean.timestamp) dogs_clean.loc[dogs_clean.pupper == 'pupper', 'pupper'] = True dogs_clean.loc[dogs_clean.pupper == 'None', 'pupper'] = False dogs_clean.loc[dogs_clean.floofer == 'floofer', 'floofer'] = True dogs_clean.loc[dogs_clean.floofer == 'None', 'floofer'] = False dogs_clean.loc[dogs_clean.doggo == 'doggo', 'doggo'] = True dogs_clean.loc[dogs_clean.doggo == 'None', 'doggo'] = False dogs_clean.loc[dogs_clean.puppo == 'puppo', 'puppo'] = True dogs_clean.loc[dogs_clean.puppo == 'None', 'puppo'] = False # apis_clean table apis_clean.retweet_count = apis_clean.retweet_count.astype(np.int64) apis_clean.favorite_count = apis_clean.favorite_count.astype(np.int64) imgs_clean.tweet_id = imgs_clean.tweet_id.astype('str') imgs_clean.img_num = imgs_clean.img_num.astype('category') apis_clean.tweet_id = apis_clean.tweet_id.astype('str') dogs_clean.info() # Strip HTML from strings # https://stackoverflow.com/questions/753052/strip-html-from-strings-in-python class MLStripper(HTMLParser): def __init__(self): super().__init__() self.reset() self.strict = False self.convert_charrefs = True self.fed = [] def handle_data(self, d): self.fed.append(d) def get_data(self): return ''.join(self.fed) def strip_tags(html): s = MLStripper() s.feed(html) return s.get_data() dogs_clean.source = dogs_clean.source.apply(strip_tags) ###Output _____no_output_____ ###Markdown *Test:* ###Code # test the timestamp column is of pandas' datetime type one_randome_tweet = np.random.randint(0, dogs_clean.shape[0]) print(dogs_clean.iloc[one_randome_tweet].timestamp.date())
print(dogs_clean.iloc[one_randome_tweet].timestamp.time()) # test source is stripped of HTML formatting tags dogs_clean.source.value_counts() assert type(dogs_clean.iloc[one_randome_tweet].tweet_id) == str assert type(dogs_clean.iloc[one_randome_tweet].in_reply_to_status_id) == str assert type(dogs_clean.iloc[one_randome_tweet].in_reply_to_user_id) == str assert type(dogs_clean.iloc[one_randome_tweet].retweeted_status_id) == str assert type(dogs_clean.iloc[one_randome_tweet].retweeted_status_user_id) == str one_randome_tweet = np.random.randint(0, imgs_clean.shape[0]) assert type(imgs_clean.iloc[one_randome_tweet].tweet_id) == str imgs_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null object jpg_url 2075 non-null object img_num 2075 non-null category p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), category(1), float64(3), object(5) memory usage: 138.1+ KB ###Markdown ___ Imputing Missing Data*Define:*- `17` tweets in the dogs table have no corresponding values (retweet and favorite counts) in the apis table. Since "They are all good dogs!" :) we don't want to ignore any of them if at all possible. To make up for the missing retweet and favorite counts we use mean imputation.- Compute the retweet and favorite count `means` in the apis_clean table.- Discover the 17 tweet_ids in the dogs_clean table that are missing in the apis_clean table.- Create retweet_count and favorite_count columns in the dogs_clean table and assign the computed `means` to them.
###Code rt_mean = int(apis_clean.retweet_count.mean()) fv_mean = int(apis_clean.favorite_count.mean()) rt_mean, fv_mean mask = ~dogs_clean.tweet_id.isin(apis_clean.tweet_id) tweets_not_in_apis = dogs_clean.tweet_id.loc[mask] tweets_not_in_apis.shape[0] mask = dogs_clean.tweet_id.isin(tweets_not_in_apis) dogs_clean.loc[mask, 'retweet_count'] = rt_mean dogs_clean.loc[mask, 'favorite_count'] = fv_mean ###Output _____no_output_____ ###Markdown *Test:* ###Code dogs_clean.retweet_count[dogs_clean.retweet_count == rt_mean].sample(2) == [rt_mean, rt_mean] dogs_clean.retweet_count[dogs_clean.retweet_count != rt_mean].sample(2) != [rt_mean, rt_mean] dogs_clean.favorite_count[dogs_clean.favorite_count == fv_mean].sample(2) == [fv_mean, fv_mean] dogs_clean.favorite_count[dogs_clean.favorite_count != fv_mean].sample(2) != [fv_mean, fv_mean] ###Output _____no_output_____ ###Markdown Filtering what we don't need:- Only original tweets that have `dog images` matter, not the following:- - tweets that are missing an image prediction - - tweets that have an image prediction but are not dog images - - tweets that are retweets. Tweets whose text contains 'RT @dog_rates:' are retweets. *Define:*- Drop tweets with no tweet_url, i.e.
tweet pictures- Drop tweets for which we don't have corresponding ML image classifier information in the imgs_clean table- Drop tweets with 'RT @dog_rates:' in the tweet's text *Code:* ###Code # Removing retweets and rows with no url # mask = dogs_clean.expanded_urls.isna() # no_tweet_url = dogs_clean.loc[mask] # dogs_clean.drop(no_tweet_url.index, inplace=True) # Find tweets whose ML 1st or 2nd prediction is NOT a dog picture; in other words we only want to keep tweets with # predictions where both the 1st and 2nd p_dog are True mask = (~imgs_clean.p1_dog | ~imgs_clean.p2_dog) imgs_clean.drop(index=imgs_clean.loc[mask].index, inplace=True) # find and drop retweets mask = dogs_clean.text.str.contains('RT @dog_rates:') retweets = dogs_clean[mask] dogs_clean.drop(retweets.index, inplace=True) # Removing tweets (rows) with no ML image data in the imgs_clean table mask = dogs_clean.tweet_id.isin(imgs_clean.tweet_id) dogs_clean = dogs_clean[mask] ###Output _____no_output_____ ###Markdown *Test:* ###Code mask = dogs_clean.expanded_urls.isna() dogs_clean.loc[mask] ###Output _____no_output_____ ###Markdown ___ Addressing Missing Values:- Missing dog names are represented with `None`, not NaN.- `the`, `a` and `an` are not dog names. Since all dogs have names, these are missing values. *Define:*- Simply replace the 'None', 'a', 'an' and 'the' strings with NaN using pandas.DataFrame.replace *Code:* ###Code dogs_clean.name.replace(['a', 'an', 'the', 'None'], np.nan, inplace=True) ###Output _____no_output_____ ###Markdown *Test:* ###Code 'Test Passed' if (dogs_clean.name == 'an').sum() == 0 else 'Test Failed.' 'Test Passed' if (dogs_clean.name == 'a').sum() == 0 else 'Test Failed.' 'Test Passed' if (dogs_clean.name == 'the').sum() == 0 else 'Test Failed.' 'Test Passed' if (dogs_clean.name == 'None').sum() == 0 else 'Test Failed.'
###Output _____no_output_____ ###Markdown ___ Incorrect Values and Outliers- The tweet with a 0.00 retweet_count is an incorrect row. Same for favorite_count, if any.- Extreme min and max outliers for `rating_denominator` exist. This seems inconsistent with the WeRateDogs rating system.- Extreme min and max outliers for `rating_numerator` exist. *Define:*- Remove tweets with a rating_denominator other than 10- Remove tweets with a rating_numerator smaller than 8- Cap tweets with a rating_numerator greater than 14 at 14 *Code:* ###Code mask = dogs_clean.rating_denominator == 10 dogs_clean = dogs_clean[mask] mask = dogs_clean.rating_numerator >= 8 dogs_clean = dogs_clean[mask] mask = dogs_clean.rating_numerator > 13 dogs_clean.loc[mask, 'rating_numerator'] = 14 ###Output _____no_output_____ ###Markdown *Test:* ###Code dogs_clean.rating_numerator.value_counts() dogs_clean.rating_denominator.value_counts() ###Output _____no_output_____ ###Markdown DATA TIDINESS: Each variable forms `a` column?- We should have `1` rating for each tweet. *Define:*- remove the rating_denominator column- rename the rating_numerator column to rating- rearrange the column order *Code:* ###Code dogs_clean.drop(columns='rating_denominator', inplace=True) dogs_clean.rename(columns={'rating_numerator': 'rating', 'expanded_urls': 'tweet_url', 'name': 'dog_name'}, inplace=True) new_columns_order = ['tweet_id', 'timestamp', 'rating', 'retweet_count', 'favorite_count', 'source', 'text', 'dog_name', 'doggo', 'floofer', 'pupper', 'puppo'] dogs_clean = dogs_clean[new_columns_order] ###Output _____no_output_____ ###Markdown *Test:* ###Code dogs_clean.sample(5) # Check all tweet_ids in dogs_clean exist in imgs_clean dogs_clean.tweet_id.isin(imgs_clean.tweet_id).sum() == dogs_clean.shape[0] ###Output _____no_output_____ ###Markdown ___ Reassessing Check how many tweets in the imgs_clean table have no corresponding tweets in dogs_clean.
If any, they can be dropped ###Code imgs_clean.shape[0] - dogs_clean.shape[0] plt.figure(figsize=(15, 5)) plt.subplot(1, 2, 1) plt.scatter(data=dogs_clean, x='rating', y='retweet_count') plt.subplot(1, 2, 2) plt.scatter(data=dogs_clean, x='rating', y='favorite_count'); dogs_clean.tail(2) imgs_clean.tail(2) ###Output _____no_output_____ ###Markdown Data Tidiness- Add a new column to dogs_clean that measures popularity based on retweets and favorites- Dog breed in imgs_clean is the only column we need in our analysis and can be added to dogs_clean. Same observational unit. Data Quality- Make both tables match on the tweet_id key column by- - Removing tweets in imgs_clean that don't exist in dogs_clean- - Matching the indices of both tables by sorting them via tweet_id Continue Cleaning Define- Find tweets in imgs_clean that don't exist in dogs_clean- Drop the above tweets from imgs_clean- Sort both tables by tweet id Code: ###Code mask = ~imgs_clean.tweet_id.isin(dogs_clean.tweet_id) not_in_both = imgs_clean[mask] imgs_clean.drop(not_in_both.index, inplace=True) imgs_clean.sort_values(by=['tweet_id'], inplace=True) dogs_clean.sort_values(by=['tweet_id'], inplace=True) imgs_clean.reset_index(inplace=True, drop=True) dogs_clean.reset_index(inplace=True, drop=True) ###Output _____no_output_____ ###Markdown Test: ###Code (dogs_clean.tweet_id == imgs_clean.tweet_id).sum() == imgs_clean.shape[0] (dogs_clean.tweet_id == imgs_clean.tweet_id).sum() == dogs_clean.shape[0] dogs_clean.tail(2) imgs_clean.tail(2) ###Output _____no_output_____ ###Markdown Define:Add a new column to the dogs_clean table as a numeric value for popularity.
Each retweet gets 3 points and each favorite gets 1 point- Add a new column named popularity_score to dogs_clean- Calculate popularity_score as 3 * retweet_count + 1 * favorite_count Code: ###Code dogs_clean['popularity_score'] = (dogs_clean.retweet_count * 3 + dogs_clean.favorite_count) ###Output _____no_output_____ ###Markdown Test: ###Code dogs_clean.popularity_score.describe() ###Output _____no_output_____ ###Markdown Define:Dog breed in imgs_clean is the only column we need in our analysis and can be added to dogs_clean. Same observational unit. Add the dog breed from imgs_clean to dogs_clean Code: ###Code dogs_clean['bread'] = imgs_clean['p1'] ###Output _____no_output_____ ###Markdown Test: ###Code dogs_clean.bread.value_counts()[:10] ###Output _____no_output_____ ###Markdown Store Cleaned DataFrame ###Code with open(os.path.join(datafolder, 'twitter_archive_master.csv'), 'w') as tweets_file: dogs_clean.to_csv(tweets_file, index=False) ###Output _____no_output_____ ###Markdown Analysis ###Code tweets = dogs_clean.copy() tweets.sample() ###Output _____no_output_____ ###Markdown The Consistency of Audience Engagement ###Code plt.figure(figsize=(16, 8)) plt.plot_date(tweets.timestamp, tweets.popularity_score) plt.ylabel('Tweets \' Popularity\n(based on retweets and favorites)'); ###Output _____no_output_____ ###Markdown ___** Conclusion **It is almost unbelievable at first that a Twitter account created only to rate people's dogs could become so successful, gaining about 7.85 million followers as of March 2019. The WeRateDogs Twitter account rates pictures (and sometimes videos) of people's dogs, usually with a humorous comment about the dog (from time to time it also posts more serious content, like a dog hurt in a house fire). The genius of its rating method is that the denominator is almost always 10, while the numerators, when the tweet is about a dog, are almost always greater than 10: 11/10, 12/10, 13/10, etc. Why?
Because WeRateDogs believes they're all good dogs!!! This rating account isn't really about rating dogs. There is no magic numeric formula to evaluate dogs! It's about engaging the audience, unjudgmentally, to share the love of dogs with each other. In the real world people love talking about each other's dogs and trust each other when both have dogs. Dogs are conversation ice breakers. WeRateDogs seems to do just that, but on Twitter. When we talk about dogs, it's almost always words of kindness and love. That's why this account, with a funny rating system that makes little numeric sense, has gained close to 8 million followers so far and counting. In short, it's not rating dogs that makes this account successful, but its value of loving dogs and engaging its audience. ___ ** Side Fun Findings *** Top 10 popular breeds among the top 35% most frequent breeds; by average popularity score* popularity score is (3 * retweet_count) + (1 * favorite_count) ###Code # popular breeds by popularity score top_frq_breads = tweets.bread.value_counts()[:int(tweets.bread.value_counts().shape[0]*0.35)] top_frq_breads = top_frq_breads.to_frame(name='count') # Convert Series to DataFrame top_frq_breads['ave_popularity_score'] = np.nan # Make column to save ave_popularity_score for bread in top_frq_breads.index: mask = tweets.bread == bread popularity_mean = int(tweets.loc[mask].popularity_score.mean()) top_frq_breads.loc[bread, 'ave_popularity_score'] = popularity_mean top_popular_breat = top_frq_breads.sort_values(by='ave_popularity_score', ascending=False)[:10] top_popular_breat ###Output _____no_output_____ ###Markdown ____* Top 10 popular breeds among the top 35% most frequent breeds; by WeRateDogs rating ###Code top_frq_breads = tweets.bread.value_counts()[:int(tweets.bread.value_counts().shape[0]*0.35)] top_frq_breads = top_frq_breads.to_frame(name='count') # Convert Series to DataFrame top_frq_breads['rating'] = np.nan # Make column to save rating for bread in top_frq_breads.index: mask =
tweets.bread == bread rating_mean = tweets.loc[mask].rating.mean() top_frq_breads.loc[bread, 'rating'] = rating_mean top_ratingr_breat = top_frq_breads.sort_values(by='rating', ascending=False)[:10] top_ratingr_breat # Verify how many top breeds appear both in the top by rating and by popularity top_ratingr_breat.index.isin(top_popular_breat.index).sum() ###Output _____no_output_____ ###Markdown @WeRateDogs Gather ###Code import pandas as pd import requests import io import tweepy import time import json import numpy as np import matplotlib.pyplot as plt # Read the provided CSV into a DataFrame twitter_archive_enhanced = pd.read_csv('twitter-archive-enhanced.csv') # Request image_predictions.tsv into a DataFrame url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' with requests.Session() as s: data = s.get(url) image_predictions = pd.read_csv(io.StringIO(data.text), sep='\t') # Using the tweet IDs in the WeRateDogs Twitter archive, tweet_id = twitter_archive_enhanced.tweet_id.tolist() # query the Twitter API... consumer_key = 'xxx' consumer_secret = 'xxx' access_token = 'xxx' access_secret = 'xxx' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) # for each tweet's JSON data using Python's Tweepy library... json_list = [] status_errors = [] for x in tweet_id: try: start = time.time() tweet = api.get_status(x, tweet_mode='extended') json_list.append(tweet._json) print(tweet.full_text) time.sleep(0.5) end = time.time() print(end - start) except Exception as e: print('\033[93m' + str(x) + ": " + str(e) + '\033[0m') status_errors.append(x) # and store each tweet's entire set of JSON data in a file called tweet_json.txt. # Each tweet's JSON data should be written to its own line.
with open('tweet_json.txt', 'w') as file: for item in json_list: json.dump(item, file) file.write('\n') # Then read this .txt file line by line into a pandas DataFrame with (at minimum): # tweet ID, retweet count, and favorite count. df_list = [] with open('tweet_json.txt') as file: for line in file: json_line = json.loads(line) id = json_line['id'] retweet_count = json_line['retweet_count'] favorite_count = json_line['favorite_count'] retweeted = json_line['retweeted'] created_at = json_line['created_at'] # Append to list of dictionaries df_list.append({'tweet_id': id, 'retweet_count': retweet_count, 'favorite_count': favorite_count, 'retweeted': retweeted, 'created_at': created_at}) social_data = pd.DataFrame(df_list, columns = ['tweet_id', 'retweet_count', 'favorite_count', 'retweeted', 'created_at']) ###Output _____no_output_____ ###Markdown Assess ###Code twitter_archive_enhanced.head() twitter_archive_enhanced.info() twitter_archive_enhanced['retweeted_status_id'].notnull().value_counts() twitter_archive_enhanced.loc[twitter_archive_enhanced['retweeted_status_id'].notnull()] twitter_archive_enhanced['retweeted_status_user_id'].notnull().value_counts() twitter_archive_enhanced['retweeted_status_user_id'].notnull().value_counts() twitter_archive_enhanced.loc[(twitter_archive_enhanced.rating_denominator != 10) & \ (twitter_archive_enhanced.expanded_urls.notnull()) & \ (twitter_archive_enhanced.retweeted_status_id.isnull())] twitter_archive_enhanced.loc[twitter_archive_enhanced.text.str.contains('\d\.\d\/'),['tweet_id','text','rating_numerator']] twitter_archive_enhanced.sample(5) twitter_archive_enhanced.tweet_id.duplicated().value_counts() image_predictions.info() image_predictions.sample(10) image_predictions.tweet_id.duplicated().value_counts() social_data.info() social_data.head() social_data.sample(7) social_data.retweeted.value_counts() social_data.tweet_id.duplicated().value_counts() social_data.describe() ###Output _____no_output_____ ###Markdown Quality 
*twitter_archive_enhanced* table- Some tweets have more than one value for the dog stages.- In the columns `retweeted_status_id`, `retweeted_status_user_id`, and `retweeted_status_timestamp` there are 181 not-null values (numbers like 8.860537e+17). In other words, not all are dog ratings and some are retweets.- In the column `expanded_urls` there are 59 null values.- Issues with fractional rating numerators: tweet_ids 883482846933004288 and 681340665377193984.- Wrong rating for `tweet_id` 716439118184652801: 50/50 instead of 11/10.- Wrong rating for `tweet_id` 682962037429899265: 7/11 instead of 10/10.- Wrong rating for `tweet_id` 666287406224695296: 1/2 instead of 9/10.- Wrong variable types for columns `tweet_id, in_reply_to_status_id, in_reply_to_user_id, timestamp, retweeted_status_id, and retweeted_status_user_id`.- In the columns `doggo`, `floofer`, `pupper` and `puppo`, values are 'None' instead of nulls. *image_predictions* table- Missing data: only 2075 entries out of 2356.- Three predictions per tweet.- Some predictions are not dogs.- Non-descriptive column headers `px`, `px_conf`, `px_dog`. *social_data* table- Missing data: only 1729 entries out of 2356. Tidiness *twitter_archive_enhanced* table- Four variables (columns `doggo`, `floofer`, `pupper` and `puppo`) instead of one: `dog_stage` (categorical). *image_predictions* table- Nine variables (columns `p1`, `p1_conf`, `p1_dog`, `p2`, `p2_conf`, `p2_dog`, `p3`, `p3_conf`, `p3_dog`) instead of four: `prediction_number` (categorical), `predicted_breed` (object), `prediction_confidence` (float) and `is_dog` (boolean).- This data should go into the `twitter_archive_enhanced` table. *social_data* table- This data should go into the `twitter_archive_enhanced` table.
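The nine-columns-to-four reshape planned above can be sketched with `pandas.wide_to_long` on a toy frame (one made-up tweet with two predictions; not the notebook's real data):

```python
import pandas as pd

wide = pd.DataFrame({
    'tweet_id': [1],
    'predicted_breed1': ['pug'], 'prediction_confidence1': [0.90],
    'predicted_breed2': ['chow'], 'prediction_confidence2': [0.05],
})

# The numeric suffixes 1/2 become the values of the new j column,
# so one wide row turns into one long row per prediction.
long = pd.wide_to_long(
    wide,
    stubnames=['predicted_breed', 'prediction_confidence'],
    i='tweet_id', j='prediction_number',
).reset_index()

print(long.shape)  # (2, 4)
```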
Clean ###Code # Create copy twitter_archive_clean = twitter_archive_enhanced.copy() image_predictions_clean = image_predictions.copy() social_data_clean = social_data.copy() ###Output _____no_output_____ ###Markdown Missing data: Both cases will be addressed by using an `inner merge` later on, given that we cannot do anything to gather that missing data. *social_data* table- This data should go into `twitter_archive_enhanced` table. Define1. Create a list with the columns to transfer from `social_data_clean` to `twitter_archive_clean`.2. Join (inner) those columns of `social_data_clean` to `twitter_archive_clean` on `tweet_id`.3. Delete `social_data_clean`. Code ###Code # Code goes here cols = ['tweet_id', 'retweet_count', 'favorite_count'] twitter_archive_clean = pd.merge(twitter_archive_clean, social_data_clean[cols], on=('tweet_id'), how='inner') del social_data_clean ###Output _____no_output_____ ###Markdown Test ###Code # Test code goes here twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1729 entries, 0 to 1728 Data columns (total 19 columns): tweet_id 1729 non-null int64 in_reply_to_status_id 65 non-null float64 in_reply_to_user_id 65 non-null float64 timestamp 1729 non-null object source 1729 non-null object text 1729 non-null object retweeted_status_id 151 non-null float64 retweeted_status_user_id 151 non-null float64 retweeted_status_timestamp 151 non-null object expanded_urls 1679 non-null object rating_numerator 1729 non-null int64 rating_denominator 1729 non-null int64 name 1729 non-null object doggo 1729 non-null object floofer 1729 non-null object pupper 1729 non-null object puppo 1729 non-null object retweet_count 1729 non-null int64 favorite_count 1729 non-null int64 dtypes: float64(4), int64(5), object(10) memory usage: 270.2+ KB ###Markdown *image_predictions* table- Nine variables (columns `p1`, `p1_conf`, `p1_dog`, `p2`, `p2_conf`, `p2_dog`, `p3`, `p3_conf`, `p3_dog`) instead of four: `prediction_number`
(categorical), `predicted_breed` (object), `prediction_confidence` (float) and `is_dog` (boolean). Define1. Change the name of the columns to be modified: `p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf and p3_dog`.2. Use `wide_to_long` to modify the dataframe structure. Code ###Code image_predictions_clean = image_predictions_clean.rename(index=str, columns={"p1": "predicted_breed1", "p1_conf": "prediction_confidence1", "p1_dog": "is_dog1", "p2": "predicted_breed2", "p2_conf": "prediction_confidence2", "p2_dog": "is_dog2", "p3": "predicted_breed3", "p3_conf": "prediction_confidence3", "p3_dog": "is_dog3", }) image_predictions_clean = pd.wide_to_long(image_predictions_clean, stubnames=['predicted_breed', 'prediction_confidence', 'is_dog'], i='tweet_id', j='prediction_number')\ .reset_index()\ .sort_values(by=['tweet_id', 'prediction_number']) ###Output _____no_output_____ ###Markdown Test ###Code image_predictions_clean.head() image_predictions_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 6225 entries, 0 to 6224 Data columns (total 7 columns): tweet_id 6225 non-null int64 prediction_number 6225 non-null object img_num 6225 non-null int64 jpg_url 6225 non-null object predicted_breed 6225 non-null object prediction_confidence 6225 non-null float64 is_dog 6225 non-null bool dtypes: bool(1), float64(1), int64(2), object(3) memory usage: 346.5+ KB ###Markdown *image_predictions* table- This data should go into `twitter_archive_enhanced` table. Define 1. Drop every prediction that isn't a dog.2. For each `tweet_id` select the prediction with the lowest `prediction_number`.3. Join (inner) relevant columns (`predicted_breed` and `prediction_confidence`) to `twitter_archive_clean` on `tweet_id`.
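Step 2 — keeping, per tweet, the prediction with the lowest `prediction_number` — boils down to a sort followed by `groupby(...).first()`. A sketch on a toy long-format frame (invented breeds):

```python
import pandas as pd

# Hypothetical long-format predictions: two per tweet, out of order
preds = pd.DataFrame({
    "tweet_id": [111, 111, 222, 222],
    "prediction_number": [2, 1, 1, 3],
    "predicted_breed": ["pug", "beagle", "corgi", "husky"],
})

# Sort so the lowest prediction_number comes first,
# then take the first row of each tweet_id group
best = (preds.sort_values("prediction_number")
             .groupby("tweet_id", as_index=False)
             .first())
print(best.predicted_breed.tolist())  # ['beagle', 'corgi']
```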
Code ###Code # Code here image_predictions_clean = image_predictions_clean.loc[image_predictions_clean['is_dog']] image_predictions_clean = image_predictions_clean.sort_values("prediction_number").groupby("tweet_id", as_index=False).first()\ .sort_values(by='tweet_id') cols = ['tweet_id', 'predicted_breed', 'prediction_confidence'] twitter_archive_clean = pd.merge(twitter_archive_clean, image_predictions_clean[cols], on=('tweet_id'), how='inner') ###Output _____no_output_____ ###Markdown Test ###Code # Test code here twitter_archive_clean.info() twitter_archive_clean.sample(5) image_predictions_clean.prediction_confidence.min().astype(float) image_predictions_clean.loc[image_predictions_clean['prediction_confidence'] == 1.00288e-05] ###Output _____no_output_____ ###Markdown *twitter_archive_enhanced* table- Four variables (columns `doggo`, `floofer`, `pupper` and `puppo`) instead of one: `dog_stage` (categorical). Define 1. Create a new pivot df called `twitter_archive_clean_dog_stages`.2. Melt this df into these columns: `tweet_id`, `dog_stage`.3. Drop all the rows with value `None`.4. Join (left) this df into `twtitter_archive_clean` on `tweet_id`.5. Drop the columns `doggo`, `floofer`, `pupper` and `puppo`.6. Delete df `twitter_archive_clean_dog_stages`. 
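Steps 2–3 of the Define above — melting the four indicator columns into a single `dog_stage` column and discarding the 'None' placeholders — look like this on a toy frame (hypothetical rows):

```python
import pandas as pd

toy = pd.DataFrame({
    "tweet_id": [1, 2],
    "doggo": ["None", "doggo"],
    "pupper": ["pupper", "None"],
})

# Melt the stage columns into rows, then drop the 'None' placeholders
stages = pd.melt(toy, id_vars=["tweet_id"], value_name="dog_stage")
stages = stages[stages.dog_stage != "None"]
print(stages[["tweet_id", "dog_stage"]])
```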
Code ###Code # Code goes here cols = ['tweet_id', 'doggo', 'floofer', 'pupper', 'puppo'] twitter_archive_clean_dog_stages = twitter_archive_clean[cols] twitter_archive_clean_dog_stages = pd.melt(twitter_archive_clean_dog_stages, id_vars=['tweet_id'], value_name='dog_stage') twitter_archive_clean_dog_stages = twitter_archive_clean_dog_stages[twitter_archive_clean_dog_stages.dog_stage != 'None'] twitter_archive_clean = pd.merge(twitter_archive_clean, twitter_archive_clean_dog_stages[['tweet_id', 'dog_stage']], on='tweet_id', how='left') twitter_archive_clean = twitter_archive_clean.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1) del twitter_archive_clean_dog_stages ###Output _____no_output_____ ###Markdown Test ###Code # Test code goes here twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1299 entries, 0 to 1298 Data columns (total 18 columns): tweet_id 1299 non-null int64 in_reply_to_status_id 18 non-null float64 in_reply_to_user_id 18 non-null float64 timestamp 1299 non-null object source 1299 non-null object text 1299 non-null object retweeted_status_id 54 non-null float64 retweeted_status_user_id 54 non-null float64 retweeted_status_timestamp 54 non-null object expanded_urls 1299 non-null object rating_numerator 1299 non-null int64 rating_denominator 1299 non-null int64 name 1299 non-null object retweet_count 1299 non-null int64 favorite_count 1299 non-null int64 predicted_breed 1299 non-null object prediction_confidence 1299 non-null float64 dog_stage 236 non-null object dtypes: float64(5), int64(5), object(8) memory usage: 192.8+ KB ###Markdown *twitter_archive_enhanced* table- Some tweets have more than one value for the dog stages. Define 1. Find all the duplicated `tweet_id`'s.2. Address the values directly, one by one.3. Delete duplicates `tweet_id`. 
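For step 1, note that `duplicated()` flags only the repeats; passing `keep=False` flags every occurrence of a duplicated id, which makes the pairs easier to inspect side by side (toy ids):

```python
import pandas as pd

df = pd.DataFrame({
    "tweet_id": [1, 2, 2, 3, 3],
    "dog_stage": [None, "doggo", "pupper", "doggo", "pupper"],
})

# keep=False marks ALL rows of a duplicated tweet_id, not just the second one
dupes = df[df.tweet_id.duplicated(keep=False)]
print(dupes.tweet_id.tolist())  # [2, 2, 3, 3]
```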
Code ###Code # Code goes here twitter_archive_clean.loc[twitter_archive_clean.tweet_id.duplicated()] twitter_archive_clean.loc[twitter_archive_clean.tweet_id == 855851453814013952] twitter_archive_clean.at[137,'dog_stage'] = 'puppo' twitter_archive_clean = twitter_archive_clean.drop([138]) twitter_archive_clean.loc[twitter_archive_clean.tweet_id == 854010172552949760] twitter_archive_clean = twitter_archive_clean.drop([145]) twitter_archive_clean.loc[twitter_archive_clean.tweet_id == 817777686764523521] twitter_archive_clean.at[322,'dog_stage'] = 'pupper' twitter_archive_clean = twitter_archive_clean.drop([323]) twitter_archive_clean.loc[twitter_archive_clean.tweet_id == 808106460588765185] twitter_archive_clean.at[383,'dog_stage'] = 'doggo&pupper' twitter_archive_clean = twitter_archive_clean.drop([384]) twitter_archive_clean.loc[twitter_archive_clean.tweet_id == 802265048156610565] twitter_archive_clean.at[408,'dog_stage'] = 'doggo&pupper' twitter_archive_clean = twitter_archive_clean.drop([409]) twitter_archive_clean.loc[twitter_archive_clean.tweet_id == 801115127852503040] twitter_archive_clean.at[414,'dog_stage'] = 'pupper' twitter_archive_clean = twitter_archive_clean.drop([415]) twitter_archive_clean.loc[twitter_archive_clean.tweet_id == 775898661951791106] twitter_archive_clean.at[562,'dog_stage'] = 'doggo&pupper' twitter_archive_clean = twitter_archive_clean.drop([563]) twitter_archive_clean.loc[twitter_archive_clean.tweet_id == 770093767776997377] twitter_archive_clean.at[597,'dog_stage'] = 'doggo&pupper' twitter_archive_clean = twitter_archive_clean.drop([598]) twitter_archive_clean.loc[twitter_archive_clean.tweet_id == 733109485275860992] twitter_archive_clean.at[614,'dog_stage'] = 'doggo&pupper' twitter_archive_clean = twitter_archive_clean.drop([615]) ###Output _____no_output_____ ###Markdown Test ###Code # Test code here twitter_archive_clean[twitter_archive_clean.tweet_id.duplicated()] twitter_archive_clean.dog_stage.value_counts() 
twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1290 entries, 0 to 1298 Data columns (total 18 columns): tweet_id 1290 non-null int64 in_reply_to_status_id 17 non-null float64 in_reply_to_user_id 17 non-null float64 timestamp 1290 non-null object source 1290 non-null object text 1290 non-null object retweeted_status_id 52 non-null float64 retweeted_status_user_id 52 non-null float64 retweeted_status_timestamp 52 non-null object expanded_urls 1290 non-null object rating_numerator 1290 non-null int64 rating_denominator 1290 non-null int64 name 1290 non-null object retweet_count 1290 non-null int64 favorite_count 1290 non-null int64 predicted_breed 1290 non-null object prediction_confidence 1290 non-null float64 dog_stage 227 non-null object dtypes: float64(5), int64(5), object(8) memory usage: 191.5+ KB ###Markdown *twitter_archive_enhanced* table- In columns `retweeted_status_id`, `retweeted_status_user_id`, and `retweeted_status_timestamp` there are 181 not-null values (numbers like 8.860537e+17). i.e., Not all are dog ratings and some are retweets. Define 1. Drop rows where `retweeted_status_id` is not null.2. Drop columns `retweeted_status_id`, `retweeted_status_user_id` and `retweeted_status_timestamp`. 
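The two Define steps above can be sketched on a toy frame (made-up values): filter on the null mask first, then drop the now-useless bookkeeping columns.

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "tweet_id": [1, 2, 3],
    "retweeted_status_id": [np.nan, 8.86e17, np.nan],  # id 2 is a retweet
    "text": ["a", "b", "c"],
})

# Keep original tweets only, then drop the retweet column
originals = df.loc[df.retweeted_status_id.isnull()]
originals = originals.drop(["retweeted_status_id"], axis=1)
print(originals.tweet_id.tolist())  # [1, 3]
```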
Code ###Code # Code goes here twitter_archive_clean = twitter_archive_clean.loc[twitter_archive_clean.retweeted_status_id.isnull()] twitter_archive_clean = twitter_archive_clean.drop(['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code # Test code here twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1238 entries, 0 to 1298 Data columns (total 15 columns): tweet_id 1238 non-null int64 in_reply_to_status_id 17 non-null float64 in_reply_to_user_id 17 non-null float64 timestamp 1238 non-null object source 1238 non-null object text 1238 non-null object expanded_urls 1238 non-null object rating_numerator 1238 non-null int64 rating_denominator 1238 non-null int64 name 1238 non-null object retweet_count 1238 non-null int64 favorite_count 1238 non-null int64 predicted_breed 1238 non-null object prediction_confidence 1238 non-null float64 dog_stage 217 non-null object dtypes: float64(3), int64(5), object(7) memory usage: 154.8+ KB ###Markdown *twitter_archive_enhanced* table- Issues with fractional rating numerators: tweet_id's 883482846933004288 and 681340665377193984. Define 1. Using regex, extract correct numerator from `text` into new column `rating_numerator_clean`. (float)2. Compare `rating_numerator` and `rating_numerator_clean` to see if it solves issues for tweet_id's 883482846933004288 and 681340665377193984.3. If there are other differences, adjust the code until it fully represents the real rating.4. Drop column `rating_numerator`.5. Rename column `rating_numerator_clean` to `rating_numerator`.
(float) Code ###Code # Code goes here twitter_archive_clean['rating_numerator_clean'] = twitter_archive_clean.text.str.extract('(\d*\.\d+|\d+)\/\d+', expand=True).astype(float) twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 883482846933004288)] twitter_archive_clean = twitter_archive_clean.drop('rating_numerator', axis=1) twitter_archive_clean = twitter_archive_clean.rename(index=str, columns={'rating_numerator_clean': 'rating_numerator'}) ###Output _____no_output_____ ###Markdown Test ###Code # Test Code here twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 883482846933004288)] list(twitter_archive_clean) ###Output _____no_output_____ ###Markdown *twitter_archive_enhanced* table- Wrong rating for `tweet_id` 716439118184652801: 50/50 instead of 11/10.- Wrong rating for `tweet_id` 682962037429899265: 7/11 instead of 10/10.- Wrong rating for `tweet_id` 666287406224695296: 1/2 instead of 9/10. Define 1. Locate each tweet_id's index.2. Manually change `rating_numerator` and `rating_denominator` using `at`.
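`DataFrame.at` used in step 2 is label-based scalar access — it reads or writes one (row label, column) cell at a time. A sketch with a made-up wrong rating:

```python
import pandas as pd

# One toy row with the kind of mis-read rating described above
df = pd.DataFrame({"rating_numerator": [50.0], "rating_denominator": [50.0]},
                  index=[689])

# .at writes a single cell addressed by row label and column name
df.at[689, "rating_numerator"] = 11
df.at[689, "rating_denominator"] = 10
print(df.loc[689].tolist())  # [11.0, 10.0]
```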
Code ###Code # Code goes here twitter_archive_clean.loc[twitter_archive_clean.tweet_id == 716439118184652801, \ ['tweet_id', 'rating_numerator', 'rating_denominator']] twitter_archive_clean.at['689','rating_numerator'] = 11 twitter_archive_clean.at['689','rating_denominator'] = 10 twitter_archive_clean.loc[twitter_archive_clean.tweet_id == 682962037429899265, \ ['tweet_id', 'rating_numerator', 'rating_denominator']] twitter_archive_clean.at['1038','rating_numerator'] = 10 twitter_archive_clean.at['1038','rating_denominator'] = 10 twitter_archive_clean.loc[twitter_archive_clean.tweet_id == 666287406224695296, \ ['tweet_id', 'rating_numerator', 'rating_denominator']] ###Output _____no_output_____ ###Markdown Test ###Code # Test code goes here twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 716439118184652801), \ ['tweet_id', 'rating_numerator', 'rating_denominator']] twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 682962037429899265), \ ['tweet_id', 'rating_numerator', 'rating_denominator']] ###Output _____no_output_____ ###Markdown *twitter_archive_enhanced* table- Wrong variable types for columns `timestamp, tweet_id, in_reply_to_status_id, in_reply_to_user_id, timestamp, retweeted_status_id, and retweeted_status_user_id`. Define 1. `timestamp`: Change type to datetime.2. `tweet_id, in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, and retweeted_status_user_id`: Change type to object. 
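The conversions in the Define are one-liners; on a toy frame (sample timestamp string, hypothetical id) the result can be checked directly:

```python
import pandas as pd

df = pd.DataFrame({
    "tweet_id": [892420643555336193],
    "timestamp": ["2017-08-01 16:23:56 +0000"],
})

df.timestamp = pd.to_datetime(df.timestamp)   # object -> datetime64
df.tweet_id = df.tweet_id.astype(object)      # int64  -> object

print(df.timestamp.dt.year.iloc[0])  # 2017
```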
Code ###Code # Code goes here twitter_archive_clean.timestamp = pd.to_datetime(twitter_archive_clean.timestamp) twitter_archive_clean.tweet_id = twitter_archive_clean.tweet_id.astype(object) twitter_archive_clean.in_reply_to_status_id = twitter_archive_clean.in_reply_to_status_id.astype(object) twitter_archive_clean.in_reply_to_user_id = twitter_archive_clean.in_reply_to_user_id.astype(object) ###Output _____no_output_____ ###Markdown Test ###Code # Test code here twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Index: 1239 entries, 0 to 689 Data columns (total 15 columns): tweet_id 1238 non-null object in_reply_to_status_id 17 non-null object in_reply_to_user_id 17 non-null object timestamp 1238 non-null datetime64[ns] source 1238 non-null object text 1238 non-null object expanded_urls 1238 non-null object rating_denominator 1238 non-null float64 name 1238 non-null object retweet_count 1238 non-null float64 favorite_count 1238 non-null float64 predicted_breed 1238 non-null object prediction_confidence 1238 non-null float64 dog_stage 217 non-null object rating_numerator 1239 non-null float64 dtypes: datetime64[ns](1), float64(5), object(9) memory usage: 194.9+ KB ###Markdown Storing ###Code twitter_archive_clean.to_csv('twitter_archive_master.csv', encoding='utf-8') ###Output _____no_output_____ ###Markdown Analysis and visualization ###Code twitter_archive_master = pd.read_csv('twitter_archive_master.csv') twitter_archive_master.describe() rating = twitter_archive_master['rating_numerator']/twitter_archive_master['rating_denominator']*100 rating.describe() ###Output _____no_output_____ ###Markdown **Insight 1:** The average rating is 111%. ###Code twitter_archive_master.dog_stage.describe() ###Output _____no_output_____ ###Markdown **Insight 2:** Most of the tweets have _pupper_ as dog stage (69%).
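The share behind Insight 2 can be read straight off `value_counts(normalize=True)`; on a toy stage column (invented counts):

```python
import pandas as pd

stages = pd.Series(["pupper"] * 7 + ["doggo"] * 2 + ["puppo"])

# normalize=True turns counts into proportions
shares = stages.value_counts(normalize=True)
print(shares.idxmax())  # pupper
```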
###Code twitter_archive_master.predicted_breed.describe() twitter_archive_master.predicted_breed.value_counts().head(4) twitter_archive_clean.loc[twitter_archive_clean.retweet_count >= 3621.750000]['predicted_breed'].value_counts().head(4) twitter_archive_clean.loc[twitter_archive_clean.favorite_count >= 14410.250000]['predicted_breed'].value_counts().head(4) twitter_archive_clean.loc[(twitter_archive_clean.rating_numerator/twitter_archive_clean.rating_denominator) >= 1.2]['predicted_breed'].value_counts().head(4) ###Output _____no_output_____ ###Markdown **Insight 3:** There are 121 different predicted breeds, the most frequent being `golden_retriever`, with more than 9% of occurrences, followed by `Labrador_retriever`, `Pembroke` and `Chihuahua`. The same distribution is present in the top quartile of `retweet_count`, `favorite_count` and `rating`. ###Code %matplotlib inline plt.scatter(rating, twitter_archive_master['favorite_count'], alpha=0.1) plt.xlim(xmin=0,xmax=150) plt.ylim(ymax=60000) %matplotlib inline plt.scatter(rating, twitter_archive_master['retweet_count'], alpha=0.1) plt.xlim(xmin=0,xmax=150) plt.ylim(ymax=20000) ###Output _____no_output_____ ###Markdown **Insight 4:** There seems to be a positive correlation between the rating and the favorite_count, and between rating and the retweet_count.
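The impression from the scatter plots can be quantified with a Pearson correlation coefficient; a sketch on made-up numbers (not the real counts):

```python
import pandas as pd

df = pd.DataFrame({
    "rating": [80, 100, 110, 120, 130],
    "favorite_count": [500, 2000, 4000, 9000, 15000],
})

# Pearson r between rating and favorites; positive means they rise together
r = df.rating.corr(df.favorite_count)
print(r > 0)  # True for this toy sample
```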
###Code %matplotlib inline plt.scatter(twitter_archive_master.text.str.len(), twitter_archive_master['favorite_count'], alpha=0.1) plt.ylim(ymax=45000) ###Output _____no_output_____ ###Markdown Table of Contents1&nbsp;&nbsp;Introduction2&nbsp;&nbsp;Gathering data2.1&nbsp;&nbsp;Twitter archive file2.2&nbsp;&nbsp;Tweet image prediction2.3&nbsp;&nbsp;Twitter API File3&nbsp;&nbsp;Assessing data3.1&nbsp;&nbsp;Assess: twitter archive3.2&nbsp;&nbsp;Assess: Image Predictions3.3&nbsp;&nbsp;Assess: Twitter API Data4&nbsp;&nbsp;Cleaning Data4.1&nbsp;&nbsp;Clean: Twitter Archive Data4.2&nbsp;&nbsp;Clean: Image Predictions Data4.3&nbsp;&nbsp;Clean: Twitter API Data5&nbsp;&nbsp;Storing Data6&nbsp;&nbsp;Analyzing and Visualizing Data Introduction This project is about rating dogs. In this project we are going to gather data from different sources and in different formats, then assess the data we gathered and clean it, and finally store, analyze and visualize our data. Gathering data Twitter archive file Downloaded from Udacity resources ###Code # Import all packages needed import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline import requests from functools import reduce from datetime import datetime import tweepy import json import re import seaborn as sns import warnings warnings.filterwarnings("ignore") # pandas settings pd.set_option('display.max_colwidth', None) # load twitter archive twitter_archive = pd.read_csv("twitter-archive-enhanced.csv") # use tweet id column as index twitter_archive.set_index("tweet_id", inplace = True) # display a few lines twitter_archive.head() ###Output _____no_output_____ ###Markdown Tweet image prediction ###Code # URL downloaded programmatically # get file with the image predictions url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' with open('image-predictions.tsv', 'wb') as file: predictions = requests.get(url) file.write(predictions.content) # load
image predictions image_prediction = pd.read_csv('image-predictions.tsv', sep = '\t') # use tweet id column as index image_prediction.set_index("tweet_id", inplace = True) # display a few lines image_prediction.head() ###Output _____no_output_____ ###Markdown Twitter API File ###Code from timeit import default_timer as timer # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # These are hidden to comply with Twitter's API terms and conditions consumer_key = '***********' consumer_secret = '**********' access_token = '***********' access_secret = '*************' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) # NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES: # df_1 is a DataFrame with the twitter_archive_enhanced.csv file. You may have to # change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv # NOTE TO REVIEWER: this student had mobile verification issues so the following # Twitter API code was sent to this student from a Udacity instructor # Tweet IDs for which to gather additional data via Twitter's API tweet_ids = df_1.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) df_list = [] with open('tweet-json.txt', 'r') as file: for line in file: tweet = json.loads(line) tweet_id = tweet['id'] retweet_count
= tweet['retweet_count'] fav_count = tweet['favorite_count'] user_count = tweet['user']['followers_count'] df_list.append({'tweet_id':tweet_id, 'favorites': fav_count, 'retweets': retweet_count, 'followers': user_count}) twitter_api = pd.DataFrame(df_list) # use tweet id column as index twitter_api.set_index('tweet_id', inplace = True) # display few lines twitter_api.head() ###Output _____no_output_____ ###Markdown Assessing dataAssess data visually and programmatically using pandas for quality and tidiness issues. Assess: twitter archive ###Code # display sample of data twitter_archive.sample(5) # print a summary of a DataFrame twitter_archive.info() # check if ids are unique twitter_archive.index.is_unique # check number of replies np.isfinite(twitter_archive.in_reply_to_status_id).sum() # check number of retweets np.isfinite(twitter_archive.retweeted_status_id).sum() # check name of dog twitter_archive.name.value_counts() # check if dogs have more than one category assigned categories = ['doggo', 'floofer', 'pupper', 'puppo'] for category in categories: twitter_archive[category] = twitter_archive[category].apply(lambda x: 0 if x == 'None' else 1) twitter_archive['number_categories'] = twitter_archive.loc[:,categories].sum(axis = 1) # dogs categories twitter_archive['number_categories'].value_counts() # check rating denominator twitter_archive.rating_denominator.value_counts() # check ratings with denominator greather than 10 twitter_archive[twitter_archive.rating_denominator > 10][['text', 'rating_denominator']] # check rating numerator twitter_archive.rating_numerator.value_counts() # check for any float ratings in the text column with pd.option_context('max_colwidth', 200): display(twitter_archive[twitter_archive['text'].str.contains(r"(\d+\.\d*\/\d+)")] [['text', 'rating_numerator', 'rating_denominator']]) # check expanded urls twitter_archive[twitter_archive.expanded_urls.str.startswith( ('https://twitter.com', 'http://twitter.com', 'https://vine.co'), 
na=False)].sample(3)[['text', 'expanded_urls']] # check for two or more urls in the expanded urls twitter_archive[twitter_archive.expanded_urls.str.contains(',', na=False)].expanded_urls.count() ###Output _____no_output_____ ###Markdown Quality Issues in twitter_archive1- Delete columns that won't be used for analysis.2- The timestamp has an incorrect datatype - is an object, should be DateTime.3- some of the gathered tweets are replies and should be removed.4- some of the gathered tweets are retweets.5- some dogs have more than one category assigned.6- Correct denominators other than 10.7- float ratings have been incorrectly read from the text of tweet.8- we have 639 expanded urls which contain more than one url address. Tidiness Issues in twitter_archive1- Dog classification (doggo, floofer, pupper or puppo) should be in one column. Assess: Image Predictions ###Code # display sample of data image_prediction.sample(10) # print a summary of a DataFrame image_prediction.info() # Check jpg_url for duplicates sum(image_prediction.jpg_url.duplicated()) # check jpg_url to confirm if it contains only jpg and png images image_prediction[~image_prediction.jpg_url.str.endswith(('.jpg', '.png'), na = False)].jpg_url.count() image_prediction.img_num.value_counts() # check 1st prediction image_prediction.p1.sample(3) # check dog predictions image_prediction.p1_dog.count() ###Output _____no_output_____ ###Markdown Quality Issues in Image Predictions1- the dataset has 2075 entries, while twitter archive dataset has 2356 entries.2- column names are confusing and do not give much information about the content.3- dog breeds contain underscores, and have different case formatting.4- only 2075 images have been classified as dog images for top prediction.5- 66 jpg_url duplicates. Tidiness Issues in Image Predictions1- dataset should be merged with the twitter archive dataset. 
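Merging several tables on `tweet_id` — done later in the Storing section — chains pairwise merges, which `functools.reduce` expresses compactly (toy frames, invented values):

```python
from functools import reduce
import pandas as pd

archive = pd.DataFrame({"tweet_id": [1, 2], "text": ["a", "b"]})
preds = pd.DataFrame({"tweet_id": [1, 2], "breed": ["pug", "corgi"]})
api = pd.DataFrame({"tweet_id": [1, 2], "favorites": [5, 9]})

# reduce applies pd.merge pairwise: merge(merge(archive, preds), api)
df = reduce(lambda left, right: pd.merge(left, right, on="tweet_id"),
            [archive, preds, api])
print(list(df.columns))  # ['tweet_id', 'text', 'breed', 'favorites']
```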
Assess: Twitter API Data ###Code # display sample of data twitter_api.sample(3) # print a summary of a DataFrame twitter_api.info() # check if ids are unique twitter_archive.index.is_unique ###Output _____no_output_____ ###Markdown Quality Issues in Twitter API Data 1- twitter archive dataset has 2356 entries, while twitter API data has 2354. Tidiness Issues in Twitter API Data1- dataset should be merged with the twitter archive dataset. Cleaning DataUsing pandas, clean the quality and tidiness issues identified in the Assessing Data section. Clean: Twitter Archive Data ###Code # create a copy of twitter archive dataset twitter_archive_clean = twitter_archive.copy() # display sample of data twitter_archive_clean.sample(5) print(sum(twitter_archive_clean.retweeted_status_id.value_counts())) print(sum(twitter_archive_clean.in_reply_to_status_id.value_counts())) # drop retweets twitter_archive_clean = twitter_archive_clean[twitter_archive_clean['retweeted_status_id'].isnull()] #TEST print(sum(twitter_archive_clean.retweeted_status_user_id.value_counts())) twitter_archive_clean = twitter_archive_clean[twitter_archive_clean['in_reply_to_status_id'].isnull()] print(sum(twitter_archive_clean.in_reply_to_status_id.value_counts())) # display all columns twitter_archive_clean.columns # drop unnecessary columns twitter_archive_clean.drop(['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id', 'retweeted_status_user_id','retweeted_status_timestamp'], axis = 1, inplace = True) # test by the shape of dataframe twitter_archive_clean.shape # test by displaying cleaned dataset twitter_archive_clean.sample(3) # display sample of dataframe twitter_archive_clean.head(3) # read dog types from text column for index, column in twitter_archive_clean.iterrows(): for word in ['puppo', 'pupper', 'doggo', 'floofer']: if word.lower() in str(twitter_archive_clean.loc[index, 'text']).lower(): twitter_archive_clean.loc[index, 'dog_type'] = word.title() # drop old columns 
twitter_archive_clean.drop(['puppo', 'pupper', 'doggo', 'floofer'], axis=1, inplace=True) # Test # display sample of fixed data twitter_archive_clean.sample(3) # convert to datetime twitter_archive_clean.timestamp = pd.to_datetime(twitter_archive_clean.timestamp) # Test # display dataset types twitter_archive_clean.info() # Disply a sample before correction twitter_archive_clean[twitter_archive_clean.text.str.contains(r'\d+\.\d+\/\d+')][['text','rating_denominator', 'rating_numerator']].sample(3) # convert both columns to floats twitter_archive_clean['rating_numerator'] = twitter_archive_clean['rating_numerator'].astype(float) twitter_archive_clean['rating_denominator'] = twitter_archive_clean['rating_denominator'].astype(float) # find columns with fractions fraction_ratings = twitter_archive_clean[twitter_archive_clean.text.str.contains(r'\d+\.\d+\/\d+', na = False)].index # extract correct rating and replace incorrect one for index in fraction_ratings: rating = re.search('\d+\.\d+\/\d+', twitter_archive_clean.loc[index,:].text).group(0) twitter_archive_clean.at[index,'rating_numerator'], twitter_archive_clean.at[index,'rating_denominator'] = rating.split('/') # display sample of fixed data twitter_archive_clean.loc[fraction_ratings,:][['text','rating_denominator', 'rating_numerator']].sample(3) # display sample of data with denominator greater than 10 twitter_archive_clean.loc[twitter_archive.rating_denominator > 10][['text','rating_denominator', 'rating_numerator']] # fix rating manually for tweets for which rating was read incorrectly twitter_archive_clean.loc[832088576586297345, 'rating_denominator'] = 0 twitter_archive_clean.loc[832088576586297345, 'rating_numerator'] = 0 twitter_archive_clean.loc[775096608509886464, 'rating_denominator'] = 10 twitter_archive_clean.loc[775096608509886464, 'rating_numerator'] = 14 twitter_archive_clean.loc[740373189193256964, 'rating_denominator'] = 10 twitter_archive_clean.loc[740373189193256964, 'rating_numerator'] = 14 
twitter_archive_clean.loc[722974582966214656, 'rating_denominator'] = 10 twitter_archive_clean.loc[722974582966214656, 'rating_numerator'] = 13 twitter_archive_clean.loc[716439118184652801, 'rating_denominator'] = 10 twitter_archive_clean.loc[716439118184652801, 'rating_numerator'] = 11 twitter_archive_clean.loc[682962037429899265, 'rating_denominator'] = 10 twitter_archive_clean.loc[682962037429899265, 'rating_numerator'] = 10 # add normalized rating twitter_archive_clean['rating'] = twitter_archive_clean['rating_numerator'] / twitter_archive_clean['rating_denominator'] # Test # display sample of data with the new column twitter_archive_clean[['text','rating_denominator', 'rating_numerator', 'rating']].sample(5) # fix expanded urls for index, column in twitter_archive_clean.iterrows(): twitter_archive_clean.loc[index, 'expanded_urls'] = 'https://twitter.com/dog_rates/status/' + str(index) # Test twitter_archive_clean.sample(3) ###Output _____no_output_____ ###Markdown Clean: Image Predictions Data ###Code # create a copy of dataset image_prediction_clean = image_prediction.copy() # display sample of data image_prediction_clean.sample(3) # display current labels image_prediction_clean.columns # change labels image_prediction_clean.columns = ['image_url', 'img_number', '1st_prediction', '1st_prediction_confidence', '1st_prediction_isdog', '2nd_prediction', '2nd_prediction_confidence', '2nd_prediction_isdog', '3rd_prediction', '3rd_prediction_confidence', '3rd_prediction_isdog'] # display new labels image_prediction_clean.columns # columns with dog breed dog_breed_cols = ['1st_prediction', '2nd_prediction', '3rd_prediction'] # remove underscore and capitalize the first letter of each word for column in dog_breed_cols: image_prediction_clean[column] = image_prediction_clean[column].str.replace('_', ' ').str.title() # display sample of changes image_prediction_clean[dog_breed_cols].sample(5) #disply jpg_url duplicates sum(image_prediction_clean.image_url.duplicated()) 
#CODE: Delete duplicated jpg_url image_prediction_clean = image_prediction_clean.drop_duplicates(subset=['image_url'], keep='last') #TEST sum(image_prediction_clean['image_url'].duplicated()) # build function to determine dog breed # if no breed detected, set value to NaN def get_breed(row): if row['1st_prediction_isdog'] == True: return row['1st_prediction'], row['1st_prediction_confidence'] if row['2nd_prediction_isdog'] == True: return row['2nd_prediction'], row['2nd_prediction_confidence'] if row['3rd_prediction_isdog'] == True: return row['3rd_prediction'], row['3rd_prediction_confidence'] return np.nan, np.nan # apply function to dataset # create new columns with data image_prediction_clean[['breed_predicted', 'prediction_confidence']] = pd.DataFrame(image_prediction_clean.apply( lambda row: get_breed(row), axis = 1).tolist(), index = image_prediction_clean.index) # drop old columns image_prediction_clean.drop(['1st_prediction', '1st_prediction_confidence', '1st_prediction_isdog', '2nd_prediction', '2nd_prediction_confidence', '2nd_prediction_isdog', '3rd_prediction', '3rd_prediction_confidence', '3rd_prediction_isdog'], axis=1, inplace=True) # drop rows without dog breed prediction image_prediction_clean.dropna(subset = ['breed_predicted', 'prediction_confidence'], inplace = True) # Test # display sample of cleaned dataset image_prediction_clean.sample(3) ###Output _____no_output_____ ###Markdown Clean: Twitter API Data ###Code # display sample of data twitter_api.sample(3) ###Output _____no_output_____ ###Markdown Storing Data ###Code # join datasets df = reduce(lambda left, right: pd.merge(left, right, on='tweet_id'), [twitter_archive_clean, image_prediction_clean,twitter_api]) # display new dataset df.sample(3) #Store the clean DataFrame in a CSV file df.to_csv('twitter_archive_master.csv') ###Output _____no_output_____ ###Markdown Analyzing and Visualizing Data ###Code # display sample of data df.sample(3) # display basic data summary df.describe() # 
Display the number for each dog's breed df['breed_predicted'].value_counts() # horizontal bar plot function def plot_barh(x, y, title="", xlabel="", ylabel="", rotation=0): plt.figure(figsize=(8, 5)) bar_list = plt.barh(x, y, color="#760545") plt.title(title, fontsize=17) plt.xlabel(xlabel, fontsize=15) plt.ylabel(ylabel, fontsize=15) return plt.show() # plot 10 most popular dog breeds dog_breeds = pd.DataFrame(df.breed_predicted.value_counts()[:10]) plot_barh(dog_breeds.index, dog_breeds.breed_predicted, title="Histogram of the 10 Most Popular Dog Breeds") df_dog_type_mean = df.groupby('breed_predicted').mean() df_dog_type_mean.head() df_dog_type_sorted = df_dog_type_mean['rating'].sort_values() df_dog_type_sorted d = pd.DataFrame(df.timestamp) fig, ax = plt.subplots(figsize=(10,7)) ax.plot_date(d,df.favorites, color="#BB1A46") ax.plot_date(d,df.retweets, color="#077609") ax.set_ylim([0,16000]) ax.set_title('Popularity of @WERateDogs over Time') # Setting x and y labels. ax.set_ylabel('Count') ax.set_xlabel('Time') ax.legend(['favorites','retweets']) df.dog_type.value_counts() # bar plot function def plot_bar(x, y, title="", xlabel="", ylabel="", rotation=0, width=0.8): plt.figure(figsize=(9, 6)) bar_list = plt.bar(x, y, color="#760545", width=width) plt.title(title, fontsize=18) plt.xlabel(xlabel, fontsize=15) plt.ylabel(ylabel, fontsize=15) return plt.show() # plot dog types dog_types = pd.DataFrame(df.dog_type.value_counts()) plot_bar(dog_types.index, dog_types.dog_type, width=0.25, title="Dog Types", ylabel="Count")
This is called data wrangling. The dataset that we will be wrangling (and analyzing and visualizing) is the tweet archive of Twitter user [@dog_rates](https://twitter.com/dog_rates), also known as [WeRateDogs](https://en.wikipedia.org/wiki/WeRateDogs). WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. These ratings almost always have a denominator of 10. The numerators, though? Almost always greater than 10. 11/10, 12/10, 13/10, etc. Why? Because "[they're good dogs Brent](https://knowyourmeme.com/memes/theyre-good-dogs-brent)". Data Wrangling Gather ###Code %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np import requests import tweepy import json import time sns.set_theme() ###Output _____no_output_____ ###Markdown WeRateDogs twitter archive ###Code # Read the WeRateDogs twitter archive archive = pd.read_csv("twitter-archive-enhanced.csv") ###Output _____no_output_____ ###Markdown Tweet image predictions ###Code # Download the image_predictions.tsv file with the tweet image predictions url = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" with open("image_predictions.tsv", "w") as f: r = requests.get(url) f.write(r.text) # Read the image predictions file predictions = pd.read_csv("image_predictions.tsv", sep="\t") ###Output _____no_output_____ ###Markdown Twitter API ###Code consumer_key = "YOUR CONSUMER KEY" consumer_secret = "YOUR CONSUMER SECRET" access_token = "YOUR ACCESS TOKEN" access_secret = "YOUR ACCESS SECRET" auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) id_errors = [] count = 0 start = time.time() with open("tweet_json.txt", "w") as f: tweet_ids = list(archive.tweet_id) for tweet_id in tweet_ids: try: tweet = api.get_status(tweet_id,
tweet_mode="extended") except tweepy.TweepError: id_errors.append(tweet_id) count += 1 json.dump(tweet._json, f) f.write("\n") print(f"{count}/{len(tweet_ids)}", tweet_id, round(time.time() - start, 2)) # Read the tweet_json.txt file to a DataFrame tweets = pd.DataFrame() with open("tweet_json.txt", "r") as f: for line in f.readlines(): data = json.loads(line) tweets = tweets.append(data, ignore_index=True) # Keep only the columns that will be used tweets = tweets[["id_str", "favorite_count", "retweet_count"]] ###Output _____no_output_____ ###Markdown Assess WeRateDogs twitter archive ###Code archive archive.info() archive.describe() ###Output _____no_output_____ ###Markdown - Looking at rows with a rating denominator equal to zero ###Code archive.query("rating_denominator == 0") archive.query("rating_denominator == 0").text.iloc[0] ###Output _____no_output_____ ###Markdown - Looking at rows with a rating numerator equal to zero ###Code archive.query("rating_numerator == 0") archive.query("rating_numerator == 0").text.iloc[0] archive.query("rating_numerator == 0").text.iloc[1] ###Output _____no_output_____ ###Markdown The rating numerator in both cases is zero.
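The zero numerators and denominators found above typically come from fractions in the tweet text that aren't ratings at all (dates, "24/7", and the like). A small, hypothetical sketch of the kind of pattern that pulls a numerator/denominator pair out of tweet text — this is an illustration, not the parser the archive actually used:

```python
import re

def extract_rating(text):
    """Return (numerator, denominator) for the first d/d fraction in text, else None."""
    match = re.search(r"(\d+(?:\.\d+)?)/(\d+)", text)
    if match is None:
        return None
    return float(match.group(1)), float(match.group(2))

print(extract_rating("This is Bella. 13/10 would pet"))            # (13.0, 10.0)
print(extract_rating("Happy 4/20 from the squad! 13/10 for all"))  # (4.0, 20.0) — the date wins
```

Because the first fraction wins, a date or a "24/7" earlier in the text produces exactly the kind of bogus numerator/denominator the assessment flags.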
- Looking at the sorted list of the denominators, in descending order ###Code archive.rating_denominator.sort_values(ascending=False) ###Output _____no_output_____ ###Markdown - Looking at the sorted list of the numerators, in descending order ###Code archive.rating_numerator.sort_values(ascending=False) ###Output _____no_output_____ ###Markdown - Dog names ###Code archive.name.value_counts() archive.name.unique() archive[archive.name.str.islower()].name ###Output _____no_output_____ ###Markdown - Duplicate rows ###Code archive.duplicated().sum() ###Output _____no_output_____ ###Markdown - Duplicate IDs ###Code archive.tweet_id.duplicated().sum() ###Output _____no_output_____ ###Markdown Tweet image predictions ###Code predictions predictions.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2075 non-null int64 1 jpg_url 2075 non-null object 2 img_num 2075 non-null int64 3 p1 2075 non-null object 4 p1_conf 2075 non-null float64 5 p1_dog 2075 non-null bool 6 p2 2075 non-null object 7 p2_conf 2075 non-null float64 8 p2_dog 2075 non-null bool 9 p3 2075 non-null object 10 p3_conf 2075 non-null float64 11 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown - Duplicate rows ###Code predictions.duplicated().sum() ###Output _____no_output_____ ###Markdown - Duplicate IDs ###Code predictions.tweet_id.duplicated().sum() ###Output _____no_output_____ ###Markdown - Duplicate images ###Code predictions.jpg_url.duplicated().sum() ###Output _____no_output_____ ###Markdown Twitter API ###Code tweets tweets.info() tweets.describe() ###Output _____no_output_____ ###Markdown - Duplicate rows ###Code tweets.duplicated().sum() ###Output _____no_output_____ ###Markdown - Duplicate IDs ###Code tweets.id_str.duplicated().sum() ###Output _____no_output_____ ###Markdown Quality `archive`
table- Missing values in the in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp, and expanded_urls columns.- Dog names that are not real names, like 'a', 'the', 'such', etc. (validity issue).- Nulls represented as 'None' in name, doggo, floofer, pupper, and puppo columns (validity issue). - Erroneous data type (tweet_id - string; dog_stage - category; retweeted_status_id, retweeted_status_user_id, in_reply_to_status_id, and in_reply_to_user_id - integer; timestamp and retweeted_status_timestamp - datetime).- Tweet ID *835246439529840640* with the wrong rating numerator and rating denominator. `predictions` table- The predictions of dog breed (p1, p2, and p3 columns) should be formatted as the name of the breed ("_" as " ").- Duplicate images in the *jpg_url* column.- Erroneous data type (tweet_id - string).- Three different predictions of dog breed. `tweets` table- Erroneous data type (id_str - string; favorite_count and retweet_count - integer).- Inconsistency in the name of the column referring to tweets id (id_str).- Duplicate rows. Tidiness- One variable (dog stage) in four columns (doggo, floofer, pupper, and puppo) in the `archive` table.- Favorite count and retweet count (`tweets` table) should be part of the `archive` table.- The dog_breed column in the `predictions` table should be part of the `archive` table. Clean Table of ContentsMissing DataTidinessQuality ###Code archive_clean = archive.copy() predictions_clean = predictions.copy() tweets_clean = tweets.copy() ###Output _____no_output_____ ###Markdown Missing Data `archive`: Missing values in the in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp, and expanded_urls columns. DefineSince the `archive` table contains some retweets and replies, which aren't original tweets from WeRateDogs, drop those rows.
After dropping those rows, the retweet and reply columns will be entirely null, so drop those columns as well; keep the expanded_urls column. Code ###Code archive_clean = archive_clean[archive_clean.retweeted_status_id.isnull()] archive_clean = archive_clean[archive_clean.in_reply_to_status_id.isnull()] archive_clean.drop(["in_reply_to_status_id", "in_reply_to_user_id", "retweeted_status_id", "retweeted_status_user_id", "retweeted_status_timestamp"], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() # Should be an empty set: no retweet or reply indexes remain set(list(archive[archive.retweeted_status_id.notnull()].index) + list(archive[archive.in_reply_to_status_id.notnull()].index)) & set(archive_clean.index) ###Output _____no_output_____ ###Markdown Tidiness One variable (dog stage) in four columns (doggo, floofer, pupper, and puppo) in the `archive` table. DefineConcatenate these four columns: "doggo", "floofer", "pupper", "puppo"; and create the *dog_stage* column. Apply the *pick_dog_stage* function in the *dog_stage* column. Drop the original four columns ("doggo", "floofer", "pupper", "puppo"), which will not be necessary anymore. Code ###Code def pick_dog_stage(stage): """Find the stage of the dog in a string. Parameters ---------- stage : string stage is the dog stage (doggo, floofer, pupper, or puppo) Returns ------- string or null Return the dog stage in the input string. If the input string doesn't have the dog stage, returns NaN.
""" stages_list = [] if "doggo" in stage: stages_list.append("doggo") if "floofer" in stage: stages_list.append("floofer") if "pupper" in stage: stages_list.append("pupper") if "puppo" in stage: stages_list.append("puppo") if stages_list == []: return np.nan return ",".join(stages_list) # Concatenate these four columns: "doggo", "floofer", "pupper", "puppo" archive_clean["dog_stage"] = archive_clean.doggo + archive_clean.floofer + archive_clean.pupper + archive_clean.puppo # Apply the pick_dog_stage function in the dog_stage column archive_clean.dog_stage = archive_clean.dog_stage.apply(pick_dog_stage) # Drop the four original columns ("doggo", "floofer", "pupper", "puppo") archive_clean.drop(["doggo", "floofer", "pupper", "puppo"], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.sample(20) archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 9 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2097 non-null int64 1 timestamp 2097 non-null object 2 source 2097 non-null object 3 text 2097 non-null object 4 expanded_urls 2094 non-null object 5 rating_numerator 2097 non-null int64 6 rating_denominator 2097 non-null int64 7 name 2097 non-null object 8 dog_stage 336 non-null object dtypes: int64(3), object(6) memory usage: 163.8+ KB ###Markdown Favorite count and retweet count (`tweets` table) should be part of the `archive` table, and there are duplicate rows in the `tweets` table. DefineDrop duplicate rows in the `tweets` table and convert the *tweet_id* column to string in the `archive` table. Merge the *favorite_count* and *retweet_count* columns from the `tweets` table to the `archive` table, on the tweets ids.
Code ###Code # Drop duplicate rows tweets_clean.drop_duplicates(inplace=True) # Convert the tweet_id column to string, so the merge can occur archive_clean.tweet_id = archive_clean.tweet_id.astype(str) # Merge the tweets_clean table to the archive_clean table archive_clean = archive_clean.merge(tweets_clean, how="inner", left_on="tweet_id", right_on="id_str") # Drop the duplicate ID column in the resultant archive_clean table archive_clean.drop("id_str", axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() archive_clean.sample(20) ###Output _____no_output_____ ###Markdown The dog_breed column in the `predictions` table should be part of the `archive` table. DefineCreate the dog_breed column in the predictions table with the best prediction of 3 possibilities. Convert the tweet_id column to string in the `predictions`. Merge the dog_breed column from the `predictions` table to the `archive` table, on the tweet_id. Code ###Code def best_prediction(p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf, p3_dog): """Find the best prediction of 3 possibilities. Parameters ---------- p1 : string p1 is the algorithm's #1 prediction for the image in the tweet. p1_conf: float p1_conf is how confident the algorithm is in its #1 prediction. p1_dog: bool p1_dog is whether or not the #1 prediction is a breed of dog. (...) Returns ------- string or null The best prediction of the dog breed. If the image isn't of a dog, returns NaN. 
""" count = 0 confs = [p1_conf, p2_conf, p3_conf] verif_dogs = [p1_dog, p2_dog, p3_dog] dogs = [p1, p2, p3] while True: indice_max = np.argmax(confs) count += 1 if verif_dogs[indice_max]: return dogs[indice_max] else: confs[indice_max] = 0 if count == 3: return np.nan predictions_zipped = zip(predictions_clean.p1, predictions_clean.p1_conf, predictions_clean.p1_dog, predictions_clean.p2, predictions_clean.p2_conf, predictions_clean.p2_dog, predictions_clean.p3, predictions_clean.p3_conf, predictions_clean.p3_dog) # Create the dog_breed column with the best prediction of three possibilities predictions_clean["dog_breed"] = [best_prediction(row[0], row[1], row[2], row[3], row[4], row[5], row[6], row[7], row[8]) for row in predictions_zipped] # Convert the tweet_id column to string, so the merge can occur predictions_clean.tweet_id = predictions_clean.tweet_id.astype(str) # Merge the predictions_clean table to the archive_clean table archive_clean = archive_clean.merge(predictions_clean[["tweet_id", "jpg_url", "dog_breed"]], how="left", on="tweet_id") ###Output _____no_output_____ ###Markdown Test ###Code predictions_clean.head() archive_clean.info() archive_clean.sample(20) ###Output _____no_output_____ ###Markdown Quality `archive`: Dog names that are not real names, like 'a', 'the', 'such', etc. (validity issue). DefineReplace the non-dog names, like 'a', with NaN in the archive table. These cases are in lowercase, whereas a proper dog name is capitalized. Code ###Code archive_clean.loc[archive_clean.name.str.islower(), "name"] = np.nan ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.loc[(archive_clean.name.str.islower()) & (archive_clean.name.notnull())] archive_clean.name.value_counts() ###Output _____no_output_____ ###Markdown `archive`: Nulls represented as 'None' in name, doggo, floofer, pupper, and puppo columns (validity issue). DefineReplace the dog name 'None' with NaN in the `archive` table.
The 'None' in the doggo, floofer, pupper, and puppo columns has already been addressed above. Code ###Code archive_clean.name = archive_clean.name.replace("None", np.nan) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.name.value_counts() ###Output _____no_output_____ ###Markdown `archive`: Erroneous data type (tweet_id and id_str - string; dog_stage - category; timestamp and retweeted_status_timestamp - datetime; favorite_count, retweet_count, retweeted_status_id, retweeted_status_user_id, in_reply_to_status_id and in_reply_to_user_id - integer). DefineConvert dog stage to categorical data type, convert timestamp to datetime, and convert favorite count and retweet count to integer. The retweeted_status_id, retweeted_status_user_id, in_reply_to_status_id, in_reply_to_user_id, id_str, and retweeted_status_timestamp have already been dropped or will not be needed. Furthermore, the tweet_id and id_str columns have already been fixed above. Code ###Code # Convert to category archive_clean.dog_stage = archive_clean.dog_stage.astype("category") # To datetime archive_clean.timestamp = pd.to_datetime(archive_clean.timestamp) # Convert to integer archive_clean.favorite_count = archive_clean.favorite_count.astype(int) archive_clean.retweet_count = archive_clean.retweet_count.astype(int) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2090 entries, 0 to 2089 Data columns (total 13 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2090 non-null object 1 timestamp 2090 non-null datetime64[ns, UTC] 2 source 2090 non-null object 3 text 2090 non-null object 4 expanded_urls 2087 non-null object 5 rating_numerator 2090 non-null int64 6 rating_denominator 2090 non-null int64 7 name 1383 non-null object 8 dog_stage 335 non-null category 9 favorite_count 2090 non-null int32 10 retweet_count 2090 non-null int32 11 jpg_url 1964 non-null object 12 
dog_breed 1659 non-null object dtypes: category(1), datetime64[ns, UTC](1), int32(2), int64(2), object(7) memory usage: 198.3+ KB ###Markdown `archive`: The predictions of dog breed (p1, p2, and p3 columns) should be formatted as the name of the breed ("_" as " "). DefineReplace the "\_" in the dog breed values to whitespace (" "). Convert those values to lowercase. Code ###Code archive_clean.dog_breed = archive_clean.dog_breed.str.replace("_", " ").str.lower() ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.dog_breed.value_counts() ###Output _____no_output_____ ###Markdown `archive`: Duplicate images in the *jpg_url* column. DefineThe duplicate images have been fixed when the retweets were dropped and the predictions table was merged to the archive table. Test ###Code archive_clean[archive_clean.jpg_url.notnull()].jpg_url.duplicated().sum() ###Output _____no_output_____ ###Markdown `archive`: Tweet ID *835246439529840640* with the wrong rating numerator and rating denominator. DefineThe row with the tweet id *835246439529840640* has been removed when the retweets were dropped. Test ###Code # tweet_id is a string now, so compare against a string archive_clean[archive_clean.tweet_id == "835246439529840640"] ###Output _____no_output_____ ###Markdown Storing ###Code archive_clean.to_csv("twitter_archive_master.csv", index=False) df = archive_clean.copy() ###Output _____no_output_____ ###Markdown Analyzing and Visualizing Data 1. What is the breed of dog that most appears in the WeRateDogs twitter?In the first insight, we will discover the breed of dog that has most appeared in the WeRateDogs twitter archive.
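Counting breed frequencies for this insight comes down to `value_counts()`, which sorts counts in descending order; slicing the top rows and then reversing with `[::-1]` puts the most frequent breed last, which is what makes `barh` draw the biggest bar at the top of the chart. A toy illustration of that idiom (the data is made up):

```python
import pandas as pd

# value_counts() sorts counts descending; reversing the top slice with [::-1]
# flips the order so barh draws the largest bar at the top of the chart
breeds = pd.Series(["golden", "pug", "golden", "lab", "golden", "pug"])
top = breeds.value_counts().head(2)[::-1]
print(top.to_dict())
# → {'pug': 2, 'golden': 3}
```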
###Code dog_breed = df.dog_breed.value_counts().head(10)[::-1] dog_breed plt.figure(figsize=(10, 8)) plt.barh(dog_breed.index, dog_breed.values, alpha=0.8) plt.title("Top 10 breeds of dogs in the WeRateDogs Twitter", fontsize=15) plt.xlabel("Count of each dog breed appearance") plt.ylabel("Dog breeds") plt.savefig("Top 10 breeds of dogs in the WeRateDogs Twitter.png"); ###Output _____no_output_____ ###Markdown The golden retriever breed has the most appearances in the WeRateDogs twitter, with more than 150 appearances. 2. What is the breed of dog (from the top 10 in the previous insight) that has the highest averages in favorites and retweets?In the second insight, we want to discover the breed of dog with the highest averages in favorites and retweets. Since we have a lot of different breeds, we will consider only the top 10 from the previous question. ###Code breed_grouped = df.groupby("dog_breed", as_index=False)[["favorite_count", "retweet_count"]].mean() breed_grouped = breed_grouped.query("dog_breed in @dog_breed.index") breed_grouped locations = np.arange(len(breed_grouped.dog_breed)) height = 0.35 plt.figure(figsize=(12, 10)) plt.barh(locations - height/2, breed_grouped.favorite_count, height=height, label="Favorite count") plt.barh(locations + height/2, breed_grouped.retweet_count, height=height, label="Retweet count") plt.yticks(locations, breed_grouped.dog_breed) plt.title("Average of favorites and retweets by the top 10 breeds of dogs", fontsize=15) plt.ylabel("Breeds of dog") plt.xlabel("Average of favorites and retweets counts") plt.legend() plt.savefig("Average of favorites and retweets by the top 10 breeds of dogs.png"); ###Output _____no_output_____ ###Markdown Looking at the top 10 breeds of dogs in appearances, the samoyed is the breed of dog with the highest averages in favorites and retweets. 3.
What is the dog stage that most appears in the WeRateDogs twitter?In the third insight, we will discover the dog stage that has most appeared in the WeRateDogs Twitter. ###Code # Make a copy of the original DataFrame df_stages_clean = df.copy() # Create a mask for the rows with more than one dog stage mask = df_stages_clean.dog_stage.str.contains(",", na=False, regex=False) # Make two copies of the rows with more than one dog stage df_dog_stage_1 = df_stages_clean[mask].copy() df_dog_stage_2 = df_stages_clean[mask].copy() # Keep one dog stage per copy in the dog_stage column df_dog_stage_1.dog_stage = df_dog_stage_1.dog_stage.str.split(",", expand=True)[0] df_dog_stage_2.dog_stage = df_dog_stage_2.dog_stage.str.split(",", expand=True)[1] # Drop the rows with more than one dog stage df_stages_clean.drop(df_stages_clean[mask].index, inplace=True) # Concatenate the df_stages_clean, df_dog_stage_1 and df_dog_stage_2 dataframes into a df with one dog stage per row df_stages_clean = pd.concat([df_stages_clean, df_dog_stage_1, df_dog_stage_2], ignore_index=True) dog_stage = df_stages_clean.dog_stage.value_counts() dog_stage plt.figure(figsize=(10, 8)) plt.bar(dog_stage.index, dog_stage.values, alpha=0.8) plt.title("Dog stages with most appearances in the WeRateDogs Twitter", fontsize=15) plt.xlabel("Dog stages") plt.ylabel("Count of each dog stage appearance") plt.savefig("Dog stages with most appearances in the WeRateDogs Twitter.png"); ###Output _____no_output_____ ###Markdown The pupper stage has the most appearances in the WeRateDogs twitter, with more than 200 appearances. 4. What is the dog stage that has the highest averages in favorites and retweets?In the fourth insight, we want to discover the dog stage with the highest averages in favorites and retweets.
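The mask/split/concat routine above can also be written more compactly with `Series.str.split` followed by `DataFrame.explode` (pandas 0.25+), which emits one row per stage while the other columns are repeated. A sketch on made-up data, assuming the same column names as the analysis:

```python
import pandas as pd

# One row per dog stage: split the comma-separated list, then explode
df_toy = pd.DataFrame({"dog_stage": ["doggo", "doggo,pupper", None, "puppo"],
                       "favorite_count": [10, 20, 5, 7]})
exploded = (df_toy.assign(dog_stage=df_toy.dog_stage.str.split(","))
                  .explode("dog_stage"))
# "doggo,pupper" becomes two rows that share favorite_count = 20;
# the None row survives as a single NaN row
print(len(exploded))  # 5
```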
###Code stage_grouped = df_stages_clean.groupby("dog_stage", as_index=False)[["favorite_count", "retweet_count"]].mean() stage_grouped locations = np.arange(len(stage_grouped.dog_stage)) width = 0.35 plt.figure(figsize=(12, 7)) plt.bar(locations - width/2, stage_grouped.favorite_count, width=width, label="Favorite count") plt.bar(locations + width/2, stage_grouped.retweet_count, width=width, label="Retweet count") plt.xticks(locations, stage_grouped.dog_stage) plt.title("Average of favorites and retweets by dog stages", fontsize=15) plt.xlabel("Dog stages") plt.ylabel("Average of favorites and retweets counts") plt.legend() plt.savefig("Average of favorites and retweets by dog stages.png"); ###Output _____no_output_____ ###Markdown Data Wrangling and Analysis A case study by Siddartha Thentu Introduction **What is this project about**- In this project, we look at ways to gather data, supplement the incomplete data by wrangling the required data from the internet through an API, and manually download available data files. After that, we assess the data for issues in its quality and tidiness. Then, we clean the issues and save the clean file.**Libraries Used**- pandas : data manipulation- numpy : numerical manipulation- requests : downloading data from the internet- re : extracting regular expressions- tweepy : downloading data through twitter API- json : handling json format data- timeit : calculate the time taken**Steps Taken**- Data gathering- Data assessing- Data cleaning- Data analysis**Case Study**WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. These ratings almost always have a denominator of 10. The numerators, though? Almost always greater than 10. 11/10, 12/10, 13/10, etc. Why? Because "they're good dogs Brent." WeRateDogs has over 4 million followers and has received international media coverage. In this case study, we currently have **two pieces of data/dataframes available with us**.
Let's take a minute to explore them. 1. Through Udacity, we have access to some tweet data provided by WeRateDogs. However, this data has only basic features like 'tweet_id','dog_name','retweet_id' etc. Some important features like the number of retweets and number of favorites (which show the popularity of the dog) were not provided. Luckily, with the given tweet_id's, we can wrangle these tweets from WeRateDog's archive and extract the required content. **Name : twitter-archive-enhanced-2.csv** **Source : Udacity** **Access : manual download from website** 2. Each tweet in the twitter-archive-enhanced-2.csv is thought to have images of the dog. These images were fed into a Dog breed classifier model to identify the type and breed of dog. This was done by the Udacity team and the file was hosted on Udacity's server. **Name : image-predictions.tsv** **Source : Udacity** **Access : Download the file from server through requests library** **Importing the necessary libraries** ###Code import pandas as pd import tweepy import re import numpy as np import requests from timeit import default_timer as timer import json ###Output _____no_output_____ ###Markdown Data Wrangling/Gathering **Let's load the manually downloaded file into the dataframe and take a peek into it** ###Code #load the manually downloaded file into a pandas dataframe given_tweet_data = pd.read_csv('data/twitter-archive-enhanced-2.csv') #Let's take a look at the data given_tweet_data.head(1) ###Output _____no_output_____ ###Markdown **Let's now set up our twitter API to extract the complete tweet contents based on available tweet_id's** ###Code consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN-HIDDEN' access_secret = 'HIDDEN' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth,wait_on_rate_limit=True,wait_on_rate_limit_notify=True) fails = {} start = timer() #For all the tweet_id's, from
"given_tweet_data['tweet_id'].values" extract tweet data from twitter. #Try catch block used because some tweets might have been deleted and cannot be accessed which should not stop our progress. total_tweets = len(given_tweet_data['tweet_id'].values) #total tweet count, used by the progress logger below with open("tweets_json.txt","w+") as output: for idx,val in enumerate(given_tweet_data['tweet_id'].values): try: content = api.get_status(val) #extract tweet content json.dump(content._json,output) #dump content into a file called "tweets_json.txt" as new line for each tweet data output.write("\n") #insert a new line print("Succes. Remaining = ",total_tweets-idx-1) #Logger to keep track except tweepy.TweepError as e: #catch exception fails[val] = e #load the tweet_id and error into a dictionary print("Error. Remaining = ",total_tweets-idx-1) pass end = timer() print("Time took = ",end-start," seconds") ###Output Succes. Remaining = 2355 Succes. Remaining = 2354 Succes. Remaining = 2353 Succes. Remaining = 2352 Succes. Remaining = 2351 Succes. Remaining = 2350 Succes. Remaining = 2349 Succes. Remaining = 2348 Succes. Remaining = 2347 Succes. Remaining = 2346 Succes. Remaining = 2345 Succes. Remaining = 2344 Succes. Remaining = 2343 Succes. Remaining = 2342 Succes. Remaining = 2341 Succes. Remaining = 2340 Succes. Remaining = 2339 Succes. Remaining = 2338 Succes. Remaining = 2337 Error. Remaining = 2336 Succes. Remaining = 2335 Succes. Remaining = 2334 Succes. Remaining = 2333 Succes. Remaining = 2332 Succes. Remaining = 2331 Succes. Remaining = 2330 Succes. Remaining = 2329 Succes. Remaining = 2328 Succes. Remaining = 2327 Succes. Remaining = 2326 Succes. Remaining = 2325 Succes. Remaining = 2324 Succes. Remaining = 2323 Succes. Remaining = 2322 Succes. Remaining = 2321 Succes. Remaining = 2320 Succes. Remaining = 2319 Succes. Remaining = 2318 Succes. Remaining = 2317 Succes. Remaining = 2316 Succes. Remaining = 2315 Succes. Remaining = 2314 Succes. Remaining = 2313 Succes. Remaining = 2312 Succes. Remaining = 2311 Succes. Remaining = 2310 Succes. Remaining = 2309 Succes.
###Markdown **Upon exploring the tweets_json.txt file, I realized that we have some useful columns like retweet_count and favorite_count. Let's create a new dataframe called rt_df which contains the number of retweets and favorite counts.**
###Code rt_df = pd.DataFrame(columns=['tweet_id','retweeted','retweet_count','fav_count']) #create empty df with required columns with open("tweets_json.txt") as fp: tweets_data = fp.readlines() #read the json data, one tweet per line for tweet_data in tweets_data: #access data related to each tweet my_dict = json.loads(tweet_data) #convert json data of that tweet into a dictionary req_dict = {'tweet_id':my_dict['id_str'],'retweeted':my_dict['retweeted'],'retweet_count':my_dict['retweet_count'],'fav_count':my_dict['favorite_count']} #access only the required components of the dictionary rt_df = rt_df.append(req_dict,ignore_index=True) #append this data into our rt_df
###Output _____no_output_____
###Markdown **Let's take a look into rt_df (the retweet dataframe)**
###Code rt_df.head(2)
###Output _____no_output_____
###Markdown **Download image_predictions.tsv from the URL through the requests library and load the data into the image_df dataframe**
###Code import requests as rq download =
rq.get("https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv") #download the file csv_file = open('image_predictions.csv', 'wb') #create an image_predictions.csv file csv_file.write(download.content) #write the request object's content into the file created above csv_file.close() #close the file
###Output _____no_output_____
###Markdown **Load the content from the file into a dataframe and take a peek**
###Code image_df = pd.read_csv("image_predictions.csv",sep='\t') image_df.head(1)
###Output _____no_output_____
###Markdown For the above data:
- jpg_url -> a link to the image of the dog
- p1 -> the model's top prediction
- p1_conf -> the confidence of prediction p1
- p1_dog -> is the predicted object for p1 a dog?

The other columns can be inferred analogously. Assess the data **Let's study the shapes of the dataframes**
###Code print("Given tweet dataframe shape = ",given_tweet_data.shape) print("Retweet dataframe shape = ",rt_df.shape) print("Image dataframe shape = ",image_df.shape)
###Output _____no_output_____
###Markdown **From the above output, it can be inferred that although there are 2356 unique tweet_ids (why unique? You will see further below) in the given_tweet dataframe, only 2331 of them are in the retweet dataframe. This implies that, while extracting rt_df data from Twitter, we couldn't find the data for 25 tweets, which might have been deleted. The image_df has even fewer rows, probably because not all original tweets had images of dogs.** But remember: we are only interested in authentic tweets, i.e., tweets that
- have images, particularly those of dogs
- are original and not retweets.

So I can already tell that many rows will be dropped because their tweets did not have images of a dog.
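The filtering plan just described — keep only original tweets whose images were predicted to contain a dog — can be sketched in pandas. This is a toy illustration: the dataframes below are made up, and only the column names mirror the real data:

```python
import numpy as np
import pandas as pd

# Made-up stand-ins for the real dataframes; only the column names match.
archive = pd.DataFrame({
    "tweet_id": ["1", "2", "3"],
    # a non-null retweeted_status_id marks a retweet
    "retweeted_status_id": [np.nan, 123.0, np.nan],
})
images = pd.DataFrame({
    "tweet_id": ["1", "3"],
    "p1_dog": [True, False],
    "p2_dog": [False, False],
    "p3_dog": [False, True],
})

# Keep only original tweets (no retweeted_status_id) ...
originals = archive[archive["retweeted_status_id"].isnull()]
# ... and only tweets where at least one prediction says "dog".
dogs = images[images[["p1_dog", "p2_dog", "p3_dog"]].any(axis=1)]
# An inner merge keeps tweets that satisfy both conditions.
authentic = originals.merge(dogs, on="tweet_id")
print(authentic["tweet_id"].tolist())
```

Here both remaining tweets survive: tweet 2 is a retweet and never reaches the merge, so the print shows `['1', '3']`.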
**Let's check if there are any duplicates in the dataframes**
###Code print(given_tweet_data.duplicated().any()) #check if any two rows are the same print(given_tweet_data['tweet_id'].duplicated().any()) #check if there are any duplicate tweet_ids
###Output _____no_output_____
###Markdown From the above output, we can see that all the tweet_ids are unique.
###Code rt_df.duplicated().value_counts() #no duplicates print(image_df.duplicated().any()) print(image_df.jpg_url.duplicated().any()) image_df.jpg_url.value_counts()
###Output _____no_output_____
###Markdown **From the above output, we can see that some image links are shared by two different tweet_ids. How can two tweet_ids point to the same image? One of them must be the original and the other must be a retweet!** **Let's try to confirm this**
###Code print(image_df.jpg_url.duplicated().value_counts()) #66 of the rows have images that are duplicated somewhere in the df dupl_index = image_df.jpg_url.duplicated() #find the index of the rows that have images duplicated image_df[dupl_index]['tweet_id'] #grab one tweet_id and check the text of that tweet from the given_tweet_data df given_tweet_data.loc[given_tweet_data['tweet_id']==752309394570878976].text #printing the text of the tweet proves that it's a retweet.
image_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2075 non-null int64 1 jpg_url 2075 non-null object 2 img_num 2075 non-null int64 3 p1 2075 non-null object 4 p1_conf 2075 non-null float64 5 p1_dog 2075 non-null bool 6 p2 2075 non-null object 7 p2_conf 2075 non-null float64 8 p2_dog 2075 non-null bool 9 p3 2075 non-null object 10 p3_conf 2075 non-null float64 11 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown **From the above output, we can find that 'tweet_id' column has been erroneously identified as Int datatype. It has to be of object datatype** ###Code image_df[['p1','p2','p3']].sample(10) ###Output _____no_output_____ ###Markdown **From the above output, some names are in lower case while some are in mixed cases. Also, words have been combined with '_' and '-' sometimes. 
There is a consistency issue**
###Code image_df.query('p1_dog==False and p2_dog==False and p3_dog==False')
###Output _____no_output_____
###Markdown **The above output shows that there are 324 tweet_ids which do not have dog images in them or whose dog images are in the background and unrecognizable.**
###Code given_tweet_data.info()
###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null object 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 doggo 2356 non-null object 14 floofer 2356 non-null object 15 pupper 2356 non-null object 16 puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB
###Markdown **From the above output, the tweet_id column has to be changed to string/object type. There are a lot of null values in the "in_reply_to" and "retweeted_status" columns.
Also, the timestamp column has to be changed from object to datetime type** ###Code given_tweet_data.sample(2) given_tweet_data['retweeted_status_id'].notna().sum() #Number of tweets that had a retweet associated with it print(given_tweet_data.source[0]) print(given_tweet_data.source[999]) print(given_tweet_data.source[450]) ###Output <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a> <a href="http://vine.co" rel="nofollow">Vine - Make a Scene</a> <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a> ###Markdown **From the above output, there are 181 tweets that have a retweet associated with them. Further, the source column has unnecessary html tags which can be removed to extract core text like "Twitter for iPhone". Also, there are 4 column names (doggo,floofer..) which can be reduced down into a single column called dog_class which has the values either doggo,floofer,pupper or puppo.** ###Code dog_names = list(given_tweet_data.name.values) dog_names.sort(reverse=True) dog_names_invalid = set(dog_names[:109]) #'very','unacceptable','this','the','such','space','quite','one',etc .. are invalid names #also there are names with variations like zoey,zoe,zooey which cannot be changed because they are subjective ###Output _____no_output_____ ###Markdown **The above piece of code shows that there are impossible dog names stored in a set called dog_names_invalid which have to be corrected.** ###Code given_tweet_data.describe() ###Output _____no_output_____ ###Markdown **From the above code, it's clear that some numerators and denominator values are infeasible and need to be corrected. 
Let's further explore why these errors happened** ###Code index = given_tweet_data[given_tweet_data['rating_denominator']!=10].index for i in range(len(index)): print(given_tweet_data.loc[index[i]]['text']) print(given_tweet_data.loc[index[i]][['rating_numerator','rating_denominator']]) print("******") ###Output @jonnysun @Lin_Manuel ok jomny I know you're excited but 960/00 isn't a valid rating, 13/10 is tho rating_numerator 960 rating_denominator 0 Name: 313, dtype: object ****** @docmisterio account started on 11/15/15 rating_numerator 11 rating_denominator 15 Name: 342, dtype: object ****** The floofs have been released I repeat the floofs have been released. 84/70 https://t.co/NIYC820tmd rating_numerator 84 rating_denominator 70 Name: 433, dtype: object ****** Meet Sam. She smiles 24/7 &amp; secretly aspires to be a reindeer. Keep Sam smiling by clicking and sharing this link: https://t.co/98tB8y7y7t https://t.co/LouL5vdvxx rating_numerator 24 rating_denominator 7 Name: 516, dtype: object ****** RT @dog_rates: After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https:/… rating_numerator 9 rating_denominator 11 Name: 784, dtype: object ****** Why does this never happen at my front door... 165/150 https://t.co/HmwrdfEfUE rating_numerator 165 rating_denominator 150 Name: 902, dtype: object ****** After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https://t.co/XAVDNDaVgQ rating_numerator 9 rating_denominator 11 Name: 1068, dtype: object ****** Say hello to this unbelievably well behaved squad of doggos. 204/170 would try to pet all at once https://t.co/yGQI3He3xv rating_numerator 204 rating_denominator 170 Name: 1120, dtype: object ****** Happy 4/20 from the squad! 13/10 for all https://t.co/eV1diwds8a rating_numerator 4 rating_denominator 20 Name: 1165, dtype: object ****** This is Bluebert. 
He just saw that both #FinalFur match ups are split 50/50. Amazed af. 11/10 https://t.co/Kky1DPG4iq rating_numerator 50 rating_denominator 50 Name: 1202, dtype: object ****** Happy Saturday here's 9 puppers on a bench. 99/90 good work everybody https://t.co/mpvaVxKmc1 rating_numerator 99 rating_denominator 90 Name: 1228, dtype: object ****** Here's a brigade of puppers. All look very prepared for whatever happens next. 80/80 https://t.co/0eb7R1Om12 rating_numerator 80 rating_denominator 80 Name: 1254, dtype: object ****** From left to right: Cletus, Jerome, Alejandro, Burp, &amp; Titson None know where camera is. 45/50 would hug all at once https://t.co/sedre1ivTK rating_numerator 45 rating_denominator 50 Name: 1274, dtype: object ****** Here is a whole flock of puppers. 60/50 I'll take the lot https://t.co/9dpcw6MdWa rating_numerator 60 rating_denominator 50 Name: 1351, dtype: object ****** Happy Wednesday here's a bucket of pups. 44/40 would pet all at once https://t.co/HppvrYuamZ rating_numerator 44 rating_denominator 40 Name: 1433, dtype: object ****** Yes I do realize a rating of 4/20 would've been fitting. However, it would be unjust to give these cooperative pups that low of a rating rating_numerator 4 rating_denominator 20 Name: 1598, dtype: object ****** Two sneaky puppers were not initially seen, moving the rating to 143/130. Please forgive us. Thank you https://t.co/kRK51Y5ac3 rating_numerator 143 rating_denominator 130 Name: 1634, dtype: object ****** Someone help the girl is being mugged. Several are distracting her while two steal her shoes. Clever puppers 121/110 https://t.co/1zfnTJLt55 rating_numerator 121 rating_denominator 110 Name: 1635, dtype: object ****** This is Darrel. He just robbed a 7/11 and is in a high speed police chase. 
Was just spotted by the helicopter 10/10 https://t.co/7EsP8LmSp5 rating_numerator 7 rating_denominator 11 Name: 1662, dtype: object ****** I'm aware that I could've said 20/16, but here at WeRateDogs we are very professional. An inconsistent rating scale is simply irresponsible rating_numerator 20 rating_denominator 16 Name: 1663, dtype: object ****** IT'S PUPPERGEDDON. Total of 144/120 ...I think https://t.co/ZanVtAtvIq rating_numerator 144 rating_denominator 120 Name: 1779, dtype: object ****** Here we have an entire platoon of puppers. Total score: 88/80 would pet all at once https://t.co/y93p6FLvVw rating_numerator 88 rating_denominator 80 Name: 1843, dtype: object ****** This is an Albanian 3 1/2 legged Episcopalian. Loves well-polished hardwood flooring. Penis on the collar. 9/10 https://t.co/d9NcXFKwLv rating_numerator 1 rating_denominator 2 Name: 2335, dtype: object ******
###Markdown **When two digit/digit patterns occur in the same text, the wrong one has sometimes been taken as the rating. For example, in "She was the last surviving 9/11 search dog, and our second ever 14/10", the rating captured was "9/11", which is wrong.**
###Code rt_df.info()
###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2331 non-null object 1 retweeted 2331 non-null object 2 retweet_count 2331 non-null object 3 fav_count 2331 non-null object dtypes: object(4) memory usage: 73.0+ KB
###Markdown **From the above output, retweet_count and fav_count should be changed from string/object to integer type**

Summary Analysis

Quality Issues
1. given_tweet_data dataframe
 - Impossible dog names should be replaced with NaN
 - tweet_id datatype is erroneously int; it should be changed to object
 - timestamp datatype is incorrect; change it to datetime
 - Missing values in certain columns like in_reply_to_status_id, etc.
 - Erroneous numerator and denominator values should be corrected
 - Retweets should be dropped
 - The source column has HTML markup to be removed
 - The doggo, floofer, pupper and puppo columns have the string "None" instead of Python NaN
2. image_df dataframe
 - tweet_id datatype correction
 - Duplicate jpg_urls due to retweets
 - Rows whose images are not classified as a dog
 - Dog breed names have inconsistencies: lower and upper case, words joined with '-' and '_'
3. rt_df dataframe
 - retweet_count column datatype correction
 - fav_count column datatype correction

Tidiness Issues
1. given_tweet_data dataframe
 - Unused columns can be dropped
 - The multiple dog class columns can be collapsed into 1 column
 - Merge the 3 dataframes into 1 based on tweet_id
2. image_df dataframe
 - The 3 dog breed predictions can be collapsed into the top-most dog prediction
 - The 3 prediction confidences can be collapsed into the corresponding confidence

Data Cleaning **Create copies of the dataframes**
###Code tweet_df = given_tweet_data.copy() retweet_df = rt_df.copy() predict_df = image_df.copy()
###Output _____no_output_____
###Markdown **Drop rows in the predict_df/image_df dataframe that do not have images predicted as a dog**
###Code predict_df.drop(predict_df.query('p1_dog==False and p2_dog==False and p3_dog==False').index,inplace=True) predict_df.query('p1_dog==False and p2_dog==False and p3_dog==False')
###Output _____no_output_____
###Markdown **Change the predict_df/image_df tweet_id column datatype from int to string**
###Code predict_df['tweet_id'] = predict_df['tweet_id'].astype(str) predict_df['tweet_id'].dtype
###Output _____no_output_____
###Markdown **Handle the inconsistencies in the predict_df/image_df dog breed names**
###Code columns = ['p1','p2','p3'] for col in columns: predict_df[col] = predict_df[col].str.lower().str.replace('-',' ').str.replace('_',' ') predict_df.sample(10)
###Output _____no_output_____
###Markdown **Replace the multiple p1,p2,p3 columns with 2
columns : breed and accuracy** ###Code def get_values(x): if(x[1]): return x[0] elif(x[3]): return x[2] else: return x[4] predict_df['breed'] = predict_df[['p1','p1_dog','p2','p2_dog','p3','p3_dog']].apply(get_values,axis=1) predict_df['accuracy'] = predict_df[['p1_conf','p1_dog','p2_conf','p2_dog','p3_conf','p3_dog']].apply(get_values,axis=1) predict_df.drop(['p1','p1_conf','p1_dog','p2','p2_conf','p2_dog','p3','p3_conf','p3_dog'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown **Changing data types of columns tweet_id and timestamp** ###Code tweet_df['tweet_id'] = tweet_df['tweet_id'].astype(str) tweet_df['timestamp'] = pd.to_datetime(tweet_df['timestamp']) tweet_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null object 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null datetime64[ns, UTC] 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 doggo 2356 non-null object 14 floofer 2356 non-null object 15 pupper 2356 non-null object 16 puppo 2356 non-null object dtypes: datetime64[ns, UTC](1), float64(4), int64(2), object(10) memory usage: 313.0+ KB ###Markdown **Merge doggo, floofler, pupper and puppo columns into one** ###Code cols=['doggo','floofer','pupper','puppo'] for col in cols: tweet_df[col].replace('None','',inplace=True) #Replace None with "" tweet_df['dog_class'] = tweet_df['doggo'].map(str)+tweet_df['floofer'].map(str)+tweet_df['pupper'].map(str)+tweet_df['puppo'].map(str) tweet_df['dog_class'].unique() 
tweet_df['dog_class'].replace('',np.nan,inplace=True) #replace "" with Nan tweet_df['dog_class'].unique() idx = tweet_df.query('dog_class=="doggopuppo" or dog_class=="doggofloofer" or dog_class=="doggopupper"').index #study tweets that have irregularities in dog_class for i in range(len(idx)): print(idx[i],tweet_df.loc[idx[i],'text']) ###Output 191 Here's a puppo participating in the #ScienceMarch. Cleverly disguising her own doggo agenda. 13/10 would keep the planet habitable for https://t.co/cMhq16isel 200 At first I thought this was a shy doggo, but it's actually a Rare Canadian Floofer Owl. Amateurs would confuse the two. 11/10 only send dogs https://t.co/TXdT3tmuYk 460 This is Dido. She's playing the lead role in "Pupper Stops to Catch Snow Before Resuming Shadow Box with Dried Apple." 13/10 (IG: didodoggo) https://t.co/m7isZrOBX7 531 Here we have Burke (pupper) and Dexter (doggo). Pupper wants to be exactly like doggo. Both 12/10 would pet at same time https://t.co/ANBpEYHaho 565 Like doggo, like pupper version 2. Both 11/10 https://t.co/9IxWAXFqze 575 This is Bones. He's being haunted by another doggo of roughly the same size. 12/10 deep breaths pupper everything's fine https://t.co/55Dqe0SJNj 705 This is Pinot. He's a sophisticated doggo. You can tell by the hat. Also pointier than your average pupper. Still 10/10 would pet cautiously https://t.co/f2wmLZTPHd 733 Pupper butt 1, Doggo 0. Both 12/10 https://t.co/WQvcPEpH2u 778 RT @dog_rates: Like father (doggo), like son (pupper). Both 12/10 https://t.co/pG2inLaOda 822 RT @dog_rates: This is just downright precious af. 12/10 for both pupper and doggo https://t.co/o5J479bZUC 889 Meet Maggie &amp; Lila. Maggie is the doggo, Lila is the pupper. They are sisters. Both 12/10 would pet at the same time https://t.co/MYwR4DQKll 956 Please stop sending it pictures that don't even have a doggo or pupper in them. Churlish af. 5/10 neat couch tho https://t.co/u2c9c7qSg8 1063 This is just downright precious af. 
12/10 for both pupper and doggo https://t.co/o5J479bZUC 1113 Like father (doggo), like son (pupper). Both 12/10 https://t.co/pG2inLaOda
###Markdown **Rectify the dog_class by checking the text.**
###Code tweet_df.loc[191,'dog_class'] = "puppo" #the following one is a tricky one. For index 200, I did not understand what a floofer owl meant. #I checked out the tweet "https://twitter.com/dog_rates/status/854010172552949760?lang=en" #after further studying the pattern, I realized that sometimes when the text in a tweet says that there's no dog in it, #it's a funny way of saying that the dog has done a great job in camouflage or it closely resembles other animals. #For example, check some of the samples at https://defused.com/people-failed-to-send-dogs/ and you will understand. tweet_df.loc[200,'dog_class'] = "doggo" tweet_df.loc[460,'dog_class'] = "pupper" tweet_df.loc[531,'dog_class'] = np.nan #since there are two and we cannot have both tweet_df.loc[565,'dog_class'] = np.nan tweet_df.loc[575,'dog_class'] = "pupper" tweet_df.loc[705,'dog_class'] = "doggo" tweet_df.loc[733,'dog_class'] = np.nan #778,822 will be deleted since they are retweets; 889,956,1063,1113 will be np.nan tweet_df.loc[889,'dog_class'] = np.nan tweet_df.loc[956,'dog_class'] = np.nan tweet_df.loc[1063,'dog_class'] = np.nan tweet_df.loc[1113,'dog_class'] = np.nan
###Output _____no_output_____
###Markdown **Delete rows with tweet_ids that were retweeted**
###Code final_df = tweet_df.merge(retweet_df,on='tweet_id') final_df = final_df.merge(predict_df,on='tweet_id') final_df = final_df[final_df.retweeted_status_id.isnull()] final_df.shape final_df.info()
###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1679 entries, 0 to 1736 Data columns (total 25 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1679 non-null object 1 in_reply_to_status_id 20 non-null float64 2 in_reply_to_user_id 20 non-null
float64 3 timestamp 1679 non-null datetime64[ns, UTC] 4 source 1679 non-null object 5 text 1679 non-null object 6 retweeted_status_id 0 non-null float64 7 retweeted_status_user_id 0 non-null float64 8 retweeted_status_timestamp 0 non-null object 9 expanded_urls 1679 non-null object 10 rating_numerator 1679 non-null int64 11 rating_denominator 1679 non-null int64 12 name 1679 non-null object 13 doggo 1679 non-null object 14 floofer 1679 non-null object 15 pupper 1679 non-null object 16 puppo 1679 non-null object 17 dog_class 253 non-null object 18 retweeted 1679 non-null object 19 retweet_count 1679 non-null object 20 fav_count 1679 non-null object 21 jpg_url 1679 non-null object 22 img_num 1679 non-null int64 23 breed 1679 non-null object 24 accuracy 1679 non-null float64 dtypes: datetime64[ns, UTC](1), float64(5), int64(3), object(16) memory usage: 341.0+ KB ###Markdown **Drop unwanted columns** ###Code cols = ['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'doggo', 'floofer', 'pupper', 'puppo'] final_df.drop(columns=cols,inplace=True) final_df.head(2) list(final_df.columns) ###Output _____no_output_____ ###Markdown **Correct dog names** ###Code for name in dog_names_invalid: final_df['name'] = final_df['name'].replace(name,np.nan) final_df['name'].value_counts() ###Output _____no_output_____ ###Markdown **Correct source column** ###Code final_df['source'] = final_df['source'].str.extract('\>(.*?)\<') final_df['source'][3] final_df['source'].unique() final_df[final_df['rating_denominator']<10] final_df.loc[1719,'text'] #check for floating ratings pattern = "(\d+(\.\d+)?\/\d+(\.\d+)?)" final_df['extracted_rating'] = final_df.text.str.extract(pattern,expand=True)[0] final_df[['my_num','my_denom']] = final_df['extracted_rating'].str.split('/',1,expand=True) final_df[['rating_numerator','rating_denominator']] = final_df[['rating_numerator','rating_denominator']].astype(str) 
final_df[final_df['rating_numerator']!=final_df['my_num']] #final_df.info() final_df['check_num'] = final_df[['rating_numerator','my_num']].apply(lambda x:False if (x[0]!=x[1]) else True,axis=1) final_df['check_num'].value_counts() final_df.query('check_num==False')[['rating_numerator','my_num']] final_df.loc[37,'rating_numerator'] = 13.5 final_df.loc[489,'rating_numerator'] = 9.75 final_df.loc[538,'rating_numerator'] = 11.27 final_df.loc[1254,'rating_numerator'] = 11.26 final_df['check_denom'] = final_df[['rating_denominator','my_denom']].apply(lambda x:False if (x[0]!=x[1]) else True,axis=1) final_df.query('check_denom==False')[['rating_denominator','my_denom']]
###Output _____no_output_____
###Markdown **Multiple patterns in ratings**
###Code final_df['pattern_count'] = final_df.text.str.count(pattern) pd.set_option('display.max_columns', 500) pd.set_option('display.max_colwidth', 4000) final_df.query('pattern_count >1')[['text','pattern_count','rating_numerator','rating_denominator']]
###Output _____no_output_____
###Markdown **From the above output, we have to update the ratings manually.
However, I am leaving this for now.**
###Code final_df.drop(final_df.columns[-7:-1],axis=1,inplace=True) final_df.drop(final_df.columns[-1],axis=1,inplace=True) final_df = final_df.reset_index()
###Output _____no_output_____
###Markdown **Saving the final master file**
###Code final_df.to_csv('twitter_archive_master.csv', index = False)
###Output _____no_output_____
###Markdown Data Analysis **Let's check which breeds are the most common**
###Code import seaborn as sb import matplotlib.pyplot as plt y = final_df['breed'].value_counts().iloc[:5] x = y.index sb.barplot(y,x,orient='h',palette='viridis'); plt.title("Top 5 Number of breeds") plt.xlabel("Count") plt.ylabel("Breed Name") final_df[['breed','retweet_count']].groupby('breed',as_index=True).sum().sort_values(by='retweet_count',ascending=False).iloc[:5].plot(kind="bar",color=['#5cb85c','#5bc0de','#d9534f']) plt.xticks(rotation=-15) plt.legend() plt.ylabel("Number of retweets") plt.xlabel("Name of breed") plt.title("Retweet Count vs Breed") plt.show();
###Output _____no_output_____
###Markdown Conclusion **From the above visualizations, the golden retriever looks like the most popular breed on Twitter**
###Code final_df[['breed','retweet_count']].groupby('breed',as_index=True).sum().sort_values(by='retweet_count',ascending=True).iloc[:5].plot(kind="bar") plt.xticks(rotation=-15) plt.legend() plt.ylabel("Number of retweets") plt.xlabel("Name of breed") plt.title("Retweet Count vs Breed") plt.show();
###Output _____no_output_____
###Markdown **From the above visualizations, the Japanese Spaniel looks like the least popular breed on Twitter**
###Code import seaborn as sb import matplotlib.pyplot as plt y = final_df['dog_class'].value_counts().iloc[:5] x = y.index sb.barplot(x,y,orient='v',palette='magma'); plt.title("Popularity by size") plt.xlabel("Dog Size") plt.ylabel("Posts") temp_df =
final_df[['breed','retweet_count']].groupby('breed',as_index=False).sum().sort_values(by='retweet_count',ascending=True).iloc[:5] temp_df x = temp_df['breed'] y = temp_df['retweet_count'] sb.barplot(y,x,orient='h',palette='Spectral'); plt.title("Popularity by breed") plt.xlabel("Retweet count") plt.ylabel("Breed") final_df.query('breed=="japanese spaniel"')
###Output _____no_output_____
###Markdown We Rate Dogs - Data Wrangling Project By Assemgul Kaiyrzhan Step 1: Gathering Data
Gathering data is the first step in data wrangling. 1) The WeRateDogs Twitter archive. We have the file twitter-archive-enhanced.csv, which we manually added to our workspace:
###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline import requests import tweepy import json twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') twitter_archive.head()
###Output _____no_output_____
###Markdown 2) The tweet image predictions, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network.
This file (image_predictions.tsv) is hosted on Udacity's servers and should be downloaded programmatically using the Requests library and the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv
###Code url='https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open("image_predictions.tsv", mode='wb') as file: file.write(response.content) image_predictions = pd.read_csv('image_predictions.tsv', sep='\t') image_predictions.head()
###Output _____no_output_____
###Markdown 3) Using the tweet IDs in the WeRateDogs Twitter archive, query the Twitter API for each tweet's JSON data using Python's Tweepy library and store each tweet's entire set of JSON data in a file called tweet_json.txt. Each tweet's JSON data should be written to its own line. Then read this .txt file line by line into a pandas DataFrame with (at minimum) tweet ID, retweet count, and favorite count:
###Code import tweepy from tweepy import OAuthHandler import json from timeit import default_timer as timer # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # These are hidden to comply with Twitter's API terms and conditions consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) # NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES: # df_1 is a DataFrame with the twitter_archive_enhanced.csv file.
You may have to # change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv # NOTE TO REVIEWER: this student had mobile verification issues so the following # Twitter API code was sent to this student from a Udacity instructor # Tweet IDs for which to gather additional data via Twitter's API tweet_ids = twitter_archive.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) tweet_list =[] with open('tweet_json.txt', encoding='utf-8') as file: for x in range(2354): x = file.readline() data = {} y = x.strip('{').strip('}').split(',') for item in y: if ':' in item: key, value = item.split(':', 1) data[key] = value else: pass favorite_count = data[' "favorite_count"'] retweet_count = data[' "retweet_count"'] tweet_id = data[' "id"'] tweet_list.append({'favorite_count':int(favorite_count), 'retweet_count':int(retweet_count), 'tweet_id': tweet_id}) tweet_list tweet_df = pd.DataFrame() tweet_df['tweet_id'] = list(map(lambda tweet: tweet['tweet_id'], tweet_list)) tweet_df['retweet_count'] = list(map(lambda tweet: tweet['retweet_count'], tweet_list)) tweet_df['favorite_count'] = list(map(lambda tweet: tweet['favorite_count'], tweet_list)) tweet_df.head() ###Output _____no_output_____ ###Markdown Step 2: Assessing DataAssessing your data is the second step in data wrangling. 
When assessing, you're like a detective at work, inspecting your dataset for two things: data quality issues (i.e. content issues) and lack of tidiness (i.e. structural issues): ###Code twitter_archive.info() image_predictions.info() tweet_df.info() twitter_archive.describe() image_predictions.describe() tweet_df.describe() sum(twitter_archive.duplicated()) sum(image_predictions.duplicated()) sum(tweet_df.duplicated()) all_columns = pd.Series(list(twitter_archive) + list(image_predictions) + list(tweet_df)) all_columns[all_columns.duplicated()] twitter_archive.isnull().sum() twitter_archive[twitter_archive['expanded_urls'].isnull()] twitter_archive.name.value_counts() image_predictions.isnull().sum() tweet_df.isnull().sum() image_predictions.p1.value_counts() image_predictions.p2.value_counts() image_predictions.p3.value_counts() ###Output _____no_output_____ ###Markdown Tidiness1) The doggo, floofer, pupper, and puppo columns should be merged into a single column2) The 3 dataframes should be joined into 1 dataframe Quality1) Drop columns not needed for analysis, such as in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp, etc.2) Some names are invalid: "None", "a", "an", and other words starting with a lowercase letter3) Change the data types of tweet_id and timestamp4) rating_numerator and rating_denominator contain incorrect ratings5) Change the data types of rating_numerator and rating_denominator6) The source column should contain a single plain link format without HTML code7) expanded_urls has only 2297 non-null values8) dog_rating should be an integer, not a float Step 3: Cleaning DataCleaning your data is the third step in data wrangling. It is where you fix the quality and tidiness issues that you identified in the assess step.
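Before starting the fixes, the quality list above can be turned into a small reusable check, so each cleaning step below can be re-verified after it runs. This is an illustrative sketch only (the `check_quality` helper and its toy frame are not part of the original notebook); it assumes the column names used throughout this project.

```python
import pandas as pd

def check_quality(df):
    """Return descriptions of assessment issues still present in df."""
    problems = []
    if df["tweet_id"].duplicated().any():
        problems.append("duplicate tweet_id values")
    if df["name"].str.islower().any():
        problems.append("lowercase (invalid) dog names remain")
    if not pd.api.types.is_datetime64_any_dtype(df["timestamp"]):
        problems.append("timestamp is not datetime")
    return problems

# Toy frame: one invalid lowercase name and a string-typed timestamp
toy = pd.DataFrame({
    "tweet_id": [1, 2],
    "name": ["Charlie", "a"],
    "timestamp": ["2017-08-01", "2017-08-02"],
})
issues = check_quality(toy)
print(issues)  # ['lowercase (invalid) dog names remain', 'timestamp is not datetime']
```

Running the same check on the cleaned dataframe should return an empty list once the corresponding steps below have been applied.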
###Code archive_clean = twitter_archive.copy() prediction_clean = image_predictions.copy() tweet_clean = tweet_df.copy() ###Output _____no_output_____ ###Markdown Tidiness 1 - Merge doggo, floofer, pupper, puppo into a single dog_stage column Code ###Code # handle none archive_clean.doggo.replace('None', '', inplace=True) archive_clean.floofer.replace('None', '', inplace=True) archive_clean.pupper.replace('None', '', inplace=True) archive_clean.puppo.replace('None', '', inplace=True) # merge into column archive_clean['dog_stage'] = archive_clean.doggo + archive_clean.floofer + archive_clean.pupper + archive_clean.puppo # handle multiple stages archive_clean.loc[archive_clean.dog_stage == 'doggopupper', 'dog_stage'] = 'doggo, pupper' archive_clean.loc[archive_clean.dog_stage == 'doggopuppo', 'dog_stage'] = 'doggo, puppo' archive_clean.loc[archive_clean.dog_stage == 'doggofloofer', 'dog_stage'] = 'doggo, floofer' # handle missing values archive_clean.loc[archive_clean.dog_stage == '', 'dog_stage'] = np.nan ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.head(2) archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 18 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dog_stage 380 non-null object dtypes: float64(4), int64(3), object(11) memory usage: 331.4+ KB ###Markdown Tidiness 2 - Join the 3 dataframes into 1 Code Before merging, we first need to resolve a quality issue: the tweet_id column in tweet_df must be converted to an integer data type so the merge keys match ###Code tweet_clean["tweet_id"]= tweet_clean["tweet_id"].astype(int) df_dog = pd.merge(archive_clean, prediction_clean,on='tweet_id', how='inner') df_dog = pd.merge(df_dog, tweet_clean,on='tweet_id', how='inner') ###Output _____no_output_____ ###Markdown Test ###Code df_dog.head() ###Output _____no_output_____ ###Markdown Quality 1 - Drop columns not needed for analysis Code ###Code df_dog = df_dog.drop(['in_reply_to_status_id', 'in_reply_to_user_id','retweeted_status_id', 'retweeted_status_user_id','retweeted_status_timestamp'], axis=1) df_dog = df_dog.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code df_dog.head(3) df_dog.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2115 entries, 0 to 2114 Data columns (total 22 columns): tweet_id 2115 non-null int64 timestamp 2115 non-null object source 2115 non-null object text 2115 non-null object expanded_urls 2115 non-null object rating_numerator 2115 non-null int64 rating_denominator 2115 non-null int64 name 2115 non-null object dog_stage 333 non-null object jpg_url 2115 non-null object img_num 2115 non-null int64 p1 2115 non-null object p1_conf 2115 non-null float64 p1_dog 2115 non-null bool p2 2115 non-null object p2_conf 2115 non-null float64 p2_dog 2115 non-null bool p3 2115 non-null object p3_conf 2115 non-null float64 p3_dog 2115 non-null bool retweet_count 2115 non-null int64 favorite_count 2115 non-null int64 dtypes: bool(3), float64(3), int64(6), object(10) memory usage: 336.7+ KB ###Markdown Quality 2 - Names such as "None", "a", "an", etc., which start with a lowercase letter Code ###Code lowercase_names = [c for c in df_dog['name'] if c.islower()] print(lowercase_names) df_dog['name'].replace(lowercase_names, 'None', inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code df_dog.name.value_counts() ###Output _____no_output_____
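As a design note, the Quality 2 fix above (collect the lowercase names in a list, then replace them) can also be written as a single vectorized boolean-mask assignment. A minimal sketch on a toy frame (illustrative data, keeping this notebook's 'None' placeholder convention):

```python
import pandas as pd

# Toy "name" column: lowercase tokens such as "a" and "quite" are artifacts
# extracted from tweet text, not real dog names
toy = pd.DataFrame({"name": ["Cooper", "a", "an", "Oliver", "quite"]})

# Vectorized equivalent of the list-comprehension + replace approach
mask = toy["name"].str.islower()
toy.loc[mask, "name"] = "None"
print(toy["name"].tolist())  # ['Cooper', 'None', 'None', 'Oliver', 'None']
```

The mask avoids building an intermediate Python list and runs in one pass over the column.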
###Markdown Quality 3 - Change Data Type in Tweet_id and timestamp Code ###Code df_dog['tweet_id'] = df_dog['tweet_id'].astype(str) df_dog['timestamp']= pd.to_datetime(df_dog['timestamp'],format = "%Y-%m-%d %H:%M:%S") ###Output _____no_output_____ ###Markdown Test ###Code df_dog.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2115 entries, 0 to 2114 Data columns (total 22 columns): tweet_id 2115 non-null object timestamp 2115 non-null datetime64[ns] source 2115 non-null object text 2115 non-null object expanded_urls 2115 non-null object rating_numerator 2115 non-null int64 rating_denominator 2115 non-null int64 name 2115 non-null object dog_stage 333 non-null object jpg_url 2115 non-null object img_num 2115 non-null int64 p1 2115 non-null object p1_conf 2115 non-null float64 p1_dog 2115 non-null bool p2 2115 non-null object p2_conf 2115 non-null float64 p2_dog 2115 non-null bool p3 2115 non-null object p3_conf 2115 non-null float64 p3_dog 2115 non-null bool retweet_count 2115 non-null int64 favorite_count 2115 non-null int64 dtypes: bool(3), datetime64[ns](1), float64(3), int64(5), object(10) memory usage: 336.7+ KB ###Markdown Quality 4 - rating_numerator and rating_denominator have incorrected Data type Code ###Code df_dog['rating_numerator'] = df_dog['rating_numerator'].astype(int) df_dog['rating_denominator'] = df_dog['rating_denominator'].astype(float) ###Output _____no_output_____ ###Markdown Test ###Code df_dog.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2115 entries, 0 to 2114 Data columns (total 22 columns): tweet_id 2115 non-null object timestamp 2115 non-null datetime64[ns] source 2115 non-null object text 2115 non-null object expanded_urls 2115 non-null object rating_numerator 2115 non-null int64 rating_denominator 2115 non-null float64 name 2115 non-null object dog_stage 333 non-null object jpg_url 2115 non-null object img_num 2115 non-null int64 p1 2115 non-null object p1_conf 2115 non-null float64 p1_dog 2115 
non-null bool p2 2115 non-null object p2_conf 2115 non-null float64 p2_dog 2115 non-null bool p3 2115 non-null object p3_conf 2115 non-null float64 p3_dog 2115 non-null bool retweet_count 2115 non-null int64 favorite_count 2115 non-null int64 dtypes: bool(3), datetime64[ns](1), float64(3), int64(5), object(10) memory usage: 336.7+ KB ###Markdown Quality 4 - rating_numerator and rating_denominator have incorrect data types Code ###Code df_dog['rating_numerator'] = df_dog['rating_numerator'].astype(int) df_dog['rating_denominator'] = df_dog['rating_denominator'].astype(float) ###Output _____no_output_____ ###Markdown Test ###Code df_dog.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2115 entries, 0 to 2114 Data columns (total 22 columns): tweet_id 2115 non-null object timestamp 2115 non-null datetime64[ns] source 2115 non-null object text 2115 non-null object expanded_urls 2115 non-null object rating_numerator 2115 non-null int64 rating_denominator 2115 non-null float64 name 2115 non-null object dog_stage 333 non-null object jpg_url 2115 non-null object img_num 2115 non-null int64 p1 2115 non-null object p1_conf 2115 non-null float64 p1_dog 2115 non-null bool p2 2115 non-null object p2_conf 2115 non-null float64 p2_dog 2115 non-null bool p3 2115 non-null object p3_conf 2115 non-null float64 p3_dog 2115 non-null bool retweet_count 2115 non-null int64 favorite_count 2115 non-null int64 dtypes: bool(3), datetime64[ns](1), float64(4), int64(4), object(10) memory usage: 336.7+ KB ###Markdown Quality 5 - rating_numerator and rating_denominator contain incorrect ratings Code ###Code df_dog['dog_rating'] = 10 * df_dog['rating_numerator'] / df_dog['rating_denominator'] ###Output _____no_output_____ ###Markdown Test ###Code df_dog.head() ###Output _____no_output_____ ###Markdown Code ###Code df_dog = df_dog.drop(['rating_numerator','rating_denominator'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code df_dog.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2115 entries, 0 to 2114 Data columns (total 21 columns): tweet_id 2115 non-null object timestamp 2115 non-null datetime64[ns] source 2115 non-null object text 2115 non-null object expanded_urls 2115 non-null object name 2115 non-null object dog_stage 333 non-null object jpg_url 2115 non-null object img_num 2115 non-null int64 p1 2115 non-null object p1_conf 2115 non-null float64 p1_dog 2115 non-null bool p2 2115 non-null object p2_conf 2115 non-null float64 p2_dog 2115 non-null bool p3 2115 non-null object p3_conf 2115 non-null float64 p3_dog 2115 non-null bool retweet_count 2115 non-null int64 favorite_count 2115 non-null int64 dog_rating 2115 non-null float64 dtypes: bool(3), datetime64[ns](1), float64(4), int64(3), object(10) memory usage: 320.1+ KB ###Markdown Quality 6 - Extract a single plain link label from the HTML code in the source column Code ###Code df_dog['source'] = df_dog['source'].str.extract('^<a.+>(.+)</a>$') ###Output _____no_output_____ ###Markdown Test ###Code df_dog.source.value_counts() ###Output _____no_output_____ ###Markdown Quality 7 - expanded_urls has missing values (only 2297 non-null) Code ###Code df_dog = df_dog[df_dog.expanded_urls.notnull()]
###Output _____no_output_____ ###Markdown Test ###Code df_dog.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2115 entries, 0 to 2114 Data columns (total 21 columns): tweet_id 2115 non-null object timestamp 2115 non-null datetime64[ns] source 2115 non-null object text 2115 non-null object expanded_urls 2115 non-null object name 2115 non-null object dog_stage 333 non-null object jpg_url 2115 non-null object img_num 2115 non-null int64 p1 2115 non-null object p1_conf 2115 non-null float64 p1_dog 2115 non-null bool p2 2115 non-null object p2_conf 2115 non-null float64 p2_dog 2115 non-null bool p3 2115 non-null object p3_conf 2115 non-null float64 p3_dog 2115 non-null bool retweet_count 2115 non-null int64 favorite_count 2115 non-null int64 dog_rating 2115 non-null float64 dtypes: bool(3), datetime64[ns](1), float64(4), int64(3), object(10) memory usage: 320.1+ KB ###Markdown Quality 8 - dog_rating should be integers, not floats Code ###Code df_dog["dog_rating"]= df_dog["dog_rating"].astype(int) ###Output _____no_output_____ ###Markdown Test ###Code df_dog.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2115 entries, 0 to 2114 Data columns (total 21 columns): tweet_id 2115 non-null object timestamp 2115 non-null datetime64[ns] source 2115 non-null object text 2115 non-null object expanded_urls 2115 non-null object name 2115 non-null object dog_stage 333 non-null object jpg_url 2115 non-null object img_num 2115 non-null int64 p1 2115 non-null object p1_conf 2115 non-null float64 p1_dog 2115 non-null bool p2 2115 non-null object p2_conf 2115 non-null float64 p2_dog 2115 non-null bool p3 2115 non-null object p3_conf 2115 non-null float64 p3_dog 2115 non-null bool retweet_count 2115 non-null int64 favorite_count 2115 non-null int64 dog_rating 2115 non-null int64 dtypes: bool(3), datetime64[ns](1), float64(3), int64(4), object(10) memory usage: 320.1+ KB ###Markdown Storing Data ###Code df_dog.to_csv('twitter_archive_master.csv', index 
= False) ###Output _____no_output_____ ###Markdown Analyzing and Visualizing Data ###Code df_archive = pd.read_csv('twitter_archive_master.csv') df_archive.head(5) df_archive.info() df_archive.describe() ###Output _____no_output_____ ###Markdown Insight 1. In these graphs, we can view a general picture of all the counts that we have. Let's pay attention to favorites and retweets. ###Code df_archive.hist(figsize=(10,10)) ###Output _____no_output_____ ###Markdown Insight 2. Let's see the TOP 5 most favorited tweets ###Code df_archive.sort_values(['favorite_count'], ascending =False ).head(5) ###Output _____no_output_____ ###Markdown Insight 3. Let's see the TOP 5 most retweeted tweets ###Code df_archive.sort_values(['retweet_count'], ascending =False ).head(5) ###Output _____no_output_____ ###Markdown We can see that there are two tweets which rank high in both favorites and retweets: 744234799360020481 and 807106840509214720 Insight 4. Let's see the TOP dog stages ###Code df_archive['dog_stage'].value_counts().plot(kind='bar', figsize=(10,5)); plt.title('dog_stage', weight='bold', fontsize=12); df_archive['dog_stage'].value_counts() ###Output _____no_output_____ ###Markdown Pupper (239) is the most common stage in a dog's life in this analysis: pupper 239, doggo 78, puppo 32, floofer 3 ###Code df_archive.sort_values('favorite_count')['dog_stage'] df_archive.sort_values('favorite_count')['name'] ###Output _____no_output_____ ###Markdown Insight 5. Top 5 Dog Predictions ###Code df_archive['p1'].value_counts()[4::-1].plot(kind='barh') plt.title('Top 5 Dog Predictions') plt.xlabel('Number of Times Predicted') plt.ylabel('Dog Breed') plt.fontsize = 12 df_archive['p1'].value_counts() ###Output _____no_output_____ ###Markdown We can see from this graph that golden_retriever (158) and Labrador_retriever (103) are especially popular. Insight 6.
Top 5 Dog Names ###Code df_archive['name'].value_counts()[4::-1].plot(kind='barh') plt.title('Top 5 Dog Names') plt.xlabel('Count') plt.ylabel('Dogs') plt.fontsize = 12 df_archive['name'].value_counts() ###Output _____no_output_____ ###Markdown We have 3 dogs tied for the highest count - Cooper, Charlie and Oliver (11 each) Insight 7. Favorites and Retweets ###Code p = sns.regplot(x=df_archive.retweet_count, y=df_archive.favorite_count) plt.title("Favorites and Retweets") plt.xlabel('Retweets') plt.ylabel('Favorites') plt.show() fig = p.get_figure() fig.savefig('scatterplot.png') ###Output _____no_output_____ ###Markdown Python Libraries ###Code import pandas as pd import numpy as np import requests import csv import os import tweepy import json from sqlalchemy import create_engine import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Gathering Data WeRateDogs Twitter archive ###Code #Loading .csv file content into a Pandas Data Frame tw_archive = pd.read_csv('twitter-archive-enhanced.csv') ###Output _____no_output_____ ###Markdown Tweet image predictions ###Code #Obtaining Current Working Directory cwd = os.getcwd() #Link to download file tsv_link = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' #Using the requests library to open the link response = requests.get(tsv_link) #Opening a file on CWD, creating a tsv file and writing the link's content into the file with open (os.path.join(cwd, tsv_link.split('/')[-1]), mode = 'wb') as file: file.write(response.content) #Loading .tsv file content into a Pandas Data Frame image_archive_df = pd.read_csv('image-predictions.tsv', sep = '\t') ###Output _____no_output_____ ###Markdown Twitter API Connection ###Code #File name to save twitter Json info json_file = "tweet_json.txt" #API Keys and Access Tokens # consumer_key = # consumer_secret = # access_token = # access_secret = # #Set up Twitter access # auth = 
tweepy.OAuthHandler(consumer_key, consumer_secret) # auth.set_access_token(access_token, access_secret) # api = tweepy.API(auth) #Write json txt file with all twitter entries rewrite_txt = 0; i=0; #to set up begin and end of file if rewrite_txt == 1: #if file was written, do not run routine again with open (os.path.join(cwd, json_file), mode = 'w') as file: file.write("[") for tweet_id in tw_archive.tweet_id: try: tweet = api.get_status(tweet_id, tweet_mode='extended', wait_on_rate_limit= True, wait_on_rate_limit_notify= True) file.write(json.dumps(tweet._json)) if i != len(tw_archive.tweet_id)-1: file.write(",\n")#if last entry, do not add new line except: print("Tweet %d not available anymore" %(tweet_id)) i += 1 file.write("]") ###Output _____no_output_____ ###Markdown Creating Tweet API Dataframe ###Code with open (os.path.join(cwd, json_file)) as json_read: data = json.load(json_read) tweeter_api_dict = [] for p in data: try: #If the tweet json file has all the information tweeter_api_dict.append({'created_at': p['created_at'], 'tweet_id': p['id_str'], 'full_text': p['full_text'], 'favorite_count': p['favorite_count'], 'retweet_count': p['retweet_count'], 'url': p['entities']['media'][0]['url'], 'in_reply_to_user_id' : p['in_reply_to_user_id'] }) except: # Some json entries do not have the media url entry tweeter_api_dict.append({'created_at': p['created_at'], 'tweet_id': p['id_str'], 'full_text': p['full_text'], 'favorite_count': p['favorite_count'], 'retweet_count': p['retweet_count'], 'url': '', 'in_reply_to_user_id' : p['in_reply_to_user_id'] }) tweeter_api_df = pd.DataFrame(tweeter_api_dict, columns = ['tweet_id', 'created_at', 'full_text', 'favorite_count', 'retweet_count', 'url', 'in_reply_to_user_id' ]) tweeter_api_df.head() ###Output _____no_output_____ ###Markdown Assessing Data Twitter Archive Dataset ###Code tw_archive.head() tw_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 
columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown Quality- NaN values in the columns in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp - Dogs' names missing or incorrect; they should be obtained from the tweet text- Type of dog missing (doggo, floofer, pupper, puppo)- Timestamp column is of type str; change it to a datetime object- Missing values in the expanded_urls column and other columns- Some ratings are not correct; these should be obtained from the tweet text- Only 380 entries have a defined dog stage (puppo, floofer, etc.)- There are entries which have two different dog stages assigned. These will be removed and just one dog stage will be considered. These are the tweet IDs with multiple dog stages assigned: - 854010172552949760 - 817777686764523521 - 808106460588765185 - 802265048156610565 - 801115127852503040 - 785639753186217984 - 781308096455073793 - 775898661951791106 - 770093767776997377 - 759793422261743616 - 751583847268179968 - 741067306818797568 - 733109485275860992 - 855851453814013952 Tidiness- Dog stage is now spread over 4 columns; these should be merged into just one. - The text column has the tweet link appended. This should be moved to a column of its own- Drop the columns expanded_urls, in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp- Add columns with retweet_count and favorite_count- The source column does not add any information to the table and should be dropped Image Prediction Dataset ###Code image_archive_df.head() image_archive_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown Quality- Breed names have an underscore; capitalize the 1st letter of each breed Tidiness- This table is not necessary as a standalone, since it does not add a lot of new information. The breed of the dog can be added to the Twitter archive table to make it more complete - Merge the breed predictions together (keeping the best one) and use that single value on the main dataframe Twitter API Dataset ###Code tweeter_api_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2349 entries, 0 to 2348 Data columns (total 7 columns): tweet_id 2349 non-null object created_at 2349 non-null object full_text 2349 non-null object favorite_count 2349 non-null int64 retweet_count 2349 non-null int64 url 2349 non-null object in_reply_to_user_id 78 non-null float64 dtypes: float64(1), int64(2), object(4) memory usage: 128.5+ KB ###Markdown Quality- Tweet ID 810984652412424192 does not have a rating. It cannot be obtained from the archive dataset nor from the information returned by the API. This entry's rating will be defined as 10/10- Tweet ID 826598799820865537 has a rating of 007/10, since it was a joke by the author. No real rating was given. This entry's rating will be defined as 10/10- Some entries have incorrect ratings due to the programmatic data gathering; these will be corrected individually - 835246439529840640 - 740373189193256964 - 682962037429899265 - 722974582966214656 - 686035780142297088 - 716439118184652801 - 881633300179243008- The following tweet IDs are not about a dog and should be removed: - 832088576586297345 - 838150277551247360- Older tweets do not have the dog's name available from the text. The author changed the tweet format at some point, after which all tweets start with the dog's name. As a result, some tweets do not have the dog's name available.- Replace the name of all dogs without a defined name with the string 'Not Defined'; this results in 760 tweets with no names- Some tweet texts do not have the short link to the tweet. This will be corrected; the long URL will be used.- Some tweets are retweets and need to be removed (29). There are many self-retweets (47); those will be considered original and will be kept in the dataset- There are tweets that are repeated self-retweets. The originals should be kept. Such as: - 879130579576475649 retweet of 878057613040115712 - 868639477480148993 retweet of 868552278524837888 - 871166179821445120 retweet of 841077006473256960- Retweets from other accounts have a text that starts with "RT". All these tweets (176) will be removed since they are not original Tidiness- Cleaning Data Creating New Dataframe Copies ###Code tw_archive_clean = tw_archive.copy() tweeter_api_df_clean = tweeter_api_df.copy() image_archive_df_clean = image_archive_df.copy() ###Output _____no_output_____ ###Markdown Extracting Values From JSON File Text Entries Define - There is no column with the dogs' names - Remove the link from the full text - Add rating columns ###Code list(tweeter_api_df_clean) tweeter_api_df_clean.iloc[0].full_text ###Output _____no_output_____ ###Markdown Code ###Code name = [] tweet_link = [] full_text = [] for text in tweeter_api_df_clean.full_text: temp_name = text.split('.')[0].split(' ')[-1] if len(temp_name) == 0 or len(temp_name) == 1: #Some texts do not have the dog's name name.append('Not Defined') elif temp_name[0].islower() or temp_name[0].isalpha() == False: # All tweets with the dog's name will start with a capital letter name.append('Not Defined') elif temp_name[0].isdigit(): #Some tweet texts are short, so the algorithm mistakenly takes the rating as the name name.append('Not Defined') else: name.append(temp_name) full_text.append(text.split('https:')[0]) tweeter_api_df_clean['name'] = name tweeter_api_df_clean['full_text'] = full_text ###Output _____no_output_____ ###Markdown Using REGEX to Extract Ratings ###Code # Use regex to obtain the ratings s = tweeter_api_df_clean.full_text regexp = '(?P<Numerator>[0-9]?[0-9]?[0-9]?[0-9])/(?P<denominator>[0-9]?[0-9][0-9])' test_df = s.str.extract(regexp,expand=True) tweeter_api_df_clean['rating_num'] = test_df.Numerator tweeter_api_df_clean['rating_den'] = test_df.denominator ###Output _____no_output_____ ###Markdown Test - name, rating_num, rating_den columns added and values gathered - full_text column no longer contains the link ###Code tweeter_api_df_clean.iloc[0] tweeter_api_df_clean.iloc[0].full_text ###Output _____no_output_____ ###Markdown Individually Correcting Abnormal Ratings Define - Several Abnormal Ratings
Corrected, such as NaN or Pun ratings ###Code tweeter_api_df_clean[tweeter_api_df_clean.rating_num.isnull() == True] tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '826598799820865537'] ###Output _____no_output_____ ###Markdown Code ###Code # Change entry which has NaN rating and incorrect ratings tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.rating_num.isnull() == True].index[0],'rating_num'] = '10' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.rating_den.isnull() == True].index[0],'rating_den'] = '10' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '826598799820865537'].index[0],'rating_num'] = '10' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '826598799820865537'].index[0],'rating_den'] = '10' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '835246439529840640'].index[0],'rating_num'] = '13' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '835246439529840640'].index[0],'rating_den'] = '10' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '740373189193256964'].index[0],'rating_num'] = '14' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '740373189193256964'].index[0],'rating_den'] = '10' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '682962037429899265'].index[0],'rating_num'] = '10' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '682962037429899265'].index[0],'rating_den'] = '10' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '722974582966214656'].index[0],'rating_num'] = '13' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '722974582966214656'].index[0],'rating_den'] = '10' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '686035780142297088'].index[0],'rating_num'] = '12' 
tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '686035780142297088'].index[0],'rating_den'] = '10' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '716439118184652801'].index[0],'rating_num'] = '11' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '716439118184652801'].index[0],'rating_den'] = '10' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '881633300179243008'].index[0],'rating_num'] = '13' tweeter_api_df_clean.loc[tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '881633300179243008'].index[0],'rating_den'] = '10' # Remove entries which are not about dog ratings tweeter_api_df_clean.drop(tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '838150277551247360'].index[0], inplace=True) tweeter_api_df_clean.drop(tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '832088576586297345'].index[0], inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code tweeter_api_df_clean[tweeter_api_df_clean.tweet_id == '826598799820865537'] ###Output _____no_output_____ ###Markdown Changing Ratings From String to Integer Type ###Code # Transform ratings from type string to integer for index in tweeter_api_df_clean.index: try: tweeter_api_df_clean.loc[index,'rating_num'] = int(tweeter_api_df_clean.rating_num[index]) tweeter_api_df_clean.loc[index,'rating_den'] = int(tweeter_api_df_clean.rating_den[index]) except: pass ###Output _____no_output_____ ###Markdown Cleaning Dog Stages Information Define - Reducing Dog Stages columns from 4 to 1 by melting and combining values ###Code list(tw_archive_clean) ###Output _____no_output_____ ###Markdown Code ###Code #Dropping Non important columns tw_archive_clean = tw_archive_clean.drop('in_reply_to_status_id', 1) tw_archive_clean = tw_archive_clean.drop('in_reply_to_user_id', 1) tw_archive_clean = tw_archive_clean.drop('source', 1) tw_archive_clean = tw_archive_clean.drop('retweeted_status_id', 
1) tw_archive_clean = tw_archive_clean.drop('retweeted_status_user_id', 1) tw_archive_clean = tw_archive_clean.drop('retweeted_status_timestamp', 1) tw_archive_clean = tw_archive_clean.drop('expanded_urls', 1) tw_archive_clean = tw_archive_clean.drop('timestamp', 1) #Creating a new column that states which dog stages are defined and which are not temp = [] for i in range(0, len(tw_archive_clean.tweet_id)): if tw_archive_clean.doggo[i] == 'None' and tw_archive_clean.floofer[i] == 'None' and tw_archive_clean.pupper[i] == 'None' and tw_archive_clean.puppo[i] == 'None': temp.append('not_defined') else: temp.append('defined') tw_archive_clean['not_defined'] = temp ###Output _____no_output_____ ###Markdown Melting Dog Stage Columns ###Code # Melting dog_stage columns together tw_archive_clean = pd.melt(tw_archive_clean, id_vars=['tweet_id', 'text', 'rating_numerator', 'rating_denominator', 'name'], var_name='dog_stage', value_name='temp_dog_stage') tw_archive_clean = tw_archive_clean[tw_archive_clean.dog_stage == tw_archive_clean.temp_dog_stage] # Removing entries that have multiple dog_stage assigned for index in tw_archive_clean.tweet_id[tw_archive_clean.tweet_id.duplicated()].index: tw_archive_clean.drop(index, inplace=True) # Dropping the column that adds no information tw_archive_clean = tw_archive_clean.drop('temp_dog_stage', 1) ###Output _____no_output_____ ###Markdown Updating API Dataframe With Dog Stage Information ###Code # Updating the tweeter_api_df_clean with the new dog_stage dog_stage_new = [] for tweet_id in tweeter_api_df_clean.tweet_id: dog_stage_new.append(tw_archive_clean.loc[tw_archive_clean.loc[tw_archive_clean.tweet_id == int(tweet_id)].index[0]].dog_stage) # Adding a new column tweeter_api_df_clean['dog_stage'] = dog_stage_new ###Output _____no_output_____ ###Markdown Test ###Code list(tweeter_api_df_clean) tweeter_api_df_clean[tweeter_api_df_clean.dog_stage == 'pupper'] ###Output _____no_output_____ ###Markdown Cleaning Twitter API Dataframe
From Retweeted Entries Define - Clean Retweets - Define URLs for tweets that are missing them - Correcting Dog Breed names ###Code tweeter_api_df_clean[(tweeter_api_df_clean.in_reply_to_user_id.isnull() == False) & (tweeter_api_df_clean.in_reply_to_user_id != 4196983835)].sample() tweeter_api_df_clean[tweeter_api_df_clean.url == ''].sample() image_archive_df.p1.sample() ###Output _____no_output_____ ###Markdown Code ###Code # Remove all replies to other users, keep self-replies for index in tweeter_api_df_clean[(tweeter_api_df_clean.in_reply_to_user_id.isnull() == False) & (tweeter_api_df_clean.in_reply_to_user_id != 4196983835)].index : tweeter_api_df_clean.drop(index, inplace = True) # Removing retweets from the dataframe ids_to_remove = [] for i in range(0,len(tweeter_api_df_clean.full_text)): try: if tweeter_api_df_clean.full_text[i].split(' ')[0] == 'RT': # Retweets start with "RT" ids_to_remove.append(tweeter_api_df_clean.tweet_id[i]) except KeyError: # index labels removed by the drops above pass # Remove all non-original retweets: for index in tweeter_api_df_clean[tweeter_api_df_clean.tweet_id.isin(ids_to_remove)].index: tweeter_api_df_clean.drop(index, inplace = True) # Dropping unimportant columns tweeter_api_df_clean = tweeter_api_df_clean.drop(columns='in_reply_to_user_id') ###Output _____no_output_____ ###Markdown Defining URLs for Tweet Entries Without Them ###Code # Tweets which do not have a short url defined, replace with long url.
for index in tweeter_api_df_clean[tweeter_api_df_clean.url == ''].index : new_url = "https://twitter.com/dog_rates/status/" + tweeter_api_df_clean.tweet_id[index] tweeter_api_df_clean.loc[index,'url'] = new_url ###Output _____no_output_____ ###Markdown Correcting Dog Breed Names With Capitals and Removing Underscores ###Code # Replacing the underscores of breed names and capitalizing the first letter for i in range(0,len(image_archive_df)): image_archive_df.loc[i,'p1'] = image_archive_df.p1[i].replace('_',' ').title() image_archive_df.loc[i,'p2'] = image_archive_df.p2[i].replace('_',' ').title() image_archive_df.loc[i,'p3'] = image_archive_df.p3[i].replace('_',' ').title() ###Output _____no_output_____ ###Markdown Reducing Breed Predictions to Best One and Merging With Main Dataset ###Code s1 = ['p1_dog','p2_dog','p3_dog'] for index in tweeter_api_df_clean.index: tweet_id = tweeter_api_df_clean.tweet_id[index] # Id to look for on image dataframe is_dog = [] # Check if prediction is a dog breed try: for j in range(0,3): is_dog.append(image_archive_df.loc[image_archive_df.loc[image_archive_df.tweet_id == int(tweet_id)].index[0], s1[j]]) if is_dog[j]: # if we have a True on dog breed breed_prediction = image_archive_df.loc[image_archive_df.loc[image_archive_df.tweet_id == int(tweet_id)].index[0], s1[j][0:2]] break elif j == 2: # if dog breed is not found breed_prediction = 'Not Defined' except IndexError: # If tweet id not found on image prediction dataframe breed_prediction = 'Not Defined' # Create and update breed column on main dataset tweeter_api_df_clean.loc[index,'breed'] = breed_prediction ###Output _____no_output_____ ###Markdown Test ###Code list(tweeter_api_df_clean) tweeter_api_df_clean[tweeter_api_df_clean.url == ''] image_archive_df.p1.sample() ###Output _____no_output_____ ###Markdown Changing Column Variable Types Changing Created_at Column From String to Datetime Type ###Code # Strip the UTC offset, then parse the whole column in one vectorized call tweeter_api_df_clean.created_at = pd.to_datetime( tweeter_api_df_clean.created_at.str.replace(' +0000', '', regex=False), format='%a %b %d %H:%M:%S %Y') ###Output _____no_output_____ ###Markdown Changing From String to Integers ###Code tweeter_api_df_clean.tweet_id = pd.to_numeric(tweeter_api_df_clean.tweet_id) tweeter_api_df_clean.rating_num = pd.to_numeric(tweeter_api_df_clean.rating_num) tweeter_api_df_clean.rating_den = pd.to_numeric(tweeter_api_df_clean.rating_den) ###Output _____no_output_____ ###Markdown Storing Final Clean Dataframe Creating New Clean Copy ###Code # Clean DF copy with reset index twitter_archive_master = tweeter_api_df_clean.copy().reset_index(drop=True) ###Output _____no_output_____ ###Markdown Storing on CSV format ###Code twitter_archive_master.to_csv(os.path.join(cwd,'twitter_archive_master.csv'), index=False) ###Output _____no_output_____ ###Markdown Storing on SQL Format Connect to a database ###Code # Create SQLAlchemy Engine and empty twitter_archive_master database engine = create_engine('sqlite:///twitter_archive_master.db') ###Output _____no_output_____ ###Markdown Store pandas DataFrame in database Store the data in the cleaned master dataset (twitter_archive_master) in that database.
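A quick way to sanity-check the write-back described here is to read the table straight back and compare. A minimal sketch using the standard-library `sqlite3` driver with an in-memory database (the frame contents are illustrative, not the real master table):

```python
import sqlite3

import pandas as pd

# In-memory database stands in for twitter_archive_master.db
con = sqlite3.connect(":memory:")

# Illustrative stand-in for the cleaned master frame
df = pd.DataFrame({"tweet_id": [1, 2], "rating_num": [12, 13]})
df.to_sql("master", con, index=False, if_exists="replace")

# Round-trip check: the frame read back should match what was written
df_back = pd.read_sql("SELECT * FROM master", con)
```

The same pattern works with the SQLAlchemy engine used in the notebook; `if_exists="replace"` keeps reruns of the cell idempotent.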
###Code # Store cleaned master DataFrame ('twitter_archive_master') in a table called master in twitter_archive_master.db twitter_archive_master.to_sql('master', engine, index=False, if_exists = 'replace') ###Output _____no_output_____ ###Markdown Creating Analytics Dataframes All Dog Breeds ###Code #List of all dog breeds dog_breeds = twitter_archive_master.breed.value_counts().index.tolist() ###Output _____no_output_____ ###Markdown Grouping Stats by Dog Breed ###Code # Create Dict grouped by dog breed (All dog breeds present on DF) dog_breed_stats = {} for dog_breed in dog_breeds: dog_breed_stats[dog_breed] = {'count':[], 'count_per':[], 'favorite_count_sum':[],'retweet_count_sum':[], 'rating_num_mean' : [],'rating_den_mean' : [], 'favorite_count_mean' : [],'retweet_count_mean' : []} ###Output _____no_output_____ ###Markdown Gathering Statistics of Each Breed ###Code # Gathering stats about each dog breed for key in dog_breed_stats.keys(): #print(dog_breed_stats[key]['count']) dog_breed_stats[key]['count'] = twitter_archive_master[twitter_archive_master.breed == key].tweet_id.count() dog_breed_stats[key]['count_per'] = 100 * twitter_archive_master[twitter_archive_master.breed == key].tweet_id.count()/twitter_archive_master.count()[0] dog_breed_stats[key]['favorite_count_sum'] = twitter_archive_master[twitter_archive_master.breed == key].favorite_count.sum() dog_breed_stats[key]['retweet_count_sum'] = twitter_archive_master[twitter_archive_master.breed == key].retweet_count.sum() dog_breed_stats[key]['rating_num_mean'] = twitter_archive_master[twitter_archive_master.breed == key].rating_num.mean() dog_breed_stats[key]['rating_den_mean'] = twitter_archive_master[twitter_archive_master.breed == key].rating_den.mean() dog_breed_stats[key]['favorite_count_mean'] = twitter_archive_master[twitter_archive_master.breed == key].favorite_count.mean() dog_breed_stats[key]['retweet_count_mean'] = twitter_archive_master[twitter_archive_master.breed == 
key].retweet_count.mean() ###Output _____no_output_____ ###Markdown Creating and Storing Breed Statistics Dataframe ###Code #Creating Dataframe dog_breed_stats_df = pd.DataFrame([], index = dog_breed_stats.keys(), columns = dog_breed_stats[dog_breeds[0]].keys()) #Populating Dataframe for index in dog_breed_stats_df.index: for key in dog_breed_stats[dog_breeds[0]].keys(): dog_breed_stats_df[key].loc[index] = dog_breed_stats[index][key] #Storing in CSV file dog_breed_stats_df.to_csv(os.path.join(cwd,'dog_breed_stats.csv'), index=True) # Store cleaned breed_stats DataFrame ('breed_stats') in a table called breed_stats in twitter_archive_master.db dog_breed_stats_df.to_sql('breed_stats', engine, index=False, if_exists = 'replace') ###Output _____no_output_____ ###Markdown Grouping by Dog Stage ###Code dog_stages = twitter_archive_master.dog_stage.value_counts().index.tolist() stats = ['count', 'count_per', 'favorite_count_sum','retweet_count_sum', 'rating_num_mean','rating_den_mean', 'favorite_count_mean','retweet_count_mean'] #Creating Dataframe dog_stages_stats_df = pd.DataFrame([], index = stats, columns = dog_stages) for dog_stage in dog_stages: dog_stages_stats_df[dog_stage].loc['count'] = twitter_archive_master.groupby(['dog_stage']).get_group(dog_stage).dog_stage.count() dog_stages_stats_df[dog_stage].loc['count_per'] = 100 * twitter_archive_master.groupby(['dog_stage']).get_group(dog_stage).dog_stage.count()/twitter_archive_master.count()[0] dog_stages_stats_df[dog_stage].loc['favorite_count_sum'] = twitter_archive_master.groupby(['dog_stage']).get_group(dog_stage).favorite_count.sum() dog_stages_stats_df[dog_stage].loc['retweet_count_sum'] = twitter_archive_master.groupby(['dog_stage']).get_group(dog_stage).retweet_count.sum() dog_stages_stats_df[dog_stage].loc['favorite_count_mean'] = twitter_archive_master.groupby(['dog_stage']).get_group(dog_stage).favorite_count.mean() dog_stages_stats_df[dog_stage].loc['retweet_count_mean'] = 
twitter_archive_master.groupby(['dog_stage']).get_group(dog_stage).retweet_count.mean() dog_stages_stats_df[dog_stage].loc['rating_num_mean'] = twitter_archive_master.groupby(['dog_stage']).get_group(dog_stage).rating_num.mean() dog_stages_stats_df[dog_stage].loc['rating_den_mean'] = twitter_archive_master.groupby(['dog_stage']).get_group(dog_stage).rating_den.mean() dog_stages_stats_df = dog_stages_stats_df.transpose() #Storing in CSV file dog_stages_stats_df.to_csv(os.path.join(cwd,'dog_stages_stats.csv'), index=True) # Store cleaned breed_stats DataFrame ('breed_stats') in a table called breed_stats in twitter_archive_master.db dog_stages_stats_df.to_sql('dog_stages_stats', engine, index=False, if_exists = 'replace') ###Output _____no_output_____ ###Markdown Grouping by Weekday ###Code weekday = {6: 'Sunday', 0: 'Monday', 1: 'Tuesday', 2: 'Wednesday', 3: 'Thursday', 4: 'Friday', 5: 'Saturday'} weekday_stats_temp = tweeter_api_df_clean.copy().drop(['tweet_id', 'full_text','url','name','dog_stage','breed', 'created_at'], axis=1) weekday_series = [] for day in twitter_archive_master.created_at.dt.dayofweek: weekday_series.append(weekday[day]) weekday_stats_temp['weekday'] = weekday_series #Creating Dataframe weekday_stats_df = pd.DataFrame([], index = stats, columns = weekday.values()) for week_day in weekday.values(): #print(week_day) weekday_stats_df[week_day].loc['count'] = weekday_stats_temp.groupby(['weekday']).get_group(week_day).weekday.count() weekday_stats_df[week_day].loc['count_per'] = 100 * weekday_stats_temp.groupby(['weekday']).get_group(week_day).weekday.count()/twitter_archive_master.count()[0] weekday_stats_df[week_day].loc['favorite_count_sum'] = weekday_stats_temp.groupby(['weekday']).get_group(week_day).favorite_count.sum() weekday_stats_df[week_day].loc['retweet_count_sum'] = weekday_stats_temp.groupby(['weekday']).get_group(week_day).retweet_count.sum() weekday_stats_df[week_day].loc['favorite_count_mean'] = 
weekday_stats_temp.groupby(['weekday']).get_group(week_day).favorite_count.mean() weekday_stats_df[week_day].loc['retweet_count_mean'] = weekday_stats_temp.groupby(['weekday']).get_group(week_day).retweet_count.mean() weekday_stats_df[week_day].loc['rating_num_mean'] = weekday_stats_temp.groupby(['weekday']).get_group(week_day).rating_num.mean() weekday_stats_df[week_day].loc['rating_den_mean'] = weekday_stats_temp.groupby(['weekday']).get_group(week_day).rating_den.mean() weekday_stats_df = weekday_stats_df.transpose() # Storing in CSV file weekday_stats_df.to_csv(os.path.join(cwd,'weekday_stats_df.csv'), index=True) # Store the weekday stats DataFrame ('weekday_stats_df') in a table called weekday_stats_df in twitter_archive_master.db weekday_stats_df.to_sql('weekday_stats_df', engine, index=False, if_exists = 'replace') ###Output _____no_output_____ ###Markdown Data Analytics and Visualization After the whole Gathering, Assessing and Cleaning process we end up with a dataset of 2142 observations. These are tweets from the WeRateDogs Twitter account (https://twitter.com/dog_rates). The most important information gathered was kept in a tidy, concise table. The author made an effort to remove all tweets not related to dogs. All retweets were removed; only original tweets were kept in the dataset. All tweets kept use the special rating system offered by the page author, http://knowyourmeme.com/memes/theyre-good-dogs-brent. We will now proceed with a short evaluation of the data gathered and offer some insights. Most Tweeted Dog Breed Ignoring the observations which did not produce any dog breed (457 - 21.33% of all observations), we have the following counts for the Top 6: - Golden Retriever 158 - Labrador Retriever 108 - Pembroke 95 - Chihuahua 91 - Pug 62 - Toy Poodle 51 Golden Retrievers and Labrador Retrievers are definitely the most popular dogs on Twitter. They outnumber all other breeds by a lot.
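The counts and percentages quoted here come straight out of `value_counts`; a minimal sketch (the tiny series is illustrative, standing in for `twitter_archive_master.breed`):

```python
import pandas as pd

# Illustrative stand-in for twitter_archive_master.breed
breeds = pd.Series(["Golden Retriever", "Golden Retriever", "Pug", "Not Defined"])

counts = breeds.value_counts()                      # absolute count per breed
shares = breeds.value_counts(normalize=True) * 100  # percentage of all observations
```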
The author of this study agrees with this data: Retrievers are definitely the coolest dogs out there. (The author is biased.) We have a total of 112 different dog breeds in our dataset. Below is shown the whole dataset of all our dog breeds and their respective statistics. ###Code dog_breed_stats_df.head() ###Output _____no_output_____ ###Markdown If we take a look at the Top 15 most tweeted dog breeds, we find the following information, exemplified in this pie chart. Golden Retrievers take the lead with 18% of the tweets from the Top 15. The Eskimo Dog closes the Top 15 with a 2.5% share of the tweets. The insight is that if you want your dog to be featured on the WeRateDogs Twitter account, make sure you own a Golden Retriever, since this gives you the best chance to do so. ###Code plt.figure(figsize=(10,9)); dog_breed_stats_df.transpose().iloc[0][1:16].plot(kind='pie',autopct='%1.1f%%', shadow=True); plt.gca().axis("equal") plt.ylabel('') plt.title('The 15 Most Tweeted Dog Breeds',weight='bold', fontsize = 14); plt.savefig('output/piechart.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown We will now dive into a more detailed analysis of which breed is the most popular and most tweeted, in absolute terms and on average per number of tweets. From the following two plots we can draw the following conclusions: - Being tweeted more does not guarantee that a breed will be favorited/retweeted more - Case in point: the breeds Samoyed, French Bulldog and Cocker Spaniel - These dog breeds are very popular amongst people, so it is natural that they get more attention from users ###Code fig, axes = plt.subplots(nrows=1, ncols=2) df = dog_breed_stats_df.iloc[1:16].transpose().iloc[2:4].transpose(); for i, c in enumerate(df.columns): df[c].plot(kind='bar', ax=axes[i], figsize=(16, 4), title=c) # keep the y tick labels from getting too crowded #plt.subplots_adjust(hspace = 1.0) fig.suptitle("Top 15 Tweeted Dog Breeds - Favorite and Retweet Total Count", weight='bold', fontsize =
14); axes[0].set_title('Favorite Count'); axes[1].set_title('Retweet Count'); axes[0].set_ylabel("Favorite Count") axes[1].set_ylabel("Retweets Count") plt.savefig('output/Top15favRT.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown Taking that into consideration, let us look at the mean Favorite and Retweet counts. The observation made before can clearly be seen: - The French Bulldog is the most popular dog amongst users. - They only represent 3.6% of the author's tweets, but they tend to be favorited/retweeted the most. ###Code fig, axes = plt.subplots(nrows=1, ncols=2) df = dog_breed_stats_df.iloc[1:16].transpose().iloc[6:8].transpose(); for i, c in enumerate(df.columns): df[c].plot(kind='bar', ax=axes[i], figsize=(16, 4), title=c) # keep the y tick labels from getting too crowded #plt.subplots_adjust(hspace = 1.0) fig.suptitle("Top 15 Tweeted Dog Breeds - Favorite and Retweet Means", weight='bold', fontsize = 14); axes[0].set_title('Favorite Mean'); axes[1].set_title('Retweet Mean'); axes[0].set_ylabel("Favorite Mean Count") axes[1].set_ylabel("Retweet Mean Count") plt.savefig('output/Top15Means.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown Although the rating given by the author is not very accurate or a measurable unit, we can still show it here, since it is one of the best features of this Twitter page.
- The author of the page tends to give the best ratings to the breed Chow - Although he also gives it a higher denominator, so the ratio is kept - It seems that the Pomeranian has the best rating ###Code fig, axes = plt.subplots(nrows=1, ncols=2) df = dog_breed_stats_df.iloc[1:16].transpose().iloc[4:6].transpose(); top_15 = list(df.transpose()); for i, c in enumerate(df.columns): df[c].plot(kind='bar', ax=axes[i], figsize=(16, 4), title=c) # keep the y tick labels from getting too crowded #plt.subplots_adjust(hspace = 1.0) fig.suptitle("Top 15 Tweeted Dog Breeds - Rating Means", weight='bold', fontsize = 14); axes[0].set_title('Numerator Mean'); axes[1].set_title('Denominator Mean'); axes[0].set_ylabel("Rating Numerator Mean") axes[1].set_ylabel("Rating Denominator Mean") plt.savefig('output/Top15RatMeans.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown Below is shown almost the same information as before, but now sorted by mean favorites per dog breed. - The French Bulldog, as seen, has the best favorite mean - Having the best favorite mean does not guarantee the best retweet mean - The same applies to the contrary relation (seen below) ###Code fig, axes = plt.subplots(nrows=1, ncols=2) df = dog_breed_stats_df[['favorite_count_mean','retweet_count_mean']].iloc[1:16].sort_values(by=['favorite_count_mean'], ascending = False).head(15); mean_ft = list(df.transpose()); for i, c in enumerate(df.columns): df[c].plot(kind='bar', ax=axes[i], figsize=(16, 4), title=c) # keep the y tick labels from getting too crowded #plt.subplots_adjust(hspace = 1.0) fig.suptitle("Top 15 Mean Favorited Dog Breeds", weight='bold', fontsize = 14); axes[0].set_title('Favorite Mean'); axes[1].set_title('Retweet Mean'); axes[0].set_ylabel("Favorite Mean Count") axes[1].set_ylabel("Retweet Mean Count") plt.savefig('output/Top15BreedMeanFav.png', bbox_inches='tight') fig, axes = plt.subplots(nrows=1, ncols=2) df =
dog_breed_stats_df[['favorite_count_mean','retweet_count_mean']].iloc[1:16].sort_values(by=['retweet_count_mean'], ascending = False).head(15); mean_rt = list(df.transpose()); for i, c in enumerate(df.columns): df[c].plot(kind='bar', ax=axes[i], figsize=(16, 4), title=c) # keep the y tick labels from getting too crowded #plt.subplots_adjust(hspace = 1.0) fig.suptitle("Top 15 Mean Retweeted Dog Breeds", weight='bold', fontsize = 14); axes[0].set_title('Favorite Mean'); axes[1].set_title('Retweet Mean'); axes[0].set_ylabel("Favorite Mean Count") axes[1].set_ylabel("Retweet Mean Count") plt.savefig('output/Top15BreedMeanRT.png', bbox_inches='tight') fig, axes = plt.subplots(nrows=1, ncols=2) df = dog_breed_stats_df[['rating_num_mean','rating_den_mean']].iloc[1:16].sort_values(by=['rating_num_mean'], ascending = False); mean_rat = list(df.transpose()); for i, c in enumerate(df.columns): df[c].plot(kind='bar', ax=axes[i], figsize=(16, 4), title=c) # keep the y tick labels from getting too crowded #plt.subplots_adjust(hspace = 1.0) fig.suptitle("Top 15 Mean Rated Dog Breeds", weight='bold', fontsize = 14); axes[0].set_title('Numerator Mean'); axes[1].set_title('Denominator Mean'); axes[0].set_ylabel("Numerator Mean") axes[1].set_ylabel("Denominator Mean") plt.savefig('output/Top15BreedMeanRating.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown The following plots will show: - Sorted by Most Favorited Dog Breeds - Sorted by Most Retweeted Dog Breeds ###Code fig, axes = plt.subplots(nrows=1, ncols=2) df = dog_breed_stats_df[['favorite_count_sum','retweet_count_sum']].sort_values(by=['favorite_count_sum'], ascending = False).iloc[1:16]; top_fav = list(df.transpose()); for i, c in enumerate(df.columns): df[c].plot(kind='bar', ax=axes[i], figsize=(16, 4), title=c) # keep the y tick labels from getting too crowded #plt.subplots_adjust(hspace = 1.0) fig.suptitle("Top 15 Most Favorited Dog Breeds", weight='bold', fontsize = 14); axes[0].set_title('Favorite
Count'); axes[1].set_title('Retweet Count'); axes[0].set_ylabel("Favorite Count") axes[1].set_ylabel("Retweet Count") plt.savefig('output/Top15MostFav.png', bbox_inches='tight') fig, axes = plt.subplots(nrows=1, ncols=2) df = dog_breed_stats_df[['favorite_count_sum','retweet_count_sum']].sort_values(by=['retweet_count_sum'], ascending = False).iloc[1:16]; top_rt = list(df.transpose()); for i, c in enumerate(df.columns): df[c].plot(kind='bar', ax=axes[i], figsize=(16, 4), title=c) # keep the y tick labels from getting too crowded #plt.subplots_adjust(hspace = 1.0) fig.suptitle("Top 15 Most Retweeted Dog Breeds", weight='bold', fontsize = 14); axes[0].set_title('Favorite Count'); axes[1].set_title('Retweet Count'); axes[0].set_ylabel("Favorite Count") axes[1].set_ylabel("Retweet Count") plt.savefig('output/Top15MostRT.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown Most Popular Dog Breed Taking into consideration all shown above, we can say that the most popular breeds are the following, in no particular order: - Samoyed - Eskimo Dog - Pembroke - Malamute - Labrador Retriever - Chow - Golden Retriever - Cocker Spaniel - Toy Poodle - Chesapeake Bay Retriever - French Bulldog - Pomeranian - Chihuahua - Pug These dog breeds show up on every plot shown above, which cover all sorts of different measurable analytics. If one considers just the average favorites/retweets, one has to say that the French Bulldog is the most popular dog breed amongst users. ###Code all_dog_tops = pd.Series(top_15 + mean_rt + mean_ft + mean_rat + top_fav + top_rt) all_dog_tops_sorted = pd.Series(all_dog_tops.value_counts()) all_dog_tops_sorted[all_dog_tops_sorted == all_dog_tops_sorted.max()] ###Output _____no_output_____ ###Markdown Analysis by Dog Stage (Puppo, Floofer, Doggo, Pupper) ###Code dog_stages_stats_df ###Output _____no_output_____ ###Markdown Let us now focus on the influence of the dog stage on the statistics of the tweets.
Unfortunately, we do not have a lot of observations to draw conclusions from, so this will be a rather limited analysis. Please refer to the bar graphs below. - Puppos are definitely the most popular amongst users. On average they tend to be favorited and retweeted more than any other stage. - The owner of the page tends to give the same rating to all stages, a little bit less to puppers. ###Code fig, axes = plt.subplots(nrows=1, ncols=3) df = dog_stages_stats_df.iloc[1:5].transpose().iloc[[6,7,4]].transpose().sort_values(by=['retweet_count_mean'], ascending = False) for i, c in enumerate(df.columns): df[c].plot(kind='bar', ax=axes[i], figsize=(16, 4), title=c) # keep the y tick labels from getting too crowded #plt.subplots_adjust(hspace = 1.0) fig.suptitle("Most Mean Retweeted and Favorited Dog Stages", weight='bold', fontsize = 14); axes[0].set_title('Favorite Mean'); axes[1].set_title('Retweet Mean'); axes[2].set_title('Rating Numerator'); axes[0].set_ylabel("Favorite Mean Count") axes[1].set_ylabel("Retweet Mean Count") axes[2].set_ylabel("Rating Numerator Count") plt.savefig('output/DogStageMostMean.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown If we look at the absolute numbers, we can see that: - The author is biased to post puppers, or at least to refer to dogs as puppers in the posts. They far exceed any other dog stage posted. - Despite that fact, users tend to favorite and retweet puppers and doggos almost equally. Puppos and floofers get very little love from Twitter users in absolute terms, but this is due to the fact that there are very few posts mentioning them.
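The per-stage means behind these plots can also be produced in one pass with named aggregation instead of a hand-built dict of stats; a sketch on illustrative data (column names follow the master table):

```python
import pandas as pd

# Illustrative stand-in for the relevant columns of twitter_archive_master
df = pd.DataFrame({
    "dog_stage": ["pupper", "pupper", "doggo"],
    "favorite_count": [100, 300, 500],
    "retweet_count": [10, 30, 50],
})

# One groupby/agg replaces the per-stage loop of .loc assignments
stage_stats = df.groupby("dog_stage").agg(
    count=("favorite_count", "size"),
    favorite_count_mean=("favorite_count", "mean"),
    retweet_count_mean=("retweet_count", "mean"),
)
```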
###Code fig, axes = plt.subplots(nrows=1, ncols=3) df = dog_stages_stats_df.iloc[1:5].transpose().iloc[[0,2,3]].transpose().sort_values(by=['retweet_count_sum'], ascending = False) for i, c in enumerate(df.columns): df[c].plot(kind='bar', ax=axes[i], figsize=(16, 4), title=c) # keep the y tick labels from getting too crowded #plt.subplots_adjust(hspace = 1.0) fig.suptitle("Most Retweeted and Favorited Dog Stages", weight='bold', fontsize = 14); axes[0].set_title('Count'); axes[1].set_title('Favorites'); axes[2].set_title('Retweets'); axes[0].set_ylabel("Tweets Count") axes[1].set_ylabel("Favorites Count") axes[2].set_ylabel("Retweets Count") plt.savefig('output/DogStageMostRTFav.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown Analysis by Weekday ###Code weekday_stats_df ###Output _____no_output_____ ###Markdown Let us now focus on each particular day of the week and its influence on our data. Below is shown a bar graph for each day of the week and the respective statistic. From the first set we can infer the following: - The owner of the page posts the most on Mondays. 16.6% of the posts occur on a Monday, versus 12.8% on Sunday and Saturday, the days he posts the least. - People tend to like (favorite) his posts more on Wednesdays. - People retweet his posts more on Wednesdays. One can say that users tend to like/retweet his posts with a two-day lag, or that posting more does not guarantee increased activity in favorites and retweets.
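The weekday buckets used in this section can be derived without the manual weekday mapping dict via `Series.dt.day_name()`; a minimal sketch (the dates are illustrative, standing in for `twitter_archive_master.created_at`):

```python
import pandas as pd

# Two illustrative timestamps; the real input is the created_at column
created_at = pd.to_datetime(pd.Series(["2017-01-21", "2017-01-23"]))

# day_name() returns 'Monday'..'Sunday' directly, no dayofweek lookup needed
weekdays = created_at.dt.day_name()
```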
###Code fig, axes = plt.subplots(nrows=1, ncols=3) df = weekday_stats_df.transpose().iloc[[0,2,3]].transpose() for i, c in enumerate(df.columns): df[c].plot(kind='bar', ax=axes[i], figsize=(16, 4), title=c) # keep the y tick labels from getting too crowded #plt.subplots_adjust(hspace = 1.0) fig.suptitle("Statistics by Weekday - Count, Favorite and Retweets", weight='bold', fontsize = 14); axes[0].set_title('Count'); axes[1].set_title('Favorites'); axes[2].set_title('Retweets'); axes[0].set_ylabel("Tweets Count") axes[1].set_ylabel("Favorite Count") axes[2].set_ylabel("Retweet Count") plt.savefig('output/WeekdayCountRTFav.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown Furthering the analysis by weekday, please refer to the plots below: - The average favorite and retweet counts (average per tweet) are higher on Wednesdays. This serves as extra confirmation of what was mentioned above. - The author of the page tends to give better ratings to dogs on Mondays, on average. ###Code fig, axes = plt.subplots(nrows=1, ncols=3) df = weekday_stats_df.transpose().iloc[[6,7,4]].transpose() for i, c in enumerate(df.columns): df[c].plot(kind='bar', ax=axes[i], figsize=(16, 4), title=c) # keep the y tick labels from getting too crowded #plt.subplots_adjust(hspace = 1.0) fig.suptitle("Statistics by Weekday - Mean Favorite and Retweet, Rating Numerator", weight='bold', fontsize = 14); axes[0].set_title('Favorite Mean'); axes[1].set_title('Retweet Mean'); axes[2].set_title('Rating Numerator'); axes[0].set_ylabel("Favorite Mean Count") axes[1].set_ylabel("Retweet Mean Count") axes[2].set_ylabel("Rating Numerator Count") plt.savefig('output/WeekdayMeanCountRTFav.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown Best Tweets Most Favorited Tweet The most favorited tweet was of a Lakeland Terrier, Puppo stage. Analysing the tweet, one can explain why it got the most favorites.
This was tweeted one day after President Trump assumed the Presidency, on January 21st, 2017, during the Women's March. This was a highly controversial period when people turned to social media and were highly active with the use of popular hashtags. Refer to the table below with the Top 5 most favorited tweets. ###Code twitter_archive_master.sort_values(by=['favorite_count'], ascending = False).head()[['created_at','favorite_count','retweet_count','name','url','rating_num','dog_stage','breed' ]] ###Output _____no_output_____ ###Markdown Most Retweeted Tweet The most retweeted was a video of an adorable Labrador Retriever in the Doggo stage. This tweet did not use any special hashtags, nor was it posted during a particularly special time. This proves that tweets can go viral with good content and quality, as in this video, which is completely worth watching. Refer to the table below with the Top 5 most retweeted tweets. ###Code twitter_archive_master.sort_values(by=['retweet_count'], ascending = False).head()[['created_at','favorite_count','retweet_count','name','url','rating_num','dog_stage','breed' ]] ###Output _____no_output_____ ###Markdown Project 4 Data Wrangling - Meng Tan Table of Contents Section 1 Gather Section 2 Assess Section 3 Clean Section 4 Exploratory Data Analysis ###Code import requests import os import pandas as pd import numpy as np import tweepy import json import time import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns sns.set_style('darkgrid') from wordcloud import WordCloud, STOPWORDS from PIL import Image ###Output _____no_output_____ ###Markdown Section 1 Gather In this part, we need to gather data from: - Downloading manually - A given URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv - The Twitter API with the Tweepy library ###Code # Data from downloading manually df_archive = pd.read_csv('data/twitter-archive-enhanced.csv') df_archive.head(2) # Gather data with a given url
folder_name = 'data' if not os.path.exists('data/image-predictions.tsv'): if not os.path.exists(folder_name): os.makedirs(folder_name) url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open(os.path.join(folder_name, url.split('/')[-1]), mode='wb') as file: file.write(response.content) os.listdir(folder_name) df_image = pd.read_csv('data/image-predictions.tsv', sep='\t') df_image.head() # Gather data with tweepy def gather_from_tweepy(consumer_key, consumer_secret, access_token, access_secret): auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) return tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) # return the client so it can be used below if not os.path.exists('data/tweet_json.txt'): # Please enter your own keys: consumer_key = '' consumer_secret = '' access_token = '' access_secret = '' api = gather_from_tweepy(consumer_key, consumer_secret, access_token, access_secret) json_list = [] tweet_deleted = [] start = time.time() for tweet_id in df_archive.tweet_id: try: json_list.append(api.get_status(tweet_id, tweet_mode = 'extended')._json) except Exception: tweet_deleted.append(tweet_id) end = time.time() print(end - start) # Print out the processing time: 1430s print(len(tweet_deleted)) # 25 tweet_id information can't be extracted # Store the data with open('data/tweet_json.txt', 'w') as file: json.dump(json_list, file) # Read the data into a list df_list = [] with open('data/tweet_json.txt') as file: json_data = json.load(file) for data in json_data: tweet_id = data['id'] retweet_count = data['retweet_count'] favorite_count = data['favorite_count'] df_list.append({ 'tweet_id': tweet_id, 'retweet_count': retweet_count, 'favorite_count': favorite_count }) # Create DataFrame from the above list of dictionaries df_api = pd.DataFrame(df_list) df_api.head() ###Output _____no_output_____ ###Markdown Section 2 Assess Now we have three
DataFrames: - From **twitter-archive-enhanced.csv**, we get `df_archive` - From **image-predictions.tsv**, we get `df_image`- From **tweet_json.txt**, we get `df_api` `df_archive` table ###Code df_archive df_archive.info() df_archive.nunique() df_archive.name.value_counts() # Since there are 'None' and 'a' as names shown above, maybe there are other words extracted as names df_archive.name.str.extract('(^[a-z]+)').value_counts() df_archive.source.value_counts() df_archive.describe() df_archive.rating_denominator.sort_values() df_archive.rating_denominator.value_counts() # Check the text for those whose denominator not equal to 10 denominator_index = df_archive[df_archive.rating_denominator != 10].index for index in denominator_index: print(index, ': ', df_archive.text[index], '\n---', df_archive.rating_numerator[index], '/', df_archive.rating_denominator[index]) ###Output 313 : @jonnysun @Lin_Manuel ok jomny I know you're excited but 960/00 isn't a valid rating, 13/10 is tho --- 960 / 0 342 : @docmisterio account started on 11/15/15 --- 11 / 15 433 : The floofs have been released I repeat the floofs have been released. 84/70 https://t.co/NIYC820tmd --- 84 / 70 516 : Meet Sam. She smiles 24/7 &amp; secretly aspires to be a reindeer. Keep Sam smiling by clicking and sharing this link: https://t.co/98tB8y7y7t https://t.co/LouL5vdvxx --- 24 / 7 784 : RT @dog_rates: After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https:/… --- 9 / 11 902 : Why does this never happen at my front door... 165/150 https://t.co/HmwrdfEfUE --- 165 / 150 1068 : After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https://t.co/XAVDNDaVgQ --- 9 / 11 1120 : Say hello to this unbelievably well behaved squad of doggos. 204/170 would try to pet all at once https://t.co/yGQI3He3xv --- 204 / 170 1165 : Happy 4/20 from the squad! 
13/10 for all https://t.co/eV1diwds8a --- 4 / 20 1202 : This is Bluebert. He just saw that both #FinalFur match ups are split 50/50. Amazed af. 11/10 https://t.co/Kky1DPG4iq --- 50 / 50 1228 : Happy Saturday here's 9 puppers on a bench. 99/90 good work everybody https://t.co/mpvaVxKmc1 --- 99 / 90 1254 : Here's a brigade of puppers. All look very prepared for whatever happens next. 80/80 https://t.co/0eb7R1Om12 --- 80 / 80 1274 : From left to right: Cletus, Jerome, Alejandro, Burp, &amp; Titson None know where camera is. 45/50 would hug all at once https://t.co/sedre1ivTK --- 45 / 50 1351 : Here is a whole flock of puppers. 60/50 I'll take the lot https://t.co/9dpcw6MdWa --- 60 / 50 1433 : Happy Wednesday here's a bucket of pups. 44/40 would pet all at once https://t.co/HppvrYuamZ --- 44 / 40 1598 : Yes I do realize a rating of 4/20 would've been fitting. However, it would be unjust to give these cooperative pups that low of a rating --- 4 / 20 1634 : Two sneaky puppers were not initially seen, moving the rating to 143/130. Please forgive us. Thank you https://t.co/kRK51Y5ac3 --- 143 / 130 1635 : Someone help the girl is being mugged. Several are distracting her while two steal her shoes. Clever puppers 121/110 https://t.co/1zfnTJLt55 --- 121 / 110 1662 : This is Darrel. He just robbed a 7/11 and is in a high speed police chase. Was just spotted by the helicopter 10/10 https://t.co/7EsP8LmSp5 --- 7 / 11 1663 : I'm aware that I could've said 20/16, but here at WeRateDogs we are very professional. An inconsistent rating scale is simply irresponsible --- 20 / 16 1779 : IT'S PUPPERGEDDON. Total of 144/120 ...I think https://t.co/ZanVtAtvIq --- 144 / 120 1843 : Here we have an entire platoon of puppers. Total score: 88/80 would pet all at once https://t.co/y93p6FLvVw --- 88 / 80 2335 : This is an Albanian 3 1/2 legged Episcopalian. Loves well-polished hardwood flooring. Penis on the collar. 
9/10 https://t.co/d9NcXFKwLv --- 1 / 2 ###Markdown **It seems that there are two main situations when the denominator is not equal to 10:**- When there is more than one dog in the picture, the rating's denominator can be greater than 10- When there is more than one pair of digits separated by '/', the rating is the last such pair (from the results, the default extraction appears to pick the first pair of digits) ###Code df_archive.doggo.value_counts() len(df_archive[df_archive.text.str.contains('doggo', case=False)]) df_archive.floofer.value_counts() len(df_archive[df_archive.text.str.contains('floof', case=False)]) ###Output _____no_output_____ ###Markdown **From the above comparisons, the counts for the different dog stages appear to be incorrect.** `df_api` table ###Code df_api df_api.info() df_api[df_api.duplicated()] df_api.describe() ###Output _____no_output_____ ###Markdown `df_image` table ###Code df_image df_image.info() df_image.describe() df_image.query('p1_conf == 1') df_image.img_num.value_counts() df_image.p1_dog.value_counts() df_image.p1.value_counts() df_image.p2.value_counts() df_image.p3.value_counts() ###Output _____no_output_____ ###Markdown **From the above: inconsistent word formatting in the predictions, including lowercase words (e.g. golden_retriever) and capitalised words (e.g.
Chihuahua).** ###Code all_columns = pd.Series(list(df_archive) + list(df_api) + list(df_image)) all_columns all_columns[all_columns.duplicated()] ###Output _____no_output_____ ###Markdown **In the three tables, *tweet_id* is the primary key.** Quality `df_archive` table - Erroneous datatypes (in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp)- There is ' +0000' appended to timestamp and the data type of timestamp is object- There are 78 in-reply tweets and 181 retweets- Multiple dog names are ordinary words, including 'None', 'a', 'the', etc.- HTML anchor markup in source column- The datatype for source column is object not category- There are erroneous ratings- The number of dogs for each dog_stage (i.e. doggo, floofer, pupper, and puppo) is not correct `df_api` table- There are missing records when compared to archive (2331 vs 2356) `df_image` table- There are missing records when compared to archive (2075 vs 2356)- Erroneous datatypes (img_num, p1, p2, p3)- Ambiguous column names (p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf, p3_dog)- Inconsistent formatting in p1, p2, p3 columns: some lowercase with underscores, some with a capitalized first letter Tidiness- One variable spread across four columns in `df_archive` table (dog_stage)- There should be only one table Section 3 Clean ###Code # Make copies of the three tables df_archive_clean = df_archive.copy() df_image_clean = df_image.copy() df_api_clean = df_api.copy() ###Output _____no_output_____ ###Markdown 3.1 Missing Data and Tidiness - **`df_api`: there are missing records when compared to `df_archive` (2331 vs 2356)**- **`df_image`: there are missing records when compared to `df_archive` (2075 vs 2356)**- **There should be only one table** Define Join the columns of `df_api` and `df_image` onto `df_archive` using the `join(how='inner')` method.
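As a toy illustration of inner-join semantics (the ids below are made up): only keys present in all three tables survive, which is why the joined table cannot have more rows than the smallest input.

```python
# Toy id sets: an inner join keeps only the intersection of keys.
archive_ids = {1, 2, 3, 4, 5}
api_ids = {1, 2, 3, 4}        # one tweet missing from the API pull
image_ids = {1, 2, 4, 5}      # one tweet missing from the image predictions

joined_ids = archive_ids & api_ids & image_ids
assert joined_ids == {1, 2, 4}
assert len(joined_ids) <= min(len(archive_ids), len(api_ids), len(image_ids))
```

This matches the row counts observed after the join below: 2356 archive rows shrink to the ids that also appear in `df_api` and `df_image`.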
Code ###Code # Inner join df_api with df_archive df_archive_clean = df_archive_clean.join(df_api_clean.set_index('tweet_id'), on='tweet_id', how='inner') # Inner join df_image with df_archive df_archive_clean = df_archive_clean.join(df_image_clean.set_index('tweet_id'), on='tweet_id', how='inner') ###Output _____no_output_____ ###Markdown Test ###Code df_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2059 entries, 0 to 2355 Data columns (total 30 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2059 non-null int64 1 in_reply_to_status_id 23 non-null float64 2 in_reply_to_user_id 23 non-null float64 3 timestamp 2059 non-null object 4 source 2059 non-null object 5 text 2059 non-null object 6 retweeted_status_id 72 non-null float64 7 retweeted_status_user_id 72 non-null float64 8 retweeted_status_timestamp 72 non-null object 9 expanded_urls 2059 non-null object 10 rating_numerator 2059 non-null int64 11 rating_denominator 2059 non-null int64 12 name 2059 non-null object 13 doggo 2059 non-null object 14 floofer 2059 non-null object 15 pupper 2059 non-null object 16 puppo 2059 non-null object 17 retweet_count 2059 non-null int64 18 favorite_count 2059 non-null int64 19 jpg_url 2059 non-null object 20 img_num 2059 non-null int64 21 p1 2059 non-null object 22 p1_conf 2059 non-null float64 23 p1_dog 2059 non-null bool 24 p2 2059 non-null object 25 p2_conf 2059 non-null float64 26 p2_dog 2059 non-null bool 27 p3 2059 non-null object 28 p3_conf 2059 non-null float64 29 p3_dog 2059 non-null bool dtypes: bool(3), float64(7), int64(6), object(14) memory usage: 456.4+ KB ###Markdown - **One variable in four columns in `df_archive` table (dog_stage)**- **The number of dogs for each dog_stage (i.e. doggo, floofer, pupper, and puppo) is not correct** DefineExtract *doggo*, *floofer*, *pupper*, and *puppo* infomation to a *dog_stage* column using regular expressions and pandas' `str.extract` method. 
Drop *doggo*, *floofer*, *pupper*, and *puppo* columns when done. Code ###Code df_archive_clean['dog_stage'] = df_archive_clean.text.str.extract('([Dd]oggo|[Ff]loof|[Pp]upper|[Pp]uppo|DOGGO|FLOOF|PUPPER|PUPPO)', expand=False).str.lower() # Replace 'floof' to 'floofer' to describe a dog correctly df_archive_clean.dog_stage = df_archive_clean.dog_stage.replace('floof', 'floofer') # Drop doggo, floofer, pupper, and puppo columns df_archive_clean = df_archive_clean.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code df_archive_clean.dog_stage.value_counts() df_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2059 entries, 0 to 2355 Data columns (total 27 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2059 non-null int64 1 in_reply_to_status_id 23 non-null float64 2 in_reply_to_user_id 23 non-null float64 3 timestamp 2059 non-null object 4 source 2059 non-null object 5 text 2059 non-null object 6 retweeted_status_id 72 non-null float64 7 retweeted_status_user_id 72 non-null float64 8 retweeted_status_timestamp 72 non-null object 9 expanded_urls 2059 non-null object 10 rating_numerator 2059 non-null int64 11 rating_denominator 2059 non-null int64 12 name 2059 non-null object 13 retweet_count 2059 non-null int64 14 favorite_count 2059 non-null int64 15 jpg_url 2059 non-null object 16 img_num 2059 non-null int64 17 p1 2059 non-null object 18 p1_conf 2059 non-null float64 19 p1_dog 2059 non-null bool 20 p2 2059 non-null object 21 p2_conf 2059 non-null float64 22 p2_dog 2059 non-null bool 23 p3 2059 non-null object 24 p3_conf 2059 non-null float64 25 p3_dog 2059 non-null bool 26 dog_stage 381 non-null object dtypes: bool(3), float64(7), int64(6), object(11) memory usage: 408.2+ KB ###Markdown *Note: dog_stage is object not categorical data type, which will be addressed in the following Quality part.* 3.2 Quality - **There are 78 (now is 23) in reply 
tweets and 181 (now is 72) retweets**- **Erroneous datatypes (in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp)** Define Select original tweet rows (where *in_reply_to_status_id*, *in_reply_to_user_id*, *retweeted_status_id*, *retweeted_status_user_id*, *retweeted_status_timestamp* columns are null). Drop *in_reply_to_status_id*, *in_reply_to_user_id*, *retweeted_status_id*, *retweeted_status_user_id*, *retweeted_status_timestamp* columns when done. Since these columns are dropped, there is no need to clean their datatypes. Code ###Code # Select non-replying and non-retweeting rows df_archive_clean = df_archive_clean[(df_archive_clean.in_reply_to_status_id.isnull()) & (df_archive_clean.retweeted_status_id.isnull())] ###Output _____no_output_____ ###Markdown Test ###Code df_archive_clean.info() # Drop replying- and retweeting-related columns df_archive_clean = df_archive_clean.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1) list(df_archive_clean) ###Output _____no_output_____ ###Markdown - **HTML anchor markup in source column** Define Extract the content from the anchor element using regular expressions and pandas' `str.extract` method. Code ###Code df_archive_clean.source = df_archive_clean.source.str.extract(r'>([\w\s]+)<') ###Output _____no_output_____ ###Markdown Test ###Code df_archive_clean.source.value_counts() ###Output _____no_output_____ ###Markdown - **There is ' +0000' appended to timestamp and the data type of timestamp is object**- **The datatype for source is object not category**- **Erroneous datatypes (img_num, p1, p2, p3)** - **Erroneous datatype for dog_stage column (this quality problem was found after creating this column in the previous cleaning steps)** Define Strip the ' +0000' using the `str.strip()` method and convert *timestamp* to datetime data type using the `pd.to_datetime()` method.
Convert *source*, *img_num*, *p1*, *p2*, *p3*, *dog_stage* to category data type using `astype()` method. Code ###Code # To datetime df_archive_clean.timestamp = pd.to_datetime(df_archive_clean.timestamp.str.strip('\s\+0000')) # To category df_archive_clean.source = df_archive_clean.source.astype('category') df_archive_clean.img_num = df_archive_clean.img_num.astype('category') df_archive_clean.p1 = df_archive_clean.p1.astype('category') df_archive_clean.p2 = df_archive_clean.p2.astype('category') df_archive_clean.p3 = df_archive_clean.p3.astype('category') df_archive_clean.dog_stage = df_archive_clean.dog_stage.astype('category') ###Output _____no_output_____ ###Markdown Test ###Code df_archive_clean.timestamp.sample(5) df_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1964 entries, 0 to 2355 Data columns (total 22 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1964 non-null int64 1 timestamp 1964 non-null datetime64[ns] 2 source 1964 non-null category 3 text 1964 non-null object 4 expanded_urls 1964 non-null object 5 rating_numerator 1964 non-null int64 6 rating_denominator 1964 non-null int64 7 name 1964 non-null object 8 retweet_count 1964 non-null int64 9 favorite_count 1964 non-null int64 10 jpg_url 1964 non-null object 11 img_num 1964 non-null category 12 p1 1964 non-null category 13 p1_conf 1964 non-null float64 14 p1_dog 1964 non-null bool 15 p2 1964 non-null category 16 p2_conf 1964 non-null float64 17 p2_dog 1964 non-null bool 18 p3 1964 non-null category 19 p3_conf 1964 non-null float64 20 p3_dog 1964 non-null bool 21 dog_stage 363 non-null category dtypes: bool(3), category(6), datetime64[ns](1), float64(3), int64(5), object(4) memory usage: 297.5+ KB ###Markdown - **Multiple dog names are words, including 'None', 'a', 'the', etc.** DefineReplace the words in *name* column to `np.nan` using `replace()` method. 
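Before applying the replacement, the pattern can be checked in isolation with the standard `re` module (a minimal sketch; the sample names are illustrative). It matches the literal 'None' and any word starting with a lowercase letter, which are the extraction artifacts.

```python
import re

pattern = r'(^[a-z]+|None)'

# Extraction artifacts the cleaning step should turn into NaN...
for bad in ['None', 'a', 'the', 'quite']:
    assert re.match(pattern, bad) is not None

# ...and real names it should leave untouched.
for name in ['Phineas', 'Sam', 'Bella']:
    assert re.match(pattern, name) is None
```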
Code ###Code df_archive_clean.name = df_archive_clean.name.replace('(^[a-z]+|None)', np.nan, regex=True) ###Output _____no_output_____ ###Markdown Test ###Code df_archive_clean.name.value_counts() ###Output _____no_output_____ ###Markdown - **There are erroneous ratings** Define- 1) Find the rows that contain more than one pair of digits separated by '/' where the rating denominator is not equal to 10. Extract the last pair of digits separated by '/', and update the rating numerator and denominator by splitting this pair on '/'.- 2) Then correct numerators by allowing for digits with a '.' (decimal point), and update the numerator and denominator in the rest of the dataset.- 3) Finally, convert the data types of *rating_numerator* and *rating_denominator* to float.*Note: Some images contain more than one dog, so a text sometimes contains more than one rating. The default extraction takes the first occurrence of a digits pair separated by '/'. Since the total number of texts with more than one rating is 27 (including situation **1)** above), such tweets are rare, so I decided not to duplicate tweets to capture a second rating, nor to average the ratings.* Code ###Code #Find the rows that contain more than one pair of digits separated by '/' where the rating denominator #is not equal to 10. Extract the last pair of digits separated by '/', and update the rating numerator and #denominator by splitting this pair on '/'.
error_rating_condition = (df_archive_clean.rating_denominator != 10) & (df_archive_clean.text.str.contains('\d+/\d+\D+\d+/\d+')) df_archive_clean.loc[error_rating_condition, 'rating_numerator'] = df_archive_clean[error_rating_condition].text.str.extract('\d+/\d+\D+(\d+)/\d+', expand=False) df_archive_clean.loc[error_rating_condition, 'rating_denominator'] = df_archive_clean[error_rating_condition].text.str.extract('\d+/\d+\D+\d+/(\d+)', expand=False) #Correct numerators by considering digits with '.'(decimal point), #and update numerator and denominator in the rest part of dataset. error_rating_condition_2 = (df_archive_clean.text.str.contains('\d+\.?\d*/\d+')) & (~error_rating_condition) df_archive_clean.loc[error_rating_condition_2, 'rating_numerator'] = df_archive_clean[error_rating_condition_2].text.str.extract('(\d+\.?\d*)/\d+', expand=False) df_archive_clean.loc[error_rating_condition_2, 'rating_denominator'] = df_archive_clean[error_rating_condition_2].text.str.extract('\d+\.?\d*/(\d+)', expand=False) # To float df_archive_clean.rating_numerator = df_archive_clean.rating_numerator.astype(float) df_archive_clean.rating_denominator = df_archive_clean.rating_denominator.astype(float) ###Output _____no_output_____ ###Markdown Test ###Code df_archive_clean.rating_numerator.value_counts() df_archive_clean.rating_denominator.value_counts() ###Output _____no_output_____ ###Markdown - **Ambiguous column names (p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf, p3_dog)** DefineMake these ambiguous column names more appropriate using `rename()` method. 
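As a side note, the two extraction patterns used in the rating-cleaning step above can be sanity-checked on sample strings with the standard `re` module (a sketch; the sample texts are invented, and each numerator/denominator pair is combined into one two-group pattern for brevity):

```python
import re

# Case 1: two digit pairs in one text -> take the last pair.
m = re.search(r'\d+/\d+\D+(\d+)/(\d+)', 'She smiles 24/7. Keep her smiling. 13/10')
assert (m.group(1), m.group(2)) == ('13', '10')

# Case 2: a single rating, possibly with a decimal numerator.
m = re.search(r'(\d+\.?\d*)/(\d+)', 'This is Bella. 9.75/10')
assert (m.group(1), m.group(2)) == ('9.75', '10')
```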
Code ###Code df_archive_clean = df_archive_clean.rename(columns={'p1': 'top_prediction', 'p1_conf': 'top_confidence', 'p1_dog': 'top_is_dog', 'p2': '2nd_prediction', 'p2_conf': '2nd_confidence', 'p2_dog': '2nd_is_dog', 'p3': '3rd_prediction', 'p3_conf': '3rd_confidence', 'p3_dog': '3rd_is_dog'}) ###Output _____no_output_____ ###Markdown Test ###Code list(df_archive_clean) ###Output _____no_output_____ ###Markdown - **Inconsistent formatting in *p1 (top_prediction)*, *p2 (2nd_prediction)*, *p3 (3rd_prediction)* columns: some with '_', and some with capitalized first letter** DefineReplace '_' with ' ' using `replace()` method. Make all the first letter of predictions be capitalised using `str.title()` method. Code ###Code column_list = ['top_prediction', '2nd_prediction', '3rd_prediction'] for column in column_list: df_archive_clean[column] = df_archive_clean[column].str.replace('_', ' ').str.title().astype('category') ###Output _____no_output_____ ###Markdown Test ###Code df_archive_clean[column_list].sample(10) df_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1964 entries, 0 to 2355 Data columns (total 22 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1964 non-null int64 1 timestamp 1964 non-null datetime64[ns] 2 source 1964 non-null category 3 text 1964 non-null object 4 expanded_urls 1964 non-null object 5 rating_numerator 1964 non-null float64 6 rating_denominator 1964 non-null float64 7 name 1342 non-null object 8 retweet_count 1964 non-null int64 9 favorite_count 1964 non-null int64 10 jpg_url 1964 non-null object 11 img_num 1964 non-null category 12 top_prediction 1964 non-null category 13 top_confidence 1964 non-null float64 14 top_is_dog 1964 non-null bool 15 2nd_prediction 1964 non-null category 16 2nd_confidence 1964 non-null float64 17 2nd_is_dog 1964 non-null bool 18 3rd_prediction 1964 non-null category 19 3rd_confidence 1964 non-null float64 20 3rd_is_dog 1964 non-null bool 21 
dog_stage 363 non-null category dtypes: bool(3), category(6), datetime64[ns](1), float64(5), int64(3), object(4) memory usage: 297.5+ KB ###Markdown 3.3 Store cleaned data ###Code # Store cleaned data to 'twitter_archive_master.pkl', since pickle preserves data types df_archive_clean.to_pickle('data/twitter_archive_master.pkl') ###Output _____no_output_____ ###Markdown Section 4 Exploratory Data Analysis After the gathering, assessing, and cleaning steps, we now have clean data. In this section, we analyse the cleaned data and visualise the results. Having become familiar with the dataset during the wrangling process, I now want to answer the following questions:- Which time period during a day sees the highest number of tweets?- Which is the favorite source for WeRateDogs?- Which dog stage gets higher ratings?- Which breed of dog appears most often?- Is there any relationship between rating_numerator and favorite_count? ###Code # Load data into df df = pd.read_pickle('data/twitter_archive_master.pkl') df.head(2) ###Output _____no_output_____ ###Markdown 4.1 Which time period during a day sees the highest number of tweets?
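The binning rule used to answer this question can be unit-checked in isolation first (a minimal sketch mirroring the function defined in the next cell; the boundaries are half-open):

```python
def assign_day_period(hour):
    # Map an hour of day to one of four six-hour periods.
    if 0 <= hour < 6:
        return 'night'
    elif 6 <= hour < 12:
        return 'morning'
    elif 12 <= hour < 18:
        return 'afternoon'
    else:
        return 'evening'

# Each period includes its start hour and excludes its end hour.
assert assign_day_period(0) == 'night'
assert assign_day_period(5) == 'night'
assert assign_day_period(6) == 'morning'
assert assign_day_period(12) == 'afternoon'
assert assign_day_period(18) == 'evening'
assert assign_day_period(23) == 'evening'
```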
###Code # Define a function to assign 'night', 'morning', 'afternoon', and 'evening' separately to timestamps in # [00:00-06:00), [06:00-12:00), [12:00-18:00), and [18:00-24:00) def assign_day_period(hour): if hour >= 0 and hour < 6: return 'night' elif hour >= 6 and hour < 12: return 'morning' elif hour >= 12 and hour < 18: return 'afternoon' else: return 'evening' # Apply the function to the df and save the results to a new column named 'day_period' df['day_period'] = df.timestamp.apply(lambda x: x.hour).apply(assign_day_period) # Convert data type to category df.day_period = df.day_period.astype('category') # Check if it works well df[['timestamp', 'day_period']] df.day_period.value_counts() df.day_period.value_counts().plot(kind='bar', figsize=(7, 5), fontsize=12, rot=0) plt.xlabel('Day Period', fontsize=13) plt.ylabel('Tweets Count', fontsize=13) plt.title('Figure 4.1 The Bar Plot of Tweets Count for Different Time Period In a Day', fontsize=14) plt.savefig('1_tweet_time.png'); ###Output _____no_output_____ ###Markdown **From the above figure, we can conclude that more than half of the tweets were posted at night, specifically between 00:00 and 06:00, while almost none were posted in the morning.** 4.2 Which is the favorite source for WeRateDogs? ###Code df.source.value_counts().plot(kind='bar', figsize=(7, 5), fontsize=12, rot=0) plt.xlabel('Source', fontsize=13) plt.ylabel('Tweets Count', fontsize=13) plt.title('Figure 4.2 The Bar Plot of Tweets Count for Different Sources', fontsize=14) plt.savefig('2_source_count.png'); ###Output _____no_output_____ ###Markdown **From the above figure, it is clear that WeRateDogs posts most of its tweets from Twitter for iPhone.** 4.3 Which dog stage gets higher ratings?
###Code # Extract the dataframe where rating_denominator equals 10 rating_data = df.query('rating_denominator == 10') # Plot ax = sns.catplot(x='dog_stage', y='rating_numerator', kind='box', data=rating_data) ax.set_xticklabels(fontsize=13) ax.set_yticklabels(fontsize=12) ax.set_ylabels('Rating', fontsize=14) ax.set_xlabels('Dog Stages', fontsize=14) plt.title('Figure 4.3 Distributions of Ratings within Four Dog Stages\n', fontsize=14) plt.tight_layout() plt.savefig('3_rating_stage.png'); ###Output _____no_output_____ ###Markdown **From the above figure, puppo seems to have a higher average rating than the other three stages among all the stage-labeled dogs.** 4.4 Which breed of dog appears most often? ###Code # Extract the dog breed data from the top_prediction column given top_is_dog is True dog_breed = df.query('top_is_dog == True').top_prediction dog_breed.value_counts()[0:5].plot(kind='bar', figsize=(10, 6), fontsize=13, rot=0) plt.xlabel('Dog Breed', fontsize=14) plt.ylabel('Tweets Count', fontsize=14) plt.title('Figure 4.4.1 The Bar Plot of Tweets Count for Top-5 Frequent Dog Breeds', fontsize=15) plt.savefig('4_breed_count.png'); ###Output _____no_output_____ ###Markdown **From the top (p1) prediction, Golden Retriever seems to be the dog breed that appears most often.** ###Code # Make a more fashionable visualisation with wordcloud text = str(dog_breed.astype(str).values) # Read the mask image # The image source: https://www.shutterstock.com/search/running+dog+silhouette dog_mask = np.array(Image.open('dog_silhouette.png')) wc = WordCloud(background_color='white', mask=dog_mask, stopwords=STOPWORDS, repeat=True, contour_width=1, contour_color='white') # Generate word cloud wc.generate(text) # Show plt.figure(figsize=(10, 8)) plt.imshow(wc, interpolation='bilinear') plt.axis('off') plt.title('Figure 4.4.2 Dog Breed Word Cloud Inside A Dog', fontsize=18) plt.savefig('dog_wordcloud_white.png') plt.show() # Another wordcloud # Load the dog image # Image via Boston Magazine:
https://www.bostonmagazine.com/arts-entertainment/2017/04/18/dog-rates-mit/ mask_dog = np.array(Image.open('dog.png'))[:,:,0:3] # Create a picture in array structure, with bigger size, which surrounds 'dog.png' with black area mask_blank = np.zeros(mask_dog.shape).astype(int) mask_wide = np.concatenate([mask_blank, mask_dog, mask_blank], axis=0) mask_blank_wide = np.zeros(mask_wide.shape).astype(int) mask_whole = np.concatenate([mask_blank_wide, mask_wide, mask_blank_wide], axis=1) # Create the mask in a photo frame shape x, y = np.ogrid[:mask_whole.shape[0], :mask_whole.shape[1]] mask = (x < mask_dog.shape[0]*2) & (x > mask_dog.shape[0]) & ( y < mask_dog.shape[1]*2) & (y > mask_dog.shape[1]) mask = 255 * mask.astype(int) # Generate the wordcloud wc = WordCloud(background_color='black', repeat=True, mask=mask, stopwords=STOPWORDS) wc.generate(text) # Combine wordcloud with mask_whole wc_whole = wc + mask_whole # Show the wordcloud with 'dog.png' plt.figure(figsize=(10, 8)) plt.imshow(wc_whole, interpolation='bilinear') plt.axis('off') plt.title('Figure 4.4.3 Dog Breed Word Cloud Outside A Dog\n', fontsize=20) plt.savefig('dog_wordcloud_black.png') plt.show() ###Output _____no_output_____ ###Markdown 4.5 Is there any relationship between rating_numerator and favorite_count? 
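Beyond the scatter plot, the relationship can be quantified with a Pearson correlation coefficient. The sketch below computes it from first principles on made-up numbers; on the real data, `df_rating['rating_numerator'].corr(df_rating['favorite_count'])` gives the same statistic.

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient, computed from first principles.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ratings = [5, 8, 10, 11, 12, 13, 14]             # illustrative, not real data
favorites = [200, 900, 2500, 4000, 8000, 9500, 15000]
r = pearson_r(ratings, favorites)
assert 0 < r <= 1  # monotonically increasing toy data -> positive correlation
```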
###Code df.rating_numerator.value_counts().iloc[0:10] df_rating = df[(df['rating_numerator'] >= 5) & (df['rating_numerator'] <= 14)] plt.figure(figsize=(8, 6)) ax = sns.scatterplot(x='rating_numerator', y='favorite_count', data=df_rating) ax.set_xlabel('Rating Numerator', fontsize=13) ax.set_ylabel('Favorite Count', fontsize=13) plt.title('Figure 4.5 Scatter Plot Between Rating Numerator and Favorite Count\n', fontsize=14) plt.savefig('5_rating_favorite.png'); ###Output _____no_output_____ ###Markdown WeRateDogs Twitter Data Wrangling Table of Contents: Data Gathering, Data Assessment, Data Cleaning, Data Storing ###Code #Libraries imported import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import requests import os from bs4 import BeautifulSoup import tweepy as tw import json import glob import datetime as dt import re %matplotlib inline ###Output _____no_output_____ ###Markdown Data Gathering In this project, we will gather data from three different sources in order to analyze and obtain insights from the WeRateDogs Twitter account. First of all, the WeRateDogs twitter archive has been given to us and was downloaded manually. ###Code df_twitter_archive_enhanced = pd.read_csv("twitter-archive-enhanced.csv") ###Output _____no_output_____ ###Markdown The second data set used is a file with tweet image predictions. This data is downloaded programmatically using its URL. ###Code data_url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(data_url) with open('image-predictions.tsv', 'wb') as file: file.write(response.content) ###Output _____no_output_____ ###Markdown In order to complement the twitter archive given, we will use the tweet IDs to query the Twitter API and extract each tweet's entire set of JSON data. First, let's extract the tweet IDs from the image predictions archive.
###Code df_image_preditions = pd.read_csv("image-predictions.tsv", sep='\t') display(df_image_preditions.columns) tweet_ids = df_image_preditions.tweet_id display("There are {} unique Tweet IDs".format(len(tweet_ids.unique()))) ###Output _____no_output_____ ###Markdown Now that we have the tweet IDs in the "tweet_ids" variable, we will query the Twitter API in the next cell. ###Code twitter_keys = { 'consumer_key': '', 'consumer_secret': '', 'access_token_key': '', 'access_token_secret': '' } #Setup access to API auth = tw.OAuthHandler(twitter_keys['consumer_key'], twitter_keys['consumer_secret']) auth.set_access_token(twitter_keys['access_token_key'], twitter_keys['access_token_secret']) #Connect to the API using rate limits to query api = tw.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) filename = 'tweet_json.txt' not_available = [] #Create .txt file with Tweet JSONs with open(filename, 'w') as outfile: for i in tweet_ids: try: #Get the data as JSON status = api.get_status(str(i)) json_str = json.dumps(status._json) #Add each JSON entry on a new line outfile.write("%s\n" % json_str) except tw.TweepError: #If there is an error with the tweet ID, print it and save it in a list not_available.append(str(i)) print(str(i)) ###Output 680055455951884288 ###Markdown Then, we will read this .txt file line by line and export it into a pandas DataFrame with tweet ID, retweet count, and favorite count. To locate these parameters in the JSON entries more easily, we will define the following helper, which renders the data in a more readable form. ###Code def jprint(obj): text = json.dumps(obj, sort_keys = True, indent = 6) return text ###Output _____no_output_____ ###Markdown The next cell is used to create a dataframe of the tweet parameters used.
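The line-by-line parsing done in the next cell can be exercised on an in-memory sample first (the ids and counts below are made up):

```python
import json

# Two fake newline-delimited tweet records in the same shape the API returns.
sample_lines = [
    json.dumps({'id_str': '111', 'retweet_count': 5, 'favorite_count': 9}),
    json.dumps({'id_str': '222', 'retweet_count': 2, 'favorite_count': 3}),
]

rows = []
for line in sample_lines:
    obj = json.loads(line)
    rows.append({'tweet_id': obj['id_str'],
                 'retweet_count': obj['retweet_count'],
                 'favorite_count': obj['favorite_count']})

assert rows[0] == {'tweet_id': '111', 'retweet_count': 5, 'favorite_count': 9}
assert len(rows) == 2
```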
###Code filename = 'tweet_json.txt' #Create a new DataFrame df_list = [] #Open JSON File line by line with open(filename, 'r') as openfile: for line in openfile: #Read each line as dictionary json_object = json.loads(line) #print(jprint(json_object)) #Assign the needed parameters to variables ids = json_object['id_str'] retweet_count = json_object['retweet_count'] favorite_count = json_object['favorite_count'] #Add variables to the new dataframe df_list.append({'tweet_id': ids, 'retweet_count': retweet_count, 'favorite_count': favorite_count}) df_tweet_json = pd.DataFrame(df_list, columns=['tweet_id', 'retweet_count', 'favorite_count']) display(df_tweet_json.head()) display(df_tweet_json.info()) ###Output _____no_output_____ ###Markdown Data AssessmentIn this section we will assess the gathered data visually and programatically for quality and tidiness issues. Quality IssuesFirst, let's take a look in the twitter-archive-enhanced data. ###Code #Reading the data. df_twitter_archive_enhanced = pd.read_csv("twitter-archive-enhanced.csv") #General description of a dataset. display(df_twitter_archive_enhanced) print("The raw data contains {} rows and {} columns.".format(df_twitter_archive_enhanced.shape[0],df_twitter_archive_enhanced.shape[1])) ###Output _____no_output_____ ###Markdown As we can see in the following cell, there are quality issues with this data set. ###Code display(df_twitter_archive_enhanced.info()) display(df_twitter_archive_enhanced[['rating_numerator', 'rating_denominator']].describe()) ###Output _____no_output_____ ###Markdown 1. The data set contains 78 tweet replies ("in_reply_to_status_id" and "in_reply_to_user_id" columns). 2. There are 181 retweets ("retweeted_status_id", "retweeted_status_user_id", "retweeted_status_timestamp").3. Column name "expanded_urls" is not descriptive enough as it determines whether the tweet has an image attached or not.4. There are 2356-2297=59 tweets without images.5. 
"timestamp" column data type should be datetime. We could also generate additional columns (year, month, day) in order to make better insights.6. Column "tweet_id" data type should be treated as string as there are no numerical relationships between ids.7. Tweet "text" column contains the URL of the tweet, which can cause problems during text analysis.8. Ratings could be wrong, specially when there are multiple ratings per tweet.9. Dog stages categorization could be wrong. Tidiness IssuesThe detected tidiness issues are the following: 1. "expanded_urls" column has multiple values in each row separated by commas.2. Dog categorization variable is represented as 4 columns. We can collapse all 4 columns into a single one as it is only one variable.3. All three data sets should be merged by "tweet_id" into a single database for further analysis. ###Code print(df_twitter_archive_enhanced["expanded_urls"][10]) ###Output https://twitter.com/dog_rates/status/890006608113172480/photo/1,https://twitter.com/dog_rates/status/890006608113172480/photo/1 ###Markdown Data CleaningIn this section we will clean the gathered data programatically to improve quality and tidiness.First, it is recommended to make a copy of each dataframe. ###Code df_clean_enhanced = df_twitter_archive_enhanced.copy() df_clean_image = df_image_preditions.copy() df_clean_json = df_tweet_json.copy() ###Output _____no_output_____ ###Markdown Quality Issues1. To clean the tweet replies issues, we suggest to filter the column by the null values in the "in_reply_to_status_id" column. 
###Code df_clean_enhanced = df_clean_enhanced[df_clean_enhanced['in_reply_to_status_id'].isnull()] display(df_clean_enhanced.info()) ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2278 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2278 non-null int64 in_reply_to_status_id 0 non-null float64 in_reply_to_user_id 0 non-null float64 timestamp 2278 non-null object source 2278 non-null object text 2278 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2274 non-null object rating_numerator 2278 non-null int64 rating_denominator 2278 non-null int64 name 2278 non-null object doggo 2278 non-null object floofer 2278 non-null object pupper 2278 non-null object puppo 2278 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 320.3+ KB ###Markdown 2. To clean the retweets, we suggest to filter the column by the null values in the "retweeted_status_id" column. ###Code df_clean_enhanced = df_clean_enhanced[df_clean_enhanced['retweeted_status_id'].isnull()] display(df_clean_enhanced.info()) ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2097 non-null int64 in_reply_to_status_id 0 non-null float64 in_reply_to_user_id 0 non-null float64 timestamp 2097 non-null object source 2097 non-null object text 2097 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2094 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 2097 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 294.9+ KB ###Markdown 3. 
To make the "expanded_urls" column name more descriptive, we suggest to change it to "image_urls". ###Code df_clean_enhanced = df_clean_enhanced.rename(columns={'expanded_urls': 'image_urls'}) ###Output _____no_output_____ ###Markdown 4. In order to consider only those tweets with images, we will filter the dataframe by the non null "image_urls" column. ###Code df_clean_enhanced = df_clean_enhanced[df_clean_enhanced['image_urls'].notnull()] display(df_clean_enhanced.info()) ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2094 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2094 non-null int64 in_reply_to_status_id 0 non-null float64 in_reply_to_user_id 0 non-null float64 timestamp 2094 non-null object source 2094 non-null object text 2094 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object image_urls 2094 non-null object rating_numerator 2094 non-null int64 rating_denominator 2094 non-null int64 name 2094 non-null object doggo 2094 non-null object floofer 2094 non-null object pupper 2094 non-null object puppo 2094 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 294.5+ KB ###Markdown 5. The next cell changes the "timestamp" column data type to datetime and creates additional columns with the year, month and day information for further analysis. ###Code df_clean_enhanced['timestamp'] = pd.to_datetime(df_clean_enhanced['timestamp']) df_clean_enhanced['year'] = df_clean_enhanced['timestamp'].dt.year df_clean_enhanced['month'] = df_clean_enhanced['timestamp'].dt.month df_clean_enhanced['day'] = df_clean_enhanced['timestamp'].dt.day display(df_clean_enhanced[['year', 'month', 'day']].describe()) ###Output _____no_output_____ ###Markdown 6. The next cell changes the column "tweet_id" data type to string for both image and enhanced data sets. 
###Code df_clean_enhanced['tweet_id'] = df_clean_enhanced['tweet_id'].astype(str) df_clean_image['tweet_id'] = df_clean_image['tweet_id'].astype(str) display(type(df_clean_enhanced['tweet_id'][0])) display(type(df_clean_image['tweet_id'][0])) ###Output _____no_output_____ ###Markdown 7. In order to fix the rating numerators and denominators, we need to check the "text" column. Dog ratings are expressed as a number (numerator), a slash ("/") and another number (denominator). Ideally, we should search for this using a regular expression in the tweet "text" column. However, the "text" column also contains the tweet URL, so first we need to get rid of it. ###Code #String that marks the start of a URL url_type = 'https://' #Find the location of this string in the "text" column df_clean_enhanced["remove_urls"] = df_clean_enhanced.text.str.find(url_type) #Slice each tweet text up to the URL location and save it into the "text_without_url" column L_text_without_url = [] for i, value in df_clean_enhanced["remove_urls"].items(): L_text_without_url.append(df_clean_enhanced["text"][i][0:int(value)-1]) df_clean_enhanced["text_without_url"] = L_text_without_url #Testing if the code works print(df_clean_enhanced["text"][0]) print(df_clean_enhanced["text_without_url"][0]) ###Output This is Phineas. He's a mystical boy. Only ever appears in the hole of a donut. 13/10 https://t.co/MgUWQ76dJU This is Phineas. He's a mystical boy. Only ever appears in the hole of a donut. 13/10 ###Markdown 8. To find the ratings in the text, we will use a regular expression pattern. The pattern consists of numbers separated by a single slash. The "findall" method will find every string that matches this pattern and store them in a list. We used this method because there could be more than a single rating per tweet.
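Before applying the pattern column-wise, it can be sanity-checked on a plain string with the standard re module (the sample text below is invented for illustration):

```python
import re

# Same pattern as used on the text_without_url column below
fraction_pattern = r"([0-9]*/{1}[0-9]*)"

sample = "Happy 4/20 from the squad! 13/10 for all"
matches = re.findall(fraction_pattern, sample)
print(matches)  # ['4/20', '13/10']
```

Note that the pattern matches any digits-slash-digits run, which is exactly why dates like 4/20 show up alongside real ratings and need the manual checks that follow.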
###Code fraction_pattern = r"([0-9]*/{1}[0-9]*)" ratings = df_clean_enhanced.text_without_url.str.findall(fraction_pattern) display(ratings) ###Output _____no_output_____ ###Markdown Now we need to check for inconsistencies. First, we want to check those tweets with more than one rating and the tweets with a rating denominator different from 10. Also, we would like to check if the provided rating is different from the one extracted using the regular expression presented before. The following cells shows this criteria in a couple lines of code. We also print the indices, our rating, the rating provided and the tweet text to check if it is correct or not. The indices will be used to replace the correct ratings in the dataframe for each case. ###Code count_multiple_ratings = 0 count_10_denominators = 0 count_different_ratings = 0 for k, val in ratings.items(): #Check for more than one rating if len(val) > 1: display("Index {}: Extracted rating: {} Provided: {}. Tweet: {}".format(k, ratings[k], str(df_clean_enhanced["rating_numerator"][k])+'/'+str(df_clean_enhanced["rating_denominator"][k]), df_clean_enhanced["text_without_url"][k])) count_multiple_ratings+=1 #If there is only one rating, check if the denominator is different from 10 (as most ratings are based on 10) elif str(df_clean_enhanced["rating_denominator"][k]) != "10": display("Index {}: Extracted rating: {} Provided: {}. Tweet: {}".format(k, ratings[k][0], str(df_clean_enhanced["rating_numerator"][k])+'/'+str(df_clean_enhanced["rating_denominator"][k]), df_clean_enhanced["text_without_url"][k])) count_10_denominators+=1 #Finally, check if the provided ratings are different from the ones extracted with regex else: locate_slash = val[0].find("/") if (str(val[0][:locate_slash]) != str(df_clean_enhanced["rating_numerator"][k])) or (str(val[0][locate_slash+1:]) != str(df_clean_enhanced["rating_denominator"][k])): print("Different ratings!") display("Index {}: Extracted rating: {} Provided: {}. 
Tweet: {}".format(k, ratings[k][0], str(df_clean_enhanced["rating_numerator"][k])+'/'+str(df_clean_enhanced["rating_denominator"][k]), df_clean_enhanced["text_without_url"][k])) count_different_ratings+=1 display("{} multiple ratings, {} ratings with denominators != from 10 and {} cases with != ratings between the extracted and the provided ones".format(count_multiple_ratings, count_10_denominators, count_different_ratings)) ###Output _____no_output_____ ###Markdown After checking the tweets, we can infere that higher ratings include multiple dogs This should be corrected when merging the this database with the predicted dog breed.There are 29 tweets with multiple ratings where we need to check each case. If it refers to more than one dog, we will maintain the rating provided. If it refers to a dog and another animal or anything else, we will correct the rating. Tweet 516 doesn't include a rating as "24/7" refers to the time that the dog smiles (every hour and every day). Observation should be eliminated or further analysis.Tweet 1068 rating needs to be corrected because "9/11" refers to the WTC disaster. It will be corrected to "14/10".Tweet 1165 rating needs to be corrected because "4/20" refers to the April 20. It will be corrected to "13/10".Tweet 1202 rating should be corrected to '11/10'.Tweet 1662 rating should be corrected to '10/10' as 7/11 refers to a gun.Tweet 2335 rating should be corrected to '9/10'.The next cell fixes these problems. 
###Code df_clean_enhanced = df_clean_enhanced.drop(516) df_clean_enhanced.loc[1068, 'rating_numerator'] = 14 df_clean_enhanced.loc[1068, 'rating_denominator'] = 10 df_clean_enhanced.loc[1165, 'rating_numerator'] = 13 df_clean_enhanced.loc[1165, 'rating_denominator'] = 10 df_clean_enhanced.loc[1202, 'rating_numerator'] = 11 df_clean_enhanced.loc[1202, 'rating_denominator'] = 10 df_clean_enhanced.loc[1662, 'rating_numerator'] = 10 df_clean_enhanced.loc[1662, 'rating_denominator'] = 10 df_clean_enhanced.loc[2335, 'rating_numerator'] = 9 df_clean_enhanced.loc[2335, 'rating_denominator'] = 10 ###Output _____no_output_____ ###Markdown 9. In order to improve the dog categorization, first we will search for the tweets with multiple dog categories. For this, we will create a single variable with the concatenation of the received categorizations. ###Code df_clean_enhanced['Dog_Dicc'] = df_clean_enhanced['doggo'].map(str) + '-' + df_clean_enhanced['floofer'].map(str) + '-' + df_clean_enhanced['pupper'].map(str) + '-' + df_clean_enhanced['puppo'].map(str) display(df_clean_enhanced['Dog_Dicc'].unique()) count_none_categorization = len(df_clean_enhanced[df_clean_enhanced['Dog_Dicc'] == 'None-None-None-None']) count_multiple_categorization = len(df_clean_enhanced[(df_clean_enhanced['Dog_Dicc'] == 'doggo-None-None-puppo') | (df_clean_enhanced['Dog_Dicc'] == 'doggo-floofer-None-None') | (df_clean_enhanced['Dog_Dicc'] == 'doggo-None-pupper-None')]) display("There are {} observations with multiple dog categories and {} observations with no categories.".format(count_multiple_categorization, count_none_categorization)) ###Output _____no_output_____ ###Markdown As we can see, there are 11 observations with multiple dog categorizations. In this project, we will focus on correcting these cases as it will help us to improve the dataframe tidiness later. In the next cell, we will visualize these observations and correct them when possible. 
###Code pd.set_option('display.max_colwidth', None) #None disables truncation; -1 is deprecated in recent pandas multiple_categories = ['doggo-None-None-puppo', 'doggo-floofer-None-None', 'doggo-None-pupper-None'] display(df_clean_enhanced.text[(df_clean_enhanced['Dog_Dicc'] == 'doggo-None-None-puppo') | (df_clean_enhanced['Dog_Dicc'] == 'doggo-floofer-None-None') | (df_clean_enhanced['Dog_Dicc'] == 'doggo-None-pupper-None')]) ###Output _____no_output_____ ###Markdown After reviewing the photos/videos of each tweet, the corrections made are as follows.Tweet 191 category should be corrected to puppo.Tweet 200 category should be corrected to floofer.Tweet 460 category should be corrected to pupper.Tweet 575 category should be corrected to pupper.Tweet 956 category should be corrected to doggo.Tweet 531 category refers to two dogs.Tweet 733 category refers to two dogs.Tweet 889 category refers to two dogs.Tweet 1063 category refers to two dogs.Tweet 1113 category refers to two dogs.Tweet 705 is not a dog.In the next cell, we will correct the categories of the dogs by index. Furthermore, tweets containing multiple dogs will be dropped to improve the breed prediction analysis. ###Code df_clean_enhanced = df_clean_enhanced.drop(531) df_clean_enhanced = df_clean_enhanced.drop(733) df_clean_enhanced = df_clean_enhanced.drop(889) df_clean_enhanced = df_clean_enhanced.drop(1063) df_clean_enhanced = df_clean_enhanced.drop(1113) df_clean_enhanced = df_clean_enhanced.drop(705) df_clean_enhanced.loc[191, 'doggo'] = 'None' df_clean_enhanced.loc[200, 'doggo'] = 'None' df_clean_enhanced.loc[460, 'doggo'] = 'None' df_clean_enhanced.loc[575, 'doggo'] = 'None' df_clean_enhanced.loc[956, 'pupper'] = 'None' ###Output _____no_output_____ ###Markdown 10. The image predictions database tidiness could also be improved by dropping columns and changing variable names.
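The rename mapping used for this can also be generated in a loop rather than typed out by hand; a pure-Python sketch (this dict_breed mirrors the hand-written dictionary that follows):

```python
# Build the p1/p2/p3 -> descriptive-name mapping programmatically
dict_breed = {}
for i in (1, 2, 3):
    dict_breed[f"p{i}"] = f"breed_prediction_{i}"
    dict_breed[f"p{i}_conf"] = f"prediction_confidence_{i}"
    dict_breed[f"p{i}_dog"] = f"prediction_is_a_dog_{i}"
print(len(dict_breed))  # 9 entries, one per renamed column
```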
###Code dict_breed = {"p1" : "breed_prediction_1", "p2" : "breed_prediction_2", "p3" : "breed_prediction_3", "p1_conf" : "prediction_confidence_1", "p2_conf" : "prediction_confidence_2", "p3_conf" : "prediction_confidence_3", "p1_dog" : 'prediction_is_a_dog_1', "p2_dog" : 'prediction_is_a_dog_2', "p3_dog" : 'prediction_is_a_dog_3'} df_clean_image = df_clean_image.rename(columns = dict_breed) df_clean_image = df_clean_image.drop(columns = ['jpg_url', 'img_num']) ###Output _____no_output_____ ###Markdown Tidiness Issues1. As "image_urls" column has multiple values in each row separated by commas, we recommend to drop this column. We will also drop all columns used in previous analysis or that won't be used anymore ('in_reply_to_status_id', 'in_reply_to_user_id', 'source', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'remove_urls', 'text_without_url', 'Dog_Dicc'). ###Code columns_dropped = ['image_urls', 'in_reply_to_status_id', 'in_reply_to_user_id', 'source', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'remove_urls', 'text_without_url', 'Dog_Dicc'] df_clean_enhanced = df_clean_enhanced.drop(columns = columns_dropped) display(df_clean_enhanced.columns) ###Output _____no_output_____ ###Markdown 2. Dog categorization variable is represented as 4 columns. We can collapse all 4 columns into a single one as it is only one variable called "dog_category". First, we need to get rid of the "None" values, then we can combine the columns as strings. 
###Code df_clean_enhanced.doggo = df_clean_enhanced.doggo.replace("None", "") df_clean_enhanced.floofer = df_clean_enhanced.floofer.replace("None", "") df_clean_enhanced.pupper = df_clean_enhanced.pupper.replace("None", "") df_clean_enhanced.puppo = df_clean_enhanced.puppo.replace("None", "") df_clean_enhanced['dog_category'] = df_clean_enhanced['doggo'] + df_clean_enhanced['floofer'] + df_clean_enhanced['pupper'] + df_clean_enhanced['puppo'] L_dog_categories = ['doggo', 'floofer', 'pupper', 'puppo'] df_clean_enhanced = df_clean_enhanced.drop(columns = L_dog_categories) display(df_clean_enhanced['dog_category'].unique()) ###Output _____no_output_____ ###Markdown 3. All three data sets should be merged by "tweet_id" into a single database for further analysis. ###Code twitter_archive_master = df_clean_enhanced.merge(df_clean_image, on='tweet_id') twitter_archive_master = twitter_archive_master.merge(df_clean_json, on='tweet_id') display(twitter_archive_master.info()) ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1958 entries, 0 to 1957 Data columns (total 21 columns): tweet_id 1958 non-null object timestamp 1958 non-null datetime64[ns, UTC] text 1958 non-null object rating_numerator 1958 non-null int64 rating_denominator 1958 non-null int64 name 1958 non-null object year 1958 non-null int64 month 1958 non-null int64 day 1958 non-null int64 dog_category 1958 non-null object breed_prediction_1 1958 non-null object prediction_confidence_1 1958 non-null float64 prediction_is_a_dog_1 1958 non-null bool breed_prediction_2 1958 non-null object prediction_confidence_2 1958 non-null float64 prediction_is_a_dog_2 1958 non-null bool breed_prediction_3 1958 non-null object prediction_confidence_3 1958 non-null float64 prediction_is_a_dog_3 1958 non-null bool retweet_count 1958 non-null int64 favorite_count 1958 non-null int64 dtypes: bool(3), datetime64[ns, UTC](1), float64(3), int64(7), object(7) memory usage: 296.4+ KB ###Markdown Data StoringIn this section, we 
will store the cleaned data into a single csv file named "twitter_archive_master.csv" which will be used for our analysis and visualizations. ###Code twitter_archive_master.to_csv('twitter_archive_master.csv', index=False) ###Output _____no_output_____ ###Markdown Data AnalysisIn this section, we will analyze the processed data in order to obtain insights from the WeRateDogs Twitter account.The questions we would like to ask are the following:1. Which dog categories get the highest Twitter reactions?2. What is the relationship between retweet counts and favorite counts for the WeRateDogs account?3. Which dog breeds have the best ratings?The answers to these questions are presented in the following sections. Which dog categories get the highest Twitter reactions?To answer this question, we will group the dog categories by retweet and favorite counts. The dataframe is grouped by the mean value to correct for imbalances in the number of observations per category. ###Code #Reading the data twitter_archive_master = pd.read_csv("twitter_archive_master.csv") #Selecting the required columns for the analysis df_reactions_category = pd.concat([twitter_archive_master['dog_category'], twitter_archive_master['retweet_count'], twitter_archive_master['favorite_count']], axis=1) #Filtering by non null values df_reactions_category = df_reactions_category[df_reactions_category['dog_category'].notnull()] #Grouping data df_reactions_category = df_reactions_category.groupby(['dog_category']).mean() #Data visualization display(df_reactions_category) df_reactions_category['categories'] = df_reactions_category.index df_reactions_category.plot(x='categories', y='favorite_count', kind='barh', title="Mean Favorite Counts by Dog Categories") df_reactions_category.plot(x='categories', y='retweet_count', kind='barh', title="Mean Retweet Counts by Dog Categories", color = 'r') ###Output _____no_output_____ ###Markdown On the one hand, puppos are the most favorited, followed by doggos, floofers and puppers.
On the other hand, doggos are the most retweeted, followed by puppos, floofers and puppers. The distribution of these reactions leads us to our next question. What is the relationship between retweet and favorite counts in the WeRateDogs account?In order to answer this question, it is useful to visualize the relationship using a scatter plot and a regression model. First, let's scatter plot these variables. ###Code #Selecting the columns used for the analysis df_reactions = pd.concat([twitter_archive_master['retweet_count'], twitter_archive_master['favorite_count']], axis=1) #Data visualization df_reactions.plot(x='retweet_count', y='favorite_count', title = 'Retweet and Favorite Counts Relationship', kind='scatter') plt.xlabel('Retweets') plt.ylabel('Favorite Counts') ###Output _____no_output_____ ###Markdown Now we can fit a regression model on the same data using the seaborn library. ###Code #Linear Regression model visualization sns.regplot(x='retweet_count', y='favorite_count', data=df_reactions, color='g', marker='+') plt.title('Retweet and Favorite Counts Regression Plot') plt.xlabel('Retweets') plt.ylabel('Favorite Counts') ###Output _____no_output_____ ###Markdown The linear relationship is very clear. As we can see in the next correlation matrix, there is a positive relationship between retweet and favorite counts in the WeRateDogs Twitter account.
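The coefficient behind that matrix can also be computed directly on the two columns with Series.corr; a toy sketch (invented data, since the real frame is df_reactions):

```python
import pandas as pd

# Strictly increasing toy pairs, so the Spearman rank correlation is exactly 1
toy = pd.DataFrame({
    "retweet_count": [10, 20, 30, 40],
    "favorite_count": [15, 35, 50, 90],
})
rho = toy["retweet_count"].corr(toy["favorite_count"], method="spearman")
print(rho)  # 1.0 for perfectly monotonic data
```

Spearman is a reasonable choice here because it measures monotonic association and is less sensitive to the extreme retweet outliers visible in the scatter plot.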
###Code #Correlation Matrix Visualization reactions_correlation = df_reactions.corr(method = 'spearman') fig, ax = plt.subplots(figsize=(5,5)) ax = sns.heatmap(reactions_correlation, annot=True, linewidths=2, linecolor='black', cmap='coolwarm', vmin=-1, vmax=1, center=0, square=True) ax.set_yticklabels(reactions_correlation.index, rotation=0) bottom, top = ax.get_ylim() ax.set_ylim(bottom + 0.5, top - 0.5) plt.title('Retweets & Favorite Count Correlation') plt.show() ###Output _____no_output_____ ###Markdown Which dog breeds have the best ratings?To answer this question we will group by ratings and dog breeds.First, we consider only those predictions that were detected as dogs and with a prediction confidence of over 0.7 by slicing the cleaned dataframe. ###Code #Selecting the columns used for the analysis df_reactions_breed = pd.concat([twitter_archive_master['breed_prediction_1'], twitter_archive_master['prediction_confidence_1'], twitter_archive_master['prediction_is_a_dog_1'], twitter_archive_master['rating_numerator'], twitter_archive_master['rating_denominator']], axis=1) #Rating calculation df_reactions_breed['rating'] = df_reactions_breed['rating_numerator'].astype(int) / df_reactions_breed['rating_denominator'].astype(int) #Data filtering df_reactions_breed = df_reactions_breed[(df_reactions_breed['prediction_is_a_dog_1'] == True) & (df_reactions_breed['prediction_confidence_1'] >= 0.7)] #Data grouping and sorting df_reactions_breed = df_reactions_breed.groupby(['breed_prediction_1']).mean() df_reactions_breed = df_reactions_breed.sort_values(by=['rating'], ascending = False)[:10] display(df_reactions_breed) #Data visualization df_reactions_breed['breed_prediction_1'] = df_reactions_breed.index df_reactions_breed.plot(x='breed_prediction_1', y='rating', kind='bar', title="Mean Rating by Dog Breed", edgecolor='black', color=['red', 'purple','grey' ,'yellow', 'pink','black', 'white', 'green', 'blue', 'cyan']) plt.legend('', frameon=False) plt.xlabel('Dog 
Breed') plt.ylabel('Rating') ###Output _____no_output_____ ###Markdown importing necessary libraries ###Code import pandas as pd import numpy as np import requests as req import json import os import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown Gather Importing the csv file ###Code Twitter_archive=pd.read_csv('twitter-archive-enhanced.csv') ###Output _____no_output_____ ###Markdown Tweet image prediction file ###Code url='https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response=req.get(url) with open ('image-predictions.tsv' , mode='wb') as file : file.write(response.content) #importing image-predictions.tsv data df_image_pred=pd.read_csv('image-predictions.tsv',sep='\t') df_image_pred.head() ###Output _____no_output_____ ###Markdown Tweet data ###Code import tweepy from tweepy import OAuthHandler import json from timeit import default_timer as timer # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # These are hidden to comply with Twitter's API terms and conditions consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) # NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES: # df_1 is a DataFrame with the twitter_archive_enhanced.csv file. 
You may have to # change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv # NOTE TO REVIEWER: this student had mobile verification issues so the following # Twitter API code was sent to this student from a Udacity instructor # Tweet IDs for which to gather additional data via Twitter's API tweet_ids = df_1.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) df_tweet_info=[] with open('tweet-json.txt') as file: for i in file: df_tweet_info.append(json.loads(i)) #making a new dataframe contains the required information we need df_tweet_data=pd.DataFrame(df_tweet_info,columns=['id','retweet_count','favorite_count']) # we will rename the id column to tweet_id df_tweet_data.rename(columns={'id':'tweet_id'},inplace=True) df_tweet_data.head() ###Output _____no_output_____ ###Markdown Assess Twitter_archive table ###Code Twitter_archive.shape Twitter_archive.head() Twitter_archive.info() Twitter_archive.retweeted_status_id.count() Twitter_archive.rating_numerator.describe() #all values must be 10 Twitter_archive.rating_denominator.describe() Twitter_archive.rating_denominator.value_counts() Twitter_archive[Twitter_archive.rating_denominator==0].tweet_id Twitter_archive.loc[313,'text'] Twitter_archive.name.value_counts() sum(Twitter_archive.duplicated()) Twitter_archive.isnull().sum() 
Twitter_archive.rating_numerator.value_counts().sort_values(ascending=False) Twitter_archive[Twitter_archive.rating_numerator>=100] ###Output _____no_output_____ ###Markdown df_image_pred table ###Code df_image_pred.head() df_image_pred.info() df_image_pred.describe() sum(df_image_pred.duplicated()) df_image_pred.isnull().sum() ###Output _____no_output_____ ###Markdown df_tweet_data table ###Code df_tweet_data.head() df_tweet_data.info() df_tweet_data.describe() sum(df_tweet_data.duplicated()) df_tweet_data.isnull().sum() ###Output _____no_output_____ ###Markdown Quality- There are about 181 retweets in the data frame - Some rating denominators are not equal to 10 - Some rating numerators are too big - Timestamp column datatype is an object; it has to be datetime - Change tweet_id in all tables into strings - There are invalid names in the name column such as (a, an, The and None) - Missing photos for some ids - All P names should start with capital letters - A lot of columns contain a big number of null values, such as retweeted_status_id, retweeted_status_user_id and retweeted_status_timestamp, so they have to be removed Tidiness - Merge the three tables into one table - Create one column for all dog types: doggo, floofer, pupper, puppo Cleaning ###Code Twitter_archive_clean=Twitter_archive.copy() df_image_pred_clean=df_image_pred.copy() df_tweet_data_clean=df_tweet_data.copy() ###Output _____no_output_____ ###Markdown Define Create one column for all dog types: doggo, floofer, pupper, puppo Code ###Code #creating new columns Twitter_archive_clean['dog_type']=Twitter_archive_clean['text'].str.extract('(doggo|floofer|pupper|puppo)') ###Output _____no_output_____ ###Markdown Test ###Code Twitter_archive_clean.shape Twitter_archive_clean.sample(5) Twitter_archive_clean.dog_type.value_counts() Twitter_archive_clean[Twitter_archive_clean.doggo != 'None'].head() #remove unnecessary columns
Twitter_archive_clean.drop(['doggo','floofer','pupper','puppo'],axis=1,inplace=True) Twitter_archive_clean.head(5) Twitter_archive_clean.shape ###Output _____no_output_____ ###Markdown Define Merge the three tables into a single table Code ###Code #merge joins only two tables at a time, so we do it in two steps Twitter_archive_clean=pd.merge(Twitter_archive_clean,df_image_pred_clean,on='tweet_id',how='left') Twitter_archive_clean=pd.merge(Twitter_archive_clean,df_tweet_data_clean,on='tweet_id',how='left') ###Output _____no_output_____ ###Markdown Test ###Code Twitter_archive_clean.shape Twitter_archive_clean.head() Twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2356 entries, 0 to 2355 Data columns (total 27 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null object 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 dog_type 399 non-null object 14 jpg_url 2075 non-null object 15 img_num 2075 non-null float64 16 p1 2075 non-null object 17 p1_conf 2075 non-null float64 18 p1_dog 2075 non-null object 19 p2 2075 non-null object 20 p2_conf 2075 non-null float64 21 p2_dog 2075 non-null object 22 p3 2075 non-null object 23 p3_conf 2075 non-null float64 24 p3_dog 2075 non-null object 25 retweet_count 2354 non-null float64 26 favorite_count 2354 non-null float64 dtypes: float64(10), int64(3), object(14) memory usage: 515.4+ KB ###Markdown Define Timestamp column datatype is an object; it has to be datetime Code ###Code
Twitter_archive_clean.timestamp=pd.to_datetime(Twitter_archive_clean.timestamp) ###Output _____no_output_____ ###Markdown Test ###Code Twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2356 entries, 0 to 2355 Data columns (total 27 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null datetime64[ns, UTC] 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 dog_type 399 non-null object 14 jpg_url 2075 non-null object 15 img_num 2075 non-null float64 16 p1 2075 non-null object 17 p1_conf 2075 non-null float64 18 p1_dog 2075 non-null object 19 p2 2075 non-null object 20 p2_conf 2075 non-null float64 21 p2_dog 2075 non-null object 22 p3 2075 non-null object 23 p3_conf 2075 non-null float64 24 p3_dog 2075 non-null object 25 retweet_count 2354 non-null float64 26 favorite_count 2354 non-null float64 dtypes: datetime64[ns, UTC](1), float64(10), int64(3), object(13) memory usage: 515.4+ KB ###Markdown Define Change tweed_id in all tables into strings Code ###Code Twitter_archive_clean.tweet_id=Twitter_archive_clean.tweet_id.astype(str) ###Output _____no_output_____ ###Markdown Test ###Code Twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2356 entries, 0 to 2355 Data columns (total 27 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null object 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null datetime64[ns, UTC] 4 source 2356 non-null object 5 
text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 dog_type 399 non-null object 14 jpg_url 2075 non-null object 15 img_num 2075 non-null float64 16 p1 2075 non-null object 17 p1_conf 2075 non-null float64 18 p1_dog 2075 non-null object 19 p2 2075 non-null object 20 p2_conf 2075 non-null float64 21 p2_dog 2075 non-null object 22 p3 2075 non-null object 23 p3_conf 2075 non-null float64 24 p3_dog 2075 non-null object 25 retweet_count 2354 non-null float64 26 favorite_count 2354 non-null float64 dtypes: datetime64[ns, UTC](1), float64(10), int64(2), object(14) memory usage: 515.4+ KB ###Markdown Define get rid of the 181 retweet in the data frame Code ###Code Twitter_archive_clean=Twitter_archive_clean[Twitter_archive_clean.retweeted_status_id.isnull()==True] ###Output _____no_output_____ ###Markdown Test ###Code Twitter_archive_clean.info() Twitter_archive_clean.retweeted_status_id.unique() ###Output _____no_output_____ ###Markdown Define drop in retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp columns Code ###Code Twitter_archive_clean.drop(['retweeted_status_id','retweeted_status_user_id','retweeted_status_timestamp'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code Twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 24 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2175 non-null object 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2175 non-null datetime64[ns, UTC] 4 source 2175 non-null object 5 text 2175 non-null object 6 expanded_urls 2117 non-null object 7 rating_numerator 2175 
non-null int64 8 rating_denominator 2175 non-null int64 9 name 2175 non-null object 10 dog_type 364 non-null object 11 jpg_url 1994 non-null object 12 img_num 1994 non-null float64 13 p1 1994 non-null object 14 p1_conf 1994 non-null float64 15 p1_dog 1994 non-null object 16 p2 1994 non-null object 17 p2_conf 1994 non-null float64 18 p2_dog 1994 non-null object 19 p3 1994 non-null object 20 p3_conf 1994 non-null float64 21 p3_dog 1994 non-null object 22 retweet_count 2175 non-null float64 23 favorite_count 2175 non-null float64 dtypes: datetime64[ns, UTC](1), float64(8), int64(2), object(13) memory usage: 424.8+ KB ###Markdown Define missing photos for some id's Code ###Code # remove the rows that not contain photos Twitter_archive_clean=Twitter_archive_clean[Twitter_archive_clean.jpg_url.notnull()==True] ###Output _____no_output_____ ###Markdown Test ###Code Twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 2355 Data columns (total 24 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1994 non-null object 1 in_reply_to_status_id 23 non-null float64 2 in_reply_to_user_id 23 non-null float64 3 timestamp 1994 non-null datetime64[ns, UTC] 4 source 1994 non-null object 5 text 1994 non-null object 6 expanded_urls 1994 non-null object 7 rating_numerator 1994 non-null int64 8 rating_denominator 1994 non-null int64 9 name 1994 non-null object 10 dog_type 326 non-null object 11 jpg_url 1994 non-null object 12 img_num 1994 non-null float64 13 p1 1994 non-null object 14 p1_conf 1994 non-null float64 15 p1_dog 1994 non-null object 16 p2 1994 non-null object 17 p2_conf 1994 non-null float64 18 p2_dog 1994 non-null object 19 p3 1994 non-null object 20 p3_conf 1994 non-null float64 21 p3_dog 1994 non-null object 22 retweet_count 1994 non-null float64 23 favorite_count 1994 non-null float64 dtypes: datetime64[ns, UTC](1), float64(8), int64(2), object(13) memory usage: 389.5+ KB ###Markdown 
Define - There are invalid names in name column such as (a , an , The ,etc....) Code ###Code Twitter_archive_clean.name.unique() Twitter_archive_clean.name.value_counts() Twitter_archive_clean.name=Twitter_archive_clean.name.replace(regex=['^[a-z]+','None'],value=np.nan) ###Output _____no_output_____ ###Markdown Test ###Code Twitter_archive_clean.info() sum(Twitter_archive_clean.name.isnull()) ###Output _____no_output_____ ###Markdown Define Some ratings denominators not equal 10 Code ###Code Twitter_archive_clean=Twitter_archive_clean[Twitter_archive_clean.rating_denominator==10] ###Output _____no_output_____ ###Markdown Test ###Code Twitter_archive_clean.info() Twitter_archive_clean.sample(5) Twitter_archive_clean.rating_denominator.unique() ###Output _____no_output_____ ###Markdown Define all P names should starts with capital letters Code ###Code Twitter_archive_clean.p1=Twitter_archive_clean.p1.str.title() Twitter_archive_clean.p2=Twitter_archive_clean.p2.str.title() Twitter_archive_clean.p3=Twitter_archive_clean.p3.str.title() ###Output _____no_output_____ ###Markdown Test ###Code Twitter_archive_clean.sample(5) ###Output _____no_output_____ ###Markdown Define Some ratings numerators are less than 10 or too big Code ###Code Twitter_archive_clean.rating_numerator.unique() Twitter_archive_clean=Twitter_archive_clean[Twitter_archive_clean.rating_numerator<=14] ###Output _____no_output_____ ###Markdown Test ###Code Twitter_archive_clean.info() Twitter_archive_clean.rating_numerator.unique() ###Output _____no_output_____ ###Markdown Storing Data ###Code Twitter_archive_clean.to_csv('twitter_archive_master.csv') ###Output _____no_output_____ ###Markdown Analyzing, and Visualizing Data ###Code Twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1971 entries, 0 to 2355 Data columns (total 24 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1971 non-null object 1 in_reply_to_status_id 22 non-null 
float64 2 in_reply_to_user_id 22 non-null float64 3 timestamp 1971 non-null datetime64[ns, UTC] 4 source 1971 non-null object 5 text 1971 non-null object 6 expanded_urls 1971 non-null object 7 rating_numerator 1971 non-null int64 8 rating_denominator 1971 non-null int64 9 name 1344 non-null object 10 dog_type 318 non-null object 11 jpg_url 1971 non-null object 12 img_num 1971 non-null float64 13 p1 1971 non-null object 14 p1_conf 1971 non-null float64 15 p1_dog 1971 non-null object 16 p2 1971 non-null object 17 p2_conf 1971 non-null float64 18 p2_dog 1971 non-null object 19 p3 1971 non-null object 20 p3_conf 1971 non-null float64 21 p3_dog 1971 non-null object 22 retweet_count 1971 non-null float64 23 favorite_count 1971 non-null float64 dtypes: datetime64[ns, UTC](1), float64(8), int64(2), object(13) memory usage: 385.0+ KB ###Markdown the ratio of every dog type ###Code Twitter_archive_clean.dog_type.value_counts().plot(kind='pie') Twitter_archive_clean['dog_type'].value_counts().plot(kind = 'barh',figsize=(8,8)) plt.title('Most dog type used') plt.xlabel('Count') plt.ylabel('Dog type'); #the most used dog type is pupper and the least one is floofer ###Output _____no_output_____ ###Markdown Total number of tweets per year ###Code Twitter_archive_clean.groupby(Twitter_archive_clean["timestamp"].dt.year)['retweet_count'].sum().sort_values() #2016 contains the biggest number of retweets counts and 2015 is the least ###Output _____no_output_____ ###Markdown Relation between retweet_count and favorite_count ###Code plt.scatter(Twitter_archive_clean.retweet_count,Twitter_archive_clean.favorite_count) plt.title('Relation between retweet_count and favorite_count') plt.xlabel('Retweet count') plt.ylabel('Favorite count') #the relation is postively correlated ###Output _____no_output_____ ###Markdown We Rate Dogs. Wrangling and analysis projectFirst, we're getting the basics out of the way... 
###Code #Basics import pandas as pd import json import numpy as np import matplotlib.pyplot as plt import seaborn as sns import requests import datetime %matplotlib inline ###Output _____no_output_____ ###Markdown Gather files and resources as needed for the project.This means loading the files supplied and downloading the image predictions.The resources we need to get are:- The archive 'twitter-archive-enhanced.csv' from udacity. Downloaded and added to the project- The image predictions files which will be programatically downloaded- info for all the tweets, accessed from through the twitter api ###Code #Files #Read the csv file and have a look at it. archive = pd.read_csv('twitter-archive-enhanced.csv') archive.head() #Download the tsv file for image predictions url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open(url.split('/')[-1], mode='wb') as file: file.write(response.content) img_pred = pd.read_csv('image-predictions.tsv', sep='\t') img_pred.head() #Twitter Stuff - Api access import tweepy consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) #Testing that Twitter works tweet = api.get_status(archive.tweet_id[1], tweet_mode='extended') info = tweet._json info #Get all the tweets #Saving out the errors in a txt file, just in case #set this in order to grab or not grab the data from twitter getData = 0 if(getData==1): data = [] problem_tweets = {} tweetcount =archive.tweet_id.size #tweetcount = 40 #This one is just for troubleshooting for a in range(0, tweetcount): currentid = archive.tweet_id[a] print("requesting tweet id: {} at {}".format(currentid, datetime.datetime.now().time())) try: tweet = 
api.get_status(archive.tweet_id[a], tweet_mode='extended') data.append(tweet._json) except Exception as e: print("failed tweet id: {} at {}".format(currentid, datetime.datetime.now().time())) problem_tweets[currentid] = str(e) with open("tweet_json.txt", mode='w') as file: json.dump(data, file) #Saving the errors, just in case we need them at some point... f = open("tweet_errors.txt","w") f.write(str(problem_tweets)) f.close() print("done") print("issues with tweets: {}".format(problem_tweets)) elif(getData==0): print("Not grabbing data, assuming you already have it.") ###Output _____no_output_____ ###Markdown Turn the twitter data into something we can useI want to store these fields:- tweet id (required)- retweet count (required)- favorite count (required)- followers count (already formulating ideas that it could be interesting to see if followe counts have a relative impact on favorites and retweets) ###Code #Turn json into dataframe. #Load and parse the file tweet_list = [] with open('tweet_json.txt') as file: alltweets = json.load(file) for tweet in alltweets: tweet_list.append({'tweet_id' : tweet['id'], 'retweet_count' : tweet['retweet_count'], 'favorite_count': tweet['favorite_count'], 'followers_count' : tweet['user']['followers_count']}) #Turn the file into a dataframe tw_api_data = pd.DataFrame(tweet_list, columns = ['tweet_id', 'retweet_count', 'favorite_count', 'followers_count']) #Quickly have a look to see that all is going according to plan tw_api_data.info() ###Output _____no_output_____ ###Markdown Data gathering roundupWe now have three data sources in dataframes. - _tw_api_data_ holds the data we extrated from the twitter API.- _archive_ The data uploaded, from the file supplied by udacity. - _img_pred_ The predictions from the tab seperated file downloaded programatically. 
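Before assessing, it can help to check how well the three sources' tweet IDs overlap, since inner merges later on will silently drop non-matching rows. A minimal sketch using small stand-in frames in place of the real `archive`, `img_pred`, and `tw_api_data` (the values here are made up for illustration):

```python
import pandas as pd

# Stand-ins for the three gathered frames (illustrative data only)
archive = pd.DataFrame({"tweet_id": [1, 2, 3, 4]})
img_pred = pd.DataFrame({"tweet_id": [1, 2, 3]})
tw_api_data = pd.DataFrame({"tweet_id": [1, 2, 4]})

archive_ids = set(archive.tweet_id)
# Tweets with no image prediction / no API data — these rows would
# drop out of an inner merge later on
missing_pred = archive_ids - set(img_pred.tweet_id)
missing_api = archive_ids - set(tw_api_data.tweet_id)
print(len(missing_pred), len(missing_api))  # here: 1 1
```

On the real data this would quantify the gap between the 2356 archive rows and the smaller prediction and API frames.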
Assessing the data prior to cleaningNow that all the sources are in place, I want to look at them and see what needs cleaning, tidying and wrangling.I will start with the Archive... There are things I know we need to pay attention to:- Dog name- Stages- Rating ###Code # Starting from the top. The Archive print(archive.info()) archive.head() # A quick check to see how a row looks print(archive.iloc[0]) ###Output _____no_output_____ ###Markdown It seems we don't need the source column, and we can see that the link is embedded in the text field as well. This quick look also made me interested to see if expanded_urls sometimes hold more than one url. ###Code archive.expanded_urls.str.split(',') ###Output _____no_output_____ ###Markdown it seems to be the case... maybe not something to worry about yet... ###Code # Lets see how many names have been captured archive['name'].value_counts() ###Output _____no_output_____ ###Markdown This introduces quite a few things we need to look at already, lets start the list:745 names are missing8 dogs called the, and 55 dogs called aSo we need to find those names, and I suspect that filtering for lowercase names will give me a list of dogs that need special attention as well. I will have a quick look at this straight away. ###Code lowerFrame = archive[archive['name'].str[0].str.islower()] lowerFrame['name'].value_counts() ###Output _____no_output_____ ###Markdown Ok, so I will make the assumption that nobody calls their dog 'quite' or 'this' ... though 'Unaccaptable' and 'Mad' could be funny names. But in all seriousness, we need to address these. Dog stagesNext up we look into the stages of the dogs. 
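Before moving on: some of those missing names could in principle be recovered from the tweet text, since many tweets open with fixed phrases like "This is X" or "Meet X". A hedged sketch on made-up strings (the regex and sample texts are illustrative, not the archive's actual contents):

```python
import pandas as pd

texts = pd.Series([
    "This is Kevin. He is a good boy. 12/10",
    "Meet Daisy. She loves snow. 13/10",
    "Here's a dog with no name. 10/10",
])
# Capture the capitalized word after a common intro phrase; NaN where no match
names = texts.str.extract(r"(?:This is|Meet|named) ([A-Z][a-z]+)", expand=False)
print(names.tolist())  # ['Kevin', 'Daisy', nan]
```

How much this recovers on the real archive would depend on how consistently the account follows those phrasings.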
###Code (archive.loc[:,'doggo':'puppo'] != 'None').sum() #Out of curiosity I want to see if there are dogs that have been classified with more than one type multidogs = 0 unclasdogs = 0 singleclasdogs = 0 for index, row in archive.iterrows(): dogtypecount = 0 if row['doggo'] != 'None': dogtypecount += 1 if row['floofer'] != 'None': dogtypecount += 1 if row['pupper'] != 'None': dogtypecount += 1 if row['puppo'] != 'None': dogtypecount += 1 if dogtypecount >=2: multidogs += 1 elif dogtypecount ==1: singleclasdogs += 1 else: unclasdogs += 1 print("there are {} dogs with more than one classification".format( multidogs)) print("there are {} dogs with just one classification".format( singleclasdogs)) print("there are {} dogs with no classification".format( unclasdogs)) ###Output _____no_output_____ ###Markdown Ok, so there are already a lot of things to look at, but let us continue the assesment. Dog ratingstime to asses the ratings. These are the most important features, if the name of the account is to be taken seriously. ###Code archive['rating_numerator'].value_counts() archive['rating_denominator'].value_counts() ###Output _____no_output_____ ###Markdown allright.... it's all over the place. There is no missing data it seems, but people seem liberal in keeping to actual ratings that would make sense. But we could likely find interesting things from this at some point. The last thing I want to look at is to check if we have any retweets or replies in there that we should get rid of. No retweets or replies! ###Code #need to see a bit more pd.set_option('max_colwidth',280) #lets look at the text in some of the replies archive[archive.in_reply_to_user_id.notnull()].text #and the same for the tweets that have been retweeted archive[archive.retweeted_status_user_id .notnull()].text ###Output _____no_output_____ ###Markdown From reading through quite a few of both outputs, I will deem these rows unusable. 
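Dropping those reply and retweet rows can later be done with a single boolean mask rather than row-by-row checks. A minimal sketch on a toy frame (the column names follow the archive; the values are made up):

```python
import pandas as pd

df = pd.DataFrame({
    "tweet_id": [1, 2, 3, 4],
    "retweeted_status_id": [None, 7.0, None, None],
    "in_reply_to_user_id": [None, None, 9.0, None],
})
# Keep only original tweets: neither a retweet nor a reply
originals = df[df.retweeted_status_id.isna() & df.in_reply_to_user_id.isna()]
print(originals.tweet_id.tolist())  # [1, 4]
```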
Quality and tidiness issues to fix on the Archive dataframe:Quality:- remove rows that are replies or retweets.- Work on classification of dogs- Remove names with lowercase start- Find dog names where missing and possible- Clean links from textTidiness:- I want to clean the categorization. Having the row name and variable be the same is redundant and annoying- Once we have removed the retweets and replies, we can get rid of those columns.Now let's move on to the images and see what other things we can find... img_pred ###Code #Getting an overview... img_pred.sample(20) ###Output _____no_output_____ ###Markdown All right, there are some things we need to look into here. Of course we need to merge this into the archive data, but we need to clean some things first.- it doesn't seem like we need the img_num column- There are quite a few predictions that are not dogs. We need to see how many have non-dog predictions in all three predictions.- We should only have one prediction per entry, so probably the highest-rated one would make sense.- A quick sanity check that we do not have missing fields etc. ###Code # first things first... img_pred.info() ###Output _____no_output_____ ###Markdown The only thing that springs to mind is that we only have 2075 entries. This needs to be dealt with. ###Code #Let's see how many entries do not have a dog in any prediction. notDog = 0 for index, row in img_pred.iterrows(): if ((int(row['p1_dog'])+int(row['p2_dog'])+int(row['p3_dog'])) == 0): notDog += 1 print(notDog) ###Output _____no_output_____ ###Markdown OK, so that seems pretty straightforward. We do need to choose a way to move forward with regard to the dog images that are not classified as dogs. Looking through some of them, it seems the prediction is actually right in many cases, so it might make sense to get rid of those. Another option is to just not have anything in that field.
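The non-dog count can also be computed without `iterrows`, by asking per row whether any of the three boolean columns is True. A sketch on a toy frame (illustrative values, not the real predictions):

```python
import pandas as pd

preds = pd.DataFrame({
    "p1_dog": [True, False, False],
    "p2_dog": [False, False, True],
    "p3_dog": [False, False, False],
})
# Rows where none of the three predictions is a dog
not_dog = ~preds[["p1_dog", "p2_dog", "p3_dog"]].any(axis=1)
print(not_dog.sum())  # 1
```

The vectorized form scales far better than a Python loop on the full 2075-row table.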
Quality and tidiness issues to fix on the img_pred dataframe:Quality:- keep just the highest-confidence prediction that is also a dog (let's keep the confidence as well)Tidiness:- get rid of the img_num column- fix upper and lower casesOther things:- once merged, we need to see what the issue is with missing predictions. Twitter API download. Time to move on to assess the newly downloaded data. Again, this one is mostly sanity checking, so let's get to it ###Code tw_api_data.info() #Let's have a look at some histograms to see if we can learn more... tw_api_data.hist(bins=40,figsize=(16,10)) ###Output _____no_output_____ ###Markdown Nothing too surprising here. The most interesting thing, as I see it (though this gets into analysis), is that there seems to be a very 'all or nothing' distribution to observe, though this is a pretty normal long tail. I am actually happy with this dataframe; I just need to merge it into the other data. Cleaning the data! It's now time to start cleaning. The issues I want to deal with are:_Archive_- remove rows that are retweets.- remove rows that are replies.- remove columns related to replies and retweets- Dog names. I need to clean the ones that have been wrongly extracted. And try to extract more.- Classification. Just one per dog. And try to extract more where possible- Classification will go in one column- remove urls from text._image predictions_- remove predictions for non-dogs- have just one prediction with a confidence level per dog- remove img_num column- fix upper and lower cases in prediction names. Next up we need to merge everything into one nice, big dataframe and have a look at that. I know there will be an issue of some entries missing that we need to look into. So let's get started. Wrangling the Archive __Define__: 1- Drop duplicates, if there are any, as a first step. This way we avoid working on more data than we need.
Then create a new dataframe that we can keep working with.__Code__; ###Code # Starting with the simplest part of cleaning in order to not spend time on things that will be removed. print(archive.duplicated().sum()) archive_new = archive.drop_duplicates() ###Output _____no_output_____ ###Markdown __Test__: ###Code archive_new.duplicated().sum() ###Output _____no_output_____ ###Markdown __Define__: 2- Remove rows that are retweets. These have values in the retweeted_status_user_id column, so using that as an identifier__Code__; ###Code #drop rows that are retweets archive_new = archive_new[archive_new.retweeted_status_user_id.isnull()] ###Output _____no_output_____ ###Markdown __Test__: ###Code #lets check it archive_new.info() ###Output _____no_output_____ ###Markdown __Define__: 3- Remove rows that are replies. These have values in the reply_to_user_id column, so using that as an identifier__Code__; ###Code #drop rows that are replies archive_new = archive_new[archive_new.in_reply_to_user_id.isnull()] ###Output _____no_output_____ ###Markdown __Test__: ###Code #lets check it archive_new.info() ###Output _____no_output_____ ###Markdown __Define__: 4- Get rid of the now empty columns for replies and rewteets.__Code__; ###Code #Get rid of the empty columns and source column while we are at it. archive_new.drop(['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id', 'retweeted_status_user_id','retweeted_status_timestamp','source'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown __Test__: ###Code archive_new.info() ###Output _____no_output_____ ###Markdown __Define__: 5- Remove the names if they start with a lower case letter__Code__; ###Code # First, we need to remove all the lowercase names, as we found earlier that these were not names. 
for index, row in archive_new.iterrows(): if row['name'][0].islower(): print(index) archive_new.drop(index, inplace=True) ###Output _____no_output_____ ###Markdown __Test__: ###Code lowerStart = 0 for index, row in archive_new.iterrows(): if row['name'][0].islower(): lowerStart += 1 print("There are {} names that start with a lowercase letter".format(lowerStart)) ###Output _____no_output_____ ###Markdown __Define__: 6- Change the names that are set to None to actual blanks. This will make it easier to search and gather an overview etc.__Code__: ###Code # change the names that are None to actual blanks for index, row in archive_new.iterrows(): if row['name'] == "None": archive_new.loc[index, 'name'] = None ###Output _____no_output_____ ###Markdown __Test__: ###Code archive_new.name.value_counts() pd.set_option('max_colwidth',180) #Making things easier to see #And just to make sure that worked and have a look at the text fields archive_new[archive_new.name.isnull()].text.sample(25) ###Output _____no_output_____ ###Markdown Unfortunately it doesn't seem like these have names that can be extracted automatically, so I guess we need to have a lot of dogs without names. At least we've taken out many of the erroneous names.__Define__:7- Remove links from the text fields to simplify and avoid whatever things could cause errors later on when extracting data from the field.__Code__: ###Code #Removing links from text fields for index, row in archive_new.iterrows(): archive_new.loc[index, 'text'] = row['text'].rsplit('http', 1)[0].rsplit('http', 1)[0] archive_new.sample(20) ###Output _____no_output_____ ###Markdown All right, that takes care of that. Now I 'just' need to wrangle the stages. I just want to make sure we have one column that holds the name of the stage, and is empty if not identified. Since a floofer can also be a catch-all for just about any dog, that has the lowest priority in case more than one stage is noted for a tweet.
Next comes Puppo, then pupper and lastly doggo, mainly because the description describes them as inheriting traits from the former stage.__Define__:8- Clean up dog stages. The current configuration is not making life easier for us, so the stages will go into one single column, instead of four, and in the process we are getting rid of multiple classifications per dog.__Code__: ###Code archive_new['dog_stage'] = None for index, row in archive_new.iterrows(): outStage = "" if row['floofer'] != 'None': outStage = 'Floofer' if row['puppo'] != 'None': outStage = 'Puppo' if row['pupper'] != 'None': outStage = 'Pupper' if row['doggo'] != 'None': outStage = 'Doggo' if outStage != "": archive_new.loc[index, 'dog_stage'] = outStage ###Output _____no_output_____ ###Markdown __Test__: ###Code archive_new.sample(40) ###Output _____no_output_____ ###Markdown __Define__:9- Drop the old columns that had the dog stages in them.__Code__: ###Code pd.set_option('max_colwidth',40) #Making things easier to see #That seemed to work!
Time to get rid of the old columns archive_new.drop(['floofer', 'puppo', 'pupper','doggo'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown __Test__: ###Code archive_new.head(15) #Cleaning up a bit archive_clean = archive_new.copy() ###Output _____no_output_____ ###Markdown OK, there are of course many more things to do, but for now we have dealt with:- Removing names that were not names- Removing links from text field- Remove retweets- Remove replies- Remove columns for retweets and replies- Convert the four stages to a single column- Change names that were 'None' to actual None's- Remove now useless columns stating the stages- Copied to archive_clean Img_pred wrangling. My plan here is to deal with these issues:- remove predictions for non-dogs- have just one prediction with a confidence level per dog- remove img_num column- fix upper and lower cases in prediction names to all lower case, just for conformity ###Code img_pred_clean = img_pred.copy() img_pred_clean.sample(8) ###Output _____no_output_____ ###Markdown __Define__:10a- Clean up the predictions by keeping just the highest-rated prediction that is also a dog. If none of the predictions are dogs, the prediction will be set to None with a confidence of 0. 10b- While running through the data we will convert to lowercase.__Code__: ###Code #Create a column for the dog breed and confidence img_pred_clean['dog_type'] = None img_pred_clean['type_confidence'] = 0.0 #Now, we grab the highest-confidence prediction that is a dog.
#This is 10a for index, row in img_pred_clean.iterrows(): outPred = "" outConf = 0.0 if row['p1_dog'] == True: outPred = row['p1'] outConf = row['p1_conf'] elif row['p2_dog'] == True: outPred = row['p2'] outConf = row['p2_conf'] elif row['p3_dog'] == True: outPred = row['p3'] outConf = row['p3_conf'] else: outPred = None outConf = 0.0 #Before Storing convert to lower case if not None #This is 10b if outPred: outPred = outPred.lower() img_pred_clean.loc[index, 'dog_type'] = outPred img_pred_clean.loc[index, 'type_confidence'] = outConf ###Output _____no_output_____ ###Markdown __Test__: ###Code img_pred_clean.sample(20) ###Output _____no_output_____ ###Markdown __Define__:11- drop columns from the old predictions__Code__: ###Code #Remove columns for predictions img_pred_clean.drop(['p1','p1_conf', 'p1_dog','p2','p2_conf', 'p2_dog','p3','p3_conf', 'p3_dog'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown __Test__: ###Code img_pred_clean.head() ###Output _____no_output_____ ###Markdown __Define__:12- Remove the img_num colum__Code__: ###Code #and lets drop the img_num column as well img_pred_clean.drop(['img_num'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown __Test__: ###Code img_pred_clean.tail() img_pred_clean.dog_type.value_counts() ###Output _____no_output_____ ###Markdown Let the merging begin! ###Code #img_pred_clean.head() #archive_clean.info() archive_merge = archive_clean.merge(img_pred_clean , on='tweet_id') archive_merge.sample(20) #And lets add the data from the twitter api archive_merge = archive_merge.merge(tw_api_data , on='tweet_id') archive_merge.sample(20) #Just in case I need it, I'll calculate the rating archive_merge['rating'] = archive_merge['rating_numerator']/archive_merge['rating_denominator'] archive_merge.head() ###Output _____no_output_____ ###Markdown So even though there is still plenty of things to do and doublecheck, I am satisfied with the dataset I have now. So it's time to move on to... 
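Since `merge` defaults to an inner join, each merge above silently drops rows without a counterpart, and passing `validate=` makes the assumed one-to-one relationship on `tweet_id` explicit. A small sketch with toy frames (not the project data):

```python
import pandas as pd

left = pd.DataFrame({"tweet_id": [1, 2, 3], "name": ["Kevin", "Daisy", "Rex"]})
right = pd.DataFrame({"tweet_id": [1, 3], "dog_type": ["pug", "borzoi"]})

# Inner join: tweet_id 2 disappears; validate raises if ids repeat on either side
merged = left.merge(right, on="tweet_id", validate="one_to_one")
print(merged.shape)  # (2, 3)
```

With `validate="one_to_one"`, a duplicated `tweet_id` would raise a `MergeError` instead of silently multiplying rows.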
AnalysisFirst of i'll store the new dataset so it's easier to work with from here, and i'll load that back into a new dataframe.Then I'll take a look at the data and see what insights I can get from it, and document and visualize these ###Code archive_merge.to_csv('twitter_archive_master.csv', index = None, header=True) data_clean = pd.read_csv('twitter_archive_master.csv') data_clean.info() data_clean.hist(figsize=(28,14), bins=40) #Lets see what happens when we plot the rating againts favourited. plt.figure(figsize =(20,10)) plt.scatter((data_clean['rating']), data_clean['favorite_count'].transform(lambda x: np.log10(x)),s=10) plt.title('rating vs favourites') plt.xlabel('Rating') plt.ylabel('favourited') plt.xlim(-1,3) #in dialing this in, I had to take the outliers out of the picture to see what was going on plt.savefig('rate_v_favor.png') plt.show() data_clean.corr(method='pearson') ###Output _____no_output_____ ###Markdown There does not seem to be a clear correlation but it does seem that you get more favourites from ratings over 1.0 and that once the rating gets to 1.5 or higher, favouriting ceases. This can be obeserved in the graph from the clustering but does not appear in the correlation matrix._An observation_It seems that nothing in the dataframe correlates linearly, which actually makes sense, as there are no rules for how ratings are given per se, so it is very much a case of controlled randomness. ###Code #Maybe looking at dog stages can lead to something interesting? stage_rate = data_clean.groupby(['dog_stage','rating']) stage_rate.count() ###Output _____no_output_____ ###Markdown There could be something there... let's keep looking at the bigger picture first though. ###Code pd.plotting.scatter_matrix(data_clean, figsize=(20,15)) ###Output _____no_output_____ ###Markdown While there seems to be a few things that jump out, I cannot use the rating column against the original ratings, since it's derrived from there. 
Also, the tweet_id and type_confidence can't really say anything of value here. ###Code ## Maybe looking over time can help us learn something from matplotlib.dates import DateFormatter #Time format: 2017-07-30 15:58:51 +0000 # Define the date format myFmt = DateFormatter('%Y-%m-%d') data_clean['short_timestamp'] = data_clean['timestamp'].astype('datetime64[ns]') # plot the data fig, (ax, ax2) = plt.subplots(2, figsize=(15, 9)) ax.plot(data_clean.short_timestamp, data_clean.favorite_count.rolling(90, win_type='triang').sum()) ax2.plot(data_clean.short_timestamp, data_clean.rating.rolling(90, win_type='triang').sum()) ax.set_title('Development of favourite counts over time') ax.set(xlabel="Time", ylabel="Favourite Counts",) ax.xaxis_date() ax.xaxis.set_major_formatter(myFmt) ax2.set_title('Ratingss over time') ax2.set(xlabel="Time", ylabel="Rating",) ax2.xaxis_date() ax2.xaxis.set_major_formatter(myFmt) fig.savefig('rate_v_favor_over_time.png') ###Output _____no_output_____ ###Markdown It seems that while favourites increase over time, so does the ratings of dogs. The two major spikes seen in the ratings can also be seen in the favourites. ###Code #data_clean[data_clean.dog_stage == 'Pupper']['rating'].hist() fig, ([ax, ax2],[ax3,ax4]) = plt.subplots(nrows=2, ncols=2, figsize=(18, 9)) ax.hist(data_clean[data_clean.dog_stage == 'Pupper']['rating']) ax2.hist(data_clean[data_clean.dog_stage == 'Puppo']['rating']) ax3.hist(data_clean[data_clean.dog_stage == 'Doggo']['rating']) ax4.hist(data_clean[data_clean.dog_stage == 'Floofer']['rating']) ax.set_title('Pupper') ax2.set_title('Puppo') ax3.set_title('Doggo') ax4.set_title('Floofer') ax.set(xlabel="Rating", ylabel="Counts",) ax2.set(xlabel="Rating", ylabel="Counts",) ax3.set(xlabel="Rating", ylabel="Counts",) ax4.set(xlabel="Rating", ylabel="Counts",) fig.savefig('rate_v_dogStage.png') ###Output _____no_output_____ ###Markdown Gathering data 1. 
The WeRateDogs Twitter archive ###Code # Reading the twitter-archive-enhanced.csv file df_archive = pd.read_csv("twitter-archive-enhanced.csv") df_archive.head() df_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown 2. The Tweet Image Predictions ###Code # Getting a webpage stored in url url = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" response = requests.get(url) # Opening and saving the file selected after the last slash from the url => image-predictions.tsv with open(url.split('/')[-1], mode = 'wb') as file: file.write(response.content) # Reading Tab-separate files (TSV files) we should use sep = '\t' df_image = pd.read_csv("image-predictions.tsv", sep = '\t') df_image.head() df_image.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) 
memory usage: 152.1+ KB ###Markdown 3. The Twitter API ###Code # Twitter API access consumer_key = 'YOUR CONSUMER KEY' consumer_secret = 'YOUR CONSUMER SECRET' access_token = 'YOUR ACCESS TOKEN' access_secret = 'YOUR ACCESS SECRET' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth) # Downloading Tweepy objects defined by Tweet ID => df_archive.tweet_id # Storing the list of tweets in data_list data_list = [] # Storing the list of errors in error_list error_list = [] # Recording the start time of execution start = time.time() # Adding each available tweet into to data_list for tweet_id in df_archive["tweet_id"]: try: tweet = api.get_status(tweet_id, tweet_mode = "extended", wait_on_rate_limit = True, wait_on_rate_limit_notify = True)._json favorites = tweet["favorite_count"] retweets = tweet["retweet_count"] data_list.append({"tweet_id": int(tweet_id), "favorites": int(favorites), "retweets": int(retweets)}) except Exception as e: error_list.append(tweet_id) # Recording the end time of execution end = time.time() print(f"Time recorded in seconds: {end - start}") print("The list of tweets found: " ,len(data_list)) print("The list of tweets not found: " , len(error_list)) t = 2271.9571866989136/60 print(f"Time recorded in minutes: {t}") # Creating the Dataframe from the data_list json_data = pd.DataFrame(data_list, columns = ["tweet_id", "favorites", "retweets"]) # Saving the Dataframe in the file tweet_json.txt json_data.to_csv("tweet_json.txt", encoding = "utf-8", index = False) # Reading the tweet_json.txt file df_tweet = pd.read_csv("tweet_json.txt", encoding = "utf-8") df_tweet.head() df_tweet.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 3 columns): tweet_id 2331 non-null int64 favorites 2331 non-null int64 retweets 2331 non-null int64 dtypes: int64(3) memory usage: 54.8 KB ###Markdown Assessing data ###Code 
df_archive.info() # Checking timestamp data type type(df_archive["timestamp"][0]) df_archive # Looking for duplicates sum(df_archive.duplicated(subset = "tweet_id")) # Detect missing values df_archive.isnull().sum() # Checking the rating_numerator df_archive.rating_numerator.value_counts() # Checking the rating_denominator df_archive.rating_denominator.value_counts() df_archive.describe() # Looking for duplicates df_archive[df_archive.duplicated(subset = ["tweet_id"], keep = False)] ###Output _____no_output_____ ###Markdown ------------------------------- ###Code df_image.info() df_image # Looking for duplicates sum(df_image.jpg_url.duplicated()) df_image.jpg_url.value_counts() df_image[df_image.jpg_url == "https://pbs.twimg.com/media/CtVAvX-WIAAcGTf.jpg"] ###Output _____no_output_____ ###Markdown ---------------------------- ###Code df_tweet.info() df_tweet ###Output _____no_output_____ ###Markdown Quality df_archive - Missing data.Some columns refer to the reply (in_reply_to_status_id, in_reply_to_user_id) are incomplete and are not needed. These columns can be droped. - Missing data.Some columns refer to the retweet (retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp) are incomplete and are not needed. These columns can be droped. - Missing data. The expanded_urls column is incomplete and is not needed. This column can be droped. - Incorrect data type in timestamp. Should be datetime; - The numerator and denominator values must be refined. df_image - 2356 tweets in df_archive, but only 2075 tweets in df_image => 281 missing data - The breeds of dogs names in p1, p2 and p3 columns are not standardized. 
All names should be lowercase - Join the p1, p2, p3 columns into a new one and p1_conf, p2_conf, p3_conf columns into another - Some tweets have the jpg_url duplicated df_tweet - 2356 tweets in df_archive, but only 2331 tweets in df_tweet => 25 missing data Tidiness df_archive - doggo, floofer, pupper, puppo columns should be one column (one variable in four columns). df_archive, df_image and df_tweet should be merged Cleaning data ###Code df_archive_clean = df_archive.copy() df_image_clean = df_image.copy() df_tweet_clean = df_tweet.copy() ###Output _____no_output_____ ###Markdown DefineMissing data. Some columns referring to the reply tweet (in_reply_to_status_id, in_reply_to_user_id) are not needed and can be dropped. Code ###Code # Dropping the in_reply_to_status_id, in_reply_to_user_id columns # Note: axis=1 denotes that we are referring to a column, not a row df_archive_clean = df_archive_clean.drop("in_reply_to_status_id", axis = 1) df_archive_clean = df_archive_clean.drop("in_reply_to_user_id", axis = 1) ###Output _____no_output_____ ###Markdown Test ###Code df_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 15 columns): tweet_id 2356 non-null int64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(2), int64(3), object(10) memory usage: 276.2+ KB ###Markdown DefineMissing data. Some columns referring to the retweet (retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp) are not needed and can be dropped.
Code ###Code # Dropping the retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp columns df_archive_clean = df_archive_clean.drop("retweeted_status_id", axis = 1) df_archive_clean = df_archive_clean.drop("retweeted_status_user_id", axis = 1) df_archive_clean = df_archive_clean.drop("retweeted_status_timestamp", axis = 1) ###Output _____no_output_____ ###Markdown Test ###Code df_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 12 columns): tweet_id 2356 non-null int64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: int64(3), object(9) memory usage: 221.0+ KB ###Markdown DefineMissing data. The expanded_urls column is incomplete and is not needed.The source column, in turn, will not be used.These columns can be dropped. ###Code # Dropping the expanded_urls column # Note: axis=1 denotes that we are referring to a column, not a row df_archive_clean = df_archive_clean.drop("expanded_urls", axis = 1) df_archive_clean = df_archive_clean.drop("source", axis = 1) ###Output _____no_output_____ ###Markdown Test ###Code df_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 10 columns): tweet_id 2356 non-null int64 timestamp 2356 non-null object text 2356 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: int64(3), object(7) memory usage: 184.2+ KB ###Markdown DefineIncorrect data type in timestamp. Should be datetime. 
###Code type(df_archive_clean["timestamp"][0]) ###Output _____no_output_____ ###Markdown Code ###Code df_archive_clean["timestamp"] = pd.to_datetime(df_archive_clean["timestamp"]) ###Output _____no_output_____ ###Markdown Test ###Code type(df_archive_clean["timestamp"][0]) df_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 10 columns): tweet_id 2356 non-null int64 timestamp 2356 non-null datetime64[ns, UTC] text 2356 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: datetime64[ns, UTC](1), int64(3), object(6) memory usage: 184.2+ KB ###Markdown Definedoggo, floofer, pupper, puppo columns should be one column (one variable in four columns). Let's replace the None values with blanks, concatenate all four columns into a single dog_stage column, consolidate multiple dog stages, and drop the doggo, floofer, pupper, puppo columns.
Code ###Code # Replacing the None values with blank '' df_archive_clean.doggo.replace("None", '', inplace = True) df_archive_clean.floofer.replace("None", '', inplace = True) df_archive_clean.pupper.replace("None", '', inplace = True) df_archive_clean.puppo.replace("None", '', inplace = True) # Concatenating all 4 columns to 1 column dog_stage df_archive_clean['dog_stage'] = df_archive_clean['doggo'] + df_archive_clean['floofer'] + df_archive_clean['pupper'] + df_archive_clean['puppo'] df_archive_clean.dog_stage.value_counts() # Replacing multiple dog stages # doggopupper for doggo, pupper # doggofloofer for doggo, floofer # doggopuppo for doggo, puppo df_archive_clean.dog_stage.replace("doggopupper", "doggo, pupper", inplace = True) df_archive_clean.dog_stage.replace("doggofloofer", "doggo, floofer", inplace = True) df_archive_clean.dog_stage.replace("doggopuppo", "doggo, puppo", inplace = True) # Dropping the doggo, floofer, pupper, puppo columns df_archive_clean = df_archive_clean.drop("doggo", axis = 1) df_archive_clean = df_archive_clean.drop("floofer", axis = 1) df_archive_clean = df_archive_clean.drop("pupper", axis = 1) df_archive_clean = df_archive_clean.drop("puppo", axis = 1) ###Output _____no_output_____ ###Markdown Test ###Code df_archive_clean.info() df_archive_clean.dog_stage.value_counts() ###Output _____no_output_____ ###Markdown DefineThe dog breed names in the p1, p2 and p3 columns are not standardized. All names should be lowercase ###Code df_image_clean.info() df_image_clean.head() ###Output _____no_output_____ ###Markdown Code ###Code # Converting strings in p1, p2, p3 to lower case df_image_clean["p1"] = df_image_clean["p1"].str.lower() df_image_clean["p2"] = df_image_clean["p2"].str.lower() df_image_clean["p3"] = df_image_clean["p3"].str.lower() ###Output _____no_output_____ ###Markdown Test ###Code df_image_clean.head() ###Output _____no_output_____ ###Markdown DefineSome tweets have the jpg_url duplicated in df_image_clean.
Delete the duplicated tweets ###Code sum(df_image_clean.jpg_url.duplicated()) ###Output _____no_output_____ ###Markdown Code ###Code # Dropping the jpg_url duplicated, keeping the last one df_image_clean = df_image_clean.drop_duplicates(subset = ["jpg_url"], keep = "last") ###Output _____no_output_____ ###Markdown Test ###Code sum(df_image_clean.jpg_url.duplicated()) df_image_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2009 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2009 non-null int64 jpg_url 2009 non-null object img_num 2009 non-null int64 p1 2009 non-null object p1_conf 2009 non-null float64 p1_dog 2009 non-null bool p2 2009 non-null object p2_conf 2009 non-null float64 p2_dog 2009 non-null bool p3 2009 non-null object p3_conf 2009 non-null float64 p3_dog 2009 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 162.8+ KB ###Markdown DefineJoin the p1, p2, p3 columns into a new one and p1_conf, p2_conf, p3_conf columns into another. Image Predictions FileAs stated in Project Motivation, p1 is the algorithm's #1 prediction for the image in the tweet, p1_conf is how confident the algorithm is in its #1 prediction, and p1_dog is whether or not the #1 prediction is a breed of dog. p1_conf is always higher than p2_conf, which in turn is higher than p3_conf. p1 may fail to find a dog in a picture (p1_dog = False) while p2 finds one (p2_dog = True), in which case p2 gains relevance in the prediction process. ###Code df_image_clean.info() df_image_clean[["tweet_id", "p1", "p1_conf", "p1_dog", "p2", "p2_conf", "p2_dog", "p3", "p3_conf", "p3_dog"]].head(20) ###Output _____no_output_____ ###Markdown Code ###Code # Storing, among p1/p2/p3, the highest-confidence prediction that identified a dog (pX_dog = True) # Storing the matching confidence level p1_conf, p2_conf or p3_conf.
prediction = [] confidence = [] def prediction_summarize(df): if df["p1_dog"]: prediction.append(df["p1"]) confidence.append(df["p1_conf"]) elif df["p2_dog"]: prediction.append(df["p2"]) confidence.append(df["p2_conf"]) elif df["p3_dog"]: prediction.append(df["p3"]) confidence.append(df["p3_conf"]) else: prediction.append("unidentifiable") confidence.append(0) # Applying a function along an axis of the DataFrame df_image_clean.apply(prediction_summarize, axis = 1) # Creating two new columns in df_image_clean df_image_clean["prediction"] = prediction df_image_clean["confidence"] = confidence df_image_clean.info() # Dropping the p1, p2, p3, p1_conf, p2_conf, p3_conf, p1_dog, p2_dog, p3_dog columns df_image_clean = df_image_clean.drop(["p1", "p2", "p3", "p1_conf", "p2_conf", "p3_conf", "p1_dog", "p2_dog", "p3_dog"], axis = 1) ###Output _____no_output_____ ###Markdown Test ###Code df_image_clean.info() df_image_clean.head(20) df_image_clean.prediction.value_counts() ###Output _____no_output_____ ###Markdown Define:The numerator and denominator values must be refined. Following what was instructed in Project Motivation, the numerator and the denominator are not 100% correct and need to be revised.
Decimal Numbers Code ###Code # Increasing the width of the text column to be able to read the entire tweet # Using regular expression, we can identify all the numerators that are decimal numbers with pd.option_context("max_colwidth", 200): display(df_archive_clean[["tweet_id", "text", "rating_numerator", "rating_denominator"]] [df_archive_clean["text"].str.contains(r"([0-9]+[0-9.]*\/[0-9]+[0-9]*)")]) # As we found decimal numbers in numerators, let's change the data type in rating_numerator and rating_denominator to float df_archive_clean["rating_numerator"] = df_archive_clean["rating_numerator"].astype(float) df_archive_clean["rating_denominator"] = df_archive_clean["rating_denominator"].astype(float) df_archive_clean.info() # Using regular expression, we can extract the rating from text and store in a new column rating df_archive_clean["rating"] = df_archive_clean.text.str.extract(r'([0-9]+[0-9.]*\/[0-9]+[0-9]*)', expand = True) # Splitting the rating column into two float columns df_archive_clean[["rating_numerator", "rating_denominator"]] = df_archive_clean["rating"].str.split('/',expand = True).astype(float) # Dropping the rating column df_archive_clean = df_archive_clean.drop("rating", axis = 1) ###Output _____no_output_____ ###Markdown Test ###Code df_archive_clean.info() # Increasing the width of the text column to be able to read the entire tweet # Using regular expression, we can identify all the numerators that are decimal numbers with pd.option_context("max_colwidth", 200): display(df_archive_clean[["tweet_id", "text", "rating_numerator", "rating_denominator"]] [df_archive_clean["text"].str.contains(r"([0-9]+[0-9.]*\/[0-9]+[0-9]*)")]) ###Output _____no_output_____ ###Markdown Denominator different than 10.0 Code ###Code # Searching for denominator different than 10.0 rating_to_fix = df_archive_clean.query('rating_denominator != 10.0') # Increasing the width of the text column to be able to read the entire tweet with pd.option_context("max_colwidth",
200): display(rating_to_fix[["tweet_id", "text", "rating_numerator", "rating_denominator"]]) # Let's correct the denominator and the numerator too df_archive_clean.loc[(df_archive_clean.tweet_id == 666287406224695296),"rating_denominator"] = 10.0 df_archive_clean.loc[(df_archive_clean.tweet_id == 666287406224695296),"rating_numerator"] = 9.0 df_archive_clean.loc[(df_archive_clean.tweet_id == 722974582966214656),"rating_denominator"] = 10.0 df_archive_clean.loc[(df_archive_clean.tweet_id == 722974582966214656),"rating_numerator"] = 13.0 df_archive_clean.loc[(df_archive_clean.tweet_id == 716439118184652801),"rating_denominator"] = 10.0 df_archive_clean.loc[(df_archive_clean.tweet_id == 716439118184652801),"rating_numerator"] = 11.0 df_archive_clean.loc[(df_archive_clean.tweet_id == 740373189193256964),"rating_denominator"] = 10.0 df_archive_clean.loc[(df_archive_clean.tweet_id == 740373189193256964),"rating_numerator"] = 14.0 df_archive_clean.loc[(df_archive_clean.tweet_id == 682962037429899265),"rating_denominator"] = 10.0 df_archive_clean.loc[(df_archive_clean.tweet_id == 682962037429899265),"rating_numerator"] = 10.0 df_archive_clean.loc[(df_archive_clean.tweet_id == 835246439529840640),"rating_denominator"] = 10.0 df_archive_clean.loc[(df_archive_clean.tweet_id == 835246439529840640),"rating_numerator"] = 13.0 # Deleting the tweets without a rating df_archive_clean = df_archive_clean[df_archive_clean.tweet_id != 832088576586297345] df_archive_clean = df_archive_clean[df_archive_clean.tweet_id != 810984652412424192] ###Output _____no_output_____ ###Markdown Test ###Code df_archive_clean.info() # Increasing the width of the text column to be able to read the entire tweet # Using regular expression, we can identify all tweets that have Bretagne in text with pd.option_context("max_colwidth", 200): display(df_archive_clean[["tweet_id", "text", "rating_numerator", "rating_denominator"]] [df_archive_clean["text"].str.contains(r"(Bretagne.)")]) ###Output
_____no_output_____ ###Markdown - 740373189193256964 => original tweet - 775096608509886464 => RT Define:We can see that when a tweet is actually a retweet (RT), its text starts with "RT". Following the Project Details, we don't want to analyse RTs. Let's find all RTs and delete them. Code ###Code # Increasing the width of the text column to be able to read the entire tweet # Using regular expression, we can identify all the tweets that have "RT" in text with pd.option_context("max_colwidth", 200): display(df_archive_clean[["tweet_id", "text", "rating_numerator", "rating_denominator"]] [df_archive_clean["text"].str.contains(r"RT\s")]) # Deleting the RTs found df_archive_clean = df_archive_clean[~df_archive_clean.text.str.contains(r"RT\s")] ###Output _____no_output_____ ###Markdown Test ###Code df_archive_clean.info() # Increasing the width of the text column to be able to read the entire tweet # Using regular expression, we can identify all the tweets that have "RT" in text with pd.option_context("max_colwidth", 200): display(df_archive_clean[["tweet_id", "text", "rating_numerator", "rating_denominator"]] [df_archive_clean["text"].str.contains(r"RT\s")]) ###Output _____no_output_____ ###Markdown ImportantNumerators and denominators were extracted from the text column, revealing a unique complexity, as we do not know the data pattern used in rating. There may not even be a data pattern. Trying to find a data pattern, we found other problems in the process that have also been corrected, in order to improve the quality of the data.
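Since the rating extraction above hinges entirely on one regular expression, a small standalone check of that pattern against representative tweet texts can help confirm it behaves as intended. This is a sketch; the sample strings below are made up for illustration:

```python
import re

# The same pattern used above to pull "numerator/denominator" ratings from tweet text
RATING_RE = re.compile(r"([0-9]+[0-9.]*\/[0-9]+[0-9]*)")

# Hypothetical sample texts illustrating the cases discussed above
samples = [
    "This is Bella. She hopes her smile made you smile. 13.5/10",  # decimal numerator
    "After so many requests, here is Buddy. 9/10",                 # plain rating
    "Happy 4th of July from this pupper! 12/10",                   # other digits near the rating
]

for text in samples:
    match = RATING_RE.search(text)
    print(match.group(1) if match else None)
```

Note that a date-like fragment such as "24/7" in the text would also match this pattern, which is exactly the kind of edge case the manual corrections above address.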
Define:df_archive, df_image and df_tweet should be merged ###Code df_archive_clean.info() df_image_clean.info() df_tweet_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 3 columns): tweet_id 2331 non-null int64 favorites 2331 non-null int64 retweets 2331 non-null int64 dtypes: int64(3) memory usage: 54.8 KB ###Markdown Code ###Code df_twitter = pd.merge(left = df_archive_clean, right = df_tweet_clean, left_on = "tweet_id", right_on = "tweet_id", how = "inner") df_twitter = df_twitter.merge(df_image_clean, on = "tweet_id", how = "inner") df_twitter.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1919 entries, 0 to 1918 Data columns (total 13 columns): tweet_id 1919 non-null int64 timestamp 1919 non-null datetime64[ns, UTC] text 1919 non-null object rating_numerator 1919 non-null float64 rating_denominator 1919 non-null float64 name 1919 non-null object dog_stage 1919 non-null object favorites 1919 non-null int64 retweets 1919 non-null int64 jpg_url 1919 non-null object img_num 1919 non-null int64 prediction 1919 non-null object confidence 1919 non-null float64 dtypes: datetime64[ns, UTC](1), float64(3), int64(4), object(5) memory usage: 209.9+ KB ###Markdown Storing data ###Code # Storing the final dataframe df_twitter as a comma-separated values (csv) file named twitter_archive_master.csv df_twitter.to_csv("twitter_archive_master.csv", index = False, encoding = "utf-8") ###Output _____no_output_____ ###Markdown Analyzing and Visualizing data ###Code df_twitter.prediction.value_counts() # Counting the prediction column values to find the most popular dog breed # Let's slice to get the top 20 dog breeds, excluding the unidentifiable ones df_dog = df_twitter.prediction.value_counts()[1:21] df_dog # Setting up the figure plt.figure(figsize = (20,10)) sns.set_color_codes("dark") # Plotting df_dog.plot.bar(color = "blue", alpha = 0.6) # Setting up the title and axes label
plt.title("The most posted dog breed on WeRateDogs", fontsize = 15) plt.xlabel("Breeds of Dogs", fontsize = 12) plt.ylabel("Dog Count", fontsize = 12); plt.savefig("most_posted.png", dpi = 400) ###Output _____no_output_____ ###Markdown InsightGolden retriever is the most posted dog breed on WeRateDogs, followed by labrador retriever, pembroke, chihuahua, pug, etc. ###Code # Copying and sorting by favorites df_favorites = df_twitter.copy() df_favorites.sort_values(by = "favorites", ascending = False, inplace = True) df_favorites = df_favorites[:20] df_favorites[["favorites", "prediction", "tweet_id"]].head() # Setting up the figure plt.figure(figsize = (20,10)) sns.set_color_codes("dark") # Plotting sns.barplot(x = "favorites", y = "prediction", data = df_favorites, color = "green", label = "Favorites", alpha = 0.6, ci = None, estimator = max) # Setting up the title and axes label plt.title("The most popular dog breed on WeRateDogs: favorites", fontsize = 15) plt.xlabel("Favorites Count", fontsize = 12) plt.ylabel(" ") plt.legend(); plt.savefig("favorites.png", dpi = 400) ###Output _____no_output_____ ###Markdown InsightWhen we search for the tweet that has the largest number of favorites, we find one with a photo of a labrador retriever that has more than 158,000 favs. The labrador retriever is WeRateDogs' most popular dog breed in counting the number of favorites. The list goes on with lakeland terrier (134,811 favs), french bulldog (117,666 favs), eskimo dog (116,821 favs), english springer (100,328 favs), etc.
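The copy-sort-slice approach above works; an equivalent, arguably more direct way to get the highest favorite count per breed is a groupby aggregation. A minimal sketch on toy data (the frame below is a stand-in for df_twitter; only the breed for 158,000 favorites comes from the analysis above, the other numbers are illustrative):

```python
import pandas as pd

# Toy stand-in for df_twitter with just the two relevant columns
df = pd.DataFrame({
    "prediction": ["labrador_retriever", "labrador_retriever", "french_bulldog", "pug"],
    "favorites":  [158000, 42000, 117666, 15000],
})

# Highest favorite count observed per breed, most-favorited breed first
top_favs = df.groupby("prediction")["favorites"].max().sort_values(ascending=False)
print(top_favs)
```

This avoids copying and mutating the full frame just to read off per-breed maxima.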
###Code # Copying and sorting by retweets df_retweets = df_twitter.copy() df_retweets.sort_values(by = "retweets", ascending = False, inplace = True) df_retweets = df_retweets[:20] df_retweets[["retweets", "prediction", "tweet_id"]].head() # Setting up the figure plt.figure(figsize = (20,10)) sns.set_color_codes("dark") # Plotting sns.barplot(x = "retweets", y = "prediction", data = df_retweets, color = "red", label = "Retweets", alpha = 0.6, ci = None, estimator = max) # Setting up the title and axes label plt.title("The most popular dog breed on WeRateDogs: retweets", fontsize = 15) plt.xlabel("Retweets Count", fontsize = 12) plt.ylabel(" ") plt.legend(); plt.savefig("retweets.png", dpi = 400) ###Output _____no_output_____ ###Markdown InsightWhen we consider the number of retweets, we arrive at a tweet of a labrador retriever with 78,786 RTs.The labrador retriever is not only the most popular breed of dog in WeRateDogs when counting favorites, it is also the most popular in the number of retweets. The same tweet (744234799360020481) received the highest number of favs and highest number of RTs. Followed by eskimo dog (58,422 RTs), lakeland terrier (44,412 RTs), english springer (41,068 RTs), french bulldog (33,352 RTs), etc. 
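To pinpoint the single most-retweeted tweet without sorting the whole frame, idxmax can be used. A sketch with stand-in data (the first ID and the retweet counts are the ones quoted above; the other two IDs are placeholders):

```python
import pandas as pd

# Stand-in frame; only the first tweet_id is real, the rest are placeholders
df = pd.DataFrame({
    "tweet_id": ["744234799360020481", "0000000000000000001", "0000000000000000002"],
    "retweets": [78786, 58422, 44412],
})

# Row holding the maximum retweet count, found without sorting the whole frame
top_row = df.loc[df["retweets"].idxmax()]
print(top_row["tweet_id"], top_row["retweets"])
```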
###Code # Copying and sorting by rating_numerator, skipping the first three outlier rows df_ratings = df_twitter.copy() df_ratings.sort_values(by = "rating_numerator", ascending = False, inplace = True) df_ratings = df_ratings[3:23] df_ratings[["rating_numerator", "prediction","img_num", "tweet_id"]].head() # Setting up the figure plt.figure(figsize = (20,10)) sns.set_color_codes("dark") # Plotting sns.barplot(x = "rating_numerator", y = "prediction", data = df_ratings, color = "yellow", label = "Rating", alpha = 0.6, ci = None, estimator = max) # Setting up the title and axes label plt.title("The most popular dog breed on WeRateDogs: rating numerator", fontsize = 15) plt.xlabel("Rating", fontsize = 12) plt.ylabel(" ") plt.legend(); plt.savefig("rating.png", dpi = 400) ###Output _____no_output_____ ###Markdown InsightWhen the criterion is the rating, labrador retriever is the breed of dog that has the highest grade. This highest-rated tweet was posted with a single photo and surpassed other tweets with more photos. The labrador retriever is the big winner in popularity on WeRateDogs. The list goes on with chow, golden retriever, soft coated wheaten terrier, (another) golden retriever, etc. No tweets that were present in the previous lists (favorites and retweets) appear here.
###Code # Setting up the figure plt.figure(figsize = (16,8)) # Plotting the Heatmap sns.heatmap(df_twitter[["confidence", "rating_numerator", "favorites", "retweets"]].corr(), annot = True, cmap = "Blues", linewidths = .5, vmin = -1, vmax = 1) # Setting up the title plt.title("Correlation Heatmap", fontsize = 15); plt.savefig("correlation.png", dpi = 400) # Setting up the figure plt.figure(figsize = (10,6)) sns.set_color_codes("dark") # Plotting sns.scatterplot(x = "favorites", y = "retweets", data = df_twitter, color = "blue", alpha = 0.6) # Setting up the title and axes label plt.title("Scatterplot: Favorites x Retweets", fontsize = 15) plt.xlabel("Favorites", fontsize = 12) plt.ylabel("Retweets", fontsize = 12); plt.savefig("correlation2.png", dpi = 400) ###Output _____no_output_____ ###Markdown InsightThrough the heatmap graph we identified that the favorites and retweets variables have the highest positive correlation of the dataframe (0.92). The scatterplot confirms the correlation. ###Code df_twitter.describe() # Setting up the figure plt.figure(figsize = (16,8)) # Plotting the Line chart line_chart_fav = sns.lineplot(x = "timestamp", y = "favorites", data = df_twitter, linewidth = 2.5, label = "Favorites", alpha = 0.6, color = "green") line_chart_ret = sns.lineplot(x = "timestamp", y = "retweets", data = df_twitter, linewidth = 2.5, label = "Retweets", alpha = 0.6, color = "red") # Setting up the title and axes label plt.title("Favorites and Retweets over time", fontsize = 15) plt.xlabel("Timestamp", fontsize = 12) plt.ylabel("Count", fontsize = 12); plt.savefig("fav_rt.png", dpi = 400) ###Output C:\Users\Felipe\Anaconda3\lib\site-packages\seaborn\relational.py:792: FutureWarning: Converting timezone-aware DatetimeArray to timezone-naive ndarray with 'datetime64[ns]' dtype. In the future, this will return an ndarray with 'object' dtype where each element is a 'pandas.Timestamp' with the correct 'tz'. To accept the future behavior, pass 'dtype=object'. 
To keep the old behavior, pass 'dtype="datetime64[ns]"'. x, y = np.asarray(x), np.asarray(y) ###Markdown Wrangle & Analyze Data Shakhawat Hassan ###Code # Import libraries import tweepy from tweepy import OAuthHandler import json from timeit import default_timer as timer import pandas as pd import numpy as np import seaborn as sns import urllib.request import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown Gather Data ###Code # read twitter-archive-enhanced.csv df1 = pd.read_csv('twitter-archive-enhanced.csv') # read image-predictions.tsv df2 = pd.read_csv('image-predictions.tsv', sep = '\t') # read json file df3 = pd.read_json('tweet-json.txt', lines =True) # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # These are hidden to comply with Twitter's API terms and conditions consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth) ###Output _____no_output_____ ###Markdown Assess Data ###Code df1.info() # inspecting df2.info() # inspecting data df2.head(10) df3.info() # inspecting df3 df3.head() df1.describe() df2.describe() df3.describe() all_columns = pd.Series(list(df1) + list(df2) + list(df3)) all_columns[all_columns.duplicated()] ###Output _____no_output_____ ###Markdown Quality- Drop unnecessary columns in df1- Drop unnecessary columns in df2- Extract the columns in df3 which are needed ('id', 'retweet_count', 'favorite_count')- Change 'id' name to 'tweet_id' in df3- Change 'tweet_id' data type to string in df1- Change 'tweet_id' data type to string in df2- Change 'id' data type to string in df3- Remove "_" and capitalize the first letter for p1, p2, and p3 in df2- Change string to datetime for timestamp in df1- Capitalize first letter in 'names' in df1Tidiness- Merge the dog stage columns into one single column- Merge 3
columns into one column by 'tweet_id' Data Cleaning Quality Issue Define Drop unnecessary columns in df1_clean ###Code df1.columns ###Output _____no_output_____ ###Markdown Code ###Code df1_clean = df1.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'source', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'expanded_urls'], axis =1) ###Output _____no_output_____ ###Markdown Test ###Code df1_clean.columns ###Output _____no_output_____ ###Markdown Define Drop unnecessary columns in df2_clean Code ###Code df2_clean = df2.drop(['jpg_url', 'img_num'], axis= 1) ###Output _____no_output_____ ###Markdown Test ###Code df2_clean.columns ###Output _____no_output_____ ###Markdown DefineExtract the columns in df3 which are needed Code ###Code # extracting columns which are needed df3_clean = pd.DataFrame(df3, columns = ['id', 'retweet_count', 'favorite_count']) ###Output _____no_output_____ ###Markdown Test ###Code df3_clean.columns ###Output _____no_output_____ ###Markdown Define Change 'id' name to 'tweet_id' in df3_clean Code ###Code df3_clean.rename(columns = {'id': 'tweet_id'}, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code df3_clean.columns ###Output _____no_output_____ ###Markdown Define Change the tweet_id data type with .astype in df1_clean, df2_clean, and df3_clean Code ###Code df1_clean.tweet_id = df1_clean.tweet_id.astype(str) df2_clean.tweet_id = df2_clean.tweet_id.astype(str) df3_clean.tweet_id = df3_clean.tweet_id.astype(str) ###Output _____no_output_____ ###Markdown Test ###Code df1_clean.tweet_id.head() df2_clean.tweet_id.head() df3_clean.tweet_id.head() ###Output _____no_output_____ ###Markdown DefineRemove _ and capitalize the first letter for p1, p2, and p3 in df2 Code ###Code df2_clean = df2_clean.replace('_', ' ', regex = True) df2_clean['p1'] = df2_clean['p1'].str.title() df2_clean['p2'] = df2_clean['p2'].str.title() df2_clean['p3'] = df2_clean['p3'].str.title() ###Output _____no_output_____
###Markdown Test ###Code df2_clean.head(100) ###Output _____no_output_____ ###Markdown DefineChange string to datetime for timestamp in df1_clean Code ###Code # to datetime df1_clean.timestamp = pd.to_datetime(df1_clean.timestamp) ###Output _____no_output_____ ###Markdown Test ###Code df1_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 10 columns): tweet_id 2356 non-null object timestamp 2356 non-null datetime64[ns] text 2356 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: datetime64[ns](1), int64(2), object(7) memory usage: 184.1+ KB ###Markdown DefineCapitalize the first letter of 'name' and the dog stage columns with .title in df1 Code ###Code df1_clean['name'] = df1_clean['name'].str.title() df1_clean['doggo'] = df1_clean['doggo'].str.title() df1_clean['floofer'] = df1_clean['floofer'].str.title() df1_clean['pupper'] = df1_clean['pupper'].str.title() df1_clean['puppo'] = df1_clean['puppo'].str.title() ###Output _____no_output_____ ###Markdown Test ###Code df1_clean['name'] ###Output _____no_output_____ ###Markdown Tidiness Define- Merge all four dog stage columns into one single column Code ###Code df1_clean.replace('None', np.nan, inplace = True) df1_clean['stage'] = df1_clean[['doggo', 'floofer', 'pupper', 'puppo']].fillna('').sum(axis=1).astype(str) # drop the four individual stage columns df1_clean = df1_clean.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis =1) df1_clean['name'] = df1_clean['name'].str.title() df1_clean['stage'].replace('', np.nan, inplace=True) df1_clean['stage'].value_counts() ###Output _____no_output_____ ###Markdown DefineReplace 'DoggoPupper' with 'Doggo', 'DoggoPuppo' with 'Puppo', and 'DoggoFloofer' with 'Floofer' Code ###Code
# DoggoPupper should be Doggo df1_clean.stage = df1_clean.stage.replace('DoggoPupper', 'Doggo') df1_clean.stage = df1_clean.stage.replace('DoggoPuppo', 'Puppo') df1_clean.stage = df1_clean.stage.replace('DoggoFloofer', 'Floofer') ###Output _____no_output_____ ###Markdown Test ###Code df1_clean['stage'].value_counts() df1_clean['name'].value_counts() ###Output _____no_output_____ ###Markdown DefineMerge all three dataframes into a single one by 'tweet_id' Code ###Code df_n = pd.merge(df1_clean, df2_clean, on = 'tweet_id', how ='inner') df_new = pd.merge(df_n, df3_clean, on = 'tweet_id', how ='inner') ###Output _____no_output_____ ###Markdown Test ###Code df_new.columns df_new.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2073 entries, 0 to 2072 Data columns (total 18 columns): tweet_id 2073 non-null object timestamp 2073 non-null datetime64[ns] text 2073 non-null object rating_numerator 2073 non-null int64 rating_denominator 2073 non-null int64 name 1496 non-null object stage 320 non-null object p1 2073 non-null object p1_conf 2073 non-null float64 p1_dog 2073 non-null bool p2 2073 non-null object p2_conf 2073 non-null float64 p2_dog 2073 non-null bool p3 2073 non-null object p3_conf 2073 non-null float64 p3_dog 2073 non-null bool retweet_count 2073 non-null int64 favorite_count 2073 non-null int64 dtypes: bool(3), datetime64[ns](1), float64(3), int64(4), object(7) memory usage: 265.2+ KB ###Markdown Data cleaning has been done! And the data types look fine Visualize and Analyze ###Code df_new.describe() ###Output _____no_output_____ ###Markdown - At the 75th percentile, most dogs score 12 on the rating numerator.- At the 75th percentile, most dogs score 11 on the rating denominator.- There are more favorite counts than retweet counts.
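The 75th-percentile observations above are read off the describe() output; they can also be obtained directly with quantile. A sketch on stand-in values (not the real numerators):

```python
import pandas as pd

# Stand-in numerator values; df_new itself is not reproduced here
ratings = pd.Series([10, 11, 11, 12, 12, 12, 12, 13])

# Read the 75th percentile directly instead of scanning describe() output
print(ratings.quantile(0.75))
```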
Most Popular Names ###Code common_names = df_new['name'].value_counts().nlargest(10) common_names ###Output _____no_output_____ ###Markdown Top 10 dog names ###Code common_names.plot.bar() ###Output _____no_output_____ ###Markdown The placeholder name 'A' (likely a mis-extracted article rather than a real dog name) appears more often than any real name. Dog Stages ###Code dog_stages = df_new['stage'].value_counts() dog_stages ###Output _____no_output_____ ###Markdown Pupper stage has the highest number of dogs ###Code dog_stages.plot.bar() ###Output _____no_output_____ ###Markdown ###Code df_new.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2073 entries, 0 to 2072 Data columns (total 18 columns): tweet_id 2073 non-null object timestamp 2073 non-null datetime64[ns] text 2073 non-null object rating_numerator 2073 non-null int64 rating_denominator 2073 non-null int64 name 1496 non-null object stage 320 non-null object p1 2073 non-null object p1_conf 2073 non-null float64 p1_dog 2073 non-null bool p2 2073 non-null object p2_conf 2073 non-null float64 p2_dog 2073 non-null bool p3 2073 non-null object p3_conf 2073 non-null float64 p3_dog 2073 non-null bool retweet_count 2073 non-null int64 favorite_count 2073 non-null int64 dtypes: bool(3), datetime64[ns](1), float64(3), int64(4), object(7) memory usage: 265.2+ KB ###Markdown Favorite Tweets vs Retweets ###Code df_new['retweet_count'].describe() df_new['favorite_count'].describe() df_new.plot(x = 'retweet_count', y= 'favorite_count' , kind = 'scatter', figsize= (20, 20)) plt.show() ###Output _____no_output_____ ###Markdown Favorite and retweet counts rise together, with favorite counts running consistently higher than retweet counts.
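The scatterplot reading above can be quantified with a single Pearson coefficient. A sketch with toy counts (not the real data), roughly proportional the way favorites and retweets are:

```python
import pandas as pd

# Toy counts standing in for the real favorite_count/retweet_count columns
df = pd.DataFrame({
    "favorite_count": [100, 250, 400, 800, 1600],
    "retweet_count":  [30, 80, 120, 260, 500],
})

# Single Pearson coefficient summarizing the linear relationship in the scatterplot
r = df["favorite_count"].corr(df["retweet_count"])
print(round(r, 3))
```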
Favorite tweets vs Timestamp ###Code df_new.plot(x = 'timestamp', y= 'favorite_count' , kind = 'line', figsize = (20,20)) plt.show() ###Output _____no_output_____ ###Markdown Wrangle and Analyze Data Project Description ###Code import pandas as pd import os import io import requests import numpy as np import json from PIL import Image ###Output _____no_output_____ ###Markdown Gather ###Code # WeRateDogs Twitter archive. df_archive = pd.read_csv('twitter-archive-enhanced.csv') df_archive.head() # Tweet image predictions urlData = requests.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv').content df_images = pd.read_csv(io.StringIO(urlData.decode('utf-8')), delimiter='\t') df_images.head() # Twitter API import tweepy consumer_key = '' consumer_secret = '' access_token = '' access_secret = '' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, parser=tweepy.parsers.JSONParser(), wait_on_rate_limit = True, wait_on_rate_limit_notify = True) # Array of all tweet IDs tweets_id = np.asarray(df_archive['tweet_id']) tweets_id # Get all JSON files from ID, store in list and dump into .txt file.
with open('tweet_json.txt', 'a+', encoding='utf-8') as outfile: for a in tweets_id: try: tweet = api.get_status(a, tweet_mode = 'extended') outfile.write(json.dumps(tweet)) outfile.write('\n') except: pass outfile.close() # Create list from .txt with open('tweet_json.txt') as file: status = [] for line in file: status.append(json.loads(line)) # Create Dataframe from list df_tweets = pd.DataFrame(status, columns = ['id','retweet_count', 'favorite_count']) df_tweets.head() ###Output _____no_output_____ ###Markdown Assess and Clean Quality `Archive` dataframe- Erroneous datatypes (columns timestamp and retweeted_status_timestamp)- Data inside html tags (column source)- Inaccurate denominator, values different from 10 (column rating_denominator)- Inaccurate numerator, has values that are too large (column rating_numerator)- Retweeted tweets.- Missing values (column expanded_urls)- Some sources different than Twitter. `Tweets` dataframe- No issues `Images` dataframe- p1, p2 and p3 columns have underscores between words. Visual Assessment ###Code df_archive.sample(5) df_images.sample(5) df_images.tail() df_tweets.sample(5) ###Output _____no_output_____ ###Markdown Programmatic Assessment ###Code df_archive.info() df_images.info() df_tweets.info() df_archive.describe() df_images.describe() df_tweets.describe() ###Output _____no_output_____ ###Markdown Define - Convert columns timestamp and retweeted_status_timestamp to datetime type. - Remove html link tag in column source. - Replace denominators different than 10. - Remove rows with non standard numerators. - Remove tweets that are retweets. - Remove rows with missing expanded_urls. - Replace _ with space in p1, p2 and p3 columns. - Remove sources different from Twitter.
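One of the Define steps strips the HTML anchor tag from the `source` column. A minimal sketch of the non-greedy `str.extract` approach used for that step (the two sample values below are made up in the archive's anchor-tag format):

```python
import pandas as pd

# Sample 'source' values mimicking the archive's HTML anchor format (invented here)
source = pd.Series([
    '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>',
    '<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>',
])

# Non-greedy capture of the text between the '>' closing the open tag
# and the '<' opening the close tag; expand=False returns a Series
extracted = source.str.extract(r'>(.*?)<', expand=False)
print(extracted.tolist())
```

The non-greedy `.*?` stops at the first `<`, so only the visible label of the link is kept.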
Archive Dataframe ###Code df_clean_archive = df_archive.copy() # Convert columns timestamp and retweeted_status_timestamp to datetime df_clean_archive['timestamp'] = pd.to_datetime(df_archive['timestamp'], format='%Y-%m-%d %H:%M:%S.%f') df_clean_archive['retweeted_status_timestamp'] = pd.to_datetime(df_archive['retweeted_status_timestamp'], format='%Y-%m-%d %H:%M:%S.%f') # Test df_archive.info(), df_clean_archive.info() # Remove <a> link tag in column source df_clean_archive['source'].unique() # Use a regular expression to extract only the text content of the html tag df_clean_archive['source'] = df_clean_archive.source.str.extract(r'>(.*?)<') # Test df_clean_archive['source'].unique() # Replace denominators different than 10. df_clean_archive.query('rating_denominator != 10') # Set all rating denominators to 10. df_clean_archive['rating_denominator'] = 10 # Test df_clean_archive.query('rating_denominator != 10') # Remove non standard numerators remove_id = df_clean_archive.query('rating_numerator > 20') # Remove rows with rating_numerators higher than 20.
df_clean_archive = df_clean_archive[~df_clean_archive.index.isin(remove_id.index)] # Test df_clean_archive.query('rating_numerator > 20') # Remove tweets that are retweets # Verify possible values for column retweeted_status_id df_clean_archive.retweeted_status_id.unique() # Store retweets to be removed (rows where retweeted_status_id is not null) remove_retweet = df_clean_archive[df_clean_archive['retweeted_status_id'].notnull()] # Remove retweets df_clean_archive = df_clean_archive[~df_clean_archive.index.isin(remove_retweet.index)] # Test df_clean_archive[df_clean_archive['retweeted_status_id'].notnull()] # Remove records if expanded_urls column null # Store missing expanded_urls remove_miss_exp_url = df_clean_archive[df_clean_archive['expanded_urls'].isnull()] # Remove missing expanded_urls df_clean_archive = df_clean_archive[~df_clean_archive.index.isin(remove_miss_exp_url.index)] # Test df_clean_archive[df_clean_archive['expanded_urls'].isnull()] # Remove record with source Vine - Make a Scene non_twitter_source = df_clean_archive.query('source == "Vine - Make a Scene"') df_clean_archive = df_clean_archive[~df_clean_archive.index.isin(non_twitter_source.index)] # Test df_clean_archive.source.unique() ###Output _____no_output_____ ###Markdown Images Dataframe ###Code df_clean_images = df_images.copy() df_clean_images.columns # Replace underscore with space and capitalize df_clean_images['p1'] = df_clean_images.p1.str.replace('_', ' ') df_clean_images['p1'] = df_clean_images.p1.str.capitalize() df_clean_images['p2'] = df_clean_images.p2.str.replace('_', ' ') df_clean_images['p2'] = df_clean_images.p2.str.capitalize() df_clean_images['p3'] = df_clean_images.p3.str.replace('_', ' ') df_clean_images['p3'] = df_clean_images.p3.str.capitalize() df_clean_images.sample(5) ###Output _____no_output_____ ###Markdown Tidiness `Archive` dataframe- Columns doggo, floofer, pupper and puppo store values of a single variable (dog stage) spread across both column names and rows. - Since retweeted tweets will not be used, retweeted columns are useless.
- In some cases, a dog might have two dog stages. `Tweets` dataframe `Images` dataframe Define - Create new column dog_stages - Remove columns retweeted_status_id, retweeted_status_user_id and retweeted_status_timestamp. - Remove dogs with more than one dog stage. - Merge archive data frame with images dataframe - Merge the new data frame with tweets dataframe Archive Dataframe ###Code df_clean_archive.sample(5) # Remove columns related to retweets df_clean_archive.drop('retweeted_status_id', axis=1, inplace=True) df_clean_archive.drop('retweeted_status_user_id', axis=1, inplace=True) df_clean_archive.drop('retweeted_status_timestamp', axis=1, inplace=True) # Test df_clean_archive.head() # Verify dog stages values print(df_clean_archive.doggo.unique()) print(df_clean_archive.floofer.unique()) print(df_clean_archive.pupper.unique()) print(df_clean_archive.puppo.unique()) df_clean_archive['doggo'].replace('None', '', inplace=True) df_clean_archive['floofer'].replace('None', '', inplace=True) df_clean_archive['pupper'].replace('None', '', inplace=True) df_clean_archive['puppo'].replace('None', '', inplace=True) # Verify dog stages values after replace print(df_clean_archive.doggo.unique()) print(df_clean_archive.floofer.unique()) print(df_clean_archive.pupper.unique()) print(df_clean_archive.puppo.unique()) # Create new column dog_stages df_clean_archive['dog_stages'] = (df_clean_archive['doggo'] + df_clean_archive['floofer'] + df_clean_archive['pupper'] + df_clean_archive['puppo'] ) df_clean_archive.dog_stages.unique() # Drop columns doggo, puppo, pupper and floofer df_clean_archive.drop('doggo', axis=1, inplace=True) df_clean_archive.drop('floofer', axis=1, inplace=True) df_clean_archive.drop('pupper', axis=1, inplace=True) df_clean_archive.drop('puppo', axis=1, inplace=True) # Test df_clean_archive.sample(3) # Dogs with more than one dog stage print('before removing', df_clean_archive.dog_stages.unique()) # Remove doggopuppo, doggofloofer, doggopupper remove_dogstage
= df_clean_archive.query('dog_stages == "doggopuppo" or dog_stages == "doggofloofer" or dog_stages == "doggopupper"') df_clean_archive = df_clean_archive[~df_clean_archive.index.isin(remove_dogstage.index)] df_clean_archive['dog_stages'].replace('', 'NaN', inplace=True) # Test print('after removing', df_clean_archive.dog_stages.unique()) ###Output before removing ['' 'doggo' 'puppo' 'pupper' 'floofer' 'doggopuppo' 'doggofloofer' 'doggopupper'] after removing ['NaN' 'doggo' 'puppo' 'pupper' 'floofer'] ###Markdown Dataframe Merge ###Code # Merge image dataframe with archive dataframe. Only tweets with images df_new = df_clean_archive.merge(df_clean_images, left_on = 'tweet_id', right_on = 'tweet_id', suffixes=('_archive','_images')) print(df_new.info()) df_new.head() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1964 entries, 0 to 1963 Data columns (total 22 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1964 non-null int64 1 in_reply_to_status_id 21 non-null float64 2 in_reply_to_user_id 21 non-null float64 3 timestamp 1964 non-null datetime64[ns, UTC] 4 source 1964 non-null object 5 text 1964 non-null object 6 expanded_urls 1964 non-null object 7 rating_numerator 1964 non-null int64 8 rating_denominator 1964 non-null int64 9 name 1964 non-null object 10 dog_stages 1964 non-null object 11 jpg_url 1964 non-null object 12 img_num 1964 non-null int64 13 p1 1964 non-null object 14 p1_conf 1964 non-null float64 15 p1_dog 1964 non-null bool 16 p2 1964 non-null object 17 p2_conf 1964 non-null float64 18 p2_dog 1964 non-null bool 19 p3 1964 non-null object 20 p3_conf 1964 non-null float64 21 p3_dog 1964 non-null bool dtypes: bool(3), datetime64[ns, UTC](1), float64(5), int64(4), object(9) memory usage: 312.6+ KB None ###Markdown Final Dataframe ###Code # Merge df_new with df_tweets df = df_new.merge(df_tweets, left_on = 'tweet_id', right_on = 'id') df.drop(['id'], axis=1, inplace=True) print(df.info()) df.head() # Save final 
dataframe to .csv df.to_csv('twitter_archive_master.csv', index=False) ###Output _____no_output_____ ###Markdown Gather ###Code #Imports libraries import pandas as pd import numpy as np import tweepy # Placeholder credentials; replace with real API keys auth = tweepy.OAuthHandler('aaaaaaaaaaaaaa', 'bbbbbbbbbbbbbbbbb') auth.set_access_token('ccccccccccccccccc', 'dddddddddddddddddddddddddddd') api = tweepy.API(auth) public_tweets = api.home_timeline() for tweet in public_tweets: print(tweet.text) #https://tweepy.readthedocs.io/en/latest/getting_started.html #import tweepy """auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth) public_tweets = api.home_timeline() for tweet in public_tweets: print(tweet.text) """ #Imports datasets we_rate_dogs = pd.read_csv("twitter-archive-enhanced.csv") image_prediction = pd.read_csv("image-predictions.tsv", sep='\t') """Using request https://2.python-requests.org//en/master/ >>> r = requests.get('https://api.github.com/user', auth=('user', 'pass')) >>> r.status_code 200 >>> r.headers['content-type'] 'application/json; charset=utf8' >>> r.encoding 'utf-8' >>> r.text u'{"type":"User"...' >>> r.json() {u'private_gists': 419, u'total_private_repos': 77, ...} """ ###Output _____no_output_____ ###Markdown Assess ###Code we_rate_dogs image_prediction ###Output _____no_output_____ ###Markdown Data Wrangling: WeRateDogs Twitter Data ###Code #importing required libraries import pandas as pd import numpy as np import tweepy import requests import json import re import os import datetime import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 1.
Data Wrangling 1.1 Twitter archive file ###Code #save twitter-archive-enhanced.csv file in twitter_archive dataframe twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') twitter_archive.head() ###Output _____no_output_____ ###Markdown 1.2 Image Predictions file ###Code #Downloading and saving image predictions data using Requests url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' r = requests.get(url) r file_name = url.split('/')[-1] file_name if not os.path.isfile(file_name): with open(file_name, 'wb') as f: f.write(r.content) #save image-prediction.tsv file in image_prediction dataframe image_prediction = pd.read_csv('image-predictions.tsv', sep='\t') image_prediction.head() ###Output _____no_output_____ ###Markdown 1.3 Twitter API Data for the favourites and retweets counts ###Code consumer_key = '****************************' consumer_secret = '*************************' access_token = '****************************' access_secret = '***************************' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) #Experimenting to extract one tweet's id information exp_tweet = api.get_status(twitter_archive.tweet_id[1000], tweet_mode='extended') content = exp_tweet._json content #added for experimenting new way of getting the data exp_tweet.full_text exp_tweet.retweet_count, exp_tweet.id, exp_tweet.favorite_count #checking the keys of the test tweet content.keys() #Getting the retweet and favourite counts content['retweet_count'], content['id'], content['favorite_count'] #investigating the user information content['user'].keys() content['user']['followers_count'], content['user']['location'] ###Output _____no_output_____ ###Markdown 1.3.1 Quering The Twitter API ###Code #creating a file for the tweets' text data errors = [] if not 
os.path.isfile('tweet_json.txt'): #create the file and write on it with open ('tweet_json.txt', 'w') as file: for tweet_id in twitter_archive['tweet_id']: try: status = api.get_status(tweet_id, wait_on_rate_limit=True, wait_on_rate_limit_notify=True, tweet_mode='extended') json.dump(status._json, file) file.write('\n') except Exception as e: print("Error on tweet id {}".format(tweet_id) + ";" + str(e)) errors.append(tweet_id) ###Output Error on tweet id 888202515573088257;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 873697596434513921;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 872668790621863937;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 872261713294495745;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 869988702071779329;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 866816280283807744;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 861769973181624320;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 856602993587888130;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 851953902622658560;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 845459076796616705;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 844704788403113984;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 842892208864923648;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 837366284874571778;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 837012587749474308;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 829374341691346946;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 827228250799742977;[{'code': 144, 'message': 'No status found with that ID.'}] Error 
on tweet id 812747805718642688;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 802247111496568832;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 779123168116150273;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 775096608509886464;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 771004394259247104;[{'code': 179, 'message': 'Sorry, you are not authorized to see this status.'}] Error on tweet id 770743923962707968;[{'code': 144, 'message': 'No status found with that ID.'}] Error on tweet id 759566828574212096;[{'code': 144, 'message': 'No status found with that ID.'}] ###Markdown 1.3.2 Reading the tweet_json.txt ###Code df_list = [] with open('tweet_json.txt', 'r') as file: for line in file: tweet = json.loads(line) tweet_id = tweet['id'] retweet_count = tweet['retweet_count'] fav_count = tweet['favorite_count'] user_count = tweet['user']['followers_count'] df_list.append({'tweet_id':tweet_id, 'retweet_count':retweet_count, 'favorite_count':fav_count, 'user_count':user_count}) api_df = pd.DataFrame(df_list) api_df.head() ###Output _____no_output_____ ###Markdown 2. 
Data Assessment Visual Assessment ###Code twitter_archive image_prediction api_df ###Output _____no_output_____ ###Markdown Programmatic Assessment ###Code #Data types of each column and number of entries twitter_archive.info() sum(twitter_archive['tweet_id'].duplicated()) twitter_archive.describe() print(twitter_archive['doggo'].value_counts()) print(twitter_archive['floofer'].value_counts()) print(twitter_archive['pupper'].value_counts()) print(twitter_archive['puppo'].value_counts()) twitter_archive.rating_denominator.value_counts() twitter_archive['tweet_id'].loc[twitter_archive['rating_denominator']==2] image_prediction.info() sum(image_prediction['tweet_id'].duplicated()) api_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2331 non-null int64 1 retweet_count 2331 non-null int64 2 favorite_count 2331 non-null int64 3 user_count 2331 non-null int64 dtypes: int64(4) memory usage: 73.0 KB ###Markdown Quality issues `twitter_archive` table- Missing values in name column and invalid names less than 2 characters.- By comparing the number of rows in the `image_prediction` and `twitter_archive` tables, we found that many tweets in the `twitter_archive` table have no image.
These rows should be dropped.- NaN values in the 'expanded_urls' column represent tweets with no image and should be dropped.- Some tweets are actually retweets and replies, not original tweets, and have to be deleted.- Some columns represent null values as 'None' not 'NaN'- 'retweeted_status_timestamp' and 'timestamp' should be datetime not object.- Deal with rating_numerator and rating_denominator to make sure they were extracted correctly from the text `image_prediction` table- create 1 column for image prediction and 1 column for confidence level- drop retweets and replies from the table `api_df` table- Keep original tweets only Tidiness issues- values are column names ('doggo','floofer','pupper','puppo') in `twitter_archive` table- Merge `twitter_archive` with `api_df` tables- column headers are values, not variable names in `image_prediction` table 3. Data Cleaning ###Code archive_clean = twitter_archive.copy() image_prediction_clean = image_prediction.copy() api_clean = api_df.copy() ###Output _____no_output_____ ###Markdown 3.1 'retweeted_status_timestamp' and 'timestamp' should be datetime not object in `twitter_archive` table Define- 'retweeted_status_timestamp' and 'timestamp' should be datetime not object.- we should convert the data type of each column from object to datetime Code ###Code archive_clean['timestamp'] = pd.to_datetime(archive_clean['timestamp']) archive_clean['retweeted_status_timestamp'] = pd.to_datetime(archive_clean['retweeted_status_timestamp']) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null datetime64[ns, UTC] 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id
181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null datetime64[ns, UTC] 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 doggo 2356 non-null object 14 floofer 2356 non-null object 15 pupper 2356 non-null object 16 puppo 2356 non-null object dtypes: datetime64[ns, UTC](2), float64(4), int64(3), object(8) memory usage: 313.0+ KB ###Markdown 3.2 Some columns have representations of null values as 'None' not 'NaN' in `twitter_archive` table Define convert 'None' values to "" (empty string) in the columns 'doggo', 'floofer', 'pupper' and 'puppo' in `twitter_archive` table Code ###Code archive_clean['doggo'].replace({"None": ""}, inplace=True) archive_clean['floofer'].replace({"None": ""}, inplace=True) archive_clean['pupper'].replace({"None": ""}, inplace=True) archive_clean['puppo'].replace({"None": ""}, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.sample(5) ###Output _____no_output_____ ###Markdown 3.3 values are column names ('doggo','floofer','pupper','puppo') in `twitter_archive` table Define- concatenate the columns into one column 'dog_breed'- drop the old columns - Replace the empty string with np.nan- if the value of 'dog_breed' combines two stages, make it readable Code ###Code #add columns 'doggo', 'floofer', 'pupper',and 'puppo' to make a new column 'dog_breed' old_columns = ['doggo', 'floofer', 'pupper', 'puppo'] archive_clean['dog_breed'] = archive_clean[old_columns].apply(lambda row: "".join(row.values.astype(str)), axis=1) # drop old columns 'doggo', 'floofer', 'pupper',and 'puppo' archive_clean.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1, inplace=True) #convert empty strings in 'dog_breed' to "NaN" archive_clean['dog_breed'].replace({"": np.nan}, inplace=True) #make inappropriate values more readable #I will do it manually as there is not much data
archive_clean['dog_breed'].replace({"doggopuppo": "doggo-puppo", "doggofloofer": "doggo-floofer", "doggopupper": "doggo-pupper"}, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code #check that all old columns combined successfully in new column 'dog_breed' archive_clean['dog_breed'].value_counts() #make sure old columns deleted archive_clean.info() #check that all empty strings are converted to 'NaN' archive_clean['dog_breed'].isnull().sum() archive_clean.info() #check values changed archive_clean['dog_breed'].value_counts() ###Output _____no_output_____ ###Markdown 3.4 NaN values in the 'expanded_urls' column represent tweets with no image and should be dropped in `twitter_archive` table Define- drop any row with 'NaN' value in column 'expanded_urls' with dropna() Code ###Code #the number of 'NaN' values archive_clean.expanded_urls.isnull().sum() #drop all rows that have a 'NaN' value in 'expanded_urls' archive_clean.dropna(subset=['expanded_urls'], axis=0, inplace=True) #to reset the index without any problem archive_clean.reset_index(drop=True, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code #check if there are any 'NaN' values archive_clean.expanded_urls.isnull().sum() #check the new number of rows after dropping archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2297 entries, 0 to 2296 Data columns (total 14 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2297 non-null int64 1 in_reply_to_status_id 23 non-null float64 2 in_reply_to_user_id 23 non-null float64 3 timestamp 2297 non-null datetime64[ns, UTC] 4 source 2297 non-null object 5 text 2297 non-null object 6 retweeted_status_id 180 non-null float64 7 retweeted_status_user_id 180 non-null float64 8 retweeted_status_timestamp 180 non-null datetime64[ns, UTC] 9 expanded_urls 2297 non-null object 10 rating_numerator 2297 non-null int64 11 rating_denominator 2297 non-null int64 12 name 2297 non-null object 13 dog_breed 374
non-null object dtypes: datetime64[ns, UTC](2), float64(4), int64(3), object(5) memory usage: 251.4+ KB ###Markdown 3.5 Some tweets are actually retweets and replies not original tweets that have to be deleted in `twitter_archive` table Define- drop any row has value (not 'NaN') in column 'retweeted_status_id' because it is retweet not original tweet- drop any row has value (not 'NaN') in column 'in_reply_to_status_id' because it is reply not original tweet- drop retweets and replies columns as we don't need them anymore Code ###Code #get list of 'tweet_id' of replies replies_tweet_id_list = list(archive_clean[archive_clean['in_reply_to_status_id'].notnull()]['tweet_id']) #get list of 'tweet_id' of retweets retweets_tweet_id_list = list(archive_clean[archive_clean['retweeted_status_id'].notnull()]['tweet_id']) archive_clean.drop(archive_clean[archive_clean['in_reply_to_status_id'].notnull()].index, inplace=True) archive_clean.drop(archive_clean[archive_clean['retweeted_status_id'].notnull()].index, inplace=True) #to reset the index without any problem archive_clean.reset_index(drop=True, inplace=True) #drop retweets and replies columns archive_clean.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2094 entries, 0 to 2093 Data columns (total 9 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2094 non-null int64 1 timestamp 2094 non-null datetime64[ns, UTC] 2 source 2094 non-null object 3 text 2094 non-null object 4 expanded_urls 2094 non-null object 5 rating_numerator 2094 non-null int64 6 rating_denominator 2094 non-null int64 7 name 2094 non-null object 8 dog_breed 335 non-null object dtypes: datetime64[ns, UTC](1), int64(3), object(5) memory usage: 147.4+ KB ###Markdown 3.6 drop retweets 
and replies in `image_prediction_clean` table Define- drop and 'tweet_id' that matches 'tweet_id' of replies or retweets Code ###Code image_prediction_clean.info() #convert 'tweet_id' column in 'image_prediction_clean' table to list image_tweet_list = list(image_prediction_clean['tweet_id']) #get the intersection between tweet_id in image and replies image_reply = list(set(image_tweet_list) & set(replies_tweet_id_list)) print(len(image_reply)) image_reply #drop all replies from 'image_prediction_clean' table for index, row in image_prediction_clean.iterrows(): if row['tweet_id'] in image_reply: image_prediction_clean.drop(image_prediction_clean[image_prediction_clean['tweet_id'] == row['tweet_id']].index, inplace=True) #to reset the index without any problem image_prediction_clean.reset_index(drop=True, inplace=True) #get the intersection between tweet_id in image and retweets image_retweet = list(set(image_tweet_list) & set(retweets_tweet_id_list)) print(len(image_retweet)) image_retweet #drop all retweets from 'image_prediction_clean' table for index, row in image_prediction_clean.iterrows(): if row['tweet_id'] in image_retweet: image_prediction_clean.drop(image_prediction_clean[image_prediction_clean['tweet_id'] == row['tweet_id']].index, inplace=True) #to reset the index without any problem image_prediction_clean.reset_index(drop=True, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code image_prediction_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1971 entries, 0 to 1970 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1971 non-null int64 1 jpg_url 1971 non-null object 2 img_num 1971 non-null int64 3 p1 1971 non-null object 4 p1_conf 1971 non-null float64 5 p1_dog 1971 non-null bool 6 p2 1971 non-null object 7 p2_conf 1971 non-null float64 8 p2_dog 1971 non-null bool 9 p3 1971 non-null object 10 p3_conf 1971 non-null float64 11 p3_dog 1971 non-null bool dtypes: 
bool(3), float64(3), int64(2), object(4) memory usage: 144.5+ KB ###Markdown 3.7 By comparing the number of rows in the `image_prediction` and `twitter_archive` tables, we found that many tweets in the `twitter_archive` table have no image. These rows should be dropped. Define- check 'tweet_id' in 'image_prediction' table and 'twitter_archive' table, then drop rows in 'twitter_archive' whose 'tweet_id' is not in the 'image_prediction' table Code ###Code archive_clean.info() image_prediction_clean.info() #get column 'tweet_id' in 'archive_clean' and 'image_prediction_clean' tables into list archive_tweet_list = list(archive_clean['tweet_id']) image_prediction_list = list(image_prediction_clean['tweet_id']) #get the intersection of the two tables, to know which original tweets have valid images messy_images = list(set(archive_tweet_list) & set(image_prediction_list)) print(len(messy_images)) messy_images #drop any row that is in 'archive_clean' table and not in 'image_prediction_clean' table for index, row in archive_clean.iterrows(): if row['tweet_id'] not in messy_images: archive_clean.drop(archive_clean[archive_clean['tweet_id'] == row['tweet_id']].index, inplace=True) #to reset the index without any problem archive_clean.reset_index(drop=True, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() archive_list = list(archive_clean['tweet_id']) image_list = list(image_prediction_clean['tweet_id']) archive_list.sort() image_list.sort() if archive_list == image_list: print("Matched") ###Output Matched ###Markdown 3.8 Missing values in name column and invalid names less than 2 characters in `twitter_archive` table Define- correct names which have value 'a' Code ###Code archive_clean.name.value_counts() #check the text of name 'a' to make sure that it extracted right or not text_name_a = list(archive_clean['text'].loc[(archive_clean['name'] == "a")]) text_name_a #check the text of name 'an' to make sure that it extracted right or not
text_name_an = list(archive_clean['text'].loc[(archive_clean['name'] == "an")]) text_name_an #try to extract some dogs names if found , else will make it None pattern = re.compile(r'(?:name(?:d)?)\s{1}(?:is\s)?([A-Za-z]+)') for index, row in archive_clean.iterrows(): try: if row['name'] == "a": new_name = re.findall(pattern, row['text'])[0] archive_clean.loc[index,'name'] = archive_clean.loc[index, 'name'].replace('a', new_name) elif row['name'] == "an": new_name = re.findall(pattern, row['text'])[0] archive_clean.loc[index,'name'] = archive_clean.loc[index,'name'].replace('an', new_name) except IndexError: archive_clean.loc[index,'name'] = "None" ###Output _____no_output_____ ###Markdown Test ###Code #check all 'a' values are replaced archive_clean.query('name == "a"') #check all 'an' values are replaced archive_clean.query('name == "an"') #check None values are increased archive_clean['name'].value_counts() ###Output _____no_output_____ ###Markdown Define- replace 'None' values with 'NaN' with np.nan Code ###Code #replace "None" values with 'NaN' archive_clean['name'].replace({"None": np.nan}, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code #check that 640 "None" values are converted to "NaN" archive_clean['name'].isnull().sum() ###Output _____no_output_____ ###Markdown 3.9 Keep original tweets only in `api_clean` table Define- delete any row its 'tweet_id' not found in 'tweet_id' column in 'archive_clean' table Code ###Code api_clean.info() #get column 'tweet_id' in 'archive_clean' and 'api_df' tables into list archive_list = list(archive_clean['tweet_id']) api_list = list(api_clean['tweet_id']) #get the intersection between the tables out_list = list(set(api_list) & set(archive_list)) print(len(out_list)) out_list #drop any row that in 'archive_clean' table and not in 'api_clean' table for index, row in api_clean.iterrows(): if row['tweet_id'] not in out_list: api_clean.drop(api_clean[api_clean['tweet_id'] == row['tweet_id']].index, 
inplace=True) #reset the index without any problem api_clean.reset_index(drop=True, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code api_clean.info() #check that all 'tweet_id' values in the 'api_clean' table are in the 'tweet_id' column of the 'archive_clean' table api_list = list(api_clean['tweet_id']) print(len(list(set(archive_list) - set(api_list)))) flag = 0 if(set(api_list).issubset(set(archive_list))): flag = 1 if(flag): print("Done") else: print("something went wrong!") ###Output 7 Done ###Markdown 3.10 Deal with rating_numerator and rating_denominator to make sure they were extracted correctly from the text in the `twitter_archive` table Define- slice the records to investigate the right value of denominators that are below or above 10 Code ###Code archive_clean['rating_denominator'].value_counts() archive_clean.loc[archive_clean['rating_denominator'] == 2, 'rating_denominator'] = 10 archive_clean.loc[archive_clean['rating_denominator'] < 10, 'rating_denominator'] = 10 pd.set_option('display.max_colwidth', 0) archive_clean.loc[archive_clean['rating_denominator'] > 10]['text'] archive_clean.loc[archive_clean['rating_denominator'] == 11, 'rating_denominator'] = 10 #count the number of dogs in the picture for denominator values above 10 dogs_count = archive_clean.rating_denominator[archive_clean['rating_denominator'] > 10] /10 dogs_count archive_clean.duplicated().sum() # replace denominators with new values based on dogs_count archive_clean.loc[archive_clean.rating_denominator > 10, ['rating_numerator', 'rating_denominator']] = [archive_clean.rating_numerator[archive_clean.rating_denominator > 10]/dogs_count , 10] ###Output /opt/anaconda3/lib/python3.8/site-packages/numpy/core/_asarray.py:83: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated.
If you meant to do this, you must specify 'dtype=object' when creating the ndarray return array(a, dtype, copy=False, order=order) ###Markdown Test ###Code archive_clean['rating_denominator'].value_counts() ###Output _____no_output_____ ###Markdown Define- slice the records to investigate the right value of numerators that are below 6 or above 15 Code ###Code archive_clean.rating_numerator.value_counts() pd.set_option('display.max_colwidth', 0) archive_clean.loc[archive_clean['rating_numerator'] > 15] #fix the problem of numerators above 15 manually archive_clean.loc[archive_clean['rating_numerator'] == 75, 'rating_numerator'] = 5 archive_clean.loc[archive_clean['rating_numerator'] == 27, 'rating_numerator'] = 11 # in the original image (its link is at the end of the text) the dog's face was cropped archive_clean.loc[archive_clean['rating_numerator'] == 1776, 'rating_numerator'] = 15 archive_clean.loc[archive_clean['rating_numerator'] == 26, 'rating_numerator'] = 11 #the image is Snoop Dogg, not a real dog archive_clean.loc[archive_clean['rating_numerator'] == 420, 'rating_numerator'] = 0 #show the numerators less than 6 archive_clean.loc[archive_clean['rating_numerator'] < 6] #extract most of the numerator values from the text num_p = re.compile(r'(\d+\.?\d?\d?)\/(\d{1,3})') archive_clean['rating_numerator'] = archive_clean.text.str.extract(r'(\d+\.?\d?\d?)\/\d{1,3}', expand = False).astype('float') ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.rating_numerator.value_counts() ###Output _____no_output_____ ###Markdown 3.11 Column headers are values, not variable names, in the `image_prediction` table Define- create 1 column for image prediction and 1 column for confidence Code ###Code image_prediction_clean.head() # Rename the dataset columns to avoid confusion cols = ['tweet_id', 'jpg_url', 'img_num', 'prediction_1', 'confidence_1', 'breed_1', 'prediction_2', 'confidence_2', 'breed_2', 'prediction_3', 'confidence_3', 'breed_3'] image_prediction_clean.columns
= cols # Reshape the dataframe image_prediction_clean = pd.wide_to_long(image_prediction_clean, stubnames=['prediction', 'confidence', 'breed'], i=['tweet_id', 'jpg_url', 'img_num'], j='prediction_level', sep="_").reset_index() ###Output _____no_output_____ ###Markdown Test ###Code image_prediction_clean.head() image_prediction_clean.duplicated().sum() ###Output _____no_output_____ ###Markdown 3.12 Merging `archive_clean` with `api_clean` Define- merging tables with merge() function Code ###Code archive_clean.info() api_clean.info() #these tweet_ids are in 'archive_clean' and not in 'api_clean' archive_list = list(archive_clean['tweet_id']) api_list = list(api_clean['tweet_id']) diff = list(set(archive_list) - set(api_list)) print(len(diff)) diff #merge the two tables with a left merge to take all original tweets from the 'archive_clean' table df_combined = archive_clean.merge(api_clean, left_on="tweet_id", right_on="tweet_id", how="left") df_combined ###Output _____no_output_____ ###Markdown Test ###Code #make sure that tweet_ids not in 'api_clean' have NaN values in the retweet, favorite, and user count columns df_combined.query('tweet_id == 779123168116150273') df_combined.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1971 entries, 0 to 1970 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1971 non-null int64 1 timestamp 1971 non-null datetime64[ns, UTC] 2 source 1971 non-null object 3 text 1971 non-null object 4 expanded_urls 1971 non-null object 5 rating_numerator 1971 non-null float64 6 rating_denominator 1971 non-null int64 7 name 1407 non-null object 8 dog_breed 303 non-null object 9 retweet_count 1964 non-null float64 10 favorite_count 1964 non-null float64 11 user_count 1964 non-null float64 dtypes: datetime64[ns, UTC](1), float64(4), int64(2), object(5) memory usage: 200.2+ KB ###Markdown 4.
Storing the Data ###Code #store combined 'archive_clean' and 'api_clean' in 'twitter_archive_master.csv' df_combined.to_csv('twitter_archive_master.csv', index=False) #store 'image_prediction' table in another file image_prediction_clean.to_csv('image_prediction_cleaned.csv', index=False) ###Output _____no_output_____ ###Markdown 5. Analyze and Visualize 5.1 The relation between retweets and favorites ###Code df_combined.info() color = ['#eff3ff', '#c6dbef', '#9ecae1', '#6baed6', '#4292c6', '#2171b5', '#084594'] df_combined.plot(kind="scatter", x="favorite_count", y="retweet_count", alpha=0.5) plt.xlabel("Likes") plt.ylabel("Retweets") plt.title("The Relation between Retweets and Favorites"); plt.savefig('Retweets_with_Likes.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown - Retweets are positively correlated with favorites 5.2 The most popular dog breed ###Code df_combined['dog_breed'].value_counts().plot(kind = 'barh') plt.title('Most Popular Dog Breed') plt.xlabel('Count') plt.ylabel('Dog Breed'); plt.savefig('most_popular_dog.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown - Pupper is the most popular dog breed 5.3 Which are the top 10 predicted dog breeds ###Code image_prediction_clean.head() predicted = image_prediction_clean[image_prediction_clean['breed'] == True] highest_predicted = predicted.groupby(['prediction']).mean()['confidence'] highest_predicted.sort_values(ascending=False).head(10) colors = ['#8dd3c7', '#ffffb3', '#bebada', '#fb8072', '#fb8072', '#fdb462', '#b3de69', '#fccde5'] highest_predicted.sort_values(ascending=False).head(10).plot(kind="bar", color= colors, figsize=(15,8)) plt.ylabel("Average confidence", size=15) plt.xlabel("Dog Breed", size=15) plt.title("The top 10 predicted dogs", size=20); plt.savefig('top_10_predicted_dogs.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown - The highest average confidence level in predicting the dog breed is the Bernese Mountain Dog.
The second is the Komondor. Both breeds have a unique appearance, which probably made the prediction easier. The highest confidence level is 65% on average. In my opinion, this level is too low, and for deeper analysis we should use another algorithm to be more accurate. 5.4 How did the Retweet and Favorite Count change over time? ###Code df_combined.retweet_count.groupby([df_combined['timestamp'].dt.year, df_combined['timestamp'].dt.month]).mean().plot(kind='line') df_combined.favorite_count.groupby([df_combined['timestamp'].dt.year, df_combined['timestamp'].dt.month]).mean().plot(kind='line') plt.title('Retweet and Favorite over time', size =15) plt.ylabel('Number of Tweets') plt.xlabel('Time (Year, Month)') plt.legend(('Retweet Count', 'Favorite Count'), fontsize=18); plt.savefig('retweet_and_favorite_overtime'); ###Output _____no_output_____ ###Markdown - The number of retweets and favorites increased over time 5.5 How many dogs are rated above 10? ###Code df_combined['rating_numerator'].value_counts().sort_index().plot(kind='bar', figsize=(18,10)) plt.title('Rating Numerator Distribution', size=15) plt.xlabel('Rating Numerator') plt.ylabel('Number of Ratings'); plt.savefig('rating_numerator_distribution'); ###Output _____no_output_____ ###Markdown Project : Wrangle and Analyze Twitter Archive By Somya Bharti > **In this project, I will wrangle data to create interesting and trustworthy analyses and visualizations. The Twitter archive is great, but it only contains very basic tweet information. Additional gathering, then assessing and cleaning is required for "Wow!"-worthy analyses and visualizations.** Introduction> The dataset that I will be wrangling (and analyzing and visualizing) is the tweet archive of Twitter user @dog_rates, also known as WeRateDogs. WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. These ratings almost always have a denominator of 10. The numerators, though? Almost always greater than 10.
11/10, 12/10, 13/10, etc. Why? Because "they're good dogs Brent." WeRateDogs has over 4 million followers and has received international media coverage. Data GatheringThe first step in our project is to gather data from different sources and in different formats, which is the most challenging task of this project. Here, I will be gathering data from three different sources. - Import the packages ###Code import json import matplotlib.pyplot as plt import numpy as np import os import pandas as pd import re import requests import seaborn as sns import tweepy from datetime import datetime from functools import reduce %matplotlib inline ###Output _____no_output_____ ###Markdown First Source - Local file> Enhanced Twitter Archive- The WeRateDogs Twitter archive contains basic tweet data for all 5000+ of their tweets, but here we have filtered around 2000+ tweets with ratings. ###Code df_local=pd.read_csv('C:/Users/somya/Desktop/wrangle/twitter-archive-enhanced.csv') df_local.head() ###Output _____no_output_____ ###Markdown Second Source - URL> Image predictions- The tweet image predictions, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network. This file (image_predictions.tsv) is hosted on Udacity's servers and we will be downloading it programmatically using the Requests library and the given URL-https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv ###Code given_url='https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response=requests.get(given_url) with open(os.path.join('image_predictions.tsv'), mode ='wb') as file: file.write(response.content) df_url = pd.read_csv('image_predictions.tsv', sep = '\t') df_url.head() ###Output _____no_output_____ ###Markdown Third Source - Twitter API ###Code import time consumer_key = '...' consumer_secret = '...' access_token = '...' access_secret = '...'
auth = tweepy.OAuthHandler(consumer_key,consumer_secret) auth.set_access_token(access_token, access_secret) #wait_on_rate_limit belongs on the API constructor, not on get_status api = tweepy.API(auth, parser=tweepy.parsers.JSONParser(), wait_on_rate_limit=True, wait_on_rate_limit_notify=True) # NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES: # df_1 is a DataFrame with the twitter_archive_enhanced.csv file. You may have to # change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv # NOTE TO REVIEWER: this student had mobile verification issues so the following # Twitter API code was sent to this student from a Udacity instructor # Tweet IDs for which to gather additional data via Twitter's API #tweet_ids = df_local.tweet_id.values #len(tweet_ids) #tweet_ids = list(df_local['tweet_id']) start = time.time() tweet_ids = df_local.tweet_id.values tweet_data = [] tweet_id_success = [] tweet_id_missing = [] for tweet_id in tweet_ids: try: data = api.get_status(tweet_id, tweet_mode='extended') tweet_data.append(data) tweet_id_success.append(tweet_id) except Exception: tweet_id_missing.append(tweet_id) print(tweet_id) end = time.time() print(end - start) with open('tweet-json.txt','r') as data: tweet_json = data.readline() print(tweet_json) df_tweet=pd.read_json('tweet-json.txt',orient='records',lines=True) #Checking Dataset Dimensions print(df_tweet.shape) df_tweet.head() df_tweet.columns df_tweet=df_tweet[["id","favorite_count","retweet_count"]] df_tweet.head() ###Output _____no_output_____ ###Markdown > Now, we are done with the data gathering process from three different sources. Assessing> Now, we will assess our raw and unclean datasets one by one to find the quality and tidiness issues. First Dataset - Twitter Archive - Assessing Visually ###Code df_local ###Output _____no_output_____ ###Markdown > We observe that many rows did not mention the stage of the dog, that is, all four stage columns are None.
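The "all four stages are None" observation above can be counted directly with a boolean mask rather than eyeballing the table. A minimal sketch with toy data — the column names match the archive, but the values here are made up:

```python
import pandas as pd

# Toy stand-in for the archive's four stage columns (hypothetical data)
df = pd.DataFrame({
    "doggo":   ["None", "doggo", "None"],
    "floofer": ["None", "None",  "None"],
    "pupper":  ["None", "None",  "pupper"],
    "puppo":   ["None", "None",  "None"],
})

# Rows where every stage column holds the string "None" have no stage at all
no_stage = (df[["doggo", "floofer", "pupper", "puppo"]] == "None").all(axis=1).sum()
print(no_stage)  # one row has no stage in this toy frame
```

The same mask, applied to the real archive, gives the row count without writing out the long `query` string by hand.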
- Assess Programmatically ###Code df_local.shape df_local.columns df_local.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown - Columns such as 'timestamp' and 'retweeted_status_timestamp' are typed as string, whereas they should be datetime.- There are missing expanded urls in the dataset.- There are 181 retweeted_status_id values, which means that our dataset contains retweets as well. ###Code df_local['name'].value_counts() ###Output _____no_output_____ ###Markdown > Some of the names are 'a', 'an', 'the', which are not valid. ###Code df_local['source'].head(50) df_local['source'].nunique() df_local['source'].value_counts() ###Output _____no_output_____ ###Markdown - Source names need to be redefined without tags. > **After assessing the dataset visually, we find that some of the rows are 'None' for all four stages of a particular dog. We will find the rows which do not have a stage of dog.** ###Code df_local.query('doggo=="None" and floofer=="None" and pupper=="None" and puppo=="None"') df_local.query('doggo=="None" and floofer=="None" and pupper=="None" and puppo=="None"').shape[0] ###Output _____no_output_____ ###Markdown - There are 1976 rows with no definition of the dog's stage.
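The "source needs to be redefined without tags" issue noted above is usually handled with a small regex. A sketch on one hypothetical source string (the value below is made up, but it has the same HTML-anchor shape as the archive's source column, and the cleaning section later uses the same `>(.*)<` pattern):

```python
import re

# Hypothetical source value in the same HTML-anchor shape as the archive column
source = '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>'

# Keep only the human-readable name between the anchor tags
readable = re.findall(r'>(.*)<', source)[0]
print(readable)
```

Applied column-wise with `Series.apply`, this turns every anchor-wrapped source into plain text like "Twitter for iPhone".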
###Code df_local['rating_numerator'].value_counts() ###Output _____no_output_____ ###Markdown > The common numerator ratings given by @weratedogs are 11, 12, 13, 16 and so on, but here we find that some of the ratings are far too high, such as 1776, 960, 666, etc. ###Code df_local['rating_denominator'].value_counts() ###Output _____no_output_____ ###Markdown > We know that @WeRateDogs always keep their denominator as 10 while rating dogs, but here some of the denominators are 11, 50, 2, 7, 0, 110, etc. Second Dataset - Image Prediction - Assessing visually ###Code df_url ###Output _____no_output_____ ###Markdown - After assessing visually, we find that for the last row, all the predictions of dog breed are false, which means some images are not of dogs.- We will find the number of rows which do not contain images of dogs. ###Code df_url.query('p1_dog==False and p2_dog==False and p3_dog==False').shape[0] df_url.info() df_url.describe() df_url['p1'].value_counts() ###Output _____no_output_____ ###Markdown - Some of the predicted names are not dog breeds, like 'bookshop', 'bakery', 'book_jacket', 'orange'. ###Code df_url['p2'].value_counts() df_url['jpg_url'].value_counts() ###Output _____no_output_____ ###Markdown - The image URLs are the same for some images.
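The not-a-dog check above is just a query over the three boolean prediction flags. A self-contained sketch with toy flag values (hypothetical data, same column names as the prediction table):

```python
import pandas as pd

# Toy stand-in for the image-prediction boolean columns (hypothetical data)
preds = pd.DataFrame({
    "p1_dog": [True, False, True],
    "p2_dog": [True, False, False],
    "p3_dog": [True, False, True],
})

# Rows where none of the three predictions is a dog breed
not_dogs = preds.query('p1_dog == False and p2_dog == False and p3_dog == False')
print(len(not_dogs))  # one all-False row in this toy frame
```

On the real table, `len(not_dogs)` is the count of tweets whose image is probably not a dog at all — candidates for removal in the cleaning step.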
Third Dataset - Twitter API - Assess Visually ###Code df_tweet df_tweet['id'].duplicated().sum() df_tweet['favorite_count'].value_counts() df_tweet['favorite_count'].duplicated().sum() ###Output _____no_output_____ ###Markdown Observations Summary from the above performed Assessments-> Quality Issues-- Many rows in the twitter enhanced dataset did not mention the stage of the dog, that is, all four stage columns are None.- There are 1976 rows with no definition of the dog's stage.- Columns in the twitter enhanced dataset such as 'timestamp' and 'retweeted_status_timestamp' are typed as string, whereas they should be datetime.- There are missing expanded urls in the twitter enhanced dataset.- There are 181 retweeted_status_id values, which means that our dataset contains retweets as well.- We do not need retweets in our dataset for analysis, so we need to remove retweet_user_id and other columns related to retweets.- Some of the names are 'a', 'an', 'the', which are not valid.- Source names need to be redefined without tags.- The common numerator ratings given by @weratedogs are 11, 12, 13, 16 and so on, but here we find that some of the ratings are far too high, such as 1776, 960, 666, etc.- We know that @WeRateDogs always keep their denominator as 10 while rating dogs, but here some of the denominators are 11, 50, 2, 7, 0, 110, etc.- After assessing the image prediction dataset visually, we find that for the last row, all the predictions of dog breed are false, which means some images are not of dogs.- Some of the predicted names are not dog breeds, like 'bookshop', 'bakery', 'book_jacket', 'orange'.- The image URLs are the same for some images.- The names of dogs in the image prediction dataset are separated by underscores instead of spaces.> Tidiness Issues-- There are four columns, namely doggo, floofer, puppo, pupper, for the stages of a particular dog.
We don't need four columns for the stage; only one column will be enough.- We only need one master dataset for our analysis and visualizations, so we will merge all three datasets collected from different sources. Cleaning the Data - First we will make copies of the dataframes. ###Code df_local_new=df_local.copy() df_url_new=df_url.copy() df_tweet_new=df_tweet.copy() ###Output _____no_output_____ ###Markdown **Define** - Select the rows with null retweeted_status_id and remove the non-null retweets from the dataset. **Code** ###Code df_local_new = df_local_new[np.isnan(df_local_new.retweeted_status_id)] ###Output _____no_output_____ ###Markdown **Test** ###Code df_local_new.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2175 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2175 non-null object source 2175 non-null object text 2175 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 305.9+ KB ###Markdown **Define**Select the columns related to retweets and drop them, as they are of no further use.
**Code** ###Code df_local_new.drop(["retweeted_status_id","retweeted_status_user_id","retweeted_status_timestamp","in_reply_to_status_id","in_reply_to_user_id"], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown **Test** ###Code df_local_new.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 12 columns): tweet_id 2175 non-null int64 timestamp 2175 non-null object source 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: int64(3), object(9) memory usage: 220.9+ KB ###Markdown **Define**- Select the four columns of stages and make a new dataframe.- Add a new column 'Stage' to the new dataframe.- Append the non-null values to the column Stage.- Add the new column 'Stage' to our original dataset.- Drop the four columns 'Doggo', 'Floofer', 'Pupper', 'Puppo' from the original dataset.**Code** ###Code a = pd.DataFrame(df_local_new[['floofer', 'doggo', 'pupper', 'puppo']]) a.head() a['floofer'].replace('None', np.nan, inplace=True) a['doggo'].replace('None', np.nan, inplace=True) a['pupper'].replace('None', np.nan, inplace=True) a['puppo'].replace('None', np.nan, inplace=True) a.head() a["Stage"]=None a.head() a['Stage'] = a.apply(lambda row: ','.join(row.dropna().astype(str)), axis=1) a.replace(r'^\s*$', np.nan, regex=True,inplace=True) a.head(5) a['Stage'].value_counts() df_local_new.drop(["doggo","floofer","pupper","puppo"],axis=1, inplace=True) df_local_new['Stage']=a["Stage"] ###Output _____no_output_____ ###Markdown **Test** ###Code df_local_new.head(10) df_local_new.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 9 columns): tweet_id 2175 non-null int64 timestamp 2175 non-null
object source 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object Stage 344 non-null object dtypes: int64(3), object(6) memory usage: 169.9+ KB ###Markdown **Define**Select the column 'timestamp' and change its datatype from string to datetime.**Code** ###Code df_local_new['timestamp'] = pd.to_datetime(df_local_new['timestamp']) ###Output _____no_output_____ ###Markdown **Test** ###Code df_local_new.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2117 entries, 0 to 2355 Data columns (total 9 columns): tweet_id 2117 non-null int64 timestamp 2117 non-null datetime64[ns, UTC] source 2117 non-null object text 2117 non-null object expanded_urls 2117 non-null object rating_numerator 2117 non-null float64 rating_denominator 2117 non-null int64 name 2013 non-null object Stage 338 non-null object dtypes: datetime64[ns, UTC](1), float64(1), int64(2), object(5) memory usage: 165.4+ KB ###Markdown **Define**Select rows with missing values of expanded urls and remove them.
**Code** ###Code df_local_new.dropna(subset=['expanded_urls'],inplace=True) ###Output _____no_output_____ ###Markdown **Test** ###Code df_local_new.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2117 entries, 0 to 2355 Data columns (total 9 columns): tweet_id 2117 non-null int64 timestamp 2117 non-null datetime64[ns, UTC] source 2117 non-null object text 2117 non-null object expanded_urls 2117 non-null object rating_numerator 2117 non-null int64 rating_denominator 2117 non-null int64 name 2117 non-null object Stage 338 non-null object dtypes: datetime64[ns, UTC](1), int64(3), object(5) memory usage: 165.4+ KB ###Markdown **Define**Select invalid names, which most probably start with a lowercase letter, and set those cells to None.**Code** ###Code df_local_new.loc[df_local_new['name'] == df_local_new['name'].str.lower(), 'name'] = None ###Output _____no_output_____ ###Markdown **Test** ###Code df_local_new['name'].value_counts() ###Output _____no_output_____ ###Markdown **Define**- Rescale the numerator rating to a denominator of 10, since the denominator is 10 most of the time, and then display the rows whose denominator is not equal to 10.**Code** and **Test** ###Code df_local_new.rating_numerator=(df_local_new.rating_numerator/df_local_new.rating_denominator)*10 df_local_new[df_local_new['rating_denominator']!=10] ###Output _____no_output_____ ###Markdown **Define**Select the source column and extract the text between anchor tags.**Code** ###Code df_local_new['source'] = df_local_new['source'].apply(lambda x: re.findall(r'>(.*)<', x)[0]) ###Output _____no_output_____ ###Markdown **Test** ###Code df_local_new['source'].value_counts() ###Output _____no_output_____ ###Markdown Image Prediction ###Code df_url_new.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf
2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown **Define**Select the rows for which the dog breed classifier is true for all three predictions and remove the images which are not dogs.**Code** ###Code df_url_new= df_url_new.query('p1_dog==True and p2_dog==True and p3_dog==True') ###Output _____no_output_____ ###Markdown **Test** ###Code df_url_new.query('p1_dog==False and p2_dog==False and p3_dog==False').shape[0] ###Output _____no_output_____ ###Markdown **Define** Select the dog breed prediction columns, that is p1, p2 and p3, and replace the underscores in the dog breed names with spaces.**Code** ###Code df_url_new['p1']=df_url_new['p1'].replace('_', ' ', regex=True) df_url_new['p2']=df_url_new['p2'].replace('_', ' ', regex=True) df_url_new['p3']=df_url_new['p3'].replace('_', ' ', regex=True) ###Output _____no_output_____ ###Markdown **Test** ###Code df_url_new.head() ###Output _____no_output_____ ###Markdown Twitter API Dataset ###Code df_tweet_new.info() df_tweet_new['id'].duplicated().sum() ###Output _____no_output_____ ###Markdown **Define**- Merge all the datasets using merge() and make tweet_id the main key, as it is unique for every tweet.- Merge two datasets first and then merge the third dataset into the master dataset.
**Code** ###Code df_master=pd.merge(left=df_local_new,right=df_tweet_new,left_on='tweet_id',right_on='id',how='inner') df_master.drop(['id'],axis=1,inplace=True) df_master.head() df_master_final=pd.merge(left=df_master,right=df_url_new,on='tweet_id',how='inner') df_master_final.head() ###Output _____no_output_____ ###Markdown **Test** ###Code df_master_final.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1203 entries, 0 to 1202 Data columns (total 22 columns): tweet_id 1203 non-null int64 timestamp 1203 non-null datetime64[ns, UTC] source 1203 non-null object text 1203 non-null object expanded_urls 1203 non-null object rating_numerator 1203 non-null float64 rating_denominator 1203 non-null int64 name 1148 non-null object Stage 190 non-null object favorite_count 1203 non-null int64 retweet_count 1203 non-null int64 jpg_url 1203 non-null object img_num 1203 non-null int64 p1 1203 non-null object p1_conf 1203 non-null float64 p1_dog 1203 non-null bool p2 1203 non-null object p2_conf 1203 non-null float64 p2_dog 1203 non-null bool p3 1203 non-null object p3_conf 1203 non-null float64 p3_dog 1203 non-null bool dtypes: bool(3), datetime64[ns, UTC](1), float64(4), int64(5), object(9) memory usage: 191.5+ KB ###Markdown **Store the final dataset as csv** ###Code df_master_final.to_csv('twitter_archive_master.csv',index=False) df=pd.read_csv('twitter_archive_master.csv') df.head() df.columns ###Output _____no_output_____ ###Markdown Analyze and Visualize Data Question 1: Is retweet_count related to favorite_count, that is, will a post get more retweets if it is favorited more? ###Code import seaborn as sns sns.regplot(data=df,x='retweet_count',y='favorite_count'); ###Output _____no_output_____ ###Markdown By the scatterplot above, we see that retweet count and favorite count are strongly related to each other with a positive correlation. Question 2 - Which is the most common Dog Stage?
###Code df['Stage'].value_counts() sns.countplot(data=df,x='Stage') plt.xticks(rotation=70); plt.title('Most common Stage of Dog') ###Output _____no_output_____ ###Markdown Pupper is the most common dog stage among all the dogs. Question 3 - Which is the most common breed of Dog predicted? ###Code fig_dims = (8, 20) fig, ax = plt.subplots(figsize=fig_dims) sns.countplot(y = "p1", ax=ax, data=df) plt.xticks(rotation=90); plt.title('Most common breed of Dog') plt.ylabel('Breed of Dogs'); ###Output _____no_output_____ ###Markdown Golden retriever, Labrador retriever, Pembroke, Chihuahua, and Pug are the most common breeds of dog. Question 4 - Which are the top 10 dog breeds that receive the most retweets? ###Code df_favorite = df.groupby('p1')['retweet_count'].sum().reset_index() df_sorted = df_favorite.sort_values('retweet_count', ascending=False).head(10) ser_ret = df_sorted['retweet_count'] ret_breed = df_sorted['p1'] fig, ax = plt.subplots(figsize=(10,7)) fav = plt.barh(ret_breed, ser_ret) plt.ylabel('Breed of Dog') plt.xlabel('Retweet count') plt.title('No of retweets per breed'); ###Output _____no_output_____ ###Markdown IntroductionThe goal of this project is to learn the data analysis process, whose very core is to gather, assess, and clean data, which is known as data wrangling. I will be analyzing tweet data from the Twitter account "WeRateDogs" to get insights and learn the patterns in the data: gathering it from several resources, assessing it to keep only informational data, and then cleaning the final dataset to present my findings. Files used:* Enhanced Twitter Archive* Additional Data via the Twitter API* Image Predictions File The steps taken are as follows:1. Data Wrangling : * Gather Data * Assess Data * Clean Data2. Store, analyze and visualize Data.3. Reports for: * Data wrangle efforts * Data analysis and visualizations.
###Code # Import statements import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt import json import os import re import requests import time from datetime import datetime from functools import reduce %matplotlib inline ###Output _____no_output_____ ###Markdown Step - Gather ###Code # Load the enhanced twitter archive file we were given archive_df = pd.read_csv('twitter-archive-enhanced-2.csv', encoding = 'utf-8') ###Output _____no_output_____ ###Markdown Learning data ###Code archive_df.head(3) archive_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown * It can be seen from above that there are lots of inconsistencies in the data.
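One quick way to quantify the inconsistencies spotted in the `info()` output above is a per-column null count. A minimal sketch on a toy frame that mirrors the archive's mostly-missing retweet column (the values below are hypothetical):

```python
import numpy as np
import pandas as pd

# Toy frame: retweeted_status_id is mostly missing, as in the real archive
df = pd.DataFrame({
    "tweet_id": [1, 2, 3],
    "retweeted_status_id": [np.nan, 123.0, np.nan],
})

# Per-column count of missing values; sparse columns stand out immediately
missing = df.isnull().sum()
print(missing)
```

Run on the real `archive_df`, the same `isnull().sum()` call makes the 78 / 181 / 2297 non-null counts from `info()` explicit as missing-value counts per column.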
###Code # Downloading the image prediction tsv file using the Requests library and the given URL url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) # Saving file with open (url.split('/')[-1], mode='wb') as file: file.write(response.content) # Loading file into dataframe predict_df = pd.read_csv('image-predictions.tsv', sep = '\t', encoding = 'utf-8') ###Output _____no_output_____ ###Markdown Learning data ###Code predict_df.head(3) predict_df.info() tweet_ids = archive_df.tweet_id.values len(tweet_ids) ## step 1: Create a list to store the tweets from the json file tweet_list = [] json_file = open('tweet-json.txt', "r") for ln in json_file: try: twt = json.loads(ln) tweet_list.append(twt) except json.JSONDecodeError: continue json_file.close() ## Step 2: Create a dataframe tweet_df = pd.DataFrame() ## Step 3: Fill data into dataframe from extracted list. tweet_df['tweet_id'] = list(map(lambda tweet: tweet['id'], tweet_list)) tweet_df['retweets'] = list(map(lambda tweet: tweet['retweet_count'], tweet_list)) tweet_df['favorites'] = list(map(lambda tweet: tweet['favorite_count'], tweet_list)) tweet_df['created_at'] = list(map(lambda tweet: tweet['created_at'], tweet_list)) tweet_df['tweet'] = list(map(lambda tweet: tweet['full_text'], tweet_list)) tweet_df.head() ###Output _____no_output_____ ###Markdown Assessing DataNow we will examine the data for quality issues and note down the points for `cleaning data` Quality:**Missing Data** Tweet Archive Dataset name: `745` tweets that have None as a name **Quality issues*** name column: some names are false (O, a, not..)* Source format is bad and cannot be read easily.* Missing values from images dataset (2075 rows instead of 2356).* Some tweets appear under 2 different tweet_ids; those are retweets.* Rows where retweeted_status_id and retweeted_status_user_id have values NAN* tweet_id is an integer, should be type object as no calculation is needed* There are 
invalid names (a, an and less than 3 characters).* created_at: not datetime format* No use of in_reply_to_status_id, in_reply_to_user_id Tidiness:* Dog stage is in 4 columns (doggo, floofer, pupper, puppo), no need for that, can be merged into one for one variable (dog stage)* Merge the three datasets.* Create separate columns for date from time.* Delete the text column and timestamp column* Make source column in one word source rather than HTML* Combine rating_numerator and rating_denominator into rating column ###Code tweet_df.info() archive_df.shape tweet_df.shape archive_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown Check if any null values in the data frame ###Code archive_df.isnull().sum() ###Output _____no_output_____ ###Markdown Check if any duplicated values in the data frame ###Code archive_df['tweet_id'].duplicated().any() archive_df.head(2) ###Output _____no_output_____ ###Markdown Names of dogs in data ###Code np.sort(archive_df.name.unique()) ###Output _____no_output_____ ###Markdown Issues:* Some unusual names in the data frame; luckily, all of them are lowercase. 
###Code archive_df.loc[(archive_df['name'].str.islower())] ###Output _____no_output_____ ###Markdown Devices used to tweet ###Code archive_df['source'].value_counts() ###Output _____no_output_____ ###Markdown Ratings counts ###Code archive_df['rating_denominator'].value_counts(), archive_df['rating_numerator'].value_counts() ###Output _____no_output_____ ###Markdown Name types of dogs ###Code archive_df['name'].value_counts() ###Output _____no_output_____ ###Markdown Checking Doggo, pupper, puppo and floofer ###Code archive_df['doggo'].value_counts() archive_df['pupper'].value_counts() archive_df['puppo'].value_counts() archive_df['floofer'].value_counts() archive_df.loc[(archive_df['doggo']== 'doggo') & (archive_df['floofer']== 'floofer')] ###Output _____no_output_____ ###Markdown Checking whether any ratings use non-integer values ###Code archive_df[archive_df['text'].str.contains(r'(\d+\.\d+\/\d+)')] ###Output /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:1: UserWarning: This pattern has match groups. To actually get the groups, use str.extract. """Entry point for launching an IPython kernel. 
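The `UserWarning` above appears because `str.contains` was given a capture group; `str.extract` is the method that actually returns the matched group. A toy sketch (texts are made up):

```python
import pandas as pd

# Toy tweet texts; only the second contains a decimal rating like 13.5/10
texts = pd.Series([
    "This is Rex. 12/10 would pet",
    "Meet Bella. 13.5/10 absolute unit",
])

# str.contains() with a capture group warns; str.extract() returns the group
decimal_rating = texts.str.extract(r"(\d+\.\d+)/\d+", expand=False)
print(decimal_rating[1])  # 13.5
```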
###Markdown Dataset 2 - Image predictions ###Code predict_df.head() predict_df.describe() predict_df.shape ###Output _____no_output_____ ###Markdown Check for any null values ###Code predict_df.isnull() predict_df.isnull().sum() predict_df['img_num'].value_counts() predict_df['p1_dog'].value_counts() predict_df['p2_dog'].value_counts() predict_df['p3_dog'].value_counts() ###Output _____no_output_____ ###Markdown Predictions with confidence > 1 (which would be invalid) ###Code predict_df[predict_df.p1_conf > 1] predict_df[predict_df.p2_conf > 1] predict_df[predict_df.p3_conf > 1] ###Output _____no_output_____ ###Markdown Dataset 3 - Data from Twitter API Generated from JSON ###Code tweet_df.head(3) tweet_df.info() tweet_df.describe() tweet_df['tweet_id'].duplicated().any() tweet_df['tweet_id'].duplicated().sum() ###Output _____no_output_____ ###Markdown Cleaning `Steps` * Define* Code* Test Creating copies of each Dataset ###Code twitter_archive_clean = archive_df.copy() predictions_clean = predict_df.copy() twitter_clean = tweet_df.copy() ###Output _____no_output_____ ###Markdown Merging all three datasets into one df Define Merge the 3 datasets using `INNER` join. 
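A minimal sketch of such an inner join on toy frames (toy ids and columns, not the real data):

```python
import pandas as pd

# Toy stand-ins for two of the datasets, keyed on tweet_id
archive = pd.DataFrame({"tweet_id": [1, 2, 3], "name": ["Rex", "Bella", "Max"]})
preds = pd.DataFrame({"tweet_id": [1, 2], "p1": ["pug", "chihuahua"]})

# An inner join keeps only the ids present in both frames
merged = pd.merge(archive, preds, how="inner", on="tweet_id")
print(len(merged))  # 2
```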
###Code df = pd.merge(twitter_archive_clean, predictions_clean, how = 'inner', on = ['tweet_id'] ) df = pd.merge(df, twitter_clean, how = 'inner', on = ['tweet_id']) df.to_csv('tweet_full.csv', encoding = 'utf-8') df.head(1) df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2073 entries, 0 to 2072 Data columns (total 32 columns): tweet_id 2073 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 2073 non-null object source 2073 non-null object text 2073 non-null object retweeted_status_id 79 non-null float64 retweeted_status_user_id 79 non-null float64 retweeted_status_timestamp 79 non-null object expanded_urls 2073 non-null object rating_numerator 2073 non-null int64 rating_denominator 2073 non-null int64 name 2073 non-null object doggo 2073 non-null object floofer 2073 non-null object pupper 2073 non-null object puppo 2073 non-null object jpg_url 2073 non-null object img_num 2073 non-null int64 p1 2073 non-null object p1_conf 2073 non-null float64 p1_dog 2073 non-null bool p2 2073 non-null object p2_conf 2073 non-null float64 p2_dog 2073 non-null bool p3 2073 non-null object p3_conf 2073 non-null float64 p3_dog 2073 non-null bool retweets 2073 non-null int64 favorites 2073 non-null int64 created_at 2073 non-null object tweet 2073 non-null object dtypes: bool(3), float64(7), int64(6), object(16) memory usage: 491.9+ KB ###Markdown Creating a copy of merged data to work on ###Code df_clean = df.copy() ###Output _____no_output_____ ###Markdown **Define** Delete the following columns: `retweeted_status_id, retweeted_status_user_id,retweeted_status_timestamp, timestamp, text, in_reply_to_status_id, in_reply_to_user_id` **Code** ###Code list_of_col = ['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'timestamp', 'text', 'in_reply_to_status_id', 'in_reply_to_user_id'] df_clean = df_clean.drop(list_of_col, 1) ###Output _____no_output_____ ###Markdown **Test** ###Code 
df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2073 entries, 0 to 2072 Data columns (total 25 columns): tweet_id 2073 non-null int64 source 2073 non-null object expanded_urls 2073 non-null object rating_numerator 2073 non-null int64 rating_denominator 2073 non-null int64 name 2073 non-null object doggo 2073 non-null object floofer 2073 non-null object pupper 2073 non-null object puppo 2073 non-null object jpg_url 2073 non-null object img_num 2073 non-null int64 p1 2073 non-null object p1_conf 2073 non-null float64 p1_dog 2073 non-null bool p2 2073 non-null object p2_conf 2073 non-null float64 p2_dog 2073 non-null bool p3 2073 non-null object p3_conf 2073 non-null float64 p3_dog 2073 non-null bool retweets 2073 non-null int64 favorites 2073 non-null int64 created_at 2073 non-null object tweet 2073 non-null object dtypes: bool(3), float64(3), int64(6), object(13) memory usage: 378.6+ KB ###Markdown **Define**Convert the source column from raw HTML to a readable label **Code** ###Code df_clean.source.value_counts() df_clean['source'] = df_clean['source'].apply(lambda x: re.findall(r'>(.*)<', x)[0]) ###Output _____no_output_____ ###Markdown **Test** ###Code df_clean['source'].value_counts() ###Output _____no_output_____ ###Markdown **Define*** Replace the value 'None' in the name column with NaN **Code** ###Code df_clean['name'] = df_clean['name'].replace('None', np.NaN) ###Output _____no_output_____ ###Markdown **Test** ###Code pd.isnull(df_clean.name).sum() ###Output _____no_output_____ ###Markdown **Define**Move doggo, floofer, pupper and puppo columns into one column dog_stage. 
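The extraction defined above can be sketched with `str.extract` and an alternation pattern on toy text (texts are made up):

```python
import pandas as pd

# Toy tweet texts
toy = pd.Series([
    "Here is a doggo enjoying snow",
    "Such a good pupper",
    "Just a regular dog",
])

# First matching stage word per row; NaN when none is present
stage = toy.str.extract("(doggo|floofer|pupper|puppo)", expand=False)
print(stage.tolist())  # ['doggo', 'pupper', nan]
```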
**Code** ###Code df_clean.info() # Create the new column df_clean['dog_stage'] = df_clean['tweet'].str.extract('(doggo|floofer|pupper|puppo)', expand=True) df_clean.head(3) # Lists of tweets with more than one stage pupper_list = df_clean.loc[(df_clean.doggo == "doggo") & (df_clean.pupper == "pupper")]['tweet_id'].tolist() floofer_list = df_clean.loc[(df_clean.doggo == "doggo") & (df_clean.floofer == "floofer")]['tweet_id'].tolist() puppo_list = df_clean.loc[(df_clean.doggo == "doggo") & (df_clean.puppo == "puppo")]['tweet_id'].tolist() # update the dog stage for twt in pupper_list: df_clean.loc[df_clean.tweet_id == twt,'dog_stage'] = 'doggo, pupper' for twt in floofer_list: df_clean.loc[df_clean.tweet_id == twt,'dog_stage'] = 'doggo, floofer' for twt in puppo_list: df_clean.loc[df_clean.tweet_id == twt,'dog_stage'] = 'doggo, puppo' ###Output _____no_output_____ ###Markdown **Test** ###Code df_clean.dog_stage.value_counts() ###Output _____no_output_____ ###Markdown **Define** * Dropping the columns doggo, floofer, pupper, puppo **Code** ###Code # Delete the original stage columns list_of_cols = ['doggo', 'floofer', 'pupper', 'puppo'] df_clean = df_clean.drop(list_of_cols, 1) ###Output _____no_output_____ ###Markdown **Test** ###Code df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2073 entries, 0 to 2072 Data columns (total 22 columns): tweet_id 2073 non-null int64 source 2073 non-null object expanded_urls 2073 non-null object rating_numerator 2073 non-null int64 rating_denominator 2073 non-null int64 name 1496 non-null object jpg_url 2073 non-null object img_num 2073 non-null int64 p1 2073 non-null object p1_conf 2073 non-null float64 p1_dog 2073 non-null bool p2 2073 non-null object p2_conf 2073 non-null float64 p2_dog 2073 non-null bool p3 2073 non-null object p3_conf 2073 non-null float64 p3_dog 2073 non-null bool retweets 2073 non-null int64 favorites 2073 non-null int64 created_at 2073 non-null object tweet 2073 non-null object dog_stage 337 non-null 
object dtypes: bool(3), float64(3), int64(6), object(10) memory usage: 330.0+ KB ###Markdown **Define**Condense the image prediction columns into single prediction and confidence_level columns **Code** ###Code df_clean.columns predictions = [] confidence_level = [] def prediction_func(dataframe): if dataframe['p1_dog'] == True: predictions.append(dataframe['p1']) confidence_level.append(dataframe['p1_conf']) elif dataframe['p2_dog'] == True: predictions.append(dataframe['p2']) confidence_level.append(dataframe['p2_conf']) elif dataframe['p3_dog'] == True: predictions.append(dataframe['p3']) confidence_level.append(dataframe['p3_conf']) else: predictions.append('NaN') confidence_level.append(0) df_clean.apply(prediction_func, axis=1) df_clean['prediction'] = predictions df_clean['confidence_level'] = confidence_level ###Output _____no_output_____ ###Markdown **Test** ###Code df_clean.sample() ###Output _____no_output_____ ###Markdown DefineCreate a rating column as the numerator/denominator ratio **Code** ###Code df_clean['rating'] = df_clean['rating_numerator'].astype(float)/df_clean['rating_denominator'] ###Output _____no_output_____ ###Markdown **Test** ###Code df_clean['rating'].value_counts() df_clean.columns df_clean.head(2) ###Output _____no_output_____ ###Markdown DefineSort the columns **Code** ###Code # Create a list to re-sort the dataframe columns sorted_list = ['tweet_id','tweet','name','dog_stage','rating','created_at','retweets', 'favorites', 'source','rating_numerator', 'rating_denominator', 'img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog','expanded_urls', 'jpg_url'] df_clean = df_clean[sorted_list] df_clean.head(2) ###Output _____no_output_____ ###Markdown **Define**Convert the type of tweet_id to string. 
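As an aside to the prediction-condensation step above: the same first-true-wins logic can be written vectorized with `numpy.select` instead of a row-wise function. A sketch on toy columns with two prediction slots instead of three (all values made up):

```python
import numpy as np
import pandas as pd

# Toy frame mimicking the p*/p*_conf/p*_dog layout (two slots, fake values)
toy = pd.DataFrame({
    "p1": ["pug", "paper_towel"], "p1_conf": [0.90, 0.80], "p1_dog": [True, False],
    "p2": ["chihuahua", "pug"],   "p2_conf": [0.05, 0.10], "p2_dog": [True, True],
})

# np.select checks the conditions in order; the first True one wins
conds = [toy["p1_dog"], toy["p2_dog"]]
toy["prediction"] = np.select(conds, [toy["p1"], toy["p2"]], default="NaN")
toy["confidence_level"] = np.select(conds, [toy["p1_conf"], toy["p2_conf"]], default=0)
print(toy["prediction"].tolist())  # ['pug', 'pug']
```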
**Code** ###Code df_clean['tweet_id'] = df_clean['tweet_id'].astype('str') ###Output _____no_output_____ ###Markdown **Test** ###Code df_clean['tweet_id'].dtype ###Output _____no_output_____ ###Markdown Store Data ###Code df_clean.to_csv('twitter_archive_master.csv', index=False, encoding = 'utf-8') ###Output _____no_output_____ ###Markdown Analyzing and Visualizing Data ###Code df = pd.read_csv('twitter_archive_master.csv') df.head(2) df.info() df.describe() # Convert columns to their appropriate types df['tweet_id'] = df['tweet_id'].astype(object) df['created_at'] = pd.to_datetime(df.created_at) df['source'] = df['source'].astype('category') df['dog_stage'] = df['dog_stage'].astype('category') df.info() df.head(1) # Plot scatterplot of retweet vs favorite count sns.lmplot(x="retweets", y="favorites", data=df, size = 5, aspect=1.3, scatter_kws={'alpha':1/5}) plt.title('Favorite vs. Retweet Count') plt.xlabel('Retweet Count') plt.ylabel('Favorite Count'); ###Output _____no_output_____ ###Markdown * Favorite and retweet counts are highly positively correlated. * For about every 4 favorites there is 1 retweet. The majority of the data falls below 40000 favorites and 10000 retweets. ###Code df.hist(column='rating_numerator', bins = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]) plt.xlabel('Rating Numerator') plt.ylabel('Frequency') plt.title('Distribution of the Rating Numerator') plt.savefig('rating_numerator_dist'); ###Output _____no_output_____ ###Markdown Most Common Dog Stage? 
###Code x = np.char.array(['Pupper', 'Doggo', 'Puppo', 'Doggo, Pupper', 'Floofer', 'Doggo, Puppo', 'Doggo, Floofer']) y = np.array(list(df[df['dog_stage'] != 'None']['dog_stage'].value_counts())[0:7]) colors = ['#ff9999','#66b3ff','#99ff99','#ffcc99','#E580E8','#FF684F','#DCDCDD'] porcent = 100.*y/y.sum() patches, texts = plt.pie(y, colors=colors, startangle=90, radius=1.8) labels = ['{0} - {1:1.2f} %'.format(i,j) for i,j in zip(x, porcent)] explode = (0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1) plt.legend(patches, labels, loc='center left', bbox_to_anchor=(-0.1, 1.), fontsize=8) plt.axis('equal') plt.savefig('Most_common_dog.png', bbox_inches='tight') ###Output _____no_output_____ ###Markdown Most used source to tweet ###Code df['source'].value_counts() sns.countplot(data=df, x='source') plt.title('Tweet Sources', size=10) plt.savefig('most_used_twitter_source'); df.retweets.groupby([df['created_at'].dt.year, df['created_at'].dt.month]).mean().plot('line') df.favorites.groupby([df['created_at'].dt.year, df['created_at'].dt.month]).mean().plot('line') plt.title('Retweet and Favorite over time', size =15) plt.ylabel('Number of Tweets') plt.xlabel('Time (Year, Month)') plt.legend(('Retweet Count', 'Favorite Count'), fontsize=12) plt.savefig('ret_fav'); ###Output _____no_output_____ ###Markdown Favourites far outnumber retweets. Both favorites and retweets increased over time. While the favorite count increases strongly with the number of tweets, the retweet count seems almost independent of the number of tweets. 
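The year/month grouping used for the time-series plot above, sketched on toy dates (all values made up):

```python
import pandas as pd

# Toy tweets across two months
toy = pd.DataFrame({
    "created_at": pd.to_datetime(["2016-01-05", "2016-01-20", "2016-02-10"]),
    "retweets": [10, 30, 50],
})

# Mean retweets per (year, month), as plotted over time
monthly = toy.retweets.groupby(
    [toy.created_at.dt.year, toy.created_at.dt.month]
).mean()
print(monthly.tolist())  # [20.0, 50.0]
```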
Correlation of variables ###Code sns.heatmap(df.corr(), cmap="Blues") plt.title('Correlation-matrix', size=20) plt.savefig('heatmap'); ###Output _____no_output_____ ###Markdown Gather ###Code #Read CSV file twitter_archive = pd.read_csv('twitter-archive-enhanced-2.csv') twitter_archive.sort_values('timestamp') twitter_archive.head() # Scrape the image predictions file from the Udacity website url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open(os.path.join('image_predictions.tsv'), mode = 'wb') as file: file.write(response.content) # Load the image predictions file image_predictions = pd.read_csv('image_predictions.tsv', sep = '\t') image_predictions.head() # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # These are hidden to comply with Twitter's API terms and conditions consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) tweet_ids = twitter_archive.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) # Save only certain tweet elements in dataframe elements_to_save = ['id', 'created_at', 'favorite_count', 'retweet_count'] 
# Later convert list to dataframe data = [] with open('tweet-json.txt', 'r') as readfile: # Read in JSON line and convert to dict tweet_json = readfile.readline() # Read line by line into DataFrame while tweet_json: tweet_dict = json.loads(tweet_json) # Create a smaller dict data_row = dict((k, tweet_dict[k]) for k in elements_to_save) data.append(data_row) # Read in JSON line and convert to dict tweet_json = readfile.readline() df_tweet_json = pd.DataFrame.from_dict(data) df_tweet_json.head() df = pd.merge(left=twitter_archive, right=image_predictions, left_on='tweet_id', right_on='tweet_id', how='left') df = pd.merge(left=df, right=df_tweet_json, left_on='tweet_id', right_on='id', how='left') del df['id'] df.head() ###Output _____no_output_____ ###Markdown Assessing ###Code twitter_archive twitter_archive.info() twitter_archive.describe() twitter_archive.isna().sum() twitter_archive['name'].value_counts() twitter_archive.sort_values(by='name')['name'] twitter_archive['rating_numerator'].value_counts() twitter_archive['rating_denominator'].value_counts() image_predictions image_predictions.info() image_predictions.describe() image_predictions.isna().sum() df_tweet_json df_tweet_json.info() df_tweet_json.describe() df_tweet_json.isna().sum() ###Output _____no_output_____ ###Markdown Quality Issues(accuracy, validity, consistency, completeness)* There were 2075 rows in the image_predictions dataframe compared to 2356 rows in the twitter_archive dataframe. This is due to tweets with no images and retweets included.* Several columns such as in_reply_to_status, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp and expanded_urls have empty values.* The name column has a lot of non-name values. 
The most common name is 'a' which is not actually a name.* The numerator and denominator columns have inconsistent values.* The timestamp type is an object, not a timestamp.* The text column could be parsed to include gender.* The text column could also be parsed to include hashtags.* Several columns store missing values as non-null placeholders (e.g. the string 'None'). Tidiness Issues(structural issues)* The columns predicting the dog breed could be condensed.* The dog 'stages' have values as columns, instead of one column filled with the values. Clean Create a Hashtag Column DefineExtract hashtags from 'text' column using .str.extract method and store in a new column 'hashtag' Code ###Code df['hashtag'] = df['text'].str.extract(r"#(\w+)", expand=True) ###Output _____no_output_____ ###Markdown Test ###Code df['hashtag'].value_counts() ###Output _____no_output_____ ###Markdown Convert Timestamp to a Datetime DefineChange the datatype of 'timestamp' column to datetime using pd.to_datetime method Code ###Code df['timestamp'] = pd.to_datetime(df['timestamp']) ###Output _____no_output_____ ###Markdown Test ###Code df.dtypes ###Output _____no_output_____ ###Markdown Convert tweet_id to object datatype DefineChange the datatype of 'tweet_id' column to object using .astype method Code ###Code df.head() df['tweet_id'] = df['tweet_id'].astype('object') ###Output _____no_output_____ ###Markdown Test ###Code df.dtypes ###Output _____no_output_____ ###Markdown Remove Retweets and Tweets without Pictures Define* Remove tweets without pictures by including only those rows where 'jpg_url' is not null.* Remove retweets by including only those rows where 'retweeted_status_id' is null.* Drop the columns related to retweets Code ###Code # Remove tweets without pictures df = df[pd.notnull(df['jpg_url'])] # Remove retweets df = df[pd.isnull(df['retweeted_status_id'])] # Drop columns related to retweets del df['retweeted_status_id'] del df['retweeted_status_user_id'] del df['retweeted_status_timestamp'] ###Output _____no_output_____ 
###Markdown Test ###Code df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 2355 Data columns (total 29 columns): tweet_id 1994 non-null object in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 1994 non-null datetime64[ns] source 1994 non-null object text 1994 non-null object expanded_urls 1994 non-null object rating_numerator 1994 non-null int64 rating_denominator 1994 non-null int64 name 1994 non-null object doggo 1994 non-null object floofer 1994 non-null object pupper 1994 non-null object puppo 1994 non-null object jpg_url 1994 non-null object img_num 1994 non-null float64 p1 1994 non-null object p1_conf 1994 non-null float64 p1_dog 1994 non-null object p2 1994 non-null object p2_conf 1994 non-null float64 p2_dog 1994 non-null object p3 1994 non-null object p3_conf 1994 non-null float64 p3_dog 1994 non-null object created_at 1994 non-null object favorite_count 1994 non-null float64 retweet_count 1994 non-null float64 hashtag 22 non-null object dtypes: datetime64[ns](1), float64(8), int64(2), object(18) memory usage: 467.3+ KB ###Markdown Condense 'Dog Type' Columns Define* Use a for loop and .str.contains() to re-identify whether the text contains each column header; keep the stage word if found, otherwise NaN.* Create a column called dog_stage and merge all data in order of puppo, pupper, floofer, doggo using .fillna().* Drop the redundant columns.* Change datatype of dog_stage to categorical. 
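The merge order defined above (puppo, then pupper, then floofer, then doggo) acts as a coalesce via nested `fillna`; a toy sketch (one stage column per row at most, all values made up):

```python
import numpy as np
import pandas as pd

# One toy column per stage; at most one non-null value per row
toy = pd.DataFrame({
    "doggo":  ["doggo", np.nan, np.nan],
    "pupper": [np.nan, "pupper", np.nan],
    "puppo":  [np.nan, np.nan, np.nan],
})

# Nested fillna is a coalesce: puppo wins, then pupper, then doggo
toy["dog_stage"] = toy.puppo.fillna(toy.pupper.fillna(toy.doggo))
print(toy["dog_stage"].tolist())  # ['doggo', 'pupper', nan]
```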
Code ###Code dog_stages = ['doggo', 'floofer', 'pupper', 'puppo'] dog_stages def find_dog_stage(df, dog_stage): dog_list = [] for row in df['text']: if dog_stage in row: dog_list.append(dog_stage) else: dog_list.append(np.NaN) return dog_list for dog_stage in dog_stages: df[dog_stage] = find_dog_stage(df, dog_stage) # Check non-null data counts for columns df[dog_stages].info() # Compare to counts from text for dog_stage in dog_stages: print(dog_stage, df.text.str.contains(dog_stage).sum()) ###Output doggo 76 floofer 3 pupper 229 puppo 28 ###Markdown The counts of what is found in the text strings match what is found in the respective columns. ###Code df['dog_stage'] = df.puppo.fillna(df.pupper.fillna(df.floofer.fillna(df.doggo))) df.dog_stage.value_counts() ###Output _____no_output_____ ###Markdown Only 10 fewer doggo counts than before ###Code # Delete those redundant columns df.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1, inplace=True) # Change datatype of dog_stage to categorical df['dog_stage'] = df['dog_stage'].astype('category') ###Output _____no_output_____ ###Markdown Test ###Code df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 2355 Data columns (total 26 columns): tweet_id 1994 non-null object in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 1994 non-null datetime64[ns] source 1994 non-null object text 1994 non-null object expanded_urls 1994 non-null object rating_numerator 1994 non-null int64 rating_denominator 1994 non-null int64 name 1994 non-null object jpg_url 1994 non-null object img_num 1994 non-null float64 p1 1994 non-null object p1_conf 1994 non-null float64 p1_dog 1994 non-null object p2 1994 non-null object p2_conf 1994 non-null float64 p2_dog 1994 non-null object p3 1994 non-null object p3_conf 1994 non-null float64 p3_dog 1994 non-null object created_at 1994 non-null object favorite_count 1994 non-null float64 retweet_count 
1994 non-null float64 hashtag 22 non-null object dog_stage 326 non-null category dtypes: category(1), datetime64[ns](1), float64(8), int64(2), object(14) memory usage: 407.2+ KB ###Markdown Condense Dog Breed Analysis Define* Create condensed 'dog_breed' and 'confidence' columns from the respective spread out columns using a function consisting of if else statements.* Delete the redundant columns. Code ###Code dog_breed = [] confidence = [] def breed_confidence(row): if row['p1_dog'] == True: dog_breed.append(row['p1']) confidence.append(row['p1_conf']) elif row['p2_dog'] == True: dog_breed.append(row['p2']) confidence.append(row['p2_conf']) elif row['p3_dog'] == True: dog_breed.append(row['p3']) confidence.append(row['p3_conf']) else: dog_breed.append('Unidentifiable') confidence.append(0) df.apply(breed_confidence, axis=1) df['dog_breed'] = dog_breed df['confidence'] = confidence df.head() # Delete those redundant columns df.drop(['p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog',], axis=1, inplace=True) df.head() ###Output _____no_output_____ ###Markdown Test ###Code df.dog_breed.value_counts() ###Output _____no_output_____ ###Markdown Remove Redundant Columns Define* Delete the 'in_reply_to_status_id' and 'in_reply_to_user_id' columns because of no useful info. 
Code ###Code df['in_reply_to_status_id'].value_counts() df['in_reply_to_user_id'].value_counts() # The ['in_reply_to_user_id'] are all 4196983835, which is @dog_rates, so this info is not useful df.drop(['in_reply_to_status_id', 'in_reply_to_user_id'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 1994 non-null object timestamp 1994 non-null datetime64[ns] source 1994 non-null object text 1994 non-null object expanded_urls 1994 non-null object rating_numerator 1994 non-null int64 rating_denominator 1994 non-null int64 name 1994 non-null object jpg_url 1994 non-null object img_num 1994 non-null float64 created_at 1994 non-null object favorite_count 1994 non-null float64 retweet_count 1994 non-null float64 hashtag 22 non-null object dog_stage 326 non-null category dog_breed 1994 non-null object confidence 1994 non-null float64 dtypes: category(1), datetime64[ns](1), float64(4), int64(2), object(9) memory usage: 267.0+ KB ###Markdown Parse Dog Rates and Dog Count Define* Parse ratings and dog count from 'text' column using lambda function and for loop with if-else statements.* Drop the redundant 'rating_numerator' and 'rating_denominator' columns. 
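The normalization defined above (group ratings scaled back to a /10 base) can be sketched on toy text; the regex here is simplified to integer ratings only:

```python
import re

# Toy texts: a plain rating, and a group rating for five dogs
texts = [
    "This is Rex. 12/10 would pet",
    "Five pups here. 60/50 all good dogs",
]

parsed = []
for t in texts:
    num, den = re.findall(r"(\d+)/(\d+0)", t)[0]
    # Group ratings are normalized back to a /10 scale
    parsed.append(float(num) / (float(den) / 10))
print(parsed)  # [12.0, 12.0]
```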
Code ###Code rates = [] extract_rates = lambda x: rates.append(re.findall(r'(\d+(\.\d+)|(\d+))\/(\d+0)', x, flags=0)) df['text'].apply(extract_rates) numerator = [] dog_count = [] for item in rates: # for tweets with no rating, but a picture, so a dog if len(item) == 0: numerator.append('NaN') dog_count.append(1) # for tweets with one rating and one dog elif len(item) == 1 and item[0][-1] == '10': numerator.append(float(item[0][0])) dog_count.append(1) # for group ratings elif len(item) == 1: avg = float(item[0][0]) / (float(item[0][-1]) / 10) numerator.append(avg) dog_count.append(float(item[0][-1]) / 10) # for tweets with more than one rating elif len(item) > 1: total = 0 valid_rates = [] for i in range(len(item)): if item[i][-1] == '10': #one tweet has the phrase '50/50' so I'm coding to exclude it valid_rates.append(item[i]) for rate in valid_rates: total = total + float(rate[0]) avg = total / len(item) numerator.append(avg) dog_count.append(len(item)) # in order to catch bugs else: numerator.append('Not parsed') dog_count.append('Not parsed') df['rating'] = numerator # no need to also add the denominator since they are all 10! df['dog_count'] = dog_count df['rating'].value_counts() # All are below 14 except the joke ratings of 420 and 1776, so success! # Drop these columns no longer needed df.drop([ 'rating_numerator', 'rating_denominator'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code df.info() df['dog_count'].value_counts() ###Output _____no_output_____ ###Markdown Extract Names Define* Split text using str.split* Create a function to identify names from the split text and add a new column names.* Drop the redundant 'name' column. 
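A minimal sketch of the 'This is &lt;Name&gt;' pattern defined above (the other phrasings work the same way; sample texts are made up):

```python
import re

def extract_name(text):
    # Mirrors only the 'This is <Name>' pattern; real names are capitalized
    words = text.split()
    if text.startswith("This is ") and re.match(r"[A-Z].*", words[2]):
        return words[2].strip(".").strip(",")
    return "Nameless"

print(extract_name("This is Charlie. 12/10"))  # Charlie
print(extract_name("What a dog. 10/10"))       # Nameless
```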
Code ###Code df['text_split'] = df['text'].str.split() df['text_split'] names = [] # Use string startswith method to clean this up def extract_names(row): # 'This is Charlie' if row['text'].startswith('This is ') and re.match(r'[A-Z].*', row['text_split'][2]): names.append(row['text_split'][2].strip('.').strip(',')) # 'Meet Charlie' elif row['text'].startswith('Meet ') and re.match(r'[A-Z].*', row['text_split'][1]): names.append(row['text_split'][1].strip('.').strip(',')) # 'Say hello to Charlie' elif row['text'].startswith('Say hello to ') and re.match(r'[A-Z].*', row['text_split'][3]): names.append(row['text_split'][3].strip('.').strip(',')) # 'Here we have Charlie' elif row['text'].startswith('Here we have ') and re.match(r'[A-Z].*', row['text_split'][3]): names.append(row['text_split'][3].strip('.').strip(',')) # 'named Charlie' elif 'named' in row['text'] and re.match(r'[A-Z].*', row['text_split'][(row['text_split'].index('named') + 1)]): names.append(row['text_split'][(row['text_split'].index('named') + 1)]) else: names.append('Nameless') df.apply(extract_names, axis=1) len(names) df['names'] = names df['names'].value_counts() df.drop(['name'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code df['names'].value_counts() ###Output _____no_output_____ ###Markdown Parse Dog Gender Define* Identifying pronouns using NLTK.* Create a function to identify dog gender.* Remove the redundant columns 'text_split', 'tagged' and 'pronouns'.* Change datatype of gender to categorical. 
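The pronoun rule defined above can be sketched without NLTK; the sets below are the usual English third-person pronoun lists:

```python
# Third-person pronoun sets used to infer a dog's gender from tweet text
male = {"he", "him", "his", "he's", "himself"}
female = {"she", "her", "hers", "herself", "she's"}

def gender_from_pronouns(pronouns):
    # Female is checked first, mirroring the rule order used in this notebook
    words = {p.lower() for p in pronouns}
    if words & female:
        return "Female"
    if words & male:
        return "Male"
    return "Neutral"

print(gender_from_pronouns(["He", "his"]))  # Male
print(gender_from_pronouns([]))             # Neutral
```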
Code ###Code from nltk import pos_tag # part-of-speech tagging tagger = lambda x: pos_tag(x) df['tagged'] = df['text_split'].apply(tagger) pronouner = lambda x: [word for word, pos in x if pos == 'PRP'] df['pronouns'] = df['tagged'].apply(pronouner) lowerer = lambda x: [a.lower() for a in x] df['pronouns'] = df['pronouns'].apply(lowerer) df['pronouns'].head(10) pronouns = df['pronouns'] pronouns gender = [] male = ['he', 'him', 'his', "he's", 'himself'] female = ['she', 'her', 'hers', 'herself', "she's"] def genderer(row): row['text'] = row['text'].lower() if len(row['pronouns']) > 0 and any(i in female for i in row['pronouns']): gender.append('Female') elif len(row['pronouns']) > 0 and any(i in male for i in row['pronouns']): gender.append('Male') elif 'girl' in str(row['text']): gender.append('Female') elif 'boy' in str(row['text']): gender.append('Male') else: gender.append('Neutral') df.apply(genderer, axis=1) df['gender'] = gender df['gender'].value_counts() df.drop(['text_split', 'tagged', 'pronouns'], axis=1, inplace=True) # Change datatype of gender to categorical df['gender'] = df['gender'].astype('category') ###Output _____no_output_____ ###Markdown Test ###Code df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 2355 Data columns (total 18 columns): tweet_id 1994 non-null object timestamp 1994 non-null datetime64[ns] source 1994 non-null object text 1994 non-null object expanded_urls 1994 non-null object jpg_url 1994 non-null object img_num 1994 non-null float64 created_at 1994 non-null object favorite_count 1994 non-null float64 retweet_count 1994 non-null float64 hashtag 22 non-null object dog_stage 326 non-null category dog_breed 1994 non-null object confidence 1994 non-null float64 rating 1994 non-null object dog_count 1994 non-null float64 names 1994 non-null object gender 1994 non-null category dtypes: category(2), datetime64[ns](1), float64(5), object(10) memory usage: 269.0+ KB ###Markdown Set Null Values in Various Columns Define* Set null values where 
gender is Neutral, name is Nameless, dog_breed is Unidentifiable, dog_stage is None, rating is 0.0 and confidence is 0.0 Code ###Code df.loc[df['gender'] == 'Neutral', 'gender'] = None df.loc[df['names'] == 'Nameless', 'names'] = None df.loc[df['dog_breed'] == 'Unidentifiable', 'dog_breed'] = None df.loc[df['dog_stage'] == 'None', 'dog_stage'] = None df.loc[df['rating'] == 0.0, 'rating'] = np.nan df.loc[df['confidence'] == 0.0, 'confidence'] = np.nan ###Output _____no_output_____ ###Markdown Test ###Code df.info() df.to_csv('twitter_archive_master.csv', encoding = 'utf-8') ###Output _____no_output_____ ###Markdown Analysis ###Code df = pd.read_csv('twitter_archive_master.csv') df['timestamp'] = pd.to_datetime(df['timestamp']) df.info() # Set the size of figures plt.rcParams['figure.figsize'] = (12, 8) # Set the font size of axis labels plt.rcParams['axes.labelsize'] = 14 plt.rcParams['axes.titlesize'] = 14 ###Output _____no_output_____ ###Markdown Retweets, Favorites and Ratings Correlation ###Code fig, ax = plt.subplots() ax.plot_date(x=df['timestamp'], y=df['favorite_count'], alpha=0.5) ax.set_yscale('log') plt.title('Favorites over Time') plt.xlabel('Date') plt.ylabel('Count') # df[['favorite_count', 'retweet_count']].plot(style='.', alpha = .2) # plt.title('Favorites and Retweets over Time') # plt.xlabel('Date') # plt.ylabel('Count') fig, ax = plt.subplots() ax.plot_date(x=df['timestamp'], y=df['retweet_count'], alpha=0.5) ax.set_yscale('log') plt.title('Retweets over Time') plt.xlabel('Date') plt.ylabel('Count') plt.plot_date(x=df['timestamp'], y=df['rating'], alpha=.5) plt.ylim(0,14) plt.title('Rating Increase over Time') plt.xlabel('Date') plt.ylabel('Rating') df[['favorite_count', 'retweet_count', 'rating']].corr(method='pearson') ###Output _____no_output_____ ###Markdown There is a correlation between favorites and retweets.There is no correlation between rating and retweets or favorites. 
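Because favorite and retweet counts are heavily right-skewed, Pearson's r can be dominated by a handful of viral tweets; a rank-based Spearman coefficient is a useful cross-check. A minimal sketch on synthetic counts (not the notebook's dataframe — the column names are just borrowed for illustration):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the favorites/retweets columns: retweets track
# favorites with multiplicative, heavy-tailed noise.
rng = np.random.default_rng(0)
favorites = rng.lognormal(mean=8, sigma=1, size=500)
retweets = favorites * 0.3 * rng.lognormal(mean=0, sigma=0.3, size=500)
df = pd.DataFrame({"favorite_count": favorites, "retweet_count": retweets})

# Linear correlation vs. rank correlation
pearson = df["favorite_count"].corr(df["retweet_count"], method="pearson")
spearman = df["favorite_count"].corr(df["retweet_count"], method="spearman")
print(f"pearson={pearson:.2f} spearman={spearman:.2f}")
```

If the two coefficients diverge sharply on the real data, the linear estimate is being pulled by a few extreme tweets rather than reflecting the bulk of the relationship.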
'Good Boys' and 'Good Girls' ###Code df[df['gender'].notnull()]['gender'].value_counts().plot(kind = 'pie') plt.title('Dog Genders') ###Output _____no_output_____ ###Markdown There were three times more male dogs identified than female dogs. Most Rated Breeds ###Code top_breeds=df.groupby('dog_breed').filter(lambda x: len(x) >= 20) top_breeds['dog_breed'].value_counts().plot(kind = 'barh') plt.title('Bar Chart of The Most Rated Breeds') plt.xlabel('Count') plt.ylabel('Breed') top_breeds.groupby('dog_breed')['rating'].describe() df['rating'].describe() ###Output _____no_output_____ ###Markdown Here we have a statistical description of the top breeds compared to the statistical description of all the ratings. Only one of the top breeds has a mean higher than the total population mean. That might be because of the joke ratings of 420 and 1776 pulling up the total population mean. ###Code df[df['rating'] <= 14]['rating'].describe() ###Output _____no_output_____ ###Markdown So I adjusted the ratings to exclude the joke rates and now the mean is 10.555. Only five of the top 21 breeds have means under the total population's average. So these breeds are rated higher than average. Dog Stages Stats ###Code df.boxplot(column=['rating'], by=['dog_stage']) df.groupby('dog_stage')['rating'].describe() ###Output _____no_output_____ ###Markdown So puppers are getting much lower rates than the other dog types. Their median is lower and they have several low outliers. This makes sense since 'pupper' can be used to describe irresponsible dogs.Floofers are consistently rated above 10. I wonder if that is because they are always awesome or if it is based on time. We know that the ratings have been getting higher. If 'floof' is a newer term, only used in newer tweets, that might explain the consistently higher rates. ###Code df.groupby('dog_stage')['timestamp'].describe() ###Output _____no_output_____ ###Markdown Gathering Data 1. 
Twitter Archive ###Code twitter_archive_enhanced = pd.read_csv('twitter-archive-enhanced.csv') twitter_archive_enhanced.head() ###Output _____no_output_____ ###Markdown 2. Image Predictions ###Code url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open('image_predictions.tsv', 'wb') as file: file.write(response.content) image_predictions = pd.read_csv('image_predictions.tsv', sep='\t') image_predictions.head() ###Output _____no_output_____ ###Markdown 3. Twitter API & Twepy ###Code twitter_credentials = yaml.load(open('./twitter_credentials.yml')) consumer_key = twitter_credentials['api']['api_consumer_key'] consumer_secret = twitter_credentials['api']['api_consumer_secret'] access_token = twitter_credentials['access']['access_token'] access_token_secret = twitter_credentials['access']['access_secret'] auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth, parser= tweepy.parsers.JSONParser(), wait_on_rate_limit=True, wait_on_rate_limit_notify=True) ###Output _____no_output_____ ###Markdown 3.1 Gathering 1 ###Code missing_tweets = [] start_time = time.asctime() start_sec = time.time() with open ('tweet_json.txt', 'a') as file: for tweet_id in twitter_archive_enhanced.tweet_id: try: tweet = api.get_status(tweet_id, tweet_mode='extended') file.write(json.dumps(tweet) + '\n') except Exception as tweet_error_msg: missing_tweets.append(tweet_id) print('The End') end_time = time.asctime() end_sec = time.time() start_time , end_time start_sec , end_sec end_sec - start_sec len(missing_tweets) (end_sec - start_sec) / 60 ###Output _____no_output_____ ###Markdown 29 mins and 25 seconds32 mins and 09 seconds 3.1 Tweet Json ###Code my_demo_list = [] with open('tweet_json.txt') as json_file: for line in json_file: each_dictionary = json.loads(line) tweet_id = each_dictionary['id'] favorite_count = 
each_dictionary['favorite_count'] retweet_count = each_dictionary['retweet_count'] followers_count = each_dictionary['user']['followers_count'] favourites_count = each_dictionary['user']['favourites_count'] friends_count = each_dictionary['user']['friends_count'] date_time = each_dictionary['created_at'] my_demo_list.append({'tweet_id': int(tweet_id), 'favorite_count': int(favorite_count), 'retweet_count': int(retweet_count), 'followers_count': int(followers_count), 'favourites_count': int(favourites_count), 'friends_count': int(friends_count), 'date_time': pd.to_datetime(date_time) }) tweet_json = pd.DataFrame(my_demo_list, columns = ['tweet_id', 'favorite_count','retweet_count', 'followers_count', 'favourites_count', 'friends_count', 'date_time']) ###Output _____no_output_____ ###Markdown Assessing visually ###Code twitter_archive_enhanced image_predictions tweet_json ###Output _____no_output_____ ###Markdown Programmatic Assessment ###Code twitter_archive_enhanced.info() image_predictions.info() tweet_json.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2330 entries, 0 to 2329 Data columns (total 7 columns): tweet_id 2330 non-null int64 favorite_count 2330 non-null int64 retweet_count 2330 non-null int64 followers_count 2330 non-null int64 favourites_count 2330 non-null int64 friends_count 2330 non-null int64 date_time 2330 non-null datetime64[ns] dtypes: datetime64[ns](1), int64(6) memory usage: 127.5 KB ###Markdown Assessing: `Twitter Archive Enhanced Data` ###Code twitter_archive_enhanced.head() twitter_archive_enhanced.sample(5) twitter_archive_enhanced.columns ###Output _____no_output_____ ###Markdown Quality: - `Source` format is bad and can't be read easily.
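The `source` column stores a raw HTML anchor tag, so the readable label has to be pulled out of it. A small sketch of the regex approach (the href shown here is illustrative of the archive's format):

```python
import re

# A source value as it appears in the archive (illustrative example)
source = '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>'

# Greedy match grabs everything between the opening tag's '>' and the final '<'
label = re.findall(r'>(.*)<', source)[0]
print(label)  # Twitter for iPhone
```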
Tidiness: - No need to divide Dog stages in 4 different columns like - `'doggo'` - `'floofer'` - `'pupper'` - `'puppo'` ###Code twitter_archive_enhanced.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown Quality: - These attributes should be `integers/strings` instead of `float` - in_reply_to_status_id - in_reply_to_user_id - retweeted_status_id - retweeted_status_user_id- Columns should be `datetime` instead of `object(string)`. - retweeted_status_timestamp - timestamp- We may want to change these columns types to `string` because We don't want any operations on them. - tweet_id - in_reply_to_status_id - in_reply_to_user_id - retweeted_status_id - retweeted_status_user_id ###Code twitter_archive_enhanced[twitter_archive_enhanced.tweet_id.duplicated()] twitter_archive_enhanced.describe() twitter_archive_enhanced.describe(include=[object]) twitter_archive_enhanced.source.value_counts() twitter_archive_enhanced[twitter_archive_enhanced.rating_numerator > 20] ###Output _____no_output_____ ###Markdown Quality:- The columns `numerator` and `denominator` have invalid values. 
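Many of the invalid numerators come from extracting ratings out of free text with an integer-only pattern: "13.5/10" gets read as 5/10, and the first fraction in a tweet is not always the rating. A hedged sketch of a decimal-aware pattern on made-up example texts (not rows from the dataset):

```python
import re

# Illustrative tweet texts (not taken from the dataset)
texts = [
    "This is Bella. She hopes her smile made you smile. 13.5/10",
    "After so many requests, here you go. 1776/10",
    "Happy 4/20 from the squad! 13/10 for all",
]

# (?:\d+\.)? makes a decimal prefix on the numerator optional
pattern = re.compile(r"((?:\d+\.)?\d+)/(\d+)")
first_matches = [pattern.search(t).groups() for t in texts]
print(first_matches)
```

The third example shows why the first match alone is not enough: "4/20" is a date, not a rating, so extracted pairs still need a sanity check (for instance, keeping only denominators of 10).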
###Code np.sort(twitter_archive_enhanced.name.unique()) twitter_archive_enhanced.loc[(twitter_archive_enhanced['name'].str.islower())] ###Output _____no_output_____ ###Markdown Quality:- There are invalid names like (`a, an, O, etc.`). ###Code twitter_archive_enhanced[twitter_archive_enhanced.retweeted_status_id.isnull()] ###Output _____no_output_____ ###Markdown Quality:- There are retweeted tweets, and we do not want them. Assessing: `Image Predictions Data` ###Code image_predictions.head() image_predictions.sample(5) image_predictions.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown Quality:- `tweet_id` should be `object` instead of `integer`, as no calculation is needed. ###Code image_predictions.isnull().sum() image_predictions.tweet_id.duplicated().sum() image_predictions.describe() image_predictions.describe(include='object') image_predictions.jpg_url.value_counts() ###Output _____no_output_____ ###Markdown Quality:- Missing values in `image_predictions` dataset. __2075__ rows of data compared to `twitter_archive_enhanced` dataset which has __2356__ records.- Some images appear under two different `tweet_id`s; those duplicates are actually retweets.
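One way to confirm that the duplicated images are retweets is to look for rows sharing a `jpg_url`: a retweet points at the same image under a different `tweet_id`. A toy sketch (column names as in `image_predictions`, data invented):

```python
import pandas as pd

# Toy stand-in: tweet 3 retweets tweet 1, so both carry the same image URL
preds = pd.DataFrame({
    "tweet_id": [1, 2, 3],
    "jpg_url": ["a.jpg", "b.jpg", "a.jpg"],
})

# keep=False flags every member of a duplicate group, not just the repeats
dupes = preds[preds["jpg_url"].duplicated(keep=False)]
print(dupes["tweet_id"].tolist())  # [1, 3]
```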
Assessing: `Tweets (API) JSON Data` ###Code tweet_json.head() tweet_json.sample(5) tweet_json['tweet_id'].duplicated().sum() tweet_json.info() tweet_json.describe() ###Output _____no_output_____ ###Markdown Assessing Summary Quality- __`Twitter Archive Enhanced Data:`__ `Source` format is bad and can't be read easily.- __`Twitter Archive Enhanced Data:`__ These attributes should be `integers/strings` instead of `float` - in_reply_to_status_id - in_reply_to_user_id - retweeted_status_id - retweeted_status_user_id- __`Twitter Archive Enhanced Data:`__ Columns should be `datetime` instead of `object(string)`. - retweeted_status_timestamp - timestamp- __`Twitter Archive Enhanced Data:`__ We may want to change these columns types to `string` because We don't want any operations on them. - tweet_id - in_reply_to_status_id - in_reply_to_user_id - retweeted_status_id - retweeted_status_user_id- __`Twitter Archive Enhanced Data:`__ The columns `numerator` and `denominator` have invalid values.- __`Twitter Archive Enhanced Data:`__ Replacing all lower case invalid names like (`a, an, etc.`) with `None`- __`Twitter Archive Enhanced Data:`__ Replacing all `O` invalid names with `None`- __`Image Predictions Data:`__ `tweet_id` should be `object` instead of `integer`, as no calculation is needed.- __`Image Predictions Data:`__ Missing values in _`image_predictions`_ dataset. __2075__ rows of data compared to _`twitter_archive_enhanced`_ dataset which has __2356__ number of records.- __`Image Predictions Data:`__ Some tweets have 2 different `tweet_id`, that are actually retweets. 
Tidiness- __`Twitter Archive Enhanced Data:`__ No need to divide Dog stages in 4 different columns like - 'doggo' - 'floofer' - 'pupper' - 'puppo'- __`Image Predictions Data:`__ _`Image Predictions Data`_ should be joined to _`Twitter Archive Enhanced Data`_- __`Tweets (API) JSON Data:`__ Merge _`Tweets (API) JSON Data`_ with the _`Twitter Archive Enhanced Data`_ --- Cleaning DataHere we will fix the quality and tidiness issues that we identified earlier in the assessing step. Create Copy of Each Dataset(DataFrame) ###Code twitter_archive_enhanced_clean = twitter_archive_enhanced.copy() image_predictions_clean = image_predictions.copy() tweet_json_clean = tweet_json.copy() ###Output _____no_output_____ ###Markdown Issue: __`Tweets (API) JSON Data:`__ Merge _`Tweets (API) JSON Data`_ with the _`Twitter Archive Enhanced Data`_ Define: Merge _`Tweets (API) JSON Data`_ with the _`Twitter Archive Enhanced Data`_ Code: ###Code twitter_archive_enhanced_clean = pd.merge(left=twitter_archive_enhanced_clean, right=tweet_json_clean, left_on='tweet_id', right_on='tweet_id', how='inner') ###Output _____no_output_____ ###Markdown Test: ###Code twitter_archive_enhanced_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2330 entries, 0 to 2329 Data columns (total 23 columns): tweet_id 2330 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2330 non-null object source 2330 non-null object text 2330 non-null object retweeted_status_id 162 non-null float64 retweeted_status_user_id 162 non-null float64 retweeted_status_timestamp 162 non-null object expanded_urls 2271 non-null object rating_numerator 2330 non-null int64 rating_denominator 2330 non-null int64 name 2330 non-null object doggo 2330 non-null object floofer 2330 non-null object pupper 2330 non-null object puppo 2330 non-null object favorite_count 2330 non-null int64 retweet_count 2330 non-null int64 followers_count 2330 non-null int64 
favourites_count 2330 non-null int64 friends_count 2330 non-null int64 date_time 2330 non-null datetime64[ns] dtypes: datetime64[ns](1), float64(4), int64(8), object(10) memory usage: 436.9+ KB ###Markdown Issue: __`Image Predictions Data:`__ _`Image Predictions Data`_ should be joined to _`Twitter Archive Enhanced Data`_ Define: _`Image Predictions Data`_ should be joined to _`Twitter Archive Enhanced Data`_ Code: ###Code twitter_archive_enhanced_clean = twitter_archive_enhanced_clean.merge(image_predictions_clean, on='tweet_id', how='inner') ###Output _____no_output_____ ###Markdown Test: ###Code twitter_archive_enhanced_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2058 entries, 0 to 2057 Data columns (total 34 columns): tweet_id 2058 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 2058 non-null object source 2058 non-null object text 2058 non-null object retweeted_status_id 71 non-null float64 retweeted_status_user_id 71 non-null float64 retweeted_status_timestamp 71 non-null object expanded_urls 2058 non-null object rating_numerator 2058 non-null int64 rating_denominator 2058 non-null int64 name 2058 non-null object doggo 2058 non-null object floofer 2058 non-null object pupper 2058 non-null object puppo 2058 non-null object favorite_count 2058 non-null int64 retweet_count 2058 non-null int64 followers_count 2058 non-null int64 favourites_count 2058 non-null int64 friends_count 2058 non-null int64 date_time 2058 non-null datetime64[ns] jpg_url 2058 non-null object img_num 2058 non-null int64 p1 2058 non-null object p1_conf 2058 non-null float64 p1_dog 2058 non-null bool p2 2058 non-null object p2_conf 2058 non-null float64 p2_dog 2058 non-null bool p3 2058 non-null object p3_conf 2058 non-null float64 p3_dog 2058 non-null bool dtypes: bool(3), datetime64[ns](1), float64(7), int64(9), object(14) memory usage: 520.5+ KB ###Markdown Issue: __`Twitter Archive Enhanced Data:`__ No 
need to divide Dog stages in 4 different columns like- 'doggo'- 'floofer'- 'pupper'- 'puppo' Define: Combine the dog stages, currently split across 4 columns ('doggo', 'floofer', 'pupper', 'puppo'), into a single `dog_stage` column Code: ###Code twitter_archive_enhanced_clean.loc[twitter_archive_enhanced_clean.doggo == 'None', 'doggo'] = '' twitter_archive_enhanced_clean.loc[twitter_archive_enhanced_clean.floofer == 'None', 'floofer'] = '' twitter_archive_enhanced_clean.loc[twitter_archive_enhanced_clean.pupper == 'None', 'pupper'] = '' twitter_archive_enhanced_clean.loc[twitter_archive_enhanced_clean.puppo == 'None', 'puppo'] = '' twitter_archive_enhanced_clean.groupby(["doggo", "floofer", "pupper", "puppo"]).size().reset_index().rename(columns={0: "count"}) twitter_archive_enhanced_clean['dog_stage'] = twitter_archive_enhanced_clean.doggo + \ twitter_archive_enhanced_clean.floofer + \ twitter_archive_enhanced_clean.pupper + \ twitter_archive_enhanced_clean.puppo twitter_archive_enhanced_clean.loc[twitter_archive_enhanced_clean.dog_stage == 'doggopupper', 'dog_stage'] = 'doggo,pupper' twitter_archive_enhanced_clean.loc[twitter_archive_enhanced_clean.dog_stage == 'doggopuppo', 'dog_stage'] = 'doggo,puppo' twitter_archive_enhanced_clean.loc[twitter_archive_enhanced_clean.dog_stage == 'doggofloofer', 'dog_stage'] = 'doggo,floofer' twitter_archive_enhanced_clean.loc[twitter_archive_enhanced_clean.dog_stage == '', 'dog_stage'] = 'None' ###Output _____no_output_____ ###Markdown Test: ###Code twitter_archive_enhanced_clean['dog_stage'].value_counts() ###Output _____no_output_____ ###Markdown Now the data is completely tidy, but it still has some `quality` issues. Let's proceed with further cleaning. --- Issue: Remove unwanted rows and columns that are not beneficial for us.
Define: Removing unwanted rows and columns Code: ###Code twitter_archive_enhanced_clean = twitter_archive_enhanced_clean[twitter_archive_enhanced_clean.retweeted_status_id.isnull()] # Drop duplicate tweet IDs twitter_archive_enhanced_clean.drop_duplicates(inplace=True) twitter_archive_enhanced_clean.dropna(subset=['jpg_url'], inplace=True) # Columns to be dropped col = ['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'date_time', 'friends_count'] twitter_archive_enhanced_clean.drop(columns=col, inplace=True) twitter_archive_enhanced_clean = twitter_archive_enhanced_clean.sort_values('dog_stage').drop_duplicates('tweet_id', keep='last') ###Output _____no_output_____ ###Markdown Test: ###Code twitter_archive_enhanced_clean.columns ###Output _____no_output_____ ###Markdown Issue: Delete the image predictions columns Define: Delete the image predictions columns Code: ###Code # Append the first prediction flagged True to the list 'predictions' and its confidence to the list 'confidence_level'; # otherwise, append NaN.
predictions = [] confidence_level = [] def prediction_func(dataframe): if dataframe['p1_dog'] == True: predictions.append(dataframe['p1']) confidence_level.append(dataframe['p1_conf']) elif dataframe['p2_dog'] == True: predictions.append(dataframe['p2']) confidence_level.append(dataframe['p2_conf']) elif dataframe['p3_dog'] == True: predictions.append(dataframe['p3']) confidence_level.append(dataframe['p3_conf']) else: predictions.append('NaN') confidence_level.append(0) twitter_archive_enhanced_clean.apply(prediction_func, axis=1) twitter_archive_enhanced_clean['prediction'] = predictions twitter_archive_enhanced_clean['confidence_level'] = confidence_level col = ['img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog',\ 'p3', 'p3_conf', 'p3_dog', 'in_reply_to_status_id',\ 'in_reply_to_user_id', 'favourites_count'] # Deleting unwanted columns twitter_archive_enhanced_clean.drop(columns=col, inplace=True) ###Output _____no_output_____ ###Markdown Test: ###Code twitter_archive_enhanced_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1987 entries, 0 to 63 Data columns (total 19 columns): tweet_id 1987 non-null int64 timestamp 1987 non-null object source 1987 non-null object text 1987 non-null object expanded_urls 1987 non-null object rating_numerator 1987 non-null int64 rating_denominator 1987 non-null int64 name 1987 non-null object doggo 1987 non-null object floofer 1987 non-null object pupper 1987 non-null object puppo 1987 non-null object favorite_count 1987 non-null int64 retweet_count 1987 non-null int64 followers_count 1987 non-null int64 jpg_url 1987 non-null object dog_stage 1987 non-null object prediction 1987 non-null object confidence_level 1987 non-null float64 dtypes: float64(1), int64(6), object(12) memory usage: 310.5+ KB ###Markdown Issue: __`Twitter Archive Enhanced Data:`__ `Source` format is bad and can't be read easily. Define: Make the `Source` format readable.
Code: ###Code twitter_archive_enhanced_clean.source = twitter_archive_enhanced_clean.source.apply(lambda x: re.findall(r'>(.*)<', x)[0]) ###Output _____no_output_____ ###Markdown Test: ###Code twitter_archive_enhanced_clean.sample(5) ###Output _____no_output_____ ###Markdown Issue: __`Twitter Archive Enhanced Data:`__ The columns `numerator` and `denominator` have invalid values. Define: The columns `numerator` and `denominator` have invalid values. Code: ###Code rating_temp = twitter_archive_enhanced_clean[twitter_archive_enhanced_clean.text.str.contains( r"(\d+\.?\d*\/\d+\.?\d*\D+\d+\.?\d*\/\d+\.?\d*)")].text for i in rating_temp: x = twitter_archive_enhanced_clean.text == i twitter_archive_enhanced_clean.loc[x, 'rating_numerator'] = re.findall(r"\d+\.?\d*\/\d+\.?\d*\D+(\d+\.?\d*)\/\d+\.?\d*", i) twitter_archive_enhanced_clean.loc[x, 'rating_denominator'] = 10 ###Output /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:1: UserWarning: This pattern has match groups. To actually get the groups, use str.extract. """Entry point for launching an IPython kernel. ###Markdown Test: ###Code twitter_archive_enhanced_clean[twitter_archive_enhanced_clean.text.isin(rating_temp)] ###Output _____no_output_____ ###Markdown Issue: Cleaning decimal values in rating numerators. Define: Cleaning decimal values in rating numerators. Code: ###Code # View tweets with decimals in rating in 'text' column twitter_archive_enhanced_clean[twitter_archive_enhanced_clean.text.str.contains(r"(\d+\.\d*\/\d+)")] ratings = twitter_archive_enhanced_clean.text.str.extract('((?:\d+\.)?\d+)\/(\d+)', expand=True) ratings twitter_archive_enhanced_clean.rating_numerator = ratings[0] ###Output _____no_output_____ ###Markdown Test: ###Code twitter_archive_enhanced_clean[twitter_archive_enhanced_clean.text.str.contains(r"(\d+\.\d*\/\d+)")] ###Output /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:1: UserWarning: This pattern has match groups. 
To actually get the groups, use str.extract. """Entry point for launching an IPython kernel. ###Markdown Issue: Convert the `null` values to `None` type Define: Convert the `null` values to `None` type ###Code twitter_archive_enhanced_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1987 entries, 0 to 63 Data columns (total 19 columns): tweet_id 1987 non-null int64 timestamp 1987 non-null object source 1987 non-null object text 1987 non-null object expanded_urls 1987 non-null object rating_numerator 1987 non-null object rating_denominator 1987 non-null int64 name 1987 non-null object doggo 1987 non-null object floofer 1987 non-null object pupper 1987 non-null object puppo 1987 non-null object favorite_count 1987 non-null int64 retweet_count 1987 non-null int64 followers_count 1987 non-null int64 jpg_url 1987 non-null object dog_stage 1987 non-null object prediction 1987 non-null object confidence_level 1987 non-null float64 dtypes: float64(1), int64(5), object(13) memory usage: 310.5+ KB ###Markdown Code: ###Code twitter_archive_enhanced_clean.loc[twitter_archive_enhanced_clean['prediction'] == 'NaN', 'prediction'] = None twitter_archive_enhanced_clean.loc[twitter_archive_enhanced_clean['rating_numerator'] == 'NaN', 'rating_numerator'] = 0 ###Output _____no_output_____ ###Markdown Test: ###Code twitter_archive_enhanced_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1987 entries, 0 to 63 Data columns (total 19 columns): tweet_id 1987 non-null int64 timestamp 1987 non-null object source 1987 non-null object text 1987 non-null object expanded_urls 1987 non-null object rating_numerator 1987 non-null object rating_denominator 1987 non-null int64 name 1987 non-null object doggo 1987 non-null object floofer 1987 non-null object pupper 1987 non-null object puppo 1987 non-null object favorite_count 1987 non-null int64 retweet_count 1987 non-null int64 followers_count 1987 non-null int64 jpg_url 1987 non-null object dog_stage 
1987 non-null object prediction 1679 non-null object confidence_level 1987 non-null float64 dtypes: float64(1), int64(5), object(13) memory usage: 310.5+ KB ###Markdown Issue: __`Twitter Archive Enhanced Data:`__ Replacing all lower case invalid names like (`a, an, etc.`) with `None` Define: Replacing all lower case invalid names like (`a, an, etc.`) with `None` Code: ###Code twitter_archive_enhanced_clean.name = twitter_archive_enhanced_clean.name.str.replace(r'^[a-z].*', 'None') ###Output _____no_output_____ ###Markdown Test: ###Code twitter_archive_enhanced_clean.name.value_counts() ###Output _____no_output_____ ###Markdown Issue: __`Twitter Archive Enhanced Data:`__ Replacing all `O` invalid names with `None` Define: Replacing names that are exactly `O` with `None` Code: ###Code twitter_archive_enhanced_clean.loc[twitter_archive_enhanced_clean.name == 'O', 'name'] = 'None' ###Output _____no_output_____ ###Markdown Test: ###Code twitter_archive_enhanced_clean[twitter_archive_enhanced_clean['name'].apply(len) == 1 ].name.value_counts() ###Output _____no_output_____ ###Markdown Issue: Correcting Data types Define: Correcting Data types Code: ###Code twitter_archive_enhanced_clean.dtypes twitter_archive_enhanced_clean['tweet_id'] = twitter_archive_enhanced_clean['tweet_id'].astype(str) twitter_archive_enhanced_clean['timestamp'] = pd.to_datetime(twitter_archive_enhanced_clean.timestamp) twitter_archive_enhanced_clean['source'] = twitter_archive_enhanced_clean['source'].astype('category') twitter_archive_enhanced_clean['rating_numerator'] = twitter_archive_enhanced_clean['rating_numerator'].astype(float) twitter_archive_enhanced_clean['rating_denominator'] = twitter_archive_enhanced_clean['rating_denominator'].astype(float) twitter_archive_enhanced_clean['favorite_count'] = twitter_archive_enhanced_clean['favorite_count'].astype(int) twitter_archive_enhanced_clean['retweet_count'] = twitter_archive_enhanced_clean['retweet_count'].astype(int)
twitter_archive_enhanced_clean['followers_count'] = twitter_archive_enhanced_clean['followers_count'].astype(int) twitter_archive_enhanced_clean['dog_stage'] = twitter_archive_enhanced_clean['dog_stage'].astype('category') ###Output _____no_output_____ ###Markdown Test: ###Code twitter_archive_enhanced_clean.dtypes ###Output _____no_output_____ ###Markdown Storing Cleaned Data in `twitter_archive_master.csv` file ###Code twitter_archive_enhanced_clean.to_csv('twitter_archive_master.csv', index=False, encoding='utf-8') ###Output _____no_output_____ ###Markdown The cleaned data has been successfully stored in the `twitter_archive_master.csv` file. --- Analyzing and Visualizing `twitter_archive_master` Data We begin by loading the cleaned, analysis-ready data from `twitter_archive_master.csv`. ###Code master_df = pd.read_csv('twitter_archive_master.csv') master_df.shape master_df.info() master_df.timestamp = pd.to_datetime(master_df.timestamp) master_df.info() master_df.head() master_df.sample(5) master_df.describe() ###Output _____no_output_____ ###Markdown Number of Tweets Monthly ###Code master_df['timestamp'].apply(lambda x: x.strftime('%Y-%m')).value_counts().sort_index() selected_data = master_df['tweet_id'].groupby([master_df['timestamp'].dt.year, master_df['timestamp'].dt.month]).count() selected_data.plot('line') plt.title('Number of Tweets Monthly', size=15) plt.xlabel('(Year, Month)') plt.ylabel('Number of Tweets') plt.savefig('images/Number-of-Tweets-Monthly.png', bbox_inches='tight'); ###Output _____no_output_____ ###Markdown The most tweets were posted in December 2015 (350+). Tweet volume then declined steadily until April 2016 and remained roughly constant through July 2017.
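The per-month counts above can also be computed by bucketing each timestamp into a calendar period, which gives a single sortable month index instead of a (year, month) tuple. A sketch on synthetic timestamps (the real notebook would use its `timestamp` column):

```python
import pandas as pd

# Synthetic tweet timestamps spanning three months
ts = pd.Series(pd.to_datetime([
    "2015-12-01", "2015-12-15", "2015-12-30",
    "2016-01-10", "2016-02-05",
]), name="timestamp")

# to_period('M') buckets each timestamp into its calendar month
monthly = ts.dt.to_period("M").value_counts().sort_index()
print(monthly)
```

The resulting Series plots directly with `monthly.plot(kind='line')` and keeps month order without manual sorting of string labels.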
Distribution of Rating Numerator ###Code ratings = master_df[(master_df.rating_numerator <= 14) & (master_df.rating_numerator.apply(float.is_integer))] rating_counts = ratings.groupby(['rating_numerator']).count()['tweet_id'] plt.bar(np.arange(15), rating_counts) plt.xticks(np.arange(15)) plt.xlabel('Rating Numerator') plt.ylabel('Frequency') plt.title('Distribution of Rating Numerators') plt.savefig('images/Distribution-of-Rating-Numerator.png', bbox_inches='tight'); ###Output _____no_output_____ ###Markdown Most rated tweets lie between `10-13` Most used `Source` ###Code master_df.source.value_counts() sns.countplot(data=master_df, x='source') plt.title('Most used Tweet Sources', size=15) plt.xlabel('Tweets Sources') plt.ylabel('Count') plt.savefig('images/Most-used-Sources.png', bbox_inches='tight'); ###Output _____no_output_____ ###Markdown The most popular source is `Twitter for iPhone` followed by the `Twitter Web Client` and `TweetDeck` Correlation between Variables ###Code sns.heatmap(master_df.corr(), cmap='Blues') plt.title('Correlation Heatmap') plt.savefig('images/Correlation-between-Variables.png', bbox_inches='tight'); ###Output _____no_output_____ ###Markdown There are some weak and strong correlations.- `Favorite_count` and `Retweet_count` have a strong correlation- `Favorite_count` and `Followers_count` have a weak correlation Most Frequent Dog Stages ###Code x = np.char.array(['Pupper', 'Doggo', 'Puppo', 'Doggo, Pupper', 'Floofer', 'Doggo, Puppo', 'Doggo, Floofer']) y = np.array(list(master_df[master_df['dog_stage'] != 'None']['dog_stage'].value_counts())[0:7]) explode = (0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1) percent = 100.*y/y.sum() patches, texts = plt.pie(y, startangle=90, radius=1.8, explode = explode) labels = ['{0} - {1:1.2f} %'.format(i,j) for i,j in zip(x, percent)] plt.legend(patches, labels, bbox_to_anchor=(-0.1, 1.), fontsize=8); plt.title('Most Frequent Dog Stages') plt.axis('scaled') plt.savefig('images/Most-Frequent-Dog-Stages.png', 
bbox_inches='tight'); ###Output _____no_output_____ ###Markdown `Pupper` is the most common dog found. Relation b/w Retweets and Likes ###Code master_df.plot(kind='scatter', x='favorite_count', y='retweet_count', alpha=0.5, figsize=(15, 11)) plt.xlabel('Number of Likes') plt.ylabel('Number of Retweets') plt.title('Relation b/w Retweets and Likes') plt.savefig('images/Relation-between-Retweets-and-Likes.png', bbox_inches='tight'); ###Output _____no_output_____ ###Markdown Project 4 Title: Wrangle And Analyse WeRateDogs Twitter Archive Table of Contents: > - Introduction> - Gather Data> - Assess Data> - Clean Data> - Store Clean Data> - Analyse and Visualize Clean Data Introduction:In this project we wrangle the tweet archive of Twitter user @dog_rates, also known as WeRateDogs. WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. Our goal is to wrangle WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations. ###Code # Her we import all the necessary python packages to Wrangle data import datetime from IPython.display import Image import json import matplotlib.pyplot as plt %matplotlib inline import numpy as np import os import pandas as pd import re import requests import seaborn as sns from sqlalchemy import create_engine from timeit import default_timer as timer import tweepy ###Output _____no_output_____ ###Markdown Gather Data: 1. **WeRateDogs Twitter Archive** file can be downloaded manually. Download source is given from Udacity. This `twitter-archive-enhanced.csv` file containes tweets details.2. **Tweet image predictions** file, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network. This file `image_predictions.tsv` is hosted on Udacity's servers and can be downloaded using the Requests library and the following URL: "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv"3. 
**Additional Tweet Data**, i.e. each tweet's retweet count and favorite ("like") count at minimum, plus any additional data necessary for this project. This data can be gathered by using the tweet IDs in the WeRateDogs Twitter archive to query the Twitter API for each tweet's JSON data with Python's Tweepy library, storing each tweet's entire set of JSON data in a file called `tweet_json.txt`. **1. WeRateDogs Twitter Archive file** ###Code # Create a dataframe object df_archive with twitter-archive-enhanced.csv file df_archive = pd.read_csv('twitter-archive-enhanced.csv') df_archive.head() ###Output _____no_output_____ ###Markdown **2. Tweet image predictions file** ###Code # Create a new folder 'image_prediction' to keep image_predictions.tsv file folder_name='image_prediction' if not os.path.exists(folder_name): os.makedirs(folder_name) # Get the image_predictions.tsv file from the given url using 'requests' library url = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" response = requests.get(url) # Write the content of the response to the image-predictions.tsv file in the image_prediction folder with open(os.path.join(folder_name, url.split('/')[-1]), mode='wb') as file: file.write(response.content) # Create a dataframe object named df_prediction with the tsv file df_prediction = pd.read_csv('image_prediction/image-predictions.tsv', sep='\t') df_prediction.head() ###Output _____no_output_____ ###Markdown **3.
Additional Tweet Data** ###Code # First get the key & token details from a previously created text file with open('key_token.txt', 'r') as file: api_key = file.readline()[:-1] api_secret_key = file.readline()[:-1] access_token = file.readline()[:-1] access_token_secret = file.readline() # Create connection to api auth = tweepy.OAuthHandler(api_key, api_secret_key) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) # Creat a list of tweet_id tweet_ids = np.array(df_archive.tweet_id.values) print('Number of tweets available is {}'.format(len(tweet_ids))) tweet_ids # Query twitter API to get JSON data for each tweet_id in the twitter archive and check the time required fails_dict = {} start = timer() with open('tweet_json.txt', 'w') as outfile: for tweet_id in tweet_ids: try: tweet = api.get_status(tweet_id, tweet_mode='extended') json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) ###Output 3255.002777599999 {888202515573088257: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 873697596434513921: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 872668790621863937: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 872261713294495745: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 869988702071779329: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 866816280283807744: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 861769973181624320: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 856602993587888130: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 851953902622658560: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 845459076796616705: 
TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 844704788403113984: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 842892208864923648: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 837366284874571778: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 837012587749474308: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 829374341691346946: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 827228250799742977: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 812747805718642688: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 802247111496568832: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 779123168116150273: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 775096608509886464: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 771004394259247104: TweepError([{'code': 179, 'message': 'Sorry, you are not authorized to see this status.'}]), 770743923962707968: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 759566828574212096: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 754011816964026368: TweepError([{'code': 144, 'message': 'No status found with that ID.'}]), 680055455951884288: TweepError([{'code': 144, 'message': 'No status found with that ID.'}])} ###Markdown ###Code # Read tweet_json.txt file line by line to make a list named tweet_detail_list with json objects to creat a dataframe object tweet_detail_list = [] with open('tweet_json.txt', 'r') as file: for line in file: try: tweet_detail_list.append(json.loads(line)) except: continue df_api = pd.DataFrame(tweet_detail_list) df_api.head() df_api.info() df_api.retweeted.value_counts() ###Output _____no_output_____ ###Markdown > From the above ourcome we can see that every `retweeted` value is `False` 
so we can say that this table doesn't include any retweeted data. ###Code # Create a new DataFrame object from df_api with necessary columns df_api_necessary = df_api[['id', 'retweet_count', 'favorite_count']] df_api_necessary.head() print('"df_api_necessary" DataFrame has {} rows and {} columns.'. format(df_api_necessary.shape[0], df_api_necessary.shape[1])) # Make copies of the three necessary dataframes for assessment and cleaning purposes df_archive_copy = df_archive.copy() df_prediction_copy = df_prediction.copy() df_necessary_copy = df_api_necessary.copy() ###Output _____no_output_____ ###Markdown Assess Data: Assessment of the gathered data can be done in two ways -> - Visual Assessment> - Programmatic AssessmentThe main objective of data assessment is detecting quality and tidiness issues in the data and documenting them to make the data cleaning process easier. Visual Assessment ###Code df_archive_copy df_prediction_copy df_necessary_copy ###Output _____no_output_____ ###Markdown Programmatic Assessment ###Code df_archive_copy.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown > From the above summary we can decide to delete the columns which have many null values and will not be used in this project. And we have to change the types of tweet_id and timestamp.
And create a new column dog_stage to avoid data redundancy. ###Code # Check if there is any duplicate value in 'tweet_id' sum(df_archive_copy.tweet_id.duplicated()) print('duplicate entries are {} '.format(sum(df_archive_copy.source.duplicated()))) print(df_archive_copy.source[0]) print(df_archive_copy.source[14]) print(df_archive_copy.source[23]) ###Output duplicate entries are 2352 <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a> <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a> <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a> ###Markdown > From the above output we conclude that we don't need the `source` column, as it contains just the URL, which is not required for the analysis. So we can drop this column. ###Code # Check how many different values are there as numerator in ratings df_archive_copy.rating_numerator.value_counts() # Check how many different values are there as denominator in ratings df_archive_copy.rating_denominator.value_counts() ###Output _____no_output_____ ###Markdown > There are different numbers in the denominator of the ratings. So to make comparisons by rating we have to adjust every rating to a common scale. Here we adjust every rating to a '10' scale, as 10 is the most common denominator. And there is a '0' value, so we have to delete that row. ###Code # Check the values of name column df_archive_copy.name.value_counts() # Check which types of information text column contains print(df_archive_copy['text'][1234]) print(df_archive_copy['text'][2134]) ###Output Please don't send in any more polar bears. We only rate dogs. Thank you... 10/10 https://t.co/83RGhdIQz2 This is Randall. He's from Chernobyl. Built playground himself. Has been stuck up there quite a while. 5/10 good dog https://t.co/pzrvc7wKGd ###Markdown > We can see that text contains the dog's name, a comment about the dog, the rating of the dog, and a link to the post.
We can see that the rating numerator and denominator are integers. We can check if there is any floating point number in the ratings in the text and check that against the rating_numerator and rating_denominator columns. ###Code # Check if there is any float numerator in rating in text column with pd.option_context('max_colwidth', 200): display(df_archive_copy[df_archive_copy['text'].str.contains(r"(\d+\.\d*\/\d+)")] [['tweet_id', 'text', 'rating_numerator', 'rating_denominator']]) ###Output c:\users\som25\appdata\local\programs\python\python37\lib\site-packages\pandas\core\strings.py:1843: UserWarning: This pattern has match groups. To actually get the groups, use str.extract. return func(self, *args, **kwargs) ###Markdown > From the above result we can see that there are six rows where the rating in the text has a float numerator, but in the rating_numerator column all of these are integers. From the above table we can conclude that the text column contains the original rating, so we have to change the values in the rating_numerator column. For that, we first have to change the type of the rating_numerator column from int to float. ###Code # Get an overview of the df_prediction_copy dataframe df_prediction_copy.info() sum(df_prediction_copy.duplicated()) # Check if there is any duplicated tweet_id or not sum(df_prediction_copy.tweet_id.duplicated()) # Check if there is any duplicate image_url or not sum(df_prediction_copy.jpg_url.duplicated()) ###Output _____no_output_____ ###Markdown > We can see that there are 66 duplicate images in the df_prediction_copy table, and they are not required for the analysis. So it's good to remove those duplicated rows to get appropriately concise data. ###Code # Check that the number of images for each individual tweet satisfies the maximum number of images that can be posted.
df_prediction_copy.img_num.value_counts() df_necessary_copy.info() df_necessary_copy.favorite_count.value_counts() df_necessary_copy.retweet_count.value_counts() ###Output _____no_output_____ ###Markdown Data Assessment Report: Quality Issues:These cover completeness, validity, accuracy and consistency issues.- **df_archive_copy** Table:> 1. Retweets are included. We should keep only the original tweets.> 2. Remove in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp, source columns, which have many null values and are not necessary for this project.> 3. Change type of tweet_id from 'int' to 'string'.> 4. Change type of timestamp to datetime.> 5. Null is identified as 'a', 'an', 'the' and 'None' for name, doggo, floofer, pupper, and puppo columns.> 6. We have to adjust the ratings to a 10 scale. And we have to delete the row which has '0' as rating_denominator. > 7. First change the rating_numerator column type from int to float and then change the values of rating_numerator of tweet_id '883482846933004288', '832215909146226688', '786709082849828864', '778027034220126208', '681340665377193984', '680494726643068929' according to the rating in the text column.- **df_prediction_copy** Table:> 1. Delete the 66 duplicated 'jpg_url' along with the entire row.> 2. Change type of tweet_id from int to string.> 3. Remove rows where predictions are not dogs.> 4. Get rid of data redundancy due to p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf, p3_dog columns. Use the data of these columns to make new meaningful columns and then remove them.- **df_necessary_copy** Table:> 1. Change the column name from `id` to `tweet_id`. And change the type of the column to str. Tidiness Issues:These cover structural issues.- **df_archive_copy** Table:> 1. Create a new column dog_stage instead of doggo, floofer, pupper and puppo columns to get rid of data redundancy.> 2.
Extract the dog's stage from the text column and use it to replace the NaN values as much as possible in the dog_stage column. Then remove the individual dog-stage columns. Then remove extra columns.- **df_necessary_copy** Table:> 1. Combine the three clean dataframes into a single dataframe to avoid data redundancy. Clean Data: This part of the Data Wrangling process consists of three sub-processes -> - Define > - Code> - Test Define: > 1. **df_archive_copy** Table: Retweets are included. We should keep only the original tweets. Code: ###Code # Remove those rows where retweeted_status_user_id is not null df_archive_copy = df_archive_copy[df_archive_copy.retweeted_status_user_id.isnull()] ###Output _____no_output_____ ###Markdown Test: ###Code # Check if there is any retweet or not print(sum(df_archive_copy.retweeted_status_id.notnull())) ###Output 0 ###Markdown Define: > 2. **df_archive_copy** Table: remove `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id`, `retweeted_status_user_id`, `retweeted_status_timestamp`, `source` columns, which have many null values and are not necessary for this project.
Code: ###Code # Remove the columns those are not necessary df_archive_copy = df_archive_copy.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'source'], axis=1) ###Output _____no_output_____ ###Markdown Test: ###Code # Check if there are those columns or not df_archive_copy.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 11 columns): tweet_id 2175 non-null int64 timestamp 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: int64(3), object(8) memory usage: 203.9+ KB ###Markdown Define: > 3. **df_archive_copy** Table: Change type of tweet_id from 'int' to 'string'. Code: ###Code # Change the type of the tweet_id column df_archive_copy['tweet_id'] = df_archive_copy.tweet_id.astype(str) ###Output _____no_output_____ ###Markdown Test: ###Code # Check the datatype of tweet_id column df_archive_copy.tweet_id.dtype ###Output _____no_output_____ ###Markdown Define: > 4. **df_archive_copy** Table: Change type of timestamp to datetime. Code: ###Code # Change the type of timestamp from string to datetime df_archive_copy['timestamp'] = pd.to_datetime(df_archive_copy['timestamp']) ###Output _____no_output_____ ###Markdown Test: ###Code # Check the datatype of the timestamp column df_archive_copy.timestamp.dtype ###Output _____no_output_____ ###Markdown Define: > 5. **df_archive_copy** Table: Null is identified as 'a', 'an', 'the' and 'None' for name, doggo, floofer, pupper, and puppo columns. 
Code: ###Code # Get some more occurred Names in name column series = df_archive_copy.name.value_counts() series.sort_values(ascending=False, inplace=True) print(series[:21]) ###Output None 680 a 55 Lucy 11 Charlie 11 Oliver 10 Cooper 10 Tucker 9 Penny 9 Sadie 8 Winston 8 Lola 8 the 8 Daisy 7 Toby 7 an 6 Bailey 6 Koda 6 Oscar 6 Bella 6 Stanley 6 Jax 6 Name: name, dtype: int64 ###Markdown > From the above summery we can see that there are 'None', 'a', 'the' and 'an' as null in the name column. So we should replace those values with NaN. ###Code # Modify those values for name column invalid_name_list = ['None','a','an','the'] df_archive_copy.replace(invalid_name_list, np.nan, inplace=True) ###Output _____no_output_____ ###Markdown Test: ###Code # Check for the effectiveness of the above code series = df_archive_copy.name.value_counts() series.sort_values(ascending=False, inplace=True) print(series[:21]) ###Output Lucy 11 Charlie 11 Oliver 10 Cooper 10 Tucker 9 Penny 9 Winston 8 Lola 8 Sadie 8 Toby 7 Daisy 7 Bella 6 Koda 6 Jax 6 Bo 6 Stanley 6 Bailey 6 Oscar 6 Buddy 5 Scout 5 Rusty 5 Name: name, dtype: int64 ###Markdown Define: > 6. **df_archive_copy** Table: We have to delete the row which have '0' as rating_denominator. Code: ###Code # Check for '0' value in rating_denominator df_archive_copy.rating_denominator.value_counts() # Get the index of that particular row i = df_archive_copy.query('rating_denominator == 0').index print(i[0]) # Remove that row from the table df_archive_copy.drop(i[0], inplace=True) ###Output _____no_output_____ ###Markdown Test: ###Code # Check that row exists or not df_archive_copy.query('rating_denominator == 0') ###Output _____no_output_____ ###Markdown Define: > 7. 
**df_archive_copy** Table: First change rating_numerator column type from int to float and then change values of rating_numerator of tweet_id '883482846933004288', '832215909146226688', '786709082849828864 ', '778027034220126208', '681340665377193984', '680494726643068929' according to the rating in text columns. Code: ###Code # Change the type of rating_numerator and rating_denominator columns from int to float df_archive_copy[['rating_numerator','rating_denominator']]=df_archive_copy[['rating_numerator','rating_denominator']].astype(float) # Check for the quality issue that is defined above with pd.option_context('max_colwidth', 200): display(df_archive_copy[df_archive_copy['text'].str.contains(r"(\d+\.\d*\/\d+)")] [['tweet_id', 'text', 'rating_numerator', 'rating_denominator']]) # Modify the values of rating_numerator df_archive_copy.loc[df_archive_copy.tweet_id=='883482846933004288', 'rating_numerator'] = 13.5 df_archive_copy.loc[df_archive_copy.tweet_id=='786709082849828864', 'rating_numerator'] = 9.75 df_archive_copy.loc[df_archive_copy.tweet_id=='778027034220126208', 'rating_numerator'] = 11.27 df_archive_copy.loc[df_archive_copy.tweet_id=='681340665377193984', 'rating_numerator'] = 9.5 df_archive_copy.loc[df_archive_copy.tweet_id=='680494726643068929', 'rating_numerator'] = 11.26 ###Output _____no_output_____ ###Markdown Test: ###Code # Check the type of rating_numerator and rating_denominator df_archive_copy.info() # Check for float rating_numerator and that in the text column with pd.option_context('max_colwidth', 200): display(df_archive_copy[df_archive_copy['text'].str.contains(r"(\d+\.\d*\/\d+)")] [['tweet_id', 'text', 'rating_numerator', 'rating_denominator']]) ###Output _____no_output_____ ###Markdown Define: > 8. **df_prediction_copy** Table: Delete the 66 duplicated 'jpg_url' along with the entire row. 
Code: ###Code # Remove the rows which have duplicate jpg_url df_prediction_copy = df_prediction_copy.drop_duplicates(subset=['jpg_url'], keep='last') ###Output _____no_output_____ ###Markdown Test: ###Code # Check if there is any duplicate jpg_url or not sum(df_prediction_copy.jpg_url.duplicated()) ###Output _____no_output_____ ###Markdown Define: > 9. **df_prediction_copy** Table: Change type of tweet_id from int to string. Code: ###Code # Change the type of tweet_id to string df_prediction_copy['tweet_id'] = df_prediction_copy['tweet_id'].astype(str) ###Output c:\users\som25\appdata\local\programs\python\python37\lib\site-packages\ipykernel_launcher.py:2: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy ###Markdown Test: ###Code # Check for the modified type of tweet_id df_prediction_copy.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2009 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2009 non-null object jpg_url 2009 non-null object img_num 2009 non-null int64 p1 2009 non-null object p1_conf 2009 non-null float64 p1_dog 2009 non-null bool p2 2009 non-null object p2_conf 2009 non-null float64 p2_dog 2009 non-null bool p3 2009 non-null object p3_conf 2009 non-null float64 p3_dog 2009 non-null bool dtypes: bool(3), float64(3), int64(1), object(5) memory usage: 162.8+ KB ###Markdown Define: > 10. **df_prediction_copy** Table: Remove rows where predictions are not dog. 
Code: ###Code # Remove those rows where predictions are not dogs not_dog_list = df_prediction_copy.query('p1_dog==False & p2_dog==False & p3_dog==False').index df_prediction_copy.drop(index=not_dog_list, inplace=True) ###Output c:\users\som25\appdata\local\programs\python\python37\lib\site-packages\pandas\core\frame.py:4102: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy errors=errors, ###Markdown Test: ###Code # Check if there is any row which contain df_prediction_copy.query('p1_dog==False & p2_dog==False & p3_dog==False') df_prediction_copy.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1691 entries, 0 to 2073 Data columns (total 12 columns): tweet_id 1691 non-null object jpg_url 1691 non-null object img_num 1691 non-null int64 p1 1691 non-null object p1_conf 1691 non-null float64 p1_dog 1691 non-null bool p2 1691 non-null object p2_conf 1691 non-null float64 p2_dog 1691 non-null bool p3 1691 non-null object p3_conf 1691 non-null float64 p3_dog 1691 non-null bool dtypes: bool(3), float64(3), int64(1), object(5) memory usage: 137.1+ KB ###Markdown Define: > 11. **df_archive_copy** Table: Create a new column `dog_stage` instead of doggo, floofer, pupper and puppo columns to get rid of data redundancy. 
Code: ###Code # Check if there are any rows where more than one dog's stage available df_archive_copy[df_archive_copy[['doggo', 'floofer', 'pupper', 'puppo']].notnull().sum(axis=1)>1] # Concatenate all the dog stages columns in one column called dog_stage df_archive_copy['dog_stage'] = df_archive_copy.loc[:, 'doggo':'puppo'].apply(lambda x: ','.join(x.dropna().astype(str)), axis=1) ###Output _____no_output_____ ###Markdown Test: ###Code # Check the dog_stage column and its values df_archive_copy.dog_stage.value_counts() ###Output _____no_output_____ ###Markdown Define: > 12. **df_archive_copy** Table: From the text column extract the dog's stage use them to replace the NaN values as much as possible in `dog_stage` column. Then remove the indivisual dog's stage columns. Code: ###Code # Check for dog types in text column dog_type = ['doggo', 'floofer', 'pupper' , 'puppo'] for t in dog_type: print(t, df_archive_copy.text.str.contains(t).sum()) # Extract the dog types from the text column and create a new column dog_type with those values dog_type = ['doggo', 'floofer', 'pupper' , 'puppo'] pattern = '|'.join(dog_type) df_archive_copy['dog_type'] = df_archive_copy['text'].str.extract('('+pattern+')', expand=False) df_archive_copy['dog_type'].value_counts() # Create a new column 'dog_stages' by merging two columns 'dog_stage' and 'dog_type' df_archive_copy['dog_stages'] = df_archive_copy.dog_type.fillna(df_archive_copy.dog_stage) # Drop the unused columns 'doggo', 'floofer', 'pupper', 'puppo', 'dog_stage' and 'dog_type' df_archive_copy.drop(['doggo', 'floofer', 'pupper', 'puppo', 'dog_stage', 'dog_type'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test: ###Code # Check the values of dog_stages column df_archive_copy.dog_stages.value_counts() # Check the present columns df_archive_copy.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2174 entries, 0 to 2355 Data columns (total 8 columns): tweet_id 2174 non-null object timestamp 2174 
non-null datetime64[ns, UTC] text 2174 non-null object expanded_urls 2117 non-null object rating_numerator 2174 non-null float64 rating_denominator 2174 non-null float64 name 1426 non-null object dog_stages 2174 non-null object dtypes: datetime64[ns, UTC](1), float64(2), object(5) memory usage: 152.9+ KB ###Markdown Define: > 13. **df_prediction_copy** Table: Get rid of data redundancy due to p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf, p3_dog columns. Use the data of these columns to make new meaningful columns and then remove these. Code: ###Code # Create three new dataframe for three predictions respectively pred_1 = df_prediction_copy.query('p1_dog==True')[['tweet_id', 'p1', 'p1_conf']] pred_2 = df_prediction_copy.query('p1_dog==False & p2_dog ==True')[['tweet_id', 'p2', 'p2_conf']] pred_3 = df_prediction_copy.query('p1_dog==False & p2_dog ==False & p3_dog ==True')[['tweet_id', 'p3', 'p3_conf']] # Rename columns names of three new dataframe pred_1.columns = ['tweet_id','predictions', 'confidence'] pred_2.columns = ['tweet_id','predictions', 'confidence'] pred_3.columns = ['tweet_id','predictions', 'confidence'] # Concate three new dataframe into a new dataframe pred_new = pd.concat([pred_1, pred_2, pred_3]) pred_new.count() # Merge the pred_new dataframe to the df_prediction_copy dataframe df_prediction_copy = pd.merge(df_prediction_copy, pred_new, on='tweet_id', how='left') # Drop the not necessary columns for this analysis df_prediction_copy.drop(['p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test: ###Code # Check the df_prediction_copy dataframe df_prediction_copy.sample(5) # Get summery about the df_prediction_copy df_prediction_copy.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1691 entries, 0 to 1690 Data columns (total 5 columns): tweet_id 1691 non-null object jpg_url 1691 non-null object img_num 1691 non-null int64 
predictions 1691 non-null object confidence 1691 non-null float64 dtypes: float64(1), int64(1), object(3) memory usage: 79.3+ KB ###Markdown Define: > 14. **df_necessary_copy** Table: Change the column name for id to tweet_id. And change type of column to str. Code: ###Code # Change column name df_necessary_copy.rename(columns={'id':'tweet_id'}, inplace=True) # Change type of tweet_id column from int to str df_necessary_copy['tweet_id'] = df_necessary_copy['tweet_id'].astype(str) ###Output _____no_output_____ ###Markdown Test: ###Code # Check the dataframe status df_necessary_copy.sample(5) # Check the tweet_id type df_necessary_copy.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 3 columns): tweet_id 2331 non-null object retweet_count 2331 non-null int64 favorite_count 2331 non-null int64 dtypes: int64(2), object(1) memory usage: 54.8+ KB ###Markdown Define: > 15. Combine three clean dataframes into a single dataframe to avoid data redundancy. Code: ###Code # Check the status and short summery for all the three dataframes before the merge df_archive_copy.head() df_archive_copy.info() df_prediction_copy.head() df_prediction_copy.info() df_necessary_copy.head() df_necessary_copy.info() # Merge df_archive_copy and df_prediction_copy dataframes into df_twitter_1 df_twitter_1 = pd.merge(df_archive_copy, df_prediction_copy, on = ['tweet_id'], how = 'left') # Keep those rows which have jpg_url not null df_twitter_1 = df_twitter_1[df_twitter_1.jpg_url.notnull()] # Merge df_twitter_1 and df_necessary_copy dataframes into a new df_twitter df_twitter = pd.merge(df_twitter_1, df_necessary_copy, on = ['tweet_id'], how = 'left') ###Output _____no_output_____ ###Markdown Test: ###Code # Check the new dataframes df_twitter_1.info() df_twitter.info() df_twitter.head() ###Output _____no_output_____ ###Markdown Store Clean Data > Clean data is stored for the future analysis purpose. 
Data can be stored as a `flat file` or in a `relational database` like SQLite. ###Code # Store the df_twitter clean dataframe as twitter_archive_master.csv file df_twitter.to_csv('twitter_archive_master.csv', index=False, encoding='utf-8') # Store the dataframe in the sqlite database using sqlalchemy in a table called 'master' in 'twitter.db' # First create a SQLAlchemy engine and an empty 'twitter.db' database engine = create_engine('sqlite:///twitter.db') # Create a table called 'master' using the dataframe in 'twitter.db' database df_twitter.to_sql('master', engine, index=False) # Check the data stored as csv file df = pd.read_csv('twitter_archive_master.csv') df.head(3) # Check the data stored in sqlite database in 'master' table df1 = pd.read_sql('SELECT * FROM master', engine) df1.head(3) ###Output _____no_output_____ ###Markdown Analyze and visualize data > After wrangling the data we can analyze and visualize the dataset to find patterns and trends in the data. ###Code # Create a new dataframe for the purpose of analysis from the csv file that was created before. df = pd.read_csv('twitter_archive_master.csv') df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1626 entries, 0 to 1625 Data columns (total 14 columns): tweet_id 1626 non-null int64 timestamp 1626 non-null object text 1626 non-null object expanded_urls 1626 non-null object rating_numerator 1626 non-null float64 rating_denominator 1626 non-null float64 name 1165 non-null object dog_stages 277 non-null object jpg_url 1626 non-null object img_num 1626 non-null float64 predictions 1626 non-null object confidence 1626 non-null float64 retweet_count 1620 non-null float64 favorite_count 1620 non-null float64 dtypes: float64(6), int64(1), object(7) memory usage: 178.0+ KB ###Markdown Insight 1: Highest Rating ###Code # Create a new column called 'rating'. # All the values of the 'rating' column are adjusted to a 10 scale so that we can compare according to their ratings.
df['rating'] = df['rating_numerator'] / df['rating_denominator'] * 10 df.rating.value_counts() # Get the sorted list of ratings df.rating.sort_values(ascending=False) df.rating.nlargest(3) ###Output _____no_output_____ ###Markdown > - Top three ratings are `34.28`, `14.0`, `14.0`. ###Code # Get image url of the top 3 higher rated dog df.loc[df.rating.nlargest(3).index].jpg_url # Get the images of the top three higher rated dogs. # Highest rated dog's image. url = df.loc[df.rating.nlargest(3).index[0]].jpg_url print(df.loc[df.rating.nlargest(3).index[0]].text) Image(url=url, width=150, height=150) # Second higher dog's image. url = df.loc[df.rating.nlargest(3).index[1]].jpg_url print(df.loc[df.rating.nlargest(3).index[1]].text) Image(url=url, width=150, height=150) # Third higher dog's image. url = df.loc[df.rating.nlargest(3).index[2]].jpg_url print(df.loc[df.rating.nlargest(3).index[2]].text) Image(url=url, width=150, height=150) ###Output I present to you, Pup in Hat. Pup in Hat is great for all occasions. Extremely versatile. Compact as h*ck. 14/10 (IG: itselizabethgales) https://t.co/vvBOcC2VdC ###Markdown Insight 2: Popular Dog's name ###Code # Get the values of dog's name df.name.value_counts() # See the top 10 more popular dog's name and plot a bar diagram and label the plot df.name.value_counts().head(10).plot(kind='bar', fontsize=13, color=sns.color_palette('dark'), alpha=.6) plt.title('Top 10 Popular Dog\'s Name', fontsize=18) plt.xlabel('Dog\'s Name', fontsize=14) plt.ylabel('Count', fontsize=14) # Set the size and background of the plot sns.set(rc={'figure.figsize':(10,5)}) sns.set_style('whitegrid') fig = plt.gcf() fig.savefig('popular_dog_name.jpg',bbox_inches='tight') ###Output _____no_output_____ ###Markdown > - Most popular dog's names are `Charlie`, `Cooper`, `Lucy`. 
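The `dog_stages` column analyzed in Insight 3 was built during cleaning with `str.extract` and a regex alternation over the four stage keywords. As a minimal, self-contained sketch of that technique — the toy tweet strings below are illustrative, not taken from the real archive:

```python
import pandas as pd

# Toy tweets; the stage keywords mirror the list used in the cleaning step
toy = pd.DataFrame({"text": [
    "This is Max. He is a doggo. 12/10",
    "Meet Bella, such a pupper. 11/10",
    "We only rate dogs. 10/10",
]})

stages = ['doggo', 'floofer', 'pupper', 'puppo']
pattern = '(' + '|'.join(stages) + ')'

# str.extract keeps the first matching stage and yields NaN when none is found
toy['dog_stage'] = toy['text'].str.extract(pattern, expand=False)
print(toy['dog_stage'].tolist())
```

One caveat this mirrors from the notebook: only the first stage found in the text is kept, so a tweet mentioning two stages records just one of them.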
Insight 3: Dog's Stage ###Code # Get the values of dog_stages df.dog_stages.value_counts() # Plot the values of dog_stages along with their occurrences to get a better visualization df.dog_stages.value_counts().plot(kind='bar', fontsize=13, figsize=(10,5), color=sns.color_palette('dark'), alpha=.6) plt.title('Tweeted Dog\'s Stage', fontsize=18) plt.xlabel('Dog\'s Stage', fontsize=14) plt.ylabel('Counts', fontsize=14) fig = plt.gcf() fig.savefig('tweet_type.jpg',bbox_inches='tight') ###Output _____no_output_____ ###Markdown > - Most tweeted dogs are in the `pupper` stage, and the fewest are in the `floofer` stage. Insight 4: Dog's Breed Prediction ###Code # Get the values of the dog breed predictions df.predictions.value_counts() # Plot a bar diagram to get a better visualization of the most frequent dog breeds df.predictions.value_counts().head(15).plot(kind='bar', fontsize=13, figsize=(10,5), color=sns.color_palette('dark'), alpha=.6) plt.title('Most Tweeted Dog\'s Breed', fontsize=18) plt.xlabel('Dog\'s Breed', fontsize=14) plt.ylabel('Occurrence in Tweets', fontsize=14) fig=plt.gcf() fig.savefig('dog_breed.jpg',bbox_inches='tight') ###Output _____no_output_____ ###Markdown > - `Golden Retriever` is the most common dog breed, followed by `Labrador Retriever` and `Pembroke`. Insight 5: Dog's Breed Prediction Confidence ###Code # Get the prediction confidence values of the neural network df.confidence.sort_values(ascending=False) # Draw a box plot to get an idea of which dog stage is most predictable by the neural network. sns.boxplot(x='dog_stages', y='confidence', data=df, palette='dark', saturation=.6, width=.6) sns.set(rc={'figure.figsize':(7,4)}) sns.set_style('whitegrid') plt.savefig('boxplot.jpg') ###Output _____no_output_____ ###Markdown > - From the median values (Q2) we can say that `puppo` is more predictable than any other dog stage. 
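The box plot's median (Q2) comparison can also be read off numerically with a `groupby`; a minimal sketch with made-up stages and confidence values:

```python
import pandas as pd

# Hypothetical dog stages and prediction confidences
toy = pd.DataFrame({
    'dog_stages': ['puppo', 'puppo', 'pupper', 'pupper', 'doggo'],
    'confidence': [0.9, 0.8, 0.5, 0.6, 0.7],
})

# The median per group is exactly the Q2 line drawn inside each box of the box plot
medians = toy.groupby('dog_stages')['confidence'].median()
print(medians.idxmax())  # puppo
```

Running `df.groupby('dog_stages')['confidence'].median()` on the real dataframe gives the numbers behind the visual comparison.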
Visualization 1: Retweet & Rating ###Code # Get the values of the retweet_count column df.retweet_count.sort_values(ascending=False) # Draw a scatterplot to get an idea of the relation between retweet_count and the rating of a dog's post tweet_rating = sns.regplot(x=df.retweet_count, y=df.rating, color='green') sns.set(rc={'figure.figsize':(8,4)}) sns.set_style('whitegrid') plt.title('Retweet Count VS Rating', fontsize=18) plt.xlabel('Retweet Count', fontsize=14) plt.ylabel('Rating', fontsize=14) plt.setp(tweet_rating.collections, alpha=.6) # Get the correlation coefficient print('Correlation Coefficient is: ', df.corr().loc['retweet_count', 'rating'].round(10)) ###Output Correlation Coefficient is: 0.2728296705 ###Markdown > - From the above plot and the correlation coefficient (0.273), we can say that a dog's rating is weakly related to its retweet_count: posts with more retweets tend to have slightly higher ratings. Visualization 2: Favorite & Rating ###Code # Get the values of the favorite_count column df.favorite_count.sort_values(ascending=False) # Draw a scatterplot between favorite_count and rating of a dog to get an idea of the relationship between them. fav_rating = sns.regplot(x=df.favorite_count, y=df.rating, color='orange') sns.set(rc={'figure.figsize':(8,4)}) sns.set_style('whitegrid') plt.title('Favorite Count VS Rating', fontsize=18) plt.xlabel('Favorite Count', fontsize=14) plt.ylabel('Rating', fontsize=14) plt.setp(fav_rating.collections, alpha=.6) # Get the correlation coefficient print('Correlation Coefficient is: ', df.corr().loc['favorite_count', 'rating'].round(10)) ###Output Correlation Coefficient is: 0.3698804105 ###Markdown > - From the above plot and the correlation coefficient (0.369), we can say that a dog's rating is related to its favorite_count: dogs that are favorited more tend to be rated higher. 
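The coefficients printed above are Pearson's r, which `DataFrame.corr` computes by default; as a sanity check, the same value falls out of its definition, the covariance divided by the product of the standard deviations (toy series, not the real data):

```python
import pandas as pd

x = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
y = pd.Series([2.0, 1.0, 4.0, 3.0, 6.0])

# Pearson's r from its definition: cov(x, y) / (std(x) * std(y))
r_manual = x.cov(y) / (x.std() * y.std())

# pandas' built-in computation should agree
r_pandas = x.corr(y)
print(round(r_manual, 6) == round(r_pandas, 6))  # True
```

A value near 0 means no linear relationship, near 1 a strong positive one, which is how the 0.273 and 0.369 readings above are interpreted.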
Visualization 3: Retweet & Favorite ###Code # Plot a scatter plot to see the relation between the favorite_count and retweet_count retweet_fav = sns.regplot(x=df.favorite_count, y=df.retweet_count, color='brown') sns.set(rc={'figure.figsize':(8,4)}) sns.set_style('whitegrid') plt.title('Retweet Count VS Favorite Count', fontsize=18) plt.xlabel('Favorite Count', fontsize=14) plt.ylabel('Retweet Count', fontsize=14) plt.setp(retweet_fav.collections, alpha=.6) fig=plt.gcf() fig.savefig('retweet_fav.jpg',bbox_inches='tight') # Get the correlation coefficient print('Correlation Coefficient is: ', df.corr().loc['retweet_count', 'favorite_count'].round(10)) ###Output Correlation Coefficient is: 0.9266136846 ###Markdown > - From the above plot and the correlation coefficient (0.926), we can say that a dog's retweet_count strongly depends on its favorite_count: dogs that are favorited more tend to be retweeted more. Visualization 4: Retweet & Dog type ###Code # Plot a bar diagram to visualize which dog type is most likely to be retweeted. df.groupby('dog_stages').retweet_count.mean().plot(kind='bar',fontsize=13,figsize=(10,5),color=sns.color_palette('dark'),alpha=.6) plt.title('Retweet Counts of Dog Types', fontsize=18) plt.xlabel('Dog Types', fontsize=14) plt.ylabel('Retweet Counts', fontsize=14) fig=plt.gcf() fig.savefig('retweet_type.jpg',bbox_inches='tight') ###Output _____no_output_____ ###Markdown > - From the above plot we can say that dogs in the `puppo` stage are most likely to be retweeted by users, followed by `doggo` and `floofer`. ###Code # Get the image urls of the top three highest retweet_count dogs. df.loc[df.retweet_count.nlargest(3).index].jpg_url ###Output _____no_output_____ ###Markdown > - Only the third highest retweet_count dog's image url is valid; the others are invalid or NaN. ###Code # Get the third highest retweet_count dog's image. 
url = df.loc[df.retweet_count.nlargest(3).index[2]].jpg_url print(df.loc[df.retweet_count.nlargest(3).index[2]].text) Image(url=url, width=300, height=300) ###Output Here's a super supportive puppo participating in the Toronto #WomensMarch today. 13/10 https://t.co/nTz3FtorBc ###Markdown Visualization 5: Favorite & Dog type ###Code # Plot a bar diagram to visualize which dog type is most likely to be favorited. df.groupby('dog_stages').favorite_count.mean().plot(kind='bar',fontsize=13,figsize=(10,5),color=sns.color_palette('dark'),alpha=.6) plt.title('Favorite Counts of Dog Types', fontsize=18) plt.xlabel('Dog Types', fontsize=14) plt.ylabel('Favorite Counts', fontsize=14) fig=plt.gcf() fig.savefig('favorite_type.jpg',bbox_inches='tight') ###Output _____no_output_____ ###Markdown > - From the above plot we can say that dogs in the `puppo` stage are most likely to be favorited, followed by `doggo` and `floofer`. ###Code # Get the image urls of the top three most favorited dogs. df.loc[df.favorite_count.nlargest(3).index].jpg_url ###Output _____no_output_____ ###Markdown > - The image url of the most favorited dog is invalid. ###Code # Get the image of the second most favorited dog. url = df.loc[df.favorite_count.nlargest(3).index[1]].jpg_url print(df.loc[df.favorite_count.nlargest(3).index[1]].text) Image(url=url, width=300, height=300) ###Output Here's a super supportive puppo participating in the Toronto #WomensMarch today. 13/10 https://t.co/nTz3FtorBc ###Markdown Visualization 6: Favorite & Dog Breed ###Code # Plot a bar diagram to visualize the most favorited dog breeds. 
df.groupby('predictions').favorite_count.mean().head(15).sort_values(ascending=False).plot(kind='bar',fontsize=13,figsize=(10,5),color=sns.color_palette('dark'),alpha=.6) plt.title('Favorite Counts of Dog Breeds', fontsize=18) plt.xlabel('Dog Breeds', fontsize=14) plt.ylabel('Favorite Counts', fontsize=14) fig=plt.gcf() fig.savefig('favorite_breed.jpg',bbox_inches='tight') ###Output _____no_output_____ ###Markdown Gather ###Code # Download the tweet image predictions tsv file via the requests library url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open(url.split('/')[-1], mode = 'wb') as file: file.write(response.content) # Read the `image_predictions.tsv` file predict_dog = pd.read_csv('image-predictions.tsv', sep = '\t') # Read the `twitter-archive-enhanced.csv` file to get tweet ids for the API twitter_df = pd.read_csv('twitter-archive-enhanced.csv') # extract tweet ids only for use in the API tweet_id = twitter_df['tweet_id'] # Authenticate the Tweepy API consumer_key = '' consumer_secret = '' access_token = '' access_secret = '' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True, parser=tweepy.parsers.JSONParser()) # Get tweet JSON data using tweet_id via Tweepy tweet_json = [] error_list = [] for i in tweet_id: try: tweet = api.get_status(i, tweet_mode = 'extended') tweet_json.append(tweet) except Exception: error_list.append(i) continue # Write the collected JSON data to the tweet_json.txt file # (json.dump writes the whole list as one JSON array, not one tweet per line) with open('tweet_json.txt', 'w') as outfile: json.dump(tweet_json, outfile, indent = True) # Read the tweet_json.txt file into a pandas data frame pd_json = pd.read_json('tweet_json.txt', orient = 'columns') # Extract only the needed columns (tweet_id, favorite_count, retweet_count) # Save it to tweet_json tweet_json = 
pd_json[['id','favorite_count','retweet_count']] ###Output _____no_output_____ ###Markdown Assess Visual Assessment ###Code # display predict_dog table predict_dog.head() # display predict_dog table predict_dog.info() # display twitter_df table twitter_df.head() # display twitter_df table twitter_df.info() # display tweet_json table tweet_json.head() # display tweet_json table tweet_json.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2344 entries, 0 to 2343 Data columns (total 3 columns): id 2344 non-null int64 favorite_count 2344 non-null int64 retweet_count 2344 non-null int64 dtypes: int64(3) memory usage: 73.2 KB ###Markdown Programmatic Assessment ###Code # display predict_dog table predict_dog.describe() # display twitter_df table twitter_df.describe() # display tweet_json table tweet_json.describe() ###Output _____no_output_____ ###Markdown Quality `twitter_df` table- some tweets were deleted or invalid (2344 tweets retrieved instead of 2356), as recorded in `error_list`- `tweet_id` column type is integer although it should be string- `timestamp` column type is object (string) but it should be datetime- `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id`, `retweeted_status_user_id` and `retweeted_status_timestamp` columns have non-null values, which does not fit the criteria of the analysis: we need only original tweets, not retweets or replies - `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id`, `retweeted_status_user_id` and `retweeted_status_timestamp` no longer add useful information to our dataset `predict_dog` table- `tweet_id` column type is integer although it should be string - invalid records because some tweets are deleted from the archive- not all images are for dogs! 
some images are for other animals `tweet_json` table- `id` column type is integer although it should be string- the `id` column name is not consistent with the `tweet_id` column name used in the other tables Tidiness `twitter_df` table- the `timestamp` column should be split into two separate columns, `date` and `time` `predict_dog` table- the whole table should be merged with the `twitter_df` table `tweet_json` table- the whole table should be merged with the `twitter_df` table ###Code # Make a copy of the datasets to work with and clean twitter_df_clean = twitter_df.copy() predict_dog_clean = predict_dog.copy() tweet_json_clean = tweet_json.copy() ###Output _____no_output_____ ###Markdown Clean Define Quality `twitter_df_clean` table- remove all records with ids in `error_list` Code ###Code # remove all records with ids in `error_list` for record in error_list: twitter_df_clean.drop(twitter_df_clean[twitter_df_clean.tweet_id == record].index, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code # check if any record with id from error_list still exists! 
twitter_df_clean['tweet_id'].isin(error_list).value_counts() ###Output _____no_output_____ ###Markdown Define Quality `twitter_df_clean` table- convert `tweet_id` column to string Code ###Code # Convert tweet_id column to string twitter_df_clean['tweet_id'] = twitter_df_clean['tweet_id'].astype(str) ###Output _____no_output_____ ###Markdown Test ###Code # check the column type is converted correctly twitter_df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2344 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2344 non-null object in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2344 non-null object source 2344 non-null object text 2344 non-null object retweeted_status_id 170 non-null float64 retweeted_status_user_id 170 non-null float64 retweeted_status_timestamp 170 non-null object expanded_urls 2285 non-null object rating_numerator 2344 non-null int64 rating_denominator 2344 non-null int64 name 2344 non-null object doggo 2344 non-null object floofer 2344 non-null object pupper 2344 non-null object puppo 2344 non-null object dtypes: float64(4), int64(2), object(11) memory usage: 329.6+ KB ###Markdown Define Quality `twitter_df_clean` table- convert `timestamp` column to datetime Code ###Code # convert `timestamp` column to datetime twitter_df_clean['timestamp'] = pd.to_datetime(twitter_df_clean['timestamp']) ###Output _____no_output_____ ###Markdown Test ###Code # check that `timestamp` column is converted correctly twitter_df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2096 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2096 non-null object in_reply_to_status_id 0 non-null float64 in_reply_to_user_id 0 non-null float64 timestamp 2096 non-null datetime64[ns] source 2096 non-null object text 2096 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object 
expanded_urls 2093 non-null object rating_numerator 2096 non-null int64 rating_denominator 2096 non-null int64 name 2096 non-null object doggo 2096 non-null object floofer 2096 non-null object pupper 2096 non-null object puppo 2096 non-null object dtypes: datetime64[ns](1), float64(4), int64(2), object(10) memory usage: 374.8+ KB ###Markdown Define Quality `twitter_df_clean` table- remove all rows with non-null values in the `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id`, `retweeted_status_user_id` and `retweeted_status_timestamp` columns, since those rows are replies or retweets rather than original tweets Code ###Code # get the index of rows with values in the 'in_reply_to_status_id' column irtsid = twitter_df_clean['in_reply_to_status_id'][twitter_df_clean.in_reply_to_status_id.notnull()].index # drop the rows with those index values twitter_df_clean.drop(irtsid, inplace = True) # get the index of rows with values in the 'in_reply_to_user_id' column irtuid = twitter_df_clean['in_reply_to_user_id'][twitter_df_clean.in_reply_to_user_id.notnull()].index # drop the rows with those index values twitter_df_clean.drop(irtuid, inplace = True) # get the index of rows with values in the 'retweeted_status_id' column rsid = twitter_df_clean['retweeted_status_id'][twitter_df_clean.retweeted_status_id.notnull()].index # drop the rows with those index values twitter_df_clean.drop(rsid, inplace = True) # get the index of rows with values in the 'retweeted_status_user_id' column rsuid = twitter_df_clean['retweeted_status_user_id'][twitter_df_clean.retweeted_status_user_id.notnull()].index # drop the rows with those index values twitter_df_clean.drop(rsuid, inplace = True) # get the index of rows with values in the 'retweeted_status_timestamp' column rsts = twitter_df_clean['retweeted_status_timestamp'][twitter_df_clean.retweeted_status_timestamp.notnull()].index # drop the rows with those index values twitter_df_clean.drop(rsts, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code # check if any non null 
values exist in in_reply_to_status_id column twitter_df_clean['in_reply_to_status_id'][twitter_df_clean.in_reply_to_status_id.notnull()] # check if any non null values exist in in_reply_to_user_id column twitter_df_clean['in_reply_to_user_id'][twitter_df_clean.in_reply_to_user_id.notnull()] # check if any non null values exist in retweeted_status_id column twitter_df_clean['retweeted_status_id'][twitter_df_clean.retweeted_status_id.notnull()] # check if any non null values exist in retweeted_status_user_id column twitter_df_clean['retweeted_status_user_id'][twitter_df_clean.retweeted_status_user_id.notnull()] # check if any non null values exist in retweeted_status_timestamp column twitter_df_clean['retweeted_status_timestamp'][twitter_df_clean.retweeted_status_timestamp.notnull()] ###Output _____no_output_____ ###Markdown Define Quality `twitter_df_clean` table- drop `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id`, `retweeted_status_user_id` and `retweeted_status_timestamp` columns Code ###Code # drop 'in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', # 'retweeted_status_user_id' and 'retweeted_status_timestamp' columns from the `twitter_df_clean` table twitter_df_clean.drop(columns = ['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code # look at the data frame to confirm the columns are removed twitter_df_clean.tail() ###Output _____no_output_____ ###Markdown Define Quality `predict_dog_clean` table- convert `tweet_id` column type to string Code ###Code # convert `tweet_id` column type to string predict_dog_clean['tweet_id'] = predict_dog_clean['tweet_id'].astype(str) ###Output _____no_output_____ ###Markdown Test ###Code # check the column type is correct predict_dog_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 
columns): tweet_id 2075 non-null object jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(1), object(5) memory usage: 152.1+ KB ###Markdown Define Quality `predict_dog_clean` table- remove all records with `tweet_id` in `error_list` Code ###Code # remove all records with `tweet_id` in `error_list` # (filter on predict_dog_clean itself, not predict_dog, so the boolean index aligns with the frame being dropped from) for record in error_list: predict_dog_clean.drop(predict_dog_clean[predict_dog_clean.tweet_id == record].index, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code # check if any record with id from error_list still exists! predict_dog_clean['tweet_id'].isin(error_list).value_counts() ###Output _____no_output_____ ###Markdown Define Quality `predict_dog_clean` table- remove all non-dog records by keeping only the records where at least one of `p1_dog`, `p2_dog` or `p3_dog` is `True` Code ###Code # remove all non-dog records by keeping only the records where at least one of `p1_dog`, `p2_dog` or `p3_dog` is `True` predict_dog_clean.drop(predict_dog_clean[(predict_dog_clean.p1_dog == False) & (predict_dog_clean.p2_dog == False) & (predict_dog_clean.p3_dog == False)].index, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code # Check if any non-dog record exists! 
(~predict_dog_clean[['p1_dog', 'p2_dog', 'p3_dog']].any(axis=1)).sum() # reset all indices in all tables to keep everything in a consistent manner twitter_df_clean = twitter_df_clean.reset_index() predict_dog_clean = predict_dog_clean.reset_index() # drop all `index` columns in all tables for consistency twitter_df_clean.drop(columns = ['index'], inplace = True) predict_dog_clean.drop(columns = ['index'], inplace = True) ###Output _____no_output_____ ###Markdown Define Quality `tweet_json_clean` table- convert `id` column type to string Code ###Code # convert `id` column type to string tweet_json_clean['id'] = tweet_json_clean['id'].astype(str) ###Output _____no_output_____ ###Markdown Test ###Code # check the column type is converted correctly tweet_json_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2344 entries, 0 to 2343 Data columns (total 3 columns): id 2344 non-null object favorite_count 2344 non-null int64 retweet_count 2344 non-null int64 dtypes: int64(2), object(1) memory usage: 73.2+ KB ###Markdown Define Quality `tweet_json_clean` table- rename `id` column to `tweet_id` for consistency with other tables Code ###Code # rename `id` column to `tweet_id` for consistency with other tables tweet_json_clean = tweet_json_clean.rename(columns = {'id' : 'tweet_id'}) ###Output _____no_output_____ ###Markdown Test ###Code # check the column name is changed tweet_json_clean.head() ###Output _____no_output_____ ###Markdown Define Tidiness `twitter_df_clean` table- split the `timestamp` column into two separate columns, `date` and `time` Code ###Code # First, convert the `timestamp` column type to datetime twitter_df_clean['timestamp'] = pd.to_datetime(twitter_df_clean['timestamp']) # Second, split the `timestamp` column into two separate columns, `date` and `time` twitter_df_clean['date'] = [d.date() for d in twitter_df_clean['timestamp']] twitter_df_clean['time'] = [d.time() for d in twitter_df_clean['timestamp']] # Third, convert `date` to datetime type 
twitter_df_clean['date'] = pd.to_datetime(twitter_df_clean['date']) # finally, drop the `timestamp` column as it is no longer needed twitter_df_clean.drop(columns = ['timestamp'], inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code # check the column types after the split twitter_df_clean.info() # check the columns are created correctly twitter_df_clean.head() # check `timestamp` column is dropped twitter_df_clean.tail() ###Output _____no_output_____ ###Markdown Define Tidiness `tweet_json_clean` table- merge `tweet_json_clean` table with `twitter_df_clean` table Code ###Code # merge `tweet_json_clean` table with `twitter_df_clean` table twitter_df_clean = pd.merge(twitter_df_clean, tweet_json_clean, on = 'tweet_id') ###Output _____no_output_____ ###Markdown Test ###Code # check the merge is done correctly twitter_df_clean.head() ###Output _____no_output_____ ###Markdown Define Tidiness `predict_dog_clean` table- merge `predict_dog_clean` table with `twitter_df_clean` table Code ###Code # merge `predict_dog_clean` table with `twitter_df_clean` table twitter_df_clean = pd.merge(twitter_df_clean, predict_dog_clean, on = 'tweet_id') ###Output _____no_output_____ ###Markdown Test ###Code # check the merge is done correctly twitter_df_clean.head() # writing cleaned data to a master csv file twitter_df_clean.to_csv('twitter_archive_master.csv', index=False) ###Output _____no_output_____ ###Markdown Ask Questions 1- what dog breed is most present in the dataset? 2- what is the range of rating scores through the dataset (out of 10)? 3- which year has the most tweets for the hashtag? 4- is the rating score related to the number of tweet favorites? 5- which dog breed has the most tweet favorites and retweets? 
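The two merges above use `pd.merge` with its default inner join, so any `tweet_id` missing from either table is dropped; that is why the row count shrinks after merging. A toy illustration of that behavior:

```python
import pandas as pd

left = pd.DataFrame({'tweet_id': ['1', '2', '3'], 'text': ['a', 'b', 'c']})
right = pd.DataFrame({'tweet_id': ['2', '3', '4'], 'jpg_url': ['u2', 'u3', 'u4']})

# Default how='inner' keeps only keys present in BOTH frames
merged = pd.merge(left, right, on='tweet_id')
print(merged['tweet_id'].tolist())  # ['2', '3']
```

Passing `how='left'` instead would keep every archive row and fill missing prediction columns with NaN, which is the trade-off being made implicitly here.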
Exploratory Data Analysis ###Code # load data twitter_master = pd.read_csv('twitter_archive_master.csv') # explore the twitter_master dataset twitter_master.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1665 entries, 0 to 1664 Data columns (total 26 columns): tweet_id 1665 non-null int64 source 1665 non-null object text 1665 non-null object expanded_urls 1665 non-null object rating_numerator 1665 non-null int64 rating_denominator 1665 non-null int64 name 1665 non-null object doggo 1665 non-null object floofer 1665 non-null object pupper 1665 non-null object puppo 1665 non-null object date 1665 non-null object time 1665 non-null object favorite_count 1665 non-null int64 retweet_count 1665 non-null int64 jpg_url 1665 non-null object img_num 1665 non-null int64 p1 1665 non-null object p1_conf 1665 non-null float64 p1_dog 1665 non-null bool p2 1665 non-null object p2_conf 1665 non-null float64 p2_dog 1665 non-null bool p3 1665 non-null object p3_conf 1665 non-null float64 p3_dog 1665 non-null bool dtypes: bool(3), float64(3), int64(6), object(14) memory usage: 304.1+ KB ###Markdown Question 1: what dog breed is most present in the dataset ? ###Code # find the most present dog breed twitter_master['p1'].value_counts().head() ###Output _____no_output_____ ###Markdown So, we can see that `golden retriever` is the most common predicted breed in the dataset, appearing 137 times Question 2: what is the range of rating scores through the dataset (out of 10) ? ###Code # look at the dataset values for the `rating_numerator` column twitter_master.describe() # look at the value counts to get a picture of the data twitter_master['rating_numerator'].value_counts() # plot values Vs. 
counts to see the range of values %matplotlib inline # values and counts below are transcribed from the value_counts output above vals = [0,1,2,3,4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,24,26,27,44,45,50,60,75,80,84,88,99,121,144,165] counts = [1,1,2,5,7,14,16,32,68,132,358,352,420,221,21, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] plt.bar(vals, counts, align = 'center') plt.xlabel('Rating Score Values') plt.xlim(xmin = 0, xmax = 20) plt.ylabel('Rating Counts') plt.title('Rating Score Values Counts') plt.show() ###Output _____no_output_____ ###Markdown We can see that the rating score ranges from 0 to 165, with the highest peak at 12. Of course there are some funny numbers, but the center of the plot tells the truth: most people rated between 10 and 12 out of 10 ###Code twitter_master.info() # convert `date` column to datetime to work on it twitter_master['date'] = pd.to_datetime(twitter_master['date']) ###Output _____no_output_____ ###Markdown Question 3: which year has the most tweets for the hashtag ? ###Code from collections import Counter # needed for the counting below, in case it was not imported earlier # pick only the year of each date to work on later year = [] for r in twitter_master.index: year.append(twitter_master['date'][r].date().year) # plot years Vs. count of tweets per year Cont = Counter(year) occur = Cont[2015], Cont[2016], Cont[2017] occur years = ["2015", "2016", "2017"] plt.bar(years, occur) plt.xlabel('Years Active') plt.ylabel('Tweets count per year') plt.title('Tweets about hashtag across years') ###Output _____no_output_____ ###Markdown We can see that most tweets are from 2016 with nearly 800 tweets, while 2015 has nearly 500 and 2017 a bit above 300 tweets. So we can say that 2016 was the hottest year for that trend! Question 4: is the rating score related to the number of tweet favorites ? ###Code # let's plot a scatter plot and find out! plt.scatter(twitter_master['rating_numerator'], twitter_master['favorite_count'], alpha = 0.4) plt.xlim(xmin = 1, xmax = 20) plt.ylim(ymin = 1000, ymax = 150000) plt.xlabel('Ratings') plt.ylabel('Favorite Count') plt.title('Rating Score Vs. 
Favorite Count') plt.style.use('dark_background') plt.show() # find the correlation coefficient between favorite count and rating scores twitter_master['rating_numerator'].corr(twitter_master['favorite_count']) ###Output _____no_output_____ ###Markdown As we saw in the plot, there is a weak positive correlation here, and the programmatic calculation confirms that relationship Question 5: which dog breed has the most tweet favorites and retweets ? ###Code # pick only the needed columns (copy to avoid SettingWithCopyWarning when modifying below) breed_merged = twitter_master[['p1', 'favorite_count', 'retweet_count']].copy() # rename p1 column to dog_breed breed_merged.rename(columns = {"p1": "dog_breed"}, inplace = True) # look at our new dataframe breed_merged.head() # group unique dog_breed and the sum of favorites and retweets for each breed by_dog_breed = breed_merged.groupby('dog_breed')['favorite_count', 'retweet_count'].sum() # reshape the dataframe to work on it by_dog_breed = by_dog_breed.reset_index() # look at what we have done! by_dog_breed.head() # look at how many unique breeds exist len(by_dog_breed['dog_breed'].unique()) by_dog_breed['dog_breed'].values ###Output _____no_output_____ ###Markdown There are 214 unique dog breeds! That's a lot of points to plot! 
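One compact way to collapse that many breed labels into a handful of groups is a dict lookup with `Series.map`; a sketch with a tiny hypothetical mapping (a real one would list every breed):

```python
import pandas as pd

# Hypothetical breed-to-group mapping; the full dict would cover all 214 breeds
group_of = {'golden_retriever': 'Sporting', 'beagle': 'Hound', 'pug': 'Toy'}

breeds = pd.Series(['golden_retriever', 'pug', 'borzoi'])

# map() looks each value up in the dict; fillna labels unmapped breeds
groups = breeds.map(group_of).fillna('None')
print(groups.tolist())  # ['Sporting', 'Toy', 'None']
```

This replaces a long if/elif chain with a single vectorized lookup and keeps the grouping data in one place.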
###Code # I will aggregate the breeds to 8 groups only according to AKC Sporting = ['Brittany_spaniel', 'Chesapeake_Bay_retriever', 'English_setter', 'English_springer', 'German_short-haired_pointer', 'Gordon_setter', 'Irish_setter', 'Irish_water_spaniel', 'Labrador_retriever', 'flat-coated_retriever', 'curly-coated_retriever', 'golden_retriever', 'Sussex_spaniel', 'Weimaraner', 'Welsh_springer_spaniel', 'clumber', 'cocker_spaniel', 'vizsla'] Hound = ['Afghan_hound', 'Ibizan_hound', 'Norwegian_elkhound', 'Rhodesian_ridgeback', 'Saluki', 'Scottish_deerhound', 'basenji', 'basset', 'beagle', 'black-and-tan_coonhound', 'bloodhound', 'bluetick', 'borzoi', 'redbone', 'whippet'] Working = ['Bernese_mountain_dog', 'Doberman', 'Great_Dane', 'Great_Pyrenees', 'Greater_Swiss_Mountain_dog', 'Leonberg', 'Newfoundland', 'Rottweiler', 'Saint_Bernard', 'Samoyed', 'Siberian_husky', 'Tibetan_mastiff', 'boxer', 'bull_mastiff', 'giant_schnauzer', 'kuvasz', 'kelpie', 'malamute', 'standard_schnauzer'] Terrier = ['Airedale', 'American_Staffordshire_terrier', 'Australian_terrier', 'Bedlington_terrier', 'Border_terrier', 'Dandie_Dinmont', 'Irish_terrier', 'Lakeland_terrier', 'Norfolk_terrier', 'Norwich_terrier', 'Scotch_terrier', 'Yorkshire_terrier', 'West_Highland_white_terrier', 'wire-haired_fox_terrier', 'soft-coated_wheaten_terrier', 'Staffordshire_bullterrier', 'cairn', 'miniature_schnauzer'] Toy = ['Blenheim_spaniel', 'Brabancon_griffon', 'Chihuahua', 'Italian_greyhound', 'Japanese_spaniel', 'Maltese_dog', 'Pekinese', 'Pomeranian', 'Shih-Tzu', 'silky_terrier', 'miniature_pinscher', 'papillon', 'pug', 'toy_terrier', 'toy_poodle'] Non_Sporting = ['French_bulldog', 'Eskimo_dog', 'Lhasa', 'Tibetan_terrier', 'chow','dalmatian', 'keeshond', 'schipperke', 'miniature_poodle', 'standard_poodle'] Herding = ['Border_collie', 'Cardigan', 'EntleBucher', 'German_shepherd', 'Old_English_sheepdog', 'Pembroke', 'Shetland_sheepdog', 'briard', 'collie', 'groenendael', 'malinois'] Companion = 
['Boston_bull'] Misc = ['Appenzeller', 'Mexican_hairless', 'Walker_hound'] breed_merged['dog_group'] = 'None' for r in range(len(breed_merged)-1): if breed_merged['dog_breed'][r] in Sporting: breed_merged['dog_group'][r] = 'Sporting' elif breed_merged['dog_breed'][r] in Hound: breed_merged['dog_group'][r] = 'Hound' elif breed_merged['dog_breed'][r] in Working: breed_merged['dog_group'][r] = 'Working' elif breed_merged['dog_breed'][r] in Terrier: breed_merged['dog_group'][r] = 'Terrier' elif breed_merged['dog_breed'][r] in Toy: breed_merged['dog_group'][r] = 'Toy' elif breed_merged['dog_breed'][r] in Non_Sporting: breed_merged['dog_group'][r] = 'Non_Sporting' elif breed_merged['dog_breed'][r] in Herding: breed_merged['dog_group'][r] = 'Herding' elif breed_merged['dog_breed'][r] in Companion: breed_merged['dog_group'][r] = 'Companion' elif breed_merged['dog_breed'][r] in Misc: breed_merged['dog_group'][r] = 'Misc' else: breed_merged['dog_group'][r] = 'None' # let's group the data again but now by group of breeds! by_dog_group = breed_merged.groupby('dog_group')['favorite_count', 'retweet_count'].sum() # check the aggregation is done right! by_dog_group = by_dog_group.reset_index() # now look at the unique labels for dog groups len(by_dog_group['dog_group'].unique()) by_dog_group.describe() ###Output _____no_output_____ ###Markdown Whooa! 10 labels instead of 110! that's a whole of work ###Code # let's plot those to have a look which dog group is topping the hashtag! sns.lmplot(x="favorite_count", y="retweet_count", hue="dog_group", data=by_dog_group) plt.xlim(20, 3000) plt.ylim(6, 1000) plt.xlabel('Favorite Count (Thousands)') plt.ylabel('Retweet Count (Thousands) ') plt.title('Dog Group with Favorites and retweets count') plt.show() ###Output _____no_output_____ ###Markdown we can see obviously that `Sporting` dog group is catching the eyes of fans! 
as its trend line extends farthest to the upper right, meaning it has the most retweets and favorites.
###Code
max(by_dog_group.favorite_count), max(by_dog_group.retweet_count)
by_dog_group
###Output
_____no_output_____
###Markdown
Udacity Nanodegree: Data Science Foundations II
Project II: Wrangling Data of the WeRateDogs Twitter Account
Student: Nelson Antonio Fernandes de Matos

Table of Contents
Introduction
Gather
Assess
Clean
Store
Visualize and Analyze
Conclusion

Introduction
This project is a step toward fulfilling the requirements of Udacity's Data Science Foundations II Nanodegree. Data from the WeRateDogs Twitter account will be gathered in three distinct ways:
* By reading the file `twitter-archive-enhanced.csv` provided by Udacity. This file contains basic tweet data.
* By downloading the file `image_predictions.tsv` programmatically. This file contains predictions of dogs' breeds based on their photos.
* By using the tweepy library to gather data from the WeRateDogs Twitter account. The JSON data of each tweet will be stored in a flat file, in which each line corresponds to one tweet.

All gathered data will be read into Pandas data frames. The data frames will be assessed, cleaned and stored in *.csv and *.db files. Finally, some exploratory data analyses will be performed and some visualizations will be provided using dog ratings, number of favorites, number of retweets and breeds.
###Code
# Import modules:
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import requests, tweepy, json, time, os
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Gather
Read the file `twitter-archive-enhanced.csv`, which was downloaded manually from the Udacity page, into a Pandas dataframe.
###Code
twt_arch = pd.read_csv("twitter-archive-enhanced.csv")
###Output
_____no_output_____
###Markdown
Download programmatically, using the *requests* library, the image prediction file (`image_predictions.tsv`) hosted on Udacity's servers at the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv
To avoid requesting the file every time the notebook is run, the download is performed only if `image_predictions.tsv` does not exist yet (i.e., the first time this cell is run).
###Code
if not os.path.isfile("image_predictions.tsv"):
    url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv'
    response = requests.get(url)
    with open('image_predictions.tsv', mode='wb') as file:
        file.write(response.content)
###Output
_____no_output_____
###Markdown
Read the file `image_predictions.tsv` into a Pandas dataframe:
###Code
pred = pd.read_csv('image_predictions.tsv', sep='\t')
###Output
_____no_output_____
###Markdown
Use the tweepy library to download the tweets' data. API reference page: http://docs.tweepy.org/en/v3.5.0/api.html
###Code
# Keys, secret keys and access tokens management.
# Consumer API keys:
consumer_key = 'API_KEY'                # (API key)
consumer_secret = 'API_SECRET_KEY'      # (API secret key)
# Access token & access token secret:
access_token = 'ACCESS_TOKEN'           # (Access token)
access_secret = 'ACCESS_TOKEN_SECRET'   # (Access token secret)
###Output
_____no_output_____
###Markdown
To avoid hitting Twitter's servers every time the notebook is run - which would imply a great loss of time - the gathering is performed only if the file `tweet_json.txt` does not exist (the first time this cell is run). The tweet ids which return errors are stored in a list, which is then saved to a file, so the data can be accessed without running the whole gathering process again.
###Code tweets_fails = [] # List to store ids of tweets in which tweepy returned erros. if not (os.path.isfile("tweet_json.txt")): # Initialize API and some parameters: auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) # Get the data print(">>> Gathering Twitter Data STARTED at: {}\n".format(time.ctime())) with open("tweet_json.txt", 'w', encoding="utf8") as tf: # open the file tweet_json.txt for i, tweet_id in enumerate(twt_arch['tweet_id']): start = time.time() # save start time # try: tweet = api.get_status(tweet_id, tweet_mode='extended') # get the tweet data json.dump(tweet._json, tf) # write tweet data to the file tf.write("\n") # jumps to the next line of the file status = 'SUCCESS' except tweepy.TweepError as err: status = 'FAIL' tweets_fails.append(tweet_id) # end = time.time() # save end time elapsed = end-start # Calculate elapsed time print("> {:5} - tweet id: {} - elapsed time: {:2.4}s - status: {:7}" .format(i, tweet_id, elapsed, status)) # Print status # print("\n>>> Gathering Twitter Data FINISHED at: {}\n".format(time.ctime())) # Write tweet ids in which a error was returned by tweepy to a file: if (len(tweets_fails) !=0): # Check if is empty with open('tweets_fails.txt', 'w') as f: f.write('\n'.join(str(tweet_fail) for tweet_fail in tweets_fails)) else: # If is empty load from file with open('tweets_fails.txt', 'r') as f: tweets_fails = f.read().splitlines() tweets_fails tweets_list = [] tf = open('tweet_json.txt', 'r') # Open the file for line in tf: try: tweet = json.loads(line) tweets_list.append(tweet) except: continue tf.close() twt_info = pd.DataFrame() twt_info['tweet_id'] = [line['id'] for line in tweets_list] twt_info['retweet_count'] = [line['retweet_count'] for line in tweets_list] twt_info['favorite_count'] = [line['favorite_count'] for line in tweets_list] ###Output _____no_output_____ 
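As an aside, the three list comprehensions above could also be expressed with `pandas.json_normalize`. This is only a hedged sketch on toy stand-in records (the real statuses carry many more fields), not the code used for the actual gathering:

```python
import pandas as pd

# Toy stand-ins for the tweet JSON objects collected above (hypothetical values).
tweets_list = [
    {"id": 1, "retweet_count": 5, "favorite_count": 10},
    {"id": 2, "retweet_count": 3, "favorite_count": 7},
]

# json_normalize flattens each dict into one row; then keep and rename the columns of interest.
twt_info = pd.json_normalize(tweets_list)[["id", "retweet_count", "favorite_count"]]
twt_info = twt_info.rename(columns={"id": "tweet_id"})
```

The same call would also flatten nested fields (e.g. `user.followers_count`) into dotted column names, which the manual comprehensions cannot do without extra indexing.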
###Markdown
Assess
###Code
# View twt_arch data frame info:
twt_arch.info()

# View the twt_arch data frame:
twt_arch

# Check the validity of rating_numerator:
twt_arch[(twt_arch.rating_numerator < 0)].rating_numerator.any()

# Check the validity of rating_denominator:
twt_arch[(twt_arch.rating_denominator < 0)].rating_denominator.any()

# View ordered dogs' names:
twt_arch.name.sort_values()

# View rating_numerators:
twt_arch.rating_numerator.value_counts()

# View rating_denominators:
twt_arch.rating_denominator.value_counts()

# Check if tweet_id values are unique:
twt_arch.tweet_id.nunique()

# Check if there are missing images, which may be an issue for predictions:
twt_arch.expanded_urls.isnull().any()

# Check if there are missing numerators:
twt_arch.rating_numerator.isnull().any()

# Check if there are missing denominators:
twt_arch.rating_denominator.isnull().any()

# Check how many sources exist:
twt_arch.source.nunique()

# Show available sources:
set(twt_arch.source)

# Check if there are reply tweets:
twt_arch[twt_arch.in_reply_to_status_id.notnull()].count()

# Check missing expanded_urls:
twt_arch[twt_arch.expanded_urls.isnull()]
###Output
_____no_output_____
###Markdown
____
###Code
# Show pred data frame info:
pred.info()

# Show the pred data frame:
pred

# Show unique entries of the pred data frame:
pred.nunique()

# Check if there are predictions which are not related to a breed of dog:
pred[((pred.img_num == 1) & (pred.p1_dog == False)) |
     ((pred.img_num == 2) & (pred.p2_dog == False)) |
     ((pred.img_num == 3) & (pred.p3_dog == False))].count()
###Output
_____no_output_____
###Markdown
____
###Code
twt_info
twt_info.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2330 entries, 0 to 2329
Data columns (total 3 columns):
 #   Column          Non-Null Count  Dtype
---  ------          --------------  -----
 0   tweet_id        2330 non-null   int64
 1   retweet_count   2330 non-null   int64
 2   favorite_count  2330 non-null   int64
dtypes: int64(3)
memory usage: 54.7 KB
###Markdown
______
Quality:
* 
**`twt_arch`**:
1. `timestamp` stored as strings.
2. Data frame contains retweet and reply information.
3. Lower case and missing names as 'None' (column: `name`).
4. Tweets with no images (column: `expanded_urls`).
5. Missing values of dogtionary showing as 'None'.
6. Text difficult to read (columns `text` and `source`).
7. Erroneous source datatype (column `source`).
8. Tweet '785515384317313025': 10/10 is a date, not a rating.
* **`pred`**:
9. Breeds with '-' and '_' as separators instead of ' ', and in lower case (columns: `p1`, `p2` and `p3`).
10. Some predictions, with the highest confidence, are not a dog breed.

Tidiness:
1. Dogtionary variable stored in four columns: `doggo`, `floofer`, `pupper` and `puppo`.
2. Not all `tweet_id` values have a corresponding row in the other dataframes.
3. Three different data frames, when only one is needed.
4. The predicted dog breed, with the highest confidence, is spread across various columns.

Clean
The requirements of this project are only to assess and clean at least **8 quality issues** and at least **2 tidiness issues** in this dataset.
###Code
# Make a copy of the data frames to keep the original ones.
twt_arch_clean = twt_arch.copy()
pred_clean = pred.copy()
twt_info_clean = twt_info.copy()
###Output
_____no_output_____
###Markdown
___
Quality issue 1: `timestamp` stored as strings.
Define
Convert `timestamp` of the twt_arch data frame to pandas datetime.
Code
###Code
twt_arch_clean['timestamp'] = pd.to_datetime(twt_arch_clean.timestamp)
###Output
_____no_output_____
###Markdown
Test
###Code
twt_arch_clean.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2356 entries, 0 to 2355
Data columns (total 17 columns):
 #   Column                      Non-Null Count  Dtype
---  ------                      --------------  -----
 0   tweet_id                    2356 non-null   int64
 1   in_reply_to_status_id       78 non-null     float64
 2   in_reply_to_user_id         78 non-null     float64
 3   timestamp                   2356 non-null   datetime64[ns, UTC]
 4   source                      2356 non-null   object
 5   text                        2356 non-null   object
 6   retweeted_status_id         181 non-null    float64
 7   retweeted_status_user_id    181 non-null    float64
 8   retweeted_status_timestamp  181 non-null    object
 9   expanded_urls               2297 non-null   object
 10  rating_numerator            2356 non-null   int64
 11  rating_denominator          2356 non-null   int64
 12  name                        2356 non-null   object
 13  doggo                       2356 non-null   object
 14  floofer                     2356 non-null   object
 15  pupper                      2356 non-null   object
 16  puppo                       2356 non-null   object
dtypes: datetime64[ns, UTC](1), float64(4), int64(3), object(9)
memory usage: 313.0+ KB
###Markdown
____
Quality issue 2: Data frame contains retweet and reply information
Define
1. Remove replies (rows where `in_reply_to_status_id` is not null) with boolean filtering.
2. Remove all reply- and retweet-related columns using the drop function.
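One detail worth keeping in mind: `in_reply_to_status_id` flags replies, while retweets carry a non-null `retweeted_status_id`. A hedged sketch on a toy frame (hypothetical ids) that filters out both kinds of non-original rows:

```python
import numpy as np
import pandas as pd

# Toy frame: tweet 2 is a reply, tweet 3 is a retweet, 1 and 4 are originals.
toy = pd.DataFrame({
    "tweet_id": [1, 2, 3, 4],
    "in_reply_to_status_id": [np.nan, 99.0, np.nan, np.nan],
    "retweeted_status_id": [np.nan, np.nan, 88.0, np.nan],
})
# Keep only rows where both flag columns are null, i.e. the original tweets.
originals = toy[toy.in_reply_to_status_id.isnull() & toy.retweeted_status_id.isnull()]
```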
Code
###Code
# 1:
twt_arch_clean = twt_arch_clean[twt_arch_clean.in_reply_to_status_id.isnull()]
# 2:
columns_to_remove = ['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id',
                     'retweeted_status_user_id', 'retweeted_status_timestamp']
twt_arch_clean.drop(columns=columns_to_remove, inplace=True)
twt_arch_clean.columns
###Output
_____no_output_____
###Markdown
Test
###Code
twt_arch_clean.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2278 entries, 0 to 2355
Data columns (total 12 columns):
 #   Column              Non-Null Count  Dtype
---  ------              --------------  -----
 0   tweet_id            2278 non-null   int64
 1   timestamp           2278 non-null   datetime64[ns, UTC]
 2   source              2278 non-null   object
 3   text                2278 non-null   object
 4   expanded_urls       2274 non-null   object
 5   rating_numerator    2278 non-null   int64
 6   rating_denominator  2278 non-null   int64
 7   name                2278 non-null   object
 8   doggo               2278 non-null   object
 9   floofer             2278 non-null   object
 10  pupper              2278 non-null   object
 11  puppo               2278 non-null   object
dtypes: datetime64[ns, UTC](1), int64(3), object(8)
memory usage: 231.4+ KB
###Markdown
____
Quality issue 3: Lower case and missing names as 'None' (column: `name`).
Define
1. Use the title function to convert the first letter to upper case.
2. Replace 'None' with NaN.
Code
###Code
# 1:
twt_arch_clean.name = twt_arch_clean.name.str.title()
# 2:
twt_arch_clean.name.replace('None', np.nan, inplace=True)
###Output
_____no_output_____
###Markdown
Test
###Code
twt_arch_clean
###Output
_____no_output_____
###Markdown
____
Quality issue 4: Tweets with no images (column: `expanded_urls`).
Define
Remove all rows without an `expanded_urls` value. If there is no image, the breed prediction would be erroneous.
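An equivalent one-step sketch uses `dropna(subset=...)`; the toy frame below is hypothetical and only illustrates the pattern:

```python
import numpy as np
import pandas as pd

# Toy frame: row 2 has no image URL and should be dropped.
toy = pd.DataFrame({"tweet_id": [1, 2, 3],
                    "expanded_urls": ["http://a", np.nan, "http://c"]})
with_images = toy.dropna(subset=["expanded_urls"])
```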
Code
###Code
to_drop_list = list(twt_arch_clean[twt_arch_clean.expanded_urls.isnull()].index)
twt_arch_clean.drop(to_drop_list, axis=0, inplace=True)
###Output
_____no_output_____
###Markdown
Test
###Code
twt_arch_clean[twt_arch_clean.expanded_urls.isnull()].any()
###Output
_____no_output_____
###Markdown
____
Quality issue 5: Missing values of dogtionary showing as 'None'
Define
Replace 'None' with NaN.
Code
###Code
twt_arch_clean.doggo.replace('None', np.nan, inplace=True)
twt_arch_clean.floofer.replace('None', np.nan, inplace=True)
twt_arch_clean.pupper.replace('None', np.nan, inplace=True)
twt_arch_clean.puppo.replace('None', np.nan, inplace=True)
###Output
_____no_output_____
###Markdown
Test
###Code
twt_arch_clean
###Output
_____no_output_____
###Markdown
____
Quality issue 6: Text difficult to read
Define
Set max_colwidth so the entire text will be displayed.
Code
###Code
pd.set_option('display.max_colwidth', None)  # None removes the width limit; a negative value is deprecated
###Output
_____no_output_____
###Markdown
Test
###Code
twt_arch_clean
###Output
_____no_output_____
###Markdown
____
Quality issue 7: Erroneous source datatype (column `source`)
Define
There are only 4 different sources, all stored as strings.
1. Abbreviate and prettify the text.
2. Convert to categorical datatype.
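Instead of hard-coding one `replace` per source, the anchor text could be parsed with a regular expression via `Series.str.extract`. A hedged sketch on a toy column (only two of the four source strings shown):

```python
import pandas as pd

# Toy copies of two of the source strings; the same pattern covers all four.
src = pd.Series([
    '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>',
    '<a href="http://vine.co" rel="nofollow">Vine - Make a Scene</a>',
])
# Capture the text between '>' and '</a>' and store it as a categorical column.
labels = src.str.extract(r'>([^<]+)</a>', expand=False).astype("category")
```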
Code ###Code # 1: # web twt_arch_clean.source.replace('<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>', 'web', inplace=True) # iphone twt_arch_clean.source.replace('<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'iphone', inplace=True) # vine twt_arch_clean.source.replace('<a href="http://vine.co" rel="nofollow">Vine - Make a Scene</a>', 'vine', inplace=True) # tweetdeck twt_arch_clean.source.replace('<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>', 'tweetdeck' , inplace=True) # 2: twt_arch_clean.source = twt_arch_clean.source.astype('category') ###Output _____no_output_____ ###Markdown Test ###Code set(twt_arch_clean.source) twt_arch_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2274 entries, 0 to 2355 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2274 non-null int64 1 timestamp 2274 non-null datetime64[ns, UTC] 2 source 2274 non-null category 3 text 2274 non-null object 4 expanded_urls 2274 non-null object 5 rating_numerator 2274 non-null int64 6 rating_denominator 2274 non-null int64 7 name 1610 non-null object 8 doggo 93 non-null object 9 floofer 10 non-null object 10 pupper 252 non-null object 11 puppo 29 non-null object dtypes: category(1), datetime64[ns, UTC](1), int64(3), object(7) memory usage: 215.6+ KB ###Markdown ____ Quality issue 8: Tweet 785515384317313025: 10/10 is a date, not a rating DefineDelete the row corresponding to tweet 785515384317313025. Code Not necessary, already fixed in quality issue 4. Test ###Code twt_arch_clean[twt_arch_clean.tweet_id == 785515384317313025] ###Output _____no_output_____ ###Markdown ____ Quality issue 9: Breeds with '-' and '_' as separators, instead of ' ' and in lower case (columns: `p1`, `p2` and `p3`). DefineIn columns `p1`, `p2` and `p3`: 1-Replace '-' with ' '. 2-Replace '_' with ' '. 
3. Use the title function to convert the first letter to upper case.
Code
###Code
# 1:
pred_clean.p1 = pred_clean.p1.str.replace('_', ' ')
pred_clean.p2 = pred_clean.p2.str.replace('_', ' ')
pred_clean.p3 = pred_clean.p3.str.replace('_', ' ')
# 2:
pred_clean.p1 = pred_clean.p1.str.replace('-', ' ')
pred_clean.p2 = pred_clean.p2.str.replace('-', ' ')
pred_clean.p3 = pred_clean.p3.str.replace('-', ' ')
# 3:
pred_clean.p1 = pred_clean.p1.str.title()
pred_clean.p2 = pred_clean.p2.str.title()
pred_clean.p3 = pred_clean.p3.str.title()
###Output
_____no_output_____
###Markdown
Test
###Code
pred_clean
###Output
_____no_output_____
###Markdown
____
Quality issue 10: Some predictions, with the highest confidence, are not a dog breed.
Define
1. Drop all rows in which the highest confidence prediction is not a dog breed.
Code
###Code
pred_clean = pred_clean[((pred_clean.img_num == 1) & (pred_clean.p1_dog == True)) |
                        ((pred_clean.img_num == 2) & (pred_clean.p2_dog == True)) |
                        ((pred_clean.img_num == 3) & (pred_clean.p3_dog == True))]
###Output
_____no_output_____
###Markdown
Test
###Code
pred_clean[((pred_clean.img_num == 1) & (pred_clean.p1_dog == False)) |
           ((pred_clean.img_num == 2) & (pred_clean.p2_dog == False)) |
           ((pred_clean.img_num == 3) & (pred_clean.p3_dog == False))]
###Output
_____no_output_____
###Markdown
____
Tidiness issue 1: Dogtionary variable stored in four columns: `doggo`, `floofer`, `pupper` and `puppo`.
Define
1. Create a column (`dogtionary`) containing 4 categories corresponding to the columns: `doggo`, `floofer`, `pupper` and `puppo`.
2. Drop the columns: `doggo`, `floofer`, `pupper` and `puppo`.
3. Change the `dogtionary` datatype to category.
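The four indicator columns can also be collapsed in one pass by back-filling across columns and keeping the leftmost value. A hedged sketch on a toy frame with hypothetical rows, shown as an alternative to explicit `.loc` assignments:

```python
import numpy as np
import pandas as pd

# Toy frame: row 0 is a doggo, row 1 a floofer, row 2 has no stage at all.
toy = pd.DataFrame({"doggo":   ["doggo", np.nan, np.nan],
                    "floofer": [np.nan, "floofer", np.nan],
                    "pupper":  [np.nan, np.nan, np.nan]})
# bfill(axis=1) pulls each row's first non-null stage into the leftmost column.
toy["dogtionary"] = toy[["doggo", "floofer", "pupper"]].bfill(axis=1).iloc[:, 0].astype("category")
```

A caveat worth noting: this keeps only one stage per row, so tweets carrying two stages at once (e.g. both doggo and pupper) would silently lose the later one.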
Code
###Code
# 1:
twt_arch_clean.loc[twt_arch_clean.doggo.notnull(), 'dogtionary'] = 'doggo'
twt_arch_clean.loc[twt_arch_clean.floofer.notnull(), 'dogtionary'] = 'floofer'
twt_arch_clean.loc[twt_arch_clean.pupper.notnull(), 'dogtionary'] = 'pupper'
twt_arch_clean.loc[twt_arch_clean.puppo.notnull(), 'dogtionary'] = 'puppo'
# 2:
twt_arch_clean.drop(columns=['doggo', 'floofer', 'pupper', 'puppo'], inplace=True)
# 3:
twt_arch_clean.dogtionary = twt_arch_clean.dogtionary.astype('category')
###Output
_____no_output_____
###Markdown
Test
###Code
twt_arch_clean.info()
twt_arch_clean.sample(5)
###Output
_____no_output_____
###Markdown
____
Tidiness issue 2: Not all `tweet_id` values have a corresponding row in the other data frames.
Define
1. Drop from `twt_arch_clean` all tweet ids which are not present in `pred_clean`.
2. Drop from `twt_arch_clean` all tweet ids which are not present in `twt_info_clean`.
Code
###Code
# 1:
twt_arch_clean = twt_arch_clean[twt_arch_clean.tweet_id.isin(list(pred_clean.tweet_id))]
# 2:
twt_arch_clean = twt_arch_clean[twt_arch_clean.tweet_id.isin(list(twt_info_clean.tweet_id))]
###Output
_____no_output_____
###Markdown
Test
###Code
twt_arch_clean.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1493 entries, 1 to 2355
Data columns (total 9 columns):
 #   Column              Non-Null Count  Dtype
---  ------              --------------  -----
 0   tweet_id            1493 non-null   int64
 1   timestamp           1493 non-null   datetime64[ns, UTC]
 2   source              1493 non-null   category
 3   text                1493 non-null   object
 4   expanded_urls       1493 non-null   object
 5   rating_numerator    1493 non-null   int64
 6   rating_denominator  1493 non-null   int64
 7   name                1131 non-null   object
 8   dogtionary          230 non-null    category
dtypes: category(2), datetime64[ns, UTC](1), int64(3), object(3)
memory usage: 96.6+ KB
###Markdown
____
Tidiness issue 3: Three different data frames, when only one is needed.
Define
1. Merge `twt_arch_clean` and `pred_clean`.
2. Merge the resulting data frame with `twt_info_clean`.
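The two-step left merge can be pictured on toy frames (hypothetical rows); `validate="one_to_one"` is an optional safety check that `tweet_id` is unique on both sides:

```python
import pandas as pd

left = pd.DataFrame({"tweet_id": [1, 2], "text": ["good dog", "also good"]})
right = pd.DataFrame({"tweet_id": [1, 2], "breed": ["Pug", "Beagle"]})
# A left merge keeps every row of `left` and attaches the matching columns from `right`.
merged = pd.merge(left, right, how="left", on="tweet_id", validate="one_to_one")
```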
Code
###Code
# 1:
temp = pd.merge(twt_arch_clean, pred_clean, how='left', on=['tweet_id'])
# 2:
tweets = pd.merge(temp, twt_info_clean, how='left', on=['tweet_id'])
del temp  # Free up memory
tweets
tweets.info()
tweets.columns
###Output
_____no_output_____
###Markdown
____
Tidiness issue 4: The predicted dog breed, with the highest confidence, is spread across various columns.
Define
1. Create a column (`breed`) with the highest confidence prediction.
2. Drop: `p1`, `p2`, `p3`, `p1_dog`, `p2_dog`, `p3_dog`, `p1_conf`, `p2_conf`, `p3_conf` and `img_num`.
Code
###Code
# 1:
tweets.loc[(tweets.img_num == 1) & (tweets.p1_conf > tweets.p2_conf) & (tweets.p1_conf > tweets.p3_conf), 'breed'] = tweets.p1
tweets.loc[(tweets.img_num == 2) & (tweets.p2_conf > tweets.p1_conf) & (tweets.p2_conf > tweets.p3_conf), 'breed'] = tweets.p2
tweets.loc[(tweets.img_num == 3) & (tweets.p3_conf > tweets.p1_conf) & (tweets.p3_conf > tweets.p2_conf), 'breed'] = tweets.p3
# 2:
tweets.drop(columns=['p1', 'p2', 'p3', 'p1_dog', 'p2_dog', 'p3_dog', 'p1_conf', 'p2_conf', 'p3_conf', 'img_num'], inplace=True)
tweets.info()
tweets.columns
tweets
###Output
_____no_output_____
###Markdown
Store
Since all gathering, assessing and cleaning is finished, we may store the data frame in a .csv or a .db file for use in future analysis.
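The .db round trip can be sketched with the stdlib `sqlite3` driver instead of SQLAlchemy (toy frame and in-memory database — an illustration of the pattern, not the project's actual storage code):

```python
import sqlite3

import pandas as pd

df = pd.DataFrame({"tweet_id": [1, 2], "breed": ["Pug", "Beagle"]})
con = sqlite3.connect(":memory:")                # throwaway in-memory database
df.to_sql("tweets", con, index=False, if_exists="replace")
back = pd.read_sql("SELECT * FROM tweets", con)  # read it back to verify the round trip
con.close()
```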
Define:
1. Store as *.db
2. Store as *.csv
Code
###Code
filename = 'twitter_archive_master.csv'
# 1:
engine = create_engine('sqlite:///twitter_archive_master.db')
tweets.to_sql('tweets', engine, index=False, if_exists='replace')
# 2:
tweets.to_csv(filename, index=False)
###Output
_____no_output_____
###Markdown
Test
###Code
os.path.isfile('twitter_archive_master.csv') and os.path.isfile('twitter_archive_master.db')
###Output
_____no_output_____
###Markdown
Visualize and Analyze
Since all gathering, assessing and cleaning is finished, we have a data frame suitable for analysis and visualizations.
It is worth making clear that any analysis or visualization performed without the preceding cleaning process would lead to erroneous conclusions.
Number of Tweets by month over time.
Analyze audience behavior (retweets and favorites) over time. The analysis will be performed in chunks of months.
###Code
# Separate year and month from timestamp:
tweets['year'] = tweets['timestamp'].dt.year
tweets['month'] = tweets['timestamp'].dt.month
# Create a single column with a 'year_month' string.
tweets['year_month'] = tweets.year.astype('str') + '-' + tweets.month.astype('str')
# Create a new grouped data frame
monthly = pd.DataFrame()
monthly['tweets_count'] = tweets.groupby(['year','month'])['tweet_id'].count()
monthly['retweet_count'] = tweets.groupby(['year','month'])['retweet_count'].sum()
monthly['favorite_count'] = tweets.groupby(['year','month'])['favorite_count'].sum()
# Calculate per tweet:
monthly['retweet_by_tweet'] = monthly['retweet_count']/monthly['tweets_count']
monthly['favorite_by_tweet'] = round(monthly['favorite_count']/monthly['tweets_count'])
monthly
plt.rcdefaults() # Restore the rc params from Matplotlib's internal default style.
ax1 = monthly.tweets_count.plot(color='green', marker='v', grid=True, label='number of tweets', figsize=(10,5)) # Create the plot of the first series
# Create the legend:
h1, l1 = ax1.get_legend_handles_labels()
plt.legend(h1, l1, loc=1)
# Create xticks and their labels:
ax1.set_xticks(range(len(monthly)));
ax1.set_xticklabels(["%s-%02d" % item for item in monthly.index.tolist()], rotation=45);
# Set axis labels:
ax1.set_xlabel('(year, month)', fontsize=16)
ax1.set_ylabel('number of tweets', fontsize=12, color='green')
# Set the title:
ax1.set_title('Number of Tweets by Month', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Public reaction (retweets and favorites) by Tweet over time.
###Code
plt.rcdefaults() # Restore the rc params from Matplotlib's internal default style.
ax1 = monthly.favorite_by_tweet.plot(color='blue', marker='v', grid=True, label='favorites', figsize=(10,5)) # Create the plot of the first series
ax2 = monthly.retweet_by_tweet.plot(color='red', marker='^', grid=True, secondary_y=True, label='retweets') # Create the plot of the second series
# Create the legend:
h1, l1 = ax1.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
plt.legend(h1+h2, l1+l2, loc=2)
# Create xticks and their labels:
ax1.set_xticks(range(len(monthly)));
ax1.set_xticklabels(["%s-%02d" % item for item in monthly.index.tolist()], rotation=45);
# Set axis labels:
ax1.set_xlabel('(year, month)', fontsize=16)
ax1.set_ylabel('favorites', fontsize=12, color='blue')
ax2.set_ylabel('retweets', fontsize=12, color='red')
# Set the title:
ax1.set_title('Average Public Reaction per Tweet by Month', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
We can see that public reaction, both retweets and favorites, has increased over time.
Public reaction (retweets and favorites) by Breed.
###Code # Create a new data frame: breeds = pd.DataFrame() breeds['number_of_tweets'] = tweets.groupby('breed')['tweet_id'].count() breeds['retweet_count'] = tweets.groupby('breed')['retweet_count'].sum() breeds['favorite_count'] = tweets.groupby('breed')['favorite_count'].sum() breeds['Average Rating'] = tweets.groupby('breed')['rating_numerator'].sum()/tweets.groupby('breed')['rating_denominator'].sum() # Calculate by tweet: breeds['retweet_by_tweet'] = breeds['retweet_count']/breeds['number_of_tweets'] breeds['favorite_by_tweet'] = breeds['favorite_count']/breeds['number_of_tweets'] breeds x_val = breeds.favorite_by_tweet.tolist() y_val = list(np.arange(len(x_val))) labels_y = [v for v in breeds.index] labels_x = [str(v) for v in x_val] plt.rcdefaults() # Restore the rc params from Matplotlib's internal default style. ax = breeds.plot.scatter(x='retweet_by_tweet', y='favorite_by_tweet', c='Average Rating', colormap='viridis', figsize=(10,5), grid=False) # Set X label: ax.set_xlabel('number of retweets by tweet', fontsize=12) # Set Y label: ax.set_ylabel('number of favorites by tweet', fontsize=12) # Set the title: ax.set_title('Breeds: Favorites, Retweets, and Average Rating', fontsize=14) # Annotate the breed with maximum average rating: xpos = float(breeds[breeds['Average Rating'] == breeds['Average Rating'].max()].retweet_by_tweet) ypos = float(breeds[breeds['Average Rating'] == breeds['Average Rating'].max()].favorite_by_tweet) breed = breeds[breeds['Average Rating'] == breeds['Average Rating'].max()].index.tolist()[0] pos = (xpos,ypos) ax.annotate(breed, xy=pos, xytext=(100, 10500), fontweight='bold', arrowprops=dict(facecolor='black', width=2, headwidth=8)) # Annotate the breed with maximum number of retweets by tweet: xpos = float(breeds[breeds['retweet_by_tweet'] == breeds['retweet_by_tweet'].max()].retweet_by_tweet) ypos = float(breeds[breeds['retweet_by_tweet'] == breeds['retweet_by_tweet'].max()].favorite_by_tweet) breed = 
breeds[breeds['retweet_by_tweet'] == breeds['retweet_by_tweet'].max()].index.tolist()[0]
pos = (xpos, ypos)
ax.annotate(breed, xy=pos, xytext=(8100, 10500), fontweight='bold', arrowprops=dict(facecolor='black', width=2, headwidth=8))

# Annotate the breed with the maximum number of favorites by tweet:
xpos = float(breeds[breeds['favorite_by_tweet'] == breeds['favorite_by_tweet'].max()].retweet_by_tweet)
ypos = float(breeds[breeds['favorite_by_tweet'] == breeds['favorite_by_tweet'].max()].favorite_by_tweet)
breed = breeds[breeds['favorite_by_tweet'] == breeds['favorite_by_tweet'].max()].index.tolist()[0]
pos = (xpos, ypos)
ax.annotate(breed, xy=pos, xytext=(4100, 20500), fontweight='bold', arrowprops=dict(facecolor='black', width=2, headwidth=8));
breeds[breeds['Average Rating'] == breeds['Average Rating'].max()]
xpos = float(breeds[breeds['Average Rating'] == breeds['Average Rating'].max()].retweet_by_tweet)
ypos = float(breeds[breeds['Average Rating'] == breeds['Average Rating'].max()].favorite_by_tweet)
pos = (xpos, ypos)
###Output
_____no_output_____
###Markdown
Gathering Data
Introduction
This project focused on wrangling data from the WeRateDogs Twitter account using Python, documented in a Jupyter Notebook (wrangle_act.ipynb). This Twitter account rates dogs with humorous commentary. The rating denominator is usually 10; however, the numerators are usually greater than 10 ([They're Good Dogs Brent](https://knowyourmeme.com/memes/theyre-good-dogs-brent)). WeRateDogs has over 4 million followers and has received international media coverage.
WeRateDogs downloaded their Twitter archive and sent it to Udacity via email exclusively for us to use in this project. This archive contains basic tweet data (tweet ID, timestamp, text, etc.) 
for all 5000+ of their tweets as they stood on August 1, 2017.
The goal of this project is to wrangle the WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations. The challenge lies in the fact that the Twitter archive is great, but it only contains very basic tweet information that comes in JSON format. I needed to gather, assess and clean the Twitter data for a worthy analysis and visualization.
The Data
Enhanced Twitter Archive
The WeRateDogs Twitter archive contains basic tweet data for all 5000+ of their tweets, but not everything. One column the archive does contain though: each tweet's text, which I used to extract rating, dog name, and dog "stage" (i.e. doggo, floofer, pupper, and puppo) to make this Twitter archive "enhanced". We downloaded this file manually by clicking the following link: twitter_archive_enhanced.csv
Additional Data via the Twitter API
Back to the basic-ness of Twitter archives: retweet count and favorite count are two of the notable column omissions. Fortunately, this additional data can be gathered by anyone from Twitter's API. Well, "anyone" who has access to data for the 3000 most recent tweets, at least. But we, because we have the WeRateDogs Twitter archive and specifically the tweet IDs within it, can gather this data for all 5000+. And guess what? We're going to query Twitter's API to gather this valuable data.
Image Predictions File
The tweet image predictions, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network. This file (image_predictions.tsv) is hosted on Udacity's servers, and we downloaded it programmatically using the Python requests library from the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv
Key Points
Key points to keep in mind when data wrangling for this project:
We only want original ratings (no retweets) that have images. 
Though there are 5000+ tweets in the dataset, not all are dog ratings and some are retweets.
Fully assessing and cleaning the entire dataset requires exceptional effort, so only a subset of its issues (eight (8) quality issues and two (2) tidiness issues at minimum) need to be assessed and cleaned.
Cleaning includes merging individual pieces of data according to the rules of tidy data.
The fact that the rating numerators are greater than the denominators does not need to be cleaned. This unique rating system is a big part of the popularity of WeRateDogs.
We do not need to gather tweets beyond August 1st, 2017. We can, but note that we won't be able to gather the image predictions for these tweets since we don't have access to the algorithm used.
Project Details
Fully assessing and cleaning the entire dataset would require exceptional effort, so only a subset of its issues (eight quality issues and two tidiness issues at minimum) needed to be assessed and cleaned.
The tasks for this project were:
- Data wrangling, which consists of:
  - Gathering data
  - Assessing data
  - Cleaning data
- Storing, analyzing, and visualizing our wrangled data
- Reporting on 1) our data wrangling efforts and 2) our data analyses and visualizations
###Code
# import libraries
import pandas as pd
import tweepy
import requests
import json
import time
import datetime
import seaborn as sns
import matplotlib.pyplot as plt

# read the file "twitter_archive_enhanced"
df_act = pd.read_csv('twitter_archive_enhanced.csv')

# view a small sample of the file "twitter_archive_enhanced"
df_act.head(2)

# number of rows and columns
df_act.shape

# Download the image predictions using the requests library 
(Udacity)
url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv'
response = requests.get(url)
with open('image_predictions.tsv', mode='wb') as file:
    file.write(response.content)

# Import the tweet image predictions TSV file into a DataFrame
df_img = pd.read_csv('image_predictions.tsv', sep='\t')

# Declare Twitter API keys and access tokens
consumer_key = ''
consumer_secret = ''
access_token = ''
access_secret = ''

# Tweepy supports OAuth authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth,
                 parser=tweepy.parsers.JSONParser(),
                 wait_on_rate_limit=True,         # automatically wait for rate limits to replenish
                 wait_on_rate_limit_notify=True)  # print a notification when Tweepy is waiting
# See the functions of the tweepy API class --> http://docs.tweepy.org/en/v3.2.0/api.html#API

# Use the Twitter API to collect status data on the tweets present in the df_act dataframe
start = time.time()  # start timer
tweet_ids = list(df_act['tweet_id'])  # the values of the column 'tweet_id' (df_act)
tweet_data = []
tweet_id_success = []
tweet_id_missing = []
for tweet_id in tweet_ids:
    try:
        data = api.get_status(tweet_id, tweet_mode='extended')  # tweet_mode='extended' returns the full text
        tweet_data.append(data)
        tweet_id_success.append(tweet_id)
    except:
        tweet_id_missing.append(tweet_id)
        print(tweet_id)  # print the missing tweet_id
end = time.time()  # end timer
print((end - start) / 60)  # elapsed time in minutes

'''
To know more about reading and writing JSON to a file in Python, access the link:
https://stackabuse.com/reading-and-writing-json-to-a-file-in-python/
'''
with open('tweet_json.txt', mode='w') as file:
    json.dump(tweet_data, file, sort_keys=True, indent=4)
    # 'indent=4' pretty-printing --> allows a more readable JSON view. 
# Load the Twitter API data df_api_tweet = pd.read_json('tweet_json.txt') # read the file 'tweet_json.txt' df_api_tweet['tweet_id'] = tweet_id_success # attach the 'tweet_id' of each successfully fetched tweet df_api_tweet = df_api_tweet[['tweet_id', 'favorite_count', 'retweet_count']] # keep only the columns of interest path_out = "C:\\Users\\jonys.arcanjo\\df_api_tweet.csv" # path for the "tweet_data" dataset file df_api_tweet.to_csv(path_out, index=None) # saving the "tweet_data" dataframe as a csv file # display the 'df_api_tweet' dataframe df_api_tweet = pd.read_csv('df_api_tweet.csv') ###Output _____no_output_____ ###Markdown Assess **After gathering each of the above pieces of data, assess them visually and programmatically for quality and tidiness issues. Detect and document at least eight (8) quality issues and two (2) tidiness issues in your wrangle_act.ipynb Jupyter Notebook. To meet specifications, the issues that satisfy the Project Motivation (see the Key Points header on the previous page) must be assessed.** ###Code df_act.shape df_act.info() df_act['tweet_id'].value_counts() df_act['source'].value_counts() df_act['text'].value_counts() df_act['retweeted_status_id'].value_counts() df_act['retweeted_status_user_id'].value_counts() df_act['retweeted_status_timestamp'].value_counts() df_act['expanded_urls'].value_counts() df_act['rating_numerator'].value_counts() df_act['rating_denominator'].value_counts() df_act['name'].value_counts() df_act['doggo'].value_counts() df_act['floofer'].value_counts() df_act['pupper'].value_counts() df_act['puppo'].value_counts() df_act['name'].str.isupper().value_counts() # analyzing the column "name" df_act['name'].sample(frac=.25) # display the number of rows and columns df_api_tweet.shape df_api_tweet.info() df_api_tweet.tweet_id.value_counts() df_api_tweet.favorite_count.value_counts() df_api_tweet.retweet_count.value_counts() df_api_tweet['tweet_id'].duplicated().value_counts() df_api_tweet.head() df_img.head() print(list(df_img)) df_img.info()
df_img['jpg_url'].sort_values().value_counts() sum(df_img.jpg_url.duplicated()) pd.concat(g for _, g in df_img.groupby("jpg_url") if len(g) > 1) print(df_img.p1_dog.value_counts()) print(df_img.p2_dog.value_counts()) print(df_img.p3_dog.value_counts()) df_img.img_num.value_counts() ###Output _____no_output_____ ###Markdown Quality Issues twitter_archive 1. We have identified incorrect names in the "name" column; change them to null or correct them. Identified incorrect names: **quite, such, the, light, my, O, by, actually, not, one, very, mad, all, this, just, old, getting, infuriating.** 2. 181 non-null values (retweets) were found in column "retweeted_status_id"; these rows should be removed. 3. The type of column 'timestamp' is currently object; the ideal would be datetime objects. 4. There are currently doggo, floofer, pupper and puppo columns; ideally we would create a new "dogs_stage" column and move the values into it. 5. The columns 'rating_numerator' and 'rating_denominator' are currently of type int; the ideal would be to switch them to type float. 6. Merge the 3 files 'df_act_clean', 'df_api_tweet_clean' and 'df_img_clean' to improve the analysis. 7. The classification system of the 'rating_numerator' and 'rating_denominator' columns is confusing; to be clearer we could divide the 'rating_numerator' column by 'rating_denominator'. image_predictions dataset 1. Identified 66 duplicated values in jpg_url. 2. The prediction and confidence information is spread across several columns, making it difficult to analyze. 3. Delete the columns that are no longer needed. tweet_data dataset 1. Maintained content. Cleaning Data twitter_archive 1. Changing the incorrect names to None or the correct name (column "name"). 2. Removing retweet rows (non-null "retweeted_status_id"). 3. Change the type of column 'timestamp' from object to datetime. 4. Melt the columns doggo, floofer, pupper and puppo into a dogs_stage column. 5.
Change the type of columns 'rating_numerator' and 'rating_denominator' to float. 6. Merge the 3 files 'df_act_clean', 'df_api_tweet_clean' and 'df_img_clean' to improve the analysis. 7. Create a new "rating" column that receives the result of dividing 'rating_numerator' by 'rating_denominator'. image_predictions dataset 1. Drop the 66 duplicated jpg_url rows. 2. Create one column for the image prediction and one column for the confidence level. 3. Delete columns that won't be used for analysis. tweet_data dataset 1. Maintained content. Cleaning data ###Code # Copy the dataframes df_act_clean = df_act.copy() df_img_clean = df_img.copy() df_api_tweet_clean = df_api_tweet.copy() ###Output _____no_output_____ ###Markdown Cleaning dataset df_act_clean (twitter archive-Udacity) ###Code # Changing the incorrect names to None or the correct name. replacements = { 'quite':'None','such':'None', 'the': 'None','light':'None', "my":'None','O':"O'Malley", 'by':'None', 'actually':'None', 'not':'None','one':'None', 'very':'None','mad':'None', 'all':'None','this':'None', 'just':'None','old':'None', 'getting':'None','infuriating':'None', } df_act_clean['name'].replace(replacements, inplace=True) # checking the modified names in the column "name" df_act_clean['name'].sample(frac=.5) # identifying non-null values (retweets) in column 'retweeted_status_id' df_act_clean[df_act_clean['retweeted_status_id'].notnull()==True].head() # removing retweet rows (non-null "retweeted_status_id") df_act_clean.drop(df_act_clean[df_act_clean['retweeted_status_id'].notnull()==True].index, inplace=True) # checking that the retweet rows were removed df_act_clean.info() # checking that the retweet rows were removed df_act_clean[df_act_clean['retweeted_status_id'].notnull()==True] # changing the type of column 'timestamp' from object to datetime.
df_act_clean['timestamp'] = pd.to_datetime(df_act_clean['timestamp']) # checking the changed type of column 'timestamp' df_act_clean.info() # checking the changed type of column 'timestamp' df_act_clean.sample(2) print(list(df_act_clean)) # delete the columns we no longer need df_act_clean = df_act_clean.drop(['source','in_reply_to_status_id','in_reply_to_user_id', 'retweeted_status_id','retweeted_status_user_id', 'retweeted_status_timestamp', 'expanded_urls'], axis=1) # checking the remaining columns print(list(df_act_clean)) # Melt columns 'doggo', 'floofer', 'pupper', 'puppo' into a "dogs_stage" column ''' melt() arguments: the data frame df_act_clean is passed to melt() id_vars --> the columns to leave unaltered var_name --> the name for the column that holds the former column labels value_name --> the name for the column that holds their values More details about melt(): access the link: http://www.datasciencemadesimple.com/reshape-wide-long-pandas-python-melt-function/ ''' df_act_clean = pd.melt(df_act_clean, id_vars=['tweet_id', 'timestamp', 'text', 'rating_numerator', 'rating_denominator', 'name'], var_name='dogs', value_name='dogs_stage') df_act_clean = df_act_clean.drop('dogs', axis=1) # drop column 'dogs' # sort the values in the 'dogs_stage' column and drop duplicate rows based on column 'tweet_id', # keeping the last one.
df_act_clean = df_act_clean.sort_values('dogs_stage').drop_duplicates(subset='tweet_id', keep='last') list(df_act_clean) df_act_clean[['rating_numerator', 'rating_denominator']] = df_act_clean[['rating_numerator','rating_denominator']].astype(float) df_act_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 10 columns): tweet_id 2175 non-null int64 timestamp 2175 non-null datetime64[ns] text 2175 non-null object rating_numerator 2175 non-null float64 rating_denominator 2175 non-null float64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: datetime64[ns](1), float64(2), int64(1), object(6) memory usage: 186.9+ KB ###Markdown Cleaning dataset df_img_clean (image predictions-Udacity) ###Code df_img_clean.head() df_img_clean.info() # Delete duplicated jpg_url df_img_clean = df_img_clean.drop_duplicates(subset=['jpg_url'], keep='last') # checking that the duplicated jpg_url rows were deleted sum(df_img_clean['jpg_url'].duplicated()) df_img_clean.sample(10) # the true predictions (p1, p2, p3) will be stored in these lists. # create two lists.
dog_type = [] confidence_list = [] def image(row): if row['p1_dog']: dog_type.append(row['p1']) confidence_list.append(row['p1_conf']) elif row['p2_dog']: dog_type.append(row['p2']) confidence_list.append(row['p2_conf']) elif row['p3_dog']: dog_type.append(row['p3']) confidence_list.append(row['p3_conf']) else: dog_type.append('Error') confidence_list.append('Error') # apply row-wise (axis=1): each row of df_img_clean is passed to image() df_img_clean.apply(image, axis=1) # add the values as new columns df_img_clean['dog_type'] = dog_type df_img_clean['confidence_list'] = confidence_list # delete rows whose dog_type is 'Error' df_img_clean = df_img_clean[df_img_clean['dog_type'] != 'Error'] # checking the latest modifications df_img_clean.sample(5) print(list(df_img_clean)) # delete the columns that are no longer needed df_img_clean = df_img_clean.drop(['img_num', 'p1', 'p1_conf', 'p1_dog','p2', 'p2_conf','p2_dog', 'p3','p3_conf','p3_dog'], axis=1) # checking the deleted columns print(list(df_img_clean)) ###Output ['tweet_id', 'jpg_url', 'dog_type', 'confidence_list'] ###Markdown Merge files (df_act_clean, df_api_tweet_clean and df_img_clean) ###Code # Merge the df_act_clean, df_api_tweet_clean and df_img_clean dataframes on 'tweet_id' df_twitter = pd.merge(df_act_clean, df_img_clean, on='tweet_id') df_twitter = pd.merge(df_twitter, df_api_tweet_clean, on='tweet_id') list(df_twitter) # Calculate the value of 'rating' df_twitter['rating'] = 10 * df_twitter['rating_numerator'] / df_twitter['rating_denominator'] df_twitter.head() ###Output _____no_output_____ ###Markdown Visualization and Analysis ###Code # graph the correlation between retweet_count and favorite_count sns.lmplot(x="retweet_count", y="favorite_count", data=df_twitter, size=7, aspect=1.5, scatter_kws={'alpha':1/5}) plt.title('Favorite vs.
Retweet Count') plt.xlabel('Retweet Count') plt.ylabel('Favorite Count'); ###Output C:\Users\jonys.arcanjo\AppData\Local\Continuum\anaconda3\lib\site-packages\seaborn\regression.py:546: UserWarning: The `size` paramter has been renamed to `height`; please update your code. warnings.warn(msg, UserWarning) C:\Users\jonys.arcanjo\AppData\Local\Continuum\anaconda3\lib\site-packages\scipy\stats\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval ###Markdown **There is a strong positive correlation between the favorite and retweet count columns. The highest concentration is around 4000 favorites and 2000 retweets.** ###Code df_twitter['dog_type'].value_counts() # graph of dog types (breeds with at least 20 tweets) dog_type = df_twitter.groupby('dog_type').filter(lambda x: len(x) >= 20) dog_type['dog_type'].value_counts().plot(kind='barh') plt.title('Most Rated Dog Breeds') plt.xlabel('Quantity') plt.ylabel('Breed') plt.show(); ###Output _____no_output_____ ###Markdown **Golden retriever is the most common dog.** ###Code # mean of the numeric columns grouped by dog_type dog_type_mean = df_twitter.groupby('dog_type').mean() dog_type_mean.sample(10) # sorting the mean ratings dog_type_mean.rating.sort_values() print(df_twitter.loc[df_twitter.dog_type == 'Japanese_spaniel','jpg_url']) df_twitter[df_twitter['dog_type'] == 'golden_retriever'] ###Output _____no_output_____ ###Markdown **Japanese_spaniel has the lowest average rating and Clumber has the highest.** ###Code # scatter plot to identify where retweet counts concentrate.
df_twitter.plot(x='retweet_count', y='rating', kind='scatter') plt.xlabel('Retweet Counts') plt.ylabel('Ratings') plt.title('Retweet Counts by Ratings Scatter Plot') plt.show(); ###Output _____no_output_____ ###Markdown Gather ###Code twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') r = requests.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv') with open("image_predictions.tsv", mode = 'wb') as outfile: outfile.write(r.content) image_preds = pd.read_csv('image_predictions.tsv', sep='\t') # Done, so comment out so that we don't do it again! import tweepy import json # consumer_key = # consumer_secret = # access_token = # access_secret = # auth = tweepy.OAuthHandler(consumer_key, consumer_secret) # auth.set_access_token(access_token, access_secret) # api = tweepy.API(auth, wait_on_rate_limit=True) # i=0 # with open('tweet_json.txt', 'w') as outfile: # for tweet_id in twitter_archive.tweet_id.values: # i+=1 # print("{} tweets processed".format(i)) # try: # tweet = api.get_status(tweet_id, tweet_mode='extended') # json.dump(tweet._json, outfile) # outfile.write("\n") # except tweepy.TweepError as e: # print("Tweet {} skipped!".format(tweet_id)) # pass data = [] with open('tweet_json.txt', 'r') as f: for line in f: data.append(json.loads(line)) # data is a list of dictionaries tweet_json = pd.DataFrame(data) ###Output _____no_output_____ ###Markdown Assess ###Code twitter_archive.head() image_preds.head() tweet_json.head() twitter_archive.sample(5) twitter_archive.info() twitter_archive.rating_numerator.describe() twitter_archive.rating_denominator.describe() twitter_archive.name.value_counts() twitter_archive[twitter_archive.tweet_id.duplicated()] image_preds.head() image_preds[~image_preds.p1_dog] ###Output _____no_output_____ ###Markdown Quality `twitter_archive` table- Some of the tweets are retweets, which should be removed- Not
all rating_denominator values are 10 (some are a multiple of 10 that is intentional for multiple dogs; others are simply incorrect)- Not all numerators are correct (none of the decimal numerators are captured correctly; other times the whole rating is simply wrong)- rating_numerator column should be of data type float, to accommodate the few decimal ratings- There is a tweet about plagiarism that is not a rating and doesn't belong in the dataset (835152434251116546)- Not all dog names are correct- There are tweets that are replies (not original we-rate-dogs tweets) that should be removed- timestamp column should be in datetime format- Don't need columns source, expanded_urls `tweet_json` table- We only need the id, retweet_count, and favorite_count columns `image_preds` table- Don't need columns jpg_url and img_num- Inconsistent capitalization of dog breed predictions (columns p1, p2, p3) Tidiness- All data can be merged into 1 dataframe- Dog stages should be listed in one column in the `twitter_archive` table Clean ###Code twitter_archive_clean = twitter_archive.copy() tweet_json_clean = tweet_json.copy() image_preds_clean = image_preds.copy() ###Output _____no_output_____ ###Markdown Quality `twitter_archive`: Table included retweets DefineRemove all of the tweets that are retweets.
Code ###Code twitter_archive_clean = twitter_archive_clean[twitter_archive_clean.retweeted_status_id.isnull()] ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2175 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2175 non-null object source 2175 non-null object text 2175 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 305.9+ KB ###Markdown Now that all the retweets are gone, we don't need the columns pertaining to retweet info, so drop those. 
###Code twitter_archive_clean.drop(['retweeted_status_id','retweeted_status_user_id','retweeted_status_timestamp'],axis=1,inplace=True) twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 14 columns): tweet_id 2175 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2175 non-null object source 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: float64(2), int64(3), object(9) memory usage: 254.9+ KB ###Markdown `twitter_archive`: Table includes non-original tweets (replies) DefineRemove all of the tweets that are only replies to other tweets. Code ###Code twitter_archive_clean = twitter_archive_clean[twitter_archive_clean.in_reply_to_status_id.isnull()] ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 14 columns): tweet_id 2097 non-null int64 in_reply_to_status_id 0 non-null float64 in_reply_to_user_id 0 non-null float64 timestamp 2097 non-null object source 2097 non-null object text 2097 non-null object expanded_urls 2094 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 2097 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dtypes: float64(2), int64(3), object(9) memory usage: 245.7+ KB ###Markdown Now that all the reply tweets are gone, we don't need the columns pertaining to reply anymore. Remove those. 
###Code twitter_archive_clean.drop(['in_reply_to_status_id','in_reply_to_user_id'],axis=1,inplace=True) twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 12 columns): tweet_id 2097 non-null int64 timestamp 2097 non-null object source 2097 non-null object text 2097 non-null object expanded_urls 2094 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 2097 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dtypes: int64(3), object(9) memory usage: 213.0+ KB ###Markdown `twitter_archive`: Table includes incorrect ratings in both the numerator and denominator for various reasons DefineRe-extract both numerator and denominator from the text column. These 2 issues will be fixed together.In the process, we are making numerator_rating type float instead of int. Code ###Code twitter_archive_clean[['extra1','extra2']] = twitter_archive_clean.text.str.extract('((?:\d+\.)?\d+)\/(\d+)',expand=True) fixed_num = twitter_archive_clean[twitter_archive_clean.rating_numerator.astype(float)!=twitter_archive_clean.extra1.astype(float)] fixed_num twitter_archive_clean.rating_numerator = twitter_archive_clean.extra1 twitter_archive_clean.drop('extra1',axis=1,inplace=True) twitter_archive_clean.drop('extra2',axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Now see which denominators remain that are not equal to the expected value of 10. ###Code wrong_denominator = twitter_archive_clean[twitter_archive_clean.rating_denominator!=10] wrong_denominator ###Output _____no_output_____ ###Markdown Since there's only 17 with a denominator other than 10, I visually inspected each one by looking at the text. Most of them have a rating for multiple dogs that can be reduced to something out of 10. So, I'll fix that programmatically so the denominator is 10. 
There's a handful of tweets where I grabbed the wrong fraction (or numbers with a slash between them). I can change these manually. This one is just wrong. It actually has no rating, so I'll remove it. ###Code twitter_archive_clean.loc[516] twitter_archive_clean.drop(516,inplace=True) ###Output _____no_output_____ ###Markdown The following I'm fixing manually: ###Code twitter_archive_clean['rating_numerator'][1068] = 14 twitter_archive_clean['rating_denominator'][1068] = 10 twitter_archive_clean['rating_numerator'][1165] = 13 twitter_archive_clean['rating_denominator'][1165] = 10 twitter_archive_clean['rating_numerator'][1202] = 11 twitter_archive_clean['rating_denominator'][1202] = 10 twitter_archive_clean['rating_numerator'][1662] = 10 twitter_archive_clean['rating_denominator'][1662] = 10 twitter_archive_clean['rating_numerator'][2335] = 9 twitter_archive_clean['rating_denominator'][2335] = 10 wrong_denominator = twitter_archive_clean[twitter_archive_clean.rating_denominator!=10] wrong_denominator ###Output _____no_output_____ ###Markdown Make sure numerator and denominator are numbers and not strings. 
###Code twitter_archive_clean.info() twitter_archive_clean.rating_numerator = twitter_archive_clean.rating_numerator.astype(float) ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2096 entries, 0 to 2355 Data columns (total 12 columns): tweet_id 2096 non-null int64 timestamp 2096 non-null object source 2096 non-null object text 2096 non-null object expanded_urls 2093 non-null object rating_numerator 2096 non-null object rating_denominator 2096 non-null int64 name 2096 non-null object doggo 2096 non-null object floofer 2096 non-null object pupper 2096 non-null object puppo 2096 non-null object dtypes: int64(2), object(10) memory usage: 292.9+ KB ###Markdown For tweets that have multiple dogs, for which the rating is a multiple of something out of 10, reduce it to have a denominator of 10, so there's a consistent format and we can compare ratings more easily. ###Code for i in wrong_denominator.index: multiple = twitter_archive_clean['rating_denominator'][i]/10 twitter_archive_clean['rating_denominator'][i] = 10 twitter_archive_clean['rating_numerator'][i] = twitter_archive_clean['rating_numerator'][i]/multiple ###Output /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:3: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy This is separate from the ipykernel package so we can avoid doing imports until /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:4: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy after removing the cwd from sys.path. 
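The SettingWithCopyWarning messages above come from chained indexing (`df['col'][i] = value`), which may write to a temporary copy rather than the frame itself. The same denominator normalization can be expressed with `.loc`, which writes into the frame directly. A sketch on a toy frame (column names borrowed from the archive, values made up):

```python
import pandas as pd

df = pd.DataFrame({
    "rating_numerator": [84.0, 12.0, 121.0],
    "rating_denominator": [70, 10, 110],
})

# Rows whose denominator is not the expected 10.
mask = df["rating_denominator"] != 10
multiple = df.loc[mask, "rating_denominator"] / 10

# .loc writes into df directly, so no SettingWithCopyWarning is raised.
df.loc[mask, "rating_numerator"] = df.loc[mask, "rating_numerator"] / multiple
df.loc[mask, "rating_denominator"] = 10

print(df)
```

Here 84/70 becomes 12/10 and 121/110 becomes 11/10, matching the reduce-to-ten rule applied in the loop above.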
###Markdown Test ###Code wrong_denominator = twitter_archive_clean[twitter_archive_clean.rating_denominator!=10] wrong_denominator ###Output _____no_output_____ ###Markdown `twitter_archive`: Table includes a post about plagiarism that doesn't belong DefineRemove this tweet (835152434251116546) Code ###Code plagiarism_tweet = twitter_archive_clean[twitter_archive_clean.tweet_id==835152434251116546] twitter_archive_clean.drop(plagiarism_tweet.index,inplace=True) ###Output _____no_output_____ ###Markdown `twitter_archive`: timestamp is not in datetime format DefineChange timestamp to datetime format. Code ###Code twitter_archive_clean.timestamp = pd.to_datetime(twitter_archive_clean.timestamp) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2095 entries, 0 to 2355 Data columns (total 12 columns): tweet_id 2095 non-null int64 timestamp 2095 non-null datetime64[ns] source 2095 non-null object text 2095 non-null object expanded_urls 2092 non-null object rating_numerator 2095 non-null float64 rating_denominator 2095 non-null int64 name 2095 non-null object doggo 2095 non-null object floofer 2095 non-null object pupper 2095 non-null object puppo 2095 non-null object dtypes: datetime64[ns](1), float64(1), int64(2), object(8) memory usage: 212.8+ KB ###Markdown `twitter_archive`: Some names are not correct DefineRe-extract names from the text column to get rid of cases of "a", "the", and "an" being counted as names. Code ###Code twitter_archive_clean['name_fixed'] = twitter_archive_clean.text.str.extract('(?:named |This is |name is |Meet |Say hello to )([A-Z][a-z]+)',expand=True) twitter_archive_clean[['name','name_fixed']].sample(10) ###Output _____no_output_____ ###Markdown I found cases that my code misses. I'll just fix the 2 I found manually.
###Code twitter_archive_clean.name_fixed.loc[775] = 'O\'Malley' # Devon and Fronq with accent on o twitter_archive_clean['name_fixed'][915] = twitter_archive_clean['name'][915] twitter_archive_clean['name_fixed'][1559] = twitter_archive_clean['name'][1559] ###Output /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:3: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy This is separate from the ipykernel package so we can avoid doing imports until ###Markdown Now replace name column with my temporary name_fixed and drop name_fixed.Also replace NaNs in name with None, representing no name given rather than a missing value. ###Code twitter_archive_clean.name=twitter_archive_clean.name_fixed twitter_archive_clean.drop('name_fixed',axis=1,inplace=True) twitter_archive_clean.name.fillna('None',inplace=True) twitter_archive_clean.info() twitter_archive_clean.name.value_counts() ###Output _____no_output_____ ###Markdown `image_preds` Table: Inconsistent capitalization of dog breed predictions in columns p1, p2, p3 DefineMake all breeds/predictions lowercase. Code ###Code image_preds_clean.p1 = image_preds_clean.p1.str.lower() image_preds_clean.p2 = image_preds_clean.p2.str.lower() image_preds_clean.p3 = image_preds_clean.p3.str.lower() ###Output _____no_output_____ ###Markdown Test ###Code image_preds_clean.sample(5) ###Output _____no_output_____ ###Markdown There are unneeded columns in all three tables DefineRemove these columns. 
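The drop in the next cell uses `Index.difference`, which returns every column except those in the given list — a convenient way to keep a short whitelist of columns. Its behavior on a toy frame (made-up columns):

```python
import pandas as pd

df = pd.DataFrame({"id": [1], "retweet_count": [2],
                   "favorite_count": [3], "lang": ["en"]})

# Columns NOT in the keep-list; the result is returned sorted.
to_drop = df.columns.difference(["id", "retweet_count", "favorite_count"])
df = df.drop(to_drop, axis=1)

print(sorted(df.columns))  # ['favorite_count', 'id', 'retweet_count']
```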
Code ###Code twitter_archive_clean.drop(['source','expanded_urls'], axis=1, inplace=True) image_preds_clean.drop(['jpg_url','img_num'], axis=1, inplace=True) cols_to_drop = tweet_json_clean.columns.difference(['id','favorite_count','retweet_count']) tweet_json_clean.drop(cols_to_drop, axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.info() image_preds_clean.info() tweet_json_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 3 columns): favorite_count 2331 non-null int64 id 2331 non-null int64 retweet_count 2331 non-null int64 dtypes: int64(3) memory usage: 54.7 KB ###Markdown Tidiness `twitter_archive`: The dog stages are spread across different variable columns DefineMake one column 'dog_stages' in favor of the 4 stages listed as columns. Code ###Code twitter_archive_clean[['doggo','floofer','pupper','puppo']] = twitter_archive_clean[['doggo','floofer','pupper','puppo']].replace("None",np.nan) twitter_archive_clean["dog_stages"] = twitter_archive_clean['doggo'].fillna('') + twitter_archive_clean['floofer'].fillna('') + twitter_archive_clean['pupper'].fillna('') + twitter_archive_clean['puppo'].fillna('') valid_stages = ['pupper','doggo','puppo','floofer','none'] twitter_archive_clean['dog_stages'].replace('','none',inplace=True) twitter_archive_clean['dog_stages'] = twitter_archive_clean['dog_stages'].replace(["doggopupper","doggofloofer","doggopuppo"],"multiple") twitter_archive_clean.drop(['pupper','doggo','floofer','puppo'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.head(10) ###Output _____no_output_____ ###Markdown All three tables should be merged into one DefineMerge them, only keeping rows that are present in each table (inner join). 
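An inner merge keeps only the rows whose keys appear in both frames, so tweets missing from either the API data or the image predictions drop out of the master table. A minimal sketch with made-up ids:

```python
import pandas as pd

archive = pd.DataFrame({"tweet_id": [1, 2, 3], "rating_numerator": [12, 13, 10]})
api = pd.DataFrame({"id": [1, 3], "retweet_count": [50, 70]})

# Inner join on differently named key columns; tweet 2 is absent from api,
# so it is dropped from the result.
merged = pd.merge(archive, api, left_on="tweet_id", right_on="id", how="inner")
merged = merged.drop("id", axis=1)

print(merged["tweet_id"].tolist())  # [1, 3]
```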
Code ###Code twitter_archive_master = twitter_archive_clean.merge(tweet_json_clean,left_on='tweet_id', right_on='id',how='inner') twitter_archive_master.drop('id',axis=1,inplace=True) image_preds_clean.rename(columns={'tweet_id': 'id'}, inplace=True) twitter_archive_master = twitter_archive_master.merge(image_preds_clean,left_on='tweet_id',right_on='id',how='inner') ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_master.info() twitter_archive_master.head() # save twitter_archive_master.to_csv('twitter_archive_master.csv',index=False) ###Output _____no_output_____ ###Markdown Exploratory Data Analysis ###Code %matplotlib inline import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown How does the average dog rating change over time? ###Code twitter_archive_master.timestamp.describe() twitter_archive_master.sort_values('timestamp') plt.scatter(data=twitter_archive_master,x='tweet_id',y='rating_numerator',alpha=1/5); ###Output _____no_output_____ ###Markdown Looks like there's 2 outliers we should get rid of. ###Code outliers = twitter_archive_master[twitter_archive_master.rating_numerator>250] twitter_archive_master.drop(outliers.index, inplace=True) plt.scatter(data=twitter_archive_master,x='tweet_id',y='rating_numerator',alpha=1/5); plt.xlabel('Tweet ID') plt.ylabel('Rating out of 10') plt.title('Rating over Time') ###Output _____no_output_____ ###Markdown This is interesting, and as I suspected. At least up to the time for which we have tweet data, the ratings tend to get higher. In particular, somewhere around the tweet that starts with 78, which corresponds to around September of 2016, We Rate Dogs had stopped giving any dogs lower than 10/10 (I suspect the 6/10 is another outlier/mistake). Is there a correlation between tweet rating, retweet count, and favorite count? 
###Code twitter_archive_master.rating_numerator.value_counts() ###Output _____no_output_____ ###Markdown For this analysis, I will remove the tweets that have a non-integer numerator rating, since there's only 4 of them, so I can treat the rating variable as discrete. ###Code # twitter_archive_master_decimals = twitter_archive_master[(twitter_archive_master.rating_numerator==9.75) | (twitter_archive_master.rating_numerator==11.26) | (twitter_archive_master.rating_numerator==13.5) | (twitter_archive_master.rating_numerator==11.27)] # twitter_archive_master_ints = twitter_archive_master.drop(twitter_archive_master_decimals.index) # twitter_archive_master_ints.rating_numerator = twitter_archive_master_ints.rating_numerator.astype('int') # twitter_archive_master_ints.info() plt.scatter(data=twitter_archive_master,x='favorite_count',y='retweet_count',c='rating_numerator',cmap='viridis'); cbar = plt.colorbar() plt.xlabel('Favorite Count'); plt.ylabel('Retweet Count'); cbar.set_label('Rating out of 10') plt.title('Tweet Retweet Count vs. Favorite Count vs. Rating'); np.corrcoef([twitter_archive_master['favorite_count'],twitter_archive_master['retweet_count'],twitter_archive_master['rating_numerator']]) ###Output _____no_output_____ ###Markdown There appears to be a strong positive correlation (r=0.93) between favorite_count and retweet count. The correlation between the rating and favorite count, and between rating and retweet count, on the other hand, are moderate (r=0.4 and r=0.3, respectively). This is reflected in the multivariate scatterplot above. Which dog breeds are most popular? ###Code twitter_archive_master.head(10) ###Output _____no_output_____ ###Markdown First, make a new column for predicted dog breed, based on the most likely predictions out of the three given that are actually dog breeds. If no valid breeds are given, remove those for this analysis. 
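The cascade described above — take p1 if it is a dog, else p2, else p3 — can also be written without row-wise assignment using `numpy.select`, which picks the choice for the first condition that holds. A sketch on a toy frame (column names assumed from the predictions table, values made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "p1": ["paper_towel", "golden_retriever"], "p1_dog": [False, True],
    "p2": ["pembroke", "labrador_retriever"], "p2_dog": [True, True],
    "p3": ["chow", "tabby"],                  "p3_dog": [True, False],
})

conditions = [df["p1_dog"], df["p2_dog"], df["p3_dog"]]
choices = [df["p1"], df["p2"], df["p3"]]

# The first True condition wins; rows with no dog prediction get 'none'.
df["pred_breed"] = np.select(conditions, choices, default="none")

print(df["pred_breed"].tolist())  # ['pembroke', 'golden_retriever']
```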
###Code twitter_archive_master['pred_breed'] = twitter_archive_master.p1 twitter_archive_master.head() p1_not_dog = twitter_archive_master[~twitter_archive_master.p1_dog] p1_not_dog twitter_archive_master.loc[p1_not_dog.index, 'pred_breed'] = twitter_archive_master.p2[p1_not_dog.index] p12_not_dog = twitter_archive_master[~twitter_archive_master.p1_dog & ~twitter_archive_master.p2_dog] p12_not_dog twitter_archive_master.loc[p12_not_dog.index, 'pred_breed'] = twitter_archive_master.p3[p12_not_dog.index] p123_not_dog = twitter_archive_master[~twitter_archive_master.p1_dog & ~twitter_archive_master.p2_dog & ~twitter_archive_master.p3_dog] p123_not_dog twitter_archive_master_breeds = twitter_archive_master.drop(p123_not_dog.index) twitter_archive_master_breeds.sample(10) ###Output _____no_output_____ ###Markdown To make comparison of breeds more statistically fair, only consider those that are in at least 20 different tweets. ###Code twitter_archive_master_breeds.pred_breed.value_counts() top_tweeted_breeds = twitter_archive_master_breeds.pred_breed.value_counts()[twitter_archive_master_breeds.pred_breed.value_counts().values>=20] top_tweeted_breeds top_tweeted_breeds.index top_breeds = top_tweeted_breeds.index.tolist() top_breeds twitter_archive_master_top_breeds = twitter_archive_master_breeds[twitter_archive_master_breeds.pred_breed.isin(top_breeds)] twitter_archive_master_top_breeds.sample(5) ###Output _____no_output_____ ###Markdown Calculate popularity based on average rating, retweet count, and favorite count.
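As an aside, the three per-breed averages can be computed in a single `groupby`/`mean` call rather than one groupby per metric. A sketch on an invented mini-frame (the column names match the notebook's, the numbers do not):

```python
import pandas as pd

# Invented mini-frame standing in for twitter_archive_master_top_breeds
df = pd.DataFrame({
    "pred_breed":       ["pug", "pug", "samoyed", "samoyed", "pembroke"],
    "rating_numerator": [10, 11, 13, 12, 12],
    "retweet_count":    [200, 300, 800, 600, 500],
    "favorite_count":   [900, 1100, 3000, 2500, 2000],
})

# one groupby yields the mean of every popularity metric per breed at once
popularity = (
    df.groupby("pred_breed")[["rating_numerator", "retweet_count", "favorite_count"]]
      .mean()
)
print(popularity.sort_values("favorite_count", ascending=False))
```

Having all three metrics in one frame also makes it easy to sort by any of them, or to chain `.nlargest(3, "favorite_count")`.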
###Code mean_ratings_breed = twitter_archive_master_top_breeds.groupby('pred_breed').rating_numerator.mean().nlargest(3) mean_ratings_breed mean_retweet_breed = twitter_archive_master_top_breeds.groupby('pred_breed').retweet_count.mean().nlargest(3) mean_retweet_breed mean_favorite_breed = twitter_archive_master_top_breeds.groupby('pred_breed').favorite_count.mean().nlargest(3) mean_favorite_breed ###Output _____no_output_____ ###Markdown The top 3 most popularly represented breeds in We Rate Dogs' tweets are Golden Retriever, Labrador Retriever, and Pembroke (aka Corgi). Filtering down to just the top 20 most-tweeted breeds, the top 3 highest-rated breeds were Samoyed, Golden Retriever, and Pembroke. The top 3 most retweeted breeds were French Bulldog, Cocker Spaniel, and Eskimo Dog. Finally, the top 3 most-favorited breeds were French Bulldog, Cocker Spaniel, and Samoyed. What are the most popular dog names? ###Code twitter_archive_master.name.value_counts() ###Output _____no_output_____ ###Markdown Wrangle & Analyze Twitter Data from WeRateDogsIn this project we gather, assess and clean data on tweets by WeRateDogs from three sources:* A given csv file containing an archive of tweets to be analyzed.* A remotely hosted tsv file containing predictions of dog breed pictured per tweet.* Data on tweet retweets and favorites obtained via Twitter's API.We then analyze the data and report findings.Wrangling activities for this project are limited to assessing and cleaning eight quality issues and two tidiness issues. ###Code import pandas as pd import numpy as np import requests import io import tweepy import json import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline ###Output _____no_output_____ ###Markdown 1 Gather data 1.1 Tweet archiveThis given csv just needs to be loaded into a dataframe, direct from the folder.
###Code df_archive = pd.read_csv('twitter-archive-enhanced.csv') df_archive.head() ###Output _____no_output_____ ###Markdown 1.2 Image predictionsThis tsv needs to be downloaded from Udacity's servers and loaded into a dataframe. Using the `requests` library rather than reading directly into `pd.read_csv` is a project requirement.Download the file: ###Code url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/' + \ 'August/599fd2ad_image-predictions/image-predictions.tsv' r = requests.get(url) ###Output _____no_output_____ ###Markdown Check the encoding: ###Code r.encoding ###Output _____no_output_____ ###Markdown Load into a dataframe: ###Code content = io.StringIO(r.content.decode('utf-8')) df_img_pred = pd.read_csv(content, sep='\t') df_img_pred.head() ###Output _____no_output_____ ###Markdown 1.3 Retweets and favoritesThis data needs to be obtained via the Twitter API using tweepy and loaded into a dataframe.Tweepy requests 100 tweets at a time, so put tweet ids into batches of 100: ###Code n = 100 ids = df_archive['tweet_id'].tolist() batched = [ids[i:i + n] for i in range(0, len(ids), n)] len(batched) ###Output _____no_output_____ ###Markdown Retrieve data for the batches of tweets from the Twitter API, waiting for 15 min and retrying current batch if there is a rate limit error: ###Code import time tweets = [] consumer_key = 'ENTER KEY HERE' consumer_secret = 'ENTER SECRET HERE' auth = tweepy.AppAuthHandler(consumer_key, consumer_secret) api = tweepy.API(auth) i = 1 for batch in batched: while True: try: print(f'attempting batch {i}') tweets += api.statuses_lookup(batch, trim_user=True) except tweepy.RateLimitError: print(f'waiting on batch {i}') time.sleep(15 * 60) else: i += 1 break ###Output attempting batch 1 attempting batch 2 attempting batch 3 attempting batch 4 attempting batch 5 attempting batch 6 attempting batch 7 attempting batch 8 attempting batch 9 attempting batch 10 attempting batch 11 attempting batch 12 attempting batch 13 attempting batch 14
attempting batch 15 attempting batch 16 attempting batch 17 attempting batch 18 attempting batch 19 attempting batch 20 attempting batch 21 attempting batch 22 attempting batch 23 attempting batch 24 ###Markdown Check how many tweets we were able to get data for: ###Code len(tweets) ###Output _____no_output_____ ###Markdown Save full json data for each tweet to `tweet_json.txt`. This is a project requirement. ###Code with open('tweet_json.txt', 'w', encoding='utf-8') as f: for tweet in tweets: json.dump(tweet._json, f) f.write('\n') ###Output _____no_output_____ ###Markdown Load tweet ID, retweet count, and favorite count from `tweet_json.txt` into a dataframe. Reading the file line by line is a project requirement. ###Code api_data = [] with open('tweet_json.txt', 'r', encoding='utf-8') as f: for l in f.readlines(): j = json.loads(l) api_data.append([j['id'], j['retweet_count'], j['favorite_count']]) df_api = pd.DataFrame(api_data, columns=['tweet_id', 'retweets', 'favorites']) df_api.head() ###Output _____no_output_____ ###Markdown 1.4 SummaryWe now have the data in three dataframes as follows:* `df_archive` contains the tweet data from the given csv file* `df_img_pred` contains the predictions of dog breed from the downloaded tsv* `df_api` contains the retweet and favorite data obtained via the Twitter API 2 Assess and clean dataIssues are tackled as discovered as many are inter-related. A summary is provided at the end. 2.1 One dataframe per observational unitExamine the dataframes visually, looking at the heads (see above).Check the number of tweets in each dataframe: ###Code (len(df_archive), len(df_img_pred), len(df_api)) ###Output _____no_output_____ ###Markdown 2.1.1 Assess* The data is split across three dataframes but they all relate to one type of observational unit (a tweet).* Data for some tweets is missing from one or more dataframes. 
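One way to quantify how much each source is missing before committing to an inner join is an outer merge with `indicator=True`. A toy sketch with invented IDs (the real frames here are `df_archive`, `df_img_pred` and `df_api`):

```python
import pandas as pd

# Invented ID lists standing in for two of the three real dataframes
left = pd.DataFrame({"tweet_id": [1, 2, 3, 4]})
right = pd.DataFrame({"tweet_id": [2, 3, 5]})

# indicator=True adds a _merge column saying where each tweet_id was found
merged = left.merge(right, on="tweet_id", how="outer", indicator=True)

# these are exactly the rows an inner join would silently drop
only_left = merged.loc[merged["_merge"] == "left_only", "tweet_id"].tolist()
only_right = merged.loc[merged["_merge"] == "right_only", "tweet_id"].tolist()
print(only_left, only_right)  # [1, 4] [5]
```

Counting `merged["_merge"].value_counts()` on the real frames would show how many of the 2356 archived tweets survive the join.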
2.1.2 DefineMerge the dataframes on the tweet_id column, using an inner join so we are left only with rows for which we have data from all sources. 2.1.3 Code ###Code df_tidy = df_archive.merge(df_img_pred, on='tweet_id', how='inner') \ .merge(df_api, on='tweet_id', how='inner') ###Output _____no_output_____ ###Markdown 2.1.4 TestVisually inspect dataframe head and columns, and check how many rows are present: ###Code df_tidy.head() df_tidy.columns len(df_tidy) ###Output _____no_output_____ ###Markdown 2.2 One column per variableCheck mutual exclusivity of `doggo`, `puppo` and `pupper` stages: ###Code sum(df_tidy.apply(lambda r: sum([r['doggo'] != 'None', r['pupper'] != 'None', r['puppo'] != 'None']) >= 2, axis=1)) ###Output _____no_output_____ ###Markdown Manually investigate why 12 dogs have been given more than one `stage`: ###Code df_tidy['multistage'] = df_tidy.apply( lambda r: sum([r['doggo'] != 'None', r['pupper'] != 'None', r['puppo'] != 'None']) >= 2, axis=1) df_tidy[df_tidy['multistage']==True]['text'].tolist() ###Output _____no_output_____ ###Markdown On some occasions multiple dogs at different stages are featured in the tweet, and on others, a reference to a stage is conversational rather than relating to the dog. There may be other tweets with multiple dogs, both at the same stage. However, a programmatic analysis of the text to unpick these issues is beyond the scope of this project. As the number of occurences with multiple stages is small, they are excluded from the analysis instead. 2.2.1 Assess* The `doggo`, `pupper`, and `puppo` columns all relate to a single, categorical stage variable (`floofer` is related to fluffiness rather than stage so is regarded as a separate variable).* Some tweets refer to more than one stage. 2.2.2 Define1. Drop rows containing more than one stage.2. Rename `doggo` to `stage`3. Copy in the value of `pupper` where `stage` is `"None"`4. Copy in the value of `puppo` where `stage` is still `"None"`5. 
Drop `pupper`, `puppo` and `multistage` columns. 2.2.3 Code ###Code to_drop = df_tidy[df_tidy['multistage']==True].index df_tidy.drop(to_drop, inplace=True) print(df_tidy['multistage'].value_counts()) df_tidy.rename(columns={'doggo':'stage'}, inplace=True) df_tidy.loc[df_tidy['stage'] == 'None', ['stage']] = df_tidy['pupper'] df_tidy.loc[df_tidy['stage'] == 'None', ['stage']] = df_tidy['puppo'] df_tidy.drop(columns=['pupper', 'puppo', 'multistage'], inplace=True) ###Output False 2047 Name: multistage, dtype: int64 ###Markdown 2.2.4 Test* Verify that the value count of `multistage` printed above only contains the value `False`.* Count the values in `stage` to ensure all stages are now represented in this column.* Check the list of column names to make sure columns were removed. ###Code df_tidy['stage'].value_counts() df_tidy.columns ###Output _____no_output_____ ###Markdown 2.3 Retweets and repliesCount how many tweets are retweets: ###Code df_clean = df_tidy.copy() len(df_clean[df_clean['retweeted_status_id'].notna()]) ###Output _____no_output_____ ###Markdown Count how many tweets are replies: ###Code len(df_clean[df_clean['in_reply_to_status_id'].notna()]) ###Output _____no_output_____ ###Markdown Check the content of the replies to make sure they aren't actually ratings we should include in the analysis: ###Code df_clean[df_clean['in_reply_to_user_id'].notna()]['text'].tolist() ###Output _____no_output_____ ###Markdown These replies are mostly invalid for the analysis, and analysing the text to determine which are valid is beyond the scope of this project, so replies are excluded from the analysis. 2.3.1 AssessThe tweets aren't all original tweets - 70 are retweets and 22 are replies. 2.3.2 Define1. Drop rows where `retweeted_status_id` or `in_reply_to_status_id` is not `NaN`2. 
Drop the `retweeted_status_id`, `retweeted_status_user_id`, `retweeted_status_timestamp`, `in_reply_to_status_id`, and `in_reply_to_user_id` columns 2.3.3 Code ###Code to_drop = df_clean[df_clean['retweeted_status_id'].notna() | df_clean['in_reply_to_status_id'].notna()].index df_clean.drop(to_drop, inplace=True) print(f'Retweets {df_clean["retweeted_status_id"].value_counts()}') print(f'Replies {df_clean["in_reply_to_status_id"].value_counts()}') df_clean.drop(columns=['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'in_reply_to_status_id', 'in_reply_to_user_id'], inplace=True) ###Output Retweets Series([], Name: retweeted_status_id, dtype: int64) Replies Series([], Name: in_reply_to_status_id, dtype: int64) ###Markdown 2.3.4 Test* Verify that the value counts for Retweets and Replies printed above are empty series.* Check the list of column names to make sure columns were removed. ###Code df_clean.columns ###Output _____no_output_____ ###Markdown 2.4 Missing/inaccurate namesCheck the value counts of the names column: ###Code df_clean['name'].value_counts() ###Output _____no_output_____ ###Markdown Check a sample where the name is `'None'`: ###Code df_clean[df_clean['name']=='None']['text'].sample(5).tolist() ###Output _____no_output_____ ###Markdown After checking many samples, it seems that where no name is detected, a high proportion of the tweets relate to invalid tweets for the analysis, such as other animals, or multiple dogs.Check a sample where the name is `'a'` or `'an'` or `'the'`: ###Code names = ['a', 'an', 'the'] df_clean[df_clean['name'].isin(names)]['text'] \ .sample(5).tolist() ###Output _____no_output_____ ###Markdown After checking many samples, where the name is `'a'` or `'an'` or `'the'` the tweet is usually valid for the analysis, but sometimes not, for example being about another animal.Analysing the text to determine which tweets with undetected or incorrect name are valid for the analysis is beyond the scope 
of this project, so these tweets are excluded from the analysis. 2.4.1 Assess* Some tweets have no dog name detected.* Some tweets have an incorrect dog name i.e. `'a'`, `'an'` or `'the'` 2.4.2 DefineDrop rows where `name` is `'None'`, `'a'`, `'an'` or `'the'` 2.4.3 Code ###Code names = ['None', 'a', 'an', 'the'] to_drop = df_clean[df_clean['name'].isin(names)].index df_clean.drop(to_drop, inplace=True) ###Output _____no_output_____ ###Markdown 2.4.4 TestCheck the value counts for the `name` column. ###Code df_clean['name'].value_counts() ###Output _____no_output_____ ###Markdown 2.5 Inaccurate rating denominatorsCheck value counts for `rating_denominator`: ###Code df_clean['rating_denominator'].value_counts() ###Output _____no_output_____ ###Markdown Check tweets where `rating_denominator` is not 10: ###Code df_clean[df_clean['rating_denominator']!=10]['text'].tolist() ###Output _____no_output_____ ###Markdown The first two don't contain a valid rating and are excluded from analysis. The last two can be manually corrected. 2.5.1 Assess* Some tweets with `rating_denominator != 10` are invalid for the analysis* Some tweets with `rating_denominator != 10` have inaccurate but correctable ratings 2.5.2 Define1. Drop rows with `rating_denominator`s of `7` and `170`2. Correct ratings for rows with `rating_denominator`s of `50` and `11` 2.5.3 Code ###Code to_drop = df_clean[df_clean['rating_denominator'].isin([7, 170])].index df_clean.drop(to_drop, inplace=True) df_clean.loc[df_clean['rating_denominator']==50, ['rating_numerator', 'rating_denominator']] = [11, 10] df_clean.loc[df_clean['rating_denominator']==11, ['rating_numerator', 'rating_denominator']] = [10, 10] ###Output _____no_output_____ ###Markdown 2.5.4 TestVerify that there are no longer any rows with `rating_denominator` of 7, 11, 50 or 170.
###Code df_clean['rating_denominator'].isin([7, 11, 50, 170]).sum() ###Output _____no_output_____ ###Markdown 2.6 Inaccurate rating numeratorsHaving seen a tweet mentioning "our first ever 14/10" check tweets where `rating_numerator` is greater than 14: ###Code df_clean[df_clean['rating_numerator']>14]['text'].tolist() ###Output _____no_output_____ ###Markdown The first two instances could be rounded to the nearest integer for analysis. The last instance looks correct and deliberate, but is such a distant outlier it would potentially skew any analysis. Changing this rating to 15/10 would retain the dog's number one rank while allowing analysis to proceed meaningfully. 2.6.1 Assess* Two numerators are inaccurate due to decimals in the tweet text.* One numerator is a distant outlier. 2.6.2 Define1. Round the decimal numerators to the nearest integer and update `rating_numerator`.2. Reduce the outlier numerator to 15 and update `rating_numerator`. 2.6.3 Code ###Code df_clean.loc[df_clean['rating_numerator']==75, 'rating_numerator'] = 10 df_clean.loc[df_clean['rating_numerator']==27, 'rating_numerator'] = 11 df_clean.loc[df_clean['rating_numerator']==1776, 'rating_numerator'] = 15 ###Output _____no_output_____ ###Markdown 2.6.4 TestVerify that only one `rating_numerator` is greater than 14, and that it is 15.
###Code df_clean[df_clean['rating_numerator']>14]['rating_numerator'] ###Output _____no_output_____ ###Markdown 2.7 Incorrect datatypesCheck the datatype of each column: ###Code df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1365 entries, 0 to 2029 Data columns (total 23 columns): tweet_id 1365 non-null int64 timestamp 1365 non-null object source 1365 non-null object text 1365 non-null object expanded_urls 1365 non-null object rating_numerator 1365 non-null int64 rating_denominator 1365 non-null int64 name 1365 non-null object stage 1365 non-null object floofer 1365 non-null object jpg_url 1365 non-null object img_num 1365 non-null int64 p1 1365 non-null object p1_conf 1365 non-null float64 p1_dog 1365 non-null bool p2 1365 non-null object p2_conf 1365 non-null float64 p2_dog 1365 non-null bool p3 1365 non-null object p3_conf 1365 non-null float64 p3_dog 1365 non-null bool retweets 1365 non-null int64 favorites 1365 non-null int64 dtypes: bool(3), float64(3), int64(6), object(11) memory usage: 227.9+ KB ###Markdown 2.7.1 Assess* `timestamp` is a string rather than a datetime.* `stage` is a string rather than a category.* `floofer` is a string rather than a boolean. 2.7.2 Define1. Convert `timestamp` to a datetime.2. Convert `stage` to a category.3. Convert `floofer` to a boolean. 2.7.3 Code ###Code df_clean['timestamp'] = pd.to_datetime(df_clean['timestamp']) df_clean['stage'] = df_clean['stage'].astype('category') floof = {'floofer':True, 'None':False} df_clean['floofer'] = df_clean['floofer'].map(floof) ###Output _____no_output_____ ###Markdown 2.7.4 TestCheck the datatypes again. 
###Code df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1365 entries, 0 to 2029 Data columns (total 23 columns): tweet_id 1365 non-null int64 timestamp 1365 non-null datetime64[ns] source 1365 non-null object text 1365 non-null object expanded_urls 1365 non-null object rating_numerator 1365 non-null int64 rating_denominator 1365 non-null int64 name 1365 non-null object stage 1365 non-null category floofer 1365 non-null bool jpg_url 1365 non-null object img_num 1365 non-null int64 p1 1365 non-null object p1_conf 1365 non-null float64 p1_dog 1365 non-null bool p2 1365 non-null object p2_conf 1365 non-null float64 p2_dog 1365 non-null bool p3 1365 non-null object p3_conf 1365 non-null float64 p3_dog 1365 non-null bool retweets 1365 non-null int64 favorites 1365 non-null int64 dtypes: bool(4), category(1), datetime64[ns](1), float64(3), int64(6), object(8) memory usage: 209.5+ KB ###Markdown 2.8 Unneeded columnsCheck the list of columns: ###Code df_clean.columns ###Output _____no_output_____ ###Markdown 2.8.1 AssessThe following columns are not needed for the analysis: `source`, `expanded_urls`, `jpg_url` and `rating_denominator`. 2.8.2 DefineDrop the columns `source`, `expanded_urls`, `jpg_url` and `rating_denominator`. 2.8.3 Code ###Code df_clean.drop(columns=['source', 'expanded_urls', 'jpg_url', 'rating_denominator'], inplace=True) ###Output _____no_output_____ ###Markdown 2.8.4 TestCheck the list of columns. 
###Code df_clean.columns ###Output _____no_output_____ ###Markdown 2.9 False negatives from image predictions Check how often the image predictions all say it's not a dog: ###Code len(df_clean[(df_clean['p1_dog']==False) & (df_clean['p2_dog']==False) & (df_clean['p3_dog']==False)]) ###Output _____no_output_____ ###Markdown Check how often tweets are invalid for analysis when the image predictions all say it's not a dog: ###Code df_clean[(df_clean['p1_dog']==False) & (df_clean['p2_dog']==False) & (df_clean['p3_dog']==False)].sample(5)['text'].tolist() ###Output _____no_output_____ ###Markdown After checking many samples, no invalid tweet has been found. No cleaning action is necessary, but this is noted for completeness. 2.10 SummaryThe following issues were discovered and cleaned: 2.10.1 Tidiness* The data was split across three dataframes but they all related to one type of observational unit. The dataframes were merged.* The `doggo`, `pupper`, and `puppo` columns all related to a single, categorical `stage` variable (`floofer` relates to fluffiness rather than stage so was regarded as a separate variable). This information was collated into a single `stage` column. 2.10.2 Quality* Data for 297 tweets was missing from one or more dataframes. These tweets were removed.* 12 tweets contained references to multiple stages i.e. `doggo`, `pupper`, `puppo`. These tweets were removed.* The tweets weren't all original tweets - 70 were retweets and 22 were replies. These tweets were removed.* Names were not always detected and this often indicated a tweet that didn't rate a single dog. These tweets were removed.* Detected names were not always accurate - sometimes being picked up as `'a'`, `'an'` or `'the'`. This sometimes indicated a tweet that didn't rate a single dog. These tweets were removed.* Rating denominators were not always accurate.
Ratings were corrected where tweets were valid for the analysis, and otherwise were removed.* Rating numerators were not always accurate due to decimals used in the tweet. Ratings were rounded to the nearest integer.* One outlier numerator of 1776 was present. This was reduced to 15, preserving the dog's number one rank within the data.* `timestamp` was a string rather than a datetime. It was converted.* `stage` was a string rather than a category. It was converted.* `floofer` was a string rather than a boolean. It was converted.* The columns `source`, `expanded_urls`, `jpg_url` and `rating_denominator` were not needed for analysis. These columns were removed. 3 Store data ###Code df_clean.to_csv('twitter_archive_master.csv') ###Output _____no_output_____ ###Markdown 4 Analyze data 4.1 Is there a correlation between rating and favorites?First, plot a scatterplot of rating against favorites for a quick visual assessment: ###Code plt.scatter(df_clean['rating_numerator'], df_clean['favorites'], alpha=0.5) plt.title('WeRateDogs Tweets:\nDog Rating vs Number of Favorites', fontsize=16) plt.xlabel('Rating (out of 10)', fontsize=12) plt.ylabel('Number of favorites', fontsize=12); ###Output _____no_output_____ ###Markdown From the graph above it does seem that the most favorited tweets are for highly rated dogs, but it's difficult to see what's going on for the least favorited tweets.Try taking log of the number of favorites: ###Code plt.scatter(df_clean['rating_numerator'], np.log(df_clean['favorites']), alpha=0.5) plt.title('WeRateDogs Tweets:\nDog Rating vs Log Number of Favorites', fontsize=16) plt.xlabel('Rating (out of 10)', fontsize=12) plt.ylabel('Log Number of favorites', fontsize=12); ###Output _____no_output_____ ###Markdown Now we can clearly see that above 10/10, a higher rating is correlated with more favorites. Below 10/10, it looks possible that the rating doesn't make much difference.
4.2 Which dog stage gets the best and worst ratings on average?Plot boxplots of rating grouped by stage: ###Code df_clean[['rating_numerator', 'stage']].groupby(df_clean['stage']).boxplot(subplots=False); plt.title('WeRateDogs Ratings:\nDog Stage vs Dog Rating', fontsize=16) plt.xlabel('Dog stage', fontsize=12) plt.ylabel('Rating (out of 10)', fontsize=12) plt.xticks([1, 2, 3, 4], ['None', 'Doggo', 'Pupper', 'Puppo']); ###Output _____no_output_____ ###Markdown While it's difficult to separate `Doggo` and `Puppo` dogs for best ratings, surprisingly (for me at least) `Pupper` dogs, the youngest category, get the worst ratings on average. 4.3 Which type of dog averaged the most retweets according to the image predictions?First, put the most confident prediction that is actually a dog into a new column: ###Code df_clean['top_dog_pred'] = 'Unknown' df_clean.loc[df_clean['p3_dog']==True, 'top_dog_pred'] = df_clean['p3'] df_clean.loc[df_clean['p2_dog']==True, 'top_dog_pred'] = df_clean['p2'] df_clean.loc[df_clean['p1_dog']==True, 'top_dog_pred'] = df_clean['p1'] df_clean['top_dog_pred'].value_counts() df_clean[['retweets', 'top_dog_pred']].groupby('top_dog_pred') \ .mean().sort_values(by='retweets', ascending=False) ###Output _____no_output_____ ###Markdown Gathering Data for this ProjectGather each of the three pieces of data as described below in a Jupyter Notebook titled wrangle_act.ipynb:The WeRateDogs Twitter archive. I am giving this file to you, so imagine it as a file on hand. Download this file manually by clicking the following link: twitter_archive_enhanced.csv DONEThe tweet image predictions, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network.
This file (image_predictions.tsv) is hosted on Udacity's servers and should be downloaded programmatically using the Requests library and the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsvEach tweet's retweet count and favorite ("like") count at minimum, and any additional data you find interesting. Using the tweet IDs in the WeRateDogs Twitter archive, query the Twitter API for each tweet's JSON data using Python's Tweepy library and store each tweet's entire set of JSON data in a file called tweet_json.txt file. Each tweet's JSON data should be written to its own line. Then read this .txt file line by line into a pandas DataFrame with (at minimum) tweet ID, retweet count, and favorite count. Note: do not include your Twitter API keys, secrets, and tokens in your project submission. ###Code #import packages import pandas as pd import numpy as np import requests import tweepy from tweepy import OAuthHandler import json import os from timeit import default_timer as timer ###Output _____no_output_____ ###Markdown First piece of data: twitter archive ###Code # create twitter archive df df_archive = pd.read_csv('twitter-archive-enhanced.csv') df_archive.head() df_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), 
object(10) memory usage: 313.0+ KB ###Markdown Second piece of data: image predictions ###Code #request the image predictions tsv url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' r = requests.get(url) # create a folder for the images folder_name = 'dogs' # Make directory if it doesn't already exist if not os.path.exists(folder_name): os.makedirs(folder_name) # save the tsv file to dogs folder and name it as the last piece of the url with open(os.path.join(folder_name, url.split('/')[-1]), mode = 'wb') as file: file.write(r.content) # check if data has been gathered df_image = pd.read_csv('dogs/image-predictions.tsv', sep='\t') df_image.head() ###Output _____no_output_____ ###Markdown Third piece of data: twitter api ###Code # set up the twitter api # dont forget to hide the secret info later consumer_key = 'hidden' consumer_secret = 'hidden' access_token = 'hidden' access_secret = 'hidden' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) tweet_ids = df_archive.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) # Finally, write the json file to df # This question from the message board was very helpful: # https://knowledge.udacity.com/questions/28389 df_tweets 
= pd.DataFrame(columns=['tweet_id', 'id_int','retweet_count', 'favorite_count']) with open('tweet_json.txt') as f: for line in f: status = json.loads(line) tweet_id = status['id_str'] id_int = status['id'] retweet_count = status['retweet_count'] favorite_count = status['favorite_count'] df_tweets = df_tweets.append(pd.DataFrame([[tweet_id, id_int, retweet_count, favorite_count]], columns=['tweet_id', 'id_int','retweet_count', 'favorite_count'])) df_tweets = df_tweets.reset_index(drop=True) df_tweets.head() ###Output _____no_output_____ ###Markdown Assessing Data Part 1: Visual Assessment Tweet archive data visual review ###Code df_archive.head() ###Output _____no_output_____ ###Markdown FindingThis data set looks pretty good. Clean and tidy as far as I can tell. I think the NaNs are mostly because the tweets are not retweets/replies/etc. Image df visual review ###Code df_image.head() ###Output _____no_output_____ ###Markdown FindingNo obvious problems here. There may be issues with the capitalization of dog names, but probably ok there too. Tweets df visual assessment ###Code df_tweets.head() ###Output _____no_output_____ ###Markdown FindingThis looks good. The tweet ids as str and ints is interesting... Programmatic assessment Tweets archive data ###Code df_archive.info() # Looking at the Source column, it's probably not useful. df_archive.source.value_counts() # see what's going on with the doggo, etc columns: df_archive.doggo.value_counts(), df_archive.floofer.value_counts(), df_archive.pupper.value_counts(), df_archive.puppo.value_counts() # As mentioned in the project motivation section, these ratings are not good quality, so will need to be addressed. df_archive.rating_numerator.value_counts(), df_archive.rating_denominator.value_counts() # Looking at the replies, it looks they are technically dog ratings, of other tweets. # lots of them are replies to celebrities or other personalities: Snoop Dogg gets 420/10, # @s8n gets 666/10 (get it? 
'satan') df_archive[df_archive.in_reply_to_status_id >= 0] # Checking where the denom is not 10 df_archive[df_archive.rating_denominator != 10] # These tweets with 11 as denominator reference 9/11 and 7/11 in the text. df_archive[df_archive.rating_denominator == 11] # I noticed this tweet says the dog has "3 1/2" legs so checking for rating: df_archive[(df_archive.rating_denominator == 2)] ###Output _____no_output_____ ###Markdown Finding Tidiness 'doggo', etc., classifications is broken into 4 columns, most of which appear to be 'none' Multiple id columns in the tweets data frame Quality Timestamp is object/str type id columns are floats, not str Check rating numerator and denominators: most nums are 7-14, but some are triple digits. Some denominators are not 10, after checking the actual tweets, some of those are pics of multiple dogs (84/70 or 165/150, etc) Tweet id 740373189193256964 9/11 rescue dog that pulls 9/11 as the rating. Rating should be 14/10 Tweet id 682962037429899265 "robbed a 7/11." Rating should be 10/10 Tweet id 666287406224695296 picks rating from "3 1/2 legged." 
rating should be 9/10 Tweet id 722974582966214656 rating is 4/20, should be 13/10 (discovered this after handling the above 3) Consistency issues with replies "novelty" ratings: 420/10, 666/10 Rows with 'retweeted' data are not original tweets Replies have ratings, but we have to see if there are pics to go with them Programmatic assessment Image data ###Code df_image.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown FindingsQuality: -2075 rows vs 2356 for archive data -Tweet id is int Programmatic assessment Tweets data ###Code df_tweets.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2332 entries, 0 to 2331 Data columns (total 4 columns): tweet_id 2332 non-null object id_int 2332 non-null object retweet_count 2332 non-null object favorite_count 2332 non-null object dtypes: object(4) memory usage: 73.0+ KB ###Markdown Findings -Turns out the id_int column is str -Retweet and favorite counts are both str Clean data 1. Archive data issues * Convert Timestamp to date time * Convert id columns to str * Compile dog stages into one column *https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extract.html * Drop retweeted rows * Drop replies * After some debate, I think the unit of analysis here is unsolicited dog ratings. It's a consistency issue. 
* Tweet id 740373189193256964 correct rating to 14/10 * Tweet id 722974582966214656 correct rating to 13/10 * Tweet id 682962037429899265 correct rating to 10/10 * Tweet id 666287406224695296 correct rating to 9/10 * Find better way to extract ratings and dog "stages" from text 2. Image data issues * Convert id to str * Verify that all tweets have matching images 3. Tweets data issues * Convert retweets and favorites to int ###Code # Create "clean" data frames dfa_clean = df_archive.copy() dfi_clean = df_image.copy() dft_clean = df_tweets.copy() ###Output _____no_output_____ ###Markdown Drop retweet rows and columns ###Code # Return only rows where retweeted status id is na dfa_clean = dfa_clean[dfa_clean.retweeted_status_id.isna()] dfa_clean = dfa_clean[dfa_clean.in_reply_to_user_id.isna()] # Drop the retweet and reply columns cols = ['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'in_reply_to_status_id', 'in_reply_to_user_id'] dfa_clean.drop(columns = cols, axis=1, inplace=True) # Checking: Done!
dfa_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 12 columns): tweet_id 2097 non-null int64 timestamp 2097 non-null object source 2097 non-null object text 2097 non-null object expanded_urls 2094 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 2097 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dtypes: int64(3), object(9) memory usage: 213.0+ KB ###Markdown Timestamp to datetime ###Code dfa_clean['timestamp'] = pd.to_datetime(dfa_clean['timestamp']) ###Output _____no_output_____ ###Markdown ID to string type ###Code dfa_clean.tweet_id = dfa_clean.tweet_id.astype(str) # reset index after we dropped a bunch of rows dfa_clean.reset_index(drop=True, inplace=True) # check index dfa_clean.tail(2) ###Output _____no_output_____ ###Markdown Re-extract the dog ratings to get the numerators with decimals ###Code # This post on the Udacity question board was helpful for extracting the numerator whether or not it has a decimal # https://knowledge.udacity.com/questions/33009 correct_rating = dfa_clean.text.str.extract(r'((?:\d+\.)?\d+)\/(\d+)', expand=True).astype(float) dfa_clean['rating_numerator'] = correct_rating[0]  # column 0 of the extract holds the numerator # Looking at the numerators, though, there is still some divergence from the 'norm' for the meme.
print(dfa_clean.rating_numerator.value_counts(), dfa_clean.rating_denominator.value_counts()) ###Output 12.00 486 10.00 437 11.00 413 13.00 288 9.00 153 8.00 98 7.00 51 14.00 39 5.00 33 6.00 32 3.00 19 4.00 15 2.00 9 1.00 4 9.75 1 0.00 1 11.26 1 11.27 1 13.50 1 420.00 1 1776.00 1 Name: rating_numerator, dtype: int64 10 2084 Name: rating_denominator, dtype: int64 ###Markdown Fixing the four identified ratings I had to re-do this after re-extracting the ratings ###Code # Locating the dog ratings based on tweet id to get row index # https://stackoverflow.com/questions/17071871/select-rows-from-a-dataframe-based-on-values-in-a-column-in-pandas # fix = ['740373189193256964', '722974582966214656', '682962037429899265', '666287406224695296'] dfa_clean.loc[dfa_clean['tweet_id'].isin(fix)] # locate the 4 bad ratings to fix them by assigning new values. Surprised this worked # * Tweet id 740373189193256964 correct rating to 14/10 # * Tweet id 722974582966214656 correct rating to 13/10 # * Tweet id 682962037429899265 correct rating to 10/10 # * Tweet id 666287406224695296 correct rating to 9/10 # df.iloc[0, df.columns.get_loc('col2')] = 100 # https://stackoverflow.com/questions/31569384/set-value-for-particular-cell-in-pandas-dataframe-with-iloc dfa_clean.iloc[[853, 948, 1426, 2076], 5:7] = [(14, 10), (13, 10), (10, 10), (9, 10)] # Run above cell again to check: done ###Output _____no_output_____ ###Markdown After re-assessing, drop rows where the denominator is not 10. These are all group (i.e., multiple dogs) ratings plus one that says "24/7" but has no rating. ###Code dfa_clean[dfa_clean.rating_denominator != 10] # Get rid of all rows where the denom does not equal 10 dfa_clean = dfa_clean[dfa_clean.rating_denominator == 10] ###Output _____no_output_____ ###Markdown Deal with numerators??? I pulled up some of these tweets with low ratings, and the rating pulled is accurate. In some cases it's a chicken or goat or statue of a dog. But some are actual dogs that get a 6/10.
I am going to leave them for now. Maybe the ratings have changed over time and gotten more generous. Eliminate the two super big numerators ###Code len(dfa_clean[dfa_clean.rating_numerator < 8]), len(dfa_clean[dfa_clean.rating_numerator >18]) dfa_clean = dfa_clean[dfa_clean.rating_numerator <= 18] dfa_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2082 entries, 0 to 2096 Data columns (total 12 columns): tweet_id 2082 non-null object timestamp 2082 non-null datetime64[ns] source 2082 non-null object text 2082 non-null object expanded_urls 2079 non-null object rating_numerator 2082 non-null float64 rating_denominator 2082 non-null int64 name 2082 non-null object doggo 2082 non-null object floofer 2082 non-null object pupper 2082 non-null object puppo 2082 non-null object dtypes: datetime64[ns](1), float64(1), int64(1), object(9) memory usage: 211.5+ KB ###Markdown Re-extract the dog stages (doggo, etc.) and put them into one column ###Code # Create lower case version of text to make it easier dfa_clean['text_lower'] = dfa_clean['text'].str.lower() # here is another way to extract the dog stage key words from the text. dfa_clean['doggo2'] = dfa_clean.text_lower.str.extract(r'(?P<doggo2>doggo)') dfa_clean['floofer2'] = dfa_clean.text_lower.str.extract(r'(?P<floofer2>floofer)') dfa_clean['pupper2'] = dfa_clean.text_lower.str.extract(r'(?P<pupper2>pupper)') dfa_clean['puppo2'] = dfa_clean.text_lower.str.extract(r'(?P<puppo2>puppo)') # This dict shows that the re-extraction method worked to get better stage classifications. Better in all cases but # floofer where it was the same as before.
{len(dfa_clean[dfa_clean.doggo2 == 'doggo']): len(dfa_clean[dfa_clean.doggo == 'doggo']), \ len(dfa_clean[dfa_clean.pupper2 == 'pupper']): len(dfa_clean[dfa_clean.pupper == 'pupper']), \ len(dfa_clean[dfa_clean.puppo2 == 'puppo']): len(dfa_clean[dfa_clean.puppo == 'puppo']), \ len(dfa_clean[dfa_clean.floofer2 == 'floofer']): len(dfa_clean[dfa_clean.floofer == 'floofer'])} # Dropping the inferior stage cols stage_cols = ['doggo', 'pupper', 'puppo', 'floofer'] dfa_clean.drop(columns=stage_cols, axis=1, inplace=True) # replace all nan in stage columns with None strings so i can concat cols = ['doggo2', 'floofer2', 'pupper2', 'puppo2'] dfa_clean[cols] = dfa_clean[cols].fillna("None") # Concatenating the stage columns using simple pandas addition then handling the overlapping classifications dfa_clean['stage'] = dfa_clean.doggo2 + dfa_clean.pupper2 + dfa_clean.puppo2 + dfa_clean.floofer2 dfa_clean['stage'].replace('NoneNoneNoneNone', 'none', inplace=True) dfa_clean['stage'].replace('None', '', regex=True, inplace=True) dfa_clean.stage.value_counts() # I decided to replace the overlapping stages with the non-doggo title. Doggo seems like the most generic term, and # there are no overlap instances without doggo. 
dfa_clean['stage'].replace('doggopupper', 'pupper', regex=True, inplace=True) dfa_clean['stage'].replace('doggopuppo', 'puppo', regex=True, inplace=True) dfa_clean['stage'].replace('doggofloofer', 'floofer', regex=True, inplace=True) # Dropping the different stage columns now that we have the single column cols = ['doggo2', 'floofer2', 'pupper2', 'puppo2'] dfa_clean.drop(cols, axis=1, inplace=True) dfa_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2082 entries, 0 to 2096 Data columns (total 10 columns): tweet_id 2082 non-null object timestamp 2082 non-null datetime64[ns] source 2082 non-null object text 2082 non-null object expanded_urls 2079 non-null object rating_numerator 2082 non-null float64 rating_denominator 2082 non-null int64 name 2082 non-null object text_lower 2082 non-null object stage 2082 non-null object dtypes: datetime64[ns](1), float64(1), int64(1), object(7) memory usage: 178.9+ KB ###Markdown Clean the image file and merge with dfa Convert tweet_id to string and ###Code dfi_clean.tweet_id = dfi_clean.tweet_id.astype(str) dfi_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null object jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(1), object(5) memory usage: 152.1+ KB ###Markdown Clean tweets data Turn retweets and favs to int, drop the id_int column ###Code # Converting str to int dft_clean.retweet_count, dft_clean.favorite_count = dft_clean.retweet_count.astype(int), dft_clean.favorite_count.astype(int) # Drop the superfluous column dft_clean.drop('id_int', axis=1, inplace = True) dft_clean.info() ###Output <class 'pandas.core.frame.DataFrame'>
RangeIndex: 2332 entries, 0 to 2331 Data columns (total 3 columns): tweet_id 2332 non-null object retweet_count 2332 non-null int64 favorite_count 2332 non-null int64 dtypes: int64(2), object(1) memory usage: 54.7+ KB ###Markdown Merging and Storing the data ###Code # Merge the cleaned data frames to create master data frame df_master = pd.merge(dfa_clean, dfi_clean, how='inner', on='tweet_id') df_master = pd.merge(df_master, dft_clean, how='inner', on='tweet_id') df_master.info() df_master.to_csv('dogs_master.csv', index=False) ###Output _____no_output_____ ###Markdown Data analysis and visualization I want to see if there are trends over time with ratings. ###Code import matplotlib.pyplot as plt # This chart shows that indeed the ratings have become more generous over time. The earliest tweets have lower ratings on average. ax1 = df_master.plot(x='timestamp', y='rating_numerator', color='brown', style='.') plt.title('We Rate Dogs: Ratings over time') plt.xlabel('Date') plt.ylabel('Rating (out of ten)'); ###Output _____no_output_____ ###Markdown How do the different dog stages break down? The ###Code df_master.stage.value_counts().plot(kind='bar', color='b') plt.title('Total tweets by dog "stage"') plt.ylabel('Total tweets') plt.xlabel('Stage'); # This chart omits none stage dogs to give a closer look. Most dogs are described as puppers if at all. df_master.stage.value_counts()[1:].plot(kind='bar', color='b') plt.title('Total tweets by dog "stage"') plt.ylabel('Total tweets') plt.xlabel('Stage'); # But puppers have a lower rating on average than the other three stage classifications. df_master.groupby('stage')['rating_numerator'].mean().sort_values(ascending=False).plot(kind='bar', color='b') plt.ylabel('Rating out of ten') plt.title('Mean Rating by dog stage'); df_master.groupby('stage')[['favorite_count', 'retweet_count']].mean()\ .sort_values(by='favorite_count',ascending=False).plot(kind='bar'); ###Output _____no_output_____ ###Markdown Look at the different breeds.
###Code # Most popular breeds are golden retriever, lab, pembroke (corgi), chihuahua and pug. df_master.p1.value_counts()[:10].plot(kind='bar', color='b') plt.title('Top ten dog breeds by total tweets') plt.ylabel('Total tweets') plt.xlabel('Breed'); # Wow this is ridiculous: are any of these dog breeds? # Trying to see which tweets are most popular by dog breed, but these are... not dogs! df_master.groupby('p1')['favorite_count'].mean().sort_values(ascending=False)[:10].plot(kind='bar', color='b') plt.title('Top ten "dog breeds" by tweet favorites') plt.ylabel('Total favorites') plt.xlabel('Breed...'); # There are lots of whacky things showing up from the image recognition. Most of the recurring ones are dog breeds tho. # I will drop a bunch of rows to get only dog breeds then do the visualization again. df_master.p1.value_counts() # Dropping the rows with whacky p1 image recognitions # Making a new df for this just so we don't delete so many rows from the master. # this stack post was helpful # https://stackoverflow.com/questions/34913546/remove-low-counts-from-pandas-data-frame-column-on-condition s = df_master['p1'].value_counts() df_breeds = df_master.loc[df_master['p1'].isin(s.index[s >= 10])] df_breeds = df_breeds[df_breeds.p1 != 'seat_belt'] df_breeds = df_breeds[df_breeds.p1 != 'web_site'] df_breeds.p1.value_counts()[:10] # Here we go with the dog breeds by favorites df_breeds.groupby('p1')['favorite_count'].mean().sort_values(ascending=False)[:10].plot(kind='bar', color='b') plt.xlabel('Dog breed (from image recognition)') plt.ylabel('Favorite count') plt.title('Mean favorites of tweets by dog breed'); ###Output _____no_output_____ ###Markdown Data Wrangling: We Rate Dogs Twitter account ###Code # Import import numpy as np import pandas as pd import requests import tweepy import json import time import matplotlib.pyplot as plt import matplotlib.image as mpimg %matplotlib inline import seaborn as sb ###Output _____no_output_____ ###Markdown Table of
contents Data gathering Data assessment Data cleaning Saving Analysis and visualisation Data gathering In the first part of this project, the required data will be gathered from different sources. ###Code # Create a data frame from the provided .csv-file df_archive = pd.read_csv('twitter-archive-enhanced.csv') # Download the provided .tsv-file programmatically r = requests.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv') # Check success of request r.status_code # Write the downloaded object into a file with open('image-predictions.tsv', mode = 'wb') as file: file.write(r.content) # Create a data frame df_images = pd.read_csv('image-predictions.tsv', sep='\t') # Prepare for using the Twitter API consumer_key = 'CONSUMER KEY' consumer_secret = 'CONSUMER SECRET' access_token = 'ACCESS TOKEN' access_secret = 'ACCESS SECRET' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) # Query json data for each tweet ID in the Twitter archive of WeRateDogs missing_ids = [] # initialise list count = 1 start = time.time() with open('tweet_json.txt', 'w') as outfile: for i in df_archive.tweet_id: # get each tweet ID try: # write the json data for each ID into a text file tweet = api.get_status(i, tweet_mode = 'extended') json.dump(tweet._json, outfile) outfile.write('\n') # add new line except tweepy.TweepError: # catch error and write missing IDs into list missing_ids.append(i) print(count, ': Tweet ID ', i) count += 1 end = time.time() print('Elapsed time: ', (end - start)/60, ' minutes') # Read text file with json data line by line json_list = [] with open('tweet_json.txt') as file: for line in file: json_list.append(json.loads(line)) # Get tweet ids that have corresponding json data json_ids = np.array(df_archive.tweet_id[df_archive['tweet_id'].isin(missing_ids) == False]) # Write
a data frame with the retweet and favorite counts df_list = [] for i in range(len(json_list)): df_list.append({'tweet_id': json_ids[i], 'retweet_count': json_list[i]['retweet_count'], 'favorite_count': json_list[i]['favorite_count']}) df_popularity = pd.DataFrame(df_list) ###Output _____no_output_____ ###Markdown Data assessment In this part of the project, I will inspect the data to identify quality and tidiness issues. ###Code df_archive.head() df_archive.info() df_archive.sample(5) df_archive.name.value_counts() df_archive.name.value_counts().sample(20) # print the text of all tweet IDs with non-names to identify the errors # get index of non-names ind = df_archive.name.str.extract('(^[a-z]+$)').dropna().index # print index and text for non-names for i in ind: print(i, '--', df_archive.text[i]) df_archive.doggo.value_counts() df_archive.floofer.value_counts() df_archive.pupper.value_counts() df_archive.puppo.value_counts() df_archive.rating_numerator.value_counts() # Print the text of all tweet IDs with unusual numerators to identify the errors # Get unusual values vals = df_archive.rating_numerator.value_counts()[df_archive.rating_numerator.value_counts() <=2].index # Filter for unusual values mask = df_archive.rating_numerator.isin(vals) # Print index and text of unusual values for i in df_archive.loc[mask, 'text'].index: print(i, '--', df_archive.text[i]) df_archive.rating_denominator.value_counts() # Print the text of all tweet IDs with unusual denominators to identify the errors # Get unusual values vals = df_archive.rating_denominator.value_counts()[df_archive.rating_denominator.value_counts() <=3].index # Filter for unusual values mask = df_archive.rating_denominator.isin(vals) # Print index and text of unusual values for i in df_archive.loc[mask, 'text'].index: print(i, '--', df_archive.text[i]) df_images.head() df_images.sample(5) df_images.info() df_popularity.head() df_popularity.info() ###Output <class
'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2331 non-null int64 1 retweet_count 2331 non-null int64 2 favorite_count 2331 non-null int64 dtypes: int64(3) memory usage: 54.8 KB ###Markdown The following issues were found after executing the code above: Quality issues `df_archive` 1. `df_archive` has more rows than `df_images`, so there are some tweets with no images (project requirements are to use only tweets with images) 2. Missing values in dog stage variables and `name` are marked as 'None' instead of NaN 3. `tweet_id`, `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id` and `retweeted_status_user_id` are floats instead of strings 4. Data frame contains retweets and replies but should only contain original tweets 5. Some names are wrong in `name` 1. It seems that all incorrect name entries are spelt with lowercase letters. 2. Checking the text of these cases reveals that the expressions "named..." and "name is..." were not considered when extracting the names from the text 3. Index no. 369 should have the name "Grace" 6. Some values in `rating_numerator` and `rating_denominator` appear to be wrong 1. index of wrong values: [55, 313, 340, 342, 516, 763, 1068, 1165, 1202, 1662, 1712, 2335] 2. correct values: [13/10, 13/10, 9.75/10, NaN, NaN, 11.27/10, 14/10, 13/10, 11/10, NaN, 11.26/10, 9/10] 7. `timestamp` and `retweeted_status_timestamp` columns are strings instead of dates. This will be ignored for two reasons, though: 1. I do not intend to analyse any time based information 2. When storing the data frame into a .csv-file, the date format will be lost again `df_images` 8. Some images do not show dogs 9. There are three possible dog breeds for each image instead of one 10. Names of dog breeds are not formatted consistently 11. `tweet_id` is int instead of string `df_popularity` 12.
Fewer tweet IDs than in the other data frames. Instead of trimming the other data frames, these will be accepted as missing values. No action required. 13. `tweet_id` is int instead of string Tidiness issues 14. `df_archive`: Dog stage variable is written as 4 columns but it should be one column 15. `df_archive`: Rating is separated into two columns but should be one variable in one column 16. `df_popularity`: Should not be a separate data frame but should be written into `df_archive` 17. `df_images`: The selected dog breed and jpg url should be written into `df_archive` Data cleaning ###Code # Make backup copies of all data frames df_archive_raw = df_archive.copy() df_images_raw = df_images.copy() df_popularity_raw = df_popularity.copy() ###Output _____no_output_____ ###Markdown Issues 8 & 9: Some images do not show dogs in `df_images` (this needs to be fixed before I can tackle Issue 1) & there are three possible dog breeds for each image instead of one Define Exclude rows where `p1`, `p2` and `p3` are not a dog breed and define the "true breed" for each image by picking the breed prediction with the highest confidence.
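The selection rule in this Define step ("the highest-confidence prediction that is flagged as a dog") can be tried out on a toy frame first. The column names below mirror `df_images`, but the rows and values are made up for illustration:

```python
import pandas as pd

# Toy frame with the same prediction columns as df_images (rows are made up)
toy = pd.DataFrame({
    'p1': ['golden_retriever', 'paper_towel'], 'p1_conf': [0.90, 0.60], 'p1_dog': [True, False],
    'p2': ['labrador_retriever', 'pug'],       'p2_conf': [0.05, 0.30], 'p2_dog': [True, True],
    'p3': ['kuvasz', 'chihuahua'],             'p3_conf': [0.03, 0.10], 'p3_dog': [True, True],
})

def best_dog_breed(row):
    # Collect (confidence, breed) pairs for predictions flagged as dogs,
    # then return the breed with the highest confidence
    candidates = [(row[f'p{i}_conf'], row[f'p{i}']) for i in (1, 2, 3) if row[f'p{i}_dog']]
    return max(candidates)[1] if candidates else None

toy['selected_breed'] = toy.apply(best_dog_breed, axis=1)
print(toy['selected_breed'].tolist())  # ['golden_retriever', 'pug']
```

`candidates` can only be empty for a row with no dog-flagged prediction at all, which is exactly why those rows are filtered out first.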
Code ###Code # Select only rows with at least one recognised dog breed df_images = df_images[((df_images.p1_dog == True) | (df_images.p2_dog == True) | (df_images.p3_dog == True))] # Choose dog breed with highest confidence in each row for i in df_images.index: if (df_images.p1_conf[i] > df_images.p2_conf[i]) & (df_images.p1_dog[i] == True): df_images.loc[i,'selected_breed'] = df_images.loc[i,'p1'] elif (df_images.p2_conf[i] > df_images.p3_conf[i]) & (df_images.p2_dog[i] == True): df_images.loc[i,'selected_breed'] = df_images.loc[i,'p2'] else: df_images.loc[i,'selected_breed'] = df_images.loc[i,'p3'] ###Output _____no_output_____ ###Markdown Test ###Code # Check if selected breed is a dog breed and has the highest confidence df_images.sample(10) ###Output _____no_output_____ ###Markdown Issue 1: `df_archive` has more rows than `df_images` Define Select only those tweet IDs in `df_archive` that are in `df_images` Code ###Code # Get tweet ids in df_images and filter df_archive tweet_list = df_images.tweet_id.values df_archive = df_archive[df_archive['tweet_id'].isin(tweet_list)] ###Output _____no_output_____ ###Markdown Test ###Code # Check if the two data frames have the same length len(df_archive) == len(df_images) ###Output _____no_output_____ ###Markdown Issue 2: Missing values in dog stage variables and `name` are marked as 'None' instead of NaN in `df_archive` Define Replace 'None' with NaN Code ###Code df_archive = df_archive.replace('None', np.nan) ###Output _____no_output_____ ###Markdown Test ###Code # Make sure that 'None' is no longer a value df_archive.doggo.value_counts() # Make sure that 'None' is no longer a value df_archive.floofer.value_counts() # Make sure that 'None' is no longer a value df_archive.pupper.value_counts() # Make sure that 'None' is no longer a value df_archive.puppo.value_counts() ###Output _____no_output_____ ###Markdown Issues 3, 11 & 13: `tweet_id`, `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id` and
`retweeted_status_user_id` have wrong data types in `df_archive`, `df_images` and `df_popularity` Define Change `tweet_id`, `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id` and `retweeted_status_user_id` to the object (string) type Code ###Code # Change data format df_archive.tweet_id = df_archive.tweet_id.astype('object') df_archive.in_reply_to_status_id = df_archive.in_reply_to_status_id.astype('object') df_archive.in_reply_to_user_id = df_archive.in_reply_to_user_id.astype('object') df_archive.retweeted_status_id = df_archive.retweeted_status_id.astype('object') df_archive.retweeted_status_user_id = df_archive.retweeted_status_user_id.astype('object') df_images.tweet_id = df_images.tweet_id.astype('object') df_popularity.tweet_id = df_popularity.tweet_id.astype('object') ###Output _____no_output_____ ###Markdown Test ###Code # Check data type df_archive.info() # Check data type df_images.info() # Check data type df_popularity.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2331 non-null object 1 retweet_count 2331 non-null int64 2 favorite_count 2331 non-null int64 dtypes: int64(2), object(1) memory usage: 54.8+ KB ###Markdown Issue 4: `df_archive` contains retweets and replies Define Filter `df_archive` to keep only those rows where `in_reply_to_status_id` and `retweeted_status_id` are NaNs. Then drop these columns because they are of no use.
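The filter-then-drop pattern from this Define step can be sketched on a made-up three-row frame (only the two marker columns are shown):

```python
import numpy as np
import pandas as pd

# Made-up frame: 'a' is an original tweet, 'b' a reply, 'c' a retweet
toy = pd.DataFrame({
    'tweet_id': ['a', 'b', 'c'],
    'in_reply_to_status_id': [np.nan, 123.0, np.nan],
    'retweeted_status_id': [np.nan, np.nan, 456.0],
})

# Keep rows where both marker columns are NaN, i.e. original tweets only
originals = toy[toy.in_reply_to_status_id.isnull() & toy.retweeted_status_id.isnull()]

# The marker columns are all-NaN afterwards, so they can be dropped
originals = originals.drop(['in_reply_to_status_id', 'retweeted_status_id'], axis=1)
print(originals.tweet_id.tolist())  # ['a']
```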
Code (filter) ###Code # Include only rows with no retweets and replies df_archive = df_archive[(df_archive.in_reply_to_status_id.isnull()) & (df_archive.retweeted_status_id.isnull())] ###Output _____no_output_____ ###Markdown Test ###Code # Check data type (before dropping) df_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1666 entries, 1 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1666 non-null object 1 in_reply_to_status_id 0 non-null object 2 in_reply_to_user_id 0 non-null object 3 timestamp 1666 non-null object 4 source 1666 non-null object 5 text 1666 non-null object 6 retweeted_status_id 0 non-null object 7 retweeted_status_user_id 0 non-null object 8 retweeted_status_timestamp 0 non-null object 9 expanded_urls 1666 non-null object 10 rating_numerator 1666 non-null int64 11 rating_denominator 1666 non-null int64 12 name 1266 non-null object 13 doggo 63 non-null object 14 floofer 8 non-null object 15 pupper 173 non-null object 16 puppo 22 non-null object dtypes: int64(2), object(15) memory usage: 234.3+ KB ###Markdown Code (drop unused columns) ###Code # Drop unused columns df_archive = df_archive.drop(['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id','retweeted_status_user_id','retweeted_status_timestamp'], axis = 1) ###Output _____no_output_____ ###Markdown Test ###Code # Make sure that columns were dropped df_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1666 entries, 1 to 2355 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1666 non-null object 1 timestamp 1666 non-null object 2 source 1666 non-null object 3 text 1666 non-null object 4 expanded_urls 1666 non-null object 5 rating_numerator 1666 non-null int64 6 rating_denominator 1666 non-null int64 7 name 1266 non-null object 8 doggo 63 non-null object 9 floofer 8 non-null object 10 pupper 173 non-null 
object 11 puppo 22 non-null object dtypes: int64(2), object(10) memory usage: 169.2+ KB ###Markdown Issue 5: Some names are wrong in `name` Define 1. Extract and replace the names after the expressions "named..." and "name is..." 2. Replace the name with index 369 with "Grace" 3. Replace the remaining lowercase entries in `name` with NaNs Code ###Code # Get dataframe with non-names ind = df_archive.name.str.extract('(^[a-z]+$)').dropna().index df_archive_nonames = df_archive.loc[ind] # Extract and replace names after "named..." named = df_archive_nonames.text.str.extract(r'named\s([A-Z][a-z]+)[\s\.]').dropna().rename(columns={0:'name'}) df_archive.update(named) # Extract and replace names after "name is..." name_is = df_archive_nonames.text.str.extract(r'name is\s([A-Z][a-z]+)[\s\.]').dropna().rename(columns={0:'name'}) df_archive.update(name_is) # Replace name at index 369 with "Grace" grace = pd.Series('Grace', name='name', index=[369]) df_archive.update(grace) # Replace the rest of the wrong names with NaNs for i in ind: if i not in np.concatenate((np.array(named.index), np.array(name_is.index), [369])): df_archive.replace(df_archive.name.loc[i], np.nan, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code # Check entries of non-names (most of them should be NaN) df_archive.loc[ind]['name'] # Check entries of the names extracted after "named..." df_archive.loc[np.array(named.index)]['name'] # Check entries of the names extracted after "name is..." df_archive.loc[np.array(name_is.index)]['name'] # Check entry at index 369 df_archive.loc[grace.index]['name'] ###Output _____no_output_____ ###Markdown Issues 6 & 15: Some values in `rating_numerator` and `rating_denominator` appear to be wrong & rating is separated into two columns but should be one variable in one column Define 1. Calculate rating as one number and write it into one column 2.
Set the values with index [55, 313, 340, 342, 516, 763, 1068, 1165, 1202, 1662, 1712, 2335] to [13/10, 13/10, 9.75/10, NaN, NaN, 11.27/10, 14/10, 13/10, 11/10, NaN, 11.26/10, 9/10] 3. Drop unused columns Code ###Code # Define a series with indices and values as defined in the data assessment new_rating_index = [55, 313, 340, 342, 516, 763, 1068, 1165, 1202, 1662, 1712, 2335] new_rating = [13/10, 13/10, 9.75/10, np.nan, np.nan, 11.27/10, 14/10, 13/10, 11/10, np.nan, 11.26/10, 9/10] new_rating = pd.Series(new_rating, name='rating', index=new_rating_index) ind = [] for i in df_archive.index: if i in new_rating_index: # check if index is still present or was deleted in previous cleaning actions df_archive.loc[i, 'rating'] = new_rating[i] # change values according to new_rating ind.append(i) # write changed indices into list else: df_archive.loc[i, 'rating'] = df_archive.rating_numerator[i] / df_archive.rating_denominator[i] # calculate rating # Drop unused columns df_archive = df_archive.drop(['rating_numerator','rating_denominator'], axis = 1) ###Output _____no_output_____ ###Markdown Test ###Code # Check that the ratings changed according to new_rating df_archive.loc[ind,'rating'] # Check new column df_archive.sample(10) ###Output _____no_output_____ ###Markdown Issue 10: Names of dog breeds are not formatted consistently Define Replace the underscore(s) in `selected_breed` and capitalise the first letters of each word Code ###Code # Replace underscores and capitalise df_images.selected_breed = df_images.selected_breed.str.replace('_',' ').str.title() ###Output _____no_output_____ ###Markdown Test ###Code # Check new names of dog breeds df_images.selected_breed.sample(20) ###Output _____no_output_____ ###Markdown Issue 14: Dog stage variable is written as 4 columns but it should be one column in `df_archive` Define Write dog stages into one column and delete the unused columns Code ###Code for line in df_archive.index: # go through each line for val in
df_archive.loc[line,['doggo','floofer','pupper','puppo']]: # go through each value in a line if val in ['doggo','floofer','pupper','puppo']: df_archive.loc[line,'dog_stage'] = val # Drop unused columns df_archive = df_archive.drop(['doggo','floofer','pupper','puppo'], axis = 1) ###Output _____no_output_____ ###Markdown Test ###Code # Check new column df_archive.sample(10) ###Output _____no_output_____ ###Markdown Issue 16: `df_popularity` should not be a separate data frame but should be written into `df_archive` Define Merge `df_popularity` and `df_archive` into one data frame Code ###Code # Merge the two data frames and change data types of new columns to 'int' df_archive = df_archive.merge(df_popularity, how = 'left', on = 'tweet_id') df_archive.retweet_count = df_archive.retweet_count.astype('Int64') df_archive.favorite_count = df_archive.favorite_count.astype('Int64') ###Output _____no_output_____ ###Markdown Test ###Code # Check new columns and their data type df_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1666 entries, 0 to 1665 Data columns (total 10 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1666 non-null object 1 timestamp 1666 non-null object 2 source 1666 non-null object 3 text 1666 non-null object 4 expanded_urls 1666 non-null object 5 name 1206 non-null object 6 rating 1664 non-null float64 7 dog_stage 257 non-null object 8 retweet_count 1659 non-null Int64 9 favorite_count 1659 non-null Int64 dtypes: Int64(2), float64(1), object(7) memory usage: 146.4+ KB ###Markdown Issue 17: `selected_breed` and `jpg_url` in `df_images` should be written into `df_archive` Define Merge the `selected_breed` and `jpg_url` columns of `df_images` and `df_archive` into one data frame Code ###Code df_archive = df_archive.merge(df_images[['tweet_id','selected_breed','jpg_url']], how = 'left', on = 'tweet_id') ###Output _____no_output_____ ###Markdown Test ###Code # Check new column df_archive.info()
###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1666 entries, 0 to 1665 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1666 non-null object 1 timestamp 1666 non-null object 2 source 1666 non-null object 3 text 1666 non-null object 4 expanded_urls 1666 non-null object 5 name 1206 non-null object 6 rating 1664 non-null float64 7 dog_stage 257 non-null object 8 retweet_count 1659 non-null Int64 9 favorite_count 1659 non-null Int64 10 selected_breed 1666 non-null object 11 jpg_url 1666 non-null object dtypes: Int64(2), float64(1), object(9) memory usage: 172.5+ KB ###Markdown Saving ###Code # Save the master data frame to a csv-file df_archive.to_csv('twitter_archive_master.csv', index=False) ###Output _____no_output_____ ###Markdown Analysis and visualisation There are three questions I would like to answer: 1. Which tweet was retweeted the most? 2. Which dog breed was retweeted the most? 3. Which dog breed received the most "likes"?
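Each of the three questions reduces to either an `idxmax` lookup or a `groupby` mean. A minimal sketch on made-up counts (the real analysis uses the master data frame):

```python
import pandas as pd

# Made-up stand-in for the master frame
toy = pd.DataFrame({
    'tweet_id': ['a', 'b', 'c', 'd'],
    'selected_breed': ['Pug', 'Pug', 'Chihuahua', 'Chihuahua'],
    'retweet_count': [10, 80, 25, 5],
    'favorite_count': [100, 300, 50, 450],
})

# Q1: tweet with the most retweets
top_tweet = toy.loc[toy['retweet_count'].idxmax(), 'tweet_id']
print(top_tweet)  # b

# Q2 & Q3: breeds ranked by mean retweets / mean likes
means = toy.groupby('selected_breed')[['retweet_count', 'favorite_count']].mean()
print(means['retweet_count'].idxmax())   # Pug
print(means['favorite_count'].idxmax())  # Chihuahua
```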
###Code # Get image url of tweet with the most retweets and download file mask = df_archive.retweet_count == df_archive.retweet_count.max() ind_max_retweet = df_archive.text[mask].index[0] url = df_archive.jpg_url.loc[ind_max_retweet] r = requests.get(url) # Write the downloaded object into a file with open('max_retweeted_image.jpg', mode = 'wb') as file: file.write(r.content) # Get image and text of tweet with the most retweets max_retweeted_image = mpimg.imread('max_retweeted_image.jpg') max_retweeted_text = df_archive.text[mask].values[0] max_retweeted_text = max_retweeted_text[:max_retweeted_text.rfind('https')] # strip off tweet url # Plot fig, ax = plt.subplots() ax.imshow(max_retweeted_image); props = dict(boxstyle='round', facecolor='skyblue', alpha=0.5) ax.text(-800, 800, max_retweeted_text, fontsize=14, bbox=props); plt.axis('off'); plt.title('This tweet was retweeted most often: {} times'.format(df_archive.retweet_count.max())); ###Output _____no_output_____ ###Markdown The above image shows the tweet text and image that was retweeted most often. ###Code # Group by dog breed and get the top ten breeds by mean retweets top_retweeted_breeds = df_archive.groupby('selected_breed').mean().retweet_count.sort_values(axis=0,ascending=False).index[:10] # Get data frame with top ten dog breeds df_top_breeds = df_archive[df_archive.selected_breed.isin(top_retweeted_breeds)] # Plot the top ten dog breeds sb.pointplot(x = df_top_breeds.retweet_count, y = df_top_breeds.selected_breed, color = 'skyblue', order = top_retweeted_breeds, linestyles=''); plt.xticks(rotation=30); plt.xlabel('Mean retweet count'); plt.ylabel(''); plt.title('Top ten dog breeds in terms of mean retweets'); ###Output _____no_output_____ ###Markdown This image shows the top ten dogs in terms of mean retweets. Tweets with Bedlington Terrier dogs were retweeted most often. However, the spread of the retweet counts is quite high.
Some individual tweets of the other top ten breeds were retweeted more often than the mean retweets of the Bedlington Terriers, for example the Standard Poodle or the English Springer. ###Code # Group by dog breed and get the top ten breeds by mean likes top_favorite_breeds = df_archive.groupby('selected_breed').mean().favorite_count.sort_values(axis=0,ascending=False).index[:10] # Get data frame with top ten dog breeds df_top_breeds = df_archive[df_archive.selected_breed.isin(top_favorite_breeds)] # Plot the top ten dog breeds sb.pointplot(x = df_top_breeds.favorite_count, y = df_top_breeds.selected_breed, color = 'skyblue', order = top_favorite_breeds, linestyles=''); plt.xlabel('Mean favorite count'); plt.ylabel(''); plt.title('Top ten dog breeds in terms of mean "likes"'); plt.xticks(rotation=30); ###Output _____no_output_____ ###Markdown Gather ###Code import pandas as pd import numpy as np import requests import io import tweepy import time import json import os import re import config import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns sns.set_style('darkgrid') archive = pd.read_csv('twitter-archive-enhanced.csv') url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' # Download the .tsv file from the above url.
r = requests.get(url).content # Save the image predictions data frame as a csv file for a local copy images = pd.read_csv(io.StringIO(r.decode('utf-8')), sep='\t') images.to_csv('image-predictions.csv', encoding='utf-8', index=False) # Importing the data from twitter API consumer_key = config.consumer_key consumer_secret = config.consumer_secret access_token = config.access_token access_secret = config.access_secret auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) archive.head() twitter_errors = [] with open('tweet_json.txt', 'w', encoding='utf-8') as outfile: start = time.time() for tweet in archive.tweet_id: try: page = api.get_status(tweet, tweet_mode='extended') print(tweet) json.dump(page._json, outfile) outfile.write('\n') except Exception as e: print(str(tweet) + ":" + str(e)) twitter_errors.append(tweet) end = time.time() print(end - start) new_errors = [] with open('tweet_json.txt', 'a', encoding='utf-8') as outfile: start = time.time() for tweet in twitter_errors: try: page = api.get_status(tweet, tweet_mode='extended') print(tweet) json.dump(page._json, outfile) outfile.write('\n') except Exception as e: print(str(tweet) + ":" + str(e)) new_errors.append(tweet) end = time.time() print(end - start) print(len(new_errors)) d = [] # Reference used to extract each line from text file # https://stackoverflow.com/questions/21058935/python-json-loads-shows-valueerror-extra-data with open('tweet_json.txt', 'r') as json_file: for entry in json_file: tweet = json.loads(entry) d.append({'tweet_id': tweet['id'], 'retweet_count': tweet['retweet_count'], 'favorite_count': tweet['favorite_count']}) popularity = pd.DataFrame(d) popularity.head() ###Output _____no_output_____ ###Markdown Assess ###Code archive.head(15) archive.tail(20) archive.describe() archive[archive['rating_numerator'] > 1770] archive.info() archive.doggo.value_counts()
archive.floofer.value_counts() archive.pupper.value_counts() archive.puppo.value_counts() archive.name.value_counts() archive.rating_numerator.value_counts() archive.rating_denominator.value_counts() images.head(20) images.tail(20) images.info() popularity.head(20) popularity.tail(20) popularity.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2345 entries, 0 to 2344 Data columns (total 3 columns): favorite_count 2345 non-null int64 retweet_count 2345 non-null int64 tweet_id 2345 non-null int64 dtypes: int64(3) memory usage: 55.0 KB ###Markdown Quality- *timestamp* and *retweeted_status_timestamp* are stored as strings- There are 181 retweets- *source* is stored as an html '<a>' tag- Some tweets do not have images- Some tweets have multiple slashes in the *text* column leading to false ratings- There are dog names with a value of *None*- Dogs are given the name that follows the phrase "This is...", so some dogs have lowercase words as names.- *in_reply_to_status_id*, *in_reply_to_user_id*, *retweeted_status_id*, *retweeted_status_user_id* all have a float datatype Tidiness- Dog stage variables are stored in both rows and columns.- The *retweet_count* and *favorite_count* are stored in a separate table from the archive of tweets. Clean ###Code # Make a copy of each data frame so we can still access the original data archive_clean = archive.copy() images_clean = images.copy() popularity_clean = popularity.copy() ###Output _____no_output_____ ###Markdown Missing Data Some tweets do not have images DefineDrop the tweets without images from the `archive` table.
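As a minimal illustration of the `notnull()` filtering defined here (toy data, not the real archive):

```python
import pandas as pd
import numpy as np

# Invented rows: one tweet is missing its expanded_urls value
toy = pd.DataFrame({
    "tweet_id": [1, 2, 3],
    "expanded_urls": ["https://t.co/x", np.nan, "https://t.co/y"],
})

# Keep only rows where expanded_urls is present
with_images = toy[toy["expanded_urls"].notnull()]
print(len(with_images))
```

The same boolean mask, applied to the full archive, is what shrinks it to the 2,297 rows shown in the test below.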
Code ###Code archive_clean = archive_clean[archive_clean['expanded_urls'].notnull()] ###Output _____no_output_____ ###Markdown Test ###Code # should now have only 2297 entries archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2297 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2297 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 2297 non-null object source 2297 non-null object text 2297 non-null object retweeted_status_id 180 non-null float64 retweeted_status_user_id 180 non-null float64 retweeted_status_timestamp 180 non-null object expanded_urls 2297 non-null object rating_numerator 2297 non-null int64 rating_denominator 2297 non-null int64 name 2297 non-null object doggo 2297 non-null object floofer 2297 non-null object pupper 2297 non-null object puppo 2297 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 323.0+ KB ###Markdown Dogs are given the name that follows the phrase "This is...", so some dogs have lowercase words as names. DefineIf a dog's name starts with a lowercase letter, replace it with a value of *None* Code ###Code def remove_lowercase(row): if str(row['name'])[0].islower(): return 'None' return row['name'] # apply returns a new Series; assign it back so the change persists archive_clean['name'] = archive_clean.apply(remove_lowercase, axis=1) archive_clean.head(15) ###Output _____no_output_____ ###Markdown There are dog names with a value of None DefineFor dogs with a name of 'None', replace 'None' with a null value.
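A quick sketch of this `replace`-to-`NaN` step on invented names:

```python
import pandas as pd
import numpy as np

# Toy name column: two placeholder "None" strings
names = pd.Series(["Charlie", "None", "Lucy", "None"])

# Replace the placeholder string with a real missing value
cleaned = names.replace("None", np.nan)
print(cleaned.isnull().sum())
```

After this, `info()` counts only genuine names as non-null, which is what the test cell below checks.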
Code ###Code archive_clean.name = archive_clean.name.replace('None', np.nan) archive_clean.head(20) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2297 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2297 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 2297 non-null object source 2297 non-null object text 2297 non-null object retweeted_status_id 180 non-null float64 retweeted_status_user_id 180 non-null float64 retweeted_status_timestamp 180 non-null object expanded_urls 2297 non-null object rating_numerator 2297 non-null int64 rating_denominator 2297 non-null int64 name 1611 non-null object doggo 2297 non-null object floofer 2297 non-null object pupper 2297 non-null object puppo 2297 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 403.0+ KB ###Markdown There are 181 retweets DefineDrop the tweets with a *retweeted_status_id* from the table. Then remove *retweeted_status_id*, *retweeted_status_user_id*, and *retweeted_status_timestamp* columns from the table. 
Code ###Code archive_clean = archive_clean[archive_clean['retweeted_status_id'].isnull()] archive_clean.drop(['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2117 entries, 0 to 2355 Data columns (total 14 columns): tweet_id 2117 non-null int64 in_reply_to_status_id 23 non-null float64 in_reply_to_user_id 23 non-null float64 timestamp 2117 non-null object source 2117 non-null object text 2117 non-null object expanded_urls 2117 non-null object rating_numerator 2117 non-null int64 rating_denominator 2117 non-null int64 name 1495 non-null object doggo 2117 non-null object floofer 2117 non-null object pupper 2117 non-null object puppo 2117 non-null object dtypes: float64(2), int64(3), object(9) memory usage: 248.1+ KB ###Markdown Tidiness Dog stage variables are stored in both rows and columns. DefineCreate one column called *stage* to hold the value of stage. Delete the *doggo*, *floofer*, *pupper*, and *puppo* columns. 
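The consolidation can be sketched on a toy frame (invented rows). This row-wise pick is one way to collapse the four placeholder columns; the notebook's own code below joins the strings and maps the combinations instead:

```python
import pandas as pd

stages = ["doggo", "floofer", "pupper", "puppo"]
# Toy stand-in: each stage column holds its own name or the placeholder "None"
toy = pd.DataFrame({
    "doggo":   ["None", "doggo", "None"],
    "floofer": ["None", "None", "None"],
    "pupper":  ["pupper", "None", "None"],
    "puppo":   ["None", "None", "None"],
})

def pick_stage(row):
    # Keep the single non-placeholder value; empty or ambiguous rows stay "None"
    found = [v for v in row[stages] if v != "None"]
    return found[0] if len(found) == 1 else "None"

toy["stage"] = toy.apply(pick_stage, axis=1)
print(toy["stage"].tolist())
```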
Code ###Code print(archive_clean.doggo.value_counts()) print(archive_clean.floofer.value_counts()) print(archive_clean.pupper.value_counts()) print(archive_clean.puppo.value_counts()) archive_clean['stage'] = archive_clean[['doggo','floofer','pupper','puppo']].apply(lambda x: ','.join(x), axis=1) archive_clean['stage'].replace("None,None,None,None", "None", inplace=True) archive_clean['stage'].replace("doggo,None,None,None", "doggo", inplace=True) archive_clean['stage'].replace("None,floofer,None,None", "floofer", inplace=True) archive_clean['stage'].replace("None,None,pupper,None", "pupper", inplace=True) archive_clean['stage'].replace("None,None,None,puppo", "puppo", inplace=True) archive_clean.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1, inplace=True) # Remove rows with multiple stages archive_clean['stage'] = archive_clean['stage'].astype('str') mask = (archive_clean['stage'].str.len() < 8) archive_clean = archive_clean.loc[mask] ###Output _____no_output_____ ###Markdown Test ###Code # value_counts should match with cells above archive_clean.stage.value_counts() ###Output _____no_output_____ ###Markdown The *retweet_count* and *favorite_count* are stored in a separate table from the archive of tweets. DefineMerge the `popularity_clean` table to the `archive_clean` table, joining on *tweet_id*. Code ###Code archive_clean.info() archive_clean = pd.merge(archive_clean, popularity_clean, on='tweet_id', how='left') ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.head() ###Output _____no_output_____ ###Markdown Quality *timestamp* and *retweeted_status_timestamp* are stored as strings DefineConvert the *timestamp* column's data type from a string to a datetime using `to_datetime`. We do not need to convert the *retweeted_status_timestamp* column as we removed this in a previous step. 
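A small, self-contained illustration of the conversion (hypothetical timestamps in the same `+0000` format as the archive):

```python
import pandas as pd

# Two invented timestamps in the archive's string format
ts = pd.Series(["2017-08-01 16:23:56 +0000", "2016-01-05 00:04:52 +0000"])

# to_datetime parses the strings into a proper datetime64 column
converted = pd.to_datetime(ts)
print(converted.dtype)
```

Once converted, the column supports datetime accessors like `.dt.year`, which is useful for the later resampling by week.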
Code ###Code archive_clean.timestamp = pd.to_datetime(archive_clean.timestamp) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2105 entries, 0 to 2104 Data columns (total 13 columns): tweet_id 2105 non-null int64 in_reply_to_status_id 22 non-null float64 in_reply_to_user_id 22 non-null float64 timestamp 2105 non-null datetime64[ns] source 2105 non-null object text 2105 non-null object expanded_urls 2105 non-null object rating_numerator 2105 non-null int64 rating_denominator 2105 non-null int64 name 1490 non-null object stage 2105 non-null object favorite_count 2105 non-null int64 retweet_count 2105 non-null int64 dtypes: datetime64[ns](1), float64(2), int64(5), object(5) memory usage: 230.2+ KB ###Markdown source is stored as an html '<a>' tag DefineGet `value_counts()` of all the *sources* and extract only the base url for each *source*. Code ###Code archive_clean.source.value_counts() archive_clean['source'] = archive_clean['source'].str.extract('(https?:\/\/.+\.[a-z]+)', expand=False) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.head() ###Output _____no_output_____ ###Markdown Some tweets have multiple slashes in the *text* column leading to false ratings DefineCreate a function to extract the correct rating from the *text* column. Convert the fraction string to a float so that we can use it in analysis. Drop the *rating_numerator* and *rating_denominator* columns.
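The idea of taking the last fraction in the text can be sketched with a simplified pattern (toy tweet text; the notebook's actual regex below is more permissive about digit counts):

```python
import re

# Invented tweet text containing a date-like fraction before the real rating
text = "Happy 4/20 from the squad! 13/10 for all"

# Find every numerator/denominator pair in the text
ratings = re.findall(r"(\d+\.?\d*)/(\d+)", text)

# Taking the LAST fraction avoids reading dates like 4/20 as the rating
num, denom = ratings[-1]
rating = float(num) / float(denom)
print(rating)
```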
Code ###Code # Create a function to extract the ratings from a row's text def get_rating(row): ratings = re.findall(r'(\d\d?\d?.?\d*\/\d*\d*\d*\d)', row.text) final_rate = str(ratings[-1]) return final_rate archive_clean['rating'] = archive_clean.apply(get_rating, axis=1) # Referenced https://stackoverflow.com/a/30629776 to create a function # convert the fraction to a float def convert_to_float(frac_str): fraction = frac_str.rating try: return float(fraction) except ValueError: num, denom = fraction.split('/') try: leading, num = num.split(' ') whole = float(leading) except ValueError: whole = 0 frac = float(num) / float(denom) return whole - frac if whole < 0 else whole + frac archive_clean['rating'] = archive_clean.apply(convert_to_float, axis=1) # Drop the numerator and denominator columns archive_clean.drop(['rating_numerator', 'rating_denominator'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean.head() archive_clean.info() archive_clean.head(10) ###Output _____no_output_____ ###Markdown Store Cleaned DataFrame ###Code archive_clean.to_csv('twitter_archive_master.csv', index=False) ###Output _____no_output_____ ###Markdown Analysis ###Code df = pd.read_csv('twitter_archive_master.csv') df.shape ###Output _____no_output_____ ###Markdown Our dataset consists of 12 variables, with 2,105 observations. ###Code df.info() df['rating'].mean() ###Output _____no_output_____ ###Markdown The average rating for all the dogs was about 116 percent. ###Code df['favorite_count'].describe() ###Output _____no_output_____ ###Markdown Here we can see that the average favorite count was about 8,878. I'm interested to know what the most frequent stage of dog was.
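Finding the most frequent stage boils down to `value_counts` after dropping the `'None'` placeholder; a toy illustration (invented stage values):

```python
import pandas as pd

stage = pd.Series(["pupper", "doggo", "pupper", "puppo", "pupper", "None"])

# Drop the placeholder, then count; value_counts sorts descending by frequency
counts = stage[stage != "None"].value_counts()
print(counts.index[0], counts.iloc[0])
```

The histogram below visualizes exactly this distribution for the real data.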
###Code dog_stage = df.loc[df['stage'] != 'None'] plt.hist(dog_stage['stage']) plt.xlabel('Dog Stage') plt.ylabel('Frequency') plt.title('Total Number of Each Dog Stage') plt.savefig('dog_stage_plot.png') ###Output _____no_output_____ ###Markdown Here we can clearly see that *puppers* were the most frequent dog *stage* in this dataset. Next it would be interesting to see if there appears to be any relationship between *favorite_count* and *retweet_count* ###Code df.plot(x='favorite_count', y='retweet_count', title='Favorite Count by Retweet Count', kind='scatter'); plt.savefig('retweet_vs_favorite.png') ###Output _____no_output_____ ###Markdown Gather Saving the Udacity-provided dataset into `tw_arch` ###Code tw_arch = pd.read_csv("twitter-archive-enhanced.csv") tw_arch.head(1) ###Output _____no_output_____ ###Markdown In this part, I am going to be downloading and saving to disk `image-predictions.tsv` data from the Udacity-provided URL ###Code # Download file image_predictions = req.get( "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" ).text # write to disk with open("image-predictions.tsv", "w") as f: f.write(image_predictions) # load data to DataFrame img_predictions = pd.read_csv("image-predictions.tsv", sep="\t") img_predictions.head() ###Output _____no_output_____ ###Markdown Loading data from twitter and storing it in `tweet_json.txt` **Please set your credentials in the `credentials.json` file to download data from the twitter api** ###Code # Load credentials and authorize with open('credentials.json') as f: creds = json.load(f) consumer_key = creds["consumer_key"] consumer_secret = creds["consumer_secret"] access_token = creds["access_token"] access_secret = creds["access_secret"] auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, parser=tweepy.parsers.JSONParser(), wait_on_rate_limit_notify=True, wait_on_rate_limit=True) # 
store tweet ids into list tweet_ids = list(img_predictions["tweet_id"]) tweet_data = [] tweet_errors = [] for idx, tweet_id in enumerate(tweet_ids): try: if idx % 50 == 0: print("Processed {}/{}".format(idx, len(tweet_ids))) t = api.get_status(tweet_id, tweet_mode='extended') tweet_data.append( _.pick(t, ["id", "retweet_count", "favorite_count"]) ) except Exception as err: print(err) tweet_errors.append((tweet_id, err)) print("\nDone: {} downloaded out of {}".format(len(tweet_data), len(tweet_ids))) tweets_df = pd.DataFrame(tweet_data, columns = ['id', 'retweet_count', 'favorite_count']) tweets_df.to_csv('tweet_json.txt', encoding = 'utf-8') print("\nTweets saved to tweet_json.txt") print("\nFollowing tweet ids have failed") print(tweet_errors) tweets_df = pd.read_csv("tweet_json.txt") tweets_df.head() ###Output _____no_output_____ ###Markdown Assess Assessing _`twitter-archive-enhanced.csv`_ dataset ###Code tw_arch.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown - timestamp column should be date object- there are a lot of missing values for some columns ###Code print("Duplicate count: {}".format(sum(tw_arch.duplicated()))) ###Output Duplicate count: 0 ###Markdown > `tw_arch` dataset seems to be duplicate
free ###Code tw_arch.sample(10) ###Output _____no_output_____ ###Markdown - a lot of missing values in columns: `in_reply_to_status_id in_reply_to_user_id retweeted_status_id retweeted_status_user_id retweeted_status_timestamp`- We could format date to `YYYY-MM-DD HH:MM:SS`- missing dog names in `name` column ###Code # Check for names that are probably incorrect tw_arch_names_to_nan = _.filter_(tw_arch.name.unique(), lambda x: len(x) < 4 and x.islower()) print("Most likely incorrect names: {}".format(tw_arch_names_to_nan)) ###Output Most likely incorrect names: ['a', 'not', 'one', 'mad', 'an', 'my', 'his', 'all', 'old', 'the', 'by'] ###Markdown - These names were probably entered incorrectly or were not intended as names. Further I could add `None` to that list ###Code # Add None to list of names to be replaced tw_arch_names_to_nan.append('None') tw_arch[["doggo","floofer","pupper","puppo"]].describe() ###Output _____no_output_____ ###Markdown - `doggo floofer pupper puppo` columns should be boolean or melted into one column ###Code tw_arch.text[tw_arch.text.str.startswith('RT', na=False)].head() ###Output _____no_output_____ ###Markdown - Unnecessary Retweet Observations. We will analyze only the tweet observations, not retweet observations. ###Code # Null values in jpg_url tw_arch.info() img_predictions.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown - There are observations without images. 
The number of total observations is 2,356, but the number of observations with an image is 2,075.- tweet_id should be string Assessing *`tweet_json.txt`* dataset ###Code tweets_df.head() ###Output _____no_output_____ ###Markdown - `Unnamed: 0` should be dropped ###Code tweets_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2067 entries, 0 to 2066 Data columns (total 4 columns): Unnamed: 0 2067 non-null int64 id 2067 non-null int64 retweet_count 2067 non-null int64 favorite_count 2067 non-null int64 dtypes: int64(4) memory usage: 64.7 KB ###Markdown > Value types are ok ###Code tweets_df.describe() ###Output _____no_output_____ ###Markdown > there seem to be `0` values in `favorite_count` column ###Code tweets_df.duplicated().sum() ###Output _____no_output_____ ###Markdown > No duplicates in this dataset Assessing *`image-predictions.tsv`* dataset ###Code img_predictions.info() img_predictions ###Output _____no_output_____ ###Markdown > Other than some rows not being dog-related, I cannot see anything wrong with the dataset so far ###Code img_predictions.isnull().any().sum() img_predictions.isna().any().sum() img_predictions.duplicated().sum() ###Output _____no_output_____ ###Markdown > No nulls, no NaNs, no duplicates. Looks good to me Assessment summary Quality*tw_arch* timestamp column should be date object We could format date to `YYYY-MM-DD HH:MM:SS` a lot of missing values in columns: `in_reply_to_status_id in_reply_to_user_id retweeted_status_id retweeted_status_user_id retweeted_status_timestamp` `doggo floofer pupper puppo` columns should be boolean missing dog names in `name` column Some names are probably entered incorrectly or were not intended as names. I stored them in `tw_arch_names_to_nan` Drop retweet observations `rating_numerator` should be float datatype Drop the observations without images. 
tweet_id should be string *any* remove rows from `tw_arch` that are not in `tweets_df` Tidiness*tweets_df* remove `Unnamed: 0` column rename `id` column to `tweet_id`*tw_arch* `doggo floofer pupper puppo` can be melted into one column we can merge all 3 datasets into one Clean ###Code ### Backup datasets tw_arch_clean = tw_arch.copy() img_predictions_clean = img_predictions.copy() tweets_df_clean = tweets_df.copy() ###Output _____no_output_____ ###Markdown Quality Define 1. Convert time column to date object2. Set correct date format Code ###Code tw_arch_clean["timestamp"] = pd.to_datetime(tw_arch_clean["timestamp"]) ###Output _____no_output_____ ###Markdown Test ###Code tw_arch_clean["timestamp"].head(1)
Set dog names to NaN where name is "None"6. Set dog names to NaN that weren't intended as names> All names to be renamed are stored in `tw_arch_names_to_nan`**Code** ###Code tw_arch_clean["name"].replace(tw_arch_names_to_nan, np.NaN, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code len(tw_arch_clean[tw_arch_clean["name"].isin(tw_arch_names_to_nan)]) ###Output _____no_output_____ ###Markdown **** Define7. Drop retweet observations**Code** ###Code tw_arch_clean = tw_arch_clean.drop(index=tw_arch_clean.text[tw_arch_clean.text.str.startswith('RT', na=False)].index) ###Output _____no_output_____ ###Markdown Test ###Code tw_arch_clean.text[tw_arch_clean.text.str.startswith('RT', na=False)].head() ###Output _____no_output_____ ###Markdown **** Define8. Rating numerator should be float**Code** ###Code tw_arch_clean["rating_numerator"] = tw_arch_clean.rating_numerator.astype("float") ###Output _____no_output_____ ###Markdown Test ###Code tw_arch_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2173 entries, 0 to 2355 Data columns (total 12 columns): tweet_id 2173 non-null int64 timestamp 2173 non-null datetime64[ns] source 2173 non-null object text 2173 non-null object expanded_urls 2115 non-null object rating_numerator 2173 non-null float64 rating_denominator 2173 non-null int64 name 1414 non-null object doggo 2173 non-null bool floofer 2173 non-null bool pupper 2173 non-null bool puppo 2173 non-null bool dtypes: bool(4), datetime64[ns](1), float64(1), int64(2), object(4) memory usage: 161.3+ KB ###Markdown **** Define8.1 tweet_id should be string**Code** ###Code # Convert tweet ids to strings tw_arch_clean["tweet_id"] = tw_arch_clean["tweet_id"].astype(str) img_predictions_clean["tweet_id"] = img_predictions_clean["tweet_id"].astype(str) tweets_df_clean["id"] = tweets_df_clean["id"].astype(str) ###Output _____no_output_____ ###Markdown Test ###Code tw_arch_clean.info() img_predictions_clean.info() tweets_df_clean.info()
###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2173 entries, 0 to 2355 Data columns (total 12 columns): tweet_id 2173 non-null object timestamp 2173 non-null datetime64[ns] source 2173 non-null object text 2173 non-null object expanded_urls 2115 non-null object rating_numerator 2173 non-null float64 rating_denominator 2173 non-null int64 name 1414 non-null object doggo 2173 non-null bool floofer 2173 non-null bool pupper 2173 non-null bool puppo 2173 non-null bool dtypes: bool(4), datetime64[ns](1), float64(1), int64(1), object(5) memory usage: 161.3+ KB <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null object jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(1), object(5) memory usage: 152.1+ KB <class 'pandas.core.frame.DataFrame'> RangeIndex: 2067 entries, 0 to 2066 Data columns (total 4 columns): Unnamed: 0 2067 non-null int64 id 2067 non-null object retweet_count 2067 non-null int64 favorite_count 2067 non-null int64 dtypes: int64(3), object(1) memory usage: 64.7+ KB ###Markdown **** Define9. Remove rows from `tw_arch` that are not in `tweets_df`**Code** ###Code notin_temp = (~tw_arch_clean.tweet_id.isin(list(tweets_df_clean.id))) tw_arch_clean = tw_arch_clean[~notin_temp] ###Output _____no_output_____ ###Markdown Test ###Code (~tw_arch_clean.tweet_id.isin(list(tweets_df_clean.id))).sum() ###Output _____no_output_____ ###Markdown **** Define10. Remove `Unnamed: 0` column from `tweets_df`11. 
rename `id` column to `tweet_id`**Code** ###Code #Remove column tweets_df_clean.drop(["Unnamed: 0"], axis=1, inplace=True) #Rename column tweets_df_clean.rename(columns={"id": "tweet_id"}, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code tweets_df_clean.head(1) ###Output _____no_output_____ ###Markdown **** Define12. `doggo floofer pupper puppo` can be melted into one column**Code** ###Code columns_to_melt = ["doggo","floofer","pupper","puppo"] # Add breed column tw_arch_clean["breed"] = np.NaN # Set breed value for x in columns_to_melt: tw_arch_clean.loc[tw_arch_clean[x] == True, "breed"] = x ###Output _____no_output_____ ###Markdown Test ###Code test = tw_arch_clean.drop_duplicates("breed") test[_.concat(columns_to_melt, ["breed"])] # Drop columns we do not need tw_arch_clean.drop(columns=columns_to_melt, axis=1, inplace=True) list(tw_arch_clean.columns) ###Output _____no_output_____ ###Markdown **** Define13. Merge all 3 datasets into one**Code** ###Code tweets_all = pd.merge(tw_arch_clean, tweets_df_clean, how="left", on="tweet_id") tweets_all = pd.merge(tweets_all, img_predictions_clean, how="left", on="tweet_id") ###Output _____no_output_____ ###Markdown Test ###Code tweets_all.head(1) ###Output _____no_output_____ ###Markdown Storing, Analyzing, and Visualizing ###Code # Store into file tweets_all.to_csv("twitter_archive_master.csv") import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline ax = tweets_all["breed"].value_counts().plot(kind="bar", figsize=(10,4), title="Dog breed histogram"); ax.set_xlabel("Dog breed"); ax.set_ylabel("Count"); favorite_names = tweets_all[~tweets_all["name"].isna()].nlargest(10, 'favorite_count') # sns.set(style="whitegrid"); sns.set(font_scale=2); f, ax = plt.subplots(figsize=(20, 15)); ax = sns.barplot(y='name', x='favorite_count', data=favorite_names); ax.set(xlim=(70000,130000), ylabel="Dog Name", xlabel="Favorite Count"); plt.title('Top 10 dog names'); ###Output _____no_output_____
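The top-10 selection above relies on `DataFrame.nlargest`; a toy illustration (the favorite counts here are invented):

```python
import pandas as pd

# Invented name/favorite pairs
toy = pd.DataFrame({
    "name": ["Jamesy", "Duddles", "Zoey", "Bo"],
    "favorite_count": [120000, 110000, 90000, 95000],
})

# nlargest keeps the n rows with the highest values in the given column
top2 = toy.nlargest(2, "favorite_count")
print(top2["name"].tolist())
```

Unlike sorting the whole frame and slicing, `nlargest` returns only the requested rows, already ordered descending.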
###Markdown > looks like most popular dog name is `Jamesy` ###Code week_df = tweets_all[["timestamp", "favorite_count", "retweet_count"]] week_df = week_df.resample('W-Mon', on='timestamp').sum().reset_index().sort_values(by='timestamp') sns.set_context("talk") f, ax = plt.subplots(figsize=(20, 15)); plt.plot(week_df.timestamp, week_df.retweet_count, label="Weekly Retweets") plt.plot(week_df.timestamp, week_df.favorite_count, label="Weekly Favorites") plt.title('Retweet and Favorite count per week'); plt.xlabel("Date") plt.ylabel("Count") plt.legend() from subprocess import call call(['python', '-m', 'nbconvert', 'wrangle_act.ipynb']) ###Output _____no_output_____ ###Markdown Data Wrangling Gathering Data Loading files and scraping twitter ###Code import tweepy # API credentials redacted; supply your own keys here consumer_key = 'YOUR_CONSUMER_KEY' consumer_secret = 'YOUR_CONSUMER_SECRET' access_token = 'YOUR_ACCESS_TOKEN' access_secret = 'YOUR_ACCESS_SECRET' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth) import pandas as pd import numpy as np import requests import json df_main= pd.read_csv('twitter-archive-enhanced.csv') url= 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response= requests.get(url) with open('image_predictions.tsv', mode='wb') as file: file.write(response.content) df_img= pd.read_csv('image_predictions.tsv', sep="\t") ###Output _____no_output_____ ###Markdown Checking the scraping process will work ###Code #https://stackoverflow.com/questions/28384588/twitter-api-get-tweets-with-specific-id auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) tweet = api.get_status(839549326359670784) print(tweet.text) ###Output Meet
Winston. He knows he's a little too big for the swing, but he doesn't care. Kindly requests a push. 12/10 woul… https://t.co/ESW7mcQeZV ###Markdown Scraping Twitter iteratively and combining the data in a text file ###Code auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) df_list = [] df_error = [] with open('tweet_json.txt', mode='w') as file: for tweet_id in df_main['tweet_id']: try: tweet= api.get_status(tweet_id, tweet_mode='extended') df_list.append(tweet) #https://www.guru99.com/python-json.html #https://stackabuse.com/reading-and-writing-json-to-a-file-in-python/ file.write(json.dumps(tweet._json) + '\n') except Exception as e: # record the failing id, not the last successfully fetched tweet df_error.append(tweet_id) ###Output Rate limit reached. Sleeping for: 718 Rate limit reached. Sleeping for: 718 ###Markdown Making a data frame with the data extracted from the text file ###Code id_=[] rt=[] fav=[] with open('tweet_json.txt', mode='r') as file: for line in file.readlines(): data= json.loads(line) id_.append(data['id']) rt.append(data['retweet_count']) fav.append(data['favorite_count']) df_query= pd.DataFrame({'tweet_id':id_, 'retweet_count':rt, 'favourite_count':fav}) ###Output _____no_output_____ ###Markdown Assessing Data Visually inspecting df_main ###Code df_main.sample(3) ###Output _____no_output_____ ###Markdown Programmatically inspecting df_main ###Code df_main.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64
rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown Checking name column for input errors ###Code df_main['name'].value_counts() ###Output _____no_output_____ ###Markdown Checking numerator column for input errors ###Code df_main['rating_numerator'].value_counts() df_main.query('rating_numerator == 0') ###Output _____no_output_____ ###Markdown Checking whether the 0/10 ratings were errors; amusingly, they weren't ###Code tweet = api.get_status(835152434251116546) print(tweet.text) ###Output When you're so blinded by your systematic plagiarism that you forget what day it is. 0/10 https://t.co/YbEJPkg4Ag ###Markdown df_img correctly identifies that it is a picture of a swing ###Code df_img.query('tweet_id == 835152434251116546') ###Output _____no_output_____ ###Markdown Checking for duplicates ###Code df_main.tweet_id.duplicated().sum() df_main.retweeted_status_id ###Output _____no_output_____ ###Markdown Visually inspecting df_img ###Code df_img.sample(3) ###Output _____no_output_____ ###Markdown Programmatically inspecting df_img ###Code df_img.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown Checking the number of photos not of dogs ###Code df_img.query('p1_dog == False').count()[0] ###Output _____no_output_____ ###Markdown Checking for duplicated tweet_id ###Code
df_img.tweet_id.duplicated().sum() ###Output _____no_output_____ ###Markdown Checking for duplicated jpg_url ###Code df_img.jpg_url.duplicated().sum() ###Output _____no_output_____ ###Markdown Visually inspecting df_query ###Code df_query.sample(3) ###Output _____no_output_____ ###Markdown Programmatically inspecting df_query ###Code df_query.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2333 entries, 0 to 2332 Data columns (total 3 columns): tweet_id 2333 non-null int64 retweet_count 2333 non-null int64 favourite_count 2333 non-null int64 dtypes: int64(3) memory usage: 54.8 KB ###Markdown Cleaning Data Quality Issues df_main: - tweet_id should be a string, not an integer - timestamp is an object - in_reply_to_status_id is a float - in_reply_to_user_id is a float - retweeted_status_timestamp is an object - retweeted_status_id is a float - incorrect names (a, the, an, etc.) df_img: - tweet_id should be a string, not an integer - 543 photos aren't of dogs - there are duplicate photos df_query: - tweet_id should be a string, not an integer - some tweet_ids are missing after scraping Tidiness Issues - the dataframes (df_main, df_img, df_query) should all be combined - the columns for doggo, floofer, puppo, and pupper can be combined df_main Define 1) Need to fix data types in multiple columns; I will use the .astype() command Code ###Code df_main_c= df_main.copy() df_main_c.tweet_id= df_main_c.tweet_id.astype('object') df_main_c.timestamp= pd.to_datetime(df_main_c.timestamp) df_main_c.retweeted_status_timestamp= pd.to_datetime(df_main_c.retweeted_status_timestamp) ###Output _____no_output_____ ###Markdown Test ###Code df_main_c.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null object in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null datetime64[ns] source 2356 non-null object text 2356
non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null datetime64[ns] expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: datetime64[ns](2), float64(4), int64(2), object(9) memory usage: 313.0+ KB ###Markdown Define 2)Need to remove all retweets so we only have original tweets, we will keep all null values for in_reply_to_user_id and retweeted_status_id. Code ###Code df_main_c= df_main_c[df_main_c.in_reply_to_user_id.isnull()] df_main_c= df_main_c[df_main_c.retweeted_status_id.isnull()] ###Output _____no_output_____ ###Markdown Test ###Code df_main_c.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2097 non-null object in_reply_to_status_id 0 non-null float64 in_reply_to_user_id 0 non-null float64 timestamp 2097 non-null datetime64[ns] source 2097 non-null object text 2097 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null datetime64[ns] expanded_urls 2094 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 2097 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dtypes: datetime64[ns](2), float64(4), int64(2), object(9) memory usage: 294.9+ KB ###Markdown Define 3)We will drop the columns in_reply_to_user_id and retweeted_status_id because we will only have null values now. 
Code ###Code df_main_c= df_main_c.drop(labels=['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', ],axis=1) ###Output _____no_output_____ ###Markdown Test ###Code df_main_c.head(0) ###Output _____no_output_____ ###Markdown Define 4)We will search for names in lower case and convert them to 'None' Code ###Code #https://stackoverflow.com/questions/50633935/pandas-replace-all-strings-in-lowercase-in-a-column-with-none df_main_c.loc[df_main_c['name'] == df_main_c['name'].str.lower(), 'name'] = 'None' ###Output _____no_output_____ ###Markdown Test ###Code df_main_c.name df_main_c.name.value_counts() ###Output _____no_output_____ ###Markdown df_img Define 5)Need to fix data types in tweet_id column, I will use the .astype( ) command Code ###Code df_img_c= df_img.copy() df_img_c.tweet_id= df_img_c.tweet_id.astype('object') ###Output _____no_output_____ ###Markdown Test ###Code df_img_c.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null object jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(1), object(5) memory usage: 152.1+ KB ###Markdown Define 6)We want to delete posts with images that aren't dogs Code ###Code df_img_c= df_img_c[df_img_c.p1_dog] ###Output _____no_output_____ ###Markdown Test ###Code df_img_c[~df_img_c.p1_dog] ###Output _____no_output_____ ###Markdown Define 7)We will remove posts which posted the same image by removing duplicated jpg_url's Code ###Code df_img_c.jpg_url.duplicated().sum() #https://www.geeksforgeeks.org/python-pandas-dataframe-drop_duplicates/ df_img_c.sort_values("jpg_url", inplace = 
True) df_img_c.drop_duplicates(subset ="jpg_url", keep = False, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code df_img_c.jpg_url.duplicated().sum() ###Output _____no_output_____ ###Markdown df_query Define 8) Need to fix the data type of the tweet_id column; I will use the .astype() command Code ###Code df_query_c= df_query.copy() df_query_c.tweet_id= df_query_c.tweet_id.astype('object') ###Output _____no_output_____ ###Markdown Test ###Code df_query_c.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2333 entries, 0 to 2332 Data columns (total 3 columns): tweet_id 2333 non-null object retweet_count 2333 non-null int64 favourite_count 2333 non-null int64 dtypes: int64(2), object(1) memory usage: 54.8+ KB ###Markdown Tidiness Issues - the dataframes (df_main, df_img, df_query) should all be combined - the columns for doggo, floofer, puppo, and pupper can be combined Define We will combine the data frames using the merge command Code ###Code df_clean= pd.merge(df_main_c, df_query_c, on=['tweet_id'], how='inner') df_clean= pd.merge(df_clean, df_img_c, on=['tweet_id'], how='inner') ###Output _____no_output_____ ###Markdown Test ###Code df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1405 entries, 0 to 1404 Data columns (total 25 columns): tweet_id 1405 non-null object timestamp 1405 non-null datetime64[ns] source 1405 non-null object text 1405 non-null object expanded_urls 1405 non-null object rating_numerator 1405 non-null int64 rating_denominator 1405 non-null int64 name 1405 non-null object doggo 1405 non-null object floofer 1405 non-null object pupper 1405 non-null object puppo 1405 non-null object retweet_count 1405 non-null int64 favourite_count 1405 non-null int64 jpg_url 1405 non-null object img_num 1405 non-null int64 p1 1405 non-null object p1_conf 1405 non-null float64 p1_dog 1405 non-null bool p2 1405 non-null object p2_conf 1405 non-null float64 p2_dog 1405 non-null bool p3 1405 non-null object p3_conf
1405 non-null float64 p3_dog 1405 non-null bool dtypes: bool(3), datetime64[ns](1), float64(3), int64(5), object(13) memory usage: 256.6+ KB ###Markdown Define We will create one column for the different categories of dog (doggo,floofer,pupper,puppo) Code ###Code df_clean['type']=df_clean['doggo']+df_clean['floofer']+df_clean['pupper']+df_clean['puppo'] df_clean.type.value_counts() df_clean.type= df_clean.type.str.replace('None',' ').str.strip() df_clean.type.value_counts() df_clean= df_clean.drop(labels=['pupper', 'doggo', 'puppo', 'floofer'],axis=1) df_clean.type= df_clean.type.astype('category') ###Output _____no_output_____ ###Markdown Test ###Code df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1405 entries, 0 to 1404 Data columns (total 22 columns): tweet_id 1405 non-null object timestamp 1405 non-null datetime64[ns] source 1405 non-null object text 1405 non-null object expanded_urls 1405 non-null object rating_numerator 1405 non-null int64 rating_denominator 1405 non-null int64 name 1405 non-null object retweet_count 1405 non-null int64 favourite_count 1405 non-null int64 jpg_url 1405 non-null object img_num 1405 non-null int64 p1 1405 non-null object p1_conf 1405 non-null float64 p1_dog 1405 non-null bool p2 1405 non-null object p2_conf 1405 non-null float64 p2_dog 1405 non-null bool p3 1405 non-null object p3_conf 1405 non-null float64 p3_dog 1405 non-null bool type 1405 non-null category dtypes: bool(3), category(1), datetime64[ns](1), float64(3), int64(5), object(9) memory usage: 214.4+ KB ###Markdown Visualizations ###Code import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown Not a useful histogram ###Code plt.hist(x=df_clean.rating_numerator ,color='red') plt.show() ###Output _____no_output_____ ###Markdown Looks better but we can improve it ###Code plt.hist(x=df_clean.rating_numerator,bins=100 ,color='red') plt.xlim(0,50) plt.show() ###Output _____no_output_____ ###Markdown This is a 
useful histogram ###Code df_clean.rating_numerator.mean() a= df_clean.rating_numerator.mean() plt.hist(x=df_clean.rating_numerator,bins=100 ,color='red') plt.xlim(0,20) plt.axvline(a, color='blue') plt.title('Most Popular Rating Numerator') plt.xlabel('Rating Numerator') plt.ylabel('Number of Tweets') plt.show() plt.plot(df_clean.retweet_count, df_clean.favourite_count, marker='o', linestyle='', ms=2) plt.title('Retweets vs Favourite Count') plt.xlabel('Retweet Count') plt.ylabel('Favourite Count'); ###Output _____no_output_____ ###Markdown Wrangle and Analyze Data : WeRateDogs 1. Gathering Data 1.1. Loading WeRateDogs Twitter Data ###Code # importing necessary libraries import pandas as pd from tweepy import OAuthHandler from tweepy import API import json import requests import re import numpy as np import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline # reading csv file which stored as df tweets = pd.read_csv('twitter-archive-enhanced (1).csv', parse_dates = True) tweets.head() ###Output _____no_output_____ ###Markdown 1.2. 
Querying Twitter API to Gather RT and Favorite Count Data ###Code #consumer_key = 'XXXXXXXXXXXXXXXXX' #consumer_secret = 'XXXXXXXXXXXXXXXXXXXXXXXXXx' #access_token = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXX' #access_secret = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' #auth = OAuthHandler(consumer_key, consumer_secret) #auth.set_access_token(access_token, access_secret) #api = API(auth) #id_list = list(df['tweet_id']) # gathering data from Twitter using statuses_lookup and writing data # to tweets_json.txt file #with open('tweets_json.txt', 'w') as file: # for i in range(0, len(id_list), 100): # if i != 2300: # n = i+100 # else: # n = 2356 # tweets = api.statuses_lookup(id_list[i:n]) # for tweet in tweets: # json_tweet = json.dumps(tweet._json) # file.write(json_tweet + '\n') # reading json file line by line and writing it as a dictionary # creating pandas DataFrame from the dictionary # getting Retweet and Favorite counts, text, rating_numerator, # rating_denominator by using regular expressions counts = [] with open('tweets_json.txt', 'r') as file: line = file.readline() while line: temp_dict = dict() rt_count = re.findall(r'"retweet_count":\s(\d+)', line)[0] fav_count = re.findall(r'"favorite_count":\s(\d+)', line)[0] id_ = re.findall(r'"id":\s(\d+)', line)[0] text = re.findall(r'"text":\s"([^"]+)"', line)[0] numerator = re.findall(r'(\d+\.?\d+?)/\d+', text) if len(numerator) == 0: numerator = 0 else: numerator = numerator[0] denominator = re.findall(r'\d+\.?\d+?/(\d+)', text) if len(denominator) == 0: denominator = 0 else: denominator = denominator[0] doggo, floofer, pupper, puppo = 0, 0, 0, 0 if 'doggo' in text: doggo = 1 elif 'floof' in text: floofer = 1 elif 'pupper' in text: pupper = 1 elif 'puppo' in text: puppo = 1 name = re.findall(r'[^\.]\s([A-Z][a-z]+)', text) temp_dict['rt_count'] = rt_count temp_dict['fav_count'] = fav_count temp_dict['tweet_id'] = id_ temp_dict['text'] = text temp_dict['numerator'] = numerator temp_dict['denominator'] = denominator temp_dict['doggo'] = 
doggo temp_dict['floofer'] = floofer temp_dict['pupper'] = pupper temp_dict['puppo'] = puppo temp_dict['name'] = name counts.append(temp_dict) line = file.readline() # changing data types # note: the tweet_id, numerator and denominator dtypes are revisited later during cleaning counts_df = pd.DataFrame(counts) counts_df['fav_count'] = counts_df.fav_count.astype('int') counts_df['rt_count'] = counts_df.rt_count.astype('int') counts_df['tweet_id'] = counts_df.tweet_id.astype('int64') counts_df['denominator'] = counts_df.denominator.astype('float64') counts_df['numerator'] = counts_df.numerator.astype('float64') # printing a few rows to see if querying was successful counts_df.head() ###Output _____no_output_____ ###Markdown 1.3. Loading Image Predictions Data ###Code url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open('image_predictions.tsv', mode = 'wb') as file: file.write(response.content) image_predictions = pd.read_csv('image_predictions.tsv', sep = '\t') # printing a few rows to see if loading was successful image_predictions.head() ###Output _____no_output_____ ###Markdown 2.
Data Assessment Assess Quality- "tweets" table : tweet_id is an integer not a string- "tweets" table : in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id are float instead of string- "image_predictions" table : tweet_id is integer instead of string- "tweets" table : inaccurate rating_numerator and rating_denominator, 1126/10 instead of 11.26/10- "tweets" table : name includes things that are not dog names, inaccurate data- "image_predictions" table : some lowercase, some uppercase dog breed names in p1, p2 and p3 columns, inconsistent data- "tweets" table : "None" string instead of np.NaN in doggo, floofer, pupper and puppo columns- "tweets" table : rating_numerator is integer instead of float, since some values should be decimal- "tweets" table : "None" string instead of np.NaN in name column Tidiness- "tweets" table : dog stage is found across multiple columns- data related to the same tweet found in multiple datasets and these should be merged 3. 
Data Cleaning ###Code # Making copy of dataframes tweets_clean = tweets.copy() counts_clean = counts_df.copy() image_predictions_clean = image_predictions.copy() ###Output _____no_output_____ ###Markdown Define - Convert data types of tweet id variables to string Code ###Code def to_string(df, col): df[col] = df[col].astype(str) return df tweets_clean = to_string(tweets_clean, 'tweet_id') image_predictions_clean = to_string(image_predictions_clean, 'tweet_id') counts_clean = to_string(counts_clean, 'tweet_id') ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.info() image_predictions_clean.info() counts_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2341 entries, 0 to 2340 Data columns (total 11 columns): denominator 2341 non-null float64 doggo 2341 non-null int64 fav_count 2341 non-null int32 floofer 2341 non-null int64 name 2341 non-null object numerator 2341 non-null float64 pupper 2341 non-null int64 puppo 2341 non-null int64 rt_count 2341 non-null int32 text 2341 non-null object tweet_id 2341 non-null object dtypes: float64(2), int32(2), int64(4), object(3) memory usage: 183.0+ KB ###Markdown Define - converting in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id to string Code ###Code tweets_clean = to_string(tweets_clean, ['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id']) ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null object in_reply_to_status_id 2356 non-null object in_reply_to_user_id 2356 non-null object timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 2356 non-null object retweeted_status_user_id 2356 non-null object retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object 
rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: int64(2), object(15) memory usage: 313.0+ KB ###Markdown Define - extracting decimal rating_numerator and rating_denominator Code ###Code tweets_clean[['rating_numerator', 'rating_denominator']] = tweets_clean.text.str.extract('((?:\d+\.)?\d+)\/(\d+)', expand=True) ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.rating_numerator.unique() tweets_clean.rating_denominator.unique() ###Output _____no_output_____ ###Markdown Define - Removing values from the name column that are not valid dog names Code ###Code tweets_clean[tweets_clean.name.str.islower()]['name'].value_counts() ###Output _____no_output_____ ###Markdown Values that start with a lowercase letter are not valid dog names. When I inspect the data, most of these tweets do not include a dog name at all. The extracted value should be the word that comes after "This is" or "Meet". It is not possible to capture all dog names programmatically from the text, but the total number of tweets in this category is small.
These values are inaccurate and should be converted to np.NaN ###Code index_lower = tweets_clean[tweets_clean.name.str.islower()].index # for testing tweets_clean.loc[tweets_clean.name.str.islower(), 'name'] = np.NaN ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean[tweets_clean.name.isnull()].head() col_number_name = tweets_clean.columns.get_loc('name') tweets_clean.iloc[index_lower, col_number_name].sum() ###Output _____no_output_____ ###Markdown Define - Capitalize dog breeds that start with lower case in the p1, p2 and p3 columns of the image_predictions data Code ###Code def capitalize_df(df, col): df[col] = df[col].str.capitalize() return df[col] image_predictions_clean['p1'] = capitalize_df(image_predictions_clean, 'p1') image_predictions_clean['p2'] = capitalize_df(image_predictions_clean, 'p2') image_predictions_clean['p3'] = capitalize_df(image_predictions_clean, 'p3') ###Output _____no_output_____ ###Markdown Test ###Code image_predictions_clean[['p1','p2', 'p3']].head() ###Output _____no_output_____ ###Markdown Define - Converting None to NaN in the doggo, floofer, puppo and pupper columns of the tweets_clean data Code ###Code def to_nan(df, cols): for col in cols: df.loc[(df[col] == 'None') | (df[col] == 'nan'), col] = np.NaN return df[cols] tweets_clean[['doggo', 'floofer', 'pupper', 'puppo']] = to_nan(tweets_clean, ['doggo', 'floofer', 'pupper', 'puppo']) ###Output _____no_output_____ ###Markdown Test ###Code total_none = 0 for col in ['doggo', 'floofer', 'pupper', 'puppo']: total_none += tweets_clean[tweets_clean[col] == 'None'].shape[0] print('Total number of "None" values is {}'.format(total_none)) tweets_clean[['doggo', 'floofer', 'pupper', 'puppo']].head() tweets_clean.loc[tweets_clean.doggo == 'doggo'] ###Output _____no_output_____ ###Markdown Define - Converting rating_numerator and rating_denominator to float Code ###Code def to_float(df, cols): for col in cols: df[col] = df[col].astype('float64') return df[cols]
tweets_clean[['rating_numerator', 'rating_denominator']] = to_float(tweets_clean, ['rating_numerator', 'rating_denominator']) ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null object in_reply_to_status_id 2356 non-null object in_reply_to_user_id 2356 non-null object timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 2356 non-null object retweeted_status_user_id 2356 non-null object retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null float64 rating_denominator 2356 non-null float64 name 2247 non-null object doggo 97 non-null object floofer 10 non-null object pupper 257 non-null object puppo 30 non-null object dtypes: float64(2), object(15) memory usage: 313.0+ KB ###Markdown Define - Converting all "None" values to np.NaN in name column of tweets_clean table Code ###Code tweets_clean['name'] = to_nan(tweets_clean, ['name']) ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.loc[tweets_clean.name == 'None'].shape tweets_clean.loc[tweets_clean.name.isnull()] ###Output _____no_output_____ ###Markdown Define - Converting all "nan"s to np.NaN in in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id columns of tweets_clean data. 
Code ###Code tweets_clean[[ 'in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id']] = to_nan(tweets_clean, ['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id']) ###Output _____no_output_____ ###Markdown Test ###Code total_nan = 0 for col in ['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id']: total_nan += tweets_clean[tweets_clean[col] == 'nan'].shape[0] print('Total number of "nan" values is {}'.format(total_nan)) tweets_clean[[ 'in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id']].head() ###Output _____no_output_____ ###Markdown Define - collapsing all 4 columns related to dog stage into a single column; 'dog_stage' Code Before that, we need to take care of samples that carry several dog-stage values, such as both doggo and pupper. tweets_clean has 12 samples that include both "doggo" and "pupper". Some of these samples actually show two dogs in the picture, so both values are accurate; others carry both stages simply because the text mentions both terms. But to melt these 4 columns into a single variable, each row may hold only one stage. We could inspect the samples and correct them manually; instead, I drop all 12 samples that carry two different dog stages. There is one more sample labeled both "doggo" and "floofer"; I corrected it according to the text.
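Before the drop below, conflicting rows like those can be located with a simple boolean count. This is a hedged sketch on a hypothetical mini-frame that only mimics the four stage columns (strings or missing), not the notebook's own data:

```python
import pandas as pd

# Hypothetical rows mimicking the four stage columns (label string or missing)
df = pd.DataFrame({
    "doggo":   ["doggo", "doggo", None],
    "floofer": [None, None, None],
    "pupper":  ["pupper", None, "pupper"],
    "puppo":   [None, None, None],
})
stage_cols = ["doggo", "floofer", "pupper", "puppo"]
# Count how many stage columns are filled per row; more than one means
# the row carries conflicting stage labels
n_stages = df[stage_cols].notna().sum(axis=1)
conflicts = df[n_stages > 1]
```

Counting first, then dropping, makes the "12 conflicting samples" claim reproducible before any rows are removed.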
###Code # this sample was labeled both doggo and floofer; clear doggo per the tweet text tweets_clean.loc[(tweets_clean.doggo == 'doggo') & (tweets_clean.floofer == 'floofer'), 'doggo'] = np.NaN index_2stage = tweets_clean.loc[(tweets_clean.doggo == 'doggo') & (tweets_clean.pupper == 'pupper')].index tweets_clean.drop(index_2stage, axis = 0, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.loc[(tweets_clean.doggo == 'doggo') & (tweets_clean.pupper == 'pupper')].shape ###Output _____no_output_____ ###Markdown Code ###Code # if the column is null, False is assigned, otherwise True tweets_clean.doggo = np.where(tweets_clean.doggo.isnull(), False, True) tweets_clean.floofer = np.where(tweets_clean.floofer.isnull(), False, True) tweets_clean.pupper = np.where(tweets_clean.pupper.isnull(), False, True) tweets_clean.puppo = np.where(tweets_clean.puppo.isnull(), False, True) # whichever column is True, dog_stage equals the name of that column for i in tweets_clean.index.tolist(): if tweets_clean.doggo[i]: tweets_clean.loc[i, 'dog_stage'] = 'doggo' elif tweets_clean.floofer[i]: tweets_clean.loc[i, 'dog_stage'] = 'floofer' elif tweets_clean.pupper[i]: tweets_clean.loc[i, 'dog_stage'] = 'pupper' elif tweets_clean.puppo[i]: tweets_clean.loc[i, 'dog_stage'] = 'puppo' else: tweets_clean.loc[i, 'dog_stage'] = np.NaN ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean[['doggo', 'floofer', 'pupper', 'puppo', 'dog_stage']].head(25) ###Output _____no_output_____ ###Markdown Define - examining rating_numerator and rating_denominator closely, which will be useful for the analysis and visualization part Code ###Code # rows where rating_denominator is different from 10 index_not_ten = tweets_clean[tweets_clean.rating_denominator != 10.0].index # inspect visually the denominator values that are not equal to 10 for i in index_not_ten: print(i, tweets_clean.text[i] + '\n') # correcting rating_numerator and rating_denominator manually tweets_clean['rating_numerator'][313], tweets_clean['rating_denominator'][313] = 13,
10 tweets_clean['rating_numerator'][342], tweets_clean['rating_denominator'][342] = np.nan, np.nan tweets_clean['rating_numerator'][516], tweets_clean['rating_denominator'][516] = np.nan, np.nan tweets_clean['rating_numerator'][784], tweets_clean['rating_denominator'][784] = 14, 10 tweets_clean['rating_numerator'][1068], tweets_clean['rating_denominator'][1068] = 14, 10 tweets_clean['rating_numerator'][1165], tweets_clean['rating_denominator'][1165] = 13, 10 tweets_clean['rating_numerator'][1202], tweets_clean['rating_denominator'][1202] = 11, 10 tweets_clean['rating_numerator'][1662], tweets_clean['rating_denominator'][1662] = 10, 10 tweets_clean['rating_numerator'][2335], tweets_clean['rating_denominator'][2335] = 9, 10 # dropping rows with null rating_denominator, since we do not want tweets with no rating index_to_drop = tweets_clean[tweets_clean.rating_denominator.isnull()].index tweets_clean.drop(index_to_drop , inplace = True) # The next step is about denominator values that are greater than 10 inputted intentionally. # We can convert these ratings to those with a scale of 10. If the rating is 84/70, it is simply 12/10. # If denominator is greater than 10, divide it by 10 and scale down corresponding numerator accordingly. # At the end, all denominator values are 10. 
tweets_clean.rating_numerator = np.where(tweets_clean.rating_denominator != 10, tweets_clean.rating_numerator/(tweets_clean.rating_denominator / 10), tweets_clean.rating_numerator) tweets_clean.rating_denominator = np.where(tweets_clean.rating_denominator != 10, 10, tweets_clean.rating_denominator) ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.rating_denominator.unique() numerator_index = tweets_clean[tweets_clean.rating_numerator > 14].index len(numerator_index) # all large rating_numerator values seem OK as they are represented as some rating/10 for i in numerator_index: print(i, tweets_clean.text[i] + '\n') ###Output 55 @roushfenway These are good dogs but 17/10 is an emotional impulse rating. More like 13/10s 188 @dhmontgomery We also gave snoop dogg a 420/10 but I think that predated your research 189 @s8n You tried very hard to portray this good boy as not so good, but you have ultimately failed. His goodness shines through. 666/10 285 RT @KibaDva: I collected all the good dogs!! 15/10 @dog_rates #GoodDogs https://t.co/6UCGFczlOI 290 @markhoppus 182/10 291 @bragg6of8 @Andy_Pace_ we are still looking for the first 15/10 979 This is Atticus. He's quite simply America af. 1776/10 https://t.co/GRXwMxLBkh 2074 After so many requests... here you go. Good dogg. 420/10 https://t.co/yfAAo1gdeY ###Markdown Define - Dropping all samples including ReTweeted Tweets Code ###Code index_rt = tweets_clean[tweets_clean.text.str.startswith('RT')].index tweets_clean.drop(index_rt, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean[tweets_clean.text.str.startswith('RT')].shape ###Output _____no_output_____ ###Markdown Define - Dropping all columns having information about RT status, since we do not want any RTed tweets, these columns are all null and not useful anymore. 
These columns are retweeted_status_id, retweeted_status_user_id and retweeted_status_timestamp Code ###Code tweets_clean.drop(['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis = 1, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.columns ###Output _____no_output_____ ###Markdown Define - Dropping 4 columns related to dog stage, doggo, floofer, puppo and pupper. These are melted into a column, so they are not needed for tidy data Code ###Code tweets_clean.drop(['doggo', 'floofer', 'puppo', 'pupper'], axis = 1, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code tweets_clean.columns image_predictions_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null object jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(1), object(5) memory usage: 152.1+ KB ###Markdown Define - Merging all three tables by merging two tables at a time Code ###Code merged = pd.merge(left = tweets_clean, right = counts_clean[['tweet_id', 'rt_count', 'fav_count']], how = 'left', on = 'tweet_id') final = pd.merge(left = merged, right = image_predictions_clean, how = 'left', on = 'tweet_id') ###Output _____no_output_____ ###Markdown Test ###Code final.head(2) final.columns ###Output _____no_output_____ ###Markdown Storing Wrangled Data Define - Storing the final clean table as 'twitter_archive_master.csv' Code ###Code final.to_csv('twitter_archive_master.csv', index = False) ###Output _____no_output_____ ###Markdown Test ###Code stored_final = pd.read_csv('twitter_archive_master.csv') stored_final.head(2) ###Output _____no_output_____ 
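The two-step merge used above (tweets ← counts ← image predictions, all keyed on tweet_id) can be sketched on toy frames; with `how='left'`, every row of the left table survives and unmatched rows get NaN. The frames and values below are illustrative stand-ins, not the real data.

```python
import pandas as pd

# Toy stand-ins for the three cleaned tables, keyed on tweet_id (illustrative values)
tweets = pd.DataFrame({'tweet_id': [1, 2, 3], 'rating_numerator': [12, 13, 10]})
counts = pd.DataFrame({'tweet_id': [1, 2], 'rt_count': [50, 80], 'fav_count': [200, 300]})
preds = pd.DataFrame({'tweet_id': [1, 3], 'p1': ['golden_retriever', 'pug']})

# Two left joins: every tweet is kept; tweets missing from the right table get NaN
merged = pd.merge(left=tweets, right=counts, how='left', on='tweet_id')
final = pd.merge(left=merged, right=preds, how='left', on='tweet_id')

print(final.shape)                        # (3, 5): all three tweets survive
print(final['rt_count'].isnull().sum())   # 1: tweet 3 has no count row
```

An inner join would instead drop any tweet missing from either right-hand table; the left joins above keep the full tweet set and mark missing counts or predictions with NaN.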
###Markdown Analyzing and Visualizing Data ###Code final.head(3) base_color =sns.color_palette()[0] sns.set_style('whitegrid') fig,ax = plt.subplots() final.dog_stage.value_counts().sort_values(ascending = False).plot(kind= 'bar', color = base_color, figsize = (8,5)); ax.set(xlabel = 'Dog Stages', ylabel = 'Count'); plt.xticks(rotation = 0); plt.title('Number of dogs in Different Dog Stages') plt.savefig('1st_upt.png') ###Output _____no_output_____ ###Markdown Around 230 of them are pupper and there are around 90 dogs that are doggo. The number of dogs in puppo and floofer stages constitutes a small share in total. ###Code final[(final.dog_stage == 'floofer') & (final.p1_dog)].p1.value_counts().plot(kind = 'bar', color = base_color, figsize = (9,6)); plt.xticks(rotation = 15); plt.ylabel('Count'); plt.title('Image Predictions of Dogs in Floofer Stage') plt.savefig('2nd.png') final[(final.dog_stage == 'doggo') & (final.p1_dog)].p1.value_counts()[:6].plot(kind = 'bar', color = base_color, figsize = (9,6)); plt.xticks(rotation = 15); plt.ylabel('Count'); plt.title('Image Predictions of Dogs in Doggo Stage') plt.savefig('3rd.png') final[(final.dog_stage == 'pupper') & (final.p1_dog)].p1.value_counts()[:6].plot(kind = 'bar', color = base_color, figsize = (9,6)); plt.xticks(rotation = 15); plt.ylabel('Count'); plt.title('Image Predictions of Dogs in Pupper Stage') plt.savefig('4th.png') final[(final.dog_stage == 'puppo') & (final.p1_dog)].p1.value_counts()[:6].plot(kind = 'bar', color = base_color, figsize = (9,6)); plt.xticks(rotation = 15); plt.ylabel('Count'); plt.title('Image Predictions of Dogs in Puppo Stage') plt.savefig('5th.png') a = final[final.p1_dog == True].p1.value_counts().apply(lambda row: row >= 20) ind = a[a == True].index plot_data = final[(final.p1_dog == True) & (final.p1.isin(ind))] plt.figure(figsize = (11, 7)) sns.countplot(data = plot_data, y = 'p1',color = base_color, order = ind); plt.xlim((0, 150)) plt.xlabel('Count') plt.ylabel('p1 
Prediction') counts = final[final.p1_dog == True].p1.value_counts() total = final[final.p1_dog == True].shape[0] for i in range(len(counts)): if counts[i] >=20: text = '{:0.2f}%'.format(counts[i]*100/total) plt.text(counts[i]+5, i, text, va = 'center', ha = 'center') plt.title('Distribution of Image Predictions') plt.savefig('6th.png') ###Output _____no_output_____ ###Markdown 9.4% of pictures are Golden Retrievers. It is followed by the Labrador Retriever with 6.5%, and the Pembroke is in third place with 6% of posted pictures. Most of the pictures posted from this account are of Golden Retrievers. ###Code # to be able to better assess the average rating in each dog breed # I drop all outlier samples with a rating numerator greater than 15 # There are only 5 samples with a numerator greater than 15 index_to_drop = final[final.rating_numerator > 15].rating_numerator.index final.drop(index_to_drop,inplace = True) final[final.p1_dog == True].groupby('p1').rating_numerator.mean().sort_values(ascending = False)[:15][::-1].plot(kind = 'barh',color = base_color, figsize = (9,6)); plt.ylabel('p1 Prediction'); plt.xlabel('Average Rating'); plt.title('The First 15 Dog Breeds with the Highest Average Rating') plt.savefig('7th.png') ###Output _____no_output_____ ###Markdown The dog breed with the highest average rating is the Saluki, with the Briard in second place and the Tibetan Mastiff in third. The most common breed, the Golden Retriever, averages above 11 and ranks 13th among breeds by average rating.
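The breed-level rankings above come from a plain groupby–mean–sort chain; a minimal sketch on made-up ratings (breed names and numbers below are illustrative, not the real data):

```python
import pandas as pd

# Illustrative ratings per predicted breed
df = pd.DataFrame({
    'p1': ['saluki', 'pug', 'saluki', 'pug', 'briard'],
    'rating_numerator': [13, 10, 12, 11, 12],
})

# Mean rating per breed, highest first — the same pattern used for the plots above
avg = df.groupby('p1').rating_numerator.mean().sort_values(ascending=False)
print(avg)   # saluki 12.5, briard 12.0, pug 10.5
```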
###Code final[final.p1_dog == True].groupby('p1').rating_numerator.mean().sort_values(ascending = False)[-10:][::-1].plot(kind = 'barh',color = base_color, figsize = (9,6)); plt.ylabel('p1 Prediction'); plt.xlabel('Average Rating'); plt.title('The 10 Dog Breeds with the Lowest Average Rating') plt.savefig('8th.png') order = final.groupby('dog_stage').rating_numerator.mean().sort_values(ascending = False).index plt.figure(figsize = (7,5)) sns.barplot(data = final, x = 'dog_stage', y = 'rating_numerator', order = order,color = base_color, ci = None); plt.xlabel('Dog Stage'); plt.ylabel('Rating'); plt.title('Average Rating of Dogs in Each Dog Stage') plt.savefig('9th.png') ###Output _____no_output_____ ###Markdown Wrangle and Analyze Data Importing the libraries ###Code !pip3 install statsmodels import numpy as np import pandas as pd import matplotlib.pyplot as plt import requests import tweepy import json import re import statsmodels.api as sm import matplotlib.patches as mpatches %matplotlib inline ###Output _____no_output_____ ###Markdown Gathering of Data Twitter Enhanced Dataset (Via local file) ###Code df_enhanced=pd.read_csv('twitter-archive-enhanced.csv') df_enhanced.head(3) ###Output _____no_output_____ ###Markdown Image Predictions (Via url) ###Code url = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" response = requests.get(url) with open('image-predictions.tsv', mode ='wb') as file: file.write(response.content) #Read TSV file into df_img_pred, the name the later cells reference df_img_pred = pd.read_csv('image-predictions.tsv', sep='\t' ) df_img_pred.head(3) ###Output _____no_output_____ ###Markdown Twitter API (Via API) ###Code consumer_key = '' consumer_secret = '' access_token = '' access_secret = '' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth,wait_on_rate_limit=True) #Storing the list of tweet ids to fetch their content later
tweets_id=df_enhanced["tweet_id"] #Creating a list to store tweet_ids which we fail to get content from (possibly deleted) del_tweets=[] #creating and opening file to write the data into with open('tweet_json.txt',mode='w') as file: for uid in tweets_id: try: #fetching tweet content and storing it tweet=api.get_status(uid,tweet_mode='extended') json.dump(tweet._json,file) #Writing new observation in a newline file.write("\n") except: del_tweets.append(uid) print("Error fetching Tweet ID: ",uid) #To understand how our extracted JSON format is with open('tweet_json.txt','r') as data: tweet_json = data.readline() print(tweet_json) #Converting our data into DataFrame df_tweet=pd.read_json('tweet_json.txt',orient='records',lines=True) df_tweet.head(3) ###Output _____no_output_____ ###Markdown Removing the unnecessary columns ###Code df_tweet.columns df_tweet=df_tweet[["id","favorite_count","retweet_count","retweeted"]] df_tweet.head() ###Output _____no_output_____ ###Markdown Assessing Data Enhanced Dataset ###Code df_enhanced.head() df_enhanced.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown * Many Columns the datatype is wrong (for eg. 
in_reply_to, timestamp, retweeted_status_id, etc.)* Missing values in the dog stage columns (doggo, floofer, pupper, puppo) are registered as None* Some expanded URLs are missing ###Code df_enhanced.query('doggo=="None" and floofer=="None" and pupper=="None" and puppo=="None"') ###Output _____no_output_____ ###Markdown * Stages of dogs have not been mentioned and are recorded as "None" ###Code df_enhanced['source'].value_counts() ###Output _____no_output_____ ###Markdown * Sources are difficult to read and interpret Image Prediction Dataset ###Code df_img_pred df_img_pred.info() df_img_pred.describe() ###Output _____no_output_____ ###Markdown * No missing values as such ###Code len(df_img_pred.query('p1_dog==False and p2_dog==False and p3_dog==False')) ###Output _____no_output_____ ###Markdown * Some objects may not be dogs at all, as the algorithm failed to classify them as dogs Twitter API ###Code df_tweet df_tweet.info() df_tweet.describe() ###Output _____no_output_____ ###Markdown Summary Quality df_enhanced* Data types are wrong in many places* Sources are not clearly understandable/readable* Retweets are present in the dataset* Missing values in the dog stage columns (doggo, floofer, pupper, puppo) are registered as None* Change all denominators to 10 and scale the numerators accordingly for better comparison df_image_pred* Some images are not of dogs according to the algorithm but are still classified as some breed* Converting the breed into categorical data Tidiness* Merging all the datasets into one* Individual dog stage columns need to be converted into a single categorical column* Unnecessary columns need to be removed Data Cleaning * First we will make copies of the dataset ###Code df_enhanced_clean=df_enhanced.copy() df_img_pred_clean=df_img_pred.copy() df_tweet_clean=df_tweet.copy() ###Output _____no_output_____ ###Markdown Quality 1.Twitter enhanced dataset 1.1 Define* Correcting the datatype in the columns timestamp and retweeted_status_timestamp, and converting various ids to
string datatype Code* Will use pd.to_datetime to convert into datatype and astype(str) to convert into string ###Code # To datetime df_enhanced_clean.timestamp = pd.to_datetime(df_enhanced_clean.timestamp) df_enhanced_clean.retweeted_status_timestamp = pd.to_datetime(df_enhanced_clean.retweeted_status_timestamp) # To string df_enhanced_clean.in_reply_to_status_id=df_enhanced_clean.in_reply_to_status_id.astype(str) df_enhanced_clean.in_reply_to_user_id=df_enhanced_clean.in_reply_to_user_id.astype(str) df_enhanced_clean.retweeted_status_id=df_enhanced_clean.retweeted_status_id.astype(str) df_enhanced_clean.retweeted_status_user_id=df_enhanced_clean.retweeted_status_user_id.astype(str) ###Output _____no_output_____ ###Markdown Test ###Code df_enhanced_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 2356 non-null object in_reply_to_user_id 2356 non-null object timestamp 2356 non-null datetime64[ns, UTC] source 2356 non-null object text 2356 non-null object retweeted_status_id 2356 non-null object retweeted_status_user_id 2356 non-null object retweeted_status_timestamp 181 non-null datetime64[ns, UTC] expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: datetime64[ns, UTC](2), int64(3), object(12) memory usage: 313.0+ KB ###Markdown 1.2 Define * Removing the retweeted tweets. 
They are present in the dataset and need to be removed, as they amount to duplicate values Code * Keep only the tweets whose retweeted_status_id is null (after the string cast, nulls appear as "nan") ###Code df_enhanced_clean=df_enhanced_clean.query('retweeted_status_id == "nan"') ###Output _____no_output_____ ###Markdown Test ###Code len(df_enhanced_clean.query('retweeted_status_id != "nan"')) ###Output _____no_output_____ ###Markdown 1.3 Define* Missing values in the dog stage columns are shown as the string "None" Code* Change "None" into NaN ###Code df_enhanced_clean.floofer=df_enhanced_clean.floofer.replace("None",np.nan,regex=True) df_enhanced_clean.pupper=df_enhanced_clean.pupper.replace("None",np.nan,regex=True) df_enhanced_clean.doggo=df_enhanced_clean.doggo.replace("None",np.nan,regex=True) df_enhanced_clean.puppo=df_enhanced_clean.puppo.replace("None",np.nan,regex=True) ###Output _____no_output_____ ###Markdown Test ###Code df_enhanced_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2175 non-null int64 in_reply_to_status_id 2175 non-null object in_reply_to_user_id 2175 non-null object timestamp 2175 non-null datetime64[ns, UTC] source 2175 non-null object text 2175 non-null object retweeted_status_id 2175 non-null object retweeted_status_user_id 2175 non-null object retweeted_status_timestamp 0 non-null datetime64[ns, UTC] expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 87 non-null object floofer 10 non-null object pupper 234 non-null object puppo 25 non-null object dtypes: datetime64[ns, UTC](2), int64(3), object(12) memory usage: 305.9+ KB ###Markdown 1.4 Define * The source values are not human readable: they are written as HTML tags, which makes them difficult for a normal person to interpret.
Code* Extract the text between the anchor tags to recover the actual source used ###Code def source(x): source_list=[] pattern = re.compile('llow">(.*?)</a>') for tag in x['source']: m = re.search(pattern, tag) src=m.group(1) source_list.append(src) return source_list src=source(df_enhanced_clean) df_enhanced_clean['source']=src ###Output _____no_output_____ ###Markdown Test ###Code df_enhanced_clean['source'].value_counts() ###Output _____no_output_____ ###Markdown 1.5 Define * Some of the expanded URLs are missing. I found out from the Twitter API documentation that appending the tweet_id to 'https://twitter.com/dog_rates/status/' gives us the URL of the tweet we are looking at Code * Where expanded_urls is null, we will concatenate 'https://twitter.com/dog_rates/status/' with the tweet_id column converted to a string (using the pandas apply function) ###Code def form_link(x): if pd.notnull(x['expanded_urls']): return x else: tid = x['tweet_id'] x['expanded_urls'] = 'https://twitter.com/dog_rates/status/' +str(tid) return x df_enhanced_clean = df_enhanced_clean.apply(form_link, axis=1) ###Output _____no_output_____ ###Markdown Test ###Code df_enhanced_clean[df_enhanced_clean['expanded_urls'].isnull()] ###Output _____no_output_____ ###Markdown 1.6 Define * We can bring the denominator to 10 and adjust the numerator accordingly Code* We will divide the numerator column by the denominator column and then multiply by 10 to get the rating out of ten, and then drop the denominator column, which will no longer be needed ###Code df_enhanced_clean.rating_numerator=(df_enhanced_clean.rating_numerator/df_enhanced_clean.rating_denominator)*10 df_enhanced_clean[df_enhanced_clean['rating_denominator']!=10] df_enhanced_clean.loc[2335,'rating_numerator']=9 df_enhanced_clean.loc[313,'rating_numerator']=13 df_enhanced_clean.drop(['rating_denominator'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code df_enhanced_clean.info()
###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 16 columns): tweet_id 2175 non-null int64 in_reply_to_status_id 2175 non-null object in_reply_to_user_id 2175 non-null object timestamp 2175 non-null datetime64[ns, UTC] source 2175 non-null object text 2175 non-null object retweeted_status_id 2175 non-null object retweeted_status_user_id 2175 non-null object retweeted_status_timestamp 0 non-null datetime64[ns] expanded_urls 2175 non-null object rating_numerator 2175 non-null float64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dtypes: datetime64[ns, UTC](1), datetime64[ns](1), float64(1), int64(1), object(12) memory usage: 368.9+ KB ###Markdown 2. Image Prediction Dataset 2.1 Define There are some cases when the algorithm has predicted that the given image is not a dog CodeFilter out the instances where the algorithm predicts, in all 3 predictions, that the image is not of a dog ###Code df_img_pred_clean=df_img_pred_clean.query('p1_dog == True or p2_dog == True or p3_dog == True') ###Output _____no_output_____ ###Markdown Test ###Code len(df_img_pred_clean.query('p1_dog == False and p2_dog == False and p3_dog == False')) ###Output _____no_output_____ ###Markdown 2.2 Define Dog breeds are stored as strings, but they are categorical data CodeConvert the datatype into category ###Code df_img_pred_clean['p1']=df_img_pred_clean['p1'].astype('category') df_img_pred_clean['p2']=df_img_pred_clean['p2'].astype('category') df_img_pred_clean['p3']=df_img_pred_clean['p3'].astype('category') ###Output _____no_output_____ ###Markdown Test ###Code df_img_pred_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1751 entries, 0 to 2073 Data columns (total 12 columns): tweet_id 1751 non-null int64 jpg_url 1751 non-null object img_num 1751 non-null int64 p1 1751 non-null category p1_conf 1751 non-null float64 p1_dog 1751
non-null bool p2 1751 non-null category p2_conf 1751 non-null float64 p2_dog 1751 non-null bool p3 1751 non-null category p3_conf 1751 non-null float64 p3_dog 1751 non-null bool dtypes: bool(3), category(3), float64(3), int64(2), object(1) memory usage: 146.5+ KB ###Markdown 2.3 Define Breed names are not capitalized and there are underscores between some breed names. Code Capitalizing every string under the columns p1, p2, p3 using the pandas .capitalize() function and also replacing the underscores with the str.replace function ###Code #Capitalizing Breed Names df_img_pred_clean['p1'] = df_img_pred_clean.p1.str.capitalize() df_img_pred_clean['p2'] = df_img_pred_clean.p2.str.capitalize() df_img_pred_clean['p3'] = df_img_pred_clean.p3.str.capitalize() #Replacing _ with ' ' in the columns df_img_pred_clean['p1']=df_img_pred_clean['p1'].replace("_",r' ',regex=True) df_img_pred_clean['p2']=df_img_pred_clean['p2'].replace("_",r' ',regex=True) df_img_pred_clean['p3']=df_img_pred_clean['p3'].replace("_",r' ',regex=True) ###Output _____no_output_____ ###Markdown Test ###Code df_img_pred_clean.head() ###Output _____no_output_____ ###Markdown Tidiness Twitter Enhanced Dataset 1.1 Define Removing some non-useful columns Code ###Code df_enhanced_clean.columns df_enhanced_clean.drop(['retweeted_status_timestamp','retweeted_status_user_id','retweeted_status_id'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code df_enhanced_clean.columns ###Output _____no_output_____ ###Markdown 1.2 Define The different stages of dogs are spread across different variables in the dataset, whereas it should be a single categorical value denoting the stage of a dog Code While keeping the other columns constant, the columns ('doggo', 'floofer', 'pupper', 'puppo') will be melted into one column. We have to keep in mind that one dog can be in more than one stage, since the terms are vaguely defined ###Code # Create new column for dog stages df_enhanced_clean['dog_stage'] = 'None' # 
Function that will be applied to each row (changes dog_stage value) def dog_stage(row): stage = [] if row['doggo'] == 'doggo': stage.append('doggo') if row['floofer'] == 'floofer': stage.append('floofer') if row['pupper'] == 'pupper': stage.append('pupper') if row['puppo'] == 'puppo': stage.append('puppo') if len(stage)==0: # Default to 'None' if list is empty row['dog_stage'] = 'None' else: # If there is more than 1 stage row['dog_stage'] = ','.join(stage) return row df_enhanced_clean = df_enhanced_clean.apply(dog_stage, axis=1) df_enhanced_clean.drop(['doggo','floofer','pupper','puppo'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code df_enhanced_clean.dog_stage.value_counts() ###Output _____no_output_____ ###Markdown 2.0 Define Creating a Master Dataset Code Merging all tables using tweet_id as the primary key and by inner join, so we have a tight dataset of tweets for which we have all data available ###Code df_tweet_clean.head() df_enhanced_clean.head() df_master_1=pd.merge(left=df_enhanced_clean,right=df_tweet_clean,left_on='tweet_id',right_on='id',how='inner') df_master_1.drop(['id','retweeted'],axis=1,inplace=True) df_master_1.head() df_master=pd.merge(left=df_master_1,right=df_img_pred_clean,on='tweet_id',how='inner') ###Output _____no_output_____ ###Markdown Test ###Code df_master.head(3) df_master.shape #Changing source into category df_master['source']=df_master['source'].astype('category') df_master['dog_stage']=df_master['dog_stage'].astype('category') df_master.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1679 entries, 0 to 1678 Data columns (total 23 columns): tweet_id 1679 non-null int64 in_reply_to_status_id 1679 non-null object in_reply_to_user_id 1679 non-null object timestamp 1679 non-null datetime64[ns, UTC] source 1679 non-null category text 1679 non-null object expanded_urls 1679 non-null object rating_numerator 1679 non-null float64 name 1679 non-null object dog_stage 1679 non-null category 
favorite_count 1679 non-null int64 retweet_count 1679 non-null int64 jpg_url 1679 non-null object img_num 1679 non-null int64 p1 1679 non-null category p1_conf 1679 non-null float64 p1_dog 1679 non-null bool p2 1679 non-null category p2_conf 1679 non-null float64 p2_dog 1679 non-null bool p3 1679 non-null category p3_conf 1679 non-null float64 p3_dog 1679 non-null bool dtypes: bool(3), category(5), datetime64[ns, UTC](1), float64(4), int64(4), object(6) memory usage: 263.7+ KB ###Markdown Storing the dataset externally ###Code df_master.to_csv('twitter_archive_master.csv',index=False) df=pd.read_csv('twitter_archive_master.csv') df.head(3) ###Output _____no_output_____ ###Markdown Analysis Q1) Which dog breed receives the most retweets? ###Code df_1 = df_master.groupby('p1')['retweet_count'].sum().reset_index() # sort df_1 (the retweet totals just computed), not df_favorite, which is only defined in Q2 df_sorted = df_1.sort_values('retweet_count', ascending=False).head(15) retweet = df_sorted['retweet_count'] breed = df_sorted['p1'] fig, ax = plt.subplots(figsize=(10,7)) fav = plt.barh(breed, retweet) plt.ylabel('Breed of Dog') plt.xlabel('Retweet count') plt.title('Retweets per breed'); ###Output _____no_output_____ ###Markdown > * The Golden Retriever has the highest number of retweets, as is evident from the above plot* The retweets range from 100k to 500k Q2) What is the number of favorites by breed? ###Code df_favorite = df.groupby('p1')['favorite_count'].sum().reset_index() df_sorted = df_favorite.sort_values('favorite_count', ascending=False).head(10) ser_fav = df_sorted['favorite_count'] ser_breed = df_sorted['p1'] fig, ax = plt.subplots(figsize=(8,6)) fav = plt.barh(ser_breed, ser_fav) plt.ylabel('Dog Breed') plt.xlabel('Number of Favorites') plt.title('Number of Favorites by Breed') ###Output _____no_output_____ ###Markdown >* As we can see from the graph, the Golden Retriever is the most favorited dog * Every breed listed here had at least 200K favorites, peaking at over 1.6 million Q3) Is there a correlation between favorites and retweets?
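Before plotting, the strength of that relationship can be quantified with the Pearson coefficient via `Series.corr`; the counts below are toy numbers, not the real data:

```python
import pandas as pd

# Illustrative retweet/favorite pairs that rise together
df = pd.DataFrame({
    'retweet_count':  [100, 200, 300, 400],
    'favorite_count': [400, 820, 1150, 1600],
})

# Pearson correlation coefficient (the default method of .corr)
r = df['retweet_count'].corr(df['favorite_count'])
print(r)   # close to 1.0 for strongly correlated columns
```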
###Code !pip3 install seaborn import seaborn as sb # Plot scatterplot of retweet vs favorite count # height replaces the deprecated size parameter sb.lmplot(x="retweet_count", y="favorite_count", data=df_master, height = 5, aspect=1.3, scatter_kws={'alpha':1/5}) plt.title('Favorite vs. Retweet Count') plt.xlabel('Retweet Count') plt.ylabel('Favorite Count'); ###Output _____no_output_____ ###Markdown > As we can see, there is a high correlation between favorite and retweet counts Q4) What are the most popular dog names? ###Code # skip the first two entries, which are not real dog names df_master.name.value_counts()[2:7].plot(kind='barh', figsize=(15,8)) # plt.title is a function; assigning a string to it would overwrite it and never set the title plt.title('Most Common Dog Names') plt.xlabel("Number of Dogs"); ###Output _____no_output_____ ###Markdown > We found out that Cooper is the most widely used dog name out of the name data that was available Q5) What are the proportions of the various dog stages?
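The percentages on a pie chart like the one below are just normalized value counts; `value_counts(normalize=True)` computes the same proportions directly (the stage values below are illustrative, not the real distribution):

```python
import pandas as pd

# Illustrative dog_stage column with the 'None' rows already excluded
stages = pd.Series(['pupper', 'pupper', 'doggo', 'puppo', 'pupper', 'doggo'])

# Share of each stage, sorted from most to least common
props = stages.value_counts(normalize=True)
print(props)   # pupper 0.5, doggo ~0.33, puppo ~0.17
```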
###Code df_stages = df[df['dog_stage'] != "None"] fig, ax = plt.subplots(figsize=(10,10)) df_stages['dog_stage'].value_counts().plot(kind = 'pie', ax = ax, label = 'Dog Stage', autopct='%1.1f%%',explode=[0,0,0,0,0,0.125,0.125]) plt.legend(); ###Output _____no_output_____ ###Markdown PROJECT: Wrangle & Analyze Data Twitter data of @WeRateDogs CONTENTS:Gathering DataData Assessment Visual Assessment Programatic Assessment Cleaning DataStoring,Analyzing & Visualizing Data Gathering DataTwitter archive file: twitter_archive_enhanced.csvThe tweet image predictions: URL- https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv using Request libraryTwitter API & JSON: Using the tweet IDs in the WeRateDogs Twitter archive, query the Twitter API for each tweet's JSON data using Python's Tweepy library and store each tweet's entire set of JSON data in a file called tweet_json.txt file 1.Twitter Archive File: ###Code #code for gathering data import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline import requests import tweepy import json #loading data into the pandas dataframe twitter_data=pd.read_csv('twitter-archive-enhanced.csv') #test twitter_data.sort_values('timestamp') twitter_data.head() twitter_data.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null 
object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown 2. Tweet Image Prediction: ###Code #code for gathering data from the udacity servers url = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" response = requests.get(url) with open('image-predictions.tsv', mode ='wb') as file: file.write(response.content) #Read TSV file image_prediction = pd.read_csv('image-predictions.tsv', sep='\t' ) #test image_prediction.head(5) #fill in your own API credentials here (keys should never be committed to a notebook) auth = tweepy.OAuthHandler('', '') auth.set_access_token('', '') api = tweepy.API(auth, parser = tweepy.parsers.JSONParser(), wait_on_rate_limit = True, wait_on_rate_limit_notify = True) ###Output _____no_output_____ ###Markdown 3. Twitter API & Json: ###Code #code for gathering data from the Twitter API via the tweepy library list_of_tweets = [] # Tweets that can't be found are saved in the list below: list_of_tweets_not_found=[] for tweet_id in twitter_data['tweet_id']: try: list_of_tweets.append(api.get_status(tweet_id)) except Exception as e: list_of_tweets_not_found.append(tweet_id) print(len(list_of_tweets)) print(len(list_of_tweets_not_found)) my_list_of_dicts=[] for each_json_tweet in list_of_tweets: my_list_of_dicts.append(each_json_tweet) with open('tweet_json.txt', 'w') as file: file.write(json.dumps(my_list_of_dicts, indent=4)) my_demo_list = [] with open('tweet_json.txt', encoding='utf-8') as json_file: all_data = json.load(json_file) for each_dictionary in all_data: tweet_id = each_dictionary['id'] whole_tweet = each_dictionary['text'] only_url = whole_tweet[whole_tweet.find('https'):] favorite_count = each_dictionary['favorite_count'] retweet_count = each_dictionary['retweet_count'] followers_count = 
each_dictionary['user']['followers_count'] friends_count = each_dictionary['user']['friends_count'] whole_source = each_dictionary['source'] only_device = whole_source[whole_source.find('rel="nofollow">') + 15:-4] source = only_device retweeted_status = each_dictionary['retweeted_status'] = each_dictionary.get('retweeted_status', 'Original tweet') if retweeted_status == 'Original tweet': url = only_url else: retweeted_status = 'This is a retweet' url = 'This is a retweet' my_demo_list.append({'tweet_id': str(tweet_id), 'favorite_count': int(favorite_count), 'retweet_count': int(retweet_count), 'followers_count': int(followers_count), 'friends_count': int(friends_count), 'url': url, 'source': source, 'retweeted_status': retweeted_status, }) tweet_json = pd.DataFrame(my_demo_list, columns = ['tweet_id', 'favorite_count','retweet_count', 'followers_count', 'friends_count','source', 'retweeted_status', 'url']) #test tweet_json.sample(5) tweet_json.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 8 columns): tweet_id 2331 non-null object favorite_count 2331 non-null int64 retweet_count 2331 non-null int64 followers_count 2331 non-null int64 friends_count 2331 non-null int64 source 2331 non-null object retweeted_status 2331 non-null object url 2331 non-null object dtypes: int64(4), object(4) memory usage: 145.8+ KB ###Markdown Data Assessment: Visual Assessment ###Code twitter_data.sample(5) image_prediction.head() tweet_json.head() ###Output _____no_output_____ ###Markdown Programatic Accessment ###Code twitter_data.info() twitter_data.describe() sum(twitter_data['tweet_id'].duplicated()) twitter_data.rating_numerator.value_counts() twitter_data.rating_denominator.value_counts() twitter_data.rating_numerator.unique() image_prediction.sample(5) image_prediction.info() image_prediction.describe() sum(image_prediction.jpg_url.duplicated()) print(image_prediction.p1_dog.value_counts()) 
print(image_prediction.p2_dog.value_counts()) print(image_prediction.p3_dog.value_counts()) image_prediction.img_num.value_counts() tweet_json.sample(5) tweet_json.info() tweet_json.describe() tweet_json.retweeted_status.value_counts() tweet_json.source.value_counts() ###Output _____no_output_____ ###Markdown Quality Issues Twitter Data:1. Delete columns that won't be used for analysis2. Separate timestamp into day - month - year (3 columns)3. Correct numerators with decimals4. Correct denominators other than 10.5. Name has values that are the string "None" instead of NaN6. Looking visually in Excel, some names are inaccurate such as "a", "an", "the", "very", "by", etc.7. It is also found that one dog's name is recorded as "O" instead of "O'Malley" Image_prediction:8. Drop duplicated jpg_urls9. Delete columns that won't be used for analysis Tweet_JSON:10. Keep original tweets only Tidiness Issues11. Change tweet_id to type int64 in order to merge with the other 2 tables12. All tables should be part of one dataset Cleaning Data ###Code twitter_data_clean=twitter_data.copy() image_prediction_clean=image_prediction.copy() tweet_json_clean=tweet_json.copy() ###Output _____no_output_____ ###Markdown 1. **Twitter_data:** Deleting columns that are not useful for analysis ###Code #code for deleting unnecessary columns twitter_data_clean = twitter_data_clean.drop(['source', 'in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'expanded_urls'], 1) #test print(list(twitter_data_clean)) ###Output ['tweet_id', 'timestamp', 'text', 'rating_numerator', 'rating_denominator', 'name', 'doggo', 'floofer', 'pupper', 'puppo'] ###Markdown 2. 
**Twitter_data:** Making separate columns for day-month-year in the dataset ###Code #code twitter_data_clean['timestamp']=pd.to_datetime(twitter_data_clean['timestamp']) twitter_data_clean['year']=twitter_data_clean['timestamp'].dt.year twitter_data_clean['month']=twitter_data_clean['timestamp'].dt.month twitter_data_clean['day']=twitter_data_clean['timestamp'].dt.day #twitter_data_clean=twitter_data_clean.drop(['timestamp'],1) ###Output _____no_output_____ ###Markdown 3. **Twitter_data:** Correct numerators with decimals ###Code #code twitter_data_clean[['rating_numerator','rating_denominator']]=twitter_data_clean[['rating_numerator','rating_denominator']].astype(float) #test twitter_data_clean.info() #Update numerators twitter_data_clean.loc[(twitter_data_clean.tweet_id == 883482846933004288), 'rating_numerator'] = 13.5 twitter_data_clean.loc[(twitter_data_clean.tweet_id == 786709082849828864), 'rating_numerator'] = 9.75 twitter_data_clean.loc[(twitter_data_clean.tweet_id == 778027034220126208), 'rating_numerator'] = 11.27 twitter_data_clean.loc[(twitter_data_clean.tweet_id == 681340665377193984), 'rating_numerator'] = 9.5 twitter_data_clean.loc[(twitter_data_clean.tweet_id == 680494726643068929), 'rating_numerator'] = 11.26 ###Output _____no_output_____ ###Markdown 4. **Twitter_data:** Correct denominators ###Code #code twitter_data_clean['rating'] = 10 * twitter_data_clean['rating_numerator'] / twitter_data_clean['rating_denominator'].astype(float) #test twitter_data_clean.sample(5) ###Output _____no_output_____ ###Markdown **Twitter_data:**5. Name has values that are the string "None" instead of NaN6. Looking visually in Excel, some names are inaccurate such as "a", "an", "the", "very", "by", etc.7. 
It is also found that one dog's name is recorded as "O" instead of "O'Malley" ###Code #code for getting all lowercase names: lowercase_names = [] for row in twitter_data_clean['name']: if row[0].islower() and row not in lowercase_names: lowercase_names.append(row) print(lowercase_names) # Replace all names that start with a lowercase letter with a NaN twitter_data_clean['name'].replace(lowercase_names, np.nan, inplace = True) # Replace all 'None's with a NaN twitter_data_clean['name'].replace('None', np.nan, inplace = True) # Replace the name 'O' with "O'Malley" twitter_data_clean['name'].replace('O', "O'Malley", inplace = True) # Check value counts to see that None and names starting with # a lowercase letter are gone twitter_data_clean['name'].value_counts() ###Output _____no_output_____ ###Markdown 8. **Image_prediction:** Drop 66 duplicated jpg_urls ###Code #code image_prediction_clean = image_prediction_clean.drop_duplicates(subset=['jpg_url'], keep='last') #test sum(image_prediction_clean['jpg_url'].duplicated()) ###Output _____no_output_____ ###Markdown 9. **Image_prediction:** Dropping columns not needed for analysis ###Code #code image_prediction_clean = image_prediction_clean.drop(['img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], 1) #test list(image_prediction_clean) ###Output _____no_output_____ ###Markdown 10. **Tweet_json:** Keeping only original tweets ###Code #code tweet_json_clean = tweet_json_clean[tweet_json_clean['retweeted_status'] == 'Original tweet'] #test tweet_json_clean['retweeted_status'].value_counts() ###Output _____no_output_____ ###Markdown 11. **Tidiness_issue:** Change tweet_id to type int64 in order to merge with the other 2 tables ###Code #code tweet_json_clean['tweet_id'] = tweet_json_clean['tweet_id'].astype(int) #test tweet_json_clean['tweet_id'].dtypes ###Output _____no_output_____ ###Markdown 12. 
**Tidiness_issue:** All tables should be part of one dataset ###Code #code df_twitter1 = pd.merge(twitter_data_clean, image_prediction_clean, how = 'left', on = ['tweet_id']) #keep rows that have a picture (jpg_url) df_twitter1 = df_twitter1[df_twitter1['jpg_url'].notnull()] #TEST df_twitter1.info() #create a new dataframe that merges df_twitter1 and tweet_json_clean df_twitter = pd.merge(df_twitter1, tweet_json_clean, how = 'left', on = ['tweet_id']) #test df_twitter.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2009 entries, 0 to 2008 Data columns (total 22 columns): tweet_id 2009 non-null int64 timestamp 2009 non-null datetime64[ns] text 2009 non-null object rating_numerator 2009 non-null float64 rating_denominator 2009 non-null float64 name 1350 non-null object doggo 2009 non-null object floofer 2009 non-null object pupper 2009 non-null object puppo 2009 non-null object year 2009 non-null int64 month 2009 non-null int64 day 2009 non-null int64 rating 2009 non-null float64 jpg_url 2009 non-null object favorite_count 1922 non-null float64 retweet_count 1922 non-null float64 followers_count 1922 non-null float64 friends_count 1922 non-null float64 source 1922 non-null object retweeted_status 1922 non-null object url 1922 non-null object dtypes: datetime64[ns](1), float64(7), int64(4), object(10) memory usage: 361.0+ KB ###Markdown Storing, Analyzing and Visualizing Data ###Code df_twitter.to_csv('twitter_archive_master.csv', index=False, encoding = 'utf-8') ###Output _____no_output_____ ###Markdown Insight one:> Favorite count and retweet count are highly correlated: the tweets people favorite most are also the ones they retweet most. ###Code import seaborn as sns sns.lmplot(x="retweet_count", y="favorite_count", data=df_twitter, size = 5, aspect=1.3, scatter_kws={'alpha':1/5}) plt.title('Favorite vs. 
Retweet Count') plt.xlabel('Retweet Count') plt.ylabel('Favorite Count'); ###Output _____no_output_____ ###Markdown Insight two:> It appears as though the frequency of ratings below 10 has decreased over time. Before 2016 there were many ratings below 10, but after that there were hardly any. ###Code # Plot standardized ratings over time df_twitter.groupby('timestamp')['rating'].mean().plot(kind='line') plt.ylim(0, 15) plt.title('Rating over Time') plt.xlabel('Time') plt.ylabel('Standardized Rating') plt.show(); selected_data = df_twitter['tweet_id'].groupby([df_twitter['timestamp'].dt.year, df_twitter['timestamp'].dt.month]).count() selected_data.plot('line') plt.title('Monthly number of Tweets', size=15) plt.xlabel('Time (Year, Month)') plt.ylabel('Number of Tweets') plt.savefig('number_of_tweets_over_time'); ###Output _____no_output_____ ###Markdown Insight three:>The majority of rating numerators are between 10 & 12 ###Code df_twitter.hist(column='rating_numerator', bins = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]) plt.xlabel('Rating Numerator') plt.ylabel('Frequency') plt.title('Distribution of the Rating Numerator') plt.savefig('rating_numerator_dist'); ###Output _____no_output_____ ###Markdown Insight four:>The most popular tweet source is Twitter for iPhone, followed by Twitter Web Client and TweetDeck ###Code df_twitter['source'].value_counts() sns.countplot(data=df_twitter, x='source') plt.title('Tweet Sources', size=15) plt.savefig('most_used_twitter_source'); ###Output _____no_output_____ ###Markdown Project: Data Wrangling, by Aditya Gathering data **1. 
Twitter archive file** ###Code #Import all packages needed import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline import requests import tweepy import json #Read CSV file arch = pd.read_csv('/content/twitter_archive_enhanced.csv') arch.sort_values('timestamp') arch.head() arch.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null object 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 doggo 2356 non-null object 14 floofer 2356 non-null object 15 pupper 2356 non-null object 16 puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown **2. 
Tweet image prediction** ###Code #URL downloaded programmatically url = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" response = requests.get(url) with open('image-predictions.tsv', mode='wb') as file: file.write(response.content) #Read the TSV file that was just written image_prediction = pd.read_csv('image-predictions.tsv', sep='\t') #https://stackoverflow.com/questions/28384588/twitter-api-get-tweets-with-specific-id #API credentials redacted; substitute your own keys from a Twitter developer account auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET') auth.set_access_token('ACCESS_TOKEN', 'ACCESS_TOKEN_SECRET') api = tweepy.API(auth, parser=tweepy.parsers.JSONParser(), wait_on_rate_limit=True, wait_on_rate_limit_notify=True) ###Output _____no_output_____ ###Markdown **3. Twitter API & JSON** ###Code #Download Tweepy status object based on Tweet ID and store in list list_of_tweets = [] # Tweets that can't be found are saved in the list below: cant_find_tweets_for_those_ids = [] for tweet_id in arch['tweet_id']: try: list_of_tweets.append(api.get_status(tweet_id)) except Exception as e: cant_find_tweets_for_those_ids.append(tweet_id) print("The list of tweets", len(list_of_tweets)) print("The list of tweets not found", len(cant_find_tweets_for_those_ids)) #Then in this code block we isolate the json part of each tweepy #status object that we have downloaded and we add them all into a list my_list_of_dicts = [] for each_json_tweet in list_of_tweets: my_list_of_dicts.append(each_json_tweet) #we write this list into a txt file: with open('tweet_json.txt', 'w') as file: file.write(json.dumps(my_list_of_dicts, indent=4)) #identify information of interest from JSON dictionaries in txt file #and put it in a dataframe called tweet JSON my_demo_list = [] with open('tweet_json.txt', encoding='utf-8') as json_file: all_data = json.load(json_file) for each_dictionary in all_data: tweet_id 
= each_dictionary['id'] whole_tweet = each_dictionary['text'] only_url = whole_tweet[whole_tweet.find('https'):] favorite_count = each_dictionary['favorite_count'] retweet_count = each_dictionary['retweet_count'] followers_count = each_dictionary['user']['followers_count'] friends_count = each_dictionary['user']['friends_count'] whole_source = each_dictionary['source'] only_device = whole_source[whole_source.find('rel="nofollow">') + 15:-4] source = only_device retweeted_status = each_dictionary['retweeted_status'] = each_dictionary.get('retweeted_status', 'Original tweet') if retweeted_status == 'Original tweet': url = only_url else: retweeted_status = 'This is a retweet' url = 'This is a retweet' my_demo_list.append({'tweet_id': str(tweet_id), 'favorite_count': int(favorite_count), 'retweet_count': int(retweet_count), 'followers_count': int(followers_count), 'friends_count': int(friends_count), 'url': url, 'source': source, 'retweeted_status': retweeted_status, }) tweet_json = pd.DataFrame(my_demo_list, columns = ['tweet_id', 'favorite_count','retweet_count', 'followers_count', 'friends_count','source', 'retweeted_status', 'url']) tweet_json.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2330 entries, 0 to 2329 Data columns (total 8 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2330 non-null object 1 favorite_count 2330 non-null int64 2 retweet_count 2330 non-null int64 3 followers_count 2330 non-null int64 4 friends_count 2330 non-null int64 5 source 2330 non-null object 6 retweeted_status 2330 non-null object 7 url 2330 non-null object dtypes: int64(4), object(4) memory usage: 145.8+ KB ###Markdown Assessing data Visual assessment ###Code ##twitter_archive arch image_prediction tweet_json ###Output _____no_output_____ ###Markdown Programmatic assessment ###Code arch.info() sum(arch['tweet_id'].duplicated()) arch.rating_numerator.value_counts() print(arch.loc[arch.rating_numerator == 204, 'text']) 
print(arch.loc[arch.rating_numerator == 143, 'text']) print(arch.loc[arch.rating_numerator == 666, 'text']) print(arch.loc[arch.rating_numerator == 1176, 'text']) print(arch.loc[arch.rating_numerator == 144, 'text']) #print whole text in order to verify numerators and denominators print(arch['text'][1120]) #17 dogs print(arch['text'][1634]) #13 dogs print(arch['text'][313]) #just a tweet to explain actual ratings, this will be ignored when cleaning data print(arch['text'][189]) #no picture, this will be ignored when cleaning data print(arch['text'][1779]) #12 dogs arch.rating_denominator.value_counts() print(arch.loc[arch.rating_denominator == 11, 'text']) print(arch.loc[arch.rating_denominator == 2, 'text']) print(arch.loc[arch.rating_denominator == 16, 'text']) print(arch.loc[arch.rating_denominator == 15, 'text']) print(arch.loc[arch.rating_denominator == 7, 'text']) print(arch['text'][784]) #retweet - it will be deleted when delete all retweets print(arch['text'][1068]) #actual rating 14/10 need to change manually print(arch['text'][1662]) #actual rating 10/10 need to change manually print(arch['text'][2335]) #actual rating 9/10 need to change manually print(arch['text'][1663]) # tweet to explain rating print(arch['text'][342]) #no rating - delete print(arch['text'][516]) #no rating - delete arch[arch['tweet_id'] == 681340665377193000] # Reviewer point brought something to my attention that it was not picked up during the first assessment with pd.option_context('max_colwidth', 200): display(arch[arch['text'].str.contains(r"(\d+\.\d*\/\d+)")] [['tweet_id', 'text', 'rating_numerator', 'rating_denominator']]) image_prediction.sample(10) image_prediction.info() sum(image_prediction.jpg_url.duplicated()) pd.concat(g for _, g in image_prediction.groupby("jpg_url") if len(g) > 1) print(image_prediction.p1_dog.value_counts()) print(image_prediction.p2_dog.value_counts()) print(image_prediction.p3_dog.value_counts()) image_prediction.img_num.value_counts() 
tweet_json.sample(10) tweet_json.info() tweet_json.retweeted_status.value_counts() tweet_json.source.value_counts() ###Output _____no_output_____ ###Markdown Quality*Completeness, validity, accuracy, consistency (content issues)* *twitter_archive*1. Keep original ratings (no retweets) that have images- Delete columns that won't be used for analysis- Erroneous datatypes (doggo, floofer, pupper and puppo columns)- Separate timestamp into day - month - year (3 columns)- Correct numerators with decimals- Correct denominators other than 10: a. Manually (a few examples assessed by printing individual texts). b. Programmatically (tweets with a denominator not equal to 10 are usually multiple dogs). *image_prediction*6. Drop 66 duplicated jpg_urls7. Create 1 column for image prediction and 1 column for confidence level8. Delete columns that won't be used for analysis *tweet_json*1. Keep original tweets only Tidiness 1. Change tweet_id to type int64 in order to merge with the other 2 tables- All tables should be part of one dataset Cleaning Data ###Code twitter_archive_clean = arch.copy() image_prediction_clean = image_prediction.copy() tweet_json_clean = tweet_json.copy() ###Output _____no_output_____ ###Markdown **1. Twitter archive** - Keep original ratings (no retweets) that have images. Based on info, there are 181 values in retweeted_status_id and retweeted_status_user_id. Delete the retweets. Once I merge twitter_archive and image_prediction, I will only keep the ones with images. ###Code #CODE: Delete retweets by filtering the NaN of retweeted_status_user_id twitter_archive_clean = twitter_archive_clean[pd.isnull(twitter_archive_clean['retweeted_status_user_id'])] #TEST print(sum(twitter_archive_clean.retweeted_status_user_id.value_counts())) ###Output 0 ###Markdown **2. 
Twitter archive** - Delete columns that won't be used for analysis ###Code #get the column names of twitter_archive_clean print(list(twitter_archive_clean)) #CODE: Delete columns not needed twitter_archive_clean = twitter_archive_clean.drop(['source', 'in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'expanded_urls'], 1) #TEST list(twitter_archive_clean) ###Output _____no_output_____ ###Markdown **3. Twitter_archive** - Erroneous datatypes (doggo, floofer, pupper and puppo columns) ###Code #CODE: Melt the doggo, floofer, pupper and puppo columns into dogs and dogs_stage columns twitter_archive_clean = pd.melt(twitter_archive_clean, id_vars=['tweet_id', 'timestamp', 'text', 'rating_numerator', 'rating_denominator', 'name'], var_name='dogs', value_name='dogs_stage') #CODE: drop dogs twitter_archive_clean = twitter_archive_clean.drop('dogs', 1) #CODE: Sort by dogs_stage then drop duplicates based on tweet_id, keeping the last occurrence twitter_archive_clean = twitter_archive_clean.sort_values('dogs_stage').drop_duplicates(subset='tweet_id', keep='last') #TEST twitter_archive_clean['dogs_stage'].value_counts() ###Output _____no_output_____ ###Markdown **4. Twitter_archive** - Separate timestamp into day - month - year (3 columns)First convert *timestamp* to datetime. Then extract year, month and day to new columns. Finally drop the *timestamp* column. 
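The year/month/day split described above can be sketched with the standard library alone. This is a minimal illustration, not the notebook's pandas code; the format string is an assumption based on the archive's ISO-style timestamps (with the timezone suffix stripped):

```python
from datetime import datetime

def split_timestamp(ts):
    """Split an ISO-style timestamp string into (year, month, day)."""
    parsed = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    return parsed.year, parsed.month, parsed.day

# A timestamp shaped like the archive's column
print(split_timestamp("2016-07-04 15:00:45"))  # → (2016, 7, 4)
```

In the notebook itself, the same extraction happens vectorized via the `.dt` accessor on the converted datetime column.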
###Code #CODE: convert timestamp to datetime twitter_archive_clean['timestamp'] = pd.to_datetime(twitter_archive_clean['timestamp']) #extract year, month and day to new columns twitter_archive_clean['year'] = twitter_archive_clean['timestamp'].dt.year twitter_archive_clean['month'] = twitter_archive_clean['timestamp'].dt.month twitter_archive_clean['day'] = twitter_archive_clean['timestamp'].dt.day #Finally drop timestamp column twitter_archive_clean = twitter_archive_clean.drop('timestamp', 1) #TEST list(twitter_archive_clean) ###Output _____no_output_____ ###Markdown **5. Twitter_archive** - Correc numerators ###Code twitter_archive_clean[['rating_numerator', 'rating_denominator']] = twitter_archive_clean[['rating_numerator','rating_denominator']].astype(float) twitter_archive_clean.info() #CODE #First change numerator and denominators type int to float to allow decimals twitter_archive_clean[['rating_numerator', 'rating_denominator']] = twitter_archive_clean[['rating_numerator','rating_denominator']].astype(float) #Update numerators twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 883482846933004288), 'rating_numerator'] = 13.5 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 786709082849828864), 'rating_numerator'] = 9.75 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 778027034220126208), 'rating_numerator'] = 11.27 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 681340665377193984), 'rating_numerator'] = 9.5 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 680494726643068929), 'rating_numerator'] = 11.26 #TEST with pd.option_context('max_colwidth', 200): display(twitter_archive_clean[twitter_archive_clean['text'].str.contains(r"(\d+\.\d*\/\d+)")] [['tweet_id', 'text', 'rating_numerator', 'rating_denominator']]) ###Output /usr/local/lib/python3.6/dist-packages/pandas/core/strings.py:1954: UserWarning: This pattern has match groups. To actually get the groups, use str.extract. 
return func(self, *args, **kwargs) ###Markdown **6. Twitter_archive** - Correct denominators *a. Manually* ###Code #CODE: Update both numerators and denominators twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 740373189193256964), 'rating_numerator'] = 14 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 740373189193256964), 'rating_denominator'] = 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 682962037429899265), 'rating_numerator'] = 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 682962037429899265), 'rating_denominator'] = 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 666287406224695296), 'rating_numerator'] = 9 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 666287406224695296), 'rating_denominator'] = 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 722974582966214656), 'rating_numerator'] = 13 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 722974582966214656), 'rating_denominator'] = 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 716439118184652801), 'rating_numerator'] = 13.5 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 716439118184652801), 'rating_denominator'] = 10 #CODE: Delete five tweets with no actual ratings twitter_archive_clean = twitter_archive_clean[twitter_archive_clean['tweet_id'] != 832088576586297345] twitter_archive_clean = twitter_archive_clean[twitter_archive_clean['tweet_id'] != 810984652412424192] twitter_archive_clean = twitter_archive_clean[twitter_archive_clean['tweet_id'] != 682808988178739200] twitter_archive_clean = twitter_archive_clean[twitter_archive_clean['tweet_id'] != 835246439529840640] twitter_archive_clean = twitter_archive_clean[twitter_archive_clean['tweet_id'] != 686035780142297088] #TEST: Only the group-dog tweets are left for the programmatic cleaning with pd.option_context('max_colwidth', 200): display(twitter_archive_clean[twitter_archive_clean['rating_denominator'] != 
10][['tweet_id', 'text', 'rating_numerator', 'rating_denominator']]) ###Output _____no_output_____ ###Markdown *b. Programatically* ###Code #CODE: Create a new column with rating in float type to avoid converting all int column to float twitter_archive_clean['rating'] = 10 * twitter_archive_clean['rating_numerator'] / twitter_archive_clean['rating_denominator'].astype(float) #TEST twitter_archive_clean.sample(5) ###Output _____no_output_____ ###Markdown **7. Image_prediction** - Drop 66 jpg_url duplicated ###Code #CODE: Delete duplicated jpg_url image_prediction_clean = image_prediction_clean.drop_duplicates(subset=['jpg_url'], keep='last') #TEST sum(image_prediction_clean['jpg_url'].duplicated()) #CODE: the first true prediction (p1, p2 or p3) will be store in these lists dog_type = [] confidence_list = [] #create a function with nested if to capture the dog type and confidence level # from the first 'true' prediction def image(image_prediction_clean): if image_prediction_clean['p1_dog'] == True: dog_type.append(image_prediction_clean['p1']) confidence_list.append(image_prediction_clean['p1_conf']) elif image_prediction_clean['p2_dog'] == True: dog_type.append(image_prediction_clean['p2']) confidence_list.append(image_prediction_clean['p2_conf']) elif image_prediction_clean['p3_dog'] == True: dog_type.append(image_prediction_clean['p3']) confidence_list.append(image_prediction_clean['p3_conf']) else: dog_type.append('Error') confidence_list.append('Error') #series objects having index the image_prediction_clean column. 
image_prediction_clean.apply(image, axis=1) #create new columns image_prediction_clean['dog_type'] = dog_type image_prediction_clean['confidence_list'] = confidence_list #drop rows that has prediction_list 'error' image_prediction_clean = image_prediction_clean[image_prediction_clean['dog_type'] != 'Error'] #TEST: image_prediction_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1691 entries, 0 to 2073 Data columns (total 14 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1691 non-null int64 1 jpg_url 1691 non-null object 2 img_num 1691 non-null int64 3 p1 1691 non-null object 4 p1_conf 1691 non-null float64 5 p1_dog 1691 non-null bool 6 p2 1691 non-null object 7 p2_conf 1691 non-null float64 8 p2_dog 1691 non-null bool 9 p3 1691 non-null object 10 p3_conf 1691 non-null float64 11 p3_dog 1691 non-null bool 12 dog_type 1691 non-null object 13 confidence_list 1691 non-null object dtypes: bool(3), float64(3), int64(2), object(6) memory usage: 163.5+ KB ###Markdown **9. Image_prediction** - Delete columns that won't be used for analysis ###Code #CODE: print list of image_prediction columns print(list(image_prediction_clean)) #Delete columns image_prediction_clean = image_prediction_clean.drop(['img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], 1) #TEST list(image_prediction_clean) ###Output ['tweet_id', 'jpg_url', 'img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog', 'dog_type', 'confidence_list'] ###Markdown **10. Tweet_json** - keep 2174 original tweets ###Code #CODE: tweet_json_clean = tweet_json_clean[tweet_json_clean['retweeted_status'] == 'Original tweet'] #TEST tweet_json_clean['retweeted_status'].value_counts() ###Output _____no_output_____ ###Markdown **11. 
Tidiness** - Change tweet_id to type int64 in order to merge with the other 2 tables ###Code #CODE: change tweet_id from str to int tweet_json_clean['tweet_id'] = tweet_json_clean['tweet_id'].astype(int) #TEST tweet_json_clean['tweet_id'].dtypes ###Output _____no_output_____ ###Markdown **12. Tidiness** - All tables should be part of one dataset ###Code #CODE: create a new dataframe that merge twitter_archive_clean and #image_prediction_clean df_twitter1 = pd.merge(twitter_archive_clean, image_prediction_clean, how = 'left', on = ['tweet_id']) #keep rows that have picture (jpg_url) df_twitter1 = df_twitter1[df_twitter1['jpg_url'].notnull()] #TEST df_twitter1.info() #CODE: create a new dataframe that merge df_twitter and tweet_json_clean df_twitter = pd.merge(df_twitter1, tweet_json_clean, how = 'left', on = ['tweet_id']) #TEST df_twitter.info() df_twitter['rating_numerator'].value_counts() ###Output _____no_output_____ ###Markdown Storing, Analyzing, and Visualizing Data ###Code #Store the clean DataFrame in a CSV file df_twitter.to_csv('/content/twitter_archive_master.csv', index=False, encoding = 'utf-8') ###Output _____no_output_____ ###Markdown Insight one & visualizationGolden retriever is the most common dog in this dataset. 
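The `value_counts()` ranking used below can be illustrated with a stdlib `Counter`. This is a toy sketch; the breed labels are made-up stand-ins for the `dog_type` column, not the real data:

```python
from collections import Counter

# Toy stand-in for the cleaned dog_type column
dog_types = ["golden_retriever", "pug", "golden_retriever",
             "labrador_retriever", "golden_retriever", "pug"]

# Counter tallies labels; most_common(1) returns the top (label, count) pair
counts = Counter(dog_types)
top_breed, n = counts.most_common(1)[0]
print(top_breed, n)  # → golden_retriever 3
```

`pandas.Series.value_counts()` does the same tally on a column, sorted by frequency.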
###Code df_twitter['dog_type'].value_counts() df_dog_type = df_twitter.groupby('dog_type').filter(lambda x: len(x) >= 25) df_dog_type['dog_type'].value_counts().plot(kind = 'barh') plt.title('Histogram of the Most Rated Dog Type') plt.xlabel('Count') plt.ylabel('Type of dog') fig = plt.gcf() fig.savefig('output.png',bbox_inches='tight'); ###Output _____no_output_____ ###Markdown Insight two ###Code df_dog_type_mean = df_twitter.groupby('dog_type').mean() df_dog_type_mean.head() df_dog_type_sorted = df_dog_type_mean['rating'].sort_values() df_dog_type_sorted print(df_twitter.loc[df_twitter.dog_type == 'Japanese_spaniel', 'url']) df_twitter[df_twitter['dog_type'] == 'golden_retriever'] ###Output _____no_output_____ ###Markdown Insight three & visualization ###Code df_dog_type_count = df_twitter.groupby('dog_type').count() df_dog_type_count dog_type_count = df_dog_type_count['rating'] dog_type_mean = df_dog_type_mean['rating'] dog_type_mean df = pd.DataFrame() df['dog_type_count'] = dog_type_count df['dog_type_mean'] = dog_type_mean df df.plot(x='dog_type_count', y='dog_type_mean', kind='scatter') plt.xlabel('Number of Ratings of a Dog Type') plt.ylabel('Average Rating of Dog Type') plt.title('Average Rating of Dog Type by Number of Ratings of a Dog Type Scatter Plot') fig = plt.gcf() #plt.savefig('X:/' + newName + '.png', fig.savefig('output2.png',bbox_inches='tight'); ###Output _____no_output_____ ###Markdown Insight four & visualization ###Code df_twitter.plot(x='retweet_count', y='rating', kind='scatter') plt.xlabel('Retweet Counts') plt.ylabel('Ratings') plt.title('Retweet Counts by Ratings Scatter Plot') fig = plt.gcf() fig.savefig('output3.png',bbox_inches='tight'); ###Output _____no_output_____ ###Markdown Wrangle and Analyze DataIs the fifth project in the Data Wrangling section in the Udacity [Data Analyst Nanodegree](https://eu.udacity.com/course/data-analyst-nanodegree--nd002) program. 
Context__Goal__: wrangle WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations. The Twitter archive is great, but it only contains very basic tweet information. Additional gathering, then assessing and cleaning is required for "Wow!"-worthy analyses and visualizations.[The weird underside of DoggoLingo](https://blog.oxforddictionaries.com/2017/08/01/doggolingo/) ###Code import requests import os, sys import re import pandas as pd import numpy as np import zipfile import json import twitter_api import matplotlib.pyplot as plt %matplotlib inline DOWNLOADS_DIR = 'downloads' IMAGES_DIR = 'images' def ensure_dir(file_path=DOWNLOADS_DIR): """ Ensure directory exists or create it. :param file_path: directory path :return: """ if not os.path.exists(file_path): os.makedirs(file_path) def download(*urls): """ Download files from the provided URL. :param urls: variable number of URL :return: None """ ensure_dir() for url in urls: url_file = os.path.join(DOWNLOADS_DIR, url.split(os.path.sep)[-1]).replace('-', '_') if not os.path.exists(url_file): response = requests.get(url, allow_redirects=True) with open(url_file, 'wb') as handle: handle.write(response.content) sys.stdout.write('.') sys.stdout.write('\n') def download_img(name, url): """ Download image from the provided URL :param name: name of image file :param url: URL for image :return: None """ ensure_dir(IMAGES_DIR) image_file = os.path.join(IMAGES_DIR, f"{name}.{url.split('.')[-1]}") if not os.path.exists(image_file): response = requests.get(url, allow_redirects=True) with open(image_file, 'wb') as handle: handle.write(response.content) def zip_extract(file): """ Extract alla files from a zip archive. :param file: file name of archive. :return: a list of file names in the archive. 
""" with zipfile.ZipFile(os.path.join(DOWNLOADS_DIR, file), 'r') as zip_ref: zip_ref.extractall(DOWNLOADS_DIR) return zip_ref.namelist() def rename(file_from, file_to, directory=DOWNLOADS_DIR): """ Rename file in the :param file_from: Existing file to rename :param file_to: Target file name :param directory: Source directory, defaults to DOWNLOADS_DIR :return: None """ source = os.path.join(directory, file_from) if os.path.exists(source): os.rename(source, os.path.join(directory, file_to)) def file_exists(filename, directory=DOWNLOADS_DIR): """ Check if the file exists in a optional provided directory. :param filename: name of file :param directory: Source directory, defaults to DOWNLOADS_DIR :return: True if file exists. """ return os.path.exists(os.path.join(directory, filename)) ###Output _____no_output_____ ###Markdown Gather ###Code # download the provided data sources download('https://s3.amazonaws.com/video.udacity-data.com/topher/2018/November/5bf60fe7_image-predictions/image-predictions.tsv', 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/59a4e958_twitter-archive-enhanced/twitter-archive-enhanced.csv') print(os.listdir(DOWNLOADS_DIR)) # Rename file to addhere to the expected project submission rename('image-predictions.tsv', 'image_predictions.tsv') rename('twitter-archive-enhanced.csv', 'twitter_archive_enhanced.csv') print(os.listdir(DOWNLOADS_DIR)) ###Output ['.DS_Store', 'image_predictions.tsv', 'twitter_archive_enhanced.csv', 'tweet_json.txt'] ###Markdown Download Twitter TweetsThis code cell expects that a file `twitter_api.py` exists in the same folder as this notebook.One function `get_api` should exists that returns a fully configured `tweepy.API` instance. 
###Code from tweepy import TweepError tweets_file = os.path.join(DOWNLOADS_DIR, 'tweet_json.txt') failures = [] if not os.path.exists(tweets_file): api = twitter_api.get_api() # read the tweet IDs directly from the archive CSV so this cell does not depend on twitter_df, which is only loaded further down tweet_ids = pd.read_csv(os.path.join(DOWNLOADS_DIR, 'twitter_archive_enhanced.csv'))['tweet_id'] with open(tweets_file, 'w', encoding='utf-8') as file: for tweet_id in tweet_ids.to_list(): try: raw_tweet = api.get_status(tweet_id, tweet_mode='extended') file.write(json.dumps(raw_tweet._json)) file.write('\n') sys.stdout.write('.') except TweepError as te: sys.stdout.write('X') failures.append(f'Tweet ID {tweet_id} failed: {te}') sys.stdout.write('\n') print(failures) else: print(f'The file {tweets_file} already exists. Nothing downloaded.') ###Output ...................X...........................................................................X.....X..X.............X.............X......................X...........................................................................................X............X.....................................X... ###Markdown Load Pandas DataFrames ###Code twitter_df = pd.read_csv(os.path.join(DOWNLOADS_DIR, 'twitter_archive_enhanced.csv')) tweet_df = pd.read_json(os.path.join(DOWNLOADS_DIR, 'tweet_json.txt'), lines=True) image_pred_df = pd.read_csv(os.path.join(DOWNLOADS_DIR, 'image_predictions.tsv'), sep='\t') ###Output _____no_output_____ ###Markdown Assess Data file twitter-archive-enhanced.csv Problems identified in the data set:* tweet_id - convert to string; it is an identity and not used for calculations* timestamp - convert to datetime* in_reply_to_status_id - only 78 posts, consider dropping column* in_reply_to_user_id - only 78 posts, consider dropping column* name - uses the string 'None' as a null value, and the column contains values that are not names Qualitative variables that could be stored in one column:* doggo - None as null value* floofer - None as null value* pupper - None as null value* puppo - None as null value* full_text - same data as __text__ column in `tweet_json.txt` file* source - same data as __source__ column in `tweet_json.txt` file ###Code
twitter_df.head(n=3).transpose() twitter_df.describe(include='all').transpose() twitter_df.info() not_names = twitter_df[twitter_df['name'].str.match('^[a-z]')]['name'] not_names.value_counts().sort_index() print(twitter_df['doggo'].value_counts(), '\n') print(twitter_df['floofer'].value_counts(), '\n') print(twitter_df['pupper'].value_counts(), '\n') print(twitter_df['puppo'].value_counts(), '\n') ensure_dir(IMAGES_DIR) # make sure the images directory exists before saving the figure fig, ax = plt.subplots() ax.hist(twitter_df['rating_numerator'], color='#0C7BDC', histtype='step', log=True, label='Numerator') ax.hist(twitter_df['rating_denominator'], color='#FFC20A', histtype='stepfilled', log=True, label='Denominator') ax.set_xlabel('Rating') ax.set_ylabel('log frequency') ax.legend() fig.savefig(os.path.join(IMAGES_DIR, 'rating-histogram.jpg')) plt.show(); ###Output _____no_output_____ ###Markdown Data file tweet_json.txt Problems identified in the data set. Drop columns where all values are missing:* contributors* coordinates* geo Other issues:* place - only one row has a value* possibly_sensitive - all rows have zero (0)* possibly_sensitive_appealable - all rows have zero (0)* id - convert to string, identity* source - categorical values can be extracted (e.g. iphone, android)* full_text - same data as __text__ column in `twitter_archive_enhanced.csv` file* source - same data as __source__ column in `twitter_archive_enhanced.csv` file ###Code tweet_df.head(n=3).transpose() tweet_df.describe().transpose() tweet_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2339 entries, 0 to 2338 Data columns (total 32 columns): contributors 0 non-null float64 coordinates 0 non-null float64 created_at 2339 non-null datetime64[ns] display_text_range 2339 non-null object entities 2339 non-null object extended_entities 2065 non-null object favorite_count 2339 non-null int64 favorited 2339 non-null bool full_text 2339 non-null object geo 0 non-null float64 id 2339 non-null int64 id_str 2339 non-null int64 in_reply_to_screen_name 77 non-null object in_reply_to_status_id 77
non-null float64 in_reply_to_status_id_str 77 non-null float64 in_reply_to_user_id 77 non-null float64 in_reply_to_user_id_str 77 non-null float64 is_quote_status 2339 non-null bool lang 2339 non-null object place 1 non-null object possibly_sensitive 2203 non-null float64 possibly_sensitive_appealable 2203 non-null float64 quoted_status 24 non-null object quoted_status_id 26 non-null float64 quoted_status_id_str 26 non-null float64 quoted_status_permalink 26 non-null object retweet_count 2339 non-null int64 retweeted 2339 non-null bool retweeted_status 167 non-null object source 2339 non-null object truncated 2339 non-null bool user 2339 non-null object dtypes: bool(4), datetime64[ns](1), float64(11), int64(4), object(12) memory usage: 520.9+ KB ###Markdown Data file image-predictions.tsvProblems identified in data set.* tweet_id - convert to string, identity ###Code image_pred_df.head(n=3) image_pred_df.describe(include='all') image_pred_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown Clean Quality issuesDataFrame twitter_df:* tweet_id - convert to string, identity and not used for calculations* timestamp - convert to datetime* name - None as null value, remove names in columns with only lower characters* client - cleanup tweet client usedDataFrame tweet_df: * id - convert to string, identity and not used for calculations* extract more namesData file image-predictions.tsv* tweet_id - convert to string, identity All dataframes:* drop reply and retweets Tidiness issues* combined doggo, 
floofer, pupper, and puppo into one column* full_text - dropped since it's the same data as __text__ column in `twitter_df` DataFrame Cleaning twitter_dfDefine: * Try to find more dogtionary words from the tweet text. * Fix None values on the four columns: doggo, floofer, pupper, puppo. * Reshape dogtionary names into one column.* Remove retweets. Code and Test ###Code twitter_clean_df = twitter_df.copy() ###Output _____no_output_____ ###Markdown Reshape dogtionary names into one column. ###Code # replace None with NaN dogtionary = twitter_df[['tweet_id', 'doggo', 'floofer', 'pupper', 'puppo']].replace('None', np.NaN) def get_dogtionary(series): values = [val for val in series.dropna().values] names = ', '.join(values) if len(names) > 1: return names twitter_clean_df['dogtionary'] = dogtionary[['doggo', 'floofer', 'pupper', 'puppo']].apply(get_dogtionary, axis=1) twitter_clean_df['dogtionary'].value_counts().sort_index() ###Output _____no_output_____ ###Markdown Extract more Dogtionary words from text column ###Code dogtionary_words = ['doggo', 'floofer', 'floof', 'pupper', 'puppo'] matcher = re.compile(pattern='|'.join(dogtionary_words)) def extract_type(series): names = matcher.findall(series['text']) if len(names) > 0: found = set(names) if series['dogtionary']: exists = [category.strip() for category in series['dogtionary'].split(',')] found.update(exists) series['dogtionary'] = ', '.join(found) return series twitter_clean_df[['text', 'dogtionary']] = twitter_clean_df[['text', 'dogtionary']].apply(extract_type, axis=1) twitter_clean_df['dogtionary'].value_counts().sort_index() # tweet_id is identity column twitter_clean_df['tweet_id'] = twitter_clean_df['tweet_id'].astype(str) ###Output _____no_output_____ ###Markdown Drop retweet columns ###Code retweet_id_isnull = twitter_clean_df['retweeted_status_id'].isnull() retweet_user_isnull = twitter_clean_df['retweeted_status_user_id'].isnull() retweet_timestamp_isnull = 
twitter_clean_df['retweeted_status_timestamp'].isnull() # a row is an original tweet only when all three retweet fields are null, hence & rather than | twitter_clean_df = twitter_clean_df[retweet_id_isnull & retweet_user_isnull & retweet_timestamp_isnull] twitter_clean_df = twitter_clean_df.drop(['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1) twitter_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 15 columns): tweet_id 2175 non-null object in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2175 non-null object source 2175 non-null object text 2175 non-null object expanded_urls 2117 non-null object rating_numerator 2175 non-null int64 rating_denominator 2175 non-null int64 name 2175 non-null object doggo 2175 non-null object floofer 2175 non-null object pupper 2175 non-null object puppo 2175 non-null object dogtionary 402 non-null object dtypes: float64(2), int64(2), object(11) memory usage: 271.9+ KB ###Markdown Drop reply columns ###Code reply_status_isnull = twitter_clean_df['in_reply_to_status_id'].isnull() reply_user_isnull = twitter_clean_df['in_reply_to_user_id'].isnull() twitter_clean_df = twitter_clean_df[reply_status_isnull & reply_user_isnull] twitter_clean_df = twitter_clean_df.drop(['in_reply_to_status_id', 'in_reply_to_user_id'], axis=1) twitter_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 13 columns): tweet_id 2097 non-null object timestamp 2097 non-null object source 2097 non-null object text 2097 non-null object expanded_urls 2094 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 2097 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dogtionary 391 non-null object dtypes: int64(2), object(11) memory usage: 229.4+ KB ###Markdown Remove names that start with lowercase characters ###Code # list
of names that start with only lower letters names_to_remove = twitter_clean_df.name.value_counts(dropna=True).filter(regex='^[a-z]', axis=0) names_to_remove.sum() # remove and replace with NaN twitter_clean_df.name.replace(to_replace='^[a-z]', value=np.nan, regex=True, inplace=True) twitter_clean_df.name.isnull().sum() # clean up unused variables del retweet_id_isnull del retweet_user_isnull del retweet_timestamp_isnull del reply_status_isnull del reply_user_isnull del names_to_remove del dogtionary ###Output _____no_output_____ ###Markdown Extract Twitter client ###Code client_matcher = re.compile(pattern='>.*<') twitter_clean_df['client'] = twitter_clean_df.source.apply(lambda x: client_matcher.findall(x)[0][1:-1]) twitter_clean_df.client.value_counts() ###Output _____no_output_____ ###Markdown Drop columns ###Code columns_to_drop = ['expanded_urls', 'source', 'doggo', 'floofer', 'pupper', 'puppo'] twitter_clean_df.drop(columns_to_drop, axis=1, inplace=True) # Parse timestamp column twitter_clean_df['timestamp'] = pd.to_datetime(twitter_clean_df.timestamp) # test twitter_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 8 columns): tweet_id 2097 non-null object timestamp 2097 non-null datetime64[ns, UTC] text 2097 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 1993 non-null object dogtionary 391 non-null object client 2097 non-null object dtypes: datetime64[ns, UTC](1), int64(2), object(5) memory usage: 147.4+ KB ###Markdown Cleaning tweet_dfDefine:* id - convert to string, identity* extract dog names named...* drop columns Code and Test ###Code # change identity to string tweet_clean_df = tweet_df.copy() tweet_clean_df['id'] = tweet_clean_df['id'].astype(str) # extract dog names named... 
namePattern = re.compile('named ([A-Z]\w+)') def extract_name(text): names = namePattern.findall(text) if names: return names[0] return np.NaN tweet_clean_df['new_name'] = tweet_clean_df['full_text'].apply(extract_name) tweet_clean_df[tweet_clean_df['new_name'].isnull() == False][['id', 'new_name']].set_index('id') tweet_clean_df.new_name.value_counts().sort_index() columns_to_drop = ['contributors', 'coordinates', 'created_at', 'display_text_range', 'entities', 'extended_entities', 'favorited', 'full_text', 'geo', 'id_str', 'in_reply_to_screen_name', 'in_reply_to_status_id', 'in_reply_to_status_id_str', 'in_reply_to_user_id', 'in_reply_to_user_id_str', 'is_quote_status', 'lang', 'place', 'possibly_sensitive', 'possibly_sensitive_appealable', 'quoted_status', 'quoted_status_id', 'quoted_status_id_str', 'quoted_status_permalink', 'retweeted', 'retweeted_status', 'source', 'truncated', 'user'] tweet_clean_df.rename(columns={'id': 'tweet_id'}, inplace=True) tweet_clean_df.drop(columns_to_drop, axis=1, inplace=True) tweet_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2339 entries, 0 to 2338 Data columns (total 4 columns): favorite_count 2339 non-null int64 tweet_id 2339 non-null object retweet_count 2339 non-null int64 new_name 24 non-null object dtypes: int64(2), object(2) memory usage: 73.2+ KB ###Markdown Cleaning of image_pred_dfDefine:* rename prediction category and confidence columns* tweet_id - convert to string, identity Code ###Code # rename columns pred_tidy = image_pred_df.copy() pred_tidy.rename(columns={'p1': 'p1_category', 'p2': 'p2_category', 'p3': 'p3_category', 'p1_conf': 'p1_confidence', 'p2_conf': 'p2_confidence', 'p3_conf': 'p3_confidence'}, inplace=True) # convert identity to string pred_tidy['tweet_id'] = pred_tidy['tweet_id'].astype(str) ###Output _____no_output_____ ###Markdown Test ###Code pred_tidy.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 
columns): tweet_id 2075 non-null object jpg_url 2075 non-null object img_num 2075 non-null int64 p1_category 2075 non-null object p1_confidence 2075 non-null float64 p1_dog 2075 non-null bool p2_category 2075 non-null object p2_confidence 2075 non-null float64 p2_dog 2075 non-null bool p3_category 2075 non-null object p3_confidence 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(1), object(5) memory usage: 152.1+ KB ###Markdown Create Twitter archive master Combine the three DataFrames into one new master DataFrame. ###Code master_df = twitter_clean_df.copy() # add the newly extracted dog names: match on tweet_id (not the positional itertuples index) and assign through .loc so the update actually reaches master_df for row in tweet_clean_df.loc[tweet_clean_df['new_name'].notnull(), ['tweet_id', 'new_name']].itertuples(index=False): mask = master_df['tweet_id'] == row.tweet_id if mask.any(): print(master_df.loc[mask, ['tweet_id', 'name']], row.new_name) master_df.loc[mask, 'name'] = row.new_name master_df['name'] = master_df['name'].replace('None', np.nan) master_df = master_df.merge(tweet_clean_df, on='tweet_id', how='inner') master_df = master_df.merge(pred_tidy, on='tweet_id', how='inner') master_df.drop('new_name', axis=1, inplace=True) master_df.info() REPORT_DATA_SET_FILE = 'twitter_archive_master.csv' master_df.to_csv(REPORT_DATA_SET_FILE, index=False) ###Output _____no_output_____ ###Markdown Analysis ###Code ensure_dir(IMAGES_DIR) ###Output _____no_output_____ ###Markdown Do the different Twitter clients rate dogs differently?
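The aggregation in the next cell groups ratings per client and then flattens the resulting MultiIndex columns. A toy sketch of the same pattern, on made-up clients and ratings (not from the archive):

```python
import pandas as pd

# Made-up ratings for two hypothetical clients
toy = pd.DataFrame({
    'client': ['iphone', 'iphone', 'web', 'web', 'web'],
    'rating_numerator': [12, 10, 14, 9, 11],
})

# Aggregate min/max/median per client; the dict-of-list agg yields
# MultiIndex columns, which are flattened by assigning a plain list
agg = toy.groupby('client', as_index=False).agg({'rating_numerator': ['min', 'max', 'median']})
agg.columns = ['client', 'min', 'max', 'median']
print(agg)
```

Flattening the columns up front makes the later bar-plot calls (`agg['client']`, `agg['min']`, ...) straightforward.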
###Code rating_client_agg = master_df.groupby('client', as_index=False).agg({'rating_numerator': ['min', 'max', 'median']}) rating_client_agg.columns = ['client', 'min', 'max', 'median'] fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, figsize=(10,8)) ax1.bar(rating_client_agg['client'], rating_client_agg['min']) ax1.set(ylabel='Rating score', title='Minimum rating / Twitter client') ax2.bar(rating_client_agg['client'], rating_client_agg['max']) ax2.set(ylabel='Rating score', title='Maximum rating / Twitter client') ax3.bar(rating_client_agg['client'], rating_client_agg['median']) ax3.set(ylabel='Rating score', title='Median rating / Twitter client') plt.subplots_adjust(hspace=0.6) fig.savefig(os.path.join(IMAGES_DIR, 'rating-client-outliers.jpg')) plt.show() ###Output _____no_output_____ ###Markdown How are the different Twitter clients used for setting favorites and retweets? ###Code share_agg = master_df.groupby('client', as_index=False).agg({'favorite_count': ['sum'], 'retweet_count': ['sum']}) share_agg.columns = ['client', 'favorite', 'retweet'] ax = share_agg.plot('client', ['favorite', 'retweet'], kind='bar', log=True) ax.set_title('Favorite and retweet per client') ax.set_xlabel('') ax.tick_params(axis='x', labelrotation=35) ax.set_ylabel('Sum of tweets (log)') plt.tight_layout() plt.savefig(os.path.join(IMAGES_DIR, 'favorite-and-retweet-per-client.jpg')) plt.show() share_agg ###Output _____no_output_____ ###Markdown top 10 most common dog names? 
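The groupby-count in the next cell can equivalently be written with `value_counts`, which already sorts by frequency; a small illustration on invented names:

```python
import pandas as pd

# Invented dog names, just to illustrate counting
names = pd.Series(['Charlie', 'Lucy', 'Charlie', 'Oliver', 'Lucy', 'Charlie'])

# value_counts sorts by frequency, so slicing gives the top-N directly
top2 = names.value_counts()[:2]
print(top2)
```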
###Code # top 10 most common dog names name_agg = master_df[master_df.name.isnull() == False].groupby(['name'])['name'].count() name_agg.sort_values(ascending=False)[:10] print(f'Share of dogs with no name found: {master_df.name.isnull().sum() / master_df.name.shape[0]:.3f}') print(f'Tweet period start: {master_df.timestamp.min()} - end: {master_df.timestamp.max()}') # download random images for the report import random for i in range(3): download_img(f'dog-{i}', random.choice(pred_tidy.jpg_url.to_list())) print(os.listdir(IMAGES_DIR)) ###Output ['rating-client-outliers.jpg', 'dog-1.jpg.jpg', 'dog-0.jpg.jpg', 'dog-2.jpg.jpg', 'favorite-and-retweet-per-client.jpg'] ###Markdown Assert project ready for submission The following files need to be included in the submission of the project ###Code print(f"{file_exists('wrangle_act.ipynb', directory='.')} - wrangle_act.ipynb: code for gathering, assessing, cleaning, analyzing, and visualizing data") print(f"{file_exists('wrangle_report.pdf', directory='.')} - wrangle_report.pdf or wrangle_report.html: documentation for data wrangling steps: gather, assess, and clean") print(f"{file_exists('act_report.pdf', directory='.')} - act_report.pdf or act_report.html: documentation of analysis and insights into final data") print(f"{file_exists('twitter_archive_enhanced.csv')} - twitter_archive_enhanced.csv: file as given") print(f"{file_exists('image_predictions.tsv')} - image_predictions.tsv: file downloaded programmatically") print(f"{file_exists('tweet_json.txt')} - tweet_json.txt: file constructed via API") print(f"{file_exists('twitter_archive_master.csv', directory='.')} - twitter_archive_master.csv: combined and cleaned data") ###Output True - wrangle_act.ipynb: code for gathering, assessing, cleaning, analyzing, and visualizing data True - wrangle_report.pdf or wrangle_report.html: documentation for data wrangling steps: gather, assess, and clean True - act_report.pdf or act_report.html: documentation of analysis and insights
into final data True - twitter_archive_enhanced.csv: file as given True - image_predictions.tsv: file downloaded programmatically True - tweet_json.txt: file constructed via API True - twitter_archive_master.csv: combined and cleaned data ###Markdown Data Wrangling Table of Contents- [Introduction](intro)- [Data Gathering](gather)- [Data Assessing](assessing)- [Data Cleaning](cleaning)- [Data Storing](storing)- [Data Analyzing](analyzing)- [Data Visualizing](visualizing) Introduction&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;This project aims to cover all the data wrangling steps with the WeRateDogs Twitter archive. First, the Twitter archive data was gathered from a .csv file provided by Udacity, a tweet image predictions file was downloaded programmatically from Udacity's servers, and additional tweet information was obtained by querying the Twitter API for the tweet IDs listed in the archive file, with the returned JSON data stored in a .txt file. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Next, the data was assessed programmatically and visually to check for quality and tidiness. All quality and tidiness issues that were detected are written down in this section of the analysis. After noting these issues, the data was cleaned to obtain a high-quality and tidy master DataFrame, which was stored in a final CSV file. Finally, data analysis and visualization were performed, providing some insights about the tweet archive.
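The line-delimited JSON layout described above (one tweet object per line of `tweet_json.txt`) can be read straight into a DataFrame with `pd.read_json(..., lines=True)`. A self-contained sketch on two fake tweet records (IDs and counts invented):

```python
import io
import json
import pandas as pd

# Two fake tweet records in the same one-object-per-line layout as tweet_json.txt
records = [
    {'id': 1, 'retweet_count': 5, 'favorite_count': 20},
    {'id': 2, 'retweet_count': 2, 'favorite_count': 7},
]
lines = '\n'.join(json.dumps(r) for r in records)

# lines=True tells pandas each line is an independent JSON object
df = pd.read_json(io.StringIO(lines), lines=True)
print(df)
```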
Data Gathering ###Code # Import packages import pandas as pd import numpy as np import matplotlib.pyplot as plt import requests import tweepy import os import json import time ###Output _____no_output_____ ###Markdown Enhanced Twitter Archive ###Code # Gather data from the enhanced Twitter archive twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') ###Output _____no_output_____ ###Markdown Image Prediction Data ###Code # Gather data from image prediction # Indicating the url to access, requesting its data from the server and storing it in a variable named response url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) # Writing a new file with the content of the url (opening a file object, naming it as the last part of the url # and saving it with the response content): with open(os.path.join(os.getcwd(), url.split('/')[-1]), mode = 'wb') as file: file.write(response.content) image_prediction = pd.read_csv('image-predictions.tsv', sep = '\t') ###Output _____no_output_____ ###Markdown Twitter API Data ###Code consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) tweet_json_min = [] tweet_del = {} # As querying all IDs is time demanding, for sanity reasons it is printed the tweet ID and used a code timer count = 0 start = time.time() # Set JSON file to store tweets with open('tweet_json.txt', 'w') as outfile: for tweet_id in twitter_archive['tweet_id']: count += 1 print(str(count) + ': ' + str(tweet_id)) try: # Access tweet tweet = api.get_status(tweet_id, tweet_mode='extended') # Save retweet and favorite count retweet_count = tweet.retweet_count favorite_count = tweet.favorite_count # Writing JSON file with each tweet's set of JSON 
data json.dump(tweet._json, outfile) outfile.write('\n') # Write a list with minimum information tweet_json_min.append({'tweet_ids': tweet_id, 'retweet_count': retweet_count,'favorite_count': favorite_count}) # Try to get the exceptions except Exception as e: print('\033[1;31m' + str(tweet_id) + '\033[1;m') print() tweet_del[tweet_id] = e end = time.time() print(end-start) # Create DataFrame from list of dictionaries tweet ids and retweets and favorite counts df_tweet_json_min = pd.DataFrame(tweet_json_min, columns = ['tweet_ids', 'retweet_count', 'favorite_count'], dtype = 'int') # Read txt file and extrac information to a list tweet_list = [] with open('tweet_json.txt', 'r') as json_file: for line in json_file: tweet = json.loads(line) tweet_id = tweet['id'] retweet_count = tweet['retweet_count'] favorite_count = tweet['favorite_count'] tweet_list.append({'tweet_ids': tweet_id, 'retweet_count': retweet_count,'favorite_count': favorite_count}) # Transform list into a dataframe tweet_API = pd.DataFrame(tweet_list, columns = ['tweet_ids', 'retweet_count', 'favorite_count'], dtype = 'int') tweet_API ###Output _____no_output_____ ###Markdown Data Assessing ###Code twitter_archive.head() twitter_archive.info() twitter_archive.source.value_counts() twitter_archive.rating_denominator.sort_values(ascending=False) twitter_archive[twitter_archive.rating_denominator > 10] twitter_archive.name.value_counts() twitter_archive.name.unique() twitter_archive[(twitter_archive['doggo'] != 'None') & ((twitter_archive['puppo'] != 'None') | (twitter_archive['floofer'] != 'None') | (twitter_archive['pupper'] != 'None')) ] # Verify if dog names were extracted correctly twitter_archive.text.str.extract(r'( [A-Z][a-z]*)?( [A-Z][a-z]*)?( [A-Z][a-z]*)?( [A-Z][a-z]*)') # Verify if dog ratings were extracted correctly twitter_archive.text.str.extract(r'[^/] ?(\d*?\.?\d*)/(\d*)') twitter_archive.text.str.extract(r'[^/] ?(\d*?\.?\d*)/(\d*)')[0][:885][(twitter_archive.text.str.extract(r'[^/] 
?(\d*?\.?\d*)/(\d*)')[0][:885].astype('float') != twitter_archive.rating_numerator[:885])] twitter_archive.rating_numerator.unique() twitter_archive.text.str.extract(r'[^/] ?(\d*?\.?\d*)/(\d*)')[0].unique() twitter_archive.rating_numerator.isnull().sum() twitter_archive.rating_denominator.isnull().sum() image_prediction.head() image_prediction.shape image_prediction.sample(10) image_prediction.info() image_prediction.tweet_id.nunique() image_prediction.p1.value_counts() image_prediction.p2.value_counts() image_prediction.p3.value_counts() image_prediction[image_prediction.p1 == 'pole'] tweet_API.head() tweet_API.shape ###Output _____no_output_____ ###Markdown Quality Twitter archive table* We should not include tweets that are retweets.* `expanded_url` has missing entries.* `timestamp` should be a datetime type.* `source` column has confusing data.* The dog `name` is the string 'None' for many entries. Change the data type.* The dog `name` has names identified wrongly.* The dog `name` is identified as 'a' or 'by' in some entries.* Display full content of the text column.* Remove URL from the text column.* `rating_denominator` has really high maximum values (should it be up to 10?).* `rating_numerator` is not accurate for all entries, e.g. those that contain a decimal value.* Dog stages are defined as 'None' instead of NaN. Image prediction table* There are missing values (only 2075 ID entries, fewer than the number of IDs in twitter_archive).* There are images that do not identify dogs.* Multiple types of dog breed spelling. Twitter API table* There are missing values (only 2331 ID entries).* Favorite counts should be an integer.* Retweet counts should be an integer. Tidiness* Only one table is necessary and it could have the following columns: 'tweet_id', 'timestamp', 'source', 'text', 'url', 'rating_numerator', 'rating_denominator', 'name', 'dog_stage', 'dog_prediction', 'retweet_count', 'favorite_count'.
Twitter archive table* `expanded_url` has multiple URLs in one cell.* Dog stage should be just one column. Image prediction table* Need just one column for the true prediction. Data Cleaning ###Code # Copy the datasets before cleaning twitter_archive_clean = twitter_archive.copy() image_prediction_clean = image_prediction.copy() tweet_API_clean = tweet_API.copy() ###Output _____no_output_____ ###Markdown Tidiness* Some tidiness issues are handled first in order to facilitate the handling of missing data and the merging of DataFrames. **`image_prediction` data: Need just one column for the true prediction** Define* Put all dog breeds in just one column, using a for loop and storing the breeds in a list. Then merge this list into the DataFrame.* Delete the unnecessary prediction columns. Code ###Code dog_breed = [] for row in range(len(image_prediction_clean)): if image_prediction_clean.p1_dog[row] == True: dog_breed.append(image_prediction_clean.p1[row]) elif image_prediction_clean.p2_dog[row] == True: dog_breed.append(image_prediction_clean.p2[row]) elif image_prediction_clean.p3_dog[row] == True: dog_breed.append(image_prediction_clean.p3[row]) else: dog_breed.append('not a dog') image_prediction_clean['dog_breed'] = dog_breed image_prediction_clean = image_prediction_clean.loc[:,['tweet_id', 'jpg_url','img_num','dog_breed']] ###Output _____no_output_____ ###Markdown Test ###Code image_prediction_clean.head() ###Output _____no_output_____ ###Markdown **`twitter_archive`: Dog stage should be just one column** Define* Put all dog stages in just one column, using a for loop and storing the stages in a list. Then merge this list into the DataFrame.* Delete the unnecessary stage columns.
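The loop that follows can also be expressed with vectorized pandas operations. A sketch on toy values (lowercase stage words and space-joining, so the output is not byte-identical to the loop's capitalized strings):

```python
import numpy as np
import pandas as pd

# Tiny frame mimicking the four stage columns; the values are invented
toy = pd.DataFrame({
    'doggo':   ['None', 'doggo',  'None'],
    'floofer': ['None', 'None',   'None'],
    'pupper':  ['None', 'pupper', 'None'],
    'puppo':   ['None', 'None',   'puppo'],
})

# Replace the placeholder string 'None' with NaN, join the remaining
# stage words row-wise, and put 'None' back for rows with no stage
stages = (toy.replace('None', np.nan)
             .apply(lambda row: ' '.join(row.dropna()), axis=1)
             .replace('', 'None'))
print(stages.tolist())
```

Either way, rows with more than one stage word (e.g. doggo and pupper) end up with both words in the combined column, which is worth keeping in mind when aggregating by stage later.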
Code ###Code dog_stages = [] for row in range(len(twitter_archive_clean)): if twitter_archive_clean.doggo[row] != 'None': dog_stage_1 = 'Doggo ' else: dog_stage_1 = '' if twitter_archive_clean.floofer[row] != 'None': dog_stage_2 = 'Floofer ' else: dog_stage_2 = '' if twitter_archive_clean.pupper[row] != 'None': dog_stage_3 = 'Pupper ' else: dog_stage_3 = '' if twitter_archive_clean.puppo[row] != 'None': dog_stage_4 = 'Puppo ' else: dog_stage_4 = '' dog_stage = dog_stage_1 + dog_stage_2 + dog_stage_3 + dog_stage_4 dog_stages.append(dog_stage) dog_stage = '' twitter_archive_clean['dog_stages'] = dog_stages twitter_archive_clean.replace({'dog_stages': ''}, 'None', inplace = True) twitter_archive_clean.drop(columns=['doggo','floofer','pupper', 'puppo'], inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.head() ###Output _____no_output_____ ###Markdown **`twitter_archive` : `expanded_url` has multiple urls in one cell** Define* Since all additional urls are repetition of the first one, keep just the first url. Use split function and store just the first value. Code ###Code twitter_archive_clean.expanded_urls = twitter_archive_clean.expanded_urls.str.split(',').str[0] twitter_archive_clean.rename(columns={'expanded_urls': 'urls'}, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.head() ###Output _____no_output_____ ###Markdown Missing Data **`twitter_archive`: we should not include tweets that are retweets.** Define* Delete tweet entries that have information about retweet (`retweeted_status_id`, `retweeted_status_user_id`, `retweeted_status_timestamp`).* Delete these columns. 
Code ###Code twitter_archive_clean = twitter_archive_clean[twitter_archive_clean.retweeted_status_id.isnull() == True] twitter_archive_clean = twitter_archive_clean.drop(['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code list(twitter_archive_clean) ###Output _____no_output_____ ###Markdown **`image_prediction`: there are missing values (there are only 2075 ID entries, less than the ID entries number for twitter_archive)** Define* Merge the two tables and identify tweet IDs with missing recognition* Delete tweet IDs without recognition. Code ###Code twitter_archive_clean = pd.merge(twitter_archive_clean, image_prediction_clean, on=['tweet_id'], how='left') twitter_archive_clean.info() twitter_archive_clean = twitter_archive_clean[(twitter_archive_clean.img_num == 1)|(twitter_archive_clean.img_num == 2)| (twitter_archive_clean.img_num == 3)|(twitter_archive_clean.img_num == 4)] ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean[(twitter_archive_clean.img_num != 1)&(twitter_archive_clean.img_num != 2)& (twitter_archive_clean.img_num != 3)&(twitter_archive_clean.img_num != 4)] twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 2174 Data columns (total 14 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1994 non-null int64 1 in_reply_to_status_id 23 non-null float64 2 in_reply_to_user_id 23 non-null float64 3 timestamp 1994 non-null object 4 source 1994 non-null object 5 text 1994 non-null object 6 urls 1994 non-null object 7 rating_numerator 1994 non-null int64 8 rating_denominator 1994 non-null int64 9 name 1994 non-null object 10 dog_stages 1994 non-null object 11 jpg_url 1994 non-null object 12 img_num 1994 non-null float64 13 dog_breed 1994 non-null object dtypes: float64(3), int64(3), object(8) memory usage: 233.7+ KB ###Markdown **`tweet_API_clean`: there 
missing values (there are only 2331 ID entries)** Define* Merge the tweet_API_clean and twitter_archive tables.* Identify and delete tweet IDs without retweet and favorite data. Code ###Code tweet_API_clean.rename(columns={'tweet_ids': 'tweet_id'}, inplace = True) twitter_archive_clean = pd.merge(twitter_archive_clean, tweet_API_clean, on=['tweet_id'], how='left') twitter_archive_clean.dropna(axis = 0, how = 'any', subset = ['retweet_count', 'favorite_count'], inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.retweet_count.isnull().sum() twitter_archive_clean.favorite_count.isnull().sum() ###Output _____no_output_____ ###Markdown Tidiness* Solving the remaining tidiness issues. **Only one table is necessary and it could have the following columns: 'tweet_id', 'timestamp', 'source', 'text', 'url', 'rating_numerator', 'rating_denominator', 'jpg_url', 'name', 'dog_stage', 'dog_prediction', 'retweet_count', 'favorite_count'** Define* Drop columns that are not considered necessary. Code ###Code twitter_archive_clean = twitter_archive_clean.drop(['in_reply_to_status_id','in_reply_to_user_id', 'img_num'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code list(twitter_archive_clean) ###Output _____no_output_____ ###Markdown Quality **`twitter_archive` : `expanded_url` has missing entries.** Define* Delete rows with missing entries. Code * During the procedure of handling the missing data this problem was already solved. 
Test ###Code twitter_archive_clean.urls.isnull().sum() ###Output _____no_output_____ ###Markdown **`twitter_archive` : `timestamp` should be a datetime type.** Define* Change the timestamp data type by using the method to_datetime Code ###Code twitter_archive_clean.timestamp = pd.to_datetime(twitter_archive_clean.timestamp) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.timestamp.dtype ###Output _____no_output_____ ###Markdown **`twitter_archive` : `source` column has confusing data.** Define* Extract just the meaningful information from the column. Code ###Code twitter_archive_clean.source = twitter_archive_clean.source.str.extract(r'>(.*)<') ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.sample(5) ###Output _____no_output_____ ###Markdown **`twitter_archive` : the dog `name` is 'None' for many entries. Change data type.** Define* Replace the string 'None' with np.nan. Code ###Code twitter_archive_clean.name = twitter_archive_clean.name.replace('None', np.NaN) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean[twitter_archive_clean.name == 'None'] ###Output _____no_output_____ ###Markdown **`twitter_archive` : the dog `name` has names identified wrongly.** Define* Identify the dog name in the text and modify it. Code ###Code # Check names that had 'a' as an entry names = twitter_archive_clean[twitter_archive_clean.name == 'a'].text.str.extract(r'named? ([A-Z][a-z]*)').replace(np.nan,'a') twitter_archive_clean.loc[twitter_archive_clean.name == 'a', 'name'] = names[0] # Check names that had NaN as an entry and were preceded by 'named' names = twitter_archive_clean[twitter_archive_clean.name.isnull()].text.str.extract(r'named? ([A-Z][a-z]*)') twitter_archive_clean.loc[twitter_archive_clean.name.isna(), 'name'] = names[0] # Check names that had NaN as an entry and were preceded by 'is' names = twitter_archive_clean[twitter_archive_clean.name.isnull()].text.str.extract(r' is? 
([A-Z][a-z]* ?[A-Z]?[a-z]*? ?[A-Z]?[a-z]*?)') twitter_archive_clean.loc[twitter_archive_clean.name.isna(), 'name'] = names[0] ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.loc[twitter_archive_clean.name == 'a'] twitter_archive_clean.loc[twitter_archive_clean.name.isna()] ###Output _____no_output_____ ###Markdown **`twitter_archive` : the dog `name` is sometimes identified as 'a' or 'by'.** Define* Change these names to NaN. Code ###Code twitter_archive_clean.name = twitter_archive_clean.name.replace('a', np.NaN) twitter_archive_clean.name = twitter_archive_clean.name.replace('by', np.NaN) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean[twitter_archive_clean.name == 'a'] twitter_archive_clean[twitter_archive_clean.name == 'by'] ###Output _____no_output_____ ###Markdown **`twitter_archive` : display full content of `text` column** Define* Modify settings. Code ###Code pd.set_option('display.max_colwidth', None) ###Output _____no_output_____ ###Markdown Code ###Code twitter_archive_clean.sample(3) ###Output _____no_output_____ ###Markdown **`twitter_archive` : remove the url from the `text` column.** Define* Remove the url by slicing off the last 23 characters of each text (the length of a t.co short link). Code ###Code twitter_archive_clean.text = twitter_archive_clean.text.apply(lambda x: x[:-23]) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.sample(5) ###Output _____no_output_____ ###Markdown **`twitter_archive` : `rating_denominator` has really high maximum values.** Define* Remove denominators that are different from 10. This allows more accurate comparisons between ratings. 
Code ###Code twitter_archive_clean = twitter_archive_clean[twitter_archive_clean.rating_denominator == 10] ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean[twitter_archive_clean.rating_denominator != 10] ###Output _____no_output_____ ###Markdown **`twitter_archive` : `rating_numerator` is not accurate for all entries, i.e. those whose rating contains a decimal point.** Define* Re-extract the rating numerators from the text, keeping decimal values intact Code ###Code numerator = twitter_archive_clean.text.str.extract(r'(\d+\.?\d*)/') # Keep the extracted values as floats so decimal ratings such as 13.5 survive numerator = pd.to_numeric(numerator[0]) twitter_archive_clean.loc[:, 'rating_numerator'] = numerator ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.rating_numerator.value_counts() ###Output _____no_output_____ ###Markdown **`twitter_archive` : dog stages are defined as None instead of NaN.** Define* Replace None with NaN. Code ###Code twitter_archive_clean.loc[:,'dog_stages'] = twitter_archive_clean.dog_stages.replace('None', np.nan) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.dog_stages.unique() twitter_archive_clean.dog_stages.value_counts() ###Output _____no_output_____ ###Markdown **`image_prediction` : there are images that do not identify dogs.** Define* Substitute 'not a dog' with NaN. Code ###Code twitter_archive_clean.loc[:,'dog_breed'] = twitter_archive_clean.dog_breed.replace('not a dog', np.nan) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean[twitter_archive_clean.dog_breed == 'not a dog'] ###Output _____no_output_____ ###Markdown **`image_prediction` : inconsistent dog breed spelling.** Define* Set all dog breeds to the same standard, capitalized and with spaces instead of underscores. 
Code ###Code twitter_archive_clean['dog_breed'] = twitter_archive_clean.dog_breed.str.replace('_', ' ').str.title() ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean['dog_breed'].unique() ###Output _____no_output_____ ###Markdown **`twitter_API` : favorite counts should be integer.** Define* Change the data type to integer. Code ###Code twitter_archive_clean['favorite_count'] = twitter_archive_clean.favorite_count.astype('int64') ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.favorite_count.dtype ###Output _____no_output_____ ###Markdown **`twitter_API` : retweet counts should be integer.** Define* Change the data type to integer. Code ###Code twitter_archive_clean['retweet_count'] = twitter_archive_clean.retweet_count.astype('int64') ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.retweet_count.dtype ###Output _____no_output_____ ###Markdown Data Storing ###Code twitter_archive_clean.to_csv('twitter_archive_master.csv') ###Output _____no_output_____ ###Markdown Data Analyzing ###Code twitter_archive_clean.name.value_counts() ###Output _____no_output_____ ###Markdown * Oliver is the most common dog name in the dataset. ###Code twitter_archive_clean.dog_stages.value_counts() round(twitter_archive_clean.dog_stages.value_counts()[0]/twitter_archive_clean.dog_stages.value_counts().sum()*100, 2) ###Output _____no_output_____ ###Markdown * The most common dog stage is pupper. ###Code twitter_archive_clean.dog_breed.value_counts() round(twitter_archive_clean.dog_breed.value_counts()[0]/twitter_archive_clean.dog_breed.value_counts().sum()*100, 2) ###Output _____no_output_____ ###Markdown * The most common dog breed is golden retriever. ###Code twitter_archive_clean.describe() ###Output _____no_output_____ ###Markdown * 75% of tweets have more than 1779 favorites and 549 retweets. 
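The 75%-of-tweets reading above comes from the quartiles reported by `describe()`; the same cut point can be pulled directly with `quantile()`. A minimal sketch on made-up counts (not the real data):

```python
import pandas as pd

# Hypothetical favorite counts; describe() would report their 25/50/75% quartiles
counts = pd.Series([100, 549, 800, 1779, 5000], name="favorite_count")

# The 25th percentile: 75% of the values lie above this cut point
q25 = counts.quantile(0.25)
print(q25)
```

Note that "75% of tweets have more than X favorites" corresponds to the 25th percentile, the `25%` row of `describe()`.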
###Code twitter_archive_clean[twitter_archive_clean.favorite_count >= 10000].dog_breed.value_counts() round(twitter_archive_clean[twitter_archive_clean.favorite_count >= 10000].dog_breed.value_counts()[0]/twitter_archive_clean[twitter_archive_clean.favorite_count >= 10000].dog_breed.value_counts().sum()*100, 2) ###Output _____no_output_____ ###Markdown * The most favorited tweets have a golden retriever in the picture. ###Code twitter_archive_clean[twitter_archive_clean.favorite_count >= 10000].dog_stages.value_counts() round(twitter_archive_clean[twitter_archive_clean.favorite_count >= 10000].dog_stages.value_counts()[0]/twitter_archive_clean[twitter_archive_clean.favorite_count >= 10000].dog_stages.value_counts().sum()*100,2) ###Output _____no_output_____ ###Markdown * The most favorited tweets are classified with the 'doggo' dog stage. Data Visualizing ###Code twitter_archive_clean_date = twitter_archive_clean.copy() twitter_archive_clean_date.set_index('timestamp', inplace = True) plt.figure(figsize=(15,10)) plt.plot(twitter_archive_clean_date.retweet_count, color = 'b', alpha = 0.5, label = 'Retweets') plt.plot(twitter_archive_clean_date.favorite_count, color = 'r', alpha = 0.5, label = 'Favorites') plt.xlabel('Date') plt.ylabel('Number of interactions') plt.title('Retweets and Favorites over time') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown * We can notice an increasing trend in the number of favorites over time, while the number of retweets remains roughly constant.* This could possibly indicate that people prefer to interact by clicking the favorite button. 
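A possible refinement of the time-series plot above (an optional technique, not part of the original notebook): aggregating the DatetimeIndex to monthly means before plotting smooths the day-to-day noise and makes the trend easier to see. A self-contained sketch with fabricated daily counts:

```python
import pandas as pd

# Fabricated daily favorite counts over three months (2016 is a leap year)
idx = pd.date_range("2016-01-01", periods=90, freq="D")
daily = pd.Series(range(90), index=idx, name="favorite_count")

# Collapse the daily index to monthly periods and average within each month
monthly = daily.groupby(daily.index.to_period("M")).mean()
print(monthly.tolist())
```

Plotting `monthly` instead of the raw series gives one point per month rather than one per tweet.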
###Code dog_breed = twitter_archive_clean_date.dog_breed.value_counts().index dog_breed_values = twitter_archive_clean_date.dog_breed.value_counts().values f, ax = plt.subplots(figsize=(18,5)) plt.bar(dog_breed[:10], dog_breed_values[:10], alpha = 0.7) plt.xlabel('Breeds') plt.ylabel('Number of tweets') plt.xticks(rotation='45') plt.title('Top 10 dog breeds mentioned in different tweets'); dog_name = twitter_archive_clean_date.name.value_counts().index dog_name_values = twitter_archive_clean_date.name.value_counts().values f, ax = plt.subplots(figsize=(18,5)) plt.bar(dog_name[:10], dog_name_values[:10], alpha = 0.7) plt.xlabel('Names') plt.ylabel('Number of tweets') plt.xticks(rotation='45') plt.title('Top 10 dog names mentioned in different tweets'); dog_rating = twitter_archive_clean_date.rating_numerator.value_counts().index dog_rating_values = twitter_archive_clean_date.rating_numerator.value_counts().values f, ax = plt.subplots(figsize=(18,5)) plt.bar(np.arange(0,len(dog_rating)), dog_rating_values, alpha = 0.7) plt.xlabel('Rating numerator') plt.ylabel('Number of tweets') plt.xticks(np.arange(0,len(dog_rating)),dog_rating , rotation='45') plt.title('Dog rating numerators given in different tweets'); twitter_archive_clean_date.groupby('rating_numerator').rating_numerator.count() high_dog_rating = twitter_archive_clean_date.query('rating_numerator >= 12').dog_breed.value_counts().index high_dog_rating_values = twitter_archive_clean_date.query('rating_numerator >= 12').dog_breed.value_counts().values f, ax = plt.subplots(figsize=(18,5)) plt.bar(high_dog_rating[:10], high_dog_rating_values[:10], alpha = 0.7) plt.xlabel('Breed') plt.ylabel('Number of tweets') plt.xticks(rotation='45') plt.title('Top 10 dog breeds that received 12 or more as rating numerator '); tweet_source = twitter_archive_clean_date.source.value_counts().index tweet_source_value = twitter_archive_clean_date.source.value_counts().values f, ax = plt.subplots(figsize=(18,5)) plt.bar(tweet_source, 
tweet_source_value, alpha = 0.7) plt.xlabel('Tweet Source') plt.ylabel('Number of tweets') plt.title('Usage of different tweet sources'); twitter_archive_clean_date.columns plt.figure(figsize=(15,10)) plt.scatter(x = twitter_archive_clean_date.retweet_count, y=twitter_archive_clean_date.favorite_count, color = 'b', alpha = 0.5) plt.xscale('log') plt.yscale('log') plt.xlabel('Number of retweets') plt.ylabel('Number of favorites') plt.title('Relation between retweets and favorites') plt.show(); dates = twitter_archive_clean_date.groupby([twitter_archive_clean_date.index.date]).tweet_id.count() plt.figure(figsize=(15,10)) plt.plot(dates, color = 'b', alpha = 0.5) plt.xlabel('Date') plt.ylabel('Number of tweets') plt.title('Number of tweets per day') plt.show(); ###Output _____no_output_____ ###Markdown Wrangling and Analysis of WeRateDogs' Tweets Udacity - Data Analyst Nanodegree Author: Leonardo Simões Table of Contents- [Introduction](intro)- [Data Wrangling](data_wrangling) - [Gather](gather)- [Assessing of Tidiness](assessing_tidiness)- [Clean of Tidiness](clean_tidiness)- [Assessing of Quality](assessing_quality)- [Clean of Data Quality](clean_quality)- [Exploratory Analysis](eda) Introduction The dataset used is a log of tweets by the Twitter user @dog_rates, known as WeRateDogs. This Twitter profile rates dogs, assigning scores and kind comments. The main objective of this project is to assess and clean the provided dataset, but some analyses and visualizations will also be carried out. ###Code import numpy as np import pandas as pd import json import requests import os from functools import reduce import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline # suppress warnings from final output import warnings warnings.simplefilter("ignore") ###Output _____no_output_____ ###Markdown Data Wrangling The assessing and cleaning steps will each be divided into two parts, for Data Tidiness and Data Quality. 
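The tidiness half of that split concerns table structure rather than values. As a toy illustration (hypothetical columns, echoing the dog-stage dummy columns assessed later), `melt` collapses a one-variable-spread-over-several-columns layout into a single column:

```python
import pandas as pd

# Hypothetical untidy frame: one variable (stage) spread across dummy columns
wide = pd.DataFrame({"tweet_id": [1, 2],
                     "doggo": ["doggo", "None"],
                     "pupper": ["None", "pupper"]})

# Gather the dummy columns into key/value pairs, then drop the 'None' fillers
tidy = wide.melt(id_vars="tweet_id", var_name="stage_col", value_name="stage")
tidy = tidy[tidy["stage"] != "None"][["tweet_id", "stage"]]
print(sorted(tidy["stage"]))
```

The result has one 'stage' value per row, which is the shape the tidiness cleaning below works toward.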
Gather * Dataset 1: ###Code #Open dataset 1 df = pd.read_csv('twitter-archive-enhanced.csv') df.head() ###Output _____no_output_____ ###Markdown * Dataset 2: ###Code #Dataset 2 url and name information url_tsv = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' name_tsv = 'image-predictions.tsv' #Download dataset 2 programmatically response = requests.get(url_tsv) with open(name_tsv, 'wb') as file: file.write(response.content) #Open dataset 2 df2 = pd.read_csv(name_tsv, sep='\t') df2.head() ###Output _____no_output_____ ###Markdown Dataset 3: ###Code #Reads JSON file with .txt extension with open('tweet-json.txt','r') as json_file: lines = json_file.readlines() lines = [line.strip("\n") for line in lines] lines = ''.join(lines).split('}{') data_json = [json.loads('%s}' % line) if idx == 0 else json.loads('{%s' % line) if idx == len(lines)-1 else json.loads('{%s}' % line) for idx, line in enumerate(lines)] #Open dataset 3 df3 = pd.DataFrame(data_json) df3.head() ###Output _____no_output_____ ###Markdown Assessing of Tidiness First, the verification will be only of the structures (rows, size, columns) of the data frames. 
###Code #Number of lines / observations of each dataset df.shape[0], df2.shape[0], df3.shape[0] #Number of columns of each dataset df.shape[1], df2.shape[1], df3.shape[1] ###Output _____no_output_____ ###Markdown Checking the columns of the dataframes: ###Code #Features (columns) of each dataset df.columns, df2.columns, df3.columns ###Output _____no_output_____ ###Markdown - These columns and their information should not be separated across three different dataframes.- Most df3 columns are not absolutely necessary, as specified in the project proposal. Checking the columns common to the dataframes: ###Code #Common columns between df and df2 np.intersect1d(df.columns.values, df2.columns.values) #Common columns between df and df3 np.intersect1d(df.columns.values, df3.columns.values) #Common columns between df2 and df3 np.intersect1d(df2.columns.values, df3.columns.values) ###Output _____no_output_____ ###Markdown - There should be a column in df3 with the name 'tweet_id'. The column is present with the name 'id'. Checking the 'doggo', 'floofer', 'pupper', 'puppo' columns: ###Code df.sample(5)[['doggo', 'floofer', 'pupper', 'puppo']].head() df['doggo'].value_counts() df['floofer'].value_counts() df['pupper'].value_counts() df['puppo'].value_counts() ###Output _____no_output_____ ###Markdown - The 'doggo', 'floofer', 'pupper', 'puppo' columns should not exist unless they were meant as dummy variables for training a machine learning model. There should be only one 'stages' column for these values. ###Code #Number of instances that have at least 1 stage ((df['doggo'] == "doggo") + (df['floofer']=="floofer") + (df['pupper']=='pupper') + (df['puppo']=='puppo')).value_counts() #Number of instances that have exactly 1 stage ((df['doggo'] == "doggo") ^ (df['floofer']=="floofer") ^ (df['pupper']=='pupper') ^ (df['puppo']=='puppo')).value_counts() ###Output _____no_output_____ ###Markdown - In 380 lines there is at least one stage. Exactly 1 stage appears in 366 lines. 
In 14 lines there are 2 or more stages. Tidiness:- Discard unnecessary columns in df3_copy.- The 'id' column in df3_copy should be 'tweet_id'.- All columns and information should be on a single dataframe.- A new 'stages' column must be created.- The 'doggo', 'floofer', 'pupper', 'puppo' columns must be removed. Clean (Tidiness) ###Code df1_copy, df2_copy, df3_copy = df.copy(), df2.copy(), df3.copy() ###Output _____no_output_____ ###Markdown Discard unnecessary columns in df3_copy. Define- Remove unnecessary columns in df3_copy, leaving only 'id', 'retweet_count', 'favorite_count':In the project proposal it was defined that, at least, the columns 'id', 'retweet_count', 'favorite_count' would be necessary. Code ###Code df3_copy = df3_copy[['id', 'retweet_count', 'favorite_count']] ###Output _____no_output_____ ###Markdown Test ###Code df3_copy.head() ###Output _____no_output_____ ###Markdown The 'id' column in df3_copy should be 'tweet_id'. Define- Rename the 'id' column to 'tweet_id' in df3_copy: Code ###Code df3_copy.rename(columns={'id':'tweet_id'}, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code assert 'tweet_id' in df3_copy.columns and 'id' not in df3_copy.columns df3_copy.head()['tweet_id'] ###Output _____no_output_____ ###Markdown All columns and information should be on a single dataframe. (1) Define- Joining the df1_copy and df2_copy dataframes: Code ###Code df_clean = pd.merge(df1_copy, df2_copy, on = 'tweet_id', how = 'left') ###Output _____no_output_____ ###Markdown Test ###Code df_clean.head() df_clean.shape[1] df_clean.columns ###Output _____no_output_____ ###Markdown All columns and information should be on a single dataframe. 
(2) Define- Joining the df_clean (df1_copy and df2_copy) and df3_copy dataframes: Code ###Code df_clean = pd.merge(df_clean, df3_copy, on = 'tweet_id', how = 'left') ###Output _____no_output_____ ###Markdown Test ###Code df_clean.head() df_clean.shape[1] df_clean.columns ###Output _____no_output_____ ###Markdown A new 'stages' column must be created. DefineCreate a stage column for the values 'doggo', 'floofer', 'pupper', 'puppo'. Code ###Code is_doggo = df_clean.doggo == 'doggo' is_floofer = df_clean.floofer == 'floofer' is_pupper = df_clean.pupper == 'pupper' is_puppo = df_clean.puppo == 'puppo' #Creating 'stages' column with default value 'none' df_clean['stages'] = 'none' #1 stage df_clean.loc[is_doggo, 'stages'] = 'doggo' df_clean.loc[is_floofer, 'stages'] = 'floofer' df_clean.loc[is_pupper, 'stages'] = 'pupper' df_clean.loc[is_puppo, 'stages'] = 'puppo' #2 stages df_clean.loc[is_doggo & is_floofer, 'stages'] = 'doggo, floofer' df_clean.loc[is_doggo & is_pupper,'stages'] = 'doggo, pupper' df_clean.loc[is_doggo & is_puppo, 'stages'] = 'doggo, puppo' df_clean.loc[is_floofer & is_pupper, 'stages'] = 'floofer, pupper' df_clean.loc[is_floofer & is_puppo, 'stages'] = 'floofer, puppo' df_clean.loc[is_pupper & is_puppo, 'stages'] = 'pupper, puppo' #3 stages df_clean.loc[is_doggo & is_floofer & is_pupper, 'stages'] = 'doggo, floofer, pupper' df_clean.loc[is_doggo & is_floofer & is_puppo, 'stages'] = 'doggo, floofer, puppo' df_clean.loc[is_doggo & is_pupper & is_puppo, 'stages'] = 'doggo, pupper, puppo' df_clean.loc[is_floofer & is_pupper & is_puppo, 'stages'] = 'floofer, pupper, puppo' #4 stages df_clean.loc[is_doggo & is_floofer & is_pupper & is_puppo, 'stages'] = 'doggo, floofer, pupper, puppo' ###Output _____no_output_____ ###Markdown Test ###Code df_clean.sample(5)['stages'] assert 'stages' in df_clean.columns.values assert df_clean['stages'].isna().sum() == 0 df_clean['stages'].value_counts() ###Output _____no_output_____ ###Markdown The 'doggo', 'floofer', 
'pupper', 'puppo' columns must be removed. Define Remove the 'doggo', 'floofer', 'pupper', 'puppo' columns. Code ###Code df_clean.drop(columns=['doggo', 'floofer', 'pupper', 'puppo'], inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code assert 'doggo' not in df_clean.columns.values assert 'floofer' not in df_clean.columns.values assert 'pupper' not in df_clean.columns.values assert 'puppo' not in df_clean.columns.values ###Output _____no_output_____ ###Markdown * The dataframes df, df2 and df3 will no longer be used, so they can be deleted. ###Code del df, df2, df3 ###Output _____no_output_____ ###Markdown * The dataframes df1_copy, df2_copy and df3_copy will no longer be used, so they can be deleted. ###Code del df1_copy, df2_copy, df3_copy ###Output _____no_output_____ ###Markdown Assessing of Data Quality* Checking general dataframe information: ###Code df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2356 entries, 0 to 2355 Data columns (total 27 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null object 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 jpg_url 2075 non-null object 14 img_num 2075 non-null float64 15 p1 2075 non-null object 16 p1_conf 2075 non-null float64 17 p1_dog 2075 non-null object 18 p2 2075 non-null object 19 p2_conf 2075 non-null float64 20 p2_dog 2075 non-null object 21 p3 2075 non-null object 22 p3_conf 2075 non-null float64 23 p3_dog 2075 non-null object 24 retweet_count 2354 non-null float64 25 favorite_count 2354 non-null float64 26 
stages 2356 non-null object dtypes: float64(10), int64(3), object(14) memory usage: 515.4+ KB ###Markdown * Checking values of numeric features: ###Code #Checking statistics for quantitative numerical features num_columns = ['rating_numerator', 'rating_denominator','img_num','p1_conf', 'p2_conf', 'p3_conf', 'retweet_count', 'favorite_count'] df_clean[num_columns].describe() ###Output _____no_output_____ ###Markdown * Checking which columns have missing values (NaN): ###Code df_clean.isnull().sum() ###Output _____no_output_____ ###Markdown * Displaying columns have missing values (NaN) and the amount of these values: ###Code #Count of NaN values more than 0 na_values = df_clean.isna().sum() na_values = na_values[na_values > 0] na_values #Plot of the NaN values per feature plt.figure(figsize=(10,8)) na_values.sort_values().plot(kind='barh', position=0, color='blue') plt.title('Missing values by feature'); plt.xlabel('Count of missing values (NaN)'); plt.ylabel('Feature'); ###Output _____no_output_____ ###Markdown Columns with NaN values must be treated in some way, like filling with some value. * Checking for duplicate whole lines: ###Code df_clean.duplicated().sum() ###Output _____no_output_____ ###Markdown No duplicate lines. * Evaluating the 'source' column: ###Code df_clean['source'].value_counts() ###Output _____no_output_____ ###Markdown The values in the source column will be the values between the anchor tags. * Evaluating the 'rating_denominator' column: ###Code df_clean.rating_denominator.unique() df_clean.query('rating_denominator == 0')[['rating_numerator', 'rating_denominator']] ###Output _____no_output_____ ###Markdown The value of 'rating_denominator' must not be 0, because calculating the grade is not feasible with that. 
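Why the zero denominator flagged above matters downstream: dividing by it yields infinity rather than a usable rating, which would poison any later averages. A minimal demonstration with toy numbers:

```python
import numpy as np
import pandas as pd

ratings = pd.DataFrame({"rating_numerator": [13, 9],
                        "rating_denominator": [10, 0]})

# pandas follows NumPy semantics here: a nonzero value divided by 0 becomes inf
ratings["rating"] = ratings["rating_numerator"] / ratings["rating_denominator"]
print(ratings["rating"].tolist())
```

Any mean or quantile computed over a column containing `inf` is itself `inf`, hence the cleaning step for this issue below.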
* Evaluating the 'stage' column: ###Code df_clean['stages'].value_counts() ###Output _____no_output_____ ###Markdown * Evaluating the 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog' columns: ###Code p_columns = ['stages', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'] df_clean[p_columns].head() df_clean.query('p1.isnull()', engine='python')[p_columns].head() #Query used to check the number of times that all columns in p_columns are NaN query_ps_is_null = 'p1.isnull() & p2.isnull() & p3.isnull() & ' query_ps_is_null = query_ps_is_null + 'p1_conf.isnull() & p2_conf.isnull() & p3_conf.isnull() & ' query_ps_is_null = query_ps_is_null + 'p1_dog.isnull() & p2_dog.isnull() & p3_dog.isnull()' query_ps_is_null #number of times that all columns in p_columns are NaN len(df_clean.query(query_ps_is_null, engine='python')) ###Output _____no_output_____ ###Markdown When one of the columns 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog' is NaN, the others are also. * Evaluating the 'retweet_count' and 'favorite_count' columns: ###Code df_clean[['retweet_count', 'favorite_count']].describe().min() df_clean[['retweet_count', 'favorite_count']].isna().sum() ###Output _____no_output_____ ###Markdown The 'retweet_count', 'favorite_count', as seen above had NaN values twice each. The smallest value for these columns is 0. ###Code df_clean['retweet_count'].dtype df_clean['favorite_count'].dtype ###Output _____no_output_____ ###Markdown These columns with suffix 'count' should not be of type float, but int. * Evaluating the 'img_num' column: ###Code df_clean['img_num'].min(), df_clean['img_num'].max() df_clean['img_num'].unique() df_clean['img_num'].dtype ###Output _____no_output_____ ###Markdown This column with suffix 'num' should not be of type float, but int. The column values are {1., 2., 3. and 4.} but should be {1, 2, 3, 4}. 
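An alternative to the fill-then-cast route listed above (a design option, not what this notebook does): pandas' nullable 'Int64' extension dtype can hold integers and missing values side by side, avoiding the float64 upcast entirely. A small sketch:

```python
import pandas as pd

# NaN forces a plain numpy column to float64 (hence values like 1., 2., 3.)
counts = pd.Series([12.0, None, 7.0])

# The capital-I 'Int64' extension dtype keeps integers and <NA> together
as_int = counts.astype("Int64")
print(as_int.dtype, as_int.dropna().tolist())
```

This keeps the values displayed as {1, 2, 3, 4} without first dropping or filling the missing rows.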
Quality:- The 'rating_denominator' column must not have a value of 0.- The values in the source column will be the values between the anchor tags.- The 'retweet_count' column must have its NaN values filled in.- The 'favorite_count' column must have its NaN values filled in.- Unrated tweets, considering columns p1, p2 and p3, should be removed.- The column type 'img_num' should be int64 and not float64.- The column type 'favorite_count' should be int64 and not float64.- The column type 'retweet_count' should be int64 and not float64.- Only lines that do not represent a retweet should be kept.- If confirmed that the tweets are not retweets, the columns for retweets are unnecessary. Clean of Data Quality The 'rating_denominator' column must not have a value of 0. DefineReplace the 0 values of rating_denominator with 1. Code ###Code df_clean.loc[df_clean.rating_denominator == 0, 'rating_denominator'] = 1 ###Output _____no_output_____ ###Markdown Test ###Code assert 0 not in df_clean.rating_denominator.unique() ###Output _____no_output_____ ###Markdown The values in the source column will be the values between the anchor tags. DefineChange the values in 'source' to just the desired part of the string. 
Code ###Code old_source = '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>' new_source = 'Twitter for iPhone' df_clean.loc[df_clean.source == old_source, 'source'] = new_source old_source = '<a href="http://vine.co" rel="nofollow">Vine - Make a Scene</a>' new_source = 'Vine - Make a Scene' df_clean.loc[df_clean.source == old_source, 'source'] = new_source old_source = '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>' new_source = 'Twitter Web Client' df_clean.loc[df_clean.source == old_source, 'source'] = new_source old_source = '<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>' new_source = 'TweetDeck' df_clean.loc[df_clean.source == old_source, 'source'] = new_source ###Output _____no_output_____ ###Markdown Test ###Code df_clean['source'].value_counts() ###Output _____no_output_____ ###Markdown The 'retweet_count' column must have its NaN values filled in. DefineFill NaN values of The 'retweet_count' with 0 Code ###Code df_clean['retweet_count'].fillna(0, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code assert df_clean['retweet_count'].isna().sum() == 0 ###Output _____no_output_____ ###Markdown The 'favorite_count' column must have its NaN values filled in. DefineFill NaN values of The 'favorite_count' with 0 Code ###Code df_clean['favorite_count'].fillna(0, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code assert df_clean['favorite_count'].isna().sum() == 0 ###Output _____no_output_____ ###Markdown Unrated tweets, considering columns p1, p2 and p3, should be removed. DefineDrop the lines where p1 is NaN. This should make the columns 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog' do not have NaN values. 
Code ###Code df_clean.head() p_columns = ['p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'] df_clean.dropna(subset=p_columns, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code p_columns = ['p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'] for column in p_columns: assert df_clean[column].isna().sum() == 0 df_clean.shape ###Output _____no_output_____ ###Markdown The column type 'img_num' should be int64 and not float64. DefineChange column type 'img_num' from float64 to int64. Code ###Code df_clean['img_num'] = df_clean['img_num'].astype(np.int64) ###Output _____no_output_____ ###Markdown Test ###Code assert df_clean['img_num'].dtype == np.dtype(np.int64) ###Output _____no_output_____ ###Markdown The column type 'favorite_count' should be int64 and not float64. DefineChange column type 'favorite_count' from float64 to int64. Code ###Code df_clean['favorite_count'] = df_clean['favorite_count'].astype(np.int64) ###Output _____no_output_____ ###Markdown Test ###Code assert df_clean['favorite_count'].dtype == np.dtype(np.int64) ###Output _____no_output_____ ###Markdown The column type 'retweet_count' should be int64 and not float64. DefineChange column type 'retweet_count' from float64 to int64. Code ###Code df_clean['retweet_count'] = df_clean['retweet_count'].astype(np.int64) ###Output _____no_output_____ ###Markdown Test ###Code assert df_clean['retweet_count'].dtype == np.dtype(np.int64) ###Output _____no_output_____ ###Markdown Only lines that do not represent a retweet should be kept. DefineRemove lines where 'retweeted_status_id' is not null. 
Code ###Code df_clean = df_clean[df_clean.retweeted_status_id.isnull()] ###Output _____no_output_____ ###Markdown Test ###Code assert df_clean.retweeted_status_id.isnull().sum() == df_clean.shape[0] df_clean.retweeted_status_id.value_counts() ###Output _____no_output_____ ###Markdown After confirming that the tweets are not retweets, the columns for retweets are unnecessary. DefineRemove 'retweeted_status_id', 'retweeted_status_user_id', and 'retweeted_status_timestamp' columns. ###Code df_clean.drop(columns=['retweeted_status_id', 'retweeted_status_user_id','retweeted_status_timestamp'], inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code assert 'retweeted_status_id' not in df_clean.columns assert 'retweeted_status_user_id' not in df_clean.columns assert 'retweeted_status_timestamp' not in df_clean.columns ###Output _____no_output_____ ###Markdown Save dataframe to .csv ###Code df_clean.to_csv('twitter_archive_master.csv', index=False) ###Output _____no_output_____ ###Markdown Exploratory Analysis Analysis - 3 insights and 1 visualization: rating values, breed frequencies ###Code df_clean['rating'] = df_clean['rating_numerator']/df_clean['rating_denominator'] #Numeric columns that are quantitative numeric_columns = ['rating', 'rating_numerator','rating_denominator','img_num','p1_conf','p2_conf','p3_conf', 'retweet_count','favorite_count'] ###Output _____no_output_____ ###Markdown Which tweet got the highest score? 
###Code #Highest rating df_clean['rating'].max() #Highest rated Tweet highest_grade_tweet = df_clean[df_clean['rating'] == df_clean['rating'].max()] highest_grade_tweet ###Output _____no_output_____ ###Markdown Below are some of the characteristics of the highest rated tweet: ###Code highest_grade_tweet['name'].values[0] highest_grade_tweet['text'].values[0] highest_grade_tweet['jpg_url'] highest_grade_tweet['expanded_urls'].values[0] highest_grade_tweet['retweet_count'].values[0] highest_grade_tweet['favorite_count'].values[0] #Classification of the p1 algorithm and its confidence in the result: highest_grade_tweet['p1'].values[0], highest_grade_tweet['p1_conf'].values[0] #Classification of the p1 algorithm and its confidence in the result: highest_grade_tweet['p2'].values[0], highest_grade_tweet['p2_conf'].values[0] #Classification of the p1 algorithm and its confidence in the result: highest_grade_tweet['p3'].values[0], highest_grade_tweet['p3_conf'].values[0] ###Output _____no_output_____ ###Markdown How are the measurements of each stage? ###Code stages_count = df_clean['stages'].value_counts() stages_count sum_stages = stages_count.sum() sum_stages #Percentage of frequency of each stage in total stages_count.apply(lambda x : x/sum_stages) stages_count = stages_count.drop(index='none') sum_stages = stages_count.sum() sum_stages #Percentage of frequency of each stage in total, except 'none' stages_count.apply(lambda x : x/sum_stages) #Colors used in the plot flatui = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"] color = sns.color_palette(flatui) #Plot bar chart for counting stages plt.figure(figsize=[11.69,8.27]); sns.barplot(stages_count.index, stages_count.values, palette=color) plt.title('Count of the stages'); plt.ylabel('Number of dogs'); plt.xlabel('Stage'); ###Output _____no_output_____ ###Markdown There is a big difference in the number of dogs per stage. 84% of the stages are 'none'. 
Considering only those that are classified (excluding stage 'none'), 66% are 'pupper', 20% are 'doggo', 7.5% are 'puppo', 2.3% are 'floofer', and the rest carry two stages. * Checking the average values for each stage: ###Code stages = df_clean.groupby('stages') stages[numeric_columns].mean() ###Output _____no_output_____ ###Markdown The ratings do not show significant differences by stage. Considering only the single stages, the 'puppo' stage has the highest average favorite count, and the 'doggo' stage has the highest average retweet count. How do the results of the algorithms p1, p2 and p3 compare to each other? * Checking the number of cases where each algorithm identified a dog in the image: ###Code ps_dogs = ['p1_dog','p2_dog','p3_dog'] df_clean[ps_dogs].sum() ###Output _____no_output_____ ###Markdown The p2 algorithm identified the largest number of dogs, 1495, followed by p1 with 1477 and p3 with 1446. This metric only slightly differentiates the algorithms; the differences between them are small. * Checking the average confidence of each algorithm: ###Code ps_confs = ['p1_conf','p2_conf','p3_conf'] df_clean[ps_confs].mean() ###Output _____no_output_____ ###Markdown The p1 algorithm has the highest average confidence, around 59%. The p2 and p3 algorithms have average confidences of approximately 13% and 6%, respectively. This metric distinguishes the algorithms much more clearly: p1 is reasonably confident, while p2 and p3 are comparatively unconfident.
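The percentage-of-total computations above (`stages_count.apply(lambda x : x/sum_stages)`) can be written more directly with pandas' `value_counts(normalize=True)`. A minimal sketch on toy data (the stage values are illustrative, not taken from the real archive):

```python
import pandas as pd

# Toy stand-in for df_clean['stages']; values chosen for illustration only
stages = pd.Series(['none', 'none', 'pupper', 'pupper', 'doggo'])

# Fraction of each stage among all rows (equivalent to counts / counts.sum())
shares = stages.value_counts(normalize=True)
print(shares)

# Same computation after excluding 'none', mirroring the drop(index='none') step
shares_classified = stages[stages != 'none'].value_counts(normalize=True)
print(shares_classified)
```

This avoids keeping a separate `sum_stages` variable in sync with `stages_count`.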
* Checking the total number of dog breeds identified by the algorithms: ###Code len(df_clean.query('p1_dog == True')['p1'].unique()) len(df_clean.query('p2_dog == True')['p2'].unique()) len(df_clean.query('p3_dog == True')['p3'].unique()) ###Output _____no_output_____ ###Markdown GoodDoggo - Wrangling and Assessing 'WeRateDogs' Twitter Data Table of Contents- **[Gather](gather)** - [Retrieve Local Data (manually)](retrieve_local) - [Retrieve Image Data from a url (programmatically)](retrieve_via_url) - [Retrieve Twitter Data (via API)](retrieve_via_API)- **[Assess](assess)** - [Quality](assess_quality) - [Completeness](assess_completeness) - [Validity](assess_validity) - [Accuracy](assess_accuracy) - [Consistency](assess_consistency) - [Tidiness](assess_tidiness) - [Summary of Initial Observations](assess_summary_initial)- **[Clean](clean)** - [Quality](clean_quality) - [Number of records](clean_num_records) - [High numerators](clean_high_numerator) - [High denominators](clean_high_denominator) - [Zero values in numerators and denominators](clean_numerator_zeros) - [Converting columns to type 'datetime'](clean_datetime) - [Converting columns to type 'int64' (archive data)](clean_int64) - [_(No Cleaning)_ Assess Updated Columns](assess_updated_columns) - [Remove retweets](clean_retweets) - [Converting columns to type 'int64' (JSON data)](clean_int64_JSON_data) - [Remove entries with multiple dog stages](clean_number_of_dog_stages) - [Tidiness](clean_tidiness) - [Remove columns associated with retweets](clean_retweet_columns) - [Combine dog descriptions into a single column](clean_dog_categories) - [Extract urls from 'text' column](clean_text_urls) - [Merge Dataframes](clean_tables) - [Update the Number of Records](clean_update_records)- **[Preparation For Analysis](analysisPrep)** - [Feature Engineering](analysisPrep_feature_engineering) - [Save the Dataframes and Create Local Backups](saveCleanedDataframes) - [Create Plotting 
Functions](analysisPrep_plotting_functions)- **[Analysis & Visualizations](analysis_visualizations)** - [Histograms of Favorites](analysis_histogram_favorites) - [Favorites vs. Time](analysis_favorite_count_vs_time) - [Tweets per week](analysis_tweets_per_week) - [Tweets vs. Breed Type](analysis_tweets_grouped_by_breed) - **[Final List of Issues That Were Defined/Cleaned/Tested](cleaned_summary_final)**- **[Potential Future Work](potential_future_work)** ###Code # for data wrangling and sampling import pandas as pd import numpy as np import random import requests # to download files programmatically import os # to save/open files and for terminal-like commands to navigate local machine import tweepy import pprint as pp # data pretty printer - https://docs.python.org/2/library/pprint.html import json # for json I/O and parsing import time # for timing code and dealing with Twitter's rate limit # Set the random seed to assure the same answers are returned each time random.seed(42) # for plotting import matplotlib.pyplot as plt %matplotlib inline import seaborn as sb # commented out (not needed) # # for (potential) regression modeling of data # import statsmodels.api as sm; # from patsy import dmatrices # from statsmodels.stats.outliers_influence import variance_inflation_factor ###Output _____no_output_____ ###Markdown ([Top of Page](top_of_page)) Gather ([Top of Page](top_of_page)) Retrieve Local Data (Read in a Previously Provided Twitter Archive) ###Code df_archive = pd.read_csv("twitter-archive-enhanced.csv") df_archive.head(1) ###Output _____no_output_____ ###Markdown ([Top of Page](top_of_page)) Retrieve Image Data from a URL (programmatically) ###Code # get file from a url url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) # get the current working directory folder_name = os.getcwd() # get the filename file_name = url.split('/')[-1] # save the retrieved file to local 
storage with open(os.path.join(folder_name, file_name), mode='wb') as file: file.write(response.content) # read in the downloaded file df_images = pd.read_csv(file_name, sep='\t') df_images.head() ###Output _____no_output_____ ###Markdown NOTE:* The response variable is in bytes format, not text format.* As such, the 'wb' flag is used when writing the file locally* [Link to a StackOverflow post](https://stackoverflow.com/questions/2665866/what-does-wb-mean-in-this-code-using-python) on the subject ([Top of Page](top_of_page)) Retrieve Twitter Data (via API) Create an API object to gather Twitter data ###Code # get the API Access Token and Access Token Secret from twAPI_tokens_GoodDoggo import API_KEY, API_KEY_SECRET, API_TOKEN, API_TOKEN_SECRET CONSUMER_KEY = API_KEY CONSUMER_SECRET = API_KEY_SECRET ACCESS_TOKEN = API_TOKEN ACCESS_SECRET = API_TOKEN_SECRET auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET) auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET) # define api object, using arguments to get around (i.e., wait on) the twitter rate limit api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) ###Output _____no_output_____ ###Markdown Get a list of tweet IDs: ###Code # Check if there are any repeated tweets in the archive numUniqueValues = df_archive.tweet_id.nunique() print('Number of tweets: ' + str(len(df_archive))) print('Number of repeated tweets: ' + str(len(df_archive) - numUniqueValues)) # Create list of tweet IDs tweet_id_list = df_archive.tweet_id.tolist() ###Output Number of tweets: 2356 Number of repeated tweets: 0 ###Markdown Use the API to get info for each tweet* ___Retrieve json data for the first tweet and write it to local storage___* [StackOverflow article](https://stackoverflow.com/questions/28384588/twitter-api-get-tweets-with-specific-id) on getting JSON data for a specific tweet* [StackAbuse article](https://stackabuse.com/reading-and-writing-json-to-a-file-in-python/) on reading and writing JSON to a file
in Python ###Code print('- Tweet retrieval (for 2356 tweets) took 30 minutes to complete, due to Twitter\'s rate limit.\n' + '- As a result, it was performed once, then commented out to allow restarting the kernel / debugging\n' + 'the rest of the analysis.') # loop through multiple tweet_id's, retrieving and writing their json data to 'tweet_json.txt' # with open('tweet_json.txt', mode = 'w') as textFile: # count = 0 # for tweet_id in tweet_id_list: # count = count + 1 # start = time.time() # try: # status = api.get_status(tweet_id) # jsonStr = json.dumps(status._json) # except: # continue # tweet no longer exists # textFile.write(jsonStr + '\n') # end = time.time() # currTime = str(time.localtime().tm_hour) + ':' + str(time.localtime().tm_min) + ':' + str(time.localtime().tm_sec) # print('count: ' + str(count) + ', time elapsed: ' + str(end - start) + ', current time: ' + currTime) ###Output - Tweet retrieval (for 2356 tweets) took 30 minutes to complete, due to Twitter's rate limit. - As a result, it was performed once, then commented out to allow restarting the kernel / debugging the rest of the analysis. 
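The one-JSON-object-per-line layout used for `tweet_json.txt` above can be exercised on toy data without touching the API. A minimal self-contained sketch (the record fields are illustrative stand-ins for the real tweet JSON):

```python
import json
import os
import tempfile

# Toy records standing in for tweet JSON (fields are illustrative only)
records = [
    {'id': 1, 'retweet_count': 5, 'favorite_count': 9},
    {'id': 2, 'retweet_count': 3, 'favorite_count': 7},
]

path = os.path.join(tempfile.mkdtemp(), 'tweet_json.txt')

# Write one JSON object per line, mirroring the retrieval loop above
with open(path, mode='w') as text_file:
    for record in records:
        text_file.write(json.dumps(record) + '\n')

# Read the file back line by line, as the assessment cells do
with open(path) as json_file:
    round_tripped = [json.loads(line) for line in json_file]

print(round_tripped)
```

Because each line is an independent JSON document, a partially written file can still be parsed up to the last complete line, which is convenient when a long retrieval loop is interrupted.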
###Markdown __Print first line of 'tweet_json.txt' to check that the above worked__ ###Code # print first line of 'tweet_json.txt' to check that the above worked with open('tweet_json.txt') as jsonFile: line = jsonFile.readline() tweet = json.loads(line) pp.pprint(tweet) ###Output {'contributors': None, 'coordinates': None, 'created_at': 'Tue Aug 01 16:23:56 +0000 2017', 'entities': {'hashtags': [], 'media': [{'display_url': 'pic.twitter.com/MgUWQ76dJU', 'expanded_url': 'https://twitter.com/dog_rates/status/892420643555336193/photo/1', 'id': 892420639486877696, 'id_str': '892420639486877696', 'indices': [86, 109], 'media_url': 'http://pbs.twimg.com/media/DGKD1-bXoAAIAUK.jpg', 'media_url_https': 'https://pbs.twimg.com/media/DGKD1-bXoAAIAUK.jpg', 'sizes': {'large': {'h': 528, 'resize': 'fit', 'w': 540}, 'medium': {'h': 528, 'resize': 'fit', 'w': 540}, 'small': {'h': 528, 'resize': 'fit', 'w': 540}, 'thumb': {'h': 150, 'resize': 'crop', 'w': 150}}, 'type': 'photo', 'url': 'https://t.co/MgUWQ76dJU'}], 'symbols': [], 'urls': [], 'user_mentions': []}, 'extended_entities': {'media': [{'display_url': 'pic.twitter.com/MgUWQ76dJU', 'expanded_url': 'https://twitter.com/dog_rates/status/892420643555336193/photo/1', 'id': 892420639486877696, 'id_str': '892420639486877696', 'indices': [86, 109], 'media_url': 'http://pbs.twimg.com/media/DGKD1-bXoAAIAUK.jpg', 'media_url_https': 'https://pbs.twimg.com/media/DGKD1-bXoAAIAUK.jpg', 'sizes': {'large': {'h': 528, 'resize': 'fit', 'w': 540}, 'medium': {'h': 528, 'resize': 'fit', 'w': 540}, 'small': {'h': 528, 'resize': 'fit', 'w': 540}, 'thumb': {'h': 150, 'resize': 'crop', 'w': 150}}, 'type': 'photo', 'url': 'https://t.co/MgUWQ76dJU'}]}, 'favorite_count': 37468, 'favorited': False, 'geo': None, 'id': 892420643555336193, 'id_str': '892420643555336193', 'in_reply_to_screen_name': None, 'in_reply_to_status_id': None, 'in_reply_to_status_id_str': None, 'in_reply_to_user_id': None, 'in_reply_to_user_id_str': None, 'is_quote_status': False, 
'lang': 'en', 'place': None, 'possibly_sensitive': False, 'possibly_sensitive_appealable': False, 'retweet_count': 8159, 'retweeted': False, 'source': '<a href="http://twitter.com/download/iphone" ' 'rel="nofollow">Twitter for iPhone</a>', 'text': "This is Phineas. He's a mystical boy. Only ever appears in the hole " 'of a donut. 13/10 https://t.co/MgUWQ76dJU', 'truncated': False, 'user': {'contributors_enabled': False, 'created_at': 'Sun Nov 15 21:41:29 +0000 2015', 'default_profile': False, 'default_profile_image': False, 'description': 'Your Only Source For Professional Dog Ratings ' 'Instagram and Facebook ➪ WeRateDogs ' '[email protected] ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀', 'entities': {'description': {'urls': []}, 'url': {'urls': [{'display_url': 'weratedogs.com', 'expanded_url': 'http://weratedogs.com', 'indices': [0, 23], 'url': 'https://t.co/N7sNNHSfPq'}]}}, 'favourites_count': 142797, 'follow_request_sent': False, 'followers_count': 8129966, 'following': False, 'friends_count': 12, 'geo_enabled': True, 'has_extended_profile': False, 'id': 4196983835, 'id_str': '4196983835', 'is_translation_enabled': False, 'is_translator': False, 'lang': None, 'listed_count': 6240, 'location': '「 DM YOUR DOGS 」', 'name': 'WeRateDogs™ 🏳️\u200d🌈', 'notifications': False, 'profile_background_color': '000000', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/4196983835/1560181807', 'profile_image_url': 'http://pbs.twimg.com/profile_images/1112594177961844736/qQK8NJT-_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1112594177961844736/qQK8NJT-_normal.jpg', 'profile_link_color': 'F5ABB5', 'profile_sidebar_border_color': '000000', 'profile_sidebar_fill_color': '000000', 'profile_text_color': '000000', 'profile_use_background_image': False, 'protected': 
False, 'screen_name': 'dog_rates', 'statuses_count': 10374, 'time_zone': None, 'translator_type': 'none', 'url': 'https://t.co/N7sNNHSfPq', 'utc_offset': None, 'verified': True}} ###Markdown **Add the tweet data to a dataframe** ###Code # create a local dataframe for storing tweet data df_tweetInfo = pd.DataFrame(columns = ['tweet_id', 'retweet_count', 'favorite_count']) # store tweet data to the dataframe with open('tweet_json.txt') as jsonFile: count = 0 start = time.time() for line in jsonFile: count = count + 1 tweet = json.loads(line) df_tweetInfo = df_tweetInfo.append({ 'tweet_id': tweet['id'], 'retweet_count': tweet['retweet_count'], 'favorite_count': tweet['favorite_count'] }, ignore_index=True) end = time.time() if (np.remainder(count, 200) == 0): currTime = str(time.localtime().tm_hour) + ':' + str(time.localtime().tm_min) + ':' + str(time.localtime().tm_sec) print('count: ' + str(count) + ', time elapsed: ' + str(end - start) + ', current time: ' + currTime) # # add a single tweet's data to the dataframe # df_tweetInfo = df_tweetInfo.append({ # 'tweetID': tweet['id'], # 'retweet_count': tweet['favorite_count'], # 'favorite_count': tweet['retweet_count'] # },ignore_index=True) #tweetInfo.head() df_tweetInfo.head() #len(df_tweetInfo) ###Output _____no_output_____ ###Markdown ([Top of Page](top_of_page)) AssessAssess the data for Quality and Tidiness. Per Udacity course notes, Quality and Tidiness are defined as follows:**Quality** issues refers to problems with content, such as missing, duplicate, or incorrect data. Low quality data is sometimes referred to as 'dirty' data. Quality issues generally fall into one of four categories or 'dimensions':* **Completeness** * Have all ___records that should have been obtained___ actually been obtained? * Are there any ___missing records___? * Are ___specific rows, columns or cells missing___? * **Validity:** * Perhaps the records exist, but they're ___not valid___? 
* i.e., they ___don't conform to a defined schema___. * A schema is a defined set of rules for data. * These rules can be real-world constraints (e.g. negative height is impossible) and table-specific constraints (e.g. unique key constraints in tables). * **Accuracy:** * Inaccurate data: * is ___wrong data that is valid___. * ___adheres to the defined schema, but is still incorrect___ * Example: a patient's weight that is 5 lbs too heavy because the scale was faulty. * **Consistency:** * Inconsistent data is both valid and accurate, but ___there are multiple correct ways of referring to the same thing___. * Consistency means the data has a **standard format**. For instance, columns that represent the same data across tables and/or within tables is desired.**Tidiness** refers to the data's structure. Untidy data has structural issues that can slow down or prevent easy analysis. Untidy data is sometimes referred to as 'messy' data. Traits of tidy data include:* Each variable forms a column.* Each observation forms a row.* Each type of observational unit forms a table. ([Top of Page](top_of_page)) Assess - Quality* Assess the data for issues with content, such as missing, duplicate, or incorrect data. * Start by briefly viewing the data to get a sense of it. * Then assess the data with respect to completeness, validity, accuracy, and consistency ###Code df_tweetInfo.head() df_archive.head(2) df_images.head() ###Output _____no_output_____ ###Markdown ([Top of Page](top_of_page)) Completeness* Have all ___records that should have been obtained___ actually been obtained?* Are there any ___missing records___?* Are ___specific rows, columns or cells missing___? 
###Code print('# of records in df_tweetInfo (i.e., JSON data retrieved via API): ' + str(len(df_tweetInfo))) print('# of records in df_archive (i.e., weRateDogs Tweet archive): ' + str(len(df_archive))) print('# of records in df_images (i.e., image analysis): ' + str(len(df_images))) #tweetInfo.head() ###Output # of records in df_tweetInfo (i.e., JSON data retrieved via API): 2335 # of records in df_archive (i.e., weRateDogs Tweet archive): 2356 # of records in df_images (i.e., image analysis): 2075 ###Markdown The dataframes have a different number of records.* The slight difference between df_tweetInfo and df_archive is probably due to tweets that have been deleted* The difference between df_archive and df_images is probably due to not all tweets having images ###Code print('# of tweet_id\'s in df_images that are also in df_archive: ' + str(len(df_images.tweet_id.isin(df_archive.tweet_id)))) print('# of tweet_id\'s in df_images that are also in df_tweetInfo: ' + str(len(df_images.tweet_id.isin(df_tweetInfo.tweet_id)))) ###Output # of tweet_id's in df_images that are also in df_archive: 2075 # of tweet_id's in df_images that are also in df_tweetInfo: 2075 ###Markdown Since all tweet_id's in df_images are also in df_archive and df_tweetInfo, the appropriate set to use is the intersection of the three df's. ([Top of Page](top_of_page)) Validity* Perhaps the records exist, but they're ___not valid___? * i.e., they ___don't conform to a defined schema___. * A schema is a defined set of rules for data. * These rules can be real-world constraints (e.g. negative height is impossible) and table-specific constraints (e.g. unique key constraints in tables).
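A caveat on the membership check above: `df_images.tweet_id.isin(df_archive.tweet_id)` returns a boolean Series, so wrapping it in `len()` always yields `len(df_images)` regardless of how many ids actually match. `.sum()` counts the `True` values instead. A minimal sketch on toy ids:

```python
import pandas as pd

# Toy id columns (values are illustrative)
images_ids = pd.Series([1, 2, 3, 4])
archive_ids = pd.Series([1, 2, 3])

mask = images_ids.isin(archive_ids)

# len() gives the length of the boolean mask, NOT the number of matches
print(len(mask))

# .sum() counts True values, i.e. the ids actually present in both
print(mask.sum())
```

So the printed 2075 above only confirms the length of `df_images`; the later intersection step (which yields fewer rows) is the reliable way to find the common ids.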
###Code df_archive.describe() ###Output _____no_output_____ ###Markdown Investigate / clean the following issues: * In **df_archive**, the maximum value for rating_numerator may be unrealistically high.* In **df_archive**, the maximum value for rating_denominator may be unrealistically high.* In **df_archive**, the minimum value for rating_numerator probably should not be zero.* In **df_archive**, the minimum value for rating_denominator should not be zero. Check other dataframes for any obvious validity issues ###Code df_images.describe() df_tweetInfo.describe() df_archive.nunique() df_archive.groupby('rating_numerator').rating_numerator.count() ###Output _____no_output_____ ###Markdown In **df_archive**, some **rating_numerator** values are quite large. Investigate whether this is an issue / consider removing numerators over a certain threshold. ###Code df_archive.head(2) searchString = 'NaN' df_archive.query("in_reply_to_status_id != 'NaN'").head(2) # # example syntax # #df_images.query('p1_conf > 0.2').head() # searchString = 'German_shepherd' # df_images.query("p1 != @searchString").head(3) df_tweetInfo.nunique() df_images.nunique() ###Output _____no_output_____ ###Markdown In **df_images**, the number of jpg_urls does not match the number of tweet_id's. Investigate whether this is an issue. If so, correct it. 
Explore dog stages ###Code df_archive.columns print(str(df_archive.groupby('doggo').doggo.count()) + '\n---------------------------------' ) print(str(df_archive.groupby('floofer').floofer.count()) + '\n---------------------------------' ) print(str(df_archive.groupby('pupper').pupper.count()) + '\n---------------------------------' ) print(str(df_archive.groupby('puppo').puppo.count()) + '\n---------------------------------' ) print(df_archive.shape) # df_archive.groupby('rating_numerator').rating_numerator.count() # ## Make sure all tweets have only one dog stage # # ## Add a 'none' column for tweets that do not have a dog stage ###Output doggo None 2259 doggo 97 Name: doggo, dtype: int64 --------------------------------- floofer None 2346 floofer 10 Name: floofer, dtype: int64 --------------------------------- pupper None 2099 pupper 257 Name: pupper, dtype: int64 --------------------------------- puppo None 2326 puppo 30 Name: puppo, dtype: int64 --------------------------------- (2356, 17) ###Markdown * In **df_archive**, add a 'none' category for tweets that do not have a dog stage* In **df_archive**, make sure all tweets have only one dog stage ([Top of Page](top_of_page)) Accuracy* Inaccurate data: * is ___wrong data that is valid___. * ___adheres to the defined schema, but is still incorrect___ * Example: a patient's weight that is 5 lbs too heavy because the scale was faulty. ###Code df_archive.describe() df_archive.head(3) df_images.head() df_tweetInfo.head() ###Output _____no_output_____ ###Markdown There do not appear to be any obvious accuracy issues ([Top of Page](top_of_page)) Consistency* Inconsistent data is both valid and accurate, but _there are multiple __correct__ ways of referring to the same thing_.* Consistency, i.e., a standard format, in columns that represent the same data across tables and/or within tables is desired. 
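The dog-stage cleanup flagged in the exploration above (count stages per row, and combine the four columns into one with a 'none' fallback) can be sketched as follows, assuming the `'None'`-string convention visible in the groupby output. This is a sketch, not necessarily how the later cleaning cells implement it:

```python
import pandas as pd

stage_cols = ['doggo', 'floofer', 'pupper', 'puppo']

# Toy stand-in for the four stage columns in df_archive
df = pd.DataFrame({
    'doggo':   ['None', 'doggo', 'doggo'],
    'floofer': ['None', 'None',  'None'],
    'pupper':  ['None', 'None',  'pupper'],
    'puppo':   ['None', 'None',  'None'],
})

# Count how many stages each row carries (0 means it should become 'none')
n_stages = (df[stage_cols] != 'None').sum(axis=1)
print(n_stages.tolist())

# Join the non-'None' values into a single column; empty string -> 'none'
df['stage'] = (
    df[stage_cols]
    .apply(lambda row: ','.join(v for v in row if v != 'None'), axis=1)
    .replace('', 'none')
)
print(df['stage'].tolist())
```

Rows where `n_stages > 1` (like the third toy row) are exactly the multi-stage entries the assessment says need a decision.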
###Code print(df_tweetInfo.info()) df_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown * In **df_archive**, 'timestamp' and 'retweeted_status_timestamp' should have type 'datetime'.* In **df_archive**, the following columns should have type 'int64': * 'in_reply_to_status_id' * 'in_reply_to_user_id', * 'retweeted_status_id' * 'retweeted_status_user_id' ###Code df_images.info() df_tweetInfo.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2335 entries, 0 to 2334 Data columns (total 3 columns): tweet_id 2335 non-null object retweet_count 2335 non-null object favorite_count 2335 non-null object dtypes: object(3) memory usage: 54.8+ KB ###Markdown In **df_tweetInfo**, 'tweet_id' should have type 'int64' for consistency across the dataframes. ([Top of Page](top_of_page)) Assess - TidinessTidiness refers to the data's structure. Untidy data has structural issues that can slow down or prevent easy analysis. Untidy data is sometimes referred to as 'messy' data. Traits of tidy data include:* Each variable forms a column.* Each observation forms a row.* Each type of observational unit forms a table. 
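The dtype issues flagged in the Consistency assessment above (timestamps stored as strings, id columns forced to float64 by their NaNs) can be sketched like this. `pd.to_datetime` matches the "likely code to use" note in the summary; using pandas' nullable `Int64` dtype for the id columns is my assumption for keeping integer ids alongside missing values, since plain `int64` cannot hold NaN:

```python
import numpy as np
import pandas as pd

# Toy frame mimicking the problematic columns (values are illustrative)
df = pd.DataFrame({
    'timestamp': ['2017-08-01 16:23:56 +0000', '2017-08-01 00:17:27 +0000'],
    'in_reply_to_status_id': [np.nan, 8.862664261574074e+17],
})

# String -> timezone-aware datetime
df['timestamp'] = pd.to_datetime(df['timestamp'])

# float64 (forced by NaN) -> nullable integer dtype that tolerates missing ids
df['in_reply_to_status_id'] = df['in_reply_to_status_id'].astype('Int64')

print(df.dtypes)
```

The `Int64` column prints missing entries as `<NA>` rather than `NaN`, and the remaining values stay integers instead of being shown in scientific notation.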
Preview the Dataframes Again (Looking for Tidiness Issues This Time) ###Code df_tweetInfo.head(2) df_archive.head(2) ###Output _____no_output_____ ###Markdown * 'doggo', 'floofer', 'pupper', and 'puppo' are categories and should be **combined into a single column**. * Consider the possibility of tweets that fall into multiple such categories. Preview 'text' entries since they do not fit in the default dataframe column width. ###Code df_archive.text[0:5].apply(lambda x: print('- ' + x)) ###Output - This is Phineas. He's a mystical boy. Only ever appears in the hole of a donut. 13/10 https://t.co/MgUWQ76dJU - This is Tilly. She's just checking pup on you. Hopes you're doing ok. If not, she's available for pats, snugs, boops, the whole bit. 13/10 https://t.co/0Xxu71qeIV - This is Archie. He is a rare Norwegian Pouncing Corgo. Lives in the tall grass. You never know when one may strike. 12/10 https://t.co/wUnZnhtVJB - This is Darla. She commenced a snooze mid meal. 13/10 happens to the best of us https://t.co/tD36da7qLQ - This is Franklin. He would like you to stop calling him "cute." He is a very fierce shark and should be respected as such. 12/10 #BarkWeek https://t.co/AtUZn91f7f ###Markdown * In df_archive, **urls should be extracted from the 'text' column** ###Code df_images.head() ###Output _____no_output_____ ###Markdown * **df_tweetInfo** and **df_archive** should be merged into one dataframe. 
* Combine them on 'tweet_id' ([Top of Page](top_of_page)) Summary of Initial Observations **Upon the initial data assessement, the following quality and tidiness issues were observed:*** **Quality:** * **The dataframes have a different number of records.** * The slight difference between df_tweetInfo and df_archive is probably due to tweets that have been deleted * The difference betweeen df_archive and df_images is probably due to not all tweets having images * Since all tweet_id's in df_images are also in df_archive and df_tweetInfo, the appropriate set to use is the intersection of the three df's. * In **df_archive**, the maximum value for rating_numerator may be unrealistically high. * In **df_archive**, the maximum value for rating_denominator may be unrealistically high. * In **df_archive**, the minimum value for rating_numerator probably should not be zero. * In **df_archive**, the minimum value for rating_denominator should not be zero. * In **df_archive**, some **rating_numerator** values are quite large. Investigate whether this is an issue / consider removing numerators over a certain threshold. * In **df_images**, the number of jpg_urls does not match the number of tweet_id's. Investigate whether this is an issue. If so, correct it. * In **df_archive**, add a 'none' category for tweets that do not have a dog stage * In **df_archive**, make sure all tweets have only one dog stage * In **df_archive**, 'timestamp' and 'retweeted_status_timestamp' should have type 'datetime'. 
* Likely code to use: * df_archive\['timestamp'\] = pd.to_datetime(df_archive\['timestamp'\]) * df_archive\['retweeted_status_timestamp'\] = pd.to_datetime(df_archive\['retweeted_status_timestamp'\]) * In **df_archive**, the following columns should have type 'int64': * 'in_reply_to_status_id' * 'in_reply_to_user_id', * 'retweeted_status_id' * 'retweeted_status_user_id' * In **df_tweetInfo**, 'tweet_id' should have type 'int64' for consistency across the dataframes.* **Tidiness:** * 'doggo', 'floofer', 'pupper', and 'puppo' are categories and should be **combined into a single column**. * In df_archive, **urls should be extracted from the 'text' column** * **df_tweetInfo** and **df_archive** should be merged into one dataframe. * Combine them on 'tweet_id' ([Top of Page](top_of_page)) Clean Clean the data issues observed in the "Assess" phase. Every issue that is cleaned should go through the following process:* **Define**: * Define the issue and convert any assessments into "how-to" guides * This is essentially pseudo-code * This serves as future documentation for myself and for others* **Code**: * Translate words from the 'Define' step to code* **Test**: * Test the dataset(s) to make sure the cleaning code worked * This is kind of like re-visiting the "Assess" phase **Begin the cleaning phase by backing up the existing datasets:** ###Code df_archive_orig = df_archive.copy(deep=True) df_images_orig = df_images.copy(deep=True) df_tweetInfo_orig = df_tweetInfo.copy(deep=True) ###Output _____no_output_____ ###Markdown ([Top of Page](top_of_page)) Clean - Quality ([Top of Page](top_of_page)) Define **The dataframes have a different number of records.*** The slight difference between df_tweetInfo and df_archive is probably due to tweets that have been deleted* The difference between df_archive and df_images is probably due to not all tweets having images* Since all tweet_id's in df_images are also in df_archive and df_tweetInfo, the appropriate set to use is the
intersection of the three df's. **Pseudo-code:*** Create a list of tweet_ids for each dataframe* Keep the tweet_ids that are common to all three lists* Using that list of tweet_ids, reassign the dataframes Code ###Code # Create a list of tweet_ids for each dataframe: tweet_ids_archive = df_archive_orig['tweet_id'].tolist() tweet_ids_images = df_images_orig['tweet_id'].tolist() tweet_ids_API = df_tweetInfo_orig['tweet_id'].tolist() # Keep the tweet_ids that are common to all three lists: tweet_ids_to_keep = list(set(tweet_ids_archive).intersection(tweet_ids_API).intersection(tweet_ids_images)) # Using that list of tweet_ids, reassign the dataframes: df_archive = df_archive_orig[df_archive_orig.tweet_id.isin(tweet_ids_to_keep)] df_images = df_images_orig[df_images_orig.tweet_id.isin(tweet_ids_to_keep)] df_tweetInfo = df_tweetInfo_orig[df_tweetInfo_orig.tweet_id.isin(tweet_ids_to_keep)] ###Output _____no_output_____ ###Markdown Test ###Code print('# of records in df_tweetInfo (i.e., JSON data retrieved via API): ' + str(len(df_tweetInfo))) print('# of records in df_archive (i.e., weRateDogs Tweet archive): ' + str(len(df_archive))) print('# of records in df_images (i.e., image analysis): ' + str(len(df_images))) ###Output # of records in df_tweetInfo (i.e., JSON data retrieved via API): 2063 # of records in df_archive (i.e., weRateDogs Tweet archive): 2063 # of records in df_images (i.e., image analysis): 2063 ###Markdown ([Top of Page](top_of_page)) Define**In df_archive, the maximum value for rating_numerator may be unrealistically high.****Pseudo-code:*** Investigate ratings with high rating_numerator values* Drop any rows with incorrect numerator values * NOTE: high numerators are acceptable (they're good dogs). Only drop numerators that are incorrect. 
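Before investigating, it is worth noting where odd numerators can come from: decimal ratings such as "11.26/10" become 26/10 when only the digits immediately before the slash are captured. A hedged sketch of decimal-aware re-extraction with `str.extract` — this assumes the archive's numerators originally came from an integer-only regex, which is not shown in this notebook:

```python
import pandas as pd

# Toy tweet texts using the rating patterns discussed in this section
texts = pd.Series([
    "This is Sophie. 11.27/10 would smile back",
    "He's quite simply America af. 1776/10",
])

# Capture an optionally-decimal numerator and an integer denominator
ratings = texts.str.extract(r'(\d+(?:\.\d+)?)/(\d+)')
ratings.columns = ['rating_numerator', 'rating_denominator']
ratings = ratings.astype(float)
print(ratings)
```

Note that `str.extract` keeps only the first match per text, so fractions like the "24/7" in an unrelated phrase would still need the manual review done below.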
Code ###Code df_archive.describe() ###Output _____no_output_____ ###Markdown Perform a 'groupby' on rating_numerator to get a better sense of its distribution ###Code df_archive.groupby('rating_numerator').rating_numerator.count() ###Output _____no_output_____ ###Markdown Print tweets with high numerator values to better understand what is going on ###Code # get indices for tweets with numerator >= 15 high_numerator_indices = set() for i in df_archive.index: if df_archive.loc[i].rating_numerator >= 15: high_numerator_indices.add(i) # print info for tweets with numerator >= 15 for i in high_numerator_indices: print(str(i) + ', tweet_id=' + str(df_archive.tweet_id[i]) + ', rating=' + str(df_archive.rating_numerator[i]) + ', text: ' + str(df_archive.loc[i].text)) ###Output 516, tweet_id=810984652412424192, rating=24, text: Meet Sam. She smiles 24/7 &amp; secretly aspires to be a reindeer. Keep Sam smiling by clicking and sharing this link: https://t.co/98tB8y7y7t https://t.co/LouL5vdvxx 902, tweet_id=758467244762497024, rating=165, text: Why does this never happen at my front door... 165/150 https://t.co/HmwrdfEfUE 1433, tweet_id=697463031882764288, rating=44, text: Happy Wednesday here's a bucket of pups. 44/40 would pet all at once https://t.co/HppvrYuamZ 2074, tweet_id=670842764863651840, rating=420, text: After so many requests... here you go. Good dogg. 420/10 https://t.co/yfAAo1gdeY 285, tweet_id=838916489579200512, rating=15, text: RT @KibaDva: I collected all the good dogs!! 15/10 @dog_rates #GoodDogs https://t.co/6UCGFczlOI 1712, tweet_id=680494726643068929, rating=26, text: Here we have uncovered an entire battalion of holiday puppers. Average of 11.26/10 https://t.co/eNm2S6p9BD 433, tweet_id=820690176645140481, rating=84, text: The floofs have been released I repeat the floofs have been released. 84/70 https://t.co/NIYC820tmd 1202, tweet_id=716439118184652801, rating=50, text: This is Bluebert. He just saw that both #FinalFur match ups are split 50/50. 
Amazed af. 11/10 https://t.co/Kky1DPG4iq 1843, tweet_id=675853064436391936, rating=88, text: Here we have an entire platoon of puppers. Total score: 88/80 would pet all at once https://t.co/y93p6FLvVw 695, tweet_id=786709082849828864, rating=75, text: This is Logan, the Chow who lived. He solemnly swears he's up to lots of good. H*ckin magical af 9.75/10 https://t.co/yBO5wuqaPS 1351, tweet_id=704054845121142784, rating=60, text: Here is a whole flock of puppers. 60/50 I'll take the lot https://t.co/9dpcw6MdWa 1228, tweet_id=713900603437621249, rating=99, text: Happy Saturday here's 9 puppers on a bench. 99/90 good work everybody https://t.co/mpvaVxKmc1 979, tweet_id=749981277374128128, rating=1776, text: This is Atticus. He's quite simply America af. 1776/10 https://t.co/GRXwMxLBkh 1120, tweet_id=731156023742988288, rating=204, text: Say hello to this unbelievably well behaved squad of doggos. 204/170 would try to pet all at once https://t.co/yGQI3He3xv 1634, tweet_id=684225744407494656, rating=143, text: Two sneaky puppers were not initially seen, moving the rating to 143/130. Please forgive us. Thank you https://t.co/kRK51Y5ac3 1635, tweet_id=684222868335505415, rating=121, text: Someone help the girl is being mugged. Several are distracting her while two steal her shoes. Clever puppers 121/110 https://t.co/1zfnTJLt55 1254, tweet_id=710658690886586372, rating=80, text: Here's a brigade of puppers. All look very prepared for whatever happens next. 80/80 https://t.co/0eb7R1Om12 1779, tweet_id=677716515794329600, rating=144, text: IT'S PUPPERGEDDON. Total of 144/120 ...I think https://t.co/ZanVtAtvIq 1274, tweet_id=709198395643068416, rating=45, text: From left to right: Cletus, Jerome, Alejandro, Burp, &amp; Titson None know where camera is. 45/50 would hug all at once https://t.co/sedre1ivTK 763, tweet_id=778027034220126208, rating=27, text: This is Sophie. She's a Jubilant Bush Pupper. Super h*ckin rare. Appears at random just to smile at the locals. 
11.27/10 would smile back https://t.co/QFaUiIHxHq ###Markdown High ratings are acceptable (after all: they're good dogs Brent). We only need to remove **incorrect** rating_numerator values. _NOTE: Tweets with incorrect numerator/denominator pairs will be addressed later._* Only one index has an incorrect rating_numerator value. This index will be removed: * 516 (tweet_id = 810984652412424192, rating = 24, text: "24/7")* The following indices have a rating_numerator value that needs to be corrected. These will be rounded **up** since they're all good dogs: * 1712 (tweet_id = 680494726643068929, rating = 26, text: "11.26/10" --> change rating to 12) * 695 (tweet_id = 786709082849828864, rating = 75, text: "9.75/10" --> change rating to 10) * 763 (tweet_id = 778027034220126208, rating = 27, text: "11.27/10" --> change rating to 12) ###Code # drop tweet_id 810984652412424192 from each of the df's df_archive = df_archive[df_archive.tweet_id != 810984652412424192] df_images = df_images[df_images.tweet_id != 810984652412424192] df_tweetInfo = df_tweetInfo[df_tweetInfo.tweet_id != 810984652412424192] # reset the index for each df df_archive.reset_index(drop=True,inplace=True) df_images.reset_index(drop=True,inplace=True) df_tweetInfo.reset_index(drop=True,inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code print('# of records in df_tweetInfo (i.e., JSON data retrieved via API): ' + str(len(df_tweetInfo))) print('# of records in df_archive (i.e., weRateDogs Tweet archive): ' + str(len(df_archive))) print('# of records in df_images (i.e., image analysis): ' + str(len(df_images))) ###Output # of records in df_tweetInfo (i.e., JSON data retrieved via API): 2062 # of records in df_archive (i.e., weRateDogs Tweet archive): 2062 # of records in df_images (i.e., image analysis): 2062 ###Markdown * The number of tweets has decreased by one in each dataframe* Check to make sure the correct tweet was removed (check for existence of tweet_id 810984652412424192): ###Code
df_archive[df_archive.tweet_id == 810984652412424192] ###Output _____no_output_____ ###Markdown tweet_id 810984652412424192 no longer exists ([Top of Page](top_of_page)) Define* **In df_archive, the maximum value for rating_denominator may be unrealistically high.*** **Pseudo-code:** * Check for high denominators * If appropriate, remove rows * If appropriate, change rating_numerator and rating_denominator Code ###Code df_archive.groupby('rating_denominator').rating_denominator.count() ###Output _____no_output_____ ###Markdown Print tweets with denominator != 10 to better understand what is going on ###Code # get indices for tweets with denominator != 10 non_standard_denominator_indices = set() for i in df_archive.index: if df_archive.loc[i].rating_denominator != 10: non_standard_denominator_indices.add(i) # print info for tweets with denominator != 10 for i in non_standard_denominator_indices: #print(str(i) + ' - ' + str(df_archive.loc[i].text)) print(str(i) + ', tweet_id=' + str(df_archive.tweet_id[i]) + ', numerator=' + str(df_archive.rating_numerator[i]) + ', denominator=' + str(df_archive.rating_denominator[i]) + ', text: ' + str(df_archive.loc[i].text) + '\n') ###Output 1501, tweet_id=677716515794329600, numerator=144, denominator=120, text: IT'S PUPPERGEDDON. Total of 144/120 ...I think https://t.co/ZanVtAtvIq 1121, tweet_id=704054845121142784, numerator=60, denominator=50, text: Here is a whole flock of puppers. 60/50 I'll take the lot https://t.co/9dpcw6MdWa 866, tweet_id=740373189193256964, numerator=9, denominator=11, text: After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https://t.co/XAVDNDaVgQ 1037, tweet_id=710658690886586372, numerator=80, denominator=80, text: Here's a brigade of puppers. All look very prepared for whatever happens next. 80/80 https://t.co/0eb7R1Om12 1197, tweet_id=697463031882764288, numerator=44, denominator=40, text: Happy Wednesday here's a bucket of pups. 
44/40 would pet all at once https://t.co/HppvrYuamZ 337, tweet_id=820690176645140481, numerator=84, denominator=70, text: The floofs have been released I repeat the floofs have been released. 84/70 https://t.co/NIYC820tmd 914, tweet_id=731156023742988288, numerator=204, denominator=170, text: Say hello to this unbelievably well behaved squad of doggos. 204/170 would try to pet all at once https://t.co/yGQI3He3xv 1395, tweet_id=682962037429899265, numerator=7, denominator=11, text: This is Darrel. He just robbed a 7/11 and is in a high speed police chase. Was just spotted by the helicopter 10/10 https://t.co/7EsP8LmSp5 1012, tweet_id=713900603437621249, numerator=99, denominator=90, text: Happy Saturday here's 9 puppers on a bench. 99/90 good work everybody https://t.co/mpvaVxKmc1 725, tweet_id=758467244762497024, numerator=165, denominator=150, text: Why does this never happen at my front door... 165/150 https://t.co/HmwrdfEfUE 2041, tweet_id=666287406224695296, numerator=1, denominator=2, text: This is an Albanian 3 1/2 legged Episcopalian. Loves well-polished hardwood flooring. Penis on the collar. 9/10 https://t.co/d9NcXFKwLv 1560, tweet_id=675853064436391936, numerator=88, denominator=80, text: Here we have an entire platoon of puppers. Total score: 88/80 would pet all at once https://t.co/y93p6FLvVw 1369, tweet_id=684225744407494656, numerator=143, denominator=130, text: Two sneaky puppers were not initially seen, moving the rating to 143/130. Please forgive us. Thank you https://t.co/kRK51Y5ac3 1370, tweet_id=684222868335505415, numerator=121, denominator=110, text: Someone help the girl is being mugged. Several are distracting her while two steal her shoes. Clever puppers 121/110 https://t.co/1zfnTJLt55 1055, tweet_id=709198395643068416, numerator=45, denominator=50, text: From left to right: Cletus, Jerome, Alejandro, Burp, &amp; Titson None know where camera is. 
45/50 would hug all at once https://t.co/sedre1ivTK 957, tweet_id=722974582966214656, numerator=4, denominator=20, text: Happy 4/20 from the squad! 13/10 for all https://t.co/eV1diwds8a 991, tweet_id=716439118184652801, numerator=50, denominator=50, text: This is Bluebert. He just saw that both #FinalFur match ups are split 50/50. Amazed af. 11/10 https://t.co/Kky1DPG4iq ###Markdown * Most of the tweets with denominator != 10 have been parsed correctly* However, some tweets have multiple fractions in the tweet text and the wrong fraction was grabbed when ratings were parsed * Such tweets need both their numerator and denominator changed * Since there are not very many of them, they will be changed semi-manually * If there were more such entries (say, more than 10), then it would make more sense to write code that correctly parses the tweet text with respect to rating* Indices with incorrect numerator and denominator pairs: * 866 (tweet_id = 740373189193256964) * numer = 9, should be 14 * denom = 11, should be 10 * 1395 (tweet_id = 682962037429899265) * numer = 7, should be 10 * denom = 11, should be 10 * 2041 (tweet_id = 666287406224695296) * numer = 1, should be 9 * denom = 2, should be 10 * 957 (tweet_id = 722974582966214656) * numer = 4, should be 13 * denom = 20, should be 10 * 991 (tweet_id = 716439118184652801) * numer = 50, should be 11 * denom = 50, should be 10* Based on the above, create a dataframe of tweet info that needs modified ###Code tweets_to_mod = {'tweet_id': [740373189193256964, 682962037429899265, 666287406224695296, 722974582966214656, 716439118184652801], 'numerator_old': [9, 7, 1, 4, 50], 'denominator_old': [11, 11, 2, 20, 50], 'numerator_new': [14, 10, 9, 13, 11], 'denominator_new': [10, 10, 10, 10, 10]} df_tweets_to_mod = pd.DataFrame(tweets_to_mod, columns = ['tweet_id', 'numerator_old', 'denominator_old', 'numerator_new', 'denominator_new']) df_tweets_to_mod = df_tweets_to_mod.set_index('tweet_id') ###Output _____no_output_____ 
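The note above observes that with more malformed entries it would make sense to parse the ratings programmatically rather than fix them by hand. A minimal sketch of such a parser (a hypothetical helper, not part of the original pipeline): it prefers an explicit x/10 rating and falls back to the first fraction whose denominator is a positive multiple of 10.

```python
import re

def extract_rating(text):
    """Return (numerator, denominator) for the most plausible rating in a tweet.

    Heuristic: prefer an explicit x/10 rating; otherwise fall back to the
    first fraction whose denominator is a positive multiple of 10 (to allow
    multi-dog ratings such as 88/80). Returns None when nothing matches.
    """
    fractions = re.findall(r'(\d+(?:\.\d+)?)/(\d+)', text)
    # first pass: an explicit x/10 rating wins over dates, scores, etc.
    for num, den in fractions:
        if int(den) == 10:
            return float(num), 10
    # second pass: multi-dog ratings with denominators like 80, 120, ...
    for num, den in fractions:
        if int(den) > 0 and int(den) % 10 == 0:
            return float(num), int(den)
    return None

extract_rating("He just saw that both match ups are split 50/50. Amazed af. 11/10")
# -> (11.0, 10)
```

On the problem tweets above, this heuristic picks 13/10 out of "Happy 4/20 from the squad! 13/10 for all" and still accepts the legitimate 144/120 group rating.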
###Markdown Modify the tweet info ###Code df_archive = df_archive.set_index('tweet_id') for tweet in df_tweets_to_mod.index.tolist(): df_archive.loc[tweet,'rating_numerator'] = df_tweets_to_mod.loc[tweet,'numerator_new'] df_archive.loc[tweet,'rating_denominator'] = df_tweets_to_mod.loc[tweet,'denominator_new'] ###Output _____no_output_____ ###Markdown Test ###Code for tweet in df_tweets_to_mod.index.tolist(): rating_old = str(df_tweets_to_mod.loc[tweet,'numerator_old']) + '/' + str(df_tweets_to_mod.loc[tweet,'denominator_old']) rating_new = str(df_archive.loc[tweet,'rating_numerator']) + '/' + str(df_archive.loc[tweet,'rating_denominator']) print('old ratings: ' + rating_old + ', new ratings: ' + rating_new) ###Output old ratings: 9/11, new ratings: 14/10 old ratings: 7/11, new ratings: 10/10 old ratings: 1/2, new ratings: 9/10 old ratings: 4/20, new ratings: 13/10 old ratings: 50/50, new ratings: 11/10 ###Markdown ([Top of Page](top_of_page)) Define**Multiple Issues:*** In **df_archive**, the minimum value for rating_numerator probably should not be zero.* In **df_archive**, the minimum value for rating_denominator should not be zero.**Pseudo-code:*** **check for rating_numerator == 0** * Follow previous processes* **check for rating_denominator == 0** * Either remove these or assign a denominator of 10 Code ###Code df_archive.groupby('rating_numerator').rating_numerator.count() df_archive.groupby('rating_denominator').rating_denominator.count() ###Output _____no_output_____ ###Markdown * There are no longer any tweets with a denominator of 0 * Such tweets were probably removed when the number of records in the dataframes [was corrected](clean_num_records)* However, there are still numerators with a value of 0 * Since there are only two such values, they will be explored and/or corrected in a semi-manual manner.
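The drop step below guards each `drop` call with try/except so the cell can be re-run safely. As an aside, pandas' `DataFrame.drop` also accepts `errors='ignore'`, which makes the call a no-op when the label is absent; a small self-contained sketch with toy index labels (not the real tweet ids):

```python
import pandas as pd

df = pd.DataFrame({'rating_numerator': [0, 12]}, index=[111, 222])

# errors='ignore' makes drop idempotent: a second call with the same
# label simply returns the frame unchanged instead of raising KeyError
df = df.drop(111, errors='ignore')
df = df.drop(111, errors='ignore')  # no-op on re-run

print(list(df.index))
# -> [222]
```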
###Code # get indices for tweets with numerator == 0 indices_to_check = set() for i in df_archive.index: if df_archive.loc[i].rating_numerator == 0: indices_to_check.add(i) # print info for tweets with numerator == 0 for i in indices_to_check: print('tweet_id=' + str(i) + ', numerator=' + str(df_archive.rating_numerator[i]) + ', denominator=' + str(df_archive.rating_denominator[i]) + ', text: ' + str(df_archive.loc[i].text) + '\n') ###Output tweet_id=746906459439529985, numerator=0, denominator=10, text: PUPDATE: can't see any. Even if I could, I couldn't reach them to pet. 0/10 much disappointment https://t.co/c7WXaB2nqX tweet_id=835152434251116546, numerator=0, denominator=10, text: When you're so blinded by your systematic plagiarism that you forget what day it is. 0/10 https://t.co/YbEJPkg4Ag ###Markdown * Tweet 746906459439529985 does not contain any dog images and can be removed* Tweet 835152434251116546 is a retweet and can be removed * Since it utilizes a screenshot of a separate tweet, the fact that it is a retweet is only obvious once one clicks the url NOTE: Link on [handling exceptions](https://wiki.python.org/moin/HandlingExceptions) in Python ###Code # drop tweet_id's 746906459439529985 and 835152434251116546 # - use try/except blocks for debugging purposes # - should normally be written as a function, but since there are only 2 such entries, this is quicker / easier # -------------------------------------------------------------------------------- # tweet 1 try: df_archive = df_archive.drop(746906459439529985) except KeyError: print('received a KeyError, meaning the tweet was already dropped from df_archive') # tweet 2 try: df_archive = df_archive.drop(835152434251116546) except KeyError: print('received a KeyError, meaning the tweet was already dropped from df_archive') ###Output _____no_output_____ ###Markdown Test ###Code # - use try/except blocks for debugging purposes # - should normally be written as a function, but since there are only 2
such entries, this is quicker / easier # -------------------------------------------------------------------------------- # tweet 1 try: print(df_archive.loc[746906459439529985]) except KeyError: print('received a KeyError, meaning the tweet was successfully dropped from df_archive') # tweet 2 try: print(df_archive.loc[835152434251116546]) except KeyError: print('received a KeyError, meaning the tweet was successfully dropped from df_archive') ###Output received a KeyError, meaning the tweet was successfully dropped from df_archive received a KeyError, meaning the tweet was successfully dropped from df_archive ###Markdown ([Top of Page](top_of_page)) Define* In **df_archive**, 'timestamp' and 'retweeted_status_timestamp' should have type 'datetime'.* Likely code to use: * df_archive\['timestamp'\] = pd.to_datetime(df_archive\['timestamp'\]) * df_archive\['retweeted_status_timestamp'\] = pd.to_datetime(df_archive\['retweeted_status_timestamp'\]) Code ###Code df_archive['timestamp'] = pd.to_datetime(df_archive['timestamp']) df_archive['retweeted_status_timestamp'] = pd.to_datetime(df_archive['retweeted_status_timestamp']) ###Output _____no_output_____ ###Markdown Test ###Code df_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2060 entries, 892420643555336193 to 666020888022790149 Data columns (total 16 columns): in_reply_to_status_id 22 non-null float64 in_reply_to_user_id 22 non-null float64 timestamp 2060 non-null datetime64[ns, UTC] source 2060 non-null object text 2060 non-null object retweeted_status_id 74 non-null float64 retweeted_status_user_id 74 non-null float64 retweeted_status_timestamp 74 non-null datetime64[ns, UTC] expanded_urls 2060 non-null object rating_numerator 2060 non-null int64 rating_denominator 2060 non-null int64 name 2060 non-null object doggo 2060 non-null object floofer 2060 non-null object pupper 2060 non-null object puppo 2060 non-null object dtypes: datetime64[ns, UTC](2), float64(4), int64(2), object(8) memory usage:
353.6+ KB ###Markdown The 'timestamp' and 'retweeted_status_timestamp' columns both have type 'datetime' ([Top of Page](top_of_page)) Define* In **df_archive**, the following columns should have type 'int64': * 'in_reply_to_status_id' * 'in_reply_to_user_id', * 'retweeted_status_id' * 'retweeted_status_user_id' Code ###Code df_archive['in_reply_to_status_id'] = df_archive['in_reply_to_status_id'].fillna(0).astype(np.int64) df_archive['in_reply_to_user_id'] = df_archive['in_reply_to_user_id'].fillna(0).astype(np.int64) df_archive['retweeted_status_id'] = df_archive['retweeted_status_id'].fillna(0).astype(np.int64) df_archive['retweeted_status_user_id'] = df_archive['retweeted_status_user_id'].fillna(0).astype(np.int64) ###Output _____no_output_____ ###Markdown Test ###Code df_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2060 entries, 892420643555336193 to 666020888022790149 Data columns (total 16 columns): in_reply_to_status_id 2060 non-null int64 in_reply_to_user_id 2060 non-null int64 timestamp 2060 non-null datetime64[ns, UTC] source 2060 non-null object text 2060 non-null object retweeted_status_id 2060 non-null int64 retweeted_status_user_id 2060 non-null int64 retweeted_status_timestamp 74 non-null datetime64[ns, UTC] expanded_urls 2060 non-null object rating_numerator 2060 non-null int64 rating_denominator 2060 non-null int64 name 2060 non-null object doggo 2060 non-null object floofer 2060 non-null object pupper 2060 non-null object puppo 2060 non-null object dtypes: datetime64[ns, UTC](2), int64(6), object(8) memory usage: 353.6+ KB ###Markdown Each of the desired columns now has type 'int64' ([Top of Page](top_of_page)) _(No Cleaning)_ Assess Updated Columns * Upon changing these columns to 'int64', it became apparent that they were not adequately explored in the 'Assess' phase * 'in_reply_to_status_id' * 'in_reply_to_user_id', * 'retweeted_status_id' * 'retweeted_status_user_id'* The goals of this analysis are to only
analyze original tweets, not retweets* As such, some of the entries represented by these columns can probably be removed* Explore whether any info in these columns indicates invalid tweets (i.e., retweets) * **Start (the "re-assessment") by exploring 'retweeted_status_id'** ###Code df_temp_archive = df_archive[df_archive['retweeted_status_id'] != 0] #df_temp_archive = df_temp_archive[~df_archive['in_reply_to_status_id'].isnull()] df_temp_archive.tail(2) #https://twitter.com/dog_rates/status/671561002136281088 print(df_temp_archive.loc[667550904950915073].in_reply_to_status_id) print(df_temp_archive.loc[667550904950915073].in_reply_to_user_id) print(df_temp_archive.loc[667550904950915073].retweeted_status_id) print(df_temp_archive.loc[667550904950915073].retweeted_status_user_id) print(df_temp_archive.loc[667550904950915073].text) print(df_temp_archive.loc[667550904950915073].expanded_urls) print('---------------------------------------------------------------------------------') print(df_temp_archive.loc[667550882905632768].in_reply_to_status_id) print(df_temp_archive.loc[667550882905632768].in_reply_to_user_id) print(df_temp_archive.loc[667550882905632768].retweeted_status_id) print(df_temp_archive.loc[667550882905632768].retweeted_status_user_id) print(df_temp_archive.loc[667550882905632768].text) print(df_temp_archive.loc[667550882905632768].expanded_urls) ###Output 0 0 667548695664070656 4296831739 RT @dogratingrating: Exceptional talent. Original humor. Cutting edge, Nova Scotian comedian. 12/10 https://t.co/uarnTjBeVA https://twitter.com/dogratingrating/status/667548695664070656/photo/1,https://twitter.com/dogratingrating/status/667548695664070656/photo/1 --------------------------------------------------------------------------------- 0 0 667548415174144000 4296831739 RT @dogratingrating: Unoriginal idea. Blatant plagiarism. Curious grammar. 
-5/10 https://t.co/r7XzeQZWzb https://twitter.com/dogratingrating/status/667548415174144001/photo/1,https://twitter.com/dogratingrating/status/667548415174144001/photo/1 ###Markdown * Tweets with 'retweeted_status_id' != 0 are actually retweets.* Since we only want original tweets in our final data set, **tweets with 'retweeted_status_id' != 0 should be removed.** * Once those tweets have been removed, the remaining values for 'retweeted_status_id', 'retweeted_status_user_id', and 'retweeted_status_timestamp' will all be zero or null.* Therefore, after retweets have been removed, **columns 'retweeted_status_id', 'retweeted_status_user_id', and 'retweeted_status_timestamp' should be dropped** **Now explore 'in_reply_to_status_id'** ###Code df_archive.info() df_temp_archive = df_archive[df_archive['in_reply_to_status_id'] != 0] df_temp_archive.tail(2) print(df_temp_archive.loc[671729906628341761].in_reply_to_status_id) print(df_temp_archive.loc[671729906628341761].in_reply_to_user_id) print(df_temp_archive.loc[671729906628341761].retweeted_status_id) print(df_temp_archive.loc[671729906628341761].retweeted_status_user_id) print(df_temp_archive.loc[671729906628341761].text) print(df_temp_archive.loc[671729906628341761].expanded_urls) print('---------------------------------------------------------------------------------') print(df_temp_archive.loc[669353438988365824].in_reply_to_status_id) print(df_temp_archive.loc[669353438988365824].in_reply_to_user_id) print(df_temp_archive.loc[669353438988365824].retweeted_status_id) print(df_temp_archive.loc[669353438988365824].retweeted_status_user_id) print(df_temp_archive.loc[669353438988365824].text) print(df_temp_archive.loc[669353438988365824].expanded_urls) df_temp_archive.groupby('in_reply_to_user_id').in_reply_to_user_id.count() ###Output _____no_output_____ ###Markdown * Tweets with 'in_reply_to_user_id' != 0 are tweets where the WeRateDogs account has replied to one of its own tweets with a new image, rating, etc.* 
Since these tweets contain a new dog image and rating, they are valid to keep ([Top of Page](top_of_page)) Definetweets with 'retweeted_status_id' != 0 should be removed, since they are retweets and we only want original tweets for this analysis Code ###Code # current metrics print('current # of tweets: ' + str(df_archive.text.count())) print('current # of retweets: ' + str(df_archive[df_archive['retweeted_status_id'] != 0].retweeted_status_id.count())) df_archive = df_archive[df_archive['retweeted_status_id'] == 0] ###Output _____no_output_____ ###Markdown Test ###Code # updated metrics print('current # of tweets: ' + str(df_archive.text.count())) print('current # of retweets: ' + str(df_archive[df_archive['retweeted_status_id'] != 0].retweeted_status_id.count())) ###Output current # of tweets: 1986 current # of retweets: 0 ###Markdown The 74 retweets have been successfully removed ([Top of Page](top_of_page)) Define* In **df_tweetInfo**, 'tweet_id' should have type 'int64' for consistency across the dataframes. 
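Casting identifier strings to `int64` only succeeds when every value is present and numeric; with missing values, the `fillna(0).astype(np.int64)` pattern used earlier (or pandas' nullable `'Int64'` dtype) is needed instead. A minimal sketch with made-up id strings standing in for `df_tweetInfo['tweet_id']`:

```python
import numpy as np
import pandas as pd

# made-up id strings standing in for df_tweetInfo['tweet_id']
s = pd.Series(['892420643555336193', '892177421306343426'])

# works because every value is present and numeric; with NaNs this cast
# raises, hence the fillna(0).astype(np.int64) pattern used earlier
# (or pandas' nullable 'Int64' dtype)
ids = s.astype(np.int64)
print(ids.dtype)
```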
Code ###Code df_tweetInfo.info() df_tweetInfo['tweet_id'] = df_tweetInfo['tweet_id'].astype(np.int64) ###Output _____no_output_____ ###Markdown Test ###Code df_tweetInfo.info() df_tweetInfo.head() ###Output _____no_output_____ ###Markdown ([Top of Page](top_of_page)) Define* Add a 'none' column to **df_archive** for tweets that do not have a dog stage* In **df_archive**, make sure all tweets have only one dog stage Code ###Code #df_temp_archive = df_archive.copy(deep=True) # replace stage entries with 1's and 0's df_archive.doggo = df_archive.doggo.replace('None', 0) df_archive.doggo = df_archive.doggo.replace('doggo', 1) df_archive.floofer = df_archive.floofer.replace('None', 0) df_archive.floofer = df_archive.floofer.replace('floofer', 1) df_archive.pupper = df_archive.pupper.replace('None', 0) df_archive.pupper = df_archive.pupper.replace('pupper', 1) df_archive.puppo = df_archive.puppo.replace('None', 0) df_archive.puppo = df_archive.puppo.replace('puppo', 1) df_archive['none'] = 1 - (df_archive.doggo + df_archive.floofer + df_archive.pupper + df_archive.puppo) print('# of tweets w/ multiple dog stage values: ' + str(df_archive[df_archive.none == -1].none.count())) print('# of tweets w/ a single dog stage value: ' + str(df_archive[df_archive.none == 0].none.count())) print('# of tweets w/ no dog stage value: ' + str(df_archive[df_archive.none == 1].none.count())) df_archive = df_archive[df_archive.none != -1] ###Output # of tweets w/ multiple dog stage values: 11 # of tweets w/ a single dog stage value: 294 # of tweets w/ no dog stage value: 1681 ###Markdown Test ###Code print('# of tweets w/ multiple dog stage values: ' + str(df_archive[df_archive.none == -1].none.count())) print('# of tweets w/ a single dog stage value: ' + str(df_archive[df_archive.none == 0].none.count())) print('# of tweets w/ no dog stage value: ' + str(df_archive[df_archive.none == 1].none.count())) df_archive.head() ###Output _____no_output_____ ###Markdown * A 'none' column was added
for tweets that do not have a dog stage* Tweets w/ multiple dog stages have been removed ([Top of Page](top_of_page)) Clean - Tidiness ([Top of Page](top_of_page)) DefineAfter retweets have been removed, columns 'retweeted_status_id', 'retweeted_status_user_id', and 'retweeted_status_timestamp' should be dropped Code ###Code df_archive.info() df_archive.drop('retweeted_status_id', axis=1, inplace=True) df_archive.drop('retweeted_status_user_id', axis=1, inplace=True) df_archive.drop('retweeted_status_timestamp', axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code df_archive.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1975 entries, 892420643555336193 to 666020888022790149 Data columns (total 14 columns): in_reply_to_status_id 1975 non-null int64 in_reply_to_user_id 1975 non-null int64 timestamp 1975 non-null datetime64[ns, UTC] source 1975 non-null object text 1975 non-null object expanded_urls 1975 non-null object rating_numerator 1975 non-null int64 rating_denominator 1975 non-null int64 name 1975 non-null object doggo 1975 non-null int64 floofer 1975 non-null int64 pupper 1975 non-null int64 puppo 1975 non-null int64 none 1975 non-null int64 dtypes: datetime64[ns, UTC](1), int64(9), object(4) memory usage: 231.4+ KB ###Markdown The 'retweeted_status_id', 'retweeted_status_user_id', and 'retweeted_status_timestamp' columns have been successfully dropped from df_archive ([Top of Page](top_of_page)) Define'doggo', 'floofer', 'pupper', and 'puppo' are categories and should be **combined into a single column**.
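The melt used for this step can be hard to visualize on the full archive; a toy illustration of the pattern (three fabricated rows, not the real data):

```python
import pandas as pd

toy = pd.DataFrame({'tweet_id': [1, 2, 3],
                    'doggo':  [1, 0, 0],
                    'pupper': [0, 1, 0],
                    'none':   [0, 0, 1]})

# melt the one-hot stage columns into (tweet_id, DogStage, value) rows,
# then keep only the rows whose indicator is set
long = pd.melt(toy, id_vars=['tweet_id'],
               value_vars=['doggo', 'pupper', 'none'],
               var_name='DogStage')
long = long[long.value == 1].drop(columns='value')

print(long.sort_values('tweet_id').to_string(index=False))
```

Each input row ends up as exactly one output row because the indicators are mutually exclusive, which is why tweets with multiple stages were removed beforehand.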
Code ###Code # create a temporary copy of the df for debugging purposes df_temp_archive = df_archive.copy(deep=True) #df_archive = df_temp_archive.copy(deep=True) df_temp_archive.shape # save the tweet_id back to a column (so the melt function doesn't delete it) df_archive['tweet_id'] = df_archive.index df_archive.reset_index(drop=True,inplace=True) # indicate which columns have entries that need melted values = ['doggo', 'floofer', 'pupper', 'puppo', 'none'] # create a list of all other columns ids = [col for col in list(df_archive.columns) if col not in values] print('df_archive.shape: \t' + str(df_archive.shape), '\n') print('all df_archive columns: \t' + str(df_archive.columns), '\n') print('df_archive columns to keep: \t' + str(ids)) #current preview of dataframe df_archive.head() ###Output _____no_output_____ ###Markdown This [link provides an example](https://www.geeksforgeeks.org/python-pandas-melt/) of melting dataframe columns. ###Code # melt the dog name columns into a single 'value' column df_archive = pd.melt(df_archive, id_vars= ids, value_vars= values, var_name= 'DogStage') # only keep those rows that have an entry for dog stage df_archive = df_archive[df_archive.value == 1] # set 'tweet_id' as the index again df_archive = df_archive.set_index('tweet_id') # drop the 'value' column, since it is no longer needed df_archive.drop('value', axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code # check that the shape of the updated dataframe is accurate df_archive.shape ###Output _____no_output_____ ###Markdown That is the correct shape ###Code df_archive.info() # check that the value counts in the 'DogStage' column are correct print('# of tweets w/ a single dog stage value: ' + str(df_archive[~df_archive['DogStage'].str.contains("none")].DogStage.count())) print('# of tweets w/ no dog stage value: ' + str(df_archive[df_archive['DogStage'].str.contains("none")].DogStage.count())) ###Output # of tweets w/ a single dog stage value: 294 
# of tweets w/ no dog stage value: 1681 ###Markdown The value counts in the 'DogStage' column are correct, since they match the corresponding counts [determined earlier](clean_number_of_dog_stages). ([Top of Page](top_of_page)) Define* Issue: * In df_archive, **urls should be extracted from the 'text' column*** Method: * Use regexp to extract the url from 'text' entries * split the 'text' entries on 'http' and only keep the first element Code ###Code # Extract url from text entries df_archive['tweet_url'] = df_archive.text.str.extract('(https://.*/+[0-9a-zA-Z]+)') # split the 'text' entries on 'http' and only keep the first element df_archive['text'] = df_archive.text.apply(lambda x: x.split('http')[0]) ###Output _____no_output_____ ###Markdown Test * Use df_archive.shape to check that the number of columns has increased by one* View the head to check if operation was successful* Print the first few 'text' entries to make sure the urls were removed ###Code df_archive.shape df_archive.head(3) df_archive.text[0:3].apply(lambda x: print('- ' + x)) ###Output - This is Cassie. She is a college pup. Studying international doggo communication and stick theory. 14/10 so elegant much sophisticate - Meet Yogi. He doesn't have any important dog meetings today he just enjoys looking his best at all times. 12/10 for dangerously dapper doggo - Here's a very large dog. He has a date later. Politely asked this water person to check if his breath is bad. 12/10 good to go doggo ###Markdown ([Top of Page](top_of_page)) Define* **df_tweetInfo and df_archive should be merged into one dataframe.** * Combine them on 'tweet_id' Code ###Code # df_temp_archive = df_archive.copy(deep=True) # save a copy of df_archive for debugging purposes df_archive = df_temp_archive.copy(deep=True) # reset df_archive (for debugging purposes) ###Output _____no_output_____ ###Markdown Prior to joining the dataframes, check to make sure that they do not share column names. 
###Code print(df_archive.columns) print('--------------------------------------------------------') print(df_tweetInfo.columns) print('--------------------------------------------------------') print(df_archive.shape) print(df_tweetInfo.shape) ###Output Index(['in_reply_to_status_id', 'in_reply_to_user_id', 'timestamp', 'source', 'text', 'expanded_urls', 'rating_numerator', 'rating_denominator', 'name', 'doggo', 'floofer', 'pupper', 'puppo', 'none'], dtype='object') -------------------------------------------------------- Index(['tweet_id', 'retweet_count', 'favorite_count'], dtype='object') -------------------------------------------------------- (1975, 14) (2062, 3) ###Markdown The only shared column name is 'tweet_id'. This will be the column that the dataframes are joined on. ###Code # save the tweet_id back to a column (so it can be used for a merge) df_archive['tweet_id'] = df_archive.index df_archive.reset_index(drop=True,inplace=True) ###Output _____no_output_____ ###Markdown Merge the dataframes ###Code df_master = pd.merge(df_archive, df_tweetInfo, how='inner', on='tweet_id') #df_master = pd.merge(df_archive, df_tweetInfo, left_index=True, right_index=True, how='inner') #df_master = pd.merge(df_archive, df_tweetInfo[cols_to_use], left_index=True, right_index=True, how='inner') ###Output _____no_output_____ ###Markdown Test ###Code print(df_master.columns) print('---------------------------------------------------------------------') print(df_master.shape) # set 'tweet_id' as the index again df_master = df_master.set_index('tweet_id') df_master.head(2) ###Output _____no_output_____ ###Markdown ([Top of Page](top_of_page)) Define**Issue:*** After cleaning the data for quality and cleanliness, the two remaining dataframes (**df_master** and **df_images**) have a different number of records. 
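Before relying on an inner join, `pd.merge`'s `indicator` and `validate` parameters can confirm how the rows matched; a self-contained sketch on toy frames (not the project dataframes):

```python
import pandas as pd

left = pd.DataFrame({'tweet_id': [1, 2, 3], 'text': ['a', 'b', 'c']})
right = pd.DataFrame({'tweet_id': [2, 3, 4], 'favorite_count': [10, 20, 30]})

# indicator=True adds a _merge column saying where each row came from;
# validate='one_to_one' raises MergeError if tweet_id repeats on either side
merged = pd.merge(left, right, how='outer', on='tweet_id',
                  indicator=True, validate='one_to_one')

print(merged[['tweet_id', '_merge']])
```

Rows flagged `left_only` or `right_only` are exactly the ones an inner join would silently drop, which is useful to eyeball before committing to `how='inner'`.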
**Pseudo-code:*** Make 'tweet_id' a column of df_master* Create a list of tweet_ids for each dataframe* Keep the tweet_ids that are common to each* Using that list of tweet_ids, reassign the dataframes* Set 'tweet_id' as the index for each dataframe Code ###Code # # create a temporary copy of df_master for debugging purposes # df_temp_master = df_master.copy(deep=True) # df_temp_images = df_images.copy(deep=True) print(df_master.shape) print(df_master.columns) print('----------------------------------------------------------------------------') print(df_images.shape) print(df_images.columns) # Make 'tweet_id' a column of df_master df_master['tweet_id'] = df_master.index df_master.reset_index(drop=True,inplace=True) # Create a list of tweet_ids for each dataframe: tweet_ids_master = df_master['tweet_id'].tolist() tweet_ids_images = df_images['tweet_id'].tolist() # Keep the tweet_ids that are common to each list tweet_ids_to_keep = list(set(tweet_ids_master).intersection(tweet_ids_images)) # Using that list of tweet_ids, reassign the dataframes: df_master = df_master[df_master.tweet_id.isin(tweet_ids_to_keep)] df_images = df_images[df_images.tweet_id.isin(tweet_ids_to_keep)] # Set 'tweet_id' as the index for each dataframe df_master = df_master.set_index('tweet_id') df_images = df_images.set_index('tweet_id') ###Output _____no_output_____ ###Markdown Test ###Code print(df_master.shape) print(df_master.columns) print('----------------------------------------------------------------------------') print(df_images.shape) print(df_images.columns) df_master.head(2) df_images.head(2) df_master.head(1) ###Output _____no_output_____ ###Markdown * The number of rows is now equal* df_images now has 'tweet_id' as its index ([Top of Page](top_of_page)) Preparation for Analysis ([Top of Page](top_of_page)) Feature Engineering**Add features to the dataframe for analysis purposes** ###Code #df_temp_master = df_master.copy(deep=True) df_master.favorite_count =
df_master.favorite_count.astype(int) df_master.retweet_count = df_master.retweet_count.astype(int) # define new columns (bracket assignment creates real columns; attribute # assignment only sets an attribute and triggers a pandas UserWarning) df_master['time_Delta'] = pd.Series(list(range(len(df_master))), index=df_master.index) df_master['year'] = pd.Series(list(range(len(df_master))), index=df_master.index) df_master['month'] = pd.Series(list(range(len(df_master))), index=df_master.index) # find timestamp of first tweet timestamp_firstTweet = df_master.timestamp.min() # calculate timeDelta since first tweet. Also extract year and month of the given tweet for idx, row in df_master.iterrows(): df_master.loc[idx,'time_Delta'] = row.timestamp - timestamp_firstTweet df_master.loc[idx,'year'] = row.timestamp.year df_master.loc[idx,'month'] = row.timestamp.month # print(str() + str(row.timestamp.year) df_master.info() ###Output _____no_output_____ ###Markdown ([Top of Page](top_of_page)) Save the Cleaned Dataframes and Create Local Backups ###Code # Save the cleaned and updated dataframes df_master.to_csv('twitter_archive_master.csv') df_images.to_csv('tweet_images_master.csv') # create local copies of the dataframes for debugging purposes df_temp_master = df_master.copy(deep=True) df_temp_images = df_images.copy(deep=True) ###Output _____no_output_____ ###Markdown ([Top of Page](top_of_page)) Create Plotting FunctionsCreate
various plotting functions that can be invoked for later analysis ([Top of Page](top_of_page)) Histograms of Favorites ###Code def plot_hist_favorites(): fig, ax = plt.subplots(1, 1, figsize=(12, 7)) # define figure, axis, and plot objects plt.hist(df_master.favorite_count, bins = 100, alpha=0.7) plt.xticks(fontsize=14) plt.yticks(fontsize=14) #plt.title('Histogram of \'Favorites\' Distribution', fontsize=16) plt.xlabel('# of Favorites', fontsize=16) plt.ylabel('Frequency', fontsize=16) ax.spines['top'].set_visible(False) # de-clutter chart / reduce data-to-ink ratio # #ax.spines['bottom'].set_visible(False) ax.spines['right'].set_visible(False) # #ax.spines['left'].set_visible(False) plt.show() def plot_hist_logFavorites(): fig, ax = plt.subplots(1, 1, figsize=(12, 7)) # define figure, axis, and plot objects #plt.hist(df_temp_master.favorite_count, bins = 100) bin_edges = 10 ** np.arange(0.8, np.log10(df_master.favorite_count.max())+0.1, 0.1) plt.hist(df_master.favorite_count, bins = bin_edges, alpha=0.7) plt.xscale('log') tick_locs = [10, 30, 100, 300, 1000, 3000, 10000, 30000, 100000] plt.xticks(tick_locs, tick_locs, fontsize=14) plt.yticks(fontsize=14) #plt.title('Log-normal histogram of \'Favorites\' Distribution', fontsize=16) plt.xlabel('# of Favorites', fontsize=16) plt.ylabel('Frequency', fontsize=16) ax.spines['top'].set_visible(False) # de-clutter chart / reduce data-to-ink ratio # #ax.spines['bottom'].set_visible(False) ax.spines['right'].set_visible(False) # #ax.spines['left'].set_visible(False) plt.show() def plot_hist_logFavorites_density(): fig, ax = plt.subplots(1, 1, figsize=(12, 7)) # define figure, axis, and plot objects sb.distplot(np.log10(df_master.favorite_count), hist_kws = {'alpha' : 0.35}); #sb.distplot(df_temp_master.favorite_count); #sb.distplot(df_temp_master.favorite_count, bins = bin_edges); plt.xscale('log') plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.xlabel('log10(Favorites)', fontsize=16) plt.ylabel('Proportion', 
fontsize=16) ax.spines['top'].set_visible(False) # de-clutter chart / reduce data-to-ink ratio # #ax.spines['bottom'].set_visible(False) ax.spines['right'].set_visible(False) # #ax.spines['left'].set_visible(False) ###Output _____no_output_____ ###Markdown ([Top of Page](top_of_page)) Favorites vs. Time * Reference / link to [Pyplot tutorial that contains a table of 'Line2D' properties](https://matplotlib.org/users/pyplot_tutorial.html) ###Code def plot_favorites_vs_time(): # remove outliers (i.e., tweets with # favorites near the bounds of the distribution) lower_bound = 0.05 upper_bound = 0.95 mask1 = df_master['favorite_count'] > df_master.favorite_count.quantile(0.05) mask2 = df_master['favorite_count'] < df_master.favorite_count.quantile(0.95) df_master_no_outliers = df_master[mask1 & mask2] fig, ax = plt.subplots(1, 1, figsize=(12, 7)) x = df_master_no_outliers['time_Delta'].dt.days y = df_master_no_outliers['favorite_count'] plt.scatter(x, y, alpha=0.3); z = np.polyfit(x, y, 1) # add a linear trendline p = np.poly1d(z) plt.plot(x,p(x),"r--", linewidth=1.5) # 'linewidth' specified for reference only (default == 1.5) label = "Linear Fit" plt.text(560, 16900, label, fontsize=14, color='red') plt.xticks(fontsize=14) # add / format ticks and labels plt.yticks(fontsize=14) plt.xlabel('Time After First Tweet (days)', fontsize=16) plt.ylabel('Favorites', fontsize=16) ax.spines['top'].set_visible(False) # de-clutter chart / reduce data-to-ink ratio ax.spines['right'].set_visible(False) ###Output _____no_output_____ ###Markdown ([Top of Page](top_of_page)) Tweets per week * [Here is documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html) for the dataframe "sort values" function* Documentation for [using time series in pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html) * This documentation was checked in order to understand how to specify the frequency of the "grouped" dataframe* 
[Documentation for using pandas.grouper](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Grouper.html)* Here is a [StackOverflow example](https://stackoverflow.com/questions/45281297/group-by-week-in-pandas) that makes use of both ###Code def plot_tweets_per_week(): # sort the df by amount of time since first tweet df_master_sorted = df_master.sort_values('time_Delta', ascending=True) # groupby each month, then count tweets per month df_master_grouped = df_master_sorted.groupby([pd.Grouper(key='timestamp', freq='M')]) monthly_tweets = df_master_grouped['text'].count() # define figure, axis, and plot objects fig, ax = plt.subplots(1, 1, figsize=(12, 7)) month_num = list(range(len(monthly_tweets))) plt.plot(month_num, monthly_tweets, ms=6, marker= 's') plt.xticks(fontsize=14) plt.yticks(fontsize=14) #plt.title('# of Tweets vs Time', fontsize=16) plt.xlabel('Months since First Tweet', fontsize=16) plt.ylabel('Tweets per Month', fontsize=16) ax.spines['top'].set_visible(False) # de-clutter chart / reduce data-to-ink ratio # #ax.spines['bottom'].set_visible(False) ax.spines['right'].set_visible(False) # #ax.spines['left'].set_visible(False) plt.show() ###Output _____no_output_____ ###Markdown ([Top of Page](top_of_page)) Tweets vs. 
Breed Type * Reference for [creating horizontal plots with Matplotlib](https://pythonspot.com/matplotlib-bar-chart/)* Reference for [adding vertical axis lines to a horizontal plot](https://mode.com/example-gallery/python_horizontal_bar/)* Reference for [Map, Filter, Lambda, and List Comprehensions in Python](http://www.u.arizona.edu/~erdmann/mse350/topics/list_comprehensions.html) ###Code def plot_tweets_vs_breedType(): # df of tweets for which the neural network has a high degree of confidence numBreeds = 10 breedsToAnalyze = df_images[df_images['p1_conf'] > 0.50] breedsToAnalyze_topTen = breedsToAnalyze.p1.value_counts()[0:numBreeds] y_pos = np.arange(0,numBreeds) # vertical position of horizontal bars #y_pos = np.arange(len(breedsToAnalyze_topTen)) # alternative approach num_Tweets = breedsToAnalyze_topTen.values.tolist() # define length of bars fig, ax = plt.subplots(1, 1, figsize=(12, 7)) # define figure, axis, and plot objects plt.barh(y_pos, num_Tweets, align='center', alpha=0.7) # correct capitalization and underscore issues in breed names labels = [x.replace('_', ' ').title() for x in breedsToAnalyze_topTen.index.tolist()] plt.yticks(y_pos, labels, fontsize=14) # add / format ticks and labels plt.xlabel('Number of Tweets', fontsize=14) plt.xticks(fontsize=14) ax.spines['top'].set_visible(False) # de-clutter chart / reduce data-to-ink ratio ax.spines['bottom'].set_visible(False) ax.spines['right'].set_visible(False) #ax.spines['left'].set_visible(False) # Draw vertical axis lines vals = ax.get_xticks() for tick in vals: ax.axvline(x=tick, linestyle='-', alpha=0.1, color='gray', zorder=11) plt.show() ###Output _____no_output_____ ###Markdown ([Top of Page](top_of_page)) Analysis and VisualizationsAnalyze the cleaned datasets to see what insights this data can provide. 
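Before plotting, the skewness claim examined in the histograms can be quantified directly. The following is a minimal sketch (not part of the original notebook) that uses synthetic log-normally distributed counts in place of `df_master.favorite_count`: a heavy right skew in the raw counts that largely disappears after a log10 transform is exactly what motivates the log-scale histogram.

```python
import numpy as np

def skewness(x):
    # Fisher-Pearson coefficient of skewness: E[(x - mean)^3] / std^3
    x = np.asarray(x, dtype=float)
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

# Synthetic stand-in for df_master.favorite_count (assumption: the real counts
# are roughly log-normal, as the histograms suggest)
rng = np.random.default_rng(42)
favorites = rng.lognormal(mean=8.0, sigma=1.0, size=5000)

print(f"raw skew:   {skewness(favorites):.2f}")            # strongly positive (right-skewed)
print(f"log10 skew: {skewness(np.log10(favorites)):.2f}")  # close to zero
```

The same two calls, applied to the real `favorite_count` column, give a one-number check of whether a log-scale axis is warranted.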
([Top of Page](top_of_page)) Histograms of Favorites* It would be interesting to determine if the number of favorites a tweet gets is random or if it adheres to some sort of known distribution.* Let's start by checking a simple histogram of favorites: ###Code plot_hist_favorites() ###Output _____no_output_____ ###Markdown * It turns out that the distribution of favorites is highly skewed to the right.* Plotting the favorites distribution on a log scale may be more useful. Let's try that. ###Code plot_hist_logFavorites() ###Output _____no_output_____ ###Markdown * Now we're getting somewhere. It appears that the number of favorites roughly follows a log-normal distribution, albeit one that is skewed to the left and appears to be slightly bi-modal.* In fact, the favorites distribution is NOT log-normal, but describing it as (roughly) log-normal is much more accurate than treating the number of favorites as though it follows a normal distribution.* Bimodality: * The most frequent number of favorites seems to be ~3000, with a second cluster of tweets that tend to have ~10000 favorites * Let's add a density plot to the histogram to better visualize the (likely) bi-modality ###Code plot_hist_logFavorites_density() ###Output _____no_output_____ ###Markdown * The kernel density estimation does indicate some slight bi-modality. This may be an indicator of how viral a particular tweet becomes: * Whereas many tweets may be favorited primarily by Twitter users that follow @dog_rates (and thereby garner ~3000 favorites), other tweets may be favorited by communities and users outside of the "typical" @dog_rates followers (and thereby garner ~10000 favorites) * Additional analysis of **who** is favoriting various tweets could be interesting and applicable here. However, that is beyond the current scope of this project.* Truly **viral** tweets will have a high number of favorites, but be much less common. These tweets comprise the right tail of the distribution.
* NOTE: * The line above is a kernel density estimation and represents the likely statistical distribution * Here is an interesting article on [Stacking Multiple Histograms Using Seaborn and Density Plots](https://towardsdatascience.com/histograms-and-density-plots-in-python-f6bda88f5ac0) ([Top of Page](top_of_page)) Favorites vs. TimeNow that we've looked at the favorites distribution, let's see how it changes over time. ###Code plot_favorites_vs_time() ###Output _____no_output_____ ###Markdown * This is interesting. Clearly, the number of favorites that @dog_rates tweets receive has increased over time.* It also appears that the distribution of favorites changes over time * Whereas a majority of tweets in the first ~120 days have less than 5000 favorites, later tweets have a much more even distribution * This indicates that explicitly plotting the favorites distribution vs time may provide interesting insights. * A faceted or ridgeline plot may be useful for such a visualization. * That is considered outside of the current project scope, however. So I am noting it here as a potential future step. ([Top of Page](top_of_page)) Tweets per weekSince the last plot showed a high density of tweets in the first ~120 days, let's plot the number of tweets vs time to better visualize this particular trend ###Code plot_tweets_per_week() ###Output _____no_output_____ ###Markdown * As implied by the previous plot, the number of tweets did indeed decrease after the first few months.* Potential future analyses (not performed now for scope reasons): * It could be interesting to analyze other pieces of data to try and determine if there are any underlying reasons for the decrease.
For instance: * It may be a natural characteristic of new twitter accounts * It may be a natural characteristic of _highly successful_ twitter accounts * It may correspond with @dog_rates transforming from a hobby to a brand and business * Other time-based analyses may also be of interest, such as: * Is there a day of the week or time of a given day that @dog_rates posts more often? * Is there a day of the week or time of a given day that corresponds to a tweet receiving more likes? ([Top of Page](top_of_page)) Tweets vs. Breed TypeLast, but not least, let's do a quick exploration of whether some dog breeds are tweeted more than others. ###Code plot_tweets_vs_breedType() ###Output _____no_output_____ ###Markdown INTRODUCTION"This project involves wrangling data obtained from three sources, all of which relate to the famous WeRateDogs (@dog_rates) Twitter account. WeRateDogs is a Twitter account that tweets images of dogs their owners send in, along with a funny caption and a rating that almost always exceeds 10/10" 1. 
GATHERING DATA ###Code import requests import os import tweepy as tw import seaborn as sns import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import json df_twitter = pd.read_csv('twitter-archive-enhanced.csv') df_twitter.head() #Download tsv file by using the requests library url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open(url.split('/')[-1], mode="wb") as file: file.write(response.content) image_predictions = pd.read_csv('image-predictions.tsv', sep = '\t') response # my API keys (placeholders -- substitute your own credentials; real keys should never be committed to a notebook) import tweepy consumer_key = 'consumer_key' consumer_secret = 'consumer_secret' access_token = 'access_token' access_secret = 'access_secret' #Tweepy Query auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) # Showing the data in the image predictions file image_predictions = pd.read_csv('image-predictions.tsv', sep = '\t') image_predictions.head() auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, parser = tweepy.parsers.JSONParser(), wait_on_rate_limit = True, wait_on_rate_limit_notify = True) #Download Tweepy status object based on Tweet ID and store in list list_of_tweets = [] # Tweets that can't be found are saved in the list below: cant_find_tweets_for_those_ids = [] for tweet_id in df_twitter['tweet_id']: try: list_of_tweets.append(api.get_status(tweet_id)) except Exception as e: cant_find_tweets_for_those_ids.append(tweet_id) print("The list of tweets", len(list_of_tweets)) print("The list of tweets not found", len(cant_find_tweets_for_those_ids)) #status object that we have downloaded and we add them all
into a list my_list_of_dicts = [] for each_json_tweet in list_of_tweets: my_list_of_dicts.append(each_json_tweet) #we write this list into a txt file: with open('tweet_json.txt', 'w') as file: file.write(json.dumps(my_list_of_dicts, indent=4)) #identify information from JSON dictionaries in txt file and put it in a dataframe called tweet JSON my_demo_list = [] with open('tweet_json.txt', encoding='utf-8') as json_file: all_data = json.load(json_file) for each_dictionary in all_data: tweet_id = each_dictionary['id'] whole_tweet = each_dictionary['text'] only_url = whole_tweet[whole_tweet.find('https'):] favorite_count = each_dictionary['favorite_count'] retweet_count = each_dictionary['retweet_count'] followers_count = each_dictionary['user']['followers_count'] friends_count = each_dictionary['user']['friends_count'] whole_source = each_dictionary['source'] only_device = whole_source[whole_source.find('rel="nofollow">') + 15:-4] source = only_device retweeted_status = each_dictionary['retweeted_status'] = each_dictionary.get('retweeted_status', 'Original tweet') if retweeted_status == 'Original tweet': url = only_url else: retweeted_status = 'This is a retweet' url = 'This is a retweet' my_demo_list.append({'tweet_id': str(tweet_id), 'favorite_count': int(favorite_count), 'retweet_count': int(retweet_count), 'followers_count': int(followers_count), 'friends_count': int(friends_count), 'url': url, 'source': source, 'retweeted_status': retweeted_status, }) tweet_json = pd.DataFrame(my_demo_list, columns = ['tweet_id', 'favorite_count','retweet_count', 'followers_count', 'friends_count','source', 'retweeted_status', 'url']) tweet_json.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 337 entries, 0 to 336 Data columns (total 8 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 337 non-null object 1 favorite_count 337 non-null int64 2 retweet_count 337 non-null int64 3 followers_count 337 non-null int64 4 friends_count 
337 non-null int64 5 source 337 non-null object 6 retweeted_status 337 non-null object 7 url 337 non-null object dtypes: int64(4), object(4) memory usage: 21.2+ KB ###Markdown 2. ASSESSING DATA Visual assessment: Each piece of gathered data is displayed below for visual assessment purposes. ###Code df_twitter image_predictions tweet_json ###Output _____no_output_____ ###Markdown Programmatic assessment: Pandas' functions and/or methods are used to assess the data. ###Code df_twitter.info() image_predictions.info() tweet_json.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 337 entries, 0 to 336 Data columns (total 8 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 337 non-null object 1 favorite_count 337 non-null int64 2 retweet_count 337 non-null int64 3 followers_count 337 non-null int64 4 friends_count 337 non-null int64 5 source 337 non-null object 6 retweeted_status 337 non-null object 7 url 337 non-null object dtypes: int64(4), object(4) memory usage: 21.2+ KB ###Markdown Archiving Dataframe Analysis ###Code df_twitter.rating_numerator.value_counts() print(df_twitter.loc[df_twitter.rating_numerator == 17, 'text']) print(df_twitter.loc[df_twitter.rating_numerator == 144, 'text']) print(df_twitter.loc[df_twitter.rating_numerator == 666, 'text']) print(df_twitter.loc[df_twitter.rating_numerator == 165, 'text']) print(df_twitter.loc[df_twitter.rating_numerator == 1776, 'text']) #print whole text in order to verify numerators and denominators print(df_twitter['text'][55]) print(df_twitter['text'][1779]) print(df_twitter['text'][189]) print(df_twitter['text'][902]) print(df_twitter['text'][979]) df_twitter.rating_denominator.value_counts() print(df_twitter.loc[df_twitter.rating_denominator == 50, 'text']) print(df_twitter.loc[df_twitter.rating_denominator == 2, 'text']) print(df_twitter.loc[df_twitter.rating_denominator == 150, 'text']) print(df_twitter.loc[df_twitter.rating_denominator == 15, 'text'])
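As a cross-check on the anomalous ratings printed above, the numerator/denominator pair can also be pulled straight from the tweet text with a regex. This is a sketch with made-up sample texts (hypothetical); the same `str.extract` call applies to `df_twitter.text`:

```python
import pandas as pd

# Made-up sample texts (hypothetical); the same pattern applies to df_twitter.text.
sample = pd.Series(["This is Bella. She's a 13.5/10", "Happy 4th of July! 1776/10"])
# Capture "<numerator>/<denominator>"; decimal numerators such as "13.5" are allowed.
extracted = sample.str.extract(r'(\d+(?:\.\d+)?)/(\d+)')
extracted.columns = ['numerator_check', 'denominator_check']
print(extracted)
```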
print(df_twitter.loc[df_twitter.rating_denominator == 80, 'text']) print(df_twitter['text'][1202]) print(df_twitter['text'][1274]) print(df_twitter['text'][1351]) print(df_twitter['text'][2335]) print(df_twitter['text'][902]) print(df_twitter['text'][342]) print(df_twitter['text'][1254]) print(df_twitter['text'][1843]) df_twitter['name'].value_counts() df_twitter[df_twitter.tweet_id.duplicated()] df_twitter.describe() image_predictions.sample(10) image_predictions.info() image_predictions[image_predictions.tweet_id.duplicated()] ###Output _____no_output_____ ###Markdown Twitter Counts Dataframe ###Code tweet_json.head() tweet_json.info() tweet_json.describe() ###Output _____no_output_____ ###Markdown 3. CLEANING DATA Define 1. Removing columns that are no longer needed2. Merge the clean versions of df_twitter, image_predictions, and tweet_json dataframes Correct the dog types3. Delete retweets4. Creating one column for the various dog types: doggo, floofer, pupper, puppo Remove columns no longer needed.5. Change tweet_id to string from integer 6. Timestamp to correct datetime format7. Naming issues8. Creating a new dog_breed column using the image prediction data Clean1. 
Merge the clean versions of df_twitter, image_predictions, and tweet_json dataframes Correct the dog types ###Code dfs = pd.concat([df_twitter, image_predictions, tweet_json], join='outer', axis=1) dfs.head() dfs.columns ###Output _____no_output_____ ###Markdown Test ###Code dfs.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 37 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null object 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 doggo 2356 non-null object 14 floofer 2356 non-null object 15 pupper 2356 non-null object 16 puppo 2356 non-null object 17 tweet_id 2075 non-null float64 18 jpg_url 2075 non-null object 19 img_num 2075 non-null float64 20 p1 2075 non-null object 21 p1_conf 2075 non-null float64 22 p1_dog 2075 non-null object 23 p2 2075 non-null object 24 p2_conf 2075 non-null float64 25 p2_dog 2075 non-null object 26 p3 2075 non-null object 27 p3_conf 2075 non-null float64 28 p3_dog 2075 non-null object 29 tweet_id 337 non-null object 30 favorite_count 337 non-null float64 31 retweet_count 337 non-null float64 32 followers_count 337 non-null float64 33 friends_count 337 non-null float64 34 source 337 non-null object 35 retweeted_status 337 non-null object 36 url 337 non-null object dtypes: float64(13), int64(3), object(21) memory usage: 681.2+ KB ###Markdown Clean2. 
Removing columns that are no longer needed ###Code dfs.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'source', 'img_num', 'friends_count', 'source', 'url', 'followers_count'], axis = 1, inplace=True) dfs = dfs.loc[:,~dfs.columns.duplicated()] dfs.columns ###Output _____no_output_____ ###Markdown Test ###Code dfs.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 26 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 timestamp 2356 non-null object 2 text 2356 non-null object 3 retweeted_status_id 181 non-null float64 4 retweeted_status_user_id 181 non-null float64 5 retweeted_status_timestamp 181 non-null object 6 expanded_urls 2297 non-null object 7 rating_numerator 2356 non-null int64 8 rating_denominator 2356 non-null int64 9 name 2356 non-null object 10 doggo 2356 non-null object 11 floofer 2356 non-null object 12 pupper 2356 non-null object 13 puppo 2356 non-null object 14 jpg_url 2075 non-null object 15 p1 2075 non-null object 16 p1_conf 2075 non-null float64 17 p1_dog 2075 non-null object 18 p2 2075 non-null object 19 p2_conf 2075 non-null float64 20 p2_dog 2075 non-null object 21 p3 2075 non-null object 22 p3_conf 2075 non-null float64 23 p3_dog 2075 non-null object 24 favorite_count 337 non-null float64 25 retweet_count 337 non-null float64 dtypes: float64(7), int64(3), object(16) memory usage: 478.7+ KB ###Markdown Clean3. 
Delete retweets ###Code dfs = dfs[np.isnan(dfs.retweeted_status_id)] dfs.info() dfs = dfs.drop(['retweeted_status_id', \ 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code dfs.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2175 entries, 0 to 2355 Data columns (total 23 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2175 non-null int64 1 timestamp 2175 non-null object 2 text 2175 non-null object 3 expanded_urls 2117 non-null object 4 rating_numerator 2175 non-null int64 5 rating_denominator 2175 non-null int64 6 name 2175 non-null object 7 doggo 2175 non-null object 8 floofer 2175 non-null object 9 pupper 2175 non-null object 10 puppo 2175 non-null object 11 jpg_url 1896 non-null object 12 p1 1896 non-null object 13 p1_conf 1896 non-null float64 14 p1_dog 1896 non-null object 15 p2 1896 non-null object 16 p2_conf 1896 non-null float64 17 p2_dog 1896 non-null object 18 p3 1896 non-null object 19 p3_conf 1896 non-null float64 20 p3_dog 1896 non-null object 21 favorite_count 284 non-null float64 22 retweet_count 284 non-null float64 dtypes: float64(5), int64(3), object(15) memory usage: 407.8+ KB ###Markdown Clean4. Creating one column for the various dog types: doggo, floofer, pupper, puppo Remove columns no longer needed. ###Code dfs['dog_type'] = dfs['text'].str.extract('(doggo|floofer|pupper|puppo)') dfs[['dog_type', 'doggo', 'floofer', 'pupper', 'puppo']].sample(10) dfs.head() dfs.columns dfs.dog_type.value_counts() ###Output _____no_output_____ ###Markdown Clean5. Change tweet_id to string from integer ###Code dfs['tweet_id'] = dfs['tweet_id'].astype(str) ###Output _____no_output_____ ###Markdown Test ###Code dfs.info() ###Output _____no_output_____ ###Markdown Clean6.
Timestamp to correct datetime format ###Code dfs['timestamp'] = dfs['timestamp'].str.slice(start=0, stop=-6) # Change the 'timestamp' column to datetime dfs['timestamp'] = pd.to_datetime(dfs['timestamp'], format = "%Y-%m-%d %H:%M:%S") ###Output _____no_output_____ ###Markdown Test ###Code dfs.head(5) ###Output _____no_output_____ ###Markdown Clean7. Naming issues ###Code dfs.name = dfs.name.str.replace('^[a-z]+', 'None') dfs['name'].value_counts() ###Output _____no_output_____ ###Markdown Test ###Code dfs['name'].sample(5) ###Output _____no_output_____ ###Markdown 4. Storing, Analyzing, and Visualizing ###Code # Storing the new twitter_dogs df to a new csv file dfs.to_csv('twitter_archive_master.csv', encoding='utf-8', index=False) ###Output _____no_output_____ ###Markdown Analyze and visualize: Visualizing the retweet counts, and favorite counts comparison over time ###Code sns.lmplot(x="retweet_count", y="favorite_count", data=dfs, size = 5, aspect=1.3, scatter_kws={'alpha':1/5}); plt.title('Favorite Count vs. Retweet Count'); plt.xlabel('Retweet Count'); plt.ylabel('Favorite Count'); ###Output C:\Users\user\Documents\anaconda2\lib\site-packages\seaborn\regression.py:580: UserWarning: The `size` parameter has been renamed to `height`; please update your code. 
warnings.warn(msg, UserWarning) ###Markdown Analyze and visualize: Popular Dog type ###Code dfs['dog_type'].value_counts() dog_breed = dfs.groupby('dog_type').filter(lambda x: len(x) >= 25) dog_breed['dog_type'].value_counts().plot(kind = 'barh') plt.title('Most Rated Dog type') plt.xlabel('Count') plt.ylabel('type of dog'); ###Output _____no_output_____ ###Markdown Analyze and visualize: Ratio of dog rating distribution ###Code df_twitter['rating_ratio'] = df_twitter['rating_numerator']/df_twitter['rating_denominator'] sns.distplot(df_twitter.rating_ratio).set_title('Ratio of dog rating distribution'); plt.xlabel('count of ratio of dog rating') plt.ylabel('count of ratio rating') data = np.random.normal(0, 1, 3) # array([-1.18878589, 0.59627021, 1.59895721]) plt.figure(figsize=(15, 5)) sns.distplot(x=data) plt.show(); ###Output C:\Users\user\Documents\anaconda2\lib\site-packages\seaborn\distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms). warnings.warn(msg, FutureWarning) C:\Users\user\Documents\anaconda2\lib\site-packages\seaborn\distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms). warnings.warn(msg, FutureWarning) ###Markdown We Rate Dogs Author: Ram Saran Vuppuluri In this notebook we will analyze tweets in the famous WeRateDogs (@dog_rates) Twitter account. At the end of the notebook we will be able to answer:1. Frequency distribution of dog ratings. 2. Frequency distribution of dog breeds.3. Relation between Retweets and Favorites.4. Number of tweets per dog breed.Note: Dog rating and breed information is extracted by Udacity using tweet text and image information.
We will be analyzing only tweets valid as of date. Any tweets that have been deleted will not be included in the analysis.Before analyzing any data, we need to perform the following 3 steps:1. Gather data from one or more data sources.2. Assess data for any quality or tidiness issues.3. Clean data of the quality and tidiness issues identified during the assess data step. ###Code import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import requests import tweepy import os import json import time ###Output _____no_output_____ ###Markdown Gather DataAs per the problem statement we need to gather data from three data sources.1. __twitter-archive-enhanced.csv__ file provided by Udacity. This file is available for download directly. Once the file is downloaded it will be placed in the current working directory of the notebook.2. __image_predictions.tsv__ file provided by Udacity. This file is not directly available for download. Instead we need to utilize the Python “requests” package.3. __tweet_json.txt__ file. This file is constructed by using tweet_id’s from the twitter-archive-enhanced.csv file. We use the “tweepy” library as an API to retrieve data for each of the tweet_id’s.We will load each of these three data sources into Pandas dataframes. ###Code twitter_arch_df = pd.read_csv('twitter-archive-enhanced.csv'); ###Output _____no_output_____ ###Markdown The step below will download __[image_prediction.tsv]( https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv)__ and store it as __image_predictions.tsv__ in the current working directory.
###Code response = requests.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv') with open('image_predictions.tsv',mode='wb') as file: file.write(response.content) imgage_pred_df = pd.read_csv('image_predictions.tsv',sep='\t'); ###Output _____no_output_____ ###Markdown To connect to Twitter via Tweepy API, we need to perform following steps for a new Twitter user:1. Create a regular Twitter account.2. Once the regular Twitter account is created, create a developer account.3. Create an application on Twitter developer dashboard.4. New application will generate: * Consumer API key * Consumer API secret key * Access token key * Access token secret keyOnce app authentication keys are available we can connect to Twitter via Tweepy API. ###Code CONSUMER_API_KEY = 'XX' CONSUMER_API_SECRET_KEY = 'XX' ACCESS_TOKEN_KEY = 'XX' ACCESS_TOKEN_SECRET_KEY = 'XX' auth = tweepy.OAuthHandler(CONSUMER_API_KEY,CONSUMER_API_SECRET_KEY) auth.set_access_token(ACCESS_TOKEN_KEY,ACCESS_TOKEN_SECRET_KEY) api = tweepy.API(auth,wait_on_rate_limit=True) tweet_file = 'tweet_json.txt' ###Output _____no_output_____ ###Markdown Now that we are connected to Twitter via Tweepy, we can download all the tweets using tweet_id information from __twitter-archive-enhanced.csv__ ###Code '''error_id_list = [] with open('tweet_json.txt', mode='w') as file: for tweet_id in twitter_arch_df.tweet_id: start = time.time() print(tweet_id); try: tweet = api.get_status(tweet_id, tweet_mode='extended') tweet_json_line = json.dumps(tweet._json) file.write(tweet_json_line + '\n') end = time.time() print(end - start); except Exception: error_id_list.append(tweet_id)''' '''len(error_id_list),len(twitter_arch_df.tweet_id),(len(twitter_arch_df.tweet_id)-len(error_id_list))''' tweet_json_df = pd.read_json('tweet_json.txt',lines=True); tweet_json_df.rename(index=str,columns={'id':'tweet_id'},inplace=True) ###Output _____no_output_____ ###Markdown As we have gathered 
all three data sources, we will consolidate them into one. ###Code tweet_cons_df = twitter_arch_df.merge(imgage_pred_df,on='tweet_id').merge(tweet_json_df,on='tweet_id') len(twitter_arch_df),len(imgage_pred_df),len(tweet_json_df),len(tweet_cons_df) ###Output _____no_output_____ ###Markdown We will write the consolidated dataframe into a CSV file to help us out during the assessment phase: since the Pandas dataframe display will not show complete data in a column, we can achieve this with tools like Excel. ###Code tweet_cons_df.to_csv("tweet_consolicated.csv",index=False) ###Output _____no_output_____ ###Markdown Assess DataNow that we have gathered and loaded data into Pandas dataframes, we will perform visual and programmatic assessment.On visual assessment 1. Following columns are loaded as int64 instead of String (object). * tweet_id * id_str2. Following columns are loaded as float instead of String (object). * in_reply_to_status_id_x * in_reply_to_user_id_x * retweeted_status_id * retweeted_status_user_id * in_reply_to_status_id_y * in_reply_to_status_id_str * in_reply_to_user_id_y * in_reply_to_user_id_str * quoted_status_id * quoted_status_id_str3. There are missing values in following columns: * in_reply_to_status_id_x * in_reply_to_user_id_x * retweeted_status_id * retweeted_status_user_id * retweeted_status_timestamp * expanded_urls * contributors * coordinates * geo * in_reply_to_screen_name * in_reply_to_status_id_y * in_reply_to_status_id_str * in_reply_to_user_id_y * in_reply_to_user_id_str * place * quoted_status * quoted_status_id * quoted_status_id_str * quoted_status_permalink * retweeted_status4. Following columns are loaded as String (object) instead of timestamp instance. * timestamp * retweeted_status_timestamp5. "lang" column is loaded as String (object) instead of category.6. "expanded_urls" column contains more than 1 URL. (This observation is made by examining the values in Excel).7.
Following columns are common in between twitter_arch_df and tweet_json_df, we can remove duplicate columns and rename the columns existing columns. * in_reply_to_status_id * in_reply_to_user_id * source ###Code tweet_cons_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1213 entries, 0 to 1212 Data columns (total 59 columns): tweet_id 1213 non-null int64 in_reply_to_status_id_x 18 non-null float64 in_reply_to_user_id_x 18 non-null float64 timestamp 1213 non-null object source_x 1213 non-null object text 1213 non-null object retweeted_status_id 27 non-null float64 retweeted_status_user_id 27 non-null float64 retweeted_status_timestamp 27 non-null object expanded_urls 1213 non-null object rating_numerator 1213 non-null int64 rating_denominator 1213 non-null int64 name 1213 non-null object doggo 1213 non-null object floofer 1213 non-null object pupper 1213 non-null object puppo 1213 non-null object jpg_url 1213 non-null object img_num 1213 non-null int64 p1 1213 non-null object p1_conf 1213 non-null float64 p1_dog 1213 non-null bool p2 1213 non-null object p2_conf 1213 non-null float64 p2_dog 1213 non-null bool p3 1213 non-null object p3_conf 1213 non-null float64 p3_dog 1213 non-null bool contributors 0 non-null float64 coordinates 0 non-null float64 created_at 1213 non-null datetime64[ns] display_text_range 1213 non-null object entities 1213 non-null object extended_entities 1213 non-null object favorite_count 1213 non-null int64 favorited 1213 non-null bool full_text 1213 non-null object geo 0 non-null float64 id_str 1213 non-null int64 in_reply_to_screen_name 18 non-null object in_reply_to_status_id_y 18 non-null float64 in_reply_to_status_id_str 18 non-null float64 in_reply_to_user_id_y 18 non-null float64 in_reply_to_user_id_str 18 non-null float64 is_quote_status 1213 non-null bool lang 1213 non-null object place 0 non-null float64 possibly_sensitive 1213 non-null float64 possibly_sensitive_appealable 1213 non-null float64 quoted_status 0 
non-null object quoted_status_id 0 non-null float64 quoted_status_id_str 0 non-null float64 quoted_status_permalink 0 non-null object retweet_count 1213 non-null int64 retweeted 1213 non-null bool retweeted_status 27 non-null object source_y 1213 non-null object truncated 1213 non-null bool user 1213 non-null object dtypes: bool(7), datetime64[ns](1), float64(19), int64(7), object(25) memory usage: 510.5+ KB ###Markdown The Pandas DataFrame describe() method provides a basic statistical analysis of the numerical data in a dataframe.On programmatic assessment 1. source_x and source_y have only 3 unique values each. We can ignore these columns.2. Dog breed names identified in the p1, p2 and p3 columns contain special characters "-","_".3. There is a huge gap between the 75th percentile of rating_numerator and its maximum value.4. rating_denominator contains values other than 10. ###Code tweet_cons_df.source_x.unique() tweet_cons_df.source_y.unique() tweet_cons_df[tweet_cons_df.p1_dog == True].p1.unique() tweet_cons_df[tweet_cons_df.p2_dog == True].p2.unique() tweet_cons_df[tweet_cons_df.p3_dog == True].p3.unique() tweet_cons_df.describe() tweet_cons_df.rating_numerator.value_counts().sort_index() tweet_cons_df.rating_denominator.value_counts().sort_index() ###Output _____no_output_____ ###Markdown From the above visual and programmatic analysis, we find the following quality and tidiness issues: Quality issues:1. Following columns are loaded as int64 instead of String (object). * tweet_id * id_str2. Following columns are loaded as float instead of String (object); we can remove these columns as they are not utilized in analysis. * in_reply_to_status_id_x * in_reply_to_user_id_x * retweeted_status_id * retweeted_status_user_id * in_reply_to_status_id_y * in_reply_to_status_id_str * in_reply_to_user_id_y * in_reply_to_user_id_str * quoted_status_id * quoted_status_id_str3. There are missing values in the following columns; we can remove these columns as they are not utilized in analysis. 
* retweeted_status_timestamp * expanded_urls * contributors * coordinates * geo * in_reply_to_screen_name * place * quoted_status * quoted_status_permalink * retweeted_status4. Following columns are loaded as String (object) instead of datetime64 instances. * timestamp5. "lang" column is loaded as String (object) instead of category.6. "expanded_urls" column contains more than 1 URL. (This observation is made by examining the values in Excel.) We can remove this column as it is not utilized in analysis.7. rating_numerator should be in the range 10 to 14, but there are values out of range. We can remove the out-of-range values.8. rating_denominator should be 10, but there are values not equal to 10. We can remove the values other than 10. Tidiness issues:1. Following columns are common between twitter_arch_df and tweet_json_df; we can remove the duplicate columns and rename the existing columns. * in_reply_to_status_id * in_reply_to_user_id * source2. Following columns are not needed for analysis. * timestamp * source_x * text * name * jpg_url * img_num * created_at * display_text_range * entities * extended_entities * favorited * full_text * id_str * is_quote_status * lang * possibly_sensitive * possibly_sensitive_appealable * retweeted * source_y * truncated * user3. Not all records identified in the image-prediction file are dogs. We can remove the non-dog records.4. Image prediction data has 3 possible predictions for each tweet, along with whether each prediction is classified as a dog and the confidence of the prediction. We will 1. use only predictions that are of dogs. 2. select the maximum prediction value. 3. use the dog breed from the maximum prediction value.5. Dog breeds contain special characters "-","_". We will replace the special characters with " ". Clean Data ###Code tweet_cons_clean_df = tweet_cons_df.copy() ###Output _____no_output_____ ###Markdown Quality changes1. Following columns are loaded as int64 instead of String (object). 
* tweet_id * id_str Code ###Code tweet_cons_clean_df.tweet_id.dtype, tweet_cons_clean_df.id_str.dtype tweet_cons_clean_df[['tweet_id','id_str']].info() tweet_cons_clean_df.tweet_id = tweet_cons_clean_df.tweet_id.astype(str) tweet_cons_clean_df.id_str = tweet_cons_clean_df.id_str.astype(str) ###Output _____no_output_____ ###Markdown Test ###Code tweet_cons_clean_df.tweet_id.dtype, tweet_cons_clean_df.id_str.dtype tweet_cons_clean_df[['tweet_id','id_str']].info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1213 entries, 0 to 1212 Data columns (total 2 columns): tweet_id 1213 non-null object id_str 1213 non-null object dtypes: object(2) memory usage: 28.4+ KB ###Markdown 2. Following columns are loaded as float instead of String (object), we can remove these columns as they are not utilized in analysis. * in_reply_to_status_id_x * in_reply_to_user_id_x * retweeted_status_id * retweeted_status_user_id * in_reply_to_status_id_y * in_reply_to_status_id_str * in_reply_to_user_id_y * in_reply_to_user_id_str * quoted_status_id * quoted_status_id_str Code ###Code tweet_cons_clean_df[['in_reply_to_status_id_x','in_reply_to_user_id_x','retweeted_status_id','retweeted_status_user_id','in_reply_to_status_id_y','in_reply_to_status_id_str','in_reply_to_user_id_y','in_reply_to_user_id_str','quoted_status_id','quoted_status_id_str' ]].info() tweet_cons_clean_df.drop(['in_reply_to_status_id_x','in_reply_to_user_id_x','retweeted_status_id','retweeted_status_user_id','in_reply_to_status_id_y','in_reply_to_status_id_str','in_reply_to_user_id_y','in_reply_to_user_id_str','quoted_status_id','quoted_status_id_str' ],axis=1,inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code tweet_cons_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1213 entries, 0 to 1212 Data columns (total 49 columns): tweet_id 1213 non-null object timestamp 1213 non-null object source_x 1213 non-null object text 1213 non-null object 
retweeted_status_timestamp 27 non-null object expanded_urls 1213 non-null object rating_numerator 1213 non-null int64 rating_denominator 1213 non-null int64 name 1213 non-null object doggo 1213 non-null object floofer 1213 non-null object pupper 1213 non-null object puppo 1213 non-null object jpg_url 1213 non-null object img_num 1213 non-null int64 p1 1213 non-null object p1_conf 1213 non-null float64 p1_dog 1213 non-null bool p2 1213 non-null object p2_conf 1213 non-null float64 p2_dog 1213 non-null bool p3 1213 non-null object p3_conf 1213 non-null float64 p3_dog 1213 non-null bool contributors 0 non-null float64 coordinates 0 non-null float64 created_at 1213 non-null datetime64[ns] display_text_range 1213 non-null object entities 1213 non-null object extended_entities 1213 non-null object favorite_count 1213 non-null int64 favorited 1213 non-null bool full_text 1213 non-null object geo 0 non-null float64 id_str 1213 non-null object in_reply_to_screen_name 18 non-null object is_quote_status 1213 non-null bool lang 1213 non-null object place 0 non-null float64 possibly_sensitive 1213 non-null float64 possibly_sensitive_appealable 1213 non-null float64 quoted_status 0 non-null object quoted_status_permalink 0 non-null object retweet_count 1213 non-null int64 retweeted 1213 non-null bool retweeted_status 27 non-null object source_y 1213 non-null object truncated 1213 non-null bool user 1213 non-null object dtypes: bool(7), datetime64[ns](1), float64(9), int64(5), object(27) memory usage: 415.8+ KB ###Markdown 3. There are missing values in following columns, we can remove these columns as they are not utilized in analysis. 
* in_reply_to_status_id_x * in_reply_to_user_id_x * retweeted_status_id * retweeted_status_user_id * retweeted_status_timestamp * expanded_urls * contributors * coordinates * geo * in_reply_to_screen_name * in_reply_to_status_id_y * in_reply_to_status_id_str * in_reply_to_user_id_y * in_reply_to_user_id_str * place * quoted_status * quoted_status_id * quoted_status_id_str * quoted_status_permalink * retweeted_status Code ###Code tweet_cons_clean_df.drop(['retweeted_status_timestamp','expanded_urls','contributors','coordinates','geo','in_reply_to_screen_name','place','quoted_status','quoted_status_permalink','retweeted_status' ],axis=1,inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code tweet_cons_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1213 entries, 0 to 1212 Data columns (total 39 columns): tweet_id 1213 non-null object timestamp 1213 non-null object source_x 1213 non-null object text 1213 non-null object rating_numerator 1213 non-null int64 rating_denominator 1213 non-null int64 name 1213 non-null object doggo 1213 non-null object floofer 1213 non-null object pupper 1213 non-null object puppo 1213 non-null object jpg_url 1213 non-null object img_num 1213 non-null int64 p1 1213 non-null object p1_conf 1213 non-null float64 p1_dog 1213 non-null bool p2 1213 non-null object p2_conf 1213 non-null float64 p2_dog 1213 non-null bool p3 1213 non-null object p3_conf 1213 non-null float64 p3_dog 1213 non-null bool created_at 1213 non-null datetime64[ns] display_text_range 1213 non-null object entities 1213 non-null object extended_entities 1213 non-null object favorite_count 1213 non-null int64 favorited 1213 non-null bool full_text 1213 non-null object id_str 1213 non-null object is_quote_status 1213 non-null bool lang 1213 non-null object possibly_sensitive 1213 non-null float64 possibly_sensitive_appealable 1213 non-null float64 retweet_count 1213 non-null int64 retweeted 1213 non-null bool source_y 1213 non-null 
object truncated 1213 non-null bool user 1213 non-null object dtypes: bool(7), datetime64[ns](1), float64(5), int64(5), object(21) memory usage: 321.0+ KB ###Markdown 4. Following columns are loaded as String (object) instead of datetime64 instance. * timestamp Code ###Code tweet_cons_clean_df['timestamp'] = pd.to_datetime(tweet_cons_clean_df['timestamp']) ###Output _____no_output_____ ###Markdown Test ###Code tweet_cons_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1213 entries, 0 to 1212 Data columns (total 39 columns): tweet_id 1213 non-null object timestamp 1213 non-null datetime64[ns] source_x 1213 non-null object text 1213 non-null object rating_numerator 1213 non-null int64 rating_denominator 1213 non-null int64 name 1213 non-null object doggo 1213 non-null object floofer 1213 non-null object pupper 1213 non-null object puppo 1213 non-null object jpg_url 1213 non-null object img_num 1213 non-null int64 p1 1213 non-null object p1_conf 1213 non-null float64 p1_dog 1213 non-null bool p2 1213 non-null object p2_conf 1213 non-null float64 p2_dog 1213 non-null bool p3 1213 non-null object p3_conf 1213 non-null float64 p3_dog 1213 non-null bool created_at 1213 non-null datetime64[ns] display_text_range 1213 non-null object entities 1213 non-null object extended_entities 1213 non-null object favorite_count 1213 non-null int64 favorited 1213 non-null bool full_text 1213 non-null object id_str 1213 non-null object is_quote_status 1213 non-null bool lang 1213 non-null object possibly_sensitive 1213 non-null float64 possibly_sensitive_appealable 1213 non-null float64 retweet_count 1213 non-null int64 retweeted 1213 non-null bool source_y 1213 non-null object truncated 1213 non-null bool user 1213 non-null object dtypes: bool(7), datetime64[ns](2), float64(5), int64(5), object(20) memory usage: 321.0+ KB ###Markdown 5. "lang" column is loaded as String (object) instead of catergory. 
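As a hedged, self-contained illustration of the conversion applied in the next cell (toy data, not the notebook's dataframe), casting a low-cardinality string column to the pandas `category` dtype looks like this:

```python
import pandas as pd

# Toy frame standing in for the tweet data; 'lang' has only a few unique values.
df = pd.DataFrame({"lang": ["en", "en", "es", "en", "und", "es"]})

# Cast the object column to category; pandas stores one small integer code
# per row plus a single table of the unique category labels.
df["lang"] = df["lang"].astype("category")

print(df["lang"].dtype)                 # category
print(list(df["lang"].cat.categories))  # ['en', 'es', 'und']
```

The category dtype also reduces memory usage, which is visible in the info() output below.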
Code ###Code tweet_cons_clean_df.lang = tweet_cons_clean_df.lang.astype(dtype="category") ###Output _____no_output_____ ###Markdown Test ###Code tweet_cons_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1213 entries, 0 to 1212 Data columns (total 39 columns): tweet_id 1213 non-null object timestamp 1213 non-null datetime64[ns] source_x 1213 non-null object text 1213 non-null object rating_numerator 1213 non-null int64 rating_denominator 1213 non-null int64 name 1213 non-null object doggo 1213 non-null object floofer 1213 non-null object pupper 1213 non-null object puppo 1213 non-null object jpg_url 1213 non-null object img_num 1213 non-null int64 p1 1213 non-null object p1_conf 1213 non-null float64 p1_dog 1213 non-null bool p2 1213 non-null object p2_conf 1213 non-null float64 p2_dog 1213 non-null bool p3 1213 non-null object p3_conf 1213 non-null float64 p3_dog 1213 non-null bool created_at 1213 non-null datetime64[ns] display_text_range 1213 non-null object entities 1213 non-null object extended_entities 1213 non-null object favorite_count 1213 non-null int64 favorited 1213 non-null bool full_text 1213 non-null object id_str 1213 non-null object is_quote_status 1213 non-null bool lang 1213 non-null category possibly_sensitive 1213 non-null float64 possibly_sensitive_appealable 1213 non-null float64 retweet_count 1213 non-null int64 retweeted 1213 non-null bool source_y 1213 non-null object truncated 1213 non-null bool user 1213 non-null object dtypes: bool(7), category(1), datetime64[ns](2), float64(5), int64(5), object(19) memory usage: 312.8+ KB ###Markdown 6. "expanded_urls" column contains more than 1 URL. (This observation is made by examining the values in Excel.) We can remove this column as it is not utilized in analysis. Performed as part of 3. 7. rating_numerator should be in the range 10 to 14, but there are values out of range. We can remove the out-of-range values. 
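Before the actual cleaning cell, here is a hedged toy sketch of range filtering with a boolean mask (illustrative data only; `Series.between` is inclusive on both ends, equivalent to the `>= 10` and `<= 14` mask used below):

```python
import pandas as pd

# Toy ratings; keep only numerators within 10-14, mirroring the cleaning rule.
df = pd.DataFrame({"rating_numerator": [5, 10, 12, 14, 1776],
                   "rating_denominator": [10, 10, 10, 10, 10]})

# between(10, 14) is inclusive, the same as (x >= 10) & (x <= 14)
kept = df[df.rating_numerator.between(10, 14)]

print(kept.rating_numerator.tolist())  # [10, 12, 14]
```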
Code ###Code tweet_cons_clean_df = tweet_cons_clean_df[(tweet_cons_clean_df.rating_numerator >= 10) & (tweet_cons_clean_df.rating_numerator <=14)] ###Output _____no_output_____ ###Markdown TestThe code snippet below will yield 0, as we removed all records with rating_numerator outside the range 10 to 14 ###Code len(tweet_cons_clean_df[(tweet_cons_clean_df.rating_numerator < 10) | (tweet_cons_clean_df.rating_numerator > 14)]) ###Output _____no_output_____ ###Markdown 8. rating_denominator should be 10, but there are values not equal to 10. We can remove the values other than 10. Code ###Code tweet_cons_clean_df = tweet_cons_clean_df[tweet_cons_clean_df.rating_denominator == 10] ###Output _____no_output_____ ###Markdown TestThe code snippet below will yield 0, as we removed all records with rating_denominator != 10. ###Code len(tweet_cons_clean_df[tweet_cons_clean_df.rating_denominator != 10]) ###Output _____no_output_____ ###Markdown Tidiness issues:1. Following columns are common between twitter_arch_df and tweet_json_df; we can remove the duplicate columns and rename the existing columns. 
* in_reply_to_status_id * in_reply_to_user_id * source Code ###Code tweet_cons_clean_df.drop(['source_x','source_y'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code tweet_cons_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 993 entries, 0 to 1212 Data columns (total 37 columns): tweet_id 993 non-null object timestamp 993 non-null datetime64[ns] text 993 non-null object rating_numerator 993 non-null int64 rating_denominator 993 non-null int64 name 993 non-null object doggo 993 non-null object floofer 993 non-null object pupper 993 non-null object puppo 993 non-null object jpg_url 993 non-null object img_num 993 non-null int64 p1 993 non-null object p1_conf 993 non-null float64 p1_dog 993 non-null bool p2 993 non-null object p2_conf 993 non-null float64 p2_dog 993 non-null bool p3 993 non-null object p3_conf 993 non-null float64 p3_dog 993 non-null bool created_at 993 non-null datetime64[ns] display_text_range 993 non-null object entities 993 non-null object extended_entities 993 non-null object favorite_count 993 non-null int64 favorited 993 non-null bool full_text 993 non-null object id_str 993 non-null object is_quote_status 993 non-null bool lang 993 non-null category possibly_sensitive 993 non-null float64 possibly_sensitive_appealable 993 non-null float64 retweet_count 993 non-null int64 retweeted 993 non-null bool truncated 993 non-null bool user 993 non-null object dtypes: bool(7), category(1), datetime64[ns](2), float64(5), int64(5), object(17) memory usage: 240.6+ KB ###Markdown 2. Following columns are not needed for analysis. 
* timestamp * source_x * text * name * jpg_url * img_num * created_at * display_text_range * entities * extended_entities * favorited * full_text * id_str * is_quote_status * lang * possibly_sensitive * possibly_sensitive_appealable * retweeted * source_y * truncated * user Code ###Code tweet_cons_clean_df.drop(['timestamp','text','name','jpg_url','img_num','created_at','display_text_range','entities','extended_entities','favorited','full_text','id_str','is_quote_status','lang','possibly_sensitive','possibly_sensitive_appealable','retweeted','truncated','user'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code tweet_cons_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 993 entries, 0 to 1212 Data columns (total 18 columns): tweet_id 993 non-null object rating_numerator 993 non-null int64 rating_denominator 993 non-null int64 doggo 993 non-null object floofer 993 non-null object pupper 993 non-null object puppo 993 non-null object p1 993 non-null object p1_conf 993 non-null float64 p1_dog 993 non-null bool p2 993 non-null object p2_conf 993 non-null float64 p2_dog 993 non-null bool p3 993 non-null object p3_conf 993 non-null float64 p3_dog 993 non-null bool favorite_count 993 non-null int64 retweet_count 993 non-null int64 dtypes: bool(3), float64(3), int64(4), object(8) memory usage: 127.0+ KB ###Markdown 3. Not all records identified in the image-prediction file are dogs. We can remove the non-dog records. Code ###Code tweet_cons_clean_df = tweet_cons_clean_df[(tweet_cons_clean_df.p1_dog == True)|(tweet_cons_clean_df.p2_dog==True)|(tweet_cons_clean_df.p3_dog==True)] ###Output _____no_output_____ ###Markdown TestThe code snippet below will yield 0 records, as at least one of the predictions should yield True for dog. ###Code len(tweet_cons_clean_df[(tweet_cons_clean_df.p1_dog == False)&(tweet_cons_clean_df.p2_dog==False)&(tweet_cons_clean_df.p3_dog==False)]) ###Output _____no_output_____ ###Markdown 4. 
Image prediction data has 3 possible predictions for each of the tweets along with whether the prediction has classified as dog or not and confidence of the predictions. We will 1. use only predictions that are of dogs. 2. select maximum prediction value. 3. use the dog breed from maximum prediction value. Code ###Code tweet_cons_clean_df.loc[tweet_cons_clean_df.p1_dog == False,'p1_conf'] = 0 tweet_cons_clean_df.loc[tweet_cons_clean_df.p2_dog == False,'p2_conf'] = 0 tweet_cons_clean_df.loc[tweet_cons_clean_df.p3_dog == False,'p3_conf'] = 0 tweet_cons_clean_df.head() tweet_cons_clean_df['max_conf'] = tweet_cons_clean_df[['p1_conf','p2_conf','p3_conf']].max(axis=1) tweet_cons_clean_df['dog_1'] = tweet_cons_clean_df[(tweet_cons_clean_df.p1_conf==tweet_cons_clean_df.max_conf)].p1 tweet_cons_clean_df['dog_2'] = tweet_cons_clean_df[(tweet_cons_clean_df.p2_conf==tweet_cons_clean_df.max_conf)].p2 tweet_cons_clean_df['dog_3'] = tweet_cons_clean_df[(tweet_cons_clean_df.p3_conf==tweet_cons_clean_df.max_conf)].p3 tweet_cons_clean_df['dog_1']=tweet_cons_clean_df['dog_1'].fillna('') tweet_cons_clean_df['dog_2']=tweet_cons_clean_df['dog_2'].fillna('') tweet_cons_clean_df['dog_3']=tweet_cons_clean_df['dog_3'].fillna('') tweet_cons_clean_df['p_dog_breed']= tweet_cons_clean_df['dog_1']+tweet_cons_clean_df['dog_2']+tweet_cons_clean_df['dog_3'] ###Output _____no_output_____ ###Markdown Test ###Code tweet_cons_clean_df.head() tweet_cons_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 888 entries, 1 to 1212 Data columns (total 23 columns): tweet_id 888 non-null object rating_numerator 888 non-null int64 rating_denominator 888 non-null int64 doggo 888 non-null object floofer 888 non-null object pupper 888 non-null object puppo 888 non-null object p1 888 non-null object p1_conf 888 non-null float64 p1_dog 888 non-null bool p2 888 non-null object p2_conf 888 non-null float64 p2_dog 888 non-null bool p3 888 non-null object p3_conf 888 non-null float64 p3_dog 888 
non-null bool favorite_count 888 non-null int64 retweet_count 888 non-null int64 max_conf 888 non-null float64 dog_1 888 non-null object dog_2 888 non-null object dog_3 888 non-null object p_dog_breed 888 non-null object dtypes: bool(3), float64(4), int64(4), object(12) memory usage: 148.3+ KB ###Markdown Now that we extracted predicted dog breed, we will delete following columns 'p1','p1_conf','p1_dog','p2','p2_conf','p2_dog','p3','p3_conf','p3_dog','max_conf','dog_1','dog_2','dog_3' ###Code tweet_cons_clean_df.drop(['p1','p1_conf','p1_dog','p2','p2_conf','p2_dog','p3','p3_conf','p3_dog','max_conf','dog_1','dog_2','dog_3'],axis=1,inplace=True) tweet_cons_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 888 entries, 1 to 1212 Data columns (total 10 columns): tweet_id 888 non-null object rating_numerator 888 non-null int64 rating_denominator 888 non-null int64 doggo 888 non-null object floofer 888 non-null object pupper 888 non-null object puppo 888 non-null object favorite_count 888 non-null int64 retweet_count 888 non-null int64 p_dog_breed 888 non-null object dtypes: int64(4), object(6) memory usage: 76.3+ KB ###Markdown 5. Dog breeds contain special characters "-","_". We will replace the special characters with " ". 
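The replacement described above can be sketched on toy labels (hedged, illustrative data only), chaining vectorized `str.replace` calls and `str.title` as the cleaning cell does:

```python
import pandas as pd

# Toy breed labels containing the "_" and "-" separators to be replaced.
breeds = pd.Series(["golden_retriever", "soft-coated_wheaten_terrier"])

# regex=False treats the pattern as a literal string, not a regular expression.
breeds = (breeds.str.replace("_", " ", regex=False)
                .str.replace("-", " ", regex=False)
                .str.title())

print(breeds.tolist())  # ['Golden Retriever', 'Soft Coated Wheaten Terrier']
```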
Code ###Code len(tweet_cons_clean_df[tweet_cons_clean_df.p_dog_breed.str.contains("_")]),len(tweet_cons_clean_df[tweet_cons_clean_df.p_dog_breed.str.contains("-")]) tweet_cons_clean_df['p_dog_breed']=tweet_cons_clean_df.p_dog_breed.str.replace('_',' ') tweet_cons_clean_df['p_dog_breed']=tweet_cons_clean_df.p_dog_breed.str.replace('-',' ') tweet_cons_clean_df['p_dog_breed']=tweet_cons_clean_df.p_dog_breed.str.title() ###Output _____no_output_____ ###Markdown Test ###Code len(tweet_cons_clean_df[tweet_cons_clean_df.p_dog_breed.str.contains("_")]),len(tweet_cons_clean_df[tweet_cons_clean_df.p_dog_breed.str.contains("-")]) tweet_cons_clean_df.loc[tweet_cons_clean_df.doggo=='None','doggo'] ='' tweet_cons_clean_df.loc[tweet_cons_clean_df.floofer=='None','floofer'] ='' tweet_cons_clean_df.loc[tweet_cons_clean_df.pupper=='None','pupper'] ='' tweet_cons_clean_df.loc[tweet_cons_clean_df.puppo=='None','puppo'] ='' len(tweet_cons_clean_df[tweet_cons_clean_df.doggo=='None']),len(tweet_cons_clean_df[tweet_cons_clean_df.floofer=='None']),len(tweet_cons_clean_df[tweet_cons_clean_df.pupper=='None']),len(tweet_cons_clean_df[tweet_cons_clean_df.puppo=='None']) tweet_cons_clean_df['dog_type']= tweet_cons_clean_df['doggo']+tweet_cons_clean_df['floofer']+tweet_cons_clean_df['pupper']+tweet_cons_clean_df['puppo'] tweet_cons_clean_df.info() tweet_cons_clean_df.drop(['doggo','floofer','pupper','puppo'],axis=1,inplace=True) tweet_cons_clean_df.dog_type.value_counts() tweet_cons_clean_df.loc[tweet_cons_clean_df.dog_type == 'doggopupper','dog_type'] = 'Doggo & Pupper' tweet_cons_clean_df.loc[tweet_cons_clean_df.dog_type == 'doggofloofer','dog_type'] = 'Doggo & Floofer' tweet_cons_clean_df.loc[tweet_cons_clean_df.dog_type == 'doggopuppo','dog_type'] = 'Doggo & Puppo' tweet_cons_clean_df['dog_type']=tweet_cons_clean_df.dog_type.str.title() tweet_cons_clean_df['p_dog_breed'] = tweet_cons_clean_df.p_dog_breed.astype(dtype="category"); tweet_cons_clean_df['dog_type'] = 
tweet_cons_clean_df.dog_type.astype(dtype="category"); tweet_cons_clean_df.head() ###Output _____no_output_____ ###Markdown Visualize and Analyze clean dataBelow code snippet will write clean data into twitter_archive_master.csv file. ###Code tweet_cons_clean_df.to_csv('twitter_archive_master.csv',index=False) sns.set(style="darkgrid") ax=sns.countplot(tweet_cons_clean_df.rating_numerator); plt.title("Rating Frequency") plt.xlabel('Rating') fig = ax.get_figure() fig.savefig('ratingFrq.png') sns.set(rc={'figure.figsize':(11.7,8.27)}) ax = sns.countplot(x="dog_type", data=tweet_cons_clean_df[tweet_cons_clean_df.dog_type!='']); plt.title("Dog Type Frequency") plt.xlabel('Dog Type') fig = ax.get_figure() fig.savefig('dogTypeFrq.png') ax = sns.barplot(x='dog_type',y='favorite_count',ci=None, data=tweet_cons_clean_df[tweet_cons_clean_df.dog_type!='']); plt.title("Dog Type vs Favorite count") plt.xlabel('Dog Type') plt.ylabel('Favorite Count') fig = ax.get_figure() fig.savefig('dogTypeFavorite.png') ax = sns.barplot(x='dog_type',y='retweet_count',ci=None, data=tweet_cons_clean_df[tweet_cons_clean_df.dog_type!='']); plt.title("Dog Type vs Retweet count") plt.xlabel('Dog Type') plt.ylabel('Retweet Count') fig = ax.get_figure() fig.savefig('dogTypeRetweet.png') ax = sns.barplot(x='rating_numerator',y='favorite_count',ci=None, data=tweet_cons_clean_df); plt.title("Rating vs Favorite Count") plt.xlabel('Rating') plt.ylabel('Favorite Count') plt.savefig('ratingFavorite.png') ax = sns.barplot(x='rating_numerator',y='retweet_count',ci=None, data=tweet_cons_clean_df); plt.title("Rating vs Retweet Count") plt.xlabel('Rating') plt.ylabel('Retweet Count') plt.savefig('ratingRetweet.png') tweet_cons_clean_df[['favorite_count','retweet_count']].corr() tweet_cons_clean_df[['favorite_count','retweet_count']].describe() ###Output _____no_output_____ ###Markdown Gathering Data 1. 
read data from `twitter-archive-enhanced.csv` file ###Code archive_df = pd.read_csv('twitter-archive-enhanced.csv', index_col=['tweet_id']) ###Output _____no_output_____ ###Markdown 2. download `image_predictions.tsv` file from Udacity's servers using the Requests library ###Code headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:85.0) Gecko/20100101 Firefox/85.0'} url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' res = requests.get(url=url, headers=headers) # if the file image-predictions.tsv already exists, don't rewrite it if not os.path.isfile('image-predictions.tsv'): with open('image-predictions.tsv', 'wb') as f: f.write(res.content) images_df = pd.read_csv('image-predictions.tsv', sep='\t', index_col='tweet_id') ###Output _____no_output_____ ###Markdown > tweepy library ###Code # import tweepy # from tweepy import OAuthHandler # import json # from timeit import default_timer as timer # # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # # These are hidden to comply with Twitter's API terms and conditions # consumer_key = 'HIDDEN' # consumer_secret = 'HIDDEN' # access_token = 'HIDDEN' # access_secret = 'HIDDEN' # auth = OAuthHandler(consumer_key, consumer_secret) # auth.set_access_token(access_token, access_secret) # api = tweepy.API(auth, wait_on_rate_limit=True) # # NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES: # # df_1 is a DataFrame with the twitter_archive_enhanced.csv file. 
You may have to # # change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv # # NOTE TO REVIEWER: this student had mobile verification issues so the following # # Twitter API code was sent to this student from a Udacity instructor # # Tweet IDs for which to gather additional data via Twitter's API # tweet_ids = df_1.tweet_id.values # len(tweet_ids) # # Query Twitter's API for JSON data for each tweet ID in the Twitter archive # count = 0 # fails_dict = {} # start = timer() # # Save each tweet's returned JSON as a new line in a .txt file # with open('tweet_json.txt', 'w') as outfile: # # This loop will likely take 20-30 minutes to run because of Twitter's rate limit # for tweet_id in tweet_ids: # count += 1 # print(str(count) + ": " + str(tweet_id)) # try: # tweet = api.get_status(tweet_id, tweet_mode='extended') # print("Success") # json.dump(tweet._json, outfile) # outfile.write('\n') # except tweepy.TweepError as e: # print("Fail") # fails_dict[tweet_id] = e # pass # end = timer() # print(end - start) # print(fails_dict) ###Output _____no_output_____ ###Markdown 3. read `tweet-json.txt` file ###Code # the data in this file is a json objects so I using json library to manipulate it tweet_list = [] with open('tweet-json.txt', 'r') as file: for line in file: tweet_id = json.loads(line)["id"] retweet_count = json.loads(line)["retweet_count"] favorite_count = json.loads(line)["favorite_count"] dict_object = {"tweet_id": tweet_id, "retweet_count": retweet_count, "favorite_count": favorite_count} tweet_list.append(dict_object) retweet_df = pd.DataFrame(tweet_list) retweet_df.set_index('tweet_id', inplace=True) # copy data frames archive_clean_df = archive_df.copy() images_clean_df = images_df.copy() retweet_clean_df = retweet_df.copy() ###Output _____no_output_____ ###Markdown Assessing Data> Visually by using a spreadsheet and text editor to examine some random data> Programmatically as follow 1. 
archive_df ###Code archive_df.info() archive_df.sample(10) archive_df[archive_df.name.duplicated()].name archive_df[archive_df.rating_numerator > 10] archive_df[archive_df.rating_numerator < 10] archive_df.rating_numerator.describe() archive_df[archive_df.rating_numerator == 1776] (archive_df[archive_df.doggo == 'doggo'].doggo.count(), archive_df[archive_df.floofer == 'floofer'].floofer.count(), archive_df[archive_df.pupper == 'pupper'].pupper.count(), archive_df[archive_df.puppo == 'puppo'].puppo.count()) ###Output _____no_output_____ ###Markdown Quality archive_df1. `timestamp` and `retweeted_status_timestamp` data type is object2. some tweets don't have an image3. some tweets are not originals `in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp`4. `doggo, floofer, pupper, puppo` null data is None, not NaN5. column `name` sometimes contains an incorrect name such as "a", "an", or a number6. some columns at this point would have no use in analysis `in_reply_to_status_id, in_reply_to_user_id, timestamp, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp`7. The `rating_numerator` column should be of type float, and it should be correctly extracted from the `text` column 2. images_df ###Code images_df.info() images_df.head(15) images_df.p1.value_counts() len(images_df.p1.unique()) images_df.p2.value_counts() len(images_df.p2.unique()) images_df.p3.value_counts() len(images_df.p3.unique()) ###Output _____no_output_____ ###Markdown Quality images_df1. column names are not descriptive `p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf, p3_dog`2. some images do not belong to original tweets 3. retweet_df ###Code retweet_df.info() retweet_df.head() ###Output _____no_output_____ ###Markdown Quality retweet_df Tidiness1. merge the 3 datasets to the `archive_df` DataFrame2. one variable in four columns `doggo, floofer, pupper, puppo`3. 
columns doggo, floofer, pupper, puppo have no use Cleaning Data Define `archive_clean_df`1. change timestamp and retweeted_status_timestamp data type from object to datetime Code ###Code archive_clean_df.timestamp = pd.to_datetime(archive_clean_df.timestamp) archive_clean_df.retweeted_status_timestamp = pd.to_datetime(archive_clean_df.retweeted_status_timestamp) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2356 entries, 892420643555336193 to 666020888022790149 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 in_reply_to_status_id 78 non-null float64 1 in_reply_to_user_id 78 non-null float64 2 timestamp 2356 non-null datetime64[ns, UTC] 3 source 2356 non-null object 4 text 2356 non-null object 5 retweeted_status_id 181 non-null float64 6 retweeted_status_user_id 181 non-null float64 7 retweeted_status_timestamp 181 non-null datetime64[ns, UTC] 8 expanded_urls 2297 non-null object 9 rating_numerator 2356 non-null int64 10 rating_denominator 2356 non-null int64 11 name 2356 non-null object 12 doggo 2356 non-null object 13 floofer 2356 non-null object 14 pupper 2356 non-null object 15 puppo 2356 non-null object dtypes: datetime64[ns, UTC](2), float64(4), int64(2), object(8) memory usage: 312.9+ KB ###Markdown Define2. 
select rows that have images from archive_clean_df Code ###Code jpg_image_url = sum(images_clean_df.jpg_url.isnull()) jpg_image_url image_index = images_clean_df.index archive_clean_df = archive_clean_df[archive_clean_df.index.isin(image_index)] ###Output _____no_output_____ ###Markdown Test ###Code archive_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2075 entries, 892420643555336193 to 666020888022790149 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 in_reply_to_status_id 23 non-null float64 1 in_reply_to_user_id 23 non-null float64 2 timestamp 2075 non-null datetime64[ns, UTC] 3 source 2075 non-null object 4 text 2075 non-null object 5 retweeted_status_id 81 non-null float64 6 retweeted_status_user_id 81 non-null float64 7 retweeted_status_timestamp 81 non-null datetime64[ns, UTC] 8 expanded_urls 2075 non-null object 9 rating_numerator 2075 non-null int64 10 rating_denominator 2075 non-null int64 11 name 2075 non-null object 12 doggo 2075 non-null object 13 floofer 2075 non-null object 14 pupper 2075 non-null object 15 puppo 2075 non-null object dtypes: datetime64[ns, UTC](2), float64(4), int64(2), object(8) memory usage: 275.6+ KB ###Markdown Define3. 
keep only rows where `in_reply_to_status_id` and `retweeted_status_id` are null, i.e., drop replies and retweets Code ###Code archive_clean_df = archive_clean_df[(archive_clean_df.in_reply_to_status_id.isnull()) & (archive_clean_df.retweeted_status_id.isnull())] # other solution # archive_clean_df.query('in_reply_to_status_id.isnull() & retweeted_status_id.isnull()').info() ###Output _____no_output_____ ###Markdown Test ###Code archive_clean_df.info() # delete images that belong to replies or retweets archive_clean_df = archive_clean_df[archive_clean_df.index.isin(images_clean_df.index)] archive_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1971 entries, 892420643555336193 to 666020888022790149 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 in_reply_to_status_id 0 non-null float64 1 in_reply_to_user_id 0 non-null float64 2 timestamp 1971 non-null datetime64[ns, UTC] 3 source 1971 non-null object 4 text 1971 non-null object 5 retweeted_status_id 0 non-null float64 6 retweeted_status_user_id 0 non-null float64 7 retweeted_status_timestamp 0 non-null datetime64[ns, UTC] 8 expanded_urls 1971 non-null object 9 rating_numerator 1971 non-null int64 10 rating_denominator 1971 non-null int64 11 name 1971 non-null object 12 doggo 1971 non-null object 13 floofer 1971 non-null object 14 pupper 1971 non-null object 15 puppo 1971 non-null object dtypes: datetime64[ns, UTC](2), float64(4), int64(2), object(8) memory usage: 261.8+ KB ###Markdown Define4.
set None value in `doggo, floofer, pupper, puppo` columns to null Code ###Code archive_clean_df.doggo.replace('None', '', inplace=True) archive_clean_df.floofer.replace('None', '', inplace=True) archive_clean_df.pupper.replace('None', '', inplace=True) archive_clean_df.puppo.replace('None', '', inplace=True) archive_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1971 entries, 892420643555336193 to 666020888022790149 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 in_reply_to_status_id 0 non-null float64 1 in_reply_to_user_id 0 non-null float64 2 timestamp 1971 non-null datetime64[ns, UTC] 3 source 1971 non-null object 4 text 1971 non-null object 5 retweeted_status_id 0 non-null float64 6 retweeted_status_user_id 0 non-null float64 7 retweeted_status_timestamp 0 non-null datetime64[ns, UTC] 8 expanded_urls 1971 non-null object 9 rating_numerator 1971 non-null int64 10 rating_denominator 1971 non-null int64 11 name 1971 non-null object 12 doggo 1971 non-null object 13 floofer 1971 non-null object 14 pupper 1971 non-null object 15 puppo 1971 non-null object dtypes: datetime64[ns, UTC](2), float64(4), int64(2), object(8) memory usage: 261.8+ KB ###Markdown Define5. 
drop columns that will have no use for analysis process `'in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'` Code ###Code archive_clean_df.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1971 entries, 892420643555336193 to 666020888022790149 Data columns (total 11 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 timestamp 1971 non-null datetime64[ns, UTC] 1 source 1971 non-null object 2 text 1971 non-null object 3 expanded_urls 1971 non-null object 4 rating_numerator 1971 non-null int64 5 rating_denominator 1971 non-null int64 6 name 1971 non-null object 7 doggo 1971 non-null object 8 floofer 1971 non-null object 9 pupper 1971 non-null object 10 puppo 1971 non-null object dtypes: datetime64[ns, UTC](1), int64(2), object(8) memory usage: 184.8+ KB ###Markdown Define6. clear incorrect column `name` values which have incorrect name like "a, an, or number, ..." 
Code ###Code archive_clean_df.name.value_counts().sort_values() # replace the 'None' string with NaN archive_clean_df.name.replace('None', np.nan, inplace=True) archive_clean_df.name.value_counts(dropna = False).sort_index() # archive_clean_df_1 = archive_clean_df.copy() pattern = re.compile(r'(?:name(?:d)?)\s{1}(?:is\s)?(?:of\s)?([A-Z]+[A-Za-z]+)') for index, row in archive_clean_df.iterrows(): if pd.isnull(row['name']) or row['name'][0].islower() or row['name'] == 'None': try: c_name = re.findall(pattern, row['text'])[0] if pd.isnull(row['name']): archive_clean_df.loc[index,'name'] = c_name else: archive_clean_df.loc[index,'name'] = archive_clean_df.loc[index,'name'].replace(row['name'], c_name) except IndexError: archive_clean_df.loc[index,'name'] = np.nan ###Output _____no_output_____ ###Markdown Test ###Code archive_clean_df.name.value_counts(dropna = False).sort_index() ###Output _____no_output_____ ###Markdown Define7. The `rating_numerator` column should be of type float, and it should be correctly extracted from the `text` column Code ###Code pattern = re.compile(r'(\d+\.?\d*\/\d+)') for index, row in archive_clean_df.iterrows(): c_numerator = re.findall(pattern, row['text'])[0] # print(c_numerator) # print(c_numerator.split('/')[0]) archive_clean_df.loc[index,'rating_numerator'] = c_numerator.split('/')[0] archive_clean_df.rating_numerator = archive_clean_df.rating_numerator.astype(float) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean_df.rating_numerator.value_counts() ###Output _____no_output_____ ###Markdown Define `images_clean_df`1.
rename columns `p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf, p3_dog` name to descriptive names Code ###Code images_clean_df = images_clean_df.rename(columns={ 'p1': 'first_prediction', 'p1_conf': 'first_prediction_confident', 'p1_dog': 'first_prediction_is_breed_dog', 'p2': 'second_prediction', 'p2_conf': 'second_prediction_confident', 'p2_dog': 'second_prediction_is_breed_dog', 'p3': 'third_prediction', 'p3_conf': 'third_prediction_confident', 'p3_dog': 'third_prediction_is_breed_dog'}) ###Output _____no_output_____ ###Markdown Test ###Code images_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2075 entries, 666020888022790149 to 892420643555336193 Data columns (total 11 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 jpg_url 2075 non-null object 1 img_num 2075 non-null int64 2 first_prediction 2075 non-null object 3 first_prediction_confident 2075 non-null float64 4 first_prediction_is_breed_dog 2075 non-null bool 5 second_prediction 2075 non-null object 6 second_prediction_confident 2075 non-null float64 7 second_prediction_is_breed_dog 2075 non-null bool 8 third_prediction 2075 non-null object 9 third_prediction_confident 2075 non-null float64 10 third_prediction_is_breed_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(1), object(4) memory usage: 152.0+ KB ###Markdown Define2. 
drop images that do not belong to original tweets Code ###Code images_clean_df = images_clean_df[images_clean_df.index.isin(archive_clean_df.index)] ###Output _____no_output_____ ###Markdown Test ###Code images_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1971 entries, 666020888022790149 to 892420643555336193 Data columns (total 11 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 jpg_url 1971 non-null object 1 img_num 1971 non-null int64 2 first_prediction 1971 non-null object 3 first_prediction_confident 1971 non-null float64 4 first_prediction_is_breed_dog 1971 non-null bool 5 second_prediction 1971 non-null object 6 second_prediction_confident 1971 non-null float64 7 second_prediction_is_breed_dog 1971 non-null bool 8 third_prediction 1971 non-null object 9 third_prediction_confident 1971 non-null float64 10 third_prediction_is_breed_dog 1971 non-null bool dtypes: bool(3), float64(3), int64(1), object(4) memory usage: 144.4+ KB ###Markdown Tidinessmerge the 3 datasets to `archive_df` DataFrame Define1. 
merge the 3 datasets to `archive_df` DataFrame Code ###Code archive_clean_df = pd.merge(archive_clean_df, retweet_clean_df, how='left', left_index=True, right_index=True) archive_clean_df.head() ###Output _____no_output_____ ###Markdown Test ###Code archive_clean_df[archive_clean_df.index == 891327558926688256].retweet_count retweet_clean_df[retweet_clean_df.index == 891327558926688256].retweet_count archive_clean_df[archive_clean_df.index == 891327558926688256].retweet_count == retweet_clean_df[retweet_clean_df.index == 891327558926688256].retweet_count archive_clean_df = pd.merge(archive_clean_df, images_clean_df, how='left', left_index=True, right_index=True) archive_clean_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1971 entries, 892420643555336193 to 666020888022790149 Data columns (total 24 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 timestamp 1971 non-null datetime64[ns, UTC] 1 source 1971 non-null object 2 text 1971 non-null object 3 expanded_urls 1971 non-null object 4 rating_numerator 1971 non-null float64 5 rating_denominator 1971 non-null int64 6 name 1379 non-null object 7 doggo 1971 non-null object 8 floofer 1971 non-null object 9 pupper 1971 non-null object 10 puppo 1971 non-null object 11 retweet_count 1971 non-null int64 12 favorite_count 1971 non-null int64 13 jpg_url 1971 non-null object 14 img_num 1971 non-null int64 15 first_prediction 1971 non-null object 16 first_prediction_confident 1971 non-null float64 17 first_prediction_is_breed_dog 1971 non-null bool 18 second_prediction 1971 non-null object 19 second_prediction_confident 1971 non-null float64 20 second_prediction_is_breed_dog 1971 non-null bool 21 third_prediction 1971 non-null object 22 third_prediction_confident 1971 non-null float64 23 third_prediction_is_breed_dog 1971 non-null bool dtypes: bool(3), datetime64[ns, UTC](1), float64(4), int64(4), object(12) memory usage: 409.1+ KB ###Markdown Tidinessone variable in four columns 
`doggo, floofer, pupper, puppo` Define2. add new column called `classification` to hold value of `doggo, floofer, pupper, puppo` column Code ###Code for index, row in archive_clean_df.iterrows(): classification_value = row.doggo if row.floofer: if classification_value: classification_value = str(classification_value) + ", " + str(row.floofer) else: classification_value = row.floofer if row.pupper: if classification_value: classification_value = str(classification_value) + ', ' + str(row.pupper) else: classification_value = row.pupper if row.puppo: if classification_value: classification_value = str(classification_value) + ', ' + str(row.puppo) else: classification_value = row.puppo # archive_clean_df['classification'][index] = classification_value archive_clean_df.loc[index, 'classification'] = classification_value ###Output _____no_output_____ ###Markdown Test ###Code archive_clean_df.classification.value_counts() ###Output _____no_output_____ ###Markdown Tidinesscolumns `doggo, floofer, pupper, puppo` have no use Define3. 
drop columns `doggo, floofer, pupper, puppo` Code ###Code archive_clean_df.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code archive_clean_df.head(20) ###Output _____no_output_____ ###Markdown Storing data ###Code archive_clean_df.to_csv('twitter_archive_master.csv', encoding='utf-8') # images_clean_df.to_csv('twitter_image_master.csv', encoding='utf-8') ###Output _____no_output_____ ###Markdown Analysis & Visualization Insights Top 10 retweet count ###Code retweet_count_avg_time = archive_clean_df.groupby([(archive_clean_df.timestamp.dt.year), (archive_clean_df.timestamp.dt.month)]).retweet_count.mean().sort_values(ascending=False)[9::-1] retweet_count_avg_time retweet_count_avg_time.plot(kind='line', style='-ro', figsize=(10,4)) plt.xlabel('Year-Month') plt.ylabel('retweet count') plt.title('Top 10 retweet count') plt.savefig('Top 10 retweet count.png') ###Output _____no_output_____ ###Markdown First breed image is a dog vs not a dog ###Code first_breed_is_dog = images_clean_df[images_clean_df.first_prediction_is_breed_dog == True].first_prediction_is_breed_dog.count() first_breed_is_not_dog = images_clean_df[images_clean_df.first_prediction_is_breed_dog == False].first_prediction_is_breed_dog.count() (first_breed_is_dog, first_breed_is_not_dog) locations = [1, 2] heights = [first_breed_is_dog, first_breed_is_not_dog] labels = ['Dogs image', 'Others image'] plt.bar(locations, heights, tick_label=labels) plt.title('First breed image is a dog vs not a dog') plt.xlabel('Dogs image vs other images') plt.ylabel('Count'); plt.savefig('First breed image is a dog vs not a dog.png') ###Output _____no_output_____ ###Markdown Top 10 most popular dog names ###Code most_popular_name = archive_clean_df.name.value_counts().sort_values(ascending=False)[9::-1] # plt.pie(most_popular_name,radius=3, autopct='%1.2f%%') most_popular_name.plot(kind='pie', autopct='%1.2f%%', figsize=(10,10)) plt.title('Top 10 
most popular dog names') plt.ylabel('Dog names'); plt.savefig('Top 10 most popular dog names.png') ###Output _____no_output_____ ###Markdown Visualization ###Code archive_2015 = archive_clean_df[(archive_clean_df.timestamp >= '2015-01-01') & (archive_clean_df.timestamp <= '2015-12-31')] fig, ax = plt.subplots(figsize=(20, 10)) ax.bar(archive_2015.timestamp, archive_2015.favorite_count, color='purple') ax.set(xlabel="Date", ylabel="Favorite count", title="Favorite count over time") # Define the date format date_form = DateFormatter("%Y-%m") ax.xaxis.set_major_formatter(date_form) # plt.show() plt.savefig('favorite_count_timestamp.png') ###Output _____no_output_____ ###Markdown Gathering ###Code folder_name = 'data' if not os.path.exists(folder_name): os.makedirs(folder_name) tsv_url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' r = requests.get(tsv_url) with open(os.path.join(folder_name, tsv_url.split('/')[-1]), mode = 'wb') as f: f.write(r.content) ###Output _____no_output_____ ###Markdown Assessing ###Code df_tsv = pd.read_csv(os.path.join(folder_name, tsv_url.split('/')[-1]), delimiter = '\t') ###Output _____no_output_____ ###Markdown Wrangle and Analyze Data - WeRateDogs by Frederick Yen Project OverviewData wrangling, which consists of:1. Gathering data (downloadable file in the Resources tab in the leftmost panel of your classroom and linked in step 1 below).2. Assessing data3. Cleaning data4. Storing, analyzing, and visualizing your wrangled dataReporting on 1) your data wrangling efforts and 2) your data analyses and visualizations Gathering Data for this ProjectGather each of the three pieces of data as described below in a Jupyter Notebook titled wrangle_act.ipynb:1. The WeRateDogs Twitter archive.
Download this file manually by clicking the following link: `twitter_archive_enhanced.csv` ###Code # Import required packages import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline import requests import tweepy import json import time # Read CSV file archived = pd.read_csv('twitter-archive-enhanced.csv') # Initial inspection archived.head() archived.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null object 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 doggo 2356 non-null object 14 floofer 2356 non-null object 15 pupper 2356 non-null object 16 puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown 2. The tweet image predictions, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network. 
This file (`image_predictions.tsv`) is hosted on Udacity's servers and should be downloaded programmatically using the Requests library and the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv ###Code # Download by using Requests URL = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(URL, timeout=3) with open('image-predictions.tsv', mode = 'wb') as file: file.write(response.content) # Read tsv file predictions = pd.read_csv('image-predictions.tsv', sep = '\t') predictions.info() ###Output _____no_output_____ ###Markdown **NOTE: API Keys, Secrets, and Tokens are hidden for submisson!** ###Code # API auth setup (OAuth 2 Authentication) consumer_key = #HIDDEN# #'YOUR CONSUMER KEY' consumer_secret = #HIDDEN# #'YOUR CONSUMER SECRET' bearer_token = #HIDDEN# auth = tweepy.AppAuthHandler(consumer_key, consumer_secret) api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) ###Output _____no_output_____ ###Markdown 3. Using the tweet IDs in the WeRateDogs Twitter archive, query the Twitter API for each tweet's JSON data using Python's Tweepy library and store each tweet's entire set of JSON data in a file called `tweet_json.txt`. Each tweet's JSON data should be written to its own line. Then read this .txt file line by line into a pandas DataFrame with (at minimum) **tweet ID, retweet count, and favorite count**. 
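A minimal sketch of the write/read round trip described above — one JSON object per line, then rebuilt into a DataFrame. Stand-in dicts (hypothetical IDs and counts) are used in place of live Tweepy responses, so no API access is assumed; a real run would dump `tweet._json` instead:

```python
import json
import pandas as pd

# Stand-in payloads mirroring the fields of interest in Twitter's JSON
tweets = [
    {"id": 111, "retweet_count": 5, "favorite_count": 20},
    {"id": 222, "retweet_count": 3, "favorite_count": 9},
]

# Write: one JSON object per line (the "JSON Lines" layout the rubric asks for)
with open("tweet_json_demo.txt", "w") as outfile:
    for t in tweets:
        json.dump(t, outfile)
        outfile.write("\n")

# Read back: parse each line once and keep only the needed fields
rows = []
with open("tweet_json_demo.txt") as infile:
    for line in infile:
        t = json.loads(line)
        rows.append({"tweet_id": t["id"],
                     "retweet_count": t["retweet_count"],
                     "favorite_count": t["favorite_count"]})

df = pd.DataFrame(rows).set_index("tweet_id")
print(df)
```

For a file in exactly this layout, `pd.read_json(path, lines=True)` is a one-line alternative to the manual loop.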
Reference: https://stackoverflow.com/questions/47612822/how-to-create-pandas-dataframe-from-twitter-search-api ###Code tweet_list = [] # Append list and handle exceptions for tweet in archived['tweet_id']: try: tweet_list.append(api.get_status(tweet)) # print('here') except Exception as e: # print('tweet ID not found in') print(tweet) print("End of appending list") # Check length of list print(len(tweet_list)) # Saving JSON section to new list tweet_list_JSON = [] for tweet in tweet_list: tweet_list_JSON.append(tweet._json) # Save list to txt with open('tweet_json.txt', 'w') as file: file.write(json.dumps(tweet_list_JSON, indent=4)) # Inspect saved txt and save JSONs of interest into a dataframe organized_list = [] with open('tweet_json.txt', encoding='utf-8') as json_file: all_data = json.load(json_file) for each_dictionary in all_data: tweet_id = each_dictionary['id'] whole_tweet = each_dictionary['text'] only_url = whole_tweet[whole_tweet.find('https'):] favorite_count = each_dictionary['favorite_count'] retweet_count = each_dictionary['retweet_count'] followers_count = each_dictionary['user']['followers_count'] friends_count = each_dictionary['user']['friends_count'] whole_source = each_dictionary['source'] only_device = whole_source[whole_source.find('rel="nofollow">') + 15:-4] source = only_device retweeted_status = each_dictionary['retweeted_status'] = each_dictionary.get('retweeted_status', 'Original tweet') if retweeted_status == 'Original tweet': url = only_url else: retweeted_status = 'This is a retweet' url = 'This is a retweet' organized_list.append({'tweet_id': str(tweet_id), 'favorite_count': int(favorite_count), 'retweet_count': int(retweet_count), 'followers_count': int(followers_count), 'friends_count': int(friends_count), 'url': url, 'source': source, 'retweeted_status': retweeted_status, }) tweet_json = pd.DataFrame(organized_list, columns = ['tweet_id', 'favorite_count','retweet_count', 'followers_count', 'friends_count','source', 
'retweeted_status', 'url']) # Inspect saved dataframe tweet_json.head() tweet_json.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2331 entries, 0 to 2330 Data columns (total 8 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2331 non-null object 1 favorite_count 2331 non-null int64 2 retweet_count 2331 non-null int64 3 followers_count 2331 non-null int64 4 friends_count 2331 non-null int64 5 source 2331 non-null object 6 retweeted_status 2331 non-null object 7 url 2331 non-null object dtypes: int64(4), object(4) memory usage: 145.8+ KB ###Markdown Assessing Data for this ProjectAfter gathering each of the above pieces of data, assess them visually and programmatically for quality and tidiness issues. Detect and document at least **eight (8) quality issues and two (2) tidiness issues** in `wrangle_act.ipynb` Jupyter Notebook. Quality: - Completeness- validity- Accuracy- ConsistencyTidiness:- Each variable forms a column.- Each observation forms a row.- Each type of observational unit forms a table. Reviewer Feedback 09/24/2020 - Please note that similar issues should be categorised as a single issue even if there are present mutliple datasets. 1. Removing non-required columns from predictions and archive should be considered as a single issue. 2. Similarly, removing retweets in archive and json should be categorized as a single issue. - Additional Data Quality issues to consider: 1. **Dog Names** : In the name column, there are several values that are not dog names, like 'a', 'the', 'such', etc. Notice that all of these observations have lowercase characters, an important pattern that could be used to clean up this field. Another way is to drop duplicated values. 2. **Modifying rows with denominator: != 10** without further inspection is not a good idea. 
For instance, take a look at this tweet https://twitter.com/dog_rates/status/704054845121142784; in this case the denominator is intended to be 50, because there are 5 puppers. Project instruction says denominator is almost always 10, it does not say it is always 10. Visual Assessment ###Code archived predictions tweet_json ###Output _____no_output_____ ###Markdown Programmatic Assessment**Checking 'archived':** ###Code archived.info() # Verify duplicated entries sum(archived['tweet_id'].duplicated()) # Check abnormal ratings # Inspect text of tweets of abnormal numerators for rating in archived.rating_numerator: if rating > 20: print(archived.loc[archived.rating_numerator == rating, 'text']) # Display rating counts # archived.rating_numerator.value_counts() # Check full texts print(archived['text'][902]) archived.rating_denominator.value_counts() # Inspect text of tweets of abnormal denominators for deno in archived.rating_denominator: if deno != 10: print(archived.loc[archived.rating_denominator == deno, 'text']) ###Output 313 @jonnysun @Lin_Manuel ok jomny I know you're e... Name: text, dtype: object 342 @docmisterio account started on 11/15/15 Name: text, dtype: object 433 The floofs have been released I repeat the flo... Name: text, dtype: object 516 Meet Sam. She smiles 24/7 &amp; secretly aspir... Name: text, dtype: object 784 RT @dog_rates: After so many requests, this is... 1068 After so many requests, this is Bretagne. She ... 1662 This is Darrel. He just robbed a 7/11 and is i... Name: text, dtype: object 902 Why does this never happen at my front door...... Name: text, dtype: object 784 RT @dog_rates: After so many requests, this is... 1068 After so many requests, this is Bretagne. She ... 1662 This is Darrel. He just robbed a 7/11 and is i... Name: text, dtype: object 1120 Say hello to this unbelievably well behaved sq... Name: text, dtype: object 1165 Happy 4/20 from the squad! 13/10 for all https... 
1598 Yes I do realize a rating of 4/20 would've bee... Name: text, dtype: object 1202 This is Bluebert. He just saw that both #Final... 1274 From left to right:\nCletus, Jerome, Alejandro... 1351 Here is a whole flock of puppers. 60/50 I'll ... Name: text, dtype: object 1228 Happy Saturday here's 9 puppers on a bench. 99... Name: text, dtype: object 1254 Here's a brigade of puppers. All look very pre... 1843 Here we have an entire platoon of puppers. Tot... Name: text, dtype: object 1202 This is Bluebert. He just saw that both #Final... 1274 From left to right:\nCletus, Jerome, Alejandro... 1351 Here is a whole flock of puppers. 60/50 I'll ... Name: text, dtype: object 1202 This is Bluebert. He just saw that both #Final... 1274 From left to right:\nCletus, Jerome, Alejandro... 1351 Here is a whole flock of puppers. 60/50 I'll ... Name: text, dtype: object 1433 Happy Wednesday here's a bucket of pups. 44/40... Name: text, dtype: object 1165 Happy 4/20 from the squad! 13/10 for all https... 1598 Yes I do realize a rating of 4/20 would've bee... Name: text, dtype: object 1634 Two sneaky puppers were not initially seen, mo... Name: text, dtype: object 1635 Someone help the girl is being mugged. Several... Name: text, dtype: object 784 RT @dog_rates: After so many requests, this is... 1068 After so many requests, this is Bretagne. She ... 1662 This is Darrel. He just robbed a 7/11 and is i... Name: text, dtype: object 1663 I'm aware that I could've said 20/16, but here... Name: text, dtype: object 1779 IT'S PUPPERGEDDON. Total of 144/120 ...I think... Name: text, dtype: object 1254 Here's a brigade of puppers. All look very pre... 1843 Here we have an entire platoon of puppers. Tot... Name: text, dtype: object 2335 This is an Albanian 3 1/2 legged Episcopalian... 
Name: text, dtype: object ###Markdown Checking 'predictions' ###Code predictions.sample(5) predictions.info() # Checking any duplicate entries sum(predictions.tweet_id.duplicated()) sum(predictions.jpg_url.duplicated()) predictions.img_num.value_counts() # Check 'tweet_json' tweet_json.sample(5) tweet_json.info() tweet_json.retweeted_status.value_counts() tweet_json.friends_count.value_counts() tweet_json.followers_count.value_counts() ###Output _____no_output_____ ###Markdown Summary of Eight quality issues and Two tidiness issues Quality**archived**(1) Remove unneeded columns (2) Remove retweets and original tweets that don't have images (3) Timestamps column converted to datetime objects (4) Remove rows that have no ratings (5) Fix numerators that contain decimals to float data type (7) Correct denominators to float data type and set to 10 (8) Set invalid dog names to NaN **predictions** (1) Remove unneeded columns (6) Remove duplicated jpg URLs **tweet_json**(2) Remove retweets Tidiness(1) Tweet_id should have uniform datatypes between all tables (2) All tables should be merged to one master table Cleaning DataClean each of the issues documented while assessing. Perform this cleaning in `wrangle_act.ipynb`. The result is a high-quality and tidy master pandas DataFrame. ###Code # Make copies of the dataframes so the originals don't get modified. tweet_json_clean = tweet_json.copy(deep=True) archived_clean = archived.copy(deep=True) predictions_clean = predictions.copy(deep=True) ###Output _____no_output_____ ###Markdown Reference: https://stackoverflow.com/questions/26535563/querying-for-nan-and-other-names-in-pandas ###Code # **archived** # DEFINE # 1.
Remove retweets and original tweets that don't have images: removing retweets first # CODE # Querying NaNs: NaN != NaN, so this keeps only rows where retweeted_status_user_id is NaN (i.e., not a retweet) archived_clean = archived_clean.query("retweeted_status_user_id != retweeted_status_user_id") # TEST # View cleaning result if sum(archived_clean.retweeted_status_user_id.value_counts()) != 0: print('Retweet still exists.') else: print('No retweets') # DEFINE # 2. Remove source and reply related columns # CODE archived_clean = archived_clean.drop(['source','in_reply_to_user_id','in_reply_to_status_id'], axis=1) # View initial cleaning result print(list(archived_clean)) # Remove retweet related columns archived_clean = archived_clean.drop(['retweeted_status_id', 'retweeted_status_timestamp', 'expanded_urls'], axis=1) # TEST # View final cleaning result print(list(archived_clean)) # DEFINE # 3. Convert the timestamp column to datetime objects (Year-Month-Day) # CODE # Convert data type to datetime object archived_clean['timestamp'] = pd.to_datetime(archived_clean['timestamp']) # TEST # View cleaning result print(list(archived_clean)) ###Output ['tweet_id', 'timestamp', 'text', 'retweeted_status_user_id', 'rating_numerator', 'rating_denominator', 'name', 'doggo', 'floofer', 'pupper', 'puppo'] ###Markdown Reference: https://stackoverflow.com/questions/18172851/deleting-dataframe-row-in-pandas-based-on-column-value ###Code # DEFINE # 4.
Remove rows that have no ratings # CODE # Check current number of row entries print('Before:', archived_clean.shape) # From the manual inspection results, drop 5 discovered tweets that had no ratings archived_clean.drop(archived_clean.loc[archived_clean['tweet_id']== 682808988178739200].index, inplace=True) archived_clean.drop(archived_clean.loc[archived_clean['tweet_id']== 835246439529840640].index, inplace=True) archived_clean.drop(archived_clean.loc[archived_clean['tweet_id']== 832088576586297345].index, inplace=True) archived_clean.drop(archived_clean.loc[archived_clean['tweet_id']== 686035780142297088].index, inplace=True) archived_clean.drop(archived_clean.loc[archived_clean['tweet_id']== 810984652412424192].index, inplace=True) # TEST print('After:', archived_clean.shape) # DEFINE # 5. Fix numerators that contain decimals to float data type # 7. Correct denominators to float data type and set to 10 # CODE # Convert numerators based on denominators set to 10 archived_clean['rating_numerator'] = 10 * (archived_clean['rating_numerator'].astype(float) / archived_clean['rating_denominator'].astype(float)) archived_clean['rating_denominator'] = 10.0 # TEST # Check converted ratings archived_clean.head(3) # DEFINE # 8. Set invalid dog names to NaN # CODE # Reference: Reviewer Feedback 09/24/2020 archived_clean['name'].replace('None',np.nan,inplace=True) archived_clean['name'].replace('a',np.nan,inplace=True) archived_clean['name'].replace('an',np.nan,inplace=True) archived_clean['name'].replace('the',np.nan,inplace=True) # TEST archived_clean['name'].sample(10) # **predictions** # DEFINE # 6. Remove duplicated jpg URLs # CODE predictions_clean = predictions_clean.drop_duplicates(subset=['jpg_url']) # TEST # Verifying no duplicates left if sum(predictions_clean['jpg_url'].duplicated()) == 0: print('No duplicates left in predictions') # DEFINE # 1. 
Remove unneeded columns - save one main prediction and its confidence level predictions_clean.info() # CODE # Loop through each row and save the first True prediction. # Create new empty lists dog_type = [] confidence = [] for index, row in predictions_clean.iterrows(): if row['p1_dog'] == True: dog_type.append(row['p1']) confidence.append(row['p1_conf']) elif row['p2_dog'] == True: dog_type.append(row['p2']) confidence.append(row['p2_conf']) elif row['p3_dog'] == True: dog_type.append(row['p3']) confidence.append(row['p3_conf']) else: # drop error row predictions_clean.drop(index, inplace=True) # Append new columns predictions_clean['dog_type'] = dog_type predictions_clean['confidence'] = confidence # Remove unneeded columns predictions_clean = predictions_clean.drop(['img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], axis=1) # TEST # Check result predictions_clean.info() # **tweet_json** # DEFINE # 2. Remove RTs # CODE # Only keep original tweets tweet_json_clean = tweet_json_clean.query('retweeted_status == "Original tweet"') # TEST tweet_json_clean['retweeted_status'].value_counts() # Tidiness # DEFINE #1. Tweet_id should have uniform datatypes between all tables # CODE # Set tweet_id to int tweet_json_clean['tweet_id'] = tweet_json_clean['tweet_id'].astype(int) # TEST tweet_json_clean.info() # DEFINE #2. 
All tables should be merged to one master table # CODE # Create new dataframe that merges archived_clean and predictions_clean archived_predictions = pd.merge(archived_clean, predictions_clean, on = ['tweet_id'], how = 'left') # Clean out entries without jpg_url archived_predictions.dropna(subset = ['jpg_url'], inplace=True) # Check result archived_predictions.info() # Merge to create the master dataframe twitter_master = pd.merge(archived_predictions, tweet_json_clean, on = ['tweet_id'], how = 'left') # TEST # Verify final merged results twitter_master.info() twitter_master.sample(5) ###Output _____no_output_____ ###Markdown Storing, Analyzing, and Visualizing Data for this ProjectStore the clean DataFrame(s) in a CSV file with the main one named `twitter_archive_master.csv`. ###Code #Save CSV file twitter_master.to_csv('twitter_archive_master.csv', index=False, encoding = 'utf-8') ###Output _____no_output_____ ###Markdown Analyze and visualize wrangled data in `wrangle_act.ipynb` Jupyter Notebook. At least **three (3) insights and one (1) visualization** must be produced. 
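The two-step left merge used above to build the master table can be sketched on a toy example. The tiny frames and values below are invented purely for illustration; only the merge/dropna pattern mirrors the cleaning step:

```python
import pandas as pd

# Toy stand-ins for archived_clean and predictions_clean (values are invented)
archived = pd.DataFrame({'tweet_id': [1, 2, 3],
                         'rating_numerator': [12.0, 13.0, 10.0]})
predictions = pd.DataFrame({'tweet_id': [1, 3],
                            'jpg_url': ['a.jpg', 'c.jpg']})

# A left merge keeps every archived row; rows without a prediction get NaN jpg_url
merged = pd.merge(archived, predictions, on='tweet_id', how='left')

# Dropping rows without jpg_url leaves only tweets that have images
merged = merged.dropna(subset=['jpg_url'])
print(merged['tweet_id'].tolist())  # tweet 2 had no image row and is dropped
```

The same pattern, repeated once more against the tweet-JSON frame, yields the final `twitter_master` table.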
Insight No.1 and Visualization: the Golden Retriever appears most frequently in the dataset. Reference: https://towardsdatascience.com/getting-more-value-from-the-pandas-value-counts-aa17230907a6 ###Code # Value count shown in percentage twitter_master['dog_type'].value_counts(normalize=True) # Filter out low count dog types and plot a horizontal bar chart dog_type_master = twitter_master.groupby('dog_type').filter(lambda x: len(x) >= 35) dog_type_master['dog_type'].value_counts(normalize=True).plot(kind = 'barh') plt.title('Dog Type Appearance Percentage') plt.xlabel('Proportion') plt.ylabel('Dog Type') fig = plt.gcf() fig.savefig('dogtype.png'); ###Output _____no_output_____ ###Markdown Insight No.2 and Visualization: the Clumber has the highest average rating amongst all dog types ###Code # Calculate mean rating for each dog type and sort dog_type_master_mean = twitter_master.groupby('dog_type').mean() dog_type_master_sorted = dog_type_master_mean['rating_numerator'].sort_values(ascending=False) dog_type_master_sorted.iloc[0:14].plot(kind = 'barh') plt.title('Sorted Ratings of Dog Types') plt.xlabel('Average rating') plt.ylabel('Dog Type') fig = plt.gcf() fig.savefig('dogratings.png'); ###Output _____no_output_____ ###Markdown Insight No.3 and Visualization: inspecting ratings vs. retweets shows that tweets with higher ratings are more likely to get more retweets ###Code # Plot scatter plot for inspection. twitter_master.plot(x='rating_numerator', y='retweet_count', kind='scatter') plt.xlabel('Ratings') plt.ylabel('Retweet Counts') plt.title('Ratings vs Retweet Counts Scatter Plot') plt.xlim(0, 30) # Exclude outlier ratings fig = plt.gcf() fig.savefig('Scatter.png'); ###Output _____no_output_____ ###Markdown Wrangle and Analyze Data Introduction This project focuses on wrangling data from the WeRateDogs Twitter account using Python, documented in a Jupyter Notebook (wrangle_act.ipynb). This Twitter account rates dogs with humorous commentary. 
The rating denominator is usually 10; the numerators, however, are usually greater than 10. Why? Because "they're good dogs Brent." WeRateDogs has over 4 million followers and has received international media coverage. WeRateDogs downloaded their Twitter archive and sent it to Udacity via email exclusively for us to use in this project. This archive contains basic tweet data (tweet ID, timestamp, text, etc.) for all 5000+ of their tweets as they stood on August 1, 2017. The goal of this project is to wrangle the WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations. The challenge lies in the fact that the Twitter archive is great, but it only contains very basic tweet information that comes in JSON format. I needed to gather, assess and clean the Twitter data for a worthy analysis and visualization. The Data Enhanced Twitter Archive The WeRateDogs Twitter archive contains basic tweet data for all 5000+ of their tweets, but not everything. One column the archive does contain, though: each tweet's text, which I used to extract the rating, dog name, and dog "stage" (i.e. doggo, floofer, pupper, and puppo) to make this Twitter archive "enhanced". We downloaded this file manually by clicking the following link: twitter_archive_enhanced.csv Image Predictions File The tweet image predictions, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network. This file (image_predictions.tsv) is hosted on Udacity's servers and we downloaded it programmatically using the Python Requests library from the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv Twitter API Back to the basic-ness of Twitter archives: retweet count and favorite count are two of the notable column omissions. 
Fortunately, this additional data can be gathered by anyone from Twitter's API. Well, "anyone" who has access to data for the 3000 most recent tweets, at least. But we, because we have the WeRateDogs Twitter archive and specifically the tweet IDs within it, can gather this data for all 5000+. And guess what? We're going to query Twitter's API to gather this valuable data. Key Points Before we start, here are a few points to keep in mind when data wrangling for this project:* We only want original ratings (no retweets) that have images. Though there are 5000+ tweets in the dataset, not all are dog ratings and some are retweets.* Fully assessing and cleaning the entire dataset requires exceptional effort, so only a subset of its issues (eight (8) quality issues and two (2) tidiness issues at minimum) need to be assessed and cleaned.* Cleaning includes merging individual pieces of data according to the rules of tidy data.* The fact that the rating numerators are greater than the denominators does not need to be cleaned. This unique rating system is a big part of the popularity of WeRateDogs. 
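The first key point (keep only original tweets) amounts to filtering on the missing retweet fields; a minimal sketch with an invented mini-frame, not the real archive:

```python
import pandas as pd
import numpy as np

# Invented sample: two originals and one retweet
tweets = pd.DataFrame({
    'tweet_id': [10, 11, 12],
    'retweeted_status_id': [np.nan, 123456, np.nan],
})

# Original tweets are exactly the rows where retweeted_status_id is NaN
originals = tweets[tweets['retweeted_status_id'].isna()]
print(originals['tweet_id'].tolist())  # the retweet (id 11) is filtered out
```

The cleaning sections below apply this same idea to the real archive (and additionally require an image URL).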
###Code import numpy as np import os import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import requests import tweepy import json from timeit import default_timer as timer from tweepy import OAuthHandler ###Output _____no_output_____ ###Markdown Gathering Data* **Loading the twitter-archive-enhanced.csv into a DataFrame [WeRateDogs Twitter archive]** ###Code df = pd.read_csv('twitter-archive-enhanced.csv') ###Output _____no_output_____ ###Markdown * **Loading the tweet image predictions from Udacity's servers** ###Code r = requests.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv') with open('image-predictions.tsv', mode='wb') as file: file.write(r.content) image_df = pd.read_csv('image-predictions.tsv', sep='\t') ###Output _____no_output_____ ###Markdown * **Loading Favorite count and retweet count from Twitter** ###Code consumer_key ='xxxx' consumer_secret ='xxxx' access_token ='xxxx' access_secret = 'xxxx' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) start = timer() df_list = [] errors = [] # iterate over df, the archive DataFrame loaded above for id in df['tweet_id']: try: tweet = api.get_status(id, tweet_mode='extended') df_list.append({'tweet_id': str(tweet.id), 'favorite_count': int(tweet.favorite_count), 'retweet_count': int(tweet.retweet_count)}) except Exception as e: print(str(id) + " : " + str(e)) errors.append(id) end = timer() df_tweet_json = pd.DataFrame(columns=['tweet_id', 'retweet_count', 'favorite_count']) with open('tweet-json.txt') as data_file: for line in data_file: tweet = json.loads(line) tweet_id = tweet['id_str'] retweet_count = tweet['retweet_count'] favorite_count = tweet['favorite_count'] df_tweet_json = df_tweet_json.append(pd.DataFrame([[tweet_id, retweet_count, favorite_count]], columns=['tweet_id', 'retweet_count', 'favorite_count'])) df_tweet_json = 
df_tweet_json.reset_index(drop=True) ###Output _____no_output_____ ###Markdown Assessing Data - Assess the twitter-archive-enhanced dataset ###Code #data of twitter-archive-enhanced.csv df.info() df.head() df.tail() ###Output _____no_output_____ ###Markdown * **Missing data in the following columns: in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp, expanded_urls** * **Timestamp and retweeted_status_timestamp are objects** * **The source column has HTML tags** * **This dataset includes retweets, which means there is duplicated data** ###Code # checks for duplicated entries in df df.duplicated().sum() ###Output _____no_output_____ ###Markdown * **There are no duplicate entries present** ###Code print('maximum rating_numerator :',df["rating_numerator"].max()) print('maximum rating_denominator :',df["rating_denominator"].max()) df[df.name.str.islower()].name.value_counts() df[df.name.str.isupper()].name.value_counts() ###Output _____no_output_____ ###Markdown * **Dog names include 'None', 'a', 'an', 
or 'O' or 'by' and other lowercase words as names** - Assess the tweet image predictions dataset ###Code #data of tweet image predictions.csv image_df.head() image_df.tail() image_df.describe() ###Output _____no_output_____ ###Markdown * **Dog breeds are not consistently cased in the p1, p2, p3 columns, i.e. lower- or uppercase** ###Code image_df.info() # checks for duplicated entries in image_df image_df.duplicated().sum() # Count of duplicate jpg_url image_df['jpg_url'].duplicated().sum() ###Output _____no_output_____ ###Markdown * **jpg_url contains duplicates, which means duplicate image links** ###Code image_df['img_num'].value_counts() ###Output _____no_output_____ ###Markdown - Assess the tweet-json dataset ###Code #tweet-json dataset df_tweet_json.info() df_tweet_json.head() df_tweet_json.tail() df_tweet_json.describe() #sample df_tweet_json.sample(10) df_tweet_json.duplicated().sum() ###Output _____no_output_____ ###Markdown * **There are no duplicate entries present** Quality Issues - Twitter-archive-enhanced dataset (df): * Missing data in the following columns: in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp, expanded_urls * This dataset includes retweets, which means there is duplicated data * Timestamp and retweeted_status_timestamp are objects * The source column still has the HTML tags * Dog names include 'None', 'a', 'an', 
and other lowercase words as names * Multiple dog stages occur, such as 'doggo puppo', 'doggo pupper', 'doggo floofer' - Tweet image predictions dataset (image_df): * Dog breeds are not consistently cased in the p1, p2, p3 columns - Tweet-json dataset (df_tweet_json): * Missing data * tweet_id is an object Tidiness Issues - Twitter-archive-enhanced dataset (df): * The variable for the dog's stage (doggo, floofer, pupper, puppo) is spread across different columns - Tweet image predictions dataset (image_df): * This dataset is part of the same observational unit as the data in df - Tweet-json dataset (df_tweet_json): * This dataset is also part of the same observational unit as the data in df Findings of the analysis - The pred_breed column is created based on a minimum confidence level of 20% and the 'p1_dog', 'p2_dog' and 'p3_dog' statements - Based on the dog types doggo, floofer, pupper, puppo, 'doggo, puppo', 'doggo, pupper' and 'doggo, floofer', a single categorical column named 'stage' is created - tweet_id is set as object type, as it will not be used for calculations - A main dataframe is created from the df_clean, image_df_clean, and tweet_json_clean dataframes - The dog names issue was rectified - The inconsistency in pred_breed was removed - All retweets were deleted to keep only unique tweets - The unneeded columns in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, and retweeted_status_timestamp were removed - The timestamp format was corrected to datetime format - Extra HTML tags were stripped from the source column - Dog ratings are standardized to a denominator of 10. 
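The rating standardization mentioned in the last finding is a simple rescale; a sketch with made-up ratings (e.g. 84/70 for a group of 7 dogs):

```python
# Made-up ratings, including group ratings with denominators other than 10
numerators = [84.0, 13.0, 44.0]
denominators = [70.0, 10.0, 40.0]

# Rescale each rating so that the denominator becomes 10
standardized = [10 * n / d for n, d in zip(numerators, denominators)]
print(standardized)  # [12.0, 13.0, 11.0]
```

After this step every rating can be compared on the same /10 scale.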
Cleaning Data * **Making a copy of the dataframes before cleaning** ###Code #df_clean copy of df df_clean = df.copy() #image_df_clean copy of image_df image_df_clean = image_df.copy() #tweet_json_clean copy of df_tweet_json tweet_json_clean = df_tweet_json.copy() tweet_json_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2354 entries, 0 to 2353 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2354 non-null object 1 retweet_count 2354 non-null object 2 favorite_count 2354 non-null object dtypes: object(3) memory usage: 55.3+ KB ###Markdown - DEFINE - 1 > Convert the tweet_id in the tweet_json_clean dataframe into int type for merging into the master dataframe - CODE ###Code tweet_json_clean['tweet_id'] = tweet_json_clean['tweet_id'].astype('int64') ###Output _____no_output_____ ###Markdown - TEST ###Code tweet_json_clean.info() image_df_clean.sample() ###Output _____no_output_____ ###Markdown - DEFINE - 2 > Create a predicted dog breed column, based on a minimum confidence level of 20% and the 'p1_dog', 'p2_dog' and 'p3_dog' statements - CODE ###Code image_df_clean['pred_breed'] = [row['p1'] if row['p1_dog'] == True and row['p1_conf'] > 0.2 else row['p2'] if row['p2_dog'] == True and row['p2_conf'] > 0.2 else row['p3'] if row['p3_dog'] == True and row['p3_conf'] > 0.2 else np.nan for index, row in image_df_clean.iterrows()] ## Drop 'p1', 'p1_dog', 'p1_conf','p2', 'p2_dog', 'p2_conf','p3', 'p3_dog', 'p3_conf' columns image_df_clean.drop(['p1', 'p1_dog', 'p1_conf','p2', 'p2_dog', 'p2_conf','p3', 'p3_dog', 'p3_conf'], axis = 1, inplace=True) ###Output _____no_output_____ ###Markdown - TEST ###Code image_df_clean.sample() df_clean.info() #Number of columns in df_clean df_clean.columns ###Output _____no_output_____ ###Markdown - DEFINE - 3 > Create one column for the various dog types: doggo, floofer, pupper, puppo, 'doggo, puppo', 'doggo, pupper', 'doggo, floofer' as a column named 'stage' with the categorical dtype - 
CODE ###Code # As there are separate columns for the dog types 'doggo', 'floofer', 'pupper' and so on, # I will convert them into one column df_clean.doggo.replace(np.NaN, '', inplace=True) df_clean.floofer.replace(np.NaN, '', inplace=True) df_clean.pupper.replace(np.NaN, '', inplace=True) df_clean.puppo.replace(np.NaN, '', inplace=True) df_clean.doggo.replace('None', '', inplace=True) df_clean.floofer.replace('None', '', inplace=True) df_clean.pupper.replace('None', '', inplace=True) df_clean.puppo.replace('None', '', inplace=True) df_clean['stage'] = df_clean.doggo + df_clean.floofer + df_clean.pupper + df_clean.puppo df_clean.loc[df_clean.stage == 'doggopupper', 'stage'] = 'doggo, pupper' df_clean.loc[df_clean.stage == 'doggopuppo', 'stage'] = 'doggo, puppo' df_clean.loc[df_clean.stage == 'doggofloofer', 'stage'] = 'doggo, floofer' # Convert the stage in df_clean into categorical dtype df_clean['stage'] = df_clean['stage'].astype('category') # drop 'doggo', 'floofer', 'pupper', 'puppo' columns df_clean.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1, inplace=True) df_clean.stage.replace('', np.nan, inplace=True) ###Output _____no_output_____ ###Markdown - TEST ###Code df_clean.info() df_clean.stage.value_counts() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 14 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 in_reply_to_status_id 78 non-null float64 2 in_reply_to_user_id 78 non-null float64 3 timestamp 2356 non-null object 4 source 2356 non-null object 5 text 2356 non-null object 6 retweeted_status_id 181 non-null float64 7 retweeted_status_user_id 181 non-null float64 8 retweeted_status_timestamp 181 non-null object 9 expanded_urls 2297 non-null object 10 rating_numerator 2356 non-null int64 11 rating_denominator 2356 non-null int64 12 name 2356 non-null object 13 stage 380 non-null category dtypes: category(1), float64(4), int64(3), object(6) 
memory usage: 242.1+ KB ###Markdown - DEFINE - 4 > Merge the copied df_clean, image_df_clean, and tweet_json_clean dataframes - CODE ###Code from functools import reduce data = [df_clean, image_df_clean, tweet_json_clean] main_df = reduce(lambda left, right: pd.merge(left, right,on = 'tweet_id'), data) ###Output _____no_output_____ ###Markdown - TEST ###Code # Merged DataFrame main_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2073 entries, 0 to 2072 Data columns (total 19 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2073 non-null int64 1 in_reply_to_status_id 23 non-null float64 2 in_reply_to_user_id 23 non-null float64 3 timestamp 2073 non-null object 4 source 2073 non-null object 5 text 2073 non-null object 6 retweeted_status_id 79 non-null float64 7 retweeted_status_user_id 79 non-null float64 8 retweeted_status_timestamp 79 non-null object 9 expanded_urls 2073 non-null object 10 rating_numerator 2073 non-null int64 11 rating_denominator 2073 non-null int64 12 name 2073 non-null object 13 stage 320 non-null category 14 jpg_url 2073 non-null object 15 img_num 2073 non-null int64 16 pred_breed 1471 non-null object 17 retweet_count 2073 non-null object 18 favorite_count 2073 non-null object dtypes: category(1), float64(4), int64(4), object(10) memory usage: 310.1+ KB ###Markdown - DEFINE - 5 > Convert the tweet_id in main_df into object type, as it will not be used for math operations - CODE ###Code main_df['tweet_id'] = main_df['tweet_id'].astype('object') ###Output _____no_output_____ ###Markdown - TEST ###Code main_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2073 entries, 0 to 2072 Data columns (total 19 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2073 non-null object 1 in_reply_to_status_id 23 non-null float64 2 in_reply_to_user_id 23 non-null float64 3 timestamp 2073 non-null object 4 source 2073 non-null object 5 text 2073 
non-null object 6 retweeted_status_id 79 non-null float64 7 retweeted_status_user_id 79 non-null float64 8 retweeted_status_timestamp 79 non-null object 9 expanded_urls 2073 non-null object 10 rating_numerator 2073 non-null int64 11 rating_denominator 2073 non-null int64 12 name 2073 non-null object 13 stage 320 non-null category 14 jpg_url 2073 non-null object 15 img_num 2073 non-null int64 16 pred_breed 1471 non-null object 17 retweet_count 2073 non-null object 18 favorite_count 2073 non-null object dtypes: category(1), float64(4), int64(3), object(11) memory usage: 310.1+ KB ###Markdown - DEFINE - 6 > Replace 'a', 'an', 'the', 'None' and other lowercase words with NaN in the name column - CODE ###Code main_df['name'] = main_df['name'].replace(main_df[main_df.name.str.islower()].name.unique(), np.nan) main_df['name'] = main_df['name'].replace('None', np.nan) main_df['name'].dropna() ###Output _____no_output_____ ###Markdown - TEST ###Code main_df.name.value_counts() ###Output _____no_output_____ ###Markdown - DEFINE - 7 > Delete Retweets - CODE ###Code # Delete the rows which contain retweets main_df = main_df.drop(main_df[(main_df['in_reply_to_status_id'].isnull() == False) | (main_df['retweeted_status_id'].isnull() == False)].index) ###Output _____no_output_____ ###Markdown - TEST ###Code main_df.shape main_df.columns ###Output _____no_output_____ ###Markdown - DEFINE - 8 > Remove columns no longer needed: in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, and retweeted_status_timestamp - CODE ###Code # drop the reply status and retweet status columns main_df.drop(['in_reply_to_status_id', 'in_reply_to_user_id','retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown - TEST ###Code main_df.columns main_df['timestamp'].sample(5) ###Output _____no_output_____ ###Markdown - DEFINE - 9 > Change the timestamp to the correct datetime format - 
CODE ###Code main_df['timestamp'] = pd.to_datetime(main_df['timestamp'], format='%Y-%m-%d %H:%M:%S') ###Output _____no_output_____ ###Markdown - TEST ###Code main_df['timestamp'].sample(5) main_df.head() ###Output _____no_output_____ ###Markdown - DEFINE - 10 > Removing HTML tags from source column - CODE ###Code href = main_df["source"].str.split('"', expand = True) main_df["source"] = href[1] ###Output _____no_output_____ ###Markdown - TEST ###Code main_df.head() main_df['rating_numerator'].unique() main_df['rating_denominator'].unique() ###Output _____no_output_____ ###Markdown - DEFINE - 11 > Standardize dog ratings - CODE ###Code ratings = main_df.text.str.extract('((?:\d+\.)?\d+)\/(\d+)', expand=True) # first capture group is the numerator main_df['rating_numerator'] = ratings[0] main_df['rating_numerator'] = main_df['rating_numerator'].astype('float64') # standardizing to a denominator of 10 for groups of dogs: rating_num = [int(round(num/(denom/10))) if denom != 10 and num/denom <= 2 else num for num, denom in zip(main_df['rating_numerator'], main_df['rating_denominator'])] rating_denom = [10 if denom != 10 and num/denom <= 2 else denom for num, denom in zip(main_df['rating_numerator'], main_df['rating_denominator'])] main_df['rating_numerator'] = rating_num main_df['rating_denominator'] = rating_denom main_df = main_df.drop(main_df[((main_df['rating_denominator'] != 10) | (main_df['rating_numerator'] > 20))].index) ###Output _____no_output_____ ###Markdown - TEST ###Code main_df['rating_numerator'].unique() main_df['rating_denominator'].unique() ###Output _____no_output_____ ###Markdown Store Clean Datasets ###Code # storing main dataframe as csv main_df.to_csv('main_df.csv', encoding='utf-8', index=False) ###Output _____no_output_____ ###Markdown Analyzing and Visualizing Data ###Code # read main_df.csv df1 = pd.read_csv('main_df.csv') df1.info() df1.head() df1.describe() ###Output _____no_output_____ ###Markdown - Top 10 most frequently used dog names ###Code 
df1['name'].value_counts()[0:10].sort_values(ascending=False).plot(kind = 'bar') plt.ylabel('Number of Dogs') plt.title('Top 10 frequent dog names', size=15) plt.xlabel('Dog Names') plt.plot(); ###Output _____no_output_____ ###Markdown * **Most frequent dog names: Charlie, Oliver, Cooper, Penny, Tucker, Lucy, Sadie, Winston, Daisy, Lola** - Top 10 most frequent dog breeds ###Code df1['pred_breed'].value_counts()[0:10].sort_values(ascending=False).plot(kind = 'bar') plt.ylabel('Number of Breed Predictions') plt.title('Top 10 frequent dog breeds', size=15) plt.xlabel('Dog Breed') plt.plot(); ###Output _____no_output_____ ###Markdown * **The top 10 dog breeds include the Golden Retriever and the Labrador Retriever, all of which are rated** ###Code from subprocess import call call(['python', '-m', 'nbconvert', 'wrangle_act.ipynb']) ###Output _____no_output_____ ###Markdown Let us begin with the **Gathering** of the Data. ###Code # reading the csv file! it is indeed very simple, since it is comma-separated! twar=pd.read_csv('twitter-archive-enhanced.csv') #visualizing the first 5 elements! twar.head() ###Output _____no_output_____ ###Markdown **ASSESSING** the data: using the .info, .sample, .describe, .columns and .value_counts methods. ###Code # let us see which and how many columns we have! print(twar.columns.tolist()) print('The number of columns is equal to {0}'.format(len(twar.columns.tolist()))) ###Output ['tweet_id', 'in_reply_to_status_id', 'in_reply_to_user_id', 'timestamp', 'source', 'text', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'expanded_urls', 'rating_numerator', 'rating_denominator', 'name', 'doggo', 'floofer', 'pupper', 'puppo'] The number of columns is equal to 17 ###Markdown We therefore have 17 columns with various pieces of information about each user_id's tweet. ###Code # analyzing the DF using info! 
twar.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown We can see several columns of object and float type. ###Code # taking a sample of 5 rows twar.sample(5) ###Output _____no_output_____ ###Markdown We have columns with missing values, such as 'in_reply_to_status_id' and 'in_reply_to_user_id', which should be dropped since they contain no information. The same holds for 'retweeted_status_user_id' and 'retweeted_status_timestamp', which contain almost exclusively non-numeric values (np.NaN). **Cleaning** of the Data: Quality and Tidiness factors! What are the quality and tidiness issues of this DataFrame (DF)? Quality 1) the timestamp as currently written is a problem; let us split it into month, day and year! 2) in_reply_to_status_id and in_reply_to_user_id mean nothing, so we can drop them. 3) 'retweeted_status_user_id' and 'retweeted_status_timestamp' mean nothing, so we can drop them. 
Tidiness 1) given the way rating_numerator and rating_denominator are presented, and to keep consistency with later DFs, we will create a column defined as score_rate = rating_numerator/rating_denominator **Fixing quality issue 1) regarding the timestamp column!** ###Code # Making a copy of this DF so we can manipulate it! twar_cp= twar.copy() ###Output _____no_output_____ ###Markdown **Coding** the change in the DF to fix the timestamp column ###Code # writing code to split the timestamp into month, day and year! # the timestamp starts with 'YYYY-MM-DD', so index 0 is the year, 1 the month, 2 the day list_month = [] list_day = [] list_year = [] for i in range(len(twar_cp['timestamp'])): list_month.append(twar_cp['timestamp'][i].split()[0].split('-')[1]) list_day.append(twar_cp['timestamp'][i].split()[0].split('-')[2]) list_year.append(twar_cp['timestamp'][i].split()[0].split('-')[0]) # adding the values stored in the lists to the DF! twar_cp['month']=list_month twar_cp['day']=list_day twar_cp['year']=list_year twar_cp['year'].head() # deleting the timestamp column # we no longer need it twar_cp.drop(['timestamp'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown testing ###Code twar_cp ###Output _____no_output_____ ###Markdown it seems to be fine! 
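As an aside, the same month/day/year extraction done above with manual string splitting can be delegated to pandas via pd.to_datetime and the .dt accessor; a sketch on invented timestamps in the same shape as the archive's column:

```python
import pandas as pd

# Invented timestamps shaped like the archive's timestamp column
s = pd.Series(['2017-08-01 16:23:56', '2016-01-05 02:40:30'])

parsed = pd.to_datetime(s)
years = parsed.dt.year.tolist()
months = parsed.dt.month.tolist()
days = parsed.dt.day.tolist()
print(years, months, days)  # [2017, 2016] [8, 1] [1, 5]
```

This avoids indexing mistakes entirely, since pandas parses the components by name rather than by position.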
**Quality** problem 2) Dropping the unnecessary columns ###Code twar_cp.drop(['in_reply_to_status_id'],axis=1,inplace=True) twar_cp.drop(['in_reply_to_user_id'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown **Testing** the modification ###Code twar_cp.columns ###Output _____no_output_____ ###Markdown We can see that the dropped columns are indeed no longer present :) **Quality** problem 3) Dropping the unnecessary columns ###Code twar_cp.drop(['retweeted_status_id'],axis=1,inplace=True) twar_cp.drop(['retweeted_status_user_id'],axis=1,inplace=True) twar_cp.drop(['retweeted_status_timestamp'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown **Testing** the code ###Code twar_cp.columns ###Output _____no_output_____ ###Markdown We can see that the dropped columns are indeed no longer present :) **Tidiness**: dealing with the rating_numerator and rating_denominator columns ###Code # creating the score variable # we should define it, I believe, as the quotient of rating_numerator and # rating_denominator list_score=[] #print(twar_cp['rating_numerator']) for i in range(len(twar_cp['rating_numerator'])): list_score.append(float(twar_cp['rating_numerator'][i]/twar_cp['rating_denominator'][i])) twar_cp['score']=list_score # dropping the unnecessary columns! twar_cp.drop(['rating_numerator'],axis=1, inplace=True) twar_cp.drop(['rating_denominator'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown **Testing** the code! ###Code # basic cleaning of the first DF twar_cp.head(2) # using rename for compatibility with other DFs twar_cp.rename(columns={'score':'rate_score'}, inplace=True) twar_cp.sample(5) ###Output _____no_output_____ ###Markdown Everything seems to be working very well :) amazing! Now dealing with the second DF, which must be downloaded programmatically. Gathering! ###Code # Downloading programmatically using requests! 
import io url_imagepred='https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' data_url = requests.get(url_imagepred).content ''' !!! below I could have passed sep='\t', but I initially forgot! I ended up working with a much more complicated way of reading the file and using it as a DF, but it worked! At least I was proud it still worked! :) ''' rawData = pd.read_csv(io.StringIO(data_url.decode('utf-8'))) ###Output _____no_output_____ ###Markdown Visualizing how the data appears to be organized! ###Code rawData['tweet_id\tjpg_url\timg_num\tp1\tp1_conf\tp1_dog\tp2\tp2_conf\tp2_dog\tp3\tp3_conf\tp3_dog'] ###Output _____no_output_____ ###Markdown what messy data! the \t separator is still there! We need to deal with this \t separator in the list by using a raw string ###Code # remember to convert to a raw string colnames_split = r"""tweet_id\tjpg_url\timg_num\tp1\tp1_conf\tp1_dog\tp2\tp2_conf\tp2_dog\tp3\tp3_conf\tp3_dog""" print(colnames_split.split('\\')) print(len(colnames_split.split('\\'))) # let us take a look at what should be dataframe column names -> all mixed all_mixed = 'tweet_id\tjpg_url\timg_num\tp1\tp1_conf\tp1_dog\tp2\tp2_conf\tp2_dog\tp3\tp3_conf\tp3_dog' print(("%r"%rawData[all_mixed][0]).split('\\')) print(len(("%r"%rawData[all_mixed][0]).split('\\'))) colnames_split = r"""tweet_id\tjpg_url\timg_num\tp1\tp1_conf\tp1_dog\tp2\tp2_conf\tp2_dog\tp3\tp3_conf\tp3_dog""" colnam = colnames_split.split('\\t') df_requests=pd.DataFrame({colnam[0]:[],colnam[1]:[],colnam[2]:[],colnam[3]:[],colnam[4]:[],colnam[5]:[],colnam[6]:[],colnam[7]:[],colnam[8]:[],colnam[9]:[],colnam[10]:[],colnam[11]:[]}) #for i in range(len(colnames_split.split('\\'))): len_to_iter = len(rawData['tweet_id\tjpg_url\timg_num\tp1\tp1_conf\tp1_dog\tp2\tp2_conf\tp2_dog\tp3\tp3_conf\tp3_dog']) for i in range(len_to_iter): all_mixed = 
'tweet_id\tjpg_url\timg_num\tp1\tp1_conf\tp1_dog\tp2\tp2_conf\tp2_dog\tp3\tp3_conf\tp3_dog' raw_splitted = ("%r"%rawData[all_mixed][i]).split('\\t') df_requests = df_requests.append(pd.Series([raw_splitted[k] for k in range(len(raw_splitted))],index=df_requests.columns),sort=False,ignore_index=True) ###Output _____no_output_____ ###Markdown Now let us see how the tab-separated file, imported in an uncommon way, looks! ###Code # raw DataFrame worked! df_requests.head(10) ###Output _____no_output_____ ###Markdown It worked, albeit the hard way! **Assessing** the data! Let us use functions like .sample(), .info(), .describe(), .value_counts() Let us check whether the predicted dog breed names make sense! ###Code df_requests['p1'].value_counts() ###Output _____no_output_____ ###Markdown It turns out that there are names that do not make sense, such as web_site, limousine, cup, etc.! ###Code df_requests.sample(5) ###Output _____no_output_____ ###Markdown We can notice two potential problems: the 'tweet_id' column holds strings, but should hold integers; and True' and False' values for p3_dog, while they should be the plain boolean values True and False! Quality Issues In short: in this DF we found three quality issues: 1) in the p3_dog column the True and False values appear as True', True" and False'; 2) in the tweet_id column the values should be integers, not strings; 3) some breeds (column 'p1') are classified as "website, limousine, fountain, revolver, military_uniform, seatbelt, etc". **Quality** Issue 1: True', True'' and False'' **Coding** to repair this! ###Code # visualizing the problem df_requests['p3_dog'][0] # the for loop code to repair these typos!
list_p3_dog = [] for i in range(len(df_requests['p3_dog'])): if(df_requests['p3_dog'][i]=="True'" or df_requests['p3_dog'][i]=='True"'): list_p3_dog.append('True') elif(df_requests['p3_dog'][i]=="False'"): list_p3_dog.append('False') else: print(df_requests['p3_dog'][i]) df_requests['p3_dog_new']=list_p3_dog ###Output _____no_output_____ ###Markdown **Testing** ###Code df_requests.head(2) df_requests.drop(['p3_dog'],axis=1,inplace=True) df_requests.head(3) ###Output _____no_output_____ ###Markdown Let us rename p3_dog_new to have the old name p3_dog ###Code df_requests.rename(columns={'p3_dog_new':'p3_dog'}, inplace=True) df_requests.head(3) ###Output _____no_output_____ ###Markdown It seems OK! **Quality** Problem 2: 'tweet_id' values are strings Coding ###Code # testing! it must be an integer, not a string! df_requests['tweet_id'][0][1:] # converting these string values to int manually list_id=[] for i in range(len(df_requests['tweet_id'])): list_id.append(int(df_requests['tweet_id'][i][1:])) list_id df_requests['tweet_id_new'] = list_id df_requests.head(3) # dropping the old column! df_requests.drop('tweet_id',axis=1,inplace=True) # renaming it df_requests.rename({'tweet_id_new':'tweet_id'},axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Testing ###Code df_requests.head(3) type(df_requests['tweet_id'][0]) ###Output _____no_output_____ ###Markdown We can see that now it works fine! :) **Quality** Problem 3) dogs classified as "website, limousine, fountain, revolver, military_uniform, seatbelt, etc" coding ###Code # making a function to do that!
def deleting_wrongp1(df,word): """ inputs: df - the DataFrame to clean; word - the non-breed label whose rows should be removed outputs: the DataFrame with the rows whose 'p1' value equals word dropped """ pos_list = df.index[df['p1']==word].tolist() for i in range(len(pos_list)): df=df.drop(index = pos_list[i]) return df df_requests = deleting_wrongp1(df_requests,'web_site') df_requests = deleting_wrongp1(df_requests,'limousine') df_requests = deleting_wrongp1(df_requests,'fountain') df_requests = deleting_wrongp1(df_requests,'revolver') df_requests = deleting_wrongp1(df_requests,'military_uniform') df_requests = deleting_wrongp1(df_requests,'seatbelt') df_requests = deleting_wrongp1(df_requests,'cup') df_requests = deleting_wrongp1(df_requests,'coffee_mug') ###Output _____no_output_____ ###Markdown **Testing** ###Code df_requests.p1.value_counts() ###Output _____no_output_____ ###Markdown The values are no longer present in the column p1! :) Now analyzing the third DataFrame! Downloading the data with tweepy! Gathering ###Code # working with the tweepy API! # This is the most interesting part of the project! import tweepy """ The keys below are specific to each Twitter developer account and must not be included in the project submission! """ consumer_key = '???' consumer_secret = '???' access_token = '???' access_secret = '???' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth,wait_on_rate_limit=True,wait_on_rate_limit_notify=True) api ###Output _____no_output_____ ###Markdown Taking a look at how the JSON file looks! ###Code twar_id = list(twar.tweet_id) print(api.get_status(twar_id[0],tweet_mode='extended')._json) # handling the first DataFrame (csv), 'twar' # it takes a while to run! Therefore, it is not recommended to run it every time!
i_want_json_change = False if i_want_json_change: type(twar.tweet_id) twar_id = list(twar.tweet_id) data = dict() for i in twar_id: try: data[i]= api.get_status(i,tweet_mode='extended')._json except: continue with open('tweet_son.txt','w') as f: json.dump(data,f) # Use timeit.default_timer instead of timeit.timeit. The former provides the best # clock available on your platform and version of Python automatically: from timeit import default_timer as timer start = timer() # ... end = timer() print(end - start) # Time in seconds, e.g. 5.38091952400282 df_from_json = [] filename='tweet_son.txt' with open(filename) as json_file: data = json.load(json_file) for key,value in data.items(): df_from_json.append({'id':value['id'],'created_at':value['created_at'],'full_text':value['full_text'],'retweet_count':value['retweet_count'],'favorite_count':value['favorite_count']}) # Converting list to DataFrame df_pd_json = pd.DataFrame(df_from_json) display(df_pd_json) ###Output _____no_output_____ ###Markdown Let us explore the data a little ###Code # making a straightforward copy of the DF df_pd_json_cp = df_pd_json.copy() fulltext_to_split = df_pd_json_cp['full_text'][1000] # Testing Code to Extract Information # dog's name print(fulltext_to_split.split('.')[0].split()[-1]) # dog's gender print(fulltext_to_split.split('.')[1].split()[0]) # dog's rate print(fulltext_to_split.split('.')[-2].split()[0]) ###Output any Even 0/10 ###Markdown ASSESSING the data: using functions like .describe, .value_counts, .info, .sample ###Code df_pd_json_cp.sample(5) ###Output _____no_output_____ ###Markdown We have columns with tons of information! We can notice that we must work on the 'created_at' column and extract important information from the 'full_text' column ###Code df_pd_json_cp['favorite_count'].describe() ###Output _____no_output_____ ###Markdown We can see that 'favorite_count' has plausible min, max and quantile values. Many interesting graphs can be obtained from it!
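As an aside, the split-based probes above are fragile (they break on unexpected punctuation); a regular-expression sketch of the same extraction, run on a single hypothetical tweet text (the name, text and patterns here are illustrative assumptions, not taken from the dataset):

```python
import re

# Hypothetical sample in the style of WeRateDogs tweets (not from the dataset)
text = "This is Charlie. He is a very good boy. 13/10 would pet again"

# Dog name: the first sentence typically reads "This is <Name>."
name_match = re.search(r"This is (\w+)\.", text)
name = name_match.group(1) if name_match else None

# Rating: a "<numerator>/<denominator>" pattern anywhere in the text
rate_match = re.search(r"(\d+)/(\d+)", text)
score = int(rate_match.group(1)) / int(rate_match.group(2)) if rate_match else None

print(name, score)  # Charlie 1.3
```

Because each pattern either matches or returns None, malformed tweets simply yield missing values instead of raising exceptions.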
###Code df_pd_json_cp.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2339 entries, 0 to 2338 Data columns (total 5 columns): created_at 2339 non-null object favorite_count 2339 non-null int64 full_text 2339 non-null object id 2339 non-null int64 retweet_count 2339 non-null int64 dtypes: int64(3), object(2) memory usage: 91.4+ KB ###Markdown In the above we can see the info about each column and their types! Quality and Tidiness issues to be addressed! We can see the following problems in the DF above: Quality 1) 'full_text' must be broken out into gender, name and rate_score (this part alone amounts to three quality issues); Structural (Tidiness) 1) again, the timestamp as presented is not meaningful; it had to be conveniently restructured! Quality Problem 1: Splitting the information in the 'full_text' column and extracting other columns from it! Q1.1) Taking the name of the dog from the column 'full_text' ###Code list_name =[] all_to_iter = len(df_pd_json_cp['full_text']) for i in range(all_to_iter): fulltext_to_split = df_pd_json_cp['full_text'][i] try: name = fulltext_to_split.split('.')[0].split()[-1] except: name = np.NaN list_name.append({name}) # taking a look at an ordinary dog! df_pd_json_cp['full_text'][2335] # creating the column names! df_name =pd.DataFrame(list_name,columns=['Name']) display(df_name) result1 = pd.concat([df_pd_json_cp,df_name],axis=1,sort=False) ###Output _____no_output_____ ###Markdown Testing ###Code result1 ###Output _____no_output_____ ###Markdown We can see that name is ok now! :) Q1.2) Now we can take a look at the gender of the dog!
coding ###Code gender_name =[] all_to_iter = len(df_pd_json_cp['full_text']) for i in range(all_to_iter): fulltext_to_split = df_pd_json_cp['full_text'][i] try: name = fulltext_to_split.split('.')[1].split()[0] if(name =='He' or name =='he' or name =="He's" or name =="he's"or name =='His' or name =='his'): gender_name.append({'male'}) elif(name =='She' or name =='she'or name =="She's" or name =="she's"or name =='Her' or name =='her'): gender_name.append({'female'}) else: gender_name.append({np.NaN}) except: name = np.NaN gender_name.append({name}) df_gender = pd.DataFrame(gender_name,columns=['gender']) result2 = pd.concat([result1,df_gender],axis=1,sort=False) ###Output _____no_output_____ ###Markdown Testing ###Code result2.head(3) ###Output _____no_output_____ ###Markdown It seems that gender is OK now! Q1.3) Now we can extract the rating information from the full_text Coding ###Code rate_score =[] all_to_iter = len(df_pd_json_cp['full_text']) for i in range(all_to_iter): fulltext_to_split = df_pd_json_cp['full_text'][i] try: score1 = fulltext_to_split.split('.')[-2].split()[0].split("/")[0] score2 = fulltext_to_split.split('.')[-2].split()[0].split("/")[1] try: score1=int(score1) score2=int(score2) score=float(score1/score2) except: score=np.NaN rate_score.append({score}) except: score = np.NaN rate_score.append({score}) # creating the DataFrame column df_rate_score = pd.DataFrame(rate_score,columns=['rate_score']) display(df_rate_score) # concatenating with the old result result3 = pd.concat([result2,df_rate_score],axis=1,sort=False) display(result3) result3.drop(['full_text'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Testing! ###Code result3.head(5) ###Output _____no_output_____ ###Markdown We have successfully created the column rate_score!
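The same rating extraction could also be done in a single vectorized pass with `pandas.Series.str.extract`; a minimal sketch on a toy frame (the sample texts are made up, only the column names follow the notebook):

```python
import pandas as pd

# Toy data standing in for the real tweets frame
toy = pd.DataFrame({'full_text': [
    "This is Rex. He is fast. 12/10",
    "Meet Luna. She naps a lot. 13/10",
    "no rating here",
]})

# Capture "<num>/<den>" once per row; rows without a match become NaN
parts = toy['full_text'].str.extract(r'(\d+)/(\d+)').astype(float)
toy['rate_score'] = parts[0] / parts[1]
print(toy['rate_score'].tolist())
```

This avoids the nested try/except blocks entirely: non-matching rows surface as NaN, which is exactly the missing-value convention the loop above emulates by hand.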
Structural Problem: 'created_at' column Now let us process the 'created_at' column to split it into weekday, month, day and year! ###Code list_weekday = [] list_month = [] list_day = [] list_year = [] for i in range(len(result3['created_at'])): splitted = result3['created_at'][i].split() weekday = splitted[0] month = splitted[1] day = splitted[2] year = splitted[-1] # appending! list_weekday.append({weekday}) list_month.append({month}) list_day.append({day}) list_year.append({year}) # converting to DataFrames! df_weekday=pd.DataFrame(list_weekday,columns=['weekday']) df_month=pd.DataFrame(list_month,columns=['month']) df_day=pd.DataFrame(list_day,columns=['day']) df_year=pd.DataFrame(list_year,columns=['year']) # concatenating the results to the old DataFrame result4 = pd.concat([result3,df_weekday],axis=1,sort=False) result5 = pd.concat([result4,df_month],axis=1,sort=False) result6 = pd.concat([result5,df_day],axis=1,sort=False) result7 = pd.concat([result6,df_year],axis=1,sort=False) result7 # let us remove the 'created_at' column! result7.drop(['created_at'],axis=1,inplace=True) ###Output _____no_output_____ ###Markdown **Testing** the code! ###Code result7 ###Output _____no_output_____ ###Markdown Quality Problem: 'Name' column must be changed to 'name' ###Code # for compatibility with the other DFs result7.rename(columns={'Name':'name'}, inplace=True) result7 ###Output _____no_output_____ ###Markdown Looking back at the DataFrames we have been working on so far! ###Code # the first DF twar_cp.head(2) # the second DF df_requests.head(3) # the third DF: result7.head(5) ###Output _____no_output_____ ###Markdown Let us make copies of these DFs ###Code firsttwar=twar_cp.copy() secondreq = df_requests.copy() thirdjson = result7.copy() ###Output _____no_output_____ ###Markdown Creating the master DataFrames The relevant merges to build the master DFs are a merge of the first and second DFs, and a merge of the third and second DFs!
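Before running the real merges, what `how='inner'` does can be illustrated on toy frames (the ids and values below are hypothetical, only the join key `tweet_id` matches the notebook):

```python
import pandas as pd

left = pd.DataFrame({'tweet_id': [1, 2, 3], 'name': ['Rex', 'Luna', 'Max']})
right = pd.DataFrame({'tweet_id': [2, 3, 4], 'p1': ['pug', 'beagle', 'corgi']})

# how='inner' keeps only the tweet_ids present in both frames
merged = left.merge(right, on='tweet_id', how='inner')
print(merged['tweet_id'].tolist())  # [2, 3]
```

An inner join is the right choice here because a master row is only useful when the tweet appears in both source frames.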
###Code merge_1_2 = firsttwar.merge(secondreq,on='tweet_id',how='inner') merge_1_2.head(2) # let us rename the thirdjson column 'id' to 'tweet_id' thirdjson.rename({'id':'tweet_id'},axis=1,inplace=True) merge_2_3 = secondreq.merge(thirdjson,on='tweet_id',how='inner') merge_2_3.head(2) # viewing the columns of the first merged DF! merge_1_2.columns ###Output _____no_output_____ ###Markdown Exploring the Master DataFrames! :) Making a bar chart of the top 6 breeds with the most pictures! ###Code race1=merge_1_2['p1'].value_counts().index.tolist()[0] race2=merge_1_2['p1'].value_counts().index.tolist()[1] race3=merge_1_2['p1'].value_counts().index.tolist()[2] race4=merge_1_2['p1'].value_counts().index.tolist()[3] race5=merge_1_2['p1'].value_counts().index.tolist()[4] race6=merge_1_2['p1'].value_counts().index.tolist()[5] top1count=merge_1_2['p1'].value_counts()[0] top2count=merge_1_2['p1'].value_counts()[1] top3count=merge_1_2['p1'].value_counts()[2] top4count=merge_1_2['p1'].value_counts()[3] top5count=merge_1_2['p1'].value_counts()[4] top6count=merge_1_2['p1'].value_counts()[5] month_hist_list = [top1count,top2count,top3count,top4count,top5count,top6count] x_values = [1,2,3,4,5,6] plt.figure(figsize=(22,11)) plt.bar(x_values,height=month_hist_list) plt.xticks(x_values, [race1,race2,race3,race4,race5,race6]) # no need to add .5 anymore plt.title('The top 6 dog breeds with the most pictures',size=18) plt.xlabel('Breed',size=20) plt.ylabel('Number of Pictures',size=20) plt.xticks(rotation=30,size=16) plt.yticks(size=16) plt.savefig('top6races.eps',dpi=500) plt.show() merge_2_3.columns merge_2_3.groupby(['gender'])['favorite_count'].sum() ###Output _____no_output_____ ###Markdown Making a chart, as above, of favorite counts by gender ###Code first_sum = merge_2_3.groupby(['gender'])['favorite_count'].sum()[0] second_sum =
merge_2_3.groupby(['gender'])['favorite_count'].sum()[1] first_sum_index = merge_2_3.groupby(['gender'])['favorite_count'].sum().index.tolist()[0] second_sum_index = merge_2_3.groupby(['gender'])['favorite_count'].sum().index.tolist()[1] month_hist_list = [first_sum,second_sum] x_values = [1,2] plt.figure(figsize=(18,9)) plt.bar(x_values,height=month_hist_list) plt.xticks(x_values, [first_sum_index,second_sum_index]) # no need to add .5 anymore plt.title('Number of Favorites for male and female dogs',size=18) plt.xlabel('Gender',size=22) plt.ylabel('Number of Favorites',size=22) plt.xticks(rotation=30,size=16) plt.yticks(size=16) plt.savefig('favorites_gender.eps',dpi=500) plt.show() ###Output _____no_output_____ ###Markdown Making a chart, as above, of retweet counts by gender ###Code first_sum = merge_2_3.groupby(['gender'])['retweet_count'].sum()[0] second_sum = merge_2_3.groupby(['gender'])['retweet_count'].sum()[1] first_sum_index = merge_2_3.groupby(['gender'])['retweet_count'].sum().index.tolist()[0] second_sum_index = merge_2_3.groupby(['gender'])['retweet_count'].sum().index.tolist()[1] month_hist_list = [first_sum,second_sum] x_values = [1,2] plt.figure(figsize=(18,9)) plt.bar(x_values,height=month_hist_list) plt.xticks(x_values, [first_sum_index,second_sum_index]) # no need to add .5 anymore plt.title('Number of Retweets for male and female dogs',size=18) plt.xlabel('Gender',size=22) plt.ylabel('Number of Retweets',size=22) plt.xticks(rotation=30,size=16) plt.yticks(size=16) plt.savefig('retweets_gender.eps',dpi=500) plt.show() merge_2_3.groupby(['gender'])['retweet_count'].sum() # last visualization # checking how many dogs of the breed 'golden_retriever' were posted in # the years 2015 and 2017, broken down by month!
ano = '2015' raca = 'golden_retriever' print(merge_2_3[(merge_2_3['year']==ano) & (merge_2_3['p1']==raca)]['month'].value_counts()) ano = '2017' print(merge_2_3[(merge_2_3['year']==ano) & (merge_2_3['p1']==raca)]['month'].value_counts()) # Hence, the results for the year 2017 are the richest! # We will visualize the year 2017 as a bar chart! raca = 'golden_retriever' ano = '2017' month_iter = merge_2_3[(merge_2_3['year']==ano) & (merge_2_3['p1']==raca)]['month'].value_counts() month_jan = month_iter[1] month_feb = month_iter[0] month_mar = month_iter[4] month_apr = month_iter[-1] month_may = month_iter[3] month_jun = month_iter[-2] month_jul = month_iter[2] month_hist_list = [month_jan,month_feb,month_mar,month_apr,month_may,month_jun,month_jul] x_values = [1,2,3,4,5,6,7] plt.bar(x_values,height=month_hist_list) plt.xticks(x_values, ['Jan','Feb','Mar','Apr','May','Jun','Jul']) # no need to add .5 anymore plt.title('Dogs of breed "golden retriever" posted \n on Twitter at @weratedogs during 2017',size=12) plt.xlabel('Month',size=20) plt.ylabel('Number of Posts',size=20) plt.xticks(rotation=30,size=16) plt.yticks(size=16) plt.savefig('golden.retriever_2017_months.eps',dpi=200) plt.show() ###Output _____no_output_____ ###Markdown Exporting the master DataFrames! ###Code # exporting the first master dataframe merge_1_2.to_csv('twitter_archive_master1.csv',encoding='utf-8',index=False) # exporting the second master dataframe merge_2_3.to_csv('twitter_archive_master2.csv',encoding='utf-8',index=False) ###Output _____no_output_____ ###Markdown WeRateDogs Twitter Feed This project looks at various data sources for Tweets from the [WeRateDogs](https://twitter.com/dog_rates) Twitter account, specifically:1.
the `twitter-archive-enhanced.csv` which contains the tweet text, and is the core data set1. the Twitter API is used to access the original tweets to retrieve missing fields such as the retweet and favorite counts1. an image prediction file containing the top 3 predictions for each of the (up to 4) dog pictures in the tweet. Having gathered the data, we assess, clean and analyse it. ------ Gather We use a number of data assets including remote files on web servers, and JSON payloads returned by the Twitter API. ###Code WE_RATE_DOGS_TWEETS_PATH = 'data/twitter-archive-enhanced.csv' DOG_BREED_PREDICTIONS_SOURCE_URL = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' ###Output _____no_output_____ ###Markdown Gather the enhanced Tweets data Pandas' `read_csv()` function is quite versatile when uploading data, and can be configured to handle different date formats, numeric data types, not available (NA) markers, etc. Getting this right upfront can save time, but requires the raw data in files to be eyeballed first. For this we can use command line tools like head & tail, or alternatively Excel, which allows column headings to be frozen, data to be sorted and searched, etc. Having looked at the raw data, we make the following observations:1. tweet Ids are large integers, so we need to select an appropriate integer datatype so no accuracy is lost1. some tweet Ids use floats, e.g.: `in_reply_to_status_id`, `in_reply_to_user_id`, with NaNs used as a Not Available marker; as mentioned above these need to be converted to integers1.
time stamps are close to ISO 8601 format, and are GMT Actions taken to address above observations:* convert floating point tweet Ids to a 64-bit integer, retaining the Not Available representation* specifically tell Pandas which columns are dates ###Code import yaml import tweepy import json import numpy as np import pandas as pd ###Output _____no_output_____ ###Markdown Load the enhanced Twitter archive, using explicit data types for fields, instead of letting Pandas infer them. The [Twitter API](https://developer.twitter.com/en/docs/twitter-api/v1/data-dictionary/overview/tweet-object) will define the data types for the Twitter sourced fields. To get around the fact that nullable numeric fields are interpreted by `read_csv()` as floats (thus allowing NaNs to represent null), we will map nullable tweet Ids to the Pandas nullable integer data type (Int64). ###Code feed_data_types = { 'tweet_id': np.int64, 'in_reply_to_status_id': 'Int64', 'in_reply_to_user_id': 'Int64', 'retweeted_status_id': 'Int64', 'retweeted_status_user_id': 'Int64', 'text': 'string', 'expanded_urls': 'string', 'rating_numerator': np.int32, 'rating_denominator': np.int32, 'name': 'string', 'doggo': 'string', 'floofer': 'string', 'pupper': 'string', 'puppo': 'string' } feed_date_cols = [ 'timestamp', 'retweeted_status_timestamp' ] enhanced_tweets_df = pd.read_csv(WE_RATE_DOGS_TWEETS_PATH, index_col=['tweet_id'], dtype=feed_data_types, parse_dates=feed_date_cols) enhanced_tweets_df.shape ###Output _____no_output_____ ###Markdown The first discrepancy we note is that, according to the project motivation document, the main "archive contains basic tweet data for all 5000+ of their tweets" however that is clearly not the case as, having loaded it, the number of tweets is less than half that.
As this is the master data set we have been provided with, this is the data we have to go with, since it has been previously enhanced. To sanity check this row count, and make sure we have actually read in all the provided data, we will run a line count on the input file, which should roughly match the number of rows in the data frame. Any discrepancy on counts is due to those embedded new line (NL) characters in the tweet text, since the number of NL characters is what `wc` bases its line counts on. ###Code !wc -l {WE_RATE_DOGS_TWEETS_PATH} ###Output 2518 data/twitter-archive-enhanced.csv ###Markdown Now we can double check the column data types, against the data type mapping provided to `read_csv()`. ###Code enhanced_tweets_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2356 entries, 892420643555336193 to 666020888022790149 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 in_reply_to_status_id 78 non-null Int64 1 in_reply_to_user_id 78 non-null Int64 2 timestamp 2356 non-null datetime64[ns, UTC] 3 source 2356 non-null object 4 text 2356 non-null string 5 retweeted_status_id 181 non-null Int64 6 retweeted_status_user_id 181 non-null Int64 7 retweeted_status_timestamp 181 non-null datetime64[ns, UTC] 8 expanded_urls 2297 non-null string 9 rating_numerator 2356 non-null int32 10 rating_denominator 2356 non-null int32 11 name 2356 non-null string 12 doggo 2356 non-null string 13 floofer 2356 non-null string 14 pupper 2356 non-null string 15 puppo 2356 non-null string dtypes: Int64(4), datetime64[ns, UTC](2), int32(2), object(1), string(7) memory usage: 303.7+ KB ###Markdown Gather the Twitter API enrichment data Next we want to use the Twitter API to retrieve the original tweets, so that we can enrich our enhanced tweets data with the missing attributes previously identified (`retweet_counts`, `favorite_counts`).
Having registered with Twitter as a developer, and obtained credentials and keys, we stored these in a private project directory and configuration file (which are excluded from our git repo, and thus won't be visible online in [github](https://github.com/benvens-udacity/wrangle-and-analyze-data/blob/main/wrangle_act.ipynb)). We now use those credentials to authenticate with Twitter for API access. ###Code def read_creds(conf_path): with open(conf_path, 'r') as cf: config = yaml.load(cf, Loader=yaml.FullLoader) return config creds = read_creds('./config/private/creds.yaml') consumer_key = creds['consumer_api']['key'] consumer_secret = creds['consumer_api']['secret'] auth = tweepy.OAuthHandler(consumer_key, consumer_secret) access_token = creds['access_token']['token'] access_secret = creds['access_token']['secret'] auth.set_access_token(access_token, access_secret) ###Output _____no_output_____ ###Markdown Next we will load the enrichment data in batches, for better performance, as API invocations are subject to significant network latency. Twitter also applies rate limiting to their APIs, so it is necessary to throttle the rate at which we make requests, and to retry any failed requests. Luckily, this can be handled automatically by the Tweepy library, by setting the `wait_on_rate_limit_notify` flag when configuring the API connection.
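The batching pattern used for these lookups can be sketched generically; a toy helper (not part of the notebook) that chunks a list of ids into fixed-size batches, which is the shape of work the batched status lookups below perform:

```python
def chunks(ids, size):
    """Yield successive fixed-size batches from a list of ids."""
    for start in range(0, len(ids), size):
        yield ids[start:start + size]

# With a batch size of 3, seven ids become three batches
batches = list(chunks(list(range(7)), 3))
print(batches)  # [[0, 1, 2], [3, 4, 5], [6]]
```

Grouping requests this way amortises the per-call network latency and keeps each request within the API's per-call id limit.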
###Code api = tweepy.API(auth, wait_on_rate_limit_notify=True) def process_batch(batch): idxs = [] retweet_counts = [] favorite_counts = [] for status in batch: tweet = status._json idxs.append(tweet['id']) retweet_counts.append(tweet['retweet_count']) favorite_counts.append(tweet['favorite_count']) return np.array(idxs, dtype=np.int64), np.array([retweet_counts, favorite_counts], dtype=np.int64).T indices = np.empty((0), dtype=np.int64) rows = np.empty((0, 2), dtype=np.int64) batch_size = 100 num_tweets = len(enhanced_tweets_df.index) %%time for batch_start in range(0, num_tweets, batch_size): batch_end = min(batch_start + batch_size, num_tweets) batch_tweet_ids = enhanced_tweets_df.iloc[batch_start:batch_end].index.to_numpy().tolist() statuses = api.statuses_lookup(batch_tweet_ids, include_entities=False, map_=False) b_indices, b_rows = process_batch(statuses) indices = np.concatenate((indices, b_indices), axis=0) rows = np.concatenate((rows, b_rows), axis=0) tweet_counts_df = pd.DataFrame(index=indices, data=rows, columns=['retweet_counts', 'favorite_counts'], dtype='Int32').sort_index() tweet_counts_df.index.name = 'tweet_id' tweet_counts_df.shape ###Output _____no_output_____ ###Markdown Again, we briefly double check on the expected column data type mapping. ###Code tweet_counts_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2331 entries, 666020888022790149 to 892420643555336193 Data columns (total 2 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 retweet_counts 2331 non-null Int32 1 favorite_counts 2331 non-null Int32 dtypes: Int32(2) memory usage: 41.0 KB ###Markdown Gather the breed prediction data Finally we need to gather the breed prediction data. We will read this data from the CloudFront URL, as opposed to the local filesystem, to ensure we get the most up-to-date version. 
###Code img_preds_data_types = { 'tweet_id': 'Int64', 'jpg_url': 'string', 'img_num': np.int32, 'p1': 'string', 'p1_conf': np.float32, 'p1_dog': bool, 'p2': 'string', 'p2_conf': np.float32, 'p2_dog': bool, 'p3': 'string', 'p3_conf': np.float32, 'p3_dog': bool } # Load the TSV (not CSV) records, and tell read_csv() to use a tab as the field separator img_preds_df = pd.read_csv(DOG_BREED_PREDICTIONS_SOURCE_URL, index_col=['tweet_id'], sep='\t', dtype=img_preds_data_types) img_preds_df.shape ###Output _____no_output_____ ###Markdown And finally we check for correct data type mapping. ###Code img_preds_df.info() ###Output <class 'pandas.core.frame.DataFrame'> Index: 2075 entries, 666020888022790149 to 892420643555336193 Data columns (total 11 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 jpg_url 2075 non-null string 1 img_num 2075 non-null int32 2 p1 2075 non-null string 3 p1_conf 2075 non-null float32 4 p1_dog 2075 non-null bool 5 p2 2075 non-null string 6 p2_conf 2075 non-null float32 7 p2_dog 2075 non-null bool 8 p3 2075 non-null string 9 p3_conf 2075 non-null float32 10 p3_dog 2075 non-null bool dtypes: bool(3), float32(3), int32(1), string(4) memory usage: 119.6+ KB ###Markdown ------ Assess Having gathered the data we will now assess it, ideally both visually and programmatically. Some of this visual assessment has already been done against the raw data in files, to ensure we used appropriate data types when uploading the data. Therefore some data quality issues (large integers stored as floating point, with potential loss of accuracy, which invalidates their meaning as an identifier) have been addressed at upload time. Visual assessment We will inspect the data that has been uploaded into the corresponding dataframes.
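As a self-contained illustration of the `sep='\t'` plus dtype-mapping approach used for the predictions file (toy records below, not the real data):

```python
import io
import pandas as pd

# Toy tab-separated records mimicking the predictions file layout
tsv = "tweet_id\tp1\tp1_conf\tp1_dog\n111\tpug\t0.95\tTrue\n222\tbeagle\t0.80\tFalse\n"

# sep='\t' selects tab as the delimiter; the dtype mapping pins each column's type,
# including the nullable Int64 extension type for the id column
df = pd.read_csv(io.StringIO(tsv), sep='\t',
                 dtype={'tweet_id': 'Int64', 'p1': 'string', 'p1_dog': bool})
print(df['tweet_id'].tolist(), df['p1_dog'].tolist())
```

Pinning dtypes up front means a malformed file fails loudly at load time rather than silently producing, say, an object column of mixed strings and numbers.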
###Code # Raise the number of viewable rows and columns # Retain some kind of row counts, as very large data sets may get loaded into the browser, causing memory issues pd.set_option('display.max_rows', 10000) pd.set_option('display.max_columns', None) pd.set_option('display.max_colwidth', None) ###Output _____no_output_____ ###Markdown Enhanced tweets We assess some tweets that include a dog stage name. ###Code enhanced_tweets_df[(enhanced_tweets_df[['doggo', 'floofer', 'pupper', 'puppo']] != 'None').any(axis=1)].head() ###Output _____no_output_____ ###Markdown We observe the following:1. HTML in the `source` column, with a lot of repetition (to be verified programmatically)1. the various retweet columns frequently hold null values1. on occasions multiple values appearing in the `expanded_urls` column, including repeating values1. quite often no dog stage can be identified, and occasionally no dog name1. dog stages place the stage name in a column named after the stage; this is redundant information Retweet and favorite counts ###Code tweet_counts_df.head() ###Output _____no_output_____ ###Markdown There are no immediate issues observed by assessing a small sample of the tweet counts data visually. Breed predictions ###Code img_preds_df.head() ###Output _____no_output_____ ###Markdown We observe the following: 1. each row refers to an image1. each image is numbered, as it is selected as the best of up to 4 dog images that may be associated with each tweet1. we then have the top 3 breed predictions for that image. Each prediction consists of the following information:1. a predicted label or class (e.g.: the dog breed) that describes the image1. a confidence score associated with the above prediction, in the range 0.0 -> 1.0 (0% to 100% confident)1. a boolean indicator confirming if the predicted label is a dog breed, or some other object. Looking at the confidence scores for predictions p1 - p3, they appear to be listed in most confident to least confident order.
Therefore we will use the column name numeric suffix to generate a ranking column, which we can later sort by (to preserve this decreasing confidence order). This last attribute confirms that the image classifier used to generate these predictions was trained on a broad set of images, only a subset of which are dog images labelled with their corresponding dog breed. But on occasions the classifier may have interpreted a dog image as an object other than a dog. Programmatic assessment Programmatic assessment gives us the opportunity to validate observations, and search for anomalies, across the entire dataset. This is very difficult to do visually unless the dataset is small, both in terms of the number of rows and columns. Enhanced tweets Assess the level of repetition in the `source` column, which holds an HTML anchor node. ###Code enhanced_tweets_df['source'].value_counts() ###Output _____no_output_____ ###Markdown Looking at the above results there appear to be 4 sources corresponding to the related applications: iPhone Twitter app, Vine app, Twitter web client and TweetDeck. This data contains a lot of redundant and messy information. Check if there are tweets where more than one dog stage is mentioned. ###Code ((enhanced_tweets_df[['doggo', 'floofer', 'pupper', 'puppo']] != 'None').sum(axis=1) > 1).sum() ###Output _____no_output_____ ###Markdown Retweet and favorite counts We will quickly validate that all counts are non-negative. ###Code (tweet_counts_df >= 0).all() ###Output _____no_output_____ ###Markdown We will compare the number of entries in the enriched tweets dataframe to the number of entries in the tweet counts dataframe, to see if we successfully retrieved counts for all tweets from the API. The small difference in counts suggests a small number of tweets can no longer be retrieved.
###Code
len(enhanced_tweets_df.index), len(tweet_counts_df.index)
###Output
_____no_output_____
###Markdown
Breed predictions
We will validate the assumption made earlier that the confidence scores are ordered by the numeric suffix of the column name, which can be used to populate a ranking.
###Code
((img_preds_df['p1_conf'] > img_preds_df['p2_conf']) & (img_preds_df['p2_conf'] > img_preds_df['p3_conf'])).all()
###Output
_____no_output_____
###Markdown
Next we validate that all confidence scores are in the range 0.0 to 1.0.
###Code
(img_preds_df['p1_conf'].between(0.0, 1.0) & img_preds_df['p2_conf'].between(0.0, 1.0) & img_preds_df['p3_conf'].between(0.0, 1.0)).all()
###Output
_____no_output_____
###Markdown
Quality issues found
As a result of the visual and programmatic assessments, the following data quality issues have been found, which will require the data content to be cleaned.

Enhanced tweets
1. the immediate data quality concern is that the project motivation document states that the "archive contains basic tweet data for all 5000+ of their tweets" but we are loading less than half that number of tweets. **However, given the enhanced tweets dataset is our master dataset, there is nothing that we can do to remedy the much smaller number of rows, beyond highlighting this observation**
1. as previously mentioned, the issue with some tweet Id columns being treated as floating point numbers, and the fact that rounding could invalidate these, was resolved at data loading time (without impacting the fact that they are nullable columns)
1. the format of the `timestamp` is very close to an ISO 8601 timestamp, however it is missing the 'T' character as the separator between the date and time portions. There are definite advantages in following a recognised standard, as this will be understood by tools such as database import utilities; however, Pandas has correctly parsed the dates
1.
in the `source` column, extract the source app name from the HTML anchor string, and then map this column to a Pandas categorical
1. it is unclear why, in the `expanded_urls` column, the same URL gets repeated, since looking at the tweet text there is only one reference to the corresponding link. Therefore we will remove duplicates
1. convert the dog stage columns into a boolean datatype, and interpret the constant value 'None' as a missing stage
1. since the dog stage column names are the stages, storing that same name as a value is redundant information; following on from the previous observation, where the dog stage appears we will just store a boolean true value

Retweet and favorite counts
1. while the intention is to obtain retweet and favorite counts for all the tweets in the enhanced tweets dataset, we cannot guarantee that the Twitter API will always return the original tweet, e.g. it may subsequently have been deleted
1. where the counts were successfully retrieved for the original tweet (the majority of cases, as proven in the programmatic assessment), there is a one-to-one relationship between the rows in the counts dataframe and the rows in the enhanced tweets dataframe. Therefore the counts columns can be merged back into the enhanced tweets dataframe, as arguably they are part of that tweet observation. In the few cases where the counts are missing, we will store nulls

Breed predictions
No obvious data quality issues, beyond the prediction column names being used as variables (the numeric suffix added).

Structural issues
After looking at the data frame structure and column naming, inspecting values, and then applying the [Tidy Data](https://vita.had.co.nz/papers/tidy-data.pdf) principles, the following structural issues will need to be addressed.

Enhanced tweets
1. the `source` column must store a category that represents the application (and possibly device) used to author the tweet
1.
the `expanded_urls` column can store multiple values per row, depending on the web links embedded in the tweet text; therefore these observations need to be stored in a separate table (however, we will first remove any duplicate values)
1. dog stage is a multivalued categorical variable, as a tweet can reference more than one stage. Therefore we retain the existing columns but encode them in the style of one-hot encoding

Retweet and favorite counts
No obvious structural issues here.

Breed predictions
1. a variable (prediction number) is embedded in the column names of the prediction columns (predicted breed, prediction confidence, and is-a-dog flag)
2. the prediction number ranks the predictions in the order most confident (1st prediction) to least confident (3rd prediction)
3. the actual breed predictions should be held in a separate dataframe, and linked back to the tweet and tweet image they are associated with

------
Clean
We will now clean the issues uncovered during assessment, using a _define/code/test_ framework which will be applied to each of the issues.
###Code
clean_enhanced_tweets_df = enhanced_tweets_df.copy()
clean_enhanced_tweets_df.shape
###Output
_____no_output_____
###Markdown
---
Extract tweet application from `source` column

**Define**
* parse the source column, which holds an HTML anchor node
* extract the anchor node content, describing the application used
* convert the column to a Pandas categorical, as a more efficient representation that can be used in models

**Code**
###Code
# Extract content from anchor node
clean_enhanced_tweets_df['source'] = \
clean_enhanced_tweets_df['source'].str.extract(r'[^<]*a href="[^"]+" rel="[^"]+">([^<]+)<\/a>')
# Convert column to categorical
clean_enhanced_tweets_df['source'] = clean_enhanced_tweets_df['source'].astype('category')
###Output
_____no_output_____
###Markdown
**Test**
We will check that the tweet source column is now a categorical, and that the number of categories is as expected.
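Before asserting on the real data, the extraction pattern itself can be sanity-checked on a hypothetical anchor string of the same shape as the Twitter `source` values (the URL and label below are illustrative):

```python
import pandas as pd

# A made-up source value in the same HTML anchor shape as the Twitter data
sample = pd.Series(
    ['<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>']
)

# Same capture group as the cleaning step: the anchor's inner text
app = sample.str.extract(r'[^<]*a href="[^"]+" rel="[^"]+">([^<]+)</a>')
print(app.iloc[0, 0])  # Twitter for iPhone
```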
###Code
# Assert column data type is categorical
assert isinstance(clean_enhanced_tweets_df['source'].dtype, pd.CategoricalDtype), 'Expect categorical'
# Assert the number of categories is as expected
assert len(clean_enhanced_tweets_df['source'].cat.categories) == 4, 'Expect 4 application categories'
###Output
_____no_output_____
###Markdown
---
Move `expanded_urls` to a detail dataframe

**Define**
* split each multi-valued string of comma-separated URLs into an array of URLs
* remove any duplicate URLs from the array
* convert each array into a list of tuples, bound to the containing `tweet_id`
* store these tuples as rows in a new dataframe

**Code**
###Code
# Pull out rows containing one or more expanded URLs, as some rows have none
expanded_urls_col = clean_enhanced_tweets_df.loc[clean_enhanced_tweets_df['expanded_urls'].notna(), 'expanded_urls']
# Nested list comprehension to split multiple URL strings on the comma separator, then create (tweet Id, URL) tuples
expanded_url_tuples = [(ix, url) for ix, urls in expanded_urls_col.items() for url in urls.split(',')]
expanded_url_df = pd.DataFrame(expanded_url_tuples, columns=['tweet_id', 'expanded_url'])
# Now drop duplicates and make 'tweet_id' the index for consistency with other dataframes
expanded_url_df = expanded_url_df.drop_duplicates().set_index('tweet_id')
# Finally drop the original expanded_urls column
clean_enhanced_tweets_df = clean_enhanced_tweets_df.drop(columns='expanded_urls')
###Output
_____no_output_____
###Markdown
**Test**
We will count the total and unique tweet Ids in the new dataframe holding expanded URLs. The latter will be lower, accounting for multiple rows (hence multiple web links in the tweet text) associated with the same tweet.
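As an aside, the same split-and-deduplicate reshaping can also be expressed with `Series.str.split` plus `explode` (available from pandas 0.25 onwards). A sketch on a hypothetical two-tweet frame, whose IDs and URLs are made up:

```python
import pandas as pd

# Hypothetical miniature version of the enhanced tweets frame
df = pd.DataFrame(
    {"expanded_urls": ["http://a.com,http://a.com,http://b.com", "http://c.com"]},
    index=pd.Index([101, 102], name="tweet_id"),
)

urls = (
    df["expanded_urls"]
    .str.split(",")    # comma-separated string -> list of URLs
    .explode()         # one row per URL, with the tweet_id index repeated
    .reset_index()     # keep tweet_id so duplicates are judged per (tweet_id, URL) pair
    .drop_duplicates()
    .set_index("tweet_id")
    .rename(columns={"expanded_urls": "expanded_url"})
)

# Total rows vs unique tweet IDs
print(len(urls.index), len(urls.index.unique()))  # 3 2
```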
###Code # Note that the index can contain duplicate entries (whenever a tweet has more than one URL) # We compare duplicate and non-duplicate counts below len(expanded_url_df.index), len(expanded_url_df.index.unique()) ###Output _____no_output_____ ###Markdown --- Convert dog stage columns to boolean **Define*** where the value 'None' is stored, set False, otherwise set True **Code** ###Code # Convert dog stage columns into a boolean data type stage_cols = ['doggo', 'floofer', 'pupper', 'puppo'] clean_enhanced_tweets_df[stage_cols] = clean_enhanced_tweets_df[stage_cols].apply(lambda c: c.to_numpy() != 'None') ###Output _____no_output_____ ###Markdown **Test** We will check that the dog stage columns are now boolean type. ###Code clean_enhanced_tweets_df.info() for col in stage_cols: assert clean_enhanced_tweets_df[col].dtype == 'bool', 'Expect boolean column' ###Output _____no_output_____ ###Markdown --- Merge retweet and favorite counts into enhanced tweets dataframe **Define*** merge retweet and favorite count columns into enhanced tweets dataframe, using a left join with nulls for missing count values **Code** ###Code clean_enhanced_tweets_df = clean_enhanced_tweets_df.merge(tweet_counts_df, how='left', on='tweet_id') ###Output _____no_output_____ ###Markdown **Test** Validate number of rows after merge, including count of rows with null retweet or favorite ###Code # Count total rows (should be unchanged), and null retweet and favorite counts (tweets no longer available) print(len(clean_enhanced_tweets_df.index)) clean_enhanced_tweets_df[['retweet_counts', 'favorite_counts']].isna().sum() ###Output 2356 ###Markdown --- Melt image prediction column headers into detail dataframe **Define*** store `jpg_url` and `img_num` columns in a clean dataframe* melt prediction 1 to 3 columns into temporary dataframes, with the prediction rank as a constant value, and the related `tweet_id`* stack the above temporary dataframes into a predictions dataframe, with repeated 
`tweet_id` as the index

**Code**
###Code
clean_img_preds_df = img_preds_df.copy()
clean_img_preds_df.shape

def melt_pred_cols(df, numeric):
    preds_df = pd.DataFrame(data={'pred_rank': numeric,
                                  'pred_class': df[f'p{numeric}'],
                                  'pred_confidence': df[f'p{numeric}_conf'],
                                  'pred_is_dog': df[f'p{numeric}_dog']})
    return preds_df

preds1_df = melt_pred_cols(clean_img_preds_df, 1)
preds2_df = melt_pred_cols(clean_img_preds_df, 2)
preds3_df = melt_pred_cols(clean_img_preds_df, 3)
clean_predictions_df = pd.concat([preds1_df, preds2_df, preds3_df]).sort_values(by=['tweet_id', 'pred_rank'])
# Drop melted prediction columns
clean_img_preds_df = clean_img_preds_df.drop(columns=['p1', 'p1_conf', 'p1_dog',
                                                      'p2', 'p2_conf', 'p2_dog',
                                                      'p3', 'p3_conf', 'p3_dog'])
###Output
_____no_output_____
###Markdown
**Test**
Validate that the dataframe column names and structure are as expected.
###Code
clean_img_preds_df.info()
clean_predictions_df.info()
# Validate master/detail row counts
assert len(clean_img_preds_df.index) == (len(clean_predictions_df.index) / 3), 'Expect 3x number of detail rows'
###Output
_____no_output_____
###Markdown
------
Analyse
In this section we look at the data and analyse it to obtain some insights. Specifically, we are interested in:
1. Finding the number of tweets with a score above 10/10, versus tweets with a score at or under 10/10
1. Identifying the tweets where more than one dog stage appears
1.
Finding the number of top breed predictions from the image classifier, with a prediction confidence below 0.5

**Count number of scores above and below 10/10**
###Code
((clean_enhanced_tweets_df['rating_numerator'] / clean_enhanced_tweets_df['rating_denominator']) > 1.0).sum(), \
((clean_enhanced_tweets_df['rating_numerator'] / clean_enhanced_tweets_df['rating_denominator']) <= 1.0).sum()
###Output
_____no_output_____
###Markdown
**Show tweets with more than one dog stage in the tweet text**
###Code
stage_cols = ['doggo', 'floofer', 'pupper', 'puppo']
clean_enhanced_tweets_df.loc[clean_enhanced_tweets_df[stage_cols].sum(axis=1) > 1][['text'] + stage_cols]
###Output
_____no_output_____
###Markdown
**Count tweets where the top scoring breed prediction is below 0.5**
###Code
dog_preds = clean_predictions_df.loc[(clean_predictions_df['pred_rank'] == 1)
                                     & clean_predictions_df['pred_is_dog']]
len(dog_preds[dog_preds['pred_confidence'] < 0.5].index)
###Output
_____no_output_____
###Markdown
Now we are going to generate some visualisations:
1. First, based on the top image prediction, look at the frequency distribution for the top 10 breeds only, based on number of tweets
2.
Now look at the frequency distribution for the top 10 breeds only, based on aggregate number of favorites

**Breed prediction distribution by number of tweets**
###Code
dog_preds['pred_class'].value_counts(sort=True)[0:10].plot.pie()
###Output
_____no_output_____
###Markdown
**Breed prediction distribution by number of favorites**
###Code
dog_preds.join(clean_enhanced_tweets_df['favorite_counts']).groupby(['pred_class']) \
.sum().sort_values(by='favorite_counts', ascending=False)[0:10]['favorite_counts'].plot.pie()
###Output
_____no_output_____
###Markdown
Generate internal report
Having cleaned the data, and generated data insights, we can now generate the internal documentation from this notebook's markdown cells. (You probably want to clear all output prior to the data insights output generated in the last section, and then SAVE the notebook.)
###Code
# !jupyter nbconvert --no-input --to pdf wrangle_act.ipynb
# !mv wrangle_act.pdf wrangle_report.pdf
###Output
[NbConvertApp] Converting notebook wrangle_act.ipynb to pdf
###Markdown
Gather
###Code
# open the image-predictions file
data1=pd.read_csv('image-predictions.tsv',sep='\t')
data2=pd.read_csv('twitter-archive-enhanced.csv',sep=',')
data3=pd.read_json('tweet_json.txt',lines = True)
###Output
_____no_output_____
###Markdown
Assess visual
###Code
print(data1.head())
print('')
print(data1.tail())
print('')
print(data1.sample())
print('')
print(data2.head())
print('')
print(data2.tail())
print('')
print(data2.sample())
print('')
print(data3.head())
print('')
print(data3.tail())
print('')
print(data3.sample())
###Output
tweet_id jpg_url \ 0 666020888022790149 https://pbs.twimg.com/media/CT4udn0WwAA0aMy.jpg 1 666029285002620928 https://pbs.twimg.com/media/CT42GRgUYAA5iDo.jpg 2 666033412701032449 https://pbs.twimg.com/media/CT4521TWwAEvMyu.jpg 3 666044226329800704 https://pbs.twimg.com/media/CT5Dr8HUEAA-lEu.jpg 4 666049248165822465 https://pbs.twimg.com/media/CT5IQmsXIAAKY4A.jpg img_num p1 p1_conf p1_dog p2 \ 0 1
Welsh_springer_spaniel 0.465074 True collie 1 1 redbone 0.506826 True miniature_pinscher 2 1 German_shepherd 0.596461 True malinois 3 1 Rhodesian_ridgeback 0.408143 True redbone 4 1 miniature_pinscher 0.560311 True Rottweiler p2_conf p2_dog p3 p3_conf p3_dog 0 0.156665 True Shetland_sheepdog 0.061428 True 1 0.074192 True Rhodesian_ridgeback 0.072010 True 2 0.138584 True bloodhound 0.116197 True 3 0.360687 True miniature_pinscher 0.222752 True 4 0.243682 True Doberman 0.154629 True tweet_id jpg_url \ 2070 891327558926688256 https://pbs.twimg.com/media/DF6hr6BUMAAzZgT.jpg 2071 891689557279858688 https://pbs.twimg.com/media/DF_q7IAWsAEuuN8.jpg 2072 891815181378084864 https://pbs.twimg.com/media/DGBdLU1WsAANxJ9.jpg 2073 892177421306343426 https://pbs.twimg.com/media/DGGmoV4XsAAUL6n.jpg 2074 892420643555336193 https://pbs.twimg.com/media/DGKD1-bXoAAIAUK.jpg img_num p1 p1_conf p1_dog p2 p2_conf \ 2070 2 basset 0.555712 True English_springer 0.225770 2071 1 paper_towel 0.170278 False Labrador_retriever 0.168086 2072 1 Chihuahua 0.716012 True malamute 0.078253 2073 1 Chihuahua 0.323581 True Pekinese 0.090647 2074 1 orange 0.097049 False bagel 0.085851 p2_dog p3 p3_conf p3_dog 2070 True German_short-haired_pointer 0.175219 True 2071 True spatula 0.040836 False 2072 True kelpie 0.031379 True 2073 True papillon 0.068957 True 2074 False banana 0.076110 False tweet_id jpg_url \ 1894 849776966551130114 https://pbs.twimg.com/media/C8sDpDWWsAE5P08.jpg img_num p1 p1_conf p1_dog p2 p2_conf p2_dog \ 1894 2 Chihuahua 0.292092 True toy_terrier 0.136852 True p3 p3_conf p3_dog 1894 bonnet 0.103111 False tweet_id in_reply_to_status_id in_reply_to_user_id \ 0 892420643555336193 NaN NaN 1 892177421306343426 NaN NaN 2 891815181378084864 NaN NaN 3 891689557279858688 NaN NaN 4 891327558926688256 NaN NaN timestamp \ 0 2017-08-01 16:23:56 +0000 1 2017-08-01 00:17:27 +0000 2 2017-07-31 00:18:03 +0000 3 2017-07-30 15:58:51 +0000 4 2017-07-29 16:00:24 +0000 source \ 0 <a 
href="http://twitter.com/download/iphone" r... 1 <a href="http://twitter.com/download/iphone" r... 2 <a href="http://twitter.com/download/iphone" r... 3 <a href="http://twitter.com/download/iphone" r... 4 <a href="http://twitter.com/download/iphone" r... text retweeted_status_id \ 0 This is Phineas. He's a mystical boy. Only eve... NaN 1 This is Tilly. She's just checking pup on you.... NaN 2 This is Archie. He is a rare Norwegian Pouncin... NaN 3 This is Darla. She commenced a snooze mid meal... NaN 4 This is Franklin. He would like you to stop ca... NaN retweeted_status_user_id retweeted_status_timestamp \ 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 NaN NaN 4 NaN NaN expanded_urls rating_numerator \ 0 https://twitter.com/dog_rates/status/892420643... 13 1 https://twitter.com/dog_rates/status/892177421... 13 2 https://twitter.com/dog_rates/status/891815181... 12 3 https://twitter.com/dog_rates/status/891689557... 13 4 https://twitter.com/dog_rates/status/891327558... 12 rating_denominator name doggo floofer pupper puppo 0 10 Phineas None None None None 1 10 Tilly None None None None 2 10 Archie None None None None 3 10 Darla None None None None 4 10 Franklin None None None None tweet_id in_reply_to_status_id in_reply_to_user_id \ 2351 666049248165822465 NaN NaN 2352 666044226329800704 NaN NaN 2353 666033412701032449 NaN NaN 2354 666029285002620928 NaN NaN 2355 666020888022790149 NaN NaN timestamp \ 2351 2015-11-16 00:24:50 +0000 2352 2015-11-16 00:04:52 +0000 2353 2015-11-15 23:21:54 +0000 2354 2015-11-15 23:05:30 +0000 2355 2015-11-15 22:32:08 +0000 source \ 2351 <a href="http://twitter.com/download/iphone" r... 2352 <a href="http://twitter.com/download/iphone" r... 2353 <a href="http://twitter.com/download/iphone" r... 2354 <a href="http://twitter.com/download/iphone" r... 2355 <a href="http://twitter.com/download/iphone" r... text retweeted_status_id \ 2351 Here we have a 1949 1st generation vulpix. Enj... NaN 2352 This is a purebred Piers Morgan. Loves to Netf... 
NaN 2353 Here is a very happy pup. Big fan of well-main... NaN 2354 This is a western brown Mitsubishi terrier. Up... NaN 2355 Here we have a Japanese Irish Setter. Lost eye... NaN retweeted_status_user_id retweeted_status_timestamp \ 2351 NaN NaN 2352 NaN NaN 2353 NaN NaN 2354 NaN NaN 2355 NaN NaN expanded_urls rating_numerator \ 2351 https://twitter.com/dog_rates/status/666049248... 5 2352 https://twitter.com/dog_rates/status/666044226... 6 2353 https://twitter.com/dog_rates/status/666033412... 9 2354 https://twitter.com/dog_rates/status/666029285... 7 2355 https://twitter.com/dog_rates/status/666020888... 8 rating_denominator name doggo floofer pupper puppo 2351 10 None None None None None 2352 10 a None None None None 2353 10 a None None None None 2354 10 a None None None None 2355 10 None None None None None tweet_id in_reply_to_status_id in_reply_to_user_id \ 1008 747594051852075008 NaN NaN timestamp \ 1008 2016-06-28 00:54:46 +0000 source \ 1008 <a href="http://twitter.com/download/iphone" r... text retweeted_status_id \ 1008 Again w the sharks guys. This week is about do... NaN retweeted_status_user_id retweeted_status_timestamp \ 1008 NaN NaN expanded_urls rating_numerator \ 1008 https://twitter.com/dog_rates/status/747594051... 11 rating_denominator name doggo floofer pupper puppo 1008 10 None None None None None contributors coordinates created_at \ 0 NaN NaN 2017-08-01 16:23:56 1 NaN NaN 2017-08-01 00:17:27 2 NaN NaN 2017-07-31 00:18:03 3 NaN NaN 2017-07-30 15:58:51 4 NaN NaN 2017-07-29 16:00:24 entities \ 0 {'hashtags': [], 'symbols': [], 'user_mentions... 1 {'hashtags': [], 'symbols': [], 'user_mentions... 2 {'hashtags': [], 'symbols': [], 'user_mentions... 3 {'hashtags': [], 'symbols': [], 'user_mentions... 4 {'hashtags': [], 'symbols': [], 'user_mentions... extended_entities favorite_count \ 0 {'media': [{'id': 892420639486877696, 'id_str'... 38014 1 NaN 32641 2 NaN 24565 3 {'media': [{'id': 891689552724799489, 'id_str'... 
41365 4 NaN 39551 favorited geo id id_str \ 0 False NaN 892420643555336193 892420643555336192 1 False NaN 892177421306343426 892177421306343424 2 False NaN 891815181378084864 891815181378084864 3 False NaN 891689557279858688 891689557279858688 4 False NaN 891327558926688256 891327558926688256 ... quoted_status \ 0 ... NaN 1 ... NaN 2 ... NaN 3 ... NaN 4 ... NaN quoted_status_id quoted_status_id_str retweet_count retweeted \ 0 NaN NaN 8315 False 1 NaN NaN 6142 False 2 NaN NaN 4067 False 3 NaN NaN 8450 False 4 NaN NaN 9157 False retweeted_status source \ 0 NaN <a href="http://twitter.com/download/iphone" r... 1 NaN <a href="http://twitter.com/download/iphone" r... 2 NaN <a href="http://twitter.com/download/iphone" r... 3 NaN <a href="http://twitter.com/download/iphone" r... 4 NaN <a href="http://twitter.com/download/iphone" r... text truncated \ 0 This is Phineas. He's a mystical boy. Only eve... False 1 This is Tilly. She's just checking pup on you.... True 2 This is Archie. He is a rare Norwegian Pouncin... True 3 This is Darla. She commenced a snooze mid meal... False 4 This is Franklin. He would like you to stop ca... True user 0 {'id': 4196983835, 'id_str': '4196983835', 'na... 1 {'id': 4196983835, 'id_str': '4196983835', 'na... 2 {'id': 4196983835, 'id_str': '4196983835', 'na... 3 {'id': 4196983835, 'id_str': '4196983835', 'na... 4 {'id': 4196983835, 'id_str': '4196983835', 'na... [5 rows x 30 columns] contributors coordinates created_at \ 2334 NaN NaN 2015-11-16 00:24:50 2335 NaN NaN 2015-11-16 00:04:52 2336 NaN NaN 2015-11-15 23:21:54 2337 NaN NaN 2015-11-15 23:05:30 2338 NaN NaN 2015-11-15 22:32:08 entities \ 2334 {'hashtags': [], 'symbols': [], 'user_mentions... 2335 {'hashtags': [], 'symbols': [], 'user_mentions... 2336 {'hashtags': [], 'symbols': [], 'user_mentions... 2337 {'hashtags': [], 'symbols': [], 'user_mentions... 2338 {'hashtags': [], 'symbols': [], 'user_mentions... 
extended_entities favorite_count \ 2334 {'media': [{'id': 666049244999131136, 'id_str'... 106 2335 {'media': [{'id': 666044217047650304, 'id_str'... 292 2336 {'media': [{'id': 666033409081393153, 'id_str'... 123 2337 {'media': [{'id': 666029276303482880, 'id_str'... 126 2338 {'media': [{'id': 666020881337073664, 'id_str'... 2535 favorited geo id id_str \ 2334 False NaN 666049248165822465 666049248165822464 2335 False NaN 666044226329800704 666044226329800704 2336 False NaN 666033412701032449 666033412701032448 2337 False NaN 666029285002620928 666029285002620928 2338 False NaN 666020888022790149 666020888022790144 ... quoted_status \ 2334 ... NaN 2335 ... NaN 2336 ... NaN 2337 ... NaN 2338 ... NaN quoted_status_id quoted_status_id_str retweet_count retweeted \ 2334 NaN NaN 41 False 2335 NaN NaN 139 False 2336 NaN NaN 43 False 2337 NaN NaN 47 False 2338 NaN NaN 502 False retweeted_status source \ 2334 NaN <a href="http://twitter.com/download/iphone" r... 2335 NaN <a href="http://twitter.com/download/iphone" r... 2336 NaN <a href="http://twitter.com/download/iphone" r... 2337 NaN <a href="http://twitter.com/download/iphone" r... 2338 NaN <a href="http://twitter.com/download/iphone" r... text truncated \ 2334 Here we have a 1949 1st generation vulpix. Enj... False 2335 This is a purebred Piers Morgan. Loves to Netf... False 2336 Here is a very happy pup. Big fan of well-main... False 2337 This is a western brown Mitsubishi terrier. Up... False 2338 Here we have a Japanese Irish Setter. Lost eye... False user 2334 {'id': 4196983835, 'id_str': '4196983835', 'na... 2335 {'id': 4196983835, 'id_str': '4196983835', 'na... 2336 {'id': 4196983835, 'id_str': '4196983835', 'na... 2337 {'id': 4196983835, 'id_str': '4196983835', 'na... 2338 {'id': 4196983835, 'id_str': '4196983835', 'na... [5 rows x 30 columns] contributors coordinates created_at \ 1748 NaN NaN 2015-12-20 03:02:53 entities \ 1748 {'hashtags': [], 'symbols': [], 'user_mentions... 
extended_entities favorite_count \ 1748 {'media': [{'id': 678410205395308544, 'id_str'... 4413 favorited geo id id_str \ 1748 False NaN 678410210315247616 678410210315247616 ... quoted_status \ 1748 ... NaN quoted_status_id quoted_status_id_str retweet_count retweeted \ 1748 NaN NaN 1938 False retweeted_status source \ 1748 NaN <a href="http://twitter.com/download/iphone" r... text truncated \ 1748 Say hello to Jerome. He can shoot french fries... False user 1748 {'id': 4196983835, 'id_str': '4196983835', 'na... [1 rows x 30 columns] ###Markdown programmatic ###Code print(data1.info()) print(data1.isnull().sum()) print(data1.describe()) print(data2.info()) print(data2.isnull().sum()) print(data2.describe()) print(data3.info()) print(data3.isnull().sum()) print(data3.describe()) ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB None tweet_id 0 jpg_url 0 img_num 0 p1 0 p1_conf 0 p1_dog 0 p2 0 p2_conf 0 p2_dog 0 p3 0 p3_conf 0 p3_dog 0 dtype: int64 tweet_id img_num p1_conf p2_conf p3_conf count 2.075000e+03 2075.000000 2075.000000 2.075000e+03 2.075000e+03 mean 7.384514e+17 1.203855 0.594548 1.345886e-01 6.032417e-02 std 6.785203e+16 0.561875 0.271174 1.006657e-01 5.090593e-02 min 6.660209e+17 1.000000 0.044333 1.011300e-08 1.740170e-10 25% 6.764835e+17 1.000000 0.364412 5.388625e-02 1.622240e-02 50% 7.119988e+17 1.000000 0.588230 1.181810e-01 4.944380e-02 75% 7.932034e+17 1.000000 0.843855 1.955655e-01 9.180755e-02 max 8.924206e+17 4.000000 1.000000 4.880140e-01 2.734190e-01 <class 
'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB None tweet_id 0 in_reply_to_status_id 2278 in_reply_to_user_id 2278 timestamp 0 source 0 text 0 retweeted_status_id 2175 retweeted_status_user_id 2175 retweeted_status_timestamp 2175 expanded_urls 59 rating_numerator 0 rating_denominator 0 name 0 doggo 0 floofer 0 pupper 0 puppo 0 dtype: int64 tweet_id in_reply_to_status_id in_reply_to_user_id \ count 2.356000e+03 7.800000e+01 7.800000e+01 mean 7.427716e+17 7.455079e+17 2.014171e+16 std 6.856705e+16 7.582492e+16 1.252797e+17 min 6.660209e+17 6.658147e+17 1.185634e+07 25% 6.783989e+17 6.757419e+17 3.086374e+08 50% 7.196279e+17 7.038708e+17 4.196984e+09 75% 7.993373e+17 8.257804e+17 4.196984e+09 max 8.924206e+17 8.862664e+17 8.405479e+17 retweeted_status_id retweeted_status_user_id rating_numerator \ count 1.810000e+02 1.810000e+02 2356.000000 mean 7.720400e+17 1.241698e+16 13.126486 std 6.236928e+16 9.599254e+16 45.876648 min 6.661041e+17 7.832140e+05 0.000000 25% 7.186315e+17 4.196984e+09 10.000000 50% 7.804657e+17 4.196984e+09 11.000000 75% 8.203146e+17 4.196984e+09 12.000000 max 8.874740e+17 7.874618e+17 1776.000000 rating_denominator count 2356.000000 mean 10.455433 std 6.745237 min 0.000000 25% 10.000000 50% 10.000000 75% 10.000000 max 170.000000 <class 
'pandas.core.frame.DataFrame'> RangeIndex: 2339 entries, 0 to 2338 Data columns (total 30 columns): contributors 0 non-null float64 coordinates 0 non-null float64 created_at 2339 non-null datetime64[ns] entities 2339 non-null object extended_entities 1821 non-null object favorite_count 2339 non-null int64 favorited 2339 non-null bool geo 0 non-null float64 id 2339 non-null int64 id_str 2339 non-null int64 in_reply_to_screen_name 77 non-null object in_reply_to_status_id 77 non-null float64 in_reply_to_status_id_str 77 non-null float64 in_reply_to_user_id 77 non-null float64 in_reply_to_user_id_str 77 non-null float64 is_quote_status 2339 non-null bool lang 2339 non-null object place 1 non-null object possibly_sensitive 2204 non-null float64 possibly_sensitive_appealable 2204 non-null float64 quoted_status 24 non-null object quoted_status_id 26 non-null float64 quoted_status_id_str 26 non-null float64 retweet_count 2339 non-null int64 retweeted 2339 non-null bool retweeted_status 167 non-null object source 2339 non-null object text 2339 non-null object truncated 2339 non-null bool user 2339 non-null object dtypes: bool(4), datetime64[ns](1), float64(11), int64(4), object(10) memory usage: 484.3+ KB None contributors 2339 coordinates 2339 created_at 0 entities 0 extended_entities 518 favorite_count 0 favorited 0 geo 2339 id 0 id_str 0 in_reply_to_screen_name 2262 in_reply_to_status_id 2262 in_reply_to_status_id_str 2262 in_reply_to_user_id 2262 in_reply_to_user_id_str 2262 is_quote_status 0 lang 0 place 2338 possibly_sensitive 135 possibly_sensitive_appealable 135 quoted_status 2315 quoted_status_id 2313 quoted_status_id_str 2313 retweet_count 0 retweeted 0 retweeted_status 2172 source 0 text 0 truncated 0 user 0 dtype: int64 contributors coordinates favorite_count geo id \ count 0.0 0.0 2339.000000 0.0 2.339000e+03 mean NaN NaN 7961.119282 NaN 7.422447e+17 std NaN NaN 12328.654834 NaN 6.832765e+16 min NaN NaN 0.000000 NaN 6.660209e+17 25% NaN NaN 1371.000000 NaN 
6.783378e+17 50% NaN NaN 3454.000000 NaN 7.186315e+17 75% NaN NaN 9739.500000 NaN 7.986962e+17 max NaN NaN 164232.000000 NaN 8.924206e+17 id_str in_reply_to_status_id in_reply_to_status_id_str \ count 2.339000e+03 7.700000e+01 7.700000e+01 mean 7.422447e+17 7.440692e+17 7.440692e+17 std 6.832765e+16 7.524295e+16 7.524295e+16 min 6.660209e+17 6.658147e+17 6.658147e+17 25% 6.783378e+17 6.757073e+17 6.757073e+17 50% 7.186315e+17 7.032559e+17 7.032559e+17 75% 7.986962e+17 8.233264e+17 8.233264e+17 max 8.924206e+17 8.862664e+17 8.862664e+17 in_reply_to_user_id in_reply_to_user_id_str possibly_sensitive \ count 7.700000e+01 7.700000e+01 2204.0 mean 2.040329e+16 2.040329e+16 0.0 std 1.260797e+17 1.260797e+17 0.0 min 1.185634e+07 1.185634e+07 0.0 25% 3.589728e+08 3.589728e+08 0.0 50% 4.196984e+09 4.196984e+09 0.0 75% 4.196984e+09 4.196984e+09 0.0 max 8.405479e+17 8.405479e+17 0.0 possibly_sensitive_appealable quoted_status_id quoted_status_id_str \ count 2204.0 2.600000e+01 2.600000e+01 mean 0.0 8.113972e+17 8.113972e+17 std 0.0 6.295843e+16 6.295843e+16 min 0.0 6.721083e+17 6.721083e+17 25% 0.0 7.761338e+17 7.761338e+17 50% 0.0 8.281173e+17 8.281173e+17 75% 0.0 8.637581e+17 8.637581e+17 max 0.0 8.860534e+17 8.860534e+17 retweet_count count 2339.000000 mean 2928.929885 std 4934.049982 min 0.000000 25% 587.500000 50% 1363.000000 75% 3413.500000 max 83626.000000 ###Markdown
Clean Define Quality issues
1. Rename the columns in the image-predictions.tsv file
2. Some tweets have 2 different tweet ids; these are retweets
3. Missing values in the images dataset
4. Delete the added numbers (the UTC offset) from the timestamp in twitter-archive-enhanced.csv
5. Timestamp is a string
6. There are some problems with the column names, so I will lowercase the names and replace '_' with a single space
7. After the merge there are duplicate columns
8. In several columns 'None' values count as non-null (convert 'None' to NaN); since the content is empty I will drop it

Tidiness
9.
doggo, floofer, pupper, puppo columns in twitter_archive_enhanced.csv should be combined into a single column, as this is one variable that identifies the stage of the dog
10. Merge the three files (image-predictions.tsv, twitter-archive-enhanced.csv, tweet_json.txt)
11. Merge rating numerator and rating denominator

Code
###Code
#make copy of our data
df1= data1.copy()
#rename columns
df1.rename(columns={'jpg_url': 'The picture url', 'img_num': 'image number ' , 'p1' : 'The first picture', 'p2' : 'The secound picture', 'p3' : 'The third picture', 'p1_dog': 'is the frist picture have a dog', 'p2_dog': 'is the secound picture have a dog', 'p3_dog': 'is the third picture have a dog', 'p1_conf': 'the frist picture configuration', 'p2_conf': 'the secound picture configuration', 'p3_conf': 'the third picture configuration'}, inplace=True)
#make copy of our data
df2= data2.copy()
#replace the 'None' placeholder in every dog stage column, then combine the stages into a single 'stage' column
for col in ['doggo', 'floofer', 'pupper', 'puppo']:
    df2[col].replace('None', '', inplace=True)
df2['stage'] = df2.doggo + df2.floofer + df2.pupper + df2.puppo
#drop useless columns
df2.drop(columns=['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'doggo', 'floofer', 'pupper' , 'puppo'], inplace = True)
#split the timestamp into its date, time and offset parts
df2['Date'] = df2['timestamp'].apply(lambda x:x.split(' ')[0])
df2['Time'] = df2['timestamp'].apply(lambda x:x.split(' ')[1])
df2['xc'] = df2['timestamp'].apply(lambda x:x.split(' ')[2])
df2['created_at'] = df2['Date'] + ' ' 
+ df2['Time']
df2.drop(['timestamp', 'Date', 'xc', 'Time'], axis=1, inplace=True)
df2['created_at'] = pd.to_datetime(df2['created_at'])
type(df2['created_at'][0])
data3 = pd.read_json('tweet_json.txt', lines=True)
# make a copy of our data
df3 = data3.copy()
# drop useless columns
df3.drop(columns=['contributors', 'coordinates', 'geo', 'in_reply_to_screen_name', 'in_reply_to_status_id', 'in_reply_to_status_id_str', 'in_reply_to_user_id', 'in_reply_to_user_id_str', 'place', 'quoted_status', 'quoted_status_id', 'quoted_status_id_str', 'retweeted_status'], inplace=True)
# drop the duplicate tweet-id column and align the id column name for merging
df3.drop(columns=['id_str'], inplace=True)
df3.rename(columns={'id': 'tweet_id'}, inplace=True)
# merge the three files
df4 = pd.merge(df1, df2, how="inner", on="tweet_id")
merge = pd.merge(df4, df3, how="inner", on="tweet_id")
# make column names lowercase and replace '_' with a space
merge.rename(columns=lambda x: x.strip().lower().replace('_', ' '), inplace=True)
# drop missing values
merge.dropna(inplace=True)
# drop duplicate columns created by the merge
merge.drop(columns=['source y', 'created at y', 'text y'], inplace=True)
merge.rename(columns={'source x': 'source', 'created at x': 'created at'}, inplace=True)
# merge rating numerator and rating denominator into one column (their sum)
merge['rating'] = merge['rating numerator'] + merge['rating denominator']
# drop the now-merged rating columns
merge.drop(columns=['rating numerator', 'rating denominator'], inplace=True)
# drop retweets (their text starts with 'RT @')
rt_index = merge[merge['text x'].str.contains('RT @')].index
merge.drop(index=rt_index, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code print(df1.info())
print(' ')
print(df2.dtypes)
print(' ')
print(merge.head)
print(' ')
print(merge.info())
print(' ') ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 The picture url 2075 non-null object image number 2075 non-null int64 The first picture 2075 non-null
object the frist picture configuration 2075 non-null float64 is the frist picture have a dog 2075 non-null bool The secound picture 2075 non-null object the secound picture configuration 2075 non-null float64 is the secound picture have a dog 2075 non-null bool The third picture 2075 non-null object the third picture configuration 2075 non-null float64 is the third picture have a dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB None tweet_id int64 source object text object expanded_urls object rating_numerator int64 rating_denominator int64 name object stage object created_at datetime64[ns] dtype: object <bound method NDFrame.head of tweet id the picture url \ 0 666020888022790149 https://pbs.twimg.com/media/CT4udn0WwAA0aMy.jpg 1 666029285002620928 https://pbs.twimg.com/media/CT42GRgUYAA5iDo.jpg 2 666033412701032449 https://pbs.twimg.com/media/CT4521TWwAEvMyu.jpg 3 666044226329800704 https://pbs.twimg.com/media/CT5Dr8HUEAA-lEu.jpg 4 666049248165822465 https://pbs.twimg.com/media/CT5IQmsXIAAKY4A.jpg 5 666050758794694657 https://pbs.twimg.com/media/CT5Jof1WUAEuVxN.jpg 6 666051853826850816 https://pbs.twimg.com/media/CT5KoJ1WoAAJash.jpg 7 666055525042405380 https://pbs.twimg.com/media/CT5N9tpXIAAifs1.jpg 8 666057090499244032 https://pbs.twimg.com/media/CT5PY90WoAAQGLo.jpg 9 666058600524156928 https://pbs.twimg.com/media/CT5Qw94XAAA_2dP.jpg 10 666063827256086533 https://pbs.twimg.com/media/CT5Vg_wXIAAXfnj.jpg 11 666071193221509120 https://pbs.twimg.com/media/CT5cN_3WEAAlOoZ.jpg 12 666073100786774016 https://pbs.twimg.com/media/CT5d9DZXAAALcwe.jpg 13 666082916733198337 https://pbs.twimg.com/media/CT5m4VGWEAAtKc8.jpg 14 666094000022159362 https://pbs.twimg.com/media/CT5w9gUW4AAsBNN.jpg 15 666099513787052032 https://pbs.twimg.com/media/CT51-JJUEAA6hV8.jpg 16 666102155909144576 https://pbs.twimg.com/media/CT54YGiWUAEZnoK.jpg 17 666104133288665088 https://pbs.twimg.com/media/CT56LSZWoAAlJj2.jpg 18 666268910803644416 
https://pbs.twimg.com/media/CT8QCd1WEAADXws.jpg 19 666273097616637952 https://pbs.twimg.com/media/CT8T1mtUwAA3aqm.jpg 20 666287406224695296 https://pbs.twimg.com/media/CT8g3BpUEAAuFjg.jpg 21 666293911632134144 https://pbs.twimg.com/media/CT8mx7KW4AEQu8N.jpg 22 666337882303524864 https://pbs.twimg.com/media/CT9OwFIWEAMuRje.jpg 23 666345417576210432 https://pbs.twimg.com/media/CT9Vn7PWoAA_ZCM.jpg 24 666353288456101888 https://pbs.twimg.com/media/CT9cx0tUEAAhNN_.jpg 25 666362758909284353 https://pbs.twimg.com/media/CT9lXGsUcAAyUFt.jpg 26 666373753744588802 https://pbs.twimg.com/media/CT9vZEYWUAAlZ05.jpg 27 666396247373291520 https://pbs.twimg.com/media/CT-D2ZHWIAA3gK1.jpg 28 666407126856765440 https://pbs.twimg.com/media/CT-NvwmW4AAugGZ.jpg 29 666411507551481857 https://pbs.twimg.com/media/CT-RugiWIAELEaq.jpg ... ... ... 2003 879008229531029506 https://pbs.twimg.com/media/DDLdUrqXYAMOVzY.jpg 2006 879415818425184262 https://pbs.twimg.com/ext_tw_video_thumb/87941... 2007 879492040517615616 https://pbs.twimg.com/media/DDSVWMvXsAEgmMK.jpg 2008 879862464715927552 https://pbs.twimg.com/media/DDXmPrbWAAEKMvy.jpg 2010 880221127280381952 https://pbs.twimg.com/media/DDcscbXU0AIfDzs.jpg 2011 880465832366813184 https://pbs.twimg.com/media/DDgK-J4XUAIEV9W.jpg 2013 880935762899988482 https://pbs.twimg.com/media/DDm2Z5aXUAEDS2u.jpg 2016 881666595344535552 https://pbs.twimg.com/media/DDxPFwbWAAEbVVR.jpg 2017 881906580714921986 https://pbs.twimg.com/media/DD0pWm9XcAAeSBL.jpg 2019 882268110199369728 https://pbs.twimg.com/media/DD5yKdPW0AArzX8.jpg 2021 882762694511734784 https://pbs.twimg.com/media/DEAz_HHXsAA-p_z.jpg 2022 882992080364220416 https://pbs.twimg.com/media/DEEEnIqXYAAiJh_.jpg 2023 883117836046086144 https://pbs.twimg.com/media/DEF2-_hXoAAs62q.jpg 2025 883482846933004288 https://pbs.twimg.com/media/DELC9dZXUAADqUk.jpg 2029 884562892145688576 https://pbs.twimg.com/media/DEaZQkfXUAEC7qB.jpg 2031 884925521741709313 https://pbs.twimg.com/media/DEfjEaNXkAAtPlj.jpg 2034 
885528943205470208 https://pbs.twimg.com/media/DEoH3yvXgAAzQtS.jpg 2036 886258384151887873 https://pbs.twimg.com/media/DEyfTG4UMAE4aE9.jpg 2038 886680336477933568 https://pbs.twimg.com/media/DE4fEDzWAAAyHMM.jpg 2040 886983233522544640 https://pbs.twimg.com/media/DE8yicJW0AAAvBJ.jpg 2042 887343217045368832 https://pbs.twimg.com/ext_tw_video_thumb/88734... 2043 887473957103951883 https://pbs.twimg.com/media/DFDw2tyUQAAAFke.jpg 2044 887517139158093824 https://pbs.twimg.com/ext_tw_video_thumb/88751... 2047 888554962724278272 https://pbs.twimg.com/media/DFTH_O-UQAACu20.jpg 2049 888917238123831296 https://pbs.twimg.com/media/DFYRgsOUQAARGhO.jpg 2052 889638837579907072 https://pbs.twimg.com/media/DFihzFfXsAYGDPR.jpg 2053 889665388333682689 https://pbs.twimg.com/media/DFi579UWsAAatzw.jpg 2054 889880896479866881 https://pbs.twimg.com/media/DFl99B1WsAITKsg.jpg 2062 891689557279858688 https://pbs.twimg.com/media/DF_q7IAWsAEuuN8.jpg 2065 892420643555336193 https://pbs.twimg.com/media/DGKD1-bXoAAIAUK.jpg image number the first picture \ 0 1 Welsh_springer_spaniel 1 1 redbone 2 1 German_shepherd 3 1 Rhodesian_ridgeback 4 1 miniature_pinscher 5 1 Bernese_mountain_dog 6 1 box_turtle 7 1 chow 8 1 shopping_cart 9 1 miniature_poodle 10 1 golden_retriever 11 1 Gordon_setter 12 1 Walker_hound 13 1 pug 14 1 bloodhound 15 1 Lhasa 16 1 English_setter 17 1 hen 18 1 desktop_computer 19 1 Italian_greyhound 20 1 Maltese_dog 21 1 three-toed_sloth 22 1 ox 23 1 golden_retriever 24 1 malamute 25 1 guinea_pig 26 1 soft-coated_wheaten_terrier 27 1 Chihuahua 28 1 black-and-tan_coonhound 29 1 coho ... ... ... 
2003 1 vizsla 2006 1 English_springer 2007 1 German_short-haired_pointer 2008 3 basset 2010 1 Chihuahua 2011 1 golden_retriever 2013 1 street_sign 2016 1 Saluki 2017 1 Weimaraner 2019 1 golden_retriever 2021 1 Labrador_retriever 2022 1 Eskimo_dog 2023 2 golden_retriever 2025 1 golden_retriever 2029 1 pug 2031 1 Italian_greyhound 2034 1 pug 2036 1 pug 2038 1 convertible 2040 2 Chihuahua 2042 1 Mexican_hairless 2043 2 Pembroke 2044 1 limousine 2047 3 Siberian_husky 2049 1 golden_retriever 2052 1 French_bulldog 2053 1 Pembroke 2054 1 French_bulldog 2062 1 paper_towel 2065 1 orange the frist picture configuration is the frist picture have a dog \ 0 0.465074 True 1 0.506826 True 2 0.596461 True 3 0.408143 True 4 0.560311 True 5 0.651137 True 6 0.933012 False 7 0.692517 True 8 0.962465 False 9 0.201493 True 10 0.775930 True 11 0.503672 True 12 0.260857 True 13 0.489814 True 14 0.195217 True 15 0.582330 True 16 0.298617 True 17 0.965932 False 18 0.086502 False 19 0.176053 True 20 0.857531 True 21 0.914671 False 22 0.416669 False 23 0.858744 True 24 0.336874 True 25 0.996496 False 26 0.326467 True 27 0.978108 True 28 0.529139 True 29 0.404640 False ... ... ... 
2003 0.960513 True 2006 0.383404 True 2007 0.479896 True 2008 0.813507 True 2010 0.238525 True 2011 0.913255 True 2013 0.251801 False 2016 0.529012 True 2017 0.291539 True 2019 0.762211 True 2021 0.850050 True 2022 0.466778 True 2023 0.949562 True 2025 0.943082 True 2029 0.546406 True 2031 0.259916 True 2034 0.369275 True 2036 0.943575 True 2038 0.738995 False 2040 0.793469 True 2042 0.330741 True 2043 0.809197 True 2044 0.130432 False 2047 0.700377 True 2049 0.714719 True 2052 0.991650 True 2053 0.966327 True 2054 0.377417 True 2062 0.170278 False 2065 0.097049 False the secound picture the secound picture configuration \ 0 collie 0.156665 1 miniature_pinscher 0.074192 2 malinois 0.138584 3 redbone 0.360687 4 Rottweiler 0.243682 5 English_springer 0.263788 6 mud_turtle 0.045885 7 Tibetan_mastiff 0.058279 8 shopping_basket 0.014594 9 komondor 0.192305 10 Tibetan_mastiff 0.093718 11 Yorkshire_terrier 0.174201 12 English_foxhound 0.175382 13 bull_mastiff 0.404722 14 German_shepherd 0.078260 15 Shih-Tzu 0.166192 16 Newfoundland 0.149842 17 cock 0.033919 18 desk 0.085547 19 toy_terrier 0.111884 20 toy_poodle 0.063064 21 otter 0.015250 22 Newfoundland 0.278407 23 Chesapeake_Bay_retriever 0.054787 24 Siberian_husky 0.147655 25 skunk 0.002402 26 Afghan_hound 0.259551 27 toy_terrier 0.009397 28 bloodhound 0.244220 29 barracouta 0.271485 ... ... ... 
2003 miniature_pinscher 0.009431 2006 Boston_bull 0.134967 2007 vizsla 0.124353 2008 beagle 0.146654 2010 meerkat 0.104256 2011 Labrador_retriever 0.026329 2013 umbrella 0.115123 2016 Afghan_hound 0.250003 2017 Chesapeake_Bay_retriever 0.278966 2019 Labrador_retriever 0.098985 2021 Chesapeake_Bay_retriever 0.074257 2022 Siberian_husky 0.406044 2023 Labrador_retriever 0.045948 2025 Labrador_retriever 0.032409 2029 French_bulldog 0.404291 2031 American_Staffordshire_terrier 0.198451 2034 Labrador_retriever 0.265835 2036 shower_cap 0.025286 2038 sports_car 0.139952 2040 toy_terrier 0.143528 2042 sea_lion 0.275645 2043 Rhodesian_ridgeback 0.054950 2044 tow_truck 0.029175 2047 Eskimo_dog 0.166511 2049 Tibetan_mastiff 0.120184 2052 boxer 0.002129 2053 Cardigan 0.027356 2054 Labrador_retriever 0.151317 2062 Labrador_retriever 0.168086 2065 bagel 0.085851 is the secound picture have a dog the third picture \ 0 True Shetland_sheepdog 1 True Rhodesian_ridgeback 2 True bloodhound 3 True miniature_pinscher 4 True Doberman 5 True Greater_Swiss_Mountain_dog 6 False terrapin 7 True fur_coat 8 False golden_retriever 9 True soft-coated_wheaten_terrier 10 True Labrador_retriever 11 True Pekinese 12 True Ibizan_hound 13 True French_bulldog 14 True malinois 15 True Dandie_Dinmont 16 True borzoi 17 False partridge 18 False bookcase 19 True basenji 20 True miniature_poodle 21 False great_grey_owl 22 True groenendael 23 True Labrador_retriever 24 True Eskimo_dog 25 False hamster 26 True briard 27 True papillon 28 True flat-coated_retriever 29 False gar ... ... ... 
2003 True American_Staffordshire_terrier 2006 True Cardigan 2007 True bath_towel 2008 True cocker_spaniel 2010 False clumber 2011 True cocker_spaniel 2013 False traffic_light 2016 True golden_retriever 2017 True koala 2019 True cocker_spaniel 2021 True flat-coated_retriever 2022 True dingo 2023 True kuvasz 2025 True kuvasz 2029 True Brabancon_griffon 2031 True Staffordshire_bullterrier 2034 True kuvasz 2036 False Siamese_cat 2038 False car_wheel 2040 True can_opener 2042 False Weimaraner 2043 True beagle 2044 False shopping_cart 2047 True malamute 2049 True Labrador_retriever 2052 True Staffordshire_bullterrier 2053 True basenji 2054 True muzzle 2062 True spatula 2065 False banana ... id str is quote status lang possibly sensitive \ 0 ... 666020888022790144 False en 0.0 1 ... 666029285002620928 False en 0.0 2 ... 666033412701032448 False en 0.0 3 ... 666044226329800704 False en 0.0 4 ... 666049248165822464 False en 0.0 5 ... 666050758794694656 False en 0.0 6 ... 666051853826850816 False en 0.0 7 ... 666055525042405376 False en 0.0 8 ... 666057090499244032 False en 0.0 9 ... 666058600524156928 False en 0.0 10 ... 666063827256086528 False en 0.0 11 ... 666071193221509120 False en 0.0 12 ... 666073100786774016 False en 0.0 13 ... 666082916733198336 False en 0.0 14 ... 666094000022159360 False en 0.0 15 ... 666099513787052032 False en 0.0 16 ... 666102155909144576 False en 0.0 17 ... 666104133288665088 False en 0.0 18 ... 666268910803644416 False en 0.0 19 ... 666273097616637952 False en 0.0 20 ... 666287406224695296 False en 0.0 21 ... 666293911632134144 False en 0.0 22 ... 666337882303524864 False en 0.0 23 ... 666345417576210432 False en 0.0 24 ... 666353288456101888 False en 0.0 25 ... 666362758909284352 False en 0.0 26 ... 666373753744588800 False en 0.0 27 ... 666396247373291520 False en 0.0 28 ... 666407126856765440 False en 0.0 29 ... 666411507551481856 False en 0.0 ... ... ... ... ... ... 2003 ... 879008229531029504 False en 0.0 2006 ... 
879415818425184256 False en 0.0 2007 ... 879492040517615616 False en 0.0 2008 ... 879862464715927552 False en 0.0 2010 ... 880221127280381952 False en 0.0 2011 ... 880465832366813184 False en 0.0 2013 ... 880935762899988480 False en 0.0 2016 ... 881666595344535552 False en 0.0 2017 ... 881906580714921984 False en 0.0 2019 ... 882268110199369728 False en 0.0 2021 ... 882762694511734784 False en 0.0 2022 ... 882992080364220416 False en 0.0 2023 ... 883117836046086144 False en 0.0 2025 ... 883482846933004288 False en 0.0 2029 ... 884562892145688576 False en 0.0 2031 ... 884925521741709312 False en 0.0 2034 ... 885528943205470208 False en 0.0 2036 ... 886258384151887872 False en 0.0 2038 ... 886680336477933568 False en 0.0 2040 ... 886983233522544640 False en 0.0 2042 ... 887343217045368832 False en 0.0 2043 ... 887473957103951872 False en 0.0 2044 ... 887517139158093824 False en 0.0 2047 ... 888554962724278272 False en 0.0 2049 ... 888917238123831296 False en 0.0 2052 ... 889638837579907072 False en 0.0 2053 ... 889665388333682688 False en 0.0 2054 ... 889880896479866880 False en 0.0 2062 ... 891689557279858688 False en 0.0 2065 ... 892420643555336192 False en 0.0 possibly sensitive appealable retweet count retweeted truncated \ 0 0.0 502 False False 1 0.0 47 False False 2 0.0 43 False False 3 0.0 139 False False 4 0.0 41 False False 5 0.0 59 False False 6 0.0 837 False False 7 0.0 238 False False 8 0.0 139 False False 9 0.0 57 False False 10 0.0 213 False False 11 0.0 59 False False 12 0.0 160 False False 13 0.0 44 False False 14 0.0 72 False False 15 0.0 66 False False 16 0.0 12 False False 17 0.0 6433 False False 18 0.0 35 False False 19 0.0 76 False False 20 0.0 64 False False 21 0.0 348 False False 22 0.0 90 False False 23 0.0 133 False False 24 0.0 71 False False 25 0.0 562 False False 26 0.0 90 False False 27 0.0 84 False False 28 0.0 40 False False 29 0.0 323 False False ... ... ... ... ... 
2003 0.0 2646 False False 2006 0.0 43378 False False 2007 0.0 3117 False False 2008 0.0 3435 False False 2010 0.0 4130 False False 2011 0.0 6149 False False 2013 0.0 2731 False False 2016 0.0 10451 False False 2017 0.0 3325 False False 2019 0.0 11360 False False 2021 0.0 4815 False False 2022 0.0 3842 False False 2023 0.0 6531 False False 2025 0.0 9726 False False 2029 0.0 4592 False False 2031 0.0 17856 False False 2034 0.0 6291 False False 2036 0.0 6148 False False 2038 0.0 4365 False False 2040 0.0 7603 False False 2042 0.0 10176 False False 2043 0.0 17776 False False 2044 0.0 11426 False False 2047 0.0 3472 False False 2049 0.0 4400 False False 2052 0.0 4436 False False 2053 0.0 9828 False False 2054 0.0 4869 False False 2062 0.0 8450 False False 2065 0.0 8315 False False user rating 0 {'id': 4196983835, 'id_str': '4196983835', 'na... 18 1 {'id': 4196983835, 'id_str': '4196983835', 'na... 17 2 {'id': 4196983835, 'id_str': '4196983835', 'na... 19 3 {'id': 4196983835, 'id_str': '4196983835', 'na... 16 4 {'id': 4196983835, 'id_str': '4196983835', 'na... 15 5 {'id': 4196983835, 'id_str': '4196983835', 'na... 20 6 {'id': 4196983835, 'id_str': '4196983835', 'na... 12 7 {'id': 4196983835, 'id_str': '4196983835', 'na... 20 8 {'id': 4196983835, 'id_str': '4196983835', 'na... 19 9 {'id': 4196983835, 'id_str': '4196983835', 'na... 18 10 {'id': 4196983835, 'id_str': '4196983835', 'na... 20 11 {'id': 4196983835, 'id_str': '4196983835', 'na... 19 12 {'id': 4196983835, 'id_str': '4196983835', 'na... 20 13 {'id': 4196983835, 'id_str': '4196983835', 'na... 16 14 {'id': 4196983835, 'id_str': '4196983835', 'na... 19 15 {'id': 4196983835, 'id_str': '4196983835', 'na... 18 16 {'id': 4196983835, 'id_str': '4196983835', 'na... 21 17 {'id': 4196983835, 'id_str': '4196983835', 'na... 11 18 {'id': 4196983835, 'id_str': '4196983835', 'na... 20 19 {'id': 4196983835, 'id_str': '4196983835', 'na... 21 20 {'id': 4196983835, 'id_str': '4196983835', 'na... 
3 21 {'id': 4196983835, 'id_str': '4196983835', 'na... 13 22 {'id': 4196983835, 'id_str': '4196983835', 'na... 19 23 {'id': 4196983835, 'id_str': '4196983835', 'na... 20 24 {'id': 4196983835, 'id_str': '4196983835', 'na... 18 25 {'id': 4196983835, 'id_str': '4196983835', 'na... 16 26 {'id': 4196983835, 'id_str': '4196983835', 'na... 21 27 {'id': 4196983835, 'id_str': '4196983835', 'na... 19 28 {'id': 4196983835, 'id_str': '4196983835', 'na... 17 29 {'id': 4196983835, 'id_str': '4196983835', 'na... 12 ... ... ... 2003 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2006 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2007 {'id': 4196983835, 'id_str': '4196983835', 'na... 22 2008 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2010 {'id': 4196983835, 'id_str': '4196983835', 'na... 22 2011 {'id': 4196983835, 'id_str': '4196983835', 'na... 22 2013 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2016 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2017 {'id': 4196983835, 'id_str': '4196983835', 'na... 22 2019 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2021 {'id': 4196983835, 'id_str': '4196983835', 'na... 22 2022 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2023 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2025 {'id': 4196983835, 'id_str': '4196983835', 'na... 15 2029 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2031 {'id': 4196983835, 'id_str': '4196983835', 'na... 22 2034 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2036 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2038 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2040 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2042 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2043 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2044 {'id': 4196983835, 'id_str': '4196983835', 'na... 24 2047 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2049 {'id': 4196983835, 'id_str': '4196983835', 'na... 
22 2052 {'id': 4196983835, 'id_str': '4196983835', 'na... 22 2053 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2054 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2062 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 2065 {'id': 4196983835, 'id_str': '4196983835', 'na... 23 [1746 rows x 32 columns]> <class 'pandas.core.frame.DataFrame'> Int64Index: 1746 entries, 0 to 2065 Data columns (total 32 columns): tweet id 1746 non-null int64 the picture url 1746 non-null object image number 1746 non-null int64 the first picture 1746 non-null object the frist picture configuration 1746 non-null float64 is the frist picture have a dog 1746 non-null bool the secound picture 1746 non-null object the secound picture configuration 1746 non-null float64 is the secound picture have a dog 1746 non-null bool the third picture 1746 non-null object the third picture configuration 1746 non-null float64 is the third picture have a dog 1746 non-null bool source 1746 non-null object text x 1746 non-null object expanded urls 1746 non-null object name 1746 non-null object stage 1746 non-null object created at 1746 non-null datetime64[ns] entities 1746 non-null object extended entities 1746 non-null object favorite count 1746 non-null int64 favorited 1746 non-null bool id str 1746 non-null int64 is quote status 1746 non-null bool lang 1746 non-null object possibly sensitive 1746 non-null float64 possibly sensitive appealable 1746 non-null float64 retweet count 1746 non-null int64 retweeted 1746 non-null bool truncated 1746 non-null bool user 1746 non-null object rating 1746 non-null int64 dtypes: bool(7), datetime64[ns](1), float64(5), int64(6), object(13) memory usage: 366.6+ KB None ###Markdown insights ###Code # insight one # how many times dog appear in picture? 
print(sum(merge['is the secound picture have a dog'] == True))
print(sum(merge['is the frist picture have a dog'] == True))
print(sum(merge['is the third picture have a dog'] == True))
# dogs are detected most often in the second prediction,
# and more often in the first prediction than in the third
df_f = sum(merge['is the frist picture have a dog'] == True)
df_s = sum(merge['is the secound picture have a dog'] == True)
df_t = sum(merge['is the third picture have a dog'] == True)
# visualize the insight
locations = [1, 2, 3]
heights = [df_f, df_s, df_t]
labels = ['first picture', 'second picture', 'third picture']
plt.bar(locations, heights, tick_label=labels, color='g')
plt.xlabel('prediction')
plt.ylabel('number of dogs')
sns.set_style('dark')
# insight two
# December (month 12) has the most posts
merge['month'] = merge['created at'].dt.month
print(merge['month'].value_counts())
print(' ')
# Monday is the most common posting day
merge['weekday_name'] = merge['created at'].dt.weekday_name
print(merge['weekday_name'].value_counts())
print(' ')
# 1 o'clock is the most common posting hour
merge['hour'] = merge['created at'].dt.hour
print(merge['hour'].value_counts())
labels = merge['hour'].value_counts().keys()
plt.pie(merge['hour'].value_counts(), labels=labels)
plt.title('distribution of posts by hour of day')
labels = merge['month'].value_counts().keys()
plt.pie(merge['month'].value_counts(), labels=labels)
plt.title('distribution of posts by month')
labels = merge['weekday_name'].value_counts().keys()
plt.pie(merge['weekday_name'].value_counts(), labels=labels)
plt.title('distribution of posts by day of week')
# insight three
# what is the most common dog breed in the pictures?
v_c1 = merge[merge['is the frist picture have a dog'] == True]['the first picture'].value_counts()
v_c2 = merge[merge['is the secound picture have a dog'] == True]['the secound picture'].value_counts()
v_c3 = merge[merge['is the third picture have a dog'] == True]['the third picture'].value_counts()
v_c = v_c1 + v_c2 + v_c3
print(v_c.max())
print(v_c[v_c == v_c.max()])
# the most common dog type appearing in the pictures is the golden retriever
# insight four
# what is the most common post language?
print(merge['lang'].value_counts())
# the most used language is en, followed by nl and in
# insight five
# which picture has the lowest retweet count?
import requests
import os
from IPython.display import HTML, display
x = merge['retweet count'].min()
merge[merge['retweet count'] == x]
merge[merge['retweet count'] == x]['the picture url']
print('source is', merge[merge['retweet count'] == x]['the picture url'])
display(HTML('<img src="https://pbs.twimg.com/media/CT54YGiWUAEZnoK.jpg" />'))
# which picture has the worst rating?
x = merge['rating'].min()
merge[merge['rating'] == x]
print('source is', merge[merge['rating'] == x]['the picture url'])
display(HTML('<img src="https://pbs.twimg.com/media/CT8g3BpUEAAuFjg.jpg" />'))
# which dog is the favorite (highest rating)?
x = merge['rating'].max()
merge[merge['rating'] == x]
merge[merge['rating'] == x]['the picture url']
print('source is', merge[merge['rating'] == x]['the picture url'])
display(HTML('<img src="https://pbs.twimg.com/media/CmgBZ7kWcAAlzFD.jpg" />'))
# save the cleaned data
merge.to_csv('merged_data.csv', index=False)
merge.info()
# verify that retweets were removed
merge[merge['text x'].str.contains('RT @')] ###Output _____no_output_____ ###Markdown Nanodegree: Project 2. Import Statements. ###Code import pandas as pd
import numpy as np
import requests
import re
import tweepy
from tweepy import OAuthHandler
import json
from timeit import default_timer as timer
import os
import math
import matplotlib.dates
import matplotlib.pyplot as plt
import warnings
%matplotlib inline ###Output _____no_output_____ ###Markdown Twitter Preparation.
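A safer variant of the credential setup in the preparation cell keeps the four Twitter secrets out of the notebook entirely by reading them from environment variables. This is a minimal sketch only; the `TW_*` variable names and the `load_credentials` helper are illustrative assumptions, not part of the project or of Tweepy's API.

```python
import os

def load_credentials():
    """Read the four Twitter credentials from the environment.

    Returns the credentials as a dict plus a list of any variables
    that are unset, so the caller can fail early with a clear message.
    """
    keys = ["TW_CONSUMER_KEY", "TW_CONSUMER_SECRET",
            "TW_ACCESS_TOKEN", "TW_ACCESS_SECRET"]
    creds = {k: os.environ.get(k, "") for k in keys}
    missing = [k for k in keys if not creds[k]]
    return creds, missing

creds, missing = load_credentials()
```

The returned values could then feed `tweepy.OAuthHandler` directly, and the notebook can be shared without scrubbing secrets first.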
###Code consumer_key = 'YOUR CONSUMER KEY'
consumer_secret = 'YOUR CONSUMER SECRET'
access_token = 'YOUR ACCESS TOKEN'
access_secret = 'YOUR ACCESS SECRET'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) ###Output _____no_output_____ ###Markdown Step 1: Data Gathering:
* Reading the Twitter archive into a Pandas DataFrame.
* Downloading the image-predictions.tsv file programmatically from the URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv using the Requests library.
* Using the Twitter API (requires an API key not yet created, hence the code from the course Supporting Materials). ###Code # reading twitter-archive-enhanced.csv into a Pandas DataFrame
archive_df = pd.read_csv('twitter-archive-enhanced.csv')
# downloading the image-predictions.tsv file
image_predictions_url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv'
response = requests.get(image_predictions_url)
file_name = image_predictions_url.split('/')[-1]
# only write the file if it has not been downloaded already
if not os.path.isfile(file_name):
    with open(file_name, 'wb') as file:
        file.write(response.content)
# read the image-predictions.tsv file into a DataFrame
image_predictions_df = pd.read_csv('image-predictions.tsv', delimiter='\t')
"""" This block of code has been copied from twitter_api.py, which was provided in the Supporting Materials section of the project (page 4, Twitter API). It depends on Cell # 5, which currently does not include the necessary data to run correctly, hence it is being skipped.
"""" tweet_ids = archive_df.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file if not os.path.isfile('tweet_json.txt'): with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) # read tweet_json text into a pandas DataFrame df_list = [] with open('tweet-json.txt', 'r') as file: for line in file: tweet = json.loads(line) tweet_id = tweet['id'] retweet_count = tweet['retweet_count'] fav_count = tweet['favorite_count'] user_count = tweet['user']['followers_count'] df_list.append({'tweet_id':tweet_id, 'retweet_count':retweet_count, 'favorite_count':fav_count, 'user_count': user_count}) api_df = pd.DataFrame(df_list) ###Output _____no_output_____ ###Markdown Step 2: Data Assessment Visual Assessment* By Printing Samples of the Data, and Observing them. Programmatic Assessment* By means of using info, describe, head, tail, and sample.* head, tail will not be run here in order to save space. since their output could be infered from the visual assessment step. 
###Code # Visual Assessment:
archive_df
# Programmatic Assessment of archive_df
archive_df.info()
print('\n**** A Sample of 15 Entries ****\n{}'.format(archive_df.sample(15)))
print('\n**** Value Count for dog stages ****\n')
print('\n**** DOGGO ****\n{}'.format(archive_df.doggo.value_counts()))
print('\n**** FLOOFER ****\n{}'.format(archive_df.floofer.value_counts()))
print('\n**** PUPPER ****\n{}'.format(archive_df.pupper.value_counts()))
print('\n**** PUPPO ****\n{}'.format(archive_df.puppo.value_counts()))
print('\n**** Value Count for dog names ****\n')
print('\n**** DOG Name Value Count ****\n{}'.format(archive_df.name.value_counts()))
print('\n**** archive shape ****\n{}'.format(archive_df.shape))
print('\n**** source value counts ****\n{}'.format(archive_df.source.value_counts()))
print('\n**** value counts for dogs with 2 classifications ****\n{}'.format(archive_df.query('doggo != "None" & floofer != "None"')))
print('\n**** are there duplicated Tweet IDs ****\n{}'.format(archive_df['tweet_id'].duplicated().any())) ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB **** A Sample of 15 Entries **** tweet_id in_reply_to_status_id in_reply_to_user_id \ 846 766313316352462849 NaN NaN 1625 684830982659280897 NaN NaN 71 878776093423087618 NaN NaN 1289
708149363256774660 NaN NaN 902 758467244762497024 NaN NaN 1369 702332542343577600 NaN NaN 1420 698262614669991936 NaN NaN 1456 695314793360662529 NaN NaN 2184 668988183816871936 NaN NaN 1874 675135153782571009 NaN NaN 1150 726224900189511680 NaN NaN 630 794332329137291264 NaN NaN 1823 676533798876651520 NaN NaN 1682 681891461017812993 NaN NaN 860 763167063695355904 NaN NaN timestamp \ 846 2016-08-18 16:38:26 +0000 1625 2016-01-06 20:16:44 +0000 71 2017-06-25 00:45:22 +0000 1289 2016-03-11 04:35:39 +0000 902 2016-07-28 01:00:57 +0000 1369 2016-02-24 03:21:41 +0000 1420 2016-02-12 21:49:15 +0000 1456 2016-02-04 18:35:39 +0000 2184 2015-11-24 03:03:06 +0000 1874 2015-12-11 02:08:58 +0000 1150 2016-04-30 01:41:23 +0000 630 2016-11-04 00:15:59 +0000 1823 2015-12-14 22:46:41 +0000 1682 2015-12-29 17:36:07 +0000 860 2016-08-10 00:16:21 +0000 source \ 846 <a href="http://twitter.com/download/iphone" r... 1625 <a href="http://vine.co" rel="nofollow">Vine -... 71 <a href="http://twitter.com/download/iphone" r... 1289 <a href="http://twitter.com/download/iphone" r... 902 <a href="http://twitter.com/download/iphone" r... 1369 <a href="http://vine.co" rel="nofollow">Vine -... 1420 <a href="http://twitter.com/download/iphone" r... 1456 <a href="http://twitter.com/download/iphone" r... 2184 <a href="http://twitter.com/download/iphone" r... 1874 <a href="http://twitter.com/download/iphone" r... 1150 <a href="http://twitter.com/download/iphone" r... 630 <a href="http://twitter.com/download/iphone" r... 1823 <a href="http://twitter.com/download/iphone" r... 1682 <a href="http://twitter.com/download/iphone" r... 860 <a href="http://twitter.com/download/iphone" r... text retweeted_status_id \ 846 This is Oscar. He has legendary eyebrows and h... NaN 1625 This little fella really hates stairs. Prefers... NaN 71 This is Snoopy. He's a proud #PrideMonthPuppo.... NaN 1289 This is Jebberson. He's the reigning hide and ... NaN 902 Why does this never happen at my front door...... 
NaN 1369 This is Rudy. He's going to be a star. 13/10 t... NaN 1420 This is Franklin. He's a yoga master. Trying t... NaN 1456 This is Colin. He really likes green beans. It... NaN 2184 Honor to rate this dog. Lots of fur on him. Tw... NaN 1874 This is Steven. He got locked outside. Damn it... NaN 1150 I'm getting super heckin frustrated with you a... NaN 630 This is Nimbus (like the cloud). He just bough... NaN 1823 ITSOFLUFFAYYYYY 12/10 https://t.co/bfw13CnuuZ NaN 1682 Say hello to Charlie. He's scholarly af. Quite... NaN 860 RT @dog_rates: Meet Eve. She's a raging alcoho... 6.732953e+17 retweeted_status_user_id retweeted_status_timestamp \ 846 NaN NaN 1625 NaN NaN 71 NaN NaN 1289 NaN NaN 902 NaN NaN 1369 NaN NaN 1420 NaN NaN 1456 NaN NaN 2184 NaN NaN 1874 NaN NaN 1150 NaN NaN 630 NaN NaN 1823 NaN NaN 1682 NaN NaN 860 4.196984e+09 2015-12-06 00:17:55 +0000 expanded_urls rating_numerator \ 846 https://twitter.com/dog_rates/status/766313316... 12 1625 https://vine.co/v/eEZXZI1rqxX 13 71 https://twitter.com/dog_rates/status/878776093... 13 1289 https://twitter.com/dog_rates/status/708149363... 10 902 https://twitter.com/dog_rates/status/758467244... 165 1369 https://vine.co/v/irlDujgwOjd 13 1420 https://twitter.com/dog_rates/status/698262614... 11 1456 https://twitter.com/dog_rates/status/695314793... 10 2184 https://twitter.com/dog_rates/status/668988183... 7 1874 https://twitter.com/dog_rates/status/675135153... 5 1150 https://twitter.com/dog_rates/status/726224900... 9 630 https://twitter.com/dog_rates/status/794332329... 12 1823 https://twitter.com/dog_rates/status/676533798... 12 1682 https://twitter.com/dog_rates/status/681891461... 10 860 https://twitter.com/dog_rates/status/673295268... 
8 rating_denominator name doggo floofer pupper puppo 846 10 Oscar None None None None 1625 10 None None None pupper None 71 10 Snoopy None None None puppo 1289 10 Jebberson None None None None 902 150 None None None None None 1369 10 Rudy None None None None 1420 10 Franklin None None None None 1456 10 Colin None None None None 2184 10 None None None None None 1874 10 Steven None None None None 1150 10 None None None None None 630 10 Nimbus None None None None 1823 10 None None None None None 1682 10 Charlie None None pupper None 860 10 Eve None None pupper None **** Value Count for dog stages **** **** DOGGO **** None 2259 doggo 97 Name: doggo, dtype: int64 **** FLOOFER **** None 2346 floofer 10 Name: floofer, dtype: int64 **** PUPPER **** None 2099 pupper 257 Name: pupper, dtype: int64 **** PUPPO **** None 2326 puppo 30 Name: puppo, dtype: int64 **** Value Count for dog names **** **** DOG Name Value Count **** None 745 a 55 Charlie 12 Cooper 11 Lucy 11 Oliver 11 Penny 10 Lola 10 Tucker 10 Bo 9 Winston 9 the 8 Sadie 8 Toby 7 Buddy 7 Bailey 7 Daisy 7 an 7 Jack 6 Milo 6 Oscar 6 Koda 6 Dave 6 Scout 6 Leo 6 Bella 6 Rusty 6 Jax 6 Stanley 6 Finn 5 ... 
Pumpkin 1
Maxwell 1
Aqua 1
Pilot 1
Moofasa 1
Mac 1
Apollo 1
Antony 1
Sweets 1
Caryl 1
Darby 1
Marvin 1
Norman 1
Timber 1
Randall 1
Arlen 1
Chef 1
Tango 1
Spencer 1
Diogi 1
Reptar 1
Sephie 1
Freddery 1
Beemo 1
Doobert 1
Hazel 1
Grady 1
Eevee 1
Fiji 1
Maude 1
Name: name, Length: 957, dtype: int64

**** archive shape ****
(2356, 17)

**** source value counts ****
<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a> 2221
<a href="http://vine.co" rel="nofollow">Vine - Make a Scene</a> 91
<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a> 33
<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a> 11
Name: source, dtype: int64
###Markdown
Getting to Know the Data:

'Archive DF' columns:
- **tweet_id**: table primary key, unique identifier of each tweet.
- **in_reply_to_status_id**: indicates if a tweet was in reply to another tweet (status); if so, holds an integer representation of the original tweet ID.
- **in_reply_to_user_id**: indicates if a tweet was in reply to a user; if so, holds an integer representation of the original tweet author.
- **timestamp**: tweet creation date and time.
- **source**: the client used to post the tweet, stored as an HTML anchor.
- **text**: tweet text/content.
- **retweeted_status_id**: indicates if a tweet was a retweet; if so, holds an integer representation of the original tweet.
- **retweeted_status_user_id**: indicates if a tweet was a retweet; if so, holds an integer representation of the original tweet author.
- **retweeted_status_timestamp**: indicates if a tweet was a retweet; if so, holds a timestamp representation of the original tweet time.
- **expanded_urls**: expanded URL(s) of the tweet.
- **rating_numerator**: numerator of the dog's rating.
- **rating_denominator**: denominator of the dog's rating (normally 10).
- **name**: name of the dog under consideration.
- **doggo**: a stage of dog.
- **floofer**: a stage of dog.
- **pupper**: a stage of dog.
- **puppo**: a stage of dog.

Observed Data Quality Issues in DataFrame archive_df

Completeness
- only 394 entries have a dog stage classification.
- 745 entries do not have a dog name; the value recorded is
None.
- not all records are original tweets (some are retweets).
- expanded_urls has only 2297 entries, while it should have 2356.

Validity
- 55 dog name entries are 'a', indicating the name was not extracted correctly from the text.
- record 775 has the dog name 'O'.
- doggo, floofer, pupper, and puppo use the string "None" instead of an empty string.
- the source field includes both the source and a link for downloading the source.
- not all tweets are of dogs.

Consistency
- the timestamp column is an object instead of datetime.
- some records have more than one stage of dog defined, for example record 202.

Observed Data Tidiness Issues in DataFrame archive_df
- Columns doggo, floofer, pupper, and puppo hold values of a single variable and should be collapsed into one stage column, which should also be converted to a categorical datatype (quality issue).
- Columns not useful to the analysis shall be dropped, such as the columns used to detect whether a tweet is original or a retweet.
###Code
# Visual Assessment: image_predictions_df

# Programmatic Assessment of image_predictions_df
# note: only the last expression in a cell is rendered automatically,
# so the intermediate sample() and describe() frames below are not shown in the output.
image_predictions_df.info()
print('\n**** A Sample of 15 Entries ****\n')
image_predictions_df.sample(15)
print('\n**** Table Description ****\n')
image_predictions_df.describe()
print('\n**** confidence Max Value ****\n')
print('\n**** P1 Confidence Max Value ****\n{}'.format(image_predictions_df.p1_conf.max()))
print('\n**** P2 Confidence Max Value ****\n{}'.format(image_predictions_df.p2_conf.max()))
print('\n**** P3 Confidence Max Value ****\n{}'.format(image_predictions_df.p3_conf.max()))
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2075 entries, 0 to 2074
Data columns (total 12 columns):
tweet_id    2075 non-null int64
jpg_url     2075 non-null object
img_num     2075 non-null int64
p1          2075 non-null object
p1_conf     2075 non-null float64
p1_dog      2075 non-null bool
p2          2075 non-null object
p2_conf     2075 non-null float64
p2_dog      2075 non-null bool
p3          2075 non-null object
p3_conf     2075 non-null float64
p3_dog      2075 non-null bool
dtypes: bool(3),
float64(3), int64(2), object(4)
memory usage: 152.1+ KB

**** A Sample of 15 Entries ****


**** Table Description ****


**** confidence Max Value ****


**** P1 Confidence Max Value ****
1.0

**** P2 Confidence Max Value ****
0.4880140000000001

**** P3 Confidence Max Value ****
0.273419
###Markdown
Getting to know the data
- **tweet_id**: reference to the tweet with the picture.
- **jpg_url**: URL of the picture being predicted.
- **img_num**: number of the image in the tweet that the predictions correspond to.
- **p1**: first prediction.
- **p1_conf**: confidence in the first prediction.
- **p1_dog**: boolean indicating whether the prediction is in fact a dog.
- **p2**: second prediction.
- **p2_conf**: confidence in the second prediction.
- **p2_dog**: boolean indicating whether the prediction is in fact a dog.
- **p3**: third prediction.
- **p3_conf**: confidence in the third prediction.
- **p3_dog**: boolean indicating whether the prediction is in fact a dog.

Observed Data Quality Issues in DataFrame image_predictions_df

Completeness
- The number of observations in image_predictions_df is less than the number of observations in archive_df; therefore not all archived tweets have images.

Accuracy
- Some observations are false in all three predictions, for example entry 2074.

Observed Tidiness Issues in DataFrame image_predictions_df
- Column names are not descriptive:
  - p1 should be prediction 1, p2 should be prediction 2, p3 should be prediction 3.
  - conf should be confidence.
###Code
# Visual Assessment api_df

# Programmatic Assessment
api_df.info()
api_df.describe()

# Pre Data Cleaning Preparation
# Create Copies of Original Data Before Cleaning
archive_df_clean = archive_df.copy()
image_predictions_df_clean = image_predictions_df.copy()
api_df_clean = api_df.copy()
###Output
_____no_output_____
###Markdown
Step 3: Data Cleaning

First Cleaning Directive: (2 out of 8 Quality Issues in terms of the Project Rubric)

Satisfy the project requirement of removing all tweets that are not original (retweets) or that have no images.

Definition

An original tweet is one that is not a retweet.
A retweet is identified by having a retweeted_status_id; those must first be deleted. A tweet that does not have a photo is one whose ID is not present in the image_predictions DataFrame.

Therefore, we shall first delete from the archive DataFrame all tweets whose tweet ID is not present in the image_predictions DataFrame.

Next we shall delete all tweets that are retweets, by checking the fields
- in_reply_to_status_id
- in_reply_to_user_id
- retweeted_status_id
- retweeted_status_user_id

If they are NaN, the tweet is original; otherwise it is a retweet and must be deleted from all DataFrames.

Code First Directive
###Code
# Code Portion for First Cleaning Directive:
# Delete all tweets that do not have a picture, and tweets that are retweets. (Quality Issues 1/8 & 2/8)
# This also fixes the quality issue that the number of observations in image_predictions_df
# is less than the number of observations in archive_df,
# i.e. that not all archived tweets have images. (Quality Issue 8/8)
index_tweet_id = archive_df_clean.columns.get_loc('tweet_id')
for row in range(0, len(archive_df_clean)):
    if archive_df_clean.tweet_id[row] not in image_predictions_df_clean.tweet_id.values:
        archive_df_clean.drop([row], inplace = True)

# temp DataFrame that only includes original tweets.
archive_df_clean_temp = archive_df_clean.query('in_reply_to_status_id != in_reply_to_status_id &'+
                                               'in_reply_to_user_id != in_reply_to_user_id &'+
                                               'retweeted_status_id != retweeted_status_id &'+
                                               'retweeted_status_user_id != retweeted_status_user_id')

# delete rows from image_predictions and api_df that are not in the temp DataFrame
for row in range(0, len(image_predictions_df_clean)):
    if image_predictions_df_clean.tweet_id[row] not in archive_df_clean_temp.tweet_id.values:
        image_predictions_df_clean.drop([row], inplace = True)

# Change: noticed a discrepancy in the record count between API and Image Predictions; this could be the result of
# the loop used to clean the two tables not covering all of api_df, therefore it was split into two loops.
for row in range(0, len(api_df_clean)):
    if api_df_clean.tweet_id[row] not in archive_df_clean_temp.tweet_id.values:
        api_df_clean.drop([row], inplace = True)

# test portion for First Cleaning Directive
print('Dropping Tweets with no Picture\n After Cleaning {}\n Before Cleaning {}'.format(archive_df_clean.shape,archive_df.shape))
print('**** Dropping Retweets ****')
print('Archived Tweets\n Before Dropping Retweets {} \n After Dropping Retweets {} \n'.format(archive_df_clean.shape,archive_df_clean_temp.shape))
print('Image Predictions\n Before Dropping Retweets {} \n After Dropping Retweets {} \n'.format(image_predictions_df.shape,image_predictions_df_clean.shape))
print('API Df\n Before Dropping Retweets {} \n After Dropping Retweets {} \n'.format(api_df.shape, api_df_clean.shape))

# Save New archive_df_clean
archive_df_clean = archive_df_clean_temp
###Output
Dropping Tweets with no Picture
 After Cleaning (2075, 17)
 Before Cleaning (2356, 17)
**** Dropping Retweets ****
Archived Tweets
 Before Dropping Retweets (2075, 17)
 After Dropping Retweets (1971, 17)

Image Predictions
 Before Dropping Retweets (2075, 12)
 After Dropping Retweets (1971, 12)

API Df
 Before Dropping Retweets (2354, 4)
 After Dropping Retweets (1971, 4)

###Markdown
Second Cleaning Directive:

Quality Issues:

Definitions:
- 3/8 doggo, floofer, pupper, and puppo use the string "None" instead of an empty string. This can easily be corrected by replacing "None" with an empty string.
- 4/8 The source field includes both the source and a link for downloading the source application. This can be fixed by looking for and replacing the source strings.
- 5/8 The timestamp column is an object instead of datetime in DataFrame archive_df. This can be fixed by converting the column type.
- 6/8 Not all tweets are of dogs. This can be fixed by dropping the rows of archive_df, and of the subsequent tables, that include the statement "Please only send in dogs".
- 7/8 Some observations are false in all three predictions, for example entry 2074 in DataFrame image_predictions. This can be resolved by removing all tweets whose image predictions are not identified as dogs, that is, where all boolean prediction values are False.
- 8/8 The number of observations in image_predictions_df is less than the number of observations in archive_df; therefore not all archived tweets have images. This is resolved by the project-motivation fix of ensuring that only original tweets with images are considered.
- (9/8) Some records in archive_df have two or more classifications, hence the stages are not mutually exclusive. This can be resolved by dropping the record, unless the two classifications are doggo and pupper (according to the Dogtionary); for those records we can remove one of the classifications.

Tidiness
- (1/2) All tables could be merged into a single DataFrame, since all tables use tweet ID as the primary key.

Project Review Changes:
- (2/2) Instead of melting predictions, they shall remain separate columns, with the column names changed to be more descriptive and all but the top prediction removed.

Old, to-be-changed comment (15/10/2020):
- (2/2) Predictions should be melted, since they are observations and not variables.
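As an aside on the stage-consolidation plan above: the four indicator columns can be collapsed into a single stage column without a row-by-row loop over the real data. The sketch below is a minimal, self-contained illustration on a toy frame; the values and the `collapse` helper are made up for demonstration and are not part of the project code.

```python
import pandas as pd

# Toy frame mimicking the four stage columns discussed above
# (values are illustrative, not the project data).
df = pd.DataFrame({
    'doggo':   ['None', 'doggo', 'None'],
    'floofer': ['None', 'None', 'None'],
    'pupper':  ['pupper', 'None', 'None'],
    'puppo':   ['None', 'None', 'None'],
})

stage_cols = ['doggo', 'floofer', 'pupper', 'puppo']

def collapse(row):
    # Keep the first non-'None' tag per row; None marks rows with no stage.
    tags = [row[c] for c in stage_cols if row[c] != 'None']
    return tags[0] if tags else None

df['dog_stage'] = df.apply(collapse, axis=1)
print(df['dog_stage'].tolist())  # ['pupper', 'doggo', None]
```

A row tagged with two stages would simply keep the first tag in column order here; the notebook instead repairs or drops such rows, as described in issue (9/8).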
Code Second Directive
###Code
# Code for Second Cleaning Directive

# Fixing 'None' for doggo, floofer, pupper and puppo (3/8 Quality Issues).
# (15/10/2020): Rather than an empty string, replacing 'None' with 0,
# and the stages doggo with 1, floofer with 2, pupper with 3, and puppo with 4.
for row in range(0, len(archive_df_clean)):
    if archive_df_clean.iat[row, archive_df_clean.columns.get_loc('doggo')].lower() == 'none':
        archive_df_clean.iat[row, archive_df_clean.columns.get_loc('doggo')] = 0
    else:
        archive_df_clean.iat[row, archive_df_clean.columns.get_loc('doggo')] = 1
    if archive_df_clean.iat[row, archive_df_clean.columns.get_loc('floofer')].lower() == 'none':
        archive_df_clean.iat[row, archive_df_clean.columns.get_loc('floofer')] = 0
    else:
        archive_df_clean.iat[row, archive_df_clean.columns.get_loc('floofer')] = 2
    if archive_df_clean.iat[row, archive_df_clean.columns.get_loc('pupper')].lower() == 'none':
        archive_df_clean.iat[row, archive_df_clean.columns.get_loc('pupper')] = 0
    else:
        archive_df_clean.iat[row, archive_df_clean.columns.get_loc('pupper')] = 3
    if archive_df_clean.iat[row, archive_df_clean.columns.get_loc('puppo')].lower() == 'none':
        archive_df_clean.iat[row, archive_df_clean.columns.get_loc('puppo')] = 0
    else:
        archive_df_clean.iat[row, archive_df_clean.columns.get_loc('puppo')] = 4

# Fixing Column 'source' of DataFrame: archive_df_clean (4 out of 8 Quality Issues)
index_source = archive_df_clean.columns.get_loc('source')
# the following pattern has not been used, since a simpler method was followed;
# the pattern is kept for reference only, in case we wish to reuse it somewhere else.
pattern = r'<a\s+(?:[^>]*?\s+)?href=([\"\'])(.*?)\1 rel=\"nofollow\">'
for row in range(0, len(archive_df_clean)):
    try:
        if archive_df_clean.iloc[row]['source'] == '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>':
            archive_df_clean.iat[row, index_source] = 'Twitter for iPhone'
        elif archive_df_clean.iloc[row]['source'] == '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>':
            archive_df_clean.iat[row, index_source] = 'Twitter Web Client'
        elif archive_df_clean.iloc[row]['source'] == '<a href="http://vine.co" rel="nofollow">Vine - Make a Scene</a>':
            archive_df_clean.iat[row, index_source] = 'Vine'
        elif archive_df_clean.iloc[row]['source'] == '<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>':
            archive_df_clean.iat[row, index_source] = 'tweetdeck'
        else:
            print('Unexpected source {} at {}'.format(archive_df_clean.iloc[row]['source'], row))
    except:
        print('error at line {}, source value is {}'.format(row, archive_df_clean.iloc[row]['source']))

# Fixing Column 'timestamp' of archive_df_clean (5/8 Quality Issues)
# because of an error in understanding the timezone, this code has been modified to neglect the time zone; and because of
# time limitation in submitting the project, this code has not been modified to include the timezone in UTC format.
# (Fix) use the .str accessor to strip the trailing ' +0000' from each string;
# plain [:-6] would slice off the last 6 rows of the Series instead of the last 6 characters.
archive_df_clean['timestamp'] = pd.to_datetime(archive_df_clean['timestamp'].str[:-6], format='%Y-%m-%d %H:%M:%S')

# Removing tweets that are not of dogs (Quality Issue 6/8)
# Similar to fixing the source column, the intention is to search for the string 'Please only send in dogs'
# and drop the row in question.
index_text = archive_df_clean.columns.get_loc('text')
index_tweet_id = archive_df_clean.columns.get_loc('tweet_id')
for row in range(0, len(archive_df_clean)):
    if re.search('Please only send in dogs', archive_df_clean.iat[row, index_text]) != None:
        # Match found: delete the record from api_df and image_predictions first.
        # (Fix) assign the filtered frames back; the original expressions discarded their results.
        match_id = archive_df_clean.iat[row, index_tweet_id]
        api_df_clean = api_df_clean[api_df_clean.tweet_id != match_id]
        image_predictions_df_clean = image_predictions_df_clean[image_predictions_df_clean.tweet_id != match_id]

# (Fix) drop the matching rows from the archive itself after the loop,
# rather than mutating archive_df_clean while iterating over it.
archive_df_clean = archive_df_clean[~archive_df_clean.text.str.contains('Please only send in dogs')]

# Removing Tweets whose image predictions state they are not of dogs. (Quality Issue 7/8)
for tweet_id in image_predictions_df_clean.query('p1_dog == False & p2_dog == False & p3_dog == False').tweet_id:
    # delete tweet from api_df_clean
    api_df_clean = api_df_clean[api_df_clean.tweet_id != tweet_id]
    # delete tweet from archive_df_clean
    archive_df_clean = archive_df_clean[archive_df_clean.tweet_id != tweet_id]
    # delete tweet from image_predictions_df_clean
    image_predictions_df_clean = image_predictions_df_clean[image_predictions_df_clean.tweet_id != tweet_id]

archive_df_clean.info()

# Fixing Records with Non Mutually Exclusive Classifications (Quality Issue 9/8 - Part 1)
# if a dog is classified as both doggo and pupper, as per "THE DOGTIONARY", the record can be labeled as doggo,
# hence removing the pupper classification and maintaining the record.
for row in range(0,len(archive_df_clean)): if archive_df_clean.iloc[row].doggo == 1 and archive_df_clean.iloc[row].pupper == 3: archive_df_clean.iat[row, archive_df_clean.columns.get_loc('pupper')] = 0 # Fixing Records with Non Mutually exclusive Classifications, (Quality Issue 9/8 - Part 2) # (15/10/2020) Remember Doggo is 1, Floofer is 2, pupper is 3, and puppo is 4 index_tweet_id = archive_df_clean.columns.get_loc('tweet_id') tweet_id = None for row in range(0,len(archive_df_clean)): try: if archive_df_clean.iloc[row].doggo == 1 and archive_df_clean.iloc[row].floofer == 2: print('doggo and floofer - row {} - dropping record'.format(row)) tweet_id = archive_df_clean.iat[row, index_tweet_id] if archive_df_clean.iloc[row].doggo == 1 and archive_df_clean.iloc[row].puppo == 4: print('doggo and puppo - row {} - dropping record'.format(row)) tweet_id = archive_df_clean.iat[row, index_tweet_id] if archive_df_clean.iloc[row].floofer == 2 and archive_df_clean.iloc[row].puppo == 4: print('floofer and puppo - row {}'.format(row)) tweet_id = archive_df_clean.iat[row, index_tweet_id] if archive_df_clean.iloc[row].floofer == 2 and archive_df_clean.iloc[row].pupper == 3: print('floofer and pupper - row {}'.format(row)) tweet_id = archive_df_clean.iat[row, index_tweet_id] if archive_df_clean.iloc[row].pupper == 3 and archive_df_clean.iloc[row].puppo == 4: print('pupper and puppo - row {}'.format(row)) tweet_id = archive_df_clean.iat[row, index_tweet_id] if tweet_id is not None: api_df_clean = api_df_clean[api_df_clean.tweet_id != tweet_id] image_predictions_df_clean= image_predictions_df_clean[image_predictions_df_clean.tweet_id != tweet_id] archive_df_clean = archive_df_clean[archive_df_clean.tweet_id != tweet_id] tweet_id = None except: print('should do error logging') # Fixing api_df_Clean size ensuring all tweets in api_df_clean are in image predictions and archive df. 
# (Quality Issue 10/8) print ('BEFORE FIX - Clean Archive Shape {}\nClean Image Predictions Shape {}\nClean API Shape {}'.format(archive_df_clean.shape, image_predictions_df_clean.shape, api_df_clean.shape)) for row in range(0, len(api_df_clean)): try: api_tweet_id = api_df_clean.iloc[row].tweet_id if not (api_tweet_id in image_predictions_df_clean.values and api_tweet_id in archive_df_clean.values): api_df_clean = api_df_clean[api_df_clean.tweet_id != api_tweet_id] except Exception as e: print('error message {}'.format(e)) print ('After FIX - Clean Archive Shape {}\nClean Image Predictions Shape {}\nClean API Shape {}'.format(archive_df_clean.shape, image_predictions_df_clean.shape, api_df_clean.shape)) # New Quality Issue Added after Project Review : Change Doggo, Floofer, Pupper, and puppo type to float. archive_df_clean['doggo'] = archive_df_clean['doggo'].astype(str).astype(int) archive_df_clean['floofer'] = archive_df_clean['floofer'].astype(str).astype(int) archive_df_clean['pupper'] = archive_df_clean['pupper'].astype(str).astype(int) archive_df_clean['puppo'] = archive_df_clean['puppo'].astype(str).astype(int) # (15/10/2020) Concatenate all Dog Classifications into a new column. archive_df_clean['dog_stage'] = archive_df_clean['doggo'] + archive_df_clean['floofer'] + archive_df_clean['pupper'] + archive_df_clean['puppo'] # (15/10/2020) Remove the original 4 columns of dog stages archive_df_clean.drop(['doggo','floofer','pupper','puppo'], axis='columns', inplace=True) archive_df_clean.info() archive_df_clean[archive_df_clean.dog_stage != 0] # (15/10/2020) cleaning names Replacing all names that are a single character with None. 
for row in range(0, len(archive_df_clean)):
    if archive_df_clean.iloc[row]['name'] != None:
        if len(archive_df_clean.iloc[row]['name']) == 1:
            # print('row: {}, name: {}'.format(row, archive_df_clean.iloc[row]['name']))
            # archive_df_clean.iloc[row]['name'] == None
            archive_df_clean.iat[row, archive_df_clean.columns.get_loc('name')] = None

# Decimal Rating
# (15/10/2020) From Project Review: The rating_numerator column should be of type float
# and it should also be correctly extracted. On assessing the twitter_enhanced.csv
# file you will see on row 46 that the correct rating in the tweet is 13.5 but it's extracted
# as 5. There are many more such issues in the dataset. So it should be extracted and cleaned correctly.
archive_df_clean['rating_numerator'] = archive_df_clean['rating_numerator'].astype(float)
archive_df_clean['rating_denominator'] = archive_df_clean['rating_denominator'].astype(float)
for row in range(0, len(archive_df_clean)):
    try:
        match_result = re.findall(r"\d+\.?\d*/10", archive_df_clean.iat[row, archive_df_clean.columns.get_loc('text')])
        # (Fix) convert the extracted strings to float before assigning into the float columns.
        archive_df_clean.iat[row, archive_df_clean.columns.get_loc('rating_numerator')] = float(match_result[0].split("/")[0])
        archive_df_clean.iat[row, archive_df_clean.columns.get_loc('rating_denominator')] = float(match_result[0].split("/")[1])
    except Exception as e:
        print('check row {} Exception {}'.format(row, e))

# Check for values where the denominator is more than 10
archive_df_clean[archive_df_clean['rating_denominator'] > 10]

# Fix values by dividing the numerator by 10 and setting the denominator to 10.
# (Fix) select the offending rows with a boolean mask; the earlier positional loop
# modified the first rows of the whole DataFrame instead of the rows with denominator > 10.
mask = archive_df_clean['rating_denominator'] > 10
archive_df_clean.loc[mask, 'rating_numerator'] = archive_df_clean.loc[mask, 'rating_numerator'] / 10
archive_df_clean.loc[mask, 'rating_denominator'] = 10
###Output
_____no_output_____
###Markdown
Testing
###Code
# Test for
Second Cleaning Directive #Checking there are no Dogs classified as None # (15/10/2020) Following Testing is no Longer Applicable #print('/n **** Dogs with None instead of empty string in classification **** /n ') #print(archive_df_clean.query('doggo in ("None", "none")'+ # '| floofer in ("None", "none")'+ # '| pupper in ("None", "none")'+ # '| puppo in ("None", "none")')) # Checking Column source of the archive_df_clean print('Archive DF Clean Source Value Counts \n{}'.format(archive_df_clean.source.value_counts())) # Checking timestamp type archive_df_clean.info() # check that no more 'Please only send in dogs' are present print('{}'.format(archive_df_clean.query('"Please only send in dogs" in text'))) # check there are no more image predictions where all predictions are not of dogs print('{}'.format(image_predictions_df_clean.query('p1_dog == False & p2_dog == False & p3_dog == False'))) # Testing Non Mutually exclusive classifications - No longer applicable all stages are in a single column print ('Clean Archive Shape {}\nClean Image Predictions Shape {}\nClean API Shape {}'.format(archive_df_clean.shape, image_predictions_df_clean.shape, api_df_clean.shape)) ###Output Archive DF Clean Source Value Counts Twitter for iPhone 1633 Twitter Web Client 22 tweetdeck 9 Name: source, dtype: int64 <class 'pandas.core.frame.DataFrame'> Int64Index: 1664 entries, 1 to 2355 Data columns (total 14 columns): tweet_id 1664 non-null int64 in_reply_to_status_id 0 non-null float64 in_reply_to_user_id 0 non-null float64 timestamp 1658 non-null datetime64[ns] source 1664 non-null object text 1664 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 1664 non-null object rating_numerator 1664 non-null float64 rating_denominator 1664 non-null float64 name 1617 non-null object dog_stage 1664 non-null int64 dtypes: datetime64[ns](1), float64(6), int64(2), object(5) memory usage: 
195.0+ KB
Empty DataFrame
Columns: [tweet_id, in_reply_to_status_id, in_reply_to_user_id, timestamp, source, text, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp, expanded_urls, rating_numerator, rating_denominator, name, dog_stage]
Index: []
Empty DataFrame
Columns: [tweet_id, jpg_url, img_num, p1, p1_conf, p1_dog, p2, p2_conf, p2_dog, p3, p3_conf, p3_dog]
Index: []
Clean Archive Shape (1664, 14)
Clean Image Predictions Shape (1664, 12)
Clean API Shape (1664, 4)
###Markdown
Tidiness
###Code
# Make New Copies for Tidiness, in case we need to revert back.
archive_tidy = archive_df_clean.copy()
image_predictions_tidy = image_predictions_df_clean.copy()
api_tidy = api_df_clean.copy()

# (1/2) all tables could be merged in a single DataFrame, since all tables use tweet ID as the primary key.
# Sub Step 1: First drop columns no longer useful from archive_tidy.
archive_tidy.drop(['in_reply_to_status_id','in_reply_to_user_id','expanded_urls','retweeted_status_id','retweeted_status_user_id','retweeted_status_timestamp'], axis = 1, inplace = True)

# Comment from Project Review: Instead of merging predictions, merge the dog stage columns.
# (2/2) predictions should be melted since they are observations and not variables.
# (15/10/2020) No longer applicable, since all classifications have been concatenated in dog_stage
# and the other classification columns have been deleted.
#for row in range(r, len(archive_df_clean)):
#    if archive_df_clean['doggo']=''
# Sub Step 2: Melting doggo, floofer, pupper, and puppo columns.
#archive_df_clean = pd.melt(archive_df_clean,
#                           id_vars = ['tweet_id','timestamp','source','text','rating_numerator','rating_denominator','name'],
#                           value_vars = ['doggo','floofer','pupper','puppo'], var_name = 'classification', value_name = 'type')

archive_tidy.info()

# (15/10/2020) Remember: doggo is 1, floofer is 2, pupper is 3, and puppo is 4
archive_tidy.dog_stage.value_counts()

image_predictions_tidy.info()
image_predictions_tidy.sample(5)

# Sub Step 3: Melting image_predictions_df_clean <-- shall not be done; shall only rename.
# Rename columns so as to be more descriptive
image_predictions_tidy.rename(columns={"p1":"prediction_1", "p2":"prediction_2", "p3":"prediction_3",
                                       "p1_conf":"p1_confidence", "p2_conf":"p2_confidence", "p3_conf":"p3_confidence",
                                       "p1_dog":"p1_is_dog", "p2_dog":"p2_is_dog", "p3_dog":"p3_is_dog"}, inplace=True)

# Even though no longer required as per Project Review, keeping the code as an example
# for later usage by myself.
# Melt image_predictions
# Image predictions contains a number of records that need melting;
# in order to do that we will first create sub tables from the original, melt said sub tables, and then
# merge the resulting melted sub tables.
# First Sub Table to include the predictions.
# image_predictions_df_clean_1 = image_predictions_df_clean.copy() # image_predictions_df_clean_1.drop(['p1_confidence','p2_confidence','p3_confidence','p1_is_dog','p2_is_dog','p3_is_dog'],axis=1,inplace=True) # image_predictions_df_clean_1.rename(columns={'prediction_1':1,'prediction_2':2,'prediction_3':3},inplace=True) # image_predictions_df_clean_1 = pd.melt(image_predictions_df_clean_1, # id_vars = ['tweet_id','jpg_url','img_num'], # value_vars = [1,2,3], # var_name = 'prediction_no', # value_name ='prediction').sort_values('tweet_id') # image_predictions_df_clean_1.head(9) # Second Sub Table to include the confidence # image_predictions_df_clean_2 = image_predictions_df_clean.copy() # image_predictions_df_clean_2.drop(['prediction_1','prediction_2','prediction_3','p1_is_dog','p2_is_dog','p3_is_dog','jpg_url','img_num'],axis=1,inplace=True) # image_predictions_df_clean_2.rename(columns={'p1_confidence':1,'p2_confidence':2,'p3_confidence':3},inplace=True) # image_predictions_df_clean_2 = pd.melt(image_predictions_df_clean_2, # id_vars = ['tweet_id'], # value_vars = [1,2,3], # var_name = 'confidence_no', # value_name ='confidence').sort_values('tweet_id') # image_predictions_df_clean_2.head(9) # Third Sub Table to include the boolean value if the prediction is a dog. 
# image_predictions_df_clean_3 = image_predictions_df_clean.copy() # image_predictions_df_clean_3.drop(['prediction_1','prediction_2','prediction_3','p1_confidence','p2_confidence','p3_confidence','jpg_url','img_num'],axis=1,inplace=True) # image_predictions_df_clean_3.rename(columns={'p1_is_dog':1,'p2_is_dog':2,'p3_is_dog':3},inplace=True) # image_predictions_df_clean_3 = pd.melt(image_predictions_df_clean_3, # id_vars = ['tweet_id'], # value_vars = [1,2,3], # var_name = 'is_dog_no', # value_name ='is_dog').sort_values('tweet_id') # image_predictions_df_clean_3.head(9) # Merging Sub Tables 2 and 3 # image_predictions_df_clean_2.rename(columns={'confidence_no':'no'},inplace=True) # image_predictions_df_clean_3.rename(columns={'is_dog_no':'no'},inplace=True) # image_predictions_df_clean_m1 = pd.merge(image_predictions_df_clean_2, image_predictions_df_clean_3, how = 'inner', on=['tweet_id','no']) # image_predictions_df_clean_m1.head(9) # Merging First Merged Table, and final non Merged Table. # image_predictions_df_clean_1.rename(columns={'prediction_no':'no'},inplace=True) # image_predictions_df_clean_m2 = pd.merge(image_predictions_df_clean_1,image_predictions_df_clean_m1,how = 'inner', on=['tweet_id','no']) # image_predictions_df_clean_m2.head(9) # Dropping "no" Columns. # image_predictions_df_clean = image_predictions_df_clean_m2.drop('no',axis=1,inplace=False) # image_predictions_df_clean.head(9) # Merging api_df_Clean with image_predictions_df_clean merged_dfs = pd.merge(image_predictions_tidy, api_tidy, how='inner', on='tweet_id') merged_dfs.head(9) merged_dfs.info() # Keep only Highest Prediction in merged DataFrame. 
# (15/10/2020) Project review: since the predictions are separate, ordered values
# (the top-3 guesses of the algorithm), I shall add two extra columns,
# "best_bet_breed" and "best_bet_is_dog", taken from the prediction with the
# highest confidence.
confidence_1_index = merged_dfs.columns.get_loc('p1_confidence')
confidence_2_index = merged_dfs.columns.get_loc('p2_confidence')
confidence_3_index = merged_dfs.columns.get_loc('p3_confidence')
best_bet_breed = []
best_bet_is_dog = []
for row in range(0, len(merged_dfs)):
    if merged_dfs.iat[row, confidence_1_index] > merged_dfs.iat[row, confidence_2_index]:
        if merged_dfs.iat[row, confidence_1_index] > merged_dfs.iat[row, confidence_3_index]:
            # First prediction has the highest confidence value.
            best_bet_breed.append(merged_dfs.iat[row, merged_dfs.columns.get_loc('prediction_1')])
            best_bet_is_dog.append(merged_dfs.iat[row, merged_dfs.columns.get_loc('p1_is_dog')])
        else:
            # Third prediction has the highest confidence value.
            best_bet_breed.append(merged_dfs.iat[row, merged_dfs.columns.get_loc('prediction_3')])
            best_bet_is_dog.append(merged_dfs.iat[row, merged_dfs.columns.get_loc('p3_is_dog')])
    else:
        if merged_dfs.iat[row, confidence_2_index] > merged_dfs.iat[row, confidence_3_index]:
            # Second prediction has the highest confidence value.
            best_bet_breed.append(merged_dfs.iat[row, merged_dfs.columns.get_loc('prediction_2')])
            best_bet_is_dog.append(merged_dfs.iat[row, merged_dfs.columns.get_loc('p2_is_dog')])
        else:
            # Third prediction has the highest confidence value.
            best_bet_breed.append(merged_dfs.iat[row, merged_dfs.columns.get_loc('prediction_3')])
            best_bet_is_dog.append(merged_dfs.iat[row, merged_dfs.columns.get_loc('p3_is_dog')])
merged_dfs['best_bet_breed'] = best_bet_breed
merged_dfs['best_bet_is_dog'] = best_bet_is_dog
merged_dfs.head(20)
# Having merged the image predictions and the API DataFrame, we can now merge the final dataframe.
twitter_archive_master = pd.merge(archive_tidy.sort_values('tweet_id'), merged_dfs.sort_values('tweet_id'), how='inner', on='tweet_id')
twitter_archive_master.info()
# check if the master archive has any non-dog data in the best_bet_is_dog column
twitter_archive_master.query('best_bet_is_dog == False')
# Delete all rows (203 rows) where the best bet breed is not a dog.
twitter_archive_master = twitter_archive_master[twitter_archive_master['best_bet_is_dog'] != False]
twitter_archive_master.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1461 entries, 0 to 1663
Data columns (total 24 columns):
tweet_id              1461 non-null int64
timestamp             1455 non-null datetime64[ns]
source                1461 non-null object
text                  1461 non-null object
rating_numerator      1461 non-null float64
rating_denominator    1461 non-null float64
name                  1417 non-null object
dog_stage             1461 non-null int64
jpg_url               1461 non-null object
img_num               1461 non-null int64
prediction_1          1461 non-null object
p1_confidence         1461 non-null float64
p1_is_dog             1461 non-null bool
prediction_2          1461 non-null object
p2_confidence         1461 non-null float64
p2_is_dog             1461 non-null bool
prediction_3          1461 non-null object
p3_confidence         1461 non-null float64
p3_is_dog             1461 non-null bool
favorite_count        1461 non-null int64
retweet_count         1461 non-null int64
user_count            1461 non-null int64
best_bet_breed        1461 non-null object
best_bet_is_dog       1461 non-null bool
dtypes: bool(4), datetime64[ns](1), float64(5), int64(6), object(8)
memory usage: 245.4+ KB
###Markdown Testing for Tidiness
###Code
# Checking that all dog classifications have been melted
archive_tidy.info()
# Checking that all predictions have been melted
image_predictions_tidy.info()
# Checking that a master data set has been created
print('Archive Master Info\n{}'.format(twitter_archive_master.info()))
twitter_archive_master.sample(10)
###Output _____no_output_____
###Markdown Step 4: Data Storage
One way: a single file, twitter_archive_master.csv, with all of the data; do not forget index=False.
Alternative method: twitter_archive_master.csv (image + API data) plus a separate prediction file.
###Code
# Saving all data to a single CSV file: twitter_archive_master.csv
twitter_archive_master.to_csv('twitter_archive_master.csv', index=False)
###Output _____no_output_____
###Markdown Step 5: Analysis, Visualization, and Reporting
Insights (conclusions) drawn from the analysis: the owner of the WeRateDogs account mostly uses an iPhone to post tweets; to a lesser extent, and only in very rare cases, a PC is used.
Insight: most used device.
Insight: most frequently submitted breed for rating.
Insight: effect of rating on retweets and favoriting.
Analysis: most retweeted breed, least retweeted breed, most liked breed, least liked breed.
###Code
twitter_archive_master['source'].hist(figsize=(10,10));
twitter_archive_master['best_bet_breed'].value_counts().plot(kind='hist');
twitter_archive_master[twitter_archive_master['dog_stage'] != 0]['dog_stage'].value_counts().plot(kind='pie',figsize=(8,8));
pd.plotting.scatter_matrix(twitter_archive_master,figsize=(15,15));
twitter_archive_master.plot(y='retweet_count',x='rating_numerator',kind='scatter');
twitter_archive_master.plot(y='favorite_count',x='rating_numerator',kind='scatter');
twitter_archive_master['best_bet_breed'].value_counts().sort_values(ascending=False)[:10].plot(kind='bar',figsize=(20,20));
twitter_archive_master.groupby('best_bet_breed')['rating_numerator'].mean().sort_values(ascending=False)[:10].plot(kind='bar',figsize=(15,15));
twitter_archive_master.plot(y='favorite_count',x='retweet_count',kind='scatter',figsize=(20,20));
###Output _____no_output_____
###Markdown Data wrangling WeRateDogs by Mohamed Gamal
Contents
- [Introduction](intro)
- [Gathering data](gather)
  - [source one](source1)
  - [source two](source2)
  - [source three](source3)
- [Assessing data](assess)
  - [Quality](quality)
  - [Tidiness](tidiness)
- [Cleaning data](clean)
- [Storing, Analyzing, and Visualizing](storing)
  - [Insight one](one)
  - [Insight two and
visualization](two)
  - [Insight three](three)
  - [Insight four](four)
  - [Insight five and visualization](five)
Introduction
In this project we will wrangle the WeRateDogs data. WeRateDogs is a Twitter account that specializes in rating dogs. Let's start.
###Code
#Import all packages needed
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import requests
import tweepy
import json
###Output _____no_output_____
###Markdown Gathering Data source one
###Code
#read CSV file
twitter_archive = pd.read_csv('twitter-archive-enhanced.csv')
#test
twitter_archive.sort_values('timestamp')
twitter_archive.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2356 entries, 0 to 2355
Data columns (total 17 columns):
 #   Column                      Non-Null Count  Dtype
---  ------                      --------------  -----
 0   tweet_id                    2356 non-null   int64
 1   in_reply_to_status_id       78 non-null     float64
 2   in_reply_to_user_id         78 non-null     float64
 3   timestamp                   2356 non-null   object
 4   source                      2356 non-null   object
 5   text                        2356 non-null   object
 6   retweeted_status_id         181 non-null    float64
 7   retweeted_status_user_id    181 non-null    float64
 8   retweeted_status_timestamp  181 non-null    object
 9   expanded_urls               2297 non-null   object
 10  rating_numerator            2356 non-null   int64
 11  rating_denominator          2356 non-null   int64
 12  name                        2356 non-null   object
 13  doggo                       2356 non-null   object
 14  floofer                     2356 non-null   object
 15  pupper                      2356 non-null   object
 16  puppo                       2356 non-null   object
dtypes: float64(4), int64(3), object(10)
memory usage: 313.0+ KB
###Markdown Source Two
###Code
#Downloading and saving the image prediction data using Requests
url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv'
response = requests.get(url)
with open('image-predictions.tsv', mode='wb') as file:
    file.write(response.content)
#read tsv file
image_prediction = pd.read_csv('image-predictions.tsv', sep='\t')
image_prediction
image_prediction.info()
###Output <class
'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2075 non-null int64 1 jpg_url 2075 non-null object 2 img_num 2075 non-null int64 3 p1 2075 non-null object 4 p1_conf 2075 non-null float64 5 p1_dog 2075 non-null bool 6 p2 2075 non-null object 7 p2_conf 2075 non-null float64 8 p2_dog 2075 non-null bool 9 p3 2075 non-null object 10 p3_conf 2075 non-null float64 11 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown source three ###Code import tweepy from tweepy import OAuthHandler import json from timeit import default_timer as timer # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # These are hidden to comply with Twitter's API terms and conditions consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) # NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES: # df_1 is a DataFrame with the twitter_archive_enhanced.csv file. 
You may have to # change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv # NOTE TO REVIEWER: this student had mobile verification issues so the following # Twitter API code was sent to this student from a Udacity instructor # Tweet IDs for which to gather additional data via Twitter's API tweet_ids = df_1.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) df_list = [] with open ('tweet-json.txt' , 'r') as file: for line in file: tweet = json.loads(line) tweet_id = tweet['id'] retweet_count = tweet['retweet_count'] fav_count = tweet['favorite_count'] df_list.append({'tweet_id':tweet_id, 'retweet_count':retweet_count, 'favorite_count':fav_count}) api_df = pd.DataFrame(df_list) api_df.sample(5) api_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2354 entries, 0 to 2353 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2354 non-null int64 1 retweet_count 2354 non-null int64 2 favorite_count 2354 non-null int64 dtypes: int64(3) memory usage: 55.3 KB ###Markdown Assessing Data visual assessment ###Code twitter_archive image_prediction api_df ###Output _____no_output_____ ###Markdown programmatic assessment ###Code twitter_archive.info() sum(twitter_archive['tweet_id'].duplicated()) twitter_archive.nunique() twitter_archive.rating_numerator.value_counts() 
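The unusual numerators surfaced by `value_counts()` mostly come from ratings written with decimals in the tweet text, so before eyeballing individual rows a regex pass can pull numerator and denominator out in one step. A minimal sketch on toy data (the example tweets and output column names are my own, not from the archive):

```python
import pandas as pd

# Toy tweets mimicking the archive's text column (hypothetical examples)
df = pd.DataFrame({'text': [
    "This is Bella. She's a 13.5/10",
    "Happy 4th from this squad. 13/10 for all",
]})

# One capture group for the (possibly decimal) numerator, one for the denominator;
# str.extract with two groups returns one DataFrame column per group.
rating = df['text'].str.extract(r'(\d+(?:\.\d+)?)/(\d+)').astype(float)
rating.columns = ['rating_numerator', 'rating_denominator']
print(rating)
```

Rows where the extracted numerator disagrees with the archive's integer column are exactly the decimal-rating tweets investigated below.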
print(twitter_archive.loc[twitter_archive.rating_numerator == 204, 'text'])
print(twitter_archive.loc[twitter_archive.rating_numerator == 121, 'text'])
print(twitter_archive.loc[twitter_archive.rating_numerator == 960, 'text'])
print(twitter_archive.loc[twitter_archive.rating_numerator == 1176, 'text'])
print(twitter_archive.loc[twitter_archive.rating_numerator == 144, 'text'])
print(twitter_archive['text'][1120])
print(twitter_archive['text'][1635])
print(twitter_archive['text'][313])
print(twitter_archive['text'][1779])
print(twitter_archive['text'][214])
twitter_archive.rating_denominator.value_counts()
print(twitter_archive.loc[twitter_archive.rating_denominator == 150, 'text'])
print(twitter_archive.loc[twitter_archive.rating_denominator == 90, 'text'])
print(twitter_archive.loc[twitter_archive.rating_denominator == 16, 'text'])
print(twitter_archive.loc[twitter_archive.rating_denominator == 2, 'text'])
print(twitter_archive.loc[twitter_archive.rating_denominator == 7, 'text'])
print(twitter_archive['text'][902])
print(twitter_archive['text'][1228])
print(twitter_archive['text'][1663])
print(twitter_archive['text'][2335])
print(twitter_archive['text'][516])
image_prediction.sample(10)
image_prediction.info()
sum(image_prediction.jpg_url.duplicated())
print(image_prediction.p1_dog.value_counts())
print(image_prediction.p2_dog.value_counts())
print(image_prediction.p3_dog.value_counts())
image_prediction.img_num.value_counts()
api_df.sample(20)
api_df.info()
api_df.describe()
###Output _____no_output_____
###Markdown Quality
*Completeness, validity, accuracy, consistency (content issues)*
twitter_archive
- Keep original ratings (no retweets) that have images
- Delete columns that won't be used for analysis
- Erroneous datatypes (doggo, floofer, pupper and puppo columns)
- Correct numerators with decimals
- Correct denominators other than 10
image_prediction
- Drop the 66 duplicated jpg_url rows
- Create 1 column for image prediction and 1 column for confidence level
- Delete
columns that won't be used for analysis
Tidiness
- (twitter_archive) Separate timestamp into day, month and year (3 columns)
- Use tweet_id as type int64 to merge all tables into one data set
Cleaning Data
###Code
#make a copy of all data sets
twitter_archive_clean = twitter_archive.copy()
image_prediction_clean = image_prediction.copy()
api_df_clean = api_df.copy()
twitter_archive_clean = twitter_archive_clean[pd.isnull(twitter_archive_clean['retweeted_status_user_id'])]
#test
sum(twitter_archive_clean.retweeted_status_user_id.value_counts())
list(twitter_archive_clean)
twitter_archive_clean = twitter_archive_clean.drop(['source', 'in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'expanded_urls'], 1)
#test
list(twitter_archive_clean)
#Melt the doggo, floofer, pupper and puppo columns to dogs and dogs_stage columns
twitter_archive_clean = pd.melt(twitter_archive_clean, id_vars=['tweet_id', 'timestamp', 'text', 'rating_numerator', 'rating_denominator', 'name'], var_name='dogs', value_name='dogs_stage')
twitter_archive_clean = twitter_archive_clean.drop('dogs', 1)
#Sort by dogs_stage then drop duplicates based on tweet_id
twitter_archive_clean = twitter_archive_clean.sort_values('dogs_stage').drop_duplicates(subset='tweet_id')
#test
twitter_archive_clean['dogs_stage'].value_counts()
twitter_archive_clean['timestamp'] = pd.to_datetime(twitter_archive_clean['timestamp'])
twitter_archive_clean['year'] = twitter_archive_clean['timestamp'].dt.year
twitter_archive_clean['month'] = twitter_archive_clean['timestamp'].dt.month
twitter_archive_clean['day'] = twitter_archive_clean['timestamp'].dt.day
twitter_archive_clean = twitter_archive_clean.drop('timestamp', 1)
# test
list(twitter_archive_clean)
twitter_archive_clean[['rating_numerator', 'rating_denominator']] = twitter_archive_clean[['rating_numerator','rating_denominator']].astype(float)
# test
twitter_archive_clean.info()
with
pd.option_context('max_colwidth', 200):
    display(twitter_archive_clean[twitter_archive_clean['text'].str.contains(r"(\d+\.\d*\/\d+)")][['tweet_id', 'text', 'rating_numerator', 'rating_denominator']])
#Update numerators
twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 681340665377193984), 'rating_numerator'] = 9.5
twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 680494726643068929), 'rating_numerator'] = 11.26
twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 883482846933004288), 'rating_numerator'] = 13.5
twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 786709082849828864), 'rating_numerator'] = 9.75
twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 778027034220126208), 'rating_numerator'] = 11.27
with pd.option_context('max_colwidth', 200):
    display(twitter_archive_clean[twitter_archive_clean['text'].str.contains(r"(\d+\.\d*\/\d+)")][['tweet_id', 'text', 'rating_numerator', 'rating_denominator']])
with pd.option_context('max_colwidth', 200):
    display(twitter_archive_clean[twitter_archive_clean['rating_denominator'] != 10][['tweet_id', 'text', 'rating_numerator', 'rating_denominator']])
#correct both rating_numerator and rating_denominator
twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 686035780142297088),'rating_denominator']= 10
twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 682808988178739200),'rating_denominator']= 10
twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 682962037429899265),'rating_denominator']= 10
twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 684222868335505415),'rating_denominator']= 10
twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 684225744407494656),'rating_denominator']= 10
twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 677716515794329600),'rating_denominator']= 10
twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 709198395643068416),'rating_denominator']= 10
twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 716439118184652801),'rating_numerator']= 11 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 716439118184652801),'rating_denominator']= 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 710658690886586372),'rating_denominator']= 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 713900603437621249),'rating_denominator']= 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 697463031882764288),'rating_denominator']= 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 704054845121142784),'rating_denominator']= 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 666287406224695296),'rating_numerator']= 9 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 666287406224695296),'rating_denominator']= 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 675853064436391936),'rating_denominator']= 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 835246439529840640),'rating_numerator']= 13 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 835246439529840640),'rating_denominator']= 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 820690176645140481),'rating_denominator']= 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 832088576586297345),'rating_denominator']= 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 758467244762497024),'rating_denominator']= 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 731156023742988288),'rating_denominator']= 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 740373189193256964),'rating_denominator']= 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 722974582966214656),'rating_numerator']= 13 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 722974582966214656),'rating_denominator']= 10 twitter_archive_clean.loc[(twitter_archive_clean.tweet_id == 
810984652412424192),'rating_denominator']= 10 #test with pd.option_context('max_colwidth', 200): display(twitter_archive_clean[twitter_archive_clean['rating_denominator'] != 10][['tweet_id', 'text', 'rating_numerator', 'rating_denominator']]) twitter_archive_clean['rating'] = 10 * twitter_archive_clean['rating_numerator'] / twitter_archive_clean['rating_denominator'].astype(float) twitter_archive_clean.sample(10) list(image_prediction_clean) #Drop 66 jpg_url duplicated image_prediction_clean = image_prediction_clean.drop_duplicates(subset=['jpg_url']) #test sum(image_prediction_clean['jpg_url'].duplicated()) #Create 1 column for image prediction and 1 column for confidence level dog_type=[] conf_test=[] def prediction(image_prediction_clean): if image_prediction_clean['p1_dog'] == True: dog_type.append(image_prediction_clean['p1']) conf_test.append(image_prediction_clean['p1_conf']) elif image_prediction_clean['p2_dog'] == True: dog_type.append(image_prediction_clean['p2']) conf_test.append(image_prediction_clean['p2_conf']) elif image_prediction_clean['p3_dog'] == True: dog_type.append(image_prediction_clean['p3']) conf_test.append(image_prediction_clean['p3_conf']) else: dog_type.append('Error') conf_test.append('Error') image_prediction_clean.apply(prediction , axis= 1) image_prediction_clean['dog_type'] = dog_type image_prediction_clean['conf_test'] = conf_test image_prediction_clean.info() image_prediction_clean = image_prediction_clean[image_prediction_clean['conf_test'] != 'Error'] image_prediction_clean.info() list(image_prediction_clean) #drop columns we don't need for analysis image_prediction_clean = image_prediction_clean.drop(['img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], 1) # test list(image_prediction_clean) #store cleaned dataframes [twitter_archive_clean , image_prediction_clean]in created (twitter_archive_image) twitter_archive_image = pd.merge(twitter_archive_clean, image_prediction_clean,how = 
'left', on = ['tweet_id']) #test twitter_archive_image.info() # keep rows that have image twitter_archive_image = twitter_archive_image[twitter_archive_image['jpg_url'].notnull()] #test twitter_archive_image.info() #store cleaned dataframes [twitter_archive_image , api_df_clean]in created (twitter_archive_master) twitter_archive_master = pd.merge(twitter_archive_image, api_df_clean,how = 'left', on = ['tweet_id']) twitter_archive_master #test twitter_archive_master.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1686 entries, 0 to 1685 Data columns (total 15 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 1686 non-null int64 1 text 1686 non-null object 2 rating_numerator 1686 non-null float64 3 rating_denominator 1686 non-null float64 4 name 1686 non-null object 5 dogs_stage 1686 non-null object 6 year 1686 non-null int64 7 month 1686 non-null int64 8 day 1686 non-null int64 9 rating 1686 non-null float64 10 jpg_url 1686 non-null object 11 dog_type 1686 non-null object 12 conf_test 1686 non-null object 13 retweet_count 1686 non-null int64 14 favorite_count 1686 non-null int64 dtypes: float64(3), int64(6), object(6) memory usage: 210.8+ KB ###Markdown Storing, Analyzing, and Visualizing Data ###Code twitter_archive_master.to_csv('twitter_archive_master.csv' , index = False) ###Output _____no_output_____ ###Markdown Insight one top rated dog type ###Code dog_type_mean = twitter_archive_master.groupby('dog_type').mean() dog_type_mean.head() dog_type_mean['rating'].sort_values() ###Output _____no_output_____ ###Markdown Insight two and visualization ###Code # which type of dogs is the most common twitter_archive_master['dog_type'].value_counts() df_dog_type = twitter_archive_master.groupby('dog_type').filter(lambda x: len(x) >= 30) df_dog_type['dog_type'].value_counts().plot(kind = 'barh') plt.title('Histogram of the Most Rated Dog Type') plt.xlabel('Count') plt.ylabel('Type of dog') df_dog_type = 
twitter_archive_master.groupby('dog_type').filter(lambda x: len(x) <= 3)
df_dog_type['dog_type'].value_counts().plot(kind = 'barh')
plt.title('Histogram of the Least Frequently Rated Dog Types')
plt.xlabel('Count')
plt.ylabel('Type of dog')
###Output _____no_output_____
###Markdown insight three
###Code
# which type has the most retweets
dog_type_retweet = twitter_archive_master.groupby('dog_type').mean()
dog_type_retweet.head()
dog_type_retweet['retweet_count'].sort_values()
###Output _____no_output_____
###Markdown insight four
###Code
# which type has the most favourites
dog_type_favorits = twitter_archive_master.groupby('dog_type').mean()
dog_type_favorits.head()
dog_type_favorits['favorite_count'].sort_values()
###Output _____no_output_____
###Markdown insight five and visualization
###Code
#relation between retweet count and favorite count
twitter_archive_master.plot(x='favorite_count', y='retweet_count', kind='scatter')
plt.xlabel('favorite count')
plt.ylabel('retweet_count')
plt.title('relation between retweet count and favorite count')
###Output _____no_output_____
###Markdown Data Wrangling Project
###Code
# import libraries
import pandas as pd
import numpy as np
import requests
import tweepy
import json
import time
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output _____no_output_____
###Markdown Gather Twitter archive data
###Code
# import the twitter archive of the dog_rates twitter account
twitter_archive = pd.read_csv('twitter-archive-enhanced.csv', encoding='utf-8')
###Output _____no_output_____
###Markdown Predictions data
###Code
# download predictions
# url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv'
# response = requests.get(url)
# with open(url.split('/')[-1], mode = 'wb') as file:
#     file.write(response.content)
#read predictions data into a pandas DataFrame
predictions = pd.read_csv('image-predictions.tsv', sep = '\t')
###Output _____no_output_____
###Markdown Twitter count data
###Code
# create an API object to gather twitter data
# consumer_key = 'CONSUMER KEY'
# consumer_secret = 'CONSUMER SECRET'
# access_token = 'ACCESS TOKEN'
# access_secret = 'ACCESS SECRET'
# auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
# auth.set_access_token(access_token, access_secret)
# api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
# gather retweet count and favorite count via twitter api
# twitter_count = {}
# twitter_count['tweet_counts'] = []
# n = 0
# t = time.process_time()
# for tweet_id in twitter_archive.tweet_id:
#     try:
#         tweet = api.get_status(tweet_id, tweet_mode='extended')
#         twitter_count['tweet_counts'].append({
#             'tweet_id': tweet_id,
#             'retweet_count': tweet.retweet_count,
#             'favorite_count': tweet.favorite_count
#         })
#         elapsed_time = time.process_time() - t
#         n = n + 1
#         print(str(tweet_id) + '\n total elapsed time in sec: ' + str(round(elapsed_time, 2)) + ', no. ' + str(n))
#     except tweepy.TweepError:
#         print(str(tweet_id) + ' not found')
#         continue
# write twitter count data into json file
# with open('tweet_json.txt', 'w') as outfile:
#     json.dump(twitter_count, outfile)
# read twitter count data from json file
with open('tweet_json.txt') as file:
    twitter_count = json.load(file)
# read twitter count data into a pandas DataFrame
twitter_count = pd.DataFrame(twitter_count['tweet_counts'], columns = ['tweet_id', 'retweet_count', 'favorite_count'])
twitter_count = twitter_count.reset_index(drop=True)
###Output _____no_output_____
###Markdown Assess
###Code
# display the twitter archive table
twitter_archive
# DataFrame and columns info
twitter_archive.info()
# check if some tweet ids are duplicated
sum(twitter_archive.tweet_id.duplicated())
# how often particular rating numerators were given
twitter_archive.rating_numerator.value_counts()
# how often particular rating denominators were used
twitter_archive.rating_denominator.value_counts()
# frequency of rating numerators where denominator is 10
twitter_archive.rating_numerator[twitter_archive.rating_denominator == 10].value_counts()
# tweet text for the row 45
twitter_archive.text.loc[45]
# rating numerator for the row 45
twitter_archive.rating_numerator.loc[45]
# are there any duplicated names in the name column
sum(twitter_archive.name.duplicated())
# value_counts on name column in order to reveal duplicates
twitter_archive.name.value_counts()
# show words which were accidentally written into the name column
twitter_archive.name[twitter_archive.name.str.lower() == twitter_archive.name].value_counts()
# show entries with no dog category assigned to them
twitter_archive[(twitter_archive.doggo != 'doggo') & (twitter_archive.floofer != 'floofer') & (twitter_archive.pupper != 'pupper') & (twitter_archive.puppo != 'puppo')]
# how many duplicates are in the source column
twitter_archive.source.duplicated().value_counts()
# show the distinct values of the source column (rows that are not duplicates)
twitter_archive[twitter_archive.source.duplicated() == False]
# show first entry of source column
twitter_archive.source[0]
# show the source entry of row 209
twitter_archive.source[209]
# display the twitter count table
twitter_count
# twitter count info
twitter_count.info()
# basic statistics of retweets and favorites
twitter_count.describe()
# show entries with zero favorite count but retweets larger than zero
twitter_count[(twitter_count.favorite_count == 0) & (twitter_count.retweet_count > 0)]
# show entries with zero retweet count but favorite larger than zero
twitter_count[(twitter_count.favorite_count > 0) & (twitter_count.retweet_count == 0)]
# display the predictions table
predictions
# predictions table info
predictions.info()
# sample of 20 entries in the predictions table
predictions.sample(20)
# this tweet doesn't have a picture of a dog in it
predictions[predictions.tweet_id == 666104133288665088]
# how many tweet images have no dogs in them according to the predictions table
predictions[(predictions.p1_dog == False) & (predictions.p2_dog ==
False) & (predictions.p3_dog == False)]
###Output _____no_output_____
###Markdown Data quality
*Completeness, validity, accuracy, consistency (content issues)*
`twitter_archive` table
- *tweet_id* should be string and not integer since we are not going to add or subtract tweet ids.
- the same for *in_reply_to_status_id*, *in_reply_to_user_id*, *retweeted_status_id*, *retweeted_status_user_id*; they should be not float type but strings
- *timestamp* and *retweeted_status_timestamp* should be datetime type
- *denominator* is not always 10, which makes it not easy to compare ratings across tweets
- *numerator* column also contains some very unusual values such as 666 and 1776; two entries have a numerator of 0
- *numerator* should be float type
- *name* column displays some random words instead of names, written in lower case. Names are written in upper case
- *doggo*, *floofer*, *pupper*, *puppo* should be categorical variables
- missing names for 745 entries
- 1976 entries have no dog category assigned to them (neither *doggo* nor *floofer*, neither *pupper* nor *puppo*)
`twitter_count` table
- only 2340 observations in the `twitter_count` table vs. 2356 observations in the `twitter_archive` table
- 167 entries have zero *favorite_count* but non-zero *retweet_count*
`predictions` table
- some predictions do not even contain dog types but other things, e.g. the first and second images for the tweet_id 788908386943430656 were recognized not as dogs but as remote_control and oscilloscope respectively. It is indicated by the columns p1_dog, p2_dog and p3_dog that some predictions are not dogs. Predictions of things other than dogs are not relevant here
- in some cases none of the predictions indicates that an image is one of a dog, such as the image in the tweet 666104133288665088
- only 2075 observations in the `predictions` table vs.
2356 observations in the `twitter_archive` table
Data tidiness
`twitter_archive` table
- *doggo*, *floofer*, *pupper*, *puppo* should not be columns but variables in one column
`twitter_count` and `twitter_archive` tables should be stored in one table since they both contain general information on tweets
Clean
###Code
# make copies of data to be cleaned
twitter_archive_clean = twitter_archive.copy()
twitter_count_clean = twitter_count.copy()
predictions_clean = predictions.copy()
###Output _____no_output_____
###Markdown Missing data
Missing names for 745 entries and missing dog category for 1976 entries in the `twitter_archive` table
Some dogs have no categories and no names presented in the table, but as long as they have ratings and were identified as dogs it is not a problem and there is no need to clean it, except for trying to extract correct names where it wasn't done yet.
16 missing observations in the `twitter_count` table which are present in the `twitter_archive` table
Since the missing tweets are not online anymore, we can no longer gather data on their favorite and retweet counts to fill in the missing data. We can drop rows where count data is missing later when we join the `twitter_archive` and `twitter_count` tables.
167 observations in the *twitter_count* table show zero favorite count but non-zero retweet count
This is unusual and makes one suspicious; in fact, checking some of these tweets online on Twitter shows a non-zero favorite count as expected, it just wasn't extracted properly.
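On toy data, blanking such suspicious counts can also be written in one vectorized step with `Series.mask` (a sketch; the column names mirror `twitter_count`, and the toy rows are hypothetical):

```python
import numpy as np
import pandas as pd

# Toy counts table with one suspicious row (zero favorites, non-zero retweets)
counts = pd.DataFrame({
    'tweet_id': [1, 2, 3],
    'retweet_count': [50, 0, 7],
    'favorite_count': [0, 0, 21],
})

# Blank the favorite_count wherever it is zero while retweet_count is not;
# mask() replaces values where the condition is True, NaN upcasts the column to float
suspicious = (counts['favorite_count'] == 0) & (counts['retweet_count'] > 0)
counts['favorite_count'] = counts['favorite_count'].mask(suspicious, np.nan)
print(counts)
```

Only the first row is blanked; a legitimate zero (zero favorites and zero retweets, as in row two) is left untouched.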
Define Mark entries with zero favorite_count and non-zero retweet_count by setting the favorite_count value to NaN Code ###Code # change favorite count to NaN where favorite count is zero but retweet count is non-zero # (np.nan keeps the column numeric, unlike the string 'nan') twitter_count_clean.loc[(twitter_count_clean.favorite_count == 0) & (twitter_count_clean.retweet_count > 0), 'favorite_count'] = np.nan ###Output _____no_output_____ ###Markdown Test ###Code # check if any entries with zero favorite count but non-zero retweets are left twitter_count_clean[(twitter_count_clean.favorite_count == 0) & (twitter_count_clean.retweet_count > 0)] ###Output _____no_output_____ ###Markdown Missing observations in the `predictions` table - only 2075 observations vs. 2356 observations in the `twitter_archive` tableAs with the tweet count data, there is no access to the prediction algorithm, so the missing data cannot be added. Furthermore, since the `predictions` table indicates that not all the pictures have dogs in them, the relevant prediction data is even smaller: for 324 entries none of the three predictions identified a dog.
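The entries where none of the three predictions identified a dog can be counted with a combined boolean mask over the three flag columns; a small sketch on made-up data reusing the same column names:

```python
import pandas as pd

# toy predictions frame with the three dog-flag columns
preds = pd.DataFrame({
    "tweet_id": [1, 2, 3],
    "p1_dog":   [True,  False, False],
    "p2_dog":   [False, False, True],
    "p3_dog":   [False, False, False],
})

# rows where all three flags are False: no prediction identified a dog
no_dog = preds[~(preds.p1_dog | preds.p2_dog | preds.p3_dog)]
print(len(no_dog))  # -> 1
```

On the real table this mask is what yields the 324 entries mentioned above.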
Tidiness Present *doggo*, *floofer*, *pupper*, *puppo* in the `twitter_archive` table as variables in one column Define Melt *doggo*, *floofer*, *pupper* and *puppo* to a *stage* column Code ###Code # Add new column in order to account for those tweets which don't name a dog stage twitter_archive_clean['no_stage'] = 'None' # assign value indicating that there is no stage given where it applies twitter_archive_clean.loc[(twitter_archive_clean.doggo == 'None') & (twitter_archive_clean.floofer == 'None') & (twitter_archive_clean.pupper == 'None') & (twitter_archive_clean.puppo == 'None'), 'no_stage'] = 'no_stage' # melt the four dog stage columns into one category column and one value column twitter_archive_clean = pd.melt(twitter_archive_clean, id_vars=['tweet_id', 'in_reply_to_status_id', 'in_reply_to_user_id', 'timestamp', 'source', 'text', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'expanded_urls', 'rating_numerator', 'rating_denominator', 'name'], value_vars=['doggo', 'floofer', 'pupper', 'puppo', 'no_stage'], var_name='category', value_name='stage') # drop the melt rows whose stage value is 'None' twitter_archive_clean = twitter_archive_clean[twitter_archive_clean.stage != 'None'] # drop the category column twitter_archive_clean = twitter_archive_clean.drop('category', axis=1) # check for duplicates sum(twitter_archive_clean.tweet_id.duplicated()) # which ids are duplicated twitter_archive_clean.tweet_id[twitter_archive_clean.tweet_id.duplicated() == True] # what are the corresponding stage values of duplicates twitter_archive_clean.stage[twitter_archive_clean.tweet_id.duplicated() == True] # show the duplicated rows twitter_archive_clean[twitter_archive_clean.tweet_id.duplicated() == True] # make a dictionary with the tweet ids of duplicates as keys and the second stage as values stage_add_dict = {854010172552949760: 'floofer', 817777686764523521: 'pupper', 808106460588765185: 'pupper', 802265048156610565: 'pupper', 801115127852503040: 'pupper',
785639753186217984: 'pupper', 781308096455073793: 'pupper', 775898661951791106: 'pupper', 770093767776997377: 'pupper', 759793422261743616: 'pupper', 751583847268179968: 'pupper', 741067306818797568: 'pupper', 733109485275860992: 'pupper', 855851453814013952: 'puppo'} # loop through the dictionary and append the second stage to the stage column for key, value in stage_add_dict.items(): row = twitter_archive_clean.tweet_id == key twitter_archive_clean.loc[row, 'stage'] = twitter_archive_clean.loc[row, 'stage'] + ', ' + value # drop duplicates twitter_archive_clean = twitter_archive_clean[twitter_archive_clean.tweet_id.duplicated() == False] ###Output _____no_output_____ ###Markdown Test ###Code # are there any duplicates left twitter_archive_clean[twitter_archive_clean.tweet_id.duplicated() == True] # check if the number of rows corresponds to the original DataFrame twitter_archive_clean.info() # have a look at those entries for which two dog stages were specified twitter_archive_clean[twitter_archive_clean.tweet_id == 854010172552949760] # show the first five entries of the twitter archive table twitter_archive_clean.head() ###Output _____no_output_____ ###Markdown `twitter_count` and `twitter_archive` table should be stored in one table since they both contain general information on tweets Define Merge `twitter_count` and `twitter_archive` tables on the *tweet_id* column Code ###Code # merge tables twitter_archive_clean = pd.merge(twitter_archive_clean, twitter_count_clean, on = ['tweet_id'], how = 'left') ###Output _____no_output_____ ###Markdown Test ###Code # show the first five rows twitter_archive_clean.head() # info for the new table twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2356 entries, 0 to 2355 Data columns (total 16 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object
retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object stage 2356 non-null object retweet_count 2340 non-null float64 favorite_count 2340 non-null object dtypes: float64(5), int64(3), object(8) memory usage: 312.9+ KB ###Markdown Change data types for various columns in the `twitter_archive_clean` DataFrame Define Change data types as follows:- *tweet_id* (both `twitter_archive_clean` and `predictions` table), *in_reply_to_status_id*, *in_reply_to_user_id*, *retweeted_status_id*, *retweeted_status_user_id* to string- *timestamp* and *retweeted_status_timestamp* to datetime type- *stage* to a categorical variable- *retweet_count* and *favorite_count* to integer- *rating_numerator* and *rating_denominator* to float (this will be done later, when extracting the correct values) Code ###Code # change variables to string twitter_archive_clean.tweet_id = twitter_archive_clean.tweet_id.astype(str) predictions_clean.tweet_id = predictions_clean.tweet_id.astype(str) twitter_archive_clean.in_reply_to_status_id = twitter_archive_clean.in_reply_to_status_id.astype(str) twitter_archive_clean.in_reply_to_user_id = twitter_archive_clean.in_reply_to_user_id.astype(str) twitter_archive_clean.retweeted_status_id = twitter_archive_clean.retweeted_status_id.astype(str) twitter_archive_clean.retweeted_status_user_id = twitter_archive_clean.retweeted_status_user_id.astype(str) # change variables to datetime twitter_archive_clean.timestamp = pd.to_datetime(twitter_archive_clean.timestamp) twitter_archive_clean.retweeted_status_timestamp = pd.to_datetime(twitter_archive_clean.retweeted_status_timestamp) # change variables to categorical variable twitter_archive_clean.stage = twitter_archive_clean.stage.astype('category') ###Output _____no_output_____ ###Markdown Test ###Code
twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2356 entries, 0 to 2355 Data columns (total 16 columns): tweet_id 2356 non-null object in_reply_to_status_id 2356 non-null object in_reply_to_user_id 2356 non-null object timestamp 2356 non-null datetime64[ns] source 2356 non-null object text 2356 non-null object retweeted_status_id 2356 non-null object retweeted_status_user_id 2356 non-null object retweeted_status_timestamp 181 non-null datetime64[ns] expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object stage 2356 non-null category retweet_count 2340 non-null float64 favorite_count 2340 non-null object dtypes: category(1), datetime64[ns](2), float64(1), int64(2), object(10) memory usage: 297.2+ KB ###Markdown *rating_numerator* is wrongly extracted sometimes Define Extract *rating_numerator* from the *text* using regular expressions, and check tweet texts again in cases where values appear to be out of the normal rating scale range Code ###Code # replace numerators by extracting the correct rating numerator using a regular expression twitter_archive_clean.rating_numerator = twitter_archive_clean.text.str.extract('(\d+.\d*/)', expand=True) twitter_archive_clean.rating_numerator = twitter_archive_clean.rating_numerator.str[:-1] twitter_archive_clean.rating_numerator.value_counts() # create a list with outliers and strange values for rating numerators check_numerators = ['420', '9.75', '143', '960', '99', '11.26', '1776', '50', '165', '80', '24', '9.5', '20', '60', '11/15', '121', '666', '88', '13.5', '204', '17', '3 13', '007', '3 1', '182', '144', '11.27', '45', '44', '84'] # set display option at the maximum allowed number of characters in a tweet pd.options.display.max_colwidth = 280 # print tweet texts to check whether the numerators in the check_numerators list have been correctly extracted for numerator in check_numerators:
print(twitter_archive_clean.text[twitter_archive_clean.rating_numerator == numerator]) # create dictionary with wrongly extracted and correct ratings rating_numerator_corr_dict = {'50': '11', '24': 'nan', '11/15':'nan', '17':'13', '007': 'nan', '20': 'nan', '3 1': '9', '3 13': '13'} # loop through dictionary and correct values for rating numerators for key, value in rating_numerator_corr_dict.items(): row = twitter_archive_clean.rating_numerator == key twitter_archive_clean.loc[row, 'rating_numerator'] = value # change variable type to float twitter_archive_clean.rating_numerator = twitter_archive_clean.rating_numerator.astype('float') ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.rating_numerator.value_counts() # show entries where rating is above 15 but the denominator is 10 twitter_archive_clean.text[(twitter_archive_clean.rating_numerator > 15) & (twitter_archive_clean.rating_denominator == 10)] ###Output _____no_output_____ ###Markdown Extract correct rating denominators Define Extract correct rating denominators using regular expressions Code ###Code # replace denominators by extracting the correct rating denominator using a regular expression twitter_archive_clean.rating_denominator = twitter_archive_clean.text.str.extract('(/\d*)', expand=True) twitter_archive_clean.rating_denominator = twitter_archive_clean.rating_denominator.str[1:] twitter_archive_clean.rating_denominator.value_counts() # create a list with outliers and strange values for rating denominators check_denominators = ['', '11', '50', '20', '80', '40', '90', '7', '16', '150', '130', '15', '2', '70', '120', '110', '170', '00'] # set display option at the maximum allowed number of characters in a tweet pd.options.display.max_colwidth = 280 # print tweet texts to check whether the denominators in the check_denominators list have been correctly extracted for denominator in check_denominators: print(twitter_archive_clean.text[twitter_archive_clean.rating_denominator ==
denominator]) # create dictionary with wrongly extracted and correct ratings rating_denominator_corr_dict = {'':'10', '11': '10', '7': 'nan', '16': 'nan', '15': 'nan', '2': '10', '00': '10'} # loop through dictionary and correct values for rating denominators for key, value in rating_denominator_corr_dict.items(): row = twitter_archive_clean.rating_denominator == key twitter_archive_clean.loc[row, 'rating_denominator'] = value # go through '50' denominators twitter_archive_clean.text[twitter_archive_clean.rating_denominator == '50'] twitter_archive_clean.tweet_id[twitter_archive_clean.rating_denominator == '50'] # change one wrongly extracted '50'-denominator but leave the two others row = twitter_archive_clean.tweet_id == '716439118184652801' twitter_archive_clean.loc[row, 'rating_denominator'] = 10 # go through '20' denominators twitter_archive_clean.text[twitter_archive_clean.rating_denominator == '20'] twitter_archive_clean.tweet_id[twitter_archive_clean.rating_denominator == '20'] # change two wrongly extracted '20'-denominators row = twitter_archive_clean.tweet_id == '722974582966214656' twitter_archive_clean.loc[row, 'rating_denominator'] = 10 row = twitter_archive_clean.tweet_id == '686035780142297088' twitter_archive_clean.loc[row, 'rating_denominator'] = 'nan' # change type to float twitter_archive_clean.rating_denominator = twitter_archive_clean.rating_denominator.astype('float') ###Output _____no_output_____ ###Markdown Test ###Code # show frequency of rating denominators twitter_archive_clean.rating_denominator.value_counts() # show statistics for rating_denominator twitter_archive_clean.rating_denominator.describe() # show entries where denominator does not equal 10 twitter_archive_clean.text[twitter_archive_clean.rating_denominator != 10] ###Output _____no_output_____ ###Markdown *name* column displays some random words written in lower case instead of dog names Define Extract correct names using regular expressions Code ###Code
twitter_archive.name[twitter_archive.tweet_id == 667538891197542400] twitter_archive.name.value_counts() # set display options for number of rows pd.options.display.max_rows = 150 twitter_archive_clean.text[twitter_archive_clean.name.str.lower() == twitter_archive_clean.name] # extract names for those where words had been extracted twitter_archive_clean.name = twitter_archive_clean.text.str.extract('(This is [A-Z][a-z]+|this is [A-Z][a-z]+|name is [A-Z][a-z]+|named [A-Z][a-z]+|Meet [A-Z][a-z]+|Say hello to [A-Z][a-z]+)', expand=True) # replace phrases preceding names with an empty string # (the pattern mirrors the extraction pattern above, including the lower-case variant) twitter_archive_clean.name = twitter_archive_clean.name.str.replace('(This is |this is |named |name is |Meet |Say hello to )', '', regex=True) ###Output _____no_output_____ ###Markdown Test ###Code # sample of the name column values twitter_archive_clean.name.sample(25) # check if any lower case words are left sum(twitter_archive_clean.name[twitter_archive_clean.name.str.lower() == twitter_archive_clean.name].value_counts()) # frequency of names twitter_archive_clean.name.value_counts() ###Output _____no_output_____ ###Markdown Predictions table includes entries where no dog but other objects were identified Define Eliminate entries without dog predictions from the `predictions` DataFrame Code ###Code # subset the predictions DataFrame: keep only rows where all three predictions are dogs predictions_clean = predictions_clean[(predictions_clean.p1_dog == True) & (predictions_clean.p2_dog == True) & (predictions_clean.p3_dog == True)] # drop p1_dog, p2_dog, p3_dog columns predictions_clean = predictions_clean.drop(['p1_dog', 'p2_dog', 'p3_dog'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code predictions_clean.sample(25) ###Output _____no_output_____ ###Markdown Store ###Code # merge predictions and twitter archive data into master dataset twitter_archive_master = pd.merge(twitter_archive_clean, predictions_clean, on = ['tweet_id'], how = 'left') # store master dataset as csv twitter_archive_master.to_csv('twitter_archive_master.csv',
encoding='utf-8', index = False) ###Output _____no_output_____ ###Markdown Analyze ###Code twitter_archive_master = pd.read_csv('twitter_archive_master.csv', encoding='utf-8') ###Output _____no_output_____ ###Markdown Insight no. 1 ###Code # show simple statistics of favorite count twitter_archive_master.favorite_count.describe() # show simple statistics of retweet count twitter_archive_master.retweet_count.describe() ###Output _____no_output_____ ###Markdown The popularity of the Twitter account measured by favorite and retweet numbers is quite high. Insight no. 2 ###Code # frequency of dog names twitter_archive_master.name.value_counts().head(10) ###Output _____no_output_____ ###Markdown The most common dog name in the dataset is Charlie (12 counts), followed by Cooper, Lucy and Oliver (each 11 counts). The third place is shared by Tucker, Lola and Penny (each 10 counts). Insight no. 3 ###Code # frequency of dog types according to the first prediction twitter_archive_master.p1.value_counts().head(3) # frequency of dog types according to the second prediction twitter_archive_master.p2.value_counts().head(3) # frequency of dog types according to the third prediction twitter_archive_master.p3.value_counts().head(3) ###Output _____no_output_____ ###Markdown Golden retriever and Labrador retriever are the most common types of dogs seen in the tweets according to the predictions dataset. Insight no. 4 + Visualization ###Code # frequency of dog stage twitter_archive_master.stage.value_counts() ###Output _____no_output_____ ###Markdown And pupper is the most common dog stage represented in the tweets.
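Since *stage* was converted to a categorical earlier, an ordered categorical would also let the counts be displayed in life-stage order rather than by frequency; a sketch on toy values (the particular ordering chosen here is an assumption, not part of the original analysis):

```python
import pandas as pd

# toy stage values mimicking the stage column
stages = pd.Series(["pupper", "doggo", "pupper", "puppo", "floofer"])

# an ordered categorical fixes the display order independently of frequency
ordered = pd.Categorical(
    stages,
    categories=["pupper", "puppo", "doggo", "floofer"],
    ordered=True,
)
counts = pd.Series(ordered).value_counts(sort=False)
print(counts.index.tolist())  # -> ['pupper', 'puppo', 'doggo', 'floofer']
```

With `sort=False`, `value_counts` on a categorical follows the category order, which carries straight over to bar-chart tick order.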
###Code # value counts on a subset of the master DataFrame, keeping only entries where a dog stage is specified dog_stages = twitter_archive_master.stage[twitter_archive_master.stage != 'no_stage'].value_counts() # create a DataFrame of stages and counts counts_df = dog_stages.rename_axis('stages').reset_index(name='counts') # bar chart of dog stages n = np.arange(len(dog_stages)) plt.barh(n, counts_df.counts) plt.yticks(n, counts_df.stages) plt.title('Number of dog stages in tweets') plt.gca().invert_yaxis() plt.show() # pie chart of dog stages counts_s = pd.Series(counts_df['counts'].values, index=counts_df['stages'], name='') counts_s.plot.pie(figsize=(5, 5), labels=['','','','', '','','']) plt.title('Dog stages in tweets') plt.legend(loc='upper left', labels = counts_s.index) ###Output _____no_output_____ ###Markdown Insight no. 5 + Visualization ###Code # subset the DataFrame to remove outliers twitter_archive_master_subset = twitter_archive_master[(twitter_archive_master.rating_numerator < 17) & (twitter_archive_master.rating_denominator == 10)] # try plotting rating over time by favorite count size try: x = twitter_archive_master_subset.timestamp # time y = twitter_archive_master_subset.rating_numerator area = twitter_archive_master_subset.favorite_count / 300 plt.scatter(x, y, s=area, alpha=0.3) plt.xticks([]) plt.gca().invert_xaxis() plt.title('Rating over time by favorite count size') plt.ylabel('Rating out of 10') plt.xlabel('Tweets timeline from Nov 2015 to Jul 2017') except (ValueError, TypeError): print('could not plot because could not convert to required data type') ###Output _____no_output_____ ###Markdown Favorite count increased over time together with better ratings for dogs. Insight no.
6 + Visualization ###Code # subset the DataFrame to remove rating outliers twitter_archive_master_subset = twitter_archive_master[(twitter_archive_master.rating_numerator < 17) & (twitter_archive_master.rating_numerator > 10) & (twitter_archive_master.rating_denominator == 10)] # plot retweet count vs. favorite count by rating color and size sns.scatterplot(x='retweet_count', y='favorite_count', hue='rating_numerator', size='rating_numerator', sizes=(50, 150), data=twitter_archive_master_subset).set( xlabel='number of retweets', ylabel='number of favorites') plt.legend(loc='lower right') plt.title('Favorites and retweets by dog ratings') plt.show() # subset the DataFrame to remove rating outliers and show only very popular tweets twitter_archive_master_subset_extra = twitter_archive_master[(twitter_archive_master.rating_numerator < 17) & (twitter_archive_master.rating_numerator > 10) & (twitter_archive_master.rating_denominator == 10) & (twitter_archive_master.retweet_count > 10000) & (twitter_archive_master.favorite_count > 20000)] # plot retweet count vs. favorite count by rating color and size sns.scatterplot(x='retweet_count', y='favorite_count', hue='rating_numerator', size='rating_numerator', sizes=(50, 150), data=twitter_archive_master_subset_extra).set( xlabel='number of retweets', ylabel='number of favorites') plt.legend(loc='lower right') plt.title('Favorites and retweets by dog ratings') plt.show() twitter_archive_master_subset_extra.rating_numerator.value_counts() ###Output _____no_output_____ ###Markdown High popularity measured by retweets above 10 thousand and favorites above 20 thousand tends to be achieved mostly by tweets which gave dogs higher ratings.
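The visual association between retweets and favorites in the scatter plots can also be quantified with a correlation coefficient; a sketch on made-up engagement numbers (the real notebook columns are retweet_count and favorite_count):

```python
import pandas as pd

# toy engagement numbers roughly mimicking the two columns
df = pd.DataFrame({
    "retweet_count":  [100, 200, 300, 400],
    "favorite_count": [250, 480, 730, 1000],
})

# Pearson correlation between the two engagement measures
r = df.retweet_count.corr(df.favorite_count)
print(round(r, 2))  # -> 1.0
```

Run on the master table, this would put a single number behind the "favorites and retweets move together" reading of the plots.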
*Versions used in this notebook* ###Code # Python version import sys print(sys.version_info) # jupyter notebook 4.4.0 # pandas version pd.__version__ # matplotlib version import matplotlib matplotlib.__version__ ###Output _____no_output_____ ###Markdown Wrangle and Analyze Data: WeRateDogs Table of Contents *(The code below is for generating the Table of Contents)* ###Code %%javascript $.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js') ###Output _____no_output_____ ###Markdown 1st Step: Gathering Data Import packages and configure some features: ###Code import pandas as pd import requests import numpy as np # Packages for the file obtained from Twitter API: import tweepy from tweepy import OAuthHandler import json from timeit import default_timer as timer # Package for visualization purposes: import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns ###Output _____no_output_____ ###Markdown First source and format: * **Source**: file already downloaded* **Format**: 'csv'* **Name of the file**: 'twitter-archive-enhanced.csv' ###Code # First Data Frame df_1=pd.read_csv('twitter-archive-enhanced.csv') display(df_1.head(3)) ###Output _____no_output_____ ###Markdown Second source and format: * **Source**: url* **Format**: 'tsv'* **Name of the file**: 'image_predictions.tsv' I found two ways to read the data. The first one is reading the url without saving a 'tsv' file: from io import StringIO url = 'https://video.udacity-data.com/topher/2018/November/5bf60c69_image-predictions-3/image-predictions-3.tsv' response = requests.get(url).text df_2 = pd.read_csv(StringIO(response), sep='\t') The second one is storing it on disk.
I executed the second option: ###Code url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open('image_predictions.tsv', 'wb') as file: file.write(response.content) df_2 = pd.read_csv('image_predictions.tsv', sep='\t') display(df_2.head(3)) ###Output _____no_output_____ ###Markdown Third source and format: * **Source**: Twitter API* **Format**: 'txt'* **Name of the file**: 'tweet-json.txt' This section uses the code given by Udacity in the file 'twitter-api.py' ###Code # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # These are hidden to comply with Twitter's API terms and conditions consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) ###Output _____no_output_____ ###Markdown The length of the tweet id array can be used to check whether all tweets were gathered: ###Code tweet_ids = df_1.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) ###Output _____no_output_____ ###Markdown * The result of this step was a 'txt' file with missing values in tweet id's.
For example, number 20 `tweet_id` says 'Fail': ![alt text](capture_01.png)* This can also be seen in the number of 'Fail' entries obtained in the `fails_dict`: ![alt text](capture_02.png)* The code above took around 35 minutes to execute (2108.9 seconds): ![alt text](capture_03.png)* Due to these results, the file given by Udacity will be used, since it contains more tweet ids than the file obtained above. ###Code with open('tweet_json.txt') as file: data = [json.loads(line) for line in file] df_3 = pd.DataFrame(data) display(df_3.head(3)) ###Output _____no_output_____ ###Markdown 2nd Step: Assessing Data Looking for **data quality issues** (completeness, validity, accuracy, consistency) and **tidiness issues** Visual Assessment Data Frame 1: Twitter Archive Enhanced ###Code df_1.sample(10) ###Output _____no_output_____ ###Markdown Data Frame 2: Image Predictions ###Code df_2.sample(10) ###Output _____no_output_____ ###Markdown Data Frame 3: Twitter API ###Code pd.set_option('display.max_columns', None) df_3.sample(8) ###Output _____no_output_____ ###Markdown * At first glance it seems that in `df_1` and `df_3` there are columns with `NaN` values.* Also in `df_1` there are dogs without 'name' and 'dog stage' (doggo, floofer, pupper, puppo).
Also, the columns for 'dog stage' have tidiness issues. It can be verified that in `df_2` the algorithm works (it distinguishes dogs from other animals and objects) Programmatic Assessment Data Frame 1: Twitter Archive Enhanced ###Code # Number 1 tidiness issue: in 'stage of dog' columns (doggo, floofer, pupper, puppo) df_1.info() sum(df_1.duplicated()) # Number 1 quality issue: NaN values df_1.isnull().sum() df_1[['rating_numerator','rating_denominator']].describe() # Number 2 quality issue: numerator values out of range df_1.rating_numerator.value_counts().sort_index() # Number 3 quality issue: denominator values distinct from 10 df_1.rating_denominator.value_counts().sort_index() # Number 4 quality issue: dogs without name or valid name ('a') df_1.name.value_counts() ###Output _____no_output_____ ###Markdown Data Frame 2: Image Predictions ###Code df_2.info() # no duplicated rows sum(df_2.duplicated()) # no duplicated tweet id's sum(df_2.tweet_id.duplicated()) # Number 5 quality issue: duplicated 'jpg_url' sum(df_2.jpg_url.duplicated()) # no NaN values df_2.isnull().sum() # Number 6 quality issue: sometimes there are animals or objects that are not dogs # false indicates this is not a dog df_2.p1_dog.value_counts() # Number 6 quality issue: sometimes there are animals or objects that are not dogs df_2.groupby(['p1_dog','p2_dog','p3_dog'])['p1_dog'].count() df_2[['p1_conf','p2_conf','p3_conf']].describe() # Number 7 quality issue: lower and upper case df_2.p1.value_counts() # Number 7 quality issue: lower and upper case df_2.p2.value_counts() # Number 7 quality issue: lower and upper case df_2.p3.value_counts() ###Output _____no_output_____ ###Markdown Data Frame 3: Twitter API ###Code # Number 8 quality issue: 'retweeted_status' df_3.info() # no duplicated tweet id's sum(df_3.id.duplicated()) # Number 9 quality issue: NaN values df_3.isnull().sum() df_3[['retweet_count','favorite_count']].describe() ###Output _____no_output_____ ###Markdown Data Quality Issues 1.
Presence of `NaN` values in `df_1` in columns: `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id`, `retweeted_status_user_id`, `retweeted_status_timestamp`, and presence of columns that do not have value for the case study, such as `timestamp`, `source` and `expanded_urls`.2. High numerator values in `df_1`.3. Denominator values distinct from 10 in `df_1`.4. There are dogs without a name or a valid name in `df_1`.5. Duplicated values for column 'jpg_url' in `df_2`.6. Sometimes there are animals or objects that are not dogs in `df_2`.7. Lowercase and uppercase mix in the dog breed columns in `df_2` (columns `p1`, `p2`, `p3`).8. Presence of rows with retweets, replies and quoted status in `df_3`.9. Columns without relevant information do not add value in `df_3` (mainly related to reply and quoted status). Tidiness Issues 10. 'Stage of dog' columns (`doggo`, `floofer`, `pupper`, `puppo`), must be one variable in `df_1`.11. In `df_1` `rating_numerator` and `rating_denominator` must be one variable named `rating`12. Join `df_2` and `df_3` to `df_1` 3rd Step: Cleaning Data Copies of the original pieces of data are made prior to cleaning: ###Code df_1_clean=df_1.copy() df_2_clean=df_2.copy() df_3_clean=df_3.copy() ###Output _____no_output_____ ###Markdown Data Quality Issues Define: 1) Presence of `NaN` values in `df_1` in columns: `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id`, `retweeted_status_user_id`, `retweeted_status_timestamp`, and presence of columns that do not have value for the case study, such as `timestamp`, `source` and `expanded_urls` Code ###Code # DROP VALUES THAT ARE NOT AN ORIGINAL TWEET. # Drop rows with values distinct from NaN in columns 'retweeted_status_id', 'retweeted_status_user_id' # and 'retweeted_status_timestamp'. That means this is a retweet so must be removed. # Drop rows with values distinct from NaN in columns 'in_reply_to_status_id' and 'in_reply_to_user_id'.
# That means this is a reply so must be erased. # First validate that the values we want to drop are correct. display(df_1_clean[df_1_clean.retweeted_status_id.notnull()].shape) display(df_1_clean[df_1_clean.in_reply_to_status_id.notnull()].shape) # The length of the non-null values is the same for the three columns related to 'retweeted status', # so we proceed dropping these rows df_1_clean.drop(df_1_clean[df_1_clean.retweeted_status_id.notnull()].index, inplace=True) # The length of the non-null values is the same for the three columns related to 'in reply to', # so we proceed dropping these rows df_1_clean.drop(df_1_clean[df_1_clean.in_reply_to_status_id.notnull()].index, inplace=True) # DROP NAN VALUES AND COLUMNS WITHOUT VALUE FOR THE CASE STUDY. # Now drop the columns with NaN's that have no value for us, and also the columns we will not use. df_1_clean = df_1_clean.drop([ 'in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'timestamp', 'source', 'expanded_urls' ], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code display(df_1_clean.shape) display(df_1_clean.sample(5)) display(df_1_clean.info()) ###Output _____no_output_____ ###Markdown Define: 2) Denominator values distinct from 10 in `df_1`.
Code ###Code # Visualize denominator values distinct from 10 pd.set_option('display.max_colwidth', None) df_1_denominator = df_1_clean[df_1_clean['rating_denominator'] != 10] display(df_1_denominator.shape) display(df_1_denominator[['tweet_id', 'text', 'rating_numerator', 'rating_denominator']]) # Manually correct rating numerators and denominators that have been misassigned rows_to_update = [1068, 1165, 1202, 1662, 2335] cols_to_update = ['rating_numerator', 'rating_denominator'] values = [[14,10], [13,10], [11,10], [10,10], [9,10]] df_1_clean.loc[rows_to_update, cols_to_update] = values ###Output _____no_output_____ ###Markdown Test ###Code # Check those ratings manually corrected df_1_denominator = df_1_clean[df_1_clean['rating_denominator'] != 10] display(df_1_denominator[['tweet_id', 'text', 'rating_numerator', 'rating_denominator']]) display(df_1_denominator.shape) ###Output _____no_output_____ ###Markdown Define: 3) High numerator values in `df_1`. Code ###Code # Visualize numerator values greater than 20 df_1_numerator = df_1_clean[df_1_clean['rating_numerator'] >= 20] display(df_1_numerator.shape) display(df_1_numerator[['tweet_id', 'text', 'rating_numerator']]) # Manually correct rating numerators that have been misassigned rows_to_update = [695, 763, 1712] cols_to_update = ['rating_numerator'] # Round values to the nearest integer values = [10, 11, 11] df_1_clean.loc[rows_to_update, cols_to_update] = values ###Output _____no_output_____ ###Markdown Test ###Code # Check those ratings manually corrected df_1_numerator = df_1_clean[df_1_clean['rating_numerator'] >= 20] display(df_1_numerator.shape) display(df_1_numerator[['tweet_id', 'text', 'rating_numerator']]) ###Output _____no_output_____ ###Markdown Define: 4) There are dogs without a name or a valid name in `df_1`.
Code ###Code # In 'name' column, names that are not capitalized have wrong values, so must be converted to 'None' df_1_clean.loc[df_1_clean.name.str.islower(),'name']='None' # In 'name' column replace 'None' with NaN values df_1_clean.name=df_1_clean.name.replace('None',np.nan) ###Output _____no_output_____ ###Markdown Test ###Code df_1_clean.sample(10) ###Output _____no_output_____ ###Markdown Define: 5) Duplicated values for column `jpg_url` in `df_2`. Code ###Code # Remove duplicate values and keep the first to appear df_2_clean = df_2_clean.drop_duplicates(subset=['jpg_url'], keep='first') ###Output _____no_output_____ ###Markdown Test ###Code display(df_2_clean.shape) ###Output _____no_output_____ ###Markdown Define: 6) Sometimes there are animals or objects that are not dogs in `df_2`. Code ###Code # I will make the assumption that the first prediction is correct, so # I will not take into account what second and third predictions say. # Remove the 'false' values because that means that the row does not # contain data from a dog but from another animal display(df_2_clean.p1_dog.value_counts()) # Remove in 'p1_dog' animals different than dogs df_2_clean.drop(df_2_clean[df_2_clean['p1_dog'] == False].index, inplace=True) # Rename columns 'p1' and 'p1_conf' df_2_clean.rename(columns={'p1': 'predicted_breed', 'p1_conf': 'predicted_confidence'}, inplace=True) # Delete columns with second and third predictions. # Also remove 'img_num' and 'p1_dog' columns because those have no value now. df_2_clean = df_2_clean.drop([ 'img_num', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog' ], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code display(df_2_clean.sample(5)) display(df_2_clean.shape) ###Output _____no_output_____ ###Markdown Define: 7) Lowercase and uppercase mix in `predicted_breed` column in `df_2`. 
Code ###Code df_2_clean['predicted_breed'] = df_2_clean.predicted_breed.str.capitalize() ###Output _____no_output_____ ###Markdown Test ###Code display(df_2_clean.sample(5)) display(df_2_clean.shape) display(df_2_clean.info()) ###Output _____no_output_____ ###Markdown Define: 8) Presence of rows with retweets, replies and quoted status in `df_3`. Code ###Code # Validate that the values we want to drop are correct. display(df_3_clean[df_3_clean.retweeted_status.notnull()].shape) display(df_3_clean[df_3_clean.in_reply_to_status_id.notnull()].shape) display(df_3_clean[df_3_clean.quoted_status_id.notnull()].shape) # Now we proceed to drop these rows # Retweets df_3_clean.drop(df_3_clean[df_3_clean.retweeted_status.notnull()].index, inplace=True) # Replies df_3_clean.drop(df_3_clean[df_3_clean.in_reply_to_status_id.notnull()].index, inplace=True) # Quoted Status df_3_clean.drop(df_3_clean[df_3_clean.quoted_status_id.notnull()].index, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code df_3_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2069 entries, 0 to 2353 Data columns (total 31 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 created_at 2069 non-null object 1 id 2069 non-null int64 2 id_str 2069 non-null object 3 full_text 2069 non-null object 4 truncated 2069 non-null bool 5 display_text_range 2069 non-null object 6 entities 2069 non-null object 7 extended_entities 1971 non-null object 8 source 2069 non-null object 9 in_reply_to_status_id 0 non-null float64 10 in_reply_to_status_id_str 0 non-null object 11 in_reply_to_user_id 0 non-null float64 12 in_reply_to_user_id_str 0 non-null object 13 in_reply_to_screen_name 0 non-null object 14 user 2069 non-null object 15 geo 0 non-null object 16 coordinates 0 non-null object 17 place 1 non-null object 18 contributors 0 non-null object 19 is_quote_status 2069 non-null bool 20 retweet_count 2069 non-null int64 21 favorite_count 2069 non-null int64 22 
favorited 2069 non-null bool 23 retweeted 2069 non-null bool 24 possibly_sensitive 2066 non-null object 25 possibly_sensitive_appealable 2066 non-null object 26 lang 2069 non-null object 27 retweeted_status 0 non-null object 28 quoted_status_id 0 non-null float64 29 quoted_status_id_str 0 non-null object 30 quoted_status 0 non-null object dtypes: bool(4), float64(3), int64(3), object(21) memory usage: 460.7+ KB ###Markdown Define: 9) Columns without relevant information do not add value in `df_3` . Code ###Code # DROP NAN VALUES AND COLUMNS WITHOUT VALUE FOR THE CASE STUDY. # Now drop the columns with NaN's that have no value for us, and also the columns we will not use. df_3_clean = df_3_clean.drop([ 'created_at', 'id_str', 'full_text', 'truncated', 'display_text_range', 'entities', 'extended_entities', 'source', 'in_reply_to_status_id', 'in_reply_to_status_id_str', 'in_reply_to_user_id', 'in_reply_to_user_id_str', 'in_reply_to_screen_name', 'user', 'geo', 'coordinates', 'place', 'contributors', 'is_quote_status', 'favorited', 'retweeted', 'possibly_sensitive', 'possibly_sensitive_appealable', 'lang', 'retweeted_status', 'quoted_status_id', 'quoted_status_id_str', 'quoted_status' ], axis=1) # Rename 'id' column df_3_clean.rename(columns={'id': 'tweet_id'}, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code display(df_3_clean.sample(5)) display(df_3_clean.shape) display(df_3_clean.info()) ###Output _____no_output_____ ###Markdown Tidiness Issues Define: 10) 'Stage of dog' columns (`doggo`, `floofer`, `pupper`, `puppo`), must be one variable in `df_1`. 
Code ###Code # replace all the 'None' stage_of_dog = ['doggo', 'pupper', 'floofer', 'puppo' ] for i in stage_of_dog: df_1_clean[i] = df_1_clean[i].replace('None', '') # using 'cat' to combine the columns df_1_clean['stage_of_dog'] = df_1_clean.doggo.str.cat(df_1_clean.floofer).str.cat(df_1_clean.pupper).str.cat(df_1_clean.puppo) # erase the columns been replaced df_1_clean=df_1_clean.drop(['doggo','floofer','pupper','puppo'], axis=1) # assign NaN values to empty spaces df_1_clean.stage_of_dog=df_1_clean.stage_of_dog.replace('', np.nan) # replacing wrong outputs of 'stage_of_dog' df_1_clean.loc[df_1_clean.stage_of_dog == 'doggopupper', 'stage_of_dog'] = 'doggo, pupper' df_1_clean.loc[df_1_clean.stage_of_dog == 'doggopuppo', 'stage_of_dog'] = 'doggo, puppo' df_1_clean.loc[df_1_clean.stage_of_dog == 'doggofloofer', 'stage_of_dog'] = 'doggo, floofer' ###Output _____no_output_____ ###Markdown Test ###Code df_1_clean.sample(5) df_1_clean.stage_of_dog.value_counts() ###Output _____no_output_____ ###Markdown Define: 11) In df_1 `rating_numerator` and `rating_denominator` must be one variable named `rating` Code ###Code # Making the operation to obtain 'rating' variable df_1_clean['rating']=df_1_clean.rating_numerator/df_1_clean.rating_denominator df_1_clean=df_1_clean.drop(['rating_numerator','rating_denominator'], axis=1) ###Output _____no_output_____ ###Markdown Test ###Code display(df_1_clean.sample(5)) display(df_1_clean.shape) display(df_1_clean.info()) ###Output _____no_output_____ ###Markdown Define: 12) Join `df_2` and `df_3` to `df_1` Code ###Code # I make the assumption of an inner join because I want the lowest amount of missing data # (except for columns 'name' and 'stage_of_dog') # First join 'df_1_clean' with 'df_3_clean' df_merge_1 = pd.merge(left=df_1_clean, right=df_3_clean, on=['tweet_id'], how='inner') # And now merging 'df_2_clean' with 'df_merge_1' final_df_clean = pd.merge(left=df_merge_1, right=df_2_clean, on=['tweet_id'], how='inner') # Finally 
converting 'tweet_id' to a string final_df_clean['tweet_id'] = final_df_clean['tweet_id'].apply(str) ###Output _____no_output_____ ###Markdown Test ###Code # Final results of cleaning process display(final_df_clean.shape) display(final_df_clean.info()) display(final_df_clean.isnull().sum()) display(sum(final_df_clean.duplicated())) display(final_df_clean.sample(5)) ###Output _____no_output_____ ###Markdown * Finally, there are 1463 rows and 10 columns that will be used for EDA.* See that there are no duplicated values.* Also, the only missing values are in name and state of the dog. 4th Step: Storing and Acting on Wrangled Data Saving the final data frameStoring the cleaned data set in a `csv` file named `twitter_archive_master.csv` ###Code final_df_clean.to_csv('twitter_archive_master.csv', index=False) ###Output _____no_output_____ ###Markdown Exploratory Data Analysis ###Code final_df_clean.describe() ###Output _____no_output_____ ###Markdown * To compare **who are the most popular dogs**, a weight will be assigned to each numerical variable. * **An assumption has been made**, giving 30% of importance to `rating`, `retweet_count` and `favorite_count`. 
* `predicted_confidence` will be assigned with 10% of importance so **a new aggregated variable will be added**.* But first of all, a **data normalization** must be made, and the technique used is [z-score](https://towardsdatascience.com/data-normalization-with-pandas-and-scikit-learn-7c1cc6ed6475) ###Code # Apply the z-score method in Pandas using the .mean() and .std() methods def z_score(df): # copy the dataframe df_std = df.copy() # apply the z-score method for column in df_std.columns: df_std[column] = (df_std[column] - df_std[column].mean()) / df_std[column].std() return df_std # Call the z_score function df_final_standardized = z_score(final_df_clean[['rating','retweet_count','favorite_count','predicted_confidence']]) # Obtain the final score using the weights assumed before df_final_standardized['score'] = 0.3*df_final_standardized['rating'] + 0.3*df_final_standardized['retweet_count'] + 0.3*df_final_standardized['favorite_count'] + 0.1*df_final_standardized['predicted_confidence'] # Drop all normalized columns but keeping'score' df_final_standardized = df_final_standardized.drop(['rating', 'retweet_count', 'favorite_count', 'predicted_confidence' ], axis=1) display(df_final_standardized.head()) # Merge 'df_final_standardized' to 'final_df_clean' by indexes df_final = pd.merge(final_df_clean, df_final_standardized, left_index=True, right_index=True, how='inner') display(df_final.head()) display(df_final.info()) # Saving changes for the new column added df_final.to_csv('twitter_archive_master.csv', index=False) ###Output _____no_output_____ ###Markdown * Finally the dataset is sorted by 'score'. * Below are the five most popular dogs.* Note that some images are captures of videos, that's why sometimes those are blurred and with low quality, but it can be seen in the original tweet. 
###Code df_final.sort_values('score',ascending=False).head() ###Output _____no_output_____ ###Markdown Visualizations The most popular Dog Breeds ###Code plt.figure(figsize = (10,4)) ax = sns.barplot( x = final_df_clean['predicted_breed'].value_counts()[0:15].index, y = final_df_clean['predicted_breed'].value_counts()[0:15], data = final_df_clean, palette="mako" ); plt.ylabel("Prediction Count",fontsize = 15); plt.xlabel("Breed of Dogs",fontsize = 15); plt.title("The most popular Dog Breeds",fontsize = 18); ax.set_xticklabels(ax.get_xticklabels(),rotation = 90, fontsize = 14); ###Output _____no_output_____ ###Markdown Boxplot, Scatterplot and Kernel Density Estimate (KDE) Analysis ###Code g = sns.PairGrid(final_df_clean); g.map_diag(sns.boxplot); g.map_upper(sns.scatterplot); g.map_lower(sns.kdeplot); ###Output _____no_output_____ ###Markdown Wrangling WeRateDogs Twitter Data Gather ###Code import pandas as pd import requests import numpy as np import tweepy from tweepy import OAuthHandler import json from timeit import default_timer as timer import matplotlib.pyplot as plt %matplotlib inline twitter_archive_df = pd.read_csv('twitter-archive-enhanced.csv') #response = requests.get("https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv") #with open('image_predictions.tsv', 'wb') as file: # file.write(response.content) image_predictions_df = pd.read_csv('image_predictions.tsv', sep='\t') twitter_archive_df '''# Credentials redacted — never publish real API keys consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) tweet_ids = twitter_archive_df['tweet_id']''' '''# Query Twitter's 
API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict)''' '''json_data = [] with open('tweet_json.txt', 'r') as file: for line in file: line_json = json.loads(line) json_data.append({ 'tweet_id': line_json['id_str'], 'favorite_count': line_json['favorite_count'], 'retweet_count': line_json['retweet_count'] }) api_df = pd.DataFrame(json_data) api_df.head()''' # save data to csv #api_df.to_csv('api_data.csv', index=False) api_df = pd.read_csv('api_data.csv') api_df.head() ###Output _____no_output_____ ###Markdown Assess ###Code twitter_archive_df.info() # check for duplicates twitter_archive_df.duplicated().any() # check the text data print(twitter_archive_df.iloc[-2, 5]) # check if the ratings match np.all(twitter_archive_df.text.str.findall(r'(\d+)/\d+').str[-1] == twitter_archive_df.rating_numerator) # check if the source is only one value twitter_archive_df.source.unique() image_predictions_df image_predictions_df.info() # check for duplicates image_predictions_df.duplicated().any() api_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2333 entries, 0 to 2332 Data columns (total 3 columns): favorite_count 2333 non-null int64 retweet_count 2333 non-null int64 tweet_id 2333 non-null int64 dtypes: int64(3) memory usage: 54.8 KB ###Markdown Quality`twitter_archive:`- Timestamp is a string, not datetime- Missing dogs' names- The text contains all tweet data, including picture and video URLs- 
Dataset includes retweets- Dataset includes reply data- Invalid dog names (a, an, the, etc.)- Missing expanded url- Multiple values in expanded url in case of many photos- Missing dog stage values- Wrong rating numerators- None is the default value for null dog stage- The source is an href`image predictions:`- Breeds' names sometimes have - instead of _- Inconsistent dog breeds: some start with a lowercase letter and others with a capital letter Tidiness`twitter_archive:`- doggo, floofer, pupper, and puppo are values, not variables- The rating denominator is not needed- tweet id and url are related to tweet data`image predictions:`- Breeds of dogs belong to the dogs data Clean ###Code twitter_archive_clean = twitter_archive_df.copy() image_predictions_clean = image_predictions_df.copy() ###Output _____no_output_____ ###Markdown Define- Convert the tweet timestamp into datetime by using pd.to_datetime Code ###Code twitter_archive_clean['timestamp'] = pd.to_datetime(twitter_archive_df['timestamp']) ###Output _____no_output_____ ###Markdown Test ###Code print(twitter_archive_clean.timestamp.dtype) ###Output datetime64[ns] ###Markdown Define- Remove reply tweets Code ###Code twitter_archive_clean = twitter_archive_clean[twitter_archive_clean.in_reply_to_status_id.isnull()] ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.in_reply_to_status_id.notnull().any() ###Output _____no_output_____ ###Markdown Define- Remove retweeted tweets Code ###Code twitter_archive_clean = twitter_archive_clean[twitter_archive_clean.retweeted_status_id.isnull()] ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.retweeted_status_id.notnull().any() ###Output _____no_output_____ ###Markdown Define- Drop in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp and rating_denominator Code ###Code twitter_archive_clean.drop(columns=['in_reply_to_status_id', 'in_reply_to_user_id', 
'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp', 'rating_denominator'], inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.columns ###Output _____no_output_____ ###Markdown Define- Remove urls from the end of text Code ###Code twitter_archive_clean['text'] = twitter_archive_clean.text.str.strip().replace('(https?:\/\/.*[\r\n]*)', '') ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.text.str.extract('(https?:\/\/.*[\r\n]*)').count() twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 11 columns): tweet_id 2097 non-null int64 timestamp 2097 non-null datetime64[ns] source 2097 non-null object text 2097 non-null object expanded_urls 2094 non-null object rating_numerator 2097 non-null int64 name 2097 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dtypes: datetime64[ns](1), int64(2), object(8) memory usage: 196.6+ KB ###Markdown Define- Fix wrong ratings by using regex Code ###Code twitter_archive_clean['rating_numerator'] = twitter_archive_clean.text.str.findall('(\d+)/\d+').str[-1].astype(int) ###Output _____no_output_____ ###Markdown Test ###Code np.all(twitter_archive_clean.text.str.findall('(\d+)/\d+').str[-1] == twitter_archive_clean.rating_numerator) twitter_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 11 columns): tweet_id 2097 non-null int64 timestamp 2097 non-null datetime64[ns] source 2097 non-null object text 2097 non-null object expanded_urls 2094 non-null object rating_numerator 2097 non-null int32 name 2097 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dtypes: datetime64[ns](1), int32(1), int64(1), object(8) memory usage: 188.4+ KB 
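One caveat with the `(\d+)/\d+` pattern used above: it only captures integer numerators, so the handful of decimal ratings in the archive come out truncated — on "13.5/10" it matches "5/10" and returns 5. A small sketch of a more permissive extractor (the `extract_rating` helper is hypothetical, shown only to illustrate the pattern):

```python
import re

def extract_rating(text):
    """Return (numerator, denominator) of the last x/y fraction in text, or None."""
    # (\d+(?:\.\d+)?) also accepts decimal numerators such as 13.5
    matches = re.findall(r'(\d+(?:\.\d+)?)/(\d+)', text)
    if not matches:
        return None
    num, den = matches[-1]  # keep the last fraction found, as the notebook does
    return float(num), int(den)

print(extract_rating("This is Bella. She deserves 13.5/10"))  # (13.5, 10)
print(extract_rating("After so much 4/20 talk, 13/10"))       # (13.0, 10)
```

Applied over the column this would be `df_1_clean.text.apply(extract_rating)`; whether to round the decimal numerators afterwards, as done manually above, is a judgment call.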
###Markdown Define- Keep only the first URL when expanded_urls contains several Code ###Code twitter_archive_clean['expanded_urls'] = twitter_archive_clean.expanded_urls.str.split(',', n=1, expand=True)[0] ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.expanded_urls.head() ###Output _____no_output_____ ###Markdown Define- Extract the source value from the anchor tag Code ###Code twitter_archive_clean['source'] = twitter_archive_clean.source.str.extract('<a.*>(.*)</a>')[0] ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.source.unique() ###Output _____no_output_____ ###Markdown Define- Create a dog_stage column by combining doggo, floofer, pupper and puppo Code ###Code twitter_archive_clean['doggo'].replace('None', '', inplace=True) twitter_archive_clean['floofer'].replace('None', '', inplace=True) twitter_archive_clean['pupper'].replace('None', '', inplace=True) twitter_archive_clean['puppo'].replace('None', '', inplace=True) twitter_archive_clean['dog_stage'] = (twitter_archive_clean['doggo'] + twitter_archive_clean['floofer'] + twitter_archive_clean['pupper'] + twitter_archive_clean['puppo']) twitter_archive_clean['dog_stage'].replace('doggopupper', 'doggo pupper', inplace=True) twitter_archive_clean['dog_stage'].replace('doggopuppo', 'doggo puppo', inplace=True) twitter_archive_clean['dog_stage'].replace('doggofloofer', 'doggo floofer', inplace=True) twitter_archive_clean['dog_stage'].replace('', 'unknown', inplace=True) twitter_archive_clean.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_clean.dog_stage.value_counts() ###Output _____no_output_____ ###Markdown Define- Replace - in breeds' names with _ and convert them to lower case Code ###Code image_predictions_clean['p1'] = image_predictions_clean.p1.str.lower().str.replace('-', '_') image_predictions_clean['p2'] = image_predictions_clean.p2.str.lower().str.replace('-', '_') 
image_predictions_clean['p3'] = image_predictions_clean.p3.str.lower().str.replace('-', '_') ###Output _____no_output_____ ###Markdown Test ###Code print(image_predictions_clean.p1.str.contains('-').any()) print(image_predictions_clean.p2.str.contains('-').any()) print(image_predictions_clean.p3.str.contains('-').any()) ###Output False False False ###Markdown Define- Create dogs table Code ###Code dogs_df = twitter_archive_clean[['tweet_id', 'name', 'rating_numerator', 'dog_stage']].merge(image_predictions_clean[['tweet_id', 'p1', 'p1_conf', 'p1_dog']], on='tweet_id') dogs_df['tweet_id'] = dogs_df.tweet_id.astype(str) ###Output _____no_output_____ ###Markdown Test ###Code dogs_df.head() ###Output _____no_output_____ ###Markdown Define- Create tweets table Code ###Code tweets_df = twitter_archive_clean[twitter_archive_clean.columns.difference(['name', 'rating_numerator', 'dog_stage'])].merge(api_df, on='tweet_id') tweets_df['tweet_id'] = tweets_df.tweet_id.astype(str) ###Output _____no_output_____ ###Markdown Test ###Code tweets_df.head() # Now merge all needed data into one data frame for analysis purposes df = twitter_archive_clean.merge(image_predictions_clean[['tweet_id', 'p1', 'p1_conf', 'p1_dog']], on='tweet_id').merge(api_df, on='tweet_id') df['tweet_id'] = df.tweet_id.astype(str) df.head() # Now save all cleaned data df.to_csv('twitter_archive_master.csv', index=False) twitter_archive_clean.to_csv('twitter_archive_clean.csv', index=False) image_predictions_clean.to_csv('image_predictions_clean.csv', index=False) dogs_df.to_csv('dogs_data.csv', index=False) tweets_df.to_csv('tweets_data.csv', index=False) ###Output _____no_output_____ ###Markdown Data Visualizations What proportion is predicted to be dogs? ###Code dogs_df.p1_dog.mean() plt.figure(figsize=(7, 7)) plt.pie([dogs_df.p1_dog.mean(), 1 - dogs_df.p1_dog.mean()], labels=['dogs', 'non dogs'], autopct='%1.2f%%'); ###Output _____no_output_____ ###Markdown What are the summary statistics for our data? 
###Code df.describe() ###Output _____no_output_____ ###Markdown Do dogs receive more likes? ###Code dogs = df[df['p1_dog'] == True] non_dogs = df[df['p1_dog'] == False] dogs.favorite_count.mean(), non_dogs.favorite_count.mean() plt.figure(figsize=(7, 7)) plt.hist(dogs.favorite_count, alpha=0.7) plt.hist(non_dogs.favorite_count, alpha=0.8); ###Output _____no_output_____ ###Markdown Do dogs receive more retweets? ###Code dogs.retweet_count.mean(), non_dogs.retweet_count.mean() plt.figure(figsize=(7, 7)) plt.hist(dogs.retweet_count, alpha=0.7) plt.hist(non_dogs.retweet_count, alpha=0.8); ###Output _____no_output_____ ###Markdown Which tweet was the most retweeted? ###Code tweets_df.iloc[tweets_df.retweet_count.idxmax()] ###Output _____no_output_____ ###Markdown Which tweet has the most likes? ###Code tweets_df.iloc[tweets_df.favorite_count.idxmax()] ###Output _____no_output_____ ###Markdown What is the source of tweets? ###Code print(tweets_df.source.value_counts() / len(tweets_df)) tweets_df.source.value_counts().plot(kind='pie', figsize=(7, 7)); ###Output Twitter for iPhone 0.936842 Vine - Make a Scene 0.043541 Twitter Web Client 0.014833 TweetDeck 0.004785 Name: source, dtype: float64 ###Markdown What kind of dog stages in the dataset? ###Code dogs_df.dog_stage.value_counts().plot(kind='bar', figsize=(7, 7)); ###Output _____no_output_____ ###Markdown Data Wrangling ###Code # Importing required libraries import pandas as pd import numpy as np import requests import json import tweepy import requests import os ###Output _____no_output_____ ###Markdown Gathering Data for Project 1.Opening a file in hand ###Code # opening the file given as is df_archive = pd.read_csv('twitter-archive-enhanced.csv') df_archive.head() ###Output _____no_output_____ ###Markdown 2. 
File downloaded from Internet programmatically ###Code # Make directory if it doesn't already exist folder_name = 'image_predictions' if not os.path.exists(folder_name): os.makedirs(folder_name) # Method to get our request url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) # To save the file in our folder with open (os.path.join(folder_name,url.split('/')[-1]),mode = 'wb') as file: file.write(response.content) # To list the contents of the folder os.listdir(folder_name) # Opening the tab separated file df_predictions = pd.read_csv('image_predictions/image-predictions.tsv', sep = ('\t')) df_predictions.head() ###Output _____no_output_____ ###Markdown 3.Source : API(Twitter) After setting twitter account and Twitter App, we will get the api object. ###Code consumer_key = '' consumer_secret = '' access_token = '' access_secret = '' # Api object will be created to get twitter Data consumer_key = 'YOUR CONSUMER KEY' consumer_secret = 'YOUR CONSUMER SECRET' access_token = 'YOUR ACCESS TOKEN' access_secret = 'YOUR ACCESS SECRET' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True) tweet_ids = list(df_archive.tweet_id) # getting the list of twitter ids # Tweet data is gathered by using Ids in the twitter archive Data farme(df_archive) with open('tweet_json.txt', 'a', encoding='UTF-8') as file: for tweet_id in tweet_ids: try: tweet = api.get_status(tweet_id, tweet_mode='extended')._json json.dump(tweet, file) file.write('\n') except: continue # Read text file line by line to create dataframe tweets_data = [] with open('tweet_json.txt') as file: for line in file: try: tweet = json.loads(line) tweets_data.append(tweet) except: continue twitter_api = pd.DataFrame(tweets_data, columns=list(tweets_data[0].keys())) twitter_api.head() # Reduce df_api to 
the necessary columns df = twitter_api[['id', 'retweet_count', 'favorite_count']] # Save Dataframe for Visual Assessment df.to_csv("tweet_json.csv") # Reading the file stored from json data df_json = pd.read_csv('tweet_json.csv',index_col = [0]) df_json.head() df_json.shape ###Output _____no_output_____ ###Markdown Assess Assessing Data for this Project. Assess the data visually and programmatically for quality and tidiness issues. Document at least eight (8) quality issues and two (2) tidiness issues. You do not need to gather the tweets beyond August 1st, 2017. Assessment has two types: - Visual Assessment - Programmatic Assessment The three DataFrames will be assessed visually first; for this purpose I used Excel sheets. In the detect phase I noted the quality and tidiness issues; next, in the document phase, I will document them. Quality Issues- Datatype of 'Timestamp' and 'retweeted_status_timestamp' is object rather than datetime in the archive data.- Datatype of tweet_id is integer; it should be string since it is an identifier.- Missing data in 5 columns of more than 90 percent of the archive data.- Inconsistent data in the 'name' column, containing 'a', 'an', 'the' as dog names.- 'None' is used in place of NaN in the four dog stage columns.- Retweet rows should be dropped, as we are interested only in the original tweets.- The predictions DataFrame has its tweet_id column as integer; it should be string.- The img_num column gives no valuable information, so it can be dropped.- 13 columns in the DataFrame received through the Twitter API have less than 90 percent of data.- Adjust datatypes of the date column in the JSON data file.- Rename the 'id' column of the JSON file to tweet_id so the tables can be merged later on. Tidiness Issues- Dog stages should be one column rather than four.- Timestamp should be separated into date and time columns.- Retweet count and favorite count should be part of the archive DataFrame. 
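The first tidiness item — collapsing the four stage columns into a single `dog_stage` — can be rehearsed on a made-up toy frame before touching the real archive (the data below is invented; only the column names follow the archive):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the archive's four stage columns ('None' marks absence)
df = pd.DataFrame({
    'doggo':   ['doggo', 'None', 'None', 'doggo'],
    'floofer': ['None', 'None', 'None', 'None'],
    'pupper':  ['None', 'pupper', 'None', 'pupper'],
    'puppo':   ['None', 'None', 'None', 'None'],
})

stages = ['doggo', 'floofer', 'pupper', 'puppo']
# Blank out 'None', concatenate what remains, then tidy the multi-stage rows
df['dog_stage'] = (df[stages]
                   .replace('None', '')
                   .agg(''.join, axis=1)
                   .replace({'': np.nan, 'doggopupper': 'doggo pupper'}))
print(df['dog_stage'].tolist())  # ['doggo', 'pupper', nan, 'doggo pupper']
```

On the real data the same chain applies, with the other combined values ('doggopuppo', 'doggofloofer') mapped the same way.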
###Code # Display a basic summary of the DataFrame using .info df_archive.info() # Display a basic summary of the DataFrame using .info df_predictions.info() # Display a basic summary of the DataFrame using .info df_json.info() # Display the first five rows of the DataFrame using .head df_archive.head() # Display the first five rows of the DataFrame using .head df_predictions.head() # Display the first five rows of the DataFrame using .head df_json.head() # Display the entry counts for the Name column using .value_counts df_archive['name'].value_counts() ###Output _____no_output_____ ###Markdown Clean Cleaning includes merging individual pieces of data according to the rules of tidy data. ###Code # Copy the dataframes for cleaning purpose and saving the original one untouched archive_clean = df_archive.copy() predictions_clean = df_predictions.copy() json_clean = df_json.copy() ###Output _____no_output_____ ###Markdown DataType `Archive`: Datatype of Timestamp column **Define**Changing the datatype of the Timestamp column in the archive_clean column **Code** ###Code archive_clean.timestamp = pd.to_datetime(archive_clean.timestamp) ###Output _____no_output_____ ###Markdown **Test** ###Code archive_clean.dtypes ###Output _____no_output_____ ###Markdown `Archive`: Datatype of retweeted_status_timestamp column **Define**Changing the datatype of the Timestamp column in the archive_clean column **Code** ###Code archive_clean.retweeted_status_timestamp = pd.to_datetime(archive_clean.retweeted_status_timestamp) ###Output _____no_output_____ ###Markdown **Test** ###Code archive_clean.dtypes ###Output _____no_output_____ ###Markdown `Archive`: Datatype of tweet_id column **Define**Changing the datatype of the tweet_id column in the archive_clean column to the string datatype **Code** ###Code archive_clean['tweet_id'] = archive_clean['tweet_id'].astype(str) ###Output _____no_output_____ ###Markdown **Test** ###Code archive_clean.dtypes ###Output _____no_output_____ 
###Markdown Drop The retweet rows **Define**Dropping the rows with the retweet status, as they are retweet rather than original ones **Code** ###Code # Finding out the Number of rows having retweet Status archive_clean[archive_clean['retweeted_status_timestamp'].notnull() == True].shape #Dropping the rows with retweet status archive_clean = archive_clean.drop(archive_clean[archive_clean['retweeted_status_id'].notnull()== True].index) ###Output _____no_output_____ ###Markdown **Test** ###Code archive_clean[archive_clean['retweeted_status_timestamp'].notnull() == True].shape # Hence no rows with retweet status ###Output _____no_output_____ ###Markdown Missing data `Archive`:Missing Data of 5 columns **Define**dropping the columns with missing values in Archive Dataframe **Code** ###Code archive_clean = archive_clean.drop(['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_timestamp','retweeted_status_user_id','retweeted_status_id'], axis=1) ###Output _____no_output_____ ###Markdown **Test** ###Code archive_clean.sample(5) ###Output _____no_output_____ ###Markdown **Define**Replacing the 'None' to the NAN in the doggo, floofer, puppo and pupper column **Code** ###Code archive_clean = archive_clean.replace('None', np.nan) ###Output _____no_output_____ ###Markdown **Test** ###Code archive_clean.sample(5) ###Output _____no_output_____ ###Markdown Drop The Rows without images **Define**drop the rows with no images( expanded_urls) **Code** ###Code # Finding the number of rows with Null expanded Urls archive_clean[archive_clean['expanded_urls'].isnull()].shape archive_clean.dropna(subset = ['expanded_urls'],inplace = True) ###Output _____no_output_____ ###Markdown **Test** ###Code archive_clean[archive_clean['expanded_urls'].isnull()].shape ###Output _____no_output_____ ###Markdown **Define**Inconsistentdata of 'Name' column,containing 'a','an','the' as name of the dogs.We will replace those names with Nan values. 
**Code** ###Code archive_clean['name'].replace(['a','an','the','light','life','by','actually','just','getting','infuriating','old','all','this','mad','very','not','one','my','quite','such','None'], np.nan,inplace = True) ###Output _____no_output_____ ###Markdown **Test** ###Code archive_clean.name.str.islower().sum() ###Output _____no_output_____ ###Markdown DataType `Predictions Dataframe`: Datatype of tweet_id column **Define**Changing the datatype of the tweet_id column in the predictions_clean dataframe to string **Code** ###Code predictions_clean['tweet_id'] = predictions_clean['tweet_id'].astype(str) ###Output _____no_output_____ ###Markdown **Test** ###Code predictions_clean.dtypes ###Output _____no_output_____ ###Markdown **Define**The img_num column can be dropped with the drop method **Code** ###Code predictions_clean = predictions_clean.drop(['img_num'], axis=1) ###Output _____no_output_____ ###Markdown **Test** ###Code predictions_clean.columns ###Output _____no_output_____ ###Markdown Renaming the id column **Define**The id column is renamed to tweet_id so that the dataframes can be merged later on **Code** ###Code json_clean.rename(columns = {'id' : 'tweet_id'},inplace =True) ###Output _____no_output_____ ###Markdown **Test** ###Code json_clean.columns ###Output _____no_output_____ ###Markdown DataType `Json Dataframe`: Datatype of id column **Define**Changing the datatype of the id column to string **Code** ###Code json_clean['tweet_id'] = json_clean['tweet_id'].astype(str) ###Output _____no_output_____ ###Markdown **Test** ###Code json_clean.dtypes ###Output _____no_output_____ ###Markdown - Retweet count and favorite count should be a part of the Archive dataframe.
**Define**Merging the two dataframes as they should be in a single dataframe **Code** ###Code master_clean = pd.merge(left=json_clean,right=archive_clean, left_on='tweet_id', right_on='tweet_id', how='inner') ###Output _____no_output_____ ###Markdown **Test** ###Code master_clean.sample(5) ###Output _____no_output_____ ###Markdown A single variable of dog stages should be in a single column **Define**Removing the last four separate columns used for the same variable **Code** ###Code # Columns are specified which are to be kept and which are to be removed remove_cols = ['doggo', 'floofer', 'pupper', 'puppo'] remain_cols= ['tweet_id', 'timestamp','retweet_count', 'favorite_count', 'source', 'text','expanded_urls', 'rating_numerator', 'rating_denominator', 'name'] master_clean = pd.melt(master_clean, id_vars = remain_cols, value_vars = remove_cols, var_name = 'stage', value_name = 'dog_stages') # Delete the helper 'stage' column master_clean = master_clean.drop('stage', axis=1) # Delete fully duplicated rows master_clean = master_clean.drop_duplicates() ###Output _____no_output_____ ###Markdown **Test** ###Code master_clean.head() ###Output _____no_output_____ ###Markdown Timestamp should be separated into date and time columns.
**Define**The timestamp column should be split into two separate columns for date and time **Code** ###Code # Making a separate column for date master_clean['date'] = master_clean.timestamp.dt.date # Making a separate column for time master_clean['time'] = master_clean.timestamp.dt.time # Dropping the now-redundant timestamp column master_clean.drop('timestamp', axis=1, inplace = True) ###Output _____no_output_____ ###Markdown **Test** ###Code master_clean.columns ###Output _____no_output_____ ###Markdown Merging the predictions dataframe and the master clean dataframe **Define**Merging the two dataframes to give the data its final shape **Code** ###Code twitter_archive_master = pd.merge(left=master_clean,right=predictions_clean, left_on='tweet_id', right_on='tweet_id', how='inner') ###Output _____no_output_____ ###Markdown **Test** ###Code twitter_archive_master.head() twitter_archive_master.shape ###Output _____no_output_____ ###Markdown Store the dataframe ###Code # Store the clean DataFrame in a CSV file named twitter_archive_master.csv twitter_archive_master.to_csv('twitter_archive_master.csv') ###Output _____no_output_____ ###Markdown Analysis and Visualization ###Code # Importing the necessary library for visualizations import matplotlib.pyplot as plt plt.style.use('ggplot') #Loading the dataframe df =pd.read_csv('twitter_archive_master.csv') # Changing the data type of the time and date columns to datetime df.date = pd.to_datetime(df.date) df.time = pd.to_datetime(df.time) # Changing the datatype of the id column to string df.tweet_id = df.tweet_id.astype(str) # Dropping the Unnamed column df.drop(['Unnamed: 0'],axis=1,inplace=True) # Dropping the extra columns of the predictions table df.drop(['p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'] ,axis=1,inplace=True) # Replacing the source column values with readable source names df.source.replace(['<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>','<a href="http://twitter.com"
rel="nofollow">Twitter Web Client</a>','<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>'], ['IPhone','Laptop','TweetDeck'], inplace = True) df.source.value_counts() ###Output _____no_output_____ ###Markdown So this shows that the phone was commonly used to tweet, rather than a laptop or the Twitter dashboard. ###Code # Box plot for the dog stages and favorite counts df.boxplot( column='favorite_count', by='dog_stages', figsize = (8,8)) plt.ylim(0, 35000) # Setting y limit plt.xlabel('Dog Stages') # Setting x label plt.ylabel('Number of Favorite count') # Setting y label plt.title('Most Favorite Dog stage') # Setting Title of plot; ###Output _____no_output_____ ###Markdown Hence the most favorited dog stage is puppo. No doubt it is a cute stage, which explains why people like this stage most. ###Code # Box plot for the ratings and stages df.boxplot( column='rating_numerator', by='dog_stages', figsize = (8,8)) plt.ylim(6, 15) # Setting y limit plt.xlabel('Dog Stages') # Setting x label plt.ylabel('Ratings') # Setting y label plt.title('Most rated Dog stage'); # Setting Title of plot; ###Output _____no_output_____ ###Markdown Here we can clearly infer that the pupper stage is not liked very much, which is why it has received low ratings. On the contrary, the mean ratings for the remaining three stages (doggo, floofer and puppo) are about the same.
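The boxplot comparisons above can also be summarized numerically with a groupby aggregation. A small sketch on a toy frame; the values below are made up, not taken from the archive:

```python
import pandas as pd

# Toy stand-in for the cleaned archive (hypothetical ratings)
toy = pd.DataFrame({
    'dog_stages': ['doggo', 'doggo', 'pupper', 'pupper', 'puppo'],
    'rating_numerator': [12, 13, 9, 10, 13],
})

# Mean, median and count of ratings per dog stage
stage_stats = toy.groupby('dog_stages')['rating_numerator'].agg(['mean', 'median', 'count'])
print(stage_stats)
```

The same call on the real `df`, grouped by `dog_stages`, gives the numbers behind the boxplots.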
###Code # Statistical summary of the dataframe df.describe() ###Output _____no_output_____ ###Markdown This is the statistical description of the data. We are mainly interested in the mean, maximum and minimum of the retweet, favorite and rating columns ###Code # Let's explore the dog with the highest rating df[df.rating_numerator == 1776].jpg_url df[df.rating_numerator == 1776] ###Output _____no_output_____ ###Markdown So the dog named **Atticus** is the one with the highest rating, and it was posted on the 4th of July. This day is celebrated as the Independence Day of the United States, and independence was declared in the year 1776, hence the rating of 1776. Thoughtful :) ###Code # Let's explore the dog with the lowest rating df[df.rating_numerator == 0].jpg_url df[df.rating_numerator == 0] ###Output _____no_output_____ ###Markdown So one of them is not even a dog picture. But the other picture is a cute dog. It is hard to see why it was given such a low rating; let's explore it further. ###Code # Let's explore the original tweet df[df.tweet_id == '835152434251116546'].expanded_urls ###Output _____no_output_____ ###Markdown So the matter has been solved: the low rating was not given to the dog himself. Rather, it was in response to the plagiarism committed by another site. Well done! ###Code # Let's explore the five most common names given to the dogs df.name.value_counts().head() # most common 5 names ###Output _____no_output_____ ###Markdown So the most common name is Cooper. Close ones are Oliver, Lucy, Charlie and Tucker ###Code # Let's explore the maximum number of times a tweet was retweeted. df.retweet_count.max() ###Output _____no_output_____ ###Markdown That's a pretty high retweet count.
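To see which tweet that maximum belongs to, `idxmax` can pull the whole row rather than just the count. A sketch on a toy frame; the IDs and counts are illustrative:

```python
import pandas as pd

# Toy stand-in for the master dataframe (hypothetical values)
toy = pd.DataFrame({
    'tweet_id': ['111', '222', '333'],
    'retweet_count': [500, 79000, 1200],
})

# Row of the most-retweeted tweet
top = toy.loc[toy['retweet_count'].idxmax()]
print(top['tweet_id'], top['retweet_count'])
```

Applied to the real `df`, this returns the full record of the most-retweeted tweet instead of just `df.retweet_count.max()`.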
###Code df.source.value_counts() # most used medium to upload the tweets # Visually see the results df.boxplot( column='retweet_count', by='source', figsize = (7,7)) plt.ylim(0,4000); ###Output _____no_output_____ ###Markdown So the phone is used almost always to tweet; the laptop and Twitter dashboard are not commonly used. ###Code # Most common breeds a = df.groupby('p1').filter(lambda x: len(x) >= 30) a['p1'].value_counts().plot(kind = 'barh',figsize = (18,10)) plt.title('The Most Common Breeds'); ###Output _____no_output_____ ###Markdown So the most common breed is the Golden Retriever. The second most common is the Labrador Retriever ###Code # Most common month of tweets df.date.dt.month.value_counts().sort_values() # Plotting a bar graph for month and number of tweets df.date.dt.month.value_counts().plot(kind = 'barh',figsize = (6,6)) plt.title('The Month with most Tweets'); ###Output _____no_output_____ ###Markdown So the most common month to tweet is December, followed by November and January. This is understandable, as these months usually include holidays and free time. The fewest tweets were posted in August.
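The month counts above come from the `.dt` accessor, and the same pattern generalizes to any datetime component (hour, weekday, year). A minimal sketch on synthetic dates, not the archive itself:

```python
import pandas as pd

# A few illustrative tweet dates
dates = pd.Series(pd.to_datetime([
    '2016-12-01', '2016-12-25', '2016-11-05', '2017-08-14',
]))

# Count tweets per calendar month (12 = December, 11 = November, 8 = August)
month_counts = dates.dt.month.value_counts()
print(month_counts)
```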
###Code # To see the relation between the ratings, favorite count and retweet counts df[['favorite_count', 'rating_numerator', 'retweet_count']].corr(method='pearson') # Scatter plot to see a correlation between the Favorite tweets and the Retweets over a period of time df.set_index('date', inplace=True) df[['favorite_count', 'retweet_count']].plot(ylim=[0,40000], style = '.', alpha = 0.4, figsize =(10,8)) plt.title('Favorites and Retweets with Time') plt.xlabel('Date') plt.ylabel('Count'); ###Output _____no_output_____ ###Markdown **Test** ###Code enhanced_archive_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 tweet_id 2356 non-null int64 1 timestamp 2356 non-null object 2 source 2356 non-null object 3 text 2356 non-null object 4 expanded_urls 2297 non-null object 5 rating_numerator 2356 non-null int64 6 rating_denominator 2356 non-null int64 7 name 2356 non-null object 8 doggo 2356 non-null object 9 floofer 2356 non-null object 10 pupper 2356 non-null object 11 puppo 2356 non-null object dtypes: int64(3), object(9) memory usage: 221.0+ KB ###Markdown Unwanted columns in `tweet_information` table**Define**Drop the unwanted columns **Code** ###Code list(tweet_info_clean) tweet_info_clean = tweet_info_clean[['id', 'favorite_count', 'retweet_count']] ###Output _____no_output_____ ###Markdown **Test** ###Code tweet_info_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2354 entries, 0 to 2353 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 id 2354 non-null int64 1 favorite_count 2354 non-null int64 2 retweet_count 2354 non-null int64 dtypes: int64(3) memory usage: 55.3 KB ###Markdown **`twitter_archive` Unwanted dog stage columns, i.e. 'doggo', 'floofer', etc when one is enough****Define**There are several columns for dog stages, where one will suffice.
**Code** ###Code # We create a column that has dog_names # We also turn "None" values into empty strings so that we can convert all empty strings into NaNs in the end enhanced_archive_clean['dog_stage'] = enhanced_archive_clean[['doggo','floofer','pupper','puppo']].replace("None", "").sum(1) # Then, we drop the unwanted columns enhanced_archive_clean.drop(columns=['doggo','floofer','pupper','puppo'], inplace=True) ###Output _____no_output_____ ###Markdown **Test** ###Code # We display a slice to see if the code worked print(enhanced_archive_clean.iloc[90:120].dog_stage) # and it did! print("The counts of dog stages in the table:\n", enhanced_archive_clean.dog_stage.value_counts()) print("\nLooks like there are a lot of missing values.") ###Output The counts of dog stages in the table: 1976 pupper 245 doggo 83 puppo 29 doggopupper 12 floofer 9 doggofloofer 1 doggopuppo 1 Name: dog_stage, dtype: int64 Looks like there are a lot of missing values. ###Markdown Convert the 9 columns in `image_predictions` into one. **Define**We can iterate through the columns to find which prediction was true, then add its result to the main column. 
**Code** ###Code # Create an empty column to attach to it image_predictions_clean['dog_type'] = np.nan # Put the dog types into the dog_type column for row in image_predictions_clean.itertuples(): if row.p1_dog == True: image_predictions_clean.loc[row.Index, 'dog_type'] = image_predictions_clean.loc[row.Index, 'p1'] elif row.p2_dog == True: image_predictions_clean.loc[row.Index, 'dog_type'] = image_predictions_clean.loc[row.Index, 'p2'] elif row.p3_dog == True: image_predictions_clean.loc[row.Index, 'dog_type'] = image_predictions_clean.loc[row.Index, 'p3'] # Drop the unnecessary columns image_predictions_clean.drop(columns=['jpg_url','img_num','p1','p1_conf','p1_dog','p2','p2_conf','p2_dog','p3','p3_conf','p3_dog'],inplace=True) ###Output _____no_output_____ ###Markdown **Test** ###Code image_predictions_clean.head() ###Output _____no_output_____ ###Markdown Now that we're done with the tidiness issues - Quality issues Tweet Info 'id' instead of 'tweet_id' in `tweet_info_clean`**Define**Rename the column to `tweet_id` for it to match the `tweet_id` column in the other two dataframes **Code** ###Code # Show the column names in tweet_info_clean list(tweet_info_clean) tweet_info_clean.rename({'id': 'tweet_id'}, axis=1, inplace=True) ###Output _____no_output_____ ###Markdown **Test** ###Code list(tweet_info_clean) ###Output _____no_output_____ ###Markdown Rows with no picture info in `enhanced_archive` table**Define**Create urls for the missing values using the tweet ids **Code** ###Code for tweetid in enhanced_archive_clean.tweet_id: url = enhanced_archive_clean.loc[enhanced_archive_clean['tweet_id'] == tweetid].expanded_urls.to_string() if 'http' not in url: enhanced_archive_clean.loc[enhanced_archive_clean['tweet_id'] == tweetid, 'expanded_urls'] = "https://twitter.com/dog_rates/status/" + str(tweetid) ###Output _____no_output_____ ###Markdown **Test** ###Code enhanced_archive_clean.expanded_urls.isna().sum() ###Output _____no_output_____ ###Markdown Incorrect dog 
names: 'a', 'such', 'all', etc in `enhanced_archive`**Define**Replace incorrect names (ones that start with a lowercase letter) with the placeholder string 'None' **Code** ###Code len(enhanced_archive_clean[enhanced_archive_clean.name == "None"]) enhanced_archive_clean[enhanced_archive_clean.name.str.contains(r'^[a-z]')] # Series of incorrect names incorrect = enhanced_archive_clean[enhanced_archive_clean.name.str.contains(r'^[a-z]')].name print("These are all the incorrect names:\n", incorrect.unique()) print("\nThere are {} incorrect names.".format(len(incorrect))) # Let's replace these values with "None" enhanced_archive_clean.replace({'name': r'^[a-z].*'}, {'name': 'None'}, regex=True, inplace=True) # Now, let's change the column name `name` into `dog_name` enhanced_archive_clean.rename({'name': 'dog_name'}, axis='columns', inplace=True) ###Output _____no_output_____ ###Markdown **Test** ###Code # This is the same code as before; it should now print 0 or nothing # Series of incorrect names incorrect = enhanced_archive_clean[enhanced_archive_clean.dog_name.str.contains(r'^[a-z]')].dog_name print("These are all the incorrect names:\n", incorrect.unique()) print("\nThere are {} incorrect names.".format(len(incorrect))) # This code should print the overall number of 'None's in the dog_name column after cleaning len(enhanced_archive_clean[enhanced_archive_clean.dog_name == "None"]) len(enhanced_archive_clean) ###Output _____no_output_____ ###Markdown One tweet with no dog rating**Define**One of the tweets has a rating of 24/7, which is not a rating.
###Code enhanced_archive_clean.loc[516] ###Output _____no_output_____ ###Markdown **Code** ###Code # Drop the row that has this false rating enhanced_archive_clean.drop(enhanced_archive_clean.index[[516]], inplace=True) ###Output _____no_output_____ ###Markdown **Test** ###Code numerator_24_7 = enhanced_archive_clean.query('rating_numerator == 24' and 'rating_denominator == 7') print(numerator_24_7) print("\n", numerator_24_7.index) print("\n", enhanced_archive_clean.iloc[516]) ###Output Empty DataFrame Columns: [tweet_id, timestamp, source, text, expanded_urls, rating_numerator, rating_denominator, dog_name, dog_stage] Index: [] Int64Index([], dtype='int64') tweet_id 810896069567610880 timestamp 2016-12-19 17:14:23 +0000 source <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a> text This is Hunter. He just found out he needs braces. Requesting an orthodogtist stat. 11/10 you're fine Hunter, everything's fine https://t.co/zW1o0W4AYV expanded_urls https://twitter.com/dog_rates/status/810896069567610880/photo/1,https://twitter.com/dog_rates/status/810896069567610880/photo/1,https://twitter.com/dog_rates/status/810896069567610880/photo/1 rating_numerator 11 rating_denominator 10 dog_name Hunter dog_stage Name: 517, dtype: object ###Markdown DONE! **Inaccurate ratings: correct numerators are decimals in `enhanced_twitter_archive`** **Define**Some rating numerators contain decimals. We want to change the column into floats, then correctly insert this data after extracting it from the original text. ###Code enhanced_archive_clean[enhanced_archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)")].rating_numerator ###Output C:\Users\LENOVO\anaconda3\lib\site-packages\pandas\core\strings.py:1954: UserWarning: This pattern has match groups. To actually get the groups, use str.extract. 
return func(self, *args, **kwargs) ###Markdown **Code** ###Code # Change the data type for the numerator and denominator columns into floats enhanced_archive_clean['rating_numerator'] = enhanced_archive_clean['rating_numerator'].astype('float') enhanced_archive_clean['rating_denominator'] = enhanced_archive_clean['rating_denominator'].astype('float') enhanced_archive_clean.loc[(enhanced_archive_clean['tweet_id'] == 883482846933004288) & (enhanced_archive_clean['rating_numerator'] == 5), ['rating_numerator']] = 13.5 enhanced_archive_clean.loc[(enhanced_archive_clean['tweet_id'] == 832215909146226688) & (enhanced_archive_clean['rating_numerator'] == 27), ['rating_numerator']] = 11.27 enhanced_archive_clean.loc[(enhanced_archive_clean['tweet_id'] == 786709082849828864) & (enhanced_archive_clean['rating_numerator'] == 75), ['rating_numerator']] = 9.75 enhanced_archive_clean.loc[(enhanced_archive_clean['tweet_id'] == 778027034220126208) & (enhanced_archive_clean['rating_numerator'] == 27), ['rating_numerator']] = 11.27 enhanced_archive_clean.loc[(enhanced_archive_clean['tweet_id'] == 680494726643068929) & (enhanced_archive_clean['rating_numerator'] == 26), ['rating_numerator']] = 11.26 ###Output _____no_output_____ ###Markdown **Test** ###Code # Now, the numerators are fixed if we check them enhanced_archive_clean[enhanced_archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)")].rating_numerator ###Output _____no_output_____ ###Markdown Incorrect data type for column `tweet_id` in all dataframes **Define**The tweet_id in all columns is type `int64`. It should be in string form. 
**Code** ###Code enhanced_archive_clean['tweet_id'] = enhanced_archive_clean['tweet_id'].astype(str) image_predictions_clean['tweet_id'] = image_predictions_clean['tweet_id'].astype(str) tweet_info_clean['tweet_id'] = tweet_info_clean['tweet_id'].astype(str) ###Output _____no_output_____ ###Markdown **Test** ###Code enhanced_archive_clean.dtypes image_predictions_clean.dtypes tweet_info_clean.dtypes ###Output _____no_output_____ ###Markdown **Correction DONE!** `enhanced_archive` source column is hard to read***Define***The source column shows information in a format that is hard to read. The important information can be extracted and shown more clearly. ***Code*** ###Code # Create a new column to attach to it enhanced_archive_clean['source_'] = np.nan # Put the readable source names into the source_ column for row in enhanced_archive_clean.itertuples(): if "Twitter for iPhone" in row.source: enhanced_archive_clean.loc[row.Index, 'source_'] = "Twitter for iPhone" elif 'Twitter Web Client' in row.source: enhanced_archive_clean.loc[row.Index, 'source_'] = 'Twitter Web Client' elif 'Vine - Make a Scene' in row.source: enhanced_archive_clean.loc[row.Index, 'source_'] = 'Vine - Make a Scene' elif 'TweetDeck' in row.source: enhanced_archive_clean.loc[row.Index, 'source_'] = 'TweetDeck' enhanced_archive_clean.drop(columns=['source'], inplace=True) enhanced_archive_clean.rename({'source_': 'source'}, axis='columns', inplace=True) ###Output _____no_output_____ ###Markdown *Test* ###Code enhanced_archive_clean.source.value_counts() enhanced_archive_clean.source.unique() # This will print the same number as in the value_counts function above len(enhanced_archive_clean[enhanced_archive_clean.source == 'Vine - Make a Scene']) ###Output _____no_output_____ ###Markdown Not all dog types are upper case in `image_predictions` table***Define***Capitalize the dog names ***Code*** ###Code # Create an empty column first image_predictions_clean['dog_capital'] = np.nan #
Change the NaN values in the new column for row in image_predictions_clean.itertuples(): image_predictions_clean.loc[row.Index, 'dog_capital'] = str(row.dog_type).title() # Drop the original column and rename the new column image_predictions_clean.drop(columns=['dog_type'], inplace=True) image_predictions_clean.rename({'dog_capital': 'dog_type'}, axis=1, inplace=True) ###Output _____no_output_____ ###Markdown ***Test*** ###Code image_predictions_clean.head(2) ###Output _____no_output_____ ###Markdown There are empty strings in the `dog_stage` column in `enhanced_archive_clean`***Define***We want to replace the empty strings with 'None' ###Code enhanced_archive_clean.dog_stage[8:20] ###Output _____no_output_____ ###Markdown ***Code*** ###Code enhanced_archive_clean.replace({'dog_stage': r'^$'}, {'dog_stage': 'None'}, regex=True, inplace=True) ###Output _____no_output_____ ###Markdown ***Test*** ###Code enhanced_archive_clean.dog_stage[8:20] enhanced_archive_clean.dog_stage.value_counts() ###Output _____no_output_____ ###Markdown Done! 
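As a sanity check on the anchored `^$` pattern used above: written this way, the regex matches only completely empty strings, so real values and even whitespace-only strings are left alone. A toy sketch, separate from the cleaning dataframes:

```python
import pandas as pd

s = pd.Series(['', 'pupper', ' ', 'doggo', ''])
# Anchored regex: only fully empty strings become 'None';
# the single-space entry is untouched because '^$' does not match it
cleaned = s.replace({r'^$': 'None'}, regex=True)
print(cleaned.tolist())
```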
* Drum rolls * Now it is time foooor: the last step of data cleaning!**We want to join the three cleaned dataframes together.** ###Code list(enhanced_archive_clean) list(image_predictions_clean) list(tweet_info_clean) ###Output _____no_output_____ ###Markdown We want to merge the dataframes on `tweet_id` and keep the columns in the primary twitter archive ###Code twitter_master = pd.merge(left=enhanced_archive_clean, right=tweet_info_clean, on='tweet_id', how='left') twitter_master = pd.merge(left=twitter_master, right=image_predictions_clean, on='tweet_id', how='left') # Resort the columns for easier access twitter_master = twitter_master[['tweet_id','text','dog_name','dog_type','dog_stage','favorite_count','retweet_count','rating_numerator','rating_denominator','source','timestamp','expanded_urls']] twitter_master.head(2) twitter_master.dtypes ###Output _____no_output_____ ###Markdown We forgot to change the timestamp column into datetime64 ###Code twitter_master.timestamp = pd.to_datetime(twitter_master.timestamp) ###Output _____no_output_____ ###Markdown Test ###Code twitter_master.timestamp.head(10) twitter_master.timestamp.value_counts() ###Output _____no_output_____ ###Markdown Now we save the `twitter_master` into a csv file ###Code twitter_master.to_csv('twitter_archive_master.csv') len(twitter_master) ###Output _____no_output_____ ###Markdown Analysis and Visualization We can do an extra step: extract hours from the timestamp datetime objects to see in which hour the page owner mostly tweets. ###Code import datetime from collections import Counter import pandas as pd import matplotlib.pyplot as plt df = twitter_master.copy() df['tweet_hour'] = np.nan for row in df.itertuples(): df.loc[row.Index, 'tweet_hour'] = row.timestamp.hour ###Output _____no_output_____ ###Markdown When does the page owner usually tweet? ###Code # This code stores the number of tweets per hour in a dictionary.
# Keys are the hours, values are the tweet counts hours = df.tweet_hour hour_count = Counter(hours) df_hours = pd.DataFrame.from_dict(hour_count, orient='index') df_hours = df_hours.sort_index() df_hours.plot(kind='bar', legend=False) plt.style.use('seaborn-darkgrid') plt.xlabel('Hours') plt.ylabel('Tweet Counts') plt.title('Hour Tweet Counts') plt.savefig('hour_count.png', bbox_inches="tight") plt.show(); ###Output _____no_output_____ ###Markdown * **This visualization shows that the page owner tweets the most from 12 a.m. to 3 a.m.** * **His activity drops significantly after that; he rarely tweets from 6 a.m. to 2 p.m.*** **He also tweets from 3 p.m. to 11 p.m. (lower activity level)****Note: This distribution is bimodal with two peaks: the owner during his day mostly tweets around 4 p.m. and during his night, he mostly tweets around 1 a.m.** What platform for releasing tweets does the owner prefer the most? ###Code source = df.source source_count = Counter(source) source_count = pd.DataFrame.from_dict(source_count, orient='index') source_count = source_count.sort_index() source_count.plot(kind='bar', legend=False) plt.style.use('seaborn-darkgrid') plt.xlabel('Tweet Source') plt.ylabel('Count') plt.title('Tweet Source Counts') plt.savefig('source_count.png', bbox_inches="tight") plt.show(); ###Output _____no_output_____ ###Markdown **We notice that the page owner usually tweets from his iPhone** What is the most common dog type? ###Code df.dog_type.value_counts().head(15) ###Output _____no_output_____ ###Markdown **The most common type is the Golden Retriever** What is the most common dog name? ###Code df.dog_name.value_counts().head(10) ###Output _____no_output_____ ###Markdown **Charlie is the most common dog name; less common names are repeated 11 times, 10 times, 9 times, and 8 times. They are 'Charlie', 'Oliver', 'Lucy', 'Cooper', 'Lola', 'Tucker', 'Penny', and so on.** How many dog types do the tweets cover?
###Code print("There are {} different dog types in the tweets, because 'NaN' is counted as one type.".format(len(df.dog_type.unique()) - 1)) ###Output There are 114 different dog types in the tweets, because 'NaN' is counted as one type. ###Markdown Wrangle & Analyze Data**By Tamer Ahmed**Wrangling WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations. The dataset that we will be wrangling (and analyzing and visualizing) is the tweet archive of Twitter user @dog_rates, also known as WeRateDogs. WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. These ratings almost always have a denominator of 10. The Twitter archive only contains very basic tweet information. Using Python and its libraries, we will gather data from a variety of sources and in a variety of formats, assess its quality and tidiness, then clean it. ###Code import seaborn as sns import datetime as dt import json import numpy as np import pandas as pd import re import requests import tweepy ###Output _____no_output_____ ###Markdown Gather ###Code twitter_archive = pd.read_csv("twitter-archive-enhanced.csv") twitter_archive.head() tsv_url = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv" r = requests.get(tsv_url) with open(tsv_url.split('/')[-1], mode = 'wb') as file: file.write(r.content) images = pd.read_csv('image-predictions.tsv', sep = '\t') images.head() #https://stackoverflow.com/questions/28384588/twitter-api-get-tweets-with-specific-id # API credentials redacted; replace the placeholders with your own keys auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET') auth.set_access_token('ACCESS_TOKEN', 'ACCESS_TOKEN_SECRET') api = tweepy.API(auth, parser = tweepy.parsers.JSONParser(), wait_on_rate_limit = True, wait_on_rate_limit_notify = True) #Download Tweepy status object based on Tweet ID and store in list
list_of_tweets = [] tweet_ids = twitter_archive.tweet_id # Tweets that can't be found are saved in the list below: cant_find_tweets_for_those_ids = [] for tweet_id in tweet_ids: try: list_of_tweets.append(api.get_status(tweet_id)) except Exception as e: cant_find_tweets_for_those_ids.append(tweet_id) print("The list of tweets" ,len(list_of_tweets)) print("The list of tweets not found" , len(cant_find_tweets_for_those_ids)) my_list_of_dicts = [] for each_json_tweet in list_of_tweets: my_list_of_dicts.append(each_json_tweet) #write this list into a txt file: with open('tweet_json.txt', 'w') as file: file.write(json.dumps(my_list_of_dicts, indent=4)) #identify information of interest from JSON dictionaries in txt file #and put it in a dataframe called tweet JSON my_demo_list = [] with open('tweet_json.txt', encoding='utf-8') as json_file: all_data = json.load(json_file) for each_dictionary in all_data: tweet_id = each_dictionary['id'] whole_tweet = each_dictionary['text'] only_url = whole_tweet[whole_tweet.find('https'):] favorite_count = each_dictionary['favorite_count'] retweet_count = each_dictionary['retweet_count'] followers_count = each_dictionary['user']['followers_count'] friends_count = each_dictionary['user']['friends_count'] whole_source = each_dictionary['source'] only_device = whole_source[whole_source.find('rel="nofollow">') + 15:-4] source = only_device retweeted_status = each_dictionary['retweeted_status'] = each_dictionary.get('retweeted_status', 'Original tweet') if retweeted_status == 'Original tweet': url = only_url else: retweeted_status = 'This is a retweet' url = 'This is a retweet' my_demo_list.append({'tweet_id': str(tweet_id), 'favorites': int(favorite_count), 'retweets': int(retweet_count), }) tweets = pd.DataFrame(my_demo_list, columns = ['tweet_id', 'favorites','retweets']) tweets tweets.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2337 entries, 0 to 2336 Data columns (total 3 columns): tweet_id 2337 non-null object
favorites 2337 non-null int64 retweets 2337 non-null int64 dtypes: int64(2), object(1) memory usage: 54.9+ KB ###Markdown Assess ###Code twitter_archive.info() twitter_archive.name.value_counts() twitter_archive.rating_denominator.value_counts() twitter_archive.rating_numerator.value_counts() twitter_archive.head() images.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB ###Markdown Clean Change the datatype of tweet_id in all tables to object(**Tidiness-1**) Collect all data of the three tables into one table(**Tidiness-2**) Condensing Dog Type columns(**Tidiness-3**) Condensing dog breed predictions(**Tidiness-4**) Convert timestamp to datetime object(**quality-1**) Remove retweets and tweets which do not include an image(**quality-2**) Removing extra columns['doggo', 'floofer', 'pupper', 'puppo'] (**quality-3**) Removing the processed columns['p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'] (**quality-4**) Removing useless columns ['in_reply_to_status_id', 'in_reply_to_user_id'] (**quality-5**) Extract Dog Rates and Dog Count(**quality-6**) Extract Names(**quality-7**) Remove "a", "the" and all non-name words(**quality-8**) Define:Change the datatype of tweet_id in all tables to object(**Tidiness-1**) Code: ###Code images['tweet_id'] = images['tweet_id'].astype('str') twitter_archive['tweet_id'] = twitter_archive['tweet_id'].astype('str') twitter_archive.info() images.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074
Data columns (total 12 columns): tweet_id 2075 non-null object jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(1), object(5) memory usage: 152.1+ KB ###Markdown Define:colect all data of three tables into one table(**Tidiness-2**) Code: ###Code df2 = pd.merge(left=twitter_archive, right=images, left_index=True, right_index=True, how='left') df2 = pd.merge(left=df2, right=tweets, left_index=True, right_index=True, how='left') df2.to_csv('df2copy.csv', encoding = 'utf-8') df = pd.read_csv("df2copy.csv") ###Output _____no_output_____ ###Markdown Test: ###Code df.columns df ###Output _____no_output_____ ###Markdown Define:Convert timestamp to datetime object(**quality-1**) Code: ###Code df['timestamp'] = pd.to_datetime(df['timestamp']) df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 33 columns): Unnamed: 0 2356 non-null int64 tweet_id_x 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null datetime64[ns] source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object tweet_id_y 2075 non-null float64 jpg_url 2075 non-null object img_num 2075 non-null float64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null object p2 2075 non-null object p2_conf 2075 non-null float64 
p2_dog 2075 non-null object p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null object tweet_id 2337 non-null float64 favorites 2337 non-null float64 retweets 2337 non-null float64 dtypes: datetime64[ns](1), float64(12), int64(4), object(16) memory usage: 607.5+ KB ###Markdown Define: Remove Retweets and Tweets which does not include image(**quality-2**) Code: ###Code # removing the tweets without images df = df[pd.notnull(df['jpg_url'])] # removing retweets df = df[pd.isnull(df['retweeted_status_id'])] df.shape[0] df.drop(['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis = 1, inplace = True) ###Output _____no_output_____ ###Markdown Test: ###Code df.columns ###Output _____no_output_____ ###Markdown Define:Condensing Dog Type columns(**Tidiness-3**) Code: ###Code dog_type = [] x = ['pupper', 'puppo', 'doggo', 'floof'] y = ['pupper', 'puppo', 'doggo', 'floof'] for row in df['text']: row = row.lower() for word in x: if word in str(row): dog_type.append(y[x.index(word)]) break else: dog_type.append('None') df['dog_type'] = dog_type ###Output _____no_output_____ ###Markdown **Test:** ###Code df['dog_type'].value_counts() ###Output _____no_output_____ ###Markdown Define: removing extra columns(**quality-3**) Code: ###Code df.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test: ###Code #test df.columns ###Output _____no_output_____ ###Markdown Define:Condensing dog breed predictions(**Tidiness-4**) Code: ###Code breed = [] conf= [] def breed_conf(row): if row['p1_dog']: breed.append(row['p1']) conf.append(row['p1_conf']) elif row['p2_dog']: breed.append(row['p2']) conf.append(row['p2_conf']) elif row['p3_dog']: breed.append(row['p3']) conf.append(row['p3_conf']) else: breed.append('Unidentifiable') conf.append(0) df.apply(breed_conf, axis = 1) df['breed'] = breed df['confidence'] = conf ###Output _____no_output_____ ###Markdown **Test:** ###Code 
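The selection logic of breed_conf can also be tested in isolation (a hedged sketch; `pick_breed` is a name introduced here, not the notebook's code):

```python
# Hedged sketch of breed_conf above as a plain function: walk the three
# image predictions in order and keep the first one flagged as a dog.
def pick_breed(predictions):
    """predictions: list of (label, confidence, is_dog) tuples, best first."""
    for label, confidence, is_dog in predictions:
        if is_dog:
            return label, confidence
    return "Unidentifiable", 0
```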
df.head() ###Output _____no_output_____ ###Markdown Define: removing the processed columns(**quality-4**) Code: ###Code df.drop(['p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog',], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test: ###Code df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1896 entries, 0 to 2074 Data columns (total 20 columns): Unnamed: 0 1896 non-null int64 tweet_id_x 1896 non-null int64 in_reply_to_status_id 74 non-null float64 in_reply_to_user_id 74 non-null float64 timestamp 1896 non-null datetime64[ns] source 1896 non-null object text 1896 non-null object expanded_urls 1841 non-null object rating_numerator 1896 non-null int64 rating_denominator 1896 non-null int64 name 1896 non-null object tweet_id_y 1896 non-null float64 jpg_url 1896 non-null object img_num 1896 non-null float64 tweet_id 1896 non-null float64 favorites 1896 non-null float64 retweets 1896 non-null float64 dog_type 1896 non-null object breed 1896 non-null object confidence 1896 non-null float64 dtypes: datetime64[ns](1), float64(8), int64(4), object(7) memory usage: 311.1+ KB ###Markdown Define:Removing useless columns(**quality-5**) Code: ###Code df['in_reply_to_status_id'].value_counts() df['in_reply_to_user_id'].value_counts() ###Output _____no_output_____ ###Markdown These all reply to a single user id, i.e., @dog_rates ###Code df.drop(['in_reply_to_status_id', 'in_reply_to_user_id'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test: ###Code df.columns ###Output _____no_output_____ ###Markdown Define:Extract Dog Rates and Dog Count(**quality-6**) Code: ###Code rates = [] #raw_rates = lambda x: rates.append(re.findall(r'(\d+(\.\d+)|(\d+))\/(\d+0)', x, flags=0)) df['text'].apply(lambda x: rates.append(re.findall(r'(\d+(\.\d+)|(\d+))\/(\d+0)', x, flags=0))) rating = [] dog_count = [] for item in rates: # for tweets with no rating, but a picture, so a dog_count of 1 if len(item) == 0: 
rating.append('NaN') dog_count.append(1) # for tweets with single rating and dog_count of 1 elif len(item) == 1 and item[0][-1] == '10': rating.append(float(item[0][0])) dog_count.append(1) # for multiple ratings elif len(item) == 1: a = float(item[0][0]) / (float(item[0][-1]) / 10) rating.append(a) dog_count.append(float(item[0][-1]) / 10) # for tweets with more than one rating elif len(item) > 1: total = 0 r = [] for i in range(len(item)): if item[i][-1] == '10': #one tweet has the phrase '50/50' so I'm coding to exclude it r.append(item[i]) for rate in r: total = total + float(rate[0]) a = total / len(item) rating.append(a) dog_count.append(len(item)) # if any error has occurred else: rating.append('Not parsed') dog_count.append('Not parsed') df['rating'] = rating # not need to also add denominator since they are all 10! df['dog_count'] = dog_count df['rating'].value_counts() df.drop(['rating_numerator', 'rating_denominator'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Test: ###Code df.info() df['dog_count'].value_counts() ###Output _____no_output_____ ###Markdown Define:Extract Names(**quality-7**) Code: ###Code df['text_split'] = df['text'].str.split() names = [] # use string starts with method to clean this up def extract_names(row): # 'named Phineas' if 'named' in row['text'] and re.match(r'[A-Z].*', row['text_split'][(row['text_split'].index('named') + 1)]): names.append(row['text_split'][(row['text_split'].index('named') + 1)]) # 'Here we have Phineas' elif row['text'].startswith('Here we have ') and re.match(r'[A-Z].*', row['text_split'][3]): names.append(row['text_split'][3].strip('.').strip(',')) # 'This is Phineas' elif row['text'].startswith('This is ') and re.match(r'[A-Z].*', row['text_split'][2]): names.append(row['text_split'][2].strip('.').strip(',')) # 'Say hello to Phineas' elif row['text'].startswith('Say hello to ') and re.match(r'[A-Z].*', row['text_split'][3]): names.append(row['text_split'][3].strip('.').strip(',')) # 
'Meet Phineas' elif row['text'].startswith('Meet ') and re.match(r'[A-Z].*', row['text_split'][1]): names.append(row['text_split'][1].strip('.').strip(',')) else: names.append('Nameless') df.apply(extract_names, axis=1) df['names'] = names ###Output _____no_output_____ ###Markdown **Test:** ###Code df['names'].value_counts() ###Output _____no_output_____ ###Markdown Define:"a", "the" and all non-name words have been removed.(**quality-8**) Code: ###Code df.drop(['text_split'], axis=1, inplace=True) df.loc[df['names'] == 'Nameless', 'names'] = None df.loc[df['breed'] == 'Unidentifiable', 'breed'] = None df.loc[df['dog_type'] == 'None', 'dog_type'] = None df.loc[df['rating'] == 0.0, 'rating'] = np.nan df.loc[df['confidence'] == 0.0, 'confidence'] = np.nan del(df['Unnamed: 0']) df.rename(columns={'tweet_id_x' : 'tweet_id'}, inplace = True) ###Output _____no_output_____ ###Markdown Test: ###Code df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1896 entries, 0 to 2074 Data columns (total 18 columns): tweet_id 1896 non-null int64 timestamp 1896 non-null datetime64[ns] source 1896 non-null object text 1896 non-null object expanded_urls 1841 non-null object name 1896 non-null object tweet_id_y 1896 non-null float64 jpg_url 1896 non-null object img_num 1896 non-null float64 tweet_id 1896 non-null float64 favorites 1896 non-null float64 retweets 1896 non-null float64 dog_type 411 non-null object breed 1611 non-null object confidence 1611 non-null float64 rating 1894 non-null object dog_count 1896 non-null float64 names 1243 non-null object dtypes: datetime64[ns](1), float64(7), int64(1), object(9) memory usage: 281.4+ KB ###Markdown Saving the cleaned file. 
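The opener-phrase rules used in extract_names above can be sketched as a standalone function (a hedged reimplementation for illustration; `extract_name` and `NAME_PATTERNS` are names introduced here):

```python
import re

# Each opener phrase is followed by a capitalized token, which we take to
# be the dog's name; lowercase words after the opener are not names.
NAME_PATTERNS = [
    re.compile(r"^This is ([A-Z]\w*)"),
    re.compile(r"^Meet ([A-Z]\w*)"),
    re.compile(r"^Say hello to ([A-Z]\w*)"),
    re.compile(r"^Here we have ([A-Z]\w*)"),
    re.compile(r"named ([A-Z]\w*)"),
]

def extract_name(text):
    for pattern in NAME_PATTERNS:
        match = pattern.search(text)
        if match:
            return match.group(1)
    return None
```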
###Code df.to_csv('twitter_archive_master.csv', encoding = 'utf-8') ###Output _____no_output_____ ###Markdown Analysis ###Code %matplotlib inline #import matplotlib import matplotlib.pyplot as plt df = pd.read_csv('twitter_archive_master.csv') df['timestamp'] = pd.to_datetime(df['timestamp']) df.set_index('timestamp', inplace=True) ###Output _____no_output_____ ###Markdown Retweets, Favorites and Ratings Correlation ###Code df[['favorites', 'retweets']].plot(style = '.', alpha = 0.4) plt.title('Favorites and Retweets with Time') plt.xlabel('Date') plt.ylabel('Count'); df.plot(y ='rating', ylim=[0,14], style = '.', alpha = 0.4) plt.title('Rating with Time') plt.xlabel('Date') plt.ylabel('Rating'); ###Output _____no_output_____ ###Markdown Here you can see the gradual increase of both favorites and retweets over time. So Brant was right, there are more ratings above 10. Still don't know the reason why there are so much high ratings.So let's see if dogs with higher ratings were getting more favorites and retweets. According to me, if the dogs are getting better they should be getting more favorites and retweets along with the higher rating. There is a strong correlation between favorites and retweets. This means that if the tweet is good in general then there will be more retweets and favorites.Yet there is no correlation between rating and retweets or rating and favorites. It can be because the dogs are not actually getting better. It can be that 'lower quality' dogs are given funnier captions. In this case, it is the caption that is getting more retweets and favorites, rather than the dog itself. Dog Stages Stats What is Most type of dogs? 
###Code dog_type = ['pupper', 'doggo', 'puppo', 'floofer', 'multiple'] dog_counts = [184, 72, 23, 9, 5] fig,ax = plt.subplots(figsize = (12,6)) ax.bar(dog_type, dog_counts, width = 0.8) ax.set_ylabel('Dog Count') ax.set_xlabel('Category') plt.title("Most Common Dog Category") plt.show() df.boxplot(column='rating', by='dog_type'); df.groupby('dog_type')['rating'].describe() df.reset_index(inplace=True) df.groupby('dog_type')['timestamp'].describe() ###Output _____no_output_____ ###Markdown So puppers are getting much lower rates than the other dog types. They have several low outliers which decrease the mean to 10.6.Floofers are consistently rated above 10. I don't know whether they are really good or the rating just gets higher with time. Maybe we can see if 'floof' is a newer term.Here we see that 'floof' is not a new term, first seen on January 2016. So we can say that floofer are consistently good dogs. Most Rated Breeds ###Code top=df.groupby('breed').filter(lambda x: len(x) >= 20) top['breed'].value_counts().plot(kind = 'bar') plt.title('The Most Rated Breeds'); ###Output _____no_output_____ ###Markdown It's difficult to know why these breeds are the top breeds. It could be because they are commonly owned. Or they could be the easiest to identify by the AI that identified them. ###Code top.groupby('breed')['rating'].describe() df['rating'].describe() df[df['rating'] <= 14]['rating'].describe() ###Output _____no_output_____ ###Markdown WeRateDogs Twitter Analysis Project IntroductionThe goal of the project is to analyze a famous Twitter account - WeRateDogs that rates people’s dogs with a humorous comment about the dogs by gathering data from Twitter, wrangling data, analyzing, and finally visualizing data to show insights and create a comprehensive and trustworthy report. 
###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline import os import requests import tweepy import json import csv import sys import time import warnings ###Output _____no_output_____ ###Markdown Gathering Data ###Code #1. read the WeRateDogs Twitter csv archive df = pd.read_csv('twitter-archive-enhanced.csv') df.head(3) #2. Use 'requests' to download tsv file from a website url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open('image_predictions.tsv', 'wb') as fd: fd.write(response.content) image_preds = pd.read_csv('image_predictions.tsv', sep = '\t') image_preds.head(3) #3. Query the Twitter API for each tweet's JSON data using Python's Tweepy consumer_key = 'xxx' consumer_secret = 'xxx' access_token = 'xxx' access_token_secret = 'xxx' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) tweet_data = {} for tweet in list(df.tweet_id): try: tweet_status = api.get_status(tweet, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) tweet_data[str(tweet)] = tweet_status._json except: print("Error for: " + str(tweet)) with open('tweet_json.txt', 'w') as outfile: json.dump(tweet_data, outfile, sort_keys = True, indent=4, ensure_ascii = False) tw_df = pd.read_json('tweet_json.txt', orient='index') ###Output _____no_output_____ ###Markdown Assessing Data ###Code df.head() image_preds.head() tw_df.head() df.info() image_preds.info() tw_df.info() df.describe() image_preds.describe() tw_df.describe() #find missing values in expanded_urls and check if there are precise data to fix the problem or if they will affect our analysis df[df['expanded_urls'].isnull() == True] df[df['expanded_urls'].isnull() == True].text.values ###Output _____no_output_____ ###Markdown Quick 
Note: It seems there are no values available to fix the missing expanded_urls, but this won't affect the analysis. ###Code #check for any errors in names df['name'].unique() #check if key words (doggo, floofer, puppo, pupper) can also be found in the text df[df['doggo'] != 'None'].text.values tw_df['lang'].unique() ###Output _____no_output_____ ###Markdown Quality Issues Twitter Archive (df) 1. Select non-reply or non-retweet data 2. Drop the columns in_reply_to_status_id, in_reply_to_user_id, and the retweet info, because we only need original tweet data for the analysis 3. The numerator and denominator of the rating should be float 4. Make ratings consistent by using 10 as the denominator and correct wrong ratings, e.g. where the denominator is 0 5. Get 'gender' based on he/she in the text 6. Extract the exact sources of tweets, such as Twitter for iPhone, and remove the urls from the sources 7. Several typos, misspellings, or non-name words in names, such as 'one', 'my', 'an', 'incredibly', which tend to be lowercase Tweet Data (tw_df) 8. Rename 'id' in tw_df as 'tweet_id', consistent with the other two tables Tidiness Issues Twitter Archive (df) 1. Combine the four columns (doggo, floofer, pupper, puppo) into one column called 'dog_stage' 2. The numerator and denominator of the rating should be float 3. Convert the data types of tweet_id, source, and dog stages into string 4. Separate timestamp into two columns: date and time Tweet Data (tw_df) 5. Convert the data type of id from int into string 6. Only select three columns for analysis: id, favorite_count, retweet_count All Tables 7. 
Inner merge three datasets by tweet_id Cleaning Data ###Code #cope dataframes df_clean = df.copy() image_preds_clean = image_preds.copy() tw_df_clean = tw_df.copy() ###Output _____no_output_____ ###Markdown DefineSelect only non-retweet ones (2356 - 181 retweets = 2175)Select only non-reply ones ###Code df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB ###Markdown Code ###Code df_clean = df_clean[df_clean['retweeted_status_id'].isnull() & df_clean['in_reply_to_status_id'].isnull()] ###Output _____no_output_____ ###Markdown Test ###Code df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2097 non-null int64 in_reply_to_status_id 0 non-null float64 in_reply_to_user_id 0 non-null float64 timestamp 2097 non-null object source 2097 non-null object text 2097 non-null object retweeted_status_id 0 non-null float64 retweeted_status_user_id 0 non-null float64 retweeted_status_timestamp 0 non-null object expanded_urls 2094 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 2097 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dtypes: float64(4), 
int64(3), object(10) memory usage: 294.9+ KB ###Markdown DefineDrop reply and retweet columns Code ###Code columns = ['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id','retweeted_status_user_id','retweeted_status_timestamp'] df_clean.drop(columns, axis = 1, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2097 entries, 0 to 2355 Data columns (total 12 columns): tweet_id 2097 non-null int64 timestamp 2097 non-null object source 2097 non-null object text 2097 non-null object expanded_urls 2094 non-null object rating_numerator 2097 non-null int64 rating_denominator 2097 non-null int64 name 2097 non-null object doggo 2097 non-null object floofer 2097 non-null object pupper 2097 non-null object puppo 2097 non-null object dtypes: int64(3), object(9) memory usage: 213.0+ KB ###Markdown DefineThe numerator and denominator of rating should be float There are non-sense rating needed to be corrected, such as 0 in the denominator Code ###Code df_clean[df_clean.text.str.contains(r'(\d+(\.\d+))\/(\d+)')].text.values df_clean[df_clean.text.str.contains(r'(\d+(\.\d+))\/(\d+)')][['text', 'rating_numerator', 'rating_denominator']] #correct the problematic numerator of rating df_clean.loc[45, 'rating_numerator'] = 13.5 df_clean.loc[695, 'rating_numerator'] = 9.75 df_clean.loc[763, 'rating_numerator'] = 11.27 df_clean.loc[1712, 'rating_numerator'] = 11.26 ###Output _____no_output_____ ###Markdown Test ###Code df_clean[df_clean.text.str.contains(r'(\d+(\.\d+))\/(\d+)', na=False)][['text', 'rating_numerator', 'rating_denominator']] df_clean['rating_numerator'].astype('float').dtypes ###Output _____no_output_____ ###Markdown DefineConsist ratings by using 10 as denominator and correct wrong ratings, e.g. 
denominator is 0 Code ###Code df_clean[df_clean['rating_denominator'] != 10][['rating_numerator','rating_denominator']] df_clean[df_clean['rating_denominator'] != 10].rating_denominator.count() #check if there is accurate rating in the text df_clean[df_clean['rating_denominator'] != 10].text.values ###Output _____no_output_____ ###Markdown Quick Note: It seems that there are two parts of mistakes:1. One group is recorded wrongly, which can be corrected manually according to the rating in the text:516 - 11/101068 - 14/101165 - 13/101202 - 11/101662 - 10/102335 - 9/102. The ratings of the other group should be divided according to the dog number the owners have in total, so that the denominator will be 10 following with the correct numerator ###Code #fix 1st group fist df_clean.loc[[516, 1068, 1165, 1202, 1662, 2335], 'rating_numerator'] = [11,14, 13, 11, 10, 9] df_clean.loc[[516, 1068, 1165, 1202, 1662, 2335], 'rating_denominator'] = 10 #fix the other group df_clean[df_clean['rating_denominator'] != 10][['rating_numerator','rating_denominator']] numerator_list = df_clean[df_clean['rating_denominator'] != 10]['rating_numerator'].tolist() denominator_list = df_clean[df_clean['rating_denominator'] != 10]['rating_denominator'].tolist() n = 10 denominator_new_list = [i / n for i in denominator_list] denominator_new_list div = [a / b for a, b in zip(numerator_list, denominator_new_list)] div index = df_clean[df_clean['rating_denominator'] != 10].index.tolist() df_clean.loc[index, 'rating_numerator'] = div df_clean.loc[index, 'rating_denominator'] = 10 ###Output _____no_output_____ ###Markdown Test ###Code #1st group df_clean.loc[[1068, 1165, 1202, 1662, 2335], 'rating_denominator'] df_clean.loc[[1068, 1165, 1202, 1662, 2335], 'rating_numerator'] #2nd group df_clean.loc[index, ['rating_numerator', 'rating_denominator']] df_clean[df_clean['rating_denominator'] != 10][['rating_numerator','rating_denominator']] ###Output _____no_output_____ ###Markdown DefineGet ‘gender’ 
based on he/she in the text Code ###Code male_words=set(["him","he's",'his',"he", "himself"]) female_words=set(["her", "she's","she", "herself"]) def find_gender(words): """ find words of genders in the sentences and categorize into female or male """ m_length = len(male_words.intersection(words)) f_length = len(female_words.intersection(words)) if m_length > 0 and f_length == 0: gender = 'male' elif m_length == 0 and f_length > 0: gender = 'female' elif m_length > 0 and f_length > 0: gender = 'both' else: gender = 'none' return gender #before find gender, cleaning the data of text column first df_clean['text'] = df_clean['text'].str.replace(",", "") df_clean['text'] = df_clean['text'].str.replace(".", "") #apply find_gender and add 'gender' column in the table df_clean['gender'] = df_clean['text'].dropna().str.lower().str.split(" ").apply(find_gender) ###Output _____no_output_____ ###Markdown test ###Code df_clean.groupby('gender').gender.count() ###Output _____no_output_____ ###Markdown DefineRevise 6 rows of 'both' gender data manually Code ###Code #check the data whose gender shows both, since it only has 6 rows #see if they can manually alter the gender data df_clean.query('gender == "both"').text #781: This girl straight up rejected a guy because he doesn't like dogs. 
She is my hero and I give her 13/10 #so the gender is female df_clean[df_clean.index == 781] ###Output _____no_output_____ ###Markdown Quick Note: It appears that the genders of the 'both' rows above should be revised as below: 319 - male, 361 - male, 365 - male, 781 - female, 1386 - male, 2064 - female ###Code df_clean.loc[[361, 365, 1386], 'gender'] = 'male' df_clean.loc[[781, 2064], 'gender'] = 'female' ###Output _____no_output_____ ###Markdown Test ###Code df_clean.loc[[361, 365, 1386, 781, 2064], 'gender'] df_clean.groupby('gender').gender.count() ###Output _____no_output_____ ###Markdown Define Extract the exact sources of tweets, such as Twitter for iPhone, and remove the urls from the sources ###Code df_clean['source'].unique() ###Output _____no_output_____ ###Markdown Code ###Code df_clean['source'] = df_clean['source'].str.replace( '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'iPhone') df_clean['source'] = df_clean['source'].str.replace( '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>', 'Twitter Web Client') df_clean['source'] = df_clean['source'].str.replace( '<a href="http://vine.co" rel="nofollow">Vine - Make a Scene</a>', 'Vine') df_clean['source'] = df_clean['source'].str.replace( '<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>', 'TweetDeck') ###Output _____no_output_____ ###Markdown Test ###Code df_clean.head(3) df_clean.groupby('source').source.count() ###Output _____no_output_____ ###Markdown Define Correct misspelled names: several typos, misspellings, or non-name words such as 'one', 'my', 'an', 'incredibly'; most of them appear in lowercase Code ###Code #find lowercase names df_clean[df_clean['name'].str.islower() == True].name.unique() df_clean[df_clean['name'].str.islower() == True].name.count() #According to the above result, for some of them the real name can be recovered from the text column; for others it cannot named_df = df_clean.loc[(df_clean['name'].str.islower()) & 
(df_clean['text'].str.contains('named'))] nameis_df = df_clean.loc[(df_clean['name'].str.islower()) & (df_clean['text'].str.contains('name is'))] non_name_df = df_clean.loc[(df_clean['name'].str.islower())] named_list = named_df.text.tolist() nameis_list = nameis_df.text.tolist() non_name_list = non_name_df.text.tolist() #put the real name got from the text into the name column, and the others are replaced by None import re for i in named_list: line = df_clean.text == i df_clean.loc[line, 'name'] = re.findall(r"named\s(\w+)", i) for i in nameis_list: line = df_clean.text == i df_clean.loc[line, 'name'] = re.findall(r"name is\s(\w+)", i) for i in non_name_list: line = df_clean.text == i df_clean.loc[line, 'name'] = 'None' #there is dog name called "O" which needs to be revised too df_clean.query('name == "O"') df_clean.loc[775, 'name'] = "O'Malley" ###Output _____no_output_____ ###Markdown Test ###Code df_clean[df_clean['name'].str.islower() == True].name.count() df_clean.loc[775, 'name'] ###Output _____no_output_____ ###Markdown DefineCombine the four columns (doggo, floofer, pupper, puppo) into one column called ‘dog_stage’ Code ###Code df_clean['dog_stage'] = None # df_clean.name = df_clean.text.str.extract('(?:This is|Meet|name is|Say hello to|named) ([A-Z][a-z]{2,12})', expand=False).values # extract status for i in df_clean.index: status_set = set(re.findall('(doggo|floofer|pupper|puppo|Doggo|Floofer|Pupper|Puppo)', df_clean.loc[i,'text'])) if len(status_set) > 0: status_value = ', '.join(status_set) df_clean.loc[i, 'dog_stage'] = status_value df_clean['dog_stage'] = df_clean['dog_stage'].str.lower() df_clean['dog_stage'].unique() index = [172, 191, 200, 531, 881,460, 575, 705, 733, 889, 956, 1063, 1113] df_clean.loc[index, 'dog_stage'] = 'doggo' df_clean.loc[1382, 'dog_stage'] = 'pupper' #drop the original four columns df_clean.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis = 1, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code 
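The stage extraction above can be sketched as a small helper (a hedged reimplementation; `extract_stage` is a name introduced here):

```python
import re

# findall plus a set catches every stage word in a tweet, and multiple
# stages join into one comma-separated value, mirroring the cell above.
def extract_stage(text):
    found = set(re.findall(r"(doggo|floofer|pupper|puppo)", text.lower()))
    return ", ".join(sorted(found)) if found else None
```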
df_clean['dog_stage'].unique() ###Output _____no_output_____ ###Markdown DefineSeparate timestamp into two columns: date and time Code ###Code from datetime import datetime, timedelta df_clean['timestamp'].dtypes, df_clean['timestamp'].iloc[0] #Convert data type of timestamp first df_clean['timestamp'] = pd.to_datetime(df_clean['timestamp']) df_clean['year'] = df_clean['timestamp'].apply(lambda t : t.strftime('%Y')) df_clean['month'] = df_clean['timestamp'].apply(lambda t : t.strftime('%B')) df_clean['date'] = df_clean['timestamp'].apply(lambda t : t.strftime('%m-%d')) df_clean['day'] = df_clean['timestamp'].apply(lambda t : t.strftime('%A')) df_clean['time'] = df_clean['timestamp'].apply(lambda t : t.strftime('%H:%M')) ###Output _____no_output_____ ###Markdown Test ###Code df_clean.head() ###Output _____no_output_____ ###Markdown DefineRename 'id' in tw_df as ‘tweet_id’, consistent with the other two tables Code ###Code tw_df_clean.rename(columns = {'id':'tweet_id'}, inplace = True) ###Output _____no_output_____ ###Markdown Test ###Code tw_df_clean.head(1) ###Output _____no_output_____ ###Markdown DefineOnly select three columns for analysis: id, favorite_count, retweet_count Code ###Code tw_df_clean = tw_df_clean[['tweet_id', 'favorite_count','retweet_count']] tw_df_clean.reset_index().drop('index', axis = 1) ###Output _____no_output_____ ###Markdown Test ###Code tw_df_clean.head() tw_df_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> DatetimeIndex: 2342 entries, 1991-02-08 13:48:08.022790149 to 1998-04-12 22:37:23.555336193 Data columns (total 3 columns): tweet_id 2342 non-null int64 favorite_count 2342 non-null int64 retweet_count 2342 non-null int64 dtypes: int64(3) memory usage: 73.2 KB ###Markdown DefineConvert 'tweet_id' in all tables from int into string Code ###Code df_clean['tweet_id'] = df_clean['tweet_id'].astype(str) image_preds_clean['tweet_id'] = image_preds_clean['tweet_id'].astype(str) tw_df_clean['tweet_id'] = 
tw_df_clean['tweet_id'].astype(str) ###Output _____no_output_____ ###Markdown Test ###Code df_clean['tweet_id'].dtypes, image_preds_clean['tweet_id'].dtypes, tw_df_clean['tweet_id'].dtypes ###Output _____no_output_____ ###Markdown DefineInner merge three datasets by tweet_id Code ###Code from functools import reduce dfs = [df_clean, image_preds_clean, tw_df_clean] twitter_archive_master = reduce(lambda left, right: pd.merge(left, right, on = 'tweet_id', how = 'inner'), dfs) ###Output _____no_output_____ ###Markdown Test ###Code twitter_archive_master.head() twitter_archive_master.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1284 entries, 0 to 1283 Data columns (total 28 columns): tweet_id 1284 non-null object timestamp 1284 non-null datetime64[ns] source 1284 non-null object text 1284 non-null object expanded_urls 1284 non-null object rating_numerator 1284 non-null float64 rating_denominator 1284 non-null int64 name 867 non-null object gender 1284 non-null object dog_stage 217 non-null object year 1284 non-null object month 1284 non-null object date 1284 non-null object day 1284 non-null object time 1284 non-null object jpg_url 1284 non-null object img_num 1284 non-null int64 p1 1284 non-null object p1_conf 1284 non-null float64 p1_dog 1284 non-null bool p2 1284 non-null object p2_conf 1284 non-null float64 p2_dog 1284 non-null bool p3 1284 non-null object p3_conf 1284 non-null float64 p3_dog 1284 non-null bool favorite_count 1284 non-null int64 retweet_count 1284 non-null int64 dtypes: bool(3), datetime64[ns](1), float64(4), int64(4), object(16) memory usage: 264.6+ KB ###Markdown Storing Data ###Code df_clean.to_csv("twitter_archive_clean.csv", header=True, index=False, encoding='utf-8', sep="\t") image_preds_clean.to_csv("image_predictions.csv", header=True, index=False, encoding='utf-8', sep="\t") tw_df_clean.to_csv("tweet_data_clean.csv", header=True, index=False, encoding='utf-8', sep="\t") 
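A hedged mini-version of the three-way reduce() merge above, on toy frames (the frame names `a`, `b`, `c` and their values are invented for illustration):

```python
from functools import reduce

import pandas as pd

# reduce() folds the list of frames into a single inner join on the shared
# tweet_id key, so only IDs present in every table survive.
a = pd.DataFrame({"tweet_id": ["1", "2"], "name": ["A", "B"]})
b = pd.DataFrame({"tweet_id": ["1", "2"], "breed": ["pug", "corgi"]})
c = pd.DataFrame({"tweet_id": ["1"], "favorite_count": [100]})
mini_master = reduce(
    lambda left, right: pd.merge(left, right, on="tweet_id", how="inner"),
    [a, b, c],
)
```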
twitter_archive_master.to_csv("twitter_archive_master.csv", header=True, index=False, encoding='utf-8', sep="\t") ###Output _____no_output_____ ###Markdown Analyzing and Visualizing Data ###Code twitter_archive_master.describe() twitter_archive_master['p2_conf'].mean(),twitter_archive_master['p3_conf'].mean() ###Output _____no_output_____ ###Markdown According to the above summary, we can get some information: The mean rating is 12.23/10 and an outlier is 1776/10, which needs to be reviewed separately. The mean favorite count of tweets is 8,243 and the maximum is 127,536. The mean retweet count of tweets is 2,494 and the maximum is 61,267. p1_dog has the highest average prediction score, 0.588. Questions to be answered: 1. Which gender appears most in tweets on this account? 2. What is the distribution of dog stages? 3. Which channels do users come from, according to the tweet sources? 4. Does the tweet with the max favorite count also have the max retweet count? Is there any correlation between favorites and retweets? 5. What is the relationship between rating and other factors, e.g. favorites and retweets? 6. At which time, day, and month do users tweet, retweet, and favorite most? 7. What is the pattern of ratings over time? 8. Which dogs have higher ratings, and why? 9. Which dogs got lower ratings, and why? 10. What are the top 5 popular dog breeds? How are their ratings and retweet/favorite counts? Which gender appears most in tweets on this account? 
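Question 4 above asks whether favorites and retweets correlate; the check can be sketched numerically on toy data (hypothetical numbers, not the real counts):

```python
import pandas as pd

# Toy frame: favorites and retweets move together, rating does not track
# either, mirroring the pattern described in the analysis.
toy = pd.DataFrame({
    "favorites": [100, 200, 300, 400],
    "retweets": [10, 22, 29, 41],
    "rating": [12, 10, 13, 11],
})
corr = toy.corr()
```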
###Code gender_count = twitter_archive_master['gender'].value_counts() size = np.array(gender_count) colors = ['gold', 'yellowgreen', 'coral', 'blue'] p, tx, autotexts = plt.pie(gender_count, labels = gender_count.index, startangle = 90, counterclock = False, autopct = "", colors = colors) for i, a in enumerate(autotexts): a.set_text("{}".format(size[i])) plt.axis('square'); ###Output _____no_output_____ ###Markdown `Note: There are more tweets about male dogs (632) on this account than about female dogs.` What is the distribution of the dog stages? ###Code dog_stage_count = twitter_archive_master['dog_stage'].value_counts() size = np.array(dog_stage_count) colors = ['gold', 'yellowgreen', 'coral', 'skyblue'] p, tx, autotexts = plt.pie(dog_stage_count, labels = dog_stage_count.index, startangle = 90, counterclock = False, autopct = "", colors = colors) for i, a in enumerate(autotexts): a.set_text("{}".format(size[i])) plt.axis('square'); ###Output _____no_output_____ ###Markdown `Note: Pupper has the greatest number of tweets on the account, at 151.` Which channels do users come from, according to the tweet sources?  ###Code source_count = twitter_archive_master['source'].value_counts() size = np.array(source_count) colors = ['gold', 'yellowgreen', 'skyblue'] labels = ['iPhone', 'Web', ''] p, tx, autotexts = plt.pie(source_count, labels = labels, startangle = 90, counterclock = False, autopct = "", colors = colors) for i, a in enumerate(autotexts): a.set_text("{}".format(size[i])) plt.axis('square'); ###Output _____no_output_____ ###Markdown `Most tweets are from iPhone.` Which time, day, month, and year do the users tweet most?
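The manual `set_text` loop used in the pie charts above can also be written as a custom `autopct` callback, which converts each wedge's percentage back into the raw count. A minimal sketch with toy counts (the 632/401 split is hypothetical, standing in for the real `value_counts()` result):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

counts = [632, 401]  # hypothetical male/female tweet counts
total = sum(counts)

def as_count(pct):
    # matplotlib passes the wedge percentage; map it back to a count
    return "{}".format(int(round(pct * total / 100.0)))

patches, texts, autotexts = plt.pie(counts, labels=["male", "female"],
                                    startangle=90, counterclock=False,
                                    autopct=as_count)
print([a.get_text() for a in autotexts])  # → ['632', '401']
```

This avoids the post-hoc loop entirely: when `autopct` is given, `plt.pie` returns the autotext artists with the labels already set.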
###Code twitter_archive_master.groupby('year').favorite_count.count() year_df = twitter_archive_master.set_index('year') year_df['favorite_count'].plot(style='o', figsize=(12,6), label='Favorite count'); year_df.retweet_count.plot(style='o',label='Retweet count') plt.legend() plt.yscale("log") plt.xlabel("Year");plt.ylabel("Count"); plt.tight_layout() ###Output _____no_output_____ ###Markdown `The favorite and retweet counts roughly represent how active users are, and the year plot above shows that user activity increases over the years, even though there are several high favorite counts in the earlier year.` ###Code #month month_tw = twitter_archive_master.groupby('month')[['tweet_id']].count() month_orders = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'] month_tw = month_tw.reindex(month_orders) #day day_tw = twitter_archive_master.groupby('day')[['tweet_id']].count() day_index = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] day_tw = day_tw.reindex(day_index) #time twitter_archive_master['time'] = twitter_archive_master['timestamp'].apply(lambda t : t.strftime('%H')) time_tw = twitter_archive_master.groupby('time')[['tweet_id']].count() plt.figure(figsize=[12,60]) plt.subplot(3,1,1) month_tw.tweet_id.plot(style='o-', figsize=(16,5), label='Tweet count') plt.legend() plt.xlabel("Month") plt.ylabel("Count") plt.tight_layout() plt.subplot(3,1,2) day_tw.tweet_id.plot(style='o-', figsize=(12,5), label='Tweet count'); plt.legend() plt.xlabel("Day") plt.ylabel("Count") plt.tight_layout() plt.subplot(3,1,3) time_tw.tweet_id.plot(style='o-', figsize=(12,5), label='Tweet count') plt.xlabel("Time") plt.ylabel("Count") plt.tight_layout() plt.savefig("favorite_Retweet_count.png"); ###Output _____no_output_____ ###Markdown `People tend to react actively on this account during holidays (e.g.
Thanksgiving, Xmas), from November to January.``In addition to the month aspect, it seems that people are more active on the account on Mondays.``The most active hours are around 00:00 and 16:00.` Does the tweet with the max favorite count also have the max retweet count? Is there any correlation between favorite and retweet count? ###Code fig = plt.figure(figsize = [12,5]); ax1 = fig.add_subplot(121); ax2 = fig.add_subplot(122); sns.regplot(x="favorite_count", y="retweet_count", data=twitter_archive_master, fit_reg=False, ax=ax1) sns.regplot(x="favorite_count", y="retweet_count", data=twitter_archive_master, fit_reg=False, ax = ax2) plt.xscale('log') plt.yscale('log'); ###Output _____no_output_____ ###Markdown `According to the plot above, the favorite count and retweet count have a highly positive correlation. It means that tweets with higher favorite counts tend to have higher retweet counts, too.` There are two outliers in the rating points, so look deeper to find out the reason. ###Code twitter_archive_master[twitter_archive_master['rating_numerator'] > 50] ###Output _____no_output_____ ###Markdown `It turns out that the highest-rated tweet was posted on Independence Day, 7/4, and the dog in the tweet is related to it, so it won users' attention.``The other tweet, with the second-highest rating, is about one of the most famous celebrities in the US whose stage name contains "Dogg", rather than a real dog. Thanks to the creativity of this tweet, it also won likes from users.` What is the relationship between rating and other factors, e.g. favorite, retweet?
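The favorite/retweet relationship seen in the scatter plots above can also be quantified with `Series.corr` (Pearson by default). A minimal sketch on toy data; on the real data the call would be `twitter_archive_master['favorite_count'].corr(twitter_archive_master['retweet_count'])`:

```python
import pandas as pd

# toy stand-in for twitter_archive_master: retweets roughly track favorites
toy_df = pd.DataFrame({
    "favorite_count": [100, 800, 2500, 8000, 127000],
    "retweet_count": [30, 250, 700, 2400, 61000],
})

# Pearson correlation coefficient; values near 1 indicate a strong
# positive linear relationship
r = toy_df["favorite_count"].corr(toy_df["retweet_count"])
print(round(r, 3))
```

A coefficient close to 1 would back up the visual impression of a highly positive correlation.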
###Code #remove the outliers and check the relationship between rating and other factors rate_df = twitter_archive_master[twitter_archive_master['rating_numerator'] < 50] bin_edges = np.arange(0, 15+0.1, 1) g = sns.FacetGrid(data = rate_df, size = 4) g.map(plt.hist, "rating_numerator", bins = bin_edges); ###Output _____no_output_____ ###Markdown `Most tweets got a rating of 10-13.` ###Code fig = plt.figure(figsize = [12,5]) ax1 = fig.add_subplot(121) ax2 = fig.add_subplot(122) sns.regplot(x = "rating_numerator", y = "retweet_count", data = rate_df, fit_reg = False, ax = ax1) sns.regplot(x = "rating_numerator", y = "favorite_count", data = rate_df, fit_reg = False, ax = ax2); ###Output _____no_output_____ ###Markdown `Higher ratings tend to come with more retweets and likes, but the highest-rated tweet is not necessarily the one with the most retweets or likes. There is no strict positive or negative correlation.` The pattern of rating over time ###Code month_rate = rate_df.groupby('month').rating_numerator.mean() month_rate = month_rate.reindex(month_orders) plt.figure(figsize = [8,4]) month_rate.plot( color = 'orange') plt.axhline(y=10.0, color='gray', linestyle='--'); ###Output _____no_output_____ ###Markdown `Surprisingly, the average rating of tweets in November is the lowest, because all the tweets with the lowest rating are in November.` Who got the lower ratings? Why?  ###Code #figure out why the tweets are underrated rate_df[rate_df.rating_numerator == 1].text.values ###Output _____no_output_____ ###Markdown `The lowest-rated tweets got those ratings because they are not related to dogs at all.` What are the top 5 popular breeds of dog? How are their ratings and retweet/favorite counts?
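The `reindex` trick used above (for month and day ordering) forces a categorical axis into calendar order rather than the alphabetical order `groupby` produces. A toy sketch, with hypothetical counts:

```python
import pandas as pd

# groupby output comes back sorted alphabetically...
counts = pd.Series({"April": 5, "February": 9, "January": 12, "March": 7})

# ...reindex rearranges it into calendar order for plotting
month_orders = ["January", "February", "March", "April"]
counts = counts.reindex(month_orders)

print(list(counts.index))  # → ['January', 'February', 'March', 'April']
print(counts.tolist())     # → [12, 9, 7, 5]
```

Labels missing from the data would come back as `NaN` rows after `reindex`, which is also a quick way to spot months with no tweets.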
###Code rate_df.p1.value_counts().head() breed_df = rate_df.query('p1 == ["golden_retriever","Pembroke","Labrador_retriever","Chihuahua","pug"]') breed_df.groupby(['p1','gender']).tweet_id.count() breed_order = ["golden_retriever","Pembroke","Labrador_retriever","Chihuahua","pug"] plt.figure(figsize = [8,4]) ax = sns.barplot(data = breed_df, x = 'p1', y = 'rating_numerator', hue = 'gender', order = breed_order) ax.legend(loc = 8, ncol = 3, framealpha = 1, title = 'gender') plt.xlabel('breed') plt.title('Rating of Different Breeds of Dogs'); breed_order = ["golden_retriever","Pembroke","Labrador_retriever","Chihuahua","pug"] plt.figure(figsize = [8,4]) ax = sns.barplot(data = breed_df, x = 'p1', y = 'favorite_count', hue = 'gender', order = breed_order, ci=None) ax.legend(loc = 8, ncol = 3, framealpha = 1, title = 'gender') plt.xlabel('breed') plt.title('Favorite Count of Different Breeds of Dogs'); breed_order = ["golden_retriever","Pembroke","Labrador_retriever","Chihuahua","pug"] plt.figure(figsize = [8,4]) ax = sns.barplot(data = breed_df, x = 'p1', y = 'retweet_count', hue = 'gender', order = breed_order, ci=None) ax.legend(loc = 8, ncol = 3, framealpha = 1, title = 'gender') plt.xlabel('breed') plt.title('Retweet Count of Different Breeds of Dogs'); ###Output _____no_output_____ ###Markdown Gather CSV and TSV ###Code #Read file.csv twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') twitter_archive #request to download a tsv url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) #save the downloaded TSV to a file with open("image-predictions.tsv", mode = 'wb') as file: file.write(response.content) #Read tsv file as Pandas DataFrame images = pd.read_csv('image-predictions.tsv', sep = '\t') images ###Output _____no_output_____ ###Markdown Twitter API ###Code #Query Twitter data using the consumer key/secret and access token/secret.
import tweepy consumer_key = "HIDDEN" consumer_secret = "HIDDEN" access_token = "HIDDEN" access_secret = "HIDDEN" #Variables auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) # Wait automatically when the rate limit is reached: api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify =True) ###Output _____no_output_____ ###Markdown source: https://stackoverflow.com/questions/47612822/how-to-create-pandas-dataframe-from-twitter-search-apihttps://stackoverflow.com/questions/47925828/how-to-create-a-pandas-dataframe-using-tweepy Writing and Reading Twitter JSON ###Code # Twitter may have deleted some tweets -- creating 2 different lists yes_tweets = [] no_tweets = [] #Tweet_id from twitter_archive column t_a = twitter_archive['tweet_id'] #Saving in two different lists for tweet_id in t_a: try: yes_tweets.append(api.get_status(tweet_id)) #Collect the ids of tweets that could not be retrieved except Exception as e: no_tweets.append(tweet_id) #Verify both lists no_tweets #-----------------------I see that there are numerous tweet ids that could not be retrieved yes_tweets len(yes_tweets) ###Output _____no_output_____ ###Markdown source: https://stackoverflow.com/questions/3768895/how-to-make-a-class-json-serializable source: https://stackoverflow.com/questions/50428033/converting-list-into-dictionary-with-index-as-key ###Code #import json and append each json_tweet to dicts import json dicts = [] for json_tweet in yes_tweets : dicts.append(json_tweet._json) #write dicts to a txt file with open('tweet_json.txt', 'w') as file: file.write(json.dumps(dicts, indent=4)) #Now create a DataFrame from the tweet_json.txt file: another_list = [] with open('tweet_json.txt', encoding='utf-8') as json_file: data = json.load(json_file) for each_dict in data: tweet_id = each_dict['id'] favorite_count =
each_dict['favorite_count'] retweet_count = each_dict['retweet_count'] another_list.append({'tweet_id': str(tweet_id), 'favorite_count': int(favorite_count), 'retweet_count': int(retweet_count), }) #print(my_demo_list) tweet_json = pd.DataFrame(another_list, columns = ['tweet_id', 'favorite_count', 'retweet_count', ]) tweet_json.head() tweet_json.tail() tweet_json.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2341 entries, 0 to 2340 Data columns (total 3 columns): tweet_id 2341 non-null object favorite_count 2341 non-null int64 retweet_count 2341 non-null int64 dtypes: int64(2), object(1) memory usage: 54.9+ KB ###Markdown Assessing 1. Quality (8 issues)- 1 . A non-recoverable Gofundme link is in expanded_urls- 2 . A non-recoverable Vine link is in expanded_urls - 3 . Articles ('a', 'an', 'the', 'such') appear as values in the name column.- 4 . timestamp is an object in the twitter_archive table- 5 . The ["tweet_id"] column is numeric (should be a string/object) in the twitter_archive table- 6 . Missing data encoded as "None" in doggo, floofer, pupper, puppo- 7 . A few non-dog predictions such as bagel, indian elephant, banana, and orange in the image-predictions table. - 8 . Sources are hard to read 2. Tidiness (2 issues)- 1 . Missing data in in_reply_to_status_id and in_reply_to_user_id- 2 . Missing data in retweeted_status_id and retweeted_status_user_id ###Code ### a. Visual assessment in Twitter_archive: twitter_archive twitter_archive.head() twitter_archive.tail() twitter_archive.sample() ### a. Visual assessment in image-prediction: images images.head() images.tail() ### b. Programming assessment in image-prediction: twitter_archive.info() twitter_archive.describe() #I noticed that there are many NaN in these columns. I dropped NaN to make some observations. columns = ['in_reply_to_status_id', 'in_reply_to_user_id'] df_reply = pd.DataFrame(twitter_archive, columns=columns) df_reply.dropna() #will merge them in the future #I noticed that there are many NaN in these columns. I dropped NaN to make some observations.
columns_r = ['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'] df_retweet = pd.DataFrame(twitter_archive, columns=columns_r) df_retweet.dropna() #no single data ###Output _____no_output_____ ###Markdown Clean ###Code #Making copies from the original dataframes ta_clean = twitter_archive.copy() images_clean = images.copy() ###Output _____no_output_____ ###Markdown Replace: 1 & 2.DefineTwitter Archive table:- 1 . Remove the non-recoverable Gofundme link from expanded_urls- 2 . Remove the non-recoverable Vine link from expanded_urls 1 & 2.Code ###Code # Remove non-recoverable Gofundme from expanded_urls ta_clean = ta_clean[ta_clean.expanded_urls != 'https://gofundme.com/ydvmve-surgery-for-jax,ht...'] ###Output _____no_output_____ ###Markdown source : https://stackoverflow.com/questions/17071871/select-rows-from-a-dataframe-based-on-values-in-a-column-in-pandas ###Code #Remove non-recoverable Vine from expanded_urls ta_clean = ta_clean[ta_clean.expanded_urls != '<a href="http://vine.co" rel="nofollow">Vine -...'] ###Output _____no_output_____ ###Markdown 1 & 2.Test ###Code #Check if Gofundme is removed ta_clean[ta_clean['expanded_urls'] == 'https://gofundme.com/ydvmve-surgery-for-jax,ht...'] #Check if Vine is removed ta_clean[ta_clean['expanded_urls'] == '<a href="http://vine.co" rel="nofollow">Vine -...'] ###Output _____no_output_____ ###Markdown 3 .Define- 3 .
Replace articles (such as 'the', 'a', 'an') and the placeholder 'None' with NaN in the name column ###Code ta_clean['name'].tail(25) ###Output _____no_output_____ ###Markdown source : https://stackoverflow.com/questions/23743460/replace-none-with-nan-in-pandas-dataframe 3.Code ###Code #Replace "a", "an", "None", "quite", "the", and "such" with NaN in one call ta_clean['name'].replace(['None', 'a', 'an', 'quite', 'the', 'such'], np.nan, inplace=True) ###Output _____no_output_____ ###Markdown 3 .Test ###Code #Verify that NaN replaced them ta_clean['name'].tail() ###Output _____no_output_____ ###Markdown Conversion 4. Define Convert the "tweet_id" column from int64 to object. source: https://stackoverflow.com/questions/15723628/pandas-make-a-column-dtype-object-or-factor 4. Code ###Code #Convert tweet_id into object ta_clean['tweet_id'] = ta_clean['tweet_id'].astype(object) ###Output _____no_output_____ ###Markdown 4 Test ###Code #Check if tweet_id is converted ta_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null object in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 1536 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(2), object(11) memory usage: 331.3+ KB ###Markdown Conversion 5. Define- 5 .
Timestamp should be converted from object to datetime. 5. Code source : https://stackoverflow.com/questions/17134716/convert-dataframe-column-type-from-string-to-datetime ###Code #Convert object to datetime ta_clean['timestamp'] = pd.to_datetime(ta_clean['timestamp']) ###Output _____no_output_____ ###Markdown 5. Test ###Code ta_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null object in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null datetime64[ns] source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 1536 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: datetime64[ns](1), float64(4), int64(2), object(10) memory usage: 331.3+ KB ###Markdown 6. Define- 6 . Make one column for doggo, floofer, pupper, and puppo 6.Code source: https://stackoverflow.com/questions/39291499/how-to-concatenate-multiple-column-values-into-a-single-column-in-panda-datafram ###Code #Replace all the None to blank ta_clean.doggo = ta_clean.doggo.replace('None',' ') ta_clean.floofer = ta_clean.floofer.replace('None',' ') ta_clean.pupper = ta_clean.pupper.replace('None',' ') ta_clean.puppo = ta_clean.puppo.replace('None',' ') #Merge all four columns to one column 'dog'. 
ta_clean['dog']= ta_clean['doggo'].astype('object')+' '+ta_clean['floofer'].astype('object')+' '+ta_clean['pupper']+' '+ta_clean['puppo'].astype('object') #Drop doggo, floofer, pupper, and puppo ta_clean.drop(['doggo'], axis =1, inplace = True) ta_clean.drop(['floofer'], axis =1, inplace = True) ta_clean.drop(['pupper'], axis =1, inplace = True) ta_clean.drop(['puppo'], axis =1, inplace = True) ###Output _____no_output_____ ###Markdown 6. Test ###Code #Check out the new column ta_clean['dog'] ###Output _____no_output_____ ###Markdown 7. Define - Remove non- recoverable data such as bagel, indian elephant, banana, and orange in the p1, p2, p3 from images- prediction table 7. Code ###Code #Check how many False in p1_dog images.p1_dog.value_counts() ###Output _____no_output_____ ###Markdown source: https://stackoverflow.com/questions/45237083/pandas-dataframe-remove-row-satisfying-certain-conditionsource: https://stackoverflow.com/questions/45314719/pandas-remove-row-based-on-applying-function ###Code #Remove all the the rows that consist of 'False' in p1_dog images_clean = images_clean[images_clean['p1_dog'].apply(bool) != False] #Check how many False in p2_dog images_clean.p2_dog.value_counts() #Remove all the the rows that consist of 'False' in p2_dog images_clean = images_clean[images_clean['p2_dog'].apply(bool) != False] #Check how many False in p3_dog images_clean.p3_dog.value_counts() #Remove all the the rows that consist of 'False' in p3_dog images_clean = images_clean[images_clean['p3_dog'].apply(bool) != False] ###Output _____no_output_____ ###Markdown 7. Test ###Code #Check p1, p2, and p3 -- all the name of dog's breeds images_clean.groupby(['p1', 'p2', 'p3'], as_index=False).sum() ###Output _____no_output_____ ###Markdown 8. Define- Simplify the sources to make it readable 8. 
Code source: https://blog.mariusschulz.com/2014/06/03/why-using-in-regular-expressions-is-almost-never-what-you-actually-wantsource: https://sites.ualberta.ca/~kirchner/513/OpenOffice%20regular%20expression%20list.pdf ###Code ta_clean['source'] = ta_clean['source'].str.extract('>(.*)<', expand=True) ###Output _____no_output_____ ###Markdown 8 . Test ###Code ta_clean['source'] ###Output _____no_output_____ ###Markdown Tidiness 1. Define - 1 . Remove columns retweeted_status_id and retweeted_status_user_id 1. Code ###Code #Remove a column 'retweeted_status_id' ta_clean.drop('retweeted_status_id', axis = 1, inplace = True) #Remove one column retweeted_to_user_id ta_clean.drop('retweeted_status_user_id', axis = 1, inplace = True) ###Output _____no_output_____ ###Markdown 1. Test ###Code ta_clean.head(1) ###Output _____no_output_____ ###Markdown 2. Code Missing Data & Tidiness 2. Define- 1 . Combine two dataframes in one ###Code #Combine both dataframes ta_clean = pd.merge(ta_clean, images_clean, on=('tweet_id')) ###Output _____no_output_____ ###Markdown 2 . Test ###Code ta_clean.head(1) #Check combined datasets ###Output _____no_output_____ ###Markdown Storing source: https://stackoverflow.com/questions/45579489/how-to-write-utf-8-to-a-new-csv-file-using-python3-with-anaconda ###Code # Save the DataFrame to a file called 'twitter_archive_master.csv' ta_clean.to_csv('twitter_archive_master.csv', index=False, encoding = 'utf-8') ###Output _____no_output_____ ###Markdown Insight (Please read my act_report.pdf for more details about my insights)- A- Analyze The 3 most popular and 3 least popular breeds.- B- Check out the 3 most popular and 3 least popular names.- C- Examine what are the most popular slang describing a dog's growth stage. 
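The inner `pd.merge` used above keeps only tweet_ids present in both tables, which is why the row count shrinks after combining datasets. A toy sketch of that behaviour (hypothetical ids and values):

```python
import pandas as pd

archive = pd.DataFrame({"tweet_id": [1, 2, 3, 4], "rating": [12, 13, 10, 11]})
images = pd.DataFrame({"tweet_id": [2, 3, 5], "p1": ["pug", "corgi", "husky"]})

# how='inner' is the default: only ids 2 and 3 appear in both tables,
# so ids 1, 4 (no image) and 5 (no archive row) are dropped
combined = pd.merge(archive, images, on="tweet_id")

print(combined.tweet_id.tolist())  # → [2, 3]
print(len(combined))               # → 2
```

The same logic explains why the merged master table ends up smaller than any of its three sources.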
Code A The most popular and least popular breeds: ###Code # Calling back the clean version dataframes ta_clean # 3 most popular dogs ta_clean['p1'].value_counts().head(3) # p1.min() returns the alphabetically first breed name ta_clean['p1'].min() # These are the least frequently occurring breeds ta_clean['p1'].value_counts(ascending = True) ###Output _____no_output_____ ###Markdown source: https://matplotlib.org/tutorials/introductory/pyplot.html ###Code #Create new variables for the future chart #Most popular dogs golden_retriever = ta_clean[ta_clean['p1'] == 'golden_retriever'] Pembroke= ta_clean[ta_clean['p1'] == 'Pembroke'] Labrador_retriever = ta_clean[ta_clean['p1'] == 'Labrador_retriever'] #Objects pop_dogs = ('golden_retriever', 'Pembroke', 'Labrador_retriever') num_tweets = [126, 78, 77] # Make a chart import matplotlib.pyplot as plt %matplotlib inline #names = ['group_a', 'group_b', 'group_c'] #values = [1, 10, 100] plt.figure(1, figsize=(9, 3)) plt.subplot(131) plt.bar(pop_dogs, num_tweets) plt.subplot(132) plt.scatter(pop_dogs, num_tweets) plt.subplot(133) plt.plot(pop_dogs, num_tweets) plt.suptitle('Most Popular according to Twitter') plt.show() ###Output _____no_output_____ ###Markdown B Most popular and least popular dog names ###Code #Make a copy t_arc = ta_clean.copy() t_arc.head(2) #Find the most popular dog name ta_clean['name'].mode() #Alphabetically first dog names ta_clean['name'].sort_values().head(3) #Find the 3 most popular dog names ta_clean['name'].value_counts().head(3) ###Output _____no_output_____ ###Markdown C Most popular slang used to describe a dog's stage. ###Code #Most popular slang for a dog's stage ta_clean['dog'].value_counts() ###Output _____no_output_____ ###Markdown Wrangle and Analyse data : Project Topic : Dog rates Data Analysis Project Outline : Access data from various sources, including Tweepy and the Twitter API; assess and visualise the data. Questions Posed Prior to Data Analysis : * In which language are most of the tweets written?
* What are the most common names people give their dogs? * Considering the accumulated data, which dog type has the highest presence in the data set? (Doggo/Floofer/Pupper/Puppo) * Given the complete data set, what is the avg. ratio of favorite counts to retweet counts? * Visualise and assess the difference in values of these two columns. **Importing Necessary Libraries and Analysing Data** ###Code from wordcloud import WordCloud, STOPWORDS import matplotlib.pyplot as plt import pandas as pd import numpy as np %matplotlib inline ###Output _____no_output_____ ###Markdown Gathering Data 1. Twitter Archived Enhanced CSV ###Code df = pd.read_csv('/content/drive/MyDrive/Utkarsh doc/twitter-archive-enhanced.csv') ###Output _____no_output_____ ###Markdown 2. Image Predictions.tsv ###Code import requests url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) # Save the downloaded TSV to a file with open("image_predictions.tsv", mode='wb') as file: file.write(response.content) dfimage = pd.read_csv('/content/image_predictions.tsv', sep='\t') ###Output _____no_output_____ ###Markdown 3. Twitter API Data Data was collected from a provided JSON file, as the documents necessary for a Twitter developer account couldn't be obtained. ###Code dftweet = pd.read_json('/content/drive/MyDrive/Utkarsh doc/tweet.json', lines=True) ###Output _____no_output_____ ###Markdown Note - The code to fetch Twitter data using the API has been commented out below for the above reason.
###Code """import tweepy from tweepy import OAuthHandler import json from timeit import default_timer as timer consumer_key = 'HIDDEN' consumer_secret = 'HIDDEN' access_token = 'HIDDEN' access_secret = 'HIDDEN' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) tweet_ids = df_1.tweet_id.values len(tweet_ids) count = 0 fails_dict = {} start = timer() with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict)""" ###Output _____no_output_____ ###Markdown Assesing The Data ###Code # Define - Twitter Enhanced Data df.head(5) df.info() df.dtypes df.describe() # Define - Image Prediction Data dfimage.head(5) dfimage.info() dfimage.dtypes dfimage.describe() # Define - Twitter Api Data dftweet.head(5) dftweet.info() dftweet.dtypes dftweet.describe() ###Output _____no_output_____ ###Markdown Cleaning the Data 1. Defining copy dataframes before cleaning to retain original values ###Code #Define df_unclean = df.copy() dftweet_unclean = dftweet.copy() dfimage_unclean = dfimage.copy() ###Output _____no_output_____ ###Markdown 2. Defining irrelevance of certain columns in our first dataframe: * Many columns in our first dataframe 'df', consist values that would not affect our basis of analysis in future so we drop them off in the first place. 3. Code for Cleaning irrelevant columns from this df. 
###Code #Code #DF Cleaning #dropping all irrelevant columns from df, 'twitter_enhanced_archive.csv' df.drop(['in_reply_to_status_id','source', 'in_reply_to_user_id','retweeted_status_user_id','retweeted_status_timestamp','expanded_urls'], axis = 1, inplace = True) #merging and creating new column 'dogtype', from below merged four pre defined columns df['dogtype']= df['doggo'] + df['floofer'] + df['pupper'] + df['puppo'] ###Output _____no_output_____ ###Markdown 4. Testing new made column for new values. ###Code #counting rows where no stage was set in any of the four source columns count = 0 for i,r in enumerate(df.dogtype): if r == "NoneNoneNoneNone": count = count+1 print(count) ###Output 1976 ###Markdown 5. Coding for altering values for redundancy reduction in new column. ###Code #mark rows with no stage as "undefined", then strip the leftover "None" #tokens; .loc avoids the chained-assignment pitfall of df.dogtype[i] = ... df.loc[df.dogtype == "NoneNoneNoneNone", 'dogtype'] = "undefined" df.dogtype = df.dogtype.str.replace('None', '') #dropping pre defined columns after merging into new one. df.drop(['doggo','floofer','pupper','puppo'], axis = 1, inplace = True) ###Output _____no_output_____ ###Markdown 6. Testing dataframe after alterations done to 'df' ###Code # checking newly engineered df. df.head(1) ###Output _____no_output_____ ###Markdown 7. Defining problem for rating_numerator column: values are inconsistent and many are missing their decimal part; extracting the rating from the 'text' column instead gives much more accurate values. 8. Coding for altering values and engineering clean values for above column. ###Code # extract clean ratings from the tweet text; the first capture group is the numerator ratings = df.text.str.extract('((?:\d+\.)?\d+)\/(\d+)', expand=True) df.rating_numerator = ratings[0].astype('float64') #dropping 'text' column after using rating values from it. df.drop(['text'], axis = 1, inplace = True) ###Output _____no_output_____ ###Markdown 9. Testing values of the new column.
###Code df.rating_numerator.mean() ###Output _____no_output_____ ###Markdown 10. Defining problem with the 'retweeted_status_id' column: most of its values are missing, and missing values in this column would bias any analysis based on retweet status. 11. Coding for inspecting Null values in above said column. ###Code #NOTE: an elementwise comparison to None is always True in pandas, so this #line keeps every row; missing values are checked properly with .isna() below df = df[df['retweeted_status_id'] != None] ###Output _____no_output_____ ###Markdown 12. Testing values after above alterations. ###Code df.isna() ###Output _____no_output_____ ###Markdown As it turns out, a huge number of rows contain NaN in this column and only a few (10) rows have values in them, so it'd be better to drop this column as well. ###Code df.drop(['retweeted_status_id'], axis =1 , inplace = True) ###Output _____no_output_____ ###Markdown 13. Defining problem with the id column name in dftweet (the dataframe containing data from Tweepy): renaming 'id' to 'tweet_id' eases operations across the other dataframes. 14. Code addressing above defined issue. ###Code #DF TWEET Cleaning #renaming id to tweet_id for better accessibility across other dataframes dftweet.rename(columns = {'id':'tweet_id'}, inplace = True) ###Output _____no_output_____ ###Markdown 15. Testing for above made changes.
###Code dftweet.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2354 entries, 0 to 2353 Data columns (total 31 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 created_at 2354 non-null datetime64[ns, UTC] 1 tweet_id 2354 non-null int64 2 id_str 2354 non-null int64 3 full_text 2354 non-null object 4 truncated 2354 non-null bool 5 display_text_range 2354 non-null object 6 entities 2354 non-null object 7 extended_entities 2073 non-null object 8 source 2354 non-null object 9 in_reply_to_status_id 78 non-null float64 10 in_reply_to_status_id_str 78 non-null float64 11 in_reply_to_user_id 78 non-null float64 12 in_reply_to_user_id_str 78 non-null float64 13 in_reply_to_screen_name 78 non-null object 14 user 2354 non-null object 15 geo 0 non-null float64 16 coordinates 0 non-null float64 17 place 1 non-null object 18 contributors 0 non-null float64 19 is_quote_status 2354 non-null bool 20 retweet_count 2354 non-null int64 21 favorite_count 2354 non-null int64 22 favorited 2354 non-null bool 23 retweeted 2354 non-null bool 24 possibly_sensitive 2211 non-null float64 25 possibly_sensitive_appealable 2211 non-null float64 26 lang 2354 non-null object 27 retweeted_status 179 non-null object 28 quoted_status_id 29 non-null float64 29 quoted_status_id_str 29 non-null float64 30 quoted_status 28 non-null object dtypes: bool(4), datetime64[ns, UTC](1), float64(11), int64(4), object(11) memory usage: 505.9+ KB ###Markdown 16. Defining problem regarding redundant columns in data frame containing values fetched from Tweepy API. Dropping All those columns from Dataframe in steps followed. 17. Code for dropping redundant and irrelevant columns from dftweet . 
###Code #dropping all irrelevant columns from tweepy dataframe dftweet.drop(['id_str','full_text','truncated','display_text_range','place','contributors','entities','extended_entities','source','in_reply_to_status_id','in_reply_to_status_id_str','in_reply_to_user_id','in_reply_to_user_id_str','in_reply_to_screen_name','user','geo','coordinates','is_quote_status','favorited','retweeted','possibly_sensitive_appealable','retweeted_status','quoted_status_id','quoted_status_id_str','quoted_status'], axis=1 , inplace=True) ###Output _____no_output_____ ###Markdown 18. Testing changes made above. ###Code dftweet.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2354 entries, 0 to 2353 Data columns (total 6 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 created_at 2354 non-null datetime64[ns, UTC] 1 tweet_id 2354 non-null int64 2 retweet_count 2354 non-null int64 3 favorite_count 2354 non-null int64 4 possibly_sensitive 2211 non-null float64 5 lang 2354 non-null object dtypes: datetime64[ns, UTC](1), float64(1), int64(3), object(1) memory usage: 110.5+ KB ###Markdown 19. Defining problem regarding alphabetic case issue with values in 'prediction' column of dfimage (dataframe containing values from image prediction tsv file), inconsitent data in three columns 'p1', 'p2', 'p3' found and fixed below. 20. Code for performing changes for resolving above said problem. ###Code # lower case coversion for predictions column in image prediction dataframe. dfimage["p1"] = dfimage["p1"].str.lower() dfimage["p2"] = dfimage["p2"].str.lower() dfimage["p3"] = dfimage["p3"].str.lower() ###Output _____no_output_____ ###Markdown 21. Testing df for above made changes. 
###Code count = 0 for i,r in enumerate(dfimage.p1): if r.islower(): continue else: count = count+1 print(count) count = 0 for i,r in enumerate(dfimage.p2): if r.islower(): continue else: count = count+1 print(count) count = 0 for i,r in enumerate(dfimage.p3): if r.islower(): continue else: count = count+1 print(count) ###Output 0 ###Markdown 22. Defining problem with the datatype of the timestamp column and changing it to a datetime object. 23. Code for altering the datatype of the timestamp column. ###Code #changing datatype of 'timestamp' into a datetime object df.timestamp = pd.to_datetime(df.timestamp) ###Output _____no_output_____ ###Markdown 24. Testing results after the previous alteration. ###Code df.dtypes ###Output _____no_output_____ ###Markdown 25. Defining problem with data scattered across three different dataframes. Merging it all into one will ease future operations and analysis; merging on 'tweet_id' as the primary key is an appropriate approach. 26. Code for merging all data into one dataframe. ###Code #merging all cleaned data into one new dataframe for better analysis and visualization. newdf = pd.merge(df, dfimage, on="tweet_id") merged_clean_data = pd.merge(newdf, dftweet, on="tweet_id") ###Output _____no_output_____ ###Markdown 27. Testing the newly engineered clean dataframe 'merged_clean_data'. ###Code #checking for null values in new engineered df. merged_clean_data.isnull().sum() ###Output _____no_output_____ ###Markdown Storing Data ###Code df.to_csv('tweet_enhanced_cleaned.csv', index = False) dftweet.to_csv('tweetapi_cleaned.csv', index = False) dfimage.to_csv('image_prediction_cleaned.csv', index = False) merged_clean_data.to_csv('twitter_archive_master.csv', index = False) ###Output _____no_output_____ ###Markdown Cleaned and merged data has been stored here as csv.
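The two `pd.merge` calls in step 26 default to an inner join, so any tweet_id missing from either frame is silently dropped from `merged_clean_data`. A minimal sketch with hypothetical toy frames (not the project data) showing this behaviour, plus the `validate` check that guards against accidental duplicate keys:

```python
import pandas as pd

# Toy stand-in frames (hypothetical values, not the project data).
left = pd.DataFrame({"tweet_id": [1, 2, 3], "rating_numerator": [12, 13, 11]})
right = pd.DataFrame({"tweet_id": [1, 3], "p1": ["pug", "samoyed"]})

# pd.merge defaults to how="inner": tweet_id 2 has no image prediction,
# so it is dropped from the merged result. validate="one_to_one" raises
# if either frame had duplicate tweet_ids.
merged = pd.merge(left, right, on="tweet_id", validate="one_to_one")
print(merged["tweet_id"].tolist())  # [1, 3]
```

Passing `how="left"` instead would keep tweet_id 2 with NaN prediction columns, which makes the row loss explicit before deciding to drop it.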
###Code newdf = merged_clean_data ###Output _____no_output_____ ###Markdown Analysing The Data ###Code stopwords = set(STOPWORDS) wordcloud = WordCloud(width = 800, height = 800, background_color ='white', stopwords = stopwords, min_font_size = 10).generate(' '.join(newdf['name'])) # plot the WordCloud image plt.figure(figsize = (10, 6), facecolor = None) plt.imshow(wordcloud) plt.axis("off") plt.tight_layout(pad = 0) plt.show() ###Output _____no_output_____ ###Markdown Here we can see that a large share of our dog names are 'None', so we drop that value from the word cloud, giving us a better view of the actual names. ###Code stopwords = set(STOPWORDS) wordcloud = WordCloud(width = 800, height = 800, background_color ='white', stopwords = ['None'], min_font_size = 10).generate(' '.join(newdf['name'])) # plot the WordCloud image plt.figure(figsize = (10, 6), facecolor = None) plt.imshow(wordcloud) plt.axis("off") plt.tight_layout(pad = 0) plt.show() ###Output _____no_output_____ ###Markdown I see Charlie, Oliver, Cooper, Penny and Lucy are a few of the most common names given in our dataframe (all other, less frequent names are also present in the word cloud). ###Code plt.figure(figsize = (8,4), facecolor = None) newdf['lang'].hist() ###Output _____no_output_____ ###Markdown From the histogram above we can clearly infer that the majority of our tweets (close to 99%) are in 'en', which stands for the English language. ###Code newdf.dogtype.value_counts().plot.bar() ###Output _____no_output_____ ###Markdown It is clear that a major share of our dogs do not have any defined dog type, so performing calculations on this factor could easily mislead our analysis of the dataframe.
Hence I have created a separate df of dog types with legit values : ###Code a = newdf.loc[newdf['dogtype'] !="undefined"] a.dogtype.value_counts().plot.bar() ###Output _____no_output_____ ###Markdown * Most dogs with a defined type are 'pupper'. * Followed by 'doggo' and 'puppo'. ###Code rmean = newdf['retweet_count'].mean() fmean = newdf['favorite_count'].mean() print (rmean,fmean,fmean/rmean) ###Output 2976.0892426435116 8556.718282682103 2.875155139864555 ###Markdown **Mean and ratio of Favorite Count to Retweet Count:** * Mean Retweet Count = 2976.09 (approx) * Mean Favorite Count = 8556.72 (approx) **Ratio of Mean Favorite Count to Retweet Count** = **2.87** (approx). This indicates a user tendency to add tweets to their **Favorites** more than **Retweeting** a tweet. ###Code plt.figure(figsize = (12,8), facecolor= None) plt.plot(newdf['retweet_count'],alpha=0.7) plt.plot(newdf['favorite_count'],alpha=0.3) plt.xlabel("Retweet Count and Favorite Count") plt.ylabel("Frequency") plt.title ("Retweet Count vs Favorite Count") plt.legend(['Retweet Count', 'Favorite Count']) ###Output _____no_output_____ ###Markdown **Inferring from the line graph above:** We can clearly see the trends followed by these two entities: * People tend more to add a tweet to their favorites than to retweet it. * Mean Favorite Count > Mean Retweet Count. * Max Favorite Count > Max Retweet Count. ###Code newdf.groupby(['dogtype']).rating_numerator.mean().plot.bar() ###Output _____no_output_____ ###Markdown **From the bar plot above:** We can see that, despite being significantly fewer in number, dogs of the 'doggopuppo' category have the highest mean rating overall, followed by 'undefined', 'doggo' and 'floofer', consecutively. Conclusion: Data Quality issues found during Analysis : ###Code """ 1 Data provided by the Tweepy API had redundant data, i.e. most of its columns have None or NaN values stored and needed to be cleaned before analysis.
2 Data provided regarding DogType is inadequate, as most tweets do not have a type mentioned and instead have 'None' values filled in. 3 Unnecessary/irrelevant data provided in the form of various columns, such as ['id_str','full_text','truncated','display_text_range','place','contributors','entities','extended_entities'...... etc.], which had to be dropped. 4 English being the major language of almost all the tweets provided (99%), the lang column could not be used for analysis purposes. 5 The Possible Sensitivity column has '0.0' as the common value for every row throughout the data frame, defeating its relevance in the dataframe. 6 The Rating_Numerator column in the data frame has vague values throughout, which in turn had to be re-engineered from the text column. 7 The ID column provided in tweet_json has a column name different from the other two data frames, making alterations compulsory for smooth accessibility throughout the data frames. 8 The data type of the Date column is string, where it should have been a date and time object. """ ###Output _____no_output_____ ###Markdown Tidiness issues found during Analysis : ###Code """ 1 Rows with NULL values existed in all three datasets provided, making them untidy. 2 Data scattering and unorganized data found across all dataframes. """ ###Output _____no_output_____ ###Markdown Analysis Conclusion Drawn ###Code """ 1 Charlie, Oliver, Cooper, Penny and Lucy are a few of the most common names given to dogs in our dataframe. 2 We can clearly infer that the majority of our tweets (close to 99%) are in 'en', which stands for the English language. 3 Most dogs don't have a determined type; among dogs with a defined type, the most common is 'pupper'.
Followed by 'doggo' and 'puppo'. 4 Finally, after analyzing the last two required columns, Favorite Count and Retweet Count, we clearly find the following observations: 5 Favorite Count has the higher maximum value, close to 15000 (approx), while Retweet Count maxes out at 7800 (approx). 6 People tend more to add a tweet to their favorites than to retweet it. 7 Mean Favorite Count > Mean Retweet Count ('**8556.718282682103**' and '**2976.0892426435116**' respectively). 8 Ratio shared between these two entities (Favorite Count : Retweet Count) = 2.875155139864555 : 1. """ ###Output _____no_output_____ ###Markdown Gather Get the Image Data ###Code r = requests.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv') with open('image-predictions.tsv', 'wb') as handle: for block in r.iter_content(1024): handle.write(block) df_image = pd.read_csv("image-predictions.tsv", sep='\t') df_image.head(2) ###Output _____no_output_____ ###Markdown Get the tweets Data ###Code df_tweets = pd.read_csv("twitter-archive-enhanced.csv") df_tweets.info() df_tweets.sample(20) ###Output _____no_output_____ ###Markdown Get the Tweet JSON data. Get all the IDs from the tweets and fetch the corresponding statuses; write them to tweet_json.txt. I have commented out this part of the code, since the consumer key and consumer secret are not being submitted, so this part of the code cannot be run and is only shown for documentation purposes.
###Code #consumer_key = '' #consumer_secret = '' #access_token = '' #access_secret = '' #auth = tweepy.OAuthHandler(consumer_key, consumer_secret) #auth.set_access_token(access_token, access_secret) ##rate-limit handling belongs on the API constructor, not on get_status #api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) #data=[] #ntweets = 1 #f = open('tweet_json.txt', 'w') #for id_of_tweet in df_tweets['tweet_id']: ##for id_of_tweet in df_tweets.ix[0:10,'tweet_id']: # print(ntweets) # ntweets = ntweets + 1 # try : # tweet = api.get_status(id_of_tweet,tweet_mode='extended') # except : # print("************ No status found for ID : ",id_of_tweet) # continue ## try : ## f.write(tweet.full_text+'\n') ## except : ## tweet = twitter.show_status(id=id_of_tweet) ## print("Unicode Error ",tweet.full_text) # data.append(tweet._json) #json.dump(data,f,sort_keys = True,indent = 4) #f.close() ###Output _____no_output_____ ###Markdown Assess Assess the json data ###Code df_json = pd.read_json("tweet_json.txt") df_json.columns df_json.info() df_json.sample(10) df_json.duplicated(subset='id').value_counts() #The following Id's are duplicated df_json[df_json.duplicated(subset='id')]['id'] ###Output _____no_output_____ ###Markdown Assess the tweets Data ###Code df_tweets.info() pd.set_option('display.max_colwidth', -1) # Are there duplicated Twitter id's? cols = df_tweets.columns df_tweets.duplicated(subset=cols[0]).value_counts() ###Output _____no_output_____ ###Markdown None of the tweet_Ids are duplicated. Investigate the ratings and the tweet texts ###Code rating_num=df_tweets['rating_numerator'] rating_denom=df_tweets['rating_denominator'] rating_denom.unique() ###Output _____no_output_____ ###Markdown Not all the rating denominators are 10. Some are much above 10 and some are below 10 as well. Look into these further. Let's print the texts.
###Code i=0 for txt in df_tweets['text']: n = re.findall("\d+/\d+",txt) if ((rating_denom[i]>10) | len(n) > 1): print(txt,rating_num[i],rating_denom[i], n) i=i+1 ###Output @roushfenway These are good dogs but 17/10 is an emotional impulse rating. More like 13/10s 17 10 ['17/10', '13/10'] @jonnysun @Lin_Manuel ok jomny I know you're excited but 960/00 isn't a valid rating, 13/10 is tho 960 0 ['960/00', '13/10'] RT @dog_rates: "Yep... just as I suspected. You're not flossing." 12/10 and 11/10 for the pup not flossing https://t.co/SuXcI9B7pQ 12 10 ['12/10', '11/10'] "Yep... just as I suspected. You're not flossing." 12/10 and 11/10 for the pup not flossing https://t.co/SuXcI9B7pQ 12 10 ['12/10', '11/10'] RT @dog_rates: After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https:/… 9 11 ['9/11', '14/10'] RT @dog_rates: Meet Eve. She's a raging alcoholic 8/10 (would b 11/10 but pupper alcoholism is a tragic issue that I can't condone) https:/… 8 10 ['8/10', '11/10'] This is Bookstore and Seaweed. Bookstore is tired and Seaweed is an asshole. 10/10 and 7/10 respectively https://t.co/eUGjGjjFVJ 10 10 ['10/10', '7/10'] After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https://t.co/XAVDNDaVgQ 9 11 ['9/11', '14/10'] Happy 4/20 from the squad! 13/10 for all https://t.co/eV1diwds8a 4 20 ['4/20', '13/10'] This is Bluebert. He just saw that both #FinalFur match ups are split 50/50. Amazed af. 11/10 https://t.co/Kky1DPG4iq 50 50 ['50/50', '11/10'] Meet Travis and Flurp. Travis is pretty chill but Flurp can't lie down properly. 10/10 &amp; 8/10 get it together Flurp https://t.co/Akzl5ynMmE 10 10 ['10/10', '8/10'] This is Socks. That water pup w the super legs just splashed him. Socks did not appreciate that. 9/10 and 2/10 https://t.co/8rc5I22bBf 9 10 ['9/10', '2/10'] This may be the greatest video I've ever been sent. 
4/10 for Charles the puppy, 13/10 overall. (Vid by @stevenxx_) https://t.co/uaJmNgXR2P 4 10 ['4/10', '13/10'] Meet Oliviér. He takes killer selfies. Has a dog of his own. It leaps at random &amp; can't bark for shit. 10/10 &amp; 5/10 https://t.co/6NgsQJuSBJ 10 10 ['10/10', '5/10'] When bae says they can't go out but you see them with someone else that same night. 5/10 &amp; 10/10 for heartbroken pup https://t.co/aenk0KpoWM 5 10 ['5/10', '10/10'] This is Eriq. His friend just reminded him of last year's super bowl. Not cool friend 10/10 for Eriq 6/10 for friend https://t.co/PlEXTofdpf 10 10 ['10/10', '6/10'] Meet Fynn &amp; Taco. Fynn is an all-powerful leaf lord and Taco is in the wrong place at the wrong time. 11/10 &amp; 10/10 https://t.co/MuqHPvtL8c 11 10 ['11/10', '10/10'] This is Darrel. He just robbed a 7/11 and is in a high speed police chase. Was just spotted by the helicopter 10/10 https://t.co/7EsP8LmSp5 7 11 ['7/11', '10/10'] Meet Tassy &amp; Bee. Tassy is pretty chill, but Bee is convinced the Ruffles are haunted. 10/10 &amp; 11/10 respectively https://t.co/fgORpmTN9C 10 10 ['10/10', '11/10'] These two pups just met and have instantly bonded. Spectacular scene. Mesmerizing af. 10/10 and 7/10 for blue dog https://t.co/gwryaJO4tC 10 10 ['10/10', '7/10'] Meet Rufio. He is unaware of the pink legless pupper wrapped around him. Might want to get that checked 10/10 &amp; 4/10 https://t.co/KNfLnYPmYh 10 10 ['10/10', '4/10'] Two gorgeous dogs here. Little waddling dog is a rebel. Refuses to look at camera. Must be a preteen. 5/10 &amp; 8/10 https://t.co/YPfw7oahbD 5 10 ['5/10', '8/10'] Meet Eve. She's a raging alcoholic 8/10 (would b 11/10 but pupper alcoholism is a tragic issue that I can't condone) https://t.co/U36HYQIijg 8 10 ['8/10', '11/10'] 10/10 for dog. 7/10 for cat. 12/10 for human. Much skill. Would pet all https://t.co/uhx5gfpx5k 10 10 ['10/10', '7/10', '12/10'] Meet Holly. 
She's trying to teach small human-like pup about blocks but he's not paying attention smh. 11/10 &amp; 8/10 https://t.co/RcksaUrGNu 11 10 ['11/10', '8/10'] Meet Hank and Sully. Hank is very proud of the pumpkin they found and Sully doesn't give a shit. 11/10 and 8/10 https://t.co/cwoP1ftbrj 11 10 ['11/10', '8/10'] Here we have Pancho and Peaches. Pancho is a Condoleezza Gryffindor, and Peaches is just an asshole. 10/10 &amp; 7/10 https://t.co/Lh1BsJrWPp 10 10 ['10/10', '7/10'] This is Spark. He's nervous. Other dog hasn't moved in a while. Won't come when called. Doesn't fetch well 8/10&amp;1/10 https://t.co/stEodX9Aba 8 10 ['8/10', '1/10'] This is Kial. Kial is either wearing a cape, which would be rad, or flashing us, which would be rude. 10/10 or 4/10 https://t.co/8zcwIoiuqR 10 10 ['10/10', '4/10'] Two dogs in this one. Both are rare Jujitsu Pythagoreans. One slightly whiter than other. Long legs. 7/10 and 8/10 https://t.co/ITxxcc4v9y 7 10 ['7/10', '8/10'] After much debate this dog is being upgraded to 10/10. I repeat 10/10 10 10 ['10/10', '10/10'] These are Peruvian Feldspars. Their names are Cupit and Prencer. Both resemble Rand Paul. Sick outfits 10/10 &amp; 10/10 https://t.co/ZnEMHBsAs1 10 10 ['10/10', '10/10'] This is an Albanian 3 1/2 legged Episcopalian. Loves well-polished hardwood flooring. Penis on the collar. 9/10 https://t.co/d9NcXFKwLv 1 2 ['1/2', '9/10'] ###Markdown Found that some of the texts have double ratings. Visually looked at the texts and found that the double ratings occur for various reasons: a) Sometimes there are two dogs. b) The rating was updated. c) Some are just unique, and one can see that the ratings are wrongly extracted for these. For example : RT @dog_rates: After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https:/… 9 11 ['9/11', '14/10'] This is an Albanian 3 1/2 legged Episcopalian. Loves well-polished hardwood flooring. Penis on the collar.
9/10 https://t.co/d9NcXFKwLv 1 2 ['1/2', '9/10'] d) I looked visually in excel for denominators which are >10, and found from the tweet, that its because there are multiple dogs in the picture. These dogs can only be rated based on their category such as puppers, fluffo and so on. Extract my own ratings ###Code i=0 rating_num = np.int8(rating_num) rating_denom = np.int8(rating_denom) for txt in df_tweets['text']: ratings = re.findall("[0-9.]+\/\d+", txt) #Use the last value as the correct rating ratings_correct = ratings[len(ratings)-1] nums = np.array(ratings_correct.split("/")) #round and then convert to integer try: nums = np.int8(np.rint(np.float16(nums))) # if ((rating_num[i]!=nums[0]) | (rating_denom[i]!=nums[1])): # print(i,txt,rating_num[i],rating_denom[i],nums[0],nums[1]) except: print(i,"Problematic", txt,rating_num[i],rating_denom[i],nums[0],nums[1]) i=i+1 ###Output 890 Problematic RT @dog_rates: This... is a Tyrannosaurus rex. We only rate dogs. Please only send in dogs. Thank you ...10/10 https://t.co/zxw8d5g94P 10 10 ...10 10 988 Problematic What jokester sent in a pic without a dog in it? This is not @rock_rates. This is @dog_rates. Thank you ...10/10 https://t.co/nDPaYHrtNX 10 10 ...10 10 1008 Problematic Again w the sharks guys. This week is about dogs ACTING or DRESSING like sharks. NOT actual sharks. Thank u ...11/10 https://t.co/Ie2mWXWjpr 11 10 ...11 10 1009 Problematic Guys pls stop sending actual sharks. It's too dangerous for me and the people taking the photos. Thank you ...10/10 https://t.co/12lICZN2SP 10 10 ...10 10 1015 Problematic Guys... I said DOGS with "shark qualities" or "costumes." Not actual sharks. This did me a real frighten ...11/10 https://t.co/DX1JUHJVN7 11 10 ...11 10 1017 Problematic This is a carrot. We only rate dogs. Please only send in dogs. You all really should know this by now ...11/10 https://t.co/9e48aPrBm2 11 10 ...11 10 1025 Problematic This is an Iraqi Speed Kangaroo. It is not a dog. Please only send in dogs. 
I'm very angry with all of you ...9/10 https://t.co/5qpBTTpgUt 9 10 ...9 10 1071 Problematic This is getting incredibly frustrating. This is a Mexican Golden Beaver. We only rate dogs. Only send dogs ...10/10 https://t.co/0yolOOyD3X 10 10 ...10 10 1077 Problematic This... is a Tyrannosaurus rex. We only rate dogs. Please only send in dogs. Thank you ...10/10 https://t.co/zxw8d5g94P 10 10 ...10 10 1084 Problematic "Don't talk to me or my son ever again" ...10/10 for both https://t.co/s96OYXZIfK 10 10 ...10 10 1098 Problematic Right after you graduate vs when you remember you're on your own now and can barely work a washing machine ...10/10 https://t.co/O1TLuYjsNS 10 10 ...10 10 1111 Problematic "Ello this is dog how may I assist" ...10/10 https://t.co/jeAENpjH7L 10 10 ...10 10 1266 Problematic *lets out a tiny whimper and then collapses* ...12/10 https://t.co/BNdVZEHRow 12 10 ...12 10 1322 Problematic When you're just relaxin and having a swell time but then remember you have to fill out the FAFSA ...11/10 https://t.co/qy33OBcexg 11 10 ...11 10 1341 Problematic "Yes hi could I get a number 4 with no pickles" ...12/10 https://t.co/kQPVxqA3gq 12 10 ...12 10 1372 Problematic I know it's tempting, but please stop sending in pics of Donald Trump. Thank you ...9/10 https://t.co/y35Y1TJERY 9 10 ...9 10 1435 Problematic Please stop sending in saber-toothed tigers. This is getting ridiculous. We only rate dogs. ...8/10 https://t.co/iAeQNueou8 8 10 ...8 10 1610 Problematic For the last time, WE. DO. NOT. RATE. BULBASAUR. We only rate dogs. Please only send dogs. Thank you ...9/10 https://t.co/GboDG8WhJG 9 10 ...9 10 1627 Problematic "FOR THE LAST TIME I DON'T WANNA PLAY TWISTER ALL THE SPOTS ARE GREY DAMN IT CINDY" ...10/10 https://t.co/uhQNehTpIu 10 10 ...10 10 ###Markdown Some of these were still causing problems because there was a "..." preceding the rating.We could also remove the tweets where it says "We only rate dogs" Fix the "..." 
before some of the ratings but have not updated the dataframe yet. Use the "lstrip" command. This is just to get rid of the problematic ratings ###Code i=0 rating_num = np.int8(rating_num) rating_denom = np.int8(rating_denom) for txt in df_tweets['text']: ratings = re.findall("[0-9.]+\/\d+", txt) #Use the last value as the correct rating ratings_correct = ratings[len(ratings)-1] # Fix the "..." that is present in some of the ratings. ratings_correct1 = ratings_correct.lstrip(".") nums = np.array(ratings_correct1.split("/")) #round and then convert to integer try: nums = np.int8(np.rint(np.float16(nums))) if ((rating_num[i]!=nums[0]) | (rating_denom[i]!=nums[1])): #Check to see if this txt has two dogs: if ( (txt.find(' and') != -1) | (txt.find('&amp')!= -1) ) : print('Has 2 DOGS : ',txt,len(ratings), rating_num[i],rating_denom[i],nums[0],nums[1]) else: print('Has 1 DOG : ',txt,len(ratings), rating_num[i],rating_denom[i],nums[0],nums[1]) except: print(ratings_correct1) print(i,"Problematic", txt,rating_num[i],rating_denom[i],nums[0],nums[1]) i=i+1 ###Output Has 1 DOG : This is Bella. She hopes her smile made you smile. If not, she is also offering you her favorite monkey. 13.5/10 https://t.co/qjrljjt948 1 5 10 14 10 Has 1 DOG : @roushfenway These are good dogs but 17/10 is an emotional impulse rating. More like 13/10s 2 17 10 13 10 Has 1 DOG : @jonnysun @Lin_Manuel ok jomny I know you're excited but 960/00 isn't a valid rating, 13/10 is tho 2 -64 0 13 10 Has 1 DOG : RT @dog_rates: This is Logan, the Chow who lived. He solemnly swears he's up to lots of good. H*ckin magical af 9.75/10 https://t.co/yBO5wu… 1 75 10 10 10 Has 2 DOGS : RT @dog_rates: "Yep... just as I suspected. You're not flossing." 12/10 and 11/10 for the pup not flossing https://t.co/SuXcI9B7pQ 2 12 10 11 10 Has 1 DOG : This is Logan, the Chow who lived. He solemnly swears he's up to lots of good. H*ckin magical af 9.75/10 https://t.co/yBO5wuqaPS 1 75 10 10 10 Has 1 DOG : This is Sophie. 
She's a Jubilant Bush Pupper. Super h*ckin rare. Appears at random just to smile at the locals. 11.27/10 would smile back https://t.co/QFaUiIHxHq 1 27 10 11 10 Has 2 DOGS : "Yep... just as I suspected. You're not flossing." 12/10 and 11/10 for the pup not flossing https://t.co/SuXcI9B7pQ 2 12 10 11 10 Has 2 DOGS : RT @dog_rates: After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https:/… 2 9 11 14 10 Has 1 DOG : RT @dog_rates: Meet Eve. She's a raging alcoholic 8/10 (would b 11/10 but pupper alcoholism is a tragic issue that I can't condone) https:/… 2 8 10 11 10 Has 2 DOGS : This is Bookstore and Seaweed. Bookstore is tired and Seaweed is an asshole. 10/10 and 7/10 respectively https://t.co/eUGjGjjFVJ 2 10 10 7 10 Has 2 DOGS : After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https://t.co/XAVDNDaVgQ 2 9 11 14 10 Has 1 DOG : Happy 4/20 from the squad! 13/10 for all https://t.co/eV1diwds8a 2 4 20 13 10 Has 1 DOG : This is Bluebert. He just saw that both #FinalFur match ups are split 50/50. Amazed af. 11/10 https://t.co/Kky1DPG4iq 2 50 50 11 10 Has 2 DOGS : Meet Travis and Flurp. Travis is pretty chill but Flurp can't lie down properly. 10/10 &amp; 8/10 get it together Flurp https://t.co/Akzl5ynMmE 2 10 10 8 10 Has 2 DOGS : This is Socks. That water pup w the super legs just splashed him. Socks did not appreciate that. 9/10 and 2/10 https://t.co/8rc5I22bBf 2 9 10 2 10 Has 1 DOG : This may be the greatest video I've ever been sent. 4/10 for Charles the puppy, 13/10 overall. (Vid by @stevenxx_) https://t.co/uaJmNgXR2P 2 4 10 13 10 Has 2 DOGS : Meet Oliviér. He takes killer selfies. Has a dog of his own. It leaps at random &amp; can't bark for shit. 10/10 &amp; 5/10 https://t.co/6NgsQJuSBJ 2 10 10 5 10 Has 2 DOGS : When bae says they can't go out but you see them with someone else that same night. 
5/10 &amp; 10/10 for heartbroken pup https://t.co/aenk0KpoWM 2 5 10 10 10 Has 1 DOG : This is Eriq. His friend just reminded him of last year's super bowl. Not cool friend 10/10 for Eriq 6/10 for friend https://t.co/PlEXTofdpf 2 10 10 6 10 Has 2 DOGS : Meet Fynn &amp; Taco. Fynn is an all-powerful leaf lord and Taco is in the wrong place at the wrong time. 11/10 &amp; 10/10 https://t.co/MuqHPvtL8c 2 11 10 10 10 Has 2 DOGS : This is Darrel. He just robbed a 7/11 and is in a high speed police chase. Was just spotted by the helicopter 10/10 https://t.co/7EsP8LmSp5 2 7 11 10 10 Has 1 DOG : I've been told there's a slight possibility he's checking his mirror. We'll bump to 9.5/10. Still a menace 1 5 10 10 10 Has 1 DOG : Here we have uncovered an entire battalion of holiday puppers. Average of 11.26/10 https://t.co/eNm2S6p9BD 1 26 10 11 10 Has 2 DOGS : Meet Tassy &amp; Bee. Tassy is pretty chill, but Bee is convinced the Ruffles are haunted. 10/10 &amp; 11/10 respectively https://t.co/fgORpmTN9C 2 10 10 11 10 Has 2 DOGS : These two pups just met and have instantly bonded. Spectacular scene. Mesmerizing af. 10/10 and 7/10 for blue dog https://t.co/gwryaJO4tC 2 10 10 7 10 Has 2 DOGS : Meet Rufio. He is unaware of the pink legless pupper wrapped around him. Might want to get that checked 10/10 &amp; 4/10 https://t.co/KNfLnYPmYh 2 10 10 4 10 Has 2 DOGS : Two gorgeous dogs here. Little waddling dog is a rebel. Refuses to look at camera. Must be a preteen. 5/10 &amp; 8/10 https://t.co/YPfw7oahbD 2 5 10 8 10 Has 1 DOG : Meet Eve. She's a raging alcoholic 8/10 (would b 11/10 but pupper alcoholism is a tragic issue that I can't condone) https://t.co/U36HYQIijg 2 8 10 11 10 Has 1 DOG : 10/10 for dog. 7/10 for cat. 12/10 for human. Much skill. Would pet all https://t.co/uhx5gfpx5k 3 10 10 12 10 Has 2 DOGS : Meet Holly. She's trying to teach small human-like pup about blocks but he's not paying attention smh. 
11/10 &amp; 8/10 https://t.co/RcksaUrGNu 2 11 10 8 10 Has 2 DOGS : Meet Hank and Sully. Hank is very proud of the pumpkin they found and Sully doesn't give a shit. 11/10 and 8/10 https://t.co/cwoP1ftbrj 2 11 10 8 10 Has 2 DOGS : Here we have Pancho and Peaches. Pancho is a Condoleezza Gryffindor, and Peaches is just an asshole. 10/10 &amp; 7/10 https://t.co/Lh1BsJrWPp 2 10 10 7 10 Has 2 DOGS : This is Spark. He's nervous. Other dog hasn't moved in a while. Won't come when called. Doesn't fetch well 8/10&amp;1/10 https://t.co/stEodX9Aba 2 8 10 1 10 Has 1 DOG : This is Kial. Kial is either wearing a cape, which would be rad, or flashing us, which would be rude. 10/10 or 4/10 https://t.co/8zcwIoiuqR 2 10 10 4 10 Has 2 DOGS : Two dogs in this one. Both are rare Jujitsu Pythagoreans. One slightly whiter than other. Long legs. 7/10 and 8/10 https://t.co/ITxxcc4v9y 2 7 10 8 10 Has 1 DOG : This is an Albanian 3 1/2 legged Episcopalian. Loves well-polished hardwood flooring. Penis on the collar. 9/10 https://t.co/d9NcXFKwLv 2 1 2 9 10 ###Markdown 1. When it's a single dog, typically it's better to take the last rating, as those are the updated ratings. Also some have decimal ratings which have been extracted incorrectly. See examples below : 1. This is Bella. She hopes her smile made you smile. If not, she is also offering you her favorite monkey. 13.5/10 https://t.co/qjrljjt948 1 5 10 14 10 2. "@roushfenway These are good dogs but 17/10 is an emotional impulse rating. More like 13/10s" 17 10 13 10 3. "This is an Albanian 3 1/2 legged Episcopalian. Loves well-polished hardwood flooring. Penis on the collar. 9/10" 2. Some of the texts have two dogs in them, and hence two ratings. I will take the first one for these, as the twitter_enhanced file followed that logic as well. See examples below : 1. "Two gorgeous dogs here. Little waddling dog is a rebel. Refuses to look at camera. Must be a preteen. 5/10 &amp; 8/10 https://t.co/YPfw7oahbD" 2. "This is Spark. He's nervous.
Other dog hasn't moved in a while. Won't come when called. Doesn't fetch well 8/10&amp;1/10 https://t.co/stEodX9Aba" Assess the Image data ###Code df_image.sample(10) df_image.duplicated(subset='tweet_id').value_counts() # None of the tweet_Ids are duplicated. temp = df_image[df_image['jpg_url'].str.contains('jpg')==False]['jpg_url'] temp.index df_image.iloc[[320,815]] ###Output _____no_output_____ ###Markdown These are png formats but they are perfectly valid. So nothing to be done here. Both jpg and png are acceptable formats. Quality issues in the data: A. twitter_enhanced.csv: 1. Large amount of missing data for some variables, such as: in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp. 2. Twitter_id is int but we will not be using it. So change it to string. 3. Incorrect ratings or data: Examined the tweets and found a number of incorrect ratings. 1. The decimal ratings are extracted incorrectly. 2. Some have texts which look like ratings, like 9/11 or 7/11, and these are not correct. The updated rating should be taken. 3. Some have pics which are not dogs. These can be identified by "We only rate dogs". B. df_json: 4. Some variables have all null values and need to be investigated further: geo, contributors and coordinates. 5. Some variables have lots of missing values: in_reply_to_screen_name, in_reply_to_status_id, in_reply_to_status_id_str, in_reply_to_user_id, in_reply_to_user_id_str, quoted_status, quoted_status_id, quoted_status_id_str, retweeted_status. 6. The user column has no valid information and has duplicated texts. 7. Json has 11 duplicated twitter Id's. C. df_image: 8. Inconsistency: In the Image dataframe, some names start with capital letters, others don't. Tidy Issues : A. df_tweets: 1. Since the dog can have one of the 4 categories, the "doggo", "floofer" etc. columns need to be row values instead, otherwise the information is just redundant. 2.
The source column has a lot of unhelpful text which can be removed and only the useful part retained, i.e. the actual source, such as iphone, vine etc. B. Many columns are duplicates between df_tweets and the df_json data, and the two dataframes need to be merged and the duplicate columns dropped. Clean Clean the Tweets data frame ###Code #Make a copy of the tweet df_tw_cpy = df_tweets.copy() df_tw_cpy.columns ###Output _____no_output_____ ###Markdown Define I want to keep only original tweets, so I wish to drop all retweets. The original tweets have retweeted_status_id as NaN. Make them 0 first and then keep only those tweets. ###Code df_tw_cpy['retweeted_status_id'].fillna(0,inplace=True) df_tw_cpy = df_tw_cpy[df_tw_cpy['retweeted_status_id']==0] ### Test df_tw_cpy['retweeted_status_id'].value_counts() ###Output _____no_output_____ ###Markdown Define Large amount of missing or unhelpful data for some variables, such as: in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp. Drop these columns. ###Code # Unhelpful information with lots of missing values dropcols = ['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'] df_tw_cpy = df_tw_cpy.drop(dropcols,axis=1) df_tw_cpy.head(2) #Test df_tw_cpy.columns ###Output _____no_output_____ ###Markdown Define Incorrect ratings were extracted from the texts. Fix them here (see the section on Quality Issues). Then drop the original columns rating_numerator and rating_denominator and fill them with the fixed ratings wherever applicable. ###Code i=0 myrating_num=[] myrating_denom=[] rating_num = np.int8(rating_num) rating_denom = np.int8(rating_denom) for txt in df_tw_cpy['text']: #Strip Rating. ratings = re.findall("[0-9.]+\/\d+", txt) #Use the last value as the correct rating in case of single dog ratings_correct = ratings[len(ratings)-1] #Use the first value in the case of double dogs.
if ( (txt.find(' and') != -1) | (txt.find('&amp')!= -1) ) : ratings_correct = ratings[0] # Fix the "..." that is present in some of the ratings. ratings_correct1 = ratings_correct.lstrip(".") #Get the numerator and denominator nums = np.array(ratings_correct1.split("/")) #Use the original ratings first. rnum = rating_num[i] dnum = rating_denom[i] #Change the rating if the extracted values are not the same try: nums = np.int16(np.rint(np.float32(nums))) #round and then convert to integer (int16/float32 avoid overflow and precision loss) if ((rating_num[i]!=nums[0]) | (rating_denom[i]!=nums[1])): rnum=nums[0] dnum=nums[1] except: print(ratings_correct1) print(i,"Problematic", txt,rating_num[i],rating_denom[i],nums[0],nums[1]) myrating_num.append(rnum) myrating_denom.append(dnum) i=i+1 # Note: drop() returns a new frame, so the result must be assigned back df_tw_cpy = df_tw_cpy.drop(['rating_numerator','rating_denominator'],axis=1) df_tw_cpy['rating_numerator'] = myrating_num df_tw_cpy['rating_denominator'] = myrating_denom df_tw_cpy.head(5) ###Output _____no_output_____ ###Markdown Clean the JSON data ###Code #Make a copy of the json data to clean df_json_clean = df_json.copy() df_json_clean=df_json_clean.dropna(axis=1,how='all') df_json_clean.columns df_json_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 28 columns): created_at 2356 non-null datetime64[ns] display_text_range 2356 non-null object entities 2356 non-null object extended_entities 2079 non-null object favorite_count 2356 non-null int64 favorited 2356 non-null bool full_text 2356 non-null object id 2356 non-null int64 id_str 2356 non-null int64 in_reply_to_screen_name 78 non-null object in_reply_to_status_id 78 non-null float64 in_reply_to_status_id_str 78 non-null float64 in_reply_to_user_id 78 non-null float64 in_reply_to_user_id_str 78 non-null float64 is_quote_status 2356 non-null bool lang 2356 non-null object place 1 non-null object possibly_sensitive 2218 non-null float64 possibly_sensitive_appealable 2218 non-null float64 quoted_status 27 non-null object
quoted_status_id 28 non-null float64 quoted_status_id_str 28 non-null float64 retweet_count 2356 non-null int64 retweeted 2356 non-null bool retweeted_status 170 non-null object source 2356 non-null object truncated 2356 non-null bool user 2356 non-null object dtypes: bool(4), datetime64[ns](1), float64(8), int64(4), object(11) memory usage: 451.0+ KB ###Markdown Define Drop all columns with fewer than 79 non-null values (thresh=79 keeps a column only if it has at least 79 non-null entries) ###Code df_json_clean.dropna(axis=1,thresh=79,inplace=True) df_json_clean.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 19 columns): created_at 2356 non-null datetime64[ns] display_text_range 2356 non-null object entities 2356 non-null object extended_entities 2079 non-null object favorite_count 2356 non-null int64 favorited 2356 non-null bool full_text 2356 non-null object id 2356 non-null int64 id_str 2356 non-null int64 is_quote_status 2356 non-null bool lang 2356 non-null object possibly_sensitive 2218 non-null float64 possibly_sensitive_appealable 2218 non-null float64 retweet_count 2356 non-null int64 retweeted 2356 non-null bool retweeted_status 170 non-null object source 2356 non-null object truncated 2356 non-null bool user 2356 non-null object dtypes: bool(4), datetime64[ns](1), float64(2), int64(4), object(8) memory usage: 285.4+ KB ###Markdown Use only dates up to Aug 1st 2017 https://stackoverflow.com/questions/22898824/filtering-pandas-dataframes-on-dates ###Code df_json_clean['created_at'] df_json_clean = df_json_clean[(df_json_clean['created_at']<datetime.date(2017,8,2))] ###Output _____no_output_____ ###Markdown Define Drop these columns which give no useful information ###Code dropcols = ['display_text_range','entities','extended_entities','id_str','possibly_sensitive', 'possibly_sensitive_appealable','source','user'] df_json_clean = df_json_clean.drop(dropcols, axis=1) ###Output _____no_output_____ ###Markdown Define Drop all rows which have retweeted status = True ###Code
df_json_clean = df_json_clean[(df_json_clean['retweeted']==False)] df_json_clean['retweeted'].value_counts() ###Output _____no_output_____ ###Markdown Define Many columns are duplicated between the JSON and the tweets data frames. Keep the columns that are useful and make a smaller subset of the JSON data. ###Code keepcols = ['id','favorite_count', 'favorited','retweet_count'] df_json_clean = df_json_clean[keepcols] ###Output _____no_output_____ ###Markdown Define Look for duplicated data and drop it ###Code df_json_clean[df_json_clean.duplicated(keep=False)] df_json_clean.drop_duplicates(inplace=True) df_json_clean.duplicated().value_counts() ###Output _____no_output_____ ###Markdown Clean the Image Data ###Code df_image_cpy = df_image.copy() ###Output _____no_output_____ ###Markdown Define: Capitalize names in df_image ###Code df_image['p1'].head(20).unique() df_image_cpy['p1'] = df_image_cpy['p1'].str.capitalize() df_image_cpy['p1'].head(20).unique() ###Output _____no_output_____ ###Markdown Merge the Tweets and JSON Files ###Code df_mg = pd.merge(df_tw_cpy,df_json_clean,left_on='tweet_id',right_on='id') #Drop the 'id' column as it is the same as the 'tweet_id' df_mg.drop('id',axis=1,inplace=True) df_mg.head(3) ###Output _____no_output_____ ###Markdown Merge the Image Data as well with the above data set ###Code #Merge the Image Data as well.
df_mg = pd.merge(df_mg,df_image_cpy,on='tweet_id') ###Output _____no_output_____ ###Markdown DefineConvert the twitter ID to string ###Code df_mg['tweet_id'] = df_mg['tweet_id'].astype('str') df_mg.info() # Tweet_id is now of type object ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1994 entries, 0 to 1993 Data columns (total 26 columns): tweet_id 1994 non-null object timestamp 1994 non-null object source 1994 non-null object text 1994 non-null object expanded_urls 1994 non-null object rating_numerator 1994 non-null int64 rating_denominator 1994 non-null int64 name 1994 non-null object doggo 1994 non-null object floofer 1994 non-null object pupper 1994 non-null object puppo 1994 non-null object favorite_count 1994 non-null int64 favorited 1994 non-null bool retweet_count 1994 non-null int64 jpg_url 1994 non-null object img_num 1994 non-null int64 p1 1994 non-null object p1_conf 1994 non-null float64 p1_dog 1994 non-null bool p2 1994 non-null object p2_conf 1994 non-null float64 p2_dog 1994 non-null bool p3 1994 non-null object p3_conf 1994 non-null float64 p3_dog 1994 non-null bool dtypes: bool(4), float64(3), int64(5), object(14) memory usage: 366.1+ KB ###Markdown Convert the doggo floofer, etc. 
categories as row values under the heading Category ###Code df_test = df_mg[['tweet_id','doggo','floofer','pupper','puppo']] df_test.head(10) df_test = pd.melt(df_test, id_vars=["tweet_id"],var_name="Category", value_name="Value") df_test.sort_values(["tweet_id"]).head(5) df_test = df_test[df_test['Value']!="None"] df_test.drop('Value',axis=1,inplace=True) df_test.head(10).sort_values(["Category"]) #Merge again with the df_mg dataframe df_mg_2 = df_mg.copy() df_mg_2.drop(['doggo','floofer','pupper','puppo'],axis=1,inplace=True) df_fin = pd.merge(df_mg_2,df_test,on='tweet_id',how='outer') df_fin.columns df_fin.info() df_fin.to_csv('twitter_archive_master.csv', sep='\t', encoding='utf-8') ###Output _____no_output_____ ###Markdown Visualize ###Code import matplotlib.pyplot as plt plt.clf() df_fin['Category'].value_counts().plot(kind='bar') plt.show() df_fin['p1'].value_counts().sort_index() ###Output _____no_output_____ ###Markdown Not all of these are dog breeds. ###Code df_fin[['p1','text']] #Issue : the text is truncated # Fix : pd.set_option('display.max_colwidth', -1) pd.set_option('display.max_colwidth', -1) df_fin[['p1','text','jpg_url']].head(3) ###Output _____no_output_____ ###Markdown Some of these are not dogs. I checked the URLs and indeed they are not! Let's check their ratings. ###Code df_fin[['p1','text','rating_numerator','rating_denominator']].head(3) ###Output _____no_output_____ ###Markdown Interestingly, desktop_computer got a rating of 10/10 while hen (2064) and three-toed_sloth (2060) got low ratings. Let's check their URLs. It looks like the ones which are named as an object yet have a high rating are those which do have a dog in them. Let's sort the data using rating.
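The sorting step that follows uses `sort_values(ascending=False)`; pandas `nlargest` gives the same top rows more directly. A quick sketch on made-up ratings (not the real archive):

```python
import pandas as pd

# Toy frame with hypothetical ratings
toy = pd.DataFrame({"name": ["a", "b", "c", "d"],
                    "rating_numerator": [12, 1776, 10, 14]})

# Two equivalent ways to get the two highest-rated rows
top_sort = toy.sort_values("rating_numerator", ascending=False).head(2)
top_n = toy.nlargest(2, "rating_numerator")

print(top_sort.equals(top_n))  # True
```

`nlargest` avoids sorting the whole frame when only the top few rows are needed.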
###Code selcols = ['tweet_id', 'rating_numerator', 'rating_denominator', 'name', 'favorite_count', 'favorited', 'retweet_count', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog', 'Category'] df_fin.sort_values(['rating_numerator'],ascending=False)[selcols].head(3) #Lets choose ratings above 20 df_use = df_fin[df_fin['rating_numerator']>20][selcols] #Which breeds have been posted the most? plt.clf() temp = df_fin['p1'].value_counts() fig = plt.figure(figsize=(12,6)) temp[temp.values>20].plot(kind="bar") plt.ylabel("Number Predicted (p1)") plt.tight_layout() plt.show() fig.savefig('p1count.png') """ Bar chart """ def plotbar(x,y,xlab,ylab) : fig = plt.figure(figsize=(12,6)) opacity = 0.4 error_config = {'ecolor': '0.3'} bar_width = 0.35 index = np.arange(len(x)) plt.bar(index, y) plt.xlabel(xlab, fontsize=12) plt.ylabel(ylab, fontsize=12) fig.tight_layout() plt.xticks(index, x, fontsize=12, rotation=90) plt.tight_layout() plt.show() return fig def plotscatter(x,y,xlab,ylab) : fig = plt.figure(figsize=(10,5)) plt.scatter(x, y) plt.xlabel(xlab, fontsize=16) plt.ylabel(ylab, fontsize=16) return fig #Get the breeds with rating numerator above 10 subdf = df_fin[df_fin.rating_numerator>10][['p1','favorite_count']] mytemp = subdf['p1'].value_counts() keepbreeds = np.array(mytemp[mytemp.values>10].index) keepbreeds #subdf[subdf['p1'] in keepbreeds] #Does not work #Keep only breeds with highest ratings subdf = subdf[subdf.apply(lambda x: x['p1'] in keepbreeds, axis=1)==True] #Group By breeds and add the favorite counts. 
subdf_gp = subdf.groupby(['p1'],as_index=False)['favorite_count'].sum() subdf_gp = subdf_gp.sort_values(by='favorite_count',ascending=False) x = subdf_gp['p1'] y= subdf_gp['favorite_count'] _xlab = "Top rated Breeds" _ylab = "Favourite Count" f1 = plotbar(x,y,_xlab,_ylab) f1.savefig('favcount.png') plt.close(f1) ###Output _____no_output_____ ###Markdown Golden retriever has the highest favourite count, followed by Pembroke and then labrador_retriever. ###Code #List unique values and count them # How many unique breeds have been listed? len(sorted(df_fin.p1.unique())) # Let's now study the breeds which have been categorized as "doggo", "pupper", etc. subdf = df_fin[df_fin.Category.isnull()==False] subdf_gp = subdf.groupby(['p1'],as_index=False)[['favorite_count','retweet_count']].max() x = subdf_gp['favorite_count'] y= subdf_gp['retweet_count'] _xlab = "Favorite Count" _ylab = "retweet_count" f1 = plotscatter(x,y,_xlab,_ylab) f1 f1.savefig("favretweet.png") plt.close(f1) per = subdf.retweet_count/subdf.favorite_count ###Output _____no_output_____ ###Markdown There is a strong correlation between favorite count and retweet count, suggesting that people first favorite and then retweet. ###Code x = subdf['favorite_count'] y= per _xlab = "Favorite Count" _ylab = "%age (retweet/favorite)" plotscatter(x,y,_xlab,_ylab) from numpy import inf per[per == inf] = 0 per.mean() #The mean %age of favourited counts which are retweeted is about 30% subdf['retweet_count'].corr(subdf['favorite_count']) #What are the favorite counts for the different dog types and which one is the highest?
subdf_gp = subdf.groupby(['Category'],as_index=False)[['favorite_count','retweet_count']].mean() subdf_gp.head() x = subdf_gp['Category'] y= subdf_gp['favorite_count'] _xlab = "Dog Category" _ylab = "Favourite Count" plotbar(x,y,_xlab,_ylab) def plotmult(df,xlab,ylab) : fig = plt.figure(figsize=(12,6)) ax = fig.add_subplot(111) x = df[xlab] y0 = df[ylab[0]] y1 = df[ylab[1]] y=[y0,y1] ind = np.arange(len(x)) width = 0.39 rects1 = ax.bar(ind, y[0], width, color='r') rects2 = ax.bar(ind+width, y[1], width, color='y') ax.set_ylabel('Counts') ax.set_title('') ax.set_xticks(ind) ax.set_xticklabels(x, fontsize=14, rotation=90) f1 = ax.legend( (rects1[0], rects2[0]), ('Favorite Count', 'Retweets') ) plt.tight_layout() return fig xlab = 'Category' ylab=['favorite_count','retweet_count'] f1 = plotmult(subdf_gp,xlab,ylab) f1 f1.savefig('dog_category.png') ###Output _____no_output_____
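The retweet-to-favourite percentage computed earlier patches `inf` values after dividing; the division can instead be guarded up front. A small sketch on made-up counts (not the real tweet data):

```python
import numpy as np

# Hypothetical counts; the second tweet has zero favourites
favorites = np.array([100.0, 0.0, 50.0])
retweets = np.array([30.0, 7.0, 25.0])

# Guard the division instead of replacing inf afterwards:
# use a denominator of 1 where favourites is 0, then zero out those entries
safe_denom = np.where(favorites > 0, favorites, 1.0)
ratio = np.where(favorites > 0, retweets / safe_denom, 0.0)

print(ratio)  # [0.3 0.  0.5]
```

This avoids both the division-by-zero warning and the later `per[per == inf] = 0` repair step.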
examples/Trajectory Clustering.ipynb
###Markdown This notebook serves as a simple example on how to run the clustering algorithm on a MD trajectory file ###Code from raffy import trajectory_cluster as tc # Choose parameters frames = ':' # Indicate which frames to analyse k = 4 # Number of clusters to be found ncores = 1 # For multiprocessing, requires the ray package cut = 4.42 # Cutoff in Angstrom of the descriptor. If not specified it is automatically set filename = "data/Au/example.xyz" # The trajectory file can be in .xyz or .dump format # Run the clustering. This will generate another .xyz file containing the label of each atom in a "tags" column. tc.trajectory_cluster(filename, index = frames, k = k, ncores = ncores, cut=cut) ###Output Finished. The Labeled trajectory file can be found at data/Au/example_clustered_k=4_cut=5.17.xyz
Simulated Phenomenon.ipynb
###Markdown Table of Contents- [Introduction](introduction) - [Breastfeeding](breastfeeding) - [What is breastfeeding and why is it important?](w_breastfeeding) - [Variables](variables) - [Age](age) - [Civil Status](civil_status) - [Health Insurance Status](hel_ins_status) - [Initiate Breastfeeding](int_breastfeeding)- [Dataset](dataset)- [References](references)- [Other Sources](other_sources) IntroductionThe project requires us to simulate a real-world phenomenon of our choosing.We have been asked to model and synthesise data related to this phenomenon using Python, particularly the numpy.random library. The output of the project should be a synthesised data set.I will be examining the rate of breastfeeding initiation in Ireland. I will create a dataset of variables associated with breastfeeding. I will simulate the distribution of breastfeeding initiation in a random sample of an identified segment of the population. I will explore the relationships between these factors and how they may influence the rate of breastfeeding initiation.This will include:1. The distribution of breastfeeding initiation in an identified segment of the population2. The factors contributing to breastfeeding initiation3. How these factors are distributed in the identified segment of the populationThis topic is of particular interest to me as I have been successfully breastfeeding my daughter for the past year. The publication of the Irish Maternity Indicator System National Report 2019 report [4] received widespread news coverage and highlighted the low breastfeeding rates in Ireland [5]. On reflection, I was unable to identify how I had decided to breastfeed. I began to read more on the topic, including how Irish rates compare to international rates and breastfeeding socio-cultural changes. From this, I identified influencing factors on breastfeeding initiation. While I meet some of the criteria, I do not meet all. 
And yet as a mother in Ireland exclusively breastfeeding for over 12 months, I am one of only 7%. This intrigued me, and I wanted to examine what factors may have influenced my breastfeeding journey. Breastfeeding![Lactation Image](https://github.com/SharonNicG/52465-Project/blob/main/Lactation%20Image%20OS.jpg) What is breastfeeding and why is it important? Breastfeeding, or nursing, is the process of providing an infant with their mother's breastmilk [6]. This is usually done directly from the breast but can also be provided indirectly using expressed breast milk [*Ibid.*]. Breastfeeding is important as it offers numerous health benefits for both mother and infant. **Benefits to infant:** * Breast milk is naturally designed to meet the calorific and nutritional needs of an infant [7] and adapts to meet the needs of the infant as they change [8] * Breast milk provides natural antibodies that help to protect against common infections and diseases [*Ibid.*] * Breastfeeding is associated with better long-term health and wellbeing outcomes, including a lower likelihood of developing asthma or obesity, and higher income in later life [9] **Benefits to mother:** * Breastfeeding lowers the mother's risk of breast and ovarian cancer, osteoporosis, cardiovascular disease, and obesity [10]. * Breastfeeding is associated with lower rates of post-natal depression and fewer depressive symptoms for women who do develop post-natal depression while breastfeeding [11, 12]. * Breastfeeding is a cost-effective, safe and hygienic method of infant feeding [13]. The World Health Organisation (WHO) and numerous other organisations recommend exclusive breastfeeding for the first six months of an infant's life and breastfeeding supplemented by other foods from 6 months on [14, 15, 16, 17]. However, globally nearly 2 out of 3 infants are not exclusively breastfed for the first six months [18].
Ireland has one of the lowest breastfeeding initiation rates in the world, with 63.8% of mothers breastfeeding for their child's first feed [19]. The rate of breastfeeding drops substantially within days; on average, only 37.3% of mothers are breastfeeding on discharge from hospital [*Ibid.*]. Given the physical, social and economic advantages of breastfeeding over artificial and combination feeding (a mix of breast and artificial), both the WHO and the HSE have undertaken a number of measures to increase the rates of breastfeeding initiation and exclusive breastfeeding for the first six months in Ireland [20, 21]. Funded research is one of these measures, including national longitudinal studies to identify factors that may influence breastfeeding rates [22]. Variables A review of some of the completed research projects has identified common factors that have been researched and for which there is a bank of data to refer to. These are identified in the table below: | Variable Name | Description | Data Type | Distribution ||--------------------------|-------------------------------------|-----------|-------------------|| Age | Age of mother | Numeric | Normal/Triangular || Civil Status | Civil status of mother | Categorical | Normal/Triangular || Private Health Insurance | Whether mother has health insurance | Boolean | Normal/Triangular | These factors will be used as the variables for the development of the dataset. The fourth variable will be 'Breastfeeding Initiation'. This will be informed by data from existing research and dependent on values assigned to records under the other variables. | Variable Name | Description | Data Type | Distribution ||--------------------------|--------------------|-----------|-------------------|| Breastfeeding Initiation | Dependent Variable | Boolean | Normal/Triangular | A trawl of information sources led to the decision to use data from 2016.
While the National Perinatal Reporting System (NPRS) produces annual reports [*Ibid.*], they are published with a 12-month delay, and final versions (following feedback and review) are available after 24 months. This means the 2016 report is the latest final version available. An initial search for information on Civil Status statistics led to the 2016 census data. While this ultimately wasn't used, it guided the use of the 2016 NPRS data. Similarly, historical data on Private Health Insurance rates in Ireland varies greatly, and 2016 seemed to produce the most applicable data for use here. Below is an outline of each variable, the data it is based on, and how it will be used for the development of the dataset. As the expected distributions for each variable are the same, different approaches will be taken with each to generate them for the dataset. Age A review of data provided by the NPRS study is used here to determine how maternal age is distributed across the population - mothers with live births in 2016 [23]. While age is a numerical value, it is presented by NPRS as a categorical variable/discrete groups, ranging from under 20 years of age to 45 years of age and older. The NPRS study provides the frequency and percentages of births within each group. ###Code # Downloaded NPRS_Age.csv from NPRS 2016 Report age = pd.read_csv("Data/Age_and_Feeding_Type.csv", index_col='Age Group') # Transpose index and columns age = age.T # Integer based indexing for selection by position - See References #24 age = age.iloc[0:4, 0:7] age ###Output _____no_output_____ ###Markdown The grouping of data by age group reduces the usefulness of the `describe()` function on the data frame. However, an initial view of the NPRS data indicates that the data is somewhat normally distributed, with births increasing in the 25 - 29 age group, peaking at 30 - 34 years of age and beginning to decline in the 35 - 39 age set. Visualising the data set supports this analysis.
It shows a minimum value of fewer than 20 years of age increasing in a positive direction until a significant peak around 32 years of age - the midpoint of the age group with the greatest frequency of births. ###Code # Creates a figure and a set of subplots. fig, ax = plt.subplots(figsize=(5, 5), dpi=100) x_pos = np.arange(7) # Plot x versus y as markers ax.plot(x_pos, age.iloc[0, :], marker='^', label='Breast') ax.plot(x_pos, age.iloc[1, :], marker='^', label='Artificial') ax.plot(x_pos, age.iloc[2, :], marker='^', label='Combined') # Set labels for chart and axes ax.set_title('Age and Feeding Type') ax.set_xlabel('Maternal Age at Time of Birth') ax.set_ylabel('Frequency') # Create names on the x-axis ax.set_xticks(x_pos) # Rotate to make labels easier to read ax.set_xticklabels(age.columns, rotation=90) # Position legend ax.legend(loc="best") # Show plot plt.show() ###Output _____no_output_____ ###Markdown The initial approach taken was to create a function that generated a random number using the percentages from the csv file and assigned this to each record. ###Code def age_distribution(): y = rng.integers(1,100) if y <= 23: return random.randint(15,19) elif 23 < y <= 43: return random.randint(20,24) elif 43 < y <= 62: return random.randint(25,29) elif 62 < y <= 77: return random.randint(30,34) elif 77 < y <= 88: return random.randint(35,39) elif 88 < y <= 95: return random.randint(40,44) else: return random.randint(45,49) Age = age data = {'Age': age} ###Output _____no_output_____ ###Markdown Then I realised that the distribution visualised above could be replicated using a Triangular Distribution. This generates a random number from a weighted range by distributing events between the maximum and minimum values provided, based on a third value that indicates what the most likely outcome will be [25, 26]. Here we are looking for 100 events (births) distributed between the ages of 16 and 50 with a known peak where the mother's age is 32.
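As a cross-check on those parameters: the mean of a triangular distribution is (left + mode + right) / 3, and (16 + 30 + 50) / 3 = 32, the target value above. A quick verification sketch (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

left, mode, right = 16, 30, 50
expected_mean = (left + mode + right) / 3  # 32.0

# A large sample's mean should sit very close to the analytic value
sample = rng.triangular(left, mode, right, size=100_000)
print(expected_mean, round(float(sample.mean()), 2))
```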
###Code # Here we are looking for a random array with a lower limit of 16 an upper limit of 50 # and 32 being the number that appears most frequently (the midpoint of the most frequent age group) # over n number of instances where n is the total number of births # and for the out to be presented on a Triangular Distribution plot Tri_var = np.random.triangular(left = 20, mode = 30, right = 50, size = 100).astype(int) print ("Here is your triangular continuous random variable:\n % s" % (Tri_var)) # [55] # Triangular Distribution - See References #27 plt.hist(np.ma.round(np.random.triangular(left = 20, mode = 30, right = 50, size = 100)).astype(int), range = (16, 50), bins = 300, density = True) # Set labels for chart and axes plt.title('Randomised Distribution of Age') plt.xlabel('Maternal Age at Birth of Child') plt.ylabel('Frequency') # Show plot plt.show() ###Output Here is your triangular continuous random variable: [28 23 21 28 29 34 41 42 46 37 25 45 46 38 29 32 24 22 31 29 26 37 32 31 31 44 44 22 27 28 37 32 24 42 32 31 37 44 23 27 32 36 35 32 37 47 39 32 31 39 41 43 36 25 32 39 35 33 45 33 43 36 26 37 42 26 30 37 26 32 37 34 35 25 38 44 32 35 22 29 27 29 43 33 44 22 23 44 29 29 28 25 29 36 27 32 43 39 36 40] ###Markdown Civil Status Research has shown that maternal civil status at the time of birth is significantly associated with breastfeeding initiation [28,29,30].Data captured in the 2016 NPRS survey does not capture relational data between breastfeeding initiation and maternal civil status at the time of birth [31]. However, it does provide percentage values for maternal civil status across all age groups:| Maternal Civil Status at Birth | Percentage of Total births ||--------------------------------|----------------------------|| Married | 62.2 || Single | 36.4 || Other | 1.4 |Central Statistics Office (CSO) data on civil status for 2016 does record information across all age groups [32]. 
However, as it only captures data for * Marriages * Civil partnerships * Divorces, Judicial Separation and Nullity applications received by the courts * Divorces, Judicial Separation and Nullity applications granted by the courts it does not capture other civil arrangements such as informal separations or co-habitants. For the purposes of this simulation, the NPRS data will be used. This is a categorical variable that has three possible values: 1. Married 2. Single 3. Other (encompassing all other civil statuses as identified by the survey respondent) Note that same-sex marriage wasn't legal in 2016 and Civil Partnerships are recorded as Other in the NPRS Report. The first approach was to link the age of a person to a civil status by calculating how many people within each age group may fall into each civil status category, based on the NPRS data. This was excessively complicated, requiring an if statement, a dictionary and a for loop. While it produced a good distribution, it wasn't easily amendable if the figures changed, as each line needed to be amended in the dictionary. ###Code def civil_status(x): # 'chances' is a list of (percentage, status) pairs for one age range def distribution(chances): y = rng.integers(0,100) if y <= chances[0][0]: return chances[0][1] elif y <= chances[0][0] + chances[1][0]: return chances[1][1] return chances[2][1] status = { range(15,20) : [(0,'other'),(95,'Single'),(5,'Married')]} for i in status: if x in i: return distribution(status[i]) ###Output _____no_output_____ ###Markdown Instead, there is a much simpler way: based on the information in the NPRS report, `rng.choice` can be used to randomly distribute these values across the dataset population.
###Code # Classifying Civil Status # 'single' if single, 'married' if married and 'other' for all other categories civil_options = ['single', 'married', 'other'] # Randomisation of civil status based on the probability provided civil_status = rng.choice(civil_options, n, p=[.364, .622, .014]) # Count of zero elements in array - See References #33 print("Single: ", np.count_nonzero(civil_status == 'single')) print("Married: ", np.count_nonzero(civil_status == 'married')) print("Other: ", np.count_nonzero(civil_status == 'other')) ###Output Single: 23360 Married: 39496 Other: 883 ###Markdown Health Insurance StatusGallagher's research also highlighted a significant association between the health insurance status of a mother and breastfeeding initiation [34]. A review of other research into factors affecting breastfeeding initiation showed that access to enhanced peri- and postnatal medical care have considerable influence on breastfeeding initiation and continuance [35, 36, 37]. These were primarily completed in countries without a funded, or part-funded national health service. While these demonstrated that mothers with private health insurance were more likely to initiate breastfeeding, they weren't comparable in an Irish context. However, a follow-on study from Gallagher's research further supported her findings that maternal access to private health insurance increased the likelihood of breastfeeding [38].The Health Insurance Authority (HIA) in Ireland offers a comparison tool for health insurance policies, including the services available under each plan [39]. A review, carried out in December 2020, shows that of 314 plans on offer 237 provide outpatient maternity benefits which cover peri and postnatal cover care and support systems. These include one-to-one postnatal consultation with a lactation consultant. For mothers without health insurance, maternity care in Ireland is provided under the Maternity and Infant Care Scheme [40]. 
This is a combined hospital and GP service for the duration of pregnancy and six weeks postpartum. No specific resources are made available to support breastfeeding, though maternity hospitals and units may offer breastfeeding information sessions and one-to-one lactation consultations where needed. Access to these supports is limited. The Coombe Women's and Children's Hospital, for example, handles around 8,000 births per year [41] and provides breastfeeding information sessions to fewer than 1,000 mothers per year [42]. There are a number of community supports for breastfeeding, including [Le Leche](https://www.lalecheleagueireland.com/), [Friends of Breastfeeding](https://www.friendsofbreastfeeding.ie/), [Cuidiu](https://www.cuidiu.ie/) and private lactation consultants. Interestingly, neither Irish study assessed whether mothers accessed these services perinatally. Gallagher's research showed that 66% of the insured mothers initiated breastfeeding [43]. As insurance status can only have two possible outcomes (True or False), a binomial distribution was initially used to evaluate distribution across `n` number of births. ###Code # Here we are looking for a Binomial Distribution # with probability of 0.66 for each trial # repeated `n` times # to be presented as a Binomial Distribution plot sns.distplot(rng.binomial(n=10, p=0.66, size=n), hist=True, kde=False) # Set labels for chart and axes plt.title('Insurance Distribution Based on Percentage Insured') plt.xlabel('Age Distribution') plt.ylabel('Number Insured') # Show plot plt.show() ###Output _____no_output_____ ###Markdown While this randomly allocated health insurance status, it didn't distribute it across the age groups in an informed way. The HIA also provides historical market statistics on insurance in Ireland [44].
Data for 2016 was extracted from the HIA historical market statistics into a csv file, covering the age groups previously used, the number of people insured within each grouping, and the percentage of the total insured population each represents. ###Code # Downloaded HIA historical market statistics ins = pd.read_csv("Data/Insurance_by_Age.csv") ins ###Output _____no_output_____ ###Markdown The extracted data shows a positive increase in the number of people insured from ages 20 to 35, peaking significantly in the 35-39 category and declining thereafter. Visualising the data supports this analysis. ###Code # Downloaded HIA historical market statistics ins = pd.read_csv('Data/Insurance_by_Age.csv', index_col='Age') # Transpose index and columns ins = ins.T ins # Creates a figure and a set of subplots fig, ax = plt.subplots(figsize=(5,3), dpi=100) x_pos = np.arange(7) # Integer based indexing for selection by position - See References #24 # Scaled down y = ins.iloc[1, :] / 100 y = y*ins.iloc[0, :] # Plot x versus y as markers ax.plot(x_pos, y, marker='o', label='Insurance') # Set labels for chart and axes ax.set_title('Insurance by Age') ax.set_xlabel('Maternal Age') ax.set_ylabel('Number Insured') # Create names on the x-axis ax.set_xticks(x_pos) # Rotate to make labels easier to read ax.set_xticklabels(age.columns, rotation=90) # Show plot plt.show() ###Output _____no_output_____ ###Markdown While it isn't possible to get a breakdown of insurance status by gender or linked to the parity of women within the given age groups, the above analysis has provided two points to work from in the generated dataset: the percentage of people within the age groups that are likely to have health insurance, and that 66% of pregnant women have health insurance.
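As a design note, a per-record loop over age ranges can also be expressed as a vectorized lookup with `np.digitize`, which maps each age to its five-year band in one call. The band percentages here follow the HIA-style figures but are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Band edges for 15-19, 20-24, ..., 45-49 and an assumed percentage insured per band
edges = np.array([20, 25, 30, 35, 40, 45])
pct_insured = np.array([10, 9, 9, 14, 19, 19, 20])  # one entry per band

ages = rng.integers(16, 50, size=1000)
band = np.digitize(ages, edges)  # index of each age's band
insured = rng.integers(1, 101, size=ages.size) <= pct_insured[band]

print(insured.mean())  # roughly the band-weighted insured fraction
```

This produces the same kind of True/False array in a single pass, with no Python-level loop.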
###Code # ins_by_age(x) takes one argument - the age of the person # based on this age and the percentages identified from the HIA data # it gives each record a value of True or False # a dict that holds the HIA statistics def ins_by_age(x): agerange_insurance = {range(15,20) : 10, range(20,25) : 9, range(25,30) : 9, range(30,35) : 14, range(35,40) : 19, range(40,45) : 19, range(45,50) : 20} # Introduce randomisation by assigning True or False based on # a randomly assigned number generated by rng.integers for i in agerange_insurance: if x in i: y = rng.integers(1,100) if y <= agerange_insurance[i]: return True return False health_ins_status = np.array([ins_by_age(i) for i in age]) ###Output _____no_output_____ ###Markdown Initiate Breastfeeding This variable is based on the data used above and in Gallagher's study [45]. Using the data from the NPRS report, the percentage of women within each age group that are likely to initiate breastfeeding can be calculated. ###Code # Percentage of total births that initiate breastfeeding age = pd.read_csv("Data/Age_and_Feeding_Type.csv") age = age.iloc[0:7, 0:8] age['pct']=age['Breast']/(age['Total'])*100 age ###Output _____no_output_____ ###Markdown Additionally, this variable will be a dependent variable influenced by data from the other variables and information from Gallagher's study:1. Women with health insurance are more likely to initiate breastfeeding * 66% of women have access to health insurance [*Ibid*]2.
Civil Status influences the likelihood of breastfeeding * Married women are three times more likely to breastfeed Dataset ###Code # the number of records n = 100 ###Output _____no_output_____ ###Markdown Variable 1 - Age ###Code # A Triangular Distribution with a lower limit of 16, an upper limit of 50 # and a mode of 30 (within the most frequent age group) # over n instances, where n is the total number of births age_dist = (np.random.triangular(left = 16, mode = 30, right = 50, size = 500)).astype(int) # Using rng.choice to randomise the output age = rng.choice(age_dist, n) # generating the dataframe using the age distribution df = pd.DataFrame(age, columns = ['Maternal Age']) ###Output _____no_output_____ ###Markdown Variable 2 - Civil Status ###Code # Classifying Civil Status # 'single' if single, 'married' if married and 'other' for all other categories civil_status = ['single', 'married', 'other'] # Randomisation of civil status based on the probability provided civil_status = rng.choice(civil_status, n, p=[.364, .622, .014]) ###Output _____no_output_____ ###Markdown Variable 3 - Insurance Status ###Code # ins_by_age(x) takes one argument - the age of the person # based on this age and the percentages identified from the HIA data # it gives each record a value of True or False # a dictionary that holds the HIA statistics def ins_by_age(x): agerange_insurance = {range(15,20) : 10, range(20,25) : 9, range(25,30) : 9, range(30,35) : 14, range(35,40) : 19, range(40,45) : 19, range(45,50) : 20} # Introduce randomisation by assigning True or False based on # a randomly assigned number generated by rng.integers for i in agerange_insurance: if x in i: y = rng.integers(1,100) if y <= agerange_insurance[i]: return True return False health_ins_status = np.array([ins_by_age(i) for i in age]) df['Health Insurance Status'] = health_ins_status ###Output _____no_output_____ ###Markdown Variable 4 - Initiate Breastfeeding
###Code # breastfeeding(x) takes one argument - the index of the record # based on the person's age, health insurance status and civil status # it gives each record a value of True or False # a dictionary that holds the calculated percentage statistics def breastfeeding(x): bf_status = {range(15,20) : 23, range(20,25) : 32, range(25,30) : 43, range(30,35) : 53, range(35,40) : 55, range(40,45) : 53, range(45,50) : 50} a = age[x] # the age of the person b = health_ins_status[x] # the person's health insurance status c = civil_status[x] # the person's civil status q = 3 if c == 'married' else 2 if c == 'other' else 1 # If married they are 3 times more likely to initiate breastfeeding # Assigning 2 for people with 'other' as they may have an informal arrangement # Assigning 1 for people who identify as single # Introduce randomisation by assigning True or False based on # a randomly assigned number generated by rng.integers for i in bf_status: if a in i: y = rng.integers(1,1000) if y <= q*bf_status[i] or (b == True and y <= 66): # Using 66 as 66% of insured women initiate breastfeeding return True return False # generating the variable based on the function # note: iterate over record indices, not ages, since breastfeeding(x) indexes arrays by position initiate_bf = np.array([breastfeeding(i) for i in range(n)]) df['Initiate Breastfeeding'] = initiate_bf ###Output _____no_output_____ ###Markdown Dataset ###Code # See References #46 and #47 pd.set_option("expand_frame_repr", True) df ###Output _____no_output_____
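As a quick sanity check on this approach (a sketch with a hypothetical fixed seed and a large synthetic sample, separate from the notebook's `df` and `rng`), the simulated insurance rate for a single age band should land close to its HIA target percentage:

```python
import numpy as np

# Sketch only: check that the simulated insurance rate for 30-34 year-olds
# lands near the 14% target taken from the HIA figures above.
rng = np.random.default_rng(42)
AGERANGE_INSURANCE = {range(15, 20): 10, range(20, 25): 9, range(25, 30): 9,
                      range(30, 35): 14, range(35, 40): 19, range(40, 45): 19,
                      range(45, 50): 20}

def simulate_insured(age_value):
    # Same rule as ins_by_age above: insured if a 1-99 draw is at or below the band's percentage
    for band, pct in AGERANGE_INSURANCE.items():
        if age_value in band:
            return rng.integers(1, 100) <= pct
    return False

ages = rng.integers(30, 35, size=100_000)
rate = np.mean([simulate_insured(a) for a in ages])
print(round(rate, 3))
```

With 100,000 draws the observed rate should sit within a fraction of a percentage point of the 14% target, which gives some confidence that the per-band percentages survive the randomisation.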
notebooks/3_nasa_data_initial_explore.ipynb
###Markdown NASA Data Exploration ###Code raw_data_dir = '../data/raw' processed_data_dir = '../data/processed' figsize_width = 12 figsize_height = 8 output_dpi = 72 # Imports import os import numpy as np import pandas as pd from datetime import datetime import matplotlib.pyplot as plt # Load Data nasa_temp_file = os.path.join(raw_data_dir, 'nasa_temperature_anomaly.txt') nasa_sea_file = os.path.join(raw_data_dir, 'nasa_sea_level.txt') nasa_co2_file = os.path.join(raw_data_dir, 'nasa_carbon_dioxide_levels.txt') # Variable Setup default_fig_size = (figsize_width, figsize_height) # - Process temperature data temp_data = pd.read_csv(nasa_temp_file, sep='\t', header=None) temp_data.columns = ['Year', 'Annual Mean', 'Lowess Smoothing'] temp_data.set_index('Year', inplace=True) fig, ax = plt.subplots(figsize=default_fig_size) temp_data.plot(ax=ax) ax.grid(True, linestyle='--', color='grey', alpha=0.6) ax.set_title('Global Temperature Anomaly Data', fontweight='bold') ax.set_xlabel('') ax.set_ylabel(r'Temperature Anomaly ($\degree$C)') ax.legend() plt.show(); # - Process Sea-level File # -- Figure out header rows with open(nasa_sea_file, 'r') as fin: all_lines = fin.readlines() header_lines = np.array([1 for x in all_lines if x.startswith('HDR')]).sum() sea_level_data = pd.read_csv(nasa_sea_file, delim_whitespace=True, skiprows=header_lines-1).reset_index() sea_level_data.columns = ['Altimeter Type', 'File Cycle', 'Year Fraction', 'N Observations', 'N Weighted Observations', 'GMSL', 'Std GMSL', 'GMSL (smoothed)', 'GMSL (GIA Applied)', 'Std GMSL (GIA Applied)', 'GMSL (GIA, smoothed)', 'GMSL (GIA, smoothed, filtered)'] sea_level_data.set_index('Year Fraction', inplace=True) fig, ax = plt.subplots(figsize=default_fig_size) sea_level_var = sea_level_data.loc[:, 'GMSL (GIA, smoothed, filtered)'] \ - sea_level_data.loc[:, 'GMSL (GIA, smoothed, filtered)'].iloc[0] sea_level_var.plot(ax=ax) ax.grid(True, color='grey', alpha=0.6, linestyle='--') ax.set_title('Global Sea-Level
Height Change over Time', fontweight='bold') ax.set_xlabel('') ax.set_ylabel('Sea Height Change (mm)') ax.legend(loc='upper left') plt.show(); # - Process Carbon Dioxide Data with open(nasa_co2_file, 'r') as fin: all_lines = fin.readlines() header_lines = np.array([1 for x in all_lines if x.startswith('#')]).sum() co2_data = pd.read_csv(nasa_co2_file, skiprows=header_lines, header=None, delim_whitespace=True) co2_data[co2_data == -99.99] = np.nan co2_data.columns = ['Year', 'Month', 'Year Fraction', 'Average', 'Interpolated', 'Trend', 'N Days'] co2_data.set_index(['Year', 'Month'], inplace=True) new_idx = [datetime(x[0], x[1], 1) for x in co2_data.index] co2_data.index = new_idx co2_data.index.name = 'Date' # - Plot fig, ax = plt.subplots(figsize=default_fig_size) co2_data.loc[:, 'Average'].plot(ax=ax) ax.grid(True, linestyle='--', color='grey', alpha=0.6) ax.set_xlabel('') ax.set_ylabel('$CO_2$ Level (ppm)') ax.set_title('Global Carbon Dioxide Level over Time', fontweight='bold') plt.show(); ###Output _____no_output_____
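All three NASA files are parsed with the same idiom: count the header lines by their prefix, then tell `read_csv` how many rows to skip. A self-contained sketch of the pattern on a made-up in-memory file (not the NASA data):

```python
import io
import pandas as pd

# Made-up whitespace-delimited file whose header lines start with '#'
raw = "# source: example\n# units: mm\n1.0 2.0\n3.0 4.0\n"

# Count header lines by prefix, then skip exactly that many rows
lines = raw.splitlines()
n_header = sum(1 for x in lines if x.startswith('#'))
df = pd.read_csv(io.StringIO(raw), skiprows=n_header, header=None, sep=r'\s+')
print(n_header, df.shape)  # 2 (2, 2)
```

This avoids hard-coding a header length that may differ between file versions, which is presumably why the notebook counts `HDR` and `#` lines rather than passing a fixed `skiprows`.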
_notebooks/2021-01-04-Lock-free-data-structures.ipynb
###Markdown Data structures for fast infinite batching or streaming request processing > Here we discuss one of the coolest uses of a data structure, addressing a very natural scenario: a server processing streaming requests from clients in order. Processing these requests usually involves a pipeline of operations applied per request, with multiple threads in charge of the different stages of the pipeline. The requests get accessed by these threads, and the threads performing operations in the later part of the pipeline have to wait for the earlier threads to finish their execution. The usual way to ensure the correctness of multiple threads handling the same data concurrently is to use locks. The problem is framed as a producer/consumer problem, where one thread finishes its operation and becomes the producer of the data to be worked upon by another thread, the consumer. These two threads need to be synchronized. > Note: In this blog we will discuss a "lock-free" circular queue data structure called the disruptor. It was designed to be an efficient concurrent message-passing data structure. The official implementations and other discussions are available [here](https://lmax-exchange.github.io/disruptor/_discussion_blogs_other_useful_links). This blog intends to summarise its use case and show the points where the design of the disruptor scores big. LOCKS ARE BAD Whenever multiple concurrently running threads contend on a shared data structure, you need to ensure visibility of changes (i.e. a consumer thread can only get its hands on the data after the producer has processed it and published it for further processing). The usual and most common way to ensure these requirements is to use a lock. Locks need the operating system to arbitrate which thread has responsibility for a shared piece of data. The operating system might schedule other processes, and the software's thread may be left waiting in a queue.
Moreover, if other threads get scheduled by the CPU then the cache memory of the software's thread will be overwritten, and when it finally gets access to the CPU again it may have to go as far as main memory to fetch its required data. All this adds a lot of overhead, and it is evident from the simple experiment of incrementing a single shared variable. In the experiment below we increment a shared variable in three different ways. In the first case, a single thread increments the variable; in the second case, two threads increment it and synchronize through a lock; in the third case, two threads increment the variable and synchronize using atomic operations. SINGLE PROCESS INCREMENTING A SINGLE VARIABLE ###Code import time def single_thread(): start = time.time() x = 0 for i in range(500000000): x += 1 end = time.time() return(end-start) print(single_thread()) # another way to do a single-threaded increment, using a class class SingleThreadedCounter(): def __init__(self): self.val = 0 def increment(self): self.val += 1 ###Output _____no_output_____ ###Markdown TWO PROCESS INCREMENTING A SINGLE VARIABLE ###Code import time from threading import Thread, Lock mutex = Lock() x = 0 def thread_fcn(): global x # each thread takes the lock, performs its half of the increments, then releases it mutex.acquire() for i in range(250000000): x += 1 mutex.release() def mutex_increment(): start = time.time() t1 = Thread(target=thread_fcn) t2 = Thread(target=thread_fcn) t1.start() t2.start() t1.join() t2.join() end = time.time() return (end-start) print(mutex_increment()) ###Output 36.418396949768066
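For contrast, here is a rough illustration of the disruptor idea the blog is building toward (a sketch, not the LMAX implementation: CPython's GIL stands in for the memory barriers a real disruptor needs). A single-producer/single-consumer ring buffer can pass messages with no locks at all, each side advancing only its own sequence counter:

```python
import threading

class RingBuffer:
    """Conceptual single-producer / single-consumer ring buffer.

    The producer owns `head`, the consumer owns `tail`; neither side ever
    takes a lock. (In CPython the GIL makes these plain int updates safe;
    a real implementation relies on atomics and memory barriers.)
    """
    def __init__(self, size):
        assert size > 0 and size & (size - 1) == 0, "size must be a power of two"
        self.size = size
        self.mask = size - 1
        self.slots = [None] * size
        self.head = 0  # next sequence the producer will write
        self.tail = 0  # next sequence the consumer will read

    def put(self, item):
        while self.head - self.tail >= self.size:
            pass  # buffer full: spin until the consumer catches up
        self.slots[self.head & self.mask] = item
        self.head += 1  # publish: the slot is written before head advances

    def get(self):
        while self.tail == self.head:
            pass  # buffer empty: spin until the producer publishes
        item = self.slots[self.tail & self.mask]
        self.tail += 1
        return item

buf = RingBuffer(8)
result = []
consumer = threading.Thread(target=lambda: result.extend(buf.get() for _ in range(100)))
consumer.start()
for i in range(100):
    buf.put(i)
consumer.join()
print(result == list(range(100)))  # True
```

Each counter has exactly one writer, so there are no lost updates, and the consumer only reads a slot after the producer has published it by advancing `head` — which is the core of the disruptor's lock-free hand-off.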
project-bikesharing/keyboard-shortcuts.ipynb
###Markdown Keyboard shortcuts In this notebook, you'll get some practice using keyboard shortcuts. These are key to becoming proficient at using notebooks and will greatly increase your work speed. First up, switching between edit mode and command mode. Edit mode allows you to type into cells, while command mode uses key presses to execute commands such as creating new cells and opening the command palette. When you select a cell, you can tell which mode you're currently working in by the color of the box around the cell. In edit mode, the box and thick left border are colored green. In command mode, they are colored blue. Also in edit mode, you should see a cursor in the cell itself. By default, when you create a new cell or move to the next one, you'll be in command mode. To enter edit mode, press Enter/Return. To go back from edit mode to command mode, press Escape. > **Exercise:** Click on this cell, then press Shift + Enter to get to the next cell. Switch between edit and command mode a few times. ###Code # mode practice ###Output _____no_output_____ ###Markdown Help with commands If you ever need to look up a command, you can bring up the list of shortcuts by pressing `H` in command mode. The keyboard shortcuts are also available above in the Help menu. Go ahead and try it now. Creating new cells One of the most common commands is creating new cells. You can create a cell above the current cell by pressing `A` in command mode. Pressing `B` will create a cell below the currently selected cell. > **Exercise:** Create a cell above this cell using the keyboard command. > **Exercise:** Create a cell below this cell using the keyboard command. Switching between Markdown and code With keyboard shortcuts, it is quick and simple to switch between Markdown and code cells. To change from Markdown to a code cell, press `Y`. To switch from code to Markdown, press `M`. > **Exercise:** Switch the cell below between Markdown and code cells.
###Code ## Practice here def fibo(n): # Recursive Fibonacci sequence! if n == 0: return 0 elif n == 1: return 1 return fibo(n-1) + fibo(n-2) ###Output _____no_output_____ ###Markdown Line numbers A lot of times it is helpful to number the lines in your code for debugging purposes. You can turn on numbers by pressing `L` (in command mode of course) on a code cell. > **Exercise:** Turn line numbers on and off in the above code cell. Deleting cells Deleting cells is done by pressing `D` twice in a row, so `D`, `D`. This is to prevent accidental deletions; you have to press the button twice! > **Exercise:** Delete the cell below. Saving the notebook Notebooks are autosaved every once in a while, but you'll often want to save your work between those times. To save the notebook, press `S`. So easy! The Command Palette You can easily access the command palette by pressing Shift + Control/Command + `P`. > **Note:** This won't work in Firefox and Internet Explorer unfortunately. There is already a keyboard shortcut assigned to those keys in those browsers. However, it does work in Chrome and Safari. This will bring up the command palette where you can search for commands that aren't available through the keyboard shortcuts. For instance, there are buttons on the toolbar that move cells up and down (the up and down arrows), but there aren't corresponding keyboard shortcuts. To move a cell down, you can open up the command palette and type in "move" which will bring up the move commands. > **Exercise:** Use the command palette to move the cell below down one position. ###Code # below this cell # Move this cell down ###Output _____no_output_____
extras/overfitting_exercise.ipynb
###Markdown Overfitting ExerciseIn this exercise, we'll build a model that, as you'll see, dramatically overfits the training data. This will allow you to see what overfitting can "look like" in practice. ###Code import os import pandas as pd import numpy as np import math import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown For this exercise, we'll use gradient boosted trees. In order to implement this model, we'll use the XGBoost package. ###Code ! pip install xgboost import xgboost as xgb ###Output _____no_output_____ ###Markdown Here, we define a few helper functions. ###Code # number of rows in a dataframe def nrow(df): return(len(df.index)) # number of columns in a dataframe def ncol(df): return(len(df.columns)) # flatten nested lists/arrays flatten = lambda l: [item for sublist in l for item in sublist] # combine multiple arrays into a single list def c(*args): return(flatten([item for item in args])) ###Output _____no_output_____ ###Markdown In this exercise, we're going to try to predict the returns of the S&P 500 ETF. This may be a futile endeavor, since many experts consider the S&P 500 to be essentially unpredictable, but it will serve well for the purpose of this exercise. The following cell loads the data. ###Code df = pd.read_csv("SPYZ.csv") ###Output _____no_output_____ ###Markdown As you can see, the data file has four columns, `Date`, `Close`, `Volume` and `Return`. ###Code df.head() n = nrow(df) ###Output _____no_output_____ ###Markdown Next, we'll form our predictors/features. In the cells below, we create four types of features. We also use a parameter, `K`, to set the number of each type of feature to build. With a `K` of 25, 100 features will be created. This should already seem like a lot of features, and alert you to the potential that the model will be overfit. 
###Code predictors = [] # we'll create a new DataFrame to hold the data that we'll use to train the model # we'll create it from the `Return` column in the original DataFrame, but rename that column `y` model_df = pd.DataFrame(data = df['Return']).rename(columns = {"Return" : "y"}) # IMPORTANT: this sets how many of each of the following four predictors to create K = 25 ###Output _____no_output_____ ###Markdown Now, you write the code to create the four types of predictors. ###Code for L in range(1,K+1): # this predictor is just the return L days ago, where L goes from 1 to K # these predictors will be named `R1`, `R2`, etc. pR = "".join(["R",str(L)]) predictors.append(pR) for i in range(K+1,n): # TODO: fill in the code to assign the return from L days before to the ith row of this predictor in `model_df` model_df.loc[i, pR] = df.loc[i-L,'Return'] # this predictor is the return L days ago, squared, where L goes from 1 to K # these predictors will be named `Rsq1`, `Rsq2`, etc. pR2 = "".join(["Rsq",str(L)]) predictors.append(pR2) for i in range(K+1,n): # TODO: fill in the code to assign the squared return from L days before to the ith row of this predictor # in `model_df` model_df.loc[i, pR2] = (df.loc[i-L,'Return']) ** 2 # this predictor is the log volume L days ago, where L goes from 1 to K # these predictors will be named `V1`, `V2`, etc. pV = "".join(["V",str(L)]) predictors.append(pV) for i in range(K+1,n): # TODO: fill in the code to assign the log of the volume from L days before to the ith row of this predictor # in `model_df` # Add 1 to the volume before taking the log model_df.loc[i, pV] = math.log(1.0 + df.loc[i-L,'Volume']) # this predictor is the product of the return and the log volume from L days ago, where L goes from 1 to K # these predictors will be named `RV1`, `RV2`, etc. 
pRV = "".join(["RV",str(L)]) predictors.append(pRV) for i in range(K+1,n): # TODO: fill in the code to assign the product of the return and the log volume from L days before to the # ith row of this predictor in `model_df` model_df.loc[i, pRV] = model_df.loc[i, pR] * model_df.loc[i, pV] ###Output _____no_output_____ ###Markdown Let's take a look at the predictors we've created. ###Code model_df.iloc[100:105,:] ###Output _____no_output_____ ###Markdown Next, we create a DataFrame that holds the recent volatility of the ETF's returns, as measured by the standard deviation of a sliding window of the past 20 days' returns. ###Code vol_df = pd.DataFrame(data = df[['Return']]) for i in range(K+1,n): # TODO: create the code to assign the standard deviation of the return from the time period starting # 20 days before day i, up to the day before day i, to the ith row of `vol_df` vol_df.loc[i, 'vol'] = np.std(vol_df.loc[(i-20):(i-1),'Return']) ###Output _____no_output_____ ###Markdown Let's take a quick look at the result. ###Code vol_df.iloc[100:105,:] ###Output _____no_output_____ ###Markdown Now that we have our data, we can start thinking about training a model. ###Code # for training, we'll use all the data except for the first K days, for which the predictors' values are NaNs model = model_df.iloc[K:n,:] ###Output _____no_output_____ ###Markdown In the cell below, first split the data into train and test sets, and then split off the targets from the predictors. ###Code # Split data into train and test sets train_size = 2.0/3.0 breakpoint = round(nrow(model) * train_size) # TODO: fill in the code to split off the chunk of data up to the breakpoint as the training set, and # assign the rest as the test set. 
training_data = model.iloc[0:breakpoint,:] test_data = model.iloc[breakpoint:nrow(model),:] # TODO: Split training data and test data into targets (Y) and predictors (X), for the training set and the test set X_train = training_data.iloc[:,1:ncol(training_data)] Y_train = training_data.iloc[:,0] X_test = test_data.iloc[:,1:ncol(training_data)] Y_test = test_data.iloc[:,0] ###Output _____no_output_____ ###Markdown Great, now that we have our data, let's train the model. ###Code # DMatrix is an internal data structure used by XGBoost which is optimized for both memory efficiency # and training speed. dtrain = xgb.DMatrix(X_train, Y_train) # Train the XGBoost model param = { 'max_depth':20, 'silent':1 } num_round = 20 xgModel = xgb.train(param, dtrain, num_round) ###Output /opt/conda/lib/python3.6/site-packages/xgboost/core.py:587: FutureWarning: Series.base is deprecated and will be removed in a future version if getattr(data, 'base', None) is not None and \ /opt/conda/lib/python3.6/site-packages/xgboost/core.py:588: FutureWarning: Series.base is deprecated and will be removed in a future version data.base is not None and isinstance(data, np.ndarray) \ ###Markdown Now let's predict the returns for the S&P 500 ETF in both the train and test periods. If the model is successful, what should the train and test accuracies look like? What would be a key sign that the model has overfit the training data? Todo: Before you run the next cell, write down what you expect to see if the model is overfit. ###Code # Make the predictions on the train and test data preds_train = xgModel.predict(xgb.DMatrix(X_train)) preds_test = xgModel.predict(xgb.DMatrix(X_test)) ###Output _____no_output_____ ###Markdown Let's quickly look at the mean squared error of the predictions on the training and testing sets.
###Code # TODO: Calculate the mean squared error on the training set msetrain = sum((preds_train-Y_train)**2)/len(preds_train) msetrain # TODO: Calculate the mean squared error on the test set msetest = sum((preds_test-Y_test)**2)/len(preds_test) msetest ###Output _____no_output_____ ###Markdown Looks like the mean squared error on the test set is an order of magnitude greater than on the training set. Not a good sign. Now let's do some quick calculations to gauge how this would translate into performance. ###Code # combine prediction arrays into a single list predictions = c(preds_train, preds_test) responses = c(Y_train, Y_test) # as a holding size, we'll take predicted return divided by return variance # this is mean-variance optimization with a single asset vols = vol_df.loc[K:n,'vol'] position_size = predictions / vols ** 2 # TODO: Calculate pnl. Pnl in each time period is holding * realized return. performance = position_size * responses # plot simulated performance plt.plot(np.cumsum(performance)) plt.ylabel('Simulated Performance') plt.axvline(x=breakpoint, c = 'r') plt.show() ###Output _____no_output_____ ###Markdown Our simulated returns accumulate throughout the training period, but they are absolutely flat in the testing period. The model has no predictive power whatsoever in the out-of-sample period. Can you think of a few reasons our simulation of performance is unrealistic? ###Code # TODO: Answer the above question. # A few reasons: the simulation ignores transaction costs and slippage, # allows unbounded position sizes, and assumes perfect execution at the # same prices used to compute the realized returns. ###Output _____no_output_____
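As an aside on the volatility loop used earlier: the same 20-day standard deviation can be computed without an explicit Python loop. A sketch on synthetic returns (not the project's SPY data; `ddof=0` matches `np.std`'s population default, and `shift(1)` reproduces the window ending the day before):

```python
import numpy as np
import pandas as pd

# Synthetic daily returns, for illustration only
returns = pd.Series(np.random.default_rng(0).normal(0, 0.01, 100))

# Loop version, mirroring the notebook's vol_df construction
loop_vol = pd.Series(index=returns.index, dtype=float)
for i in range(20, len(returns)):
    loop_vol.iloc[i] = np.std(returns.iloc[i-20:i])

# Vectorized version: a 20-day window ending the previous day
rolled = returns.rolling(20).std(ddof=0).shift(1)

print(np.allclose(loop_vol.dropna(), rolled.dropna()))  # True
```

The vectorized form is both faster and less error-prone than hand-indexed slices, which matters once the dataset grows past toy size.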
day11.ipynb
###Markdown AOC 2020 Day 11 Seat layout input is a grid:
```
L.LL.LL.LL
LLLLLLL.LL
L.L.L..L..
LLLL.LL.LL
L.LL.LL.LL
L.LLLLL.LL
..L.L.....
LLLLLLLLLL
L.LLLLLL.L
L.LLLLL.LL
```
Grid entries can be one of:- `L` - empty seat- `#` - occupied seat- `.` - floor Rules are based on adjacent seats, 8 surrounding seats a la chess king moves (U,D,L,R,UL,UR,DL,DR). Rules to apply are:1. If a seat is empty (`L`) and there are no occupied seats adjacent to it, the seat becomes occupied.2. If a seat is occupied (`#`) and four or more seats adjacent to it are also occupied, the seat becomes empty.3. Otherwise, the seat's state does not change. Floor seats do not change. When rules are applied they stabilize; at that point count occupied seats, for this grid it's `37`. Input after one round of rules:
```
#.##.##.##
#######.##
#.#.#..#..
####.##.##
#.##.##.##
#.#####.##
..#.#.....
##########
#.######.#
#.#####.##
```
After second round:
```
#.LL.L#.##
#LLLLLL.L#
L.L.L..L..
#LLL.LL.L#
#.LL.LL.LL
#.LLLL#.##
..L.L.....
#LLLLLLLL#
#.LLLLLL.L
#.#LLLL.##
```
Eventual stable state after 3 more rounds:
```
#.#L.L#.##
#LLL#LL.L#
L.#.L..#..
#L##.##.L#
#.#L.LL.LL
#.#L#L#.##
..L.L.....
#L#L##L#L#
#.LLLLLL.L
#.#L#L#.##
```
###Code SampleInput="""L.LL.LL.LL LLLLLLL.LL L.L.L..L.. LLLL.LL.LL L.LL.LL.LL L.LLLLL.LL ..L.L.....
LLLLLLLLLL L.LLLLLL.L L.LLLLL.LL""" def load_seat_map(input): result = [] for line in input.split('\n'): if line != '': row = [seat for seat in line] result.append(row) return result sample_map0 = load_seat_map(SampleInput) sample_map0 def print_seat_map(seat_map): max_row = len(seat_map) max_col = len(seat_map[0]) for r in seat_map: print("".join(r)) print("max_row: {}, max_col: {}".format(max_row, max_col)) def adjacent_seats(seat_map, row, col): """Return a list of all adjacent seats for position row, column in seat_map""" adj = [] max_row = len(seat_map) max_col = len(seat_map[0]) for r in range(row-1,row+2): for c in range(col-1,col+2): # skip invalid co-ordinates if r < 0: continue if c < 0: continue if r >= max_row: continue if c >= max_col: continue if r == row and c == col: continue #print("({},{}): {}".format(r,c, seat_map[r][c])) adj.append(seat_map[r][c]) return adj print("adjacent to row {}, col {} :- {}".format(0, 0, adjacent_seats(sample_map0, 0, 0))) print("adjacent to row {}, col {} :- {}".format(1, 1, adjacent_seats(sample_map0, 1, 1))) print("adjacent to row {}, col {} :- {}".format(9, 0, adjacent_seats(sample_map0, 9, 0))) def apply_rules(seat_map): """Apply rules to seat_map""" result = [] max_row = len(seat_map) max_col = len(seat_map[0]) # create blank result grid for r in range(0, max_row): result.append([]) for c in range(0, max_col): result[r].append('') # apply rules # 1. If a seat is empty (L) and there are no occupied seats adjacent to it, the seat becomes occupied. # 2. If a seat is occupied (#) and four or more seats adjacent to it are also occupied, the seat becomes empty. # 3. Otherwise, the seat's state does not change. for r in range(0, max_row): for c in range(0, max_col): seat = seat_map[r][c] neighbours = adjacent_seats(seat_map, r, c) if seat == '.': result[r][c] = '.' 
if seat == 'L' and '#' not in neighbours: result[r][c] = '#' if seat == '#' and neighbours.count('#') >= 4: result[r][c] = 'L' elif seat == '#': result[r][c] = '#' return(result) expected_sample_map1 = load_seat_map("""#.##.##.## #######.## #.#.#..#.. ####.##.## #.##.##.## #.#####.## ..#.#..... ########## #.######.# #.#####.##""") assert apply_rules(sample_map0) == expected_sample_map1, "invalid result!" expected_sample_map2 = load_seat_map("""#.LL.L#.## #LLLLLL.L# L.L.L..L.. #LLL.LL.L# #.LL.LL.LL #.LLLL#.## ..L.L..... #LLLLLLLL# #.LLLLLL.L #.#LLLL.##""") assert apply_rules(apply_rules(sample_map0)) == expected_sample_map2, "invalid result!" def part1(input): last_map = load_seat_map(input) while apply_rules(last_map) != last_map: last_map = apply_rules(last_map) result = 0 for row in last_map: result += row.count('#') return result assert part1(SampleInput) == 37, "invalid result - got {}".format(part1(SampleInput)) part1(SampleInput) day11 = open("./inputs/day11").read() day11_map = load_seat_map(day11) #print_seat_map(day11_map) #apply_rules(day11_map) part1(day11) ###Output _____no_output_____ ###Markdown part 2 Updated adjacency rules: not just the eight nearest neighbours, but the first seat you can see in each of the eight directions. For example, below the empty seat sees 8 occupied seats:
```
.......#.
...#.....
.#.......
.........
..#L....#
....#....
.........
#........
...#.....
```
Whereas below, the leftmost empty seat only sees one empty seat (to its right) and no occupied ones:
```
.............
.L.L.#.#.#.#.
.............
```
And this empty seat below sees no occupied seats at all:
```
.##.##.
#.#.#.#
##...##
...L...
##...##
#.#.#.#
.##.##.
```
So it's basically sudoku-style "seeing": looking out along the row, column and diagonals, is the first seat encountered a '#' or an 'L'?
###Code def look(seat_map, row, col, row_offset, col_offset): """ 'Look' from row,col in direction specified by row/col_offset Return the first key thing we encounter, will be one of: - 'L' - hit an empty seat - '#' - hit an occupied seat - None - hit edge of the grid """ min_row = 0 min_col = 0 max_row = len(seat_map) max_col = len(seat_map[0]) seen = None r = row c = col while not seen: r = r + row_offset c = c + col_offset if r < min_row or c < min_col or r >= max_row or c >= max_col: break s = seat_map[r][c] #print("Looking at ({}, {}): {}".format(r, c, s)) if s == '.': continue # skip blank seen = s return seen test_look_grid1 = load_seat_map(""".L.L.#.#.#.#. .............""") assert look(test_look_grid1, 0, 1, 0, 1) == 'L', "expected 'L', got '{}'".format(look(test_look_grid1, 0, 1, 0, 1)) test_look_grid2 = load_seat_map(""".......#. ...#..... .#....... ......... ..#L....# ....#.... ......... #........ ...#.....""") test_grid = test_look_grid2 print_seat_map(test_grid) print() s = (4, 3) print("starting at ({},{}): {}".format(s[0], s[1], test_grid[s[0]][s[1]])) for r in (-1, 0, 1): for c in (-1, 0, 1): if (r, c) != (0, 0): print("looking from ({},{}):{} in direction ({}, {})".format(s[0], s[1], test_grid[s[0]][s[1]],r, c)) assert look(test_grid, s[0], s[1], r, c) == '#', "expected '#', got '{}'".format(look(test_grid, s[0], s[1], r, c)) print("Ok, saw '#'") if (r, c) == (0,0): print("Not feeling introspective so not looking inwards (0,0)") test_look_grid3 = load_seat_map(""".##.##. #.#.#.# ##...## ...L...
##...## #.#.#.# .##.##.""") test_grid = test_look_grid3 print_seat_map(test_grid) print() s = (3, 3) print("starting at ({},{}): {}".format(s[0], s[1], test_grid[s[0]][s[1]])) for r in (-1, 0, 1): for c in (-1, 0, 1): if (r, c) != (0, 0): print("looking from ({},{}):{} in direction ({}, {})".format(s[0], s[1], test_grid[s[0]][s[1]],r, c)) assert look(test_grid, s[0], s[1], r, c) == None, "expected None, got '{}'".format(look(test_grid, s[0], s[1], r, c)) print("Ok, saw None") if (r, c) == (0,0): print("Not feeling introspective so not looking inwards (0,0)") def new_adj_seats(seat_map, row, col): adj = [] for r in (-1, 0, 1): for c in (-1, 0, 1): if (r, c) != (0, 0): saw = look(seat_map, row, col, r, c) if saw: adj.append(saw) return adj def new_apply_rules(seat_map): """Apply rules to seat_map""" result = [] max_row = len(seat_map) max_col = len(seat_map[0]) # create blank result grid for r in range(0, max_row): result.append([]) for c in range(0, max_col): result[r].append('') # apply rules # 1. If a seat is empty (L) and there are no occupied seats adjacent to it, the seat becomes occupied. # 2. If a seat is occupied (#) and five or more seats adjacent to it are also occupied, the seat becomes empty. # 3. Otherwise, the seat's state does not change. for r in range(0, max_row): for c in range(0, max_col): seat = seat_map[r][c] neighbours = new_adj_seats(seat_map, r, c) if seat == '.': result[r][c] = '.' 
elif seat == 'L' and '#' not in neighbours: result[r][c] = '#' elif seat == '#' and neighbours.count('#') >= 5: result[r][c] = 'L' else: #print("final else case, seat: {}, neighbours: {}".format(seat, neighbours)) result[r][c] = seat return result def part2(input): last_map = load_seat_map(input) while new_apply_rules(last_map) != last_map: last_map = new_apply_rules(last_map) #print("Stable map:") #print_seat_map(last_map) result = 0 for row in last_map: result += row.count('#') return result print("Starting with sample input:") sample_map0 = load_seat_map(SampleInput) print_seat_map(sample_map0) expected_sample_map1_p2 = load_seat_map("""#.##.##.## #######.## #.#.#..#.. ####.##.## #.##.##.## #.#####.## ..#.#..... ########## #.######.# #.#####.##""") print("observed result from new_apply_rules(sample input)") print_seat_map(new_apply_rules(sample_map0)) print() print("expected result from new_apply_rules(sample input)") print_seat_map(expected_sample_map1_p2) assert new_apply_rules(sample_map0) == expected_sample_map1_p2, "invalid result!" expected_sample_map2_p2 = load_seat_map("""#.LL.LL.L# #LLLLLL.LL L.L.L..L.. LLLL.LL.LL L.LL.LL.LL L.LLLLL.LL ..L.L..... LLLLLLLLL# #.LLLLLL.L #.LLLLL.L#""") assert new_apply_rules(new_apply_rules(sample_map0)) == expected_sample_map2_p2, "invalid result!" p2_sample = part2(SampleInput) assert p2_sample == 26, "invalid result - got {}".format(p2_sample) p2_sample part2(day11) ###Output _____no_output_____ ###Markdown --- Day 11: Chronal Charge ---You watch the Elves and their sleigh fade into the distance as they head toward the North Pole.Actually, you're the one fading. The falling sensation returns.The low fuel warning light is illuminated on your wrist-mounted device. Tapping it once causes it to project a hologram of the situation: a 300x300 grid of fuel cells and their current power levels, some negative. 
You're not sure what negative power means in the context of time travel, but it can't be good.Each fuel cell has a coordinate ranging from 1 to 300 in both the X (horizontal) and Y (vertical) direction. In X,Y notation, the top-left cell is 1,1, and the top-right cell is 300,1.The interface lets you select any 3x3 square of fuel cells. To increase your chances of getting to your destination, you decide to choose the 3x3 square with the largest total power.The power level in a given fuel cell can be found through the following process: Find the fuel cell's rack ID, which is its X coordinate plus 10. Begin with a power level of the rack ID times the Y coordinate. Increase the power level by the value of the grid serial number (your puzzle input). Set the power level to itself multiplied by the rack ID. Keep only the hundreds digit of the power level (so 12345 becomes 3; numbers with no hundreds digit become 0). Subtract 5 from the power level.For example, to find the power level of the fuel cell at 3,5 in a grid with serial number 8: The rack ID is 3 + 10 = 13. The power level starts at 13 * 5 = 65. Adding the serial number produces 65 + 8 = 73. Multiplying by the rack ID produces 73 * 13 = 949. The hundreds digit of 949 is 9. Subtracting 5 produces 9 - 5 = 4.So, the power level of this fuel cell is 4.Here are some more example power levels: Fuel cell at 122,79, grid serial number 57: power level -5. Fuel cell at 217,196, grid serial number 39: power level 0. Fuel cell at 101,153, grid serial number 71: power level 4.Your goal is to find the 3x3 square which has the largest total power. The square must be entirely within the 300x300 grid. Identify this square using the X,Y coordinate of its top-left fuel cell. 
For example:

For grid serial number 18, the largest total 3x3 square has a top-left corner of 33,45 (with a total power of 29); these fuel cells appear in the middle of this 5x5 region:

-2  -4   4   4   4
-4   4   4   4  -5
 4   3   3   4  -4
 1   1   2   4  -3
-1   0   2  -5  -2

For grid serial number 42, the largest 3x3 square's top-left is 21,61 (with a total power of 30); they are in the middle of this region:

-3   4   2   2   2
-4   4   3   3   4
-5   3   3   4  -4
 4   3   3   4  -3
 3   3   3  -5  -1

What is the X,Y coordinate of the top-left fuel cell of the 3x3 square with the largest total power?

Your puzzle input is 5034. ###Code def get_power(x, y, serial): # Find the fuel cell's rack ID, which is its X coordinate plus 10. rack_id = x + 10 # Begin with a power level of the rack ID times the Y coordinate. power = rack_id * y # Increase the power level by the value of the grid serial number (your puzzle input). power = power + serial # Set the power level to itself multiplied by the rack ID. power = power * rack_id # Keep only the hundreds digit of the power level (so 12345 becomes 3; numbers with no hundreds digit become 0). # Adding 1000 pads the number to at least four digits, so the slice below always lands on the hundreds place. power = power + 1000 pow_string = str(power) hundreth = int(pow_string[len(pow_string) - 3:len(pow_string) - 2]) # Subtract 5 from the power level.
power = hundreth - 5 return power print(get_power(3,5,8)) #4 print(get_power(122,79,57)) #-5 import numpy as np def fill_grid(serial, width=300, height=300): grid = np.zeros([width, height]) for x in range(width): for y in range(height): grid[x][y] = get_power(x + 1, y + 1, serial) return grid grid = fill_grid(5034) from scipy.signal import convolve2d def solve(grid): square = np.ones([3,3]) convolved = convolve2d(grid, square, mode='valid', boundary='fill', fillvalue=0) max_value = -1000 maxx = 0 maxy = 0 for x in range(len(convolved)): for y in range(len(convolved[0])): if convolved[x][y] > max_value: max_value = convolved[x][y] maxx = x + 1 maxy = y + 1 return maxx, maxy print(solve(fill_grid(42))) #21, 61 print(solve(fill_grid(18))) #33,45 print(solve(fill_grid(5034))) ###Output (235, 63) ###Markdown --- Part Two ---You discover a dial on the side of the device; it seems to let you select a square of any size, not just 3x3. Sizes from 1x1 to 300x300 are supported.Realizing this, you now must find the square of any size with the largest total power. Identify this square by including its size as a third parameter after the top-left coordinate: a 9x9 square with a top-left corner of 3,5 is identified as 3,5,9.For example: For grid serial number 18, the largest total square (with a total power of 113) is 16x16 and has a top-left corner of 90,269, so its identifier is 90,269,16. For grid serial number 42, the largest total square (with a total power of 119) is 12x12 and has a top-left corner of 232,251, so its identifier is 232,251,12.What is the X,Y,size identifier of the square with the largest total power? 
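A side note, not from the original notebook: the any-size search can also be done with a summed-area table (integral image), which returns any square's total from four table lookups after a single O(n²) prefix-sum pass, rather than re-convolving for every size. A minimal self-contained sketch, assuming the same power rule as `get_power` above:

```python
import numpy as np

def best_square(serial, sizes=range(1, 301), n=300):
    """Top-left (x, y) and size of the max-power square, via a summed-area table."""
    # Vectorized power grid; grid[i, j] holds the cell at x = i + 1, y = j + 1.
    xs = np.arange(1, n + 1).reshape(-1, 1)
    ys = np.arange(1, n + 1).reshape(1, -1)
    rack = xs + 10
    grid = ((rack * ys + serial) * rack // 100) % 10 - 5
    # Summed-area table padded with a zero row/column: sat[i, j] = grid[:i, :j].sum()
    sat = np.zeros((n + 1, n + 1), dtype=np.int64)
    sat[1:, 1:] = grid.cumsum(axis=0).cumsum(axis=1)
    best_power, best_id = -np.inf, None
    for s in sizes:
        # Totals of every s-by-s square, from four table lookups each
        totals = sat[s:, s:] - sat[:-s, s:] - sat[s:, :-s] + sat[:-s, :-s]
        i, j = np.unravel_index(totals.argmax(), totals.shape)
        if totals[i, j] > best_power:
            best_power, best_id = totals[i, j], (i + 1, j + 1, s)
    return best_id
```

Restricted to 3x3 squares (`sizes=range(3, 4)`), this reproduces the worked examples above: (33, 45, 3) for serial 18 and (21, 61, 3) for serial 42.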
###Code def solve2(grid): max_value = -10000000 maxx = 0 maxy = 0 maxs = 1 for s in range(1,301): if s % 20 == 0: print(s) square = np.ones([s,s]) convolved = convolve2d(grid, square, mode='valid', boundary='fill', fillvalue=0) for x in range(len(convolved)): for y in range(len(convolved[0])): if convolved[x][y] > max_value: max_value = convolved[x][y] maxx = x + 1 maxy = y + 1 maxs = s return maxx, maxy, maxs print('Result', solve2(fill_grid(18))) #90,269,16 print('Result', solve2(fill_grid(42))) #232,251,12 print(solve2(fill_grid(5034))) ###Output 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 (229, 251, 16) ###Markdown part 1 ###Code ex = '''L.LL.LL.LL LLLLLLL.LL L.L.L..L.. LLLL.LL.LL L.LL.LL.LL L.LLLLL.LL ..L.L..... LLLLLLLLLL L.LLLLLL.L L.LLLLL.LL '''.strip().split('\n') ex def build_layout(lines): layout = {} for row, line in enumerate(lines): for pos, char in enumerate(line): if char == 'L': layout[(row, pos)] = False elif char == '#': layout[(row, pos)] = True return layout exocc = build_layout(ex) def get_adj_occ(seat, occ): n = 0 row, pos = seat adj_occupied = 0 for s in ((row-1, pos-1), (row-1, pos), (row-1, pos+1), (row , pos-1), (row , pos+1), (row+1, pos-1), (row+1, pos), (row+1, pos+1)): if (s in occ) and occ[s]: adj_occupied += 1 return adj_occupied def update_occ(occ): newocc = occ.copy() for seat in occ: numadj = get_adj_occ(seat, occ) if (not occ[seat]) and (numadj == 0): newocc[seat] = True elif occ[seat] and (numadj >= 4): newocc[seat] = False else: newocc[seat] = occ[seat] return newocc def process_rules(occ): currocc = occ.copy() while True: newocc = update_occ(currocc) if newocc == currocc: break currocc = newocc return currocc def count_occ(occ): return sum(occ[seat] for seat in occ) def print_occ(occ): maxrow = max(key[0] for key in occ)+1 maxcol = max(key[1] for key in occ)+1 for line in range(maxrow): s = [] for col in range(maxcol): pos = (line, col) if pos in occ: if occ[pos]: s.append('#') else: s.append('L') else: 
s.append('.') print(''.join(s)) print_occ(exocc) exprocessed = process_rules(exocc) print_occ(exprocessed) count_occ(exprocessed) with open('inputs/day11.input') as fp: data = fp.read().strip().split('\n') dataocc = build_layout(data) dataprocessed = process_rules(dataocc) count_occ(dataprocessed) ###Output _____no_output_____ ###Markdown part 2 ###Code def find_first(seat, vec, occ, size): maxrow, maxcol = size row, col = seat while True: row, col = row+vec[0], col+vec[1] if (row < 0) or (row == maxrow): return None elif (col < 0) or (col == maxcol): return None elif (row, col) in occ: return occ[(row, col)] ex2 = '''.......#. ...#..... .#....... ......... ..#L....# ....#.... ......... #........ ...#.....'''.strip().split('\n') ex2occ = build_layout(ex2) ex2occ find_first((4,3), (+1,+1), ex2occ, (9, 9)) def get_vis_occ(seat, occ, size): dirs = [(-1, 0), (+1, 0), (0, -1), (0, +1), (-1, -1), (-1, +1), (+1, -1), (+1, +1)] s = 0 for vec in dirs: visible = find_first(seat, vec, occ, size) if visible is not None: s += int(visible) return s get_vis_occ((4,3), ex2occ, (9,9)) ex3 = '''............. .L.L.#.#.#.#. 
.............'''.strip().split('\n') ex3occ = build_layout(ex3) ex3occ get_vis_occ((1,1), ex3occ, (3,12)) def update_occ2(occ): maxrow = max(key[0] for key in occ)+1 maxcol = max(key[1] for key in occ)+1 size = (maxrow, maxcol) newocc = occ.copy() for seat in occ: numvis = get_vis_occ(seat, occ, size) if (not occ[seat]) and (numvis == 0): newocc[seat] = True elif occ[seat] and (numvis >= 5): newocc[seat] = False else: newocc[seat] = occ[seat] return newocc newocc = update_occ2(exocc) print_occ(newocc) newocc = update_occ2(newocc) print_occ(newocc) def process_rules2(occ): currocc = occ.copy() while True: newocc = update_occ2(currocc) if newocc == currocc: break currocc = newocc return currocc exproc2 = process_rules2(exocc) print_occ(exproc2) count_occ(exproc2) dataproc2 = process_rules2(dataocc) count_occ(dataproc2) ###Output _____no_output_____ ###Markdown MIT LicenseCopyright (c) 2020 Cory SohrakoffPermission is hereby granted, free of charge, to any person obtaining a copyof this software and associated documentation files (the "Software"), to dealin the Software without restriction, including without limitation the rightsto use, copy, modify, merge, publish, distribute, sublicense, and/or sellcopies of the Software, and to permit persons to whom the Software isfurnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in allcopies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS ORIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THEAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHERLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THESOFTWARE. 
###Code input = '''LLLLLLLL.LLLLLLLLL.LL.LLLLLLL.LLLLL.LLLLLLLL.LLLLL.LLLL.LLLLL.LLLLLLLLLLLLLLLLLLLLLLL.LLLLLLLLLL LLLLLLLL.LLLLLLLLLLLLLL.L.LLL..LLLL.LLLLLLLLLLLLLLL.LLLLLLLLLL.LLLLLLLLLLLLLLL.LLLLL..LLLLLLLLLL LLLLLLLL.LLLLLLLLL.LLLL.LLLLL.LL.LL.LLLLLLLLLLLLLLL.LLLLLLLLL.LLLLL.L.LLL.LLLL.L.LLLLLLL.LLLLLLL LLLLLLLL.LLLLLLL.L.LLLL.LLLLL.LLLLL.LLLLLLLLLLLLLLL..LLLLLL.LLLLLLLLL.LLLLLLLL..LLLLLLLLLLLLLL.L LLLLLLLL.LLLLLLLLL.LLLL.LLLLLLLLL.LLLLLLLLLLLLLLLLL.LLLLLLLLL.LLLL.LL.LLLLLLLL.LLLLLL.LLLLLL.LLL LLLLLLLL.LLLLLLLLL.LLLLLLLLLL.LLL.LLLLLLLLLL.L.LLL..LLLLLLLLL.LLLLLLL.LLLLLLLL.LLLLLL.LLLL.LLLLL LLLL.LLL.LLLLLL.LL.LLLLL..LLL.LLLL..LL.L.LLLLLLLLLLL.LLLLLLLL.LLLLLLL..LLLLLLL.LLL.LLLLLLLLLLLLL LLLLLLLL.LLLLLLLLL.LLLLLLLLLL.LLLLLLLLLLLLLL.LLLLLL.LLLLLLLLL.LLLLLLL.LLLLLLLL.LLLLLL.LLLLLLLLLL LL...L......L.L.L......L....L.....LLLL..L.L.L.L................LL........L..L...L......L..L.LL.. L.LLLLLLLLLLLLLL.L.LLLLLLLLLLLLLLLL.LLLLLLLLL.LLLLL.LLLLLLLLLLL.LLLLLLLLLLLLLLLLLLLLLLLLLLLLL.LL LLLLLLLL.L.LLLLLLLLLLLL.LLLLLLLLLLL.LLLLLLLLLLLLLLLLLL.LLLLLLLLLL.LLLLLLLLLLLLLLLL.LLLLLLLLLLLLL LLLLLLLL..L.LLLLLL.LLLL.LLLLLLLL.LLLLLL.LL.L.LLLLLL.LLLLLLLLLLLLLLLLL.L.LLLLL..LLLLLLLLLLLLLLLLL LLLLLLLL.LLLL.LLLL.LLLL.LLLLL.LLLLLLLLLLLLLLLLLLLLL.LLLLLLLLL.LLLLLLL.LLLLLL.LLLLLL.L.LLLLLLL.LL LLLLLLLL.LLLLLLLLL.LLLL.LLLLL.LLLLL.LLLLLLLL.LLLLLLLLLLLLL.LL.LLLLLLLLL.LLLLLL.LLLLLLLLLLLLLLLLL LLLLLLLL.LLLL.LLLL.LLLL.LLLLL.LLLLL.LL.LLLL..L.LLLL.LLLLLLLLL.LLLLLLL.LLLLLLLL.LLLLL..LLLLL.LLLL .L...L.....L.........L.L..L...LL.....L.L.L..L.L.....L..L.L.L........L...L..L.......L....L...LL.L LLLLLLLLLLL.LLL.LL.LLLL.LLL.L.LLLLLLLLLLLLL..LLLLLLLLLLLLLLLL.LL.LLLL.LLLLLLLLLLLLLLLLLLLLL.LLLL LLLLLLLL.LLLLLLLLLLLLLLLLLLL..LL.LLLLLLLLLLL.LLLLLL.LLLLLLLL..LLLLLLL.LLLLLLLL.LLLLLLLLLLLLLLLLL LLLLLLLL.LLLLLLLLLLLLLL.LLLL..LLLLLLLLLLLLL.LLLLLLLLLLLLLLL.LLLLLLLLLLLLLLLL.L.LLLLLL.LLLLLLLLLL LLLLLLLL...LLLLLLL.LLLLLLLLLL.LLLLLLLLLLLLLLLLL.L.LLLLLLLLLLL.LLLLLLL.LL..LLLLLLLLLLL.LLLLLLLLLL 
L...LL.L........L..LLL.L.LL..L..L.L.L..L..L.........L.....LLL.....LLL........LLL...LL..L.L...LL. LLLLLLLLLLLLLLLLLL.LLLL.LLLLL.LLLLL.L.LLLLL...LLL.L.L.LLLLLLLLLLLL.LLLLLLLLLLL.LLLLLLLLLLLLLLLLL LLL.LLLL.LLLLLLLLL.LLLLLLLLL..L.L.L.LLLLLLLL.LLLLLL.LLLLLLL.L.LLLLLLL.LLL.LLLLLLLLLLL.LLLLLLLLLL LLLLLLLL.LLLL.LLLLLLLLL.LLLLLLLLLLL.LLL.LLLL.LLLLLL.LLLLLLLLL.LLLLLLLLLLLLLLLL.LLLLLLLLLLLLLLLL. LLLLLLLL.LLLLLLLLL.LLLL.LLLLL..LLLL.LLLLLLLL.LLLLLL.LLLLLLLLL.LL.LLLL.LLLLLLLL.LLLLLL.LLLLLLLLLL ...L.L.L..LL.......L.LL.L..LL........LLLL.....LL.LLL.L.LL...L.L.......LL....L..L..LL.L....L..LLL L...LLLLLLLLLLLLLLLLLLLLLLLLL.LLLLL.LLLLLLLL.L.LLLL.LLLLL.LLL.LLLLLLL.LLLLLLLLLLLLLLL.LL.LLLLLLL LLLLLLLL.LLL.LLLLL.LLLLLLLLLL.LLLLL.LLLLLLLLLLL.LLL.LLLLLLLLLLLLLLLLL.LLLLLLLL.LLLLL.LLLLLLLLLLL LLLLLLLL.L.LLLLLLL.LLLL.LLLLL.LLLLLLLLLLLLLL..LLLLL.LLLLLL.L..LLLLLLL.LL.L.LLL.LLLLLL.LLLLLLLLLL LLLLLLLL.LLLLLL.LL.LLLL.LLLLL.L.LLL.LLLLLLLL.LLLLLL.LLLL.LLLLLLLLLLLLLLLLL.LLL.LLLLLL.LLLLLLLLLL LLLLLLLLLLLLLLLLLL.LL.L.LLLLL.LLLLL.LLLLLLLLLLLLLLL.LLLLLLLLL.LLLLL.L.LLLLLLLL.LLLLLL.LLLL.LLLLL ..LL...LL..LLL.L...L....L.L...L.L.L...L.LL.L........L.L..L...L..L.L.LL.L.L..L..LLLL..L..L..L...L LL.LLLLL.LLLLLLL.LLLL.L.LLLLL.LLLLL.LLLL.LLLLLLLLLL.LLLLLLLLLLLLLLLLL.L.LLLLLLLLLLLLL.LL.LLLLLLL LLLLLLLLLLLLLLLLLL.LLLL.LLLLLLLLLLLLLLLLLLLL.LLLLLL.LLLLLLLLLLLLLLLLL.LLLLLLLL.LLLLLL.LLL.L.LLLL LLLLLLLLLLLL.LLLLLLLLLL.LLLLL.LLLLL.LLLLLLLL.LLLLLL.LLLLLLLLL.LLLLLLLL.LLLLLLL.LLLLLLLLLLLLLLLLL LLLL.LLL.LLLLLLLLL.LL...LLL.L.LLLLLLLLLLLLLL.LLLLLL.LLLLLLLLLLLLLLLLL.LLLLLLLL.LLLLLL.LLLLLLLLLL LLLLLLL.L.LLLLLLLL.LLLL.LLLL.LLLLLLLLLLLLLL...LLLLL.LLLLLLLLL.LLLLLLL.LLLL.LLLLLLLLLL.LLLLLLLLLL LLLLLLLLLLLLLLLLLL.LL.L.LLLLL.LLLLL.LLLLLLLL.LLLLLLLLLLLLLLLLLLLL.L.L.LLLLLLLL.LLLL.LLLLLLLLLL.L LLLLLLLL.LLLLLLLL..LLLL.LLLLLLLLLLLLLLLLLLLL.LLLLLL.LL.LLLLLLLL.LLLLLLLLLLLLLL.LLLLLLLLLLLLLLLL. ...L......L....L...LLLL.LLLLL.L..LL.L........LL...L.L.LLLLLLLLLL.LL.LLLL...L....L.....L....LL... 
LLLL.LLLLLLLLLLLLL..L.L.LLLLLLLLLLLLLLLLLL.L.LLLLLL.LLLLLLL.L.LLLLLLL.LL.LLLLL.LLLLLL.LLLLLLLLLL LLLLLLLL.LLLLL.LLL.LLLL.LLLLLLLLLLL.LLLLLLLL.LLLLLLLLLLLLLLLL.LLLLLLL.LLLLL.LL.LLLLLLLLLLLLLLLLL LLLLLLLLLLL.LLLLLL.LLLL.LLLLLLLLL.L.LLLLLLLL.LLLLLLLLLLLLLLLL..LLLLLL.LLLLLLLLLLLLLLLLLLLLLL.LLL LLLLLLLLL.LLLLLLLL.LLL..LLLLLLLLLLL.LLLLLLLLLLLLLLL.LLLLLLLLL.LLLLLLL.LLLLLLLL.LLLLLL.L.LLLLLLLL LLL.LLLL.LLLLL.LLL..LLL.LLLL..LLLLL.LLLLLLLL.LLLLLLLLLLLLLLLLLLLLLLL...LLL.LLL.LLLLLL.LLLLLLLLLL LLLLLLLL.LLLLLLLLL.LLLL.LLLLL.LLLLL.LLLLLLLLLLLLLLL.LLLLLLLLL.LLLLLLL.LLLLLL.LLLLLLLL.LLLLLLL.LL L.L.L.L.L.L....L.LL.L...LL......LL.LL...LLL....L..L..LL.LL.L.L..L..LL.L...LLL.....LLLL.....L.... LLLLLL...LLLLLLLLL..LLL.LLLLL.LLL.LLLL.LLLLL.LLLLLLLLLLLLLLLL.LLLLLLL.LLLLLLLL.LLLLLL.LLLLLLL.LL LLLLLLLLLLLLLLLLLL.LLLLLLLLLL.LLLLL.LLLLLLLL.LLLLL..LLLLLL.LL.LLLL..L.LLLLLLLL.LLLLL.LLLLLLLLLLL LLLLLLLL.LLLLLLLLL.LLLL.LLLLL.L.LLL.LLLLLLLL.LLLL.L.LLLLLLLLL.LLLLLLL.LLLLLLLL.LLLLLLLLLLLLLLLLL LLLLLLLL.LLLLL.LL..LLLL.LLLLL.LLLLLLLLLLLLLL.LLLLLLLLLLLLLLLL.LL.LLLL.LLLLL..L.L.LLLL.LLLLLL.LLL LLLLLLLL.LLL.LLLLL.LLLLLL.LLL.LLLLLLLLLLLLLLLLLLLLL.LLLLLLLLL.LLLLLLL.LLLLLLLL.L.LLLLL..LLLLLLLL LLLLLLLLLLLLLL.LLL..LLL.LLLLLLLLLLLLLLLLLLLL.LLLLLL.LLLLLLL.L.LLLLLLL.LLLLLLLL.LLLLLLLLLLLLLLLLL L.LLLLLL.LLLLLLLLL.LLLL.LLLLLLLLLLL.LLL.LLLL.LLLLLL.LLLLLLLLLLLLLLLLL.LLLLLLLL..LLLLL.LLLL.LLLLL LLLL.L....L.L.LL.LLLL...LL....L.L..L....L...LLL..L..LLLL...L..L.L.L....L..L.LL.L.L...L...L....L. 
L.LLLLLL.LLLLLLLLL.LLL..LLLLL.LLLLL.LLLLLLLLLLLLLLL.LLLLLLLLL.LLLLLLLLLLLLLLLL.LLLLLLLLLLLLLLLLL LLLLLLLL.LLLLLLL.LLLLLL.LLLLL.LLLLL.LLLLLLLL.LL.LLLL.LLLLL.LL.L.LL.LL.LLLLLLLLLLLLLLL.LLLLLLLLLL LLLLLLLLLLLLLLLLLL..LLL.LLLLL.LLLLL.LLLLLLLLLLLLLLL.LLLLLLLLLLLLLLLLL.LLLLLLLLLLLLLLL.LLLLLLLLLL LLLLLLLL.LLLLLLLLL.LLLL..LLLLLLLLLLLLLLLLLLLLLLLLLL.LLLLLLLLLLLLLL.LL.LLLL.LLLLLLLLLLL.LLLLLLLLL LLLLLLLL.LLLLLL.LL.LLLL.LLLLLL.L.LL.LLLLLLLL.LLLL.L.LLLLLLLLL.LLLLLLL.LLLLLLLLLLLLLL.LLLLLLLLLLL LLLLLLLL.LLL.LLLLL.LLLLLLLLLLLLLLLL.LLLLLL.LLLLLL.LLLLLLLLLLL.LLLLLLL.LLLLLLL..LLLLLLLLLLLLLLLLL .L.L..L....L...........L.LL....L.......L..L.LL..L.L........L..L.............L.L....LL.LLL..L.LL. LLLLLLLL.LLLLLL.LL.LLLL.LLLLL.LLLLL.LLLLLLLL.LLLLLLLLLL.LLLLLLLLLLLLL.LLLLLLLLLLLLLLLLLLLLLLLLLL LLLLLLLL.LLLLLLLLLLLLLL.LLLLL.LLLLL.LLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLL.LLLLLL.L.LLL.LL.LLLLLLLLLL LLLLLLLLLLLL.LLLLL.LLLL.LLLLLLLLLLL.LLLLLLLLLLLLLL..LLLLL.LLL.LL.LLLL.LLLLLLLL.LLL.LLLLLLLLLLLLL LLLLLLLLLLLLLLLLLL.LLLL.LL.LLL..LLLLLLLLLLLL.LLLLLLLLLL.LLLLL.LLLLLLL.LLLLLLLL..LL.LL.LLL.LL.LLL ........L...L....LL.....LL..L.....LL.LLLL.LL.LLL.....LLL.....LL.L..LLL.....LL.............L....L LLL.LLLL.LLLLLLLLLL.LLL.LLLLL.LLLLLLLLLLLLLLLLLLLLL.LLLLLLLLL.LLLLLLL...LLLLLL.L.L.LL..LLLLLLLLL LLLLLLLLLL.LL.LLLLLLLLL.L.LLL.LLLLL.LLLLLLLL..LLLLLLLLLLLLLLLLLLLLL...LLLLLLLL.LLLLLL.LLLLLLLLLL .LLLLLLL.LLLLLLLLL.LLLL.LLLLLLL.LLLL.LL..LLL.LLLLLLLLL.LLLLLL.LL.L.LL.LLLLLLLLLLLLLLLLLLLLLLLLLL LLLLLLLL.LLLLL.LLL.LLLLL.LLL.LLLLLL.LLLLL.LL.LLLLLL.LLLLLLLLLLLLLLLLLLLLLLLLLL.LLLLLLLL.LLLL.LLL LLLLLLLL.LLLLLLLLL.LLLLLLLLLLLLLLLL.LLLLLLLL.LL.LLL.LLLLLLLL..LLLLLLL.LLLLLLLL.LLLLLL.LLLLLLLLLL LLLL.LLL.LLLLLLLLLLLLLLLLLLL.LLLL.L.LLL..LLLL.L.LLL.LL.L.LL.L.LLLLLLL.LLLLLLLL.LLLL.L.L.LLLLLLLL LLLLLLLL.LLLLLLLLL.LLLLLLLLLL.LLLLL.LLLLLLLLLLLLL.L.LL.LLLLLL.L.LLLLLLLLLLLLLL.LLLLLL.LLLLLLLLLL LLL.LLLL.LLLLLLL.L.LLLLLLLLLL.LLLLL..LLL.LL..LLLLLL.LL.LLLLLL.LLLLLLL.LL.LLLLLLLLLLLL.LLLLLLLLLL 
.L.....L...L.......L...L....L....L..L..LLL.....L............L..LL.L...L.L.L..LL...LLL..L.L.L.... LLLLLLLL.LLLLL.LLLLLLLLLLLLLLLLLLLL.LLLLLLLLLLLLLLL.LLLLLLLLL.LLLLLLL.LLLLLLLL.LLLLLL.LLLLLLLLLL .L.LLLLLLLLLLLLLLL.LLLL.LLLLLLLLLLLLLLLLLLL.LLLLLLLLLL..LLLLL.LLLLLLL.LLLLLLLL.LLLLLLLLLLLLLLLLL LLLLLLLLLLLLLLLL.L.LLLLLLLLLL.LLLLLLLLL.LLLL.LLLLLL.LLLLLLL.L.LLLLLL.LLLLLL.LL.LLLLLL.LLLLLL.LLL LL.LLLLL.LLLLLLLLL..LLL.L.LLL.LLLLL.L.LLL.LL.LLLLLL.LLLLLLLLL.LLLLLL..LLL.LLLL.LLLLL.LLLLLLLLLLL .L...LLL.....L.....LLL....LLLL.L.LL....L...L...L.L.L....L.LLLL...L.L....L.....LL.L.....L..L.L... LLLLL.L...LLLLLLLL.LLLLLLLLLL.LLLLLL.LLLLLLL.LLLLLLLLLLLLLLLL.LL.LLLL.L.LLLLLL.LLLLLL.LLLL.LLLLL LLLL.LLL.LLLLLLLLLLLLLLLL.LLL.LLLLL.LLLLLLLLLLLLLLL.LLLLLLLLLLL.LLLLL.L.LLLLLL.LLLLLLLLLLLLLLLLL LLLLLLLLLLLLLLLLLLLL.LLLLLLLL.LLLLLLLLLLLLLLLLLLL.L.LLLLLLL.L.LLLLLLL.LLLLLLLLLLLLLLLLLLLLLLLLLL LLLLLLLLLLLLLLLLLL.LLLL.LLLLLLLLLLLLLLLLLLLL.LLLLLLLLLLLLLLLLLLLL.LLLLLLLLLLLL.LLL.LL.LLLLLLLLL. ..L..................LL..LL....L.....L..L..L.L...LLLL......L.L.L......L..L....LLLL.L...L....LL.. 
LLLLLLLL.LLLLLLL..LLLLL.LLLL.LLLLLL.LLLLLLLL.LLLLLL.L.LLLLLLL.LLL.LLLLLLLLLLLL.LLLLLL.LLLLLLLLLL LLLLLLLL.LLLLLLLL.LLLLL.LLLLL.LLLLLLLLLLLL.L.LLLLLL..LLLLLLLL.LLLLLLL.LLLLLLLL.LLLLLL.LLLLLLLLLL LLLLLLLL.LLLLLLLLL.LLLL.LLLLLLLLLLLLLLLLLLLL.LLLLLL.LLLLLLLLLLLL.LLLLLLLLL.LLL.LLLLLL..LLLLLLL.L LLLLLLLLLLLLLLLLLL.LLLL.LLLLL.LL.LLLLLLLLLLL.LLLLLL..LLLLLL.LLLLLLLLL..LL.LLLL.LLLL.L.LLLLLLLLLL LLLLLLLL.LLLLLLL.LLLLLL.LLLLLLLL.LLLLLLLLLLL.LLL.LLLLLLLLLL.LLLLLLLLL.LLLLLLLL.LLLLLLLLLLLLLLLLL L.LLL.LL.LLLL.LLL..LLLL.LLLLLLLLLLL.LLLLLLLL.LLLLLLLLLLLLLLLL.LLLLLLLLLLLLLLLL.L.LL.LLLL.LLLLLLL LLLLLLLL.LLLLLLLLL.LLLL.LLLLLL..LLL.LLLLL.LL.LLLLLL.LLLLLLLLL.LLLLLLL.LLLLLLLLLL.LLLL.LLLLLLLLLL LLLLLLLL.LLLLLLLLL..LLLLL.LL..LLLLLLLLLLL.LLLLLLLLLLL.LLLLLLLLL.LLLLL.LLLLLLLLLLLLLLLLLLLLLLLLLL LLLLLLLLLLLLLLLLLL..LLL.LLLLL.LLLLL.LLLLLLLL.LLLLLL..L.LLLLLLLLLLLLLL.LLLLLLLL.LLLLLLLLLLLLLLLLL LLLLLLLLLLLLLLLLLLLLLL.LLLLLL.LLLLL.LLLLLLLL.LLLLLL.LLLLL.LLL.LLLLLLL.LLLLL.LL.LLLL.L.LLLLLLLLLL LLL.LLLL.LL.LLLLLLLLLLL.LLLLLLLLLL..LLLLLLLL.LL.LLLLL.LL.LLLLLLLLLLLLLLLLLLLLLLLLLLL..LLLLLLLLLL LLLLLLLL.LLLLLLLLL.LLLL.LLLL..LLLLL.LLLLLLLL.LLLLLL.LLLLLLLLL.LLLLLLLLLLLLLLLL.LLLLLLLLLL.LLLLLL''' seats = [[c for c in line] for line in input.split('\n')] # part 1 from itertools import combinations_with_replacement adjacent = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)] def apply_rules(seats): i_max = len(seats) new_seats = [] for i in range(i_max): j_max = len(seats[i]) new_seats.append([''] * j_max) for j in range(j_max): occupied = 0 for (di, dj) in adjacent: ip = i + di jp = j + dj if ip >= 0 and ip < i_max and jp >= 0 and jp < j_max: if seats[ip][jp] == '#': occupied += 1 if seats[i][j] == 'L' and occupied == 0: new_seats[i][j] = '#' elif seats[i][j] == '#' and occupied >= 4: new_seats[i][j] = 'L' else: new_seats[i][j] = seats[i][j] return new_seats def print_seats(seats): pass # lines = [''.join(l) for l in seats] # for line in lines: # print(line) # print('') def part1(seats): 
print_seats(seats) new_seats = apply_rules(seats) while seats != new_seats: print_seats(new_seats) seats = new_seats new_seats = apply_rules(seats) print_seats(seats) count = sum([sum([1 if c == '#' else 0 for c in l]) for l in seats]) print(count) part1(seats) # part 2 def apply_rules2(seats): i_max = len(seats) new_seats = [] for i in range(i_max): j_max = len(seats[i]) new_seats.append([''] * j_max) for j in range(j_max): occupied = 0 for (di, dj) in adjacent: ip = i jp = j while True: ip += di jp += dj if ip >= 0 and ip < i_max and jp >= 0 and jp < j_max: if seats[ip][jp] == '#': occupied += 1 break elif seats[ip][jp] == 'L': break else: break if seats[i][j] == 'L' and occupied == 0: new_seats[i][j] = '#' elif seats[i][j] == '#' and occupied >= 5: new_seats[i][j] = 'L' else: new_seats[i][j] = seats[i][j] return new_seats def part2(seats): print_seats(seats) new_seats = apply_rules2(seats) while seats != new_seats: print_seats(new_seats) seats = new_seats new_seats = apply_rules2(seats) print_seats(seats) count = sum([sum([1 if c == '#' else 0 for c in l]) for l in seats]) print(count) part2(seats) ###Output 2144 ###Markdown part 1 ###Code testlines = '''5483143223 2745854711 5264556173 6141336146 6357385478 4167524645 2176841721 6882881134 4846848554 5283751526'''.splitlines() puzzlelines = '''1443582148 6553734851 1451741246 8835218864 1662317262 1731656623 1128178367 5842351665 6677326843 7381433267 '''.splitlines() nrows, ncols = 10, 10 def build_grid(lines): grid = {} for i, row in enumerate(lines): for j,c in enumerate(row): energy = int(c) grid[(i,j)] = energy return grid def print_grid(grid): for row in range(nrows): line = ''.join(str(grid[(row,col)]) for col in range(ncols)) print(line) def step(grid): flashed = [] num_flashed = 0 for pos in grid: grid[pos] += 1 while True: for pos in grid: if pos in flashed: continue if grid[pos] > 9: flashed.append(pos) r, c = pos for row in (r-1, r, r+1): for col in (c-1, c, c+1): if (row, col) == pos: continue if 
(row, col) not in grid: continue grid[(row, col)] += 1 if len(flashed) > num_flashed: num_flashed = len(flashed) else: break for pos in flashed: grid[pos] = 0 return num_flashed testgrid = build_grid(testlines) sum(step(testgrid) for i in range(100)) puzzlegrid = build_grid(puzzlelines) sum(step(puzzlegrid) for i in range(100)) ###Output _____no_output_____ ###Markdown part 2 ###Code testgrid = build_grid(testlines) i = 0 while True: num = step(testgrid) i += 1 if num == nrows*ncols: break print(i) def solve(lines): grid = build_grid(lines) i = 0 while True: num = step(grid) i += 1 if num == nrows*ncols: break if i%1000 == 0: print(f' not yet: i={i}') return i solve(testlines) solve(puzzlelines) ###Output _____no_output_____ ###Markdown Part 2 ###Code def check_diagonal(array, row, col, rotate=False): if rotate: row = len(array) - 1 - row diag = np.diagonal(np.rot90(array), offset=col - row)[::-1] else: diag = np.diagonal(array, offset=col - row) return check_both_sides(diag, min(row, col)) def check_both_sides(vector, seat_idx): left = "".join(vector[:seat_idx]).replace(".", "") right = "".join(vector[seat_idx + 1:]).replace(".", "") left_occupied = 1 if left.endswith("#") else 0 right_occupied = 1 if right.startswith("#") else 0 return left_occupied + right_occupied def count_visible(array, row, col): visible = 0 visible += check_both_sides(array[row], col) # Check horizontally visible += check_both_sides(array[:, col], row) # Check vertically visible += check_diagonal(array, row, col, rotate=False) # Check diagonally visible += check_diagonal(array, row, col, rotate=True) # Check diagonally (rotated) return visible def transform_seat_pt2(array, row, col): seat_status = array[row, col] if seat_status == "L" and count_visible(array, row, col) == 0: return "#" elif seat_status == "#" and count_visible(array, row, col) >= 5: return "L" elif seat_status == ".": return "." 
else: return seat_status with open(INPUT_PATH / "day11_input.txt", "r") as f: array = [[c for c in l.rstrip()] for l in f.readlines()] out = musical_chairs(array, transform_seat_pt2, True, True) ###Output 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 Stopped after 72 updates. 2422 occupied seats ###Markdown Seems to be a Heisenbug somewhere since the code produces the expected result for the example array...Answer should be 2023 after 85 updates :( RetryThis is a nice solution: https://github.com/davepage-mcr/aoc2020/blob/main/day11/seats.py ###Code def count_visible_v2(array, row, col): neighbours = 0 # Look for first seat in 8 cardinal directions from (col, row) for dc, dr in [(-1, -1), (0, -1), (1, -1), (-1, 0), (1, 0), (-1, +1), (0, 1), (1, 1)]: occupied = False r = row + dr c = col + dc while r >= 0 and r < len(array) and c >= 0 and c < len(array[0]): if array[r][c] == '#': occupied = True break elif array[r][c] == 'L': break r += dr c += dc if occupied: neighbours += 1 return (neighbours) def transform_seat_pt2_v2(array, row, col): seat_status = array[row][col] if seat_status == "L" and count_visible_v2(array, row, col) == 0: return "#" elif seat_status == "#" and count_visible_v2(array, row, col) >= 5: return "L" elif seat_status == ".": return "." else: return seat_status out = musical_chairs(array, transform_seat_pt2_v2) ###Output Stopped after 85 updates. 2023 occupied seats ###Markdown Day 11: Guiding Decisions with Optimization---Let's talk about how we can use the optimization ideas we've seen so far to help guide design decisions:- Multiple objectives - Pareto frontier as "menu"; use supporting information - Oar materials revisited: - Why not rigid foams? - Why not technical ceramics? 
- Transforming to constraints- Constraints - active vs inactive ###Code import pandas as pd import grama as gr import numpy as np from plotnine import * DF = gr.Intention() ###Output _____no_output_____ ###Markdown Multiple Objectives--- Recall: Oar material selection---$$\min\, m = \rho L \pi r^2$$$$\text{wrt.}\, r$$$$\text{s.t.}\, \delta(r) \leq \delta_{\max}$$From our study, we found that we wanted to maximize $I_{\text{oar}} = E^{1/2} / \rho$. If we "split this up" we have$$\max\, E$$$$\min\, \rho$$ ![ashby plot](./images/ashby-fig6.2.png) Why not technical ceramics?Ceramics are brittle; they fracture easily (compared to wood and CFRP). Why not rigid foams?For one thing, rigid foams have low toughness, but there's another reason we should discount them.Let's use our optimal radius:$$r^* = \left( \frac{4}{3\pi} \frac{FL^3}{E} \frac{1}{\delta_{\max}} \right)^{1/4}$$So the ratio between the optimal radius for different materials---all else but $E$ equal---would be:$$r_1 / r_2 = (E_2 / E_1)^{1/4}$$Note that $E_{\text{wood}} / E_{\text{foam}} \approx 10^2$, thus$$r_{\text{foam}} = 10^{1/2} r_{\text{wood}} \approx 3 r_{\text{wood}}$$or $A_{\text{foam}} = 10 A_{\text{wood}}$. This would be rather unwieldy, plus real rowers are unlikely to buy these because they'd look comical. There would probably be drag concerns as well. **Conclusion**: Maybe we were missing a constraint; a maximum radius. Niches: Wood or CFRP?CFRP is a little further out along the $E^{1/2} / \rho$ contour, indicating that it is a higher-performing material as an oar (stiffness-constrained beam). So why aren't all oars made of CFRP?CFRP is a more modern material than wood and for a long while it has been more expensive to manufacture. So a CFRP oar might be desirable for Olympic athletes (who are willing to spend a ton for an edge), but less desirable to someone just starting out rowing. 
Wood oars would tend to be less expensive, so they may be a "good enough" product for a novice rower. We can use the Pareto frontier to help select a *single* candidate. However, we can also use the Pareto frontier to organize our thinking about a *range* of candidates. In a commercial product space there are often [niche markets](https://en.wikipedia.org/wiki/Niche_market); subsets of the market that specific products target. By organizing all existing products on a Pareto frontier, you might find that a particular part of the frontier isn't covered. This could be evidence that no market exists for that hypothetical product... or that there's a niche market that no one is targeting! Exchange constants---We talked about the *weight method* to identify individual points on the Pareto frontier. By using a variety of weights, you can sketch-out a Pareto frontier by finding individual non-dominated points. One way we can choose a *single* set of weights is to compute [exchange coefficients](https://www.sciencedirect.com/topics/engineering/exchange-constant); these represent the rate at which we would "exchange" one objective for another. Example exchange constants: US Dollars for Mass One easy-to-understand case of an exchange coefficient is money for mass. Using problem-specific knowledge, it's often possible to *estimate* the cost associated with a unit of mass. For the following transportation categories, a different Basis was used to estimate the cost of having an additional kilogram of mass in the vehicle structure.

| Sector | Basis | Exchange (USD / kg) |
|---|---|---|
| Family car | Fuel savings | 1-2 |
| Truck | Payload | 5-20 |
| Civil aircraft | Payload | 100-500 |
| Military aircraft | Payload/performance | 500-1,000 |
| Spacecraft | Payload | 3,000-10,000 |

Adapted from Table 9.2 in Ashby (1994)

Let's look at an example; suppose we have the following profit vs mass relationship.
Clearly we want$$\max\, \text{Profit}$$$$\min\, \text{Mass}$$ ###Code md_cost = ( gr.Model() >> gr.cp_vec_function( fun=lambda df: gr.df_make( mass=df.x * 1, profit=(np.log(df.x) + 10) * 10, ), var=["x"], out=["mass", "profit"] ) >> gr.cp_bounds(x=(1e-3, 1e2)) ) ( md_cost >> gr.ev_df(df=gr.df_make(x=np.logspace(-2, 1.5))) >> ggplot(aes("mass", "profit")) + geom_line() + coord_cartesian(ylim=(50, 150)) + theme_minimal() + labs( x="Mass (kg)", y="Profit (USD)", ) ) ###Output _____no_output_____ ###Markdown Set a specific exchange coefficient to pick a particular point on the Pareto frontier. ###Code # Set the exchange coefficient m2c = 1 # Family car # m2c = 10 # Truck # m2c = 100 # Civil aircraft # m2c = 1e3 # Military aircraft # m2c = 1e4 # Spacecraft # Solve minimization with selected weight df_opt = ( md_cost >> gr.cp_function( fun=lambda df: gr.df_make( # min -profit -> max profit # min m2c * mass -> min cost of mass out_net = -df.profit + m2c * df.mass ), var=["profit", "mass"], out=["out_net"] ) >> gr.ev_min(out_min="out_net") ) print(df_opt) # Compute tangent line profit0 = float(df_opt.profit[0] - m2c * df_opt.mass[0]) df_tangent = ( gr.df_make(mass=np.logspace(-2, +1.5)) >> gr.tf_mutate(profit=profit0 + m2c * DF.mass) ) # Visualize ( md_cost >> gr.ev_df(df=gr.df_make(x=np.logspace(-2, 1.5))) >> ggplot(aes("mass", "profit")) + geom_line() # Annotation layers + geom_line( data=df_tangent, linetype="dashed", color="salmon", ) + geom_point( data=df_opt, color="salmon", size=3, ) + coord_cartesian(ylim=(50, 150)) + theme_minimal() + labs( x="Mass (kg)", y="Profit (USD)", ) ) ###Output x x_0 profit out_net mass success \ 0 9.999948 50.0005 123.025799 -113.025851 9.999948 True message n_iter 0 Optimization terminated successfully 13 ###Markdown Transform objectives into constraints---Do we really need to optimize *everything*?- Does the user actually *care* about all your objectives? - Will they be satisfied with a "good enough" solution along some objectives? 
- Importance of understanding the user!!! - Ex. The oar example; oars are sold rated by tip displacement, not based on minimized displacement- Are there norms in your discipline? - e.g. in aircraft design we usually pose range, capacity, cruising speed, etc. as *minimum-threshold constraints*. The primary objective is minimum weight / cost.- [satisficing](https://en.wikipedia.org/wiki/Satisficing) -> finding a "good enough" solution; work in behavioral economics suggests that this is how people *actually* make (some) decisions Learning from Constraints--- ###Code from grama.models import make_cantilever_beam md_beam = make_cantilever_beam() md_beam.printpretty() ###Output model: Cantilever Beam inputs: var_det: t: [2, 4] w: [2, 4] var_rand: H: (+1) norm, {'loc': 500.0, 'scale': 100.0} V: (+1) norm, {'loc': 1000.0, 'scale': 100.0} E: (+0) norm, {'loc': 29000000.0, 'scale': 1450000.0} Y: (-1) norm, {'loc': 40000.0, 'scale': 2000.0} copula: Independence copula functions: cross-sectional area: ['w', 't'] -> ['c_area'] limit state: stress: ['w', 't', 'H', 'V', 'E', 'Y'] -> ['g_stress'] limit state: displacement: ['w', 't', 'H', 'V', 'E', 'Y'] -> ['g_disp'] ###Markdown The built-in Grama cantilever beam model has quantified uncertainties. We'll talk about how to handle these in the upcoming lessons & notebooks, but for now I'll use a simple approach to turn random quantities into deterministic ones. 
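The printout above lists each random input's distribution together with a direction sign (+1, -1, 0). As a rough illustration of what a conservative quantile evaluation means, the sketch below pushes each input toward its unfavorable tail at the 1% level. This is only an illustration of the idea, not grama's actual implementation; the means, spreads, and signs are read off the model printout above.

```python
from scipy.stats import norm

q = 0.01  # same level as quantiles=0.01 passed to eval_conservative below
# (mean, std, sign) for each random input, read off the model printout
rand_inputs = {
    "H": (500.0, 100.0, +1),
    "V": (1000.0, 100.0, +1),
    "E": (29.0e6, 1.45e6, 0),
    "Y": (40000.0, 2000.0, -1),
}
# Sign +1 -> high quantile, sign -1 -> low quantile, sign 0 -> median
conservative = {
    name: norm(loc=mean, scale=std).ppf({+1: 1 - q, -1: q, 0: 0.5}[sign])
    for name, (mean, std, sign) in rand_inputs.items()
}
```

The loads H and V come out high while the yield strength Y comes out low: the pessimistic combination for the stress and displacement limit states.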
###Code md_det = ( gr.Model() ## Use a "conservative" deterministic evaluation of the random # beam model to arrive at a deterministic model >> gr.cp_vec_function( fun=lambda df: gr.eval_conservative( md_beam, df_det=df, quantiles=0.01, ), var=["w", "t"], out=["c_area", "g_stress", "g_disp"], name="Nominal evaluation", ) ## Use the same bounds on w, t as before >> gr.cp_bounds( w=(2, 4), t=(2, 4), ) ) md_det.printpretty() ###Output model: None inputs: var_det: w: [2, 4] t: [2, 4] var_rand: copula: None functions: Nominal evaluation: ['w', 't'] -> ['c_area', 'g_stress', 'g_disp'] ###Markdown We can now optimize this model to minimize the cross-sectional area while constraining it in terms of stress and tip displacement. ###Code ( md_det >> gr.ev_min( out_min="c_area", out_geq=["g_stress", "g_disp"] ) ) ###Output _____no_output_____ ###Markdown If we wanted to minimize the area *further*, which constraint should we edit?- Changing an inactive constraint won't affect things; we shouldn't bother with the bounds on `w` nor the displacement constraint.- The stress constraint is *non-negotiable*; a broken beam is not a useful beam!- We're left with the thickness bounds `t = (2, 4)`. By making the bounds looser, we could potentially improve on the optimum area. 
###Code ( md_det >> gr.cp_bounds(t=(2, 8)) >> gr.ev_min( out_min="c_area", out_geq=["g_stress", "g_disp"], ) ) ###Output _____no_output_____ ###Markdown Day 11 ###Code file11 = open('day11_input', 'r') input11_lines = file11.readlines() def number_of_neighbors(pos, people): y, x = pos neighbours_pos = [[y+1, x-1], [y+1, x], [y+1, x+1], [y , x-1], [y, x+1], [y-1, x-1], [y-1, x], [y-1, x+1]] return sum([people[p[0],p[1]] for p in neighbours_pos]) def propegate(are_people, chair_positions): people_change = np.zeros(are_people.shape).astype(np.int) for pos in chair_positions: y,x = pos if are_people[y,x]: if number_of_neighbors(pos, are_people) > 3: people_change[y,x] = -1 else: if number_of_neighbors(pos, are_people) == 0: people_change[y,x] = 1 are_people += people_change return are_people are_chairs = [] for line in input11_lines: string = line.replace('\n', '').replace('L','1').replace('.','0') are_chairs.append([0] + [int(s) for s in string] + [0]) boundary = [[0]*len(are_chairs[0])] are_chairs = boundary + are_chairs + boundary are_chairs = np.array(are_chairs).astype(np.bool) are_people = np.zeros(are_chairs.shape).astype(np.int) chair_positions = np.array(np.where(are_chairs)).T print(are_chairs.shape, are_people.shape) ###Output (101, 94) (101, 94) ###Markdown Part A ###Code tot_people_now = 1 tot_people_prev = 0 iteration = 0 while tot_people_now != tot_people_prev: print('\r' + str(iteration), end='') tot_people_prev = tot_people_now are_people = propegate(are_people, chair_positions) tot_people_now = are_people.sum() iteration += 1 answer11A = are_people.sum() print('\n answer:', answer11A) import matplotlib.pyplot as plt #import matplotlib.image as mpimg #img = mpimg.imread('your_image.png') imgplot = plt.imshow(are_chairs) plt.show() ###Output _____no_output_____ ###Markdown Part B ###Code def get_nearest_chairs(pos, are_chairs): """ Function to find the nearest chairs in all 8 directions at position pos. 
""" y_max, x_max = are_chairs.shape directions = [[ 1, -1], [ 1, 0], [ 1, 1], [ 0, -1], [ 0, 1], [-1, -1], [-1, 0], [-1, 1]] directions = np.array(directions) nearest_chairs = [] for direction in directions: step = 1 y, x = (pos + step*direction) while (x > 0 and x < x_max) and (y > 0 and y < y_max): #print(y, y_max, y < y_max, x, x_max, x < x_max) if are_chairs[y,x]: nearest_chairs.append([y,x]) break step += 1 y, x = (pos + step*direction) return nearest_chairs def propegate_partB(are_people, chair_positions, chair_neighbours): """ Propegate the simulation of part B one step. """ people_change = np.zeros(are_people.shape).astype(np.int) for pos, neighbors in zip(chair_positions, chair_neighbours): y,x = pos neighbor_number = sum([are_people[y2, x2] for y2, x2 in neighbors]) if are_people[y,x]: if neighbor_number > 4: people_change[y,x] = -1 else: if neighbor_number == 0: people_change[y,x] = 1 are_people += people_change return are_people chairs_nearest_neighbours = [] for idx, pos in enumerate(chair_positions): print('\r' + str(idx) + ' (' + str(len(chair_positions)) + ')', end='') chairs_nearest_neighbours.append(get_nearest_chairs(pos, are_chairs)) are_people = np.zeros(are_chairs.shape).astype(np.int) tot_people_now = 1 tot_people_prev = 0 iteration = 0 while tot_people_now != tot_people_prev: print('\r 1 - ' + str(iteration) + ', tot people:', str(tot_people_now), end='') tot_people_prev = tot_people_now are_people = propegate_partB(are_people, chair_positions, chairs_nearest_neighbours) tot_people_now = are_people.sum() iteration += 1 #print('\r 2 - ' + str(iteration) + ', tot people:', str(tot_people_now), end='') answer11B = are_people.sum() print('\n answer:', answer11B) ###Output 1 - 86, tot people: 2131 answer: 2131 ###Markdown Day 11Santa's previous password expired, and he needs help choosing a new one.To help him remember his new password after the old one expires, Santa has devised a method of coming up with a password based on the previous one. 
Corporate policy dictates that passwords must be exactly eight lowercase letters (for security reasons), so he finds his new password by incrementing his old password string repeatedly until it is valid.Incrementing is just like counting with numbers: xx, xy, xz, ya, yb, and so on. Increase the rightmost letter one step; if it was z, it wraps around to a, and repeat with the next letter to the left until one doesn't wrap around.Unfortunately for Santa, a new Security-Elf recently started, and he has imposed some additional password requirements: Passwords must include one increasing straight of at least three letters, like abc, bcd, cde, and so on, up to xyz. They cannot skip letters; abd doesn't count. Passwords may not contain the letters i, o, or l, as these letters can be mistaken for other characters and are therefore confusing. Passwords must contain at least two different, non-overlapping pairs of letters, like aa, bb, or zz.For example: hijklmmn meets the first requirement (because it contains the straight hij) but fails the second requirement requirement (because it contains i and l). abbceffg meets the third requirement (because it repeats bb and ff) but fails the first requirement. abbcegjk fails the third requirement, because it only has one double letter (bb). The next password after abcdefgh is abcdffaa. The next password after ghijklmn is ghjaabcc, because you eventually skip all the passwords that start with ghi..., since i is not allowed.Given Santa's current password (your puzzle input), what should his next password be? 
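The wrap-around incrementing described above can be sketched on its own; this is a minimal version of just the "counting" step, without the extra letter-skipping the full solution adds:

```python
def increment(pwd: str) -> str:
    """Increase the rightmost letter; 'z' wraps to 'a' and carries left."""
    chars = list(pwd)
    for i in reversed(range(len(chars))):
        if chars[i] == "z":
            chars[i] = "a"  # wrap around and carry to the next letter left
        else:
            chars[i] = chr(ord(chars[i]) + 1)
            break  # no carry needed, we're done
    return "".join(chars)

print(increment("xx"), increment("xz"), increment("zz"))  # xy ya aa
```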
Puzzle 1 ###Code import string import numpy as np DOUBLES = [2 * _ for _ in string.ascii_lowercase] def has_straight(pwd:str) -> bool: """Returns True if password has straight of 3 letters :param pwd: password string """ diffs = np.diff([ord(_) for _ in pwd]) for i in range(6): if diffs[i] == diffs[i+1] == 1: return True return False def allowed_letters(pwd:str) -> bool: """Returns True if password only contains allowed letters""" for _ in "iol": if pwd.count(_): return False return True def has_pairs(pwd:str) -> bool: """Returns True if password has >2 non-overlapping pairs :param pwd: password string """ paircounts = {dbl: bool(pwd.count(dbl)) for dbl in DOUBLES} if sum(paircounts.values()) > 1: return True return False def increment_letter(lttr:str) -> str: """Return next letter in sequence :param lttr: letter to increment """ _ = ord(lttr) if _ in (104, 107, 110): return chr(_ + 2) elif _ == 122: return "a" else: return chr(_ + 1) def increment_password(pwd:str) -> str: """Return next password candidate in sequence :param pwd: password to increment """ rpwd = list(pwd[::-1]) for idx, _ in enumerate(rpwd): next_lett = increment_letter(_) rpwd[idx] = next_lett if next_lett != "a": break return "".join(rpwd[::-1]) def skip_invalid_letters(pwd:str) -> str: """Move invalid password letters to next valid choice :param pwd: password to update """ pwd = list(pwd) for idx, _ in enumerate(pwd): if _ in "iol": pwd[idx] = chr(ord(_) + 1) pwd[idx + 1:] = "a" * (len(pwd) - idx - 1) return "".join(pwd) return "".join(pwd) def next_password(pwd:str) -> str: """Return next valid password in sequence :param pwd: password to increment """ pwd = skip_invalid_letters(pwd) while True: next_pwd = increment_password(pwd) if has_straight(next_pwd) and has_pairs(next_pwd): return next_pwd pwd = next_pwd pwds = ("hijklmmn", "abbceffg", "abbcegjk", "abcdefgh", "abcdffaa", "ghijklmn", "ghjaabcc") for pwd in pwds: print(pwd, has_straight(pwd), allowed_letters(pwd), has_pairs(pwd), 
increment_password(pwd)) print(next_password("abcdefgh")) print(next_password("ghijklmn")) ###Output hijklmmn True False False hijklmmp abbceffg False True True abbceffh abbcegjk False True False abbcegjm abcdefgh True True False abcdefgj abcdffaa True True True abcdffab ghijklmn True False False ghijklmp ghjaabcc True True True ghjaabcd abcdffaa ghjaabcc ###Markdown Solution ###Code with open("day11.txt", "r") as ifh: print(next_password(ifh.read().strip())) ###Output cqjxxyzz ###Markdown Puzzle 2Santa's password expired again. What's the next one? ###Code next_password("cqjxxyzz") ###Output _____no_output_____ ###Markdown Part 1 ###Code test_input1 = """11111 19991 19191 19991 11111""".splitlines() test_input2 = """5483143223 2745854711 5264556173 6141336146 6357385478 4167524645 2176841721 6882881134 4846848554 5283751526""".splitlines() from collections import deque def process_input(input_data, num_steps): data = np.array([list(map(int,line)) for line in input_data]) rows,cols = data.shape num_flashes = 0 for step in range(num_steps): data += 1 seen = set() queue = deque(list(zip(*np.nonzero(data>9)))) while queue: i,j = queue.popleft() if (i,j) in seen: continue seen.add((i,j)) for ii in range(max(i-1,0),min(i+2,rows)): for jj in range(max(j-1,0),min(j+2,cols)): if (ii,jj) not in seen: data[ii,jj] += 1 if data[ii,jj] > 9: queue.append((ii,jj)) num_flashes += len(seen) if len(seen) == rows*cols: print(step+1) return for i,j in list(seen): data[i,j] = 0 return data, num_flashes process_input(test_input1,2) process_input(test_input2,100) process_input(data,100) puzzle.answer_a = 1627 ###Output That's the right answer! You are one gold star closer to finding the sleigh keys. [Continue to Part Two] ###Markdown Part 2 ###Code process_input(test_input2,200) process_input(data,1000) puzzle.answer_b = 329 ###Output That's the right answer! You are one gold star closer to finding the sleigh keys.You have completed Day 11! 
techniques-and-models/w02-04c-jags-example.ipynb
###Markdown JAGS example in PyMC3This notebook attempts to solve the same problem that was solved manually in [w02-04b-mcmc-demo-continuous.ipynb](http://localhost:8888/notebooks/w02-04b-mcmc-demo-continuous.ipynb), but using PyMC3 instead of JAGS as demonstrated in the course video. Problem DefinitionData is for personnel change from last year to this year for 10 companies. The model is defined as follows:$$y_i | \mu \overset{iid}{\sim} N(\mu, 1)$$$$\mu \sim t(0, 1, 1)$$where $y_i$ represents the personnel change for company $i$, and the distribution of $y_i$ given $\mu$ is a Normal distribution with mean $\mu$ and variance 1. The prior distribution of $\mu$ is a t distribution with location 0, scale parameter 1, and 1 degree of freedom (also known as the Cauchy distribution).The model is not conjugate, so the posterior is not a standard form that we can conveniently sample from. To get posterior samples, we need to set up a Markov chain whose stationary distribution is the posterior distribution we want.The main difference with the manually solved example is that we don't need to compute the analytical form of the posterior for our simulation. PyMC3 SolutionJAGS usage follows the 4-step process:* Specify model -- this is the first 2 lines in the `with model` block below.* Setup model -- this is the `observed` attribute of `y_obs`, where the real values of `y` are plugged in.* Run MCMC sampler -- the block under the `# run MCMC sampler` comment. The separate calls to `update` and `coda.sample` are merged into a single `pm.sample` call with separate `n_iter` and `n_tune` variables. The `step` attribute is set to Metropolis-Hastings, as that is the preferred sampler in the course; PyMC3's default is the NUTS sampler.* Post-processing -- whatever we do with `trace["mu"]` after the sampling is done. 
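For reference, this is the unnormalized posterior density that the manually solved version works with; PyMC3 never needs this closed form. With $n=10$ observations and sample mean $\bar{y}$:

$$p(\mu \mid y_1,\ldots,y_{10}) \propto \left[\prod_{i=1}^{10} \exp\!\left(-\tfrac{1}{2}(y_i-\mu)^2\right)\right]\frac{1}{1+\mu^2} \propto \frac{\exp\!\left(10\,\bar{y}\,\mu - 5\mu^2\right)}{1+\mu^2}$$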
###Code import matplotlib.pyplot as plt import numpy as np import pymc3 as pm %matplotlib inline import warnings warnings.filterwarnings("ignore") y = np.array([1.2, 1.4, -0.5, 0.3, 0.9, 2.3, 1.0, 0.1, 1.3, 1.9]) n_iter = 1000 n_tune = 500 with pm.Model() as model: # model specification, and setup (set observed=y) mu = pm.StudentT("mu", nu=1, mu=0, sigma=1) y_obs = pm.Normal("y_obs", mu=mu, sigma=1, observed=y) # run MCMC sampler step = pm.Metropolis() # PyMC3 default is NUTS, course uses Metropolis-Hastings trace = pm.sample(n_iter, tune=n_tune, step=step) # post-processing mu_sims = trace["mu"] print("mu_sims :", mu_sims) print("len(mu_sims): {:d}".format(len(mu_sims))) _ = pm.traceplot(trace) _ = pm.traceplot(trace, combined=True) pm.summary(trace) ###Output _____no_output_____
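As a small illustration of the post-processing step, the sketch below summarizes posterior draws with plain NumPy. The synthetic `mu_sims` array here is only a stand-in for the real `trace["mu"]` draws, so the numbers are hypothetical:

```python
import numpy as np

# Synthetic draws standing in for the mu_sims array produced by pm.sample above
rng = np.random.default_rng(1)
mu_sims = rng.normal(loc=0.9, scale=0.3, size=1000)

post_mean = mu_sims.mean()                                  # posterior mean estimate
ci_lower, ci_upper = np.quantile(mu_sims, [0.025, 0.975])   # 95% credible interval
prob_positive = (mu_sims > 0).mean()                        # Monte Carlo estimate of P(mu > 0 | y)

print(post_mean, (ci_lower, ci_upper), prob_positive)
```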
SZZ/code_document/05_extract_affected_versions.ipynb
###Markdown Importing libraries ###Code import sys, os, re, csv, subprocess, operator import pandas as pd from urllib.request import urlopen import urllib.request from bs4 import BeautifulSoup ###Output _____no_output_____ ###Markdown Configure repository and directories ###Code userhome = os.path.expanduser('~') txt_file = open(userhome + r"/DifferentDiffAlgorithms/SZZ/code_document/project_identity.txt", "r") pid = txt_file.read().split('\n') project = pid[0] bugidentifier = pid[1] repository = userhome + r'/DifferentDiffAlgorithms/SZZ/datasource/' + project + '/' analyze_dir = userhome + r'/DifferentDiffAlgorithms/SZZ/projects_analyses/' + project + '/' print ("Project name = %s" % project) print ("Project key = %s" % bugidentifier) ###Output _____no_output_____ ###Markdown Load textfile contains bug-ids ###Code txtfile = open(analyze_dir + "01_bug_ids_extraction/candidate_bug_ids.txt", "r") bug_links = txtfile.read().split('\n') print ("Found " + str(len(bug_links)) + " bug_ids") ###Output _____no_output_____ ###Markdown Finding affected versions by bug ids ###Code error_links = [] affected_version = [] for a,b in enumerate(bug_links): link = "https://issues.apache.org/jira/browse/" + b sys.stdout.write("\r%i " %(a+1) + "Extracting: " + b) sys.stdout.flush() try: page = urllib.request.urlopen(link) soup = BeautifulSoup(page, 'html.parser') aff_version = soup.find('span', attrs={'id':'versions-val'}).text.replace("\n",'').replace(" M",'-M').replace(" ",'').replace(".x",'.').split(",") aff_version = sorted(aff_version) aff_version.insert(0,b) affected_version.append(aff_version) except: error_links.append(b) print("\nExtraction has been completed.") print (error_links) #Repeat the process if there are still some affected versions by bug_ids haven't been captured due to network problems errorlinks = [] if error_links != []: for c,d in enumerate(error_links): link = "https://issues.apache.org/jira/browse/" + d sys.stdout.write("\r%i " %(c+1) + "Extracting: " + d) 
sys.stdout.flush() try: page = urllib.request.urlopen(link) soup = BeautifulSoup(page, 'html.parser') types = soup.find('span', attrs={'id':'versions-val'}).text.replace("\n",'').replace(" M",'-M').replace(" ",'').replace(".x",'.').split(",") types = sorted(types) types.insert(0, d) affected_version.append(types) except: errorlinks.append(d) print ("\nExtraction is complete") print (errorlinks) affected_version.sort() #Finding the earliest version affected by the bug ids earliest_version = [] for num, affver in enumerate(affected_version): earliest_version.append(affver[:2]) earliest_version.sort() for early in earliest_version: print (early) ###Output _____no_output_____ ###Markdown Defining the function for git command ###Code def execute_command(cmd, work_dir): #Executes a shell command in a subprocess, waiting until it has completed. pipe = subprocess.Popen(cmd, shell=True, cwd=work_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE) (out, error) = pipe.communicate() return out, error pipe.wait() ###Output _____no_output_____ ###Markdown Finding the versions related with earliest version ###Code related_version = [] for n, item in enumerate(earliest_version): if "." 
in item[1]: git_cmd = 'git tag -l "*' + item[1] + '*"' temp = str(execute_command(git_cmd, repository)).replace("b'",'').replace("(",'').replace(")",'').split("\\n") del temp[len(temp)-1] if temp == []: temp = [item[1].replace("Java-SCA-","")] else: temp = ['None'] temp.insert(0, item[0]) related_version.append(temp) for xx in related_version: print (xx) ###Output _____no_output_____ ###Markdown Finding the date release for affected version ###Code date_release = [] for n, item in enumerate(related_version): sys.stdout.write("\rFinding datetime for version {}: {}".format(n+1, item[0])) sys.stdout.flush() if item[1] != "None": for m in range(1, len(item)): git_cmd = "git log -1 --format=%ai " + item[m] temp = str(execute_command(git_cmd, repository)).replace("b'",'').replace("(",'').replace(")",'').split("\\n") del temp[len(temp)-1] temp = temp[0].split(" ") if temp[0] != "',": temp.insert(0,item[0]) temp.insert(1,item[m]) date_release.append(temp) date_release = sorted(date_release, key=operator.itemgetter(0, 2)) """else: date_release.append(item)""" date_release = sorted(date_release, key=operator.itemgetter(0), reverse=True) print ("\nThe process is finish") #save in CSV file with open(analyze_dir + '04_affected_versions/affected_version.csv','w') as csvfile: writers = csv.writer(csvfile) writers.writerow(['bug_id','earliest_affected_version','date_release','time_release','tz']) for item in date_release: writers.writerow(item) df = pd.read_csv(analyze_dir + '04_affected_versions/affected_version.csv') df earliest_vers = df.groupby('bug_id', as_index=False).first() earliest_vers = earliest_vers.sort_values(['date_release', 'time_release', 'earliest_affected_version'], ascending=True) earliest_vers.to_csv(analyze_dir + '04_affected_versions/earliest_version.csv', index=False) earliest_vers ###Output _____no_output_____ ###Markdown Joining 2 csv files: list of annotated files and earliest affected versions ###Code colname = 
['bug_id','bugfix_commitID','parent_id','filepath','diff_myers_file','diff_histogram_file','blame_myers_file','blame_histogram_file', '#deletions_myers','#deletions_histogram'] filedata = pd.read_csv(analyze_dir + '03_annotate/01_annotated_files/listof_diff_n_annotated_files/diff_n_blame_combination_files.csv') filedata = filedata[colname] details = filedata.join(earliest_vers.set_index('bug_id')[['earliest_affected_version','date_release']], on='bug_id') details.to_csv(analyze_dir + '04_affected_versions/affected_version_for_identified_files.csv', index=False) print ("Affected version for identified files has been created") ###Output _____no_output_____
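The "earliest affected version" selection done above with pandas `groupby(...).first()` boils down to a minimum-by-release-date per bug id. A pure-Python sketch on toy (hypothetical) data:

```python
# Hypothetical (bug_id, version, release_date) rows; real data comes from the CSVs above
releases = [
    ("BUG-1", "2.0", "2016-05-01"),
    ("BUG-1", "1.4", "2015-11-20"),
    ("BUG-2", "1.4", "2015-11-20"),
]

earliest = {}
for bug_id, version, date in releases:
    # ISO dates compare correctly as strings, so no parsing is needed here
    if bug_id not in earliest or date < earliest[bug_id][1]:
        earliest[bug_id] = (version, date)

print(earliest)  # {'BUG-1': ('1.4', '2015-11-20'), 'BUG-2': ('1.4', '2015-11-20')}
```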
Section 6/6.1_code_Classification.ipynb
###Markdown Binomial Logistic Regression ###Code # logistic model with binary dependent variable from pyspark.ml.classification import LogisticRegression # Load training data training = spark.read.format("libsvm").load("sample_libsvm_data.txt") lr = LogisticRegression(maxIter=10, regParam=0.3, elasticNetParam=0.8) # Fit the model lrModel = lr.fit(training) # Print the coefficients and intercept for logistic regression print("Coefficients: " + str(lrModel.coefficients)) print("Intercept: " + str(lrModel.intercept)) # We can also use the multinomial family for binary classification mlr = LogisticRegression(maxIter=10, regParam=0.3, elasticNetParam=0.8, family="multinomial") # Fit the model mlrModel = mlr.fit(training) # Print the coefficients and intercepts for logistic regression with multinomial family print("Multinomial coefficients: " + str(mlrModel.coefficientMatrix)) print("Multinomial intercepts: " + str(mlrModel.interceptVector)) # end of section on Binomial Logistic Regression Naive Bayes # Classification model based on Bayes' Theorem with an assumption of independence # among predictors. A NB classifier assumes that the presence of a particular # feature in a class is unrelated to the presence of any other feature. from pyspark.ml.classification import NaiveBayes from pyspark.ml.evaluation import MulticlassClassificationEvaluator # Load training data data = spark.read.format("libsvm") \ .load("sample_libsvm_data.txt") # Split the data into train and test splits = data.randomSplit([0.6, 0.4], 1234) train = splits[0] test = splits[1] train.show() test.show() # create the trainer and set its parameters nb = NaiveBayes(smoothing=1.0, modelType="multinomial") # train the model model = nb.fit(train) # select example rows to display. 
predictions = model.transform(test) predictions.show() # compute accuracy on the test set evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="accuracy") accuracy = evaluator.evaluate(predictions) print("Test set accuracy = " + str(accuracy)) spark.stop() ###Output _____no_output_____
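The independence assumption mentioned above can be made concrete with a tiny hand-rolled multinomial Naive Bayes on hypothetical word counts; under independence, per-word log-probabilities simply add up. This is only a sketch, not Spark's implementation:

```python
from math import log

# Hypothetical per-class word counts and equal priors
counts = {"spam": {"win": 3, "hello": 1}, "ham": {"win": 0, "hello": 4}}
vocab = ["win", "hello"]
priors = {"spam": 0.5, "ham": 0.5}

def log_posterior(doc, label):
    total = sum(counts[label].values())
    score = log(priors[label])
    for word in doc:
        # Laplace smoothing, analogous to smoothing=1.0 above
        score += log((counts[label].get(word, 0) + 1) / (total + len(vocab)))
    return score

doc = ["win", "win"]
pred = max(priors, key=lambda c: log_posterior(doc, c))
print(pred)  # spam
```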
smomics_performance/.ipynb_checkpoints/Saturation_curve_UMIs_Stainings-checkpoint.ipynb
###Markdown Saturation curves for SM-omics and STInput files are generated by counting number of unique molecules and number of annotated reads per annotated region after adjusting for sequencing depth, in downsampled fastq files (proportions 0.001, 0.01, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1) processed using ST-pipeline. ###Code %matplotlib inline import os import numpy import pandas as pd import matplotlib.pyplot as plt import matplotlib import seaborn as sns import glob import warnings matplotlib.rcParams['pdf.fonttype'] = 42 matplotlib.rcParams['ps.fonttype'] = 42 warnings.filterwarnings('ignore') def condition(row): """ Takes row in pandas df as input and returns type of condition """ # The samples are run in triplicate based on condition condition = ['HE', 'DAPI', 'Nestin'] if row['Name'] in ['10015CN108fl_D1', '10015CN108fl_D2', '10015CN108flfl_E2']: return condition[2] elif row['Name'] in ['10015CN90_C2', '10015CN90_D2', '10015CN90_E2']: return condition[1] elif row['Name'] in ['10015CN108_C2', '10015CN108_D2', '10015CN108_E1']: return condition[0] # Load input files path = '../../smomics_data' stats_list = [] samples_list = ['10015CN108fl_D2', '10015CN108flfl_E2', '10015CN108fl_D1', '10015CN90_C2', '10015CN90_D2', '10015CN90_E2', '10015CN108_C2', '10015CN108_D2', '10015CN108_E1'] prop_list = [0.001, 0.01, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1] for filename in samples_list: cond_file = pd.read_csv(os.path.join(path, filename + '_umi_after_seq_depth_in_spots_under_outside_tissue.txt'), sep = '\t') print(cond_file) cond_file.sort_values(by='Num reads', inplace=True) cond_file['Prop_annot_reads'] = prop_list cond_file['Condition'] = cond_file.apply(lambda row: condition(row), axis = 1) cond_file['norm uniq mol inside'] = cond_file['UMI inside'] cond_file['norm uniq mol outside'] = cond_file['UMI outside'] stats_list.append(cond_file) # Concat all files cond_merge = pd.concat(stats_list) #Plot fig = plt.figure(figsize=(20, 10)) x="Prop_annot_reads" y="norm uniq mol 
inside" #y="Genes" hue='Condition' ################ LINE PLOT ax = sns.lineplot(x=x, y=y, data=cond_merge,hue=hue, palette = ['mediumorchid', 'goldenrod', 'blue'], hue_order = ['HE', 'DAPI', 'Nestin'],ci=95) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['bottom'].set_color('k') ax.spines['left'].set_color('k') # X and y label size ax.set_xlabel("Proportion annotated reads", fontsize=15) ax.set_ylabel("Number of unique molecules under tissue", fontsize=15) # Set ticks size ax.tick_params(axis='y', labelsize=15) ax.tick_params(axis='x', labelsize=15) # change background color back_c = 'white' ax.set_facecolor(back_c) ax.grid(False) # Thousand seprator on y axis ax.get_yaxis().set_major_formatter( matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ','))) # LEGEND handles, labels = ax.get_legend_handles_labels() ax.legend(handles=handles[0:], labels=['HE', 'DAPI', 'Nestin'],loc='upper left', ncol=2, fontsize=20) fig.set_size_inches(20, 10) # plt.savefig("../../figures/saturation_sm_stainings_saturation.pdf", transparent=True, bbox_inches = 'tight', # pad_inches = 0, dpi=1200) plt.show() cond_file['Prop_annot_reads'] = 100*cond_file['Prop_annot_reads'] #cond_merge.to_csv('../../smomics_data/sm_stainings_unique_molecules_under_outside_tissue.csv') ###Output _____no_output_____
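The saturation behaviour that these curves visualize can be illustrated with a toy simulation (synthetic reads, not the study's data): as the sampled proportion of reads grows, repeatedly drawn molecules contribute fewer and fewer new unique UMIs.

```python
import random

random.seed(0)
molecules = [f"UMI_{i}" for i in range(500)]              # a finite pool of molecules
reads = [random.choice(molecules) for _ in range(20000)]  # sequencing oversamples the pool

uniques = []
for prop in (0.01, 0.1, 0.5, 1.0):
    subsample = random.sample(reads, int(prop * len(reads)))
    uniques.append(len(set(subsample)))
    print(prop, uniques[-1])
```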
NI-edu/fMRI-pattern-analysis/week_1/design_and_pattern_estimation.ipynb
###Markdown Experimental design and pattern estimationThis week's lab will be about the basics of pattern analysis of (f)MRI data. We assume that you've worked through the two Nilearn tutorials already. Functional MRI data are most often stored as 4D data, with 3 spatial dimensions ($X$, $Y$, and $Z$) and 1 temporal dimension ($T$). But most pattern analyses assume that data are formatted in 2D: trials ($N$) by patterns (often a subset of $X$, $Y$, and $Z$). Where did the time dimension ($T$) go? And how do we "extract" the patterns of the $N$ trials? In this lab, we'll take a look at various methods to estimate patterns from fMRI time series. Because these methods often depend on your experimental design (and your research question, of course), the first part of this lab will discuss some experimental design considerations. After this more theoretical part, we'll dive into how to estimate patterns from fMRI data.**What you'll learn**: At the end of this tutorial, you ...* Understand the most important experimental design factors for pattern analyses;* Understand and are able to implement different pattern estimation techniques**Estimated time needed to complete**: 8-12 hours ###Code # We need to limit the amount of threads numpy can use, otherwise # it tends to hog all the CPUs available when using Nilearn import os os.environ['MKL_NUM_THREADS'] = '1' os.environ['OPENBLAS_NUM_THREADS'] = '1' import numpy as np ###Output _____no_output_____ ###Markdown Experimental designBefore you can do any fancy machine learning or representational similarity analysis (or any other pattern analysis), there are several decisions you need to make and steps to take in terms of study design, (pre)processing, and structuring your data. Roughly, there are three steps to take:1. Design your study in a way that's appropriate to answer your question through a pattern analysis; this, of course, needs to be done *before* data acquisition!2. 
Estimate/extract your patterns from the (functional) MRI data;3. Structure and preprocess your data appropriately for pattern analyses;While we won't go into all the design factors that make for an *efficient* pattern analysis (see [this article](http://www.sciencedirect.com/science/article/pii/S105381191400768X) for a good review), we will now discuss/demonstrate some design considerations and how they impact the rest of the MVPA pipeline. Within-subject vs. between-subject analysesAs always, your experimental design depends on your specific research question. If, for example, you're trying to predict schizophrenia patients from healthy controls based on structural MRI, your experimental design is going to be different than when you, for example, are comparing fMRI activity patterns in the amygdala between trials targeted to induce different emotions. Crucially, with *design* we mean the factors that you as a researcher control: e.g., which schizophrenia patients and healthy control to scan in the former example and which emotion trials to present at what time. These two examples indicate that experimental design considerations are quite different when you are trying to model a factor that varies *between subjects* (the schizophrenia vs. healthy control example) versus a factor that varies *within subjects* (the emotion trials example). ToDo/ToThink (1.5 points): before continuing, let's practice a bit. For the three articles below, determine whether they used a within-subject or between-subject design. https://www.nature.com/articles/nn1444 (machine learning based) http://www.jneurosci.org/content/33/47/18597.short (RSA based) https://www.sciencedirect.com/science/article/pii/S1053811913000074 (machine learning based)Assign either 'within' or 'between' to the variables corresponding to the studies above (i.e., study_1, study_2, study_3). ###Code ''' Implement the ToDo here. 
''' study_1 = '' # fill in 'within' or 'between' study_2 = '' # fill in 'within' or 'between' study_3 = '' # fill in 'within' or 'between' # YOUR CODE HERE raise NotImplementedError() ''' Tests the above ToDo. ''' for this_study in [study_1, study_2, study_3]: if not this_study: # if empty string raise ValueError("You haven't filled in anything!") else: if this_study not in ['within', 'between']: raise ValueError("Fill in either 'within' or 'between'!") print("Your answer will be graded by hidden tests.") ###Output _____no_output_____ ###Markdown Note that, while we think it is a useful way to think about different types of studies, it is possible to use "hybrid" designs and analyses. For example, you could compare patterns from a particular condition (within-subject) across different participants (between-subject). This is, to our knowledge, not very common though, so we won't discuss it here. ToThink (1 point)Suppose a researcher wants to implement a decoding analysis in which he/she aims to predict schizophrenia (vs. healthy control) from gray-matter density patterns in the orbitofrontal cortex. Is this an example of a within-subject or between-subject pattern analysis? Can it be either one? Why (not)? YOUR ANSWER HERE That said, let's talk about something that is not only important for univariate MRI analyses, but also for pattern-based multivariate MRI analyses: confounds. ConfoundsFor most task-based MRI analyses, we try to relate features from our experiment (stimuli, responses, participant characteristics; let's call these $\mathbf{S}$) to brain features (this is not restricted to "activity patterns"; let's call these $\mathbf{R}$\*). Ideally, we have designed our experiment that any association between our experimental factor of interest ($\mathbf{S}$) and brain data ($\mathbf{R}$) can *only* be due to our experimental factor, not something else. 
If another factor besides our experimental factor of interest can explain this association, this "other factor" may be a *confound* (let's call this $\mathbf{C}$). If we care to conclude anything about our experimental factor of interest and its relation to our brain data, we should try to minimize any confounding factors in our design. ---\* Note that the notation for experimental variables ($\mathbf{S}$) and brain features ($\mathbf{R}$) is different from what we used in the previous course, in which we used $\mathbf{X}$ for experimental variables and $\mathbf{y}$ for brain signals. We did this to conform to the convention to use $\mathbf{X}$ for the set of independent variables and $\mathbf{y}$ for dependent variables. In some pattern analyses (such as RSA), however, this independent/dependent variable distinction does not really apply, so that's why we'll stick to the more generic $\mathbf{R}$ (for brain features) and $\mathbf{S}$ (for experimental features) terms. Note: In some situations, you may only be interested in maximizing your explanatory/predictive power; in that case, you could argue that confounds are not a problem. The article by Hebart & Baker (2018) provides an excellent overview of this issue. Statistically speaking, you should design your experiment in such a way that there are no associations (correlations) between $\mathbf{S}$ and $\mathbf{C}$, such that any association between $\mathbf{S}$ and $\mathbf{R}$ can *only* be due to $\mathbf{S}$. 
Note that this is not trivial, because this presumes that you (1) know which factors might confound your study and (2) if you know these factors, that they are measured properly ([Westfall & Yarkoni, 2016](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0152719)). Minimizing confounds in between-subject studies is notably harder than in within-subject designs, especially when dealing with clinical populations that are hard to recruit, because it is simply easier to experimentally control within-subject factors (especially when they are stimulus- rather than response-based). There are ways to deal with confounds post-hoc, but ideally you prevent confounds in the first place. For an overview of confounds in (multivariate/decoding) neuroimaging analyses and a proposed post-hoc correction method, see [this article](https://www.sciencedirect.com/science/article/pii/S1053811918319463) (apologies for the shameless self-promotion) and [this follow-up article](https://www.biorxiv.org/content/10.1101/2020.08.17.255034v1.abstract). In sum, as with *any* (neuroimaging) analysis, a good experimental design is one that minimizes the possibilities of confounds, i.e., associations between factors that are not of interest ($\mathbf{C}$) and experimental factors that *are* of interest ($\mathbf{S}$). ToThink (0 points): Suppose that you are interested in the neural correlates of ADHD. You want to compare multivariate resting-state fMRI networks between ADHD patients and healthy controls. What is the experimental factor ($\mathbf{S}$)? And can you think of a factor that, when unaccounted for, presents a major confound ($\mathbf{C}$) in this study/analysis? ToThink (1 point): Suppose that you're interested in the neural representation of "cognitive effort".
You think of an experimental design in which you show participants either easy arithmetic problems, which involve only single-digit addition/subtraction (e.g., $2+5-4$) or hard(er) arithmetic problems, which involve two-digit addition/subtraction and multiplication (e.g., $12\times4-2\times11$), for which they have to respond whether the solution is odd (press left) or even (press right) as fast as possible. You then compare patterns between easy and hard trials. What is the experimental factor of interest ($\mathbf{S}$) here? And what are possible confounds ($\mathbf{C}$) in this design? Name at least two. (Note: this is a separate hypothetical experiment from the previous ToThink.) YOUR ANSWER HERE What makes up a "pattern"? So far, we talked a lot about "patterns", but what do we mean by that term? There are different options with regard to *what you choose as your unit of measurement* that makes up your pattern. The vast majority of pattern analyses in functional MRI use patterns of *activity estimates*, i.e., the same unit of measurement &mdash; relative (de)activation &mdash; as is common in standard mass-univariate analyses. For example, decoding object category (e.g., images of faces vs. images of houses) from fMRI activity patterns in inferotemporal cortex is an example of a pattern analysis that uses *activity estimates* as its unit of measurement. However, you are definitely not limited to using *activity estimates* for your patterns. For example, you could apply pattern analyses to structural data (e.g., patterns of voxelwise gray-matter volume values, like in [voxel-based morphometry](https://en.wikipedia.org/wiki/Voxel-based_morphometry)) or to functional connectivity data (e.g., patterns of time series correlations between voxels, or even topological properties of brain networks). (In fact, the connectivity examples from the Nilearn tutorial represent a way to estimate these connectivity features, which can be used in pattern analyses.)
In short, pattern analyses can be applied to patterns composed of *any* type of measurement or metric! Now, let's get a little more technical. Usually, as mentioned in the beginning, pattern analyses represent the data as a 2D array of brain patterns. Let's call this $\mathbf{R}$. The rows of $\mathbf{R}$ represent different instances of patterns (sometimes called "samples" or "observations") and the columns represent different brain features (e.g., voxels; sometimes simply called "features"). Note that we thus lose all spatial information by "flattening" our patterns into 1D rows! Let's call the number of samples $N$ and the number of brain features $K$. We can thus represent $\mathbf{R}$ as an $N\times K$ matrix (2D array):\begin{align}\mathbf{R} = \begin{bmatrix} R_{1,1} & R_{1,2} & R_{1,3} & \dots & R_{1,K}\\ R_{2,1} & R_{2,2} & R_{2,3} & \dots & R_{2,K}\\ R_{3,1} & R_{3,2} & R_{3,3} & \dots & R_{3,K}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ R_{N,1} & R_{N,2} & R_{N,3} & \dots & R_{N,K}\\\end{bmatrix}\end{align}As discussed before, the values themselves (e.g., $R_{1,1}$, $R_{1,2}$, $R_{3,6}$) represent whatever you chose for your patterns (fMRI activity, connectivity estimates, VBM, etc.). What is represented by the rows (samples/observations) of $\mathbf{R}$ depends on your study design: in between-subject studies, these are usually participants, while in within-subject studies, these samples represent trials (or averages of trials or sometimes runs). The columns of $\mathbf{R}$ represent the different (brain) features in your pattern; for example, these may be different voxels (or sensors/magnetometers in EEG/MEG), vertices (when working with cortical surfaces), edges in functional brain networks, etc. Let's make it a little bit more concrete. We'll make up some random data below that represents a typical data array in pattern analyses: ###Code import numpy as np N = 100 # e.g. trials K = 250 # e.g.
voxels R = np.random.normal(0, 1, size=(N, K)) R ###Output _____no_output_____ ###Markdown Let's visualize this: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.figure(figsize=(12, 4)) plt.imshow(R, aspect='auto') plt.xlabel('Brain features', fontsize=15) plt.ylabel('Samples', fontsize=15) plt.title(r'$\mathbf{R}_{N\times K}$', fontsize=20) cbar = plt.colorbar() cbar.set_label('Feature value', fontsize=13, rotation=270, labelpad=10) plt.show() ###Output _____no_output_____ ###Markdown ToDo (1 point): Extract the pattern of the 42nd trial and store it in a variable called trial42. Then, extract the values of the 187th brain feature across all trials and store it in a variable called feat187. Lastly, extract the feature value of the 60th trial and the 221st feature and store it in a variable called t60_f221. Remember: Python uses zero-based indexing (first value in an array is indexed by 0)! ###Code ''' Implement the ToDo here.''' # YOUR CODE HERE raise NotImplementedError() ''' Tests the above ToDo. ''' from niedu.tests.nipa.week_1 import test_R_indexing test_R_indexing(R, trial42, feat187, t60_f221) ###Output _____no_output_____ ###Markdown Alright, to practice a little bit more.
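If you need a quick refresher on zero-based 2D indexing, here is a toy example (on a small made-up array, not the `R` from the ToDo above):

```python
import numpy as np

A = np.arange(12).reshape(3, 4)  # toy 3 x 4 array

first_row = A[0, :]    # the "1st" row lives at index 0, not 1!
second_col = A[:, 1]   # the "2nd" column lives at index 1
last_elem = A[2, 3]    # 3rd row, 4th column

print(first_row)   # [0 1 2 3]
print(last_elem)   # 11
```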
We included whole-brain VBM data for 20 subjects in the `vbm/` subfolder: ###Code import os sorted(os.listdir('vbm')) ###Output _____no_output_____ ###Markdown The VBM data represents spatially normalized (to MNI152, 2mm), whole-brain voxelwise gray matter volume estimates (read more about VBM [here](https://en.wikipedia.org/wiki/Voxel-based_morphometry)).Let's inspect the data from a single subject: ###Code import os import nibabel as nib from nilearn import plotting sub_01_vbm_path = os.path.join('vbm', 'sub-01.nii.gz') sub_01_vbm = nib.load(sub_01_vbm_path) print("Shape of Nifti file: ", sub_01_vbm.shape) # Let's plot it as well plotting.plot_anat(sub_01_vbm) plt.show() ###Output _____no_output_____ ###Markdown As you can see, the VBM data is a 3D array of shape 91 ($X$) $\times$ 109 ($Y$) $\times$ 91 ($Z$) (representing voxels). These are the spatial dimensions associated with the standard MNI152 (2 mm) template provided by FSL. As VBM is structural (not functional!) data, there is no time dimension ($T$).Now, suppose that we want to do a pattern analysis on the data of all 20 subjects. We should then create a 2D array of shape 20 (subjects) $\times\ K$ (number of voxels, i.e., $91 \times 109 \times 91$). To do so, we need to create a loop over all files, load them in, "flatten" the data, and ultimately stack them into a 2D array. Before you'll implement this as part of the next ToDo, we will show you a neat Python function called `glob`, which allows you to simply find files using "[wildcards](https://en.wikipedia.org/wiki/Wildcard_character)": ###Code from glob import glob ###Output _____no_output_____ ###Markdown It works as follows:```list_of_files = glob('path/with/subdirectories/*/*.nii.gz')```Importantly, the string you pass to `glob` can contain one or more wildcard characters (such as `?` or `*`). Also, *the returned list is not sorted*! 
Let's try to get all our VBM subject data into a list using this function: ###Code # Let's define a "search string"; we'll use the os.path.join function # to make sure this works both on Linux/Mac and Windows search_str = os.path.join('vbm', 'sub-*.nii.gz') vbm_files = glob(search_str) # this is also possible: vbm_files = glob(os.path.join('vbm', 'sub-*.nii.gz')) # Let's print the returned list print(vbm_files) ###Output _____no_output_____ ###Markdown As you can see, *the list is not alphabetically sorted*, so let's fix that with the `sorted` function: ###Code vbm_files = sorted(vbm_files) print(vbm_files) # Note that we could have done that with a single statement # vbm_files = sorted(glob(os.path.join('vbm', 'sub-*.nii.gz'))) # But also remember: shorter code is not always better! ###Output _____no_output_____ ###Markdown ToDo (2 points): Create a 2D array with the vertically stacked subject-specific (flattened) VBM patterns, in which the first subject should be the first row. You may want to pre-allocate this array before starting your loop (using, e.g., np.zeros). Also, the enumerate function may be useful when writing your loop. Try to google how to flatten an N-dimensional array into a single vector. Store the final 2D array in a variable named R_vbm. ###Code ''' Implement the ToDo here. ''' # YOUR CODE HERE raise NotImplementedError() ''' Tests the above ToDo. ''' from niedu.tests.nipa.week_1 import test_R_vbm_loop test_R_vbm_loop(R_vbm) ###Output _____no_output_____ ###Markdown Tip: While it is a good exercise to load in the data yourself, you can also easily load in and concatenate a set of Nifti files using Nilearn's concat_imgs function (which returns a 4D Nifti1Image, with the different patterns as the fourth dimension). You'd still have to reorganize this data into a 2D array, though. 
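Leaving the actual Nifti loading to the ToDo, the "flatten and stack" step itself can be sketched with toy random arrays standing in for the subjects' (much larger) VBM volumes:

```python
import numpy as np

rng = np.random.default_rng(0)
# three toy 3D "volumes" standing in for subjects' VBM maps
vols = [rng.normal(size=(4, 5, 6)) for _ in range(3)]

# pre-allocate the 2D array, then fill it row by row
R_toy = np.zeros((len(vols), vols[0].size))
for i, vol in enumerate(vols):
    R_toy[i, :] = vol.ravel()  # flatten 3D -> 1D

print(R_toy.shape)  # (3, 120)
```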
###Code # Run this cell after you're done with the ToDo # This will remove the all numpy arrays from memory, # clearing up RAM for the next sections %reset -f array ###Output _____no_output_____ ###Markdown Patterns as "points in space"Before we continue with the topic of pattern estimation, there is one idea that we'd like to introduce: thinking of patterns as points (i.e., coordinates) in space. Thinking of patterns this way is helpful to understanding both machine learning based analyses and representational similarity analysis. While for some, this idea might sound trivial, we believe it's worth going over anyway. Now, let's make this idea more concrete. Suppose we have estimated fMRI activity patterns for 20 trials (rows of $\mathbf{R}$). Now, we will also assume that those patterns consist of only two features (e.g., voxels; columns of $\mathbf{R}$), because this will make visualizing patterns as points in space easier than when we choose a larger number of features.Alright, let's simulate and visualize the data (as a 2D array): ###Code K = 2 # features (voxels) N = 20 # samples (trials) R = np.random.multivariate_normal(np.zeros(K), np.eye(K), size=N) print("Shape of R:", R.shape) # Plot 2D array as heatmap fig, ax = plt.subplots(figsize=(2, 10)) mapp = ax.imshow(R) cbar = fig.colorbar(mapp, pad=0.1) cbar.set_label('Feature value', fontsize=13, rotation=270, labelpad=15) ax.set_yticks(np.arange(N)) ax.set_xticks(np.arange(K)) ax.set_title(r"$\mathbf{R}$", fontsize=20) ax.set_xlabel('Voxels', fontsize=15) ax.set_ylabel('Trials', fontsize=15) plt.show() ###Output _____no_output_____ ###Markdown Now, we mentioned that each pattern (row of $\mathbf{R}$, i.e., $\mathbf{R}_{i}$) can be interpreted as a point in 2D space. With space, here, we mean a space where each feature (e.g., voxel; column of $\mathbf{R}$, i.e., $\mathbf{R}_{j}$) represents a separate axis. 
In our simulated data, we have two features (e.g., voxel 1 and voxel 2), so our space will have two axes: ###Code plt.figure(figsize=(5, 5)) plt.title("A two-dimensional space", fontsize=15) plt.grid() plt.xlim(-3, 3) plt.ylim(-3, 3) plt.xlabel('Activity voxel 1', fontsize=13) plt.ylabel('Activity voxel 2', fontsize=13) plt.show() ###Output _____no_output_____ ###Markdown Within this space, each of our patterns (samples) represents a point. The values of each pattern represent the *coordinates* of its location in this space. For example, the coordinates of the first pattern are: ###Code print(R[0, :]) ###Output _____no_output_____ ###Markdown As such, we can plot this pattern as a point in space: ###Code plt.figure(figsize=(5, 5)) plt.title("A two-dimensional space", fontsize=15) plt.grid() # We use the "scatter" function to plot this point, but # we could also have used plt.plot(R[0, 0], R[0, 1], marker='o') plt.scatter(R[0, 0], R[0, 1], marker='o', s=75) plt.axhline(0, c='k') plt.axvline(0, c='k') plt.xlabel('Activity voxel 1', fontsize=13) plt.ylabel('Activity voxel 2', fontsize=13) plt.xlim(-3, 3) plt.ylim(-3, 3) plt.show() ###Output _____no_output_____ ###Markdown If we do this for all patterns, we get an ordinary scatter plot of the data: ###Code plt.figure(figsize=(5, 5)) plt.title("A two-dimensional space", fontsize=15) plt.grid() # We use the "scatter" function to plot this point, but # we could also have used plt.plot(R[0, 0], R[0, 1], marker='o') plt.scatter(R[:, 0], R[:, 1], marker='o', s=75) plt.axhline(0, c='k') plt.axvline(0, c='k') plt.xlabel('Activity voxel 1', fontsize=13) plt.ylabel('Activity voxel 2', fontsize=13) plt.xlim(-3, 3) plt.ylim(-3, 3) plt.show() ###Output _____no_output_____ ###Markdown It is important to realize that both perspectives &mdash; as a 2D array and as a set of points in $K$-dimensional space &mdash; represents the same data! 
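One payoff of this "points in space" view (which will return when we discuss RSA in week 3) is that geometric notions apply directly: for example, the dissimilarity between two patterns is simply the Euclidean distance between the two corresponding points, and the formula does not change when patterns have more than two features. A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
p1 = rng.normal(size=2)  # two patterns with K = 2 features,
p2 = rng.normal(size=2)  # i.e., two points in 2D space

# Euclidean distance between the two points/patterns
dist = np.sqrt(np.sum((p1 - p2) ** 2))

# identical result via np.linalg.norm; the exact same code
# works for patterns with K = 500 features
assert np.isclose(dist, np.linalg.norm(p1 - p2))
```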
Practically, pattern analysis algorithms usually expect the data as a 2D array, but (in our experience) the operations and mechanisms implemented by those algorithms are easiest to explain and to understand from the "points in space" perspective. You might think, "but how does this work for data with more than two features?" Well, the idea of patterns as points in space remains the same: each feature represents a new dimension (or "axis"). For three features, this means that a pattern represents a point in 3D (X, Y, Z) space; for four features, a pattern represents a point in 4D space (like a point moving in 3D space) ... but what about a pattern with 14 features? Or 500? Actually, this is impossible to visualize or even make sense of mentally. As the famous artificial intelligence researcher Geoffrey Hinton put it:> "To deal with ... a 14 dimensional space, visualize a 3D space and say 'fourteen' very loudly. Everyone does it." (Geoffrey Hinton) The important thing to understand, though, is that most operations, computations, and algorithms that deal with patterns do not care about whether your data is 2D (two features) or 14D (fourteen features) &mdash; we just have to trust the mathematicians that whatever we do on 2D data will generalize to $K$-dimensional data :-) That said, people still try to visualize >2D data using *dimensionality reduction* techniques. These techniques try to project data to a lower-dimensional space. For example, you can transform a dataset with 500 features (i.e., a 500-dimensional dataset) into a 2D dataset using techniques such as principal component analysis (PCA), multidimensional scaling (MDS), and t-SNE. For example, PCA tries to derive a subset of uncorrelated lower-dimensional features (e.g., 2) from linear combinations of high-dimensional features (e.g., 4) that still represent as much variance of the high-dimensional data as possible.
We'll show you an example below using an implementation of PCA from the machine learning library [scikit-learn](https://scikit-learn.org/stable/), which we'll use extensively in next week's lab: ###Code from sklearn.decomposition import PCA # Let's create a dataset with 100 samples and 4 features R4D = np.random.normal(0, 1, size=(100, 4)) print("Shape R4D:", R4D.shape) # We'll instantiate a PCA object that will # transform our data into 2 components pca = PCA(n_components=2) # Fit and transform the data from 4D to 2D R2D = pca.fit_transform(R4D) print("Shape R2D:", R2D.shape) # Plot the result plt.figure(figsize=(5, 5)) plt.scatter(R2D[:, 0], R2D[:, 1], marker='o', s=75) plt.axhline(0, c='k') plt.axvline(0, c='k') plt.xlabel('PCA component 1', fontsize=13) plt.ylabel('PCA component 2', fontsize=13) plt.grid() plt.xlim(-4, 4) plt.ylim(-4, 4) plt.show() ###Output _____no_output_____ ###Markdown ToDo (optional): As discussed, PCA is a specific dimensionality reduction technique that uses linear combinations of features to project the data to a lower-dimensional space with fewer "components". Linear combinations are simply weighted sums of high-dimensional features. In a 4D space that is projected to 2D, PCA component 1 might be computed as $\mathbf{R}_{j=1}\theta_{1}+\mathbf{R}_{j=2}\theta_{2}+\mathbf{R}_{j=3}\theta_{3}+\mathbf{R}_{j=4}\theta_{4}$, where $R_{j=1}$ represents the 1st feature of $\mathbf{R}$ and $\theta_{1}$ represents the weight for the 1st feature. The weights of the fitted PCA model can be accessed by, confusingly, pca.components_ (shape: $K_{lower} \times K_{higher}$). Using these weights, can you recompute the lower-dimensional features from the higher-dimensional features yourself? Try to plot it like the figure above and check whether it matches. ###Code ''' Implement the (optional) ToDo here.
''' # YOUR CODE HERE raise NotImplementedError() ###Output _____no_output_____ ###Markdown Note that dimensionality reduction is often used for visualization, but it can also be used as a preprocessing step in pattern analyses. We'll take a look at this in more detail next week. Alright, back to the topic of pattern extraction/estimation. You saw that preparing VBM data for (between-subject) pattern analyses is actually quite straightforward, but unfortunately, preparing functional MRI data for pattern analysis is a little more complicated. The reason is that we are dealing with time series in which different trials ($N$) are "embedded". The next section discusses different methods to "extract" (estimate) these trial-wise patterns. Estimating patterns As we mentioned before, we should prepare our data as an $N$ (samples) $\times$ $K$ (features) array. With fMRI data, our data is formatted as an $X \times Y \times Z \times T$ array; we can flatten the $X$, $Y$, and $Z$ dimensions, but we still have to find a way to "extract" patterns for our $N$ trials from the time series (i.e., the $T$ dimension). Important side note: single trials vs. (runwise) average trials In this section, we often assume that our "samples" refer to different *trials*, i.e., single instances of a stimulus or response (or another experimentally-related factor). This is, however, not the only option. Sometimes, researchers choose to treat multiple repetitions of a trial as a single sample or multiple trials within a condition as a single sample. For example, suppose you design a simple passive-viewing experiment with images belonging to one of three conditions: faces, houses, and chairs. Each condition has ten exemplars (face1, face2, ..., face10, house1, house2, ..., house10, chair1, chair2, ... , chair10) and each exemplar/item is repeated six times. So, in total there are 3 (conditions) $\times$ 10 (exemplars) $\times$ 6 (repetitions) = 180 trials.
Because you don't want to bore the participant to death, you split the 180 trials into two runs (90 each). Now, there are different ways to define your samples. One is to treat every single trial as a sample (so you'll have 180 samples). Another way is to treat each exemplar as a sample. If you do so, you'll have to "pool" the pattern estimates across all 6 repetitions (so you'll have $10 \times 3 = 30$ samples). And yet another way is to treat each condition as a sample, so you'll have to pool the pattern estimates across all 6 repetitions and 10 exemplars per condition (so you'll end up with only 3 samples). Lastly, with respect to the latter two approaches, you may choose to only average repetitions and/or exemplars *within* runs. So, for two runs, you end up with either $10 \times 3 \times 2 = 60$ samples (when averaging across repetitions only) or $3 \times 2 = 6$ samples (when averaging across exemplars and repetitions). Whether you should perform your pattern analysis on the trial, exemplar, or condition level, and whether you should estimate these patterns across runs or within runs, depends on your research question and analysis technique. For example, if you want to decode exemplars from each other, you obviously should not average across exemplars. Also, some experiments may not have different exemplars per condition (or do not have categorical conditions at all). With respect to the importance of analysis technique: when applying machine learning analyses to fMRI data, people often prefer to split their trials across many (short) runs and &mdash; if using a categorical design &mdash; prefer to estimate a single pattern per run. This is because samples across runs are not temporally autocorrelated, which is an important assumption in machine learning based analyses.
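The "pooling" described in this side note amounts to averaging the rows of the pattern array that share a label. A minimal numpy sketch with made-up data and labels (condition-level pooling; exemplar-level pooling works the same way):

```python
import numpy as np

rng = np.random.default_rng(2)
R_trials = rng.normal(size=(12, 5))               # 12 toy trial patterns, 5 voxels
conds = np.array(['face', 'house', 'chair'] * 4)  # condition label per trial

# average trial patterns within each condition -> one row per condition
labels = ['face', 'house', 'chair']
R_conds = np.vstack([R_trials[conds == c].mean(axis=0) for c in labels])
print(R_conds.shape)  # (3, 5)
```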
Lastly, for any pattern analysis, averaging across different trials will increase the signal-to-noise ratio (SNR) for any sample (because you average out noise), but will decrease the statistical power of the analysis (because you have fewer samples). Long story short: whatever you treat as a sample &mdash; single trials, (runwise) exemplars or (runwise) conditions &mdash; depends on your design, research question, and analysis technique. In the rest of the tutorial, we will usually refer to samples as "trials", as this scenario is easiest to simulate and visualize, but remember that this term may equally well refer to (runwise) exemplar-average or condition-average patterns. --- To make the issue of estimating patterns from time series a little more concrete, let's simulate some signals. We'll assume that we have a very simple experiment with two conditions (A, B) with ten trials each (interleaved, i.e., ABABAB...AB), a trial duration of 1 second, spaced evenly within a single run of 200 seconds (with a TR of 2 seconds, so 100 timepoints). Note that you are not necessarily limited to discrete categorical designs for all pattern analyses! While for machine learning-based methods (topic of week 2) it is common to have a design with a single categorical feature of interest (or sometimes a single continuous one), representational similarity analyses (topic of week 3) are often applied to data with more "rich" designs (i.e., designs that include many, often continuously varying, factors of interest). Also, using twenty trials is probably way too few for any pattern analysis, but it'll make the examples (and visualizations) in this section easier to understand. Alright, let's get to it.
###Code TR = 2 N = 20 # 2 x 10 trials T = 200 # duration in seconds # t_pad is a little baseline at the # start and end of the run t_pad = 10 onsets = np.linspace(t_pad, T - t_pad, N, endpoint=False) durations = np.ones(onsets.size) conditions = ['A', 'B'] * (N // 2) print("Onsets:", onsets, end='\n\n') print("Conditions:", conditions) ###Output _____no_output_____ ###Markdown We'll use the `simulate_signal` function used in the introductory course to simulate the data. This function is like a GLM in reverse: it assumes that a signal ($R$) is generated as a linear combination between (HRF-convolved) experimental features ($\mathbf{S}$) weighted by some parameters ( $\beta$ ) plus some additive noise ($\epsilon$), and simulates the signal accordingly (you can check out the function by running `simulate_signal??` in a new code cell). Because we simulate the signal, we can use "ground-truth" activation parameters ( $\beta$ ). In this simulation, we'll determine that the signal responds more strongly to trials of condition A ($\beta = 0.8$) than trials of condition B ($\beta = 0.2$) in *even* voxels (voxel 0, 2, etc.) and vice versa for *odd* voxels (voxel 1, 3, etc.): ###Code params_even = np.array([0.8, 0.2]) params_odd = 1 - params_even ###Output _____no_output_____ ###Markdown ToThink (0 points): Given these simulation parameters, how do you think that the corresponding $N\times K$ pattern array ($\mathbf{R}$) would roughly look like visually (assuming an efficient pattern estimation method)? Alright, We simulate some data for, let's say, four voxels ($K = 4$). (Again, you'll usually perform pattern analyses on many more voxels.) ###Code from niedu.utils.nii import simulate_signal K = 4 ts = [] for i in range(K): # Google "Python modulo" to figure out # what the line below does! 
is_even = (i % 2) == 0 sig, _ = simulate_signal( onsets, conditions, duration=T, plot=False, std_noise=0.25, params_canon=params_even if is_even else params_odd ) ts.append(sig[:, np.newaxis]) # ts = timeseries ts = np.hstack(ts) print("Shape of simulated signals: ", ts.shape) ###Output _____no_output_____ ###Markdown And let's plot these voxels. We'll show the trial onsets as arrows (red = condition A, orange = condition B): ###Code import seaborn as sns fig, axes = plt.subplots(ncols=K, sharex=True, sharey=True, figsize=(10, 12)) t = np.arange(ts.shape[0]) for i, ax in enumerate(axes.flatten()): # Plot signal ax.plot(ts[:, i], t, marker='o', ms=4, c='tab:blue') # Plot trial onsets (as arrows) for ii, to in enumerate(onsets): color = 'tab:red' if ii % 2 == 0 else 'tab:orange' ax.arrow(-1.5, to / TR, dy=0, dx=0.5, color=color, head_width=0.75, head_length=0.25) ax.set_xlim(-1.5, 2) ax.set_ylim(0, ts.shape[0]) ax.grid(b=True) ax.set_title(f'Voxel {i+1}', fontsize=15) ax.invert_yaxis() if i == 0: ax.set_ylabel("Time (volumes)", fontsize=20) # Common axis labels fig.text(0.425, -.03, "Activation (A.U.)", fontsize=20) fig.tight_layout() sns.despine() plt.show() ###Output _____no_output_____ ###Markdown Tip: Matplotlib is a very flexible plotting package, but arguably at the expense of how fast you can implement something. Seaborn is a great package (built on top of Matplotlib) that offers some neat functionality that makes your life easier when plotting in Python. For example, we used the despine function to remove the top and right spines to make our plot a little nicer. In this course, we'll mostly use Matplotlib, but we just wanted to make you aware of this awesome package. Alright, now we can start discussing methods for pattern estimation! Unfortunately, as pattern analyses are relatively new, there is no consensus yet about the "best" method for pattern estimation. In fact, there exist many different methods, which we can roughly divide into two types: 1.
Timepoint-based methods (for lack of a better name) and 2. GLM-based methods. We'll discuss both of them, but spend a little more time on the latter set of methods as they are more complicated (and more popular). Timepoint-based methods Timepoint-based methods "extract" patterns by simply using a single timepoint (e.g., 6 seconds after stimulus presentation) or (an average of) multiple timepoints (e.g., 4, 6, and 8 seconds after stimulus presentation). Below, we visualize what a single-timepoint method would look like (assuming that we'd want to extract the timepoint 6 seconds after stimulus presentation, i.e., around the assumed peak of the BOLD response). The stars represent the values that we would extract (red when condition A, orange when condition B). Note, we only plot the first 50 volumes. ###Code fig, axes = plt.subplots(ncols=4, sharex=True, sharey=True, figsize=(10, 12)) t_fmri = np.linspace(0, T, ts.shape[0], endpoint=False) t = np.arange(ts.shape[0]) for i, ax in enumerate(axes.flatten()): # Plot signal ax.plot(ts[:, i], t, marker='o', ms=4, c='tab:blue') # Plot trial onsets (as arrows) for ii, to in enumerate(onsets): plus6 = np.interp(to+6, t_fmri, ts[:, i]) color = 'tab:red' if ii % 2 == 0 else 'tab:orange' ax.arrow(-1.5, to / TR, dy=0, dx=0.5, color=color, head_width=0.75, head_length=0.25) ax.plot([plus6, plus6], [(to+6) / TR, (to+6) / TR], marker='*', ms=15, c=color) ax.set_xlim(-1.5, 2) ax.set_ylim(0, ts.shape[0] // 2) ax.grid(b=True) ax.set_title(f'Voxel {i+1}', fontsize=15) ax.invert_yaxis() if i == 0: ax.set_ylabel("Time (volumes)", fontsize=20) # Common axis labels fig.text(0.425, -.03, "Activation (A.U.)", fontsize=20) fig.tight_layout() sns.despine() plt.show() ###Output _____no_output_____ ###Markdown Now, extracting these timepoints 6 seconds after stimulus presentation is easy when this timepoint is a multiple of the scan's TR (here: 2 seconds).
For example, to extract the value for the first trial (onset: 10 seconds), we simply take the value at index 8 in our timeseries, because $(10 + 6) / 2 = 8$ (remember: zero-based indexing). But what if our trial onset + 6 seconds is *not* a multiple of the TR, such as with trial 2 (onset: 19 seconds)? Well, we can interpolate this value! We will use the same function for this operation as we did for slice-timing correction (from the previous course): `interp1d` from the `scipy.interpolate` module. To refresh your memory: this function takes the timepoints associated with the values (or "frame_times" in Nilearn lingo) and the values themselves to generate a new object which we'll later use to do the actual (linear) interpolation. First, let's define the timepoints: ###Code t_fmri = np.linspace(0, T, ts.shape[0], endpoint=False) ###Output _____no_output_____ ###Markdown ToDo (1 point): The above timepoints assume that all data was acquired at the onset of the volume acquisition ($t=0$, $t=2$, etc.). Suppose that we actually slice-time corrected our data to the middle slice, i.e., the 18th slice (out of 36 slices) &mdash; create a new array (using np.linspace with timepoints that reflect these slice-time corrected acquisition onsets) and store it in a variable named t_fmri_middle_slice. ###Code ''' Implement your ToDo here. ''' # YOUR CODE HERE raise NotImplementedError() ''' Tests the above ToDo. ''' from niedu.tests.nipa.week_1 import test_frame_times_stc test_frame_times_stc(TR, T, ts.shape[0], t_fmri_middle_slice) ###Output _____no_output_____ ###Markdown For now, let's assume that all data was actually acquired at the start of the volume ($t=0$, $t=2$, etc.). We can "initialize" our interpolator by giving it both the timepoints (`t_fmri`) and the data (`ts`). Note that `ts` is not a single time series, but a 2D array with time series for four voxels (across different columns).
By specifying `axis=0`, we tell `interp1d` that the first axis represents the axis that we want to interpolate later: ###Code from scipy.interpolate import interp1d interpolator = interp1d(t_fmri, ts, axis=0) ###Output _____no_output_____ ###Markdown Now, we can give the `interpolator` object any set of timepoints and it will return the linearly interpolated values associated with these timepoints for all four voxels. Let's do this for our trial onsets plus six seconds: ###Code onsets_plus_6 = onsets + 6 R_plus6 = interpolator(onsets_plus_6) print("Shape extracted pattern:", R_plus6.shape) fig, ax = plt.subplots(figsize=(2, 10)) mapp = ax.imshow(R_plus6) cbar = fig.colorbar(mapp) cbar.set_label('Feature value', fontsize=13, rotation=270, labelpad=15) ax.set_yticks(np.arange(N)) ax.set_xticks(np.arange(K)) ax.set_title(r"$\mathbf{R}$", fontsize=20) ax.set_xlabel('Voxels', fontsize=15) ax.set_ylabel('Trials', fontsize=15) plt.show() ###Output _____no_output_____ ###Markdown Yay, we have extracted our first pattern! Does it look like what you expected given the known mean amplitude of the trials from the two conditions ($\beta_{\mathrm{A,even}} = 0.8, \beta_{\mathrm{B,even}} = 0.2$ and vice versa for odd voxels)? ToDo (3 points): An alternative to the single-timepoint method is to extract, per trial, the average activity within a particular time window, for example 5-7 seconds post-stimulus. One way to do this is by performing interpolation in steps of (for example) 0.1 within the 5-7 post-stimulus time window (i.e., $5.0, 5.1, 5.2, \dots , 6.8, 6.9, 7.0$) and subsequently averaging these values, per trial, into a single activity estimate. Below, we defined these different steps (t_post_stimulus) for you already. Use the interpolator object to extract the values at these different post-stimulus times relative to our onsets (onsets variable) from our data (ts variable). Store the extracted patterns in a new variable called R_av.
Note: this is a relatively difficult ToDo! Consider skipping it if it takes too long. ###Code ''' Implement your ToDo here. ''' t_post_stimulus = np.linspace(5, 7, 21, endpoint=True) print(t_post_stimulus) # YOUR CODE HERE raise NotImplementedError() ''' Tests the above ToDo. ''' from niedu.tests.nipa.week_1 import test_average_extraction test_average_extraction(onsets, ts, t_post_stimulus, interpolator, R_av) ###Output _____no_output_____ ###Markdown These timepoint-based methods are relatively simple to implement and computationally efficient. Another variation that you might see in the literature is that extracted (averages of) timepoints are baseline-subtracted ($\mathbf{R}_{i} - \mathrm{baseline}_{i}$) or baseline-normalized ($\frac{\mathbf{R}_{i}}{\mathrm{baseline}_{i}}$), where the baseline is usually chosen to be at the stimulus onset or a small window before the stimulus onset. This technique is, as far as we know, not very popular, so we won't discuss it any further in this lab.

GLM-based methods

One big disadvantage of timepoint-based methods is that they cannot disentangle activity due to different sources (such as trials that are close in time), which is a major problem for fast (event-related) designs. For example, if you present a trial at $t=10$ and another at $t=12$ and subsequently extract the pattern six seconds post-stimulus (at $t=18$ for the second trial), then the activity estimate for the second trial is definitely going to contain activity due to the first trial because of the sluggishness of the HRF. As such, nowadays GLM-based pattern estimation techniques, which *can* disentangle the contribution of different sources, are more popular than timepoint-based methods. (Although, technically, you can use timepoint-based methods using the GLM with FIR-based designs, but that's beyond the scope of this course.) Again, there are multiple flavors of GLM-based pattern estimation, of which we'll discuss the two most popular ones.
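To make this overlap problem concrete, here is a small self-contained sketch with a toy double-gamma HRF. The HRF shape, the 0.1-second sampling resolution, and the onsets are made-up assumptions for illustration only, not the exact HRF or timings used elsewhere in this lab:

```python
import numpy as np
from scipy.stats import gamma

# Toy double-gamma HRF sampled at 0.1 s resolution (a rough approximation)
t_hrf = np.arange(0, 30, 0.1)
hrf = gamma.pdf(t_hrf, 6) - 0.35 * gamma.pdf(t_hrf, 12)
hrf /= hrf.max()  # scale peak to 1

n = 500  # 50 seconds at 0.1 s resolution
resp1, resp2 = np.zeros(n), np.zeros(n)
resp1[100:100 + hrf.size] = hrf  # response to trial 1 at t = 10 s
resp2[120:120 + hrf.size] = hrf  # response to trial 2 at t = 12 s

# Reading out the signal six seconds after trial 2 (t = 18 s, index 180):
# trial 1's response has not yet died out, so it "leaks" into the estimate
print(resp1[180], resp2[180])
```

The readout at $t=18$ picks up a substantial leftover response from trial 1 on top of trial 2's response, which is exactly the contamination that jointly fitting trial regressors in a GLM can disentangle.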
Least-squares all (LSA)

The most straightforward GLM-based pattern estimation technique is to fit a single GLM with a design matrix that contains one or more regressors for each sample that you want to estimate (in addition to any confound regressors). The estimated parameters ($\hat{\beta}$) corresponding to our samples from this GLM &mdash; representing the relative (de)activation of each voxel for each trial &mdash; will then represent our patterns! This technique is often referred to as "least-squares all" (LSA). Note that, as explained before, a sample can refer to either a single trial, a set of repetitions of a particular exemplar, or even a single condition. For now, we'll assume that samples refer to single trials. Often, each sample is modelled by a single (canonical) HRF-convolved regressor (but you could also use more than one regressor, e.g., using a basis set with temporal/dispersion derivatives or a FIR-based basis set), so we'll focus on this approach. Let's go back to our simulated data. We have a single run containing 20 trials, so ultimately our design matrix should contain twenty columns: one for every trial. We can use the `make_first_level_design_matrix` function from Nilearn to create the design matrix. Importantly, we should make sure to give a separate and unique "trial_type" value to each of our trials. If we don't do this (e.g., set trial type to the trial condition: "A" or "B"), then Nilearn won't create separate regressors for our trials. ###Code import pandas as pd from nilearn.glm.first_level import make_first_level_design_matrix # We have to create a dataframe with onsets/durations/trial_types # No need for modulation! events_sim = pd.DataFrame(onsets, columns=['onset']) events_sim.loc[:, 'duration'] = 1 events_sim.loc[:, 'trial_type'] = ['trial_' + str(i).zfill(2) for i in range(1, N+1)] # lsa_dm = least squares all design matrix lsa_dm = make_first_level_design_matrix( frame_times=t_fmri, # we defined this earlier for interpolation!
events=events_sim, hrf_model='glover', drift_model=None # assume data is already high-pass filtered ) # Check out the created design matrix # Note that the index represents the frame times lsa_dm ###Output _____no_output_____ ###Markdown Note that the design matrix contains 21 regressors: 20 trialwise regressors and an intercept (the last column). Let's also plot it using Nilearn: ###Code from nilearn.plotting import plot_design_matrix plot_design_matrix(lsa_dm); ###Output _____no_output_____ ###Markdown And, while we're at it, plot it as time series (rather than a heatmap): ###Code fig, ax = plt.subplots(figsize=(12, 12)) for i in range(lsa_dm.shape[1]): ax.plot(i + lsa_dm.iloc[:, i], np.arange(ts.shape[0])) ax.set_title("LSA design matrix", fontsize=20) ax.set_ylim(0, lsa_dm.shape[0]-1) ax.set_xlabel('') ax.set_xticks(np.arange(N+1)) ax.set_xticklabels(['trial ' + str(i+1) for i in range(N)] + ['icept'], rotation=-90) ax.invert_yaxis() ax.grid() ax.set_ylabel("Time (volumes)", fontsize=15) plt.show() ###Output _____no_output_____ ###Markdown ToDo/ToThink (2 points): One "problem" with LSA-type design matrices, especially in fast event-related designs, is that they are not very statistically efficient, i.e., they lead to relatively high variance estimates of your parameters ($\hat{\beta}$), mainly due to the relatively high correlation between predictors, which inflates the design variance. Because we used a fixed inter-trial interval (here: 9 seconds), the correlation between "adjacent" trials is (approximately) the same. Compute the correlation between, for example, the predictors associated with trial 1 and trial 2, using the pearsonr function imported below, and store it in a variable named corr_t1t2 (1 point). Then, try to think of a way to improve the efficiency of this particular LSA design and write it down in the cell below the test cell. ###Code ''' Implement your ToDo here.
''' # For more info about the `pearsonr` function, check # https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html # Want a challenge? Try to compute the correlation from scratch! from scipy.stats import pearsonr # YOUR CODE HERE raise NotImplementedError() ''' Tests the ToDo above. ''' from niedu.tests.nipa.week_1 import test_t1t2_corr test_t1t2_corr(lsa_dm, corr_t1t2) ###Output _____no_output_____ ###Markdown YOUR ANSWER HERE Alright, let's actually fit the model! When dealing with real fMRI data, we'd use Nilearn to fit our GLM, but for now, we'll just use our own implementation of an (OLS) GLM. Note that we can actually fit a *single* GLM for all voxels at the same time by using `ts` (a $T \times K$ matrix) as our dependent variable due to the magic of linear algebra. In other words, we can run $K$ OLS models at once! ###Code # Let's use 'X', because it's shorter X = lsa_dm.values # Note we can fit our GLM for all K voxels at # the same time! As such, betas is not a vector, # but an n_regressor x k_voxel matrix! beta_hat_all = np.linalg.inv(X.T @ X) @ X.T @ ts print("Shape beta_hat_all:", beta_hat_all.shape) # Ah, the beta for the intercept is still in there # Let's remove it beta_icept = beta_hat_all[-1, :] beta_hat = beta_hat_all[:-1, :] print("Shape beta_hat (intercept removed):", beta_hat.shape) ###Output _____no_output_____ ###Markdown Alright, let's visualize the estimated parameters ($\hat{\beta}$). We'll do this by plotting the scaled regressors (i.e., $X_{j}\hat{\beta}_{j}$) on top of the original signal. 
Each differently colored line represents a different regressor (so a different trial): ###Code fig, axes = plt.subplots(ncols=4, sharex=True, sharey=True, figsize=(10, 12)) t = np.arange(ts.shape[0]) for i, ax in enumerate(axes.flatten()): # Plot signal ax.plot(ts[:, i], t, marker='o', ms=4, lw=0.5, c='tab:blue') # Plot trial onsets (as arrows) for ii, to in enumerate(onsets): color = 'tab:red' if ii % 2 == 0 else 'tab:orange' ax.arrow(-1.5, to / TR, dy=0, dx=0.5, color=color, head_width=0.75, head_length=0.25) # Compute x*beta for icept only scaled_icept = lsa_dm.iloc[:, -1].values * beta_icept[i] for ii in range(N): this_x = lsa_dm.iloc[:, ii].values # Compute x*beta for this particular trial (ii) xb = scaled_icept + this_x * beta_hat[ii, i] ax.plot(xb, t, lw=2) ax.set_xlim(-1.5, 2) ax.set_ylim(0, ts.shape[0] // 2) ax.grid(True) ax.set_title(f'Voxel {i+1}', fontsize=15) ax.invert_yaxis() if i == 0: ax.set_ylabel("Time (volumes)", fontsize=20) # Common axis labels fig.text(0.425, -.03, "Activation (A.U.)", fontsize=20) fig.tight_layout() sns.despine() plt.show() ###Output _____no_output_____ ###Markdown Ultimately, though, the estimated GLM parameters are just another way to estimate our pattern array ($\mathbf{R}$) &mdash; this time, we just estimated it using a different method (GLM-based) than before (timepoint-based). Therefore, let's visualize this array as we did with the other methods: ###Code fig, ax = plt.subplots(figsize=(2, 10)) mapp = ax.imshow(beta_hat) cbar = fig.colorbar(mapp) cbar.set_label(r'$\hat{\beta}$', fontsize=25, rotation=0, labelpad=10) ax.set_yticks(np.arange(N)) ax.set_xticks(np.arange(K)) ax.set_title(r"$\mathbf{R}$", fontsize=20) ax.set_xlabel('Voxels', fontsize=15) ax.set_ylabel('Trials', fontsize=15) plt.show() ###Output _____no_output_____ ###Markdown ToDo (optional, 0 points): It would be nice to visualize the patterns, but this is very hard because we have four dimensions (one per voxel)! PCA to the rescue!
Run PCA on the estimated patterns (beta_hat) and store the PCA-transformed array (shape: $20 \times 2$) in a variable named beta_hat_2d. Then, try to plot the first two components as a scatterplot. Make it even nicer by plotting the trials from condition A as red points and trials from condition B as orange points. ###Code # YOUR CODE HERE raise NotImplementedError() from niedu.tests.nipa.week_1 import test_pca_beta_hat test_pca_beta_hat(beta_hat, beta_hat_2d) ###Output _____no_output_____ ###Markdown Noise normalization

One often used preprocessing step for pattern analyses (using GLM-estimation methods) is to use "noise normalization" on the estimated patterns. There are two flavours: "univariate" and "multivariate" noise normalization. In univariate noise normalization, the estimated parameters ($\hat{\beta}$) are divided (normalized) by the standard deviation of the estimated parameters &mdash; which you might recognize as the formula for $t$-values (for a contrast against baseline)!\begin{align}t_{c\hat{\beta}} = \frac{c\hat{\beta}}{\sqrt{\hat{\sigma}^{2}c(X^{T}X)^{-1}c^{T}}}\end{align}where $\hat{\sigma}^{2}$ is the estimate of the error variance (sum of squared errors divided by the degrees of freedom) and $c(X^{T}X)^{-1}c^{T}$ is the "design variance". Sometimes people disregard the design variance and the degrees of freedom (DF) and instead only use the standard deviation of the noise: \begin{align}t_{c\hat{\beta}} \approx \frac{c\hat{\beta}}{\sqrt{\frac{1}{T}\sum_{i=1}^{T} (y_{i} - X_{i}\hat{\beta})^{2}}}\end{align} ToThink (1 point): When experiments use a fixed ISI (in the context of single-trial GLMs), the omission of the design variance in univariate noise normalization is warranted. Explain why. YOUR ANSWER HERE Either way, this univariate noise normalization is a way to "down-weigh" the uncertain (noisy) parameter estimates.
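To make the full formula (including the degrees of freedom and the design variance) concrete, here is a toy sketch on made-up data; all variable names here are hypothetical and not part of this lab's pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)
n_tp, n_reg = 100, 3  # timepoints, regressors (incl. intercept)

# Made-up design: two random predictors plus an intercept column
X = np.column_stack([rng.standard_normal((n_tp, n_reg - 1)), np.ones(n_tp)])
y = X @ np.array([1.0, 0.5, 0.0]) + rng.standard_normal(n_tp) * 0.5

# OLS estimates and residuals
beta = np.linalg.inv(X.T @ X) @ X.T @ y
resid = y - X @ beta

dof = n_tp - n_reg
sigma2_hat = resid @ resid / dof               # error variance estimate

c = np.array([1.0, 0.0, 0.0])                  # first predictor against baseline
design_var = c @ np.linalg.inv(X.T @ X) @ c    # c (X'X)^-1 c'
t_val = (c @ beta) / np.sqrt(sigma2_hat * design_var)
```

Dropping the degrees of freedom and the design-variance term from the denominator gives the approximate version implemented later in this section.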
Although this type of univariate noise normalization seems to lead to better results in both decoding and RSA analyses (e.g., [Misaki et al., 2010](https://www.ncbi.nlm.nih.gov/pubmed/20580933)), the jury is still out on this issue. Multivariate noise normalization will be discussed in week 3 (RSA), so let's focus for now on the implementation of univariate noise normalization using the approximate method (which disregards design variance). To compute the standard deviation of the noise ($\sqrt{\frac{1}{T}\sum_{i} (y_{i} - X_{i}\hat{\beta})^{2}}$), we first need to compute the noise, i.e., the unexplained variance ($y - X\hat{\beta}$) also known as the residuals: ###Code residuals = ts - X @ beta_hat_all print("Shape residuals:", residuals.shape) ###Output _____no_output_____ ###Markdown So, for each voxel ($K=4$), we have a timeseries ($T=100$) with unexplained variance ("noise"). Now, to get the standard deviation across all voxels, we can do the following: ###Code std_noise = np.std(residuals, axis=0) print("Shape noise std:", std_noise.shape) ###Output _____no_output_____ ###Markdown To do the actual normalization step, we simply divide the columns of the pattern matrix (`beta_hat`, which we estimated before) by the estimated noise standard deviation: ###Code # unn = univariate noise normalization # Note that we don't have to do this for each trial (row) separately # due to Numpy broadcasting!
R_unn = beta_hat / std_noise print("Shape R_unn:", R_unn.shape) ###Output _____no_output_____ ###Markdown And let's visualize it: ###Code fig, ax = plt.subplots(figsize=(2, 10)) mapp = ax.imshow(R_unn) cbar = fig.colorbar(mapp) cbar.set_label(r'$t$', fontsize=25, rotation=0, labelpad=10) ax.set_yticks(np.arange(N)) ax.set_xticks(np.arange(K)) ax.set_title(r"$\mathbf{R}_{unn}$", fontsize=20) ax.set_xlabel('Voxels', fontsize=15) ax.set_ylabel('Trials', fontsize=15) plt.show() ###Output _____no_output_____ ###Markdown ToThink (1 point): In fact, univariate noise normalization didn't really change the pattern matrix much. Why do you think this is the case for our simulation data? Hint: check out the parameters for the simulation. YOUR ANSWER HERE

LSA on real data

Alright, enough with all that fake data &mdash; let's work with some real data! We'll use the face perception task data from the *NI-edu* dataset, which we briefly mentioned in the fMRI-introduction course. In the face perception task, participants were presented with images of faces (from the publicly available [Face Research Lab London Set](https://figshare.com/articles/Face_Research_Lab_London_Set/5047666)). In total, frontal face images from 40 different people ("identities") were used, which were either without expression ("neutral") or were smiling. Each face image (from in total 80 faces, i.e., 40 identities $\times$ 2, neutral/smiling) was shown, per participant, 6 times across the 12 runs (3 times per session). Mini ToThink (0 points): Why do you think we show the same image multiple times? Identities were counterbalanced in terms of biological sex (male vs. female) and ethnicity (Caucasian vs. East-Asian vs. Black). The Face Research Lab London Set also contains the age of the people in the stimulus dataset and (average) attractiveness ratings for all faces from an independent set of raters.
In addition, we also had our own participants rate the faces on perceived attractiveness, dominance, and trustworthiness after each session (rating each face, on each dimension, four times in total for robustness). The stimuli were chosen such that we have many different attributes that we could use to model brain responses (e.g., identity, expression, ethnicity, age, average attractiveness, and subjective/personal perceived attractiveness/dominance/trustworthiness). In this paradigm, stimuli were presented for 1.25 seconds and had a fixed interstimulus interval (ISI) of 3.75 seconds. While sub-optimal for univariate "detection-based" analyses, we used a fixed ISI &mdash; rather than jittered &mdash; to make sure it can also be used for "single-trial" multivariate analyses. Each run contained 40 stimulus presentations. To keep the participants attentive, a random selection of 5 stimuli (out of 40) were followed by a rating on either perceived attractiveness, dominance, or trustworthiness using a button-box with eight buttons (four per hand) lasting 2.5 seconds. After the rating, a regular ISI of 3.75 seconds followed. See the figure below for a visualization of the paradigm. ![face_paradigm](https://docs.google.com/drawings/d/e/2PACX-1vQ0FlwZLI_XMHaKkaNchzZvgqT0JXjZAPbH9fccmNvgey-RYR5bKolh85Wctc2YLrjOLtE3Zkd7WXdu/pub?w=1429&h=502) First, let's set up all the data that we need for our LSA model.
Let's see where our data is located: ###Code import os data_dir = os.path.join(os.path.expanduser('~'), 'NI-edu-data') print("Downloading Fmriprep data (+- 175MB) ...\n") !aws s3 sync --no-sign-request s3://openneuro.org/ds003477 {data_dir} --exclude "*" --include "sub-03/ses-1/func/*task-face*run-1*events.tsv" !aws s3 sync --no-sign-request s3://openneuro.org/ds003477 {data_dir} --exclude "*" --include "derivatives/fmriprep/sub-03/ses-1/func/*task-face*run-1*space-T1w*bold.nii.gz" !aws s3 sync --no-sign-request s3://openneuro.org/ds003477 {data_dir} --exclude "*" --include "derivatives/fmriprep/sub-03/ses-1/func/*task-face*run-1*space-T1w*mask.nii.gz" !aws s3 sync --no-sign-request s3://openneuro.org/ds003477 {data_dir} --exclude "*" --include "derivatives/fmriprep/sub-03/ses-1/func/*task-face*run-1*confounds_timeseries.tsv" print("\nDone!") ###Output _____no_output_____ ###Markdown As you can see, it contains both "raw" (not-preprocessed) subject data (e.g., sub-03) and derivatives, which include Fmriprep-preprocessed data: ###Code fprep_sub03 = os.path.join(data_dir, 'derivatives', 'fmriprep', 'sub-03') print("Contents derivatives/fmriprep/sub-03:", os.listdir(fprep_sub03)) ###Output _____no_output_____ ###Markdown There is preprocessed anatomical data and session-specific functional data: ###Code fprep_sub03_ses1_func = os.path.join(fprep_sub03, 'ses-1', 'func') contents = sorted(os.listdir(fprep_sub03_ses1_func)) print("Contents ses-1/func:", '\n'.join(contents)) ###Output _____no_output_____ ###Markdown That's a lot of data! Importantly, we will only use the "face" data ("task-face") in T1 space ("space-T1w"), meaning that this data has not been normalized to a common template (unlike the "space-MNI152NLin2009cAsym" data). Here, we'll only analyze the first run ("run-1") data.
Let's define the functional data, the associated functional brain mask (a binary image indicating which voxels are brain and which are not), and the file with timepoint-by-timepoint confounds (such as motion parameters): ###Code func = os.path.join(fprep_sub03_ses1_func, 'sub-03_ses-1_task-face_run-1_space-T1w_desc-preproc_bold.nii.gz') # Notice this neat little trick: we use the string method "replace" to define # the functional brain mask func_mask = func.replace('desc-preproc_bold', 'desc-brain_mask') confs = os.path.join(fprep_sub03_ses1_func, 'sub-03_ses-1_task-face_run-1_desc-confounds_timeseries.tsv') confs_df = pd.read_csv(confs, sep='\t') confs_df ###Output _____no_output_____ ###Markdown Finally, we need the events-file with onsets, durations, and trial-types for this particular run: ###Code events = os.path.join(data_dir, 'sub-03', 'ses-1', 'func', 'sub-03_ses-1_task-face_run-1_events.tsv') events_df = pd.read_csv(events, sep='\t') events_df.query("trial_type != 'rating' and trial_type != 'response'") ###Output _____no_output_____ ###Markdown Now, it's up to you to use this data to fit an LSA model! ToDo (2 points): in this first ToDo, you define your events and the confounds you want to include. 1. Remove all columns except "onset", "duration", and "trial_type". You should end up with a DataFrame with 40 rows and 3 columns. You can check this with the .shape attribute of the DataFrame. (Note that, technically, you could model the response and rating-related events as well! For now, we'll exclude them.) Name this filtered DataFrame events_df_filt. 2. You also need to select specific columns from the confounds DataFrame, as we don't want to include all confounds! For now, include only the motion parameters (trans_x, trans_y, trans_z, rot_x, rot_y, rot_z). You should end up with a confounds DataFrame with 342 rows and 6 columns. Name this filtered DataFrame confs_df_filt. ###Code ''' Implement your ToDo here.
''' # YOUR CODE HERE raise NotImplementedError() ''' Tests the above ToDo. ''' assert(events_df_filt.shape == (40, 3)) assert(events_df_filt.columns.tolist() == ['onset', 'duration', 'trial_type']) assert(confs_df_filt.shape == (confs_df.shape[0], 6)) assert(all('trans' in col or 'rot' in col for col in confs_df_filt.columns)) print("Well done!") ###Output _____no_output_____ ###Markdown ToDo (2 points): in this ToDo, you'll fit your model! Define a FirstLevelModel object, name this flm_todo and make sure you do the following:

1. Set the correct TR (this is 0.7)
2. Set the slice time reference to 0.5
3. Set the mask image to the one we defined before
4. Use a "glover" HRF
5. Use a "cosine" drift model with a cutoff of 0.01 Hz
6. Do not apply any smoothing
7. Set minimize_memory to true
8. Use an "ols" noise model

Then, fit your model using the functional data (func), filtered confounds, and filtered events we defined before. ###Code ''' Implement your ToDo here. ''' # Ignore the DeprecationWarning! from nilearn.glm.first_level import FirstLevelModel # YOUR CODE HERE raise NotImplementedError() """ Tests the above ToDo. """ from niedu.tests.nipa.week_1 import test_lsa_flm test_lsa_flm(flm_todo, func_mask, func, events_df_filt, confs_df_filt) ###Output _____no_output_____ ###Markdown ToDo (2 points): in this ToDo, you'll run the single-trial contrasts ("against baseline"). To do so, write a for-loop in which you call the compute_contrast method every iteration with a new contrast definition for a new trial. Make sure to output the "betas" (by using output_type='effect_size'). Note that the compute_contrast method returns the "unmasked" results (i.e., from all voxels). Make sure that, for each trial, you mask the results using the func_mask variable and the apply_mask function from Nilearn. Save these masked results (which should be patterns of 66298 voxels) for each trial.
After the loop, stack all results in a 2D array with the different trials in different rows and the (flattened) voxels in columns. This array should be of shape 40 (trials) by 65643 (nr. of masked voxels). The variable name of this array should be R_todo. ###Code ''' Implement your ToDo here. ''' from nilearn.masking import apply_mask # YOUR CODE HERE raise NotImplementedError() ''' Tests the above ToDo. ''' from niedu.tests.nipa.week_1 import test_lsa_R test_lsa_R(R_todo, events_df_filt, flm_todo, func_mask) ###Output _____no_output_____ ###Markdown Disclaimer: In this ToDo, we asked you not to spatially smooth the data. This is often recommended for pattern analyses, as they arguably use information that is encoded in finely distributed patterns. However, several studies have shown that smoothing may sometimes benefit pattern analyses (e.g., Hendriks et al., 2017). In general, in line with the matched filter theorem, we recommend smoothing your data with a kernel equal to how finegrained you think your experimental feature is encoded in the brain patterns.

Dealing with trial correlations

When working with single-trial experimental designs (such as the LSA designs discussed previously), one often occurring problem is correlation between trial predictors and their resulting estimates. Trial correlations in such designs occur when the inter-stimulus interval (ISI) is sufficiently short such that trial predictors overlap and thus correlate. This, in turn, leads to relatively unstable (high-variance) pattern estimates and, as we will see later in this section, trial patterns that correlate with each other (which is sometimes called [pattern drift](https://www.biorxiv.org/content/10.1101/032391v2)). This is also the case in our data from the NI-edu dataset.
In the "face" task, stimuli were presented for 1.25 seconds, followed by a 3.75-second ISI, which causes a slightly positive correlation between a given trial ($i$) and the next trial ($i + 1$) and a slightly negative correlation with the trial after that ($i + 2$). We'll show this below by visualizing the correlation matrix of the design matrix: ###Code dm_todo = pd.read_csv('dm_todo.tsv', sep='\t') dm_todo = dm_todo.iloc[:, :40] fig, ax = plt.subplots(figsize=(8, 8)) # Slightly exaggerate by setting the limits to (-.3, .3) mapp = ax.imshow(dm_todo.corr(), vmin=-0.3, vmax=0.3) # Some styling ax.set_xticks(range(dm_todo.shape[1])) ax.set_xticklabels(dm_todo.columns, rotation=90) ax.set_yticks(range(dm_todo.shape[1])) ax.set_yticklabels(dm_todo.columns) cbar = plt.colorbar(mapp, shrink=0.825) cbar.ax.set_ylabel('Correlation', fontsize=15, rotation=-90) plt.show() ###Output _____no_output_____ ###Markdown ToThink (1 point): Explain why trials (at index $i$) correlate slightly negatively with the second trial coming after it (at index $i + 2$). Hint: try to plot it! YOUR ANSWER HERE The trial-by-trial correlation structure in the design leads to a trial-by-trial correlation structure in the estimated patterns as well (as explained by [Soch et al., 2020](https://www.sciencedirect.com/science/article/pii/S1053811919310407)).
We show this below by computing and visualizing the $N \times N$ correlation matrix of the patterns: ###Code # Load in R_todo if you didn't manage to do the # previous ToDo R_todo = np.load('R_todo.npy') # Compute the NxN correlation matrix R_corr = np.corrcoef(R_todo) fig, ax = plt.subplots(figsize=(8, 8)) mapp = ax.imshow(R_corr, vmin=-1, vmax=1) # Some styling ax.set_xticks(range(dm_todo.shape[1])) ax.set_xticklabels(dm_todo.columns, rotation=90) ax.set_yticks(range(dm_todo.shape[1])) ax.set_yticklabels(dm_todo.columns) cbar = plt.colorbar(mapp, shrink=0.825) cbar.ax.set_ylabel('Correlation', fontsize=15, rotation=-90) plt.show() ###Output _____no_output_____ ###Markdown This correlation structure across trials poses a problem for representational similarity analysis (the topic of week 3) especially. Although this issue is still debated and far from solved, in this section we highlight two possible solutions to this problem: least-squares separate designs and temporal "uncorrelation".

Least-squares separate (LSS)

The least-squares separate (LSS) design is a slight modification of the LSA design ([Mumford et al., 2014](https://www.sciencedirect.com/science/article/pii/S105381191400768X)). In LSS, you fit a separate model per trial. Each model contains one regressor for the trial that you want to estimate and, for each condition in your experimental design (in case of a categorical design), another regressor containing all other trials. So, suppose you have a run with 30 trials across 3 conditions (A, B, and C); using an LSS approach, you'd fit 30 different models, each containing four regressors (one for the single trial, one for all (other) trials of condition A, one for all (other) trials of condition B, and one for all (other) trials of condition C). The apparent upside of this is that it strongly reduces the collinearity of trials close in time, which in turn makes the trial parameters more efficient to estimate.
ToThink (1 point): Suppose my experiment contains 90 stimuli which all belong to their own condition (i.e., there are 90 conditions). Explain why LSS provides no improvement over LSA in this case. YOUR ANSWER HERE We'll show this for our example data. It's a bit complicated (and not necessarily the best/fastest/clearest way), but the comments will explain what it's doing. Essentially, what we're doing, for each trial, is to extract that regressor for a standard LSA design and, for each condition, create a single regressor by summing all single-trial regressors from that condition together. ###Code # First, we'll make a standard LSA design matrix lsa_dm = make_first_level_design_matrix( frame_times=t_fmri, # we defined this earlier for interpolation! events=events_sim, hrf_model='glover', drift_model=None # assume data is already high-pass filtered ) # Then, we will loop across trials, creating a separate design matrix per trial lss_dms = [] # we'll store the design matrices here # Do not include last column, the intercept, in the loop for i, col in enumerate(lsa_dm.columns[:-1]): # Extract the single-trial predictor single_trial_reg = lsa_dm.loc[:, col] # Now, we need to create a predictor per condition # (one for A, one for B). We'll store these in "other_regs" other_regs = [] # Loop across unique conditions ("A" and "B") for con in np.unique(conditions): # Which columns belong to the current condition? idx = con == np.array(conditions) # Make sure NOT to include the trial we're currently estimating! idx[i] = False # Also, exclude the intercept (last column) idx = np.append(idx, False) # Now, extract all N-1 regressors con_regs = lsa_dm.loc[:, idx] # And sum them together!
# This creates a single predictor for the current # condition con_reg_all = con_regs.sum(axis=1) # Save for later other_regs.append(con_reg_all) # Concatenate the condition regressors (one for A, one for B) other_regs = pd.concat(other_regs, axis=1) # Concatenate the single-trial regressor and two condition regressors this_dm = pd.concat((single_trial_reg, other_regs), axis=1) # Add back an intercept! this_dm.loc[:, 'intercept'] = 1 # Give it sensible column names; sorted() matches the np.unique order used above this_dm.columns = ['trial_to_estimate'] + sorted(set(conditions)) + ['intercept'] # Save for later lss_dms.append(this_dm) print("We have created %i design matrices!" % len(lss_dms)) ###Output _____no_output_____ ###Markdown Alright, now let's check out the first five design matrices, which should estimate the first five trials and contain 4 regressors each (one for the single trial, two for the separate conditions, and one for the intercept): ###Code fig, axes = plt.subplots(ncols=5, figsize=(15, 10)) for i, ax in enumerate(axes.flatten()): plot_design_matrix(lss_dms[i], ax=ax) ax.set_title("Design for trial %i" % (i+1), fontsize=20) plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown ToDo (optional; 1 bonus point): Can you implement an LSS approach to estimate our patterns on the real data? You can reuse the flm_todo you created earlier; the only thing you need to change each time is the design matrix. Because we have 40 trials, you need to fit 40 different models (which takes a while). Note that our experimental design does not necessarily have discrete categories, so your LSS design matrices should only have 3 columns: one for the trial to estimate, one for all other trials, and one for the intercept. After fitting each model, compute the trial-against-baseline contrast for the single trial and save the parameter ("beta") map. Then, after the loop, create the same pattern matrix as the previous ToDo, which should also have the same shape, but name it this time R_todo_lss.
Note, this is a very hard ToDo, but a great way to test your programming skills :-) ###Code ''' Implement your ToDo here. Note that we already created the LSA design matrix for you. ''' import nibabel as nib func_img = nib.load(func) n_vol = func_img.shape[-1] lsa_dm = make_first_level_design_matrix( frame_times=np.linspace(0, n_vol * 0.7, num=n_vol, endpoint=False), events=events_df_filt, drift_model=None ) # YOUR CODE HERE raise NotImplementedError() ''' Tests the above ToDo. ''' from niedu.tests.nipa.week_1 import test_lss test_lss(R_todo_lss, func, flm_todo, lsa_dm, confs_df_filt) ###Output _____no_output_____ ###Markdown Tip: Programming your own pattern estimation pipeline allows you to be very flexible and is a great way to practice your programming skills, but if you want a more "pre-packaged" tool, I recommend the nibetaseries package. The package's name is derived from a specific analysis technique called "beta-series correlation", which is a type of analysis that allows for resting-state like connectivity analyses of task-based fMRI data (which we won't discuss in this course). For this technique, you need to estimate single-trial activity patterns &mdash; just like we need to do for pattern analyses! I've used this package to estimate patterns for pattern analysis and I highly recommend it!

Temporal uncorrelation

Another method to deal with trial-by-trial correlations is the "uncorrelation" method by [Soch and colleagues (2020)](https://www.sciencedirect.com/science/article/pii/S1053811919310407). As opposed to the LSS method, the uncorrelation approach takes care of the correlation structure in the data in a post-hoc manner.
It does so, in essence, by "removing" the correlations in the data that are due to the correlations in the design, in a way that is similar to what prewhitening does in generalized least squares.

Formally, the "uncorrelated" patterns ($R_{\mathrm{unc}}$) are estimated by (matrix) multiplying the square root ($^{\frac{1}{2}}$) of the covariance matrix of the LSA design matrix ($X^{T}X$) with the patterns ($R$):

\begin{align}
R_{\mathrm{unc}} = (X^{T}X)^{\frac{1}{2}}R
\end{align}

Here, $(X^{T}X)^{\frac{1}{2}}$ represents the "whitening" matrix which uncorrelates the patterns. Let's implement this in code. Note that we can use the `sqrtm` function from the `scipy.linalg` package to take the square root of a matrix:

###Code
from scipy.linalg import sqrtm

# Design matrix
X = dm_todo.to_numpy()

# Note: sqrtm computes the *matrix* square root, (X'X)^(1/2)
R_unc = sqrtm(X.T @ X) @ R_todo
###Output
_____no_output_____
###Markdown
Experimental design and pattern estimation

This week's lab will be about the basics of pattern analysis of (f)MRI data. We assume that you've worked through the two Nilearn tutorials already. Functional MRI data are most often stored as 4D data, with 3 spatial dimensions ($X$, $Y$, and $Z$) and 1 temporal dimension ($T$). But most pattern analyses assume that data are formatted in 2D: trials ($N$) by patterns (often a subset of $X$, $Y$, and $Z$). Where did the time dimension ($T$) go? And how do we "extract" the patterns of the $N$ trials? In this lab, we'll take a look at various methods to estimate patterns from fMRI time series. Because these methods often depend on your experimental design (and your research question, of course), the first part of this lab will discuss some experimental design considerations.
After this more theoretical part, we'll dive into how to estimate patterns from fMRI data.

**What you'll learn**: At the end of this tutorial, you ...

* Understand the most important experimental design factors for pattern analyses;
* Understand and are able to implement different pattern estimation techniques

**Estimated time needed to complete**: 8-12 hours

###Code
# We need to limit the amount of threads numpy can use, otherwise
# it tends to hog all the CPUs available when using Nilearn
import os
os.environ['MKL_NUM_THREADS'] = '1'
os.environ['OPENBLAS_NUM_THREADS'] = '1'

import numpy as np
###Output
_____no_output_____
###Markdown
Experimental design

Before you can do any fancy machine learning or representational similarity analysis (or any other pattern analysis), there are several decisions you need to make and steps to take in terms of study design, (pre)processing, and structuring your data. Roughly, there are three steps to take:

1. Design your study in a way that's appropriate to answer your question through a pattern analysis; this, of course, needs to be done *before* data acquisition!
2. Estimate/extract your patterns from the (functional) MRI data;
3. Structure and preprocess your data appropriately for pattern analyses;

While we won't go into all the design factors that make for an *efficient* pattern analysis (see [this article](http://www.sciencedirect.com/science/article/pii/S105381191400768X) for a good review), we will now discuss/demonstrate some design considerations and how they impact the rest of the MVPA pipeline.

Within-subject vs. between-subject analyses

As always, your experimental design depends on your specific research question. If, for example, you're trying to predict schizophrenia patients from healthy controls based on structural MRI, your experimental design is going to be different than when you, for example, are comparing fMRI activity patterns in the amygdala between trials targeted to induce different emotions.
Crucially, with *design* we mean the factors that you as a researcher control: e.g., which schizophrenia patients and healthy controls to scan in the former example and which emotion trials to present at what time. These two examples indicate that experimental design considerations are quite different when you are trying to model a factor that varies *between subjects* (the schizophrenia vs. healthy control example) versus a factor that varies *within subjects* (the emotion trials example).

ToDo/ToThink (1.5 points): before continuing, let's practice a bit. For the three articles below, determine whether they used a within-subject or between-subject design.

* https://www.nature.com/articles/nn1444 (machine learning based)
* http://www.jneurosci.org/content/33/47/18597.short (RSA based)
* https://www.sciencedirect.com/science/article/pii/S1053811913000074 (machine learning based)

Assign either 'within' or 'between' to the variables corresponding to the studies above (i.e., study_1, study_2, study_3).

###Code
'''
Implement the ToDo here.
'''
study_1 = ''  # fill in 'within' or 'between'
study_2 = ''  # fill in 'within' or 'between'
study_3 = ''  # fill in 'within' or 'between'

# YOUR CODE HERE
raise NotImplementedError()

'''
Tests the above ToDo.
'''
for this_study in [study_1, study_2, study_3]:
    if not this_study:  # if empty string
        raise ValueError("You haven't filled in anything!")
    else:
        if this_study not in ['within', 'between']:
            raise ValueError("Fill in either 'within' or 'between'!")

print("Your answer will be graded by hidden tests.")
###Output
_____no_output_____
###Markdown
Note that, while we think it is a useful way to think about different types of studies, it is possible to use "hybrid" designs and analyses. For example, you could compare patterns from a particular condition (within-subject) across different participants (between-subject). This is, to our knowledge, not very common though, so we won't discuss it here.
ToThink (1 point): Suppose a researcher wants to implement a decoding analysis in which he/she aims to predict schizophrenia (vs. healthy control) from gray-matter density patterns in the orbitofrontal cortex. Is this an example of a within-subject or between-subject pattern analysis? Can it be either one? Why (not)?

YOUR ANSWER HERE

That said, let's talk about something that is not only important for univariate MRI analyses, but also for pattern-based multivariate MRI analyses: confounds.

Confounds

For most task-based MRI analyses, we try to relate features from our experiment (stimuli, responses, participant characteristics; let's call these $\mathbf{S}$) to brain features (this is not restricted to "activity patterns"; let's call these $\mathbf{R}$\*). Ideally, we have designed our experiment such that any association between our experimental factor of interest ($\mathbf{S}$) and brain data ($\mathbf{R}$) can *only* be due to our experimental factor, not something else. If another factor besides our experimental factor of interest can explain this association, this "other factor" may be a *confound* (let's call this $\mathbf{C}$). If we care to conclude anything about our experimental factor of interest and its relation to our brain data, we should try to minimize any confounding factors in our design.

---
\* Note that the notation for experimental variables ($\mathbf{S}$) and brain features ($\mathbf{R}$) is different from what we used in the previous course, in which we used $\mathbf{X}$ for experimental variables and $\mathbf{y}$ for brain signals. We did this to conform to the convention to use $\mathbf{X}$ for the set of independent variables and $\mathbf{y}$ for dependent variables. In some pattern analyses (such as RSA), however, this independent/dependent variable distinction does not really apply, so that's why we'll stick to the more generic $\mathbf{R}$ (for brain features) and $\mathbf{S}$ (for experimental features) terms.
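To make this concrete, below is a toy illustration (all numbers are made up and this is not part of the lab's code) of how one could quantify the association between a binary group factor $\mathbf{S}$ and a candidate confound $\mathbf{C}$ before running any pattern analysis:

```python
# Toy illustration with made-up numbers: quantifying the association
# between a binary experimental factor S (e.g., patient = 1, control = 0)
# and a candidate confound C (e.g., age in years)
import numpy as np

S = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])            # group labels
C = np.array([45, 52, 39, 48, 50, 25, 31, 28, 24, 30])  # hypothetical ages

# Pearson correlation with a binary variable (point-biserial correlation);
# a value far from zero signals a potentially confounded design
r_SC = np.corrcoef(S, C)[0, 1]
print(r_SC)
```

In this made-up example the groups differ strongly in age, so the correlation is high, and any group difference found in $\mathbf{R}$ could just as well reflect age as diagnosis.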
Note: In some situations, you may only be interested in maximizing your explanatory/predictive power; in that case, you could argue that confounds are not a problem. The article by Hebart & Baker (2018) provides an excellent overview of this issue.

Statistically speaking, you should design your experiment in such a way that there are no associations (correlations) between $\mathbf{S}$ and $\mathbf{C}$, such that any association between $\mathbf{S}$ and $\mathbf{R}$ can *only* be due to $\mathbf{S}$. Note that this is not trivial, because this presumes that you (1) know which factors might confound your study and (2) if you know these factors, that they are measured properly ([Westfall & Yarkoni, 2016](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0152719)).

Minimizing confounds in between-subject studies is notably harder than in within-subject designs, especially when dealing with clinical populations that are hard to acquire, because it is simply easier to experimentally control within-subject factors (especially when they are stimulus- rather than response-based). There are ways to deal with confounds post-hoc, but ideally you prevent confounds in the first place. For an overview of confounds in (multivariate/decoding) neuroimaging analyses and a proposed post-hoc correction method, see [this article](https://www.sciencedirect.com/science/article/pii/S1053811918319463) (apologies for the shameless self-promotion) and [this follow-up article](https://www.biorxiv.org/content/10.1101/2020.08.17.255034v1.abstract).

In sum, as with *any* (neuroimaging) analysis, a good experimental design is one that minimizes the possibility of confounds, i.e., associations between factors that are not of interest ($\mathbf{C}$) and experimental factors that *are* of interest ($\mathbf{S}$).

ToThink (0 points): Suppose that you are interested in the neural correlates of ADHD.
You want to compare multivariate resting-state fMRI networks between ADHD patients and healthy controls. What is the experimental factor ($\mathbf{S}$)? And can you think of a factor that, when unaccounted for, presents a major confound ($\mathbf{C}$) in this study/analysis?

ToThink (1 point): Suppose that you're interested in the neural representation of "cognitive effort". You think of an experimental design in which you show participants either easy arithmetic problems, which involve only single-digit addition/subtraction (e.g., $2+5-4$), or hard(er) arithmetic problems, which involve two-digit addition/subtraction and multiplication (e.g., $12\times4-2\times11$), for which they have to respond whether the solution is odd (press left) or even (press right) as fast as possible. You then compare patterns between easy and hard trials. What is the experimental factor of interest ($\mathbf{S}$) here? And what are possible confounds ($\mathbf{C}$) in this design? Name at least two. (Note: this is a separate hypothetical experiment from the previous ToThink.)

YOUR ANSWER HERE

What makes up a "pattern"?

So far, we talked a lot about "patterns", but what do we mean by that term? There are different options with regard to *what you choose as your unit of measurement* that makes up your pattern. The vast majority of pattern analyses in functional MRI use patterns of *activity estimates*, i.e., the same unit of measurement &mdash; relative (de)activation &mdash; as is common in standard mass-univariate analyses. For example, decoding object category (e.g., images of faces vs. images of houses) from fMRI activity patterns in inferotemporal cortex is an example of a pattern analysis that uses *activity estimates* as its unit of measurement. However, you are definitely not limited to using *activity estimates* for your patterns.
For example, you could apply pattern analyses to structural data (e.g., patterns of voxelwise gray-matter volume values, like in [voxel-based morphometry](https://en.wikipedia.org/wiki/Voxel-based_morphometry)) or to functional connectivity data (e.g., patterns of time series correlations between voxels, or even topological properties of brain networks). (In fact, the connectivity examples from the Nilearn tutorial represent a way to estimate these connectivity features, which can be used in pattern analyses.) In short, pattern analyses can be applied to patterns composed of *any* type of measurement or metric!

Now, let's get a little more technical. Usually, as mentioned in the beginning, pattern analyses represent the data as a 2D array of brain patterns. Let's call this $\mathbf{R}$. The rows of $\mathbf{R}$ represent different instances of patterns (sometimes called "samples" or "observations") and the columns represent different brain features (e.g., voxels; sometimes simply called "features"). Note that we thus lose all spatial information by "flattening" our patterns into 1D rows!

Let's call the number of samples $N$ and the number of brain features $K$. We can thus represent $\mathbf{R}$ as a $N\times K$ matrix (2D array):

\begin{align}
\mathbf{R} = \begin{bmatrix}
R_{1,1} & R_{1,2} & R_{1,3} & \dots & R_{1,K}\\
R_{2,1} & R_{2,2} & R_{2,3} & \dots & R_{2,K}\\
R_{3,1} & R_{3,2} & R_{3,3} & \dots & R_{3,K}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
R_{N,1} & R_{N,2} & R_{N,3} & \dots & R_{N,K}\\
\end{bmatrix}
\end{align}

As discussed before, the values themselves (e.g., $R_{1,1}$, $R_{1,2}$, $R_{3,6}$) represent whatever you chose for your patterns (fMRI activity, connectivity estimates, VBM, etc.).
What is represented by the rows (samples/observations) of $\mathbf{R}$ depends on your study design: in between-subject studies, these are usually participants, while in within-subject studies, these samples represent trials (or averages of trials or sometimes runs). The columns of $\mathbf{R}$ represent the different (brain) features in your pattern; for example, these may be different voxels (or sensors/magnetometers in EEG/MEG), vertices (when working with cortical surfaces), edges in functional brain networks, etc. etc.

Let's make it a little bit more concrete. We'll make up some random data below that represents a typical data array in pattern analyses:

###Code
import numpy as np

N = 100  # e.g. trials
K = 250  # e.g. voxels
R = np.random.normal(0, 1, size=(N, K))
R
###Output
_____no_output_____
###Markdown
Let's visualize this:

###Code
import matplotlib.pyplot as plt
%matplotlib inline

plt.figure(figsize=(12, 4))
plt.imshow(R, aspect='auto')
plt.xlabel('Brain features', fontsize=15)
plt.ylabel('Samples', fontsize=15)
plt.title(r'$\mathbf{R}_{N\times K}$', fontsize=20)
cbar = plt.colorbar()
cbar.set_label('Feature value', fontsize=13, rotation=270, labelpad=10)
plt.show()
###Output
_____no_output_____
###Markdown
ToDo (1 point): Extract the pattern of the 42nd trial and store it in a variable called trial42. Then, extract the values of the 187th brain feature across all trials and store it in a variable called feat187. Lastly, extract the feature value of the 60th trial and the 221st feature and store it in a variable called t60_f221. Remember: Python uses zero-based indexing (the first value in an array is indexed by 0)!

###Code
'''
Implement the ToDo here.
'''
# YOUR CODE HERE
raise NotImplementedError()

'''
Tests the above ToDo.
'''
from niedu.tests.nipa.week_1 import test_R_indexing
test_R_indexing(R, trial42, feat187, t60_f221)
###Output
_____no_output_____
###Markdown
Alright, let's practice a little bit more.
We included whole-brain VBM data for 20 subjects in the `vbm/` subfolder:

###Code
import os
sorted(os.listdir('vbm'))
###Output
_____no_output_____
###Markdown
The VBM data represents spatially normalized (to MNI152, 2mm), whole-brain voxelwise gray matter volume estimates (read more about VBM [here](https://en.wikipedia.org/wiki/Voxel-based_morphometry)).

Let's inspect the data from a single subject:

###Code
import os
import nibabel as nib
from nilearn import plotting

sub_01_vbm_path = os.path.join('vbm', 'sub-01.nii.gz')
sub_01_vbm = nib.load(sub_01_vbm_path)
print("Shape of Nifti file: ", sub_01_vbm.shape)

# Let's plot it as well
plotting.plot_anat(sub_01_vbm)
plt.show()
###Output
_____no_output_____
###Markdown
As you can see, the VBM data is a 3D array of shape 91 ($X$) $\times$ 109 ($Y$) $\times$ 91 ($Z$) (representing voxels). These are the spatial dimensions associated with the standard MNI152 (2 mm) template provided by FSL. As VBM is structural (not functional!) data, there is no time dimension ($T$).

Now, suppose that we want to do a pattern analysis on the data of all 20 subjects. We should then create a 2D array of shape 20 (subjects) $\times\ K$ (number of voxels, i.e., $91 \times 109 \times 91$). To do so, we need to create a loop over all files, load them in, "flatten" the data, and ultimately stack them into a 2D array. Before you'll implement this as part of the next ToDo, we will show you a neat Python function called `glob`, which allows you to simply find files using "[wildcards](https://en.wikipedia.org/wiki/Wildcard_character)":

###Code
from glob import glob
###Output
_____no_output_____
###Markdown
Let's try to get all our VBM subject data into a list using this function: ###Code # Let's define a "search string"; we'll use the os.path.join function # to make sure this works both on Linux/Mac and Windows search_str = os.path.join('vbm', 'sub-*.nii.gz') vbm_files = glob(search_str) # this is also possible: vbm_files = glob(os.path.join('vbm', 'sub-*.nii.gz')) # Let's print the returned list print(vbm_files) ###Output _____no_output_____ ###Markdown As you can see, *the list is not alphabetically sorted*, so let's fix that with the `sorted` function: ###Code vbm_files = sorted(vbm_files) print(vbm_files) # Note that we could have done that with a single statement # vbm_files = sorted(glob(os.path.join('vbm', 'sub-*.nii.gz'))) # But also remember: shorter code is not always better! ###Output _____no_output_____ ###Markdown ToDo (2 points): Create a 2D array with the vertically stacked subject-specific (flattened) VBM patterns, in which the first subject should be the first row. You may want to pre-allocate this array before starting your loop (using, e.g., np.zeros). Also, the enumerate function may be useful when writing your loop. Try to google how to flatten an N-dimensional array into a single vector. Store the final 2D array in a variable named R_vbm. ###Code ''' Implement the ToDo here. ''' # YOUR CODE HERE raise NotImplementedError() ''' Tests the above ToDo. ''' from niedu.tests.nipa.week_1 import test_R_vbm_loop test_R_vbm_loop(R_vbm) ###Output _____no_output_____ ###Markdown Tip: While it is a good exercise to load in the data yourself, you can also easily load in and concatenate a set of Nifti files using Nilearn's concat_imgs function (which returns a 4D Nifti1Image, with the different patterns as the fourth dimension). You'd still have to reorganize this data into a 2D array, though. 
###Code
# Run this cell after you're done with the ToDo
# This will remove all numpy arrays from memory,
# clearing up RAM for the next sections
%reset -f array
###Output
_____no_output_____
###Markdown
Patterns as "points in space"

Before we continue with the topic of pattern estimation, there is one idea that we'd like to introduce: thinking of patterns as points (i.e., coordinates) in space. Thinking of patterns this way is helpful for understanding both machine learning based analyses and representational similarity analysis. While for some, this idea might sound trivial, we believe it's worth going over anyway.

Now, let's make this idea more concrete. Suppose we have estimated fMRI activity patterns for 20 trials (rows of $\mathbf{R}$). Now, we will also assume that those patterns consist of only two features (e.g., voxels; columns of $\mathbf{R}$), because this will make visualizing patterns as points in space easier than when we choose a larger number of features.

Alright, let's simulate and visualize the data (as a 2D array):

###Code
K = 2   # features (voxels)
N = 20  # samples (trials)
R = np.random.multivariate_normal(np.zeros(K), np.eye(K), size=N)
print("Shape of R:", R.shape)

# Plot 2D array as heatmap
fig, ax = plt.subplots(figsize=(2, 10))
mapp = ax.imshow(R)
cbar = fig.colorbar(mapp, pad=0.1)
cbar.set_label('Feature value', fontsize=13, rotation=270, labelpad=15)
ax.set_yticks(np.arange(N))
ax.set_xticks(np.arange(K))
ax.set_title(r"$\mathbf{R}$", fontsize=20)
ax.set_xlabel('Voxels', fontsize=15)
ax.set_ylabel('Trials', fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
Now, we mentioned that each pattern (row of $\mathbf{R}$, i.e., $\mathbf{R}_{i}$) can be interpreted as a point in 2D space. With space, here, we mean a space where each feature (e.g., voxel; column of $\mathbf{R}$, i.e., $\mathbf{R}_{j}$) represents a separate axis.
In our simulated data, we have two features (e.g., voxel 1 and voxel 2), so our space will have two axes:

###Code
plt.figure(figsize=(5, 5))
plt.title("A two-dimensional space", fontsize=15)
plt.grid()
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.xlabel('Activity voxel 1', fontsize=13)
plt.ylabel('Activity voxel 2', fontsize=13)
plt.show()
###Output
_____no_output_____
###Markdown
Within this space, each of our patterns (samples) represents a point. The values of each pattern represent the *coordinates* of its location in this space. For example, the coordinates of the first pattern are:

###Code
print(R[0, :])
###Output
_____no_output_____
###Markdown
As such, we can plot this pattern as a point in space:

###Code
plt.figure(figsize=(5, 5))
plt.title("A two-dimensional space", fontsize=15)
plt.grid()

# We use the "scatter" function to plot this point, but
# we could also have used plt.plot(R[0, 0], R[0, 1], marker='o')
plt.scatter(R[0, 0], R[0, 1], marker='o', s=75)
plt.axhline(0, c='k')
plt.axvline(0, c='k')
plt.xlabel('Activity voxel 1', fontsize=13)
plt.ylabel('Activity voxel 2', fontsize=13)
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.show()
###Output
_____no_output_____
###Markdown
If we do this for all patterns, we get an ordinary scatter plot of the data:

###Code
plt.figure(figsize=(5, 5))
plt.title("A two-dimensional space", fontsize=15)
plt.grid()
plt.axhline(0, c='k')
plt.axvline(0, c='k')
plt.scatter(R[:, 0], R[:, 1], marker='o', s=75, zorder=3)
plt.xlabel('Activity voxel 1', fontsize=13)
plt.ylabel('Activity voxel 2', fontsize=13)
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.show()
###Output
_____no_output_____
###Markdown
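A nice side effect of the points-in-space view (a small aside, not part of the lab's own code) is that "how similar are two patterns?" becomes "how far apart are their points?":

```python
# Minimal sketch: with patterns as points, pattern (dis)similarity can be
# quantified as the distance between their coordinates. Toy values below.
import numpy as np

p1 = np.array([1.0, 2.0])  # pattern 1 (two "voxels")
p2 = np.array([4.0, 6.0])  # pattern 2

# Euclidean distance: the length of the straight line between the points
dist = np.linalg.norm(p1 - p2)  # sqrt((1-4)**2 + (2-6)**2) = 5.0
```

Distances like these are at the heart of representational similarity analysis, the topic of week 3.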
Practically, pattern analysis algorithms usually expect the data as a 2D array, but (in our experience) the operations and mechanisms implemented by those algorithms are easiest to explain and to understand from the "points in space" perspective.You might think, "but how does this work for data with more than two features?" Well, the idea of patterns as points in space remains the same: each feature represents a new dimension (or "axis"). For three features, this means that a pattern represents a point in 3D (X, Y, Z) space; for four features, a pattern represents a point in 4D space (like a point moving in 3D space) ... but what about a pattern with 14 features? Or 500? Actually, this is impossible to visualize or even make sense of mentally. As the famous artificial intelligence researcher Geoffrey Hinton put it:> "To deal with ... a 14 dimensional space, visualize a 3D space and say 'fourteen' very loudly. Everyone does it." (Geoffrey Hinton)The important thing to understand, though, is that most operations, computations, and algorithms that deal with patterns do not care about whether your data is 2D (two features) or 14D (fourteen features) &mdash; we just have to trust the mathematicians that whatever we do on 2D data will generalize to $K$-dimensional data :-)That said, people still try to visualize >2D data using *dimensionality reduction* techniques. These techniques try to project data to a lower-dimensional space. For example, you can transform a dataset with 500 features (i.e., a 500-dimensional dataset) to a 2D dimensional dataset using techniques such as principal component analysis (PCA), Multidimensional Scaling (MDS), and t-SNE. For example, PCA tries to a subset of uncorrelated lower-dimensional features (e.g., 2) from linear combinations of high-dimensional features (e.g., 4) that still represent as much variance of the high-dimensional components as possible. 
We'll show you an example below using an implementation of PCA from the machine learning library [scikit-learn](https://scikit-learn.org/stable/), which we'll use extensively in next week's lab:

###Code
from sklearn.decomposition import PCA

# Let's create a dataset with 100 samples and 4 features
R4D = np.random.normal(0, 1, size=(100, 4))
print("Shape R4D:", R4D.shape)

# We'll instantiate a PCA object that will
# transform our data into 2 components
pca = PCA(n_components=2)

# Fit and transform the data from 4D to 2D
R2D = pca.fit_transform(R4D)
print("Shape R2D:", R2D.shape)

# Plot the result
plt.figure(figsize=(5, 5))
plt.scatter(R2D[:, 0], R2D[:, 1], marker='o', s=75, zorder=3)
plt.axhline(0, c='k')
plt.axvline(0, c='k')
plt.xlabel('PCA component 1', fontsize=13)
plt.ylabel('PCA component 2', fontsize=13)
plt.grid()
plt.xlim(-4, 4)
plt.ylim(-4, 4)
plt.show()
###Output
_____no_output_____
###Markdown
ToDo (optional): As discussed, PCA is a specific dimensionality reduction technique that uses linear combinations of features to project the data to a lower-dimensional space with fewer "components". Linear combinations are simply weighted sums of high-dimensional features. In a 4D space that is projected to 2D, PCA component 1 might be computed as $\mathbf{R}_{j=1}\theta_{1}+\mathbf{R}_{j=2}\theta_{2}+\mathbf{R}_{j=3}\theta_{3}+\mathbf{R}_{j=4}\theta_{4}$, where $\mathbf{R}_{j=1}$ represents the first feature of $\mathbf{R}$ and $\theta_{1}$ represents the weight for the first feature. The weights of the fitted PCA model can be accessed by, confusingly, pca.components_ (shape: $K_{lower} \times K_{higher}$). Using these weights, can you recompute the lower-dimensional features from the higher-dimensional features yourself? Try to plot it like the figure above and check whether it matches.

###Code
'''
Implement the (optional) ToDo here.
'''
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Note that dimensionality reduction is often used for visualization, but it can also be used as a preprocessing step in pattern analyses. We'll take a look at this in more detail next week.

Alright, back to the topic of pattern extraction/estimation. You saw that preparing VBM data for (between-subject) pattern analyses is actually quite straightforward, but unfortunately, preparing functional MRI data for pattern analysis is a little more complicated. The reason is that we are dealing with time series in which different trials ($N$) are "embedded". The next section discusses different methods to "extract" (estimate) these trial-wise patterns.

Estimating patterns

As we mentioned before, we should prepare our data as an $N$ (samples) $\times$ $K$ (features) array. With fMRI data, our data is formatted as a $X \times Y \times Z \times T$ array; we can flatten the $X$, $Y$, and $Z$ dimensions, but we still have to find a way to "extract" patterns for our $N$ trials from the time series (i.e., the $T$ dimension).

Important side note: single trials vs. (runwise) average trials

In this section, we often assume that our "samples" refer to different *trials*, i.e., single instances of a stimulus or response (or another experimentally-related factor). This is, however, not the only option. Sometimes, researchers choose to treat multiple repetitions of a trial as a single sample or multiple trials within a condition as a single sample. For example, suppose you design a simple passive-viewing experiment with images belonging to one of three conditions: faces, houses, and chairs. Each condition has ten exemplars (face1, face2, ..., face10, house1, house2, ..., house10, chair1, chair2, ... , chair10) and each exemplar/item is repeated six times. So, in total there are 3 (conditions) $\times$ 10 (exemplars) $\times$ 6 (repetitions) = 180 trials.
Because you don't want to bore the participant to death, you split the 180 trials into two runs (90 each). Now, there are different ways to define your samples. One is to treat every single trial as a sample (so you'll have 180 samples). Another way is to treat each exemplar as a sample. If you do so, you'll have to "pool" the pattern estimates across all 6 repetitions (so you'll have $10 \times 3 = 30$ samples). And yet another way is to treat each condition as a sample, so you'll have to pool the pattern estimates across all 6 repetitions and 10 exemplars per condition (so you'll end up with only 3 samples). Lastly, with respect to the latter two approaches, you may choose to only average repetitions and/or exemplars *within* runs. So, for two runs, you end up with either $10 \times 3 \times 2 = 60$ samples (when averaging across repetitions only) or $3 \times 2 = 6$ samples (when averaging across exemplars and repetitions).

Whether you should perform your pattern analysis on the trial, exemplar, or condition level, and whether you should estimate these patterns across runs or within runs, depends on your research question and analysis technique. For example, if you want to decode exemplars from each other, you obviously should not average across exemplars. Also, some experiments may not have different exemplars per condition (or do not have categorical conditions at all). With respect to the importance of analysis technique: when applying machine learning analyses to fMRI data, people often prefer to split their trials across many (short) runs and &mdash; if using a categorical design &mdash; prefer to estimate a single pattern per run. This is because samples across runs are not temporally autocorrelated, which is an important assumption in machine learning based analyses.
Lastly, for any pattern analysis, averaging across different trials will increase the signal-to-noise ratio (SNR) for any sample (because you average out noise), but will decrease the statistical power of the analysis (because you have fewer samples). Long story short: whatever you treat as a sample &mdash; single trials, (runwise) exemplars or (runwise) conditions &mdash; depends on your design, research question, and analysis technique. In the rest of the tutorial, we will usually refer to samples as "trials", as this scenario is easiest to simulate and visualize, but remember that this term may equally well refer to (runwise) exemplar-average or condition-average patterns.

---

To make the issue of estimating patterns from time series a little more concrete, let's simulate some signals. We'll assume that we have a very simple experiment with two conditions (A, B) with ten trials each (interleaved, i.e., ABABAB...AB), a trial duration of 1 second, spaced evenly within a single run of 200 seconds (with a TR of 2 seconds, so 100 timepoints).

Note that you are not necessarily limited to discrete categorical designs for all pattern analyses! While for machine learning-based methods (topic of week 2) it is common to have a design with a single categorical feature of interest (or sometimes a single continuous one), representational similarity analyses (topic of week 3) are often applied to data with more "rich" designs (i.e., designs that include many, often continuously varying, factors of interest). Also, using twenty trials is probably way too few for any pattern analysis, but it'll make the examples (and visualizations) in this section easier to understand.

Alright, let's get to it.
###Code
TR = 2
N = 20   # 2 x 10 trials
T = 200  # duration in seconds

# t_pad is a little baseline at the
# start and end of the run
t_pad = 10

onsets = np.linspace(t_pad, T - t_pad, N, endpoint=False)
durations = np.ones(onsets.size)
conditions = ['A', 'B'] * (N // 2)
print("Onsets:", onsets, end='\n\n')
print("Conditions:", conditions)
###Output
_____no_output_____
###Markdown
We'll use the `simulate_signal` function used in the introductory course to simulate the data. This function is like a GLM in reverse: it assumes that a signal ($R$) is generated as a linear combination between (HRF-convolved) experimental features ($\mathbf{S}$) weighted by some parameters ( $\beta$ ) plus some additive noise ($\epsilon$), and simulates the signal accordingly (you can check out the function by running `simulate_signal??` in a new code cell). Because we simulate the signal, we can use "ground-truth" activation parameters ( $\beta$ ). In this simulation, we'll determine that the signal responds more strongly to trials of condition A ($\beta = 0.8$) than trials of condition B ($\beta = 0.2$) in *even* voxels (voxel 0, 2, etc.) and vice versa for *odd* voxels (voxel 1, 3, etc.):

###Code
params_even = np.array([0.8, 0.2])
params_odd = 1 - params_even
###Output
_____no_output_____
###Markdown
ToThink (0 points): Given these simulation parameters, what do you think the corresponding $N\times K$ pattern array ($\mathbf{R}$) would roughly look like visually (assuming an efficient pattern estimation method)?

Alright, we simulate some data for, let's say, four voxels ($K = 4$). (Again, you'll usually perform pattern analyses on many more voxels.)

###Code
from niedu.utils.nii import simulate_signal

K = 4
ts = []
for i in range(K):
    # Google "Python modulo" to figure out
    # what the line below does!
is_even = (i % 2) == 0 sig, _ = simulate_signal( onsets, conditions, duration=T, plot=False, std_noise=0.25, params_canon=params_even if is_even else params_odd ) ts.append(sig[:, np.newaxis]) # ts = timeseries ts = np.hstack(ts) print("Shape of simulated signals: ", ts.shape) ###Output _____no_output_____ ###Markdown And let's plot these voxels. We'll show the trial onsets as arrows (red = condition A, orange = condition B): ###Code import seaborn as sns fig, axes = plt.subplots(ncols=K, sharex=True, sharey=True, figsize=(10, 12)) t = np.arange(ts.shape[0]) for i, ax in enumerate(axes.flatten()): # Plot signal ax.plot(ts[:, i], t, marker='o', ms=4, c='tab:blue') # Plot trial onsets (as arrows) for ii, to in enumerate(onsets): color = 'tab:red' if ii % 2 == 0 else 'tab:orange' ax.arrow(-1.5, to / TR, dy=0, dx=0.5, color=color, head_width=0.75, head_length=0.25) ax.set_xlim(-1.5, 2) ax.set_ylim(0, ts.shape[0]) ax.grid(b=True) ax.set_title(f'Voxel {i+1}', fontsize=15) ax.invert_yaxis() if i == 0: ax.set_ylabel("Time (volumes)", fontsize=20) # Common axis labels fig.text(0.425, -.03, "Activation (A.U.)", fontsize=20) fig.tight_layout() sns.despine() plt.show() ###Output _____no_output_____ ###Markdown Tip: Matplotlib is a very flexible plotting package, but arguably at the expense of how fast you can implement something. Seaborn is a great package (built on top of Matplotlib) that offers some neat functionality that makes your life easier when plotting in Python. For example, we used the despine function to remove the top and right spines to make our plot a little nicer. In this course, we'll mostly use Matplotlib, but we just wanted to make you aware of this awesome package. Alright, now we can start discussing methods for pattern estimation! Unfortunately, as pattern analyses are relatively new, there is no consensus yet about the "best" method for pattern estimation. In fact, there exist many different methods, which we can roughly divide into two types: 1.
Timepoint-based methods (for lack of a better name) and 2. GLM-based methods. We'll discuss both of them, but spend a little more time on the latter set of methods as they are more complicated (and more popular). Timepoint-based methods Timepoint-based methods "extract" patterns by simply using a single timepoint (e.g., 6 seconds after stimulus presentation) or (an average of) multiple timepoints (e.g., 4, 6, and 8 seconds after stimulus presentation). Below, we visualize what a single-timepoint method would look like (assuming that we'd want to extract the timepoint 6 seconds after stimulus presentation, i.e., around the assumed peak of the BOLD response). The stars represent the values that we would extract (red for condition A, orange for condition B). Note that we only plot the first 50 volumes. ###Code fig, axes = plt.subplots(ncols=4, sharex=True, sharey=True, figsize=(10, 12)) t_fmri = np.linspace(0, T, ts.shape[0], endpoint=False) t = np.arange(ts.shape[0]) for i, ax in enumerate(axes.flatten()): # Plot signal ax.plot(ts[:, i], t, marker='o', ms=4, c='tab:blue') # Plot trial onsets (as arrows) for ii, to in enumerate(onsets): plus6 = np.interp(to+6, t_fmri, ts[:, i]) color = 'tab:red' if ii % 2 == 0 else 'tab:orange' ax.arrow(-1.5, to / TR, dy=0, dx=0.5, color=color, head_width=0.75, head_length=0.25) ax.plot([plus6, plus6], [(to+6) / TR, (to+6) / TR], marker='*', ms=15, c=color) ax.set_xlim(-1.5, 2) ax.set_ylim(0, ts.shape[0] // 2) ax.grid(b=True) ax.set_title(f'Voxel {i+1}', fontsize=15) ax.invert_yaxis() if i == 0: ax.set_ylabel("Time (volumes)", fontsize=20) # Common axis labels fig.text(0.425, -.03, "Activation (A.U.)", fontsize=20) fig.tight_layout() sns.despine() plt.show() ###Output _____no_output_____ ###Markdown Now, extracting these timepoints 6 seconds after stimulus presentation is easy when this timepoint is a multiple of the scan's TR (here: 2 seconds).
For example, to extract the value for the first trial (onset: 10 seconds), we simply take the value at index 8 in our timeseries (using 0-based indexing), because $(10 + 6) / 2 = 8$. But what if our trial onset + 6 seconds is *not* a multiple of the TR, such as with trial 2 (onset: 19 seconds)? Well, we can interpolate this value! We will use the same function for this operation as we did for slice-timing correction (from the previous course): `interp1d` from the `scipy.interpolate` module. To refresh your memory: this function takes the timepoints associated with the values (or "frame_times" in Nilearn lingo) and the values themselves to generate a new object which we'll later use to do the actual (linear) interpolation. First, let's define the timepoints: ###Code t_fmri = np.linspace(0, T, ts.shape[0], endpoint=False) ###Output _____no_output_____ ###Markdown ToDo (1 point): The above timepoints assume that all data was acquired at the onset of the volume acquisition ($t=0$, $t=2$, etc.). Suppose that we actually slice-time corrected our data to the middle slice, i.e., the 18th slice (out of 36 slices) &mdash; create a new array (using np.linspace with timepoints that reflect these slice-time corrected acquisition onsets) and store it in a variable named t_fmri_middle_slice. ###Code ''' Implement your ToDo here. ''' # YOUR CODE HERE raise NotImplementedError() ''' Tests the above ToDo. ''' from niedu.tests.nipa.week_1 import test_frame_times_stc test_frame_times_stc(TR, T, ts.shape[0], t_fmri_middle_slice) ###Output _____no_output_____ ###Markdown For now, let's assume that all data was actually acquired at the start of the volume ($t=0$, $t=2$, etc.). We can "initialize" our interpolator by giving it both the timepoints (`t_fmri`) and the data (`ts`). Note that `ts` is not a single time series, but a 2D array with time series for four voxels (across different columns).
By specifying `axis=0`, we tell `interp1d` that the first axis represents the axis that we want to interpolate later: ###Code from scipy.interpolate import interp1d interpolator = interp1d(t_fmri, ts, axis=0) ###Output _____no_output_____ ###Markdown Now, we can give the `interpolator` object any set of timepoints and it will return the linearly interpolated values associated with these timepoints for all four voxels. Let's do this for our trial onsets plus six seconds: ###Code onsets_plus_6 = onsets + 6 R_plus6 = interpolator(onsets_plus_6) print("Shape extracted pattern:", R_plus6.shape) fig, ax = plt.subplots(figsize=(2, 10)) mapp = ax.imshow(R_plus6) cbar = fig.colorbar(mapp) cbar.set_label('Feature value', fontsize=13, rotation=270, labelpad=15) ax.set_yticks(np.arange(N)) ax.set_xticks(np.arange(K)) ax.set_title(r"$\mathbf{R}$", fontsize=20) ax.set_xlabel('Voxels', fontsize=15) ax.set_ylabel('Trials', fontsize=15) plt.show() ###Output _____no_output_____ ###Markdown Yay, we have extracted our first pattern! Does it look like what you expected given the known mean amplitude of the trials from the two conditions ($\beta_{\mathrm{A,even}} = 0.8, \beta_{\mathrm{B,even}} = 0.2$ and vice versa for odd voxels)? ToDo (3 points): An alternative to the single-timepoint method is to extract, per trial, the average activity within a particular time window, for example 5-7 seconds post-stimulus. One way to do this is by perform interpolation in steps of (for example) 0.1 within the 5-7 post-stimulus time window (i.e., $5.0, 5.1, 5.2, \dots , 6.8, 6.9, 7.0$) and subsequently averaging these values, per trial, into a single activity estimate. Below, we defined these different steps (t_post_stimulus) for you already. Use the interpolator object to extract the timepoints for these different post-stimulus times relative to our onsets (onsets variable) from our data (ts variable). Store the extracted patterns in a new variable called R_av. 
Note: this is a relatively difficult ToDo! Consider skipping it if it takes too long. ###Code ''' Implement your ToDo here. ''' t_post_stimulus = np.linspace(5, 7, 21, endpoint=True) print(t_post_stimulus) # YOUR CODE HERE raise NotImplementedError() ''' Tests the above ToDo. ''' from niedu.tests.nipa.week_1 import test_average_extraction test_average_extraction(onsets, ts, t_post_stimulus, interpolator, R_av) ###Output _____no_output_____ ###Markdown These timepoint-based methods are relatively simple to implement and computationally efficient. Another variation that you might see in the literature is that extracted (averages of) timepoints are baseline-subtracted ($\mathbf{R}_{i} - \mathrm{baseline}_{i}$) or baseline-normalized ($\frac{\mathbf{R}_{i}}{\mathrm{baseline}_{i}}$), where the baseline is usually chosen to be at the stimulus onset or a small window before the stimulus onset. This technique is, as far as we know, not very popular, so we won't discuss it any further in this lab. GLM-based methods One big disadvantage of timepoint-based methods is that they cannot disentangle activity due to different sources (such as trials that are close in time), which is a major problem for fast (event-related) designs. For example, if you present a trial at $t=10$ and another at $t=12$ and subsequently extract the pattern six seconds post-stimulus (at $t=18$ for the second trial), then the activity estimate for the second trial is definitely going to contain activity due to the first trial because of the sluggishness of the HRF. As such, nowadays GLM-based pattern estimation techniques, which *can* disentangle the contribution of different sources, are more popular than timepoint-based methods. (Although, technically, you can use timepoint-based methods using the GLM with FIR-based designs, but that's beyond the scope of this course.) Again, there are multiple flavors of GLM-based pattern estimation, of which we'll discuss the two most popular ones.
Least-squares all (LSA) The most straightforward GLM-based pattern estimation technique is to fit a single GLM with a design matrix that contains one or more regressors for each sample that you want to estimate (in addition to any confound regressors). The estimated parameters ($\hat{\beta}$) corresponding to our samples from this GLM &mdash; representing the relative (de)activation of each voxel for each trial &mdash; will then represent our patterns! This technique is often referred to as "least-squares all" (LSA). Note that, as explained before, a sample can refer to either a single trial, a set of repetitions of a particular exemplar, or even a single condition. For now, we'll assume that samples refer to single trials. Often, each sample is modelled by a single (canonical) HRF-convolved regressor (but you could also use more than one regressor, e.g., using a basis set with temporal/dispersion derivatives or a FIR-based basis set), so we'll focus on this approach. Let's go back to our simulated data. We have a single run containing 20 trials, so ultimately our design matrix should contain twenty columns: one for every trial. We can use the `make_first_level_design_matrix` function from Nilearn to create the design matrix. Importantly, we should make sure to give each trial a separate and unique "trial_type" value. If we don't do this (e.g., set trial type to the trial condition: "A" or "B"), then Nilearn won't create separate regressors for our trials. ###Code import pandas as pd from nilearn.glm.first_level import make_first_level_design_matrix # We have to create a dataframe with onsets/durations/trial_types # No need for modulation! events_sim = pd.DataFrame(onsets, columns=['onset']) events_sim.loc[:, 'duration'] = 1 events_sim.loc[:, 'trial_type'] = ['trial_' + str(i).zfill(2) for i in range(1, N+1)] # lsa_dm = least squares all design matrix lsa_dm = make_first_level_design_matrix( frame_times=t_fmri, # we defined this earlier for interpolation!
events=events_sim, hrf_model='glover', drift_model=None # assume data is already high-pass filtered ) # Check out the created design matrix # Note that the index represents the frame times lsa_dm ###Output _____no_output_____ ###Markdown Note that the design matrix contains 21 regressors: 20 trialwise regressors and an intercept (the last column). Let's also plot it using Nilearn: ###Code from nilearn.plotting import plot_design_matrix plot_design_matrix(lsa_dm); ###Output _____no_output_____ ###Markdown And, while we're at it, plot it as time series (rather than a heatmap): ###Code fig, ax = plt.subplots(figsize=(12, 12)) for i in range(lsa_dm.shape[1]): ax.plot(i + lsa_dm.iloc[:, i], np.arange(ts.shape[0])) ax.set_title("LSA design matrix", fontsize=20) ax.set_ylim(0, lsa_dm.shape[0]-1) ax.set_xlabel('') ax.set_xticks(np.arange(N+1)) ax.set_xticklabels(['trial ' + str(i+1) for i in range(N)] + ['icept'], rotation=-90) ax.invert_yaxis() ax.grid() ax.set_ylabel("Time (volumes)", fontsize=15) plt.show() ###Output _____no_output_____ ###Markdown ToDo/ToThink (2 points): One "problem" with LSA-type design matrices, especially in fast event-related designs, is that they are not very statistically efficient, i.e., they lead to relatively high-variance estimates of your parameters ($\hat{\beta}$), mainly due to relatively high predictor variance. Because we used a fixed inter-trial interval (here: 9 seconds), the correlations between "adjacent" trials are (approximately) the same. Compute the correlation between, for example, the predictors associated with trial 1 and trial 2, using the pearsonr function imported below, and store it in a variable named corr_t1t2 (1 point). Then, try to think of a way to improve the efficiency of this particular LSA design and write it down in the cell below the test cell. ###Code ''' Implement your ToDo here.
''' # For more info about the `pearsonr` function, check # https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html # Want a challenge? Try to compute the correlation from scratch! from scipy.stats import pearsonr # YOUR CODE HERE raise NotImplementedError() ''' Tests the ToDo above. ''' from niedu.tests.nipa.week_1 import test_t1t2_corr test_t1t2_corr(lsa_dm, corr_t1t2) ###Output _____no_output_____ ###Markdown YOUR ANSWER HERE Alright, let's actually fit the model! When dealing with real fMRI data, we'd use Nilearn to fit our GLM, but for now, we'll just use our own implementation of an (OLS) GLM. Note that we can actually fit a *single* GLM for all voxels at the same time by using `ts` (a $T \times K$ matrix) as our dependent variable due to the magic of linear algebra. In other words, we can run $K$ OLS models at once! ###Code # Let's use 'X', because it's shorter X = lsa_dm.values # Note we can fit our GLM for all K voxels at # the same time! As such, betas is not a vector, # but an n_regressor x k_voxel matrix! beta_hat_all = np.linalg.inv(X.T @ X) @ X.T @ ts print("Shape beta_hat_all:", beta_hat_all.shape) # Ah, the beta for the intercept is still in there # Let's remove it beta_icept = beta_hat_all[-1, :] beta_hat = beta_hat_all[:-1, :] print("Shape beta_hat (intercept removed):", beta_hat.shape) ###Output _____no_output_____ ###Markdown Alright, let's visualize the estimated parameters ($\hat{\beta}$). We'll do this by plotting the scaled regressors (i.e., $X_{j}\hat{\beta}_{j}$) on top of the original signal. 
Each differently colored line represents a different regressor (so a different trial): ###Code fig, axes = plt.subplots(ncols=4, sharex=True, sharey=True, figsize=(10, 12)) t = np.arange(ts.shape[0]) for i, ax in enumerate(axes.flatten()): # Plot signal ax.plot(ts[:, i], t, marker='o', ms=4, lw=0.5, c='tab:blue') # Plot trial onsets (as arrows) for ii, to in enumerate(onsets): color = 'tab:red' if ii % 2 == 0 else 'tab:orange' ax.arrow(-1.5, to / TR, dy=0, dx=0.5, color=color, head_width=0.75, head_length=0.25) # Compute x*beta for icept only scaled_icept = lsa_dm.iloc[:, -1].values * beta_icept[i] for ii in range(N): this_x = lsa_dm.iloc[:, ii].values # Compute x*beta for this particular trial (ii) xb = scaled_icept + this_x * beta_hat[ii, i] ax.plot(xb, t, lw=2) ax.set_xlim(-1.5, 2) ax.set_ylim(0, ts.shape[0] // 2) ax.grid(b=True) ax.set_title(f'Voxel {i+1}', fontsize=15) ax.invert_yaxis() if i == 0: ax.set_ylabel("Time (volumes)", fontsize=20) # Common axis labels fig.text(0.425, -.03, "Activation (A.U.)", fontsize=20) fig.tight_layout() sns.despine() plt.show() ###Output _____no_output_____ ###Markdown Ultimately, though, the estimated GLM parameters are just another way to estimate our pattern array ($\mathbf{R}$) &mdash; this time, we just estimated it using a different method (GLM-based) than before (timepoint-based). Therefore, let's visualize this array as we did with the other methods: ###Code fig, ax = plt.subplots(figsize=(2, 10)) mapp = ax.imshow(beta_hat) cbar = fig.colorbar(mapp) cbar.set_label(r'$\hat{\beta}$', fontsize=25, rotation=0, labelpad=10) ax.set_yticks(np.arange(N)) ax.set_xticks(np.arange(K)) ax.set_title(r"$\mathbf{R}$", fontsize=20) ax.set_xlabel('Voxels', fontsize=15) ax.set_ylabel('Trials', fontsize=15) plt.show() ###Output _____no_output_____ ###Markdown ToDo (optional, 0 points): It would be nice to visualize the patterns, but this is very hard because we have four dimensions (one per voxel)! PCA to the rescue!
Run PCA on the estimated patterns (beta_hat) and store the PCA-transformed array (shape: $20 \times 2$) in a variable named beta_hat_2d. Then, try to plot the first two components as a scatterplot. Make it even nicer by plotting the trials from condition A as red points and trials from condition B as orange points. ###Code # YOUR CODE HERE raise NotImplementedError() from niedu.tests.nipa.week_1 import test_pca_beta_hat test_pca_beta_hat(beta_hat, beta_hat_2d) ###Output _____no_output_____ ###Markdown Noise normalization One often-used preprocessing step for pattern analyses (using GLM-estimation methods) is to apply "noise normalization" to the estimated patterns. There are two flavours: "univariate" and "multivariate" noise normalization. In univariate noise normalization, the estimated parameters ($\hat{\beta}$) are divided (normalized) by the standard error of the estimated parameters &mdash; which you might recognize as the formula for $t$-values (for a contrast against baseline)!\begin{align}t_{c\hat{\beta}} = \frac{c\hat{\beta}}{\sqrt{\hat{\sigma}^{2}c(X^{T}X)^{-1}c^{T}}}\end{align}where $\hat{\sigma}^{2}$ is the estimate of the error variance (sum of squared errors divided by the degrees of freedom) and $c(X^{T}X)^{-1}c^{T}$ is the "design variance". Sometimes people disregard the design variance and the degrees of freedom (DF) and instead only use the standard deviation of the noise: \begin{align}t_{c\hat{\beta}} \approx \frac{c\hat{\beta}}{\sqrt{\frac{1}{T}\sum_{i} (y_{i} - X_{i}\hat{\beta})^{2}}}\end{align} ToThink (1 point): When experiments use a fixed ISI (in the context of single-trial GLMs), the omission of the design variance in univariate noise normalization is warranted. Explain why. YOUR ANSWER HERE Either way, this univariate noise normalization is a way to "down-weigh" the uncertain (noisy) parameter estimates.
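For completeness, the exact $t$-value formula (including design variance and degrees of freedom) can be sketched as follows. This is a self-contained illustration on random dummy data; the names X and ts merely mirror the simulation variables and are not the notebook's actual data. For a contrast of a single regressor against baseline, $c$ is a one-hot vector, so the design variance $c(X^{T}X)^{-1}c^{T}$ reduces to a diagonal element of $(X^{T}X)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy stand-ins: a T x P design matrix and T x K voxel time series
T, P, K = 100, 21, 4
X = rng.normal(size=(T, P))
X[:, -1] = 1  # intercept column
ts = rng.normal(size=(T, K))

# OLS estimates for all K voxels at once
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ ts   # shape (P, K)
residuals = ts - X @ beta_hat                  # shape (T, K)

# Error variance estimate: sum of squared errors / degrees of freedom
dof = T - np.linalg.matrix_rank(X)
sigma_hat_sq = (residuals ** 2).sum(axis=0) / dof   # shape (K,)

# Design variance per regressor: diagonal of (X'X)^{-1}
design_var = np.diag(np.linalg.inv(X.T @ X))        # shape (P,)

# t-value per regressor per voxel ("contrast against baseline")
t_vals = beta_hat / np.sqrt(np.outer(design_var, sigma_hat_sq))
print(t_vals.shape)
```

Comparing `t_vals` with `beta_hat / residuals.std(axis=0)` on the same data shows what the approximation ignores: the per-regressor design variance and the DF correction.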
Although this type of univariate noise normalization seems to lead to better results in both decoding and RSA analyses (e.g., [Misaki et al., 2010](https://www.ncbi.nlm.nih.gov/pubmed/20580933)), the jury is still out on this issue. Multivariate noise normalization will be discussed in week 3 (RSA), so let's focus for now on the implementation of univariate noise normalization using the approximate method (which disregards design variance). To compute the standard deviation of the noise ($\sqrt{\frac{1}{T}\sum_{i} (y_{i} - X_{i}\hat{\beta})^{2}}$), we first need to compute the noise, i.e., the unexplained variance ($y - X\hat{\beta}$), also known as the residuals: ###Code residuals = ts - X @ beta_hat_all print("Shape residuals:", residuals.shape) ###Output _____no_output_____ ###Markdown So, for each voxel ($K=4$), we have a timeseries ($T=100$) with unexplained variance ("noise"). Now, to get the noise standard deviation for each voxel (i.e., computed across time), we can do the following: ###Code std_noise = np.std(residuals, axis=0) print("Shape noise std:", std_noise.shape) ###Output _____no_output_____ ###Markdown To do the actual normalization step, we simply divide the columns of the pattern matrix (`beta_hat`, which we estimated before) by the estimated noise standard deviation: ###Code # unn = univariate noise normalization # Note that we don't have to do this for each trial (row) separately # due to Numpy broadcasting!
R_unn = beta_hat / std_noise print("Shape R_unn:", R_unn.shape) ###Output _____no_output_____ ###Markdown And let's visualize it: ###Code fig, ax = plt.subplots(figsize=(2, 10)) mapp = ax.imshow(R_unn) cbar = fig.colorbar(mapp) cbar.set_label(r'$t$', fontsize=25, rotation=0, labelpad=10) ax.set_yticks(np.arange(N)) ax.set_xticks(np.arange(K)) ax.set_title(r"$\mathbf{R}_{unn}$", fontsize=20) ax.set_xlabel('Voxels', fontsize=15) ax.set_ylabel('Trials', fontsize=15) plt.show() ###Output _____no_output_____ ###Markdown ToThink (1 point): In fact, univariate noise normalization didn't really change the pattern matrix much. Why do you think this is the case for our simulation data? Hint: check out the parameters for the simulation. YOUR ANSWER HERE LSA on real dataAlright, enough with all that fake data &mdash; let's work with some real data! We'll use the face perception task data from the *NI-edu* dataset, which we briefly mentioned in the fMRI-introduction course.In the face perception task, participants were presented with images of faces (from the publicly available [Face Research Lab London Set](https://figshare.com/articles/Face_Research_Lab_London_Set/5047666)). In total, frontal face images from 40 different people ("identities") were used, which were either without expression ("neutral") or were smiling. Each face image (from in total 80 faces, i.e., 40 identities $\times$ 2, neutral/smiling) was shown, per participant, 6 times across the 12 runs (3 times per session). Mini ToThink (0 points): Why do you think we show the same image multiple times? Identities were counterbalanced in terms of biological sex (male vs. female) and ethnicity (Caucasian vs. East-Asian vs. Black). The Face Research Lab London Set also contains the age of the people in the stimulus dataset and (average) attractiveness ratings for all faces from an independent set of raters. 
In addition, we also had our own participants rate the faces on perceived attractiveness, dominance, and trustworthiness after each session (rating each face, on each dimension, four times in total for robustness). The stimuli were chosen such that we have many different attributes that we could use to model brain responses (e.g., identity, expression, ethnicity, age, average attractiveness, and subjective/personal perceived attractiveness/dominance/trustworthiness). In this paradigm, stimuli were presented for 1.25 seconds and had a fixed interstimulus interval (ISI) of 3.75 seconds. Although a fixed ISI is sub-optimal for univariate "detection-based" analyses, we used one &mdash; rather than a jittered ISI &mdash; to make sure the data can also be used for "single-trial" multivariate analyses. Each run contained 40 stimulus presentations. To keep the participants attentive, a random selection of 5 stimuli (out of 40) was followed by a rating on either perceived attractiveness, dominance, or trustworthiness using a button-box with eight buttons (four per hand), lasting 2.5 seconds. After the rating, a regular ISI of 3.75 seconds followed. See the figure below for a visualization of the paradigm.![face_paradigm](https://docs.google.com/drawings/d/e/2PACX-1vQ0FlwZLI_XMHaKkaNchzZvgqT0JXjZAPbH9fccmNvgey-RYR5bKolh85Wctc2YLrjOLtE3Zkd7WXdu/pub?w=1429&h=502) First, let's set up all the data that we need for our LSA model.
Let's see where our data is located: ###Code import os data_dir = os.path.join(os.path.expanduser('~'), 'NI-edu-data') print("Downloading Fmriprep data (+- 175MB) ...\n") !aws s3 sync --no-sign-request s3://openneuro.org/ds003965 {data_dir} --exclude "*" --include "sub-03/ses-1/func/*task-face*run-1*events.tsv" !aws s3 sync --no-sign-request s3://openneuro.org/ds003965 {data_dir} --exclude "*" --include "derivatives/fmriprep/sub-03/ses-1/func/*task-face*run-1*space-T1w*bold.nii.gz" !aws s3 sync --no-sign-request s3://openneuro.org/ds003965 {data_dir} --exclude "*" --include "derivatives/fmriprep/sub-03/ses-1/func/*task-face*run-1*space-T1w*mask.nii.gz" !aws s3 sync --no-sign-request s3://openneuro.org/ds003965 {data_dir} --exclude "*" --include "derivatives/fmriprep/sub-03/ses-1/func/*task-face*run-1*confounds_timeseries.tsv" print("\nDone!") ###Output _____no_output_____ ###Markdown As you can see, it contains both "raw" (not-preprocessed) subject data (e.g., sub-03) and derivatives, which include Fmriprep-preprocessed data: ###Code fprep_sub03 = os.path.join(data_dir, 'derivatives', 'fmriprep', 'sub-03') print("Contents derivatives/fmriprep/sub-03:", os.listdir(fprep_sub03)) ###Output _____no_output_____ ###Markdown There is preprocessed anatomical data and session-specific functional data: ###Code fprep_sub03_ses1_func = os.path.join(fprep_sub03, 'ses-1', 'func') contents = sorted(os.listdir(fprep_sub03_ses1_func)) print("Contents ses-1/func:", '\n'.join(contents)) ###Output _____no_output_____ ###Markdown That's a lot of data! Importantly, we will only use the "face" data ("task-face") in T1 space ("space-T1w"), meaning that this data has not been normalized to a common template (unlike the "space-MNI152NLin2009cAsym" data). Here, we'll only analyze the first run ("run-1") data.
Let's define the functional data, the associated functional brain mask (a binary image indicating which voxels are brain and which are not), and the file with timepoint-by-timepoint confounds (such as motion parameters): ###Code func = os.path.join(fprep_sub03_ses1_func, 'sub-03_ses-1_task-face_run-1_space-T1w_desc-preproc_bold.nii.gz') # Notice this neat little trick: we use the string method "replace" to define # the functional brain mask func_mask = func.replace('desc-preproc_bold', 'desc-brain_mask') confs = os.path.join(fprep_sub03_ses1_func, 'sub-03_ses-1_task-face_run-1_desc-confounds_timeseries.tsv') confs_df = pd.read_csv(confs, sep='\t') confs_df ###Output _____no_output_____ ###Markdown Finally, we need the events-file with onsets, durations, and trial-types for this particular run: ###Code events = os.path.join(data_dir, 'sub-03', 'ses-1', 'func', 'sub-03_ses-1_task-face_run-1_events.tsv') events_df = pd.read_csv(events, sep='\t') events_df = events_df.query("trial_type != 'rating' and trial_type != 'response'") # don't need this # Oops, Nilearn doesn't accept trial_type values that start with a number, so # let's prepend 'tt_' to it! events_df['trial_type'] = 'tt_' + events_df['trial_type'] ###Output _____no_output_____ ###Markdown Now, it's up to you to use this data to fit an LSA model! ToDo (2 points): in this first ToDo, you define your events and the confounds you want to include. 1. Remove all columns except "onset", "duration", and "trial_type". You should end up with a DataFrame with 40 rows and 3 columns. You can check this with the .shape attribute of the DataFrame. (Note that, technically, you could model the reponse and rating-related events as well! For now, we'll exclude them.) Name this filtered DataFrame events_df_filt. 2. You also need to select specific columns from the confounds DataFrame, as we don't want to include all confounds! For now, include only the motion parameters (trans_x, trans_y, trans_z, rot_x, rot_y, rot_z). 
You should end up with a confounds DataFrame with 342 rows and 6 columns. Name this filtered DataFrame confs_df_filt. ###Code ''' Implement your ToDo here. ''' # YOUR CODE HERE raise NotImplementedError() ''' Tests the above ToDo. ''' assert(events_df_filt.shape == (40, 3)) assert(events_df_filt.columns.tolist() == ['onset', 'duration', 'trial_type']) assert(confs_df_filt.shape == (confs_df.shape[0], 6)) assert(all('trans' in col or 'rot' in col for col in confs_df_filt.columns)) print("Well done!") ###Output _____no_output_____ ###Markdown ToDo (2 points): in this Todo, you'll fit your model! Define a FirstLevelModel object, name this flm_todo and make sure you do the following: 1. Set the correct TR (this is 0.7)2. Set the slice time reference to 0.53. Set the mask image to the one we defined before4. Use a "glover" HRF5. Use a "cosine" drift model with a cutoff of 0.01 Hz6. Do not apply any smoothing7. Set minimize_memory to true8. Use an "ols" noise modelThen, fit your model using the functional data (func), filtered confounds, and filtered events we defined before. ###Code ''' Implement your ToDo here. ''' # Ignore the DeprecationWarning! from nilearn.glm.first_level import FirstLevelModel # YOUR CODE HERE raise NotImplementedError() """ Tests the above ToDo. """ from niedu.tests.nipa.week_1 import test_lsa_flm test_lsa_flm(flm_todo, func_mask, func, events_df_filt, confs_df_filt) ###Output _____no_output_____ ###Markdown ToDo (2 points): in this Todo, you'll run the single-trial contrasts ("against baseline"). To do so, write a for-loop in which you call the compute_contrast method every iteration with a new contrast definition for a new trial. Make sure to output the "betas" (by using output_type='effect_size'). Note that the compute_contrast method returns the "unmasked" results (i.e., from all voxels). Make sure that, for each trial, you mask the results using the func_mask variable and the apply_mask function from Nilearn. 
Save these masked results (which should be patterns of 66298 voxels) for each trial. After the loop, stack all results in a 2D array with the different trials in different rows and the (flattened) voxels in columns. This array should be of shape 40 (trials) by 65643 (nr. of masked voxels). The variable name of this array should be R_todo. ###Code ''' Implement your ToDo here. ''' from nilearn.masking import apply_mask # YOUR CODE HERE raise NotImplementedError() ''' Tests the above ToDo. ''' from niedu.tests.nipa.week_1 import test_lsa_R test_lsa_R(R_todo, events_df_filt, flm_todo, func_mask) ###Output _____no_output_____ ###Markdown Disclaimer: In this ToDo, we asked you not to spatially smooth the data. Not smoothing is often recommended for pattern analyses, as they arguably use information that is encoded in finely distributed patterns. However, several studies have shown that smoothing may sometimes benefit pattern analyses (e.g., Hendriks et al., 2017). In general, in line with the matched filter theorem, we recommend smoothing your data with a kernel that matches how fine-grained you think your experimental feature is encoded in the brain patterns. Dealing with trial correlations When working with single-trial experimental designs (such as the LSA designs discussed previously), one frequently occurring problem is correlation between trial predictors and, consequently, between their resulting estimates. Trial correlations in such designs occur when the inter-stimulus interval (ISI) is sufficiently short such that trial predictors overlap and thus correlate. This, in turn, leads to relatively unstable (high-variance) pattern estimates and, as we will see later in this section, trial patterns that correlate with each other (which is sometimes called [pattern drift](https://www.biorxiv.org/content/10.1101/032391v2)). This is also the case in our data from the NI-edu dataset.
In the "face" task, stimuli were presented for 1.25 seconds, followed by a 3.75-second ISI, which causes a slightly positive correlation between a given trial ($i$) and the next trial ($i + 1$) and a slightly negative correlation with the trial after that ($i + 2$). We'll show this below by visualizing the correlation matrix of the design matrix: ###Code dm_todo = pd.read_csv('dm_todo.tsv', sep='\t') dm_todo = dm_todo.iloc[:, :40] fig, ax = plt.subplots(figsize=(8, 8)) # Slightly exaggerate by setting the limits to (-.3, .3) mapp = ax.imshow(dm_todo.corr(), vmin=-0.3, vmax=0.3) # Some styling ax.set_xticks(range(dm_todo.shape[1])) ax.set_xticklabels(dm_todo.columns, rotation=90) ax.set_yticks(range(dm_todo.shape[1])) ax.set_yticklabels(dm_todo.columns) cbar = plt.colorbar(mapp, shrink=0.825) cbar.ax.set_ylabel('Correlation', fontsize=15, rotation=-90) plt.show() ###Output _____no_output_____ ###Markdown ToThink (1 point): Explain why trials (at index $i$) correlate slightly negatively with the second trial coming after them (at index $i + 2$). Hint: try to plot it! YOUR ANSWER HERE The trial-by-trial correlation structure in the design leads to a trial-by-trial correlation structure in the estimated patterns as well (as explained by [Soch et al., 2020](https://www.sciencedirect.com/science/article/pii/S1053811919310407)).
We show this below by computing and visualizing the $N \times N$ correlation matrix of the patterns: ###Code # Load in R_todo if you didn't manage to do the # previous ToDo R_todo = np.load('R_todo.npy') # Compute the NxN correlation matrix R_corr = np.corrcoef(R_todo) fig, ax = plt.subplots(figsize=(8, 8)) mapp = ax.imshow(R_corr, vmin=-1, vmax=1) # Some styling ax.set_xticks(range(dm_todo.shape[1])) ax.set_xticklabels(dm_todo.columns, rotation=90) ax.set_yticks(range(dm_todo.shape[1])) ax.set_yticklabels(dm_todo.columns) cbar = plt.colorbar(mapp, shrink=0.825) cbar.ax.set_ylabel('Correlation', fontsize=15, rotation=-90) plt.show() ###Output _____no_output_____ ###Markdown This correlation structure across trials poses a problem for representational similarity analysis (the topic of week 3) especially. Although this issue is still debated and far from solved, in this section we highlight two possible solutions to this problem: least-squares separate designs and temporal "uncorrelation". Least-squares separate (LSS)The least-squares separate (LSS) design is a slight modification of the LSA design ([Mumford et al., 2014](https://www.sciencedirect.com/science/article/pii/S105381191400768X)). In LSS, you fit a separate model per trial. Each model contains one regressor for the trial that you want to estimate and, for each condition in your experimental design (in case of a categorical design), another regressor containing all other trials. So, suppose you have a run with 30 trials across 3 conditions (A, B, and C); using an LSS approach, you'd fit 30 different models, each containing four regressors (one for the single trial, one for all (other) trials of condition A, one for all (other) trials of condition B, and one for all (other) trials of condition C). The apparent upside of this is that it strongly reduces the collinearity of trials close in time, which in turn makes the trial parameters more efficient to estimate.
ToThink (1 point): Suppose my experiment contains 90 stimuli which all belong to their own condition (i.e., there are 90 conditions). Explain why LSS provides no improvement over LSA in this case. YOUR ANSWER HERE We'll show this for our example data. It's a bit complicated (and not necessarily the best/fastest/clearest way), but the comments will explain what it's doing. Essentially, what we're doing, for each trial, is to extract that trial's regressor from a standard LSA design and, for each condition, create a single regressor by summing all other single-trial regressors from that condition together. ###Code # First, we'll make a standard LSA design matrix lsa_dm = make_first_level_design_matrix( frame_times=t_fmri, # we defined this earlier for interpolation! events=events_sim, hrf_model='glover', drift_model=None # assume data is already high-pass filtered ) # Then, we will loop across trials, building a separate design matrix for each lss_dms = [] # we'll store the design matrices here # Do not include last column, the intercept, in the loop for i, col in enumerate(lsa_dm.columns[:-1]): # Extract the single-trial predictor single_trial_reg = lsa_dm.loc[:, col] # Now, we need to create a predictor per condition # (one for A, one for B). We'll store these in "other_regs" other_regs = [] # Loop across unique conditions ("A" and "B") for con in np.unique(conditions): # Which columns belong to the current condition? idx = con == np.array(conditions) # Make sure NOT to include the trial we're currently estimating! idx[i] = False # Also, exclude the intercept (last column) idx = np.append(idx, False) # Now, extract all N-1 regressors con_regs = lsa_dm.loc[:, idx] # And sum them together!
# This creates a single predictor for the current # condition con_reg_all = con_regs.sum(axis=1) # Save for later other_regs.append(con_reg_all) # Concatenate the condition regressors (one for A, one for B) other_regs = pd.concat(other_regs, axis=1) # Concatenate the single-trial regressor and two condition regressors this_dm = pd.concat((single_trial_reg, other_regs), axis=1) # Add back an intercept! this_dm.loc[:, 'intercept'] = 1 # Give it sensible column names this_dm.columns = ['trial_to_estimate'] + list(set(conditions)) + ['intercept'] # Save for later lss_dms.append(this_dm) print("We have created %i design matrices!" % len(lss_dms)) ###Output _____no_output_____ ###Markdown Alright, now let's check out the first five design matrices, which should estimate the first five trials and contain 4 regressors each (one for the single trial, two for the separate conditions, and one for the intercept): ###Code fig, axes = plt.subplots(ncols=5, figsize=(15, 10)) for i, ax in enumerate(axes.flatten()): plot_design_matrix(lss_dms[i], ax=ax) ax.set_title("Design for trial %i" % (i+1), fontsize=20) plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown ToDo (optional; 1 bonus point): Can you implement an LSS approach to estimate our patterns on the real data? You can reuse the flm_todo you created earlier; the only thing you need to change each time is the design matrix. Because we have 40 trials, you need to fit 40 different models (which takes a while). Note that our experimental design does not necessarily have discrete categories, so your LSS design matrices should only have 3 columns: one for the trial to estimate, one for all other trials, and one for the intercept. After fitting each model, compute the trial-against-baseline contrast for the single trial and save the parameter ("beta") map. Then, after the loop, create the same pattern matrix as the previous ToDo, which should also have the same shape, but name it this time R_todo_lss.
Note, this is a very hard ToDo if you're not very familiar with Python, but a great way to test your programming skills :-) ###Code ''' Implement your ToDo here. Note that we already created the LSA design matrix for you. ''' func_img = nib.load(func) n_vol = func_img.shape[-1] lsa_dm = make_first_level_design_matrix( frame_times=np.linspace(0, n_vol * 0.7, num=n_vol, endpoint=False), events=events_df_filt, drift_model=None ) # YOUR CODE HERE raise NotImplementedError() ''' Tests the above ToDo. ''' from niedu.tests.nipa.week_1 import test_lss test_lss(R_todo_lss, func, flm_todo, lsa_dm, confs_df_filt) ###Output _____no_output_____ ###Markdown Tip: Programming your own pattern estimation pipeline allows you to be very flexible and is a great way to practice your programming skills, but if you want a more "pre-packaged" tool, I recommend the nibetaseries package. The package's name is derived from a specific analysis technique called "beta-series correlation", which is a type of analysis that allows for resting-state like connectivity analyses of task-based fMRI data (which we won't discuss in this course). For this technique, you need to estimate single-trial activity patterns &mdash; just like we need to do for pattern analyses! I've used this package to estimate patterns for pattern analysis and I highly recommend it! Temporal uncorrelationAnother method to deal with trial-by-trial correlations is the "uncorrelation" method by [Soch and colleagues (2020)](https://www.sciencedirect.com/science/article/pii/S1053811919310407). As opposed to the LSS method, the uncorrelation approach takes care of the correlation structure in the data in a post-hoc manner. 
It does so, in essence, by "removing" the correlations in the data that are due to the correlations in the design in a way that is similar to what prewhitening does in generalized least squares. Formally, the "uncorrelated" patterns ($R_{\mathrm{unc}}$) are estimated by (matrix) multiplying the square root ($^{\frac{1}{2}}$) of the covariance matrix of the LSA design matrix ($X^{T}X$) with the patterns ($R$):\begin{align}R_{\mathrm{unc}} = (X^{T}X)^{\frac{1}{2}}R\end{align}Here, $(X^{T}X)^{\frac{1}{2}}$ represents the "whitening" matrix which uncorrelates the patterns. Let's implement this in code. Note that we can use the `sqrtm` function from the `scipy.linalg` package to take the square root of a matrix: ###Code from scipy.linalg import sqrtm # Design matrix X = dm_todo.to_numpy() R_unc = sqrtm(X.T @ X) @ R_todo ###Output _____no_output_____
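To convince yourself that this whitening does what the formula promises, here is a small self-contained sketch (not part of the course data; all sizes, onsets, and durations below are made up for illustration). It builds a toy design of overlapping boxcar regressors, estimates "patterns" with OLS on pure-noise data so that the trial estimates inherit the $(X^{T}X)^{-1}$ correlation structure, and checks that uncorrelation shrinks the correlation between neighboring trials:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)

# Hypothetical toy dimensions (not the NI-edu data!)
n_time, n_trials, n_vox = 200, 40, 500

# Overlapping boxcar "trial" regressors -> correlated design columns
X = np.zeros((n_time, n_trials))
for i in range(n_trials):
    X[i * 4:i * 4 + 8, i] = 1.0

# OLS "patterns" for pure-noise data: the rows (trials) inherit
# the correlation structure of (X^T X)^{-1}
Y = rng.standard_normal((n_time, n_vox))
R = np.linalg.lstsq(X, Y, rcond=None)[0]  # shape: trials x voxels

# "Uncorrelation" (Soch et al., 2020): whiten with (X^T X)^{1/2}
R_unc = sqrtm(X.T @ X).real @ R

# Mean absolute correlation between neighboring trials, before and after
adj_raw = np.abs(np.diag(np.corrcoef(R), k=1)).mean()
adj_unc = np.abs(np.diag(np.corrcoef(R_unc), k=1)).mean()
print(f"adjacent-trial |r|: raw = {adj_raw:.2f}, uncorrelated = {adj_unc:.2f}")
```

With this toy design the raw neighboring-trial correlations are large (the boxcars overlap heavily), while the whitened ones hover around the sampling-noise floor.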
hypothesis_00_1_5.ipynb
###Markdown Checking the hypotheses related to the average review score Checking hypotheses 0, 1 and 5 Hypothesis 0: If an order is canceled, its review score is lower \ Hypothesis 1: If an order was delivered late, its review score will be lower \ Hypothesis 5: If an order is late, its review score will be lower than three DataFrame definition ###Code from pyspark.sql import SparkSession, functions as F spark = SparkSession.builder.getOrCreate() df_reviews = spark.read \ .option('escape', '\"') \ .csv('./dataset/olist_order_reviews_dataset.csv', header=True, multiLine=True, inferSchema=True) df_orders = spark.read \ .option('escape', '\"') \ .csv('./dataset/olist_orders_dataset.csv', header=True, multiLine=True, inferSchema=True) df = df_orders.join(df_reviews, df_orders.order_id == df_reviews.order_id) df.printSchema() ###Output root |-- order_id: string (nullable = true) |-- customer_id: string (nullable = true) |-- order_status: string (nullable = true) |-- order_purchase_timestamp: timestamp (nullable = true) |-- order_approved_at: timestamp (nullable = true) |-- order_delivered_carrier_date: timestamp (nullable = true) |-- order_delivered_customer_date: timestamp (nullable = true) |-- order_estimated_delivery_date: timestamp (nullable = true) |-- review_id: string (nullable = true) |-- order_id: string (nullable = true) |-- review_score: integer (nullable = true) |-- review_comment_title: string (nullable = true) |-- review_comment_message: string (nullable = true) |-- review_creation_date: timestamp (nullable = true) |-- review_answer_timestamp: timestamp (nullable = true) ###Markdown Overall average review score ###Code df.select(F.mean('review_score')).show() ###Output +-----------------+ |avg(review_score)| +-----------------+ | 4.07089| +-----------------+ ###Markdown Average review score of canceled orders ###Code df_canceled = df.filter(F.col('order_status')=='canceled') df_canceled.select(F.mean('review_score')).show() ###Output +------------------+ | avg(review_score)|
+------------------+ |1.8108108108108107| +------------------+ ###Markdown Average review score of late orders ###Code df_late = df.filter(F.col('order_delivered_customer_date') > F.col('order_estimated_delivery_date')) df_late.select(F.mean('review_score')).show() ###Output +-----------------+ |avg(review_score)| +-----------------+ | 2.5465293668955| +-----------------+ ###Markdown Late orders with a review score greater than or equal to 3 (Hypothesis 5) ###Code print("The number of late orders with a score >= 3 is", df_late.filter(F.col('review_score')>=3).count()) print("Percentage of late orders with a score >= 3:", round(df_late.filter(F.col('review_score')>=3).count() / df_late.count() * 100,2)) ###Output Percentage of late orders with a score >= 3: 45.41 ###Markdown Tests ###Code import pandas as pd import matplotlib.pyplot as plt df_new = df.groupBy(F.month('order_purchase_timestamp').alias('month'),F.year('order_purchase_timestamp') \ .alias('year')).count() \ .orderBy(F.col('year'),F.col('month')) from pyspark.sql import functions as sf df_new = df_new.withColumn('month_year', sf.concat(sf.col('month'),sf.lit('/'), sf.col('year'))) df_new = df_new.selectExpr('month', 'year', 'count as demand', 'month_year') df_new.show() df_new.show(50) df_new.toPandas().plot(x ='month_year', y='demand', kind = 'line') df_new_2 = df.select(F.month('order_purchase_timestamp').alias('month'),F.year('order_purchase_timestamp').alias('year'),F.col('review_score')) \ .orderBy(F.col('year'),F.col('month')) df_new_2.show() from pyspark.sql import functions as sf df_new_2 = df_new_2.withColumn('month_year', sf.concat(sf.col('month'),sf.lit('/'), sf.col('year'))) df_new_2.show() import seaborn as sns import pandas as pd sns.set(style="whitegrid") ax = sns.boxplot(x='month_year', y='review_score',color="g", data=df_new_2.toPandas()) ax2 = ax.twinx() ax2 = sns.lineplot(x='month_year', y='demand',color='.0', label='demand', data=df_new.toPandas()) ###Output _____no_output_____
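Beyond comparing the two averages, one might also want to check whether the late-versus-on-time score difference is statistically reliable. Below is a minimal sketch of that idea using Welch's t-test; it uses synthetic score samples (the probabilities and sample sizes are invented for illustration) rather than the Olist data, and plain numpy/scipy rather than Spark, assuming the two score columns could be collected to the driver:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

# Synthetic stand-ins for review scores of late vs. on-time orders
# (distributions below are invented, loosely mimicking the averages above)
late_scores = rng.choice([1, 2, 3, 4, 5], size=300, p=[0.35, 0.15, 0.15, 0.10, 0.25])
ontime_scores = rng.choice([1, 2, 3, 4, 5], size=300, p=[0.05, 0.05, 0.10, 0.20, 0.60])

# Welch's t-test: no equal-variance assumption between the two groups
t_stat, p_value = ttest_ind(late_scores, ontime_scores, equal_var=False)
print(f"mean late = {late_scores.mean():.2f}, "
      f"mean on-time = {ontime_scores.mean():.2f}, p = {p_value:.2g}")
```

A small p-value here would support Hypothesis 1 (late orders score lower), rather than the difference in means being a sampling artifact.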
utils/search_signlist.ipynb
###Markdown 2 Read Pickled Version of DataFrame ###Code sign_l = pd.read_pickle('output/sign_lines.p') ###Output _____no_output_____ ###Markdown 3 Prepare Data for Search ###Code anchor = '<a href="http://oracc.org/dcclt/{}", target="_blank">{}</a>' t = sign_l.copy() t['id_word'] = [anchor.format(val,val) for val in sign_l['id_word']] signs = list(set(sign_l['form'])) signs.sort() ###Output _____no_output_____ ###Markdown 4 Interactive Search ###Code @interact(sort_by = t.columns, rows = (1, len(t), 1), search = signs) def sort_df(sort_by = "id_word", ascending = False, rows = 25, search = 'A'): l = t[t.form == search] l = l.sort_values(by = sort_by, ascending = ascending).reset_index(drop=True)[:rows].style return l ###Output _____no_output_____
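The `@interact` callback above only runs inside a live notebook. The same filter-sort-truncate logic can be factored into a plain function that is easy to test; the sketch below does that on a tiny made-up DataFrame (the `id_word` and `form` values are hypothetical) and skips the HTML-anchor decoration:

```python
import pandas as pd

# Tiny made-up stand-in for the pickled sign_lines table (values are hypothetical)
sign_l = pd.DataFrame({
    "id_word": ["Q000001.1.1", "Q000001.2.1", "Q000002.1.1", "Q000002.3.1"],
    "form": ["A", "GAR", "A", "KI"],
})

def search_signs(df, search="A", sort_by="id_word", ascending=False, rows=25):
    """Same logic as the @interact callback: filter by sign form, sort, truncate."""
    hits = df[df["form"] == search]
    return hits.sort_values(by=sort_by, ascending=ascending).reset_index(drop=True)[:rows]

result = search_signs(sign_l, search="A")
print(result)
```

Splitting the logic out like this also makes it reusable outside the widget, e.g. in a batch script over the full `sign_lines.p` table.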
dlnd_language_translation.ipynb
###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . 
california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end. You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`.
###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # Implement Function source_id_text = [] target_id_text = [] # Getting lines from source and target for source_line,target_line in zip(source_text.split('\n'), target_text.split('\n')): # Append ids for source text source_id_text.append([source_vocab_to_int.get(word, source_vocab_to_int['<UNK>']) for word in source_line.split(' ')]) # Append ids for target text target_id_text.append([target_vocab_to_int.get(word, target_vocab_to_int['<UNK>']) for word in target_line.split(' ')] + [target_vocab_to_int['<EOS>']]) return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.2.1 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function inputs = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='target') learning_rate = tf.placeholder(tf.float32, [], name='learning_rate') keep_prob = tf.placeholder(tf.float32, [], name='keep_prob') target_seq_length = tf.placeholder(tf.int32, [None], name='target_sequence_length') max_target_seq = tf.reduce_max(target_seq_length, name='max_target_len') source_seq_len = tf.placeholder(tf.int32, [None], name='source_sequence_length') return inputs, targets, learning_rate, keep_prob, target_seq_length, max_target_seq, source_seq_len """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoder InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function sliced = tf.strided_slice(target_data, [0,0], [batch_size, -1], [1,1]) return tf.concat([tf.fill([batch_size, 1],target_vocab_to_int['<GO>']), sliced],1) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.mdstacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ # Implement Function encod_embed =
tf.contrib.layers.embed_sequence(rnn_inputs, vocab_size=source_vocab_size,embed_dim=encoding_embedding_size) def lstm_cell(): rnn_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) # Pass keep_prob so the dropout wrapper actually applies dropout drop = tf.contrib.rnn.DropoutWrapper(rnn_cell, output_keep_prob=keep_prob) return drop stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)]) outputs, final_state = tf.nn.dynamic_rnn(stacked_lstm, encod_embed, sequence_length=source_sequence_length,dtype=tf.float32) return outputs, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function training_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length) decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) outputs, states,
sequence_lengths = tf.contrib.seq2seq.dynamic_decode(decoder) return outputs """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function start_token = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size]) helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_token, end_of_sequence_id) decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer) outputs, states, sequence_lengths = tf.contrib.seq2seq.dynamic_decode(decoder) return outputs """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. ###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function def lstm_cell(): rnn_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) drop =
tf.contrib.rnn.DropoutWrapper(rnn_cell, output_keep_prob=keep_prob) return drop start_sequence_id = target_vocab_to_int['<GO>'] end_sequence_id = target_vocab_to_int['<EOS>'] dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)]) output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) with tf.variable_scope('decode') as decoding_scope: training_decoder_output = decoding_layer_train(encoder_state, stacked_lstm, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) inference_decoder_output = decoding_layer_infer(encoder_state, stacked_lstm, dec_embeddings, start_sequence_id, end_sequence_id, max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function.
###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set
`encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 10 # Batch Size batch_size = 200 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 256 decoding_embedding_size = 256 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.75 display_step = 100 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = 
tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. 
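During training, predictions and targets can have different sequence lengths, so the accuracy metric zero-pads the shorter array before comparing element-wise. A minimal NumPy sketch of that idea (the toy arrays are hypothetical):

```python
import numpy as np

def padded_accuracy(target, logits):
    # Zero-pad the shorter array along the time axis so both have the
    # same shape, then take the element-wise match rate.
    max_seq = max(target.shape[1], logits.shape[1])
    target = np.pad(target, [(0, 0), (0, max_seq - target.shape[1])], 'constant')
    logits = np.pad(logits, [(0, 0), (0, max_seq - logits.shape[1])], 'constant')
    return np.mean(np.equal(target, logits))

# Toy case: the prediction is one step shorter than the target,
# so the padded position counts as a mismatch.
target = np.array([[5, 6, 7, 1]])   # assume 1 = <EOS>
pred = np.array([[5, 6, 7]])
print(padded_accuracy(target, pred))  # 3 of 4 positions match -> 0.75
```

Note that padding with zeros slightly penalizes length mismatches, which is exactly what you want when the model stops too early or too late.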
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 100/689 - Train Accuracy: 0.4574, Validation Accuracy: 0.4782, Loss: 1.4330 Epoch 0 Batch 200/689 - Train Accuracy: 0.5730, Validation Accuracy: 0.5855, Loss: 0.8641 Epoch 0 Batch 300/689 - Train Accuracy: 0.6188, Validation Accuracy: 0.6243, Loss: 0.6258 Epoch 0 Batch 400/689 - Train Accuracy: 0.7066, Validation Accuracy: 0.6818, Loss: 0.4386 Epoch 0 Batch 500/689 - Train Accuracy: 0.7652, Validation Accuracy: 0.7493, Loss: 0.3277 Epoch 0 Batch 600/689 - Train Accuracy: 0.8470, Validation Accuracy: 0.8059, Loss: 0.2305 Epoch 1 Batch 100/689 - Train Accuracy: 0.8857, Validation Accuracy: 0.8695, Loss: 0.1077 Epoch 1 Batch 200/689 - Train Accuracy: 0.9340, Validation Accuracy: 0.8900, Loss: 0.0713 Epoch 1 Batch 300/689 - Train Accuracy: 0.9213, Validation Accuracy: 0.9034, Loss: 0.0702 Epoch 1 Batch 400/689 - Train Accuracy: 0.9314, Validation Accuracy: 0.9134, Loss: 0.0504 Epoch 1 Batch 500/689 - Train Accuracy: 0.9321, Validation Accuracy: 0.9216, Loss: 0.0475 Epoch 1 Batch 600/689 - Train Accuracy: 0.9640, Validation Accuracy: 0.9311, Loss: 0.0440 Epoch 2 Batch 100/689 - Train Accuracy: 0.9376, Validation Accuracy: 0.9420, Loss: 0.0353 Epoch 2 Batch 200/689 - Train Accuracy: 0.9645, Validation Accuracy: 0.9361, Loss: 0.0279 Epoch 2 Batch 300/689 - Train Accuracy: 0.9590, Validation Accuracy: 0.9511, Loss: 0.0306 Epoch 2 Batch 400/689 - Train Accuracy: 0.9586, Validation Accuracy: 0.9468, Loss: 0.0270 Epoch 2 Batch 500/689 - Train Accuracy: 0.9621, Validation Accuracy: 0.9618, Loss: 0.0248 Epoch 2 Batch 600/689 - Train Accuracy: 0.9830, Validation Accuracy: 0.9539, Loss: 0.0249 Epoch 3 Batch 100/689 - Train Accuracy: 0.9607, Validation Accuracy: 0.9586, Loss: 0.0211 Epoch 3 Batch 200/689 - Train Accuracy: 0.9765, 
Validation Accuracy: 0.9541, Loss: 0.0174 Epoch 3 Batch 300/689 - Train Accuracy: 0.9860, Validation Accuracy: 0.9536, Loss: 0.0189 Epoch 3 Batch 400/689 - Train Accuracy: 0.9611, Validation Accuracy: 0.9620, Loss: 0.0190 Epoch 3 Batch 500/689 - Train Accuracy: 0.9636, Validation Accuracy: 0.9595, Loss: 0.0189 Epoch 3 Batch 600/689 - Train Accuracy: 0.9730, Validation Accuracy: 0.9716, Loss: 0.0161 Epoch 4 Batch 100/689 - Train Accuracy: 0.9717, Validation Accuracy: 0.9714, Loss: 0.0159 Epoch 4 Batch 200/689 - Train Accuracy: 0.9810, Validation Accuracy: 0.9627, Loss: 0.0144 Epoch 4 Batch 300/689 - Train Accuracy: 0.9895, Validation Accuracy: 0.9561, Loss: 0.0122 Epoch 4 Batch 400/689 - Train Accuracy: 0.9820, Validation Accuracy: 0.9707, Loss: 0.0143 Epoch 4 Batch 500/689 - Train Accuracy: 0.9698, Validation Accuracy: 0.9589, Loss: 0.0162 Epoch 4 Batch 600/689 - Train Accuracy: 0.9805, Validation Accuracy: 0.9714, Loss: 0.0134 Epoch 5 Batch 100/689 - Train Accuracy: 0.9800, Validation Accuracy: 0.9739, Loss: 0.0132 Epoch 5 Batch 200/689 - Train Accuracy: 0.9825, Validation Accuracy: 0.9607, Loss: 0.0101 Epoch 5 Batch 300/689 - Train Accuracy: 0.9880, Validation Accuracy: 0.9602, Loss: 0.0109 Epoch 5 Batch 400/689 - Train Accuracy: 0.9841, Validation Accuracy: 0.9784, Loss: 0.0120 Epoch 5 Batch 500/689 - Train Accuracy: 0.9698, Validation Accuracy: 0.9680, Loss: 0.0125 Epoch 5 Batch 600/689 - Train Accuracy: 0.9870, Validation Accuracy: 0.9764, Loss: 0.0118 Epoch 6 Batch 100/689 - Train Accuracy: 0.9752, Validation Accuracy: 0.9748, Loss: 0.0120 Epoch 6 Batch 200/689 - Train Accuracy: 0.9830, Validation Accuracy: 0.9718, Loss: 0.0081 Epoch 6 Batch 300/689 - Train Accuracy: 0.9965, Validation Accuracy: 0.9641, Loss: 0.0083 Epoch 6 Batch 400/689 - Train Accuracy: 0.9889, Validation Accuracy: 0.9770, Loss: 0.0106 Epoch 6 Batch 500/689 - Train Accuracy: 0.9845, Validation Accuracy: 0.9661, Loss: 0.0094 Epoch 6 Batch 600/689 - Train Accuracy: 0.9890, Validation 
Accuracy: 0.9859, Loss: 0.0095 Epoch 7 Batch 100/689 - Train Accuracy: 0.9919, Validation Accuracy: 0.9693, Loss: 0.0083 Epoch 7 Batch 200/689 - Train Accuracy: 0.9920, Validation Accuracy: 0.9661, Loss: 0.0068 Epoch 7 Batch 300/689 - Train Accuracy: 0.9978, Validation Accuracy: 0.9689, Loss: 0.0064 Epoch 7 Batch 400/689 - Train Accuracy: 0.9870, Validation Accuracy: 0.9814, Loss: 0.0084 Epoch 7 Batch 500/689 - Train Accuracy: 0.9850, Validation Accuracy: 0.9748, Loss: 0.0085 Epoch 7 Batch 600/689 - Train Accuracy: 0.9912, Validation Accuracy: 0.9811, Loss: 0.0069 Epoch 8 Batch 100/689 - Train Accuracy: 0.9760, Validation Accuracy: 0.9661, Loss: 0.0086 Epoch 8 Batch 200/689 - Train Accuracy: 0.9975, Validation Accuracy: 0.9684, Loss: 0.0069 Epoch 8 Batch 300/689 - Train Accuracy: 0.9978, Validation Accuracy: 0.9643, Loss: 0.0061 Epoch 8 Batch 400/689 - Train Accuracy: 0.9852, Validation Accuracy: 0.9759, Loss: 0.0082 Epoch 8 Batch 500/689 - Train Accuracy: 0.9910, Validation Accuracy: 0.9718, Loss: 0.0054 Epoch 8 Batch 600/689 - Train Accuracy: 0.9942, Validation Accuracy: 0.9780, Loss: 0.0071 Epoch 9 Batch 100/689 - Train Accuracy: 0.9867, Validation Accuracy: 0.9702, Loss: 0.0060 Epoch 9 Batch 200/689 - Train Accuracy: 0.9942, Validation Accuracy: 0.9655, Loss: 0.0052 Epoch 9 Batch 300/689 - Train Accuracy: 0.9950, Validation Accuracy: 0.9675, Loss: 0.0057 Epoch 9 Batch 400/689 - Train Accuracy: 0.9934, Validation Accuracy: 0.9784, Loss: 0.0048 Epoch 9 Batch 500/689 - Train Accuracy: 0.9933, Validation Accuracy: 0.9677, Loss: 0.0066 Epoch 9 Batch 600/689 - Train Accuracy: 0.9895, Validation Accuracy: 0.9752, Loss: 0.0081 Model Trained and Saved ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function word_list = sentence.lower().split(' ') return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in word_list] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .' 
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [105, 89, 126, 12, 5, 187, 46] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [57, 60, 218, 90, 241, 14, 98, 37, 1] French Words: il a vu un vieux camion jaune . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. 
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function #return None, None source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_text.split("\n")] target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] + [target_vocab_to_int['<EOS>']] for sentence in target_text.split("\n")] return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.1.0 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function #return None, None, None, None, None, None, None input = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') learning_rate = tf.placeholder(tf.float32) keep_prob = tf.placeholder(tf.float32, name='keep_prob') target_sequence_length = tf.placeholder(tf.int32, [None], name='target_sequence_length') max_target_len = tf.reduce_max(target_sequence_length, name='max_target_len') source_sequence_length = tf.placeholder(tf.int32, [None], name='source_sequence_length') return input, targets, learning_rate, keep_prob, target_sequence_length, max_target_len, source_sequence_length """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoder InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch. 
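The transformation is easiest to see on a toy batch; a NumPy sketch of the same slice-and-concat steps (the word ids and the `<GO>` value of 1 are made up for illustration):

```python
import numpy as np

def process_decoder_input_np(target_data, go_id):
    ending = target_data[:, :-1]                    # drop the last id of every row
    go = np.full((target_data.shape[0], 1), go_id)  # a column of <GO> ids
    return np.concatenate([go, ending], axis=1)     # prepend <GO> to each row

batch = np.array([[10, 11, 12, 3],   # assume 3 = <EOS>
                  [20, 21, 22, 3]])
print(process_decoder_input_np(batch, go_id=1))
# [[ 1 10 11 12]
#  [ 1 20 21 22]]
```

The result is the target sequence shifted right by one step, so at training time the decoder always sees the previous *correct* word as input.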
###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function #return None ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) 
""" # TODO: Implement Function #return None, None # Encoder embedding enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) # RNN cell def make_cell(rnn_size): enc_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) enc_cell = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob) return enc_cell enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32) return enc_output, enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function #return None # Training Decoder # Helper for the training process. 
Used by BasicDecoder to read inputs. training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length) # Basic decoder training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length)[0] return training_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function #return None # Inference Decoder start_tokens = 
tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens') # Helper for the inference process. inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) # Basic decoder inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)[0] return inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. 
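At inference time there are no target words to feed in, so the `GreedyEmbeddingHelper` path feeds each argmax prediction back as the next decoder input until `<EOS>` or the length limit is reached. A pure-Python sketch of that loop, with a hypothetical `step` function standing in for the RNN cell plus output layer:

```python
def greedy_decode(step, start_id, eos_id, max_len):
    """Feed each predicted id back in as the next input (greedy decoding)."""
    token, output = start_id, []
    for _ in range(max_len):
        token = step(token)      # hypothetical model step: returns the argmax id
        if token == eos_id:      # stop as soon as <EOS> is produced
            break
        output.append(token)
    return output

# Toy 'model': the next id is simply the current id plus one, hitting <EOS>=5.
print(greedy_decode(lambda t: t + 1, start_id=1, eos_id=5, max_len=10))  # [2, 3, 4]
```

This is also why training and inference must share the same variables via `tf.variable_scope`: both loops run the identical cell and output layer, they only differ in where the next input comes from.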
###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function #return None, None # Decoder Embedding #target_vocab_size = len(target_vocab_to_int) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # Construct the decoder cell def make_cell(rnn_size): dec_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob) return dec_cell dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) # Dense layer to translate the decoder's output at each time # step into a choice from the target vocabulary output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) with tf.variable_scope("decode"): training_decoder_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) # Reuses the same parameters trained by the training process with 
tf.variable_scope("decode", reuse=True): start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] inference_decoder_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. 
###Code
def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length,
                  target_sequence_length, max_target_sentence_length,
                  source_vocab_size, target_vocab_size,
                  enc_embedding_size, dec_embedding_size,
                  rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param source_sequence_length: Sequence Lengths of source sequences in the batch
    :param target_sequence_length: Sequence Lengths of target sequences in the batch
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    # TODO: Implement Function
    #return None, None

    # Pass the input data through the encoder.
    # We'll ignore the encoder output, but use the state.
    _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
                                  source_sequence_length, source_vocab_size,
                                  enc_embedding_size)

    # Prepare the target sequences we'll feed to the decoder in training mode
    dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)

    # Pass encoder state and decoder inputs to the decoder
    training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state,
                                                                       target_sequence_length,
                                                                       max_target_sentence_length,
                                                                       rnn_size, num_layers,
                                                                       target_vocab_to_int,
                                                                       target_vocab_size,
                                                                       batch_size, keep_prob,
                                                                       dec_embedding_size)

    return training_decoder_output, inference_decoder_output


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)

###Output
Tests Passed

###Markdown
Neural Network Training

Hyperparameters

Tune the following parameters:

- Set `epochs` to the number of epochs.
- Set `batch_size` to the batch size.
- Set `rnn_size` to the size of the RNNs.
- Set `num_layers` to the number of layers.
- Set `encoding_embedding_size` to the size of the embedding for the encoder.
- Set `decoding_embedding_size` to the size of the embedding for the decoder.
- Set `learning_rate` to the learning rate.
- Set `keep_probability` to the Dropout keep probability.
- Set `display_step` to state how many steps between each debug output statement.

###Code
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 3
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.005
# Dropout Keep Probability
keep_probability = 0.5

display_step = 25

###Output
_____no_output_____

###Markdown
Build the Graph

Build the graph using the neural network you implemented.
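The optimization block in the graph cell below clips every gradient elementwise to [-1, 1] with `tf.clip_by_value` before applying it. In plain Python the per-element operation is just a clamp; a small sketch (toy gradient values, not taken from the project):

```python
def clip_by_value(values, clip_min=-1.0, clip_max=1.0):
    """Elementwise clamp, mirroring what tf.clip_by_value does to each gradient tensor."""
    return [min(max(v, clip_min), clip_max) for v in values]

grads = [0.3, -2.5, 1.7, -0.9]
print(clip_by_value(grads))  # [0.3, -1.0, 1.0, -0.9]
```

Clipping like this keeps a single exploding gradient from destabilizing the Adam update, which is why it is applied between `compute_gradients` and `apply_gradients`.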
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for 
sentence in sentence_batch]


def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
    """Batch targets, sources, and the lengths of their sentences together"""
    for batch_i in range(0, len(sources)//batch_size):
        start_i = batch_i * batch_size

        # Slice the right amount for the batch
        sources_batch = sources[start_i:start_i + batch_size]
        targets_batch = targets[start_i:start_i + batch_size]

        # Pad
        pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
        pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))

        # Need the lengths for the _lengths parameters
        pad_targets_lengths = []
        for target in pad_targets_batch:
            pad_targets_lengths.append(len(target))

        pad_source_lengths = []
        for source in pad_sources_batch:
            pad_source_lengths.append(len(source))

        yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths

###Output
_____no_output_____

###Markdown
Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
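Before the full training loop, it helps to see `pad_sentence_batch` in isolation: it only appends `<PAD>` ids up to the longest sentence in the batch, so every row in the resulting array has the same length. A self-contained rerun of the same logic on made-up ids:

```python
def pad_sentence_batch(sentence_batch, pad_int):
    # Same logic as the helper above: pad every sentence to the batch maximum.
    max_len = max(len(s) for s in sentence_batch)
    return [s + [pad_int] * (max_len - len(s)) for s in sentence_batch]

batch = [[4, 5], [6, 7, 8, 9], [10]]
print(pad_sentence_batch(batch, pad_int=0))
# [[4, 5, 0, 0], [6, 7, 8, 9], [10, 0, 0, 0]]
```

Because padding is computed per batch (not over the whole dataset), batches of short sentences waste fewer `<PAD>` steps, which is the design choice `get_batches` relies on.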
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 25/269 - Train Accuracy: 0.3922, Validation Accuracy: 0.4493, Loss: 2.8381 Epoch 0 Batch 50/269 - Train Accuracy: 0.4156, Validation Accuracy: 0.4650, Loss: 2.3619 Epoch 0 Batch 75/269 - Train Accuracy: 0.4475, Validation Accuracy: 0.4644, Loss: 2.0315 Epoch 0 Batch 100/269 - Train Accuracy: 0.5406, Validation Accuracy: 0.5433, Loss: 1.7775 Epoch 0 Batch 125/269 - Train Accuracy: 0.5575, Validation Accuracy: 0.5589, Loss: 1.5352 Epoch 0 Batch 150/269 - Train Accuracy: 0.5583, Validation Accuracy: 0.5683, Loss: 1.1283 Epoch 0 Batch 175/269 - Train Accuracy: 0.5928, Validation Accuracy: 0.5983, Loss: 0.9275 Epoch 0 Batch 200/269 - Train Accuracy: 0.6045, Validation Accuracy: 0.6355, Loss: 0.8332 Epoch 0 Batch 225/269 - Train Accuracy: 0.6153, Validation Accuracy: 0.6270, Loss: 0.7535 Epoch 0 Batch 250/269 - Train Accuracy: 0.6320, Validation Accuracy: 0.6415, Loss: 0.6939 Epoch 1 Batch 25/269 - Train Accuracy: 0.6366, Validation Accuracy: 0.6562, Loss: 0.6571 Epoch 1 Batch 50/269 - Train Accuracy: 0.6316, Validation Accuracy: 0.6475, Loss: 0.6044 Epoch 1 Batch 75/269 - Train Accuracy: 0.6729, Validation Accuracy: 0.6625, Loss: 0.5352 Epoch 1 Batch 100/269 - Train Accuracy: 0.6881, Validation Accuracy: 0.6738, Loss: 0.5192 Epoch 1 Batch 125/269 - Train Accuracy: 0.6828, Validation Accuracy: 0.6787, Loss: 0.4942 Epoch 1 Batch 150/269 - Train Accuracy: 0.6903, Validation Accuracy: 0.6761, Loss: 0.4737 Epoch 1 Batch 175/269 - Train Accuracy: 0.7166, Validation Accuracy: 0.7161, Loss: 0.4735 Epoch 1 Batch 200/269 - Train Accuracy: 0.7066, Validation Accuracy: 0.7123, Loss: 0.4464 Epoch 1 Batch 225/269 - Train Accuracy: 0.7009, Validation Accuracy: 0.7161, Loss: 0.4165 Epoch 1 Batch 250/269 - Train Accuracy: 0.7281, Validation 
Accuracy: 0.7439, Loss: 0.3991 Epoch 2 Batch 25/269 - Train Accuracy: 0.7383, Validation Accuracy: 0.7547, Loss: 0.3840 Epoch 2 Batch 50/269 - Train Accuracy: 0.7416, Validation Accuracy: 0.7567, Loss: 0.3646 Epoch 2 Batch 75/269 - Train Accuracy: 0.7793, Validation Accuracy: 0.7589, Loss: 0.3218 Epoch 2 Batch 100/269 - Train Accuracy: 0.7961, Validation Accuracy: 0.7716, Loss: 0.3145 Epoch 2 Batch 125/269 - Train Accuracy: 0.7992, Validation Accuracy: 0.7706, Loss: 0.2965 Epoch 2 Batch 150/269 - Train Accuracy: 0.7927, Validation Accuracy: 0.7778, Loss: 0.2831 Epoch 2 Batch 175/269 - Train Accuracy: 0.7806, Validation Accuracy: 0.7921, Loss: 0.2876 Epoch 2 Batch 200/269 - Train Accuracy: 0.8009, Validation Accuracy: 0.8050, Loss: 0.2649 Epoch 2 Batch 225/269 - Train Accuracy: 0.7949, Validation Accuracy: 0.8157, Loss: 0.2524 Epoch 2 Batch 250/269 - Train Accuracy: 0.8479, Validation Accuracy: 0.8450, Loss: 0.2377 Epoch 3 Batch 25/269 - Train Accuracy: 0.8328, Validation Accuracy: 0.8406, Loss: 0.2662 Epoch 3 Batch 50/269 - Train Accuracy: 0.8356, Validation Accuracy: 0.8443, Loss: 0.2359 Epoch 3 Batch 75/269 - Train Accuracy: 0.8747, Validation Accuracy: 0.8555, Loss: 0.1930 Epoch 3 Batch 100/269 - Train Accuracy: 0.8778, Validation Accuracy: 0.8619, Loss: 0.1822 Epoch 3 Batch 125/269 - Train Accuracy: 0.8794, Validation Accuracy: 0.8663, Loss: 0.1649 Epoch 3 Batch 150/269 - Train Accuracy: 0.8651, Validation Accuracy: 0.8789, Loss: 0.1691 Epoch 3 Batch 175/269 - Train Accuracy: 0.8693, Validation Accuracy: 0.8883, Loss: 0.1767 Epoch 3 Batch 200/269 - Train Accuracy: 0.8929, Validation Accuracy: 0.8887, Loss: 0.1460 Epoch 3 Batch 225/269 - Train Accuracy: 0.8661, Validation Accuracy: 0.8879, Loss: 0.1373 Epoch 3 Batch 250/269 - Train Accuracy: 0.9104, Validation Accuracy: 0.8972, Loss: 0.1300 Epoch 4 Batch 25/269 - Train Accuracy: 0.8888, Validation Accuracy: 0.9162, Loss: 0.1299 Epoch 4 Batch 50/269 - Train Accuracy: 0.8853, Validation Accuracy: 0.8996, Loss: 
0.1246 Epoch 4 Batch 75/269 - Train Accuracy: 0.9235, Validation Accuracy: 0.9160, Loss: 0.1046 Epoch 4 Batch 100/269 - Train Accuracy: 0.9237, Validation Accuracy: 0.9248, Loss: 0.1004 Epoch 4 Batch 125/269 - Train Accuracy: 0.9265, Validation Accuracy: 0.9159, Loss: 0.0885 Epoch 4 Batch 150/269 - Train Accuracy: 0.9306, Validation Accuracy: 0.9265, Loss: 0.0978 Epoch 4 Batch 175/269 - Train Accuracy: 0.9118, Validation Accuracy: 0.9264, Loss: 0.1008 Epoch 4 Batch 200/269 - Train Accuracy: 0.9295, Validation Accuracy: 0.9242, Loss: 0.0833 Epoch 4 Batch 225/269 - Train Accuracy: 0.9076, Validation Accuracy: 0.9318, Loss: 0.0811 Epoch 4 Batch 250/269 - Train Accuracy: 0.9324, Validation Accuracy: 0.9263, Loss: 0.0793 Epoch 5 Batch 25/269 - Train Accuracy: 0.9277, Validation Accuracy: 0.9399, Loss: 0.0850 Epoch 5 Batch 50/269 - Train Accuracy: 0.9208, Validation Accuracy: 0.9384, Loss: 0.0813 Epoch 5 Batch 75/269 - Train Accuracy: 0.9414, Validation Accuracy: 0.9327, Loss: 0.0730 Epoch 5 Batch 100/269 - Train Accuracy: 0.9377, Validation Accuracy: 0.9414, Loss: 0.0719 Epoch 5 Batch 125/269 - Train Accuracy: 0.9403, Validation Accuracy: 0.9420, Loss: 0.0640 Epoch 5 Batch 150/269 - Train Accuracy: 0.9377, Validation Accuracy: 0.9381, Loss: 0.0743 Epoch 5 Batch 175/269 - Train Accuracy: 0.9317, Validation Accuracy: 0.9402, Loss: 0.0778 Epoch 5 Batch 200/269 - Train Accuracy: 0.9431, Validation Accuracy: 0.9362, Loss: 0.0599 Epoch 5 Batch 225/269 - Train Accuracy: 0.9407, Validation Accuracy: 0.9515, Loss: 0.0578 Epoch 5 Batch 250/269 - Train Accuracy: 0.9492, Validation Accuracy: 0.9476, Loss: 0.0601 Epoch 6 Batch 25/269 - Train Accuracy: 0.9500, Validation Accuracy: 0.9520, Loss: 0.0692 Epoch 6 Batch 50/269 - Train Accuracy: 0.9319, Validation Accuracy: 0.9483, Loss: 0.0721 Epoch 6 Batch 75/269 - Train Accuracy: 0.9486, Validation Accuracy: 0.9447, Loss: 0.0637 Epoch 6 Batch 100/269 - Train Accuracy: 0.9540, Validation Accuracy: 0.9525, Loss: 0.0594 Epoch 6 Batch 
125/269 - Train Accuracy: 0.9608, Validation Accuracy: 0.9496, Loss: 0.0502 Epoch 6 Batch 150/269 - Train Accuracy: 0.9528, Validation Accuracy: 0.9497, Loss: 0.0563 Epoch 6 Batch 175/269 - Train Accuracy: 0.9360, Validation Accuracy: 0.9550, Loss: 0.0667 Epoch 6 Batch 200/269 - Train Accuracy: 0.9536, Validation Accuracy: 0.9532, Loss: 0.0454 Epoch 6 Batch 225/269 - Train Accuracy: 0.9508, Validation Accuracy: 0.9545, Loss: 0.0485 Epoch 6 Batch 250/269 - Train Accuracy: 0.9518, Validation Accuracy: 0.9547, Loss: 0.0556 Epoch 7 Batch 25/269 - Train Accuracy: 0.9592, Validation Accuracy: 0.9578, Loss: 0.0598 Epoch 7 Batch 50/269 - Train Accuracy: 0.9456, Validation Accuracy: 0.9579, Loss: 0.0564 Epoch 7 Batch 75/269 - Train Accuracy: 0.9571, Validation Accuracy: 0.9526, Loss: 0.0542 Epoch 7 Batch 100/269 - Train Accuracy: 0.9601, Validation Accuracy: 0.9561, Loss: 0.0485 Epoch 7 Batch 125/269 - Train Accuracy: 0.9674, Validation Accuracy: 0.9603, Loss: 0.0437 Epoch 7 Batch 150/269 - Train Accuracy: 0.9652, Validation Accuracy: 0.9573, Loss: 0.0461 Epoch 7 Batch 175/269 - Train Accuracy: 0.9463, Validation Accuracy: 0.9611, Loss: 0.0583 Epoch 7 Batch 200/269 - Train Accuracy: 0.9636, Validation Accuracy: 0.9588, Loss: 0.0435 Epoch 7 Batch 225/269 - Train Accuracy: 0.9585, Validation Accuracy: 0.9693, Loss: 0.0461 Epoch 7 Batch 250/269 - Train Accuracy: 0.9585, Validation Accuracy: 0.9621, Loss: 0.0452 Epoch 8 Batch 25/269 - Train Accuracy: 0.9676, Validation Accuracy: 0.9627, Loss: 0.0453 Epoch 8 Batch 50/269 - Train Accuracy: 0.9592, Validation Accuracy: 0.9656, Loss: 0.0491 Epoch 8 Batch 75/269 - Train Accuracy: 0.9638, Validation Accuracy: 0.9697, Loss: 0.0445 Epoch 8 Batch 100/269 - Train Accuracy: 0.9593, Validation Accuracy: 0.9614, Loss: 0.0429 Epoch 8 Batch 125/269 - Train Accuracy: 0.9711, Validation Accuracy: 0.9663, Loss: 0.0374 Epoch 8 Batch 150/269 - Train Accuracy: 0.9719, Validation Accuracy: 0.9662, Loss: 0.0424 Epoch 8 Batch 175/269 - Train Accuracy: 
0.9536, Validation Accuracy: 0.9703, Loss: 0.0510 Epoch 8 Batch 200/269 - Train Accuracy: 0.9663, Validation Accuracy: 0.9648, Loss: 0.0384 Epoch 8 Batch 225/269 - Train Accuracy: 0.9651, Validation Accuracy: 0.9646, Loss: 0.0341 Epoch 8 Batch 250/269 - Train Accuracy: 0.9651, Validation Accuracy: 0.9660, Loss: 0.0421 Epoch 9 Batch 25/269 - Train Accuracy: 0.9611, Validation Accuracy: 0.9682, Loss: 0.0429 Epoch 9 Batch 50/269 - Train Accuracy: 0.9553, Validation Accuracy: 0.9662, Loss: 0.0464 Epoch 9 Batch 75/269 - Train Accuracy: 0.9564, Validation Accuracy: 0.9660, Loss: 0.0428 Epoch 9 Batch 100/269 - Train Accuracy: 0.9598, Validation Accuracy: 0.9674, Loss: 0.0400 Epoch 9 Batch 125/269 - Train Accuracy: 0.9757, Validation Accuracy: 0.9680, Loss: 0.0327 Epoch 9 Batch 150/269 - Train Accuracy: 0.9686, Validation Accuracy: 0.9598, Loss: 0.0423 Epoch 9 Batch 175/269 - Train Accuracy: 0.9554, Validation Accuracy: 0.9630, Loss: 0.0451 Epoch 9 Batch 200/269 - Train Accuracy: 0.9723, Validation Accuracy: 0.9654, Loss: 0.0347 Epoch 9 Batch 225/269 - Train Accuracy: 0.9629, Validation Accuracy: 0.9693, Loss: 0.0334 Epoch 9 Batch 250/269 - Train Accuracy: 0.9602, Validation Accuracy: 0.9642, Loss: 0.0343 Model Trained and Saved ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. 
Implement the function `sentence_to_seq()` to preprocess new sentences.

- Convert the sentence to lowercase
- Convert words into ids using `vocab_to_int`
- Convert words not in the vocabulary to the `<UNK>` word id.

###Code
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    # TODO: Implement Function
    #return None
    sentence = sentence.lower()
    return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.split(' ')]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)

###Output
Tests Passed

###Markdown
Translate

This will translate `translate_sentence` from English to French.

###Code
translate_sentence = 'he saw a old yellow truck .'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_path + '.meta')
    loader.restore(sess, load_path)

    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('predictions:0')
    target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
    source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

    translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
                                         target_sequence_length: [len(translate_sentence)*2]*batch_size,
                                         source_sequence_length: [len(translate_sentence)]*batch_size,
                                         keep_prob: 1.0})[0]

print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))

print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: 
{}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [80, 86, 43, 217, 123, 32, 39] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [85, 174, 213, 166, 250, 302, 20, 172, 1] French Words: il a vu un vieux camion rouge . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. 
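The "roughly the number of unique words" statistic computed in the next cell is just the size of a set built over whitespace tokens, so punctuation counts as words too. On one sample sentence from the corpus:

```python
# One sentence from the English corpus; ',' and '.' are separate tokens.
text = "new jersey is sometimes quiet during autumn , and it is snowy in april ."
print('Roughly the number of unique words: {}'.format(len(set(text.split()))))
# Roughly the number of unique words: 14  ("is" appears twice)
```

The cell below applies the same idea to the whole corpus (via a dict comprehension rather than `set`, which is equivalent for counting).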
###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . 
california est généralement calme en mars , et il est généralement chaud en juin .
les états-unis est parfois légère en juin , et il fait froid en septembre .
votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .
son fruit préféré est l'orange , mais mon préféré est le raisin .
paris est relaxant en décembre , mais il est généralement froid en juillet .
new jersey est occupé au printemps , et il est jamais chaude en mars .
notre fruit est moins aimé le citron , mais mon moins aimé est le raisin .
les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .

###Markdown
Implement Preprocessing Function

Text to Word Ids

As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.

You can get the `<EOS>` word id by doing:

```python
target_vocab_to_int['<EOS>']
```

You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`.

###Code
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
    # TODO: Implement Function

    # Split into sentences
    source_sentences = source_text.split('\n')
    target_sentences = target_text.split('\n')

    def processSentence(sentence, dict_vocab, target=False):
        text_id_list = []
        for word in sentence.split():
            text_id_list.append(dict_vocab[word])
        if target:
            # Append <EOS> to each target sentence
            text_id_list.append(dict_vocab['<EOS>'])
        return text_id_list

    source_text_list = []
    for s_sentence in source_sentences:
        source_text_list.append(processSentence(s_sentence, source_vocab_to_int))

    target_text_list = []
    for t_sentence in target_sentences:
        target_text_list.append(processSentence(t_sentence, target_vocab_to_int, target=True))

    return source_text_list, target_text_list


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)

###Output
Tests Passed

###Markdown
Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.

###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)

###Output
_____no_output_____

###Markdown
Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
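As a sanity check on the preprocessing that was just saved: `text_to_ids` reduces to one dictionary lookup per word, plus an `<EOS>` id appended to every target sentence. A toy run with a made-up vocabulary (the ids below are illustrative, not the project's real vocabulary):

```python
source_vocab = {'new': 4, 'jersey': 5, 'is': 6, 'quiet': 7}
target_vocab = {'<EOS>': 1, 'new': 8, 'jersey': 9, 'est': 10, 'calme': 11}

def to_ids(text, vocab, append_eos=False):
    # One list of ids per newline-separated sentence, mirroring text_to_ids above.
    ids = [[vocab[w] for w in line.split()] for line in text.split('\n')]
    if append_eos:
        ids = [line + [vocab['<EOS>']] for line in ids]
    return ids

print(to_ids('new jersey is quiet', source_vocab))                    # [[4, 5, 6, 7]]
print(to_ids('new jersey est calme', target_vocab, append_eos=True))  # [[8, 9, 10, 11, 1]]
```

Only target sentences get `<EOS>`; the decoder learns to emit it, while the encoder never needs it.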
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.1.0 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.
- Source sequence length placeholder named "source_sequence_length" with rank 1

Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)

###Code
def model_inputs():
    """
    Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
    :return: Tuple (input, targets, learning rate, keep probability, target sequence length,
    max target sequence length, source sequence length)
    """
    # TODO: Implement Function

    # Rank 2 tensors, shape (batch_size, seq_length)
    input_ = tf.placeholder(tf.int32, [None, None], name='input')
    targets_ = tf.placeholder(tf.int32, [None, None], name='targets')

    # Rank 1 tensors
    target_seq_len = tf.placeholder(tf.int32, (None,), name='target_sequence_length')
    source_seq_len = tf.placeholder(tf.int32, (None,), name='source_sequence_length')

    # Rank 0 tensors
    learn_rate = tf.placeholder(tf.float32, name='learn_rate')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    max_target_seq_len = tf.reduce_max(target_seq_len, name='max_target_len')

    return input_, targets_, learn_rate, keep_prob, target_seq_len, max_target_seq_len, source_seq_len


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)

###Output
Tests Passed

###Markdown
Process Decoder Input

Implement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
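Before the TF implementation, the transformation can be previewed on plain lists: drop the final id of each row, then prepend `<GO>`. This is exactly what the strided slice plus concat do on the whole batch tensor (toy ids below are made up for illustration):

```python
def process_decoder_input_lists(target_batch, go_id):
    # Drop the final id of each row, then prepend <GO> -- the list-level
    # equivalent of tf.strided_slice followed by tf.concat.
    return [[go_id] + row[:-1] for row in target_batch]

batch = [[12, 13, 1],   # "word word <EOS>"
         [14, 15, 1]]
print(process_decoder_input_lists(batch, go_id=3))
# [[3, 12, 13], [3, 14, 15]]
```

The last id (usually `<EOS>` or `<PAD>`) is dropped because the decoder never needs to *read* the end marker during training — it only needs to learn to *emit* it.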
###Code
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for decoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    # TODO: Implement Function

    # Extract a batch window excluding the last word id.
    '''tf.strided_slice(
           input=input_tensor,
           begin: list of beginning coords (batch_start_coord, seq_start_coord)
           end: list of last elements to retrieve
       )
       It defines a window to retrieve in the tensor.
    '''
    excluded_last = tf.strided_slice(target_data,
                                     [0, 0],            # Start at the beginning; shape rank is 2
                                     [batch_size, -1],  # Keep every element but the last for every sequence in the batch
                                     strides=[1, 1])    # Move 1 by 1

    # Create a <GO> tensor to prepend
    goes_tensor = tf.fill([batch_size, 1],              # (batch_size, 1) tensor shape
                          target_vocab_to_int['<GO>'])  # filled with <GO>

    # Concat tensors
    return tf.concat([goes_tensor, excluded_last], 1)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)

###Output
Tests Passed

###Markdown
Encoding

Implement `encoding_layer()` to create an Encoder RNN layer:

* Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence)
* Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper)
* Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)

###Code
from imp import reload
reload(tests)


def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ # TODO: Implement Function #Embed encoder input embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) #Build cell utility function def build_cell(): cell = tf.contrib.rnn.LSTMCell(num_units=rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) return cell #Stack num_layers cells enc_cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(num_layers)]) #Build encoder rnn encoder_output, encoder_state = tf.nn.dynamic_rnn(enc_cell, embed_input, sequence_length=source_sequence_length, dtype=tf.float32) return encoder_output, encoder_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param 
dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function #Training helper: feeds the ground-truth (teacher-forced) inputs to the decoder training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length) #RNN decoder for training (it will share weights with inference); a plain BasicDecoder #output_layer is the final dense layer projecting into the target vocabulary space training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length) return training_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS ID :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function ''' tf.tile replicates a tensor multiple times: const_tensor = [1] times = [4] -> [1, 1, 1, 1] ''' start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size]) #GreedyEmbeddingHelper plays the role of TrainingHelper at inference time: it feeds the argmax output back in #start tokens and the end-of-sequence id must be provided inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) return inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. ###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function #Embed input decoder_embedding = tf.Variable(tf.truncated_normal([target_vocab_size, decoding_embedding_size], stddev=0.1)) decoder_embedding_input = tf.nn.embedding_lookup(decoder_embedding, dec_input) #Decoder LSTM cell def build_cell(): cell = tf.contrib.rnn.LSTMCell(num_units=rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) return cell #Stack num_layers cells decoder_cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(num_layers)]) #Dense layer to map to vocab_size output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) #Train decode with tf.variable_scope("decode"): train_decoder_output = decoding_layer_train(encoder_state, decoder_cell, decoder_embedding_input, 
target_sequence_length, max_target_sequence_length, output_layer, keep_prob) with tf.variable_scope("decode", reuse=True): inference_decoder_output = decoding_layer_infer(encoder_state, decoder_cell, decoder_embedding, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return train_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. 
###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param max_target_sentence_length: Maximum length of target sentences :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function #Encoding encode_output, encoder_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) #Process decoder input preprocessed_target_data = process_decoder_input(target_data, target_vocab_to_int, batch_size) #Decoding training_decoder_output, inference_decoder_output = decoding_layer(preprocessed_target_data, encoder_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch
size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 6 # Batch Size batch_size = 128 # RNN Size rnn_size = 128 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 56 decoding_embedding_size = 56 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.8 display_step = 20 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, 
dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
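It can help to see the padding step in isolation before training. The snippet below restates the `pad_sentence_batch` logic on toy ids (0 standing in for `<PAD>`); it is an illustration with invented data, not part of the project code:

```python
# Standalone restatement of pad_sentence_batch on toy ids (0 = <PAD>).
def pad_sentence_batch_sketch(sentence_batch, pad_int=0):
    # Pad every sentence up to the longest one in the batch.
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

padded = pad_sentence_batch_sketch([[5, 6], [7, 8, 9, 10], [11]])
print(padded)  # [[5, 6, 0, 0], [7, 8, 9, 10], [11, 0, 0, 0]]
```

Note that padding is per batch, not global: each batch is only padded to the length of its own longest sentence, which is why `get_batches` also yields the original lengths for the `sequence_length` parameters.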
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 20/1077 - Train Accuracy: 0.2703, Validation Accuracy: 0.3363, Loss: 3.7449 Epoch 0 Batch 40/1077 - Train Accuracy: 0.3270, Validation Accuracy: 0.3828, Loss: 3.2091 Epoch 0 Batch 60/1077 - Train Accuracy: 0.3888, Validation Accuracy: 0.4190, Loss: 2.8387 Epoch 0 Batch 80/1077 - Train Accuracy: 0.3832, Validation Accuracy: 0.4396, Loss: 2.7331 Epoch 0 Batch 100/1077 - Train Accuracy: 0.4031, Validation Accuracy: 0.4688, Loss: 2.6146 Epoch 0 Batch 120/1077 - Train Accuracy: 0.4133, Validation Accuracy: 0.4805, Loss: 2.4661 Epoch 0 Batch 140/1077 - Train Accuracy: 0.3993, Validation Accuracy: 0.4954, Loss: 2.4884 Epoch 0 Batch 160/1077 - Train Accuracy: 0.4367, Validation Accuracy: 0.4925, Loss: 2.2043 Epoch 0 Batch 180/1077 - Train Accuracy: 0.4227, Validation Accuracy: 0.4698, Loss: 2.1242 Epoch 0 Batch 200/1077 - Train Accuracy: 0.4496, Validation Accuracy: 0.5028, Loss: 2.0623 Epoch 0 Batch 220/1077 - Train Accuracy: 0.4112, Validation Accuracy: 0.4805, Loss: 1.9884 Epoch 0 Batch 240/1077 - Train Accuracy: 0.4473, Validation Accuracy: 0.4837, Loss: 1.8163 Epoch 0 Batch 260/1077 - Train Accuracy: 0.4587, Validation Accuracy: 0.4929, Loss: 1.6555 Epoch 0 Batch 280/1077 - Train Accuracy: 0.4199, Validation Accuracy: 0.4542, Loss: 1.6752 Epoch 0 Batch 300/1077 - Train Accuracy: 0.3824, Validation Accuracy: 0.4428, Loss: 1.6949 Epoch 0 Batch 320/1077 - Train Accuracy: 0.4371, Validation Accuracy: 0.4666, Loss: 1.5665 Epoch 0 Batch 340/1077 - Train Accuracy: 0.4248, Validation Accuracy: 0.4968, Loss: 1.5456 Epoch 0 Batch 360/1077 - Train Accuracy: 0.4371, Validation Accuracy: 0.4883, Loss: 1.4331 Epoch 0 Batch 380/1077 - Train Accuracy: 0.4320, Validation Accuracy: 0.4759, Loss: 1.3689 Epoch 0 Batch 400/1077 - Train 
Accuracy: 0.4457, Validation Accuracy: 0.4830, Loss: 1.3613 Epoch 0 Batch 420/1077 - Train Accuracy: 0.4590, Validation Accuracy: 0.5213, Loss: 1.3165 Epoch 0 Batch 440/1077 - Train Accuracy: 0.4750, Validation Accuracy: 0.5213, Loss: 1.2967 Epoch 0 Batch 460/1077 - Train Accuracy: 0.3797, Validation Accuracy: 0.4993, Loss: 1.2625 Epoch 0 Batch 480/1077 - Train Accuracy: 0.4856, Validation Accuracy: 0.5320, Loss: 1.2200 Epoch 0 Batch 500/1077 - Train Accuracy: 0.4945, Validation Accuracy: 0.5323, Loss: 1.1309 Epoch 0 Batch 520/1077 - Train Accuracy: 0.5205, Validation Accuracy: 0.5415, Loss: 1.0867 Epoch 0 Batch 540/1077 - Train Accuracy: 0.4617, Validation Accuracy: 0.5359, Loss: 1.0607 Epoch 0 Batch 560/1077 - Train Accuracy: 0.5047, Validation Accuracy: 0.5458, Loss: 1.0445 Epoch 0 Batch 580/1077 - Train Accuracy: 0.5234, Validation Accuracy: 0.5440, Loss: 0.9797 Epoch 0 Batch 600/1077 - Train Accuracy: 0.5253, Validation Accuracy: 0.5469, Loss: 0.9770 Epoch 0 Batch 620/1077 - Train Accuracy: 0.4918, Validation Accuracy: 0.5419, Loss: 1.0049 Epoch 0 Batch 640/1077 - Train Accuracy: 0.4870, Validation Accuracy: 0.5540, Loss: 0.9510 Epoch 0 Batch 660/1077 - Train Accuracy: 0.5156, Validation Accuracy: 0.5547, Loss: 0.9755 Epoch 0 Batch 680/1077 - Train Accuracy: 0.5290, Validation Accuracy: 0.5565, Loss: 0.9191 Epoch 0 Batch 700/1077 - Train Accuracy: 0.5016, Validation Accuracy: 0.5344, Loss: 0.8975 Epoch 0 Batch 720/1077 - Train Accuracy: 0.4794, Validation Accuracy: 0.5107, Loss: 0.9781 Epoch 0 Batch 740/1077 - Train Accuracy: 0.5414, Validation Accuracy: 0.5625, Loss: 0.8650 Epoch 0 Batch 760/1077 - Train Accuracy: 0.5309, Validation Accuracy: 0.5572, Loss: 0.8872 Epoch 0 Batch 780/1077 - Train Accuracy: 0.5098, Validation Accuracy: 0.5629, Loss: 0.9107 Epoch 0 Batch 800/1077 - Train Accuracy: 0.4945, Validation Accuracy: 0.5682, Loss: 0.8485 Epoch 0 Batch 820/1077 - Train Accuracy: 0.4988, Validation Accuracy: 0.5817, Loss: 0.8527 Epoch 0 Batch 840/1077 - 
Train Accuracy: 0.5375, Validation Accuracy: 0.5810, Loss: 0.8027 Epoch 0 Batch 860/1077 - Train Accuracy: 0.5298, Validation Accuracy: 0.5795, Loss: 0.7943 Epoch 0 Batch 880/1077 - Train Accuracy: 0.5582, Validation Accuracy: 0.5824, Loss: 0.7825 Epoch 0 Batch 900/1077 - Train Accuracy: 0.5547, Validation Accuracy: 0.5728, Loss: 0.8299 Epoch 0 Batch 920/1077 - Train Accuracy: 0.5270, Validation Accuracy: 0.5756, Loss: 0.8013 Epoch 0 Batch 940/1077 - Train Accuracy: 0.5250, Validation Accuracy: 0.5810, Loss: 0.7827 Epoch 0 Batch 960/1077 - Train Accuracy: 0.5584, Validation Accuracy: 0.5785, Loss: 0.7372 Epoch 0 Batch 980/1077 - Train Accuracy: 0.5398, Validation Accuracy: 0.5721, Loss: 0.7613 Epoch 0 Batch 1000/1077 - Train Accuracy: 0.6001, Validation Accuracy: 0.5920, Loss: 0.7048 Epoch 0 Batch 1020/1077 - Train Accuracy: 0.5348, Validation Accuracy: 0.5845, Loss: 0.7230 Epoch 0 Batch 1040/1077 - Train Accuracy: 0.5469, Validation Accuracy: 0.5955, Loss: 0.7735 Epoch 0 Batch 1060/1077 - Train Accuracy: 0.5516, Validation Accuracy: 0.5788, Loss: 0.7101 Epoch 1 Batch 20/1077 - Train Accuracy: 0.5734, Validation Accuracy: 0.6009, Loss: 0.6742 Epoch 1 Batch 40/1077 - Train Accuracy: 0.5863, Validation Accuracy: 0.6033, Loss: 0.7044 Epoch 1 Batch 60/1077 - Train Accuracy: 0.5867, Validation Accuracy: 0.6140, Loss: 0.6590 Epoch 1 Batch 80/1077 - Train Accuracy: 0.5918, Validation Accuracy: 0.6183, Loss: 0.6826 Epoch 1 Batch 100/1077 - Train Accuracy: 0.5965, Validation Accuracy: 0.6172, Loss: 0.6818 Epoch 1 Batch 120/1077 - Train Accuracy: 0.5895, Validation Accuracy: 0.6151, Loss: 0.6817 Epoch 1 Batch 140/1077 - Train Accuracy: 0.5761, Validation Accuracy: 0.6236, Loss: 0.6777 Epoch 1 Batch 160/1077 - Train Accuracy: 0.6223, Validation Accuracy: 0.6161, Loss: 0.6465 Epoch 1 Batch 180/1077 - Train Accuracy: 0.5910, Validation Accuracy: 0.6193, Loss: 0.6470 Epoch 1 Batch 200/1077 - Train Accuracy: 0.5805, Validation Accuracy: 0.6108, Loss: 0.6600 Epoch 1 Batch 220/1077 
- Train Accuracy: 0.5979, Validation Accuracy: 0.6218, Loss: 0.6380 Epoch 1 Batch 240/1077 - Train Accuracy: 0.6352, Validation Accuracy: 0.6200, Loss: 0.6220 Epoch 1 Batch 260/1077 - Train Accuracy: 0.6302, Validation Accuracy: 0.6214, Loss: 0.5862 Epoch 1 Batch 280/1077 - Train Accuracy: 0.6453, Validation Accuracy: 0.6243, Loss: 0.6340 Epoch 1 Batch 300/1077 - Train Accuracy: 0.6090, Validation Accuracy: 0.6300, Loss: 0.6333 Epoch 1 Batch 320/1077 - Train Accuracy: 0.6234, Validation Accuracy: 0.6197, Loss: 0.6161 Epoch 1 Batch 340/1077 - Train Accuracy: 0.5880, Validation Accuracy: 0.6168, Loss: 0.6200 Epoch 1 Batch 360/1077 - Train Accuracy: 0.6035, Validation Accuracy: 0.6222, Loss: 0.6040 Epoch 1 Batch 380/1077 - Train Accuracy: 0.6215, Validation Accuracy: 0.6218, Loss: 0.5864 Epoch 1 Batch 400/1077 - Train Accuracy: 0.6492, Validation Accuracy: 0.6321, Loss: 0.5936 Epoch 1 Batch 420/1077 - Train Accuracy: 0.6336, Validation Accuracy: 0.6257, Loss: 0.5689 Epoch 1 Batch 440/1077 - Train Accuracy: 0.6230, Validation Accuracy: 0.6346, Loss: 0.6052 Epoch 1 Batch 460/1077 - Train Accuracy: 0.6168, Validation Accuracy: 0.6321, Loss: 0.6001 Epoch 1 Batch 480/1077 - Train Accuracy: 0.6336, Validation Accuracy: 0.6428, Loss: 0.5783 Epoch 1 Batch 500/1077 - Train Accuracy: 0.6363, Validation Accuracy: 0.6513, Loss: 0.5673 Epoch 1 Batch 520/1077 - Train Accuracy: 0.6786, Validation Accuracy: 0.6523, Loss: 0.5337 Epoch 1 Batch 540/1077 - Train Accuracy: 0.6176, Validation Accuracy: 0.6410, Loss: 0.5390 Epoch 1 Batch 560/1077 - Train Accuracy: 0.6340, Validation Accuracy: 0.6651, Loss: 0.5753 Epoch 1 Batch 580/1077 - Train Accuracy: 0.6856, Validation Accuracy: 0.6527, Loss: 0.5096 Epoch 1 Batch 600/1077 - Train Accuracy: 0.6719, Validation Accuracy: 0.6523, Loss: 0.5176 Epoch 1 Batch 620/1077 - Train Accuracy: 0.6293, Validation Accuracy: 0.6616, Loss: 0.5408 Epoch 1 Batch 640/1077 - Train Accuracy: 0.6235, Validation Accuracy: 0.6673, Loss: 0.5419 Epoch 1 Batch 
660/1077 - Train Accuracy: 0.6312, Validation Accuracy: 0.6580, Loss: 0.5639 Epoch 1 Batch 680/1077 - Train Accuracy: 0.6440, Validation Accuracy: 0.6605, Loss: 0.5302 Epoch 1 Batch 700/1077 - Train Accuracy: 0.6125, Validation Accuracy: 0.6623, Loss: 0.5263 ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function words_ids = [] for word in sentence.lower().split(): if word not in vocab_to_int: words_ids.append(vocab_to_int['<UNK>']) else: words_ids.append(vocab_to_int[word]) return words_ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [185, 63, 118, 147, 152, 81, 124] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [205, 309, 160, 194, 245, 123, 199, 312, 1] French Words: il a pas le vieux camion jaune . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function # Convert words to ids source_id_text = [[source_vocab_to_int.get(word) for word in line.split()] for line in source_text.split('\n')] target_id_text = [[target_vocab_to_int.get(word) for word in line.split()] \ + [target_vocab_to_int['<EOS>']] for line in target_text.split('\n')] return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
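Before moving on, the `text_to_ids` conversion implemented above can be sanity-checked standalone with toy vocabularies (the two-word vocabularies below are made up purely for illustration; only the `<EOS>` token matches the real preprocessing):

```python
# Standalone sketch of the text_to_ids conversion, using made-up toy vocabularies
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    source_id_text = [[source_vocab_to_int[word] for word in line.split()]
                      for line in source_text.split('\n')]
    # Append <EOS> to every target sentence so the network can learn where sentences end
    target_id_text = [[target_vocab_to_int[word] for word in line.split()] + [target_vocab_to_int['<EOS>']]
                      for line in target_text.split('\n')]
    return source_id_text, target_id_text

source_vocab = {'hello': 0, 'world': 1}
target_vocab = {'bonjour': 0, 'monde': 1, '<EOS>': 2}
source_ids, target_ids = text_to_ids('hello world', 'bonjour monde', source_vocab, target_vocab)
print(source_ids)  # [[0, 1]]
print(target_ids)  # [[0, 1, 2]]  <- note the appended <EOS> id
```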
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.2.0 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function input_data = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') lr = tf.placeholder(tf.float32, name='learning_rate') keep_prob = tf.placeholder(tf.float32, name='keep_prob') target_sequence_length = tf.placeholder(tf.int32, [None], name='target_sequence_length') max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len') source_sequence_length = tf.placeholder(tf.int32, [None], name='source_sequence_length') return input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output ERROR:tensorflow:================================== Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>): <tf.Operation 'assert_rank_2/Assert/Assert' type=Assert> If you want to mark it as used call its "mark_used()" method.
It was originally created here: ['File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\runpy.py", line 193, in _run_module_as_main\n "__main__", mod_spec)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\runpy.py", line 85, in _run_code\n exec(code, run_globals)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\ipykernel\\__main__.py", line 3, in <module>\n app.launch_new_instance()', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\traitlets\\config\\application.py", line 658, in launch_instance\n app.start()', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\ipykernel\\kernelapp.py", line 474, in start\n ioloop.IOLoop.instance().start()', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\zmq\\eventloop\\ioloop.py", line 177, in start\n super(ZMQIOLoop, self).start()', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tornado\\ioloop.py", line 887, in start\n handler_func(fd_obj, events)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tornado\\stack_context.py", line 275, in null_wrapper\n return fn(*args, **kwargs)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\zmq\\eventloop\\zmqstream.py", line 440, in _handle_events\n self._handle_recv()', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\zmq\\eventloop\\zmqstream.py", line 472, in _handle_recv\n self._run_callback(callback, msg)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\zmq\\eventloop\\zmqstream.py", line 414, in _run_callback\n callback(*args, **kwargs)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tornado\\stack_context.py", line 275, in null_wrapper\n return fn(*args, **kwargs)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\ipykernel\\kernelbase.py", line 276, in dispatcher\n return self.dispatch_shell(stream, msg)', 'File 
"C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\ipykernel\\kernelbase.py", line 228, in dispatch_shell\n handler(stream, idents, msg)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\ipykernel\\kernelbase.py", line 390, in execute_request\n user_expressions, allow_stdin)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\ipykernel\\ipkernel.py", line 196, in do_execute\n res = shell.run_cell(code, store_history=store_history, silent=silent)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\ipykernel\\zmqshell.py", line 501, in run_cell\n return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\IPython\\core\\interactiveshell.py", line 2717, in run_cell\n interactivity=interactivity, compiler=compiler, result=result)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\IPython\\core\\interactiveshell.py", line 2827, in run_ast_nodes\n if self.run_code(code, result):', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\IPython\\core\\interactiveshell.py", line 2881, in run_code\n exec(code_obj, self.user_global_ns, self.user_ns)', 'File "<ipython-input-7-a116da22d3e7>", line 23, in <module>\n tests.test_model_inputs(model_inputs)', 'File "G:\\Udacity DLND\\udacity-dlnd-P4\\problem_unittests.py", line 106, in test_model_inputs\n assert tf.assert_rank(lr, 0, message=\'Learning Rate has wrong rank\')', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tensorflow\\python\\ops\\check_ops.py", line 617, in assert_rank\n dynamic_condition, data, summarize)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tensorflow\\python\\ops\\check_ops.py", line 571, in _assert_rank_condition\n return control_flow_ops.Assert(condition, data, summarize=summarize)', 'File 
"C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tensorflow\\python\\util\\tf_should_use.py", line 170, in wrapped\n return _add_should_use_warning(fn(*args, **kwargs))', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tensorflow\\python\\util\\tf_should_use.py", line 139, in _add_should_use_warning\n wrapped = TFShouldUseWarningWrapper(x)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tensorflow\\python\\util\\tf_should_use.py", line 96, in __init__\n stack = [s.strip() for s in traceback.format_stack()]'] ================================== ERROR:tensorflow:================================== Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>): <tf.Operation 'assert_rank_3/Assert/Assert' type=Assert> If you want to mark it as used call its "mark_used()" method. It was originally created here: ['File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\runpy.py", line 193, in _run_module_as_main\n "__main__", mod_spec)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\runpy.py", line 85, in _run_code\n exec(code, run_globals)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\ipykernel\\__main__.py", line 3, in <module>\n app.launch_new_instance()', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\traitlets\\config\\application.py", line 658, in launch_instance\n app.start()', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\ipykernel\\kernelapp.py", line 474, in start\n ioloop.IOLoop.instance().start()', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\zmq\\eventloop\\ioloop.py", line 177, in start\n super(ZMQIOLoop, self).start()', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tornado\\ioloop.py", line 887, in start\n handler_func(fd_obj, events)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tornado\\stack_context.py", line 275, in null_wrapper\n return 
fn(*args, **kwargs)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\zmq\\eventloop\\zmqstream.py", line 440, in _handle_events\n self._handle_recv()', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\zmq\\eventloop\\zmqstream.py", line 472, in _handle_recv\n self._run_callback(callback, msg)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\zmq\\eventloop\\zmqstream.py", line 414, in _run_callback\n callback(*args, **kwargs)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tornado\\stack_context.py", line 275, in null_wrapper\n return fn(*args, **kwargs)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\ipykernel\\kernelbase.py", line 276, in dispatcher\n return self.dispatch_shell(stream, msg)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\ipykernel\\kernelbase.py", line 228, in dispatch_shell\n handler(stream, idents, msg)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\ipykernel\\kernelbase.py", line 390, in execute_request\n user_expressions, allow_stdin)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\ipykernel\\ipkernel.py", line 196, in do_execute\n res = shell.run_cell(code, store_history=store_history, silent=silent)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\ipykernel\\zmqshell.py", line 501, in run_cell\n return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\IPython\\core\\interactiveshell.py", line 2717, in run_cell\n interactivity=interactivity, compiler=compiler, result=result)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\IPython\\core\\interactiveshell.py", line 2827, in run_ast_nodes\n if self.run_code(code, result):', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\IPython\\core\\interactiveshell.py", line 2881, in run_code\n 
exec(code_obj, self.user_global_ns, self.user_ns)', 'File "<ipython-input-7-a116da22d3e7>", line 23, in <module>\n tests.test_model_inputs(model_inputs)', 'File "G:\\Udacity DLND\\udacity-dlnd-P4\\problem_unittests.py", line 107, in test_model_inputs\n assert tf.assert_rank(keep_prob, 0, message=\'Keep Probability has wrong rank\')', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tensorflow\\python\\ops\\check_ops.py", line 617, in assert_rank\n dynamic_condition, data, summarize)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tensorflow\\python\\ops\\check_ops.py", line 571, in _assert_rank_condition\n return control_flow_ops.Assert(condition, data, summarize=summarize)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tensorflow\\python\\util\\tf_should_use.py", line 170, in wrapped\n return _add_should_use_warning(fn(*args, **kwargs))', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tensorflow\\python\\util\\tf_should_use.py", line 139, in _add_should_use_warning\n wrapped = TFShouldUseWarningWrapper(x)', 'File "C:\\Users\\agoil\\Anaconda3\\envs\\dlnd\\lib\\site-packages\\tensorflow\\python\\util\\tf_should_use.py", line 96, in __init__\n stack = [s.strip() for s in traceback.format_stack()]'] ================================== Tests Passed ###Markdown Process Decoder InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
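In NumPy terms the transformation looks like this (a toy batch with a made-up GO id of 1, not the notebook's real vocabulary):

```python
import numpy as np

# Toy batch of target id sequences; assume 1 is the <GO> id for this sketch
target_batch = np.array([[5, 6, 3],
                         [7, 8, 3]])
go_id = 1

ending = target_batch[:, :-1]    # drop the last id of each sequence (the tf.strided_slice step)
decoder_input = np.concatenate(  # prepend <GO> to every sequence (the tf.fill + tf.concat step)
    [np.full((target_batch.shape[0], 1), go_id), ending], axis=1)
print(decoder_input)  # [[1 5 6]
                      #  [1 7 8]]
```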
###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) decoder_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return decoder_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ # Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) # RNN cell def make_cell(rnn_size): lstm = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return drop enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32) return enc_output, enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # Adding dropout for training decoder dec_cell_drop = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob) # Helper for the training process. Used by BasicDecoder to read inputs. 
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) # Basic decoder training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell_drop, training_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length)[0] return training_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS ID :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size],
name='start_tokens') # Helper for the inference process. inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) # Basic decoder inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)[0] return inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
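The inference path above feeds each greedy prediction back in as the next input. A minimal pure-NumPy sketch of that loop (the one-hot "model" below is invented purely to show the mechanics, not the TF graph):

```python
import numpy as np

def greedy_decode(step_logits_fn, start_id, eos_id, max_len):
    """Repeatedly pick the argmax and feed it back in as the next input."""
    ids, current = [], start_id
    for _ in range(max_len):
        logits = step_logits_fn(current)
        current = int(np.argmax(logits))
        ids.append(current)
        if current == eos_id:  # stop once the end-of-sequence id is produced
            break
    return ids

# Invented one-step "model": each id deterministically predicts the next id; 3 acts as <EOS>
table = np.eye(4)[[1, 2, 3, 3]]
print(greedy_decode(lambda i: table[i], start_id=0, eos_id=3, max_len=10))  # [1, 2, 3]
```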
###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # Decoder with tf.variable_scope("decode") as decoding_scope: # 1. Decoder Embedding dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # 2. Construct the decoder cell def make_cell(rnn_size): lstm = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return drop dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) # 3. Dense layer to translate the decoder's output at each time # step into a choice from the target vocabulary output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) # 4. 
Set up a training decoder and an inference decoder # Training Decoder training_decoder_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) decoding_scope.reuse_variables() start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] #Inference Decoder inference_decoder_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. 
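One detail worth internalizing from `decoding_layer` above: `tf.nn.embedding_lookup` is just row indexing into the embedding matrix. A NumPy sketch with a made-up 3-word embedding table:

```python
import numpy as np

# Made-up embedding table: vocab_size=3, embedding_dim=2
embeddings = np.array([[0.0, 0.1],
                       [1.0, 1.1],
                       [2.0, 2.1]])
ids = np.array([[2, 0]])     # a batch containing one id sequence
embedded = embeddings[ids]   # NumPy analogue of tf.nn.embedding_lookup(embeddings, ids)
print(embedded.shape)        # (1, 2, 2): batch x time x embedding_dim
```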
###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # Pass the input data through the encoder.
We'll ignore the encoder output, but use the state _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) # Prepare the target sequences we'll feed to the decoder in training mode dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) # Pass encoder state and decoder inputs to the decoders training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 8 # Batch Size batch_size = 512 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 256 decoding_embedding_size = 256 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.8 display_step = 20 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
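The cost in the cell below is masked so that `<PAD>` positions don't contribute to the loss. A NumPy analogue of what `tf.sequence_mask` produces for two sequences of lengths 2 and 3 padded to length 4:

```python
import numpy as np

def sequence_mask(lengths, maxlen):
    """NumPy analogue of tf.sequence_mask: True wherever position index < sequence length."""
    return np.arange(maxlen)[None, :] < np.asarray(lengths)[:, None]

mask = sequence_mask([2, 3], maxlen=4).astype(np.float32)
print(mask)
# [[1. 1. 0. 0.]
#  [1. 1. 1. 0.]]
```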
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for 
sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
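In isolation, the `pad_sentence_batch` helper above behaves like this (pad id 0 is just an example value, not necessarily the real `<PAD>` id):

```python
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad every sentence in the batch up to the length of the longest one."""
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]

padded = pad_sentence_batch([[4, 5], [6, 7, 8]], pad_int=0)
print(padded)  # [[4, 5, 0], [6, 7, 8]]
```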
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 20/269 - Train Accuracy: 0.3665, Validation Accuracy: 0.4242, Loss: 2.9874 Epoch 0 Batch 40/269 - Train Accuracy: 0.4458, Validation Accuracy: 0.5007, Loss: 2.4608 Epoch 0 Batch 60/269 - Train Accuracy: 0.4885, Validation Accuracy: 0.5044, Loss: 1.8665 Epoch 0 Batch 80/269 - Train Accuracy: 0.4679, Validation Accuracy: 0.4883, Loss: 1.6654 Epoch 0 Batch 100/269 - Train Accuracy: 0.4580, Validation Accuracy: 0.4728, Loss: 1.4754 Epoch 0 Batch 120/269 - Train Accuracy: 0.4426, Validation Accuracy: 0.4937, Loss: 1.3982 Epoch 0 Batch 140/269 - Train Accuracy: 0.5120, Validation Accuracy: 0.5286, Loss: 1.2109 Epoch 0 Batch 160/269 - Train Accuracy: 0.5225, Validation Accuracy: 0.5408, Loss: 1.0743 Epoch 0 Batch 180/269 - Train Accuracy: 0.5559, Validation Accuracy: 0.5690, Loss: 0.9546 Epoch 0 Batch 200/269 - Train Accuracy: 0.5531, Validation Accuracy: 0.5827, Loss: 0.9118 Epoch 0 Batch 220/269 - Train Accuracy: 0.5983, Validation Accuracy: 0.5912, Loss: 0.7778 Epoch 0 Batch 240/269 - Train Accuracy: 0.6346, Validation Accuracy: 0.6184, Loss: 0.6993 Epoch 0 Batch 260/269 - Train Accuracy: 0.6008, Validation Accuracy: 0.6112, Loss: 0.7592 Epoch 1 Batch 20/269 - Train Accuracy: 0.6248, Validation Accuracy: 0.6337, Loss: 0.6843 Epoch 1 Batch 40/269 - Train Accuracy: 0.6158, Validation Accuracy: 0.6335, Loss: 0.6602 Epoch 1 Batch 60/269 - Train Accuracy: 0.6511, Validation Accuracy: 0.6500, Loss: 0.5700 Epoch 1 Batch 80/269 - Train Accuracy: 0.6696, Validation Accuracy: 0.6627, Loss: 0.5643 Epoch 1 Batch 100/269 - Train Accuracy: 0.6810, Validation Accuracy: 0.6655, Loss: 0.5395 Epoch 1 Batch 120/269 - Train Accuracy: 0.6575, Validation Accuracy: 0.6869, Loss: 0.5334 Epoch 1 Batch 140/269 - Train Accuracy: 0.6946, Validation 
Accuracy: 0.6942, Loss: 0.5151 Epoch 1 Batch 160/269 - Train Accuracy: 0.6887, Validation Accuracy: 0.7045, Loss: 0.4758 Epoch 1 Batch 180/269 - Train Accuracy: 0.7085, Validation Accuracy: 0.7085, Loss: 0.4477 Epoch 1 Batch 200/269 - Train Accuracy: 0.7246, Validation Accuracy: 0.7108, Loss: 0.4375 Epoch 1 Batch 220/269 - Train Accuracy: 0.7546, Validation Accuracy: 0.7320, Loss: 0.3759 Epoch 1 Batch 240/269 - Train Accuracy: 0.7810, Validation Accuracy: 0.7473, Loss: 0.3481 Epoch 1 Batch 260/269 - Train Accuracy: 0.7440, Validation Accuracy: 0.7638, Loss: 0.3703 Epoch 2 Batch 20/269 - Train Accuracy: 0.7906, Validation Accuracy: 0.7943, Loss: 0.3214 Epoch 2 Batch 40/269 - Train Accuracy: 0.8101, Validation Accuracy: 0.8067, Loss: 0.3168 Epoch 2 Batch 60/269 - Train Accuracy: 0.8372, Validation Accuracy: 0.8244, Loss: 0.2592 Epoch 2 Batch 80/269 - Train Accuracy: 0.8404, Validation Accuracy: 0.8295, Loss: 0.2552 Epoch 2 Batch 100/269 - Train Accuracy: 0.8647, Validation Accuracy: 0.8385, Loss: 0.2331 Epoch 2 Batch 120/269 - Train Accuracy: 0.8615, Validation Accuracy: 0.8584, Loss: 0.2225 Epoch 2 Batch 140/269 - Train Accuracy: 0.8753, Validation Accuracy: 0.8685, Loss: 0.2120 Epoch 2 Batch 160/269 - Train Accuracy: 0.8706, Validation Accuracy: 0.8828, Loss: 0.1922 Epoch 2 Batch 180/269 - Train Accuracy: 0.9042, Validation Accuracy: 0.8903, Loss: 0.1643 Epoch 2 Batch 200/269 - Train Accuracy: 0.8765, Validation Accuracy: 0.8832, Loss: 0.1664 Epoch 2 Batch 220/269 - Train Accuracy: 0.8977, Validation Accuracy: 0.8994, Loss: 0.1357 Epoch 2 Batch 240/269 - Train Accuracy: 0.9053, Validation Accuracy: 0.9009, Loss: 0.1267 Epoch 2 Batch 260/269 - Train Accuracy: 0.8971, Validation Accuracy: 0.9002, Loss: 0.1358 Epoch 3 Batch 20/269 - Train Accuracy: 0.9047, Validation Accuracy: 0.9047, Loss: 0.1151 Epoch 3 Batch 40/269 - Train Accuracy: 0.9156, Validation Accuracy: 0.9150, Loss: 0.1148 Epoch 3 Batch 60/269 - Train Accuracy: 0.9179, Validation Accuracy: 0.9110, Loss: 
0.0978 Epoch 3 Batch 80/269 - Train Accuracy: 0.9114, Validation Accuracy: 0.9195, Loss: 0.0992 Epoch 3 Batch 100/269 - Train Accuracy: 0.9275, Validation Accuracy: 0.9177, Loss: 0.0930 Epoch 3 Batch 120/269 - Train Accuracy: 0.9184, Validation Accuracy: 0.9202, Loss: 0.0916 Epoch 3 Batch 140/269 - Train Accuracy: 0.9164, Validation Accuracy: 0.9215, Loss: 0.0959 Epoch 3 Batch 160/269 - Train Accuracy: 0.9167, Validation Accuracy: 0.9292, Loss: 0.0871 Epoch 3 Batch 180/269 - Train Accuracy: 0.9325, Validation Accuracy: 0.9301, Loss: 0.0826 Epoch 3 Batch 200/269 - Train Accuracy: 0.9244, Validation Accuracy: 0.9254, Loss: 0.0789 Epoch 3 Batch 220/269 - Train Accuracy: 0.9355, Validation Accuracy: 0.9379, Loss: 0.0738 Epoch 3 Batch 240/269 - Train Accuracy: 0.9326, Validation Accuracy: 0.9275, Loss: 0.0672 Epoch 3 Batch 260/269 - Train Accuracy: 0.9235, Validation Accuracy: 0.9249, Loss: 0.0821 Epoch 4 Batch 20/269 - Train Accuracy: 0.9297, Validation Accuracy: 0.9354, Loss: 0.0659 Epoch 4 Batch 40/269 - Train Accuracy: 0.9240, Validation Accuracy: 0.9371, Loss: 0.0701 Epoch 4 Batch 60/269 - Train Accuracy: 0.9407, Validation Accuracy: 0.9273, Loss: 0.0634 Epoch 4 Batch 80/269 - Train Accuracy: 0.9342, Validation Accuracy: 0.9339, Loss: 0.0646 Epoch 4 Batch 100/269 - Train Accuracy: 0.9368, Validation Accuracy: 0.9364, Loss: 0.0591 Epoch 4 Batch 120/269 - Train Accuracy: 0.9362, Validation Accuracy: 0.9372, Loss: 0.0656 Epoch 4 Batch 140/269 - Train Accuracy: 0.9371, Validation Accuracy: 0.9308, Loss: 0.0635 Epoch 4 Batch 160/269 - Train Accuracy: 0.9330, Validation Accuracy: 0.9362, Loss: 0.0592 Epoch 4 Batch 180/269 - Train Accuracy: 0.9526, Validation Accuracy: 0.9474, Loss: 0.0544 Epoch 4 Batch 200/269 - Train Accuracy: 0.9479, Validation Accuracy: 0.9414, Loss: 0.0551 Epoch 4 Batch 220/269 - Train Accuracy: 0.9463, Validation Accuracy: 0.9440, Loss: 0.0514 Epoch 4 Batch 240/269 - Train Accuracy: 0.9356, Validation Accuracy: 0.9394, Loss: 0.0542 Epoch 4 Batch 
260/269 - Train Accuracy: 0.9406, Validation Accuracy: 0.9447, Loss: 0.0603 Epoch 5 Batch 20/269 - Train Accuracy: 0.9476, Validation Accuracy: 0.9469, Loss: 0.0489 Epoch 5 Batch 40/269 - Train Accuracy: 0.9402, Validation Accuracy: 0.9438, Loss: 0.0553 Epoch 5 Batch 60/269 - Train Accuracy: 0.9482, Validation Accuracy: 0.9460, Loss: 0.0453 Epoch 5 Batch 80/269 - Train Accuracy: 0.9510, Validation Accuracy: 0.9410, Loss: 0.0496 Epoch 5 Batch 100/269 - Train Accuracy: 0.9503, Validation Accuracy: 0.9419, Loss: 0.0472 Epoch 5 Batch 120/269 - Train Accuracy: 0.9475, Validation Accuracy: 0.9454, Loss: 0.0503 Epoch 5 Batch 140/269 - Train Accuracy: 0.9492, Validation Accuracy: 0.9466, Loss: 0.0488 Epoch 5 Batch 160/269 - Train Accuracy: 0.9479, Validation Accuracy: 0.9511, Loss: 0.0445 Epoch 5 Batch 180/269 - Train Accuracy: 0.9683, Validation Accuracy: 0.9530, Loss: 0.0429 Epoch 5 Batch 200/269 - Train Accuracy: 0.9559, Validation Accuracy: 0.9460, Loss: 0.0485 Epoch 5 Batch 220/269 - Train Accuracy: 0.9554, Validation Accuracy: 0.9461, Loss: 0.0415 Epoch 5 Batch 240/269 - Train Accuracy: 0.9511, Validation Accuracy: 0.9408, Loss: 0.0389 Epoch 5 Batch 260/269 - Train Accuracy: 0.9562, Validation Accuracy: 0.9457, Loss: 0.0487 Epoch 6 Batch 20/269 - Train Accuracy: 0.9581, Validation Accuracy: 0.9502, Loss: 0.0417 Epoch 6 Batch 40/269 - Train Accuracy: 0.9538, Validation Accuracy: 0.9523, Loss: 0.0435 Epoch 6 Batch 60/269 - Train Accuracy: 0.9546, Validation Accuracy: 0.9542, Loss: 0.0379 Epoch 6 Batch 80/269 - Train Accuracy: 0.9537, Validation Accuracy: 0.9535, Loss: 0.0369 Epoch 6 Batch 100/269 - Train Accuracy: 0.9578, Validation Accuracy: 0.9562, Loss: 0.0373 Epoch 6 Batch 120/269 - Train Accuracy: 0.9508, Validation Accuracy: 0.9581, Loss: 0.0385 Epoch 6 Batch 140/269 - Train Accuracy: 0.9532, Validation Accuracy: 0.9524, Loss: 0.0394 Epoch 6 Batch 160/269 - Train Accuracy: 0.9511, Validation Accuracy: 0.9591, Loss: 0.0386 Epoch 6 Batch 180/269 - Train Accuracy: 
0.9681, Validation Accuracy: 0.9595, Loss: 0.0339 Epoch 6 Batch 200/269 - Train Accuracy: 0.9622, Validation Accuracy: 0.9577, Loss: 0.0371 Epoch 6 Batch 220/269 - Train Accuracy: 0.9620, Validation Accuracy: 0.9530, Loss: 0.0357 Epoch 6 Batch 240/269 - Train Accuracy: 0.9562, Validation Accuracy: 0.9576, Loss: 0.0330 Epoch 6 Batch 260/269 - Train Accuracy: 0.9620, Validation Accuracy: 0.9630, Loss: 0.0368 Epoch 7 Batch 20/269 - Train Accuracy: 0.9641, Validation Accuracy: 0.9613, Loss: 0.0319 Epoch 7 Batch 40/269 - Train Accuracy: 0.9674, Validation Accuracy: 0.9625, Loss: 0.0330 Epoch 7 Batch 60/269 - Train Accuracy: 0.9653, Validation Accuracy: 0.9539, Loss: 0.0297 Epoch 7 Batch 80/269 - Train Accuracy: 0.9621, Validation Accuracy: 0.9578, Loss: 0.0337 Epoch 7 Batch 100/269 - Train Accuracy: 0.9668, Validation Accuracy: 0.9595, Loss: 0.0319 Epoch 7 Batch 120/269 - Train Accuracy: 0.9680, Validation Accuracy: 0.9625, Loss: 0.0290 Epoch 7 Batch 140/269 - Train Accuracy: 0.9593, Validation Accuracy: 0.9586, Loss: 0.0341 Epoch 7 Batch 160/269 - Train Accuracy: 0.9577, Validation Accuracy: 0.9632, Loss: 0.0292 Epoch 7 Batch 180/269 - Train Accuracy: 0.9772, Validation Accuracy: 0.9684, Loss: 0.0296 Epoch 7 Batch 200/269 - Train Accuracy: 0.9687, Validation Accuracy: 0.9674, Loss: 0.0310 Epoch 7 Batch 220/269 - Train Accuracy: 0.9677, Validation Accuracy: 0.9622, Loss: 0.0279 Epoch 7 Batch 240/269 - Train Accuracy: 0.9603, Validation Accuracy: 0.9586, Loss: 0.0290 Epoch 7 Batch 260/269 - Train Accuracy: 0.9717, Validation Accuracy: 0.9710, Loss: 0.0310 Model Trained and Saved ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. 
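The `get_accuracy()` helper used in the training loop above zero-pads the shorter of the two id matrices along the time axis so the element-wise comparison is valid. A standalone NumPy sketch of that logic, with made-up id sequences:

```python
import numpy as np

def get_accuracy(target, logits):
    """Pad the shorter matrix along axis 1 with zeros, then compare element-wise."""
    max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(target, [(0, 0), (0, max_seq - target.shape[1])], 'constant')
    if max_seq - logits.shape[1]:
        logits = np.pad(logits, [(0, 0), (0, max_seq - logits.shape[1])], 'constant')
    return np.mean(np.equal(target, logits))

target = np.array([[1, 2, 3]])   # toy target ids (hypothetical)
preds = np.array([[1, 2]])       # shorter prediction; padded to [[1, 2, 0]]
print(get_accuracy(target, preds))  # 2 of 3 positions match
```

Note that padding with zeros means trailing `<PAD>` positions (id 0) count as matches, which slightly inflates the reported accuracy.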
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function sentence = sentence.lower() return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.split()] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .' 
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [157, 119, 76, 92, 115, 221, 109] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [78, 293, 79, 357, 220, 192, 196, 227, 1] French Words: il a vu un camion jaune rouillé . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
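The `sentence_to_seq()` preprocessing used for translation above boils down to a lowercase split plus a dictionary lookup with an `<UNK>` fallback. A self-contained sketch with a toy vocabulary (the ids are made up for illustration):

```python
def sentence_to_seq(sentence, vocab_to_int):
    # Lowercase, split on whitespace, map out-of-vocabulary words to the <UNK> id
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split()]

toy_vocab = {'<UNK>': 2, 'he': 10, 'saw': 11, 'a': 12, 'truck': 13, '.': 14}
print(sentence_to_seq('He saw a RUSTY truck .', toy_vocab))  # [10, 11, 12, 2, 13, 14]
```

'rusty' is not in the toy vocabulary, so it maps to the `<UNK>` id (2) rather than raising a `KeyError`.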
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. 
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function source_text_ids = [[source_vocab_to_int.get(word, source_vocab_to_int['<UNK>']) for word in line.split()] for line in source_text.split('\n')] target_text_ids = [[target_vocab_to_int.get(word, target_vocab_to_int['<UNK>']) for word in line.split()] + [target_vocab_to_int['<EOS>']] for line in target_text.split('\n')] return source_text_ids, target_text_ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
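The `text_to_ids()` preprocessing above maps each line of text to a list of ids and appends `<EOS>` on the target side only. A toy-vocabulary sketch of the same behavior (all ids and words are illustrative):

```python
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    source_ids = [[source_vocab_to_int[w] for w in line.split()]
                  for line in source_text.split('\n')]
    # Only the target gets <EOS> appended, so the decoder learns where to stop
    target_ids = [[target_vocab_to_int[w] for w in line.split()] + [target_vocab_to_int['<EOS>']]
                  for line in target_text.split('\n')]
    return source_ids, target_ids

src_vocab = {'hello': 4, 'world': 5}
tgt_vocab = {'<EOS>': 1, 'bonjour': 6, 'monde': 7}
src, tgt = text_to_ids('hello world', 'bonjour monde', src_vocab, tgt_vocab)
print(src, tgt)  # [[4, 5]] [[6, 7, 1]]
```

This sketch omits the `<UNK>` fallback used in the full solution; it assumes every word is in the vocabulary.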
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.1.0 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1.Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function inputs = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') lr = tf.placeholder(tf.float32, name='learning_rate') keep_prob = tf.placeholder(tf.float32, name='keep_prob') target_sequence_len = tf.placeholder(tf.int32, (None,), name='target_sequence_length') max_target_sequence_length = tf.reduce_max(target_sequence_len, name='max_target_len') source_seq_len = tf.placeholder(tf.int32, (None,), name='source_sequence_length') return inputs, targets, lr, keep_prob, target_sequence_len, max_target_sequence_length, source_seq_len """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoder Input Implement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch. 
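In array terms, the `tf.strided_slice` call in the solution drops the last id of each row and `tf.concat` prepends the `<GO>` id. A NumPy sketch of the same transformation (the token ids are hypothetical):

```python
import numpy as np

GO_ID = 3  # assumed <GO> id, for illustration only
target_batch = np.array([[11, 12, 13],
                         [14, 15, 16]])

ending = target_batch[:, :-1]                             # strided_slice: drop last column
go_column = np.full((target_batch.shape[0], 1), GO_ID)    # one <GO> per row
dec_input = np.concatenate([go_column, ending], axis=1)   # concat <GO> in front
print(dec_input)  # [[ 3 11 12]  [ 3 14 15]]
```

The result is the "teacher forcing" input: each decoder step sees the previous *ground-truth* token, starting from `<GO>`, while the last token is dropped because nothing is predicted after it.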
###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placehoder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create a Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.mdstacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ # TODO: 
Implement Function enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) def make_cell(rnn_size): enc_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) dropout = tf.contrib.rnn.DropoutWrapper(enc_cell, keep_prob) return dropout cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) enc_output, enc_state = tf.nn.dynamic_rnn(cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32) return enc_output, enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) training_decoder = 
tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length)[0] return training_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS ID :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens') inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, 
inference_helper, encoder_state, output_layer) inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)[0] return inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use the your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. 
###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) def make_cell(rnn_size): dec_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return dec_cell dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) output_layer = Dense(target_vocab_size, kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1)) with tf.variable_scope("decode"): training_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) with tf.variable_scope("decode", reuse = True): start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] inference_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return training_output, 
inference_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) dec_input = 
process_decoder_input(target_data, target_vocab_to_int, batch_size) training_output, inference_output = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_output, inference_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 5 # Batch Size batch_size = 523 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 128 decoding_embedding_size = 128 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.7 display_step = 10 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
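One detail in the graph worth understanding is the loss mask: `tf.sequence_mask` zeroes out the padded positions so they don't contribute to the cost. A plain-Python sketch of what such a mask looks like (toy lengths, not the real TF op):

```python
def sequence_mask(lengths, maxlen):
    """Build a 0/1 mask: row i has lengths[i] ones followed by padding zeros."""
    return [[1.0 if t < n else 0.0 for t in range(maxlen)] for n in lengths]

# Two target sequences of lengths 3 and 5, padded to maxlen 5.
mask = sequence_mask([3, 5], maxlen=5)
# mask == [[1.0, 1.0, 1.0, 0.0, 0.0],
#          [1.0, 1.0, 1.0, 1.0, 1.0]]
```

Only the "real" time steps count toward `sequence_loss`; without the mask, the network would be rewarded for predicting `<PAD>` tokens.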
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for 
sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
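Before training, it can help to sanity-check the padding behavior used by `get_batches` above. A standalone re-implementation (not the notebook's own function) on a toy batch:

```python
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad every sentence to the length of the longest one in the batch."""
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

batch = [[5, 6, 7], [8, 9]]
padded = pad_sentence_batch(batch, pad_int=0)
# padded == [[5, 6, 7], [8, 9, 0]]
```

Every row in a batch ends up the same length, which is what lets `np.array` stack them into a rectangular tensor.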
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 10/263 - Train Accuracy: 0.3087, Validation Accuracy: 0.3733, Loss: 3.5424 Epoch 0 Batch 20/263 - Train Accuracy: 0.3641, Validation Accuracy: 0.4200, Loss: 2.8818 Epoch 0 Batch 30/263 - Train Accuracy: 0.4162, Validation Accuracy: 0.4452, Loss: 2.5184 Epoch 0 Batch 40/263 - Train Accuracy: 0.4484, Validation Accuracy: 0.4677, Loss: 2.3539 Epoch 0 Batch 50/263 - Train Accuracy: 0.4206, Validation Accuracy: 0.4569, Loss: 2.2528 Epoch 0 Batch 60/263 - Train Accuracy: 0.4791, Validation Accuracy: 0.4860, Loss: 1.9783 Epoch 0 Batch 70/263 - Train Accuracy: 0.4665, Validation Accuracy: 0.4866, Loss: 1.9129 Epoch 0 Batch 80/263 - Train Accuracy: 0.4726, Validation Accuracy: 0.4811, Loss: 1.7465 Epoch 0 Batch 90/263 - Train Accuracy: 0.4804, Validation Accuracy: 0.5056, Loss: 1.7166 Epoch 0 Batch 100/263 - Train Accuracy: 0.5018, Validation Accuracy: 0.5205, Loss: 1.6105 Epoch 0 Batch 110/263 - Train Accuracy: 0.5175, Validation Accuracy: 0.5153, Loss: 1.4580 Epoch 0 Batch 120/263 - Train Accuracy: 0.4755, Validation Accuracy: 0.5092, Loss: 1.4676 Epoch 0 Batch 130/263 - Train Accuracy: 0.4694, Validation Accuracy: 0.5040, Loss: 1.3875 Epoch 0 Batch 140/263 - Train Accuracy: 0.4928, Validation Accuracy: 0.5193, Loss: 1.3324 Epoch 0 Batch 150/263 - Train Accuracy: 0.5116, Validation Accuracy: 0.5302, Loss: 1.2949 Epoch 0 Batch 160/263 - Train Accuracy: 0.5167, Validation Accuracy: 0.5393, Loss: 1.2199 Epoch 0 Batch 170/263 - Train Accuracy: 0.5078, Validation Accuracy: 0.5337, Loss: 1.1668 Epoch 0 Batch 180/263 - Train Accuracy: 0.4798, Validation Accuracy: 0.5199, Loss: 1.1760 Epoch 0 Batch 190/263 - Train Accuracy: 0.5275, Validation Accuracy: 0.5375, Loss: 1.0851 Epoch 0 Batch 200/263 - Train Accuracy: 0.4802, Validation 
Accuracy: 0.5329, Loss: 1.0798 Epoch 0 Batch 210/263 - Train Accuracy: 0.5311, Validation Accuracy: 0.5501, Loss: 1.0022 Epoch 0 Batch 220/263 - Train Accuracy: 0.5790, Validation Accuracy: 0.5709, Loss: 0.9222 Epoch 0 Batch 230/263 - Train Accuracy: 0.5805, Validation Accuracy: 0.5750, Loss: 0.8872 Epoch 0 Batch 240/263 - Train Accuracy: 0.5615, Validation Accuracy: 0.5786, Loss: 0.8985 Epoch 0 Batch 250/263 - Train Accuracy: 0.5902, Validation Accuracy: 0.5840, Loss: 0.8345 Epoch 0 Batch 260/263 - Train Accuracy: 0.5722, Validation Accuracy: 0.5908, Loss: 0.8186 Epoch 1 Batch 10/263 - Train Accuracy: 0.5707, Validation Accuracy: 0.5935, Loss: 0.7995 Epoch 1 Batch 20/263 - Train Accuracy: 0.5842, Validation Accuracy: 0.5932, Loss: 0.7818 Epoch 1 Batch 30/263 - Train Accuracy: 0.5957, Validation Accuracy: 0.5933, Loss: 0.7138 Epoch 1 Batch 40/263 - Train Accuracy: 0.6023, Validation Accuracy: 0.6049, Loss: 0.7175 Epoch 1 Batch 50/263 - Train Accuracy: 0.5882, Validation Accuracy: 0.6052, Loss: 0.6935 Epoch 1 Batch 60/263 - Train Accuracy: 0.6238, Validation Accuracy: 0.6100, Loss: 0.6276 Epoch 1 Batch 70/263 - Train Accuracy: 0.6179, Validation Accuracy: 0.6215, Loss: 0.6675 Epoch 1 Batch 80/263 - Train Accuracy: 0.6349, Validation Accuracy: 0.6311, Loss: 0.6128 Epoch 1 Batch 90/263 - Train Accuracy: 0.6280, Validation Accuracy: 0.6154, Loss: 0.6124 Epoch 1 Batch 100/263 - Train Accuracy: 0.6202, Validation Accuracy: 0.6289, Loss: 0.6033 Epoch 1 Batch 110/263 - Train Accuracy: 0.6444, Validation Accuracy: 0.6260, Loss: 0.5760 Epoch 1 Batch 120/263 - Train Accuracy: 0.6417, Validation Accuracy: 0.6282, Loss: 0.5659 Epoch 1 Batch 130/263 - Train Accuracy: 0.6572, Validation Accuracy: 0.6311, Loss: 0.5537 Epoch 1 Batch 140/263 - Train Accuracy: 0.6313, Validation Accuracy: 0.6351, Loss: 0.5435 Epoch 1 Batch 150/263 - Train Accuracy: 0.6501, Validation Accuracy: 0.6495, Loss: 0.5285 Epoch 1 Batch 160/263 - Train Accuracy: 0.6782, Validation Accuracy: 0.6504, Loss: 
0.5167 Epoch 1 Batch 170/263 - Train Accuracy: 0.6678, Validation Accuracy: 0.6515, Loss: 0.5028 Epoch 1 Batch 180/263 - Train Accuracy: 0.6248, Validation Accuracy: 0.6427, Loss: 0.5356 Epoch 1 Batch 190/263 - Train Accuracy: 0.6547, Validation Accuracy: 0.6547, Loss: 0.4991 Epoch 1 Batch 200/263 - Train Accuracy: 0.6431, Validation Accuracy: 0.6658, Loss: 0.5124 Epoch 1 Batch 210/263 - Train Accuracy: 0.6688, Validation Accuracy: 0.6776, Loss: 0.4870 Epoch 1 Batch 220/263 - Train Accuracy: 0.6931, Validation Accuracy: 0.6746, Loss: 0.4533 Epoch 1 Batch 230/263 - Train Accuracy: 0.6947, Validation Accuracy: 0.6964, Loss: 0.4457 Epoch 1 Batch 240/263 - Train Accuracy: 0.6759, Validation Accuracy: 0.6850, Loss: 0.4657 Epoch 1 Batch 250/263 - Train Accuracy: 0.6954, Validation Accuracy: 0.6993, Loss: 0.4427 Epoch 1 Batch 260/263 - Train Accuracy: 0.7118, Validation Accuracy: 0.7274, Loss: 0.4298 Epoch 2 Batch 10/263 - Train Accuracy: 0.7325, Validation Accuracy: 0.7262, Loss: 0.4184 Epoch 2 Batch 20/263 - Train Accuracy: 0.7282, Validation Accuracy: 0.7387, Loss: 0.4103 Epoch 2 Batch 30/263 - Train Accuracy: 0.7561, Validation Accuracy: 0.7440, Loss: 0.3851 Epoch 2 Batch 40/263 - Train Accuracy: 0.7582, Validation Accuracy: 0.7531, Loss: 0.3744 Epoch 2 Batch 50/263 - Train Accuracy: 0.7491, Validation Accuracy: 0.7529, Loss: 0.3584 Epoch 2 Batch 60/263 - Train Accuracy: 0.7816, Validation Accuracy: 0.7521, Loss: 0.3305 Epoch 2 Batch 70/263 - Train Accuracy: 0.7749, Validation Accuracy: 0.7812, Loss: 0.3594 Epoch 2 Batch 80/263 - Train Accuracy: 0.7836, Validation Accuracy: 0.7706, Loss: 0.3214 Epoch 2 Batch 90/263 - Train Accuracy: 0.8088, Validation Accuracy: 0.7929, Loss: 0.3110 Epoch 2 Batch 100/263 - Train Accuracy: 0.7820, Validation Accuracy: 0.7882, Loss: 0.3061 Epoch 2 Batch 110/263 - Train Accuracy: 0.8033, Validation Accuracy: 0.7892, Loss: 0.2974 Epoch 2 Batch 120/263 - Train Accuracy: 0.8159, Validation Accuracy: 0.7955, Loss: 0.2874 Epoch 2 Batch 130/263 
- Train Accuracy: 0.8161, Validation Accuracy: 0.8003, Loss: 0.2769 Epoch 2 Batch 140/263 - Train Accuracy: 0.8225, Validation Accuracy: 0.8123, Loss: 0.2718 Epoch 2 Batch 150/263 - Train Accuracy: 0.8267, Validation Accuracy: 0.8233, Loss: 0.2642 Epoch 2 Batch 160/263 - Train Accuracy: 0.8424, Validation Accuracy: 0.8354, Loss: 0.2559 Epoch 2 Batch 170/263 - Train Accuracy: 0.8440, Validation Accuracy: 0.8204, Loss: 0.2364 Epoch 2 Batch 180/263 - Train Accuracy: 0.8233, Validation Accuracy: 0.8284, Loss: 0.2525 Epoch 2 Batch 190/263 - Train Accuracy: 0.8299, Validation Accuracy: 0.8345, Loss: 0.2346 Epoch 2 Batch 200/263 - Train Accuracy: 0.8424, Validation Accuracy: 0.8436, Loss: 0.2412 Epoch 2 Batch 210/263 - Train Accuracy: 0.8587, Validation Accuracy: 0.8489, Loss: 0.2220 Epoch 2 Batch 220/263 - Train Accuracy: 0.8531, Validation Accuracy: 0.8516, Loss: 0.2080 Epoch 2 Batch 230/263 - Train Accuracy: 0.8642, Validation Accuracy: 0.8595, Loss: 0.2008 Epoch 2 Batch 240/263 - Train Accuracy: 0.8567, Validation Accuracy: 0.8641, Loss: 0.2077 Epoch 2 Batch 250/263 - Train Accuracy: 0.8619, Validation Accuracy: 0.8669, Loss: 0.1936 Epoch 2 Batch 260/263 - Train Accuracy: 0.8667, Validation Accuracy: 0.8682, Loss: 0.1961 Epoch 3 Batch 10/263 - Train Accuracy: 0.8834, Validation Accuracy: 0.8757, Loss: 0.1846 Epoch 3 Batch 20/263 - Train Accuracy: 0.8692, Validation Accuracy: 0.8799, Loss: 0.1843 Epoch 3 Batch 30/263 - Train Accuracy: 0.8920, Validation Accuracy: 0.8889, Loss: 0.1649 Epoch 3 Batch 40/263 - Train Accuracy: 0.8818, Validation Accuracy: 0.8831, Loss: 0.1618 Epoch 3 Batch 50/263 - Train Accuracy: 0.8776, Validation Accuracy: 0.8778, Loss: 0.1601 Epoch 3 Batch 60/263 - Train Accuracy: 0.8960, Validation Accuracy: 0.8794, Loss: 0.1417 Epoch 3 Batch 70/263 - Train Accuracy: 0.8857, Validation Accuracy: 0.8934, Loss: 0.1603 Epoch 3 Batch 80/263 - Train Accuracy: 0.9029, Validation Accuracy: 0.8898, Loss: 0.1367 Epoch 3 Batch 90/263 - Train Accuracy: 0.9067, 
Validation Accuracy: 0.8889, Loss: 0.1310 Epoch 3 Batch 100/263 - Train Accuracy: 0.8835, Validation Accuracy: 0.8910, Loss: 0.1306 Epoch 3 Batch 110/263 - Train Accuracy: 0.8908, Validation Accuracy: 0.8921, Loss: 0.1358 Epoch 3 Batch 120/263 - Train Accuracy: 0.8994, Validation Accuracy: 0.9001, Loss: 0.1177 Epoch 3 Batch 130/263 - Train Accuracy: 0.9059, Validation Accuracy: 0.9007, Loss: 0.1195 Epoch 3 Batch 140/263 - Train Accuracy: 0.9169, Validation Accuracy: 0.8945, Loss: 0.1123 Epoch 3 Batch 150/263 - Train Accuracy: 0.8986, Validation Accuracy: 0.9087, Loss: 0.1135 Epoch 3 Batch 160/263 - Train Accuracy: 0.9110, Validation Accuracy: 0.9064, Loss: 0.1079 Epoch 3 Batch 170/263 - Train Accuracy: 0.9194, Validation Accuracy: 0.9022, Loss: 0.0988 Epoch 3 Batch 180/263 - Train Accuracy: 0.9017, Validation Accuracy: 0.9094, Loss: 0.1103 Epoch 3 Batch 190/263 - Train Accuracy: 0.8905, Validation Accuracy: 0.9052, Loss: 0.1038 Epoch 3 Batch 200/263 - Train Accuracy: 0.9144, Validation Accuracy: 0.9072, Loss: 0.1074 Epoch 3 Batch 210/263 - Train Accuracy: 0.9113, Validation Accuracy: 0.9131, Loss: 0.0987 Epoch 3 Batch 220/263 - Train Accuracy: 0.9100, Validation Accuracy: 0.9116, Loss: 0.0915 Epoch 3 Batch 230/263 - Train Accuracy: 0.9235, Validation Accuracy: 0.9036, Loss: 0.0870 Epoch 3 Batch 240/263 - Train Accuracy: 0.9050, Validation Accuracy: 0.9154, Loss: 0.0934 Epoch 3 Batch 250/263 - Train Accuracy: 0.9148, Validation Accuracy: 0.9204, Loss: 0.0870 Epoch 3 Batch 260/263 - Train Accuracy: 0.9337, Validation Accuracy: 0.9185, Loss: 0.0867 Epoch 4 Batch 10/263 - Train Accuracy: 0.9164, Validation Accuracy: 0.9189, Loss: 0.0853 Epoch 4 Batch 20/263 - Train Accuracy: 0.9078, Validation Accuracy: 0.9171, Loss: 0.0908 Epoch 4 Batch 30/263 - Train Accuracy: 0.9140, Validation Accuracy: 0.9107, Loss: 0.0851 Epoch 4 Batch 40/263 - Train Accuracy: 0.9160, Validation Accuracy: 0.9159, Loss: 0.0831 Epoch 4 Batch 50/263 - Train Accuracy: 0.9246, Validation Accuracy: 
0.9249, Loss: 0.0778 Epoch 4 Batch 60/263 - Train Accuracy: 0.9364, Validation Accuracy: 0.9198, Loss: 0.0726 Epoch 4 Batch 70/263 - Train Accuracy: 0.9174, Validation Accuracy: 0.9232, Loss: 0.0876 Epoch 4 Batch 80/263 - Train Accuracy: 0.9290, Validation Accuracy: 0.9231, Loss: 0.0680 Epoch 4 Batch 90/263 - Train Accuracy: 0.9316, Validation Accuracy: 0.9251, Loss: 0.0674 Epoch 4 Batch 100/263 - Train Accuracy: 0.9246, Validation Accuracy: 0.9307, Loss: 0.0679 Epoch 4 Batch 110/263 - Train Accuracy: 0.9248, Validation Accuracy: 0.9198, Loss: 0.0733 Epoch 4 Batch 120/263 - Train Accuracy: 0.9233, Validation Accuracy: 0.9274, Loss: 0.0598 Epoch 4 Batch 130/263 - Train Accuracy: 0.9265, Validation Accuracy: 0.9234, Loss: 0.0623 Epoch 4 Batch 140/263 - Train Accuracy: 0.9372, Validation Accuracy: 0.9275, Loss: 0.0588 Epoch 4 Batch 150/263 - Train Accuracy: 0.9272, Validation Accuracy: 0.9378, Loss: 0.0616 Epoch 4 Batch 160/263 - Train Accuracy: 0.9395, Validation Accuracy: 0.9340, Loss: 0.0636 Epoch 4 Batch 170/263 - Train Accuracy: 0.9430, Validation Accuracy: 0.9405, Loss: 0.0543 Epoch 4 Batch 180/263 - Train Accuracy: 0.9207, Validation Accuracy: 0.9332, Loss: 0.0644 Epoch 4 Batch 190/263 - Train Accuracy: 0.9161, Validation Accuracy: 0.9325, Loss: 0.0628 Epoch 4 Batch 200/263 - Train Accuracy: 0.9373, Validation Accuracy: 0.9347, Loss: 0.0654 Epoch 4 Batch 210/263 - Train Accuracy: 0.9344, Validation Accuracy: 0.9382, Loss: 0.0594 Epoch 4 Batch 220/263 - Train Accuracy: 0.9218, Validation Accuracy: 0.9362, Loss: 0.0554 Epoch 4 Batch 230/263 - Train Accuracy: 0.9575, Validation Accuracy: 0.9379, Loss: 0.0507 Epoch 4 Batch 240/263 - Train Accuracy: 0.9213, Validation Accuracy: 0.9411, Loss: 0.0558 Epoch 4 Batch 250/263 - Train Accuracy: 0.9299, Validation Accuracy: 0.9374, Loss: 0.0560 Epoch 4 Batch 260/263 - Train Accuracy: 0.9360, Validation Accuracy: 0.9426, Loss: 0.0552 Model Trained and Saved ###Markdown Save ParametersSave the `batch_size` and `save_path` 
parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function seq = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()] return seq """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [127, 31, 175, 187, 47, 76, 209] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [247, 274, 112, 246, 72, 1] French Words: il vu un jaune . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function source_id_text = [] for sentence in source_text.split('\n'): source_id_sentence = [] words = sentence.split(' ') if '' in words: words.remove('') for word in words: source_id_sentence.append(source_vocab_to_int[word]) source_id_text.append(source_id_sentence) target_id_text = [] for sentence in target_text.split('\n'): target_id_sentence = [] words = sentence.split(' ') if '' in words: words.remove('') for word in words: target_id_sentence.append(target_vocab_to_int[word]) target_id_sentence.append(target_vocab_to_int['<EOS>']) target_id_text.append(target_id_sentence) return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
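As a quick illustration of what the `text_to_ids` step above produced, here is a toy example with made-up vocabularies (the real id mappings come from `helper.load_preprocess()`):

```python
# Hypothetical mini-vocabularies; the real ids are dataset-specific.
source_vocab = {'new': 4, 'jersey': 5, 'is': 6, 'quiet': 7}
target_vocab = {'<EOS>': 1, 'new': 8, 'jersey': 9, 'est': 10, 'calme': 11}

source_ids = [source_vocab[w] for w in 'new jersey is quiet'.split()]
# Target sentences additionally get <EOS> appended so the decoder learns to stop.
target_ids = [target_vocab[w] for w in 'new jersey est calme'.split()] + [target_vocab['<EOS>']]
# source_ids == [4, 5, 6, 7]; target_ids == [8, 9, 10, 11, 1]
```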
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.1.0 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function input_data = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None]) lr = tf.placeholder(tf.float32, []) keep_prob = tf.placeholder(tf.float32, [], name='keep_prob') target_sequence_length = tf.placeholder(tf.int32, [None], name='target_sequence_length') max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len') source_sequence_length = tf.placeholder(tf.int32, [None], name='source_sequence_length') return input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoder InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
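Conceptually, the slice-and-concat this function performs looks like the following plain-Python sketch (with a made-up GO id of 2; the real one is `target_vocab_to_int['<GO>']`):

```python
GO = 2  # hypothetical id used only for this illustration

def process_decoder_input_py(target_batch, go_id):
    """Drop the last id of each row and prepend the GO id."""
    return [[go_id] + row[:-1] for row in target_batch]

batch = [[4, 5, 6, 3], [7, 8, 9, 3]]  # 3 playing the role of <EOS>
processed = process_decoder_input_py(batch, GO)
# processed == [[2, 4, 5, 6], [2, 7, 8, 9]]
```

Shifting the targets right by one step like this gives the decoder its previous ground-truth word as input at each time step during training.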
###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function ending = tf.strided_slice(target_data, [0,0], [batch_size,-1], [1,1]) output = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) # print(output) return output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ #
TODO: Implement Function embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, vocab_size=source_vocab_size, embed_dim=encoding_embedding_size ) # RNN def make_cell(rnn_size): enc_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=42)) return enc_cell enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) enc_dropout_cell = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob) enc_output, enc_state = tf.nn.dynamic_rnn(enc_dropout_cell, embed_input, sequence_length=source_sequence_length, dtype=tf.float32 ) return enc_output, enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, 
sequence_length=target_sequence_length, time_major=False) training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, initial_state=encoder_state, output_layer=output_layer) training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length) return training_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - Inference Create inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS ID :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens') inference_helper = 
tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) return inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding Layer Implement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits. Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. 
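Before wiring the two decoders together, it can help to see how differently they are fed. This framework-free sketch (the token ids and the `toy_step` function are made up for illustration) contrasts teacher forcing, as used by the training decoder, with the greedy feedback loop used at inference time:

```python
# Framework-free sketch of the two decoding modes combined in decoding_layer():
# training uses teacher forcing (the ground-truth target shifted right with <GO>),
# while inference feeds each prediction back in as the next input.
GO, EOS = 0, 1  # illustrative ids

def teacher_forcing_inputs(target_ids):
    # What the training decoder consumes: <GO> prepended, last token dropped
    # (this mirrors process_decoder_input above).
    return [GO] + target_ids[:-1]

def greedy_decode(step, max_len):
    # What GreedyEmbeddingHelper does conceptually: feed back the argmax token
    # until <EOS> or the length limit is reached.
    outputs, token = [], GO
    for _ in range(max_len):
        token = step(token)
        outputs.append(token)
        if token == EOS:
            break
    return outputs

# Toy "model": emits previous id + 2, then <EOS> once it would pass 6.
toy_step = lambda tok: tok + 2 if tok + 2 <= 6 else EOS
print(teacher_forcing_inputs([5, 6, 7, EOS]))  # [0, 5, 6, 7]
print(greedy_decode(toy_step, max_len=10))     # [2, 4, 6, 1]
```

Because inference has no ground truth to fall back on, the two decoders must share weights, which is exactly why the `decode` variable scope is reused below.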
###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function # Decoder Embedding dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # decoder Cell def make_cell(rnn_size): dec_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=42)) return dec_cell dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) # Dense layer to map the outputs of the decoder to the elements of our vocabulary output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean=0.0, stddev=0.1)) with tf.variable_scope('decode'): training_decoder_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] with tf.variable_scope('decode', reuse=True): inference_decoder_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, 
end_of_sequence_id, max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural Network Apply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function # 
Encode the input _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) # Process the target data dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training Hyperparameters Tune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability.- Set `display_step` to state how many steps between each debug output statement. ###Code # Number of Epochs epochs = 40 # Batch Size batch_size = 256 # RNN Size rnn_size = 50 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 15 decoding_embedding_size = 15 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.5 display_step = 500 ###Output _____no_output_____ ###Markdown Build the Graph Build the graph using the neural network you implemented. 
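The graph below zeroes out the loss over padded positions with `tf.sequence_mask`. As a NumPy sketch of the mask it produces (this is an illustration of the behavior, not the TensorFlow API itself):

```python
# NumPy sketch of the loss masking used in the graph: per-sequence lengths
# become a 0/1 matrix so that <PAD> positions do not contribute to the loss.
import numpy as np

def sequence_mask(lengths, maxlen):
    # Row i is 1.0 for the first lengths[i] positions and 0.0 afterwards.
    return (np.arange(maxlen)[None, :] < np.asarray(lengths)[:, None]).astype(np.float32)

mask = sequence_mask([2, 4, 3], maxlen=4)
print(mask)
# [[1. 1. 0. 0.]
#  [1. 1. 1. 1.]
#  [1. 1. 1. 0.]]
```

Multiplying the per-token cross-entropy by this mask (which is what `tf.contrib.seq2seq.sequence_loss` does with its `weights` argument) keeps padding from dominating the training signal.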
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for 
sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. 
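As a quick sanity check before launching training, the padding helper defined above can be exercised standalone. This pure-Python restatement (a PAD id of 0 is assumed purely for illustration) shows how every sentence in a batch is brought to the same length:

```python
# Standalone restatement of pad_sentence_batch above, for a quick sanity
# check of the batching. A PAD id of 0 is assumed for illustration.
def pad_sentence_batch(sentence_batch, pad_int):
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

batch = [[5, 6], [7, 8, 9, 10], [11]]
padded = pad_sentence_batch(batch, pad_int=0)
print(padded)  # [[5, 6, 0, 0], [7, 8, 9, 10], [11, 0, 0, 0]]
assert all(len(row) == 4 for row in padded)
```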
###Code %%time """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, 
Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 500/538 - Train Accuracy: 0.6399, Validation Accuracy: 0.6243, Loss: 0.5003 Epoch 1 Batch 500/538 - Train Accuracy: 0.8556, Validation Accuracy: 0.8203, Loss: 0.1831 Epoch 2 Batch 500/538 - Train Accuracy: 0.9212, Validation Accuracy: 0.8757, Loss: 0.0768 Epoch 3 Batch 500/538 - Train Accuracy: 0.9437, Validation Accuracy: 0.9119, Loss: 0.0434 Epoch 4 Batch 500/538 - Train Accuracy: 0.9647, Validation Accuracy: 0.9148, Loss: 0.0325 Epoch 5 Batch 500/538 - Train Accuracy: 0.9643, Validation Accuracy: 0.9322, Loss: 0.0252 Epoch 6 Batch 500/538 - Train Accuracy: 0.9613, Validation Accuracy: 0.9419, Loss: 0.0237 Epoch 7 Batch 500/538 - Train Accuracy: 0.9625, Validation Accuracy: 0.9304, Loss: 0.0193 Epoch 8 Batch 500/538 - Train Accuracy: 0.9716, Validation Accuracy: 0.9398, Loss: 0.0201 Epoch 9 Batch 500/538 - Train Accuracy: 0.9734, Validation Accuracy: 0.9371, Loss: 0.0188 Epoch 10 Batch 500/538 - Train Accuracy: 0.9599, Validation Accuracy: 0.9309, Loss: 0.0218 Epoch 11 Batch 500/538 - Train Accuracy: 0.9734, Validation Accuracy: 0.9426, Loss: 0.0191 Epoch 12 Batch 500/538 - Train Accuracy: 0.9767, Validation Accuracy: 0.9529, Loss: 0.0137 Epoch 13 Batch 500/538 - Train Accuracy: 0.9822, Validation Accuracy: 0.9540, Loss: 0.0124 Epoch 14 Batch 500/538 - Train Accuracy: 0.9847, Validation Accuracy: 0.9565, Loss: 0.0134 Epoch 15 Batch 500/538 - Train Accuracy: 0.9728, Validation Accuracy: 0.9595, Loss: 0.0143 Epoch 16 Batch 500/538 - Train Accuracy: 0.9805, Validation Accuracy: 0.9515, Loss: 0.0120 Epoch 17 Batch 500/538 - Train Accuracy: 0.9723, Validation Accuracy: 0.9556, Loss: 0.0145 Epoch 18 Batch 500/538 - Train Accuracy: 0.9762, Validation Accuracy: 0.9565, Loss: 0.0127 Epoch 19 Batch 500/538 - Train 
Accuracy: 0.9858, Validation Accuracy: 0.9672, Loss: 0.0115 Epoch 20 Batch 500/538 - Train Accuracy: 0.9673, Validation Accuracy: 0.9629, Loss: 0.0139 Epoch 21 Batch 500/538 - Train Accuracy: 0.9783, Validation Accuracy: 0.9600, Loss: 0.0122 Epoch 22 Batch 500/538 - Train Accuracy: 0.9867, Validation Accuracy: 0.9620, Loss: 0.0094 Epoch 23 Batch 500/538 - Train Accuracy: 0.9838, Validation Accuracy: 0.9572, Loss: 0.0107 Epoch 24 Batch 500/538 - Train Accuracy: 0.9849, Validation Accuracy: 0.9700, Loss: 0.0098 Epoch 25 Batch 500/538 - Train Accuracy: 0.9846, Validation Accuracy: 0.9506, Loss: 0.0085 Epoch 26 Batch 500/538 - Train Accuracy: 0.9906, Validation Accuracy: 0.9586, Loss: 0.0098 Epoch 27 Batch 500/538 - Train Accuracy: 0.9785, Validation Accuracy: 0.9572, Loss: 0.0110 Epoch 28 Batch 500/538 - Train Accuracy: 0.9865, Validation Accuracy: 0.9638, Loss: 0.0090 Epoch 29 Batch 500/538 - Train Accuracy: 0.9691, Validation Accuracy: 0.9494, Loss: 0.0159 Epoch 30 Batch 500/538 - Train Accuracy: 0.9792, Validation Accuracy: 0.9599, Loss: 0.0132 Epoch 31 Batch 500/538 - Train Accuracy: 0.9858, Validation Accuracy: 0.9608, Loss: 0.0083 Epoch 32 Batch 500/538 - Train Accuracy: 0.9790, Validation Accuracy: 0.9638, Loss: 0.0114 Epoch 33 Batch 500/538 - Train Accuracy: 0.9828, Validation Accuracy: 0.9703, Loss: 0.0115 Epoch 34 Batch 500/538 - Train Accuracy: 0.9822, Validation Accuracy: 0.9641, Loss: 0.0110 Epoch 35 Batch 500/538 - Train Accuracy: 0.9846, Validation Accuracy: 0.9666, Loss: 0.0134 Epoch 36 Batch 500/538 - Train Accuracy: 0.9812, Validation Accuracy: 0.9673, Loss: 0.0091 Epoch 37 Batch 500/538 - Train Accuracy: 0.9909, Validation Accuracy: 0.9672, Loss: 0.0090 Epoch 38 Batch 500/538 - Train Accuracy: 0.9879, Validation Accuracy: 0.9613, Loss: 0.0102 Epoch 39 Batch 500/538 - Train Accuracy: 0.9860, Validation Accuracy: 0.9657, Loss: 0.0092 Model Trained and Saved CPU times: user 29min 42s, sys: 2min 28s, total: 32min 10s Wall time: 11min 39s ###Markdown 
Save Parameters Save the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int`- Convert words not in the vocabulary to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function processed_sentence = [] for word in sentence.lower().split(' '): if word in vocab_to_int: processed_sentence.append(vocab_to_int[word]) else: processed_sentence.append(vocab_to_int['<UNK>']) return processed_sentence """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown Translate This will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .' 
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [104, 149, 110, 124, 182, 13, 191] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [350, 219, 301, 10, 39, 234, 25, 255, 1] French Words: il a vu un vieux camion noir . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) # array of sentences in english target_text = helper.load_data(target_path) # array of sentences in french ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) # sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . 
new jersey is busy during spring , and it is never hot in march . our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.You can get the `` word id by doing:```pythontarget_vocab_to_int['']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. 
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # list of sentences: source = source_text.split('\n') target = target_text.split('\n') # converting the sentences into ids sequences source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source] target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] for sentence in target] # adding <EOS> token at the end of each target sequence for sentence in target_id_text: sentence.append(target_vocab_to_int['<EOS>']) return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
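When resuming from this checkpoint, it may help to recall what the saved preprocessing contains. This toy, framework-free recap of the `text_to_ids()` contract uses made-up vocabularies purely for illustration:

```python
# Toy recap of the preprocessing contract: each sentence becomes a list of
# ids, and every target sentence gets <EOS> appended. The vocabularies here
# are made up for illustration only.
source_vocab = {'hello': 4, 'world': 5}
target_vocab = {'<EOS>': 1, 'bonjour': 6, 'monde': 7}

def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    source_id_text = [[source_vocab_to_int[w] for w in line.split()]
                      for line in source_text.split('\n')]
    target_id_text = [[target_vocab_to_int[w] for w in line.split()]
                      + [target_vocab_to_int['<EOS>']]
                      for line in target_text.split('\n')]
    return source_id_text, target_id_text

src, tgt = text_to_ids('hello world', 'bonjour monde', source_vocab, target_vocab)
print(src, tgt)  # [[4, 5]] [[6, 7, 1]]
```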
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.2.1 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ inputs = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None]) learning_rate = tf.placeholder(tf.float32) keep_prob = tf.placeholder(tf.float32, name="keep_prob") target_sequence_length = tf.placeholder(tf.int32, [None,], "target_sequence_length") max_target_sequence = tf.reduce_max(target_sequence_length, name="max_target_length") source_sequence_length = tf.placeholder(tf.int32, [None,], name="source_sequence_length") return (inputs, targets, learning_rate, keep_prob, target_sequence_length, max_target_sequence, source_sequence_length) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output ERROR:tensorflow:================================== Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>): <tf.Operation 'assert_rank_2/Assert/Assert' type=Assert> If you want to mark it as used call its "mark_used()" method. 
It was originally created here: ['File "/usr/local/lib/python3.5/runpy.py", line 193, in _run_module_as_main\n "__main__", mod_spec)', 'File "/usr/local/lib/python3.5/runpy.py", line 85, in _run_code\n exec(code, run_globals)', 'File "/usr/local/lib/python3.5/site-packages/ipykernel_launcher.py", line 16, in <module>\n app.launch_new_instance()', 'File "/usr/local/lib/python3.5/site-packages/traitlets/config/application.py", line 658, in launch_instance\n app.start()', 'File "/usr/local/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 477, in start\n ioloop.IOLoop.instance().start()', 'File "/usr/local/lib/python3.5/site-packages/zmq/eventloop/ioloop.py", line 177, in start\n super(ZMQIOLoop, self).start()', 'File "/usr/local/lib/python3.5/site-packages/tornado/ioloop.py", line 888, in start\n handler_func(fd_obj, events)', 'File "/usr/local/lib/python3.5/site-packages/tornado/stack_context.py", line 277, in null_wrapper\n return fn(*args, **kwargs)', 'File "/usr/local/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events\n self._handle_recv()', 'File "/usr/local/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv\n self._run_callback(callback, msg)', 'File "/usr/local/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback\n callback(*args, **kwargs)', 'File "/usr/local/lib/python3.5/site-packages/tornado/stack_context.py", line 277, in null_wrapper\n return fn(*args, **kwargs)', 'File "/usr/local/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 283, in dispatcher\n return self.dispatch_shell(stream, msg)', 'File "/usr/local/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 235, in dispatch_shell\n handler(stream, idents, msg)', 'File "/usr/local/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 399, in execute_request\n user_expressions, allow_stdin)', 'File "/usr/local/lib/python3.5/site-packages/ipykernel/ipkernel.py", line 196, in 
do_execute\n res = shell.run_cell(code, store_history=store_history, silent=silent)', 'File "/usr/local/lib/python3.5/site-packages/ipykernel/zmqshell.py", line 533, in run_cell\n return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)', 'File "/usr/local/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2698, in run_cell\n interactivity=interactivity, compiler=compiler, result=result)', 'File "/usr/local/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2808, in run_ast_nodes\n if self.run_code(code, result):', 'File "/usr/local/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2862, in run_code\n exec(code_obj, self.user_global_ns, self.user_ns)', 'File "<ipython-input-7-7ae39038e397>", line 34, in <module>\n tests.test_model_inputs(model_inputs)', 'File "/output/problem_unittests.py", line 106, in test_model_inputs\n assert tf.assert_rank(lr, 0, message=\'Learning Rate has wrong rank\')', 'File "/usr/local/lib/python3.5/site-packages/tensorflow/python/ops/check_ops.py", line 617, in assert_rank\n dynamic_condition, data, summarize)', 'File "/usr/local/lib/python3.5/site-packages/tensorflow/python/ops/check_ops.py", line 571, in _assert_rank_condition\n return control_flow_ops.Assert(condition, data, summarize=summarize)', 'File "/usr/local/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py", line 170, in wrapped\n return _add_should_use_warning(fn(*args, **kwargs))', 'File "/usr/local/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py", line 139, in _add_should_use_warning\n wrapped = TFShouldUseWarningWrapper(x)', 'File "/usr/local/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py", line 96, in __init__\n stack = [s.strip() for s in traceback.format_stack()]'] ================================== ERROR:tensorflow:================================== Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>): <tf.Operation 
'assert_rank_3/Assert/Assert' type=Assert> If you want to mark it as used call its "mark_used()" method. It was originally created here: ['File "/usr/local/lib/python3.5/runpy.py", line 193, in _run_module_as_main\n "__main__", mod_spec)', 'File "/usr/local/lib/python3.5/runpy.py", line 85, in _run_code\n exec(code, run_globals)', 'File "/usr/local/lib/python3.5/site-packages/ipykernel_launcher.py", line 16, in <module>\n app.launch_new_instance()', 'File "/usr/local/lib/python3.5/site-packages/traitlets/config/application.py", line 658, in launch_instance\n app.start()', 'File "/usr/local/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 477, in start\n ioloop.IOLoop.instance().start()', 'File "/usr/local/lib/python3.5/site-packages/zmq/eventloop/ioloop.py", line 177, in start\n super(ZMQIOLoop, self).start()', 'File "/usr/local/lib/python3.5/site-packages/tornado/ioloop.py", line 888, in start\n handler_func(fd_obj, events)', 'File "/usr/local/lib/python3.5/site-packages/tornado/stack_context.py", line 277, in null_wrapper\n return fn(*args, **kwargs)', 'File "/usr/local/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events\n self._handle_recv()', 'File "/usr/local/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv\n self._run_callback(callback, msg)', 'File "/usr/local/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback\n callback(*args, **kwargs)', 'File "/usr/local/lib/python3.5/site-packages/tornado/stack_context.py", line 277, in null_wrapper\n return fn(*args, **kwargs)', 'File "/usr/local/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 283, in dispatcher\n return self.dispatch_shell(stream, msg)', 'File "/usr/local/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 235, in dispatch_shell\n handler(stream, idents, msg)', 'File "/usr/local/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 399, in execute_request\n 
user_expressions, allow_stdin)', 'File "/usr/local/lib/python3.5/site-packages/ipykernel/ipkernel.py", line 196, in do_execute\n res = shell.run_cell(code, store_history=store_history, silent=silent)', 'File "/usr/local/lib/python3.5/site-packages/ipykernel/zmqshell.py", line 533, in run_cell\n return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)', 'File "/usr/local/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2698, in run_cell\n interactivity=interactivity, compiler=compiler, result=result)', 'File "/usr/local/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2808, in run_ast_nodes\n if self.run_code(code, result):', 'File "/usr/local/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2862, in run_code\n exec(code_obj, self.user_global_ns, self.user_ns)', 'File "<ipython-input-7-7ae39038e397>", line 34, in <module>\n tests.test_model_inputs(model_inputs)', 'File "/output/problem_unittests.py", line 107, in test_model_inputs\n assert tf.assert_rank(keep_prob, 0, message=\'Keep Probability has wrong rank\')', 'File "/usr/local/lib/python3.5/site-packages/tensorflow/python/ops/check_ops.py", line 617, in assert_rank\n dynamic_condition, data, summarize)', 'File "/usr/local/lib/python3.5/site-packages/tensorflow/python/ops/check_ops.py", line 571, in _assert_rank_condition\n return control_flow_ops.Assert(condition, data, summarize=summarize)', 'File "/usr/local/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py", line 170, in wrapped\n return _add_should_use_warning(fn(*args, **kwargs))', 'File "/usr/local/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py", line 139, in _add_should_use_warning\n wrapped = TFShouldUseWarningWrapper(x)', 'File "/usr/local/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py", line 96, in __init__\n stack = [s.strip() for s in traceback.format_stack()]'] ================================== ###Markdown Process Decoder 
InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch. ###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # removing last column processed_targets = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) # adding <GO> token at the beginning of every sentence processed_targets = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), processed_targets], 1) return processed_targets """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param 
source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ encoder_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) # TensorFlow 1.1 doesn't create new weight matrices when one cell object is reused in contrib.rnn.MultiRNNCell, so build a fresh cell per layer encoder_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(rnn_size), output_keep_prob=keep_prob) for _ in range(num_layers)]) # encoder_embed_input isn't a fixed-length array; each sentence has its own length. source_sequence_length is a rank-1 tensor of those lengths encoder_output, encoder_state = tf.nn.dynamic_rnn(encoder_cell, encoder_embed_input, sequence_length=source_sequence_length, dtype=tf.float32) return encoder_output, encoder_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code from tensorflow.python.layers.core import Dense def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest
sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # the decoder input is already embedded, no need to embed. # output layer and rnn cell already sent in, no need to create them # The training decoder layer will not feed itself with the outputs of each timestep. Instead, it feeds itself # with the actual next target. # creating training decoder inside name scope 'decode' so the inference decoder can use the same weights # helper to send to BasicDecoder to help read the inputs training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) # Actual decoder. Sending in the decoder multirnn cell, the training helper, encoder final state and the output layer # to be applied at the end of the run training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) # only returning the first value of the tuple (final_outputs, final_state, final_sequence_lengths). 
training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length)[0] return training_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): # NOT USING KEEP_PROB, VOCAB_SIZE AND DECODING_SCOPE """ Create a decoding layer for inference :param encoder_state: Encoder state # going to dynamic decode :param dec_cell: Decoder RNN Cell # going to dynamic decode :param dec_embeddings: Decoder embeddings # going to decode helper :param start_of_sequence_id: GO ID # start token :param end_of_sequence_id: EOS Id # end token :param max_target_sequence_length: Maximum length of target sequences # dynamic decode :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TenorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # Assumptions: # decode embeddings is the weight matrix of the decoder embedding layer # # using the same variable scope as the training decoder # For this inference decoder, we're not feeding it with targets every time step, we're 
actually using the output of # a step as the input of the next one. start_tokens = tf.tile(tf.constant([start_of_sequence_id], tf.int32), [batch_size], name='start_tokens') # Greedy embedding helper # This helper will treat the output of the inference rnn as logits, # and pass it through the embedding layer to get the next input inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) # we're actually generating a new sentence using the decoder, so it uses the output of a step # as the input to the next one. # inference decoder: receives decoder cell, inference helper to generate next input, encoder_state after it has read # the input, and an output layer to be applied to the output before storing the result inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) # inference outputs: running the inference decoder, setting the maximum number of iterations as the length of the # longest sequence. 
Only retrieving the rnn_output inference_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)[0] return inference_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. 
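The split between the training and inference decoders comes down to what feeds each timestep: teacher forcing feeds the ground-truth target, while greedy inference feeds the model's own previous output. Here is a framework-free sketch of that difference, using a toy `step` function as a hypothetical stand-in for one decoder timestep (not part of the project code):

```python
def step(prev_token, state):
    # toy stand-in for one decoder timestep: output and new state are (state + input) mod 5
    new_state = (state + prev_token) % 5
    return new_state, new_state  # (output token, new state)

targets = [3, 1, 4, 2]
go_id = 0  # stands in for the <GO> token id

# Training (teacher forcing): the ground-truth target feeds each next step.
state, prev, outputs = 0, go_id, []
for t in targets:
    out, state = step(prev, state)
    outputs.append(out)
    prev = t  # feed the actual target, not the prediction

# Inference (greedy): each step consumes the previous step's own output.
state, prev, preds = 0, go_id, []
for _ in range(len(targets)):
    out, state = step(prev, state)
    preds.append(out)
    prev = out  # feed the model's own output

print(outputs)  # [0, 3, 4, 3]
print(preds)    # [0, 0, 0, 0]
```

Because the two decoders differ only in this feeding rule, they can share a single set of RNN weights, which is what the `tf.variable_scope` reuse in the next cell arranges.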
###Code from tensorflow.python.layers.core import Dense def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # embedding table: representations with which the target words will be embedded dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) # embedding targets dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # multi-layer LSTM cell dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)]) # output layer, will be used to transform rnn outputs to logits output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) with tf.variable_scope('decode'): decoder_training_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) # reusing the variable scope so RNN weights and embeddings are shared across training and inference decoders with tf.variable_scope('decode', reuse=True): decoder_inference_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], 
max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return decoder_training_output, decoder_inference_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # encoding layer outputs and state encoder_outputs, 
encoder_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) # preparing the decoder input dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) # decoder layer's training outputs and inference output training_decoder_outputs, inference_decoder_output = decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_decoder_outputs, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 5 # Batch Size batch_size = 256 # RNN Size rnn_size = 512 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 200 decoding_embedding_size = 200 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.75 display_step = 50 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
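One detail worth previewing: the optimization block in the next cell clips each gradient element into [-1, 1] before applying updates. As a NumPy sketch of the element-wise operation that `tf.clip_by_value(grad, -1., 1.)` performs (toy values, not taken from the model):

```python
import numpy as np

# any element outside [-1, 1] is capped to the nearest bound; the rest pass through unchanged
grad = np.array([-2.5, -0.3, 0.0, 0.7, 4.0])
clipped = np.clip(grad, -1.0, 1.0)
print(clipped.tolist())  # [-1.0, -0.3, 0.0, 0.7, 1.0]
```

Clipping keeps a single large gradient from destabilizing training, which is common practice for RNNs where gradients can spike.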
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): # create placeholders input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - 
len(sentence)) for sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. 
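As a quick sanity check of the batching helpers above, here is a self-contained copy of `pad_sentence_batch` applied to a toy batch (pad id 0 chosen arbitrarily; real runs use the `<PAD>` id from the vocabulary):

```python
def pad_sentence_batch(sentence_batch, pad_int):
    # pad each sentence up to the length of the longest sentence in the batch
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

batch = [[5, 6], [7, 8, 9], [4]]
padded = pad_sentence_batch(batch, 0)
print(padded)  # [[5, 6, 0], [7, 8, 9], [4, 0, 0]]
```

Padding only to the longest sentence in each batch, rather than in the whole corpus, keeps the wasted computation per batch small.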
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 50/538 - Train Accuracy: 0.4562, Validation Accuracy: 0.5060, Loss: 2.0081 Epoch 0 Batch 100/538 - Train Accuracy: 0.4877, Validation Accuracy: 0.5247, Loss: 1.1880 Epoch 0 Batch 150/538 - Train Accuracy: 0.5410, Validation Accuracy: 0.5522, Loss: 0.8732 Epoch 0 Batch 200/538 - Train Accuracy: 0.5707, Validation Accuracy: 0.5744, Loss: 0.7122 Epoch 0 Batch 250/538 - Train Accuracy: 0.5703, Validation Accuracy: 0.5959, Loss: 0.6545 Epoch 0 Batch 300/538 - Train Accuracy: 0.5964, Validation Accuracy: 0.6025, Loss: 0.5775 Epoch 0 Batch 350/538 - Train Accuracy: 0.5887, Validation Accuracy: 0.6046, Loss: 0.5576 Epoch 0 Batch 400/538 - Train Accuracy: 0.6181, Validation Accuracy: 0.6408, Loss: 0.4981 Epoch 0 Batch 450/538 - Train Accuracy: 0.6503, Validation Accuracy: 0.6529, Loss: 0.4814 Epoch 0 Batch 500/538 - Train Accuracy: 0.6898, Validation Accuracy: 0.6642, Loss: 0.3606 Epoch 1 Batch 50/538 - Train Accuracy: 0.7863, Validation Accuracy: 0.7505, Loss: 0.2835 Epoch 1 Batch 100/538 - Train Accuracy: 0.8248, Validation Accuracy: 0.7717, Loss: 0.2029 Epoch 1 Batch 150/538 - Train Accuracy: 0.8566, Validation Accuracy: 0.8350, Loss: 0.1521 Epoch 1 Batch 200/538 - Train Accuracy: 0.8742, Validation Accuracy: 0.8679, Loss: 0.1138 Epoch 1 Batch 250/538 - Train Accuracy: 0.9080, Validation Accuracy: 0.8901, Loss: 0.0910 Epoch 1 Batch 300/538 - Train Accuracy: 0.8958, Validation Accuracy: 0.9093, Loss: 0.0767 Epoch 1 Batch 350/538 - Train Accuracy: 0.9079, Validation Accuracy: 0.9018, Loss: 0.0747 Epoch 1 Batch 400/538 - Train Accuracy: 0.9323, Validation Accuracy: 0.9180, Loss: 0.0600 Epoch 1 Batch 450/538 - Train Accuracy: 0.9107, Validation Accuracy: 0.9102, Loss: 0.0632 Epoch 1 Batch 500/538 - Train Accuracy: 0.9432, 
Validation Accuracy: 0.9313, Loss: 0.0398 Epoch 2 Batch 50/538 - Train Accuracy: 0.9455, Validation Accuracy: 0.9263, Loss: 0.0377 Epoch 2 Batch 100/538 - Train Accuracy: 0.9416, Validation Accuracy: 0.9423, Loss: 0.0316 Epoch 2 Batch 150/538 - Train Accuracy: 0.9523, Validation Accuracy: 0.9379, Loss: 0.0316 Epoch 2 Batch 200/538 - Train Accuracy: 0.9496, Validation Accuracy: 0.9300, Loss: 0.0287 Epoch 2 Batch 250/538 - Train Accuracy: 0.9543, Validation Accuracy: 0.9425, Loss: 0.0350 Epoch 2 Batch 300/538 - Train Accuracy: 0.9408, Validation Accuracy: 0.9592, Loss: 0.0331 Epoch 2 Batch 350/538 - Train Accuracy: 0.9635, Validation Accuracy: 0.9499, Loss: 0.0326 Epoch 2 Batch 400/538 - Train Accuracy: 0.9565, Validation Accuracy: 0.9462, Loss: 0.0297 Epoch 2 Batch 450/538 - Train Accuracy: 0.9355, Validation Accuracy: 0.9506, Loss: 0.0393 Epoch 2 Batch 500/538 - Train Accuracy: 0.9695, Validation Accuracy: 0.9400, Loss: 0.0220 Epoch 3 Batch 50/538 - Train Accuracy: 0.9676, Validation Accuracy: 0.9538, Loss: 0.0221 Epoch 3 Batch 100/538 - Train Accuracy: 0.9682, Validation Accuracy: 0.9522, Loss: 0.0198 Epoch 3 Batch 150/538 - Train Accuracy: 0.9730, Validation Accuracy: 0.9638, Loss: 0.0216 Epoch 3 Batch 200/538 - Train Accuracy: 0.9727, Validation Accuracy: 0.9537, Loss: 0.0156 Epoch 3 Batch 250/538 - Train Accuracy: 0.9779, Validation Accuracy: 0.9657, Loss: 0.0252 Epoch 3 Batch 300/538 - Train Accuracy: 0.9688, Validation Accuracy: 0.9663, Loss: 0.0218 Epoch 3 Batch 350/538 - Train Accuracy: 0.9831, Validation Accuracy: 0.9629, Loss: 0.0220 Epoch 3 Batch 400/538 - Train Accuracy: 0.9723, Validation Accuracy: 0.9670, Loss: 0.0190 Epoch 3 Batch 450/538 - Train Accuracy: 0.9448, Validation Accuracy: 0.9590, Loss: 0.0275 Epoch 3 Batch 500/538 - Train Accuracy: 0.9757, Validation Accuracy: 0.9604, Loss: 0.0133 Epoch 4 Batch 50/538 - Train Accuracy: 0.9742, Validation Accuracy: 0.9569, Loss: 0.0159 Epoch 4 Batch 100/538 - Train Accuracy: 0.9797, Validation Accuracy: 
0.9618, Loss: 0.0139 Epoch 4 Batch 150/538 - Train Accuracy: 0.9771, Validation Accuracy: 0.9753, Loss: 0.0147 Epoch 4 Batch 200/538 - Train Accuracy: 0.9732, Validation Accuracy: 0.9609, Loss: 0.0422 Epoch 4 Batch 250/538 - Train Accuracy: 0.9652, Validation Accuracy: 0.9638, Loss: 0.0241 Epoch 4 Batch 300/538 - Train Accuracy: 0.9814, Validation Accuracy: 0.9721, Loss: 0.0167 Epoch 4 Batch 350/538 - Train Accuracy: 0.9818, Validation Accuracy: 0.9689, Loss: 0.0177 Epoch 4 Batch 400/538 - Train Accuracy: 0.9844, Validation Accuracy: 0.9785, Loss: 0.0132 Epoch 4 Batch 450/538 - Train Accuracy: 0.9531, Validation Accuracy: 0.9606, Loss: 0.0202 Epoch 4 Batch 500/538 - Train Accuracy: 0.9830, Validation Accuracy: 0.9650, Loss: 0.0089 Model Trained and Saved ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id.
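The preprocessing steps described above (lowercase, map words to ids, fall back to the unknown-word id) can be sketched without the notebook's helpers. The toy vocabulary below is invented for illustration; the real mapping comes from `helper.load_preprocess()`.

```python
# Minimal sketch of the sentence_to_seq() contract: lowercase, split,
# and map out-of-vocabulary words to the <UNK> id.
# toy_vocab_to_int is made up for illustration.
toy_vocab_to_int = {'<UNK>': 2, 'he': 10, 'saw': 11, 'a': 12, 'truck': 13}

def sentence_to_seq_sketch(sentence, vocab_to_int):
    # dict.get() with a default id handles unknown words in one step.
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split()]

print(sentence_to_seq_sketch('He saw a YELLOW truck', toy_vocab_to_int))
# [10, 11, 12, 2, 13]  ('yellow' is out of vocabulary, so it maps to 2)
```

Using `dict.get()` with a default avoids the try/except pattern entirely; both approaches give the same result for unknown words.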
###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ id_sentence = [] for word in sentence.lower().split(" "): try: id_sentence.append(vocab_to_int[word]) except KeyError: id_sentence.append(vocab_to_int['<UNK>']) return id_sentence """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .' """ DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [84, 213, 188, 176, 39, 175, 11]
English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [347, 67, 173, 352, 306, 356, 178, 1] French Words: il a vu un jaune rouillé . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. 
###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . 
california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of each sentence from `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # Done: Implement Function source_id_text = [] for sentence in source_text.split('\n'): sentence_int = [source_vocab_to_int[word] for word in sentence.split()] source_id_text.append(sentence_int) target_id_text = [] for sentence in target_text.split('\n'): sentence_int = [target_vocab_to_int[word] for word in sentence.split()] sentence_int.append(target_vocab_to_int['<EOS>']) # mark the end of each target sentence target_id_text.append(sentence_int) return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
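The `text_to_ids()` contract described earlier can be checked on a toy corpus: every target sentence gets the `<EOS>` id appended so the decoder can learn where sentences end. Both vocabularies below are invented for this sketch; the real ones come from the preprocessing helpers.

```python
# Toy illustration of the text_to_ids() behavior: map words to ids and
# append the <EOS> id to each target sentence. Vocabularies are made up.
src_vocab = {'hello': 4, 'world': 5}
tgt_vocab = {'<EOS>': 1, 'bonjour': 6, 'monde': 7}

def text_to_ids_sketch(source_text, target_text, src_v, tgt_v):
    source_ids = [[src_v[w] for w in line.split()]
                  for line in source_text.split('\n')]
    # Append <EOS> so the decoder can learn where each sentence stops.
    target_ids = [[tgt_v[w] for w in line.split()] + [tgt_v['<EOS>']]
                  for line in target_text.split('\n')]
    return source_ids, target_ids

src_ids, tgt_ids = text_to_ids_sketch('hello world', 'bonjour monde',
                                      src_vocab, tgt_vocab)
print(src_ids, tgt_ids)  # [[4, 5]] [[6, 7, 1]]
```

Note that only the target side gets `<EOS>`; source sentences are fed to the encoder as-is.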
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0. You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.0.0 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions:- `model_inputs`- `process_decoding_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability) """ # Done: Implement Function # Dim Types - Rank and Shape reference: # https://www.tensorflow.org/programmers_guide/dims_types input = tf.placeholder(tf.int32, shape=(None, None), name='input') targets = tf.placeholder(tf.int32, shape=(None, None)) learning_rate = tf.placeholder(tf.float32) keep_prob = tf.placeholder(tf.float32, name='keep_prob') return input, targets, learning_rate, keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoding InputImplement `process_decoding_input` using TensorFlow to remove the last word id from each batch in `target_data` and concatenate the `<GO>` ID to the beginning of each batch. ###Code def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # Done: Implement Function # tf.strided_slice - https://www.tensorflow.org/api_docs/python/tf/strided_slice # tf.concat - https://www.tensorflow.org/api_docs/python/tf/concat # tf.fill - https://www.tensorflow.org/api_docs/python/tf/fill batch = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) decode_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), batch], 1) return decode_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn).
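Before implementing the encoder, it may help to see what the `process_decoding_input()` transformation above does to a concrete batch. This is a plain-NumPy sketch of the three TF ops, not the graph version; the `<GO>` id and the target ids are invented for illustration.

```python
import numpy as np

# NumPy sketch of process_decoding_input(): drop the last id of each
# target row (the strided_slice), then prepend the <GO> id (fill+concat).
# GO_ID and the batch contents are invented for illustration.
GO_ID = 3
target_batch = np.array([[11, 12, 13],
                         [21, 22, 23]])

trimmed = target_batch[:, :-1]                            # ~ tf.strided_slice
go_col = np.full((target_batch.shape[0], 1), GO_ID)       # ~ tf.fill
decode_input = np.concatenate([go_col, trimmed], axis=1)  # ~ tf.concat
print(decode_input)
# [[ 3 11 12]
#  [ 3 21 22]]
```

The decoder thus sees `<GO>` followed by the target shifted right by one step, which is exactly the teacher-forcing input it needs during training.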
###Code def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # Done: Implement Function # tf.nn.dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None) lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) # Note: keep_prob is unused here; dropout could be applied by wrapping the cell # with tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob). cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers) outputs, state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32) return state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate training logits using [`tf.contrib.seq2seq.simple_decoder_fn_train()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_train) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder). Apply the `output_fn` to the [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder) outputs.
###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ # Done: Implement Function dynamic_fn_train = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) outputs, final_state, final_context_state = tf.contrib.seq2seq.dynamic_rnn_decoder( dec_cell, dynamic_fn_train, dec_embed_input, sequence_length, scope=decoding_scope) # Apply output function train_logits = output_fn(outputs) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference logits using [`tf.contrib.seq2seq.simple_decoder_fn_inference()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_inference) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder).
###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ # Done: Implement Function dynamic_fn_infer = tf.contrib.seq2seq.simple_decoder_fn_inference( output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, dtype=tf.int32) outputs, final_state, final_context_state = tf.contrib.seq2seq.dynamic_rnn_decoder( dec_cell, dynamic_fn_infer, scope=decoding_scope) return outputs """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.- Create RNN cell for decoding using `rnn_size` and `num_layers`.- Create the output function using [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) to transform its input, logits, to class logits.- Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)` function to get the training logits.- Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)` function to get the inference logits.Note: You'll need to use
[tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. ###Code def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # Done: Implement Function with tf.variable_scope("decoding") as decoding_scope: lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) dec_cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers) dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, input_keep_prob=keep_prob, output_keep_prob=keep_prob) output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope) training_logits = decoding_layer_train( encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) with tf.variable_scope("decoding", reuse=True) as decoding_scope: inference_logits = decoding_layer_infer( encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], sequence_length, vocab_size, decoding_scope, output_fn, keep_prob) return training_logits, inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)`.- Process target data using your 
`process_decoding_input(target_data, target_vocab_to_int, batch_size)` function.- Apply embedding to the target data for the decoder.- Decode the encoded input using your `decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)`. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ # Done: Implement Function rnn_inputs = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) encoder_state = encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob) decode_input = process_decoding_input(target_data, target_vocab_to_int, batch_size) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, decode_input) training_logits, inference_logits = decoding_layer( dec_embed_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return training_logits, inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed
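The `seq2seq_model()` above embeds decoder ids with `tf.nn.embedding_lookup`. Mechanically, an embedding matrix is just a `(vocab_size, embed_dim)` table indexed by word id, which a short NumPy sketch makes concrete (all sizes and values here are invented for illustration):

```python
import numpy as np

# Sketch of the decoder-embedding step: an embedding matrix is a
# (vocab_size, embed_dim) table, and lookup is plain row indexing.
vocab_size, embed_dim = 5, 3
embeddings = np.arange(vocab_size * embed_dim, dtype=float).reshape(vocab_size, embed_dim)

word_ids = np.array([[0, 2, 4]])   # one sentence of ids (batch of 1)
embedded = embeddings[word_ids]    # ~ tf.nn.embedding_lookup(embeddings, word_ids)
print(embedded.shape)              # (1, 3, 3): (batch, time, embed_dim)
```

In the TF graph the table is a trainable `tf.Variable`, so the rows (word vectors) are learned along with the rest of the network.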
###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability ###Code # Number of Epochs epochs = 2 # Batch Size batch_size = 256 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 227 decoding_embedding_size = 227 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.8 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = 
optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i,
len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 0/538 - Train Accuracy: 0.234, Validation Accuracy: 0.316, Loss: 5.880 Epoch 0 Batch 1/538 - Train Accuracy: 0.274, Validation Accuracy: 0.351, Loss: 4.366 Epoch 0 Batch 2/538 - Train Accuracy: 0.264, Validation Accuracy: 0.328, Loss: 4.572 Epoch 0 Batch 3/538 - Train Accuracy: 0.266, Validation Accuracy: 0.347, Loss: 3.962 Epoch 0 Batch 4/538 - Train Accuracy: 0.284, Validation Accuracy: 0.353, Loss: 3.793 Epoch 0 Batch 5/538 - Train Accuracy: 0.303, Validation Accuracy: 0.352, Loss: 3.579 Epoch 0 Batch 6/538 - Train Accuracy: 0.311, Validation Accuracy: 0.352, Loss: 3.373 Epoch 0 Batch 7/538 - Train Accuracy: 0.287, Validation Accuracy: 0.352, Loss: 3.462 Epoch 0 Batch 8/538 - Train Accuracy: 0.282, Validation Accuracy: 0.349, Loss: 3.366 Epoch 0 Batch 9/538 - Train Accuracy: 0.289, Validation Accuracy: 0.354, Loss: 3.387 Epoch 0 Batch 10/538 - Train Accuracy: 0.273, Validation Accuracy: 0.357, Loss: 3.353 Epoch 0 Batch 11/538 - Train Accuracy: 0.288, Validation Accuracy: 0.359, Loss: 3.268 Epoch 0 Batch 12/538 - Train Accuracy: 0.303, Validation Accuracy: 0.377, Loss: 3.222 Epoch 0 Batch 13/538 - Train Accuracy: 0.358, Validation Accuracy: 0.385, Loss: 2.944 Epoch 0 Batch 14/538 - Train Accuracy: 0.326, Validation Accuracy: 0.391, Loss: 3.034 Epoch 0 Batch 15/538 - Train Accuracy: 0.368, Validation Accuracy: 0.395, Loss: 2.872 Epoch 0 Batch 16/538 - Train Accuracy: 0.350, Validation Accuracy: 0.394, Loss: 2.836 Epoch 0 Batch 17/538 - Train Accuracy: 0.343, Validation Accuracy: 0.395, Loss: 2.878 Epoch 0 Batch 18/538 - Train Accuracy: 0.327, Validation Accuracy: 0.396, Loss: 2.922 Epoch 0 Batch 19/538 - Train Accuracy: 0.329, Validation Accuracy: 0.398, Loss: 2.890 Epoch 0 Batch 20/538 - Train Accuracy: 0.376, Validation Accuracy: 0.412, Loss: 2.741 Epoch 0 Batch 21/538 - Train 
Accuracy: 0.317, Validation Accuracy: 0.413, Loss: 2.923 Epoch 0 Batch 22/538 - Train Accuracy: 0.358, Validation Accuracy: 0.417, Loss: 2.755 Epoch 0 Batch 23/538 - Train Accuracy: 0.375, Validation Accuracy: 0.431, Loss: 2.726 Epoch 0 Batch 24/538 - Train Accuracy: 0.392, Validation Accuracy: 0.439, Loss: 2.667 Epoch 0 Batch 25/538 - Train Accuracy: 0.377, Validation Accuracy: 0.438, Loss: 2.671 Epoch 0 Batch 26/538 - Train Accuracy: 0.384, Validation Accuracy: 0.441, Loss: 2.657 Epoch 0 Batch 27/538 - Train Accuracy: 0.392, Validation Accuracy: 0.444, Loss: 2.605 Epoch 0 Batch 28/538 - Train Accuracy: 0.451, Validation Accuracy: 0.454, Loss: 2.360 Epoch 0 Batch 29/538 - Train Accuracy: 0.414, Validation Accuracy: 0.460, Loss: 2.507 Epoch 0 Batch 30/538 - Train Accuracy: 0.404, Validation Accuracy: 0.468, Loss: 2.573 Epoch 0 Batch 31/538 - Train Accuracy: 0.434, Validation Accuracy: 0.466, Loss: 2.398 Epoch 0 Batch 32/538 - Train Accuracy: 0.414, Validation Accuracy: 0.462, Loss: 2.450 Epoch 0 Batch 33/538 - Train Accuracy: 0.438, Validation Accuracy: 0.478, Loss: 2.415 Epoch 0 Batch 34/538 - Train Accuracy: 0.444, Validation Accuracy: 0.490, Loss: 2.443 Epoch 0 Batch 35/538 - Train Accuracy: 0.398, Validation Accuracy: 0.456, Loss: 2.416 Epoch 0 Batch 36/538 - Train Accuracy: 0.453, Validation Accuracy: 0.493, Loss: 2.316 Epoch 0 Batch 37/538 - Train Accuracy: 0.456, Validation Accuracy: 0.504, Loss: 2.332 Epoch 0 Batch 38/538 - Train Accuracy: 0.434, Validation Accuracy: 0.493, Loss: 2.387 Epoch 0 Batch 39/538 - Train Accuracy: 0.454, Validation Accuracy: 0.512, Loss: 2.356 Epoch 0 Batch 40/538 - Train Accuracy: 0.502, Validation Accuracy: 0.511, Loss: 2.107 Epoch 0 Batch 41/538 - Train Accuracy: 0.458, Validation Accuracy: 0.510, Loss: 2.298 Epoch 0 Batch 42/538 - Train Accuracy: 0.441, Validation Accuracy: 0.498, Loss: 2.228 Epoch 0 Batch 43/538 - Train Accuracy: 0.447, Validation Accuracy: 0.501, Loss: 2.248 Epoch 0 Batch 44/538 - Train Accuracy: 0.458, 
Validation Accuracy: 0.523, Loss: 2.301
Epoch 0 Batch 45/538 - Train Accuracy: 0.474, Validation Accuracy: 0.499, Loss: 2.069
Epoch 0 Batch 100/538 - Train Accuracy: 0.541, Validation Accuracy: 0.552, Loss: 0.913
Epoch 0 Batch 200/538 - Train Accuracy: 0.651, Validation Accuracy: 0.645, Loss: 0.558
Epoch 0 Batch 300/538 - Train Accuracy: 0.733, Validation Accuracy: 0.749, Loss: 0.343
Epoch 0 Batch 400/538 - Train Accuracy: 0.841, Validation Accuracy: 0.824, Loss: 0.211
Epoch 0 Batch 500/538 - Train Accuracy: 0.919, Validation Accuracy: 0.904, Loss: 0.101
[... per-batch training log truncated: over epoch 0 (batches 45-526 of 538), train/validation accuracy rises steadily from ~0.47/0.50 to ~0.93/0.90 while loss falls from ~2.3 to ~0.1 ...]
Epoch 0 Batch 526/538 - Train Accuracy: 0.896, Validation Accuracy: 
0.905, Loss: 0.108 Epoch 0 Batch 527/538 - Train Accuracy: 0.923, Validation Accuracy: 0.900, Loss: 0.100 Epoch 0 Batch 528/538 - Train Accuracy: 0.897, Validation Accuracy: 0.893, Loss: 0.115 Epoch 0 Batch 529/538 - Train Accuracy: 0.884, Validation Accuracy: 0.902, Loss: 0.114 Epoch 0 Batch 530/538 - Train Accuracy: 0.878, Validation Accuracy: 0.894, Loss: 0.113 Epoch 0 Batch 531/538 - Train Accuracy: 0.916, Validation Accuracy: 0.888, Loss: 0.107 Epoch 0 Batch 532/538 - Train Accuracy: 0.892, Validation Accuracy: 0.874, Loss: 0.098 Epoch 0 Batch 533/538 - Train Accuracy: 0.890, Validation Accuracy: 0.883, Loss: 0.105 Epoch 0 Batch 534/538 - Train Accuracy: 0.898, Validation Accuracy: 0.869, Loss: 0.084 Epoch 0 Batch 535/538 - Train Accuracy: 0.918, Validation Accuracy: 0.875, Loss: 0.099 Epoch 0 Batch 536/538 - Train Accuracy: 0.913, Validation Accuracy: 0.877, Loss: 0.110 Epoch 1 Batch 0/538 - Train Accuracy: 0.903, Validation Accuracy: 0.865, Loss: 0.089 Epoch 1 Batch 1/538 - Train Accuracy: 0.917, Validation Accuracy: 0.879, Loss: 0.097 Epoch 1 Batch 2/538 - Train Accuracy: 0.910, Validation Accuracy: 0.887, Loss: 0.109 Epoch 1 Batch 3/538 - Train Accuracy: 0.919, Validation Accuracy: 0.893, Loss: 0.089 Epoch 1 Batch 4/538 - Train Accuracy: 0.907, Validation Accuracy: 0.896, Loss: 0.094 Epoch 1 Batch 5/538 - Train Accuracy: 0.920, Validation Accuracy: 0.890, Loss: 0.102 Epoch 1 Batch 6/538 - Train Accuracy: 0.928, Validation Accuracy: 0.892, Loss: 0.088 Epoch 1 Batch 7/538 - Train Accuracy: 0.903, Validation Accuracy: 0.897, Loss: 0.097 Epoch 1 Batch 8/538 - Train Accuracy: 0.900, Validation Accuracy: 0.896, Loss: 0.100 Epoch 1 Batch 9/538 - Train Accuracy: 0.889, Validation Accuracy: 0.911, Loss: 0.090 Epoch 1 Batch 10/538 - Train Accuracy: 0.903, Validation Accuracy: 0.895, Loss: 0.107 Epoch 1 Batch 11/538 - Train Accuracy: 0.901, Validation Accuracy: 0.878, Loss: 0.091 Epoch 1 Batch 12/538 - Train Accuracy: 0.902, Validation Accuracy: 0.901, Loss: 0.098 
Epoch 1 Batch 13/538 - Train Accuracy: 0.936, Validation Accuracy: 0.912, Loss: 0.083 Epoch 1 Batch 14/538 - Train Accuracy: 0.923, Validation Accuracy: 0.902, Loss: 0.089 Epoch 1 Batch 15/538 - Train Accuracy: 0.926, Validation Accuracy: 0.909, Loss: 0.086 Epoch 1 Batch 16/538 - Train Accuracy: 0.902, Validation Accuracy: 0.902, Loss: 0.091 Epoch 1 Batch 17/538 - Train Accuracy: 0.915, Validation Accuracy: 0.899, Loss: 0.091 Epoch 1 Batch 18/538 - Train Accuracy: 0.920, Validation Accuracy: 0.892, Loss: 0.098 Epoch 1 Batch 19/538 - Train Accuracy: 0.886, Validation Accuracy: 0.896, Loss: 0.100 Epoch 1 Batch 20/538 - Train Accuracy: 0.911, Validation Accuracy: 0.901, Loss: 0.093 Epoch 1 Batch 21/538 - Train Accuracy: 0.931, Validation Accuracy: 0.904, Loss: 0.078 Epoch 1 Batch 22/538 - Train Accuracy: 0.908, Validation Accuracy: 0.902, Loss: 0.081 Epoch 1 Batch 23/538 - Train Accuracy: 0.893, Validation Accuracy: 0.897, Loss: 0.101 Epoch 1 Batch 24/538 - Train Accuracy: 0.922, Validation Accuracy: 0.907, Loss: 0.096 Epoch 1 Batch 25/538 - Train Accuracy: 0.901, Validation Accuracy: 0.911, Loss: 0.095 Epoch 1 Batch 26/538 - Train Accuracy: 0.905, Validation Accuracy: 0.892, Loss: 0.106 Epoch 1 Batch 27/538 - Train Accuracy: 0.894, Validation Accuracy: 0.861, Loss: 0.080 Epoch 1 Batch 28/538 - Train Accuracy: 0.897, Validation Accuracy: 0.866, Loss: 0.086 Epoch 1 Batch 29/538 - Train Accuracy: 0.914, Validation Accuracy: 0.894, Loss: 0.082 Epoch 1 Batch 30/538 - Train Accuracy: 0.893, Validation Accuracy: 0.911, Loss: 0.093 Epoch 1 Batch 31/538 - Train Accuracy: 0.926, Validation Accuracy: 0.911, Loss: 0.075 Epoch 1 Batch 32/538 - Train Accuracy: 0.919, Validation Accuracy: 0.910, Loss: 0.068 Epoch 1 Batch 33/538 - Train Accuracy: 0.919, Validation Accuracy: 0.902, Loss: 0.086 Epoch 1 Batch 34/538 - Train Accuracy: 0.909, Validation Accuracy: 0.909, Loss: 0.100 Epoch 1 Batch 35/538 - Train Accuracy: 0.914, Validation Accuracy: 0.907, Loss: 0.077 Epoch 1 Batch 36/538 
- Train Accuracy: 0.922, Validation Accuracy: 0.907, Loss: 0.076 Epoch 1 Batch 37/538 - Train Accuracy: 0.920, Validation Accuracy: 0.897, Loss: 0.093 Epoch 1 Batch 38/538 - Train Accuracy: 0.895, Validation Accuracy: 0.893, Loss: 0.082 Epoch 1 Batch 39/538 - Train Accuracy: 0.919, Validation Accuracy: 0.904, Loss: 0.083 Epoch 1 Batch 40/538 - Train Accuracy: 0.906, Validation Accuracy: 0.909, Loss: 0.070 Epoch 1 Batch 41/538 - Train Accuracy: 0.909, Validation Accuracy: 0.910, Loss: 0.083 Epoch 1 Batch 42/538 - Train Accuracy: 0.885, Validation Accuracy: 0.905, Loss: 0.083 Epoch 1 Batch 43/538 - Train Accuracy: 0.894, Validation Accuracy: 0.913, Loss: 0.105 Epoch 1 Batch 44/538 - Train Accuracy: 0.907, Validation Accuracy: 0.911, Loss: 0.091 Epoch 1 Batch 45/538 - Train Accuracy: 0.911, Validation Accuracy: 0.902, Loss: 0.076 Epoch 1 Batch 46/538 - Train Accuracy: 0.930, Validation Accuracy: 0.906, Loss: 0.079 Epoch 1 Batch 47/538 - Train Accuracy: 0.905, Validation Accuracy: 0.910, Loss: 0.091 Epoch 1 Batch 48/538 - Train Accuracy: 0.903, Validation Accuracy: 0.900, Loss: 0.089 Epoch 1 Batch 49/538 - Train Accuracy: 0.921, Validation Accuracy: 0.899, Loss: 0.082 Epoch 1 Batch 50/538 - Train Accuracy: 0.919, Validation Accuracy: 0.897, Loss: 0.075 Epoch 1 Batch 51/538 - Train Accuracy: 0.897, Validation Accuracy: 0.885, Loss: 0.103 Epoch 1 Batch 52/538 - Train Accuracy: 0.918, Validation Accuracy: 0.893, Loss: 0.091 Epoch 1 Batch 53/538 - Train Accuracy: 0.890, Validation Accuracy: 0.903, Loss: 0.081 Epoch 1 Batch 54/538 - Train Accuracy: 0.929, Validation Accuracy: 0.908, Loss: 0.072 Epoch 1 Batch 55/538 - Train Accuracy: 0.923, Validation Accuracy: 0.904, Loss: 0.076 Epoch 1 Batch 56/538 - Train Accuracy: 0.911, Validation Accuracy: 0.881, Loss: 0.076 Epoch 1 Batch 57/538 - Train Accuracy: 0.876, Validation Accuracy: 0.883, Loss: 0.090 Epoch 1 Batch 58/538 - Train Accuracy: 0.888, Validation Accuracy: 0.892, Loss: 0.084 Epoch 1 Batch 59/538 - Train Accuracy: 
0.919, Validation Accuracy: 0.914, Loss: 0.090 Epoch 1 Batch 60/538 - Train Accuracy: 0.929, Validation Accuracy: 0.924, Loss: 0.079 Epoch 1 Batch 61/538 - Train Accuracy: 0.917, Validation Accuracy: 0.918, Loss: 0.075 Epoch 1 Batch 62/538 - Train Accuracy: 0.908, Validation Accuracy: 0.906, Loss: 0.077 Epoch 1 Batch 63/538 - Train Accuracy: 0.929, Validation Accuracy: 0.900, Loss: 0.077 Epoch 1 Batch 64/538 - Train Accuracy: 0.914, Validation Accuracy: 0.895, Loss: 0.084 Epoch 1 Batch 65/538 - Train Accuracy: 0.907, Validation Accuracy: 0.879, Loss: 0.085 Epoch 1 Batch 66/538 - Train Accuracy: 0.915, Validation Accuracy: 0.874, Loss: 0.073 Epoch 1 Batch 67/538 - Train Accuracy: 0.923, Validation Accuracy: 0.866, Loss: 0.077 Epoch 1 Batch 68/538 - Train Accuracy: 0.899, Validation Accuracy: 0.880, Loss: 0.066 Epoch 1 Batch 69/538 - Train Accuracy: 0.922, Validation Accuracy: 0.891, Loss: 0.082 Epoch 1 Batch 70/538 - Train Accuracy: 0.911, Validation Accuracy: 0.896, Loss: 0.072 Epoch 1 Batch 71/538 - Train Accuracy: 0.900, Validation Accuracy: 0.892, Loss: 0.085 Epoch 1 Batch 72/538 - Train Accuracy: 0.920, Validation Accuracy: 0.893, Loss: 0.092 Epoch 1 Batch 73/538 - Train Accuracy: 0.902, Validation Accuracy: 0.900, Loss: 0.081 Epoch 1 Batch 74/538 - Train Accuracy: 0.922, Validation Accuracy: 0.896, Loss: 0.077 Epoch 1 Batch 75/538 - Train Accuracy: 0.910, Validation Accuracy: 0.889, Loss: 0.078 Epoch 1 Batch 76/538 - Train Accuracy: 0.916, Validation Accuracy: 0.903, Loss: 0.080 Epoch 1 Batch 77/538 - Train Accuracy: 0.916, Validation Accuracy: 0.898, Loss: 0.065 Epoch 1 Batch 78/538 - Train Accuracy: 0.906, Validation Accuracy: 0.902, Loss: 0.077 Epoch 1 Batch 79/538 - Train Accuracy: 0.933, Validation Accuracy: 0.904, Loss: 0.061 Epoch 1 Batch 80/538 - Train Accuracy: 0.934, Validation Accuracy: 0.904, Loss: 0.076 Epoch 1 Batch 81/538 - Train Accuracy: 0.922, Validation Accuracy: 0.902, Loss: 0.080 Epoch 1 Batch 82/538 - Train Accuracy: 0.895, Validation 
Accuracy: 0.912, Loss: 0.078 Epoch 1 Batch 83/538 - Train Accuracy: 0.908, Validation Accuracy: 0.914, Loss: 0.075 Epoch 1 Batch 84/538 - Train Accuracy: 0.913, Validation Accuracy: 0.915, Loss: 0.076 Epoch 1 Batch 85/538 - Train Accuracy: 0.932, Validation Accuracy: 0.911, Loss: 0.066 Epoch 1 Batch 86/538 - Train Accuracy: 0.939, Validation Accuracy: 0.902, Loss: 0.068 Epoch 1 Batch 87/538 - Train Accuracy: 0.908, Validation Accuracy: 0.914, Loss: 0.079 Epoch 1 Batch 88/538 - Train Accuracy: 0.925, Validation Accuracy: 0.922, Loss: 0.080 Epoch 1 Batch 89/538 - Train Accuracy: 0.935, Validation Accuracy: 0.915, Loss: 0.067 Epoch 1 Batch 90/538 - Train Accuracy: 0.925, Validation Accuracy: 0.918, Loss: 0.089 Epoch 1 Batch 91/538 - Train Accuracy: 0.914, Validation Accuracy: 0.909, Loss: 0.067 Epoch 1 Batch 92/538 - Train Accuracy: 0.908, Validation Accuracy: 0.903, Loss: 0.078 Epoch 1 Batch 93/538 - Train Accuracy: 0.913, Validation Accuracy: 0.914, Loss: 0.071 Epoch 1 Batch 94/538 - Train Accuracy: 0.928, Validation Accuracy: 0.925, Loss: 0.067 Epoch 1 Batch 95/538 - Train Accuracy: 0.930, Validation Accuracy: 0.923, Loss: 0.068 Epoch 1 Batch 96/538 - Train Accuracy: 0.947, Validation Accuracy: 0.918, Loss: 0.060 Epoch 1 Batch 97/538 - Train Accuracy: 0.922, Validation Accuracy: 0.919, Loss: 0.064 Epoch 1 Batch 98/538 - Train Accuracy: 0.935, Validation Accuracy: 0.915, Loss: 0.072 Epoch 1 Batch 99/538 - Train Accuracy: 0.938, Validation Accuracy: 0.913, Loss: 0.070 Epoch 1 Batch 100/538 - Train Accuracy: 0.923, Validation Accuracy: 0.908, Loss: 0.061 Epoch 1 Batch 101/538 - Train Accuracy: 0.914, Validation Accuracy: 0.903, Loss: 0.086 Epoch 1 Batch 102/538 - Train Accuracy: 0.905, Validation Accuracy: 0.904, Loss: 0.079 Epoch 1 Batch 103/538 - Train Accuracy: 0.929, Validation Accuracy: 0.905, Loss: 0.071 Epoch 1 Batch 104/538 - Train Accuracy: 0.911, Validation Accuracy: 0.917, Loss: 0.074 Epoch 1 Batch 105/538 - Train Accuracy: 0.936, Validation Accuracy: 
0.925, Loss: 0.058 Epoch 1 Batch 106/538 - Train Accuracy: 0.910, Validation Accuracy: 0.925, Loss: 0.057 Epoch 1 Batch 107/538 - Train Accuracy: 0.896, Validation Accuracy: 0.923, Loss: 0.075 Epoch 1 Batch 108/538 - Train Accuracy: 0.922, Validation Accuracy: 0.919, Loss: 0.074 Epoch 1 Batch 109/538 - Train Accuracy: 0.940, Validation Accuracy: 0.913, Loss: 0.064 Epoch 1 Batch 110/538 - Train Accuracy: 0.927, Validation Accuracy: 0.917, Loss: 0.073 Epoch 1 Batch 111/538 - Train Accuracy: 0.934, Validation Accuracy: 0.918, Loss: 0.068 Epoch 1 Batch 112/538 - Train Accuracy: 0.916, Validation Accuracy: 0.922, Loss: 0.075 Epoch 1 Batch 113/538 - Train Accuracy: 0.904, Validation Accuracy: 0.915, Loss: 0.081 Epoch 1 Batch 114/538 - Train Accuracy: 0.928, Validation Accuracy: 0.906, Loss: 0.073 Epoch 1 Batch 115/538 - Train Accuracy: 0.930, Validation Accuracy: 0.917, Loss: 0.072 Epoch 1 Batch 116/538 - Train Accuracy: 0.911, Validation Accuracy: 0.923, Loss: 0.083 Epoch 1 Batch 117/538 - Train Accuracy: 0.928, Validation Accuracy: 0.925, Loss: 0.075 Epoch 1 Batch 118/538 - Train Accuracy: 0.922, Validation Accuracy: 0.929, Loss: 0.063 Epoch 1 Batch 119/538 - Train Accuracy: 0.935, Validation Accuracy: 0.919, Loss: 0.056 Epoch 1 Batch 120/538 - Train Accuracy: 0.938, Validation Accuracy: 0.909, Loss: 0.059 Epoch 1 Batch 121/538 - Train Accuracy: 0.922, Validation Accuracy: 0.916, Loss: 0.065 Epoch 1 Batch 122/538 - Train Accuracy: 0.912, Validation Accuracy: 0.904, Loss: 0.057 Epoch 1 Batch 123/538 - Train Accuracy: 0.930, Validation Accuracy: 0.910, Loss: 0.067 Epoch 1 Batch 124/538 - Train Accuracy: 0.931, Validation Accuracy: 0.914, Loss: 0.057 Epoch 1 Batch 125/538 - Train Accuracy: 0.912, Validation Accuracy: 0.914, Loss: 0.076 Epoch 1 Batch 126/538 - Train Accuracy: 0.907, Validation Accuracy: 0.915, Loss: 0.077 Epoch 1 Batch 127/538 - Train Accuracy: 0.913, Validation Accuracy: 0.917, Loss: 0.085 Epoch 1 Batch 128/538 - Train Accuracy: 0.902, Validation 
Accuracy: 0.922, Loss: 0.072 Epoch 1 Batch 129/538 - Train Accuracy: 0.922, Validation Accuracy: 0.930, Loss: 0.061 Epoch 1 Batch 130/538 - Train Accuracy: 0.924, Validation Accuracy: 0.929, Loss: 0.065 Epoch 1 Batch 131/538 - Train Accuracy: 0.940, Validation Accuracy: 0.922, Loss: 0.066 Epoch 1 Batch 132/538 - Train Accuracy: 0.909, Validation Accuracy: 0.921, Loss: 0.069 Epoch 1 Batch 133/538 - Train Accuracy: 0.928, Validation Accuracy: 0.920, Loss: 0.067 Epoch 1 Batch 134/538 - Train Accuracy: 0.901, Validation Accuracy: 0.914, Loss: 0.079 Epoch 1 Batch 135/538 - Train Accuracy: 0.941, Validation Accuracy: 0.900, Loss: 0.077 Epoch 1 Batch 136/538 - Train Accuracy: 0.915, Validation Accuracy: 0.902, Loss: 0.064 Epoch 1 Batch 137/538 - Train Accuracy: 0.913, Validation Accuracy: 0.908, Loss: 0.079 Epoch 1 Batch 138/538 - Train Accuracy: 0.922, Validation Accuracy: 0.912, Loss: 0.070 Epoch 1 Batch 139/538 - Train Accuracy: 0.911, Validation Accuracy: 0.914, Loss: 0.075 Epoch 1 Batch 140/538 - Train Accuracy: 0.924, Validation Accuracy: 0.918, Loss: 0.080 Epoch 1 Batch 141/538 - Train Accuracy: 0.927, Validation Accuracy: 0.922, Loss: 0.074 Epoch 1 Batch 142/538 - Train Accuracy: 0.923, Validation Accuracy: 0.909, Loss: 0.063 Epoch 1 Batch 143/538 - Train Accuracy: 0.936, Validation Accuracy: 0.910, Loss: 0.067 Epoch 1 Batch 144/538 - Train Accuracy: 0.925, Validation Accuracy: 0.916, Loss: 0.078 Epoch 1 Batch 145/538 - Train Accuracy: 0.907, Validation Accuracy: 0.907, Loss: 0.077 Epoch 1 Batch 146/538 - Train Accuracy: 0.936, Validation Accuracy: 0.897, Loss: 0.067 Epoch 1 Batch 147/538 - Train Accuracy: 0.898, Validation Accuracy: 0.899, Loss: 0.073 Epoch 1 Batch 148/538 - Train Accuracy: 0.915, Validation Accuracy: 0.896, Loss: 0.082 Epoch 1 Batch 149/538 - Train Accuracy: 0.940, Validation Accuracy: 0.909, Loss: 0.066 Epoch 1 Batch 150/538 - Train Accuracy: 0.934, Validation Accuracy: 0.920, Loss: 0.061 Epoch 1 Batch 151/538 - Train Accuracy: 0.917, 
Validation Accuracy: 0.910, Loss: 0.065 Epoch 1 Batch 152/538 - Train Accuracy: 0.927, Validation Accuracy: 0.910, Loss: 0.070 Epoch 1 Batch 153/538 - Train Accuracy: 0.920, Validation Accuracy: 0.911, Loss: 0.068 Epoch 1 Batch 154/538 - Train Accuracy: 0.925, Validation Accuracy: 0.919, Loss: 0.061 Epoch 1 Batch 155/538 - Train Accuracy: 0.920, Validation Accuracy: 0.917, Loss: 0.065 Epoch 1 Batch 156/538 - Train Accuracy: 0.940, Validation Accuracy: 0.912, Loss: 0.056 Epoch 1 Batch 157/538 - Train Accuracy: 0.928, Validation Accuracy: 0.922, Loss: 0.063 Epoch 1 Batch 158/538 - Train Accuracy: 0.938, Validation Accuracy: 0.922, Loss: 0.068 Epoch 1 Batch 159/538 - Train Accuracy: 0.930, Validation Accuracy: 0.922, Loss: 0.073 Epoch 1 Batch 160/538 - Train Accuracy: 0.915, Validation Accuracy: 0.931, Loss: 0.063 Epoch 1 Batch 161/538 - Train Accuracy: 0.927, Validation Accuracy: 0.928, Loss: 0.061 Epoch 1 Batch 162/538 - Train Accuracy: 0.907, Validation Accuracy: 0.919, Loss: 0.064 Epoch 1 Batch 163/538 - Train Accuracy: 0.913, Validation Accuracy: 0.915, Loss: 0.072 Epoch 1 Batch 164/538 - Train Accuracy: 0.915, Validation Accuracy: 0.914, Loss: 0.070 Epoch 1 Batch 165/538 - Train Accuracy: 0.907, Validation Accuracy: 0.914, Loss: 0.062 Epoch 1 Batch 166/538 - Train Accuracy: 0.947, Validation Accuracy: 0.912, Loss: 0.059 Epoch 1 Batch 167/538 - Train Accuracy: 0.928, Validation Accuracy: 0.893, Loss: 0.073 Epoch 1 Batch 168/538 - Train Accuracy: 0.897, Validation Accuracy: 0.896, Loss: 0.077 Epoch 1 Batch 169/538 - Train Accuracy: 0.938, Validation Accuracy: 0.890, Loss: 0.059 Epoch 1 Batch 170/538 - Train Accuracy: 0.915, Validation Accuracy: 0.899, Loss: 0.072 Epoch 1 Batch 171/538 - Train Accuracy: 0.929, Validation Accuracy: 0.909, Loss: 0.061 Epoch 1 Batch 172/538 - Train Accuracy: 0.922, Validation Accuracy: 0.900, Loss: 0.064 Epoch 1 Batch 173/538 - Train Accuracy: 0.935, Validation Accuracy: 0.895, Loss: 0.051 Epoch 1 Batch 174/538 - Train Accuracy: 
0.916, Validation Accuracy: 0.908, Loss: 0.067 Epoch 1 Batch 175/538 - Train Accuracy: 0.936, Validation Accuracy: 0.918, Loss: 0.057 Epoch 1 Batch 176/538 - Train Accuracy: 0.927, Validation Accuracy: 0.918, Loss: 0.073 Epoch 1 Batch 177/538 - Train Accuracy: 0.939, Validation Accuracy: 0.920, Loss: 0.062 Epoch 1 Batch 178/538 - Train Accuracy: 0.907, Validation Accuracy: 0.921, Loss: 0.070 Epoch 1 Batch 179/538 - Train Accuracy: 0.925, Validation Accuracy: 0.921, Loss: 0.057 Epoch 1 Batch 180/538 - Train Accuracy: 0.931, Validation Accuracy: 0.921, Loss: 0.064 Epoch 1 Batch 181/538 - Train Accuracy: 0.916, Validation Accuracy: 0.915, Loss: 0.067 Epoch 1 Batch 182/538 - Train Accuracy: 0.948, Validation Accuracy: 0.921, Loss: 0.056 Epoch 1 Batch 183/538 - Train Accuracy: 0.948, Validation Accuracy: 0.928, Loss: 0.052 Epoch 1 Batch 184/538 - Train Accuracy: 0.938, Validation Accuracy: 0.930, Loss: 0.058 Epoch 1 Batch 185/538 - Train Accuracy: 0.951, Validation Accuracy: 0.928, Loss: 0.050 Epoch 1 Batch 186/538 - Train Accuracy: 0.934, Validation Accuracy: 0.923, Loss: 0.055 Epoch 1 Batch 187/538 - Train Accuracy: 0.945, Validation Accuracy: 0.918, Loss: 0.062 Epoch 1 Batch 188/538 - Train Accuracy: 0.931, Validation Accuracy: 0.916, Loss: 0.055 Epoch 1 Batch 189/538 - Train Accuracy: 0.952, Validation Accuracy: 0.921, Loss: 0.063 Epoch 1 Batch 190/538 - Train Accuracy: 0.921, Validation Accuracy: 0.921, Loss: 0.081 Epoch 1 Batch 191/538 - Train Accuracy: 0.945, Validation Accuracy: 0.928, Loss: 0.054 Epoch 1 Batch 192/538 - Train Accuracy: 0.940, Validation Accuracy: 0.928, Loss: 0.055 Epoch 1 Batch 193/538 - Train Accuracy: 0.923, Validation Accuracy: 0.925, Loss: 0.062 Epoch 1 Batch 194/538 - Train Accuracy: 0.900, Validation Accuracy: 0.910, Loss: 0.070 Epoch 1 Batch 195/538 - Train Accuracy: 0.947, Validation Accuracy: 0.901, Loss: 0.066 Epoch 1 Batch 196/538 - Train Accuracy: 0.918, Validation Accuracy: 0.914, Loss: 0.057 Epoch 1 Batch 197/538 - Train 
Accuracy: 0.925, Validation Accuracy: 0.914, Loss: 0.059 Epoch 1 Batch 198/538 - Train Accuracy: 0.938, Validation Accuracy: 0.922, Loss: 0.061 Epoch 1 Batch 199/538 - Train Accuracy: 0.925, Validation Accuracy: 0.918, Loss: 0.063 Epoch 1 Batch 200/538 - Train Accuracy: 0.957, Validation Accuracy: 0.931, Loss: 0.048 Epoch 1 Batch 201/538 - Train Accuracy: 0.927, Validation Accuracy: 0.924, Loss: 0.066 Epoch 1 Batch 202/538 - Train Accuracy: 0.928, Validation Accuracy: 0.917, Loss: 0.054 Epoch 1 Batch 203/538 - Train Accuracy: 0.926, Validation Accuracy: 0.922, Loss: 0.061 Epoch 1 Batch 204/538 - Train Accuracy: 0.926, Validation Accuracy: 0.928, Loss: 0.074 Epoch 1 Batch 205/538 - Train Accuracy: 0.933, Validation Accuracy: 0.926, Loss: 0.060 Epoch 1 Batch 206/538 - Train Accuracy: 0.931, Validation Accuracy: 0.925, Loss: 0.062 Epoch 1 Batch 207/538 - Train Accuracy: 0.938, Validation Accuracy: 0.918, Loss: 0.059 Epoch 1 Batch 208/538 - Train Accuracy: 0.936, Validation Accuracy: 0.911, Loss: 0.073 Epoch 1 Batch 209/538 - Train Accuracy: 0.950, Validation Accuracy: 0.908, Loss: 0.055 Epoch 1 Batch 210/538 - Train Accuracy: 0.928, Validation Accuracy: 0.914, Loss: 0.065 Epoch 1 Batch 211/538 - Train Accuracy: 0.929, Validation Accuracy: 0.918, Loss: 0.058 Epoch 1 Batch 212/538 - Train Accuracy: 0.920, Validation Accuracy: 0.920, Loss: 0.056 Epoch 1 Batch 213/538 - Train Accuracy: 0.935, Validation Accuracy: 0.924, Loss: 0.056 Epoch 1 Batch 214/538 - Train Accuracy: 0.937, Validation Accuracy: 0.920, Loss: 0.054 Epoch 1 Batch 215/538 - Train Accuracy: 0.930, Validation Accuracy: 0.917, Loss: 0.058 Epoch 1 Batch 216/538 - Train Accuracy: 0.937, Validation Accuracy: 0.917, Loss: 0.060 Epoch 1 Batch 217/538 - Train Accuracy: 0.945, Validation Accuracy: 0.925, Loss: 0.062 Epoch 1 Batch 218/538 - Train Accuracy: 0.932, Validation Accuracy: 0.924, Loss: 0.052 Epoch 1 Batch 219/538 - Train Accuracy: 0.899, Validation Accuracy: 0.924, Loss: 0.074 Epoch 1 Batch 220/538 - 
Train Accuracy: 0.907, Validation Accuracy: 0.921, Loss: 0.062 Epoch 1 Batch 221/538 - Train Accuracy: 0.930, Validation Accuracy: 0.917, Loss: 0.052 Epoch 1 Batch 222/538 - Train Accuracy: 0.910, Validation Accuracy: 0.900, Loss: 0.058 Epoch 1 Batch 223/538 - Train Accuracy: 0.919, Validation Accuracy: 0.884, Loss: 0.062 Epoch 1 Batch 224/538 - Train Accuracy: 0.925, Validation Accuracy: 0.883, Loss: 0.065 Epoch 1 Batch 225/538 - Train Accuracy: 0.944, Validation Accuracy: 0.895, Loss: 0.061 Epoch 1 Batch 226/538 - Train Accuracy: 0.917, Validation Accuracy: 0.921, Loss: 0.058 Epoch 1 Batch 227/538 - Train Accuracy: 0.934, Validation Accuracy: 0.920, Loss: 0.059 Epoch 1 Batch 228/538 - Train Accuracy: 0.930, Validation Accuracy: 0.918, Loss: 0.055 Epoch 1 Batch 229/538 - Train Accuracy: 0.936, Validation Accuracy: 0.919, Loss: 0.063 Epoch 1 Batch 230/538 - Train Accuracy: 0.927, Validation Accuracy: 0.895, Loss: 0.056 Epoch 1 Batch 231/538 - Train Accuracy: 0.923, Validation Accuracy: 0.898, Loss: 0.061 Epoch 1 Batch 232/538 - Train Accuracy: 0.915, Validation Accuracy: 0.900, Loss: 0.066 Epoch 1 Batch 233/538 - Train Accuracy: 0.941, Validation Accuracy: 0.918, Loss: 0.063 Epoch 1 Batch 234/538 - Train Accuracy: 0.921, Validation Accuracy: 0.909, Loss: 0.054 Epoch 1 Batch 235/538 - Train Accuracy: 0.935, Validation Accuracy: 0.907, Loss: 0.050 Epoch 1 Batch 236/538 - Train Accuracy: 0.926, Validation Accuracy: 0.899, Loss: 0.061 Epoch 1 Batch 237/538 - Train Accuracy: 0.922, Validation Accuracy: 0.894, Loss: 0.045 Epoch 1 Batch 238/538 - Train Accuracy: 0.955, Validation Accuracy: 0.890, Loss: 0.052 Epoch 1 Batch 239/538 - Train Accuracy: 0.912, Validation Accuracy: 0.911, Loss: 0.063 Epoch 1 Batch 240/538 - Train Accuracy: 0.930, Validation Accuracy: 0.912, Loss: 0.054 Epoch 1 Batch 241/538 - Train Accuracy: 0.920, Validation Accuracy: 0.911, Loss: 0.060 Epoch 1 Batch 242/538 - Train Accuracy: 0.946, Validation Accuracy: 0.908, Loss: 0.058 Epoch 1 Batch 243/538 
- Train Accuracy: 0.939, Validation Accuracy: 0.910, Loss: 0.050 Epoch 1 Batch 244/538 - Train Accuracy: 0.908, Validation Accuracy: 0.910, Loss: 0.057 Epoch 1 Batch 245/538 - Train Accuracy: 0.922, Validation Accuracy: 0.893, Loss: 0.065 Epoch 1 Batch 246/538 - Train Accuracy: 0.937, Validation Accuracy: 0.901, Loss: 0.050 Epoch 1 Batch 247/538 - Train Accuracy: 0.910, Validation Accuracy: 0.904, Loss: 0.055 Epoch 1 Batch 248/538 - Train Accuracy: 0.933, Validation Accuracy: 0.908, Loss: 0.061 Epoch 1 Batch 249/538 - Train Accuracy: 0.924, Validation Accuracy: 0.903, Loss: 0.041 Epoch 1 Batch 250/538 - Train Accuracy: 0.940, Validation Accuracy: 0.902, Loss: 0.059 Epoch 1 Batch 251/538 - Train Accuracy: 0.938, Validation Accuracy: 0.909, Loss: 0.051 Epoch 1 Batch 252/538 - Train Accuracy: 0.931, Validation Accuracy: 0.920, Loss: 0.056 Epoch 1 Batch 253/538 - Train Accuracy: 0.917, Validation Accuracy: 0.912, Loss: 0.058 Epoch 1 Batch 254/538 - Train Accuracy: 0.918, Validation Accuracy: 0.916, Loss: 0.068 Epoch 1 Batch 255/538 - Train Accuracy: 0.935, Validation Accuracy: 0.918, Loss: 0.047 Epoch 1 Batch 256/538 - Train Accuracy: 0.933, Validation Accuracy: 0.910, Loss: 0.057 Epoch 1 Batch 257/538 - Train Accuracy: 0.936, Validation Accuracy: 0.901, Loss: 0.056 Epoch 1 Batch 258/538 - Train Accuracy: 0.936, Validation Accuracy: 0.903, Loss: 0.058 Epoch 1 Batch 259/538 - Train Accuracy: 0.932, Validation Accuracy: 0.905, Loss: 0.051 Epoch 1 Batch 260/538 - Train Accuracy: 0.882, Validation Accuracy: 0.918, Loss: 0.061 Epoch 1 Batch 261/538 - Train Accuracy: 0.927, Validation Accuracy: 0.932, Loss: 0.063 Epoch 1 Batch 262/538 - Train Accuracy: 0.941, Validation Accuracy: 0.931, Loss: 0.048 Epoch 1 Batch 263/538 - Train Accuracy: 0.915, Validation Accuracy: 0.923, Loss: 0.057 Epoch 1 Batch 264/538 - Train Accuracy: 0.918, Validation Accuracy: 0.920, Loss: 0.066 Epoch 1 Batch 265/538 - Train Accuracy: 0.919, Validation Accuracy: 0.923, Loss: 0.069 Epoch 1 Batch 
266/538 - Train Accuracy: 0.914, Validation Accuracy: 0.916, Loss: 0.054 Epoch 1 Batch 267/538 - Train Accuracy: 0.936, Validation Accuracy: 0.902, Loss: 0.055 Epoch 1 Batch 268/538 - Train Accuracy: 0.942, Validation Accuracy: 0.890, Loss: 0.047 Epoch 1 Batch 269/538 - Train Accuracy: 0.919, Validation Accuracy: 0.904, Loss: 0.064 Epoch 1 Batch 270/538 - Train Accuracy: 0.930, Validation Accuracy: 0.906, Loss: 0.054 Epoch 1 Batch 271/538 - Train Accuracy: 0.937, Validation Accuracy: 0.920, Loss: 0.046 Epoch 1 Batch 272/538 - Train Accuracy: 0.916, Validation Accuracy: 0.920, Loss: 0.061 Epoch 1 Batch 273/538 - Train Accuracy: 0.927, Validation Accuracy: 0.923, Loss: 0.062 Epoch 1 Batch 274/538 - Train Accuracy: 0.891, Validation Accuracy: 0.920, Loss: 0.066 Epoch 1 Batch 275/538 - Train Accuracy: 0.921, Validation Accuracy: 0.915, Loss: 0.064 Epoch 1 Batch 276/538 - Train Accuracy: 0.915, Validation Accuracy: 0.926, Loss: 0.063 Epoch 1 Batch 277/538 - Train Accuracy: 0.941, Validation Accuracy: 0.927, Loss: 0.048 Epoch 1 Batch 278/538 - Train Accuracy: 0.929, Validation Accuracy: 0.933, Loss: 0.049 Epoch 1 Batch 279/538 - Train Accuracy: 0.936, Validation Accuracy: 0.930, Loss: 0.055 Epoch 1 Batch 280/538 - Train Accuracy: 0.941, Validation Accuracy: 0.930, Loss: 0.046 Epoch 1 Batch 281/538 - Train Accuracy: 0.927, Validation Accuracy: 0.918, Loss: 0.060 Epoch 1 Batch 282/538 - Train Accuracy: 0.938, Validation Accuracy: 0.911, Loss: 0.055 Epoch 1 Batch 283/538 - Train Accuracy: 0.946, Validation Accuracy: 0.916, Loss: 0.053 Epoch 1 Batch 284/538 - Train Accuracy: 0.919, Validation Accuracy: 0.916, Loss: 0.063 Epoch 1 Batch 285/538 - Train Accuracy: 0.918, Validation Accuracy: 0.922, Loss: 0.048 Epoch 1 Batch 286/538 - Train Accuracy: 0.918, Validation Accuracy: 0.918, Loss: 0.065 Epoch 1 Batch 287/538 - Train Accuracy: 0.944, Validation Accuracy: 0.919, Loss: 0.043 Epoch 1 Batch 288/538 - Train Accuracy: 0.948, Validation Accuracy: 0.917, Loss: 0.051 Epoch 1 
Batch 289/538 - Train Accuracy: 0.934, Validation Accuracy: 0.917, Loss: 0.047 Epoch 1 Batch 290/538 - Train Accuracy: 0.956, Validation Accuracy: 0.919, Loss: 0.050 [... training log for batches 291-534 of epoch 1 omitted; train accuracy stays in the 0.90-0.97 range and validation accuracy in the 0.90-0.96 range ...] Epoch 1 Batch 535/538 - Train Accuracy: 0.948, Validation Accuracy: 0.931, Loss: 0.038 Epoch 1 Batch 536/538 - Train Accuracy: 0.947, Validation Accuracy: 0.930, Loss: 0.044 Model Trained and Saved ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary, to the `` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function sentence = str.lower(sentence) word_ids = [] for word in sentence.split(): try: word_ids.append(vocab_to_int[word]) except: word_ids.append(vocab_to_int['<UNK>']) return word_ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .' 
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) ###Output Input Word Ids: [49, 67, 33, 168, 104, 229, 77] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [329, 103, 18, 138, 107, 314, 111, 206, 1] French Words: ['il', 'a', 'vu', 'un', 'vieux', 'camion', 'jaune', '.', '<EOS>'] ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. 
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function source_id_text = [] target_id_text = [] source_sentences = source_text.split("\n") target_sentences = target_text.split("\n") for sentence in source_sentences: sentence_words = sentence.split() source_id_text.append([source_vocab_to_int[w] for w in sentence_words]) for sentence in target_sentences: sentence_words = sentence.split() sentence_words.append("<EOS>") target_id_text.append([target_vocab_to_int[w] for w in sentence_words]) return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
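The `text_to_ids` transformation implemented above can be sanity-checked with a toy example. The sketch below mirrors the same logic (split into sentences, map words to ids, append `<EOS>` to each target sentence) with hypothetical two-word vocabularies:

```python
def text_to_ids_sketch(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    # Mirrors the implementation above: one list of ids per newline-separated
    # sentence, with <EOS> appended to every target sentence.
    source_id_text = [[source_vocab_to_int[w] for w in line.split()]
                      for line in source_text.split('\n')]
    target_id_text = [[target_vocab_to_int[w] for w in line.split()] + [target_vocab_to_int['<EOS>']]
                      for line in target_text.split('\n')]
    return source_id_text, target_id_text

src_vocab = {'new': 10, 'jersey': 11}          # hypothetical ids
tgt_vocab = {'<EOS>': 1, 'new': 20, 'jersey': 21}
print(text_to_ids_sketch('new jersey', 'new jersey', src_vocab, tgt_vocab))
# -> ([[10, 11]], [[20, 21, 1]])
```

Note that only the target side gets `<EOS>`; the encoder input is consumed whole, so it needs no end-of-sentence marker.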
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.2.1 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function input_ = tf.placeholder(tf.int32, shape = [None, None], name = "input") targets_ = tf.placeholder(tf.int32, shape = [None, None], name = "target") learn_rate = tf.placeholder(tf.float32) keep_prob = tf.placeholder(tf.float32, shape=[], name = "keep_prob") target_sequence_length = tf.placeholder(tf.int32, shape = [None], \ name = "target_sequence_length") max_target_sequence_length = tf.reduce_max(target_sequence_length, \ name = "max_target_len") source_sequence_length = tf.placeholder(tf.int32, shape = [None], \ name = "source_sequence_length") return input_, targets_, learn_rate, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output ERROR:tensorflow:================================== Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>): <tf.Operation 'assert_rank_2/Assert/Assert' type=Assert> If you want to mark it as used call its "mark_used()" method. 
It was originally created here: [full Python traceback omitted; it leads through `tests.test_model_inputs` into `tf.assert_rank` — the warning fires because the assertion op returned by `tf.assert_rank` in the test harness is never used] ================================== Tests Passed ###Markdown Process Decoder InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concat 
the GO ID to the beginning of each batch. ###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function ending = tf.strided_slice(target_data, [0,0], [batch_size, -1], [1,1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return:
tuple (RNN output, RNN state) """ # TODO: Implement Function enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, \ source_vocab_size, \ encoding_embedding_size) def make_cell(rnn_size, keep_prob): lstm_layer = tf.contrib.rnn.LSTMCell(rnn_size, \ initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) lstm_layer_with_dropout = tf.contrib.rnn.DropoutWrapper(lstm_layer, output_keep_prob=keep_prob) return lstm_layer_with_dropout enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size, keep_prob) for _ in range(num_layers)]) enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, \ enc_embed_input, \ sequence_length=source_sequence_length, \ dtype=tf.float32) return enc_output, enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function training_helper = 
tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, \ sequence_length=target_sequence_length, \ time_major=False) training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, \ training_helper, \ encoder_state, \ output_layer) training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder, \ impute_finished=True, \ maximum_iterations=max_summary_length) return training_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS ID :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens') inference_helper =
tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) inference_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) return inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
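Conceptually, what `GreedyEmbeddingHelper` automates in the inference section above is a feedback loop: each step's argmax prediction is fed back in as the next input until `<EOS>` (or a step limit) is reached. Here is a TensorFlow-free sketch of that loop; the tiny `next_token` table standing in for the model is made up purely for illustration:

```python
def greedy_decode(step, start_id, eos_id, max_steps):
    """Greedy inference: feed each prediction back in as the next input."""
    token, output = start_id, []
    for _ in range(max_steps):
        token = step(token)  # stand-in for embedding lookup + decoder + argmax
        if token == eos_id:  # stop as soon as <EOS> is emitted
            break
        output.append(token)
    return output

# Toy "model" that deterministically maps each token to its successor.
next_token = {0: 5, 5: 6, 6: 1}  # 0 = <GO>, 1 = <EOS>
print(greedy_decode(next_token.get, 0, 1, max_steps=10))  # [5, 6]
```

The `max_steps` cap plays the role of `maximum_iterations` in `dynamic_decode`: it guarantees termination even if `<EOS>` is never produced.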
###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) def make_cell(rnn_size, keep_prob): lstm_layer = tf.contrib.rnn.LSTMCell(rnn_size, \ initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) lstm_layer_with_dropout = tf.contrib.rnn.DropoutWrapper(lstm_layer, output_keep_prob=keep_prob) return lstm_layer_with_dropout dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size, keep_prob) for _ in range(num_layers)]) weight_mu = 0.0 weight_sd = 1.0 / np.sqrt(float(target_vocab_size)) output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = weight_mu, stddev=weight_sd)) with tf.variable_scope("decode"): training_decoder_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) with tf.variable_scope("decode", reuse=True): inference_decoder_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], 
target_vocab_to_int['<EOS>'], max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param max_target_sentence_length: Maximum length of target sequences :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training 
BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function enc_output, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 6 # Batch Size batch_size = 128 # RNN Size rnn_size = 512 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 200 decoding_embedding_size = 200 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.5 display_step = 20 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for 
sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
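Before training, it can help to sanity-check the padding scheme above on a toy batch. The sketch below mirrors `pad_sentence_batch` with plain lists; the ids are made up, and `0` stands in for the `<PAD>` id:

```python
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad each sentence with pad_int so the whole batch shares one length."""
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

batch = [[5, 6], [7, 8, 9, 10], [11]]
padded = pad_sentence_batch(batch, 0)
print(padded)  # [[5, 6, 0, 0], [7, 8, 9, 10], [11, 0, 0, 0]]
```

Every sentence comes out at the length of the longest one in its batch, which is what lets `get_batches` stack them into rectangular NumPy arrays.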
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 20/1077 - Train Accuracy: 0.3609, Validation Accuracy: 0.4165, Loss: 2.8497 Epoch 0 Batch 40/1077 - Train Accuracy: 0.4129, Validation Accuracy: 0.4553, Loss: 2.3994 Epoch 0 Batch 60/1077 - Train Accuracy: 0.4706, Validation Accuracy: 0.5057, Loss: 1.9243 Epoch 0 Batch 80/1077 - Train Accuracy: 0.4648, Validation Accuracy: 0.5185, Loss: 1.7060 Epoch 0 Batch 100/1077 - Train Accuracy: 0.4480, Validation Accuracy: 0.4975, Loss: 1.5478 Epoch 0 Batch 120/1077 - Train Accuracy: 0.4703, Validation Accuracy: 0.5213, Loss: 1.3948 Epoch 0 Batch 140/1077 - Train Accuracy: 0.4268, Validation Accuracy: 0.4972, Loss: 1.3430 Epoch 0 Batch 160/1077 - Train Accuracy: 0.5195, Validation Accuracy: 0.5639, Loss: 1.1056 Epoch 0 Batch 180/1077 - Train Accuracy: 0.5340, Validation Accuracy: 0.5568, Loss: 1.0290 Epoch 0 Batch 200/1077 - Train Accuracy: 0.5410, Validation Accuracy: 0.5749, Loss: 0.9832 Epoch 0 Batch 220/1077 - Train Accuracy: 0.5489, Validation Accuracy: 0.5781, Loss: 0.9006 Epoch 0 Batch 240/1077 - Train Accuracy: 0.5773, Validation Accuracy: 0.5930, Loss: 0.8611 Epoch 0 Batch 260/1077 - Train Accuracy: 0.5867, Validation Accuracy: 0.5977, Loss: 0.7558 Epoch 0 Batch 280/1077 - Train Accuracy: 0.5961, Validation Accuracy: 0.5984, Loss: 0.7857 Epoch 0 Batch 300/1077 - Train Accuracy: 0.5600, Validation Accuracy: 0.5952, Loss: 0.7521 Epoch 0 Batch 320/1077 - Train Accuracy: 0.6195, Validation Accuracy: 0.6186, Loss: 0.7151 Epoch 0 Batch 340/1077 - Train Accuracy: 0.5641, Validation Accuracy: 0.6165, Loss: 0.6944 Epoch 0 Batch 360/1077 - Train Accuracy: 0.6332, Validation Accuracy: 0.6197, Loss: 0.6532 Epoch 0 Batch 380/1077 - Train Accuracy: 0.6137, Validation Accuracy: 0.6477, Loss: 0.6152 Epoch 0 Batch 400/1077 - Train 
Accuracy: 0.6320, Validation Accuracy: 0.6325, Loss: 0.6105 Epoch 0 Batch 420/1077 - Train Accuracy: 0.6207, Validation Accuracy: 0.6467, Loss: 0.5762 Epoch 0 Batch 440/1077 - Train Accuracy: 0.6207, Validation Accuracy: 0.6616, Loss: 0.6038 Epoch 0 Batch 460/1077 - Train Accuracy: 0.6285, Validation Accuracy: 0.6591, Loss: 0.5790 Epoch 0 Batch 480/1077 - Train Accuracy: 0.6694, Validation Accuracy: 0.6594, Loss: 0.5507 Epoch 0 Batch 500/1077 - Train Accuracy: 0.6406, Validation Accuracy: 0.6609, Loss: 0.5224 Epoch 0 Batch 520/1077 - Train Accuracy: 0.6763, Validation Accuracy: 0.6747, Loss: 0.4946 Epoch 0 Batch 540/1077 - Train Accuracy: 0.6512, Validation Accuracy: 0.6729, Loss: 0.4668 Epoch 0 Batch 560/1077 - Train Accuracy: 0.6582, Validation Accuracy: 0.6527, Loss: 0.4826 Epoch 0 Batch 580/1077 - Train Accuracy: 0.6946, Validation Accuracy: 0.6758, Loss: 0.4405 Epoch 0 Batch 600/1077 - Train Accuracy: 0.6983, Validation Accuracy: 0.6935, Loss: 0.4409 Epoch 0 Batch 620/1077 - Train Accuracy: 0.6836, Validation Accuracy: 0.6957, Loss: 0.4282 Epoch 0 Batch 640/1077 - Train Accuracy: 0.7176, Validation Accuracy: 0.6889, Loss: 0.4120 Epoch 0 Batch 660/1077 - Train Accuracy: 0.7047, Validation Accuracy: 0.6907, Loss: 0.4041 Epoch 0 Batch 680/1077 - Train Accuracy: 0.7217, Validation Accuracy: 0.6982, Loss: 0.3741 Epoch 0 Batch 700/1077 - Train Accuracy: 0.7395, Validation Accuracy: 0.7362, Loss: 0.3586 Epoch 0 Batch 720/1077 - Train Accuracy: 0.7385, Validation Accuracy: 0.7227, Loss: 0.3955 Epoch 0 Batch 740/1077 - Train Accuracy: 0.7504, Validation Accuracy: 0.7440, Loss: 0.3526 Epoch 0 Batch 760/1077 - Train Accuracy: 0.7340, Validation Accuracy: 0.7305, Loss: 0.3538 Epoch 0 Batch 780/1077 - Train Accuracy: 0.7594, Validation Accuracy: 0.7351, Loss: 0.3335 Epoch 0 Batch 800/1077 - Train Accuracy: 0.7773, Validation Accuracy: 0.7855, Loss: 0.3070 Epoch 0 Batch 820/1077 - Train Accuracy: 0.7930, Validation Accuracy: 0.7816, Loss: 0.2959 Epoch 0 Batch 840/1077 - 
Train Accuracy: 0.8359, Validation Accuracy: 0.8125, Loss: 0.2596 Epoch 0 Batch 860/1077 - Train Accuracy: 0.8292, Validation Accuracy: 0.8143, Loss: 0.2664 Epoch 0 Batch 880/1077 - Train Accuracy: 0.8484, Validation Accuracy: 0.8065, Loss: 0.2445 Epoch 0 Batch 900/1077 - Train Accuracy: 0.8570, Validation Accuracy: 0.8200, Loss: 0.2377 Epoch 0 Batch 920/1077 - Train Accuracy: 0.8457, Validation Accuracy: 0.8409, Loss: 0.2129 Epoch 0 Batch 940/1077 - Train Accuracy: 0.8820, Validation Accuracy: 0.8263, Loss: 0.1908 Epoch 0 Batch 960/1077 - Train Accuracy: 0.8891, Validation Accuracy: 0.8189, Loss: 0.1849 Epoch 0 Batch 980/1077 - Train Accuracy: 0.8863, Validation Accuracy: 0.8643, Loss: 0.1829 Epoch 0 Batch 1000/1077 - Train Accuracy: 0.9103, Validation Accuracy: 0.8704, Loss: 0.1477 Epoch 0 Batch 1020/1077 - Train Accuracy: 0.9176, Validation Accuracy: 0.8739, Loss: 0.1480 Epoch 0 Batch 1040/1077 - Train Accuracy: 0.8721, Validation Accuracy: 0.8672, Loss: 0.1692 Epoch 0 Batch 1060/1077 - Train Accuracy: 0.8910, Validation Accuracy: 0.8629, Loss: 0.1273 Epoch 1 Batch 20/1077 - Train Accuracy: 0.9113, Validation Accuracy: 0.8768, Loss: 0.1020 Epoch 1 Batch 40/1077 - Train Accuracy: 0.9168, Validation Accuracy: 0.8878, Loss: 0.1070 Epoch 1 Batch 60/1077 - Train Accuracy: 0.9018, Validation Accuracy: 0.8441, Loss: 0.1095 Epoch 1 Batch 80/1077 - Train Accuracy: 0.9102, Validation Accuracy: 0.8860, Loss: 0.0989 Epoch 1 Batch 100/1077 - Train Accuracy: 0.9172, Validation Accuracy: 0.8849, Loss: 0.1039 Epoch 1 Batch 120/1077 - Train Accuracy: 0.9348, Validation Accuracy: 0.8903, Loss: 0.1120 Epoch 1 Batch 140/1077 - Train Accuracy: 0.8960, Validation Accuracy: 0.8732, Loss: 0.0950 Epoch 1 Batch 160/1077 - Train Accuracy: 0.9234, Validation Accuracy: 0.8963, Loss: 0.0852 Epoch 1 Batch 180/1077 - Train Accuracy: 0.9129, Validation Accuracy: 0.8828, Loss: 0.0912 Epoch 1 Batch 200/1077 - Train Accuracy: 0.8961, Validation Accuracy: 0.8825, Loss: 0.0959 Epoch 1 Batch 220/1077 
- Train Accuracy: 0.9363, Validation Accuracy: 0.8942, Loss: 0.0766 Epoch 1 Batch 240/1077 - Train Accuracy: 0.9414, Validation Accuracy: 0.8906, Loss: 0.0819 Epoch 1 Batch 260/1077 - Train Accuracy: 0.9278, Validation Accuracy: 0.9094, Loss: 0.0679 Epoch 1 Batch 280/1077 - Train Accuracy: 0.9051, Validation Accuracy: 0.9048, Loss: 0.0804 Epoch 1 Batch 300/1077 - Train Accuracy: 0.9490, Validation Accuracy: 0.9205, Loss: 0.0685 Epoch 1 Batch 320/1077 - Train Accuracy: 0.9457, Validation Accuracy: 0.8825, Loss: 0.0773 Epoch 1 Batch 340/1077 - Train Accuracy: 0.9515, Validation Accuracy: 0.8956, Loss: 0.0675 Epoch 1 Batch 360/1077 - Train Accuracy: 0.9531, Validation Accuracy: 0.9144, Loss: 0.0668 Epoch 1 Batch 380/1077 - Train Accuracy: 0.9211, Validation Accuracy: 0.8903, Loss: 0.0681 Epoch 1 Batch 400/1077 - Train Accuracy: 0.9309, Validation Accuracy: 0.9052, Loss: 0.0777 Epoch 1 Batch 420/1077 - Train Accuracy: 0.9652, Validation Accuracy: 0.9016, Loss: 0.0549 Epoch 1 Batch 440/1077 - Train Accuracy: 0.9285, Validation Accuracy: 0.8839, Loss: 0.0803 Epoch 1 Batch 460/1077 - Train Accuracy: 0.9113, Validation Accuracy: 0.8718, Loss: 0.0686 Epoch 1 Batch 480/1077 - Train Accuracy: 0.9482, Validation Accuracy: 0.9094, Loss: 0.0621 Epoch 1 Batch 500/1077 - Train Accuracy: 0.9227, Validation Accuracy: 0.9158, Loss: 0.0586 Epoch 1 Batch 520/1077 - Train Accuracy: 0.9624, Validation Accuracy: 0.9247, Loss: 0.0634 Epoch 1 Batch 540/1077 - Train Accuracy: 0.9375, Validation Accuracy: 0.9059, Loss: 0.0515 Epoch 1 Batch 560/1077 - Train Accuracy: 0.9316, Validation Accuracy: 0.9066, Loss: 0.0562 Epoch 1 Batch 580/1077 - Train Accuracy: 0.9576, Validation Accuracy: 0.9059, Loss: 0.0468 Epoch 1 Batch 600/1077 - Train Accuracy: 0.9379, Validation Accuracy: 0.9080, Loss: 0.0669 Epoch 1 Batch 620/1077 - Train Accuracy: 0.9477, Validation Accuracy: 0.9183, Loss: 0.0549 Epoch 1 Batch 640/1077 - Train Accuracy: 0.9338, Validation Accuracy: 0.9300, Loss: 0.0559 Epoch 1 Batch 
660/1077 - Train Accuracy: 0.9297, Validation Accuracy: 0.9411, Loss: 0.0599 Epoch 1 Batch 680/1077 - Train Accuracy: 0.9044, Validation Accuracy: 0.9283, Loss: 0.0595 Epoch 1 Batch 700/1077 - Train Accuracy: 0.9641, Validation Accuracy: 0.9251, Loss: 0.0471 ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function words = sentence.lower().split() mapping_func = lambda w: vocab_to_int[w] if w in vocab_to_int else vocab_to_int['<UNK>'] word_ids = list(map(mapping_func, words)) return word_ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [70, 161, 137, 115, 230, 155, 142] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [94, 237, 207, 120, 96, 280, 266, 239, 1] French Words: il a vu un vieux camion jaune . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of each sentence from `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code from collections import Counter import re def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function source_id_text = [] target_id_text = [] for line in source_text.strip().split('\n'): id_text = [source_vocab_to_int[w] for w in re.findall(r"[\w'\-,.!?\":;)(]+", line)] #id_text.append(source_vocab_to_int['<EOS>']) source_id_text.append(id_text) for line in target_text.strip().split('\n'): id_text = [target_vocab_to_int[w] for w in re.findall(r"[\w'\-,!.?\":;)(]+", line)] id_text.append(target_vocab_to_int['<EOS>']) target_id_text.append(id_text) return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
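To make the saved format concrete, here is a minimal pure-Python sketch of the conversion `text_to_ids` performs. The vocabularies below are toy examples (the real ones come from `helper.load_preprocess()`), and simple whitespace splitting stands in for the regex tokenization used above:

```python
# Toy vocabularies -- hypothetical ids, for illustration only.
source_vocab_to_int = {'new': 4, 'jersey': 5, 'is': 6, 'quiet': 7, '.': 8}
target_vocab_to_int = {'<EOS>': 1, 'new': 4, 'jersey': 5, 'est': 6, 'calme': 7, '.': 8}

def to_ids(text, vocab_to_int, append_eos=False):
    """Map each whitespace-separated word to its id, one list per line."""
    id_text = []
    for line in text.strip().split('\n'):
        ids = [vocab_to_int[w] for w in line.split()]
        if append_eos:
            ids.append(vocab_to_int['<EOS>'])  # mark the end of each target sentence
        id_text.append(ids)
    return id_text

source_ids = to_ids('new jersey is quiet .', source_vocab_to_int)
target_ids = to_ids('new jersey est calme .', target_vocab_to_int, append_eos=True)
print(source_ids)  # [[4, 5, 6, 7, 8]]
print(target_ids)  # [[4, 5, 6, 7, 8, 1]] -- note the trailing <EOS> id
```

Only the target side gets the `<EOS>` id, matching the implementation above.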
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.0.1 Default GPU Device: /gpu:0 ###Markdown Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoding_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` Input
Implement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. 
:return: Tuple (input, targets, learning rate, keep probability) """ # TODO: Implement Function Inputs = tf.placeholder(tf.int32, shape=(None, None), name='input') targets = tf.placeholder(tf.int32, shape=(None, None)) learn_rate = tf.placeholder(tf.float32, shape=None) keep_prob = tf.placeholder(tf.float32, shape=None, name='keep_prob') return Inputs, targets, learn_rate, keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoding Input
Implement `process_decoding_input` using TensorFlow to remove the last word id from each batch in `target_data` and concat the GO ID to the beginning of each batch. ###Code def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) ###Output Tests Passed ###Markdown Encoding
Implement `encoding_layer()` to create an Encoder RNN layer using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn). 
###Code def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # TODO: Implement Function lstm = tf.contrib.rnn.LSTMCell(rnn_size) #drop_out = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers) _, state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32) return state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - Training
Create training logits using [`tf.contrib.seq2seq.simple_decoder_fn_train()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_train) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder). Apply the `output_fn` to the [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder) outputs. 
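The training decoder is driven by teacher forcing: at each step it is fed the ground-truth target shifted right by one — the `<GO>`-prepended, last-id-dropped batch that `process_decoding_input` builds — rather than its own previous prediction. A pure-Python sketch of that alignment (toy ids; `1` stands in for `target_vocab_to_int['<GO>']` and `2` for `'<EOS>'`):

```python
GO, EOS = 1, 2  # toy ids standing in for the real <GO>/<EOS> vocabulary entries

def shift_right(target_batch):
    """Prepend <GO> and drop the last id of each sequence (teacher-forcing input)."""
    return [[GO] + seq[:-1] for seq in target_batch]

target_batch = [[4, 5, 6, EOS],
                [7, 8, 9, EOS]]
decoder_input = shift_right(target_batch)
print(decoder_input)  # [[1, 4, 5, 6], [1, 7, 8, 9]]
# At step t the decoder is fed decoder_input[i][t] and trained to predict target_batch[i][t].
```

This is why the training decoder needs `sequence_length` but no stopping logic: the inputs already have the right length.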
###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ # TODO: Implement Function decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) #drop_out = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob) train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder( dec_cell, decoder_fn, inputs=dec_embed_input, sequence_length=sequence_length, scope=decoding_scope) # Apply output function return output_fn(train_pred) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - Inference
Create inference logits using [`tf.contrib.seq2seq.simple_decoder_fn_inference()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_inference) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder). 
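At inference time there is no ground truth to feed in, so the decoder consumes its own previous prediction until it emits `<EOS>` or reaches `maximum_length` — which is what `simple_decoder_fn_inference` automates. A pure-Python sketch of that greedy loop, with a hypothetical `step` function standing in for one decoder step (embedding lookup, RNN cell, `output_fn`, argmax):

```python
GO, EOS = 1, 2  # toy ids standing in for start_of_sequence_id / end_of_sequence_id

def greedy_decode(step, maximum_length):
    """step(prev_id) -> next_id is a stand-in for one decoder step (argmax of the logits)."""
    output, prev = [], GO
    for _ in range(maximum_length):
        prev = step(prev)
        if prev == EOS:  # stop once the end-of-sequence id is produced
            break
        output.append(prev)
    return output

# Hypothetical deterministic "decoder": 1 -> 4 -> 5 -> 2 (<EOS>)
transitions = {GO: 4, 4: 5, 5: EOS}
print(greedy_decode(transitions.get, maximum_length=10))  # [4, 5]
```

Because the loop is self-feeding, the inference decoder takes no `inputs` or `sequence_length`, unlike the training decoder above.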
###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size) inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decoder_fn, scope=decoding_scope) return inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding Layer
Implement `decoding_layer()` to create a Decoder RNN layer.- Create RNN cell for decoding using `rnn_size` and `num_layers`.- Create the output function using [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) to transform its input, logits, to class logits.- Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)` function to get the training logits.- Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between 
training and inference. ###Code def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function lstm = tf.contrib.rnn.LSTMCell(rnn_size) dec_cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers) # Output Layer output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope) with tf.variable_scope("decoding") as decoding_scope: train_pred = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) with tf.variable_scope("decoding", reuse=True) as decoding_scope: # Inference Decoder infer_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], sequence_length, vocab_size, decoding_scope, output_fn, keep_prob) return train_pred, infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural Network
Apply the functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)`.- Process target data using your `process_decoding_input(target_data, target_vocab_to_int, batch_size)` function.- Apply embedding to the target data for the decoder.- Decode the encoded input using your 
`decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)`. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function encode_inputs = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) encoder_state = encoding_layer(encode_inputs, rnn_size, num_layers, keep_prob) dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size) # Decoder Embedding dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) return decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training Hyperparameters
Tune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the 
number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability ###Code # Number of Epochs epochs = 12 # Batch Size batch_size = 512 # RNN Size rnn_size = 512 # Number of Layers num_layers = 3 # Embedding Size encoding_embedding_size = 32 decoding_embedding_size = 32 # Learning Rate learning_rate = 0.00075 # Dropout Keep Probability keep_probability = 0.65 ###Output _____no_output_____ ###Markdown Build the Graph
Build the graph using the neural network you implemented. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown 
Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 0/269 - Train Accuracy: 0.248, Validation Accuracy: 0.316, Loss: 
5.889 Epoch 0 Batch 1/269 - Train Accuracy: 0.243, Validation Accuracy: 0.320, Loss: 5.674 Epoch 0 Batch 2/269 - Train Accuracy: 0.268, Validation Accuracy: 0.311, Loss: 4.834 Epoch 0 Batch 3/269 - Train Accuracy: 0.258, Validation Accuracy: 0.324, Loss: 4.852 Epoch 0 Batch 4/269 - Train Accuracy: 0.268, Validation Accuracy: 0.341, Loss: 4.639 Epoch 0 Batch 5/269 - Train Accuracy: 0.270, Validation Accuracy: 0.343, Loss: 4.291 Epoch 0 Batch 6/269 - Train Accuracy: 0.314, Validation Accuracy: 0.343, Loss: 4.052 Epoch 0 Batch 7/269 - Train Accuracy: 0.311, Validation Accuracy: 0.343, Loss: 3.825 Epoch 0 Batch 8/269 - Train Accuracy: 0.278, Validation Accuracy: 0.343, Loss: 3.737 Epoch 0 Batch 9/269 - Train Accuracy: 0.318, Validation Accuracy: 0.359, Loss: 3.526 Epoch 0 Batch 10/269 - Train Accuracy: 0.308, Validation Accuracy: 0.376, Loss: 3.526 Epoch 0 Batch 11/269 - Train Accuracy: 0.342, Validation Accuracy: 0.375, Loss: 3.320 Epoch 0 Batch 12/269 - Train Accuracy: 0.311, Validation Accuracy: 0.372, Loss: 3.376 Epoch 0 Batch 13/269 - Train Accuracy: 0.356, Validation Accuracy: 0.355, Loss: 3.016 Epoch 0 Batch 14/269 - Train Accuracy: 0.342, Validation Accuracy: 0.376, Loss: 3.146 Epoch 0 Batch 15/269 - Train Accuracy: 0.336, Validation Accuracy: 0.377, Loss: 3.121 Epoch 0 Batch 16/269 - Train Accuracy: 0.347, Validation Accuracy: 0.377, Loss: 3.059 Epoch 0 Batch 17/269 - Train Accuracy: 0.349, Validation Accuracy: 0.387, Loss: 2.999 Epoch 0 Batch 18/269 - Train Accuracy: 0.323, Validation Accuracy: 0.385, Loss: 3.096 Epoch 0 Batch 19/269 - Train Accuracy: 0.387, Validation Accuracy: 0.391, Loss: 2.785 Epoch 0 Batch 20/269 - Train Accuracy: 0.342, Validation Accuracy: 0.403, Loss: 2.956 Epoch 0 Batch 21/269 - Train Accuracy: 0.322, Validation Accuracy: 0.388, Loss: 2.975 Epoch 0 Batch 22/269 - Train Accuracy: 0.388, Validation Accuracy: 0.414, Loss: 2.899 Epoch 0 Batch 23/269 - Train Accuracy: 0.402, Validation Accuracy: 0.420, Loss: 2.768 Epoch 0 Batch 24/269 - 
Train Accuracy: 0.382, Validation Accuracy: 0.438, Loss: 2.951 Epoch 0 Batch 25/269 - Train Accuracy: 0.349, Validation Accuracy: 0.405, Loss: 2.832 Epoch 0 Batch 26/269 - Train Accuracy: 0.446, Validation Accuracy: 0.441, Loss: 2.665 Epoch 0 Batch 27/269 - Train Accuracy: 0.390, Validation Accuracy: 0.422, Loss: 2.672 Epoch 0 Batch 28/269 - Train Accuracy: 0.380, Validation Accuracy: 0.447, Loss: 2.912 Epoch 0 Batch 29/269 - Train Accuracy: 0.356, Validation Accuracy: 0.421, Loss: 2.740 Epoch 0 Batch 30/269 - Train Accuracy: 0.414, Validation Accuracy: 0.440, Loss: 2.656 Epoch 0 Batch 31/269 - Train Accuracy: 0.420, Validation Accuracy: 0.448, Loss: 2.576 Epoch 0 Batch 32/269 - Train Accuracy: 0.407, Validation Accuracy: 0.445, Loss: 2.602 Epoch 0 Batch 33/269 - Train Accuracy: 0.434, Validation Accuracy: 0.453, Loss: 2.509 Epoch 0 Batch 34/269 - Train Accuracy: 0.424, Validation Accuracy: 0.449, Loss: 2.515 Epoch 0 Batch 35/269 - Train Accuracy: 0.440, Validation Accuracy: 0.461, Loss: 2.500 Epoch 0 Batch 36/269 - Train Accuracy: 0.429, Validation Accuracy: 0.456, Loss: 2.480 Epoch 0 Batch 37/269 - Train Accuracy: 0.436, Validation Accuracy: 0.465, Loss: 2.472 Epoch 0 Batch 38/269 - Train Accuracy: 0.436, Validation Accuracy: 0.459, Loss: 2.455 Epoch 0 Batch 39/269 - Train Accuracy: 0.441, Validation Accuracy: 0.464, Loss: 2.425 Epoch 0 Batch 40/269 - Train Accuracy: 0.424, Validation Accuracy: 0.474, Loss: 2.514 Epoch 0 Batch 41/269 - Train Accuracy: 0.444, Validation Accuracy: 0.472, Loss: 2.412 Epoch 0 Batch 42/269 - Train Accuracy: 0.458, Validation Accuracy: 0.462, Loss: 2.287 Epoch 0 Batch 43/269 - Train Accuracy: 0.421, Validation Accuracy: 0.468, Loss: 2.455 Epoch 0 Batch 44/269 - Train Accuracy: 0.459, Validation Accuracy: 0.476, Loss: 2.343 Epoch 0 Batch 45/269 - Train Accuracy: 0.426, Validation Accuracy: 0.470, Loss: 2.433 Epoch 0 Batch 46/269 - Train Accuracy: 0.410, Validation Accuracy: 0.461, Loss: 2.448 Epoch 0 Batch 47/269 - Train Accuracy: 
0.476, Validation Accuracy: 0.468, Loss: 2.188 Epoch 0 Batch 48/269 - Train Accuracy: 0.452, Validation Accuracy: 0.470, Loss: 2.251 Epoch 0 Batch 49/269 - Train Accuracy: 0.417, Validation Accuracy: 0.468, Loss: 2.379 Epoch 0 Batch 50/269 - Train Accuracy: 0.415, Validation Accuracy: 0.468, Loss: 2.384 Epoch 0 Batch 51/269 - Train Accuracy: 0.445, Validation Accuracy: 0.474, Loss: 2.305 Epoch 0 Batch 52/269 - Train Accuracy: 0.452, Validation Accuracy: 0.474, Loss: 2.247 Epoch 0 Batch 53/269 - Train Accuracy: 0.427, Validation Accuracy: 0.473, Loss: 2.338 Epoch 0 Batch 54/269 - Train Accuracy: 0.432, Validation Accuracy: 0.479, Loss: 2.333 Epoch 0 Batch 55/269 - Train Accuracy: 0.455, Validation Accuracy: 0.476, Loss: 2.229 Epoch 0 Batch 56/269 - Train Accuracy: 0.456, Validation Accuracy: 0.474, Loss: 2.210 Epoch 0 Batch 57/269 - Train Accuracy: 0.460, Validation Accuracy: 0.483, Loss: 2.209 Epoch 0 Batch 58/269 - Train Accuracy: 0.454, Validation Accuracy: 0.471, Loss: 2.216 Epoch 0 Batch 59/269 - Train Accuracy: 0.447, Validation Accuracy: 0.474, Loss: 2.198 Epoch 0 Batch 60/269 - Train Accuracy: 0.448, Validation Accuracy: 0.461, Loss: 2.180 Epoch 0 Batch 61/269 - Train Accuracy: 0.487, Validation Accuracy: 0.483, Loss: 2.111 Epoch 0 Batch 62/269 - Train Accuracy: 0.482, Validation Accuracy: 0.481, Loss: 2.092 Epoch 0 Batch 63/269 - Train Accuracy: 0.458, Validation Accuracy: 0.485, Loss: 2.187 Epoch 0 Batch 64/269 - Train Accuracy: 0.454, Validation Accuracy: 0.479, Loss: 2.191 Epoch 0 Batch 65/269 - Train Accuracy: 0.463, Validation Accuracy: 0.482, Loss: 2.155 Epoch 0 Batch 66/269 - Train Accuracy: 0.461, Validation Accuracy: 0.468, Loss: 2.123 Epoch 0 Batch 67/269 - Train Accuracy: 0.450, Validation Accuracy: 0.480, Loss: 2.195 Epoch 0 Batch 68/269 - Train Accuracy: 0.446, Validation Accuracy: 0.480, Loss: 2.198 Epoch 0 Batch 69/269 - Train Accuracy: 0.430, Validation Accuracy: 0.484, Loss: 2.308 Epoch 0 Batch 70/269 - Train Accuracy: 0.473, Validation 
Accuracy: 0.490, Loss: 2.129 Epoch 0 Batch 71/269 - Train Accuracy: 0.442, Validation Accuracy: 0.484, Loss: 2.232 Epoch 0 Batch 72/269 - Train Accuracy: 0.486, Validation Accuracy: 0.489, Loss: 2.066 Epoch 0 Batch 73/269 - Train Accuracy: 0.457, Validation Accuracy: 0.477, Loss: 2.137 Epoch 0 Batch 74/269 - Train Accuracy: 0.409, Validation Accuracy: 0.463, Loss: 2.196 Epoch 0 Batch 75/269 - Train Accuracy: 0.455, Validation Accuracy: 0.483, Loss: 2.233 Epoch 0 Batch 76/269 - Train Accuracy: 0.452, Validation Accuracy: 0.490, Loss: 2.168 Epoch 0 Batch 77/269 - Train Accuracy: 0.460, Validation Accuracy: 0.480, Loss: 2.072 Epoch 0 Batch 78/269 - Train Accuracy: 0.440, Validation Accuracy: 0.462, Loss: 2.162 Epoch 0 Batch 79/269 - Train Accuracy: 0.448, Validation Accuracy: 0.477, Loss: 2.208 Epoch 0 Batch 80/269 - Train Accuracy: 0.467, Validation Accuracy: 0.481, Loss: 2.089 Epoch 0 Batch 81/269 - Train Accuracy: 0.451, Validation Accuracy: 0.473, Loss: 2.157 Epoch 0 Batch 82/269 - Train Accuracy: 0.478, Validation Accuracy: 0.492, Loss: 2.089 Epoch 0 Batch 83/269 - Train Accuracy: 0.458, Validation Accuracy: 0.479, Loss: 2.027 Epoch 0 Batch 84/269 - Train Accuracy: 0.480, Validation Accuracy: 0.500, Loss: 2.098 Epoch 0 Batch 85/269 - Train Accuracy: 0.462, Validation Accuracy: 0.488, Loss: 2.064 Epoch 0 Batch 86/269 - Train Accuracy: 0.465, Validation Accuracy: 0.494, Loss: 2.082 Epoch 0 Batch 87/269 - Train Accuracy: 0.425, Validation Accuracy: 0.493, Loss: 2.203 Epoch 0 Batch 88/269 - Train Accuracy: 0.481, Validation Accuracy: 0.501, Loss: 2.034 Epoch 0 Batch 89/269 - Train Accuracy: 0.483, Validation Accuracy: 0.497, Loss: 2.020 Epoch 0 Batch 90/269 - Train Accuracy: 0.449, Validation Accuracy: 0.502, Loss: 2.153 Epoch 0 Batch 91/269 - Train Accuracy: 0.469, Validation Accuracy: 0.498, Loss: 2.018 Epoch 0 Batch 92/269 - Train Accuracy: 0.474, Validation Accuracy: 0.498, Loss: 2.002 Epoch 0 Batch 93/269 - Train Accuracy: 0.499, Validation Accuracy: 0.505, 
Loss: 1.925 Epoch 0 Batch 94/269 - Train Accuracy: 0.461, Validation Accuracy: 0.490, Loss: 2.025 Epoch 0 Batch 95/269 - Train Accuracy: 0.490, Validation Accuracy: 0.502, Loss: 1.986 Epoch 0 Batch 96/269 - Train Accuracy: 0.484, Validation Accuracy: 0.506, Loss: 1.971 Epoch 0 Batch 97/269 - Train Accuracy: 0.476, Validation Accuracy: 0.501, Loss: 1.957 Epoch 0 Batch 98/269 - Train Accuracy: 0.499, Validation Accuracy: 0.505, Loss: 1.925 Epoch 0 Batch 99/269 - Train Accuracy: 0.429, Validation Accuracy: 0.486, Loss: 2.052 Epoch 0 Batch 100/269 - Train Accuracy: 0.496, Validation Accuracy: 0.501, Loss: 1.912 Epoch 0 Batch 101/269 - Train Accuracy: 0.382, Validation Accuracy: 0.439, Loss: 2.113 Epoch 0 Batch 102/269 - Train Accuracy: 0.441, Validation Accuracy: 0.470, Loss: 2.262 Epoch 0 Batch 103/269 - Train Accuracy: 0.460, Validation Accuracy: 0.481, Loss: 2.035 Epoch 0 Batch 104/269 - Train Accuracy: 0.427, Validation Accuracy: 0.463, Loss: 2.075 Epoch 0 Batch 105/269 - Train Accuracy: 0.448, Validation Accuracy: 0.474, Loss: 2.053 Epoch 0 Batch 106/269 - Train Accuracy: 0.446, Validation Accuracy: 0.476, Loss: 2.017 Epoch 0 Batch 107/269 - Train Accuracy: 0.421, Validation Accuracy: 0.488, Loss: 2.302 Epoch 0 Batch 108/269 - Train Accuracy: 0.429, Validation Accuracy: 0.462, Loss: 1.939 Epoch 0 Batch 109/269 - Train Accuracy: 0.449, Validation Accuracy: 0.485, Loss: 2.093 Epoch 0 Batch 110/269 - Train Accuracy: 0.482, Validation Accuracy: 0.502, Loss: 1.974 Epoch 0 Batch 111/269 - Train Accuracy: 0.447, Validation Accuracy: 0.498, Loss: 2.111 Epoch 0 Batch 112/269 - Train Accuracy: 0.453, Validation Accuracy: 0.483, Loss: 1.940 Epoch 0 Batch 113/269 - Train Accuracy: 0.490, Validation Accuracy: 0.490, Loss: 1.898 Epoch 0 Batch 114/269 - Train Accuracy: 0.472, Validation Accuracy: 0.506, Loss: 1.921 Epoch 0 Batch 115/269 - Train Accuracy: 0.454, Validation Accuracy: 0.504, Loss: 1.977 Epoch 0 Batch 116/269 - Train Accuracy: 0.474, Validation Accuracy: 0.498, 
Loss: 1.936 Epoch 0 Batch 117/269 - Train Accuracy: 0.475, Validation Accuracy: 0.500, Loss: 1.909 Epoch 0 Batch 118/269 - Train Accuracy: 0.504, Validation Accuracy: 0.512, Loss: 1.826 Epoch 0 Batch 119/269 - Train Accuracy: 0.455, Validation Accuracy: 0.499, Loss: 1.969 Epoch 0 Batch 120/269 - Train Accuracy: 0.435, Validation Accuracy: 0.490, Loss: 1.928 Epoch 0 Batch 121/269 - Train Accuracy: 0.465, Validation Accuracy: 0.498, Loss: 1.865 Epoch 0 Batch 122/269 - Train Accuracy: 0.491, Validation Accuracy: 0.507, Loss: 1.824 Epoch 0 Batch 123/269 - Train Accuracy: 0.430, Validation Accuracy: 0.496, Loss: 1.964 Epoch 0 Batch 124/269 - Train Accuracy: 0.465, Validation Accuracy: 0.492, Loss: 1.815 Epoch 0 Batch 125/269 - Train Accuracy: 0.468, Validation Accuracy: 0.493, Loss: 1.783 Epoch 0 Batch 126/269 - Train Accuracy: 0.480, Validation Accuracy: 0.497, Loss: 1.800 Epoch 0 Batch 127/269 - Train Accuracy: 0.442, Validation Accuracy: 0.487, Loss: 1.886 Epoch 0 Batch 128/269 - Train Accuracy: 0.488, Validation Accuracy: 0.501, Loss: 1.813 Epoch 0 Batch 129/269 - Train Accuracy: 0.468, Validation Accuracy: 0.501, Loss: 1.813 Epoch 0 Batch 130/269 - Train Accuracy: 0.424, Validation Accuracy: 0.491, Loss: 1.921 Epoch 0 Batch 131/269 - Train Accuracy: 0.448, Validation Accuracy: 0.492, Loss: 1.871 Epoch 0 Batch 132/269 - Train Accuracy: 0.467, Validation Accuracy: 0.496, Loss: 1.794 Epoch 0 Batch 133/269 - Train Accuracy: 0.463, Validation Accuracy: 0.492, Loss: 1.746 Epoch 0 Batch 134/269 - Train Accuracy: 0.452, Validation Accuracy: 0.499, Loss: 1.835 Epoch 0 Batch 135/269 - Train Accuracy: 0.448, Validation Accuracy: 0.506, Loss: 1.880 Epoch 0 Batch 136/269 - Train Accuracy: 0.433, Validation Accuracy: 0.493, Loss: 1.862 Epoch 0 Batch 137/269 - Train Accuracy: 0.457, Validation Accuracy: 0.499, Loss: 1.855 Epoch 0 Batch 138/269 - Train Accuracy: 0.473, Validation Accuracy: 0.506, Loss: 1.778 Epoch 0 Batch 139/269 - Train Accuracy: 0.471, Validation Accuracy: 
0.491, Loss: 1.721
Epoch 0 Batch 140/269 - Train Accuracy: 0.482, Validation Accuracy: 0.505, Loss: 1.739
...
Epoch 0 Batch 200/269 - Train Accuracy: 0.488, Validation Accuracy: 0.525, Loss: 1.536
...
Epoch 0 Batch 267/269 - Train Accuracy: 0.519, Validation Accuracy: 0.530, Loss: 1.226
Epoch 1 Batch 0/269 - Train Accuracy: 0.475, Validation Accuracy: 0.522, Loss: 1.269
...
Epoch 1 Batch 100/269 - Train Accuracy: 0.548, Validation Accuracy: 0.547, Loss: 1.066
...
Epoch 1 Batch 200/269 - Train Accuracy: 0.546, Validation Accuracy: 0.574, Loss: 0.925
...
Epoch 1 Batch 267/269 - Train Accuracy: 0.564, Validation Accuracy: 0.572, Loss: 0.921
Epoch 2 Batch 0/269 - Train Accuracy: 0.542, Validation Accuracy: 0.576, Loss: 0.951
...
Epoch 2 Batch 87/269 - Train Accuracy: 0.579, Validation Accuracy:
0.607, Loss: 0.772 Epoch 2 Batch 88/269 - Train Accuracy: 0.601, Validation Accuracy: 0.609, Loss: 0.718 Epoch 2 Batch 89/269 - Train Accuracy: 0.614, Validation Accuracy: 0.609, Loss: 0.728 Epoch 2 Batch 90/269 - Train Accuracy: 0.565, Validation Accuracy: 0.609, Loss: 0.773 Epoch 2 Batch 91/269 - Train Accuracy: 0.598, Validation Accuracy: 0.606, Loss: 0.695 Epoch 2 Batch 92/269 - Train Accuracy: 0.600, Validation Accuracy: 0.610, Loss: 0.715 Epoch 2 Batch 93/269 - Train Accuracy: 0.620, Validation Accuracy: 0.606, Loss: 0.681 Epoch 2 Batch 94/269 - Train Accuracy: 0.602, Validation Accuracy: 0.606, Loss: 0.730 Epoch 2 Batch 95/269 - Train Accuracy: 0.593, Validation Accuracy: 0.604, Loss: 0.727 Epoch 2 Batch 96/269 - Train Accuracy: 0.601, Validation Accuracy: 0.601, Loss: 0.727 Epoch 2 Batch 97/269 - Train Accuracy: 0.587, Validation Accuracy: 0.600, Loss: 0.717 Epoch 2 Batch 98/269 - Train Accuracy: 0.592, Validation Accuracy: 0.600, Loss: 0.727 Epoch 2 Batch 99/269 - Train Accuracy: 0.587, Validation Accuracy: 0.600, Loss: 0.746 Epoch 2 Batch 100/269 - Train Accuracy: 0.607, Validation Accuracy: 0.597, Loss: 0.707 Epoch 2 Batch 101/269 - Train Accuracy: 0.565, Validation Accuracy: 0.598, Loss: 0.770 Epoch 2 Batch 102/269 - Train Accuracy: 0.604, Validation Accuracy: 0.597, Loss: 0.709 Epoch 2 Batch 103/269 - Train Accuracy: 0.599, Validation Accuracy: 0.597, Loss: 0.704 Epoch 2 Batch 104/269 - Train Accuracy: 0.587, Validation Accuracy: 0.596, Loss: 0.716 Epoch 2 Batch 105/269 - Train Accuracy: 0.593, Validation Accuracy: 0.603, Loss: 0.726 Epoch 2 Batch 106/269 - Train Accuracy: 0.598, Validation Accuracy: 0.603, Loss: 0.710 Epoch 2 Batch 107/269 - Train Accuracy: 0.566, Validation Accuracy: 0.600, Loss: 0.738 Epoch 2 Batch 108/269 - Train Accuracy: 0.601, Validation Accuracy: 0.598, Loss: 0.703 Epoch 2 Batch 109/269 - Train Accuracy: 0.582, Validation Accuracy: 0.604, Loss: 0.718 Epoch 2 Batch 110/269 - Train Accuracy: 0.596, Validation Accuracy: 0.607, 
Loss: 0.698 Epoch 2 Batch 111/269 - Train Accuracy: 0.570, Validation Accuracy: 0.602, Loss: 0.739 Epoch 2 Batch 112/269 - Train Accuracy: 0.615, Validation Accuracy: 0.607, Loss: 0.708 Epoch 2 Batch 113/269 - Train Accuracy: 0.598, Validation Accuracy: 0.605, Loss: 0.669 Epoch 2 Batch 114/269 - Train Accuracy: 0.599, Validation Accuracy: 0.601, Loss: 0.698 Epoch 2 Batch 115/269 - Train Accuracy: 0.589, Validation Accuracy: 0.605, Loss: 0.733 Epoch 2 Batch 116/269 - Train Accuracy: 0.591, Validation Accuracy: 0.600, Loss: 0.703 Epoch 2 Batch 117/269 - Train Accuracy: 0.591, Validation Accuracy: 0.603, Loss: 0.698 Epoch 2 Batch 118/269 - Train Accuracy: 0.618, Validation Accuracy: 0.606, Loss: 0.674 Epoch 2 Batch 119/269 - Train Accuracy: 0.588, Validation Accuracy: 0.606, Loss: 0.734 Epoch 2 Batch 120/269 - Train Accuracy: 0.599, Validation Accuracy: 0.607, Loss: 0.716 Epoch 2 Batch 121/269 - Train Accuracy: 0.601, Validation Accuracy: 0.608, Loss: 0.692 Epoch 2 Batch 122/269 - Train Accuracy: 0.595, Validation Accuracy: 0.607, Loss: 0.689 Epoch 2 Batch 123/269 - Train Accuracy: 0.576, Validation Accuracy: 0.607, Loss: 0.727 Epoch 2 Batch 124/269 - Train Accuracy: 0.599, Validation Accuracy: 0.610, Loss: 0.682 Epoch 2 Batch 125/269 - Train Accuracy: 0.605, Validation Accuracy: 0.604, Loss: 0.687 Epoch 2 Batch 126/269 - Train Accuracy: 0.607, Validation Accuracy: 0.599, Loss: 0.696 Epoch 2 Batch 127/269 - Train Accuracy: 0.588, Validation Accuracy: 0.603, Loss: 0.715 Epoch 2 Batch 128/269 - Train Accuracy: 0.612, Validation Accuracy: 0.604, Loss: 0.697 Epoch 2 Batch 129/269 - Train Accuracy: 0.605, Validation Accuracy: 0.606, Loss: 0.686 Epoch 2 Batch 130/269 - Train Accuracy: 0.578, Validation Accuracy: 0.604, Loss: 0.719 Epoch 2 Batch 131/269 - Train Accuracy: 0.587, Validation Accuracy: 0.604, Loss: 0.704 Epoch 2 Batch 132/269 - Train Accuracy: 0.597, Validation Accuracy: 0.606, Loss: 0.702 Epoch 2 Batch 133/269 - Train Accuracy: 0.603, Validation Accuracy: 
0.598, Loss: 0.666 Epoch 2 Batch 134/269 - Train Accuracy: 0.579, Validation Accuracy: 0.603, Loss: 0.713 Epoch 2 Batch 135/269 - Train Accuracy: 0.584, Validation Accuracy: 0.603, Loss: 0.738 Epoch 2 Batch 136/269 - Train Accuracy: 0.578, Validation Accuracy: 0.605, Loss: 0.739 Epoch 2 Batch 137/269 - Train Accuracy: 0.594, Validation Accuracy: 0.609, Loss: 0.726 Epoch 2 Batch 138/269 - Train Accuracy: 0.586, Validation Accuracy: 0.606, Loss: 0.703 Epoch 2 Batch 139/269 - Train Accuracy: 0.603, Validation Accuracy: 0.605, Loss: 0.670 Epoch 2 Batch 140/269 - Train Accuracy: 0.613, Validation Accuracy: 0.606, Loss: 0.703 Epoch 2 Batch 141/269 - Train Accuracy: 0.596, Validation Accuracy: 0.608, Loss: 0.707 Epoch 2 Batch 142/269 - Train Accuracy: 0.600, Validation Accuracy: 0.605, Loss: 0.659 Epoch 2 Batch 143/269 - Train Accuracy: 0.606, Validation Accuracy: 0.607, Loss: 0.682 Epoch 2 Batch 144/269 - Train Accuracy: 0.610, Validation Accuracy: 0.605, Loss: 0.646 Epoch 2 Batch 145/269 - Train Accuracy: 0.603, Validation Accuracy: 0.609, Loss: 0.675 Epoch 2 Batch 146/269 - Train Accuracy: 0.609, Validation Accuracy: 0.611, Loss: 0.665 Epoch 2 Batch 147/269 - Train Accuracy: 0.612, Validation Accuracy: 0.607, Loss: 0.650 Epoch 2 Batch 148/269 - Train Accuracy: 0.594, Validation Accuracy: 0.596, Loss: 0.687 Epoch 2 Batch 149/269 - Train Accuracy: 0.593, Validation Accuracy: 0.591, Loss: 0.688 Epoch 2 Batch 150/269 - Train Accuracy: 0.595, Validation Accuracy: 0.601, Loss: 0.686 Epoch 2 Batch 151/269 - Train Accuracy: 0.629, Validation Accuracy: 0.606, Loss: 0.642 Epoch 2 Batch 152/269 - Train Accuracy: 0.599, Validation Accuracy: 0.612, Loss: 0.680 Epoch 2 Batch 153/269 - Train Accuracy: 0.614, Validation Accuracy: 0.608, Loss: 0.663 Epoch 2 Batch 154/269 - Train Accuracy: 0.584, Validation Accuracy: 0.611, Loss: 0.677 Epoch 2 Batch 155/269 - Train Accuracy: 0.630, Validation Accuracy: 0.606, Loss: 0.643 Epoch 2 Batch 156/269 - Train Accuracy: 0.594, Validation 
Accuracy: 0.609, Loss: 0.716 Epoch 2 Batch 157/269 - Train Accuracy: 0.607, Validation Accuracy: 0.610, Loss: 0.677 Epoch 2 Batch 158/269 - Train Accuracy: 0.595, Validation Accuracy: 0.610, Loss: 0.672 Epoch 2 Batch 159/269 - Train Accuracy: 0.607, Validation Accuracy: 0.612, Loss: 0.662 Epoch 2 Batch 160/269 - Train Accuracy: 0.614, Validation Accuracy: 0.611, Loss: 0.666 Epoch 2 Batch 161/269 - Train Accuracy: 0.595, Validation Accuracy: 0.614, Loss: 0.664 Epoch 2 Batch 162/269 - Train Accuracy: 0.599, Validation Accuracy: 0.603, Loss: 0.660 Epoch 2 Batch 163/269 - Train Accuracy: 0.628, Validation Accuracy: 0.603, Loss: 0.667 Epoch 2 Batch 164/269 - Train Accuracy: 0.615, Validation Accuracy: 0.604, Loss: 0.657 Epoch 2 Batch 165/269 - Train Accuracy: 0.588, Validation Accuracy: 0.606, Loss: 0.674 Epoch 2 Batch 166/269 - Train Accuracy: 0.633, Validation Accuracy: 0.608, Loss: 0.625 Epoch 2 Batch 167/269 - Train Accuracy: 0.613, Validation Accuracy: 0.614, Loss: 0.668 Epoch 2 Batch 168/269 - Train Accuracy: 0.599, Validation Accuracy: 0.612, Loss: 0.664 Epoch 2 Batch 169/269 - Train Accuracy: 0.602, Validation Accuracy: 0.612, Loss: 0.666 Epoch 2 Batch 170/269 - Train Accuracy: 0.610, Validation Accuracy: 0.610, Loss: 0.653 Epoch 2 Batch 171/269 - Train Accuracy: 0.598, Validation Accuracy: 0.610, Loss: 0.691 Epoch 2 Batch 172/269 - Train Accuracy: 0.603, Validation Accuracy: 0.611, Loss: 0.672 Epoch 2 Batch 173/269 - Train Accuracy: 0.612, Validation Accuracy: 0.611, Loss: 0.652 Epoch 2 Batch 174/269 - Train Accuracy: 0.604, Validation Accuracy: 0.612, Loss: 0.656 Epoch 2 Batch 175/269 - Train Accuracy: 0.610, Validation Accuracy: 0.609, Loss: 0.682 Epoch 2 Batch 176/269 - Train Accuracy: 0.601, Validation Accuracy: 0.610, Loss: 0.692 Epoch 2 Batch 177/269 - Train Accuracy: 0.603, Validation Accuracy: 0.610, Loss: 0.631 Epoch 2 Batch 178/269 - Train Accuracy: 0.596, Validation Accuracy: 0.608, Loss: 0.681 Epoch 2 Batch 179/269 - Train Accuracy: 0.621, 
Validation Accuracy: 0.608, Loss: 0.649 Epoch 2 Batch 180/269 - Train Accuracy: 0.602, Validation Accuracy: 0.609, Loss: 0.645 Epoch 2 Batch 181/269 - Train Accuracy: 0.604, Validation Accuracy: 0.615, Loss: 0.644 Epoch 2 Batch 182/269 - Train Accuracy: 0.622, Validation Accuracy: 0.610, Loss: 0.661 Epoch 2 Batch 183/269 - Train Accuracy: 0.670, Validation Accuracy: 0.607, Loss: 0.571 Epoch 2 Batch 184/269 - Train Accuracy: 0.593, Validation Accuracy: 0.612, Loss: 0.677 Epoch 2 Batch 185/269 - Train Accuracy: 0.618, Validation Accuracy: 0.609, Loss: 0.643 Epoch 2 Batch 186/269 - Train Accuracy: 0.589, Validation Accuracy: 0.613, Loss: 0.669 Epoch 2 Batch 187/269 - Train Accuracy: 0.611, Validation Accuracy: 0.616, Loss: 0.649 Epoch 2 Batch 188/269 - Train Accuracy: 0.613, Validation Accuracy: 0.613, Loss: 0.635 Epoch 2 Batch 189/269 - Train Accuracy: 0.614, Validation Accuracy: 0.613, Loss: 0.636 Epoch 2 Batch 190/269 - Train Accuracy: 0.607, Validation Accuracy: 0.611, Loss: 0.627 Epoch 2 Batch 191/269 - Train Accuracy: 0.625, Validation Accuracy: 0.618, Loss: 0.636 Epoch 2 Batch 192/269 - Train Accuracy: 0.618, Validation Accuracy: 0.611, Loss: 0.650 Epoch 2 Batch 193/269 - Train Accuracy: 0.613, Validation Accuracy: 0.613, Loss: 0.640 Epoch 2 Batch 194/269 - Train Accuracy: 0.622, Validation Accuracy: 0.612, Loss: 0.645 Epoch 2 Batch 195/269 - Train Accuracy: 0.604, Validation Accuracy: 0.613, Loss: 0.640 Epoch 2 Batch 196/269 - Train Accuracy: 0.584, Validation Accuracy: 0.610, Loss: 0.635 Epoch 2 Batch 197/269 - Train Accuracy: 0.596, Validation Accuracy: 0.608, Loss: 0.672 Epoch 2 Batch 198/269 - Train Accuracy: 0.594, Validation Accuracy: 0.608, Loss: 0.674 Epoch 2 Batch 199/269 - Train Accuracy: 0.601, Validation Accuracy: 0.608, Loss: 0.652 Epoch 2 Batch 200/269 - Train Accuracy: 0.604, Validation Accuracy: 0.613, Loss: 0.657 Epoch 2 Batch 201/269 - Train Accuracy: 0.623, Validation Accuracy: 0.614, Loss: 0.628 Epoch 2 Batch 202/269 - Train Accuracy: 
0.610, Validation Accuracy: 0.613, Loss: 0.628 Epoch 2 Batch 203/269 - Train Accuracy: 0.585, Validation Accuracy: 0.612, Loss: 0.682 Epoch 2 Batch 204/269 - Train Accuracy: 0.600, Validation Accuracy: 0.611, Loss: 0.659 Epoch 2 Batch 205/269 - Train Accuracy: 0.603, Validation Accuracy: 0.617, Loss: 0.628 Epoch 2 Batch 206/269 - Train Accuracy: 0.600, Validation Accuracy: 0.611, Loss: 0.663 Epoch 2 Batch 207/269 - Train Accuracy: 0.636, Validation Accuracy: 0.609, Loss: 0.614 Epoch 2 Batch 208/269 - Train Accuracy: 0.597, Validation Accuracy: 0.619, Loss: 0.659 Epoch 2 Batch 209/269 - Train Accuracy: 0.612, Validation Accuracy: 0.612, Loss: 0.636 Epoch 2 Batch 210/269 - Train Accuracy: 0.626, Validation Accuracy: 0.616, Loss: 0.622 Epoch 2 Batch 211/269 - Train Accuracy: 0.616, Validation Accuracy: 0.611, Loss: 0.638 Epoch 2 Batch 212/269 - Train Accuracy: 0.632, Validation Accuracy: 0.616, Loss: 0.628 Epoch 2 Batch 213/269 - Train Accuracy: 0.619, Validation Accuracy: 0.613, Loss: 0.626 Epoch 2 Batch 214/269 - Train Accuracy: 0.626, Validation Accuracy: 0.614, Loss: 0.623 Epoch 2 Batch 215/269 - Train Accuracy: 0.645, Validation Accuracy: 0.621, Loss: 0.594 Epoch 2 Batch 216/269 - Train Accuracy: 0.589, Validation Accuracy: 0.621, Loss: 0.666 Epoch 2 Batch 217/269 - Train Accuracy: 0.586, Validation Accuracy: 0.618, Loss: 0.654 Epoch 2 Batch 218/269 - Train Accuracy: 0.604, Validation Accuracy: 0.614, Loss: 0.637 Epoch 2 Batch 219/269 - Train Accuracy: 0.606, Validation Accuracy: 0.615, Loss: 0.655 Epoch 2 Batch 220/269 - Train Accuracy: 0.618, Validation Accuracy: 0.617, Loss: 0.585 Epoch 2 Batch 221/269 - Train Accuracy: 0.634, Validation Accuracy: 0.614, Loss: 0.626 Epoch 2 Batch 222/269 - Train Accuracy: 0.631, Validation Accuracy: 0.613, Loss: 0.606 Epoch 2 Batch 223/269 - Train Accuracy: 0.613, Validation Accuracy: 0.613, Loss: 0.614 Epoch 2 Batch 224/269 - Train Accuracy: 0.626, Validation Accuracy: 0.614, Loss: 0.645 Epoch 2 Batch 225/269 - Train 
Accuracy: 0.619, Validation Accuracy: 0.617, Loss: 0.630 Epoch 2 Batch 226/269 - Train Accuracy: 0.612, Validation Accuracy: 0.616, Loss: 0.619 Epoch 2 Batch 227/269 - Train Accuracy: 0.671, Validation Accuracy: 0.617, Loss: 0.554 Epoch 2 Batch 228/269 - Train Accuracy: 0.620, Validation Accuracy: 0.613, Loss: 0.614 Epoch 2 Batch 229/269 - Train Accuracy: 0.619, Validation Accuracy: 0.613, Loss: 0.609 Epoch 2 Batch 230/269 - Train Accuracy: 0.605, Validation Accuracy: 0.610, Loss: 0.616 Epoch 2 Batch 231/269 - Train Accuracy: 0.583, Validation Accuracy: 0.610, Loss: 0.655 Epoch 2 Batch 232/269 - Train Accuracy: 0.582, Validation Accuracy: 0.612, Loss: 0.654 Epoch 2 Batch 233/269 - Train Accuracy: 0.619, Validation Accuracy: 0.611, Loss: 0.624 Epoch 2 Batch 234/269 - Train Accuracy: 0.616, Validation Accuracy: 0.613, Loss: 0.607 Epoch 2 Batch 235/269 - Train Accuracy: 0.622, Validation Accuracy: 0.617, Loss: 0.599 Epoch 2 Batch 236/269 - Train Accuracy: 0.606, Validation Accuracy: 0.620, Loss: 0.606 Epoch 2 Batch 237/269 - Train Accuracy: 0.602, Validation Accuracy: 0.619, Loss: 0.614 Epoch 2 Batch 238/269 - Train Accuracy: 0.637, Validation Accuracy: 0.619, Loss: 0.602 Epoch 2 Batch 239/269 - Train Accuracy: 0.632, Validation Accuracy: 0.616, Loss: 0.611 Epoch 2 Batch 240/269 - Train Accuracy: 0.647, Validation Accuracy: 0.615, Loss: 0.549 Epoch 2 Batch 241/269 - Train Accuracy: 0.613, Validation Accuracy: 0.610, Loss: 0.620 Epoch 2 Batch 242/269 - Train Accuracy: 0.610, Validation Accuracy: 0.610, Loss: 0.607 Epoch 2 Batch 243/269 - Train Accuracy: 0.630, Validation Accuracy: 0.611, Loss: 0.591 Epoch 2 Batch 244/269 - Train Accuracy: 0.618, Validation Accuracy: 0.614, Loss: 0.613 Epoch 2 Batch 245/269 - Train Accuracy: 0.612, Validation Accuracy: 0.619, Loss: 0.646 Epoch 2 Batch 246/269 - Train Accuracy: 0.608, Validation Accuracy: 0.619, Loss: 0.613 Epoch 2 Batch 247/269 - Train Accuracy: 0.611, Validation Accuracy: 0.621, Loss: 0.625 Epoch 2 Batch 248/269 - 
Train Accuracy: 0.599, Validation Accuracy: 0.619, Loss: 0.594 Epoch 2 Batch 249/269 - Train Accuracy: 0.636, Validation Accuracy: 0.616, Loss: 0.578 Epoch 2 Batch 250/269 - Train Accuracy: 0.599, Validation Accuracy: 0.614, Loss: 0.626 Epoch 2 Batch 251/269 - Train Accuracy: 0.644, Validation Accuracy: 0.617, Loss: 0.601 Epoch 2 Batch 252/269 - Train Accuracy: 0.627, Validation Accuracy: 0.620, Loss: 0.616 Epoch 2 Batch 253/269 - Train Accuracy: 0.613, Validation Accuracy: 0.624, Loss: 0.637 Epoch 2 Batch 254/269 - Train Accuracy: 0.626, Validation Accuracy: 0.622, Loss: 0.600 Epoch 2 Batch 255/269 - Train Accuracy: 0.643, Validation Accuracy: 0.618, Loss: 0.594 Epoch 2 Batch 256/269 - Train Accuracy: 0.609, Validation Accuracy: 0.624, Loss: 0.620 Epoch 2 Batch 257/269 - Train Accuracy: 0.595, Validation Accuracy: 0.623, Loss: 0.627 Epoch 2 Batch 258/269 - Train Accuracy: 0.614, Validation Accuracy: 0.612, Loss: 0.612 Epoch 2 Batch 259/269 - Train Accuracy: 0.628, Validation Accuracy: 0.604, Loss: 0.620 Epoch 2 Batch 260/269 - Train Accuracy: 0.612, Validation Accuracy: 0.614, Loss: 0.642 Epoch 2 Batch 261/269 - Train Accuracy: 0.588, Validation Accuracy: 0.618, Loss: 0.651 Epoch 2 Batch 262/269 - Train Accuracy: 0.620, Validation Accuracy: 0.625, Loss: 0.603 Epoch 2 Batch 263/269 - Train Accuracy: 0.628, Validation Accuracy: 0.619, Loss: 0.627 Epoch 2 Batch 264/269 - Train Accuracy: 0.596, Validation Accuracy: 0.617, Loss: 0.639 Epoch 2 Batch 265/269 - Train Accuracy: 0.606, Validation Accuracy: 0.628, Loss: 0.619 Epoch 2 Batch 266/269 - Train Accuracy: 0.634, Validation Accuracy: 0.621, Loss: 0.594 Epoch 2 Batch 267/269 - Train Accuracy: 0.632, Validation Accuracy: 0.624, Loss: 0.600 Epoch 3 Batch 0/269 - Train Accuracy: 0.615, Validation Accuracy: 0.624, Loss: 0.635 Epoch 3 Batch 1/269 - Train Accuracy: 0.612, Validation Accuracy: 0.620, Loss: 0.610 Epoch 3 Batch 2/269 - Train Accuracy: 0.609, Validation Accuracy: 0.616, Loss: 0.608 Epoch 3 Batch 3/269 - Train 
Accuracy: 0.619, Validation Accuracy: 0.616, Loss: 0.608 Epoch 3 Batch 4/269 - Train Accuracy: 0.585, Validation Accuracy: 0.619, Loss: 0.628 Epoch 3 Batch 5/269 - Train Accuracy: 0.600, Validation Accuracy: 0.617, Loss: 0.613 Epoch 3 Batch 6/269 - Train Accuracy: 0.618, Validation Accuracy: 0.618, Loss: 0.578 Epoch 3 Batch 7/269 - Train Accuracy: 0.622, Validation Accuracy: 0.621, Loss: 0.589 Epoch 3 Batch 8/269 - Train Accuracy: 0.604, Validation Accuracy: 0.624, Loss: 0.625 Epoch 3 Batch 9/269 - Train Accuracy: 0.612, Validation Accuracy: 0.626, Loss: 0.614 Epoch 3 Batch 10/269 - Train Accuracy: 0.615, Validation Accuracy: 0.623, Loss: 0.607 Epoch 3 Batch 11/269 - Train Accuracy: 0.614, Validation Accuracy: 0.622, Loss: 0.604 Epoch 3 Batch 12/269 - Train Accuracy: 0.603, Validation Accuracy: 0.629, Loss: 0.624 Epoch 3 Batch 13/269 - Train Accuracy: 0.648, Validation Accuracy: 0.627, Loss: 0.551 Epoch 3 Batch 14/269 - Train Accuracy: 0.627, Validation Accuracy: 0.626, Loss: 0.592 Epoch 3 Batch 15/269 - Train Accuracy: 0.617, Validation Accuracy: 0.627, Loss: 0.571 Epoch 3 Batch 16/269 - Train Accuracy: 0.637, Validation Accuracy: 0.629, Loss: 0.582 Epoch 3 Batch 17/269 - Train Accuracy: 0.630, Validation Accuracy: 0.621, Loss: 0.580 Epoch 3 Batch 18/269 - Train Accuracy: 0.599, Validation Accuracy: 0.619, Loss: 0.603 Epoch 3 Batch 19/269 - Train Accuracy: 0.655, Validation Accuracy: 0.617, Loss: 0.542 Epoch 3 Batch 20/269 - Train Accuracy: 0.615, Validation Accuracy: 0.620, Loss: 0.603 Epoch 3 Batch 21/269 - Train Accuracy: 0.609, Validation Accuracy: 0.622, Loss: 0.622 Epoch 3 Batch 22/269 - Train Accuracy: 0.638, Validation Accuracy: 0.627, Loss: 0.561 Epoch 3 Batch 23/269 - Train Accuracy: 0.641, Validation Accuracy: 0.626, Loss: 0.576 Epoch 3 Batch 24/269 - Train Accuracy: 0.629, Validation Accuracy: 0.628, Loss: 0.605 Epoch 3 Batch 25/269 - Train Accuracy: 0.604, Validation Accuracy: 0.629, Loss: 0.614 Epoch 3 Batch 26/269 - Train Accuracy: 0.659, Validation 
Accuracy: 0.626, Loss: 0.544 Epoch 3 Batch 27/269 - Train Accuracy: 0.627, Validation Accuracy: 0.627, Loss: 0.573 Epoch 3 Batch 28/269 - Train Accuracy: 0.583, Validation Accuracy: 0.615, Loss: 0.626 Epoch 3 Batch 29/269 - Train Accuracy: 0.625, Validation Accuracy: 0.609, Loss: 0.599 Epoch 3 Batch 30/269 - Train Accuracy: 0.621, Validation Accuracy: 0.607, Loss: 0.572 Epoch 3 Batch 31/269 - Train Accuracy: 0.625, Validation Accuracy: 0.615, Loss: 0.555 Epoch 3 Batch 32/269 - Train Accuracy: 0.611, Validation Accuracy: 0.611, Loss: 0.563 Epoch 3 Batch 33/269 - Train Accuracy: 0.623, Validation Accuracy: 0.609, Loss: 0.560 Epoch 3 Batch 34/269 - Train Accuracy: 0.631, Validation Accuracy: 0.629, Loss: 0.574 Epoch 3 Batch 35/269 - Train Accuracy: 0.641, Validation Accuracy: 0.631, Loss: 0.587 Epoch 3 Batch 36/269 - Train Accuracy: 0.629, Validation Accuracy: 0.627, Loss: 0.569 Epoch 3 Batch 37/269 - Train Accuracy: 0.639, Validation Accuracy: 0.629, Loss: 0.563 Epoch 3 Batch 38/269 - Train Accuracy: 0.634, Validation Accuracy: 0.626, Loss: 0.566 Epoch 3 Batch 39/269 - Train Accuracy: 0.629, Validation Accuracy: 0.627, Loss: 0.569 Epoch 3 Batch 40/269 - Train Accuracy: 0.612, Validation Accuracy: 0.626, Loss: 0.596 Epoch 3 Batch 41/269 - Train Accuracy: 0.618, Validation Accuracy: 0.629, Loss: 0.584 Epoch 3 Batch 42/269 - Train Accuracy: 0.657, Validation Accuracy: 0.628, Loss: 0.538 Epoch 3 Batch 43/269 - Train Accuracy: 0.620, Validation Accuracy: 0.629, Loss: 0.588 Epoch 3 Batch 44/269 - Train Accuracy: 0.636, Validation Accuracy: 0.624, Loss: 0.574 Epoch 3 Batch 45/269 - Train Accuracy: 0.613, Validation Accuracy: 0.631, Loss: 0.593 Epoch 3 Batch 46/269 - Train Accuracy: 0.630, Validation Accuracy: 0.632, Loss: 0.586 Epoch 3 Batch 47/269 - Train Accuracy: 0.659, Validation Accuracy: 0.633, Loss: 0.527 Epoch 3 Batch 48/269 - Train Accuracy: 0.629, Validation Accuracy: 0.628, Loss: 0.558 Epoch 3 Batch 49/269 - Train Accuracy: 0.622, Validation Accuracy: 0.628, 
Loss: 0.571 Epoch 3 Batch 50/269 - Train Accuracy: 0.613, Validation Accuracy: 0.623, Loss: 0.596 Epoch 3 Batch 51/269 - Train Accuracy: 0.625, Validation Accuracy: 0.631, Loss: 0.563 Epoch 3 Batch 52/269 - Train Accuracy: 0.624, Validation Accuracy: 0.621, Loss: 0.544 Epoch 3 Batch 53/269 - Train Accuracy: 0.609, Validation Accuracy: 0.622, Loss: 0.601 Epoch 3 Batch 54/269 - Train Accuracy: 0.631, Validation Accuracy: 0.625, Loss: 0.577 Epoch 3 Batch 55/269 - Train Accuracy: 0.647, Validation Accuracy: 0.634, Loss: 0.556 Epoch 3 Batch 56/269 - Train Accuracy: 0.639, Validation Accuracy: 0.630, Loss: 0.563 Epoch 3 Batch 57/269 - Train Accuracy: 0.646, Validation Accuracy: 0.630, Loss: 0.574 Epoch 3 Batch 58/269 - Train Accuracy: 0.647, Validation Accuracy: 0.634, Loss: 0.550 Epoch 3 Batch 59/269 - Train Accuracy: 0.647, Validation Accuracy: 0.631, Loss: 0.538 Epoch 3 Batch 60/269 - Train Accuracy: 0.642, Validation Accuracy: 0.634, Loss: 0.538 Epoch 3 Batch 61/269 - Train Accuracy: 0.657, Validation Accuracy: 0.633, Loss: 0.520 Epoch 3 Batch 62/269 - Train Accuracy: 0.647, Validation Accuracy: 0.631, Loss: 0.536 Epoch 3 Batch 63/269 - Train Accuracy: 0.622, Validation Accuracy: 0.631, Loss: 0.576 Epoch 3 Batch 64/269 - Train Accuracy: 0.628, Validation Accuracy: 0.628, Loss: 0.552 Epoch 3 Batch 65/269 - Train Accuracy: 0.628, Validation Accuracy: 0.624, Loss: 0.557 Epoch 3 Batch 66/269 - Train Accuracy: 0.638, Validation Accuracy: 0.627, Loss: 0.542 Epoch 3 Batch 67/269 - Train Accuracy: 0.635, Validation Accuracy: 0.614, Loss: 0.570 Epoch 3 Batch 68/269 - Train Accuracy: 0.612, Validation Accuracy: 0.620, Loss: 0.562 Epoch 3 Batch 69/269 - Train Accuracy: 0.605, Validation Accuracy: 0.629, Loss: 0.612 Epoch 3 Batch 70/269 - Train Accuracy: 0.652, Validation Accuracy: 0.631, Loss: 0.558 Epoch 3 Batch 71/269 - Train Accuracy: 0.625, Validation Accuracy: 0.631, Loss: 0.580 Epoch 3 Batch 72/269 - Train Accuracy: 0.643, Validation Accuracy: 0.635, Loss: 0.546 Epoch 3 
Batch 73/269 - Train Accuracy: 0.637, Validation Accuracy: 0.633, Loss: 0.568 Epoch 3 Batch 74/269 - Train Accuracy: 0.642, Validation Accuracy: 0.629, Loss: 0.560 Epoch 3 Batch 75/269 - Train Accuracy: 0.628, Validation Accuracy: 0.638, Loss: 0.556 Epoch 3 Batch 76/269 - Train Accuracy: 0.616, Validation Accuracy: 0.631, Loss: 0.559 Epoch 3 Batch 77/269 - Train Accuracy: 0.647, Validation Accuracy: 0.637, Loss: 0.552 Epoch 3 Batch 78/269 - Train Accuracy: 0.644, Validation Accuracy: 0.636, Loss: 0.543 Epoch 3 Batch 79/269 - Train Accuracy: 0.630, Validation Accuracy: 0.629, Loss: 0.545 Epoch 3 Batch 80/269 - Train Accuracy: 0.647, Validation Accuracy: 0.636, Loss: 0.548 Epoch 3 Batch 81/269 - Train Accuracy: 0.638, Validation Accuracy: 0.637, Loss: 0.557 Epoch 3 Batch 82/269 - Train Accuracy: 0.644, Validation Accuracy: 0.633, Loss: 0.528 Epoch 3 Batch 83/269 - Train Accuracy: 0.643, Validation Accuracy: 0.634, Loss: 0.559 Epoch 3 Batch 84/269 - Train Accuracy: 0.641, Validation Accuracy: 0.630, Loss: 0.545 Epoch 3 Batch 85/269 - Train Accuracy: 0.640, Validation Accuracy: 0.637, Loss: 0.545 Epoch 3 Batch 86/269 - Train Accuracy: 0.618, Validation Accuracy: 0.640, Loss: 0.536 Epoch 3 Batch 87/269 - Train Accuracy: 0.614, Validation Accuracy: 0.636, Loss: 0.576 Epoch 3 Batch 88/269 - Train Accuracy: 0.636, Validation Accuracy: 0.631, Loss: 0.534 Epoch 3 Batch 89/269 - Train Accuracy: 0.643, Validation Accuracy: 0.624, Loss: 0.549 Epoch 3 Batch 90/269 - Train Accuracy: 0.601, Validation Accuracy: 0.628, Loss: 0.579 Epoch 3 Batch 91/269 - Train Accuracy: 0.640, Validation Accuracy: 0.630, Loss: 0.520 Epoch 3 Batch 92/269 - Train Accuracy: 0.644, Validation Accuracy: 0.633, Loss: 0.530 Epoch 3 Batch 93/269 - Train Accuracy: 0.649, Validation Accuracy: 0.631, Loss: 0.520 Epoch 3 Batch 94/269 - Train Accuracy: 0.633, Validation Accuracy: 0.630, Loss: 0.546 Epoch 3 Batch 95/269 - Train Accuracy: 0.636, Validation Accuracy: 0.630, Loss: 0.545 Epoch 3 Batch 96/269 - Train 
Accuracy: 0.649, Validation Accuracy: 0.639, Loss: 0.543 Epoch 3 Batch 97/269 - Train Accuracy: 0.633, Validation Accuracy: 0.641, Loss: 0.531 Epoch 3 Batch 98/269 - Train Accuracy: 0.638, Validation Accuracy: 0.636, Loss: 0.552 Epoch 3 Batch 99/269 - Train Accuracy: 0.641, Validation Accuracy: 0.639, Loss: 0.559 Epoch 3 Batch 100/269 - Train Accuracy: 0.669, Validation Accuracy: 0.643, Loss: 0.532 Epoch 3 Batch 101/269 - Train Accuracy: 0.610, Validation Accuracy: 0.640, Loss: 0.573 Epoch 3 Batch 102/269 - Train Accuracy: 0.656, Validation Accuracy: 0.637, Loss: 0.537 Epoch 3 Batch 103/269 - Train Accuracy: 0.644, Validation Accuracy: 0.637, Loss: 0.519 Epoch 3 Batch 104/269 - Train Accuracy: 0.633, Validation Accuracy: 0.638, Loss: 0.540 Epoch 3 Batch 105/269 - Train Accuracy: 0.634, Validation Accuracy: 0.629, Loss: 0.547 Epoch 3 Batch 106/269 - Train Accuracy: 0.628, Validation Accuracy: 0.623, Loss: 0.528 Epoch 3 Batch 107/269 - Train Accuracy: 0.612, Validation Accuracy: 0.631, Loss: 0.563 Epoch 3 Batch 108/269 - Train Accuracy: 0.634, Validation Accuracy: 0.630, Loss: 0.532 Epoch 3 Batch 109/269 - Train Accuracy: 0.621, Validation Accuracy: 0.631, Loss: 0.533 Epoch 3 Batch 110/269 - Train Accuracy: 0.632, Validation Accuracy: 0.627, Loss: 0.527 Epoch 3 Batch 111/269 - Train Accuracy: 0.609, Validation Accuracy: 0.624, Loss: 0.555 Epoch 3 Batch 112/269 - Train Accuracy: 0.657, Validation Accuracy: 0.629, Loss: 0.537 Epoch 3 Batch 113/269 - Train Accuracy: 0.640, Validation Accuracy: 0.632, Loss: 0.505 Epoch 3 Batch 114/269 - Train Accuracy: 0.639, Validation Accuracy: 0.632, Loss: 0.525 Epoch 3 Batch 115/269 - Train Accuracy: 0.623, Validation Accuracy: 0.637, Loss: 0.560 Epoch 3 Batch 116/269 - Train Accuracy: 0.642, Validation Accuracy: 0.642, Loss: 0.528 Epoch 3 Batch 117/269 - Train Accuracy: 0.639, Validation Accuracy: 0.643, Loss: 0.530 Epoch 3 Batch 118/269 - Train Accuracy: 0.662, Validation Accuracy: 0.644, Loss: 0.511 Epoch 3 Batch 119/269 - Train 
Accuracy: 0.626, Validation Accuracy: 0.634, Loss: 0.544 Epoch 3 Batch 120/269 - Train Accuracy: 0.647, Validation Accuracy: 0.647, Loss: 0.546 Epoch 3 Batch 121/269 - Train Accuracy: 0.650, Validation Accuracy: 0.650, Loss: 0.519 Epoch 3 Batch 122/269 - Train Accuracy: 0.654, Validation Accuracy: 0.646, Loss: 0.520 Epoch 3 Batch 123/269 - Train Accuracy: 0.632, Validation Accuracy: 0.646, Loss: 0.542 Epoch 3 Batch 124/269 - Train Accuracy: 0.640, Validation Accuracy: 0.641, Loss: 0.514 Epoch 3 Batch 125/269 - Train Accuracy: 0.649, Validation Accuracy: 0.642, Loss: 0.520 Epoch 3 Batch 126/269 - Train Accuracy: 0.646, Validation Accuracy: 0.638, Loss: 0.523 Epoch 3 Batch 127/269 - Train Accuracy: 0.629, Validation Accuracy: 0.640, Loss: 0.540 Epoch 3 Batch 128/269 - Train Accuracy: 0.663, Validation Accuracy: 0.640, Loss: 0.519 Epoch 3 Batch 129/269 - Train Accuracy: 0.647, Validation Accuracy: 0.638, Loss: 0.514 Epoch 3 Batch 130/269 - Train Accuracy: 0.636, Validation Accuracy: 0.635, Loss: 0.533 Epoch 3 Batch 131/269 - Train Accuracy: 0.641, Validation Accuracy: 0.640, Loss: 0.531 Epoch 3 Batch 132/269 - Train Accuracy: 0.643, Validation Accuracy: 0.641, Loss: 0.527 Epoch 3 Batch 133/269 - Train Accuracy: 0.653, Validation Accuracy: 0.643, Loss: 0.506 Epoch 3 Batch 134/269 - Train Accuracy: 0.624, Validation Accuracy: 0.645, Loss: 0.532 Epoch 3 Batch 135/269 - Train Accuracy: 0.618, Validation Accuracy: 0.635, Loss: 0.552 Epoch 3 Batch 136/269 - Train Accuracy: 0.629, Validation Accuracy: 0.637, Loss: 0.558 Epoch 3 Batch 137/269 - Train Accuracy: 0.630, Validation Accuracy: 0.636, Loss: 0.546 Epoch 3 Batch 138/269 - Train Accuracy: 0.636, Validation Accuracy: 0.641, Loss: 0.532 Epoch 3 Batch 139/269 - Train Accuracy: 0.650, Validation Accuracy: 0.645, Loss: 0.501 Epoch 3 Batch 140/269 - Train Accuracy: 0.647, Validation Accuracy: 0.642, Loss: 0.533 Epoch 3 Batch 141/269 - Train Accuracy: 0.651, Validation Accuracy: 0.642, Loss: 0.533 Epoch 3 Batch 142/269 - 
Train Accuracy: 0.650, Validation Accuracy: 0.641, Loss: 0.496
Epoch 3 Batch 143/269 - Train Accuracy: 0.665, Validation Accuracy: 0.651, Loss: 0.505
Epoch 3 Batch 144/269 - Train Accuracy: 0.652, Validation Accuracy: 0.645, Loss: 0.496
...
Epoch 3 Batch 200/269 - Train Accuracy: 0.656, Validation Accuracy: 0.647, Loss: 0.490
...
Epoch 3 Batch 266/269 - Train Accuracy: 0.676, Validation Accuracy: 0.677, Loss: 0.428
Epoch 3 Batch 267/269 - Train Accuracy: 0.688, Validation Accuracy: 0.673, Loss: 0.435
Epoch 4 Batch 0/269 - Train Accuracy: 0.675, Validation Accuracy: 0.676, Loss: 0.455
Epoch 4 Batch 1/269 - Train Accuracy: 0.662, Validation Accuracy: 0.674, Loss: 0.440
...
Epoch 4 Batch 100/269 - Train Accuracy: 0.746, Validation Accuracy: 0.706, Loss: 0.371
...
Epoch 4 Batch 200/269 - Train Accuracy: 0.727, Validation Accuracy: 0.713, Loss: 0.338
...
Epoch 4 Batch 266/269 - Train Accuracy: 0.728, Validation Accuracy: 0.719, Loss: 0.301
Epoch 4 Batch 267/269 - Train Accuracy: 0.729, Validation Accuracy: 0.725, Loss: 0.315
Epoch 5 Batch 0/269 - Train Accuracy: 0.726, Validation Accuracy: 0.727, Loss: 0.323
Epoch 5 Batch 1/269 - Train Accuracy: 0.721, Validation Accuracy: 0.710, Loss: 0.313
...
Epoch 5 Batch 65/269 - Train Accuracy: 0.759, Validation Accuracy: 0.762, Loss: 0.282
Epoch 5 Batch 66/269 - Train Accuracy: 0.741, Validation Accuracy: 0.757, Loss: 0.270
Epoch 5
Batch 67/269 - Train Accuracy: 0.728, Validation Accuracy: 0.740, Loss: 0.281 Epoch 5 Batch 68/269 - Train Accuracy: 0.740, Validation Accuracy: 0.751, Loss: 0.286 Epoch 5 Batch 69/269 - Train Accuracy: 0.717, Validation Accuracy: 0.737, Loss: 0.303 Epoch 5 Batch 70/269 - Train Accuracy: 0.777, Validation Accuracy: 0.745, Loss: 0.278 Epoch 5 Batch 71/269 - Train Accuracy: 0.769, Validation Accuracy: 0.752, Loss: 0.289 Epoch 5 Batch 72/269 - Train Accuracy: 0.764, Validation Accuracy: 0.757, Loss: 0.277 Epoch 5 Batch 73/269 - Train Accuracy: 0.753, Validation Accuracy: 0.756, Loss: 0.281 Epoch 5 Batch 74/269 - Train Accuracy: 0.759, Validation Accuracy: 0.754, Loss: 0.271 Epoch 5 Batch 75/269 - Train Accuracy: 0.774, Validation Accuracy: 0.759, Loss: 0.268 Epoch 5 Batch 76/269 - Train Accuracy: 0.732, Validation Accuracy: 0.750, Loss: 0.269 Epoch 5 Batch 77/269 - Train Accuracy: 0.767, Validation Accuracy: 0.756, Loss: 0.263 Epoch 5 Batch 78/269 - Train Accuracy: 0.786, Validation Accuracy: 0.761, Loss: 0.258 Epoch 5 Batch 79/269 - Train Accuracy: 0.776, Validation Accuracy: 0.760, Loss: 0.266 Epoch 5 Batch 80/269 - Train Accuracy: 0.779, Validation Accuracy: 0.754, Loss: 0.267 Epoch 5 Batch 81/269 - Train Accuracy: 0.770, Validation Accuracy: 0.761, Loss: 0.278 Epoch 5 Batch 82/269 - Train Accuracy: 0.785, Validation Accuracy: 0.756, Loss: 0.250 Epoch 5 Batch 83/269 - Train Accuracy: 0.762, Validation Accuracy: 0.755, Loss: 0.284 Epoch 5 Batch 84/269 - Train Accuracy: 0.773, Validation Accuracy: 0.767, Loss: 0.267 Epoch 5 Batch 85/269 - Train Accuracy: 0.787, Validation Accuracy: 0.780, Loss: 0.260 Epoch 5 Batch 86/269 - Train Accuracy: 0.750, Validation Accuracy: 0.771, Loss: 0.258 Epoch 5 Batch 87/269 - Train Accuracy: 0.748, Validation Accuracy: 0.764, Loss: 0.271 Epoch 5 Batch 88/269 - Train Accuracy: 0.770, Validation Accuracy: 0.762, Loss: 0.251 Epoch 5 Batch 89/269 - Train Accuracy: 0.767, Validation Accuracy: 0.764, Loss: 0.263 Epoch 5 Batch 90/269 - Train 
Accuracy: 0.734, Validation Accuracy: 0.755, Loss: 0.273 Epoch 5 Batch 91/269 - Train Accuracy: 0.778, Validation Accuracy: 0.758, Loss: 0.250 Epoch 5 Batch 92/269 - Train Accuracy: 0.786, Validation Accuracy: 0.763, Loss: 0.245 Epoch 5 Batch 93/269 - Train Accuracy: 0.795, Validation Accuracy: 0.766, Loss: 0.242 Epoch 5 Batch 94/269 - Train Accuracy: 0.760, Validation Accuracy: 0.759, Loss: 0.266 Epoch 5 Batch 95/269 - Train Accuracy: 0.763, Validation Accuracy: 0.757, Loss: 0.258 Epoch 5 Batch 96/269 - Train Accuracy: 0.758, Validation Accuracy: 0.760, Loss: 0.258 Epoch 5 Batch 97/269 - Train Accuracy: 0.768, Validation Accuracy: 0.771, Loss: 0.251 Epoch 5 Batch 98/269 - Train Accuracy: 0.777, Validation Accuracy: 0.768, Loss: 0.255 Epoch 5 Batch 99/269 - Train Accuracy: 0.761, Validation Accuracy: 0.763, Loss: 0.258 Epoch 5 Batch 100/269 - Train Accuracy: 0.779, Validation Accuracy: 0.750, Loss: 0.259 Epoch 5 Batch 101/269 - Train Accuracy: 0.689, Validation Accuracy: 0.718, Loss: 0.332 Epoch 5 Batch 102/269 - Train Accuracy: 0.674, Validation Accuracy: 0.679, Loss: 0.371 Epoch 5 Batch 103/269 - Train Accuracy: 0.650, Validation Accuracy: 0.649, Loss: 0.451 Epoch 5 Batch 104/269 - Train Accuracy: 0.685, Validation Accuracy: 0.689, Loss: 0.537 Epoch 5 Batch 105/269 - Train Accuracy: 0.690, Validation Accuracy: 0.710, Loss: 0.542 Epoch 5 Batch 106/269 - Train Accuracy: 0.668, Validation Accuracy: 0.681, Loss: 0.422 Epoch 5 Batch 107/269 - Train Accuracy: 0.655, Validation Accuracy: 0.671, Loss: 0.455 Epoch 5 Batch 108/269 - Train Accuracy: 0.681, Validation Accuracy: 0.675, Loss: 0.427 Epoch 5 Batch 109/269 - Train Accuracy: 0.668, Validation Accuracy: 0.687, Loss: 0.438 Epoch 5 Batch 110/269 - Train Accuracy: 0.683, Validation Accuracy: 0.686, Loss: 0.375 Epoch 5 Batch 111/269 - Train Accuracy: 0.667, Validation Accuracy: 0.683, Loss: 0.386 Epoch 5 Batch 112/269 - Train Accuracy: 0.700, Validation Accuracy: 0.699, Loss: 0.375 Epoch 5 Batch 113/269 - Train 
Accuracy: 0.725, Validation Accuracy: 0.712, Loss: 0.360 Epoch 5 Batch 114/269 - Train Accuracy: 0.710, Validation Accuracy: 0.705, Loss: 0.363 Epoch 5 Batch 115/269 - Train Accuracy: 0.698, Validation Accuracy: 0.715, Loss: 0.381 Epoch 5 Batch 116/269 - Train Accuracy: 0.716, Validation Accuracy: 0.721, Loss: 0.342 Epoch 5 Batch 117/269 - Train Accuracy: 0.713, Validation Accuracy: 0.703, Loss: 0.330 Epoch 5 Batch 118/269 - Train Accuracy: 0.748, Validation Accuracy: 0.712, Loss: 0.325 Epoch 5 Batch 119/269 - Train Accuracy: 0.729, Validation Accuracy: 0.730, Loss: 0.334 Epoch 5 Batch 120/269 - Train Accuracy: 0.734, Validation Accuracy: 0.728, Loss: 0.323 Epoch 5 Batch 121/269 - Train Accuracy: 0.738, Validation Accuracy: 0.732, Loss: 0.302 Epoch 5 Batch 122/269 - Train Accuracy: 0.740, Validation Accuracy: 0.726, Loss: 0.311 Epoch 5 Batch 123/269 - Train Accuracy: 0.729, Validation Accuracy: 0.735, Loss: 0.314 Epoch 5 Batch 124/269 - Train Accuracy: 0.748, Validation Accuracy: 0.745, Loss: 0.290 Epoch 5 Batch 125/269 - Train Accuracy: 0.738, Validation Accuracy: 0.733, Loss: 0.286 Epoch 5 Batch 126/269 - Train Accuracy: 0.752, Validation Accuracy: 0.735, Loss: 0.288 Epoch 5 Batch 127/269 - Train Accuracy: 0.735, Validation Accuracy: 0.733, Loss: 0.296 Epoch 5 Batch 128/269 - Train Accuracy: 0.749, Validation Accuracy: 0.732, Loss: 0.283 Epoch 5 Batch 129/269 - Train Accuracy: 0.746, Validation Accuracy: 0.744, Loss: 0.277 Epoch 5 Batch 130/269 - Train Accuracy: 0.749, Validation Accuracy: 0.743, Loss: 0.284 Epoch 5 Batch 131/269 - Train Accuracy: 0.752, Validation Accuracy: 0.749, Loss: 0.279 Epoch 5 Batch 132/269 - Train Accuracy: 0.751, Validation Accuracy: 0.760, Loss: 0.278 Epoch 5 Batch 133/269 - Train Accuracy: 0.776, Validation Accuracy: 0.759, Loss: 0.259 Epoch 5 Batch 134/269 - Train Accuracy: 0.748, Validation Accuracy: 0.750, Loss: 0.274 Epoch 5 Batch 135/269 - Train Accuracy: 0.758, Validation Accuracy: 0.752, Loss: 0.277 Epoch 5 Batch 136/269 - 
Train Accuracy: 0.753, Validation Accuracy: 0.759, Loss: 0.279 Epoch 5 Batch 137/269 - Train Accuracy: 0.775, Validation Accuracy: 0.755, Loss: 0.273 Epoch 5 Batch 138/269 - Train Accuracy: 0.765, Validation Accuracy: 0.759, Loss: 0.264 Epoch 5 Batch 139/269 - Train Accuracy: 0.778, Validation Accuracy: 0.756, Loss: 0.248 Epoch 5 Batch 140/269 - Train Accuracy: 0.771, Validation Accuracy: 0.766, Loss: 0.262 Epoch 5 Batch 141/269 - Train Accuracy: 0.778, Validation Accuracy: 0.771, Loss: 0.265 Epoch 5 Batch 142/269 - Train Accuracy: 0.791, Validation Accuracy: 0.767, Loss: 0.241 Epoch 5 Batch 143/269 - Train Accuracy: 0.803, Validation Accuracy: 0.776, Loss: 0.240 Epoch 5 Batch 144/269 - Train Accuracy: 0.793, Validation Accuracy: 0.770, Loss: 0.229 Epoch 5 Batch 145/269 - Train Accuracy: 0.779, Validation Accuracy: 0.770, Loss: 0.239 Epoch 5 Batch 146/269 - Train Accuracy: 0.767, Validation Accuracy: 0.770, Loss: 0.237 Epoch 5 Batch 147/269 - Train Accuracy: 0.791, Validation Accuracy: 0.784, Loss: 0.242 Epoch 5 Batch 148/269 - Train Accuracy: 0.773, Validation Accuracy: 0.778, Loss: 0.245 Epoch 5 Batch 149/269 - Train Accuracy: 0.762, Validation Accuracy: 0.783, Loss: 0.253 Epoch 5 Batch 150/269 - Train Accuracy: 0.781, Validation Accuracy: 0.777, Loss: 0.241 Epoch 5 Batch 151/269 - Train Accuracy: 0.805, Validation Accuracy: 0.778, Loss: 0.228 Epoch 5 Batch 152/269 - Train Accuracy: 0.784, Validation Accuracy: 0.783, Loss: 0.239 Epoch 5 Batch 153/269 - Train Accuracy: 0.796, Validation Accuracy: 0.786, Loss: 0.234 Epoch 5 Batch 154/269 - Train Accuracy: 0.803, Validation Accuracy: 0.787, Loss: 0.229 Epoch 5 Batch 155/269 - Train Accuracy: 0.796, Validation Accuracy: 0.779, Loss: 0.225 Epoch 5 Batch 156/269 - Train Accuracy: 0.773, Validation Accuracy: 0.782, Loss: 0.244 Epoch 5 Batch 157/269 - Train Accuracy: 0.777, Validation Accuracy: 0.780, Loss: 0.230 Epoch 5 Batch 158/269 - Train Accuracy: 0.784, Validation Accuracy: 0.789, Loss: 0.237 Epoch 5 Batch 159/269 
- Train Accuracy: 0.784, Validation Accuracy: 0.793, Loss: 0.232 Epoch 5 Batch 160/269 - Train Accuracy: 0.793, Validation Accuracy: 0.781, Loss: 0.227 Epoch 5 Batch 161/269 - Train Accuracy: 0.770, Validation Accuracy: 0.783, Loss: 0.228 Epoch 5 Batch 162/269 - Train Accuracy: 0.812, Validation Accuracy: 0.791, Loss: 0.226 Epoch 5 Batch 163/269 - Train Accuracy: 0.785, Validation Accuracy: 0.781, Loss: 0.236 Epoch 5 Batch 164/269 - Train Accuracy: 0.807, Validation Accuracy: 0.782, Loss: 0.227 Epoch 5 Batch 165/269 - Train Accuracy: 0.787, Validation Accuracy: 0.785, Loss: 0.226 Epoch 5 Batch 166/269 - Train Accuracy: 0.806, Validation Accuracy: 0.779, Loss: 0.215 Epoch 5 Batch 167/269 - Train Accuracy: 0.809, Validation Accuracy: 0.780, Loss: 0.227 Epoch 5 Batch 168/269 - Train Accuracy: 0.804, Validation Accuracy: 0.787, Loss: 0.228 Epoch 5 Batch 169/269 - Train Accuracy: 0.805, Validation Accuracy: 0.792, Loss: 0.222 Epoch 5 Batch 170/269 - Train Accuracy: 0.796, Validation Accuracy: 0.789, Loss: 0.215 Epoch 5 Batch 171/269 - Train Accuracy: 0.804, Validation Accuracy: 0.793, Loss: 0.233 Epoch 5 Batch 172/269 - Train Accuracy: 0.781, Validation Accuracy: 0.797, Loss: 0.232 Epoch 5 Batch 173/269 - Train Accuracy: 0.802, Validation Accuracy: 0.795, Loss: 0.213 Epoch 5 Batch 174/269 - Train Accuracy: 0.791, Validation Accuracy: 0.793, Loss: 0.220 Epoch 5 Batch 175/269 - Train Accuracy: 0.801, Validation Accuracy: 0.800, Loss: 0.231 Epoch 5 Batch 176/269 - Train Accuracy: 0.789, Validation Accuracy: 0.799, Loss: 0.228 Epoch 5 Batch 177/269 - Train Accuracy: 0.808, Validation Accuracy: 0.801, Loss: 0.208 Epoch 5 Batch 178/269 - Train Accuracy: 0.799, Validation Accuracy: 0.802, Loss: 0.222 Epoch 5 Batch 179/269 - Train Accuracy: 0.794, Validation Accuracy: 0.803, Loss: 0.218 Epoch 5 Batch 180/269 - Train Accuracy: 0.814, Validation Accuracy: 0.804, Loss: 0.212 Epoch 5 Batch 181/269 - Train Accuracy: 0.790, Validation Accuracy: 0.800, Loss: 0.218 Epoch 5 Batch 
182/269 - Train Accuracy: 0.812, Validation Accuracy: 0.796, Loss: 0.221 Epoch 5 Batch 183/269 - Train Accuracy: 0.837, Validation Accuracy: 0.803, Loss: 0.186 Epoch 5 Batch 184/269 - Train Accuracy: 0.819, Validation Accuracy: 0.806, Loss: 0.218 Epoch 5 Batch 185/269 - Train Accuracy: 0.830, Validation Accuracy: 0.801, Loss: 0.207 Epoch 5 Batch 186/269 - Train Accuracy: 0.788, Validation Accuracy: 0.807, Loss: 0.216 Epoch 5 Batch 187/269 - Train Accuracy: 0.786, Validation Accuracy: 0.791, Loss: 0.212 Epoch 5 Batch 188/269 - Train Accuracy: 0.810, Validation Accuracy: 0.800, Loss: 0.215 Epoch 5 Batch 189/269 - Train Accuracy: 0.819, Validation Accuracy: 0.796, Loss: 0.206 Epoch 5 Batch 190/269 - Train Accuracy: 0.824, Validation Accuracy: 0.803, Loss: 0.201 Epoch 5 Batch 191/269 - Train Accuracy: 0.815, Validation Accuracy: 0.792, Loss: 0.199 Epoch 5 Batch 192/269 - Train Accuracy: 0.796, Validation Accuracy: 0.796, Loss: 0.206 Epoch 5 Batch 193/269 - Train Accuracy: 0.801, Validation Accuracy: 0.800, Loss: 0.205 Epoch 5 Batch 194/269 - Train Accuracy: 0.819, Validation Accuracy: 0.801, Loss: 0.209 Epoch 5 Batch 195/269 - Train Accuracy: 0.821, Validation Accuracy: 0.802, Loss: 0.204 Epoch 5 Batch 196/269 - Train Accuracy: 0.819, Validation Accuracy: 0.804, Loss: 0.201 Epoch 5 Batch 197/269 - Train Accuracy: 0.785, Validation Accuracy: 0.808, Loss: 0.214 Epoch 5 Batch 198/269 - Train Accuracy: 0.797, Validation Accuracy: 0.808, Loss: 0.216 Epoch 5 Batch 199/269 - Train Accuracy: 0.789, Validation Accuracy: 0.802, Loss: 0.215 Epoch 5 Batch 200/269 - Train Accuracy: 0.806, Validation Accuracy: 0.810, Loss: 0.208 Epoch 5 Batch 201/269 - Train Accuracy: 0.814, Validation Accuracy: 0.809, Loss: 0.203 Epoch 5 Batch 202/269 - Train Accuracy: 0.812, Validation Accuracy: 0.810, Loss: 0.201 Epoch 5 Batch 203/269 - Train Accuracy: 0.821, Validation Accuracy: 0.803, Loss: 0.216 Epoch 5 Batch 204/269 - Train Accuracy: 0.790, Validation Accuracy: 0.792, Loss: 0.215 Epoch 5 
Batch 205/269 - Train Accuracy: 0.810, Validation Accuracy: 0.800, Loss: 0.203 Epoch 5 Batch 206/269 - Train Accuracy: 0.810, Validation Accuracy: 0.793, Loss: 0.215 Epoch 5 Batch 207/269 - Train Accuracy: 0.806, Validation Accuracy: 0.800, Loss: 0.197 Epoch 5 Batch 208/269 - Train Accuracy: 0.814, Validation Accuracy: 0.800, Loss: 0.210 Epoch 5 Batch 209/269 - Train Accuracy: 0.812, Validation Accuracy: 0.792, Loss: 0.198 Epoch 5 Batch 210/269 - Train Accuracy: 0.814, Validation Accuracy: 0.798, Loss: 0.199 Epoch 5 Batch 211/269 - Train Accuracy: 0.807, Validation Accuracy: 0.803, Loss: 0.202 Epoch 5 Batch 212/269 - Train Accuracy: 0.807, Validation Accuracy: 0.806, Loss: 0.205 Epoch 5 Batch 213/269 - Train Accuracy: 0.825, Validation Accuracy: 0.809, Loss: 0.195 Epoch 5 Batch 214/269 - Train Accuracy: 0.806, Validation Accuracy: 0.807, Loss: 0.204 Epoch 5 Batch 215/269 - Train Accuracy: 0.842, Validation Accuracy: 0.811, Loss: 0.192 Epoch 5 Batch 216/269 - Train Accuracy: 0.809, Validation Accuracy: 0.808, Loss: 0.220 Epoch 5 Batch 217/269 - Train Accuracy: 0.810, Validation Accuracy: 0.810, Loss: 0.204 Epoch 5 Batch 218/269 - Train Accuracy: 0.828, Validation Accuracy: 0.811, Loss: 0.196 Epoch 5 Batch 219/269 - Train Accuracy: 0.833, Validation Accuracy: 0.805, Loss: 0.207 Epoch 5 Batch 220/269 - Train Accuracy: 0.824, Validation Accuracy: 0.801, Loss: 0.183 Epoch 5 Batch 221/269 - Train Accuracy: 0.838, Validation Accuracy: 0.813, Loss: 0.198 Epoch 5 Batch 222/269 - Train Accuracy: 0.843, Validation Accuracy: 0.814, Loss: 0.184 Epoch 5 Batch 223/269 - Train Accuracy: 0.821, Validation Accuracy: 0.813, Loss: 0.184 Epoch 5 Batch 224/269 - Train Accuracy: 0.814, Validation Accuracy: 0.812, Loss: 0.203 Epoch 5 Batch 225/269 - Train Accuracy: 0.809, Validation Accuracy: 0.809, Loss: 0.187 Epoch 5 Batch 226/269 - Train Accuracy: 0.824, Validation Accuracy: 0.813, Loss: 0.195 Epoch 5 Batch 227/269 - Train Accuracy: 0.859, Validation Accuracy: 0.818, Loss: 0.182 Epoch 
5 Batch 228/269 - Train Accuracy: 0.824, Validation Accuracy: 0.812, Loss: 0.188 Epoch 5 Batch 229/269 - Train Accuracy: 0.824, Validation Accuracy: 0.811, Loss: 0.185 Epoch 5 Batch 230/269 - Train Accuracy: 0.807, Validation Accuracy: 0.808, Loss: 0.190 Epoch 5 Batch 231/269 - Train Accuracy: 0.789, Validation Accuracy: 0.800, Loss: 0.206 Epoch 5 Batch 232/269 - Train Accuracy: 0.799, Validation Accuracy: 0.811, Loss: 0.201 Epoch 5 Batch 233/269 - Train Accuracy: 0.838, Validation Accuracy: 0.814, Loss: 0.192 Epoch 5 Batch 234/269 - Train Accuracy: 0.833, Validation Accuracy: 0.814, Loss: 0.189 Epoch 5 Batch 235/269 - Train Accuracy: 0.821, Validation Accuracy: 0.819, Loss: 0.178 Epoch 5 Batch 236/269 - Train Accuracy: 0.814, Validation Accuracy: 0.821, Loss: 0.183 Epoch 5 Batch 237/269 - Train Accuracy: 0.803, Validation Accuracy: 0.817, Loss: 0.189 Epoch 5 Batch 238/269 - Train Accuracy: 0.839, Validation Accuracy: 0.821, Loss: 0.182 Epoch 5 Batch 239/269 - Train Accuracy: 0.821, Validation Accuracy: 0.809, Loss: 0.183 Epoch 5 Batch 240/269 - Train Accuracy: 0.832, Validation Accuracy: 0.809, Loss: 0.173 Epoch 5 Batch 241/269 - Train Accuracy: 0.817, Validation Accuracy: 0.803, Loss: 0.195 Epoch 5 Batch 242/269 - Train Accuracy: 0.822, Validation Accuracy: 0.804, Loss: 0.183 Epoch 5 Batch 243/269 - Train Accuracy: 0.856, Validation Accuracy: 0.822, Loss: 0.183 Epoch 5 Batch 244/269 - Train Accuracy: 0.816, Validation Accuracy: 0.819, Loss: 0.189 Epoch 5 Batch 245/269 - Train Accuracy: 0.802, Validation Accuracy: 0.821, Loss: 0.193 Epoch 5 Batch 246/269 - Train Accuracy: 0.806, Validation Accuracy: 0.823, Loss: 0.183 Epoch 5 Batch 247/269 - Train Accuracy: 0.817, Validation Accuracy: 0.821, Loss: 0.182 Epoch 5 Batch 248/269 - Train Accuracy: 0.827, Validation Accuracy: 0.825, Loss: 0.181 Epoch 5 Batch 249/269 - Train Accuracy: 0.848, Validation Accuracy: 0.820, Loss: 0.162 Epoch 5 Batch 250/269 - Train Accuracy: 0.834, Validation Accuracy: 0.822, Loss: 0.180 
Epoch 5 Batch 251/269 - Train Accuracy: 0.862, Validation Accuracy: 0.818, Loss: 0.175 Epoch 5 Batch 252/269 - Train Accuracy: 0.826, Validation Accuracy: 0.816, Loss: 0.172 Epoch 5 Batch 253/269 - Train Accuracy: 0.813, Validation Accuracy: 0.814, Loss: 0.182 Epoch 5 Batch 254/269 - Train Accuracy: 0.828, Validation Accuracy: 0.818, Loss: 0.178 Epoch 5 Batch 255/269 - Train Accuracy: 0.824, Validation Accuracy: 0.819, Loss: 0.173 Epoch 5 Batch 256/269 - Train Accuracy: 0.813, Validation Accuracy: 0.823, Loss: 0.175 Epoch 5 Batch 257/269 - Train Accuracy: 0.797, Validation Accuracy: 0.818, Loss: 0.188 Epoch 5 Batch 258/269 - Train Accuracy: 0.823, Validation Accuracy: 0.827, Loss: 0.179 Epoch 5 Batch 259/269 - Train Accuracy: 0.835, Validation Accuracy: 0.826, Loss: 0.175 Epoch 5 Batch 260/269 - Train Accuracy: 0.821, Validation Accuracy: 0.832, Loss: 0.188 Epoch 5 Batch 261/269 - Train Accuracy: 0.823, Validation Accuracy: 0.822, Loss: 0.180 Epoch 5 Batch 262/269 - Train Accuracy: 0.835, Validation Accuracy: 0.823, Loss: 0.171 Epoch 5 Batch 263/269 - Train Accuracy: 0.824, Validation Accuracy: 0.821, Loss: 0.186 Epoch 5 Batch 264/269 - Train Accuracy: 0.802, Validation Accuracy: 0.825, Loss: 0.193 Epoch 5 Batch 265/269 - Train Accuracy: 0.817, Validation Accuracy: 0.820, Loss: 0.179 Epoch 5 Batch 266/269 - Train Accuracy: 0.815, Validation Accuracy: 0.815, Loss: 0.180 Epoch 5 Batch 267/269 - Train Accuracy: 0.831, Validation Accuracy: 0.816, Loss: 0.197 Epoch 6 Batch 0/269 - Train Accuracy: 0.824, Validation Accuracy: 0.821, Loss: 0.189 Epoch 6 Batch 1/269 - Train Accuracy: 0.822, Validation Accuracy: 0.816, Loss: 0.178 Epoch 6 Batch 2/269 - Train Accuracy: 0.850, Validation Accuracy: 0.825, Loss: 0.189 Epoch 6 Batch 3/269 - Train Accuracy: 0.836, Validation Accuracy: 0.823, Loss: 0.177 Epoch 6 Batch 4/269 - Train Accuracy: 0.787, Validation Accuracy: 0.810, Loss: 0.190 Epoch 6 Batch 5/269 - Train Accuracy: 0.823, Validation Accuracy: 0.825, Loss: 0.194 Epoch 6 
Batch 6/269 - Train Accuracy: 0.836, Validation Accuracy: 0.818, Loss: 0.169 Epoch 6 Batch 7/269 - Train Accuracy: 0.833, Validation Accuracy: 0.826, Loss: 0.187 Epoch 6 Batch 8/269 - Train Accuracy: 0.830, Validation Accuracy: 0.829, Loss: 0.187 Epoch 6 Batch 9/269 - Train Accuracy: 0.826, Validation Accuracy: 0.819, Loss: 0.184 Epoch 6 Batch 10/269 - Train Accuracy: 0.844, Validation Accuracy: 0.827, Loss: 0.174 Epoch 6 Batch 11/269 - Train Accuracy: 0.820, Validation Accuracy: 0.825, Loss: 0.179 Epoch 6 Batch 12/269 - Train Accuracy: 0.820, Validation Accuracy: 0.829, Loss: 0.187 Epoch 6 Batch 13/269 - Train Accuracy: 0.840, Validation Accuracy: 0.827, Loss: 0.156 Epoch 6 Batch 14/269 - Train Accuracy: 0.826, Validation Accuracy: 0.832, Loss: 0.175 Epoch 6 Batch 15/269 - Train Accuracy: 0.842, Validation Accuracy: 0.828, Loss: 0.155 Epoch 6 Batch 16/269 - Train Accuracy: 0.844, Validation Accuracy: 0.830, Loss: 0.172 Epoch 6 Batch 17/269 - Train Accuracy: 0.847, Validation Accuracy: 0.830, Loss: 0.158 Epoch 6 Batch 18/269 - Train Accuracy: 0.846, Validation Accuracy: 0.832, Loss: 0.181 Epoch 6 Batch 19/269 - Train Accuracy: 0.854, Validation Accuracy: 0.839, Loss: 0.148 Epoch 6 Batch 20/269 - Train Accuracy: 0.848, Validation Accuracy: 0.838, Loss: 0.167 Epoch 6 Batch 21/269 - Train Accuracy: 0.829, Validation Accuracy: 0.840, Loss: 0.184 Epoch 6 Batch 22/269 - Train Accuracy: 0.863, Validation Accuracy: 0.841, Loss: 0.158 Epoch 6 Batch 23/269 - Train Accuracy: 0.843, Validation Accuracy: 0.836, Loss: 0.165 Epoch 6 Batch 24/269 - Train Accuracy: 0.860, Validation Accuracy: 0.846, Loss: 0.163 Epoch 6 Batch 25/269 - Train Accuracy: 0.847, Validation Accuracy: 0.840, Loss: 0.174 Epoch 6 Batch 26/269 - Train Accuracy: 0.852, Validation Accuracy: 0.844, Loss: 0.150 Epoch 6 Batch 27/269 - Train Accuracy: 0.855, Validation Accuracy: 0.848, Loss: 0.157 Epoch 6 Batch 28/269 - Train Accuracy: 0.835, Validation Accuracy: 0.838, Loss: 0.175 Epoch 6 Batch 29/269 - Train 
Accuracy: 0.848, Validation Accuracy: 0.837, Loss: 0.166 Epoch 6 Batch 30/269 - Train Accuracy: 0.849, Validation Accuracy: 0.853, Loss: 0.156 Epoch 6 Batch 31/269 - Train Accuracy: 0.871, Validation Accuracy: 0.850, Loss: 0.153 Epoch 6 Batch 32/269 - Train Accuracy: 0.858, Validation Accuracy: 0.842, Loss: 0.148 Epoch 6 Batch 33/269 - Train Accuracy: 0.859, Validation Accuracy: 0.845, Loss: 0.151 Epoch 6 Batch 34/269 - Train Accuracy: 0.849, Validation Accuracy: 0.855, Loss: 0.153 Epoch 6 Batch 35/269 - Train Accuracy: 0.856, Validation Accuracy: 0.839, Loss: 0.168 Epoch 6 Batch 36/269 - Train Accuracy: 0.847, Validation Accuracy: 0.849, Loss: 0.154 Epoch 6 Batch 37/269 - Train Accuracy: 0.860, Validation Accuracy: 0.848, Loss: 0.158 Epoch 6 Batch 38/269 - Train Accuracy: 0.856, Validation Accuracy: 0.842, Loss: 0.154 Epoch 6 Batch 39/269 - Train Accuracy: 0.862, Validation Accuracy: 0.845, Loss: 0.153 Epoch 6 Batch 40/269 - Train Accuracy: 0.845, Validation Accuracy: 0.851, Loss: 0.164 Epoch 6 Batch 41/269 - Train Accuracy: 0.845, Validation Accuracy: 0.849, Loss: 0.155 Epoch 6 Batch 42/269 - Train Accuracy: 0.878, Validation Accuracy: 0.855, Loss: 0.145 Epoch 6 Batch 43/269 - Train Accuracy: 0.866, Validation Accuracy: 0.857, Loss: 0.149 Epoch 6 Batch 44/269 - Train Accuracy: 0.849, Validation Accuracy: 0.840, Loss: 0.156 Epoch 6 Batch 45/269 - Train Accuracy: 0.841, Validation Accuracy: 0.857, Loss: 0.154 Epoch 6 Batch 46/269 - Train Accuracy: 0.871, Validation Accuracy: 0.854, Loss: 0.148 Epoch 6 Batch 47/269 - Train Accuracy: 0.870, Validation Accuracy: 0.856, Loss: 0.137 Epoch 6 Batch 48/269 - Train Accuracy: 0.890, Validation Accuracy: 0.862, Loss: 0.145 Epoch 6 Batch 49/269 - Train Accuracy: 0.854, Validation Accuracy: 0.854, Loss: 0.141 Epoch 6 Batch 50/269 - Train Accuracy: 0.841, Validation Accuracy: 0.858, Loss: 0.161 Epoch 6 Batch 51/269 - Train Accuracy: 0.858, Validation Accuracy: 0.859, Loss: 0.148 Epoch 6 Batch 52/269 - Train Accuracy: 0.860, 
Validation Accuracy: 0.853, Loss: 0.139 Epoch 6 Batch 53/269 - Train Accuracy: 0.862, Validation Accuracy: 0.862, Loss: 0.159 Epoch 6 Batch 54/269 - Train Accuracy: 0.884, Validation Accuracy: 0.862, Loss: 0.141 Epoch 6 Batch 55/269 - Train Accuracy: 0.875, Validation Accuracy: 0.859, Loss: 0.143 Epoch 6 Batch 56/269 - Train Accuracy: 0.875, Validation Accuracy: 0.865, Loss: 0.147 Epoch 6 Batch 57/269 - Train Accuracy: 0.877, Validation Accuracy: 0.865, Loss: 0.145 Epoch 6 Batch 58/269 - Train Accuracy: 0.866, Validation Accuracy: 0.869, Loss: 0.138 Epoch 6 Batch 59/269 - Train Accuracy: 0.890, Validation Accuracy: 0.866, Loss: 0.122 Epoch 6 Batch 60/269 - Train Accuracy: 0.878, Validation Accuracy: 0.863, Loss: 0.132 Epoch 6 Batch 61/269 - Train Accuracy: 0.877, Validation Accuracy: 0.867, Loss: 0.130 Epoch 6 Batch 62/269 - Train Accuracy: 0.876, Validation Accuracy: 0.871, Loss: 0.140 Epoch 6 Batch 63/269 - Train Accuracy: 0.867, Validation Accuracy: 0.868, Loss: 0.146 Epoch 6 Batch 64/269 - Train Accuracy: 0.883, Validation Accuracy: 0.869, Loss: 0.128 Epoch 6 Batch 65/269 - Train Accuracy: 0.870, Validation Accuracy: 0.864, Loss: 0.134 Epoch 6 Batch 66/269 - Train Accuracy: 0.862, Validation Accuracy: 0.873, Loss: 0.139 Epoch 6 Batch 67/269 - Train Accuracy: 0.851, Validation Accuracy: 0.868, Loss: 0.147 Epoch 6 Batch 68/269 - Train Accuracy: 0.856, Validation Accuracy: 0.868, Loss: 0.151 Epoch 6 Batch 69/269 - Train Accuracy: 0.844, Validation Accuracy: 0.851, Loss: 0.163 Epoch 6 Batch 70/269 - Train Accuracy: 0.867, Validation Accuracy: 0.857, Loss: 0.162 Epoch 6 Batch 71/269 - Train Accuracy: 0.838, Validation Accuracy: 0.836, Loss: 0.179 Epoch 6 Batch 72/269 - Train Accuracy: 0.836, Validation Accuracy: 0.851, Loss: 0.171 Epoch 6 Batch 73/269 - Train Accuracy: 0.854, Validation Accuracy: 0.858, Loss: 0.155 Epoch 6 Batch 74/269 - Train Accuracy: 0.829, Validation Accuracy: 0.831, Loss: 0.151 Epoch 6 Batch 75/269 - Train Accuracy: 0.876, Validation Accuracy: 
0.860, Loss: 0.158 Epoch 6 Batch 76/269 - Train Accuracy: 0.855, Validation Accuracy: 0.852, Loss: 0.142 Epoch 6 Batch 77/269 - Train Accuracy: 0.857, Validation Accuracy: 0.870, Loss: 0.150 Epoch 6 Batch 78/269 - Train Accuracy: 0.871, Validation Accuracy: 0.853, Loss: 0.136 Epoch 6 Batch 79/269 - Train Accuracy: 0.852, Validation Accuracy: 0.853, Loss: 0.148 Epoch 6 Batch 80/269 - Train Accuracy: 0.866, Validation Accuracy: 0.862, Loss: 0.148 Epoch 6 Batch 81/269 - Train Accuracy: 0.863, Validation Accuracy: 0.857, Loss: 0.152 Epoch 6 Batch 82/269 - Train Accuracy: 0.886, Validation Accuracy: 0.864, Loss: 0.133 Epoch 6 Batch 83/269 - Train Accuracy: 0.859, Validation Accuracy: 0.851, Loss: 0.157 Epoch 6 Batch 84/269 - Train Accuracy: 0.862, Validation Accuracy: 0.858, Loss: 0.146 Epoch 6 Batch 85/269 - Train Accuracy: 0.882, Validation Accuracy: 0.862, Loss: 0.141 Epoch 6 Batch 86/269 - Train Accuracy: 0.865, Validation Accuracy: 0.868, Loss: 0.134 Epoch 6 Batch 87/269 - Train Accuracy: 0.865, Validation Accuracy: 0.865, Loss: 0.148 Epoch 6 Batch 88/269 - Train Accuracy: 0.872, Validation Accuracy: 0.872, Loss: 0.138 Epoch 6 Batch 89/269 - Train Accuracy: 0.881, Validation Accuracy: 0.871, Loss: 0.141 Epoch 6 Batch 90/269 - Train Accuracy: 0.856, Validation Accuracy: 0.861, Loss: 0.144 Epoch 6 Batch 91/269 - Train Accuracy: 0.875, Validation Accuracy: 0.860, Loss: 0.138 Epoch 6 Batch 92/269 - Train Accuracy: 0.887, Validation Accuracy: 0.854, Loss: 0.123 Epoch 6 Batch 93/269 - Train Accuracy: 0.878, Validation Accuracy: 0.864, Loss: 0.132 Epoch 6 Batch 94/269 - Train Accuracy: 0.863, Validation Accuracy: 0.865, Loss: 0.146 Epoch 6 Batch 95/269 - Train Accuracy: 0.873, Validation Accuracy: 0.868, Loss: 0.130 Epoch 6 Batch 96/269 - Train Accuracy: 0.855, Validation Accuracy: 0.866, Loss: 0.138 Epoch 6 Batch 97/269 - Train Accuracy: 0.865, Validation Accuracy: 0.875, Loss: 0.135 Epoch 6 Batch 98/269 - Train Accuracy: 0.879, Validation Accuracy: 0.875, Loss: 0.132 
Epoch 6 Batch 99/269 - Train Accuracy: 0.881, Validation Accuracy: 0.878, Loss: 0.131
Epoch 6 Batch 100/269 - Train Accuracy: 0.889, Validation Accuracy: 0.876, Loss: 0.126
...
Epoch 6 Batch 267/269 - Train Accuracy: 0.904, Validation Accuracy: 0.888, Loss: 0.090
Epoch 7 Batch 0/269 - Train Accuracy: 0.896, Validation Accuracy: 0.891, Loss: 0.094
...
Epoch 7 Batch 267/269 - Train Accuracy: 0.930, Validation Accuracy: 0.912, Loss: 0.063
Epoch 8 Batch 0/269 - Train Accuracy: 0.921, Validation Accuracy: 0.910, Loss: 0.066
...
Epoch 8 Batch 45/269 - Train Accuracy: 0.918, Validation Accuracy: 0.912, Loss: 0.055
Epoch 8 Batch 46/269 - Train Accuracy: 0.942,
Validation Accuracy: 0.918, Loss: 0.050 Epoch 8 Batch 47/269 - Train Accuracy: 0.926, Validation Accuracy: 0.915, Loss: 0.047 Epoch 8 Batch 48/269 - Train Accuracy: 0.927, Validation Accuracy: 0.915, Loss: 0.052 Epoch 8 Batch 49/269 - Train Accuracy: 0.931, Validation Accuracy: 0.914, Loss: 0.052 Epoch 8 Batch 50/269 - Train Accuracy: 0.901, Validation Accuracy: 0.916, Loss: 0.066 Epoch 8 Batch 51/269 - Train Accuracy: 0.919, Validation Accuracy: 0.922, Loss: 0.056 Epoch 8 Batch 52/269 - Train Accuracy: 0.912, Validation Accuracy: 0.922, Loss: 0.051 Epoch 8 Batch 53/269 - Train Accuracy: 0.913, Validation Accuracy: 0.920, Loss: 0.062 Epoch 8 Batch 54/269 - Train Accuracy: 0.920, Validation Accuracy: 0.917, Loss: 0.053 Epoch 8 Batch 55/269 - Train Accuracy: 0.936, Validation Accuracy: 0.922, Loss: 0.056 Epoch 8 Batch 56/269 - Train Accuracy: 0.921, Validation Accuracy: 0.922, Loss: 0.057 Epoch 8 Batch 57/269 - Train Accuracy: 0.921, Validation Accuracy: 0.919, Loss: 0.062 Epoch 8 Batch 58/269 - Train Accuracy: 0.915, Validation Accuracy: 0.919, Loss: 0.054 Epoch 8 Batch 59/269 - Train Accuracy: 0.941, Validation Accuracy: 0.915, Loss: 0.043 Epoch 8 Batch 60/269 - Train Accuracy: 0.928, Validation Accuracy: 0.920, Loss: 0.051 Epoch 8 Batch 61/269 - Train Accuracy: 0.926, Validation Accuracy: 0.917, Loss: 0.050 Epoch 8 Batch 62/269 - Train Accuracy: 0.918, Validation Accuracy: 0.915, Loss: 0.060 Epoch 8 Batch 63/269 - Train Accuracy: 0.920, Validation Accuracy: 0.915, Loss: 0.058 Epoch 8 Batch 64/269 - Train Accuracy: 0.925, Validation Accuracy: 0.913, Loss: 0.049 Epoch 8 Batch 65/269 - Train Accuracy: 0.920, Validation Accuracy: 0.917, Loss: 0.053 Epoch 8 Batch 66/269 - Train Accuracy: 0.912, Validation Accuracy: 0.922, Loss: 0.058 Epoch 8 Batch 67/269 - Train Accuracy: 0.917, Validation Accuracy: 0.919, Loss: 0.059 Epoch 8 Batch 68/269 - Train Accuracy: 0.902, Validation Accuracy: 0.909, Loss: 0.064 Epoch 8 Batch 69/269 - Train Accuracy: 0.911, Validation Accuracy: 
0.917, Loss: 0.067 Epoch 8 Batch 70/269 - Train Accuracy: 0.922, Validation Accuracy: 0.916, Loss: 0.058 Epoch 8 Batch 71/269 - Train Accuracy: 0.926, Validation Accuracy: 0.918, Loss: 0.062 Epoch 8 Batch 72/269 - Train Accuracy: 0.916, Validation Accuracy: 0.917, Loss: 0.060 Epoch 8 Batch 73/269 - Train Accuracy: 0.920, Validation Accuracy: 0.919, Loss: 0.062 Epoch 8 Batch 74/269 - Train Accuracy: 0.933, Validation Accuracy: 0.920, Loss: 0.053 Epoch 8 Batch 75/269 - Train Accuracy: 0.923, Validation Accuracy: 0.919, Loss: 0.061 Epoch 8 Batch 76/269 - Train Accuracy: 0.923, Validation Accuracy: 0.911, Loss: 0.054 Epoch 8 Batch 77/269 - Train Accuracy: 0.915, Validation Accuracy: 0.914, Loss: 0.058 Epoch 8 Batch 78/269 - Train Accuracy: 0.933, Validation Accuracy: 0.914, Loss: 0.054 Epoch 8 Batch 79/269 - Train Accuracy: 0.922, Validation Accuracy: 0.912, Loss: 0.061 Epoch 8 Batch 80/269 - Train Accuracy: 0.925, Validation Accuracy: 0.913, Loss: 0.056 Epoch 8 Batch 81/269 - Train Accuracy: 0.912, Validation Accuracy: 0.919, Loss: 0.061 Epoch 8 Batch 82/269 - Train Accuracy: 0.931, Validation Accuracy: 0.918, Loss: 0.052 Epoch 8 Batch 83/269 - Train Accuracy: 0.910, Validation Accuracy: 0.915, Loss: 0.067 Epoch 8 Batch 84/269 - Train Accuracy: 0.920, Validation Accuracy: 0.905, Loss: 0.054 Epoch 8 Batch 85/269 - Train Accuracy: 0.930, Validation Accuracy: 0.905, Loss: 0.054 Epoch 8 Batch 86/269 - Train Accuracy: 0.915, Validation Accuracy: 0.907, Loss: 0.053 Epoch 8 Batch 87/269 - Train Accuracy: 0.915, Validation Accuracy: 0.913, Loss: 0.057 Epoch 8 Batch 88/269 - Train Accuracy: 0.923, Validation Accuracy: 0.907, Loss: 0.055 Epoch 8 Batch 89/269 - Train Accuracy: 0.926, Validation Accuracy: 0.908, Loss: 0.054 Epoch 8 Batch 90/269 - Train Accuracy: 0.916, Validation Accuracy: 0.911, Loss: 0.057 Epoch 8 Batch 91/269 - Train Accuracy: 0.930, Validation Accuracy: 0.916, Loss: 0.054 Epoch 8 Batch 92/269 - Train Accuracy: 0.948, Validation Accuracy: 0.912, Loss: 0.049 
Epoch 8 Batch 93/269 - Train Accuracy: 0.914, Validation Accuracy: 0.914, Loss: 0.053 Epoch 8 Batch 94/269 - Train Accuracy: 0.907, Validation Accuracy: 0.915, Loss: 0.067 Epoch 8 Batch 95/269 - Train Accuracy: 0.917, Validation Accuracy: 0.913, Loss: 0.052 Epoch 8 Batch 96/269 - Train Accuracy: 0.917, Validation Accuracy: 0.916, Loss: 0.067 Epoch 8 Batch 97/269 - Train Accuracy: 0.930, Validation Accuracy: 0.930, Loss: 0.067 Epoch 8 Batch 98/269 - Train Accuracy: 0.922, Validation Accuracy: 0.918, Loss: 0.055 Epoch 8 Batch 99/269 - Train Accuracy: 0.922, Validation Accuracy: 0.922, Loss: 0.063 Epoch 8 Batch 100/269 - Train Accuracy: 0.915, Validation Accuracy: 0.906, Loss: 0.058 Epoch 8 Batch 101/269 - Train Accuracy: 0.904, Validation Accuracy: 0.911, Loss: 0.078 Epoch 8 Batch 102/269 - Train Accuracy: 0.917, Validation Accuracy: 0.917, Loss: 0.061 Epoch 8 Batch 103/269 - Train Accuracy: 0.918, Validation Accuracy: 0.920, Loss: 0.070 Epoch 8 Batch 104/269 - Train Accuracy: 0.918, Validation Accuracy: 0.922, Loss: 0.063 Epoch 8 Batch 105/269 - Train Accuracy: 0.911, Validation Accuracy: 0.911, Loss: 0.061 Epoch 8 Batch 106/269 - Train Accuracy: 0.914, Validation Accuracy: 0.914, Loss: 0.055 Epoch 8 Batch 107/269 - Train Accuracy: 0.938, Validation Accuracy: 0.908, Loss: 0.060 Epoch 8 Batch 108/269 - Train Accuracy: 0.921, Validation Accuracy: 0.906, Loss: 0.058 Epoch 8 Batch 109/269 - Train Accuracy: 0.901, Validation Accuracy: 0.913, Loss: 0.069 Epoch 8 Batch 110/269 - Train Accuracy: 0.925, Validation Accuracy: 0.903, Loss: 0.053 Epoch 8 Batch 111/269 - Train Accuracy: 0.922, Validation Accuracy: 0.909, Loss: 0.065 Epoch 8 Batch 112/269 - Train Accuracy: 0.919, Validation Accuracy: 0.916, Loss: 0.064 Epoch 8 Batch 113/269 - Train Accuracy: 0.926, Validation Accuracy: 0.915, Loss: 0.059 Epoch 8 Batch 114/269 - Train Accuracy: 0.913, Validation Accuracy: 0.919, Loss: 0.059 Epoch 8 Batch 115/269 - Train Accuracy: 0.909, Validation Accuracy: 0.913, Loss: 0.064 Epoch 
8 Batch 116/269 - Train Accuracy: 0.927, Validation Accuracy: 0.913, Loss: 0.063 Epoch 8 Batch 117/269 - Train Accuracy: 0.900, Validation Accuracy: 0.901, Loss: 0.066 Epoch 8 Batch 118/269 - Train Accuracy: 0.925, Validation Accuracy: 0.915, Loss: 0.095 Epoch 8 Batch 119/269 - Train Accuracy: 0.887, Validation Accuracy: 0.882, Loss: 0.068 Epoch 8 Batch 120/269 - Train Accuracy: 0.893, Validation Accuracy: 0.884, Loss: 0.139 Epoch 8 Batch 121/269 - Train Accuracy: 0.910, Validation Accuracy: 0.891, Loss: 0.097 Epoch 8 Batch 122/269 - Train Accuracy: 0.908, Validation Accuracy: 0.903, Loss: 0.097 Epoch 8 Batch 123/269 - Train Accuracy: 0.895, Validation Accuracy: 0.898, Loss: 0.083 Epoch 8 Batch 124/269 - Train Accuracy: 0.913, Validation Accuracy: 0.891, Loss: 0.080 Epoch 8 Batch 125/269 - Train Accuracy: 0.912, Validation Accuracy: 0.879, Loss: 0.082 Epoch 8 Batch 126/269 - Train Accuracy: 0.869, Validation Accuracy: 0.889, Loss: 0.111 Epoch 8 Batch 127/269 - Train Accuracy: 0.894, Validation Accuracy: 0.897, Loss: 0.095 Epoch 8 Batch 128/269 - Train Accuracy: 0.877, Validation Accuracy: 0.883, Loss: 0.087 Epoch 8 Batch 129/269 - Train Accuracy: 0.871, Validation Accuracy: 0.885, Loss: 0.110 Epoch 8 Batch 130/269 - Train Accuracy: 0.919, Validation Accuracy: 0.905, Loss: 0.108 Epoch 8 Batch 131/269 - Train Accuracy: 0.867, Validation Accuracy: 0.878, Loss: 0.073 Epoch 8 Batch 132/269 - Train Accuracy: 0.883, Validation Accuracy: 0.892, Loss: 0.110 Epoch 8 Batch 133/269 - Train Accuracy: 0.904, Validation Accuracy: 0.889, Loss: 0.068 Epoch 8 Batch 134/269 - Train Accuracy: 0.900, Validation Accuracy: 0.899, Loss: 0.082 Epoch 8 Batch 135/269 - Train Accuracy: 0.909, Validation Accuracy: 0.916, Loss: 0.084 Epoch 8 Batch 136/269 - Train Accuracy: 0.886, Validation Accuracy: 0.900, Loss: 0.078 Epoch 8 Batch 137/269 - Train Accuracy: 0.917, Validation Accuracy: 0.904, Loss: 0.084 Epoch 8 Batch 138/269 - Train Accuracy: 0.892, Validation Accuracy: 0.906, Loss: 0.075 
Epoch 8 Batch 139/269 - Train Accuracy: 0.909, Validation Accuracy: 0.907, Loss: 0.065 Epoch 8 Batch 140/269 - Train Accuracy: 0.906, Validation Accuracy: 0.902, Loss: 0.071 Epoch 8 Batch 141/269 - Train Accuracy: 0.898, Validation Accuracy: 0.911, Loss: 0.078 Epoch 8 Batch 142/269 - Train Accuracy: 0.907, Validation Accuracy: 0.904, Loss: 0.069 Epoch 8 Batch 143/269 - Train Accuracy: 0.928, Validation Accuracy: 0.905, Loss: 0.062 Epoch 8 Batch 144/269 - Train Accuracy: 0.921, Validation Accuracy: 0.911, Loss: 0.061 Epoch 8 Batch 145/269 - Train Accuracy: 0.916, Validation Accuracy: 0.902, Loss: 0.062 Epoch 8 Batch 146/269 - Train Accuracy: 0.904, Validation Accuracy: 0.902, Loss: 0.062 Epoch 8 Batch 147/269 - Train Accuracy: 0.910, Validation Accuracy: 0.908, Loss: 0.072 Epoch 8 Batch 148/269 - Train Accuracy: 0.911, Validation Accuracy: 0.912, Loss: 0.064 Epoch 8 Batch 149/269 - Train Accuracy: 0.909, Validation Accuracy: 0.906, Loss: 0.067 Epoch 8 Batch 150/269 - Train Accuracy: 0.905, Validation Accuracy: 0.906, Loss: 0.068 Epoch 8 Batch 151/269 - Train Accuracy: 0.923, Validation Accuracy: 0.901, Loss: 0.064 Epoch 8 Batch 152/269 - Train Accuracy: 0.914, Validation Accuracy: 0.909, Loss: 0.066 Epoch 8 Batch 153/269 - Train Accuracy: 0.936, Validation Accuracy: 0.913, Loss: 0.056 Epoch 8 Batch 154/269 - Train Accuracy: 0.934, Validation Accuracy: 0.912, Loss: 0.054 Epoch 8 Batch 155/269 - Train Accuracy: 0.918, Validation Accuracy: 0.912, Loss: 0.060 Epoch 8 Batch 156/269 - Train Accuracy: 0.918, Validation Accuracy: 0.909, Loss: 0.060 Epoch 8 Batch 157/269 - Train Accuracy: 0.906, Validation Accuracy: 0.912, Loss: 0.053 Epoch 8 Batch 158/269 - Train Accuracy: 0.922, Validation Accuracy: 0.909, Loss: 0.058 Epoch 8 Batch 159/269 - Train Accuracy: 0.916, Validation Accuracy: 0.914, Loss: 0.056 Epoch 8 Batch 160/269 - Train Accuracy: 0.920, Validation Accuracy: 0.918, Loss: 0.057 Epoch 8 Batch 161/269 - Train Accuracy: 0.922, Validation Accuracy: 0.914, Loss: 
0.057 Epoch 8 Batch 162/269 - Train Accuracy: 0.938, Validation Accuracy: 0.918, Loss: 0.053 Epoch 8 Batch 163/269 - Train Accuracy: 0.924, Validation Accuracy: 0.914, Loss: 0.057 Epoch 8 Batch 164/269 - Train Accuracy: 0.926, Validation Accuracy: 0.913, Loss: 0.055 Epoch 8 Batch 165/269 - Train Accuracy: 0.920, Validation Accuracy: 0.913, Loss: 0.052 Epoch 8 Batch 166/269 - Train Accuracy: 0.930, Validation Accuracy: 0.913, Loss: 0.054 Epoch 8 Batch 167/269 - Train Accuracy: 0.925, Validation Accuracy: 0.916, Loss: 0.054 Epoch 8 Batch 168/269 - Train Accuracy: 0.915, Validation Accuracy: 0.915, Loss: 0.054 Epoch 8 Batch 169/269 - Train Accuracy: 0.910, Validation Accuracy: 0.918, Loss: 0.054 Epoch 8 Batch 170/269 - Train Accuracy: 0.911, Validation Accuracy: 0.918, Loss: 0.050 Epoch 8 Batch 171/269 - Train Accuracy: 0.928, Validation Accuracy: 0.915, Loss: 0.055 Epoch 8 Batch 172/269 - Train Accuracy: 0.902, Validation Accuracy: 0.915, Loss: 0.061 Epoch 8 Batch 173/269 - Train Accuracy: 0.926, Validation Accuracy: 0.916, Loss: 0.049 Epoch 8 Batch 174/269 - Train Accuracy: 0.931, Validation Accuracy: 0.911, Loss: 0.053 Epoch 8 Batch 175/269 - Train Accuracy: 0.902, Validation Accuracy: 0.915, Loss: 0.064 Epoch 8 Batch 176/269 - Train Accuracy: 0.906, Validation Accuracy: 0.922, Loss: 0.057 Epoch 8 Batch 177/269 - Train Accuracy: 0.930, Validation Accuracy: 0.919, Loss: 0.047 Epoch 8 Batch 178/269 - Train Accuracy: 0.921, Validation Accuracy: 0.918, Loss: 0.052 Epoch 8 Batch 179/269 - Train Accuracy: 0.920, Validation Accuracy: 0.920, Loss: 0.054 Epoch 8 Batch 180/269 - Train Accuracy: 0.931, Validation Accuracy: 0.919, Loss: 0.047 Epoch 8 Batch 181/269 - Train Accuracy: 0.918, Validation Accuracy: 0.915, Loss: 0.057 Epoch 8 Batch 182/269 - Train Accuracy: 0.927, Validation Accuracy: 0.917, Loss: 0.053 Epoch 8 Batch 183/269 - Train Accuracy: 0.933, Validation Accuracy: 0.917, Loss: 0.043 Epoch 8 Batch 184/269 - Train Accuracy: 0.933, Validation Accuracy: 0.923, 
Loss: 0.051 Epoch 8 Batch 185/269 - Train Accuracy: 0.928, Validation Accuracy: 0.921, Loss: 0.052 Epoch 8 Batch 186/269 - Train Accuracy: 0.922, Validation Accuracy: 0.919, Loss: 0.046 Epoch 8 Batch 187/269 - Train Accuracy: 0.923, Validation Accuracy: 0.923, Loss: 0.051 Epoch 8 Batch 188/269 - Train Accuracy: 0.925, Validation Accuracy: 0.927, Loss: 0.051 Epoch 8 Batch 189/269 - Train Accuracy: 0.921, Validation Accuracy: 0.924, Loss: 0.048 Epoch 8 Batch 190/269 - Train Accuracy: 0.929, Validation Accuracy: 0.925, Loss: 0.049 Epoch 8 Batch 191/269 - Train Accuracy: 0.926, Validation Accuracy: 0.913, Loss: 0.048 Epoch 8 Batch 192/269 - Train Accuracy: 0.924, Validation Accuracy: 0.912, Loss: 0.051 Epoch 8 Batch 193/269 - Train Accuracy: 0.928, Validation Accuracy: 0.921, Loss: 0.053 Epoch 8 Batch 194/269 - Train Accuracy: 0.927, Validation Accuracy: 0.925, Loss: 0.052 Epoch 8 Batch 195/269 - Train Accuracy: 0.917, Validation Accuracy: 0.928, Loss: 0.051 Epoch 8 Batch 196/269 - Train Accuracy: 0.918, Validation Accuracy: 0.925, Loss: 0.054 Epoch 8 Batch 197/269 - Train Accuracy: 0.915, Validation Accuracy: 0.922, Loss: 0.057 Epoch 8 Batch 198/269 - Train Accuracy: 0.922, Validation Accuracy: 0.914, Loss: 0.055 Epoch 8 Batch 199/269 - Train Accuracy: 0.919, Validation Accuracy: 0.914, Loss: 0.057 Epoch 8 Batch 200/269 - Train Accuracy: 0.928, Validation Accuracy: 0.915, Loss: 0.055 Epoch 8 Batch 201/269 - Train Accuracy: 0.924, Validation Accuracy: 0.920, Loss: 0.058 Epoch 8 Batch 202/269 - Train Accuracy: 0.923, Validation Accuracy: 0.919, Loss: 0.050 Epoch 8 Batch 203/269 - Train Accuracy: 0.928, Validation Accuracy: 0.912, Loss: 0.057 Epoch 8 Batch 204/269 - Train Accuracy: 0.913, Validation Accuracy: 0.915, Loss: 0.056 Epoch 8 Batch 205/269 - Train Accuracy: 0.932, Validation Accuracy: 0.918, Loss: 0.051 Epoch 8 Batch 206/269 - Train Accuracy: 0.908, Validation Accuracy: 0.923, Loss: 0.060 Epoch 8 Batch 207/269 - Train Accuracy: 0.916, Validation Accuracy: 
0.920, Loss: 0.050 Epoch 8 Batch 208/269 - Train Accuracy: 0.919, Validation Accuracy: 0.915, Loss: 0.053 Epoch 8 Batch 209/269 - Train Accuracy: 0.933, Validation Accuracy: 0.919, Loss: 0.049 Epoch 8 Batch 210/269 - Train Accuracy: 0.923, Validation Accuracy: 0.914, Loss: 0.050 Epoch 8 Batch 211/269 - Train Accuracy: 0.931, Validation Accuracy: 0.923, Loss: 0.051 Epoch 8 Batch 212/269 - Train Accuracy: 0.919, Validation Accuracy: 0.912, Loss: 0.057 Epoch 8 Batch 213/269 - Train Accuracy: 0.914, Validation Accuracy: 0.910, Loss: 0.049 Epoch 8 Batch 214/269 - Train Accuracy: 0.933, Validation Accuracy: 0.921, Loss: 0.051 Epoch 8 Batch 215/269 - Train Accuracy: 0.935, Validation Accuracy: 0.922, Loss: 0.050 Epoch 8 Batch 216/269 - Train Accuracy: 0.913, Validation Accuracy: 0.921, Loss: 0.066 Epoch 8 Batch 217/269 - Train Accuracy: 0.917, Validation Accuracy: 0.926, Loss: 0.052 Epoch 8 Batch 218/269 - Train Accuracy: 0.926, Validation Accuracy: 0.921, Loss: 0.049 Epoch 8 Batch 219/269 - Train Accuracy: 0.923, Validation Accuracy: 0.919, Loss: 0.054 Epoch 8 Batch 220/269 - Train Accuracy: 0.919, Validation Accuracy: 0.923, Loss: 0.047 Epoch 8 Batch 221/269 - Train Accuracy: 0.920, Validation Accuracy: 0.926, Loss: 0.052 Epoch 8 Batch 222/269 - Train Accuracy: 0.944, Validation Accuracy: 0.921, Loss: 0.043 Epoch 8 Batch 223/269 - Train Accuracy: 0.916, Validation Accuracy: 0.924, Loss: 0.046 Epoch 8 Batch 224/269 - Train Accuracy: 0.931, Validation Accuracy: 0.921, Loss: 0.056 Epoch 8 Batch 225/269 - Train Accuracy: 0.923, Validation Accuracy: 0.921, Loss: 0.046 Epoch 8 Batch 226/269 - Train Accuracy: 0.927, Validation Accuracy: 0.923, Loss: 0.052 Epoch 8 Batch 227/269 - Train Accuracy: 0.927, Validation Accuracy: 0.920, Loss: 0.059 Epoch 8 Batch 228/269 - Train Accuracy: 0.920, Validation Accuracy: 0.925, Loss: 0.047 Epoch 8 Batch 229/269 - Train Accuracy: 0.916, Validation Accuracy: 0.922, Loss: 0.048 Epoch 8 Batch 230/269 - Train Accuracy: 0.935, Validation 
Accuracy: 0.926, Loss: 0.047 Epoch 8 Batch 231/269 - Train Accuracy: 0.922, Validation Accuracy: 0.926, Loss: 0.053 Epoch 8 Batch 232/269 - Train Accuracy: 0.919, Validation Accuracy: 0.928, Loss: 0.050 Epoch 8 Batch 233/269 - Train Accuracy: 0.932, Validation Accuracy: 0.922, Loss: 0.052 Epoch 8 Batch 234/269 - Train Accuracy: 0.926, Validation Accuracy: 0.921, Loss: 0.048 Epoch 8 Batch 235/269 - Train Accuracy: 0.944, Validation Accuracy: 0.925, Loss: 0.042 Epoch 8 Batch 236/269 - Train Accuracy: 0.922, Validation Accuracy: 0.928, Loss: 0.045 Epoch 8 Batch 237/269 - Train Accuracy: 0.940, Validation Accuracy: 0.930, Loss: 0.048 Epoch 8 Batch 238/269 - Train Accuracy: 0.934, Validation Accuracy: 0.927, Loss: 0.048 Epoch 8 Batch 239/269 - Train Accuracy: 0.925, Validation Accuracy: 0.929, Loss: 0.046 Epoch 8 Batch 240/269 - Train Accuracy: 0.933, Validation Accuracy: 0.934, Loss: 0.044 Epoch 8 Batch 241/269 - Train Accuracy: 0.923, Validation Accuracy: 0.932, Loss: 0.053 Epoch 8 Batch 242/269 - Train Accuracy: 0.939, Validation Accuracy: 0.928, Loss: 0.045 Epoch 8 Batch 243/269 - Train Accuracy: 0.931, Validation Accuracy: 0.926, Loss: 0.042 Epoch 8 Batch 244/269 - Train Accuracy: 0.918, Validation Accuracy: 0.925, Loss: 0.050 Epoch 8 Batch 245/269 - Train Accuracy: 0.921, Validation Accuracy: 0.911, Loss: 0.048 Epoch 8 Batch 246/269 - Train Accuracy: 0.918, Validation Accuracy: 0.912, Loss: 0.049 Epoch 8 Batch 247/269 - Train Accuracy: 0.927, Validation Accuracy: 0.918, Loss: 0.048 Epoch 8 Batch 248/269 - Train Accuracy: 0.924, Validation Accuracy: 0.920, Loss: 0.048 Epoch 8 Batch 249/269 - Train Accuracy: 0.939, Validation Accuracy: 0.924, Loss: 0.040 Epoch 8 Batch 250/269 - Train Accuracy: 0.930, Validation Accuracy: 0.922, Loss: 0.045 Epoch 8 Batch 251/269 - Train Accuracy: 0.955, Validation Accuracy: 0.923, Loss: 0.044 Epoch 8 Batch 252/269 - Train Accuracy: 0.928, Validation Accuracy: 0.924, Loss: 0.039 Epoch 8 Batch 253/269 - Train Accuracy: 0.918, 
Validation Accuracy: 0.928, Loss: 0.046 Epoch 8 Batch 254/269 - Train Accuracy: 0.921, Validation Accuracy: 0.924, Loss: 0.050 Epoch 8 Batch 255/269 - Train Accuracy: 0.930, Validation Accuracy: 0.923, Loss: 0.048 Epoch 8 Batch 256/269 - Train Accuracy: 0.915, Validation Accuracy: 0.917, Loss: 0.046 Epoch 8 Batch 257/269 - Train Accuracy: 0.917, Validation Accuracy: 0.916, Loss: 0.051 Epoch 8 Batch 258/269 - Train Accuracy: 0.917, Validation Accuracy: 0.917, Loss: 0.050 Epoch 8 Batch 259/269 - Train Accuracy: 0.927, Validation Accuracy: 0.914, Loss: 0.046 Epoch 8 Batch 260/269 - Train Accuracy: 0.927, Validation Accuracy: 0.922, Loss: 0.056 Epoch 8 Batch 261/269 - Train Accuracy: 0.925, Validation Accuracy: 0.928, Loss: 0.047 Epoch 8 Batch 262/269 - Train Accuracy: 0.932, Validation Accuracy: 0.926, Loss: 0.045 Epoch 8 Batch 263/269 - Train Accuracy: 0.921, Validation Accuracy: 0.928, Loss: 0.050 Epoch 8 Batch 264/269 - Train Accuracy: 0.900, Validation Accuracy: 0.928, Loss: 0.055 Epoch 8 Batch 265/269 - Train Accuracy: 0.930, Validation Accuracy: 0.927, Loss: 0.047 Epoch 8 Batch 266/269 - Train Accuracy: 0.930, Validation Accuracy: 0.926, Loss: 0.041 Epoch 8 Batch 267/269 - Train Accuracy: 0.941, Validation Accuracy: 0.927, Loss: 0.047 Epoch 9 Batch 0/269 - Train Accuracy: 0.928, Validation Accuracy: 0.920, Loss: 0.051 Epoch 9 Batch 1/269 - Train Accuracy: 0.922, Validation Accuracy: 0.916, Loss: 0.045 Epoch 9 Batch 2/269 - Train Accuracy: 0.935, Validation Accuracy: 0.916, Loss: 0.047 Epoch 9 Batch 3/269 - Train Accuracy: 0.930, Validation Accuracy: 0.917, Loss: 0.044 Epoch 9 Batch 4/269 - Train Accuracy: 0.927, Validation Accuracy: 0.923, Loss: 0.045 Epoch 9 Batch 5/269 - Train Accuracy: 0.933, Validation Accuracy: 0.924, Loss: 0.047 Epoch 9 Batch 6/269 - Train Accuracy: 0.950, Validation Accuracy: 0.925, Loss: 0.044 Epoch 9 Batch 7/269 - Train Accuracy: 0.932, Validation Accuracy: 0.925, Loss: 0.045 Epoch 9 Batch 8/269 - Train Accuracy: 0.933, Validation 
Accuracy: 0.925, Loss: 0.049 Epoch 9 Batch 9/269 - Train Accuracy: 0.930, Validation Accuracy: 0.922, Loss: 0.048 Epoch 9 Batch 10/269 - Train Accuracy: 0.933, Validation Accuracy: 0.926, Loss: 0.040 Epoch 9 Batch 11/269 - Train Accuracy: 0.933, Validation Accuracy: 0.928, Loss: 0.050 Epoch 9 Batch 12/269 - Train Accuracy: 0.926, Validation Accuracy: 0.926, Loss: 0.055 Epoch 9 Batch 13/269 - Train Accuracy: 0.928, Validation Accuracy: 0.926, Loss: 0.041 Epoch 9 Batch 14/269 - Train Accuracy: 0.917, Validation Accuracy: 0.929, Loss: 0.045 Epoch 9 Batch 15/269 - Train Accuracy: 0.934, Validation Accuracy: 0.927, Loss: 0.036 Epoch 9 Batch 16/269 - Train Accuracy: 0.925, Validation Accuracy: 0.928, Loss: 0.049 Epoch 9 Batch 17/269 - Train Accuracy: 0.939, Validation Accuracy: 0.927, Loss: 0.039 Epoch 9 Batch 18/269 - Train Accuracy: 0.929, Validation Accuracy: 0.924, Loss: 0.047 Epoch 9 Batch 19/269 - Train Accuracy: 0.936, Validation Accuracy: 0.925, Loss: 0.038 Epoch 9 Batch 20/269 - Train Accuracy: 0.934, Validation Accuracy: 0.926, Loss: 0.043 Epoch 9 Batch 21/269 - Train Accuracy: 0.914, Validation Accuracy: 0.924, Loss: 0.050 Epoch 9 Batch 22/269 - Train Accuracy: 0.945, Validation Accuracy: 0.924, Loss: 0.041 Epoch 9 Batch 23/269 - Train Accuracy: 0.931, Validation Accuracy: 0.924, Loss: 0.047 Epoch 9 Batch 24/269 - Train Accuracy: 0.932, Validation Accuracy: 0.926, Loss: 0.045 Epoch 9 Batch 25/269 - Train Accuracy: 0.929, Validation Accuracy: 0.922, Loss: 0.048 Epoch 9 Batch 26/269 - Train Accuracy: 0.922, Validation Accuracy: 0.922, Loss: 0.043 Epoch 9 Batch 27/269 - Train Accuracy: 0.922, Validation Accuracy: 0.924, Loss: 0.044 Epoch 9 Batch 28/269 - Train Accuracy: 0.913, Validation Accuracy: 0.928, Loss: 0.050 Epoch 9 Batch 29/269 - Train Accuracy: 0.934, Validation Accuracy: 0.921, Loss: 0.047 Epoch 9 Batch 30/269 - Train Accuracy: 0.941, Validation Accuracy: 0.929, Loss: 0.043 Epoch 9 Batch 31/269 - Train Accuracy: 0.932, Validation Accuracy: 0.924, Loss: 
0.044 Epoch 9 Batch 32/269 - Train Accuracy: 0.940, Validation Accuracy: 0.923, Loss: 0.040 Epoch 9 Batch 33/269 - Train Accuracy: 0.925, Validation Accuracy: 0.928, Loss: 0.040 Epoch 9 Batch 34/269 - Train Accuracy: 0.925, Validation Accuracy: 0.927, Loss: 0.042 Epoch 9 Batch 35/269 - Train Accuracy: 0.935, Validation Accuracy: 0.927, Loss: 0.057 Epoch 9 Batch 36/269 - Train Accuracy: 0.925, Validation Accuracy: 0.919, Loss: 0.044 Epoch 9 Batch 37/269 - Train Accuracy: 0.929, Validation Accuracy: 0.923, Loss: 0.047 Epoch 9 Batch 38/269 - Train Accuracy: 0.936, Validation Accuracy: 0.924, Loss: 0.043 Epoch 9 Batch 39/269 - Train Accuracy: 0.932, Validation Accuracy: 0.925, Loss: 0.041 Epoch 9 Batch 40/269 - Train Accuracy: 0.919, Validation Accuracy: 0.919, Loss: 0.049 Epoch 9 Batch 41/269 - Train Accuracy: 0.931, Validation Accuracy: 0.918, Loss: 0.044 Epoch 9 Batch 42/269 - Train Accuracy: 0.938, Validation Accuracy: 0.923, Loss: 0.042 Epoch 9 Batch 43/269 - Train Accuracy: 0.930, Validation Accuracy: 0.923, Loss: 0.045 Epoch 9 Batch 44/269 - Train Accuracy: 0.942, Validation Accuracy: 0.924, Loss: 0.047 Epoch 9 Batch 45/269 - Train Accuracy: 0.935, Validation Accuracy: 0.927, Loss: 0.044 Epoch 9 Batch 46/269 - Train Accuracy: 0.938, Validation Accuracy: 0.930, Loss: 0.039 Epoch 9 Batch 47/269 - Train Accuracy: 0.936, Validation Accuracy: 0.924, Loss: 0.037 Epoch 9 Batch 48/269 - Train Accuracy: 0.934, Validation Accuracy: 0.924, Loss: 0.041 Epoch 9 Batch 49/269 - Train Accuracy: 0.933, Validation Accuracy: 0.922, Loss: 0.041 Epoch 9 Batch 50/269 - Train Accuracy: 0.905, Validation Accuracy: 0.925, Loss: 0.053 Epoch 9 Batch 51/269 - Train Accuracy: 0.930, Validation Accuracy: 0.928, Loss: 0.044 Epoch 9 Batch 52/269 - Train Accuracy: 0.930, Validation Accuracy: 0.928, Loss: 0.039 Epoch 9 Batch 53/269 - Train Accuracy: 0.920, Validation Accuracy: 0.933, Loss: 0.049 Epoch 9 Batch 54/269 - Train Accuracy: 0.926, Validation Accuracy: 0.928, Loss: 0.040 Epoch 9 Batch 
55/269 - Train Accuracy: 0.941, Validation Accuracy: 0.933, Loss: 0.044 Epoch 9 Batch 56/269 - Train Accuracy: 0.927, Validation Accuracy: 0.933, Loss: 0.044 Epoch 9 Batch 57/269 - Train Accuracy: 0.934, Validation Accuracy: 0.935, Loss: 0.048 Epoch 9 Batch 58/269 - Train Accuracy: 0.936, Validation Accuracy: 0.930, Loss: 0.042 Epoch 9 Batch 59/269 - Train Accuracy: 0.953, Validation Accuracy: 0.932, Loss: 0.034 Epoch 9 Batch 60/269 - Train Accuracy: 0.939, Validation Accuracy: 0.928, Loss: 0.040 Epoch 9 Batch 61/269 - Train Accuracy: 0.939, Validation Accuracy: 0.930, Loss: 0.040 Epoch 9 Batch 62/269 - Train Accuracy: 0.929, Validation Accuracy: 0.933, Loss: 0.048 Epoch 9 Batch 63/269 - Train Accuracy: 0.937, Validation Accuracy: 0.936, Loss: 0.045 Epoch 9 Batch 64/269 - Train Accuracy: 0.935, Validation Accuracy: 0.934, Loss: 0.038 Epoch 9 Batch 65/269 - Train Accuracy: 0.930, Validation Accuracy: 0.930, Loss: 0.041 Epoch 9 Batch 66/269 - Train Accuracy: 0.929, Validation Accuracy: 0.930, Loss: 0.047 Epoch 9 Batch 67/269 - Train Accuracy: 0.929, Validation Accuracy: 0.928, Loss: 0.046 Epoch 9 Batch 68/269 - Train Accuracy: 0.918, Validation Accuracy: 0.925, Loss: 0.051 Epoch 9 Batch 69/269 - Train Accuracy: 0.920, Validation Accuracy: 0.922, Loss: 0.054 Epoch 9 Batch 70/269 - Train Accuracy: 0.931, Validation Accuracy: 0.925, Loss: 0.044 Epoch 9 Batch 71/269 - Train Accuracy: 0.936, Validation Accuracy: 0.929, Loss: 0.050 Epoch 9 Batch 72/269 - Train Accuracy: 0.927, Validation Accuracy: 0.930, Loss: 0.048 Epoch 9 Batch 73/269 - Train Accuracy: 0.926, Validation Accuracy: 0.932, Loss: 0.048 Epoch 9 Batch 74/269 - Train Accuracy: 0.941, Validation Accuracy: 0.933, Loss: 0.041 Epoch 9 Batch 75/269 - Train Accuracy: 0.941, Validation Accuracy: 0.933, Loss: 0.047 Epoch 9 Batch 76/269 - Train Accuracy: 0.932, Validation Accuracy: 0.934, Loss: 0.042 Epoch 9 Batch 77/269 - Train Accuracy: 0.929, Validation Accuracy: 0.926, Loss: 0.043 Epoch 9 Batch 78/269 - Train 
Accuracy: 0.942, Validation Accuracy: 0.920, Loss: 0.043
[... training log condensed: Epoch 9, Batches 79–267 — Train Accuracy ≈ 0.91–0.96, Validation Accuracy ≈ 0.92–0.94, Loss ≈ 0.03–0.05 ...]
Epoch 10 Batch 0/269 - Train Accuracy: 0.936, Validation Accuracy: 0.935, Loss: 0.041
[... Epoch 10, Batches 0–203 — Train Accuracy ≈ 0.91–0.96, Validation Accuracy ≈ 0.92–0.95, Loss ≈ 0.03–0.05 ...]
Epoch 10 Batch 204/269 - Train Accuracy: 0.909, Validation Accuracy: 0.923, Loss: 0.063
[... brief divergence, Epoch 10, Batches 204–230 — Train/Validation Accuracy drop to ≈ 0.83–0.90, Loss spikes to 0.272 at Batch 208, then recovers over Batches 231–267 ...]
Epoch 10 Batch 267/269 - Train Accuracy: 0.950, Validation Accuracy: 0.938, Loss: 0.038
Epoch 11 Batch 0/269 - Train Accuracy: 0.940, Validation Accuracy: 0.938, Loss: 0.039
[... Epoch 11, Batches 1–21 — Train Accuracy ≈ 0.92–0.95, Validation Accuracy ≈ 0.93–0.94, Loss ≈ 0.03–0.04 ...]
Epoch 11 Batch 22/269 - Train Accuracy: 0.955,
Validation Accuracy: 0.939, Loss: 0.031 Epoch 11 Batch 23/269 - Train Accuracy: 0.946, Validation Accuracy: 0.939, Loss: 0.034 Epoch 11 Batch 24/269 - Train Accuracy: 0.944, Validation Accuracy: 0.938, Loss: 0.033 Epoch 11 Batch 25/269 - Train Accuracy: 0.946, Validation Accuracy: 0.929, Loss: 0.035 Epoch 11 Batch 26/269 - Train Accuracy: 0.935, Validation Accuracy: 0.934, Loss: 0.033 Epoch 11 Batch 27/269 - Train Accuracy: 0.940, Validation Accuracy: 0.940, Loss: 0.034 Epoch 11 Batch 28/269 - Train Accuracy: 0.923, Validation Accuracy: 0.942, Loss: 0.035 Epoch 11 Batch 29/269 - Train Accuracy: 0.939, Validation Accuracy: 0.940, Loss: 0.033 Epoch 11 Batch 30/269 - Train Accuracy: 0.957, Validation Accuracy: 0.940, Loss: 0.032 Epoch 11 Batch 31/269 - Train Accuracy: 0.950, Validation Accuracy: 0.940, Loss: 0.033 Epoch 11 Batch 32/269 - Train Accuracy: 0.950, Validation Accuracy: 0.944, Loss: 0.028 Epoch 11 Batch 33/269 - Train Accuracy: 0.944, Validation Accuracy: 0.938, Loss: 0.028 Epoch 11 Batch 34/269 - Train Accuracy: 0.937, Validation Accuracy: 0.936, Loss: 0.030 Epoch 11 Batch 35/269 - Train Accuracy: 0.951, Validation Accuracy: 0.938, Loss: 0.045 Epoch 11 Batch 36/269 - Train Accuracy: 0.935, Validation Accuracy: 0.934, Loss: 0.032 Epoch 11 Batch 37/269 - Train Accuracy: 0.943, Validation Accuracy: 0.934, Loss: 0.033 Epoch 11 Batch 38/269 - Train Accuracy: 0.947, Validation Accuracy: 0.935, Loss: 0.030 Epoch 11 Batch 39/269 - Train Accuracy: 0.947, Validation Accuracy: 0.940, Loss: 0.029 Epoch 11 Batch 40/269 - Train Accuracy: 0.935, Validation Accuracy: 0.941, Loss: 0.036 Epoch 11 Batch 41/269 - Train Accuracy: 0.941, Validation Accuracy: 0.940, Loss: 0.032 Epoch 11 Batch 42/269 - Train Accuracy: 0.953, Validation Accuracy: 0.933, Loss: 0.029 Epoch 11 Batch 43/269 - Train Accuracy: 0.943, Validation Accuracy: 0.934, Loss: 0.033 Epoch 11 Batch 44/269 - Train Accuracy: 0.956, Validation Accuracy: 0.942, Loss: 0.034 Epoch 11 Batch 45/269 - Train Accuracy: 
0.948, Validation Accuracy: 0.944, Loss: 0.031 Epoch 11 Batch 46/269 - Train Accuracy: 0.950, Validation Accuracy: 0.948, Loss: 0.028 Epoch 11 Batch 47/269 - Train Accuracy: 0.952, Validation Accuracy: 0.947, Loss: 0.027 Epoch 11 Batch 48/269 - Train Accuracy: 0.951, Validation Accuracy: 0.944, Loss: 0.029 Epoch 11 Batch 49/269 - Train Accuracy: 0.954, Validation Accuracy: 0.947, Loss: 0.029 Epoch 11 Batch 50/269 - Train Accuracy: 0.932, Validation Accuracy: 0.939, Loss: 0.038 Epoch 11 Batch 51/269 - Train Accuracy: 0.942, Validation Accuracy: 0.937, Loss: 0.031 Epoch 11 Batch 52/269 - Train Accuracy: 0.942, Validation Accuracy: 0.938, Loss: 0.028 Epoch 11 Batch 53/269 - Train Accuracy: 0.927, Validation Accuracy: 0.945, Loss: 0.036 Epoch 11 Batch 54/269 - Train Accuracy: 0.948, Validation Accuracy: 0.944, Loss: 0.027 Epoch 11 Batch 55/269 - Train Accuracy: 0.948, Validation Accuracy: 0.944, Loss: 0.031 Epoch 11 Batch 56/269 - Train Accuracy: 0.937, Validation Accuracy: 0.944, Loss: 0.030 Epoch 11 Batch 57/269 - Train Accuracy: 0.956, Validation Accuracy: 0.943, Loss: 0.034 Epoch 11 Batch 58/269 - Train Accuracy: 0.944, Validation Accuracy: 0.943, Loss: 0.029 Epoch 11 Batch 59/269 - Train Accuracy: 0.962, Validation Accuracy: 0.941, Loss: 0.024 Epoch 11 Batch 60/269 - Train Accuracy: 0.952, Validation Accuracy: 0.945, Loss: 0.028 Epoch 11 Batch 61/269 - Train Accuracy: 0.950, Validation Accuracy: 0.946, Loss: 0.028 Epoch 11 Batch 62/269 - Train Accuracy: 0.941, Validation Accuracy: 0.948, Loss: 0.036 Epoch 11 Batch 63/269 - Train Accuracy: 0.943, Validation Accuracy: 0.950, Loss: 0.034 Epoch 11 Batch 64/269 - Train Accuracy: 0.951, Validation Accuracy: 0.948, Loss: 0.027 Epoch 11 Batch 65/269 - Train Accuracy: 0.948, Validation Accuracy: 0.952, Loss: 0.028 Epoch 11 Batch 66/269 - Train Accuracy: 0.937, Validation Accuracy: 0.949, Loss: 0.034 Epoch 11 Batch 67/269 - Train Accuracy: 0.947, Validation Accuracy: 0.950, Loss: 0.034 Epoch 11 Batch 68/269 - Train 
Accuracy: 0.932, Validation Accuracy: 0.949, Loss: 0.035 Epoch 11 Batch 69/269 - Train Accuracy: 0.935, Validation Accuracy: 0.948, Loss: 0.038 Epoch 11 Batch 70/269 - Train Accuracy: 0.953, Validation Accuracy: 0.943, Loss: 0.031 Epoch 11 Batch 71/269 - Train Accuracy: 0.947, Validation Accuracy: 0.945, Loss: 0.037 Epoch 11 Batch 72/269 - Train Accuracy: 0.937, Validation Accuracy: 0.946, Loss: 0.035 Epoch 11 Batch 73/269 - Train Accuracy: 0.946, Validation Accuracy: 0.945, Loss: 0.035 Epoch 11 Batch 74/269 - Train Accuracy: 0.954, Validation Accuracy: 0.944, Loss: 0.029 Epoch 11 Batch 75/269 - Train Accuracy: 0.953, Validation Accuracy: 0.942, Loss: 0.034 Epoch 11 Batch 76/269 - Train Accuracy: 0.941, Validation Accuracy: 0.938, Loss: 0.030 Epoch 11 Batch 77/269 - Train Accuracy: 0.947, Validation Accuracy: 0.940, Loss: 0.030 Epoch 11 Batch 78/269 - Train Accuracy: 0.951, Validation Accuracy: 0.938, Loss: 0.030 Epoch 11 Batch 79/269 - Train Accuracy: 0.944, Validation Accuracy: 0.935, Loss: 0.034 Epoch 11 Batch 80/269 - Train Accuracy: 0.949, Validation Accuracy: 0.934, Loss: 0.029 Epoch 11 Batch 81/269 - Train Accuracy: 0.930, Validation Accuracy: 0.935, Loss: 0.034 Epoch 11 Batch 82/269 - Train Accuracy: 0.951, Validation Accuracy: 0.938, Loss: 0.027 Epoch 11 Batch 83/269 - Train Accuracy: 0.945, Validation Accuracy: 0.935, Loss: 0.040 Epoch 11 Batch 84/269 - Train Accuracy: 0.947, Validation Accuracy: 0.934, Loss: 0.028 Epoch 11 Batch 85/269 - Train Accuracy: 0.951, Validation Accuracy: 0.935, Loss: 0.029 Epoch 11 Batch 86/269 - Train Accuracy: 0.943, Validation Accuracy: 0.938, Loss: 0.030 Epoch 11 Batch 87/269 - Train Accuracy: 0.944, Validation Accuracy: 0.935, Loss: 0.032 Epoch 11 Batch 88/269 - Train Accuracy: 0.946, Validation Accuracy: 0.934, Loss: 0.028 Epoch 11 Batch 89/269 - Train Accuracy: 0.951, Validation Accuracy: 0.938, Loss: 0.028 Epoch 11 Batch 90/269 - Train Accuracy: 0.942, Validation Accuracy: 0.937, Loss: 0.031 Epoch 11 Batch 91/269 - 
Train Accuracy: 0.953, Validation Accuracy: 0.941, Loss: 0.029 Epoch 11 Batch 92/269 - Train Accuracy: 0.964, Validation Accuracy: 0.943, Loss: 0.026 Epoch 11 Batch 93/269 - Train Accuracy: 0.942, Validation Accuracy: 0.941, Loss: 0.031 Epoch 11 Batch 94/269 - Train Accuracy: 0.942, Validation Accuracy: 0.941, Loss: 0.036 Epoch 11 Batch 95/269 - Train Accuracy: 0.951, Validation Accuracy: 0.945, Loss: 0.028 Epoch 11 Batch 96/269 - Train Accuracy: 0.931, Validation Accuracy: 0.942, Loss: 0.033 Epoch 11 Batch 97/269 - Train Accuracy: 0.949, Validation Accuracy: 0.946, Loss: 0.033 Epoch 11 Batch 98/269 - Train Accuracy: 0.951, Validation Accuracy: 0.951, Loss: 0.029 Epoch 11 Batch 99/269 - Train Accuracy: 0.948, Validation Accuracy: 0.948, Loss: 0.029 Epoch 11 Batch 100/269 - Train Accuracy: 0.944, Validation Accuracy: 0.948, Loss: 0.031 Epoch 11 Batch 101/269 - Train Accuracy: 0.942, Validation Accuracy: 0.944, Loss: 0.035 Epoch 11 Batch 102/269 - Train Accuracy: 0.948, Validation Accuracy: 0.937, Loss: 0.028 Epoch 11 Batch 103/269 - Train Accuracy: 0.953, Validation Accuracy: 0.940, Loss: 0.034 Epoch 11 Batch 104/269 - Train Accuracy: 0.939, Validation Accuracy: 0.943, Loss: 0.030 Epoch 11 Batch 105/269 - Train Accuracy: 0.954, Validation Accuracy: 0.946, Loss: 0.030 Epoch 11 Batch 106/269 - Train Accuracy: 0.948, Validation Accuracy: 0.945, Loss: 0.026 Epoch 11 Batch 107/269 - Train Accuracy: 0.951, Validation Accuracy: 0.945, Loss: 0.029 Epoch 11 Batch 108/269 - Train Accuracy: 0.952, Validation Accuracy: 0.946, Loss: 0.029 Epoch 11 Batch 109/269 - Train Accuracy: 0.940, Validation Accuracy: 0.943, Loss: 0.032 Epoch 11 Batch 110/269 - Train Accuracy: 0.948, Validation Accuracy: 0.939, Loss: 0.025 Epoch 11 Batch 111/269 - Train Accuracy: 0.949, Validation Accuracy: 0.939, Loss: 0.030 Epoch 11 Batch 112/269 - Train Accuracy: 0.953, Validation Accuracy: 0.945, Loss: 0.032 Epoch 11 Batch 113/269 - Train Accuracy: 0.953, Validation Accuracy: 0.942, Loss: 0.027 Epoch 11 
Batch 114/269 - Train Accuracy: 0.945, Validation Accuracy: 0.940, Loss: 0.029 Epoch 11 Batch 115/269 - Train Accuracy: 0.933, Validation Accuracy: 0.938, Loss: 0.030 Epoch 11 Batch 116/269 - Train Accuracy: 0.959, Validation Accuracy: 0.936, Loss: 0.029 Epoch 11 Batch 117/269 - Train Accuracy: 0.946, Validation Accuracy: 0.939, Loss: 0.026 Epoch 11 Batch 118/269 - Train Accuracy: 0.956, Validation Accuracy: 0.941, Loss: 0.025 Epoch 11 Batch 119/269 - Train Accuracy: 0.944, Validation Accuracy: 0.943, Loss: 0.030 Epoch 11 Batch 120/269 - Train Accuracy: 0.950, Validation Accuracy: 0.942, Loss: 0.030 Epoch 11 Batch 121/269 - Train Accuracy: 0.946, Validation Accuracy: 0.943, Loss: 0.027 Epoch 11 Batch 122/269 - Train Accuracy: 0.951, Validation Accuracy: 0.944, Loss: 0.027 Epoch 11 Batch 123/269 - Train Accuracy: 0.946, Validation Accuracy: 0.940, Loss: 0.028 Epoch 11 Batch 124/269 - Train Accuracy: 0.949, Validation Accuracy: 0.937, Loss: 0.027 Epoch 11 Batch 125/269 - Train Accuracy: 0.952, Validation Accuracy: 0.938, Loss: 0.027 Epoch 11 Batch 126/269 - Train Accuracy: 0.939, Validation Accuracy: 0.935, Loss: 0.029 Epoch 11 Batch 127/269 - Train Accuracy: 0.947, Validation Accuracy: 0.933, Loss: 0.029 Epoch 11 Batch 128/269 - Train Accuracy: 0.954, Validation Accuracy: 0.935, Loss: 0.030 Epoch 11 Batch 129/269 - Train Accuracy: 0.942, Validation Accuracy: 0.940, Loss: 0.029 Epoch 11 Batch 130/269 - Train Accuracy: 0.937, Validation Accuracy: 0.936, Loss: 0.031 Epoch 11 Batch 131/269 - Train Accuracy: 0.938, Validation Accuracy: 0.939, Loss: 0.030 Epoch 11 Batch 132/269 - Train Accuracy: 0.933, Validation Accuracy: 0.945, Loss: 0.032 Epoch 11 Batch 133/269 - Train Accuracy: 0.952, Validation Accuracy: 0.939, Loss: 0.026 Epoch 11 Batch 134/269 - Train Accuracy: 0.946, Validation Accuracy: 0.940, Loss: 0.029 Epoch 11 Batch 135/269 - Train Accuracy: 0.944, Validation Accuracy: 0.944, Loss: 0.028 Epoch 11 Batch 136/269 - Train Accuracy: 0.935, Validation Accuracy: 
0.942, Loss: 0.032 Epoch 11 Batch 137/269 - Train Accuracy: 0.951, Validation Accuracy: 0.932, Loss: 0.032 Epoch 11 Batch 138/269 - Train Accuracy: 0.945, Validation Accuracy: 0.938, Loss: 0.027 Epoch 11 Batch 139/269 - Train Accuracy: 0.957, Validation Accuracy: 0.942, Loss: 0.025 Epoch 11 Batch 140/269 - Train Accuracy: 0.945, Validation Accuracy: 0.946, Loss: 0.029 Epoch 11 Batch 141/269 - Train Accuracy: 0.939, Validation Accuracy: 0.942, Loss: 0.031 Epoch 11 Batch 142/269 - Train Accuracy: 0.941, Validation Accuracy: 0.944, Loss: 0.032 Epoch 11 Batch 143/269 - Train Accuracy: 0.951, Validation Accuracy: 0.941, Loss: 0.025 Epoch 11 Batch 144/269 - Train Accuracy: 0.955, Validation Accuracy: 0.945, Loss: 0.025 Epoch 11 Batch 145/269 - Train Accuracy: 0.953, Validation Accuracy: 0.939, Loss: 0.026 Epoch 11 Batch 146/269 - Train Accuracy: 0.945, Validation Accuracy: 0.944, Loss: 0.027 Epoch 11 Batch 147/269 - Train Accuracy: 0.947, Validation Accuracy: 0.950, Loss: 0.033 Epoch 11 Batch 148/269 - Train Accuracy: 0.947, Validation Accuracy: 0.948, Loss: 0.027 Epoch 11 Batch 149/269 - Train Accuracy: 0.942, Validation Accuracy: 0.945, Loss: 0.032 Epoch 11 Batch 150/269 - Train Accuracy: 0.950, Validation Accuracy: 0.945, Loss: 0.031 Epoch 11 Batch 151/269 - Train Accuracy: 0.947, Validation Accuracy: 0.942, Loss: 0.033 Epoch 11 Batch 152/269 - Train Accuracy: 0.951, Validation Accuracy: 0.944, Loss: 0.030 Epoch 11 Batch 153/269 - Train Accuracy: 0.955, Validation Accuracy: 0.941, Loss: 0.026 Epoch 11 Batch 154/269 - Train Accuracy: 0.955, Validation Accuracy: 0.941, Loss: 0.027 Epoch 11 Batch 155/269 - Train Accuracy: 0.946, Validation Accuracy: 0.946, Loss: 0.028 Epoch 11 Batch 156/269 - Train Accuracy: 0.960, Validation Accuracy: 0.947, Loss: 0.029 Epoch 11 Batch 157/269 - Train Accuracy: 0.939, Validation Accuracy: 0.946, Loss: 0.026 Epoch 11 Batch 158/269 - Train Accuracy: 0.950, Validation Accuracy: 0.946, Loss: 0.029 Epoch 11 Batch 159/269 - Train Accuracy: 
0.949, Validation Accuracy: 0.953, Loss: 0.028 Epoch 11 Batch 160/269 - Train Accuracy: 0.949, Validation Accuracy: 0.952, Loss: 0.028 Epoch 11 Batch 161/269 - Train Accuracy: 0.950, Validation Accuracy: 0.954, Loss: 0.027 Epoch 11 Batch 162/269 - Train Accuracy: 0.963, Validation Accuracy: 0.948, Loss: 0.026 Epoch 11 Batch 163/269 - Train Accuracy: 0.952, Validation Accuracy: 0.954, Loss: 0.029 Epoch 11 Batch 164/269 - Train Accuracy: 0.955, Validation Accuracy: 0.950, Loss: 0.028 Epoch 11 Batch 165/269 - Train Accuracy: 0.946, Validation Accuracy: 0.945, Loss: 0.029 Epoch 11 Batch 166/269 - Train Accuracy: 0.957, Validation Accuracy: 0.949, Loss: 0.027 Epoch 11 Batch 167/269 - Train Accuracy: 0.955, Validation Accuracy: 0.945, Loss: 0.027 Epoch 11 Batch 168/269 - Train Accuracy: 0.948, Validation Accuracy: 0.945, Loss: 0.027 Epoch 11 Batch 169/269 - Train Accuracy: 0.941, Validation Accuracy: 0.945, Loss: 0.028 Epoch 11 Batch 170/269 - Train Accuracy: 0.949, Validation Accuracy: 0.942, Loss: 0.026 Epoch 11 Batch 171/269 - Train Accuracy: 0.953, Validation Accuracy: 0.943, Loss: 0.028 Epoch 11 Batch 172/269 - Train Accuracy: 0.935, Validation Accuracy: 0.944, Loss: 0.032 Epoch 11 Batch 173/269 - Train Accuracy: 0.951, Validation Accuracy: 0.948, Loss: 0.025 Epoch 11 Batch 174/269 - Train Accuracy: 0.960, Validation Accuracy: 0.949, Loss: 0.027 Epoch 11 Batch 175/269 - Train Accuracy: 0.940, Validation Accuracy: 0.944, Loss: 0.036 Epoch 11 Batch 176/269 - Train Accuracy: 0.953, Validation Accuracy: 0.947, Loss: 0.029 Epoch 11 Batch 177/269 - Train Accuracy: 0.955, Validation Accuracy: 0.944, Loss: 0.025 Epoch 11 Batch 178/269 - Train Accuracy: 0.953, Validation Accuracy: 0.949, Loss: 0.026 Epoch 11 Batch 179/269 - Train Accuracy: 0.955, Validation Accuracy: 0.949, Loss: 0.027 Epoch 11 Batch 180/269 - Train Accuracy: 0.957, Validation Accuracy: 0.944, Loss: 0.025 Epoch 11 Batch 181/269 - Train Accuracy: 0.945, Validation Accuracy: 0.947, Loss: 0.032 Epoch 11 Batch 
182/269 - Train Accuracy: 0.952, Validation Accuracy: 0.946, Loss: 0.028 Epoch 11 Batch 183/269 - Train Accuracy: 0.951, Validation Accuracy: 0.947, Loss: 0.022 Epoch 11 Batch 184/269 - Train Accuracy: 0.958, Validation Accuracy: 0.948, Loss: 0.025 Epoch 11 Batch 185/269 - Train Accuracy: 0.949, Validation Accuracy: 0.949, Loss: 0.028 Epoch 11 Batch 186/269 - Train Accuracy: 0.948, Validation Accuracy: 0.939, Loss: 0.024 Epoch 11 Batch 187/269 - Train Accuracy: 0.949, Validation Accuracy: 0.943, Loss: 0.026 Epoch 11 Batch 188/269 - Train Accuracy: 0.958, Validation Accuracy: 0.943, Loss: 0.028 Epoch 11 Batch 189/269 - Train Accuracy: 0.949, Validation Accuracy: 0.942, Loss: 0.026 Epoch 11 Batch 190/269 - Train Accuracy: 0.957, Validation Accuracy: 0.942, Loss: 0.028 Epoch 11 Batch 191/269 - Train Accuracy: 0.952, Validation Accuracy: 0.945, Loss: 0.026 Epoch 11 Batch 192/269 - Train Accuracy: 0.950, Validation Accuracy: 0.949, Loss: 0.026 Epoch 11 Batch 193/269 - Train Accuracy: 0.949, Validation Accuracy: 0.948, Loss: 0.026 Epoch 11 Batch 194/269 - Train Accuracy: 0.943, Validation Accuracy: 0.952, Loss: 0.028 Epoch 11 Batch 195/269 - Train Accuracy: 0.942, Validation Accuracy: 0.956, Loss: 0.024 Epoch 11 Batch 196/269 - Train Accuracy: 0.947, Validation Accuracy: 0.954, Loss: 0.028 Epoch 11 Batch 197/269 - Train Accuracy: 0.952, Validation Accuracy: 0.944, Loss: 0.028 Epoch 11 Batch 198/269 - Train Accuracy: 0.950, Validation Accuracy: 0.948, Loss: 0.028 Epoch 11 Batch 199/269 - Train Accuracy: 0.949, Validation Accuracy: 0.950, Loss: 0.029 Epoch 11 Batch 200/269 - Train Accuracy: 0.964, Validation Accuracy: 0.947, Loss: 0.024 Epoch 11 Batch 201/269 - Train Accuracy: 0.948, Validation Accuracy: 0.947, Loss: 0.032 Epoch 11 Batch 202/269 - Train Accuracy: 0.947, Validation Accuracy: 0.950, Loss: 0.027 Epoch 11 Batch 203/269 - Train Accuracy: 0.949, Validation Accuracy: 0.951, Loss: 0.029 Epoch 11 Batch 204/269 - Train Accuracy: 0.947, Validation Accuracy: 0.954, 
Loss: 0.029 Epoch 11 Batch 205/269 - Train Accuracy: 0.962, Validation Accuracy: 0.947, Loss: 0.027 Epoch 11 Batch 206/269 - Train Accuracy: 0.934, Validation Accuracy: 0.950, Loss: 0.032 Epoch 11 Batch 207/269 - Train Accuracy: 0.943, Validation Accuracy: 0.948, Loss: 0.026 Epoch 11 Batch 208/269 - Train Accuracy: 0.953, Validation Accuracy: 0.947, Loss: 0.027 Epoch 11 Batch 209/269 - Train Accuracy: 0.959, Validation Accuracy: 0.948, Loss: 0.027 Epoch 11 Batch 210/269 - Train Accuracy: 0.954, Validation Accuracy: 0.949, Loss: 0.026 Epoch 11 Batch 211/269 - Train Accuracy: 0.959, Validation Accuracy: 0.948, Loss: 0.028 Epoch 11 Batch 212/269 - Train Accuracy: 0.943, Validation Accuracy: 0.950, Loss: 0.033 Epoch 11 Batch 213/269 - Train Accuracy: 0.940, Validation Accuracy: 0.951, Loss: 0.026 Epoch 11 Batch 214/269 - Train Accuracy: 0.953, Validation Accuracy: 0.937, Loss: 0.027 Epoch 11 Batch 215/269 - Train Accuracy: 0.954, Validation Accuracy: 0.944, Loss: 0.028 Epoch 11 Batch 216/269 - Train Accuracy: 0.946, Validation Accuracy: 0.950, Loss: 0.037 Epoch 11 Batch 217/269 - Train Accuracy: 0.946, Validation Accuracy: 0.951, Loss: 0.028 Epoch 11 Batch 218/269 - Train Accuracy: 0.954, Validation Accuracy: 0.945, Loss: 0.025 Epoch 11 Batch 219/269 - Train Accuracy: 0.953, Validation Accuracy: 0.948, Loss: 0.029 Epoch 11 Batch 220/269 - Train Accuracy: 0.947, Validation Accuracy: 0.947, Loss: 0.025 Epoch 11 Batch 221/269 - Train Accuracy: 0.942, Validation Accuracy: 0.952, Loss: 0.029 Epoch 11 Batch 222/269 - Train Accuracy: 0.965, Validation Accuracy: 0.943, Loss: 0.022 Epoch 11 Batch 223/269 - Train Accuracy: 0.945, Validation Accuracy: 0.941, Loss: 0.025 Epoch 11 Batch 224/269 - Train Accuracy: 0.951, Validation Accuracy: 0.948, Loss: 0.033 Epoch 11 Batch 225/269 - Train Accuracy: 0.947, Validation Accuracy: 0.947, Loss: 0.026 Epoch 11 Batch 226/269 - Train Accuracy: 0.952, Validation Accuracy: 0.945, Loss: 0.030 Epoch 11 Batch 227/269 - Train Accuracy: 0.948, 
Validation Accuracy: 0.952, Loss: 0.037 Epoch 11 Batch 228/269 - Train Accuracy: 0.951, Validation Accuracy: 0.948, Loss: 0.026 Epoch 11 Batch 229/269 - Train Accuracy: 0.948, Validation Accuracy: 0.951, Loss: 0.025 Epoch 11 Batch 230/269 - Train Accuracy: 0.960, Validation Accuracy: 0.951, Loss: 0.026 Epoch 11 Batch 231/269 - Train Accuracy: 0.936, Validation Accuracy: 0.949, Loss: 0.029 Epoch 11 Batch 232/269 - Train Accuracy: 0.945, Validation Accuracy: 0.951, Loss: 0.028 Epoch 11 Batch 233/269 - Train Accuracy: 0.960, Validation Accuracy: 0.951, Loss: 0.029 Epoch 11 Batch 234/269 - Train Accuracy: 0.949, Validation Accuracy: 0.945, Loss: 0.026 Epoch 11 Batch 235/269 - Train Accuracy: 0.966, Validation Accuracy: 0.942, Loss: 0.022 Epoch 11 Batch 236/269 - Train Accuracy: 0.947, Validation Accuracy: 0.947, Loss: 0.024 Epoch 11 Batch 237/269 - Train Accuracy: 0.954, Validation Accuracy: 0.944, Loss: 0.027 Epoch 11 Batch 238/269 - Train Accuracy: 0.956, Validation Accuracy: 0.951, Loss: 0.026 Epoch 11 Batch 239/269 - Train Accuracy: 0.954, Validation Accuracy: 0.947, Loss: 0.026 Epoch 11 Batch 240/269 - Train Accuracy: 0.951, Validation Accuracy: 0.943, Loss: 0.025 Epoch 11 Batch 241/269 - Train Accuracy: 0.948, Validation Accuracy: 0.940, Loss: 0.029 Epoch 11 Batch 242/269 - Train Accuracy: 0.958, Validation Accuracy: 0.947, Loss: 0.026 Epoch 11 Batch 243/269 - Train Accuracy: 0.958, Validation Accuracy: 0.947, Loss: 0.023 Epoch 11 Batch 244/269 - Train Accuracy: 0.940, Validation Accuracy: 0.947, Loss: 0.028 Epoch 11 Batch 245/269 - Train Accuracy: 0.949, Validation Accuracy: 0.943, Loss: 0.027 Epoch 11 Batch 246/269 - Train Accuracy: 0.938, Validation Accuracy: 0.939, Loss: 0.027 Epoch 11 Batch 247/269 - Train Accuracy: 0.952, Validation Accuracy: 0.941, Loss: 0.028 Epoch 11 Batch 248/269 - Train Accuracy: 0.955, Validation Accuracy: 0.952, Loss: 0.029 Epoch 11 Batch 249/269 - Train Accuracy: 0.958, Validation Accuracy: 0.947, Loss: 0.023 Epoch 11 Batch 250/269 
- Train Accuracy: 0.953, Validation Accuracy: 0.942, Loss: 0.027 Epoch 11 Batch 251/269 - Train Accuracy: 0.965, Validation Accuracy: 0.945, Loss: 0.025 Epoch 11 Batch 252/269 - Train Accuracy: 0.953, Validation Accuracy: 0.942, Loss: 0.022 Epoch 11 Batch 253/269 - Train Accuracy: 0.938, Validation Accuracy: 0.945, Loss: 0.027 Epoch 11 Batch 254/269 - Train Accuracy: 0.941, Validation Accuracy: 0.945, Loss: 0.030 Epoch 11 Batch 255/269 - Train Accuracy: 0.944, Validation Accuracy: 0.938, Loss: 0.029 Epoch 11 Batch 256/269 - Train Accuracy: 0.942, Validation Accuracy: 0.933, Loss: 0.027 Epoch 11 Batch 257/269 - Train Accuracy: 0.930, Validation Accuracy: 0.929, Loss: 0.029 Epoch 11 Batch 258/269 - Train Accuracy: 0.940, Validation Accuracy: 0.932, Loss: 0.031 Epoch 11 Batch 259/269 - Train Accuracy: 0.948, Validation Accuracy: 0.932, Loss: 0.029 Epoch 11 Batch 260/269 - Train Accuracy: 0.944, Validation Accuracy: 0.942, Loss: 0.033 Epoch 11 Batch 261/269 - Train Accuracy: 0.951, Validation Accuracy: 0.940, Loss: 0.025 Epoch 11 Batch 262/269 - Train Accuracy: 0.944, Validation Accuracy: 0.941, Loss: 0.027 Epoch 11 Batch 263/269 - Train Accuracy: 0.946, Validation Accuracy: 0.944, Loss: 0.029 Epoch 11 Batch 264/269 - Train Accuracy: 0.927, Validation Accuracy: 0.949, Loss: 0.032 Epoch 11 Batch 265/269 - Train Accuracy: 0.948, Validation Accuracy: 0.950, Loss: 0.027 Epoch 11 Batch 266/269 - Train Accuracy: 0.955, Validation Accuracy: 0.946, Loss: 0.022 Epoch 11 Batch 267/269 - Train Accuracy: 0.956, Validation Accuracy: 0.943, Loss: 0.027 Model Trained and Saved ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. 
###Code
""" DON'T MODIFY ANYTHING IN THIS CELL """
# Save parameters for checkpoint
helper.save_params(save_path)
###Output
_____no_output_____
###Markdown
Checkpoint
###Code
""" DON'T MODIFY ANYTHING IN THIS CELL """
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
###Output
_____no_output_____
###Markdown
Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.
- Convert the sentence to lowercase
- Convert words into ids using `vocab_to_int`
- Convert words not in the vocabulary to the `<UNK>` word id
###Code
import re


def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    # Tokenize the lowercased sentence and fall back to <UNK> for unknown words
    unknown = vocab_to_int['<UNK>']
    return [vocab_to_int[w] if w in vocab_to_int else unknown
            for w in re.findall(r"[\w'\-,.!?\":;)(]+", sentence.lower())]


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_sentence_to_seq(sentence_to_seq)
###Output
Tests Passed
###Markdown
Translate
This will translate `translate_sentence` from English to French.
###Code
translate_sentence = 'he saw a old yellow truck.'
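To see the lowercase/tokenize/`<UNK>`-fallback behaviour in isolation, here is a self-contained version of the same mapping exercised on a toy vocabulary (the vocabulary and ids below are made up for illustration):

```python
import re

def sentence_to_seq(sentence, vocab_to_int):
    # Lowercase and tokenize, then map each token to its id, with <UNK> as fallback.
    unknown = vocab_to_int['<UNK>']
    tokens = re.findall(r"[\w'\-,.!?\":;)(]+", sentence.lower())
    return [vocab_to_int.get(token, unknown) for token in tokens]

toy_vocab = {'<UNK>': 2, 'he': 10, 'saw': 11, 'a': 12, 'truck': 13, '.': 14}
print(sentence_to_seq('He saw a YELLOW truck .', toy_vocab))  # [10, 11, 12, 2, 13, 14]
```

Note that the token pattern keeps punctuation glued to a word when there is no separating space, so `truck.` (as opposed to `truck .`) becomes a single out-of-vocabulary token — which is why the notebook's demo sentence maps its last token to `<UNK>`.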
""" DON'T MODIFY ANYTHING IN THIS CELL """
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_path + '.meta')
    loader.restore(sess, load_path)

    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('logits:0')
    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

    translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]

print('Input')
print('  Word Ids:      {}'.format([i for i in translate_sentence]))
print('  English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))

print('\nPrediction')
print('  Word Ids:      {}'.format([i for i in np.argmax(translate_logits, 1)]))
print('  French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
###Output
Input
  Word Ids:      [218, 58, 21, 28, 141, 2]
  English Words: ['he', 'saw', 'a', 'old', 'yellow', '<UNK>']

Prediction
  Word Ids:      [176, 62, 277, 128, 330, 142, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
  French Words: ['elle', 'est', 'la', 'vieux', 'camion', 'jaune', '<PAD>', '<PAD>', '<PAD>', '<PAD>', '<PAD>', '<PAD>', '<PAD>', '<PAD>', '<PAD>', '<PAD>', '<PAD>', '<PAD>']
###Markdown
Language Translation
In this project, you're going to take a peek into the realm of neural network machine translation. You'll train a sequence-to-sequence model on a dataset of English and French sentences so that it can translate new sentences from English to French.
Get the Data
Since translating the whole English language to French would take a long time to train, we have provided you with a small portion of the English corpus.
###Code
""" DON'T MODIFY ANYTHING IN THIS CELL """
import helper
import problem_unittests as tests

source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
###Output
_____no_output_____
###Markdown
Explore the Data
Play around with `view_sentence_range` to view different parts of the data.
###Code
view_sentence_range = (0, 10)

""" DON'T MODIFY ANYTHING IN THIS CELL """
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))

sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))

print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 227
Number of sentences: 137861
Average number of words in a sentence: 13.225277634719028

English sentences 0 to 10:
new jersey is sometimes quiet during autumn , and it is snowy in april .
the united states is usually chilly during july , and it is usually freezing in november .
california is usually quiet during march , and it is usually hot in june .
the united states is sometimes mild during june , and it is cold in september .
your least liked fruit is the grape , but my least liked is the apple .
his favorite fruit is the orange , but my favorite is the grape .
paris is relaxing during december , but it is usually chilly in july .
new jersey is busy during spring , and it is never hot in march .
our least liked fruit is the lemon , but my least liked is the grape .
the united states is sometimes busy during january , and it is sometimes warm in november .

French sentences 0 to 10:
new jersey est parfois calme pendant l' automne , et il est neigeux en avril .
les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .
california est généralement calme en mars , et il est généralement chaud en juin .
les états-unis est parfois légère en juin , et il fait froid en septembre .
votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .
son fruit préféré est l'orange , mais mon préféré est le raisin .
paris est relaxant en décembre , mais il est généralement froid en juillet .
new jersey est occupé au printemps , et il est jamais chaude en mars .
notre fruit est moins aimé le citron , mais mon moins aimé est le raisin .
les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .
###Markdown
Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of each sentence from `target_text`. This will help the neural network predict when the sentence should end.
You can get the `<EOS>` word id by doing:
```python
target_vocab_to_int['<EOS>']
```
You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`.
###Code
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function source_id_text = [[source_vocab_to_int.get(word) for word in line.split()] for line in source_text.split('\n')] target_id_text = [[target_vocab_to_int.get(word) for word in str(line + ' <EOS>').split()] for line in target_text.split('\n')] return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
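Before reloading from the checkpoint, here is a self-contained toy run of the same conversion logic as `text_to_ids` above, using small hypothetical vocabularies (the real mappings come from the preprocessing step):

```python
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    # Same logic as the implementation above, repeated so this sketch runs on its own
    source_id_text = [[source_vocab_to_int.get(word) for word in line.split()]
                      for line in source_text.split('\n')]
    target_id_text = [[target_vocab_to_int.get(word) for word in (line + ' <EOS>').split()]
                      for line in target_text.split('\n')]
    return source_id_text, target_id_text

# Hypothetical toy vocabularies -- the real ones are produced by the helper module
src_vocab = {'new': 0, 'jersey': 1, 'is': 2, 'quiet': 3}
tgt_vocab = {'<EOS>': 0, 'new': 1, 'jersey': 2, 'est': 3, 'calme': 4}

src_ids, tgt_ids = text_to_ids('new jersey is quiet', 'new jersey est calme',
                               src_vocab, tgt_vocab)
print(src_ids)  # [[0, 1, 2, 3]]
print(tgt_ids)  # [[1, 2, 3, 4, 0]] -- the trailing 0 is the appended <EOS> id
```

Note how every target sentence gains exactly one extra id at the end; that trailing `<EOS>` is what lets the decoder learn where sentences stop.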
###Code # if running notebook from here import the unit tests import sys if 'problem_unittests' not in sys.modules: import problem_unittests as tests """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.0.1 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoding_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ # input = x # targets = y inputs = tf.placeholder(tf.int32, shape=[None, None], name='input') targets = tf.placeholder(tf.int32, shape=[None, None], name='targets') # hyper parameters of the model learning_rate = tf.placeholder(tf.float32, name='learning_rate') keep_prob = tf.placeholder(tf.float32, name='keep_prob') return inputs, targets, learning_rate, keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoding InputImplement `process_decoding_input` using TensorFlow to remove the last word id from each batch in `target_data` and concat the GO ID to the beginning of each batch.
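In plain Python terms, the transformation described above looks like this (toy batch, with a hypothetical `<GO>` id of 1 standing in for `target_vocab_to_int['<GO>']`):

```python
GO_ID = 1  # hypothetical; in the notebook this is target_vocab_to_int['<GO>']

batch = [[4, 5, 6, 0],   # each row is one padded target sentence as word ids
         [7, 8, 0, 0]]

# Drop the last id of every row, then prepend <GO> -- the same effect that
# tf.strided_slice followed by tf.concat achieves inside the graph
dec_input = [[GO_ID] + row[:-1] for row in batch]

print(dec_input)  # [[1, 4, 5, 6], [1, 7, 8, 0]]
```

Dropping the final id keeps the decoder input the same length as the target, and prepending `<GO>` shifts everything right by one step so the decoder predicts each word from the words before it.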
###Code def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn). ###Code def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) cell = tf.contrib.rnn.MultiRNNCell([cell] * num_layers) _, RNN_state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32) return RNN_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate training logits using [`tf.contrib.seq2seq.simple_decoder_fn_train()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_train) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder). Apply the `output_fn` to the [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder) outputs.
###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob) # Training Decoder train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope) # Apply output function train_logits = output_fn(train_pred) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference logits using [`tf.contrib.seq2seq.simple_decoder_fn_inference()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_inference) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder).
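Unlike training, the inference decoder has no target sentence to feed in: it consumes its own previous prediction at each step until it emits `<EOS>` or hits `maximum_length`. A minimal pure-Python sketch of that loop (with `next_token` as a hypothetical stand-in for the real RNN step plus output projection plus argmax):

```python
def greedy_decode(next_token, start_of_sequence_id, end_of_sequence_id, maximum_length):
    """Feed each prediction back in as the next input until <EOS> or the length cap."""
    token = start_of_sequence_id
    decoded = []
    for _ in range(maximum_length):
        token = next_token(token)           # one decoder step on the previous token
        if token == end_of_sequence_id:     # stop as soon as <EOS> is produced
            break
        decoded.append(token)
    return decoded

# Hypothetical toy "model": deterministically maps each id to a successor
transitions = {1: 4, 4: 5, 5: 6, 6: 0}      # 1 = <GO>, 0 = <EOS>
print(greedy_decode(lambda t: transitions[t], 1, 0, 10))  # [4, 5, 6]
```

`simple_decoder_fn_inference` performs this loop inside the TensorFlow graph, which is why it needs `start_of_sequence_id`, `end_of_sequence_id`, and `maximum_length` as arguments.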
###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS ID :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ # Inference Decoder infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn=output_fn, encoder_state=encoder_state, embeddings=dec_embeddings, start_of_sequence_id=start_of_sequence_id, end_of_sequence_id=end_of_sequence_id, maximum_length=maximum_length, num_decoder_symbols=vocab_size) # Apply output function infer_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope) return infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.- Create RNN cell for decoding using `rnn_size` and `num_layers`.- Create the output function using [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) to transform its input, logits, to class logits.- Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)` function to get the training logits.- Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)` function to get
the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. ###Code def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) cell = tf.contrib.rnn.MultiRNNCell([cell] * num_layers) with tf.variable_scope('decoding', reuse=None) as decoding_scope: output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope) train_logits = decoding_layer_train(encoder_state, cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) decoding_scope.reuse_variables() infer_logits = decoding_layer_infer(encoder_state, cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, sequence_length, vocab_size, decoding_scope, output_fn, keep_prob) return train_logits, infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)`.- Process 
target data using your `process_decoding_input(target_data, target_vocab_to_int, batch_size)` function.- Apply embedding to the target data for the decoder.- Decode the encoded input using your `decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)`. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ # Encoder enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob) # Decoder dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # Logits train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, enc_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return train_logits, infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests
Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability ###Code # Number of Epochs epochs = 3 # Batch Size batch_size = 512 # RNN Size rnn_size = 512 # Number of Layers num_layers = 1 # Embedding Size encoding_embedding_size = 200 decoding_embedding_size = 200 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.7 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = 
optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) lr_changes = 0 for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i,
len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 0/269 - Train Accuracy: 0.242, Validation Accuracy: 0.310, Loss: 5.970 Epoch 0 Batch 1/269 - Train Accuracy: 0.271, Validation Accuracy: 0.343, Loss: 5.984 Epoch 0 Batch 2/269 - Train Accuracy: 0.297, Validation Accuracy: 0.340, Loss: 4.040 Epoch 0 Batch 3/269 - Train Accuracy: 0.177, Validation Accuracy: 0.253, Loss: 4.006 Epoch 0 Batch 4/269 - Train Accuracy: 0.169, Validation Accuracy: 0.241, Loss: 3.882 Epoch 0 Batch 5/269 - Train Accuracy: 0.247, Validation Accuracy: 0.323, Loss: 3.797 Epoch 0 Batch 6/269 - Train Accuracy: 0.318, Validation Accuracy: 0.347, Loss: 3.675 Epoch 0 Batch 7/269 - Train Accuracy: 0.316, Validation Accuracy: 0.347, Loss: 3.399 Epoch 0 Batch 8/269 - Train Accuracy: 0.284, Validation Accuracy: 0.348, Loss: 3.384 Epoch 0 Batch 9/269 - Train Accuracy: 0.309, Validation Accuracy: 0.350, Loss: 3.216 Epoch 0 Batch 10/269 - Train Accuracy: 0.279, Validation Accuracy: 0.349, Loss: 3.221 Epoch 0 Batch 11/269 - Train Accuracy: 0.314, Validation Accuracy: 0.349, Loss: 3.041 Epoch 0 Batch 12/269 - Train Accuracy: 0.304, Validation Accuracy: 0.363, Loss: 3.024 Epoch 0 Batch 13/269 - Train Accuracy: 0.392, Validation Accuracy: 0.389, Loss: 2.689 Epoch 0 Batch 14/269 - Train Accuracy: 0.356, Validation Accuracy: 0.391, Loss: 2.730 Epoch 0 Batch 15/269 - Train Accuracy: 0.358, Validation Accuracy: 0.403, Loss: 2.665 Epoch 0 Batch 16/269 - Train Accuracy: 0.382, Validation Accuracy: 0.407, Loss: 2.539 Epoch 0 Batch 17/269 - Train Accuracy: 0.366, Validation Accuracy: 0.404, Loss: 2.459 Epoch 0 Batch 18/269 - Train Accuracy: 0.346, Validation Accuracy: 0.404, Loss: 2.479 Epoch 0 Batch 19/269 - Train Accuracy: 0.370, Validation Accuracy: 0.375, Loss: 2.273 Epoch 0 Batch 20/269 - Train Accuracy: 0.330, Validation Accuracy: 0.385, Loss: 2.331 Epoch 0 Batch 21/269 - Train 
Accuracy: 0.328, Validation Accuracy: 0.396, Loss: 2.302 Epoch 0 Batch 22/269 - Train Accuracy: 0.398, Validation Accuracy: 0.424, Loss: 2.087 Epoch 0 Batch 23/269 - Train Accuracy: 0.412, Validation Accuracy: 0.424, Loss: 2.016 Epoch 0 Batch 24/269 - Train Accuracy: 0.355, Validation Accuracy: 0.419, Loss: 2.019 Epoch 0 Batch 25/269 - Train Accuracy: 0.347, Validation Accuracy: 0.411, Loss: 1.950 Epoch 0 Batch 26/269 - Train Accuracy: 0.427, Validation Accuracy: 0.428, Loss: 1.691 Epoch 0 Batch 27/269 - Train Accuracy: 0.418, Validation Accuracy: 0.451, Loss: 1.737 Epoch 0 Batch 28/269 - Train Accuracy: 0.394, Validation Accuracy: 0.463, Loss: 1.747 Epoch 0 Batch 29/269 - Train Accuracy: 0.454, Validation Accuracy: 0.488, Loss: 1.652 Epoch 0 Batch 30/269 - Train Accuracy: 0.468, Validation Accuracy: 0.481, Loss: 1.529 Epoch 0 Batch 31/269 - Train Accuracy: 0.477, Validation Accuracy: 0.484, Loss: 1.472 Epoch 0 Batch 32/269 - Train Accuracy: 0.456, Validation Accuracy: 0.479, Loss: 1.422 Epoch 0 Batch 33/269 - Train Accuracy: 0.485, Validation Accuracy: 0.481, Loss: 1.314 Epoch 0 Batch 34/269 - Train Accuracy: 0.483, Validation Accuracy: 0.495, Loss: 1.301 Epoch 0 Batch 35/269 - Train Accuracy: 0.491, Validation Accuracy: 0.491, Loss: 1.260 Epoch 0 Batch 36/269 - Train Accuracy: 0.482, Validation Accuracy: 0.493, Loss: 1.236 Epoch 0 Batch 37/269 - Train Accuracy: 0.499, Validation Accuracy: 0.507, Loss: 1.199 Epoch 0 Batch 38/269 - Train Accuracy: 0.479, Validation Accuracy: 0.495, Loss: 1.163 Epoch 0 Batch 39/269 - Train Accuracy: 0.506, Validation Accuracy: 0.506, Loss: 1.104 Epoch 0 Batch 40/269 - Train Accuracy: 0.488, Validation Accuracy: 0.510, Loss: 1.125 Epoch 0 Batch 41/269 - Train Accuracy: 0.503, Validation Accuracy: 0.508, Loss: 1.081 Epoch 0 Batch 42/269 - Train Accuracy: 0.522, Validation Accuracy: 0.508, Loss: 1.001 Epoch 0 Batch 43/269 - Train Accuracy: 0.494, Validation Accuracy: 0.511, Loss: 1.070 Epoch 0 Batch 44/269 - Train Accuracy: 0.527, 
Validation Accuracy: 0.514, Loss: 1.003 Epoch 0 Batch 45/269 - Train Accuracy: 0.485, Validation Accuracy: 0.515, Loss: 1.025 Epoch 0 Batch 46/269 - Train Accuracy: 0.483, Validation Accuracy: 0.515, Loss: 0.998 Epoch 0 Batch 47/269 - Train Accuracy: 0.534, Validation Accuracy: 0.518, Loss: 0.898 Epoch 0 Batch 48/269 - Train Accuracy: 0.521, Validation Accuracy: 0.524, Loss: 0.914 Epoch 0 Batch 49/269 - Train Accuracy: 0.502, Validation Accuracy: 0.535, Loss: 0.955 Epoch 0 Batch 50/269 - Train Accuracy: 0.503, Validation Accuracy: 0.531, Loss: 0.957 Epoch 0 Batch 51/269 - Train Accuracy: 0.505, Validation Accuracy: 0.524, Loss: 0.909 Epoch 0 Batch 52/269 - Train Accuracy: 0.510, Validation Accuracy: 0.523, Loss: 0.868 Epoch 0 Batch 53/269 - Train Accuracy: 0.500, Validation Accuracy: 0.526, Loss: 0.926 Epoch 0 Batch 54/269 - Train Accuracy: 0.510, Validation Accuracy: 0.528, Loss: 0.898 Epoch 0 Batch 55/269 - Train Accuracy: 0.529, Validation Accuracy: 0.532, Loss: 0.855 Epoch 0 Batch 56/269 - Train Accuracy: 0.537, Validation Accuracy: 0.528, Loss: 0.855 Epoch 0 Batch 57/269 - Train Accuracy: 0.544, Validation Accuracy: 0.536, Loss: 0.858 Epoch 0 Batch 58/269 - Train Accuracy: 0.547, Validation Accuracy: 0.548, Loss: 0.834 Epoch 0 Batch 59/269 - Train Accuracy: 0.548, Validation Accuracy: 0.550, Loss: 0.796 Epoch 0 Batch 60/269 - Train Accuracy: 0.549, Validation Accuracy: 0.552, Loss: 0.791 Epoch 0 Batch 61/269 - Train Accuracy: 0.561, Validation Accuracy: 0.553, Loss: 0.758 Epoch 0 Batch 62/269 - Train Accuracy: 0.571, Validation Accuracy: 0.548, Loss: 0.778 Epoch 0 Batch 63/269 - Train Accuracy: 0.541, Validation Accuracy: 0.551, Loss: 0.800 Epoch 0 Batch 64/269 - Train Accuracy: 0.548, Validation Accuracy: 0.551, Loss: 0.778 Epoch 0 Batch 65/269 - Train Accuracy: 0.538, Validation Accuracy: 0.556, Loss: 0.779 Epoch 0 Batch 66/269 - Train Accuracy: 0.564, Validation Accuracy: 0.554, Loss: 0.744 Epoch 0 Batch 67/269 - Train Accuracy: 0.551, Validation Accuracy: 
0.563, Loss: 0.776 Epoch 0 Batch 68/269 - Train Accuracy: 0.554, Validation Accuracy: 0.564, Loss: 0.770 Epoch 0 Batch 69/269 - Train Accuracy: 0.522, Validation Accuracy: 0.571, Loss: 0.828 Epoch 0 Batch 70/269 - Train Accuracy: 0.564, Validation Accuracy: 0.573, Loss: 0.763 Epoch 0 Batch 71/269 - Train Accuracy: 0.544, Validation Accuracy: 0.568, Loss: 0.783 Epoch 0 Batch 72/269 - Train Accuracy: 0.566, Validation Accuracy: 0.559, Loss: 0.732 Epoch 0 Batch 73/269 - Train Accuracy: 0.561, Validation Accuracy: 0.561, Loss: 0.748 Epoch 0 Batch 74/269 - Train Accuracy: 0.558, Validation Accuracy: 0.571, Loss: 0.743 Epoch 0 Batch 75/269 - Train Accuracy: 0.571, Validation Accuracy: 0.569, Loss: 0.722 Epoch 0 Batch 76/269 - Train Accuracy: 0.543, Validation Accuracy: 0.569, Loss: 0.734 Epoch 0 Batch 77/269 - Train Accuracy: 0.578, Validation Accuracy: 0.574, Loss: 0.714 Epoch 0 Batch 78/269 - Train Accuracy: 0.594, Validation Accuracy: 0.580, Loss: 0.697 Epoch 0 Batch 79/269 - Train Accuracy: 0.583, Validation Accuracy: 0.582, Loss: 0.693 Epoch 0 Batch 80/269 - Train Accuracy: 0.574, Validation Accuracy: 0.580, Loss: 0.685 Epoch 0 Batch 81/269 - Train Accuracy: 0.588, Validation Accuracy: 0.581, Loss: 0.711 Epoch 0 Batch 82/269 - Train Accuracy: 0.594, Validation Accuracy: 0.581, Loss: 0.660 Epoch 0 Batch 83/269 - Train Accuracy: 0.573, Validation Accuracy: 0.584, Loss: 0.695 Epoch 0 Batch 84/269 - Train Accuracy: 0.576, Validation Accuracy: 0.588, Loss: 0.669 Epoch 0 Batch 85/269 - Train Accuracy: 0.583, Validation Accuracy: 0.585, Loss: 0.679 Epoch 0 Batch 86/269 - Train Accuracy: 0.585, Validation Accuracy: 0.587, Loss: 0.673 Epoch 0 Batch 87/269 - Train Accuracy: 0.575, Validation Accuracy: 0.589, Loss: 0.721 Epoch 0 Batch 88/269 - Train Accuracy: 0.579, Validation Accuracy: 0.591, Loss: 0.669 Epoch 0 Batch 89/269 - Train Accuracy: 0.606, Validation Accuracy: 0.594, Loss: 0.663 Epoch 0 Batch 90/269 - Train Accuracy: 0.562, Validation Accuracy: 0.600, Loss: 0.699 
Epoch 0 Batch 91/269 - Train Accuracy: 0.612, Validation Accuracy: 0.604, Loss: 0.628 Epoch 0 Batch 92/269 - Train Accuracy: 0.596, Validation Accuracy: 0.600, Loss: 0.639 Epoch 0 Batch 93/269 - Train Accuracy: 0.619, Validation Accuracy: 0.600, Loss: 0.619 Epoch 0 Batch 94/269 - Train Accuracy: 0.596, Validation Accuracy: 0.599, Loss: 0.645 Epoch 0 Batch 95/269 - Train Accuracy: 0.595, Validation Accuracy: 0.598, Loss: 0.635 Epoch 0 Batch 96/269 - Train Accuracy: 0.602, Validation Accuracy: 0.607, Loss: 0.636 Epoch 0 Batch 97/269 - Train Accuracy: 0.606, Validation Accuracy: 0.610, Loss: 0.628 Epoch 0 Batch 98/269 - Train Accuracy: 0.620, Validation Accuracy: 0.618, Loss: 0.631 Epoch 0 Batch 99/269 - Train Accuracy: 0.606, Validation Accuracy: 0.617, Loss: 0.638 Epoch 0 Batch 100/269 - Train Accuracy: 0.634, Validation Accuracy: 0.622, Loss: 0.606 Epoch 0 Batch 101/269 - Train Accuracy: 0.589, Validation Accuracy: 0.616, Loss: 0.661 Epoch 0 Batch 102/269 - Train Accuracy: 0.604, Validation Accuracy: 0.610, Loss: 0.615 Epoch 0 Batch 103/269 - Train Accuracy: 0.610, Validation Accuracy: 0.605, Loss: 0.601 Epoch 0 Batch 104/269 - Train Accuracy: 0.601, Validation Accuracy: 0.598, Loss: 0.600 Epoch 0 Batch 105/269 - Train Accuracy: 0.609, Validation Accuracy: 0.600, Loss: 0.613 Epoch 0 Batch 106/269 - Train Accuracy: 0.602, Validation Accuracy: 0.607, Loss: 0.599 Epoch 0 Batch 107/269 - Train Accuracy: 0.595, Validation Accuracy: 0.616, Loss: 0.631 Epoch 0 Batch 108/269 - Train Accuracy: 0.601, Validation Accuracy: 0.603, Loss: 0.594 Epoch 0 Batch 109/269 - Train Accuracy: 0.616, Validation Accuracy: 0.620, Loss: 0.593 Epoch 0 Batch 110/269 - Train Accuracy: 0.631, Validation Accuracy: 0.622, Loss: 0.578 Epoch 0 Batch 111/269 - Train Accuracy: 0.612, Validation Accuracy: 0.626, Loss: 0.609 Epoch 0 Batch 112/269 - Train Accuracy: 0.629, Validation Accuracy: 0.627, Loss: 0.580 Epoch 0 Batch 113/269 - Train Accuracy: 0.624, Validation Accuracy: 0.621, Loss: 0.552 Epoch 0 
Batch 114/269 - Train Accuracy: 0.626, Validation Accuracy: 0.625, Loss: 0.574 Epoch 0 Batch 115/269 - Train Accuracy: 0.611, Validation Accuracy: 0.640, Loss: 0.592 Epoch 0 Batch 116/269 - Train Accuracy: 0.673, Validation Accuracy: 0.658, Loss: 0.571 Epoch 0 Batch 117/269 - Train Accuracy: 0.670, Validation Accuracy: 0.666, Loss: 0.568 Epoch 0 Batch 118/269 - Train Accuracy: 0.678, Validation Accuracy: 0.666, Loss: 0.546 Epoch 0 Batch 119/269 - Train Accuracy: 0.667, Validation Accuracy: 0.667, Loss: 0.580 Epoch 0 Batch 120/269 - Train Accuracy: 0.659, Validation Accuracy: 0.668, Loss: 0.574 Epoch 0 Batch 121/269 - Train Accuracy: 0.663, Validation Accuracy: 0.667, Loss: 0.546 Epoch 0 Batch 122/269 - Train Accuracy: 0.674, Validation Accuracy: 0.676, Loss: 0.542 Epoch 0 Batch 123/269 - Train Accuracy: 0.680, Validation Accuracy: 0.679, Loss: 0.567 Epoch 0 Batch 124/269 - Train Accuracy: 0.666, Validation Accuracy: 0.676, Loss: 0.533 Epoch 0 Batch 125/269 - Train Accuracy: 0.686, Validation Accuracy: 0.671, Loss: 0.535 Epoch 0 Batch 126/269 - Train Accuracy: 0.669, Validation Accuracy: 0.673, Loss: 0.544 Epoch 0 Batch 127/269 - Train Accuracy: 0.658, Validation Accuracy: 0.670, Loss: 0.552 Epoch 0 Batch 128/269 - Train Accuracy: 0.694, Validation Accuracy: 0.678, Loss: 0.530 Epoch 0 Batch 129/269 - Train Accuracy: 0.677, Validation Accuracy: 0.679, Loss: 0.523 Epoch 0 Batch 130/269 - Train Accuracy: 0.675, Validation Accuracy: 0.694, Loss: 0.546 Epoch 0 Batch 131/269 - Train Accuracy: 0.673, Validation Accuracy: 0.700, Loss: 0.534 Epoch 0 Batch 132/269 - Train Accuracy: 0.691, Validation Accuracy: 0.693, Loss: 0.535 Epoch 0 Batch 133/269 - Train Accuracy: 0.703, Validation Accuracy: 0.698, Loss: 0.505 Epoch 0 Batch 134/269 - Train Accuracy: 0.691, Validation Accuracy: 0.701, Loss: 0.531 Epoch 0 Batch 135/269 - Train Accuracy: 0.679, Validation Accuracy: 0.704, Loss: 0.541 Epoch 0 Batch 136/269 - Train Accuracy: 0.686, Validation Accuracy: 0.709, Loss: 0.542 Epoch 
0 Batch 137/269 - Train Accuracy: 0.694, Validation Accuracy: 0.711, Loss: 0.529
Epoch 0 Batch 138/269 - Train Accuracy: 0.717, Validation Accuracy: 0.721, Loss: 0.516
Epoch 0 Batch 139/269 - Train Accuracy: 0.743, Validation Accuracy: 0.725, Loss: 0.482
Epoch 0 Batch 140/269 - Train Accuracy: 0.733, Validation Accuracy: 0.734, Loss: 0.505
Epoch 0 Batch 141/269 - Train Accuracy: 0.730, Validation Accuracy: 0.724, Loss: 0.505
...
Epoch 2 Batch 57/269 - Train Accuracy: 0.947, Validation Accuracy: 0.948, Loss: 0.055
Epoch 2 Batch 58/269 - Train Accuracy: 0.947, Validation Accuracy: 0.954, Loss: 0.049
Epoch 2 Batch 59/269 - Train Accuracy: 0.965, Validation Accuracy: 0.955, Loss: 0.039
Epoch 2 Batch 60/269 - Train Accuracy: 0.959, Validation Accuracy: 0.953, Loss: 0.048
Epoch 2 Batch 61/269 - Train Accuracy: 0.952, Validation Accuracy:
0.953, Loss: 0.046 Epoch 2 Batch 62/269 - Train Accuracy: 0.944, Validation Accuracy: 0.955, Loss: 0.054 Epoch 2 Batch 63/269 - Train Accuracy: 0.947, Validation Accuracy: 0.955, Loss: 0.056 Epoch 2 Batch 64/269 - Train Accuracy: 0.952, Validation Accuracy: 0.950, Loss: 0.046 Epoch 2 Batch 65/269 - Train Accuracy: 0.956, Validation Accuracy: 0.955, Loss: 0.048 Epoch 2 Batch 66/269 - Train Accuracy: 0.949, Validation Accuracy: 0.951, Loss: 0.053 Epoch 2 Batch 67/269 - Train Accuracy: 0.953, Validation Accuracy: 0.935, Loss: 0.055 Epoch 2 Batch 68/269 - Train Accuracy: 0.951, Validation Accuracy: 0.934, Loss: 0.056 Epoch 2 Batch 69/269 - Train Accuracy: 0.953, Validation Accuracy: 0.935, Loss: 0.066 Epoch 2 Batch 70/269 - Train Accuracy: 0.954, Validation Accuracy: 0.942, Loss: 0.050 Epoch 2 Batch 71/269 - Train Accuracy: 0.947, Validation Accuracy: 0.947, Loss: 0.062 Epoch 2 Batch 72/269 - Train Accuracy: 0.945, Validation Accuracy: 0.952, Loss: 0.051 Epoch 2 Batch 73/269 - Train Accuracy: 0.951, Validation Accuracy: 0.953, Loss: 0.056 Epoch 2 Batch 74/269 - Train Accuracy: 0.961, Validation Accuracy: 0.952, Loss: 0.047 Epoch 2 Batch 75/269 - Train Accuracy: 0.943, Validation Accuracy: 0.948, Loss: 0.055 Epoch 2 Batch 76/269 - Train Accuracy: 0.951, Validation Accuracy: 0.953, Loss: 0.050 Epoch 2 Batch 77/269 - Train Accuracy: 0.942, Validation Accuracy: 0.951, Loss: 0.047 Epoch 2 Batch 78/269 - Train Accuracy: 0.955, Validation Accuracy: 0.954, Loss: 0.045 Epoch 2 Batch 79/269 - Train Accuracy: 0.943, Validation Accuracy: 0.952, Loss: 0.050 Epoch 2 Batch 80/269 - Train Accuracy: 0.954, Validation Accuracy: 0.948, Loss: 0.050 Epoch 2 Batch 81/269 - Train Accuracy: 0.948, Validation Accuracy: 0.949, Loss: 0.058 Epoch 2 Batch 82/269 - Train Accuracy: 0.966, Validation Accuracy: 0.946, Loss: 0.042 Epoch 2 Batch 83/269 - Train Accuracy: 0.941, Validation Accuracy: 0.942, Loss: 0.061 Epoch 2 Batch 84/269 - Train Accuracy: 0.951, Validation Accuracy: 0.944, Loss: 0.046 
Epoch 2 Batch 85/269 - Train Accuracy: 0.950, Validation Accuracy: 0.945, Loss: 0.048 Epoch 2 Batch 86/269 - Train Accuracy: 0.956, Validation Accuracy: 0.945, Loss: 0.047 Epoch 2 Batch 87/269 - Train Accuracy: 0.961, Validation Accuracy: 0.947, Loss: 0.053 Epoch 2 Batch 88/269 - Train Accuracy: 0.940, Validation Accuracy: 0.948, Loss: 0.054 Epoch 2 Batch 89/269 - Train Accuracy: 0.959, Validation Accuracy: 0.950, Loss: 0.047 Epoch 2 Batch 90/269 - Train Accuracy: 0.952, Validation Accuracy: 0.954, Loss: 0.051 Epoch 2 Batch 91/269 - Train Accuracy: 0.958, Validation Accuracy: 0.951, Loss: 0.044 Epoch 2 Batch 92/269 - Train Accuracy: 0.962, Validation Accuracy: 0.950, Loss: 0.043 Epoch 2 Batch 93/269 - Train Accuracy: 0.963, Validation Accuracy: 0.946, Loss: 0.045 Epoch 2 Batch 94/269 - Train Accuracy: 0.947, Validation Accuracy: 0.949, Loss: 0.058 Epoch 2 Batch 95/269 - Train Accuracy: 0.960, Validation Accuracy: 0.946, Loss: 0.043 Epoch 2 Batch 96/269 - Train Accuracy: 0.952, Validation Accuracy: 0.940, Loss: 0.046 Epoch 2 Batch 97/269 - Train Accuracy: 0.951, Validation Accuracy: 0.945, Loss: 0.049 Epoch 2 Batch 98/269 - Train Accuracy: 0.955, Validation Accuracy: 0.946, Loss: 0.049 Epoch 2 Batch 99/269 - Train Accuracy: 0.952, Validation Accuracy: 0.946, Loss: 0.045 Epoch 2 Batch 100/269 - Train Accuracy: 0.957, Validation Accuracy: 0.946, Loss: 0.049 Epoch 2 Batch 101/269 - Train Accuracy: 0.952, Validation Accuracy: 0.954, Loss: 0.055 Epoch 2 Batch 102/269 - Train Accuracy: 0.949, Validation Accuracy: 0.956, Loss: 0.048 Epoch 2 Batch 103/269 - Train Accuracy: 0.957, Validation Accuracy: 0.956, Loss: 0.051 Epoch 2 Batch 104/269 - Train Accuracy: 0.962, Validation Accuracy: 0.955, Loss: 0.045 Epoch 2 Batch 105/269 - Train Accuracy: 0.952, Validation Accuracy: 0.956, Loss: 0.048 Epoch 2 Batch 106/269 - Train Accuracy: 0.964, Validation Accuracy: 0.953, Loss: 0.041 Epoch 2 Batch 107/269 - Train Accuracy: 0.960, Validation Accuracy: 0.952, Loss: 0.048 Epoch 2 Batch 
108/269 - Train Accuracy: 0.958, Validation Accuracy: 0.955, Loss: 0.044 Epoch 2 Batch 109/269 - Train Accuracy: 0.940, Validation Accuracy: 0.955, Loss: 0.051 Epoch 2 Batch 110/269 - Train Accuracy: 0.951, Validation Accuracy: 0.954, Loss: 0.044 Epoch 2 Batch 111/269 - Train Accuracy: 0.952, Validation Accuracy: 0.951, Loss: 0.052 Epoch 2 Batch 112/269 - Train Accuracy: 0.953, Validation Accuracy: 0.948, Loss: 0.050 Epoch 2 Batch 113/269 - Train Accuracy: 0.954, Validation Accuracy: 0.947, Loss: 0.046 Epoch 2 Batch 114/269 - Train Accuracy: 0.949, Validation Accuracy: 0.944, Loss: 0.047 Epoch 2 Batch 115/269 - Train Accuracy: 0.951, Validation Accuracy: 0.945, Loss: 0.049 Epoch 2 Batch 116/269 - Train Accuracy: 0.966, Validation Accuracy: 0.948, Loss: 0.047 Epoch 2 Batch 117/269 - Train Accuracy: 0.961, Validation Accuracy: 0.946, Loss: 0.042 Epoch 2 Batch 118/269 - Train Accuracy: 0.957, Validation Accuracy: 0.946, Loss: 0.039 Epoch 2 Batch 119/269 - Train Accuracy: 0.950, Validation Accuracy: 0.943, Loss: 0.048 Epoch 2 Batch 120/269 - Train Accuracy: 0.961, Validation Accuracy: 0.949, Loss: 0.050 Epoch 2 Batch 121/269 - Train Accuracy: 0.955, Validation Accuracy: 0.947, Loss: 0.045 Epoch 2 Batch 122/269 - Train Accuracy: 0.948, Validation Accuracy: 0.948, Loss: 0.046 Epoch 2 Batch 123/269 - Train Accuracy: 0.944, Validation Accuracy: 0.951, Loss: 0.047 Epoch 2 Batch 124/269 - Train Accuracy: 0.964, Validation Accuracy: 0.954, Loss: 0.038 Epoch 2 Batch 125/269 - Train Accuracy: 0.962, Validation Accuracy: 0.955, Loss: 0.043 Epoch 2 Batch 126/269 - Train Accuracy: 0.952, Validation Accuracy: 0.961, Loss: 0.045 Epoch 2 Batch 127/269 - Train Accuracy: 0.963, Validation Accuracy: 0.958, Loss: 0.050 Epoch 2 Batch 128/269 - Train Accuracy: 0.955, Validation Accuracy: 0.958, Loss: 0.044 Epoch 2 Batch 129/269 - Train Accuracy: 0.948, Validation Accuracy: 0.959, Loss: 0.048 Epoch 2 Batch 130/269 - Train Accuracy: 0.947, Validation Accuracy: 0.958, Loss: 0.052 Epoch 2 
Batch 131/269 - Train Accuracy: 0.954, Validation Accuracy: 0.961, Loss: 0.050 Epoch 2 Batch 132/269 - Train Accuracy: 0.944, Validation Accuracy: 0.959, Loss: 0.053 Epoch 2 Batch 133/269 - Train Accuracy: 0.959, Validation Accuracy: 0.957, Loss: 0.041 Epoch 2 Batch 134/269 - Train Accuracy: 0.951, Validation Accuracy: 0.958, Loss: 0.046 Epoch 2 Batch 135/269 - Train Accuracy: 0.964, Validation Accuracy: 0.963, Loss: 0.044 Epoch 2 Batch 136/269 - Train Accuracy: 0.938, Validation Accuracy: 0.961, Loss: 0.050 Epoch 2 Batch 137/269 - Train Accuracy: 0.953, Validation Accuracy: 0.960, Loss: 0.053 Epoch 2 Batch 138/269 - Train Accuracy: 0.953, Validation Accuracy: 0.960, Loss: 0.041 Epoch 2 Batch 139/269 - Train Accuracy: 0.954, Validation Accuracy: 0.954, Loss: 0.040 Epoch 2 Batch 140/269 - Train Accuracy: 0.957, Validation Accuracy: 0.951, Loss: 0.050 Epoch 2 Batch 141/269 - Train Accuracy: 0.964, Validation Accuracy: 0.953, Loss: 0.047 Epoch 2 Batch 142/269 - Train Accuracy: 0.950, Validation Accuracy: 0.950, Loss: 0.048 Epoch 2 Batch 143/269 - Train Accuracy: 0.966, Validation Accuracy: 0.953, Loss: 0.045 Epoch 2 Batch 144/269 - Train Accuracy: 0.962, Validation Accuracy: 0.953, Loss: 0.036 Epoch 2 Batch 145/269 - Train Accuracy: 0.960, Validation Accuracy: 0.957, Loss: 0.040 Epoch 2 Batch 146/269 - Train Accuracy: 0.955, Validation Accuracy: 0.960, Loss: 0.043 Epoch 2 Batch 147/269 - Train Accuracy: 0.956, Validation Accuracy: 0.963, Loss: 0.048 Epoch 2 Batch 148/269 - Train Accuracy: 0.965, Validation Accuracy: 0.959, Loss: 0.039 Epoch 2 Batch 149/269 - Train Accuracy: 0.952, Validation Accuracy: 0.956, Loss: 0.051 Epoch 2 Batch 150/269 - Train Accuracy: 0.953, Validation Accuracy: 0.956, Loss: 0.047 Epoch 2 Batch 151/269 - Train Accuracy: 0.968, Validation Accuracy: 0.960, Loss: 0.043 Epoch 2 Batch 152/269 - Train Accuracy: 0.960, Validation Accuracy: 0.956, Loss: 0.041 Epoch 2 Batch 153/269 - Train Accuracy: 0.961, Validation Accuracy: 0.955, Loss: 0.039 Epoch 
2 Batch 154/269 - Train Accuracy: 0.965, Validation Accuracy: 0.958, Loss: 0.043 Epoch 2 Batch 155/269 - Train Accuracy: 0.954, Validation Accuracy: 0.958, Loss: 0.042 Epoch 2 Batch 156/269 - Train Accuracy: 0.956, Validation Accuracy: 0.955, Loss: 0.042 Epoch 2 Batch 157/269 - Train Accuracy: 0.959, Validation Accuracy: 0.955, Loss: 0.038 Epoch 2 Batch 158/269 - Train Accuracy: 0.946, Validation Accuracy: 0.955, Loss: 0.045 Epoch 2 Batch 159/269 - Train Accuracy: 0.963, Validation Accuracy: 0.956, Loss: 0.042 Epoch 2 Batch 160/269 - Train Accuracy: 0.968, Validation Accuracy: 0.962, Loss: 0.042 Epoch 2 Batch 161/269 - Train Accuracy: 0.971, Validation Accuracy: 0.960, Loss: 0.039 Epoch 2 Batch 162/269 - Train Accuracy: 0.964, Validation Accuracy: 0.962, Loss: 0.042 Epoch 2 Batch 163/269 - Train Accuracy: 0.968, Validation Accuracy: 0.958, Loss: 0.041 Epoch 2 Batch 164/269 - Train Accuracy: 0.969, Validation Accuracy: 0.956, Loss: 0.041 Epoch 2 Batch 165/269 - Train Accuracy: 0.963, Validation Accuracy: 0.950, Loss: 0.043 Epoch 2 Batch 166/269 - Train Accuracy: 0.963, Validation Accuracy: 0.950, Loss: 0.040 Epoch 2 Batch 167/269 - Train Accuracy: 0.947, Validation Accuracy: 0.959, Loss: 0.045 Epoch 2 Batch 168/269 - Train Accuracy: 0.960, Validation Accuracy: 0.962, Loss: 0.044 Epoch 2 Batch 169/269 - Train Accuracy: 0.954, Validation Accuracy: 0.965, Loss: 0.040 Epoch 2 Batch 170/269 - Train Accuracy: 0.961, Validation Accuracy: 0.967, Loss: 0.039 Epoch 2 Batch 171/269 - Train Accuracy: 0.969, Validation Accuracy: 0.964, Loss: 0.039 Epoch 2 Batch 172/269 - Train Accuracy: 0.960, Validation Accuracy: 0.959, Loss: 0.045 Epoch 2 Batch 173/269 - Train Accuracy: 0.955, Validation Accuracy: 0.959, Loss: 0.039 Epoch 2 Batch 174/269 - Train Accuracy: 0.969, Validation Accuracy: 0.955, Loss: 0.045 Epoch 2 Batch 175/269 - Train Accuracy: 0.957, Validation Accuracy: 0.958, Loss: 0.051 Epoch 2 Batch 176/269 - Train Accuracy: 0.959, Validation Accuracy: 0.954, Loss: 0.046 
Epoch 2 Batch 177/269 - Train Accuracy: 0.969, Validation Accuracy: 0.956, Loss: 0.040 Epoch 2 Batch 178/269 - Train Accuracy: 0.971, Validation Accuracy: 0.956, Loss: 0.036 Epoch 2 Batch 179/269 - Train Accuracy: 0.952, Validation Accuracy: 0.957, Loss: 0.042 Epoch 2 Batch 180/269 - Train Accuracy: 0.964, Validation Accuracy: 0.960, Loss: 0.037 Epoch 2 Batch 181/269 - Train Accuracy: 0.957, Validation Accuracy: 0.961, Loss: 0.046 Epoch 2 Batch 182/269 - Train Accuracy: 0.959, Validation Accuracy: 0.964, Loss: 0.042 Epoch 2 Batch 183/269 - Train Accuracy: 0.955, Validation Accuracy: 0.964, Loss: 0.034 Epoch 2 Batch 184/269 - Train Accuracy: 0.967, Validation Accuracy: 0.966, Loss: 0.040 Epoch 2 Batch 185/269 - Train Accuracy: 0.969, Validation Accuracy: 0.964, Loss: 0.045 Epoch 2 Batch 186/269 - Train Accuracy: 0.962, Validation Accuracy: 0.959, Loss: 0.035 Epoch 2 Batch 187/269 - Train Accuracy: 0.963, Validation Accuracy: 0.959, Loss: 0.038 Epoch 2 Batch 188/269 - Train Accuracy: 0.968, Validation Accuracy: 0.959, Loss: 0.037 Epoch 2 Batch 189/269 - Train Accuracy: 0.965, Validation Accuracy: 0.958, Loss: 0.040 Epoch 2 Batch 190/269 - Train Accuracy: 0.958, Validation Accuracy: 0.960, Loss: 0.039 Epoch 2 Batch 191/269 - Train Accuracy: 0.957, Validation Accuracy: 0.960, Loss: 0.040 Epoch 2 Batch 192/269 - Train Accuracy: 0.965, Validation Accuracy: 0.962, Loss: 0.043 Epoch 2 Batch 193/269 - Train Accuracy: 0.956, Validation Accuracy: 0.964, Loss: 0.039 Epoch 2 Batch 194/269 - Train Accuracy: 0.964, Validation Accuracy: 0.960, Loss: 0.040 Epoch 2 Batch 195/269 - Train Accuracy: 0.962, Validation Accuracy: 0.961, Loss: 0.036 Epoch 2 Batch 196/269 - Train Accuracy: 0.957, Validation Accuracy: 0.960, Loss: 0.039 Epoch 2 Batch 197/269 - Train Accuracy: 0.962, Validation Accuracy: 0.956, Loss: 0.041 Epoch 2 Batch 198/269 - Train Accuracy: 0.959, Validation Accuracy: 0.962, Loss: 0.044 Epoch 2 Batch 199/269 - Train Accuracy: 0.966, Validation Accuracy: 0.963, Loss: 
0.046 Epoch 2 Batch 200/269 - Train Accuracy: 0.971, Validation Accuracy: 0.958, Loss: 0.037 Epoch 2 Batch 201/269 - Train Accuracy: 0.959, Validation Accuracy: 0.956, Loss: 0.041 Epoch 2 Batch 202/269 - Train Accuracy: 0.954, Validation Accuracy: 0.957, Loss: 0.041 Epoch 2 Batch 203/269 - Train Accuracy: 0.957, Validation Accuracy: 0.959, Loss: 0.044 Epoch 2 Batch 204/269 - Train Accuracy: 0.964, Validation Accuracy: 0.964, Loss: 0.038 Epoch 2 Batch 205/269 - Train Accuracy: 0.968, Validation Accuracy: 0.962, Loss: 0.039 Epoch 2 Batch 206/269 - Train Accuracy: 0.949, Validation Accuracy: 0.963, Loss: 0.048 Epoch 2 Batch 207/269 - Train Accuracy: 0.954, Validation Accuracy: 0.959, Loss: 0.040 Epoch 2 Batch 208/269 - Train Accuracy: 0.962, Validation Accuracy: 0.965, Loss: 0.044 Epoch 2 Batch 209/269 - Train Accuracy: 0.967, Validation Accuracy: 0.961, Loss: 0.039 Epoch 2 Batch 210/269 - Train Accuracy: 0.955, Validation Accuracy: 0.960, Loss: 0.040 Epoch 2 Batch 211/269 - Train Accuracy: 0.961, Validation Accuracy: 0.965, Loss: 0.046 Epoch 2 Batch 212/269 - Train Accuracy: 0.960, Validation Accuracy: 0.967, Loss: 0.045 Epoch 2 Batch 213/269 - Train Accuracy: 0.958, Validation Accuracy: 0.963, Loss: 0.038 Epoch 2 Batch 214/269 - Train Accuracy: 0.954, Validation Accuracy: 0.965, Loss: 0.041 Epoch 2 Batch 215/269 - Train Accuracy: 0.955, Validation Accuracy: 0.961, Loss: 0.041 Epoch 2 Batch 216/269 - Train Accuracy: 0.947, Validation Accuracy: 0.959, Loss: 0.048 Epoch 2 Batch 217/269 - Train Accuracy: 0.953, Validation Accuracy: 0.960, Loss: 0.044 Epoch 2 Batch 218/269 - Train Accuracy: 0.959, Validation Accuracy: 0.963, Loss: 0.038 Epoch 2 Batch 219/269 - Train Accuracy: 0.965, Validation Accuracy: 0.961, Loss: 0.042 Epoch 2 Batch 220/269 - Train Accuracy: 0.955, Validation Accuracy: 0.960, Loss: 0.039 Epoch 2 Batch 221/269 - Train Accuracy: 0.962, Validation Accuracy: 0.959, Loss: 0.039 Epoch 2 Batch 222/269 - Train Accuracy: 0.963, Validation Accuracy: 0.964, 
Loss: 0.037 Epoch 2 Batch 223/269 - Train Accuracy: 0.957, Validation Accuracy: 0.969, Loss: 0.039 Epoch 2 Batch 224/269 - Train Accuracy: 0.951, Validation Accuracy: 0.967, Loss: 0.049 Epoch 2 Batch 225/269 - Train Accuracy: 0.948, Validation Accuracy: 0.964, Loss: 0.038 Epoch 2 Batch 226/269 - Train Accuracy: 0.957, Validation Accuracy: 0.963, Loss: 0.041 Epoch 2 Batch 227/269 - Train Accuracy: 0.965, Validation Accuracy: 0.971, Loss: 0.050 Epoch 2 Batch 228/269 - Train Accuracy: 0.961, Validation Accuracy: 0.971, Loss: 0.037 Epoch 2 Batch 229/269 - Train Accuracy: 0.960, Validation Accuracy: 0.969, Loss: 0.038 Epoch 2 Batch 230/269 - Train Accuracy: 0.969, Validation Accuracy: 0.971, Loss: 0.036 Epoch 2 Batch 231/269 - Train Accuracy: 0.959, Validation Accuracy: 0.970, Loss: 0.045 Epoch 2 Batch 232/269 - Train Accuracy: 0.959, Validation Accuracy: 0.972, Loss: 0.039 Epoch 2 Batch 233/269 - Train Accuracy: 0.971, Validation Accuracy: 0.970, Loss: 0.044 Epoch 2 Batch 234/269 - Train Accuracy: 0.958, Validation Accuracy: 0.968, Loss: 0.040 Epoch 2 Batch 235/269 - Train Accuracy: 0.983, Validation Accuracy: 0.967, Loss: 0.029 Epoch 2 Batch 236/269 - Train Accuracy: 0.965, Validation Accuracy: 0.961, Loss: 0.036 Epoch 2 Batch 237/269 - Train Accuracy: 0.969, Validation Accuracy: 0.961, Loss: 0.032 Epoch 2 Batch 238/269 - Train Accuracy: 0.964, Validation Accuracy: 0.959, Loss: 0.038 Epoch 2 Batch 239/269 - Train Accuracy: 0.959, Validation Accuracy: 0.957, Loss: 0.037 Epoch 2 Batch 240/269 - Train Accuracy: 0.966, Validation Accuracy: 0.960, Loss: 0.037 Epoch 2 Batch 241/269 - Train Accuracy: 0.946, Validation Accuracy: 0.964, Loss: 0.047 Epoch 2 Batch 242/269 - Train Accuracy: 0.970, Validation Accuracy: 0.963, Loss: 0.036 Epoch 2 Batch 243/269 - Train Accuracy: 0.967, Validation Accuracy: 0.962, Loss: 0.035 Epoch 2 Batch 244/269 - Train Accuracy: 0.957, Validation Accuracy: 0.962, Loss: 0.038 Epoch 2 Batch 245/269 - Train Accuracy: 0.956, Validation Accuracy: 
0.964, Loss: 0.041 Epoch 2 Batch 246/269 - Train Accuracy: 0.954, Validation Accuracy: 0.964, Loss: 0.042 Epoch 2 Batch 247/269 - Train Accuracy: 0.964, Validation Accuracy: 0.962, Loss: 0.038 Epoch 2 Batch 248/269 - Train Accuracy: 0.964, Validation Accuracy: 0.958, Loss: 0.031 Epoch 2 Batch 249/269 - Train Accuracy: 0.961, Validation Accuracy: 0.955, Loss: 0.035 Epoch 2 Batch 250/269 - Train Accuracy: 0.962, Validation Accuracy: 0.953, Loss: 0.041 Epoch 2 Batch 251/269 - Train Accuracy: 0.969, Validation Accuracy: 0.960, Loss: 0.037 Epoch 2 Batch 252/269 - Train Accuracy: 0.971, Validation Accuracy: 0.960, Loss: 0.034 Epoch 2 Batch 253/269 - Train Accuracy: 0.956, Validation Accuracy: 0.961, Loss: 0.037 Epoch 2 Batch 254/269 - Train Accuracy: 0.970, Validation Accuracy: 0.964, Loss: 0.038 Epoch 2 Batch 255/269 - Train Accuracy: 0.958, Validation Accuracy: 0.964, Loss: 0.039 Epoch 2 Batch 256/269 - Train Accuracy: 0.968, Validation Accuracy: 0.963, Loss: 0.035 Epoch 2 Batch 257/269 - Train Accuracy: 0.961, Validation Accuracy: 0.966, Loss: 0.040 Epoch 2 Batch 258/269 - Train Accuracy: 0.964, Validation Accuracy: 0.965, Loss: 0.039 Epoch 2 Batch 259/269 - Train Accuracy: 0.958, Validation Accuracy: 0.959, Loss: 0.038 Epoch 2 Batch 260/269 - Train Accuracy: 0.961, Validation Accuracy: 0.960, Loss: 0.040 Epoch 2 Batch 261/269 - Train Accuracy: 0.963, Validation Accuracy: 0.965, Loss: 0.035 Epoch 2 Batch 262/269 - Train Accuracy: 0.953, Validation Accuracy: 0.964, Loss: 0.037 Epoch 2 Batch 263/269 - Train Accuracy: 0.963, Validation Accuracy: 0.964, Loss: 0.038 Epoch 2 Batch 264/269 - Train Accuracy: 0.946, Validation Accuracy: 0.962, Loss: 0.040 Epoch 2 Batch 265/269 - Train Accuracy: 0.961, Validation Accuracy: 0.964, Loss: 0.040 Epoch 2 Batch 266/269 - Train Accuracy: 0.970, Validation Accuracy: 0.968, Loss: 0.033 Epoch 2 Batch 267/269 - Train Accuracy: 0.966, Validation Accuracy: 0.969, Loss: 0.044 Model Trained and Saved ###Markdown Save ParametersSave the 
`batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int`- Convert words not in the vocabulary to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ word_ids = [vocab_to_int.get(word.lower(), vocab_to_int['<UNK>']) for word in sentence.split()] return word_ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw the frost in summer .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) ###Output Input Word Ids: [226, 118, 224, 2, 192, 57, 128] English Words: ['he', 'saw', 'the', '<UNK>', 'in', 'summer', '.'] Prediction Word Ids: [110, 79, 113, 25, 184, 122, 146, 318, 1] French Words: ['il', 'a', 'vu', 'la', 'france', 'en', 'été', '.', '<EOS>'] ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end. You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ source_id_text = [[source_vocab_to_int.get(word, source_vocab_to_int['<UNK>']) for word in line.split(' ')] for line in source_text.split('\n')] target_id_text = [[target_vocab_to_int.get(word, target_vocab_to_int['<UNK>']) for word in line.split(' ')] +[target_vocab_to_int['<EOS>']] for line in target_text.split('\n')] return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
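Before restarting from the checkpoint, the `text_to_ids()` mapping above can be sanity-checked on a toy example. The mini-vocabularies below are made up for illustration; they are not the ones produced by `helper.preprocess_and_save_data`:

```python
# Toy run of the same word-to-id mapping (hypothetical mini-vocabularies).
source_vocab_to_int = {'<UNK>': 0, 'hello': 1, 'world': 2}
target_vocab_to_int = {'<UNK>': 0, '<EOS>': 1, 'bonjour': 2, 'monde': 3}

def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    source_id_text = [[source_vocab_to_int.get(w, source_vocab_to_int['<UNK>'])
                       for w in line.split()] for line in source_text.split('\n')]
    # Append <EOS> so the decoder learns where each target sentence ends.
    target_id_text = [[target_vocab_to_int.get(w, target_vocab_to_int['<UNK>'])
                       for w in line.split()] + [target_vocab_to_int['<EOS>']]
                      for line in target_text.split('\n')]
    return source_id_text, target_id_text

source_ids, target_ids = text_to_ids('hello world', 'bonjour monde',
                                     source_vocab_to_int, target_vocab_to_int)
print(source_ids)  # [[1, 2]]
print(target_ids)  # [[2, 3, 1]] -- note the trailing <EOS> id
```

Only the target side gets the extra `<EOS>` id; the source sentences are fed to the encoder as-is.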
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.1.0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders as the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # The shapes passed to tf.placeholder already fix the required ranks (2, 2, 0, 0, 1, 1) input_text = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') lr = tf.placeholder(tf.float32, name='learning_rate') keep_prob = tf.placeholder(tf.float32, name='keep_prob') target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length') max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len') source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length') return input_text, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoder InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
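The transformation is easy to see on a concrete batch. This sketch mimics the `tf.strided_slice`/`tf.concat` steps with plain NumPy slicing; the ids, including `<GO> = 1`, are made up for illustration:

```python
import numpy as np

# Toy batch of target sequences (hypothetical ids: 3 = <EOS>, 0 = <PAD>).
GO_ID = 1
target_data = np.array([[11, 12, 13, 3],
                        [21, 22, 3, 0]])

# Drop the last id of each row, then prepend a <GO> column.
ending = target_data[:, :-1]
dec_input = np.concatenate(
    [np.full((target_data.shape[0], 1), GO_ID), ending], axis=1)
print(dec_input)
# [[ 1 11 12 13]
#  [ 1 21 22  3]]
```

The decoder is thus trained to emit each target word given `<GO>` plus the previous target words, which is why the final id of every row can be discarded.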
###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ ending = tf.strided_slice(target_data, [0,0], [batch_size, -1],[1,1]) prepro_target_data = tf.concat([tf.fill([batch_size,1], target_vocab_to_int['<GO>']), ending], 1) return prepro_target_data """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ #Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) #RNN Cell def make_cell(rnn_size): enc_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1,0.1, seed=2)) enc_cell = tf.contrib.rnn.DropoutWrapper(enc_cell, keep_prob) return enc_cell enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32) return enc_output, enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ #Helper for the training process training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) #BasicDecoder training_decoder = 
tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) #Perform dynamic decoding using the decoder training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length)[0] return training_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens') #Helper for the inference process inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) #Basic decoder
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) #perform dynamic decoding using the decoder inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)[0] return inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits. Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
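As background for the inference path, the greedy decoding that `GreedyEmbeddingHelper` drives can be sketched in plain Python. The `step` callable is a hypothetical stand-in for one pass through the decoder cell plus output layer followed by an argmax; ids 0 and 2 are made-up `<GO>`/`<EOS>` ids:

```python
def greedy_decode(step, go_id, eos_id, max_length):
    # Start from <GO>, feed each step's argmax prediction back in as the
    # next input, and stop at <EOS> or after max_length steps -- the loop
    # GreedyEmbeddingHelper performs inside dynamic_decode.
    token, output = go_id, []
    for _ in range(max_length):
        token = step(token)          # argmax over the vocabulary
        output.append(token)
        if token == eos_id:
            break
    return output

# Toy "decoder" that deterministically maps each id to the next one.
next_id = {0: 4, 4: 7, 7: 2}         # 0 = <GO>, 2 = <EOS> (hypothetical)
print(greedy_decode(lambda t: next_id[t], go_id=0, eos_id=2, max_length=10))
# [4, 7, 2]
```

This also shows why `maximum_iterations=max_target_sequence_length` is passed to `dynamic_decode`: without the cap, a decoder that never emits `<EOS>` would loop indefinitely.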
###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # decoder embedding dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # construct the decoder cell def make_cell(rnn_size): dec_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return dec_cell dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) # Dense Layer to translate the decoder's output at each time # step into a choice from the target vocabulary output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean=0.0, stddev=0.1)) # set up a training decoder and an inference decoder # Training Decoder with tf.variable_scope("decode"): training_decoder_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length,output_layer, keep_prob) # inference Decoder with tf.variable_scope("decode", reuse=True): inference_decoder_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], 
target_vocab_to_int['<EOS>'], max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ #pass the input data
through the encoder _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) #prepare the target sequence we will feed to the decoder in training mode dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) #Pass encoder state and decoder inputs to the decoders training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 2 # Batch Size batch_size = 256 # RNN Size rnn_size = 128 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 200 decoding_embedding_size = 200 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.5 display_step = 10 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
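The graph cell below masks the loss with `tf.sequence_mask` before `sequence_loss` averages it, so padded positions do not influence training. A small NumPy sketch of that masking (the lengths and per-position loss values are made up for illustration):

```python
import numpy as np

# NumPy sketch of tf.sequence_mask plus the masked averaging done by
# tf.contrib.seq2seq.sequence_loss: positions beyond each sequence's
# true length are zeroed out before the loss is averaged.
lengths = np.array([2, 3])        # true lengths of two sequences in a batch
max_len = 3                       # padded width of the batch
mask = np.arange(max_len)[None, :] < lengths[:, None]
print(mask.astype(int).tolist())  # [[1, 1, 0], [1, 1, 1]]

# Hypothetical per-position cross-entropy values; the 9.9 sits on a
# padded position and must not influence the result.
per_position_loss = np.array([[0.5, 0.3, 9.9],
                              [0.2, 0.4, 0.6]])
masked_mean = (per_position_loss * mask).sum() / mask.sum()
print(round(float(masked_mean), 4))  # 0.4
```

Note that only five positions contribute to the denominator, which is why a batch full of short, heavily padded sentences still yields a comparable loss scale.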
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for 
sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
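Before training, it can help to sanity-check the batching helpers. This is a self-contained copy of `pad_sentence_batch` from the cell above, exercised on toy data:

```python
# Self-contained copy of pad_sentence_batch: every sentence in a batch is
# right-padded with pad_int up to the length of the longest sentence in
# that same batch (not a global maximum).
def pad_sentence_batch(sentence_batch, pad_int):
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

batch = [[5, 6], [7, 8, 9, 10], [11]]  # made-up word ids
print(pad_sentence_batch(batch, pad_int=0))
# [[5, 6, 0, 0], [7, 8, 9, 10], [11, 0, 0, 0]]
```

Padding per batch rather than per corpus keeps each batch only as wide as its longest member, which saves computation on the many short sentences in this dataset.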
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 10/538 - Train Accuracy: 0.3143, Validation Accuracy: 0.3912, Loss: 3.2014 Epoch 0 Batch 20/538 - Train Accuracy: 0.3685, Validation Accuracy: 0.4139, Loss: 2.6942 Epoch 0 Batch 30/538 - Train Accuracy: 0.4242, Validation Accuracy: 0.4759, Loss: 2.5218 Epoch 0 Batch 40/538 - Train Accuracy: 0.4744, Validation Accuracy: 0.4803, Loss: 2.0246 Epoch 0 Batch 50/538 - Train Accuracy: 0.4889, Validation Accuracy: 0.5320, Loss: 1.9395 Epoch 0 Batch 60/538 - Train Accuracy: 0.5053, Validation Accuracy: 0.5451, Loss: 1.7129 Epoch 0 Batch 70/538 - Train Accuracy: 0.4978, Validation Accuracy: 0.5311, Loss: 1.4774 Epoch 0 Batch 80/538 - Train Accuracy: 0.4477, Validation Accuracy: 0.4961, Loss: 1.3585 Epoch 0 Batch 90/538 - Train Accuracy: 0.4983, Validation Accuracy: 0.5206, Loss: 1.1479 Epoch 0 Batch 100/538 - Train Accuracy: 0.5223, Validation Accuracy: 0.5572, Loss: 1.0095 Epoch 0 Batch 110/538 - Train Accuracy: 0.5398, Validation Accuracy: 0.5763, Loss: 0.9685 Epoch 0 Batch 120/538 - Train Accuracy: 0.5832, Validation Accuracy: 0.6046, Loss: 0.8603 Epoch 0 Batch 130/538 - Train Accuracy: 0.5738, Validation Accuracy: 0.6053, Loss: 0.7795 Epoch 0 Batch 140/538 - Train Accuracy: 0.5639, Validation Accuracy: 0.6152, Loss: 0.7999 Epoch 0 Batch 150/538 - Train Accuracy: 0.6123, Validation Accuracy: 0.6207, Loss: 0.7182 Epoch 0 Batch 160/538 - Train Accuracy: 0.5973, Validation Accuracy: 0.6323, Loss: 0.6546 Epoch 0 Batch 170/538 - Train Accuracy: 0.6148, Validation Accuracy: 0.6422, Loss: 0.6468 Epoch 0 Batch 180/538 - Train Accuracy: 0.6598, Validation Accuracy: 0.6470, Loss: 0.6127 Epoch 0 Batch 190/538 - Train Accuracy: 0.6561, Validation Accuracy: 0.6468, Loss: 0.6076 Epoch 0 Batch 200/538 - Train Accuracy: 0.6484, Validation 
Accuracy: 0.6472, Loss: 0.5743 Epoch 0 Batch 210/538 - Train Accuracy: 0.6572, Validation Accuracy: 0.6625, Loss: 0.5423 Epoch 0 Batch 220/538 - Train Accuracy: 0.6589, Validation Accuracy: 0.6598, Loss: 0.5174 Epoch 0 Batch 230/538 - Train Accuracy: 0.6760, Validation Accuracy: 0.6820, Loss: 0.5194 Epoch 0 Batch 240/538 - Train Accuracy: 0.6982, Validation Accuracy: 0.6768, Loss: 0.5003 Epoch 0 Batch 250/538 - Train Accuracy: 0.7088, Validation Accuracy: 0.7044, Loss: 0.4820 Epoch 0 Batch 260/538 - Train Accuracy: 0.7005, Validation Accuracy: 0.7013, Loss: 0.4538 Epoch 0 Batch 270/538 - Train Accuracy: 0.7084, Validation Accuracy: 0.7012, Loss: 0.4381 Epoch 0 Batch 280/538 - Train Accuracy: 0.7560, Validation Accuracy: 0.7173, Loss: 0.3965 Epoch 0 Batch 290/538 - Train Accuracy: 0.7453, Validation Accuracy: 0.7193, Loss: 0.3921 Epoch 0 Batch 300/538 - Train Accuracy: 0.7468, Validation Accuracy: 0.7241, Loss: 0.3721 Epoch 0 Batch 310/538 - Train Accuracy: 0.7805, Validation Accuracy: 0.7560, Loss: 0.3790 Epoch 0 Batch 320/538 - Train Accuracy: 0.7370, Validation Accuracy: 0.7408, Loss: 0.3462 Epoch 0 Batch 330/538 - Train Accuracy: 0.7814, Validation Accuracy: 0.7576, Loss: 0.3205 Epoch 0 Batch 340/538 - Train Accuracy: 0.7955, Validation Accuracy: 0.7745, Loss: 0.3273 Epoch 0 Batch 350/538 - Train Accuracy: 0.8151, Validation Accuracy: 0.7807, Loss: 0.3172 Epoch 0 Batch 360/538 - Train Accuracy: 0.7650, Validation Accuracy: 0.7846, Loss: 0.2994 Epoch 0 Batch 370/538 - Train Accuracy: 0.8002, Validation Accuracy: 0.7770, Loss: 0.2906 Epoch 0 Batch 380/538 - Train Accuracy: 0.8111, Validation Accuracy: 0.8084, Loss: 0.2641 Epoch 0 Batch 390/538 - Train Accuracy: 0.8348, Validation Accuracy: 0.8038, Loss: 0.2424 Epoch 0 Batch 400/538 - Train Accuracy: 0.8263, Validation Accuracy: 0.8100, Loss: 0.2449 Epoch 0 Batch 410/538 - Train Accuracy: 0.8352, Validation Accuracy: 0.8382, Loss: 0.2356 Epoch 0 Batch 420/538 - Train Accuracy: 0.8572, Validation Accuracy: 0.8462, 
Loss: 0.2231 Epoch 0 Batch 430/538 - Train Accuracy: 0.8412, Validation Accuracy: 0.8194, Loss: 0.2014 Epoch 0 Batch 440/538 - Train Accuracy: 0.8441, Validation Accuracy: 0.8358, Loss: 0.2211 Epoch 0 Batch 450/538 - Train Accuracy: 0.8445, Validation Accuracy: 0.8624, Loss: 0.2064 Epoch 0 Batch 460/538 - Train Accuracy: 0.8344, Validation Accuracy: 0.8505, Loss: 0.1882 Epoch 0 Batch 470/538 - Train Accuracy: 0.8679, Validation Accuracy: 0.8377, Loss: 0.1699 Epoch 0 Batch 480/538 - Train Accuracy: 0.8756, Validation Accuracy: 0.8398, Loss: 0.1618 Epoch 0 Batch 490/538 - Train Accuracy: 0.8694, Validation Accuracy: 0.8418, Loss: 0.1579 Epoch 0 Batch 500/538 - Train Accuracy: 0.9087, Validation Accuracy: 0.8688, Loss: 0.1314 Epoch 0 Batch 510/538 - Train Accuracy: 0.8826, Validation Accuracy: 0.8755, Loss: 0.1447 Epoch 0 Batch 520/538 - Train Accuracy: 0.8871, Validation Accuracy: 0.8787, Loss: 0.1529 Epoch 0 Batch 530/538 - Train Accuracy: 0.8504, Validation Accuracy: 0.8704, Loss: 0.1487 Epoch 1 Batch 10/538 - Train Accuracy: 0.9088, Validation Accuracy: 0.8786, Loss: 0.1366 Epoch 1 Batch 20/538 - Train Accuracy: 0.8914, Validation Accuracy: 0.8805, Loss: 0.1245 Epoch 1 Batch 30/538 - Train Accuracy: 0.8738, Validation Accuracy: 0.8640, Loss: 0.1289 Epoch 1 Batch 40/538 - Train Accuracy: 0.9034, Validation Accuracy: 0.8839, Loss: 0.1020 Epoch 1 Batch 50/538 - Train Accuracy: 0.8967, Validation Accuracy: 0.8908, Loss: 0.1033 Epoch 1 Batch 60/538 - Train Accuracy: 0.9043, Validation Accuracy: 0.8910, Loss: 0.1028 Epoch 1 Batch 70/538 - Train Accuracy: 0.8856, Validation Accuracy: 0.8665, Loss: 0.0997 Epoch 1 Batch 80/538 - Train Accuracy: 0.9082, Validation Accuracy: 0.8819, Loss: 0.0991 Epoch 1 Batch 90/538 - Train Accuracy: 0.8940, Validation Accuracy: 0.8865, Loss: 0.1107 Epoch 1 Batch 100/538 - Train Accuracy: 0.9162, Validation Accuracy: 0.9134, Loss: 0.0823 Epoch 1 Batch 110/538 - Train Accuracy: 0.9004, Validation Accuracy: 0.9107, Loss: 0.0900 Epoch 1 Batch 
120/538 - Train Accuracy: 0.9254, Validation Accuracy: 0.8773, Loss: 0.0716 Epoch 1 Batch 130/538 - Train Accuracy: 0.9200, Validation Accuracy: 0.8999, Loss: 0.0788 Epoch 1 Batch 140/538 - Train Accuracy: 0.9018, Validation Accuracy: 0.8945, Loss: 0.0946 Epoch 1 Batch 150/538 - Train Accuracy: 0.9043, Validation Accuracy: 0.9185, Loss: 0.0766 Epoch 1 Batch 160/538 - Train Accuracy: 0.9068, Validation Accuracy: 0.9125, Loss: 0.0714 Epoch 1 Batch 170/538 - Train Accuracy: 0.8826, Validation Accuracy: 0.9043, Loss: 0.0824 Epoch 1 Batch 180/538 - Train Accuracy: 0.9304, Validation Accuracy: 0.8874, Loss: 0.0720 Epoch 1 Batch 190/538 - Train Accuracy: 0.8951, Validation Accuracy: 0.9171, Loss: 0.0938 Epoch 1 Batch 200/538 - Train Accuracy: 0.9113, Validation Accuracy: 0.9126, Loss: 0.0605 Epoch 1 Batch 210/538 - Train Accuracy: 0.9150, Validation Accuracy: 0.9082, Loss: 0.0752 Epoch 1 Batch 220/538 - Train Accuracy: 0.9208, Validation Accuracy: 0.9000, Loss: 0.0655 Epoch 1 Batch 230/538 - Train Accuracy: 0.9031, Validation Accuracy: 0.9231, Loss: 0.0661 Epoch 1 Batch 240/538 - Train Accuracy: 0.9232, Validation Accuracy: 0.9293, Loss: 0.0700 Epoch 1 Batch 250/538 - Train Accuracy: 0.9381, Validation Accuracy: 0.9135, Loss: 0.0651 Epoch 1 Batch 260/538 - Train Accuracy: 0.8821, Validation Accuracy: 0.9100, Loss: 0.0705 Epoch 1 Batch 270/538 - Train Accuracy: 0.9314, Validation Accuracy: 0.9006, Loss: 0.0566 Epoch 1 Batch 280/538 - Train Accuracy: 0.9302, Validation Accuracy: 0.9052, Loss: 0.0537 Epoch 1 Batch 290/538 - Train Accuracy: 0.9309, Validation Accuracy: 0.9231, Loss: 0.0549 Epoch 1 Batch 300/538 - Train Accuracy: 0.9150, Validation Accuracy: 0.9073, Loss: 0.0625 Epoch 1 Batch 310/538 - Train Accuracy: 0.9436, Validation Accuracy: 0.9185, Loss: 0.0598 Epoch 1 Batch 320/538 - Train Accuracy: 0.9189, Validation Accuracy: 0.9231, Loss: 0.0574 Epoch 1 Batch 330/538 - Train Accuracy: 0.9224, Validation Accuracy: 0.9034, Loss: 0.0559 Epoch 1 Batch 340/538 - Train 
Accuracy: 0.9109, Validation Accuracy: 0.9212, Loss: 0.0599 Epoch 1 Batch 350/538 - Train Accuracy: 0.9260, Validation Accuracy: 0.9229, Loss: 0.0613 Epoch 1 Batch 360/538 - Train Accuracy: 0.9096, Validation Accuracy: 0.9366, Loss: 0.0561 ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ sentence = sentence.lower().split() return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [17, 225, 149, 73, 169, 96, 208] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [14, 239, 129, 275, 323, 48, 151, 353, 1] French Words: il a vu un gros camion jaune . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end. You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ #REF: Seq2Seq Lab source_id_text = [[source_vocab_to_int[word] for word in line.split()] for line in source_text.split('\n')] target_id_text = [[target_vocab_to_int[word] for word in line.split()] + [target_vocab_to_int['<EOS>']] for line in target_text.split('\n')] return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. 
Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.1.0 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. 
:return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function #Source: Seq2Seq Lab input_data = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') lr = tf.placeholder(tf.float32, name='learning_rate') keep_probability = tf.placeholder(tf.float32, name='keep_prob') target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length') max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len') source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length') return input_data, targets, lr, keep_probability, target_sequence_length, max_target_sequence_length, source_sequence_length """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoder InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
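The transformation described above is easy to illustrate in plain Python before looking at the TensorFlow version. The ids below are hypothetical toy values (a real notebook would take them from `target_vocab_to_int`); `<GO>` is assumed to be 1 and `<EOS>` to be 3 in this sketch only:

```python
GO_ID = 1  # assumed id for '<GO>' in this sketch; real ids come from target_vocab_to_int

def shift_for_decoder(batch, go_id=GO_ID):
    """Drop the last id of each row and prepend go_id -- the same
    transformation process_decoder_input performs with tf.strided_slice
    and tf.concat on a whole batch tensor."""
    return [[go_id] + row[:-1] for row in batch]

batch = [[4, 5, 6, 3],   # toy target rows ending in a hypothetical <EOS> id 3
         [7, 8, 9, 3]]
print(shift_for_decoder(batch))
# [[1, 4, 5, 6], [1, 7, 8, 9]]
```

The `<EOS>` id dropped from the end is exactly what the decoder should learn to emit itself, which is why it never appears in the decoder's input.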
###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function #Source: Seq2Seq Lab '''Remove the last word id from each batch and concatenate the <GO> to the beginning of each batch''' ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source
data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ # TODO: Implement Function #REF: Seq2Seq Lab # Encoder embedding enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) # RNN cell def make_cell(rnn_size): enc_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return enc_cell enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32) return enc_output, enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function #REF: Seq2Seq Lab # Helper for the training process. 
Used by BasicDecoder to read inputs. training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) # Basic decoder training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length) return training_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function #REF: Seq2Seq Lab starting
=tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens') # Helper for the inference process. inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, starting, end_of_sequence_id) # Basic decoder inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) return inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
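The contrast between the two decoders above can be sketched without TensorFlow: the training decoder reads the ground-truth (shifted) targets, while the inference decoder feeds each of its own predictions back in as the next input until it emits `<EOS>`. The `step_fn` below is a hypothetical stand-in for "embed the token, run the RNN cell, argmax over the vocabulary", and the `<EOS>` id of 3 is assumed for this sketch only:

```python
EOS_ID = 3  # assumed id for '<EOS>' in this sketch

def greedy_decode(step_fn, start_id, max_len):
    """Feed each prediction back in as the next input, as the
    GreedyEmbeddingHelper-based inference decoder does, stopping
    at <EOS> or after max_len steps."""
    outputs, token = [], start_id
    for _ in range(max_len):
        token = step_fn(token)  # stand-in for: embed, run cell, argmax over vocab
        outputs.append(token)
        if token == EOS_ID:
            break
    return outputs

# A fake step function that deterministically walks 1 -> 5 -> 6 -> 3 (<EOS>).
transitions = {1: 5, 5: 6, 6: 3}
print(greedy_decode(lambda t: transitions[t], start_id=1, max_len=10))
# [5, 6, 3]
```

This loop is also why the two decoders must share weights via `tf.variable_scope("decode", reuse=True)`: inference reuses exactly the parameters the training decoder learned.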
###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function # REF: Seq2Seq Lab # thanks for the useful discussion: # https://discussions.udacity.com/t/getting-errors-on-project-4-translation-need-help/313819/13 # 1. Decoder Embedding dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # 2. Construct the decoder cell def make_cell(rnn_size): dec_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return dec_cell dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) # 3. Dense layer to translate the decoder's output at each time # step into a choice from the target vocabulary output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] # 4.
Training Decoder with tf.variable_scope("decode"): training_decoder_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) # 5. Inference Decoder # Reuses the same parameters trained by the training process with tf.variable_scope("decode", reuse=True): inference_decoder_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. 
###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function #REF: Seq2Seq Lab # Pass the input data through the encoder. The encoder output is ignored while the state is utilized _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) # Prepare the target sequences.
The decoder is fed these during training dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) # Pass encoder state and decoder inputs to the decoders training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 3 # Batch Size batch_size = 64 # RNN Size rnn_size = 200 # Number of Layers num_layers = 3 # Embedding Size encoding_embedding_size = 64 decoding_embedding_size = 64 # Learning Rate learning_rate = 0.007 # Dropout Keep Probability keep_probability = 0.75 display_step = 10 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented.
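The graph cell that follows uses `tf.sequence_mask` to zero out the loss at `<PAD>` positions so that padding does not dominate training. Its behaviour is easy to reproduce in plain Python; the lengths below are toy values, not taken from the dataset:

```python
def sequence_mask(lengths, maxlen):
    """Row i is 1.0 in its first lengths[i] positions and 0.0 after,
    matching tf.sequence_mask(lengths, maxlen, dtype=tf.float32)."""
    return [[1.0 if col < n else 0.0 for col in range(maxlen)]
            for n in lengths]

print(sequence_mask([2, 3], 4))
# [[1.0, 1.0, 0.0, 0.0], [1.0, 1.0, 1.0, 0.0]]
```

`tf.contrib.seq2seq.sequence_loss` multiplies the per-position cross-entropy by this mask before averaging, so positions beyond each target's true length contribute nothing.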
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for 
sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
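The `get_accuracy` helper in the next cell pads the shorter of the two id matrices with zeros before comparing element-wise. The same idea on plain Python lists (toy ids; rows within each matrix are assumed rectangular, as they are after batch padding):

```python
def padded_accuracy(target, logits, pad=0):
    """Pad the shorter rows with `pad`, then average element-wise
    equality -- the idea behind get_accuracy below."""
    width = max(len(target[0]), len(logits[0]))
    t = [row + [pad] * (width - len(row)) for row in target]
    l = [row + [pad] * (width - len(row)) for row in logits]
    matches = [a == b for rt, rl in zip(t, l) for a, b in zip(rt, rl)]
    return sum(matches) / len(matches)

print(padded_accuracy([[4, 5, 6]], [[4, 5]]))  # 2 of 3 positions match
```

Note that a prediction shorter than its target is penalised at every missing position, since the pad value rarely equals the real target id there.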
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 10/2154 - Train Accuracy: 0.2844, Validation Accuracy: 0.3438, Loss: 3.4832 Epoch 0 Batch 20/2154 - Train Accuracy: 0.3320, Validation Accuracy: 0.3629, Loss: 3.0050 Epoch 0 Batch 30/2154 - Train Accuracy: 0.3617, Validation Accuracy: 0.4112, Loss: 2.8223 Epoch 0 Batch 40/2154 - Train Accuracy: 0.3627, Validation Accuracy: 0.4347, Loss: 2.6398 Epoch 0 Batch 50/2154 - Train Accuracy: 0.3914, Validation Accuracy: 0.4332, Loss: 2.5594 Epoch 0 Batch 60/2154 - Train Accuracy: 0.4164, Validation Accuracy: 0.4638, Loss: 2.3824 Epoch 0 Batch 70/2154 - Train Accuracy: 0.4507, Validation Accuracy: 0.4766, Loss: 2.2710 Epoch 0 Batch 80/2154 - Train Accuracy: 0.4445, Validation Accuracy: 0.4822, Loss: 2.1869 Epoch 0 Batch 90/2154 - Train Accuracy: 0.4186, Validation Accuracy: 0.4787, Loss: 2.1614 Epoch 0 Batch 100/2154 - Train Accuracy: 0.4457, Validation Accuracy: 0.4773, Loss: 2.1292 Epoch 0 Batch 110/2154 - Train Accuracy: 0.4344, Validation Accuracy: 0.4609, Loss: 2.1202 Epoch 0 Batch 120/2154 - Train Accuracy: 0.5280, Validation Accuracy: 0.5036, Loss: 1.8923 Epoch 0 Batch 130/2154 - Train Accuracy: 0.4508, Validation Accuracy: 0.5000, Loss: 1.7340 Epoch 0 Batch 140/2154 - Train Accuracy: 0.5008, Validation Accuracy: 0.5043, Loss: 1.6234 Epoch 0 Batch 150/2154 - Train Accuracy: 0.5320, Validation Accuracy: 0.5476, Loss: 1.5070 Epoch 0 Batch 160/2154 - Train Accuracy: 0.5227, Validation Accuracy: 0.5149, Loss: 1.5072 Epoch 0 Batch 170/2154 - Train Accuracy: 0.5086, Validation Accuracy: 0.5128, Loss: 1.2945 Epoch 0 Batch 180/2154 - Train Accuracy: 0.4906, Validation Accuracy: 0.5142, Loss: 1.2199 Epoch 0 Batch 190/2154 - Train Accuracy: 0.5180, Validation Accuracy: 0.5277, Loss: 1.0654 Epoch 0 Batch 200/2154 - Train Accuracy: 
0.5250, Validation Accuracy: 0.5618, Loss: 1.0659 Epoch 0 Batch 210/2154 - Train Accuracy: 0.5238, Validation Accuracy: 0.5604, Loss: 0.9992 Epoch 0 Batch 220/2154 - Train Accuracy: 0.5570, Validation Accuracy: 0.5526, Loss: 0.9271 Epoch 0 Batch 230/2154 - Train Accuracy: 0.5885, Validation Accuracy: 0.5732, Loss: 0.8685 Epoch 0 Batch 240/2154 - Train Accuracy: 0.5469, Validation Accuracy: 0.5447, Loss: 0.8646 Epoch 0 Batch 250/2154 - Train Accuracy: 0.5859, Validation Accuracy: 0.5866, Loss: 0.8632 Epoch 0 Batch 260/2154 - Train Accuracy: 0.6258, Validation Accuracy: 0.5327, Loss: 0.7691 Epoch 0 Batch 270/2154 - Train Accuracy: 0.6562, Validation Accuracy: 0.5902, Loss: 0.7050 Epoch 0 Batch 280/2154 - Train Accuracy: 0.5962, Validation Accuracy: 0.5788, Loss: 0.8427 Epoch 0 Batch 290/2154 - Train Accuracy: 0.5781, Validation Accuracy: 0.5398, Loss: 0.7605 Epoch 0 Batch 300/2154 - Train Accuracy: 0.5493, Validation Accuracy: 0.5774, Loss: 0.7770 Epoch 0 Batch 310/2154 - Train Accuracy: 0.6447, Validation Accuracy: 0.5909, Loss: 0.7093 Epoch 0 Batch 320/2154 - Train Accuracy: 0.6031, Validation Accuracy: 0.5668, Loss: 0.6541 Epoch 0 Batch 330/2154 - Train Accuracy: 0.6382, Validation Accuracy: 0.5447, Loss: 0.6981 Epoch 0 Batch 340/2154 - Train Accuracy: 0.6332, Validation Accuracy: 0.6030, Loss: 0.6010 Epoch 0 Batch 350/2154 - Train Accuracy: 0.6195, Validation Accuracy: 0.5852, Loss: 0.6669 Epoch 0 Batch 360/2154 - Train Accuracy: 0.6201, Validation Accuracy: 0.5980, Loss: 0.6666 Epoch 0 Batch 370/2154 - Train Accuracy: 0.6188, Validation Accuracy: 0.6477, Loss: 0.6274 Epoch 0 Batch 380/2154 - Train Accuracy: 0.6008, Validation Accuracy: 0.5518, Loss: 0.5927 Epoch 0 Batch 390/2154 - Train Accuracy: 0.6674, Validation Accuracy: 0.6151, Loss: 0.5428 Epoch 0 Batch 400/2154 - Train Accuracy: 0.6266, Validation Accuracy: 0.6207, Loss: 0.5874 Epoch 0 Batch 410/2154 - Train Accuracy: 0.6180, Validation Accuracy: 0.6314, Loss: 0.6157 Epoch 0 Batch 420/2154 - Train 
Accuracy: 0.6696, Validation Accuracy: 0.6165, Loss: 0.5033 Epoch 0 Batch 430/2154 - Train Accuracy: 0.6234, Validation Accuracy: 0.6321, Loss: 0.5931 Epoch 0 Batch 440/2154 - Train Accuracy: 0.6595, Validation Accuracy: 0.5959, Loss: 0.5752 Epoch 0 Batch 450/2154 - Train Accuracy: 0.6977, Validation Accuracy: 0.6385, Loss: 0.5176 Epoch 0 Batch 460/2154 - Train Accuracy: 0.6516, Validation Accuracy: 0.6122, Loss: 0.5222 Epoch 0 Batch 470/2154 - Train Accuracy: 0.5851, Validation Accuracy: 0.6051, Loss: 0.6033 Epoch 0 Batch 480/2154 - Train Accuracy: 0.6481, Validation Accuracy: 0.6151, Loss: 0.4742 Epoch 0 Batch 490/2154 - Train Accuracy: 0.6289, Validation Accuracy: 0.5959, Loss: 0.5237 Epoch 0 Batch 500/2154 - Train Accuracy: 0.6587, Validation Accuracy: 0.6562, Loss: 0.4881 Epoch 0 Batch 510/2154 - Train Accuracy: 0.6320, Validation Accuracy: 0.6399, Loss: 0.4956 Epoch 0 Batch 520/2154 - Train Accuracy: 0.6398, Validation Accuracy: 0.6641, Loss: 0.4681 Epoch 0 Batch 530/2154 - Train Accuracy: 0.6664, Validation Accuracy: 0.6477, Loss: 0.4783 Epoch 0 Batch 540/2154 - Train Accuracy: 0.6752, Validation Accuracy: 0.6612, Loss: 0.4922 Epoch 0 Batch 550/2154 - Train Accuracy: 0.6667, Validation Accuracy: 0.5994, Loss: 0.4803 Epoch 0 Batch 560/2154 - Train Accuracy: 0.6305, Validation Accuracy: 0.6768, Loss: 0.5137 Epoch 0 Batch 570/2154 - Train Accuracy: 0.6438, Validation Accuracy: 0.6364, Loss: 0.5091 Epoch 0 Batch 580/2154 - Train Accuracy: 0.6612, Validation Accuracy: 0.6854, Loss: 0.4745 Epoch 0 Batch 590/2154 - Train Accuracy: 0.6868, Validation Accuracy: 0.6449, Loss: 0.4362 Epoch 0 Batch 600/2154 - Train Accuracy: 0.6192, Validation Accuracy: 0.6783, Loss: 0.4329 Epoch 0 Batch 610/2154 - Train Accuracy: 0.7262, Validation Accuracy: 0.7124, Loss: 0.3819 Epoch 0 Batch 620/2154 - Train Accuracy: 0.7277, Validation Accuracy: 0.6818, Loss: 0.3985 Epoch 0 Batch 630/2154 - Train Accuracy: 0.7623, Validation Accuracy: 0.6804, Loss: 0.4052 Epoch 0 Batch 640/2154 - 
Train Accuracy: 0.7000, Validation Accuracy: 0.7131, Loss: 0.4210 Epoch 0 Batch 650/2154 - Train Accuracy: 0.6602, Validation Accuracy: 0.6868, Loss: 0.4300 Epoch 0 Batch 660/2154 - Train Accuracy: 0.6750, Validation Accuracy: 0.6456, Loss: 0.4021 Epoch 0 Batch 670/2154 - Train Accuracy: 0.7451, Validation Accuracy: 0.6520, Loss: 0.3954 Epoch 0 Batch 680/2154 - Train Accuracy: 0.6998, Validation Accuracy: 0.6811, Loss: 0.3785 Epoch 0 Batch 690/2154 - Train Accuracy: 0.7164, Validation Accuracy: 0.6939, Loss: 0.3998 Epoch 0 Batch 700/2154 - Train Accuracy: 0.6842, Validation Accuracy: 0.7280, Loss: 0.3772 Epoch 0 Batch 710/2154 - Train Accuracy: 0.7266, Validation Accuracy: 0.7372, Loss: 0.3660 Epoch 0 Batch 720/2154 - Train Accuracy: 0.7007, Validation Accuracy: 0.6712, Loss: 0.3982 Epoch 0 Batch 730/2154 - Train Accuracy: 0.6398, Validation Accuracy: 0.7372, Loss: 0.4333 Epoch 0 Batch 740/2154 - Train Accuracy: 0.7422, Validation Accuracy: 0.7195, Loss: 0.3678 Epoch 0 Batch 750/2154 - Train Accuracy: 0.7430, Validation Accuracy: 0.7024, Loss: 0.3676 Epoch 0 Batch 760/2154 - Train Accuracy: 0.6875, Validation Accuracy: 0.6896, Loss: 0.3342 Epoch 0 Batch 770/2154 - Train Accuracy: 0.7633, Validation Accuracy: 0.7223, Loss: 0.3039 Epoch 0 Batch 780/2154 - Train Accuracy: 0.7570, Validation Accuracy: 0.6868, Loss: 0.2839 Epoch 0 Batch 790/2154 - Train Accuracy: 0.7391, Validation Accuracy: 0.7131, Loss: 0.3106 Epoch 0 Batch 800/2154 - Train Accuracy: 0.7418, Validation Accuracy: 0.7060, Loss: 0.2934 Epoch 0 Batch 810/2154 - Train Accuracy: 0.8170, Validation Accuracy: 0.7266, Loss: 0.2774 Epoch 0 Batch 820/2154 - Train Accuracy: 0.6793, Validation Accuracy: 0.7798, Loss: 0.3136 Epoch 0 Batch 830/2154 - Train Accuracy: 0.7031, Validation Accuracy: 0.7436, Loss: 0.3332 Epoch 0 Batch 840/2154 - Train Accuracy: 0.7531, Validation Accuracy: 0.7685, Loss: 0.3089 Epoch 0 Batch 850/2154 - Train Accuracy: 0.6961, Validation Accuracy: 0.7514, Loss: 0.2632 Epoch 0 Batch 860/2154 
- Train Accuracy: 0.8102, Validation Accuracy: 0.7713, Loss: 0.2604 Epoch 0 Batch 870/2154 - Train Accuracy: 0.7992, Validation Accuracy: 0.7102, Loss: 0.2622 Epoch 0 Batch 880/2154 - Train Accuracy: 0.7344, Validation Accuracy: 0.7685, Loss: 0.3150 ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary, to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function word_ids = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()] return word_ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [80, 143, 8, 4, 98, 221, 45] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [30, 197, 72, 141, 313, 317, 8, 16, 1] French Words: il a vu une vieille voiture jaune . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function source_id_text=[] print(source_text.find('')) for line in source_text.split('\n'): a=[] for word in line.split(): a.append(source_vocab_to_int[word]) source_id_text.append(a) target_id_text=[] for line in target_text.split('\n'): b=[] line=line+' '+'<EOS>' for word in line.split(): b.append(target_vocab_to_int[word]) target_id_text.append(b) return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output 0 Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output 0 ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.1.0 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function input_text = tf.placeholder(dtype=tf.int32, shape=(None, None), name="input") targets = tf.placeholder(dtype=tf.int32, shape=(None, None), name="targets") lr = tf.placeholder(dtype=tf.float32, shape=(None), name="learnrate") keep_prob = tf.placeholder(dtype=tf.float32, shape=(None), name="keep_prob") target_seq_len = tf.placeholder(dtype=tf.int32, shape=(None, ), name="target_sequence_length") max_target_len = tf.reduce_max(target_seq_len, name="max_target_len") src_seq_len = tf.placeholder(dtype=tf.int32, shape=(None, ), name="source_sequence_length") return input_text, targets, lr, keep_prob, target_seq_len, max_target_len, src_seq_len """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoder InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
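Concretely, this transformation drops the final id of every target row and prepends the `<GO>` id. A minimal NumPy sketch of the same slice-and-concat (the word ids below are made up for illustration, and `1` stands in for the `<GO>` id):

```python
import numpy as np

GO = 1  # hypothetical id for the '<GO>' token

# Toy target batch of word ids: batch_size=2, sequence length 4
target_data = np.array([[5, 6, 7, 8],
                        [9, 3, 4, 2]])

# Drop the last id of each row (what tf.strided_slice does),
# then prepend a column of GO ids (what tf.fill + tf.concat do).
ending = target_data[:, :-1]
go_column = np.full((target_data.shape[0], 1), GO)
dec_input = np.concatenate([go_column, ending], axis=1)

print(dec_input.tolist())  # [[1, 5, 6, 7], [1, 9, 3, 4]]
```

The decoder therefore always starts from `<GO>` and never receives the final target id as an input.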
###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ # TODO:
Implement Function embedding = tf.contrib.layers.embed_sequence(ids=rnn_inputs, vocab_size=source_vocab_size, embed_dim=encoding_embedding_size) gen_cell = lambda: tf.contrib.rnn.LSTMCell(num_units=rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) cells = tf.contrib.rnn.MultiRNNCell(cells=[gen_cell() for _ in range(num_layers)]) drop = tf.contrib.rnn.DropoutWrapper(cell=cells, output_keep_prob=keep_prob) output, state = tf.nn.dynamic_rnn(cell=drop, inputs=embedding, dtype=tf.float32) return output, state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) decoder = tf.contrib.seq2seq.BasicDecoder(cell=dec_cell, helper=training_helper, 
initial_state=encoder_state, output_layer=output_layer) training_decoder_output,_ = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True, maximum_iterations=max_summary_length) return training_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function start_token= tf.constant([start_of_sequence_id],tf.int32,[batch_size]) embed_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(embedding=dec_embeddings,start_tokens=start_token, end_token=end_of_sequence_id) dec = tf.contrib.seq2seq.BasicDecoder(cell = dec_cell , helper=embed_helper, initial_state=encoder_state,
output_layer=output_layer ) decoder , _ = tf.contrib.seq2seq.dynamic_decode(decoder=dec,impute_finished=True, maximum_iterations=max_target_sequence_length) return decoder """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) gen_cell = lambda rnn_size: tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) dec_cell = tf.contrib.rnn.MultiRNNCell([gen_cell(rnn_size) for _ in range(num_layers)]) output_layer = Dense(units=target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) with tf.variable_scope("decode"): training = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) #(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob) with tf.variable_scope("decode", reuse=True): inference = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return 
training, inference """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Apply embedding to the target data for the decoder.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function output_enc,state_enc =
encoding_layer(input_data,rnn_size,num_layers,keep_prob, source_sequence_length,source_vocab_size,enc_embedding_size) dec_input = process_decoder_input(target_data,target_vocab_to_int,batch_size) training,inference = decoding_layer(dec_input, state_enc, target_sequence_length, max_target_sentence_length, rnn_size,num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training, inference """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 5 # Batch Size batch_size = 350 # RNN Size rnn_size = 450 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 128 decoding_embedding_size = 128 # Learning Rate learning_rate = 0.005 # Dropout Keep Probability keep_probability = 0.80 display_step = 10 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
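One detail worth noting in the graph below is gradient clipping: every gradient is clamped element-wise into [-1, 1] before being applied, which keeps a single exploding gradient from derailing training. In NumPy terms, `tf.clip_by_value(grad, -1., 1.)` behaves like this (the gradient values are made up for illustration):

```python
import numpy as np

# A made-up gradient vector with two out-of-range entries
grad = np.array([-3.2, -0.5, 0.0, 0.7, 4.1])

# Element-wise clamp into [-1, 1]
clipped = np.clip(grad, -1.0, 1.0)

print(clipped.tolist())  # [-1.0, -0.5, 0.0, 0.7, 1.0]
```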
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for 
sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
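As a quick sanity check on the batching helpers above, padding a toy batch with a hypothetical `<PAD>` id of 0 produces rows of equal length:

```python
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad every sentence up to the longest one in the batch (same logic as above)."""
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

batch = [[4, 5], [6, 7, 8, 9], [10]]
padded = pad_sentence_batch(batch, 0)

print(padded)  # [[4, 5, 0, 0], [6, 7, 8, 9], [10, 0, 0, 0]]
```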
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 10/393 - Train Accuracy: 0.2849, Validation Accuracy: 0.3543, Loss: 3.3487 Epoch 0 Batch 20/393 - Train Accuracy: 0.3869, Validation Accuracy: 0.3913, Loss: 2.7521 Epoch 0 Batch 30/393 - Train Accuracy: 0.3314, Validation Accuracy: 0.3914, Loss: 2.7333 Epoch 0 Batch 40/393 - Train Accuracy: 0.3061, Validation Accuracy: 0.3618, Loss: 2.5732 Epoch 0 Batch 50/393 - Train Accuracy: 0.3720, Validation Accuracy: 0.4309, Loss: 2.2955 Epoch 0 Batch 60/393 - Train Accuracy: 0.3960, Validation Accuracy: 0.4464, Loss: 2.0376 Epoch 0 Batch 70/393 - Train Accuracy: 0.4459, Validation Accuracy: 0.4603, Loss: 1.6798 Epoch 0 Batch 80/393 - Train Accuracy: 0.4110, Validation Accuracy: 0.4573, Loss: 1.6129 Epoch 0 Batch 90/393 - Train Accuracy: 0.4144, Validation Accuracy: 0.4570, Loss: 1.3836 Epoch 0 Batch 100/393 - Train Accuracy: 0.4259, Validation Accuracy: 0.4549, Loss: 1.1441 Epoch 0 Batch 110/393 - Train Accuracy: 0.4218, Validation Accuracy: 0.4431, Loss: 1.0062 Epoch 0 Batch 120/393 - Train Accuracy: 0.4203, Validation Accuracy: 0.4512, Loss: 0.9378 Epoch 0 Batch 130/393 - Train Accuracy: 0.4680, Validation Accuracy: 0.4845, Loss: 0.8481 Epoch 0 Batch 140/393 - Train Accuracy: 0.4944, Validation Accuracy: 0.5166, Loss: 0.7978 Epoch 0 Batch 150/393 - Train Accuracy: 0.5223, Validation Accuracy: 0.5429, Loss: 0.7879 Epoch 0 Batch 160/393 - Train Accuracy: 0.5080, Validation Accuracy: 0.5434, Loss: 0.7640 Epoch 0 Batch 170/393 - Train Accuracy: 0.5659, Validation Accuracy: 0.5756, Loss: 0.7002 Epoch 0 Batch 180/393 - Train Accuracy: 0.5686, Validation Accuracy: 0.5865, Loss: 0.6765 Epoch 0 Batch 190/393 - Train Accuracy: 0.5648, Validation Accuracy: 0.5804, Loss: 0.6603 Epoch 0 Batch 200/393 - Train Accuracy: 0.5434, Validation 
Accuracy: 0.5931, Loss: 0.6861 Epoch 0 Batch 210/393 - Train Accuracy: 0.5977, Validation Accuracy: 0.6142, Loss: 0.6355 Epoch 0 Batch 220/393 - Train Accuracy: 0.6166, Validation Accuracy: 0.6256, Loss: 0.6025 Epoch 0 Batch 230/393 - Train Accuracy: 0.6044, Validation Accuracy: 0.6319, Loss: 0.5827 Epoch 0 Batch 240/393 - Train Accuracy: 0.6210, Validation Accuracy: 0.6204, Loss: 0.5814 Epoch 0 Batch 250/393 - Train Accuracy: 0.6269, Validation Accuracy: 0.6392, Loss: 0.5502 Epoch 0 Batch 260/393 - Train Accuracy: 0.6571, Validation Accuracy: 0.6581, Loss: 0.5051 Epoch 0 Batch 270/393 - Train Accuracy: 0.6441, Validation Accuracy: 0.6578, Loss: 0.5390 Epoch 0 Batch 280/393 - Train Accuracy: 0.6529, Validation Accuracy: 0.6647, Loss: 0.5029 Epoch 0 Batch 290/393 - Train Accuracy: 0.6519, Validation Accuracy: 0.6670, Loss: 0.4876 Epoch 0 Batch 300/393 - Train Accuracy: 0.6512, Validation Accuracy: 0.6670, Loss: 0.4487 Epoch 0 Batch 310/393 - Train Accuracy: 0.6618, Validation Accuracy: 0.6782, Loss: 0.4526 Epoch 0 Batch 320/393 - Train Accuracy: 0.6727, Validation Accuracy: 0.6945, Loss: 0.4115 Epoch 0 Batch 330/393 - Train Accuracy: 0.6957, Validation Accuracy: 0.6948, Loss: 0.3879 Epoch 0 Batch 340/393 - Train Accuracy: 0.6827, Validation Accuracy: 0.6964, Loss: 0.3873 Epoch 0 Batch 350/393 - Train Accuracy: 0.7317, Validation Accuracy: 0.7143, Loss: 0.3426 Epoch 0 Batch 360/393 - Train Accuracy: 0.7161, Validation Accuracy: 0.7168, Loss: 0.3324 Epoch 0 Batch 370/393 - Train Accuracy: 0.7230, Validation Accuracy: 0.7288, Loss: 0.3457 Epoch 0 Batch 380/393 - Train Accuracy: 0.7510, Validation Accuracy: 0.7457, Loss: 0.3201 Epoch 0 Batch 390/393 - Train Accuracy: 0.7569, Validation Accuracy: 0.7403, Loss: 0.2725 Epoch 1 Batch 10/393 - Train Accuracy: 0.7423, Validation Accuracy: 0.7404, Loss: 0.2847 Epoch 1 Batch 20/393 - Train Accuracy: 0.7697, Validation Accuracy: 0.7521, Loss: 0.2333 Epoch 1 Batch 30/393 - Train Accuracy: 0.7943, Validation Accuracy: 0.7771, 
Loss: 0.2410 Epoch 1 Batch 40/393 - Train Accuracy: 0.7613, Validation Accuracy: 0.7681, Loss: 0.2343 Epoch 1 Batch 50/393 - Train Accuracy: 0.7540, Validation Accuracy: 0.7642, Loss: 0.2272 Epoch 1 Batch 60/393 - Train Accuracy: 0.7849, Validation Accuracy: 0.8018, Loss: 0.2260 Epoch 1 Batch 70/393 - Train Accuracy: 0.8214, Validation Accuracy: 0.8214, Loss: 0.1758 Epoch 1 Batch 80/393 - Train Accuracy: 0.8169, Validation Accuracy: 0.8119, Loss: 0.1793 Epoch 1 Batch 90/393 - Train Accuracy: 0.8206, Validation Accuracy: 0.8229, Loss: 0.1801 Epoch 1 Batch 100/393 - Train Accuracy: 0.8539, Validation Accuracy: 0.8334, Loss: 0.1665 Epoch 1 Batch 110/393 - Train Accuracy: 0.8595, Validation Accuracy: 0.8483, Loss: 0.1546 Epoch 1 Batch 120/393 - Train Accuracy: 0.8510, Validation Accuracy: 0.8481, Loss: 0.1480 Epoch 1 Batch 130/393 - Train Accuracy: 0.8109, Validation Accuracy: 0.8288, Loss: 0.1434 Epoch 1 Batch 140/393 - Train Accuracy: 0.8657, Validation Accuracy: 0.8410, Loss: 0.1362 Epoch 1 Batch 150/393 - Train Accuracy: 0.8609, Validation Accuracy: 0.8617, Loss: 0.1236 Epoch 1 Batch 160/393 - Train Accuracy: 0.8467, Validation Accuracy: 0.8614, Loss: 0.1318 Epoch 1 Batch 170/393 - Train Accuracy: 0.8899, Validation Accuracy: 0.8753, Loss: 0.1143 Epoch 1 Batch 180/393 - Train Accuracy: 0.8840, Validation Accuracy: 0.8708, Loss: 0.0959 Epoch 1 Batch 190/393 - Train Accuracy: 0.8716, Validation Accuracy: 0.8888, Loss: 0.1090 Epoch 1 Batch 200/393 - Train Accuracy: 0.8476, Validation Accuracy: 0.8777, Loss: 0.0983 Epoch 1 Batch 210/393 - Train Accuracy: 0.9253, Validation Accuracy: 0.8957, Loss: 0.0791 Epoch 1 Batch 220/393 - Train Accuracy: 0.8976, Validation Accuracy: 0.9184, Loss: 0.0893 Epoch 1 Batch 230/393 - Train Accuracy: 0.9195, Validation Accuracy: 0.9152, Loss: 0.0740 Epoch 1 Batch 240/393 - Train Accuracy: 0.9305, Validation Accuracy: 0.9126, Loss: 0.0686 Epoch 1 Batch 250/393 - Train Accuracy: 0.9224, Validation Accuracy: 0.9095, Loss: 0.0715 Epoch 1 
Batch 260/393 - Train Accuracy: 0.9345, Validation Accuracy: 0.9204, Loss: 0.0700 Epoch 1 Batch 270/393 - Train Accuracy: 0.9321, Validation Accuracy: 0.9292, Loss: 0.0635 Epoch 1 Batch 280/393 - Train Accuracy: 0.9200, Validation Accuracy: 0.9352, Loss: 0.0659 Epoch 1 Batch 290/393 - Train Accuracy: 0.9273, Validation Accuracy: 0.9327, Loss: 0.0593 Epoch 1 Batch 300/393 - Train Accuracy: 0.9502, Validation Accuracy: 0.9438, Loss: 0.0538 Epoch 1 Batch 310/393 - Train Accuracy: 0.9331, Validation Accuracy: 0.9514, Loss: 0.0632 Epoch 1 Batch 320/393 - Train Accuracy: 0.9399, Validation Accuracy: 0.9425, Loss: 0.0473 Epoch 1 Batch 330/393 - Train Accuracy: 0.9406, Validation Accuracy: 0.9471, Loss: 0.0459 Epoch 1 Batch 340/393 - Train Accuracy: 0.9301, Validation Accuracy: 0.9301, Loss: 0.0476 Epoch 1 Batch 350/393 - Train Accuracy: 0.9442, Validation Accuracy: 0.9369, Loss: 0.0494 Epoch 1 Batch 360/393 - Train Accuracy: 0.9295, Validation Accuracy: 0.9373, Loss: 0.0532 Epoch 1 Batch 370/393 - Train Accuracy: 0.9188, Validation Accuracy: 0.9461, Loss: 0.0491 Epoch 1 Batch 380/393 - Train Accuracy: 0.9366, Validation Accuracy: 0.9429, Loss: 0.0506 Epoch 1 Batch 390/393 - Train Accuracy: 0.9518, Validation Accuracy: 0.9457, Loss: 0.0371 Epoch 2 Batch 10/393 - Train Accuracy: 0.9460, Validation Accuracy: 0.9392, Loss: 0.0424 Epoch 2 Batch 20/393 - Train Accuracy: 0.9479, Validation Accuracy: 0.9423, Loss: 0.0380 Epoch 2 Batch 30/393 - Train Accuracy: 0.9439, Validation Accuracy: 0.9491, Loss: 0.0432 Epoch 2 Batch 40/393 - Train Accuracy: 0.9329, Validation Accuracy: 0.9496, Loss: 0.0421 Epoch 2 Batch 50/393 - Train Accuracy: 0.9421, Validation Accuracy: 0.9478, Loss: 0.0379 Epoch 2 Batch 60/393 - Train Accuracy: 0.9354, Validation Accuracy: 0.9490, Loss: 0.0457 Epoch 2 Batch 70/393 - Train Accuracy: 0.9608, Validation Accuracy: 0.9400, Loss: 0.0306 Epoch 2 Batch 80/393 - Train Accuracy: 0.9557, Validation Accuracy: 0.9544, Loss: 0.0387 Epoch 2 Batch 90/393 - Train 
Accuracy: 0.9506, Validation Accuracy: 0.9534, Loss: 0.0349 Epoch 2 Batch 100/393 - Train Accuracy: 0.9414, Validation Accuracy: 0.9270, Loss: 0.0396 Epoch 2 Batch 110/393 - Train Accuracy: 0.9524, Validation Accuracy: 0.9423, Loss: 0.0364 ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function s = sentence.lower() default = vocab_to_int["<UNK>"] word_ids = [vocab_to_int.get(word, default) for word in s.split()] return word_ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw an old yellow truck.'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [102, 8, 2, 95, 137, 2] English Words: ['he', 'saw', '<UNK>', 'old', 'yellow', '<UNK>'] Prediction Word Ids: [205, 276, 321, 248, 172, 235, 289, 1] French Words: il a vieux le nouveau camion . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function # split the text into lines, then each line into whitespace-separated words, mapping every word to its id source_id_text = [[source_vocab_to_int[i] for i in frase.split()] for frase in source_text.split('\n')] target_id_text = [[target_vocab_to_int[i] for i in frase.split()] for frase in target_text.split('\n')] for i in target_id_text: i.append(target_vocab_to_int['<EOS>']) return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
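As a quick recap of what `text_to_ids` produced, the same id-mapping plus `<EOS>` appending can be checked on a toy two-line corpus. This is a standalone sketch — the vocabularies here are made up, not the real ones built by `helper`:

```python
def text_to_ids_demo(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    # Split each corpus into lines, each line into words, and map words to ids.
    source_id_text = [[source_vocab_to_int[w] for w in line.split()]
                      for line in source_text.split('\n')]
    # Target sentences additionally get <EOS> appended.
    target_id_text = [[target_vocab_to_int[w] for w in line.split()] + [target_vocab_to_int['<EOS>']]
                      for line in target_text.split('\n')]
    return source_id_text, target_id_text

src_vocab = {'hi': 4, 'there': 5}        # hypothetical ids
tgt_vocab = {'<EOS>': 1, 'salut': 6}     # hypothetical ids
src_ids, tgt_ids = text_to_ids_demo('hi there\nhi', 'salut\nsalut', src_vocab, tgt_vocab)
print(src_ids)  # [[4, 5], [4]]
print(tgt_ids)  # [[6, 1], [6, 1]]
```

Note that only the target side carries the `<EOS>` marker; the encoder never needs one.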
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.5.0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function # rank = the number of dimensions of each placeholder inputs = tf.placeholder(tf.int32, name='input', shape=[None, None]) targets = tf.placeholder(tf.int32, shape=[None, None]) learning_rate = tf.placeholder(tf.float32, shape=[]) keep_probability = tf.placeholder(tf.float32, name='keep_prob', shape=[]) target_sequence_length = tf.placeholder(tf.int32, name='target_sequence_length', shape=[None]) max_target_sequence_length = tf.reduce_max(target_sequence_length) source_sequence_length = tf.placeholder(tf.int32, name='source_sequence_length', shape=[None]) return inputs, targets, learning_rate, keep_probability, target_sequence_length, max_target_sequence_length, source_sequence_length """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoder InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
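A framework-free sketch of that transformation on a tiny toy batch (the ids used here are hypothetical) shows what the TensorFlow slice-and-concat should compute:

```python
GO = 3  # hypothetical <GO> id

def process_decoder_input_demo(target_batch, go_id):
    # Drop the final id of every row and prepend the <GO> id, mirroring
    # what tf.strided_slice + tf.concat do on a [batch, time] tensor.
    return [[go_id] + seq[:-1] for seq in target_batch]

batch = [[10, 11, 1], [12, 13, 1]]  # each row ends with <EOS> = 1
print(process_decoder_input_demo(batch, GO))  # [[3, 10, 11], [3, 12, 13]]
```

The shift matters because, during training, the decoder sees the previous *correct* token at each step, so the targets must be offset by one position.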
###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function # remove the last word id from each batch, then prepend the <GO> id fim = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) frase = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), fim], 1) return frase """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output,
RNN state) """ # TODO: Implement Function enconder_entrada = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) def make_cell(rnn_size): enc_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return enc_cell enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enconder_entrada, sequence_length=source_sequence_length, dtype=tf.float32) return enc_output, enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function treinamento = tf.contrib.seq2seq.TrainingHelper( inputs=dec_embed_input, sequence_length=target_sequence_length) decodificador = tf.contrib.seq2seq.BasicDecoder( dec_cell, treinamento, encoder_state, 
output_layer) output = tf.contrib.seq2seq.dynamic_decode( decodificador, impute_finished=True, maximum_iterations=max_summary_length)[0] # output = tf.nn.dropout(output, keep_prob) -- why did this raise an error? (dynamic_decode returns a BasicDecoderOutput namedtuple, not a tensor, so dropout cannot be applied to it directly) return output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function start_tokens = tf.constant([start_of_sequence_id]*batch_size) inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) # Build the inference decoder inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) # Run the decoder
inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)[0] return inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
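Conceptually, the `GreedyEmbeddingHelper` used in the inference decoder above feeds each argmax prediction back in as the next input until `<EOS>` is produced or the iteration cap is reached. A framework-free sketch of that loop — `step_fn` here is a hypothetical stand-in for one decoder step (embedding lookup, RNN cell, output layer, argmax):

```python
def greedy_decode(step_fn, start_id, eos_id, max_len):
    # step_fn(prev_id) -> next_id plays the role of a single decoder step.
    output, token = [], start_id
    for _ in range(max_len):
        token = step_fn(token)
        output.append(token)
        if token == eos_id:
            break  # stop as soon as <EOS> is emitted
    return output

# Toy "model": a fixed transition table ending in <EOS> = 1.
table = {3: 7, 7: 8, 8: 1}
print(greedy_decode(lambda t: table[t], start_id=3, eos_id=1, max_len=10))  # [7, 8, 1]
```

The `maximum_iterations` argument to `dynamic_decode` plays the role of `max_len`, guaranteeing termination even if the model never emits `<EOS>`.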
###Code # seq2seq exercise def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) def make_cell(rnn_size): dec_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return dec_cell dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) with tf.variable_scope("decode"): treinamento_decodificador = decoding_layer_train( encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) print(target_vocab_to_int)  # leftover debug print; its dictionary dump appears in the cell outputs with tf.variable_scope("decode", reuse=True): decodificador_inferencia = decoding_layer_infer( encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return treinamento_decodificador,
decodificador_inferencia """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output {'<GO>': 3, '<EOS>': 1} Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function enc_output, enc_state = encoding_layer( input_data, rnn_size, num_layers, keep_prob, source_sequence_length,
source_vocab_size, enc_embedding_size) decodificar_ent = process_decoder_input(target_data, target_vocab_to_int, batch_size) training_decoder_output, inference_decoder_output = decoding_layer( decodificar_ent, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output {'<GO>': 3, '<EOS>': 1} Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 5 # Batch Size batch_size = 256 # RNN Size rnn_size = 256 # Number of Layers num_layers = 3 # Embedding Size encoding_embedding_size = 256 decoding_embedding_size = 256 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.5 display_step = 5 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
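The graph-building cell below masks the loss with `tf.sequence_mask` so that `<PAD>` positions contribute nothing to the cost. The mask itself is simple to reproduce by hand; a standalone pure-Python sketch of the same shape logic:

```python
def sequence_mask(lengths, maxlen):
    # Row i has lengths[i] ones followed by zeros, mirroring
    # tf.sequence_mask(lengths, maxlen, dtype=tf.float32) on a [batch, time] grid.
    return [[1.0 if t < n else 0.0 for t in range(maxlen)] for n in lengths]

print(sequence_mask([2, 3], maxlen=4))
# [[1.0, 1.0, 0.0, 0.0], [1.0, 1.0, 1.0, 0.0]]
```

`tf.contrib.seq2seq.sequence_loss` multiplies the per-timestep cross-entropy by this mask, so sentences of different lengths in one padded batch are weighted fairly.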
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output {'dernière': 4, 'une': 173, 'bananes': 174, 'aiment': 5, 'amusant': 6, 'préféré.': 7, 'grand': 222, 'plus': 175, 'oranges': 176, 'lapins': 177, 'manguiers': 179, 'aiment-ils': 180, "l'oiseau": 8, 'fait': 9, 'ils': 10, 'vu': 11, 'pleut': 181, 'camion': 182, 'jersey': 12, 'maillot': 13, "n'aime": 88, 'favoris': 184, 'veut': 185, 'aimez': 186, 'fraises': 187, 'new': 14, 'en': 15, 'pousse': 190, 
'chaux': 16, 'préféré': 191, 'lui': 192, 'pommes': 195, 'porcelaine': 17, 'entre': 18, 'la': 196, 'au': 19, 'trop': 20, 'cours': 71, 'volant': 197, "n'êtes": 91, "j'aime": 198, 'congélation': 21, '?': 22, 'allions': 24, 'voulaient': 25, 'magnifique': 33, 'grande': 145, 'pluie': 26, 'chien': 27, "l'animal": 28, 'tout': 199, 'aux': 178, 'vous': 206, 'vit': 337, 'mouillé': 29, 'lion': 201, 'gelé': 30, 'bleue': 31, 'le': 32, "n'aimons": 202, 'souvent': 204, 'grosse': 205, 'visiter': 34, 'qui': 35, 'verte': 36, 't': 37, 'brillante': 207, "n'aimez": 38, 'occupée': 209, 'tour': 210, 'nouveau': 211, 'poires': 212, 'comme': 40, 'grosses': 41, 'chat': 213, 'pourquoi': 42, 'petite': 43, 'nouvelle': 214, 'blanc': 44, 'ont': 45, "n'aiment": 46, 'elle': 183, '<EOS>': 1, 'pensez': 268, 'gelés': 47, 'voulait': 215, 'rouge': 48, 'allez': 49, 'pourraient': 50, '.': 217, 'humide': 219, 'aime': 220, 'raisins': 221, 'voudrait': 51, '<PAD>': 0, 'à': 223, 'aimeraient': 52, 'facile': 224, '-ce': 53, 'chinois': 54, 'rouillé': 225, 'neige': 226, 'grands': 270, 'visite': 227, 'envisagent': 228, 'es-tu': 229, 'cépage': 349, 'faire': 55, 'juin': 230, 'mangue': 56, 'préférée': 231, 'petit': 39, 'froid': 57, 'gel': 58, 'singes': 233, 'jaune': 68, 'oiseaux': 59, 'inde': 60, 'ressort': 61, 'i': 62, 'chiens': 234, '<UNK>': 2, 'avez': 63, 'relaxant': 236, 'pendant': 237, 'aller': 64, 'glaciales': 239, 'cheval': 66, 'petits': 123, 'fruits': 67, 'son': 100, 'préférés': 69, 'paris': 70, 'ne': 72, 'chine': 188, 'espagnol': 241, 'mangues': 242, 'bénigne': 73, 'détendre': 240, 'pêche': 74, 'gèle': 75, 'beau': 76, 'aimons': 77, 'veulent': 243, 'vieille': 244, ',': 245, "qu'il": 246, 'et': 247, 'généralement': 78, 'sont': 79, 'conduisait': 80, 'pluvieux': 249, 'frais': 81, 'grandes': 251, 'petites': 252, 'il': 189, 'portugais': 82, 'at': 255, 'pomme': 256, 'vos': 343, 'limes': 257, 'est-ce': 156, 'mais': 84, 'bien': 85, '<GO>': 3, 'préférées': 86, 'moindres': 258, 'éléphants': 87, 'pense': 260, 'fraise': 
261, 'conduite': 152, 'verts': 218, 'aimait': 262, 'un': 89, 'mai': 90, 'douce': 263, 'chats': 264, 'souris': 265, 'pamplemousses': 92, 'traduction': 266, 'décembre': 93, 'jamais': 105, 'traduire': 94, 'habituellement': 95, 'je': 267, 'vers': 96, 'agréable': 126, 'états-unis': 269, 'blanche': 271, 'octobre': 97, '-elle': 216, 'conduit': 98, 'nos': 272, "l'épicerie": 193, 'sec': 273, 'ce': 101, 'noir': 274, 'pourrait': 102, 'éléphant': 103, 'du': 275, 'leurs': 104, 'a': 276, "l'éléphant": 107, 'juillet': 277, 'pas': 278, 'ours': 106, 'août': 279, 'dans': 108, 'animal': 109, 'prévoit': 110, "n'a": 194, 'de': 281, 'était': 282, 'parfois': 283, 'chaude': 284, 'anglais': 285, 'novembre': 286, "n'est": 287, "l'automobile": 288, 'sèche': 111, 'tranquille': 347, 'français': 112, 'temps': 290, 'frisquet': 291, 'mes': 248, 'traduis': 292, 'peu': 113, 'citrons': 114, 'chevaux': 115, 'êtes-vous': 294, "l'": 116, 'envisage': 331, 'moins': 117, 'as-tu': 295, 'mouillée': 296, 'notre': 118, 'aimés': 297, 'france': 350, 'redoutés': 119, 'requins': 298, 'calme': 120, 'été': 121, 'pêches': 300, 'voiture': 250, 'rouille': 301, 'belle': 122, "c'est": 302, 'cet': 303, 'avons': 238, 'favori': 304, 'détend': 142, 'lac': 305, 'printemps': 164, 'automobile': 306, 'allons': 124, 'monde': 125, 'intention': 132, 'détestez': 203, 'nous': 165, 'vont': 127, 'déteste': 128, 'occupé': 307, 'des': 308, 'pluies': 309, 'terrain': 129, 'prévoient': 130, 'leur': 310, 'lapin': 208, 'moteur': 311, 'mon': 312, 'prévois': 131, 'football': 313, 'votre': 133, 'eiffel': 253, 'chaud': 134, 'cher': 135, 'comment': 314, 'requin': 299, "l'automne": 65, 'février': 254, 'les': 316, 'plaît': 137, 'janvier': 138, 'fruit': 318, 'vais': 319, 'mars': 139, 'aimée': 320, 'avril': 315, 'prochain': 321, 'oiseau': 322, 'légère': 323, 'aimé': 141, 'vert': 324, "l'orange": 143, 'détestons': 325, 'apprécié': 293, 'que': 326, 'mois': 327, 'serpent': 328, 'automne': 259, 'citron': 144, '-ils': 329, 'durant': 136, 'va': 330, 
'allée': 289, 'raisin': 146, 'proches': 332, 'gros': 333, 'brillant': 334, 'neigeux': 83, 'voulez': 147, 'étaient': 335, 'cette': 148, 'etats-unis': 336, 'aimé.': 149, 'allés': 338, 'trouvé': 150, 'california': 151, 'singe': 23, 'pamplemousse': 339, 'bleu': 153, 'animaux': 154, 'septembre': 155, 'quand': 340, 'rouillée': 341, 'sur': 342, 'où': 344, '-': 157, 'serpents': 232, 'banane': 158, 'enneigée': 280, "l'école": 159, 'californie': 345, 'est': 160, 'poire': 346, 'redouté': 140, "l'ours": 348, 'clémentes': 170, 'ses': 161, 'rendre': 162, 'allé': 351, 'enneigé': 163, 'prévoyons': 352, 'merveilleux': 353, 'détestait': 99, "d'": 166, 'vieux': 317, 'dernier': 167, 'noire': 354, 'hiver': 200, "n'aimait": 168, '-il': 355, "qu'elle": 356, 'doux': 169, 'se': 357, 'difficile': 171, 'lions': 172, 'redoutée': 235}

###Markdown
Batch and pad the source and target sequences

###Code
""" DON'T MODIFY ANYTHING IN THIS CELL """
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad sentences with <PAD> so that each sentence of a batch has the same length"""
    max_sentence = max([len(sentence) for sentence in sentence_batch])
    return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]


def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
    """Batch targets, sources, and the lengths of their sentences together"""
    for batch_i in range(0, len(sources)//batch_size):
        start_i = batch_i * batch_size

        # Slice the right amount for the batch
        sources_batch = sources[start_i:start_i + batch_size]
        targets_batch = targets[start_i:start_i + batch_size]

        # Pad
        pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
        pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))

        # Need the lengths for the _lengths parameters
        pad_targets_lengths = []
        for target in pad_targets_batch:
            pad_targets_lengths.append(len(target))

        pad_source_lengths = []
        for source in pad_sources_batch:
            pad_source_lengths.append(len(source))

        yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths

###Output
_____no_output_____

###Markdown
Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.

###Code
""" DON'T MODIFY ANYTHING IN THIS CELL """
def get_accuracy(target, logits):
    """
    Calculate accuracy
    """
    max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(
            target,
            [(0,0),(0,max_seq - target.shape[1])],
            'constant')
    if max_seq - logits.shape[1]:
        logits = np.pad(
            logits,
            [(0,0),(0,max_seq - logits.shape[1])],
            'constant')

    return np.mean(np.equal(target, logits))

# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch,
 valid_sources_lengths, valid_targets_lengths) = next(get_batches(valid_source, valid_target,
                                                                  batch_size,
                                                                  source_vocab_to_int['<PAD>'],
                                                                  target_vocab_to_int['<PAD>']))

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(epochs):
        for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
                get_batches(train_source, train_target, batch_size,
                            source_vocab_to_int['<PAD>'],
                            target_vocab_to_int['<PAD>'])):
            _, loss = sess.run(
                [train_op, cost],
                {input_data: source_batch,
                 targets: target_batch,
                 lr: learning_rate,
                 target_sequence_length: targets_lengths,
                 source_sequence_length: sources_lengths,
                 keep_prob: keep_probability})

            if batch_i % display_step == 0 and batch_i > 0:
                batch_train_logits = sess.run(
                    inference_logits,
                    {input_data: source_batch,
                     source_sequence_length: sources_lengths,
                     target_sequence_length: targets_lengths,
                     keep_prob: 1.0})

                batch_valid_logits = sess.run(
                    inference_logits,
                    {input_data:
                     valid_sources_batch,
                     source_sequence_length: valid_sources_lengths,
                     target_sequence_length: valid_targets_lengths,
                     keep_prob: 1.0})

                train_acc = get_accuracy(target_batch, batch_train_logits)
                valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)

                print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
                      .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_path)
    print('Model Trained and Saved')

###Output
Epoch 0 Batch 5/538 - Train Accuracy: 0.2989, Validation Accuracy: 0.3471, Loss: 3.9274 Epoch 0 Batch 10/538 - Train Accuracy: 0.3045, Validation Accuracy: 0.3860, Loss: 3.4965 Epoch 0 Batch 15/538 - Train Accuracy: 0.3891, Validation Accuracy: 0.4183, Loss: 2.9372 Epoch 0 Batch 20/538 - Train Accuracy: 0.3767, Validation Accuracy: 0.4190, Loss: 2.8031 Epoch 0 Batch 25/538 - Train Accuracy: 0.3891, Validation Accuracy: 0.4432, Loss: 2.7616 Epoch 0 Batch 30/538 - Train Accuracy: 0.4047, Validation Accuracy: 0.4661, Loss: 2.6924 Epoch 0 Batch 35/538 - Train Accuracy: 0.3994, Validation Accuracy: 0.4599, Loss: 2.5811 Epoch 0 Batch 40/538 - Train Accuracy: 0.4757, Validation Accuracy: 0.4821, Loss: 2.2991 Epoch 0 Batch 45/538 - Train Accuracy: 0.4617, Validation Accuracy: 0.4945, Loss: 2.3449 Epoch 0 Batch 50/538 - Train Accuracy: 0.4146, Validation Accuracy: 0.4728, Loss: 2.3298 Epoch 0 Batch 55/538 - Train Accuracy: 0.4461, Validation Accuracy: 0.5051, Loss: 2.2818 Epoch 0 Batch 60/538 - Train Accuracy: 0.4584, Validation Accuracy: 0.5124, Loss: 2.1654 Epoch 0 Batch 65/538 - Train Accuracy: 0.4463, Validation Accuracy: 0.5014, Loss: 2.2184 Epoch 0 Batch 70/538 - Train Accuracy: 0.4717, Validation Accuracy: 0.5011, Loss: 1.9505 Epoch 0 Batch 75/538 - Train Accuracy: 0.4991, Validation Accuracy: 0.5250, Loss: 1.8832 Epoch 0 Batch 80/538 - Train Accuracy: 0.4752, Validation Accuracy: 0.5298, Loss: 1.9449 Epoch 0
Batch 85/538 - Train Accuracy: 0.5099, Validation Accuracy: 0.5298, Loss: 1.7270 Epoch 0 Batch 90/538 - Train Accuracy: 0.4913, Validation Accuracy: 0.5197, Loss: 1.8011 Epoch 0 Batch 95/538 - Train Accuracy: 0.5218, Validation Accuracy: 0.5224, Loss: 1.6130 Epoch 0 Batch 100/538 - Train Accuracy: 0.4951, Validation Accuracy: 0.5385, Loss: 1.6960 Epoch 0 Batch 105/538 - Train Accuracy: 0.5078, Validation Accuracy: 0.5392, Loss: 1.6073 Epoch 0 Batch 110/538 - Train Accuracy: 0.4814, Validation Accuracy: 0.5328, Loss: 1.6740 Epoch 0 Batch 115/538 - Train Accuracy: 0.5006, Validation Accuracy: 0.5430, Loss: 1.5904 Epoch 0 Batch 120/538 - Train Accuracy: 0.5125, Validation Accuracy: 0.5492, Loss: 1.5689 Epoch 0 Batch 125/538 - Train Accuracy: 0.4730, Validation Accuracy: 0.4961, Loss: 1.5524 Epoch 0 Batch 130/538 - Train Accuracy: 0.5060, Validation Accuracy: 0.5362, Loss: 1.4987 Epoch 0 Batch 135/538 - Train Accuracy: 0.5299, Validation Accuracy: 0.5540, Loss: 1.4612 Epoch 0 Batch 140/538 - Train Accuracy: 0.4633, Validation Accuracy: 0.5362, Loss: 1.5494 Epoch 0 Batch 145/538 - Train Accuracy: 0.5054, Validation Accuracy: 0.5352, Loss: 1.4313 Epoch 0 Batch 150/538 - Train Accuracy: 0.5387, Validation Accuracy: 0.5568, Loss: 1.4120 Epoch 0 Batch 155/538 - Train Accuracy: 0.5391, Validation Accuracy: 0.5320, Loss: 1.3451 Epoch 0 Batch 160/538 - Train Accuracy: 0.5402, Validation Accuracy: 0.5641, Loss: 1.2995 Epoch 0 Batch 165/538 - Train Accuracy: 0.5530, Validation Accuracy: 0.5703, Loss: 1.2307 Epoch 0 Batch 170/538 - Train Accuracy: 0.5229, Validation Accuracy: 0.5410, Loss: 1.2957 Epoch 0 Batch 175/538 - Train Accuracy: 0.5166, Validation Accuracy: 0.5602, Loss: 1.6221 Epoch 0 Batch 180/538 - Train Accuracy: 0.5456, Validation Accuracy: 0.5439, Loss: 1.2883 Epoch 0 Batch 185/538 - Train Accuracy: 0.5268, Validation Accuracy: 0.5559, Loss: 1.2862 Epoch 0 Batch 190/538 - Train Accuracy: 0.5229, Validation Accuracy: 0.5540, Loss: 1.3002 Epoch 0 Batch 195/538 - Train 
Accuracy: 0.5188, Validation Accuracy: 0.5336, Loss: 1.2044 Epoch 0 Batch 200/538 - Train Accuracy: 0.5363, Validation Accuracy: 0.5614, Loss: 1.2271 Epoch 0 Batch 205/538 - Train Accuracy: 0.5625, Validation Accuracy: 0.5680, Loss: 1.1354 Epoch 0 Batch 210/538 - Train Accuracy: 0.5484, Validation Accuracy: 0.5753, Loss: 1.1448 Epoch 0 Batch 215/538 - Train Accuracy: 0.5523, Validation Accuracy: 0.5803, Loss: 1.1732 Epoch 0 Batch 220/538 - Train Accuracy: 0.5406, Validation Accuracy: 0.5813, Loss: 1.1179 Epoch 0 Batch 225/538 - Train Accuracy: 0.5519, Validation Accuracy: 0.5630, Loss: 1.0683 Epoch 0 Batch 230/538 - Train Accuracy: 0.5271, Validation Accuracy: 0.5632, Loss: 1.0959 Epoch 0 Batch 235/538 - Train Accuracy: 0.5688, Validation Accuracy: 0.5838, Loss: 1.0432 Epoch 0 Batch 240/538 - Train Accuracy: 0.5270, Validation Accuracy: 0.5549, Loss: 1.0858 Epoch 0 Batch 245/538 - Train Accuracy: 0.5377, Validation Accuracy: 0.5678, Loss: 1.0750 Epoch 0 Batch 250/538 - Train Accuracy: 0.5561, Validation Accuracy: 0.5804, Loss: 1.0149 Epoch 0 Batch 255/538 - Train Accuracy: 0.5471, Validation Accuracy: 0.5774, Loss: 1.0196 Epoch 0 Batch 260/538 - Train Accuracy: 0.5374, Validation Accuracy: 0.5678, Loss: 0.9923 Epoch 0 Batch 265/538 - Train Accuracy: 0.5299, Validation Accuracy: 0.5787, Loss: 1.0266 Epoch 0 Batch 270/538 - Train Accuracy: 0.5553, Validation Accuracy: 0.5843, Loss: 0.9731 Epoch 0 Batch 275/538 - Train Accuracy: 0.5627, Validation Accuracy: 0.5817, Loss: 0.9945 Epoch 0 Batch 280/538 - Train Accuracy: 0.6137, Validation Accuracy: 0.5847, Loss: 0.9085 Epoch 0 Batch 285/538 - Train Accuracy: 0.5606, Validation Accuracy: 0.5803, Loss: 0.8851 Epoch 0 Batch 290/538 - Train Accuracy: 0.5598, Validation Accuracy: 0.5849, Loss: 0.9272 Epoch 0 Batch 295/538 - Train Accuracy: 0.6136, Validation Accuracy: 0.6032, Loss: 0.8497 Epoch 0 Batch 300/538 - Train Accuracy: 0.5805, Validation Accuracy: 0.5803, Loss: 0.8683 Epoch 0 Batch 305/538 - Train Accuracy: 0.5724, 
Validation Accuracy: 0.5888, Loss: 0.8665 Epoch 0 Batch 310/538 - Train Accuracy: 0.5875, Validation Accuracy: 0.5962, Loss: 0.8572 Epoch 0 Batch 315/538 - Train Accuracy: 0.5934, Validation Accuracy: 0.6010, Loss: 0.8628 Epoch 0 Batch 320/538 - Train Accuracy: 0.5926, Validation Accuracy: 0.5843, Loss: 0.8462 Epoch 0 Batch 325/538 - Train Accuracy: 0.5844, Validation Accuracy: 0.6007, Loss: 0.8359 Epoch 0 Batch 330/538 - Train Accuracy: 0.5947, Validation Accuracy: 0.5950, Loss: 0.8157 Epoch 0 Batch 335/538 - Train Accuracy: 0.5962, Validation Accuracy: 0.5993, Loss: 0.8130 Epoch 0 Batch 340/538 - Train Accuracy: 0.5848, Validation Accuracy: 0.6113, Loss: 0.8482 Epoch 0 Batch 345/538 - Train Accuracy: 0.6263, Validation Accuracy: 0.6214, Loss: 0.7908 Epoch 0 Batch 350/538 - Train Accuracy: 0.6092, Validation Accuracy: 0.6065, Loss: 0.8163 Epoch 0 Batch 355/538 - Train Accuracy: 0.5902, Validation Accuracy: 0.6131, Loss: 0.8245 Epoch 0 Batch 360/538 - Train Accuracy: 0.5965, Validation Accuracy: 0.6122, Loss: 0.8077 Epoch 0 Batch 365/538 - Train Accuracy: 0.5991, Validation Accuracy: 0.6156, Loss: 0.7937 Epoch 0 Batch 370/538 - Train Accuracy: 0.5787, Validation Accuracy: 0.6300, Loss: 0.7936 Epoch 0 Batch 375/538 - Train Accuracy: 0.6036, Validation Accuracy: 0.6147, Loss: 0.7396 Epoch 0 Batch 380/538 - Train Accuracy: 0.5711, Validation Accuracy: 0.6117, Loss: 0.7590 Epoch 0 Batch 385/538 - Train Accuracy: 0.6062, Validation Accuracy: 0.6028, Loss: 0.7422 Epoch 0 Batch 390/538 - Train Accuracy: 0.6451, Validation Accuracy: 0.6332, Loss: 0.7213 Epoch 0 Batch 395/538 - Train Accuracy: 0.5826, Validation Accuracy: 0.6135, Loss: 0.7727 Epoch 0 Batch 400/538 - Train Accuracy: 0.5884, Validation Accuracy: 0.6200, Loss: 0.7208 Epoch 0 Batch 405/538 - Train Accuracy: 0.6127, Validation Accuracy: 0.6254, Loss: 0.6957 Epoch 0 Batch 410/538 - Train Accuracy: 0.5928, Validation Accuracy: 0.6310, Loss: 0.7201 Epoch 0 Batch 415/538 - Train Accuracy: 0.5848, Validation 
Accuracy: 0.6158, Loss: 0.7347 Epoch 0 Batch 420/538 - Train Accuracy: 0.6098, Validation Accuracy: 0.6230, Loss: 0.6941 Epoch 0 Batch 425/538 - Train Accuracy: 0.6101, Validation Accuracy: 0.6278, Loss: 0.6961 Epoch 0 Batch 430/538 - Train Accuracy: 0.5980, Validation Accuracy: 0.6188, Loss: 0.6917 Epoch 0 Batch 435/538 - Train Accuracy: 0.5596, Validation Accuracy: 0.5884, Loss: 0.7335 Epoch 0 Batch 440/538 - Train Accuracy: 0.5834, Validation Accuracy: 0.6042, Loss: 0.8037 Epoch 0 Batch 445/538 - Train Accuracy: 0.5990, Validation Accuracy: 0.5922, Loss: 0.7168 Epoch 0 Batch 450/538 - Train Accuracy: 0.6256, Validation Accuracy: 0.6110, Loss: 0.7119 Epoch 0 Batch 455/538 - Train Accuracy: 0.6355, Validation Accuracy: 0.6140, Loss: 0.6290 Epoch 0 Batch 460/538 - Train Accuracy: 0.5910, Validation Accuracy: 0.6255, Loss: 0.6630 Epoch 0 Batch 465/538 - Train Accuracy: 0.5914, Validation Accuracy: 0.6225, Loss: 0.6804 Epoch 0 Batch 470/538 - Train Accuracy: 0.6060, Validation Accuracy: 0.6246, Loss: 0.6375 Epoch 0 Batch 475/538 - Train Accuracy: 0.6153, Validation Accuracy: 0.6245, Loss: 0.6336 Epoch 0 Batch 480/538 - Train Accuracy: 0.6269, Validation Accuracy: 0.6143, Loss: 0.6235 Epoch 0 Batch 485/538 - Train Accuracy: 0.6006, Validation Accuracy: 0.6252, Loss: 0.6142 Epoch 0 Batch 490/538 - Train Accuracy: 0.6298, Validation Accuracy: 0.6341, Loss: 0.6131 Epoch 0 Batch 495/538 - Train Accuracy: 0.5957, Validation Accuracy: 0.6335, Loss: 0.6289 Epoch 0 Batch 500/538 - Train Accuracy: 0.6461, Validation Accuracy: 0.6341, Loss: 0.5564 Epoch 0 Batch 505/538 - Train Accuracy: 0.6484, Validation Accuracy: 0.6365, Loss: 0.5952 Epoch 0 Batch 510/538 - Train Accuracy: 0.6616, Validation Accuracy: 0.6422, Loss: 0.5761 Epoch 0 Batch 515/538 - Train Accuracy: 0.6451, Validation Accuracy: 0.6541, Loss: 0.5756 Epoch 0 Batch 520/538 - Train Accuracy: 0.6266, Validation Accuracy: 0.6483, Loss: 0.6067 Epoch 0 Batch 525/538 - Train Accuracy: 0.6670, Validation Accuracy: 0.6502, 
Loss: 0.5631 Epoch 0 Batch 530/538 - Train Accuracy: 0.6426, Validation Accuracy: 0.6454, Loss: 0.6006 Epoch 0 Batch 535/538 - Train Accuracy: 0.6501, Validation Accuracy: 0.6456, Loss: 0.5532 Epoch 1 Batch 5/538 - Train Accuracy: 0.6177, Validation Accuracy: 0.6293, Loss: 0.5705 Epoch 1 Batch 10/538 - Train Accuracy: 0.6014, Validation Accuracy: 0.6429, Loss: 0.5842 Epoch 1 Batch 15/538 - Train Accuracy: 0.6438, Validation Accuracy: 0.6518, Loss: 0.5294 Epoch 1 Batch 20/538 - Train Accuracy: 0.6585, Validation Accuracy: 0.6634, Loss: 0.5527 Epoch 1 Batch 25/538 - Train Accuracy: 0.6418, Validation Accuracy: 0.6683, Loss: 0.5553 Epoch 1 Batch 30/538 - Train Accuracy: 0.6434, Validation Accuracy: 0.6545, Loss: 0.5666 Epoch 1 Batch 35/538 - Train Accuracy: 0.6721, Validation Accuracy: 0.6653, Loss: 0.5337 Epoch 1 Batch 40/538 - Train Accuracy: 0.7008, Validation Accuracy: 0.6685, Loss: 0.4761 Epoch 1 Batch 45/538 - Train Accuracy: 0.6648, Validation Accuracy: 0.6776, Loss: 0.5012 Epoch 1 Batch 50/538 - Train Accuracy: 0.6744, Validation Accuracy: 0.6834, Loss: 0.5253 Epoch 1 Batch 55/538 - Train Accuracy: 0.6541, Validation Accuracy: 0.6738, Loss: 0.5355 Epoch 1 Batch 60/538 - Train Accuracy: 0.6561, Validation Accuracy: 0.6758, Loss: 0.5155 Epoch 1 Batch 65/538 - Train Accuracy: 0.6471, Validation Accuracy: 0.6623, Loss: 0.5215 Epoch 1 Batch 70/538 - Train Accuracy: 0.6735, Validation Accuracy: 0.6808, Loss: 0.4866 Epoch 1 Batch 75/538 - Train Accuracy: 0.6970, Validation Accuracy: 0.6754, Loss: 0.4780 Epoch 1 Batch 80/538 - Train Accuracy: 0.6582, Validation Accuracy: 0.6818, Loss: 0.5228 Epoch 1 Batch 85/538 - Train Accuracy: 0.6923, Validation Accuracy: 0.6816, Loss: 0.4491 Epoch 1 Batch 90/538 - Train Accuracy: 0.6821, Validation Accuracy: 0.6806, Loss: 0.4940 Epoch 1 Batch 95/538 - Train Accuracy: 0.6937, Validation Accuracy: 0.6738, Loss: 0.4467 Epoch 1 Batch 100/538 - Train Accuracy: 0.7064, Validation Accuracy: 0.6855, Loss: 0.4631 Epoch 1 Batch 105/538 - 
Train Accuracy: 0.6661, Validation Accuracy: 0.6788, Loss: 0.4454 Epoch 1 Batch 110/538 - Train Accuracy: 0.6816, Validation Accuracy: 0.7008, Loss: 0.4760 Epoch 1 Batch 115/538 - Train Accuracy: 0.6736, Validation Accuracy: 0.7132, Loss: 0.4812 Epoch 1 Batch 120/538 - Train Accuracy: 0.6967, Validation Accuracy: 0.7038, Loss: 0.4463 Epoch 1 Batch 125/538 - Train Accuracy: 0.7282, Validation Accuracy: 0.6948, Loss: 0.4480 Epoch 1 Batch 130/538 - Train Accuracy: 0.7024, Validation Accuracy: 0.6934, Loss: 0.4302 Epoch 1 Batch 135/538 - Train Accuracy: 0.7195, Validation Accuracy: 0.7083, Loss: 0.4425 Epoch 1 Batch 140/538 - Train Accuracy: 0.6777, Validation Accuracy: 0.7141, Loss: 0.4673 Epoch 1 Batch 145/538 - Train Accuracy: 0.7044, Validation Accuracy: 0.7191, Loss: 0.4387 Epoch 1 Batch 150/538 - Train Accuracy: 0.7102, Validation Accuracy: 0.7255, Loss: 0.4345 Epoch 1 Batch 155/538 - Train Accuracy: 0.7258, Validation Accuracy: 0.7154, Loss: 0.4365 Epoch 1 Batch 160/538 - Train Accuracy: 0.6970, Validation Accuracy: 0.7156, Loss: 0.4092 Epoch 1 Batch 165/538 - Train Accuracy: 0.7305, Validation Accuracy: 0.7310, Loss: 0.3826 Epoch 1 Batch 170/538 - Train Accuracy: 0.7539, Validation Accuracy: 0.7367, Loss: 0.4143 Epoch 1 Batch 175/538 - Train Accuracy: 0.7371, Validation Accuracy: 0.7335, Loss: 0.4095 Epoch 1 Batch 180/538 - Train Accuracy: 0.7573, Validation Accuracy: 0.7349, Loss: 0.3989 Epoch 1 Batch 185/538 - Train Accuracy: 0.7539, Validation Accuracy: 0.7235, Loss: 0.3800 Epoch 1 Batch 190/538 - Train Accuracy: 0.7401, Validation Accuracy: 0.7337, Loss: 0.4011 Epoch 1 Batch 195/538 - Train Accuracy: 0.7844, Validation Accuracy: 0.7461, Loss: 0.3650 Epoch 1 Batch 200/538 - Train Accuracy: 0.7771, Validation Accuracy: 0.7434, Loss: 0.3735 Epoch 1 Batch 205/538 - Train Accuracy: 0.7705, Validation Accuracy: 0.7543, Loss: 0.3557 Epoch 1 Batch 210/538 - Train Accuracy: 0.7314, Validation Accuracy: 0.7459, Loss: 0.3654 Epoch 1 Batch 215/538 - Train Accuracy: 
0.7691, Validation Accuracy: 0.7496, Loss: 0.3631 Epoch 1 Batch 220/538 - Train Accuracy: 0.7753, Validation Accuracy: 0.7425, Loss: 0.3414 Epoch 1 Batch 225/538 - Train Accuracy: 0.7636, Validation Accuracy: 0.7525, Loss: 0.3480 Epoch 1 Batch 230/538 - Train Accuracy: 0.7826, Validation Accuracy: 0.7635, Loss: 0.3557 Epoch 1 Batch 235/538 - Train Accuracy: 0.7876, Validation Accuracy: 0.7607, Loss: 0.3240 Epoch 1 Batch 240/538 - Train Accuracy: 0.7820, Validation Accuracy: 0.7653, Loss: 0.3379 Epoch 1 Batch 245/538 - Train Accuracy: 0.7719, Validation Accuracy: 0.7697, Loss: 0.3477 Epoch 1 Batch 250/538 - Train Accuracy: 0.7979, Validation Accuracy: 0.7686, Loss: 0.3274 Epoch 1 Batch 255/538 - Train Accuracy: 0.7963, Validation Accuracy: 0.7738, Loss: 0.3160 Epoch 1 Batch 260/538 - Train Accuracy: 0.7619, Validation Accuracy: 0.7752, Loss: 0.3149 Epoch 1 Batch 265/538 - Train Accuracy: 0.8000, Validation Accuracy: 0.7695, Loss: 0.3297 Epoch 1 Batch 270/538 - Train Accuracy: 0.7975, Validation Accuracy: 0.7985, Loss: 0.3122 Epoch 1 Batch 275/538 - Train Accuracy: 0.7781, Validation Accuracy: 0.7940, Loss: 0.3194 Epoch 1 Batch 280/538 - Train Accuracy: 0.8116, Validation Accuracy: 0.7750, Loss: 0.2840 Epoch 1 Batch 285/538 - Train Accuracy: 0.8064, Validation Accuracy: 0.7896, Loss: 0.2695 Epoch 1 Batch 290/538 - Train Accuracy: 0.8441, Validation Accuracy: 0.8049, Loss: 0.2731 Epoch 1 Batch 295/538 - Train Accuracy: 0.8358, Validation Accuracy: 0.7992, Loss: 0.2616 Epoch 1 Batch 300/538 - Train Accuracy: 0.8129, Validation Accuracy: 0.7962, Loss: 0.2669 Epoch 1 Batch 305/538 - Train Accuracy: 0.8358, Validation Accuracy: 0.8153, Loss: 0.2646 Epoch 1 Batch 310/538 - Train Accuracy: 0.8473, Validation Accuracy: 0.8205, Loss: 0.2674 Epoch 1 Batch 315/538 - Train Accuracy: 0.8134, Validation Accuracy: 0.8162, Loss: 0.2553 Epoch 1 Batch 320/538 - Train Accuracy: 0.8086, Validation Accuracy: 0.8192, Loss: 0.2543 Epoch 1 Batch 325/538 - Train Accuracy: 0.8627, Validation 
Accuracy: 0.8255, Loss: 0.2497 Epoch 1 Batch 330/538 - Train Accuracy: 0.8341, Validation Accuracy: 0.8251, Loss: 0.2382 Epoch 1 Batch 335/538 - Train Accuracy: 0.8339, Validation Accuracy: 0.8200, Loss: 0.2427 Epoch 1 Batch 340/538 - Train Accuracy: 0.8537, Validation Accuracy: 0.8258, Loss: 0.2479 Epoch 1 Batch 345/538 - Train Accuracy: 0.8438, Validation Accuracy: 0.8244, Loss: 0.2370 Epoch 1 Batch 350/538 - Train Accuracy: 0.8211, Validation Accuracy: 0.8232, Loss: 0.2475 Epoch 1 Batch 355/538 - Train Accuracy: 0.8363, Validation Accuracy: 0.8297, Loss: 0.2439 Epoch 1 Batch 360/538 - Train Accuracy: 0.8035, Validation Accuracy: 0.8260, Loss: 0.2417 Epoch 1 Batch 365/538 - Train Accuracy: 0.8387, Validation Accuracy: 0.8475, Loss: 0.2285 Epoch 1 Batch 370/538 - Train Accuracy: 0.8475, Validation Accuracy: 0.8356, Loss: 0.2297 Epoch 1 Batch 375/538 - Train Accuracy: 0.8464, Validation Accuracy: 0.8372, Loss: 0.2043 Epoch 1 Batch 380/538 - Train Accuracy: 0.8734, Validation Accuracy: 0.8487, Loss: 0.2038 Epoch 1 Batch 385/538 - Train Accuracy: 0.8711, Validation Accuracy: 0.8422, Loss: 0.2019 Epoch 1 Batch 390/538 - Train Accuracy: 0.8821, Validation Accuracy: 0.8525, Loss: 0.1932 Epoch 1 Batch 395/538 - Train Accuracy: 0.8268, Validation Accuracy: 0.8517, Loss: 0.2277 Epoch 1 Batch 400/538 - Train Accuracy: 0.8655, Validation Accuracy: 0.8413, Loss: 0.2006 Epoch 1 Batch 405/538 - Train Accuracy: 0.8579, Validation Accuracy: 0.8388, Loss: 0.1925 Epoch 1 Batch 410/538 - Train Accuracy: 0.8666, Validation Accuracy: 0.8510, Loss: 0.1967 Epoch 1 Batch 415/538 - Train Accuracy: 0.8443, Validation Accuracy: 0.8606, Loss: 0.1915 Epoch 1 Batch 420/538 - Train Accuracy: 0.8818, Validation Accuracy: 0.8583, Loss: 0.1847 Epoch 1 Batch 425/538 - Train Accuracy: 0.8506, Validation Accuracy: 0.8542, Loss: 0.1874 Epoch 1 Batch 430/538 - Train Accuracy: 0.8611, Validation Accuracy: 0.8540, Loss: 0.1815 Epoch 1 Batch 435/538 - Train Accuracy: 0.8643, Validation Accuracy: 0.8578, 
Loss: 0.1691 Epoch 1 Batch 440/538 - Train Accuracy: 0.8742, Validation Accuracy: 0.8622, Loss: 0.1896 Epoch 1 Batch 445/538 - Train Accuracy: 0.9074, Validation Accuracy: 0.8702, Loss: 0.1546 Epoch 1 Batch 450/538 - Train Accuracy: 0.8750, Validation Accuracy: 0.8675, Loss: 0.1820 Epoch 1 Batch 455/538 - Train Accuracy: 0.8817, Validation Accuracy: 0.8565, Loss: 0.1545 Epoch 1 Batch 460/538 - Train Accuracy: 0.8577, Validation Accuracy: 0.8540, Loss: 0.1681 Epoch 1 Batch 465/538 - Train Accuracy: 0.8895, Validation Accuracy: 0.8777, Loss: 0.1548 Epoch 1 Batch 470/538 - Train Accuracy: 0.8839, Validation Accuracy: 0.8727, Loss: 0.1502 Epoch 1 Batch 475/538 - Train Accuracy: 0.9044, Validation Accuracy: 0.8704, Loss: 0.1490 Epoch 1 Batch 480/538 - Train Accuracy: 0.8800, Validation Accuracy: 0.8894, Loss: 0.1479 Epoch 1 Batch 485/538 - Train Accuracy: 0.8942, Validation Accuracy: 0.8651, Loss: 0.1455 Epoch 1 Batch 490/538 - Train Accuracy: 0.8880, Validation Accuracy: 0.8746, Loss: 0.1403 Epoch 1 Batch 495/538 - Train Accuracy: 0.9053, Validation Accuracy: 0.8954, Loss: 0.1412 Epoch 1 Batch 500/538 - Train Accuracy: 0.9224, Validation Accuracy: 0.8864, Loss: 0.1186 Epoch 1 Batch 505/538 - Train Accuracy: 0.9154, Validation Accuracy: 0.9167, Loss: 0.1249 Epoch 1 Batch 510/538 - Train Accuracy: 0.9001, Validation Accuracy: 0.8970, Loss: 0.1249 Epoch 1 Batch 515/538 - Train Accuracy: 0.9031, Validation Accuracy: 0.8869, Loss: 0.1523 Epoch 1 Batch 520/538 - Train Accuracy: 0.8984, Validation Accuracy: 0.8757, Loss: 0.1345 Epoch 1 Batch 525/538 - Train Accuracy: 0.9057, Validation Accuracy: 0.9109, Loss: 0.1286 Epoch 1 Batch 530/538 - Train Accuracy: 0.8838, Validation Accuracy: 0.9025, Loss: 0.1410 Epoch 1 Batch 535/538 - Train Accuracy: 0.9020, Validation Accuracy: 0.8977, Loss: 0.1205 Epoch 2 Batch 5/538 - Train Accuracy: 0.8943, Validation Accuracy: 0.9009, Loss: 0.1277 Epoch 2 Batch 10/538 - Train Accuracy: 0.9004, Validation Accuracy: 0.9148, Loss: 0.1319 Epoch 2 
Batch 15/538 - Train Accuracy: 0.9146, Validation Accuracy: 0.9071, Loss: 0.1110 Epoch 2 Batch 20/538 - Train Accuracy: 0.9161, Validation Accuracy: 0.9123, Loss: 0.1159 Epoch 2 Batch 25/538 - Train Accuracy: 0.9047, Validation Accuracy: 0.9128, Loss: 0.1221 Epoch 2 Batch 30/538 - Train Accuracy: 0.8984, Validation Accuracy: 0.8887, Loss: 0.1227 Epoch 2 Batch 35/538 - Train Accuracy: 0.9191, Validation Accuracy: 0.8935, Loss: 0.0982 Epoch 2 Batch 40/538 - Train Accuracy: 0.9073, Validation Accuracy: 0.8981, Loss: 0.0961 Epoch 2 Batch 45/538 - Train Accuracy: 0.9191, Validation Accuracy: 0.9020, Loss: 0.1030 Epoch 2 Batch 50/538 - Train Accuracy: 0.9080, Validation Accuracy: 0.9087, Loss: 0.1071 Epoch 2 Batch 55/538 - Train Accuracy: 0.8900, Validation Accuracy: 0.8901, Loss: 0.1010 Epoch 2 Batch 60/538 - Train Accuracy: 0.9145, Validation Accuracy: 0.9038, Loss: 0.1008 Epoch 2 Batch 65/538 - Train Accuracy: 0.9000, Validation Accuracy: 0.8860, Loss: 0.1015 Epoch 2 Batch 70/538 - Train Accuracy: 0.9066, Validation Accuracy: 0.9055, Loss: 0.0979 Epoch 2 Batch 75/538 - Train Accuracy: 0.9062, Validation Accuracy: 0.9181, Loss: 0.0985 Epoch 2 Batch 80/538 - Train Accuracy: 0.9121, Validation Accuracy: 0.9150, Loss: 0.1059 Epoch 2 Batch 85/538 - Train Accuracy: 0.9245, Validation Accuracy: 0.9070, Loss: 0.0839 Epoch 2 Batch 90/538 - Train Accuracy: 0.8910, Validation Accuracy: 0.9077, Loss: 0.1032 Epoch 2 Batch 95/538 - Train Accuracy: 0.9165, Validation Accuracy: 0.9146, Loss: 0.0840 Epoch 2 Batch 100/538 - Train Accuracy: 0.9199, Validation Accuracy: 0.9144, Loss: 0.0857 Epoch 2 Batch 105/538 - Train Accuracy: 0.8999, Validation Accuracy: 0.9155, Loss: 0.0824 Epoch 2 Batch 110/538 - Train Accuracy: 0.9203, Validation Accuracy: 0.9238, Loss: 0.0921 Epoch 2 Batch 115/538 - Train Accuracy: 0.9219, Validation Accuracy: 0.9190, Loss: 0.0920 Epoch 2 Batch 120/538 - Train Accuracy: 0.9270, Validation Accuracy: 0.9203, Loss: 0.0757 Epoch 2 Batch 125/538 - Train Accuracy: 
0.9131, Validation Accuracy: 0.9128, Loss: 0.0915 Epoch 2 Batch 130/538 - Train Accuracy: 0.9371, Validation Accuracy: 0.9082, Loss: 0.0799 Epoch 2 Batch 135/538 - Train Accuracy: 0.9189, Validation Accuracy: 0.9070, Loss: 0.0930 Epoch 2 Batch 140/538 - Train Accuracy: 0.9064, Validation Accuracy: 0.9164, Loss: 0.1020 Epoch 2 Batch 145/538 - Train Accuracy: 0.8821, Validation Accuracy: 0.9102, Loss: 0.1046 Epoch 2 Batch 150/538 - Train Accuracy: 0.9338, Validation Accuracy: 0.9302, Loss: 0.0811 Epoch 2 Batch 155/538 - Train Accuracy: 0.9038, Validation Accuracy: 0.9091, Loss: 0.0878 Epoch 2 Batch 160/538 - Train Accuracy: 0.9129, Validation Accuracy: 0.9057, Loss: 0.0725 Epoch 2 Batch 165/538 - Train Accuracy: 0.9118, Validation Accuracy: 0.9009, Loss: 0.0706 Epoch 2 Batch 170/538 - Train Accuracy: 0.9070, Validation Accuracy: 0.9178, Loss: 0.0837 Epoch 2 Batch 175/538 - Train Accuracy: 0.9313, Validation Accuracy: 0.9153, Loss: 0.0758 Epoch 2 Batch 180/538 - Train Accuracy: 0.9156, Validation Accuracy: 0.9118, Loss: 0.0810 Epoch 2 Batch 185/538 - Train Accuracy: 0.9395, Validation Accuracy: 0.9192, Loss: 0.0692 Epoch 2 Batch 190/538 - Train Accuracy: 0.9219, Validation Accuracy: 0.9190, Loss: 0.0980 Epoch 2 Batch 195/538 - Train Accuracy: 0.9180, Validation Accuracy: 0.9086, Loss: 0.0783 Epoch 2 Batch 200/538 - Train Accuracy: 0.9217, Validation Accuracy: 0.9206, Loss: 0.0661 Epoch 2 Batch 205/538 - Train Accuracy: 0.9397, Validation Accuracy: 0.9233, Loss: 0.0688 Epoch 2 Batch 210/538 - Train Accuracy: 0.9111, Validation Accuracy: 0.9176, Loss: 0.0790 Epoch 2 Batch 215/538 - Train Accuracy: 0.9391, Validation Accuracy: 0.9171, Loss: 0.1156 Epoch 2 Batch 220/538 - Train Accuracy: 0.9200, Validation Accuracy: 0.9075, Loss: 0.0764 Epoch 2 Batch 225/538 - Train Accuracy: 0.9284, Validation Accuracy: 0.9171, Loss: 0.0818 Epoch 2 Batch 230/538 - Train Accuracy: 0.9072, Validation Accuracy: 0.9169, Loss: 0.0832 Epoch 2 Batch 235/538 - Train Accuracy: 0.9394, Validation 
Accuracy: 0.9228, Loss: 0.0651 Epoch 2 Batch 240/538 - Train Accuracy: 0.9236, Validation Accuracy: 0.9244, Loss: 0.0794 Epoch 2 Batch 245/538 - Train Accuracy: 0.9133, Validation Accuracy: 0.9169, Loss: 0.0876 Epoch 2 Batch 250/538 - Train Accuracy: 0.9330, Validation Accuracy: 0.9130, Loss: 0.0683 Epoch 2 Batch 255/538 - Train Accuracy: 0.9467, Validation Accuracy: 0.9242, Loss: 0.0643 Epoch 2 Batch 260/538 - Train Accuracy: 0.9064, Validation Accuracy: 0.9238, Loss: 0.0741 Epoch 2 Batch 265/538 - Train Accuracy: 0.9158, Validation Accuracy: 0.9224, Loss: 0.0777 Epoch 2 Batch 270/538 - Train Accuracy: 0.9316, Validation Accuracy: 0.9320, Loss: 0.0620 Epoch 2 Batch 275/538 - Train Accuracy: 0.9236, Validation Accuracy: 0.9164, Loss: 0.0748 Epoch 2 Batch 280/538 - Train Accuracy: 0.9466, Validation Accuracy: 0.9206, Loss: 0.0576 Epoch 2 Batch 285/538 - Train Accuracy: 0.9422, Validation Accuracy: 0.9171, Loss: 0.0552 Epoch 2 Batch 290/538 - Train Accuracy: 0.9500, Validation Accuracy: 0.9368, Loss: 0.0570 Epoch 2 Batch 295/538 - Train Accuracy: 0.9508, Validation Accuracy: 0.9212, Loss: 0.0603 Epoch 2 Batch 300/538 - Train Accuracy: 0.9161, Validation Accuracy: 0.9256, Loss: 0.0658 Epoch 2 Batch 305/538 - Train Accuracy: 0.9496, Validation Accuracy: 0.9228, Loss: 0.0571 Epoch 2 Batch 310/538 - Train Accuracy: 0.9473, Validation Accuracy: 0.9373, Loss: 0.0611 Epoch 2 Batch 315/538 - Train Accuracy: 0.9243, Validation Accuracy: 0.9347, Loss: 0.0572 Epoch 2 Batch 320/538 - Train Accuracy: 0.9349, Validation Accuracy: 0.9400, Loss: 0.0588 Epoch 2 Batch 325/538 - Train Accuracy: 0.9449, Validation Accuracy: 0.9343, Loss: 0.0600 Epoch 2 Batch 330/538 - Train Accuracy: 0.9466, Validation Accuracy: 0.9297, Loss: 0.0566 Epoch 2 Batch 335/538 - Train Accuracy: 0.9273, Validation Accuracy: 0.9252, Loss: 0.0580 Epoch 2 Batch 340/538 - Train Accuracy: 0.9174, Validation Accuracy: 0.9320, Loss: 0.0587 Epoch 2 Batch 345/538 - Train Accuracy: 0.9448, Validation Accuracy: 0.9322, 
Loss: 0.0586 Epoch 2 Batch 350/538 - Train Accuracy: 0.9438, Validation Accuracy: 0.9380, Loss: 0.0699 Epoch 2 Batch 355/538 - Train Accuracy: 0.9469, Validation Accuracy: 0.9407, Loss: 0.0584 Epoch 2 Batch 360/538 - Train Accuracy: 0.9437, Validation Accuracy: 0.9423, Loss: 0.0590 Epoch 2 Batch 365/538 - Train Accuracy: 0.9226, Validation Accuracy: 0.9371, Loss: 0.0607 Epoch 2 Batch 370/538 - Train Accuracy: 0.9424, Validation Accuracy: 0.9297, Loss: 0.0576 Epoch 2 Batch 375/538 - Train Accuracy: 0.9375, Validation Accuracy: 0.9437, Loss: 0.0527 Epoch 2 Batch 380/538 - Train Accuracy: 0.9297, Validation Accuracy: 0.9313, Loss: 0.0527 Epoch 2 Batch 385/538 - Train Accuracy: 0.9472, Validation Accuracy: 0.9411, Loss: 0.0545 Epoch 2 Batch 390/538 - Train Accuracy: 0.9397, Validation Accuracy: 0.9363, Loss: 0.0514 Epoch 2 Batch 395/538 - Train Accuracy: 0.9348, Validation Accuracy: 0.9425, Loss: 0.0652 Epoch 2 Batch 400/538 - Train Accuracy: 0.9522, Validation Accuracy: 0.9382, Loss: 0.0536 Epoch 2 Batch 405/538 - Train Accuracy: 0.9347, Validation Accuracy: 0.9469, Loss: 0.0537 Epoch 2 Batch 410/538 - Train Accuracy: 0.9457, Validation Accuracy: 0.9361, Loss: 0.0589 Epoch 2 Batch 415/538 - Train Accuracy: 0.9303, Validation Accuracy: 0.9272, Loss: 0.0554 Epoch 2 Batch 420/538 - Train Accuracy: 0.9564, Validation Accuracy: 0.9361, Loss: 0.0532 Epoch 2 Batch 425/538 - Train Accuracy: 0.9271, Validation Accuracy: 0.9348, Loss: 0.0644 Epoch 2 Batch 430/538 - Train Accuracy: 0.9373, Validation Accuracy: 0.9503, Loss: 0.0537 Epoch 2 Batch 435/538 - Train Accuracy: 0.9451, Validation Accuracy: 0.9446, Loss: 0.0503 Epoch 2 Batch 440/538 - Train Accuracy: 0.9406, Validation Accuracy: 0.9430, Loss: 0.0565 Epoch 2 Batch 445/538 - Train Accuracy: 0.9547, Validation Accuracy: 0.9411, Loss: 0.0436 Epoch 2 Batch 450/538 - Train Accuracy: 0.9299, Validation Accuracy: 0.9363, Loss: 0.0629 Epoch 2 Batch 455/538 - Train Accuracy: 0.9384, Validation Accuracy: 0.9386, Loss: 0.0514 Epoch 
2 Batch 460/538 - Train Accuracy: 0.9405, Validation Accuracy: 0.9389, Loss: 0.0586 Epoch 2 Batch 465/538 - Train Accuracy: 0.9490, Validation Accuracy: 0.9501, Loss: 0.0476 Epoch 2 Batch 470/538 - Train Accuracy: 0.9369, Validation Accuracy: 0.9384, Loss: 0.0484 Epoch 2 Batch 475/538 - Train Accuracy: 0.9485, Validation Accuracy: 0.9395, Loss: 0.0491 Epoch 2 Batch 480/538 - Train Accuracy: 0.9399, Validation Accuracy: 0.9531, Loss: 0.0488 Epoch 2 Batch 485/538 - Train Accuracy: 0.9619, Validation Accuracy: 0.9419, Loss: 0.0529 Epoch 2 Batch 490/538 - Train Accuracy: 0.9390, Validation Accuracy: 0.9489, Loss: 0.0491 Epoch 2 Batch 495/538 - Train Accuracy: 0.9590, Validation Accuracy: 0.9276, Loss: 0.0471 Epoch 2 Batch 500/538 - Train Accuracy: 0.9636, Validation Accuracy: 0.9442, Loss: 0.0369 Epoch 2 Batch 505/538 - Train Accuracy: 0.9654, Validation Accuracy: 0.9531, Loss: 0.0392 Epoch 2 Batch 510/538 - Train Accuracy: 0.9624, Validation Accuracy: 0.9435, Loss: 0.0425 Epoch 2 Batch 515/538 - Train Accuracy: 0.9479, Validation Accuracy: 0.9462, Loss: 0.0554 Epoch 2 Batch 520/538 - Train Accuracy: 0.9443, Validation Accuracy: 0.9371, Loss: 0.0480 Epoch 2 Batch 525/538 - Train Accuracy: 0.9312, Validation Accuracy: 0.9322, Loss: 0.0491 Epoch 2 Batch 530/538 - Train Accuracy: 0.9348, Validation Accuracy: 0.9308, Loss: 0.0525 Epoch 2 Batch 535/538 - Train Accuracy: 0.9501, Validation Accuracy: 0.9487, Loss: 0.0464 Epoch 3 Batch 5/538 - Train Accuracy: 0.9511, Validation Accuracy: 0.9435, Loss: 0.0465 Epoch 3 Batch 10/538 - Train Accuracy: 0.9453, Validation Accuracy: 0.9258, Loss: 0.0489 Epoch 3 Batch 15/538 - Train Accuracy: 0.9518, Validation Accuracy: 0.9345, Loss: 0.0412 Epoch 3 Batch 20/538 - Train Accuracy: 0.9444, Validation Accuracy: 0.9487, Loss: 0.0473 Epoch 3 Batch 25/538 - Train Accuracy: 0.9445, Validation Accuracy: 0.9361, Loss: 0.0515 Epoch 3 Batch 30/538 - Train Accuracy: 0.9412, Validation Accuracy: 0.9368, Loss: 0.0515 Epoch 3 Batch 35/538 - Train 
Accuracy: 0.9578, Validation Accuracy: 0.9439, Loss: 0.0375 Epoch 3 Batch 40/538 - Train Accuracy: 0.9453, Validation Accuracy: 0.9473, Loss: 0.0381 Epoch 3 Batch 45/538 - Train Accuracy: 0.9474, Validation Accuracy: 0.9430, Loss: 0.0461 Epoch 3 Batch 50/538 - Train Accuracy: 0.9416, Validation Accuracy: 0.9471, Loss: 0.0421 Epoch 3 Batch 55/538 - Train Accuracy: 0.9457, Validation Accuracy: 0.9409, Loss: 0.0381 Epoch 3 Batch 60/538 - Train Accuracy: 0.9545, Validation Accuracy: 0.9435, Loss: 0.0428 Epoch 3 Batch 65/538 - Train Accuracy: 0.9416, Validation Accuracy: 0.9412, Loss: 0.0391 Epoch 3 Batch 70/538 - Train Accuracy: 0.9382, Validation Accuracy: 0.9359, Loss: 0.0420 Epoch 3 Batch 75/538 - Train Accuracy: 0.9488, Validation Accuracy: 0.9476, Loss: 0.0436 Epoch 3 Batch 80/538 - Train Accuracy: 0.9508, Validation Accuracy: 0.9462, Loss: 0.0459 Epoch 3 Batch 85/538 - Train Accuracy: 0.9597, Validation Accuracy: 0.9494, Loss: 0.0371 Epoch 3 Batch 90/538 - Train Accuracy: 0.9529, Validation Accuracy: 0.9492, Loss: 0.0466 Epoch 3 Batch 95/538 - Train Accuracy: 0.9419, Validation Accuracy: 0.9457, Loss: 0.0377 Epoch 3 Batch 100/538 - Train Accuracy: 0.9543, Validation Accuracy: 0.9483, Loss: 0.0349 Epoch 3 Batch 105/538 - Train Accuracy: 0.9544, Validation Accuracy: 0.9334, Loss: 0.0356 Epoch 3 Batch 110/538 - Train Accuracy: 0.9424, Validation Accuracy: 0.9426, Loss: 0.0416 Epoch 3 Batch 115/538 - Train Accuracy: 0.9398, Validation Accuracy: 0.9451, Loss: 0.0431 Epoch 3 Batch 120/538 - Train Accuracy: 0.9426, Validation Accuracy: 0.9522, Loss: 0.0315 Epoch 3 Batch 125/538 - Train Accuracy: 0.9477, Validation Accuracy: 0.9526, Loss: 0.0459 Epoch 3 Batch 130/538 - Train Accuracy: 0.9656, Validation Accuracy: 0.9437, Loss: 0.0402 Epoch 3 Batch 135/538 - Train Accuracy: 0.9466, Validation Accuracy: 0.9194, Loss: 0.0470 Epoch 3 Batch 140/538 - Train Accuracy: 0.9371, Validation Accuracy: 0.9329, Loss: 0.0512 Epoch 3 Batch 145/538 - Train Accuracy: 0.9235, Validation 
Accuracy: 0.9521, Loss: 0.0569 Epoch 3 Batch 150/538 - Train Accuracy: 0.9551, Validation Accuracy: 0.9359, Loss: 0.0398 Epoch 3 Batch 155/538 - Train Accuracy: 0.9399, Validation Accuracy: 0.9382, Loss: 0.0431 Epoch 3 Batch 160/538 - Train Accuracy: 0.9535, Validation Accuracy: 0.9423, Loss: 0.0358 Epoch 3 Batch 165/538 - Train Accuracy: 0.9371, Validation Accuracy: 0.9503, Loss: 0.0348 Epoch 3 Batch 170/538 - Train Accuracy: 0.9490, Validation Accuracy: 0.9411, Loss: 0.0396 Epoch 3 Batch 175/538 - Train Accuracy: 0.9602, Validation Accuracy: 0.9396, Loss: 0.0358 Epoch 3 Batch 180/538 - Train Accuracy: 0.9462, Validation Accuracy: 0.9439, Loss: 0.0427 Epoch 3 Batch 185/538 - Train Accuracy: 0.9717, Validation Accuracy: 0.9405, Loss: 0.0322 Epoch 3 Batch 190/538 - Train Accuracy: 0.9464, Validation Accuracy: 0.9395, Loss: 0.0540 Epoch 3 Batch 195/538 - Train Accuracy: 0.9630, Validation Accuracy: 0.9505, Loss: 0.0430 Epoch 3 Batch 200/538 - Train Accuracy: 0.9682, Validation Accuracy: 0.9538, Loss: 0.0314 Epoch 3 Batch 205/538 - Train Accuracy: 0.9528, Validation Accuracy: 0.9501, Loss: 0.0363 Epoch 3 Batch 210/538 - Train Accuracy: 0.9444, Validation Accuracy: 0.9531, Loss: 0.0431 Epoch 3 Batch 215/538 - Train Accuracy: 0.9602, Validation Accuracy: 0.9551, Loss: 0.0359 Epoch 3 Batch 220/538 - Train Accuracy: 0.9414, Validation Accuracy: 0.9498, Loss: 0.0389 Epoch 3 Batch 225/538 - Train Accuracy: 0.9570, Validation Accuracy: 0.9391, Loss: 0.0352 Epoch 3 Batch 230/538 - Train Accuracy: 0.9451, Validation Accuracy: 0.9531, Loss: 0.0383 Epoch 3 Batch 235/538 - Train Accuracy: 0.9660, Validation Accuracy: 0.9515, Loss: 0.0298 Epoch 3 Batch 240/538 - Train Accuracy: 0.9629, Validation Accuracy: 0.9512, Loss: 0.0372 Epoch 3 Batch 245/538 - Train Accuracy: 0.9590, Validation Accuracy: 0.9485, Loss: 0.0479 Epoch 3 Batch 250/538 - Train Accuracy: 0.9510, Validation Accuracy: 0.9444, Loss: 0.0348 Epoch 3 Batch 255/538 - Train Accuracy: 0.9580, Validation Accuracy: 0.9471, 
Loss: 0.0322 Epoch 3 Batch 260/538 - Train Accuracy: 0.9418, Validation Accuracy: 0.9471, Loss: 0.0378 Epoch 3 Batch 265/538 - Train Accuracy: 0.9410, Validation Accuracy: 0.9544, Loss: 0.0424 Epoch 3 Batch 270/538 - Train Accuracy: 0.9697, Validation Accuracy: 0.9579, Loss: 0.0297 Epoch 3 Batch 275/538 - Train Accuracy: 0.9486, Validation Accuracy: 0.9480, Loss: 0.0391 Epoch 3 Batch 280/538 - Train Accuracy: 0.9641, Validation Accuracy: 0.9455, Loss: 0.0297 Epoch 3 Batch 285/538 - Train Accuracy: 0.9585, Validation Accuracy: 0.9455, Loss: 0.0307 Epoch 3 Batch 290/538 - Train Accuracy: 0.9762, Validation Accuracy: 0.9508, Loss: 0.0284 Epoch 3 Batch 295/538 - Train Accuracy: 0.9600, Validation Accuracy: 0.9632, Loss: 0.0336 Epoch 3 Batch 300/538 - Train Accuracy: 0.9526, Validation Accuracy: 0.9474, Loss: 0.0359 Epoch 3 Batch 305/538 - Train Accuracy: 0.9682, Validation Accuracy: 0.9526, Loss: 0.0300 Epoch 3 Batch 310/538 - Train Accuracy: 0.9818, Validation Accuracy: 0.9592, Loss: 0.0356 Epoch 3 Batch 315/538 - Train Accuracy: 0.9379, Validation Accuracy: 0.9652, Loss: 0.0310 Epoch 3 Batch 320/538 - Train Accuracy: 0.9621, Validation Accuracy: 0.9545, Loss: 0.0336 Epoch 3 Batch 325/538 - Train Accuracy: 0.9522, Validation Accuracy: 0.9538, Loss: 0.0334 Epoch 3 Batch 330/538 - Train Accuracy: 0.9630, Validation Accuracy: 0.9494, Loss: 0.0312 Epoch 3 Batch 335/538 - Train Accuracy: 0.9513, Validation Accuracy: 0.9513, Loss: 0.0329 Epoch 3 Batch 340/538 - Train Accuracy: 0.9502, Validation Accuracy: 0.9643, Loss: 0.0349 Epoch 3 Batch 345/538 - Train Accuracy: 0.9715, Validation Accuracy: 0.9585, Loss: 0.0326 Epoch 3 Batch 350/538 - Train Accuracy: 0.9632, Validation Accuracy: 0.9625, Loss: 0.0391 Epoch 3 Batch 355/538 - Train Accuracy: 0.9684, Validation Accuracy: 0.9567, Loss: 0.0301 Epoch 3 Batch 360/538 - Train Accuracy: 0.9500, Validation Accuracy: 0.9641, Loss: 0.0304 Epoch 3 Batch 365/538 - Train Accuracy: 0.9555, Validation Accuracy: 0.9554, Loss: 0.0323 Epoch 
3 Batch 370/538 - Train Accuracy: 0.9656, Validation Accuracy: 0.9442, Loss: 0.0308 Epoch 3 Batch 375/538 - Train Accuracy: 0.9628, Validation Accuracy: 0.9524, Loss: 0.0289 Epoch 3 Batch 380/538 - Train Accuracy: 0.9504, Validation Accuracy: 0.9608, Loss: 0.0305 Epoch 3 Batch 385/538 - Train Accuracy: 0.9645, Validation Accuracy: 0.9641, Loss: 0.0303 Epoch 3 Batch 390/538 - Train Accuracy: 0.9539, Validation Accuracy: 0.9560, Loss: 0.0292 Epoch 3 Batch 395/538 - Train Accuracy: 0.9586, Validation Accuracy: 0.9721, Loss: 0.0380 Epoch 3 Batch 400/538 - Train Accuracy: 0.9751, Validation Accuracy: 0.9613, Loss: 0.0309 Epoch 3 Batch 405/538 - Train Accuracy: 0.9606, Validation Accuracy: 0.9444, Loss: 0.0304 Epoch 3 Batch 410/538 - Train Accuracy: 0.9613, Validation Accuracy: 0.9391, Loss: 0.0355 Epoch 3 Batch 415/538 - Train Accuracy: 0.9437, Validation Accuracy: 0.9583, Loss: 0.0346 Epoch 3 Batch 420/538 - Train Accuracy: 0.9604, Validation Accuracy: 0.9563, Loss: 0.0343 Epoch 3 Batch 425/538 - Train Accuracy: 0.9459, Validation Accuracy: 0.9588, Loss: 0.0423 Epoch 3 Batch 430/538 - Train Accuracy: 0.9572, Validation Accuracy: 0.9576, Loss: 0.0315 Epoch 3 Batch 435/538 - Train Accuracy: 0.9648, Validation Accuracy: 0.9609, Loss: 0.0308 Epoch 3 Batch 440/538 - Train Accuracy: 0.9613, Validation Accuracy: 0.9606, Loss: 0.0356 Epoch 3 Batch 445/538 - Train Accuracy: 0.9695, Validation Accuracy: 0.9279, Loss: 0.0268 Epoch 3 Batch 450/538 - Train Accuracy: 0.9418, Validation Accuracy: 0.9482, Loss: 0.0402 Epoch 3 Batch 455/538 - Train Accuracy: 0.9600, Validation Accuracy: 0.9599, Loss: 0.0313 Epoch 3 Batch 460/538 - Train Accuracy: 0.9604, Validation Accuracy: 0.9600, Loss: 0.0342 Epoch 3 Batch 465/538 - Train Accuracy: 0.9525, Validation Accuracy: 0.9553, Loss: 0.0300 Epoch 3 Batch 470/538 - Train Accuracy: 0.9591, Validation Accuracy: 0.9581, Loss: 0.0295 Epoch 3 Batch 475/538 - Train Accuracy: 0.9630, Validation Accuracy: 0.9597, Loss: 0.0291 Epoch 3 Batch 480/538 - 
Train Accuracy: 0.9686, Validation Accuracy: 0.9608, Loss: 0.0301 Epoch 3 Batch 485/538 - Train Accuracy: 0.9753, Validation Accuracy: 0.9643, Loss: 0.0320 Epoch 3 Batch 490/538 - Train Accuracy: 0.9585, Validation Accuracy: 0.9592, Loss: 0.0299 Epoch 3 Batch 495/538 - Train Accuracy: 0.9699, Validation Accuracy: 0.9499, Loss: 0.0290 Epoch 3 Batch 500/538 - Train Accuracy: 0.9766, Validation Accuracy: 0.9608, Loss: 0.0219 Epoch 3 Batch 505/538 - Train Accuracy: 0.9751, Validation Accuracy: 0.9572, Loss: 0.0232 Epoch 3 Batch 510/538 - Train Accuracy: 0.9784, Validation Accuracy: 0.9517, Loss: 0.0246 Epoch 3 Batch 515/538 - Train Accuracy: 0.9555, Validation Accuracy: 0.9508, Loss: 0.0336 Epoch 3 Batch 520/538 - Train Accuracy: 0.9586, Validation Accuracy: 0.9371, Loss: 0.0311 Epoch 3 Batch 525/538 - Train Accuracy: 0.9667, Validation Accuracy: 0.9510, Loss: 0.0335 Epoch 3 Batch 530/538 - Train Accuracy: 0.9541, Validation Accuracy: 0.9430, Loss: 0.0328 Epoch 3 Batch 535/538 - Train Accuracy: 0.9689, Validation Accuracy: 0.9592, Loss: 0.0308 Epoch 4 Batch 5/538 - Train Accuracy: 0.9639, Validation Accuracy: 0.9524, Loss: 0.0315 Epoch 4 Batch 10/538 - Train Accuracy: 0.9578, Validation Accuracy: 0.9499, Loss: 0.0307 Epoch 4 Batch 15/538 - Train Accuracy: 0.9697, Validation Accuracy: 0.9517, Loss: 0.0251 Epoch 4 Batch 20/538 - Train Accuracy: 0.9734, Validation Accuracy: 0.9533, Loss: 0.0288 Epoch 4 Batch 25/538 - Train Accuracy: 0.9572, Validation Accuracy: 0.9529, Loss: 0.0303 Epoch 4 Batch 30/538 - Train Accuracy: 0.9570, Validation Accuracy: 0.9565, Loss: 0.0315 Epoch 4 Batch 35/538 - Train Accuracy: 0.9668, Validation Accuracy: 0.9636, Loss: 0.0229 Epoch 4 Batch 40/538 - Train Accuracy: 0.9590, Validation Accuracy: 0.9641, Loss: 0.0225 Epoch 4 Batch 45/538 - Train Accuracy: 0.9721, Validation Accuracy: 0.9673, Loss: 0.0283 Epoch 4 Batch 50/538 - Train Accuracy: 0.9592, Validation Accuracy: 0.9641, Loss: 0.0254 Epoch 4 Batch 55/538 - Train Accuracy: 0.9670, 
Validation Accuracy: 0.9634, Loss: 0.0236 Epoch 4 Batch 60/538 - Train Accuracy: 0.9643, Validation Accuracy: 0.9593, Loss: 0.0259 Epoch 4 Batch 65/538 - Train Accuracy: 0.9662, Validation Accuracy: 0.9501, Loss: 0.0250 Epoch 4 Batch 70/538 - Train Accuracy: 0.9630, Validation Accuracy: 0.9561, Loss: 0.0264 Epoch 4 Batch 75/538 - Train Accuracy: 0.9697, Validation Accuracy: 0.9519, Loss: 0.0259 Epoch 4 Batch 80/538 - Train Accuracy: 0.9574, Validation Accuracy: 0.9533, Loss: 0.0285 Epoch 4 Batch 85/538 - Train Accuracy: 0.9712, Validation Accuracy: 0.9361, Loss: 0.0238 Epoch 4 Batch 90/538 - Train Accuracy: 0.9628, Validation Accuracy: 0.9577, Loss: 0.0321 Epoch 4 Batch 95/538 - Train Accuracy: 0.9512, Validation Accuracy: 0.9556, Loss: 0.0252 Epoch 4 Batch 100/538 - Train Accuracy: 0.9723, Validation Accuracy: 0.9563, Loss: 0.0223 Epoch 4 Batch 105/538 - Train Accuracy: 0.9760, Validation Accuracy: 0.9583, Loss: 0.0218 Epoch 4 Batch 110/538 - Train Accuracy: 0.9695, Validation Accuracy: 0.9537, Loss: 0.0263 Epoch 4 Batch 115/538 - Train Accuracy: 0.9535, Validation Accuracy: 0.9512, Loss: 0.0292 Epoch 4 Batch 120/538 - Train Accuracy: 0.9646, Validation Accuracy: 0.9515, Loss: 0.0203 Epoch 4 Batch 125/538 - Train Accuracy: 0.9691, Validation Accuracy: 0.9538, Loss: 0.0312 Epoch 4 Batch 130/538 - Train Accuracy: 0.9773, Validation Accuracy: 0.9643, Loss: 0.0235 Epoch 4 Batch 135/538 - Train Accuracy: 0.9663, Validation Accuracy: 0.9501, Loss: 0.0310 Epoch 4 Batch 140/538 - Train Accuracy: 0.9598, Validation Accuracy: 0.9581, Loss: 0.0362 Epoch 4 Batch 145/538 - Train Accuracy: 0.9522, Validation Accuracy: 0.9492, Loss: 0.0397 Epoch 4 Batch 150/538 - Train Accuracy: 0.9697, Validation Accuracy: 0.9510, Loss: 0.0261 Epoch 4 Batch 155/538 - Train Accuracy: 0.9583, Validation Accuracy: 0.9453, Loss: 0.0289 Epoch 4 Batch 160/538 - Train Accuracy: 0.9583, Validation Accuracy: 0.9529, Loss: 0.0264 Epoch 4 Batch 165/538 - Train Accuracy: 0.9617, Validation Accuracy: 
0.9688, Loss: 0.0224 Epoch 4 Batch 170/538 - Train Accuracy: 0.9518, Validation Accuracy: 0.9576, Loss: 0.0281 Epoch 4 Batch 175/538 - Train Accuracy: 0.9699, Validation Accuracy: 0.9466, Loss: 0.0211 Epoch 4 Batch 180/538 - Train Accuracy: 0.9624, Validation Accuracy: 0.9531, Loss: 0.0253 Epoch 4 Batch 185/538 - Train Accuracy: 0.9854, Validation Accuracy: 0.9625, Loss: 0.0197 Epoch 4 Batch 190/538 - Train Accuracy: 0.9608, Validation Accuracy: 0.9625, Loss: 0.0370 Epoch 4 Batch 195/538 - Train Accuracy: 0.9660, Validation Accuracy: 0.9620, Loss: 0.0289 Epoch 4 Batch 200/538 - Train Accuracy: 0.9723, Validation Accuracy: 0.9524, Loss: 0.0212 Epoch 4 Batch 205/538 - Train Accuracy: 0.9721, Validation Accuracy: 0.9609, Loss: 0.0217 Epoch 4 Batch 210/538 - Train Accuracy: 0.9654, Validation Accuracy: 0.9616, Loss: 0.0276 Epoch 4 Batch 215/538 - Train Accuracy: 0.9754, Validation Accuracy: 0.9668, Loss: 0.0228 Epoch 4 Batch 220/538 - Train Accuracy: 0.9483, Validation Accuracy: 0.9647, Loss: 0.0283 Epoch 4 Batch 225/538 - Train Accuracy: 0.9810, Validation Accuracy: 0.9570, Loss: 0.0229 Epoch 4 Batch 230/538 - Train Accuracy: 0.9705, Validation Accuracy: 0.9627, Loss: 0.0238 Epoch 4 Batch 235/538 - Train Accuracy: 0.9680, Validation Accuracy: 0.9656, Loss: 0.0211 Epoch 4 Batch 240/538 - Train Accuracy: 0.9650, Validation Accuracy: 0.9508, Loss: 0.0261 Epoch 4 Batch 245/538 - Train Accuracy: 0.9650, Validation Accuracy: 0.9537, Loss: 0.0343 Epoch 4 Batch 250/538 - Train Accuracy: 0.9787, Validation Accuracy: 0.9574, Loss: 0.0238 Epoch 4 Batch 255/538 - Train Accuracy: 0.9713, Validation Accuracy: 0.9593, Loss: 0.0216 Epoch 4 Batch 260/538 - Train Accuracy: 0.9542, Validation Accuracy: 0.9494, Loss: 0.0262 Epoch 4 Batch 265/538 - Train Accuracy: 0.9506, Validation Accuracy: 0.9609, Loss: 0.0312 Epoch 4 Batch 270/538 - Train Accuracy: 0.9742, Validation Accuracy: 0.9517, Loss: 0.0201 Epoch 4 Batch 275/538 - Train Accuracy: 0.9660, Validation Accuracy: 0.9563, Loss: 
0.0261 Epoch 4 Batch 280/538 - Train Accuracy: 0.9648, Validation Accuracy: 0.9576, Loss: 0.0198 Epoch 4 Batch 285/538 - Train Accuracy: 0.9678, Validation Accuracy: 0.9542, Loss: 0.0212 Epoch 4 Batch 290/538 - Train Accuracy: 0.9799, Validation Accuracy: 0.9618, Loss: 0.0192 Epoch 4 Batch 295/538 - Train Accuracy: 0.9670, Validation Accuracy: 0.9656, Loss: 0.0220 Epoch 4 Batch 300/538 - Train Accuracy: 0.9691, Validation Accuracy: 0.9648, Loss: 0.0240 Epoch 4 Batch 305/538 - Train Accuracy: 0.9719, Validation Accuracy: 0.9611, Loss: 0.0197 Epoch 4 Batch 310/538 - Train Accuracy: 0.9725, Validation Accuracy: 0.9656, Loss: 0.0285 Epoch 4 Batch 315/538 - Train Accuracy: 0.9678, Validation Accuracy: 0.9663, Loss: 0.0233 Epoch 4 Batch 320/538 - Train Accuracy: 0.9678, Validation Accuracy: 0.9688, Loss: 0.0231 Epoch 4 Batch 325/538 - Train Accuracy: 0.9667, Validation Accuracy: 0.9586, Loss: 0.0247 Epoch 4 Batch 330/538 - Train Accuracy: 0.9779, Validation Accuracy: 0.9654, Loss: 0.0239 Epoch 4 Batch 335/538 - Train Accuracy: 0.9661, Validation Accuracy: 0.9696, Loss: 0.0233 Epoch 4 Batch 340/538 - Train Accuracy: 0.9643, Validation Accuracy: 0.9648, Loss: 0.0224 Epoch 4 Batch 345/538 - Train Accuracy: 0.9745, Validation Accuracy: 0.9549, Loss: 0.0221 Epoch 4 Batch 350/538 - Train Accuracy: 0.9712, Validation Accuracy: 0.9666, Loss: 0.0282 Epoch 4 Batch 355/538 - Train Accuracy: 0.9787, Validation Accuracy: 0.9608, Loss: 0.0187 Epoch 4 Batch 360/538 - Train Accuracy: 0.9689, Validation Accuracy: 0.9686, Loss: 0.0207 Epoch 4 Batch 365/538 - Train Accuracy: 0.9598, Validation Accuracy: 0.9682, Loss: 0.0238 Epoch 4 Batch 370/538 - Train Accuracy: 0.9633, Validation Accuracy: 0.9489, Loss: 0.0214 Epoch 4 Batch 375/538 - Train Accuracy: 0.9691, Validation Accuracy: 0.9537, Loss: 0.0201 Epoch 4 Batch 380/538 - Train Accuracy: 0.9639, Validation Accuracy: 0.9616, Loss: 0.0183 Epoch 4 Batch 385/538 - Train Accuracy: 0.9693, Validation Accuracy: 0.9599, Loss: 0.0206 Epoch 4 
Batch 390/538 - Train Accuracy: 0.9652, Validation Accuracy: 0.9606, Loss: 0.0221 Epoch 4 Batch 395/538 - Train Accuracy: 0.9750, Validation Accuracy: 0.9718, Loss: 0.0250 Epoch 4 Batch 400/538 - Train Accuracy: 0.9816, Validation Accuracy: 0.9696, Loss: 0.0205 Epoch 4 Batch 405/538 - Train Accuracy: 0.9676, Validation Accuracy: 0.9689, Loss: 0.0216 Epoch 4 Batch 410/538 - Train Accuracy: 0.9754, Validation Accuracy: 0.9586, Loss: 0.0212 Epoch 4 Batch 415/538 - Train Accuracy: 0.9568, Validation Accuracy: 0.9631, Loss: 0.0228 Epoch 4 Batch 420/538 - Train Accuracy: 0.9715, Validation Accuracy: 0.9490, Loss: 0.0273 Epoch 4 Batch 425/538 - Train Accuracy: 0.9501, Validation Accuracy: 0.9632, Loss: 0.0319 Epoch 4 Batch 430/538 - Train Accuracy: 0.9672, Validation Accuracy: 0.9652, Loss: 0.0218 Epoch 4 Batch 435/538 - Train Accuracy: 0.9672, Validation Accuracy: 0.9563, Loss: 0.0246 Epoch 4 Batch 440/538 - Train Accuracy: 0.9736, Validation Accuracy: 0.9622, Loss: 0.0258 Epoch 4 Batch 445/538 - Train Accuracy: 0.9711, Validation Accuracy: 0.9377, Loss: 0.0180 Epoch 4 Batch 450/538 - Train Accuracy: 0.9622, Validation Accuracy: 0.9597, Loss: 0.0299 Epoch 4 Batch 455/538 - Train Accuracy: 0.9647, Validation Accuracy: 0.9558, Loss: 0.0242 Epoch 4 Batch 460/538 - Train Accuracy: 0.9738, Validation Accuracy: 0.9691, Loss: 0.0240 Epoch 4 Batch 465/538 - Train Accuracy: 0.9625, Validation Accuracy: 0.9695, Loss: 0.0215 Epoch 4 Batch 470/538 - Train Accuracy: 0.9782, Validation Accuracy: 0.9581, Loss: 0.0207 Epoch 4 Batch 475/538 - Train Accuracy: 0.9695, Validation Accuracy: 0.9627, Loss: 0.0199 Epoch 4 Batch 480/538 - Train Accuracy: 0.9754, Validation Accuracy: 0.9675, Loss: 0.0205 Epoch 4 Batch 485/538 - Train Accuracy: 0.9795, Validation Accuracy: 0.9608, Loss: 0.0235 Epoch 4 Batch 490/538 - Train Accuracy: 0.9663, Validation Accuracy: 0.9592, Loss: 0.0232 Epoch 4 Batch 495/538 - Train Accuracy: 0.9748, Validation Accuracy: 0.9650, Loss: 0.0204 Epoch 4 Batch 500/538 - 
Train Accuracy: 0.9838, Validation Accuracy: 0.9593, Loss: 0.0151 Epoch 4 Batch 505/538 - Train Accuracy: 0.9781, Validation Accuracy: 0.9553, Loss: 0.0163 Epoch 4 Batch 510/538 - Train Accuracy: 0.9894, Validation Accuracy: 0.9652, Loss: 0.0160 Epoch 4 Batch 515/538 - Train Accuracy: 0.9656, Validation Accuracy: 0.9510, Loss: 0.0244 Epoch 4 Batch 520/538 - Train Accuracy: 0.9672, Validation Accuracy: 0.9577, Loss: 0.0226 Epoch 4 Batch 525/538 - Train Accuracy: 0.9684, Validation Accuracy: 0.9558, Loss: 0.0236 Epoch 4 Batch 530/538 - Train Accuracy: 0.9660, Validation Accuracy: 0.9663, Loss: 0.0232 Epoch 4 Batch 535/538 - Train Accuracy: 0.9699, Validation Accuracy: 0.9620, Loss: 0.0232 Model Trained and Saved ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id.
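The three steps above can be sketched with a toy vocabulary before filling in the function (the vocabulary and its ids below are made up for illustration; real runs use the `vocab_to_int` dictionaries returned by `helper.load_preprocess()`):

```python
# A minimal sketch of the lookup steps above, using a made-up toy vocabulary.
toy_vocab_to_int = {'<UNK>': 2, 'he': 10, 'saw': 11, 'a': 12, 'truck': 13, '.': 14}

def toy_sentence_to_seq(sentence, vocab_to_int):
    # Lowercase, split on whitespace, and fall back to the <UNK> id
    # for any word missing from the vocabulary.
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split()]

print(toy_sentence_to_seq('He saw a YELLOW truck .', toy_vocab_to_int))
# [10, 11, 12, 2, 13, 14]  -- 'yellow' is out of vocabulary, so it maps to <UNK>
```

`dict.get` with a default keeps the lookup to a single expression, which is why the implementation below can be a one-line list comprehension.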
###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function traducao = [vocab_to_int.get(palavra, vocab_to_int['<UNK>']) for palavra in sentence.lower().split()] return traducao """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .' """ DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [64, 120, 45, 143, 19, 70, 139] English Words: ['he', 'saw', 
'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [189, 11, 89, 317, 182, 68, 217, 1] French Words: il vu un vieux camion jaune . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , 
and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`.
This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function id_source = [] for line in source_text.split("\n"): id_line = [source_vocab_to_int[word] for word in line.split(" ") if word != ''] id_source.append(id_line) id_target = [] for line in target_text.split("\n"): id_line = [target_vocab_to_int[word] for word in line.split(" ") if word != ''] # Append the <EOS> id from the *target* vocabulary id_line.append(target_vocab_to_int["<EOS>"]) id_target.append(id_line) return id_source, id_target """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
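As a reminder of what the saved preprocessing contains, the effect of `text_to_ids` can be illustrated on a two-line toy corpus (all vocabularies and ids below are made up for illustration):

```python
# Toy illustration (hypothetical vocabularies and ids) of what the
# text_to_ids preprocessing produces: every line becomes a list of word ids,
# and each target line additionally ends with the <EOS> id.
source_vocab = {'hi': 4, 'there': 5}
target_vocab = {'<EOS>': 1, 'salut': 6, 'toi': 7}

source_ids = [[source_vocab[w] for w in line.split()]
              for line in 'hi there\nhi'.split('\n')]
target_ids = [[target_vocab[w] for w in line.split()] + [target_vocab['<EOS>']]
              for line in 'salut toi\nsalut'.split('\n')]

print(source_ids)  # [[4, 5], [4]]
print(target_ids)  # [[6, 7, 1], [6, 1]]
```

Note that only the target sequences carry `<EOS>`; the decoder needs it to learn where translations stop, while the encoder does not.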
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.2.1 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function inputs = tf.placeholder(tf.int32, shape=[None, None], name='input') targets = tf.placeholder(tf.int32, shape=[None, None], name='targets') learning_rate = tf.placeholder(tf.float32, name='learning_rate') keep_prob = tf.placeholder(tf.float32, name='keep_prob') target_seq = tf.placeholder(tf.int32, shape=[None], name='target_sequence_length') # Name the tensor 'max_target_len' as the specification above requires max_target_len = tf.reduce_max(target_seq, name='max_target_len') source_seq = tf.placeholder(tf.int32, shape=[None], name='source_sequence_length') return inputs, targets, learning_rate, keep_prob, target_seq, max_target_len, source_seq """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output ERROR:tensorflow:================================== Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>): <tf.Operation 'assert_rank_2/Assert/Assert' type=Assert> If you want to mark it as used call its "mark_used()" method.
It was originally created here: ['File "/Users/bryantravissmith/anaconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main\n "__main__", mod_spec)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/runpy.py", line 85, in _run_code\n exec(code, run_globals)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/ipykernel/__main__.py", line 3, in <module>\n app.launch_new_instance()', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/traitlets/config/application.py", line 658, in launch_instance\n app.start()', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/ipykernel/kernelapp.py", line 477, in start\n ioloop.IOLoop.instance().start()', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/zmq/eventloop/ioloop.py", line 177, in start\n super(ZMQIOLoop, self).start()', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tornado/ioloop.py", line 888, in start\n handler_func(fd_obj, events)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tornado/stack_context.py", line 277, in null_wrapper\n return fn(*args, **kwargs)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events\n self._handle_recv()', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv\n self._run_callback(callback, msg)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback\n callback(*args, **kwargs)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tornado/stack_context.py", line 277, in null_wrapper\n return fn(*args, **kwargs)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 283, in dispatcher\n return self.dispatch_shell(stream, msg)', 'File 
"/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 235, in dispatch_shell\n handler(stream, idents, msg)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 399, in execute_request\n user_expressions, allow_stdin)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/ipykernel/ipkernel.py", line 196, in do_execute\n res = shell.run_cell(code, store_history=store_history, silent=silent)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/ipykernel/zmqshell.py", line 533, in run_cell\n return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2717, in run_cell\n interactivity=interactivity, compiler=compiler, result=result)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2827, in run_ast_nodes\n if self.run_code(code, result):', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code\n exec(code_obj, self.user_global_ns, self.user_ns)', 'File "<ipython-input-36-ee4b8ce6653d>", line 22, in <module>\n tests.test_model_inputs(model_inputs)', 'File "/Users/bryantravissmith/Desktop/Udacity/AIDL/language-translation/problem_unittests.py", line 106, in test_model_inputs\n assert tf.assert_rank(lr, 0, message=\'Learning Rate has wrong rank\')', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/check_ops.py", line 617, in assert_rank\n dynamic_condition, data, summarize)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/check_ops.py", line 571, in _assert_rank_condition\n return control_flow_ops.Assert(condition, data, summarize=summarize)', 'File 
"/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 170, in wrapped\n return _add_should_use_warning(fn(*args, **kwargs))', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 139, in _add_should_use_warning\n wrapped = TFShouldUseWarningWrapper(x)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 96, in __init__\n stack = [s.strip() for s in traceback.format_stack()]'] ================================== ERROR:tensorflow:================================== Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>): <tf.Operation 'assert_rank_3/Assert/Assert' type=Assert> If you want to mark it as used call its "mark_used()" method. It was originally created here: ['File "/Users/bryantravissmith/anaconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main\n "__main__", mod_spec)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/runpy.py", line 85, in _run_code\n exec(code, run_globals)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/ipykernel/__main__.py", line 3, in <module>\n app.launch_new_instance()', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/traitlets/config/application.py", line 658, in launch_instance\n app.start()', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/ipykernel/kernelapp.py", line 477, in start\n ioloop.IOLoop.instance().start()', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/zmq/eventloop/ioloop.py", line 177, in start\n super(ZMQIOLoop, self).start()', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tornado/ioloop.py", line 888, in start\n handler_func(fd_obj, events)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tornado/stack_context.py", line 277, in null_wrapper\n return fn(*args, 
**kwargs)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events\n self._handle_recv()', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv\n self._run_callback(callback, msg)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback\n callback(*args, **kwargs)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tornado/stack_context.py", line 277, in null_wrapper\n return fn(*args, **kwargs)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 283, in dispatcher\n return self.dispatch_shell(stream, msg)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 235, in dispatch_shell\n handler(stream, idents, msg)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 399, in execute_request\n user_expressions, allow_stdin)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/ipykernel/ipkernel.py", line 196, in do_execute\n res = shell.run_cell(code, store_history=store_history, silent=silent)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/ipykernel/zmqshell.py", line 533, in run_cell\n return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2717, in run_cell\n interactivity=interactivity, compiler=compiler, result=result)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2827, in run_ast_nodes\n if self.run_code(code, result):', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code\n exec(code_obj, 
self.user_global_ns, self.user_ns)', 'File "<ipython-input-36-ee4b8ce6653d>", line 22, in <module>\n tests.test_model_inputs(model_inputs)', 'File "/Users/bryantravissmith/Desktop/Udacity/AIDL/language-translation/problem_unittests.py", line 107, in test_model_inputs\n assert tf.assert_rank(keep_prob, 0, message=\'Keep Probability has wrong rank\')', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/check_ops.py", line 617, in assert_rank\n dynamic_condition, data, summarize)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/check_ops.py", line 571, in _assert_rank_condition\n return control_flow_ops.Assert(condition, data, summarize=summarize)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 170, in wrapped\n return _add_should_use_warning(fn(*args, **kwargs))', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 139, in _add_should_use_warning\n wrapped = TFShouldUseWarningWrapper(x)', 'File "/Users/bryantravissmith/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 96, in __init__\n stack = [s.strip() for s in traceback.format_stack()]'] ================================== ###Markdown Process Decoder InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concat the GO ID to the begining of each batch. 
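Before looking at the TensorFlow implementation, here is a minimal, framework-free sketch of the same transformation (the token ids below are invented purely for illustration: 1 = `<GO>`, 3 = `<EOS>`, 0 = `<PAD>`):

```python
# Pure-Python illustration of the <GO>-prepend / last-id-drop transformation.
# The TF version below does the same with tf.strided_slice, tf.fill, and tf.concat.

def process_decoder_input_py(target_batch, go_id):
    # Drop the last id of every row, then prepend <GO> to each row.
    return [[go_id] + row[:-1] for row in target_batch]

batch = [[4, 5, 6, 3],   # 3 = <EOS>
         [7, 8, 3, 0]]   # 0 = <PAD>
print(process_decoder_input_py(batch, go_id=1))
# → [[1, 4, 5, 6], [1, 7, 8, 3]]
```

During training the decoder is fed these shifted sequences (teacher forcing), so each input step predicts the next target token.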
###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ beginning = tf.fill([batch_size, 1], target_vocab_to_int['<GO>']) batch = tf.strided_slice( target_data, begin = [0,0], end = [batch_size, -1] ) # TODO: Implement Function return tf.concat([beginning, batch], 1) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tensor("Placeholder:0", shape=(2, 3), dtype=int32) 2 Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source
data :return: tuple (RNN output, RNN state) """ # TODO: Implement Function embedding = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) stacked_cells = tf.contrib.rnn.MultiRNNCell([ tf.contrib.rnn.DropoutWrapper( tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1)), keep_prob ) for _ in range(num_layers) ]) output = tf.nn.dynamic_rnn(stacked_cells, embedding, sequence_length=source_sequence_length, dtype=tf.float32) return output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length, time_major=False) decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer) final_outputs, final_state, 
final_sequence_lengths = tf.contrib.seq2seq.dynamic_decode( decoder, impute_finished=True, maximum_iterations=max_summary_length ) return final_outputs """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS ID :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size]) helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer) infer_outputs, final_state, final_sequence_lengths = tf.contrib.seq2seq.dynamic_decode( decoder, impute_finished=True,
maximum_iterations=max_target_sequence_length ) return infer_outputs """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. ###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # 1.
Decoder Embedding dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # 2. Construct the decoder cell def make_dec_cell(rnn_size): dec_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return dec_cell #dec_cell = tf.contrib.rnn.MultiRNNCell([make_dec_cell(rnn_size) for _ in range(num_layers)]) #dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob) dec_cell = tf.contrib.rnn.MultiRNNCell([ tf.contrib.rnn.DropoutWrapper( tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1)), keep_prob ) for _ in range(num_layers) ]) # 3. Dense layer to translate the decoder's output at each time step into a choice from the target vocabulary output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) # 4. Set up a training decoder and an inference decoder # Training Decoder with tf.variable_scope("decode"): training_decoder_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) # 5. 
Inference Decoder # Reuses the same parameters trained by the training process with tf.variable_scope("decode", reuse=True): start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] inference_decoder_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) dec_cell = tf.contrib.rnn.MultiRNNCell([ tf.contrib.rnn.DropoutWrapper( tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1)), keep_prob ) for _ in range(num_layers) ]) output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) # Train and infer must share one scope so reuse=True finds the trained variables with tf.variable_scope("decode") as
decoding_scope: train_layer = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) with tf.variable_scope("decode", reuse=True): start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] infer_layer = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return train_layer, infer_layer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. 
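As a rough, framework-free sketch of that wiring order (the stand-in functions below are invented purely for illustration and do not model the real RNN computation), the data flows encode → shift targets with `<GO>` → decode:

```python
# Toy sketch of the seq2seq_model wiring, not of the actual TF graph.
GO = 1  # assumed <GO> id, for illustration only

def toy_encode(source_ids):
    # Stand-in for encoding_layer: collapse the source into a "state".
    return sum(source_ids)

def toy_process_decoder_input(target_ids):
    # Stand-in for process_decoder_input: drop the last id, prepend <GO>.
    return [GO] + target_ids[:-1]

def toy_decode(state, dec_input):
    # Stand-in for decoding_layer: one "logit" per decoder input step.
    return [state + tok for tok in dec_input]

state = toy_encode([5, 9, 2])                  # 16
dec_in = toy_process_decoder_input([7, 8, 3])  # [1, 7, 8]
print(toy_decode(state, dec_in))               # → [17, 23, 24]
```

The real `seq2seq_model` below follows the same three-step order, passing the encoder state and the shifted decoder inputs into the train/infer decoders.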
###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param max_target_sentence_length: Maximum length of target sequences :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) # Prepare the target sequences we'll feed to the decoder in training mode dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) # Pass encoder state and decoder inputs to the decoders training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tensor("Placeholder_7:0", shape=(64, 22), dtype=int32) 64 Tests Passed ###Markdown Neural Network Training HyperparametersTune
the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 3 # Batch Size batch_size = 100 # RNN Size rnn_size = 1024 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 200 decoding_embedding_size = 200 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.2 display_step = 10 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = 
tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output Tensor("targets:0", shape=(?, ?), dtype=int32) 100 ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the 
preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc =
get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 10/1378 - Train Accuracy: 0.3160, Validation Accuracy: 0.3891, Loss: 3.2533 Epoch 0 Batch 20/1378 - Train Accuracy: 0.3416, Validation Accuracy: 0.4341, Loss: 2.9764 Epoch 0 Batch 30/1378 - Train Accuracy: 0.3745, Validation Accuracy: 0.4482, Loss: 2.6049 Epoch 0 Batch 40/1378 - Train Accuracy: 0.4090, Validation Accuracy: 0.4473, Loss: 2.4607 Epoch 0 Batch 50/1378 - Train Accuracy: 0.4175, Validation Accuracy: 0.4727, Loss: 2.2240 Epoch 0 Batch 60/1378 - Train Accuracy: 0.4240, Validation Accuracy: 0.4755, Loss: 2.0793 Epoch 0 Batch 70/1378 - Train Accuracy: 0.4330, Validation Accuracy: 0.5000, Loss: 2.1260 Epoch 0 Batch 80/1378 - Train Accuracy: 0.3679, Validation Accuracy: 0.4723, Loss: 1.9653 Epoch 0 Batch 90/1378 - Train Accuracy: 0.4053, Validation Accuracy: 0.4955, Loss: 1.7636 Epoch 0 Batch 100/1378 - Train Accuracy: 0.4375, Validation Accuracy: 0.4936, Loss: 1.6402 Epoch 0 Batch 110/1378 - Train Accuracy: 0.4530, Validation Accuracy: 0.4936, Loss: 1.5541 Epoch 0 Batch 120/1378 - Train Accuracy: 0.4125, Validation Accuracy: 0.4945, Loss: 1.4933 Epoch 0 Batch 130/1378 - Train Accuracy: 0.3865, Validation Accuracy: 0.4618, Loss: 1.4946 Epoch 0 Batch 140/1378 - Train Accuracy: 0.4340, Validation Accuracy: 0.4868, Loss: 1.4799 Epoch 0 Batch 150/1378 - Train Accuracy: 0.3935, Validation Accuracy: 0.4986, Loss: 1.4883 Epoch 0 Batch 160/1378 - Train Accuracy: 0.5162, Validation Accuracy: 0.5118, Loss: 1.2875 Epoch 0 Batch 170/1378 - Train Accuracy: 0.4540, Validation Accuracy: 0.5150, Loss: 1.3255 Epoch 0 Batch 180/1378 - Train Accuracy: 0.4158, Validation Accuracy: 0.5114, Loss: 1.3937 Epoch 0 
Batch 190/1378 - Train Accuracy: 0.4300, Validation Accuracy: 0.5064, Loss: 1.3424 Epoch 0 Batch 200/1378 - Train Accuracy: 0.4870, Validation Accuracy: 0.5350, Loss: 1.2016 Epoch 0 Batch 210/1378 - Train Accuracy: 0.4760, Validation Accuracy: 0.5459, Loss: 1.1959 Epoch 0 Batch 220/1378 - Train Accuracy: 0.4390, Validation Accuracy: 0.5268, Loss: 1.1473 Epoch 0 Batch 230/1378 - Train Accuracy: 0.4679, Validation Accuracy: 0.5600, Loss: 1.2071 Epoch 0 Batch 240/1378 - Train Accuracy: 0.4790, Validation Accuracy: 0.5250, Loss: 1.0994 Epoch 0 Batch 250/1378 - Train Accuracy: 0.5010, Validation Accuracy: 0.5477, Loss: 1.0533 Epoch 0 Batch 260/1378 - Train Accuracy: 0.4935, Validation Accuracy: 0.5568, Loss: 1.0647 Epoch 0 Batch 270/1378 - Train Accuracy: 0.5176, Validation Accuracy: 0.5573, Loss: 0.9866 Epoch 0 Batch 280/1378 - Train Accuracy: 0.5016, Validation Accuracy: 0.5482, Loss: 1.0796 Epoch 0 Batch 290/1378 - Train Accuracy: 0.4835, Validation Accuracy: 0.5532, Loss: 1.0309 Epoch 0 Batch 300/1378 - Train Accuracy: 0.5329, Validation Accuracy: 0.5682, Loss: 1.0241 Epoch 0 Batch 310/1378 - Train Accuracy: 0.5260, Validation Accuracy: 0.5577, Loss: 0.9398 Epoch 0 Batch 320/1378 - Train Accuracy: 0.4616, Validation Accuracy: 0.5591, Loss: 1.0040 Epoch 0 Batch 330/1378 - Train Accuracy: 0.5035, Validation Accuracy: 0.5518, Loss: 0.9263 Epoch 0 Batch 340/1378 - Train Accuracy: 0.5330, Validation Accuracy: 0.5659, Loss: 0.8866 Epoch 0 Batch 350/1378 - Train Accuracy: 0.5357, Validation Accuracy: 0.5759, Loss: 0.8874 Epoch 0 Batch 360/1378 - Train Accuracy: 0.5095, Validation Accuracy: 0.5705, Loss: 0.9517 Epoch 0 Batch 370/1378 - Train Accuracy: 0.5705, Validation Accuracy: 0.5600, Loss: 0.8404 Epoch 0 Batch 380/1378 - Train Accuracy: 0.5420, Validation Accuracy: 0.5755, Loss: 0.8915 Epoch 0 Batch 390/1378 - Train Accuracy: 0.5314, Validation Accuracy: 0.5695, Loss: 0.7972 Epoch 0 Batch 400/1378 - Train Accuracy: 0.5675, Validation Accuracy: 0.5909, Loss: 0.8916 Epoch 
0 Batch 410/1378 - Train Accuracy: 0.5975, Validation Accuracy: 0.5805, Loss: 0.8074 Epoch 0 Batch 420/1378 - Train Accuracy: 0.5665, Validation Accuracy: 0.5986, Loss: 0.8872 Epoch 0 Batch 430/1378 - Train Accuracy: 0.5762, Validation Accuracy: 0.5832, Loss: 0.7857 Epoch 0 Batch 440/1378 - Train Accuracy: 0.5535, Validation Accuracy: 0.5827, Loss: 0.7894 Epoch 0 Batch 450/1378 - Train Accuracy: 0.5535, Validation Accuracy: 0.5932, Loss: 0.7978 Epoch 0 Batch 460/1378 - Train Accuracy: 0.5484, Validation Accuracy: 0.6132, Loss: 0.8359 Epoch 0 Batch 470/1378 - Train Accuracy: 0.5981, Validation Accuracy: 0.6109, Loss: 0.6801 Epoch 0 Batch 480/1378 - Train Accuracy: 0.5580, Validation Accuracy: 0.6145, Loss: 0.7657 Epoch 0 Batch 490/1378 - Train Accuracy: 0.5581, Validation Accuracy: 0.6027, Loss: 0.7270 Epoch 0 Batch 500/1378 - Train Accuracy: 0.5680, Validation Accuracy: 0.6032, Loss: 0.7414 Epoch 0 Batch 510/1378 - Train Accuracy: 0.5768, Validation Accuracy: 0.5836, Loss: 0.7934 Epoch 0 Batch 520/1378 - Train Accuracy: 0.6221, Validation Accuracy: 0.6000, Loss: 0.7281 Epoch 0 Batch 530/1378 - Train Accuracy: 0.5575, Validation Accuracy: 0.5773, Loss: 0.7370 Epoch 0 Batch 540/1378 - Train Accuracy: 0.5750, Validation Accuracy: 0.5995, Loss: 0.6914 Epoch 0 Batch 550/1378 - Train Accuracy: 0.6275, Validation Accuracy: 0.6200, Loss: 0.6814 Epoch 0 Batch 560/1378 - Train Accuracy: 0.5515, Validation Accuracy: 0.6150, Loss: 0.7199 Epoch 0 Batch 570/1378 - Train Accuracy: 0.5432, Validation Accuracy: 0.6155, Loss: 0.7324 Epoch 0 Batch 580/1378 - Train Accuracy: 0.6019, Validation Accuracy: 0.5964, Loss: 0.6678 Epoch 0 Batch 590/1378 - Train Accuracy: 0.5915, Validation Accuracy: 0.6018, Loss: 0.7226 Epoch 0 Batch 600/1378 - Train Accuracy: 0.5455, Validation Accuracy: 0.6191, Loss: 0.7546 Epoch 0 Batch 610/1378 - Train Accuracy: 0.5632, Validation Accuracy: 0.6150, Loss: 0.7126 Epoch 0 Batch 620/1378 - Train Accuracy: 0.5615, Validation Accuracy: 0.6227, Loss: 0.6860 
Epoch 0 Batch 630/1378 - Train Accuracy: 0.5395, Validation Accuracy: 0.5918, Loss: 0.7247 Epoch 0 Batch 640/1378 - Train Accuracy: 0.5790, Validation Accuracy: 0.6232, Loss: 0.6664 Epoch 0 Batch 650/1378 - Train Accuracy: 0.5800, Validation Accuracy: 0.6177, Loss: 0.7034 Epoch 0 Batch 660/1378 - Train Accuracy: 0.5810, Validation Accuracy: 0.6205, Loss: 0.6978 Epoch 0 Batch 670/1378 - Train Accuracy: 0.5890, Validation Accuracy: 0.6077, Loss: 0.6760 Epoch 0 Batch 680/1378 - Train Accuracy: 0.5700, Validation Accuracy: 0.6273, Loss: 0.6902 Epoch 0 Batch 690/1378 - Train Accuracy: 0.5830, Validation Accuracy: 0.5964, Loss: 0.6787 Epoch 0 Batch 700/1378 - Train Accuracy: 0.5895, Validation Accuracy: 0.6136, Loss: 0.6691 Epoch 0 Batch 710/1378 - Train Accuracy: 0.6010, Validation Accuracy: 0.6268, Loss: 0.6718 Epoch 0 Batch 720/1378 - Train Accuracy: 0.6219, Validation Accuracy: 0.6109, Loss: 0.6036 Epoch 0 Batch 730/1378 - Train Accuracy: 0.5768, Validation Accuracy: 0.6150, Loss: 0.6875 Epoch 0 Batch 740/1378 - Train Accuracy: 0.5835, Validation Accuracy: 0.6259, Loss: 0.6588 Epoch 0 Batch 750/1378 - Train Accuracy: 0.6280, Validation Accuracy: 0.6214, Loss: 0.6137 Epoch 0 Batch 760/1378 - Train Accuracy: 0.5662, Validation Accuracy: 0.6105, Loss: 0.6391 Epoch 0 Batch 770/1378 - Train Accuracy: 0.5745, Validation Accuracy: 0.6277, Loss: 0.6766 Epoch 0 Batch 780/1378 - Train Accuracy: 0.5725, Validation Accuracy: 0.6082, Loss: 0.6465 Epoch 0 Batch 790/1378 - Train Accuracy: 0.6333, Validation Accuracy: 0.6323, Loss: 0.6045 Epoch 0 Batch 800/1378 - Train Accuracy: 0.6205, Validation Accuracy: 0.6345, Loss: 0.6384 Epoch 0 Batch 810/1378 - Train Accuracy: 0.5680, Validation Accuracy: 0.6245, Loss: 0.6583 Epoch 0 Batch 820/1378 - Train Accuracy: 0.6038, Validation Accuracy: 0.6205, Loss: 0.6107 Epoch 0 Batch 830/1378 - Train Accuracy: 0.6300, Validation Accuracy: 0.6205, Loss: 0.5874 Epoch 0 Batch 840/1378 - Train Accuracy: 0.5950, Validation Accuracy: 0.6105, Loss: 
0.6364 Epoch 0 Batch 850/1378 - Train Accuracy: 0.5890, Validation Accuracy: 0.6177, Loss: 0.6125 Epoch 0 Batch 860/1378 - Train Accuracy: 0.6021, Validation Accuracy: 0.6177, Loss: 0.6530 Epoch 0 Batch 870/1378 - Train Accuracy: 0.5832, Validation Accuracy: 0.6332, Loss: 0.6386 Epoch 0 Batch 880/1378 - Train Accuracy: 0.6455, Validation Accuracy: 0.6286, Loss: 0.6399 ###Markdown Save Parameters
Save the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.
- Convert the sentence to lowercase
- Convert words into ids using `vocab_to_int`
- Convert words not in the vocabulary to the `<UNK>` word id.
###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function return [vocab_to_int[word] if word in vocab_to_int else vocab_to_int['<UNK>'] for word in sentence.lower().split(" ")] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown Translate
This will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [58, 24, 126, 146, 184, 166, 137] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [86, 244, 14, 181, 194, 249, 148, 1] French Words: il a vu une voiture rouge . <EOS> ###Markdown Language Translation
In this project, you're going to take a peek into the realm of neural network machine translation. You'll be training a sequence-to-sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data
Since translating the whole English language to French would take a long time to train, we have provided you with a small portion of the English corpus.
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of each sentence from `target_text`. This will help the neural network predict when the sentence should end. You can get the `<EOS>` word id by doing:
```python
target_vocab_to_int['<EOS>']
```
You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function source_id_text = [[source_vocab_to_int[word] for word in n.split()] for n in source_text.split("\n")] target_id_text = [[target_vocab_to_int[word] for word in n.split()]+[target_vocab_to_int['<EOS>']] for n in target_text.split("\n")] return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0. You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found.
Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.0.1 Default GPU Device: /gpu:0 ###Markdown Build the Neural Network
You'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions:
- `model_inputs`
- `process_decoding_input`
- `encoding_layer`
- `decoding_layer_train`
- `decoding_layer_infer`
- `decoding_layer`
- `seq2seq_model`
Input
Implement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
- Targets placeholder with rank 2.
- Learning rate placeholder with rank 0.
- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ # TODO: Implement Function inputs = tf.placeholder(tf.int32, shape=[None, None], name="input") targets = tf.placeholder(tf.int32, shape=[None, None], name="targets") learning_rate = tf.placeholder(tf.float32, name="learning_rate") keep_probability = tf.placeholder(tf.float32, name="keep_prob") return inputs, targets, learning_rate, keep_probability """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoding Input
Implement `process_decoding_input` using TensorFlow to remove the last word id from each batch in `target_data` and concat the GO ID to the beginning of each batch.
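Before implementing this with `tf.strided_slice` and `tf.concat`, it can help to see the transformation on plain Python lists. This is an illustrative sketch only (the id values, including `go_id=3`, are made up); the real function operates on TensorFlow tensors:

```python
def process_decoding_input_py(target_batch, go_id):
    """Drop the last word id of each sequence and prepend the <GO> id."""
    return [[go_id] + seq[:-1] for seq in target_batch]

# Toy batch of two target sequences, each ending in <EOS> (id 1).
batch = [[10, 11, 12, 1],
         [20, 21, 22, 1]]
print(process_decoding_input_py(batch, go_id=3))
# [[3, 10, 11, 12], [3, 20, 21, 22]]
```

The decoder is trained to predict each next word, so its input is the target sequence shifted right by one position, starting from `<GO>`.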
###Code def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) decoding_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return decoding_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) ###Output Tests Passed ###Markdown Encoding
Implement `encoding_layer()` to create an Encoder RNN layer using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn). ###Code def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # TODO: Implement Function basic_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) drop_cell = tf.contrib.rnn.DropoutWrapper(basic_cell, output_keep_prob=keep_prob) rnn_cell = tf.contrib.rnn.MultiRNNCell([drop_cell] * num_layers) output, encoding_state = tf.nn.dynamic_rnn(rnn_cell, rnn_inputs, dtype=tf.float32) return encoding_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - Training
Create training logits using [`tf.contrib.seq2seq.simple_decoder_fn_train()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_train) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder).
Apply the `output_fn` to the [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder) outputs. ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ # TODO: Implement Function drop_out = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob) decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) outputs, final_state, final_context_state = tf.contrib.seq2seq.dynamic_rnn_decoder(drop_out, decoder_fn, inputs=dec_embed_input, sequence_length=sequence_length, scope=decoding_scope) train_logits = output_fn(outputs) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - Inference
Create inference logits using [`tf.contrib.seq2seq.simple_decoder_fn_inference()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_inference) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder).
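Conceptually, inference decoding is a greedy loop: each step's predicted word id is fed back in as the next input, until `<EOS>` or the maximum length is reached. A minimal plain-Python sketch of that loop follows; `step_fn`, the id table, and the id values are hypothetical stand-ins for the embedding lookup, RNN cell, output projection, and argmax that the TensorFlow decoder performs:

```python
def greedy_decode(step_fn, start_id, end_id, max_len):
    """Greedy inference loop: feed each prediction back as the next input."""
    ids, state, prev = [], None, start_id
    for _ in range(max_len):
        prev, state = step_fn(prev, state)  # one decoder step
        ids.append(prev)
        if prev == end_id:                  # stop once <EOS> is emitted
            break
    return ids

# Toy "model" that deterministically maps each id to the next one;
# id 0 plays the role of <GO> and id 1 the role of <EOS>.
table = {0: 5, 5: 6, 6: 7, 7: 1}
print(greedy_decode(lambda prev, state: (table[prev], state),
                    start_id=0, end_id=1, max_len=10))
# [5, 6, 7, 1]
```

The `maximum_length` argument in `decoding_layer_infer` plays the role of `max_len` here: it bounds the loop when the model never emits `<EOS>`.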
###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ # TODO: Implement Function decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, num_decoder_symbols=vocab_size, dtype=tf.int32, name=None) outputs, final_state, final_context_state = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decoder_fn, scope=decoding_scope) return outputs """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding Layer
Implement `decoding_layer()` to create a Decoder RNN layer.
- Create an RNN cell for decoding using `rnn_size` and `num_layers`.
- Create the output function using [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) to transform its input, logits, to class logits.
- Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)` function to get the training logits.
- Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)` function to get the inference logits.
Note: You'll need to use
[tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. ###Code def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) dec_cell = tf.contrib.rnn.MultiRNNCell([lstm]*num_layers) output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope) start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] with tf.variable_scope("decoding") as decoding_scope: train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) decoding_scope.reuse_variables() infer_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, sequence_length, vocab_size, decoding_scope, output_fn, keep_prob) return train_logits, infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)`.- Process target data using your `process_decoding_input(target_data, target_vocab_to_int, batch_size)` 
function.
- Apply embedding to the target data for the decoder.
- Decode the encoded input using your `decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)`.
###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function # Encoding enc_embeddings = tf.Variable(tf.random_uniform([source_vocab_size, enc_embedding_size], -1.0, 1.0)) enc_embed = tf.nn.embedding_lookup(enc_embeddings, input_data) encoder_state = encoding_layer(enc_embed, rnn_size, num_layers, keep_prob) # Decoding: the decoder embeddings must cover the target vocabulary dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size], -1.0, 1.0)) dec_embed = tf.nn.embedding_lookup(dec_embeddings, dec_input) train_logits, inference_logits = decoding_layer(dec_embed, dec_embeddings, encoder_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return train_logits, inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed
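To make the data flow concrete, the first step of `seq2seq_model` (the embedding lookup) can be mimicked with NumPy fancy indexing. The sizes here are toy values, not the project's hyperparameters:

```python
import numpy as np

vocab_size, embed_dim = 6, 4
# Analogue of tf.Variable(tf.random_uniform([vocab_size, embed_dim], -1.0, 1.0))
embeddings = np.random.uniform(-1.0, 1.0, (vocab_size, embed_dim))
input_data = np.array([[2, 5, 0],
                       [1, 1, 3]])   # (batch=2, time=3) word ids
# Analogue of tf.nn.embedding_lookup: each id selects a row of the table.
enc_embed = embeddings[input_data]
print(enc_embed.shape)
# (2, 3, 4) -> (batch, time, embed_dim), which the encoder RNN consumes
```

The same lookup happens on the decoder side with `dec_embeddings` and the `<GO>`-prefixed target ids before they enter `decoding_layer`.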
###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability ###Code # Number of Epochs epochs = 5 # Batch Size batch_size = 128 # RNN Size rnn_size = 512 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 512 decoding_embedding_size = 512 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.5 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = 
optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, 
len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 0/1077 - Train Accuracy: 0.341, Validation Accuracy: 0.352, Loss: 5.894 Epoch 0 Batch 1/1077 - Train Accuracy: 0.264, Validation Accuracy: 0.344, Loss: 5.374 Epoch 0 Batch 2/1077 - Train Accuracy: 0.251, Validation Accuracy: 0.344, Loss: 4.567 Epoch 0 Batch 3/1077 - Train Accuracy: 0.171, Validation Accuracy: 0.241, Loss: 4.405 Epoch 0 Batch 4/1077 - Train Accuracy: 0.271, Validation Accuracy: 0.346, Loss: 3.886 Epoch 0 Batch 5/1077 - Train Accuracy: 0.324, Validation Accuracy: 0.373, Loss: 3.651 Epoch 0 Batch 6/1077 - Train Accuracy: 0.305, Validation Accuracy: 0.360, Loss: 3.446 Epoch 0 Batch 7/1077 - Train Accuracy: 0.284, Validation Accuracy: 0.359, Loss: 3.357 Epoch 0 Batch 8/1077 - Train Accuracy: 0.311, Validation Accuracy: 0.377, Loss: 3.187 Epoch 0 Batch 9/1077 - Train Accuracy: 0.343, Validation Accuracy: 0.399, Loss: 3.071 Epoch 0 Batch 10/1077 - Train Accuracy: 0.310, Validation Accuracy: 0.406, Loss: 3.158 Epoch 0 Batch 11/1077 - Train Accuracy: 0.361, Validation Accuracy: 0.405, Loss: 2.858 Epoch 0 Batch 12/1077 - Train Accuracy: 0.345, Validation Accuracy: 0.408, Loss: 2.903 Epoch 0 Batch 13/1077 - Train Accuracy: 0.376, Validation Accuracy: 0.401, Loss: 2.671 Epoch 0 Batch 14/1077 - Train Accuracy: 0.353, Validation Accuracy: 0.402, Loss: 2.612 Epoch 0 Batch 15/1077 - Train Accuracy: 0.350, Validation Accuracy: 0.404, Loss: 2.669 Epoch 0 Batch 16/1077 - Train Accuracy: 0.362, Validation Accuracy: 0.403, Loss: 2.580 Epoch 0 Batch 17/1077 - Train Accuracy: 0.368, Validation Accuracy: 0.408, Loss: 2.502 Epoch 0 Batch 18/1077 - Train Accuracy: 0.354, Validation Accuracy: 0.411, Loss: 2.526 Epoch 0 Batch 19/1077 - Train Accuracy: 0.372, Validation Accuracy: 0.407, Loss: 2.372 Epoch 0 Batch 20/1077 - Train Accuracy: 0.351, Validation Accuracy: 0.413, Loss: 2.397 Epoch 0 
Batch 21/1077 - Train Accuracy: 0.336, Validation Accuracy: 0.425, Loss: 2.434
Epoch 0 Batch 22/1077 - Train Accuracy: 0.357, Validation Accuracy: 0.439, Loss: 2.354
Epoch 0 Batch 23/1077 - Train Accuracy: 0.363, Validation Accuracy: 0.434, Loss: 2.290
Epoch 0 Batch 24/1077 - Train Accuracy: 0.374, Validation Accuracy: 0.431, Loss: 2.212
...
Epoch 0 Batch 495/1077 - Train Accuracy: 0.894, Validation Accuracy: 0.917, Loss: 0.091
Epoch 0 Batch 496/1077 - Train Accuracy: 0.925, Validation Accuracy: 0.930, Loss: 0.095
Epoch 0 Batch 497/1077 - Train Accuracy: 0.929, Validation Accuracy: 0.931, Loss: 0.101
Epoch 0 Batch 498/1077
- Train Accuracy: 0.941, Validation Accuracy: 0.934, Loss: 0.088 Epoch 0 Batch 499/1077 - Train Accuracy: 0.932, Validation Accuracy: 0.928, Loss: 0.076 Epoch 0 Batch 500/1077 - Train Accuracy: 0.926, Validation Accuracy: 0.907, Loss: 0.071 Epoch 0 Batch 501/1077 - Train Accuracy: 0.938, Validation Accuracy: 0.912, Loss: 0.083 Epoch 0 Batch 502/1077 - Train Accuracy: 0.933, Validation Accuracy: 0.915, Loss: 0.092 Epoch 0 Batch 503/1077 - Train Accuracy: 0.939, Validation Accuracy: 0.913, Loss: 0.086 Epoch 0 Batch 504/1077 - Train Accuracy: 0.946, Validation Accuracy: 0.927, Loss: 0.088 Epoch 0 Batch 505/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.926, Loss: 0.067 Epoch 0 Batch 506/1077 - Train Accuracy: 0.939, Validation Accuracy: 0.922, Loss: 0.099 Epoch 0 Batch 507/1077 - Train Accuracy: 0.906, Validation Accuracy: 0.926, Loss: 0.092 Epoch 0 Batch 508/1077 - Train Accuracy: 0.928, Validation Accuracy: 0.925, Loss: 0.081 Epoch 0 Batch 509/1077 - Train Accuracy: 0.923, Validation Accuracy: 0.923, Loss: 0.093 Epoch 0 Batch 510/1077 - Train Accuracy: 0.928, Validation Accuracy: 0.922, Loss: 0.097 Epoch 0 Batch 511/1077 - Train Accuracy: 0.936, Validation Accuracy: 0.926, Loss: 0.084 Epoch 0 Batch 512/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.925, Loss: 0.091 Epoch 0 Batch 513/1077 - Train Accuracy: 0.912, Validation Accuracy: 0.928, Loss: 0.095 Epoch 0 Batch 514/1077 - Train Accuracy: 0.916, Validation Accuracy: 0.928, Loss: 0.098 Epoch 0 Batch 515/1077 - Train Accuracy: 0.934, Validation Accuracy: 0.932, Loss: 0.087 Epoch 0 Batch 516/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.920, Loss: 0.092 Epoch 0 Batch 517/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.920, Loss: 0.103 Epoch 0 Batch 518/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.922, Loss: 0.084 Epoch 0 Batch 519/1077 - Train Accuracy: 0.944, Validation Accuracy: 0.922, Loss: 0.080 Epoch 0 Batch 520/1077 - Train Accuracy: 0.947, Validation Accuracy: 0.919, Loss: 
0.082 Epoch 0 Batch 521/1077 - Train Accuracy: 0.913, Validation Accuracy: 0.922, Loss: 0.094 Epoch 0 Batch 522/1077 - Train Accuracy: 0.887, Validation Accuracy: 0.930, Loss: 0.109 Epoch 0 Batch 523/1077 - Train Accuracy: 0.926, Validation Accuracy: 0.916, Loss: 0.091 Epoch 0 Batch 524/1077 - Train Accuracy: 0.921, Validation Accuracy: 0.924, Loss: 0.092 Epoch 0 Batch 525/1077 - Train Accuracy: 0.928, Validation Accuracy: 0.923, Loss: 0.089 Epoch 0 Batch 526/1077 - Train Accuracy: 0.940, Validation Accuracy: 0.908, Loss: 0.079 Epoch 0 Batch 527/1077 - Train Accuracy: 0.919, Validation Accuracy: 0.901, Loss: 0.092 Epoch 0 Batch 528/1077 - Train Accuracy: 0.914, Validation Accuracy: 0.922, Loss: 0.087 Epoch 0 Batch 529/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.914, Loss: 0.079 Epoch 0 Batch 530/1077 - Train Accuracy: 0.911, Validation Accuracy: 0.918, Loss: 0.082 Epoch 0 Batch 531/1077 - Train Accuracy: 0.895, Validation Accuracy: 0.932, Loss: 0.087 Epoch 0 Batch 532/1077 - Train Accuracy: 0.921, Validation Accuracy: 0.931, Loss: 0.111 Epoch 0 Batch 533/1077 - Train Accuracy: 0.908, Validation Accuracy: 0.922, Loss: 0.089 Epoch 0 Batch 534/1077 - Train Accuracy: 0.908, Validation Accuracy: 0.919, Loss: 0.083 Epoch 0 Batch 535/1077 - Train Accuracy: 0.941, Validation Accuracy: 0.925, Loss: 0.080 Epoch 0 Batch 536/1077 - Train Accuracy: 0.923, Validation Accuracy: 0.923, Loss: 0.091 Epoch 0 Batch 537/1077 - Train Accuracy: 0.917, Validation Accuracy: 0.928, Loss: 0.072 Epoch 0 Batch 538/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.932, Loss: 0.058 Epoch 0 Batch 539/1077 - Train Accuracy: 0.918, Validation Accuracy: 0.941, Loss: 0.096 Epoch 0 Batch 540/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.930, Loss: 0.072 Epoch 0 Batch 541/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.910, Loss: 0.068 Epoch 0 Batch 542/1077 - Train Accuracy: 0.925, Validation Accuracy: 0.925, Loss: 0.077 Epoch 0 Batch 543/1077 - Train Accuracy: 0.937, 
Validation Accuracy: 0.921, Loss: 0.087 Epoch 0 Batch 544/1077 - Train Accuracy: 0.941, Validation Accuracy: 0.911, Loss: 0.062 Epoch 0 Batch 545/1077 - Train Accuracy: 0.931, Validation Accuracy: 0.911, Loss: 0.095 Epoch 0 Batch 546/1077 - Train Accuracy: 0.913, Validation Accuracy: 0.903, Loss: 0.096 Epoch 0 Batch 547/1077 - Train Accuracy: 0.940, Validation Accuracy: 0.896, Loss: 0.075 Epoch 0 Batch 548/1077 - Train Accuracy: 0.901, Validation Accuracy: 0.908, Loss: 0.093 Epoch 0 Batch 549/1077 - Train Accuracy: 0.910, Validation Accuracy: 0.922, Loss: 0.098 Epoch 0 Batch 550/1077 - Train Accuracy: 0.919, Validation Accuracy: 0.925, Loss: 0.080 Epoch 0 Batch 551/1077 - Train Accuracy: 0.937, Validation Accuracy: 0.927, Loss: 0.084 Epoch 0 Batch 552/1077 - Train Accuracy: 0.929, Validation Accuracy: 0.921, Loss: 0.097 Epoch 0 Batch 553/1077 - Train Accuracy: 0.941, Validation Accuracy: 0.912, Loss: 0.091 Epoch 0 Batch 554/1077 - Train Accuracy: 0.916, Validation Accuracy: 0.917, Loss: 0.070 Epoch 0 Batch 555/1077 - Train Accuracy: 0.928, Validation Accuracy: 0.916, Loss: 0.081 Epoch 0 Batch 556/1077 - Train Accuracy: 0.935, Validation Accuracy: 0.912, Loss: 0.071 Epoch 0 Batch 557/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.923, Loss: 0.079 Epoch 0 Batch 558/1077 - Train Accuracy: 0.946, Validation Accuracy: 0.919, Loss: 0.062 Epoch 0 Batch 559/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.924, Loss: 0.081 Epoch 0 Batch 560/1077 - Train Accuracy: 0.930, Validation Accuracy: 0.910, Loss: 0.068 Epoch 0 Batch 561/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.898, Loss: 0.065 Epoch 0 Batch 562/1077 - Train Accuracy: 0.938, Validation Accuracy: 0.900, Loss: 0.062 Epoch 0 Batch 563/1077 - Train Accuracy: 0.921, Validation Accuracy: 0.907, Loss: 0.079 Epoch 0 Batch 564/1077 - Train Accuracy: 0.935, Validation Accuracy: 0.919, Loss: 0.088 Epoch 0 Batch 565/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.917, Loss: 0.075 Epoch 0 Batch 566/1077 
- Train Accuracy: 0.921, Validation Accuracy: 0.915, Loss: 0.074 Epoch 0 Batch 567/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.914, Loss: 0.076 Epoch 0 Batch 568/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.922, Loss: 0.079 Epoch 0 Batch 569/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.918, Loss: 0.071 Epoch 0 Batch 570/1077 - Train Accuracy: 0.929, Validation Accuracy: 0.923, Loss: 0.093 Epoch 0 Batch 571/1077 - Train Accuracy: 0.933, Validation Accuracy: 0.929, Loss: 0.060 Epoch 0 Batch 572/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.924, Loss: 0.059 Epoch 0 Batch 573/1077 - Train Accuracy: 0.898, Validation Accuracy: 0.919, Loss: 0.087 Epoch 0 Batch 574/1077 - Train Accuracy: 0.933, Validation Accuracy: 0.919, Loss: 0.083 Epoch 0 Batch 575/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.917, Loss: 0.052 Epoch 0 Batch 576/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.926, Loss: 0.063 Epoch 0 Batch 577/1077 - Train Accuracy: 0.936, Validation Accuracy: 0.941, Loss: 0.081 Epoch 0 Batch 578/1077 - Train Accuracy: 0.937, Validation Accuracy: 0.938, Loss: 0.066 Epoch 0 Batch 579/1077 - Train Accuracy: 0.936, Validation Accuracy: 0.931, Loss: 0.069 Epoch 0 Batch 580/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.920, Loss: 0.058 Epoch 0 Batch 581/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.917, Loss: 0.051 Epoch 0 Batch 582/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.915, Loss: 0.075 Epoch 0 Batch 583/1077 - Train Accuracy: 0.924, Validation Accuracy: 0.916, Loss: 0.086 Epoch 0 Batch 584/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.927, Loss: 0.070 Epoch 0 Batch 585/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.936, Loss: 0.059 Epoch 0 Batch 586/1077 - Train Accuracy: 0.941, Validation Accuracy: 0.929, Loss: 0.068 Epoch 0 Batch 587/1077 - Train Accuracy: 0.926, Validation Accuracy: 0.935, Loss: 0.081 Epoch 0 Batch 588/1077 - Train Accuracy: 0.938, Validation Accuracy: 0.938, Loss: 
0.065 Epoch 0 Batch 589/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.926, Loss: 0.060 Epoch 0 Batch 590/1077 - Train Accuracy: 0.921, Validation Accuracy: 0.925, Loss: 0.077 Epoch 0 Batch 591/1077 - Train Accuracy: 0.928, Validation Accuracy: 0.917, Loss: 0.065 Epoch 0 Batch 592/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.919, Loss: 0.075 Epoch 0 Batch 593/1077 - Train Accuracy: 0.924, Validation Accuracy: 0.919, Loss: 0.078 Epoch 0 Batch 594/1077 - Train Accuracy: 0.925, Validation Accuracy: 0.919, Loss: 0.084 Epoch 0 Batch 595/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.928, Loss: 0.063 Epoch 0 Batch 596/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.923, Loss: 0.070 Epoch 0 Batch 597/1077 - Train Accuracy: 0.939, Validation Accuracy: 0.928, Loss: 0.061 Epoch 0 Batch 598/1077 - Train Accuracy: 0.926, Validation Accuracy: 0.933, Loss: 0.075 Epoch 0 Batch 599/1077 - Train Accuracy: 0.916, Validation Accuracy: 0.936, Loss: 0.108 Epoch 0 Batch 600/1077 - Train Accuracy: 0.913, Validation Accuracy: 0.927, Loss: 0.074 Epoch 0 Batch 601/1077 - Train Accuracy: 0.940, Validation Accuracy: 0.923, Loss: 0.080 Epoch 0 Batch 602/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.930, Loss: 0.071 Epoch 0 Batch 603/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.935, Loss: 0.070 Epoch 0 Batch 604/1077 - Train Accuracy: 0.925, Validation Accuracy: 0.935, Loss: 0.083 Epoch 0 Batch 605/1077 - Train Accuracy: 0.929, Validation Accuracy: 0.932, Loss: 0.089 Epoch 0 Batch 606/1077 - Train Accuracy: 0.934, Validation Accuracy: 0.932, Loss: 0.060 Epoch 0 Batch 607/1077 - Train Accuracy: 0.942, Validation Accuracy: 0.948, Loss: 0.068 Epoch 0 Batch 608/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.947, Loss: 0.076 Epoch 0 Batch 609/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.951, Loss: 0.070 Epoch 0 Batch 610/1077 - Train Accuracy: 0.931, Validation Accuracy: 0.951, Loss: 0.077 Epoch 0 Batch 611/1077 - Train Accuracy: 0.932, 
Validation Accuracy: 0.951, Loss: 0.066 Epoch 0 Batch 612/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.951, Loss: 0.062 Epoch 0 Batch 613/1077 - Train Accuracy: 0.947, Validation Accuracy: 0.940, Loss: 0.077 Epoch 0 Batch 614/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.935, Loss: 0.061 Epoch 0 Batch 615/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.940, Loss: 0.065 Epoch 0 Batch 616/1077 - Train Accuracy: 0.947, Validation Accuracy: 0.939, Loss: 0.069 Epoch 0 Batch 617/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.937, Loss: 0.062 Epoch 0 Batch 618/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.942, Loss: 0.056 Epoch 0 Batch 619/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.926, Loss: 0.057 Epoch 0 Batch 620/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.928, Loss: 0.061 Epoch 0 Batch 621/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.918, Loss: 0.067 Epoch 0 Batch 622/1077 - Train Accuracy: 0.940, Validation Accuracy: 0.915, Loss: 0.073 Epoch 0 Batch 623/1077 - Train Accuracy: 0.899, Validation Accuracy: 0.926, Loss: 0.080 Epoch 0 Batch 624/1077 - Train Accuracy: 0.946, Validation Accuracy: 0.918, Loss: 0.065 Epoch 0 Batch 625/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.918, Loss: 0.060 Epoch 0 Batch 626/1077 - Train Accuracy: 0.922, Validation Accuracy: 0.921, Loss: 0.062 Epoch 0 Batch 627/1077 - Train Accuracy: 0.934, Validation Accuracy: 0.922, Loss: 0.059 Epoch 0 Batch 628/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.920, Loss: 0.074 Epoch 0 Batch 629/1077 - Train Accuracy: 0.926, Validation Accuracy: 0.930, Loss: 0.073 Epoch 0 Batch 630/1077 - Train Accuracy: 0.946, Validation Accuracy: 0.930, Loss: 0.062 Epoch 0 Batch 631/1077 - Train Accuracy: 0.932, Validation Accuracy: 0.933, Loss: 0.071 Epoch 0 Batch 632/1077 - Train Accuracy: 0.946, Validation Accuracy: 0.931, Loss: 0.049 Epoch 0 Batch 633/1077 - Train Accuracy: 0.922, Validation Accuracy: 0.933, Loss: 0.072 Epoch 0 Batch 634/1077 
- Train Accuracy: 0.947, Validation Accuracy: 0.932, Loss: 0.044 Epoch 0 Batch 635/1077 - Train Accuracy: 0.942, Validation Accuracy: 0.931, Loss: 0.085 Epoch 0 Batch 636/1077 - Train Accuracy: 0.940, Validation Accuracy: 0.931, Loss: 0.058 Epoch 0 Batch 637/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.927, Loss: 0.068 Epoch 0 Batch 638/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.936, Loss: 0.061 Epoch 0 Batch 639/1077 - Train Accuracy: 0.916, Validation Accuracy: 0.938, Loss: 0.085 Epoch 0 Batch 640/1077 - Train Accuracy: 0.935, Validation Accuracy: 0.941, Loss: 0.064 Epoch 0 Batch 641/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.949, Loss: 0.049 Epoch 0 Batch 642/1077 - Train Accuracy: 0.924, Validation Accuracy: 0.947, Loss: 0.061 Epoch 0 Batch 643/1077 - Train Accuracy: 0.947, Validation Accuracy: 0.949, Loss: 0.044 Epoch 0 Batch 644/1077 - Train Accuracy: 0.911, Validation Accuracy: 0.957, Loss: 0.067 Epoch 0 Batch 645/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.953, Loss: 0.073 Epoch 0 Batch 646/1077 - Train Accuracy: 0.944, Validation Accuracy: 0.953, Loss: 0.055 Epoch 0 Batch 647/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.949, Loss: 0.062 Epoch 0 Batch 648/1077 - Train Accuracy: 0.949, Validation Accuracy: 0.940, Loss: 0.050 Epoch 0 Batch 649/1077 - Train Accuracy: 0.936, Validation Accuracy: 0.939, Loss: 0.066 Epoch 0 Batch 650/1077 - Train Accuracy: 0.941, Validation Accuracy: 0.935, Loss: 0.072 Epoch 0 Batch 651/1077 - Train Accuracy: 0.924, Validation Accuracy: 0.939, Loss: 0.064 Epoch 0 Batch 652/1077 - Train Accuracy: 0.949, Validation Accuracy: 0.939, Loss: 0.058 Epoch 0 Batch 653/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.944, Loss: 0.054 Epoch 0 Batch 654/1077 - Train Accuracy: 0.946, Validation Accuracy: 0.939, Loss: 0.050 Epoch 0 Batch 655/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.944, Loss: 0.067 Epoch 0 Batch 656/1077 - Train Accuracy: 0.927, Validation Accuracy: 0.942, Loss: 
0.065 Epoch 0 Batch 657/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.937, Loss: 0.055 Epoch 0 Batch 658/1077 - Train Accuracy: 0.938, Validation Accuracy: 0.941, Loss: 0.048 Epoch 0 Batch 659/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.945, Loss: 0.068 Epoch 0 Batch 660/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.932, Loss: 0.056 Epoch 0 Batch 661/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.930, Loss: 0.047 Epoch 0 Batch 662/1077 - Train Accuracy: 0.947, Validation Accuracy: 0.911, Loss: 0.061 Epoch 0 Batch 663/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.910, Loss: 0.054 Epoch 0 Batch 664/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.919, Loss: 0.059 Epoch 0 Batch 665/1077 - Train Accuracy: 0.921, Validation Accuracy: 0.924, Loss: 0.052 Epoch 0 Batch 666/1077 - Train Accuracy: 0.938, Validation Accuracy: 0.921, Loss: 0.077 Epoch 0 Batch 667/1077 - Train Accuracy: 0.940, Validation Accuracy: 0.929, Loss: 0.080 Epoch 0 Batch 668/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.936, Loss: 0.058 Epoch 0 Batch 669/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.937, Loss: 0.055 Epoch 0 Batch 670/1077 - Train Accuracy: 0.929, Validation Accuracy: 0.929, Loss: 0.072 Epoch 0 Batch 671/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.936, Loss: 0.070 Epoch 0 Batch 672/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.928, Loss: 0.057 Epoch 0 Batch 673/1077 - Train Accuracy: 0.939, Validation Accuracy: 0.931, Loss: 0.064 Epoch 0 Batch 674/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.931, Loss: 0.064 Epoch 0 Batch 675/1077 - Train Accuracy: 0.927, Validation Accuracy: 0.921, Loss: 0.073 Epoch 0 Batch 676/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.931, Loss: 0.051 Epoch 0 Batch 677/1077 - Train Accuracy: 0.931, Validation Accuracy: 0.930, Loss: 0.073 Epoch 0 Batch 678/1077 - Train Accuracy: 0.941, Validation Accuracy: 0.929, Loss: 0.053 Epoch 0 Batch 679/1077 - Train Accuracy: 0.943, 
Validation Accuracy: 0.925, Loss: 0.057 Epoch 0 Batch 680/1077 - Train Accuracy: 0.925, Validation Accuracy: 0.928, Loss: 0.058 Epoch 0 Batch 681/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.932, Loss: 0.065 Epoch 0 Batch 682/1077 - Train Accuracy: 0.920, Validation Accuracy: 0.933, Loss: 0.050 Epoch 0 Batch 683/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.941, Loss: 0.055 Epoch 0 Batch 684/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.942, Loss: 0.061 Epoch 0 Batch 685/1077 - Train Accuracy: 0.935, Validation Accuracy: 0.944, Loss: 0.068 Epoch 0 Batch 686/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.944, Loss: 0.050 Epoch 0 Batch 687/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.947, Loss: 0.066 Epoch 0 Batch 688/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.950, Loss: 0.051 Epoch 0 Batch 689/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.944, Loss: 0.048 Epoch 0 Batch 690/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.939, Loss: 0.056 Epoch 0 Batch 691/1077 - Train Accuracy: 0.941, Validation Accuracy: 0.946, Loss: 0.078 Epoch 0 Batch 692/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.949, Loss: 0.052 Epoch 0 Batch 693/1077 - Train Accuracy: 0.916, Validation Accuracy: 0.938, Loss: 0.085 Epoch 0 Batch 694/1077 - Train Accuracy: 0.940, Validation Accuracy: 0.939, Loss: 0.067 Epoch 0 Batch 695/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.939, Loss: 0.047 Epoch 0 Batch 696/1077 - Train Accuracy: 0.935, Validation Accuracy: 0.945, Loss: 0.061 Epoch 0 Batch 697/1077 - Train Accuracy: 0.934, Validation Accuracy: 0.939, Loss: 0.059 Epoch 0 Batch 698/1077 - Train Accuracy: 0.940, Validation Accuracy: 0.946, Loss: 0.052 Epoch 0 Batch 699/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.951, Loss: 0.043 Epoch 0 Batch 700/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.956, Loss: 0.047 Epoch 0 Batch 701/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.953, Loss: 0.062 Epoch 0 Batch 702/1077 
- Train Accuracy: 0.931, Validation Accuracy: 0.948, Loss: 0.064 Epoch 0 Batch 703/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.957, Loss: 0.059 Epoch 0 Batch 704/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.953, Loss: 0.076 Epoch 0 Batch 705/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.949, Loss: 0.069 Epoch 0 Batch 706/1077 - Train Accuracy: 0.892, Validation Accuracy: 0.940, Loss: 0.084 Epoch 0 Batch 707/1077 - Train Accuracy: 0.929, Validation Accuracy: 0.926, Loss: 0.067 Epoch 0 Batch 708/1077 - Train Accuracy: 0.922, Validation Accuracy: 0.925, Loss: 0.064 Epoch 0 Batch 709/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.930, Loss: 0.060 Epoch 0 Batch 710/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.939, Loss: 0.045 Epoch 0 Batch 711/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.933, Loss: 0.068 Epoch 0 Batch 712/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.930, Loss: 0.042 Epoch 0 Batch 713/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.931, Loss: 0.049 Epoch 0 Batch 714/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.931, Loss: 0.047 Epoch 0 Batch 715/1077 - Train Accuracy: 0.940, Validation Accuracy: 0.935, Loss: 0.066 Epoch 0 Batch 716/1077 - Train Accuracy: 0.944, Validation Accuracy: 0.936, Loss: 0.053 Epoch 0 Batch 717/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.937, Loss: 0.044 Epoch 0 Batch 718/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.941, Loss: 0.044 Epoch 0 Batch 719/1077 - Train Accuracy: 0.941, Validation Accuracy: 0.929, Loss: 0.062 Epoch 0 Batch 720/1077 - Train Accuracy: 0.937, Validation Accuracy: 0.928, Loss: 0.063 Epoch 0 Batch 721/1077 - Train Accuracy: 0.914, Validation Accuracy: 0.944, Loss: 0.061 Epoch 0 Batch 722/1077 - Train Accuracy: 0.934, Validation Accuracy: 0.940, Loss: 0.047 Epoch 0 Batch 723/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.940, Loss: 0.062 Epoch 0 Batch 724/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.935, Loss: 
0.054 Epoch 0 Batch 725/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.953, Loss: 0.044 Epoch 0 Batch 726/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.948, Loss: 0.052 Epoch 0 Batch 727/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.944, Loss: 0.052 Epoch 0 Batch 728/1077 - Train Accuracy: 0.942, Validation Accuracy: 0.938, Loss: 0.058 Epoch 0 Batch 729/1077 - Train Accuracy: 0.936, Validation Accuracy: 0.946, Loss: 0.071 Epoch 0 Batch 730/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.942, Loss: 0.060 Epoch 0 Batch 731/1077 - Train Accuracy: 0.936, Validation Accuracy: 0.936, Loss: 0.058 Epoch 0 Batch 732/1077 - Train Accuracy: 0.927, Validation Accuracy: 0.941, Loss: 0.064 Epoch 0 Batch 733/1077 - Train Accuracy: 0.941, Validation Accuracy: 0.938, Loss: 0.055 Epoch 0 Batch 734/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.934, Loss: 0.057 Epoch 0 Batch 735/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.934, Loss: 0.045 Epoch 0 Batch 736/1077 - Train Accuracy: 0.947, Validation Accuracy: 0.934, Loss: 0.044 Epoch 0 Batch 737/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.933, Loss: 0.062 Epoch 0 Batch 738/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.938, Loss: 0.037 Epoch 0 Batch 739/1077 - Train Accuracy: 0.940, Validation Accuracy: 0.938, Loss: 0.051 Epoch 0 Batch 740/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.936, Loss: 0.046 Epoch 0 Batch 741/1077 - Train Accuracy: 0.941, Validation Accuracy: 0.938, Loss: 0.055 Epoch 0 Batch 742/1077 - Train Accuracy: 0.936, Validation Accuracy: 0.939, Loss: 0.045 Epoch 0 Batch 743/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.935, Loss: 0.057 Epoch 0 Batch 744/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.939, Loss: 0.052 Epoch 0 Batch 745/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.937, Loss: 0.047 Epoch 0 Batch 746/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.937, Loss: 0.044 Epoch 0 Batch 747/1077 - Train Accuracy: 0.942, 
Validation Accuracy: 0.945, Loss: 0.035 Epoch 0 Batch 748/1077 - Train Accuracy: 0.942, Validation Accuracy: 0.947, Loss: 0.041 Epoch 0 Batch 749/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.935, Loss: 0.048 Epoch 0 Batch 750/1077 - Train Accuracy: 0.937, Validation Accuracy: 0.945, Loss: 0.057 Epoch 0 Batch 751/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.939, Loss: 0.044 Epoch 0 Batch 752/1077 - Train Accuracy: 0.939, Validation Accuracy: 0.936, Loss: 0.048 Epoch 0 Batch 753/1077 - Train Accuracy: 0.947, Validation Accuracy: 0.933, Loss: 0.052 Epoch 0 Batch 754/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.937, Loss: 0.048 Epoch 0 Batch 755/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.948, Loss: 0.053 Epoch 0 Batch 756/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.944, Loss: 0.045 Epoch 0 Batch 757/1077 - Train Accuracy: 0.949, Validation Accuracy: 0.944, Loss: 0.048 Epoch 0 Batch 758/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.932, Loss: 0.045 Epoch 0 Batch 759/1077 - Train Accuracy: 0.939, Validation Accuracy: 0.930, Loss: 0.050 Epoch 0 Batch 760/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.934, Loss: 0.058 Epoch 0 Batch 761/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.943, Loss: 0.050 Epoch 0 Batch 762/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.938, Loss: 0.046 Epoch 0 Batch 763/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.940, Loss: 0.042 Epoch 0 Batch 764/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.944, Loss: 0.051 Epoch 0 Batch 765/1077 - Train Accuracy: 0.939, Validation Accuracy: 0.950, Loss: 0.047 Epoch 0 Batch 766/1077 - Train Accuracy: 0.912, Validation Accuracy: 0.955, Loss: 0.060 Epoch 0 Batch 767/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.960, Loss: 0.044 Epoch 0 Batch 768/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.959, Loss: 0.041 Epoch 0 Batch 769/1077 - Train Accuracy: 0.937, Validation Accuracy: 0.958, Loss: 0.053 Epoch 0 Batch 770/1077 
- Train Accuracy: 0.951, Validation Accuracy: 0.947, Loss: 0.050 Epoch 0 Batch 771/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.942, Loss: 0.051 Epoch 0 Batch 772/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.947, Loss: 0.040 Epoch 0 Batch 773/1077 - Train Accuracy: 0.947, Validation Accuracy: 0.942, Loss: 0.054 Epoch 0 Batch 774/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.946, Loss: 0.057 Epoch 0 Batch 775/1077 - Train Accuracy: 0.936, Validation Accuracy: 0.949, Loss: 0.052 Epoch 0 Batch 776/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.953, Loss: 0.042 Epoch 0 Batch 777/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.947, Loss: 0.049 Epoch 0 Batch 778/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.934, Loss: 0.041 Epoch 0 Batch 779/1077 - Train Accuracy: 0.949, Validation Accuracy: 0.936, Loss: 0.062 Epoch 0 Batch 780/1077 - Train Accuracy: 0.922, Validation Accuracy: 0.933, Loss: 0.078 Epoch 0 Batch 781/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.949, Loss: 0.041 Epoch 0 Batch 782/1077 - Train Accuracy: 0.942, Validation Accuracy: 0.950, Loss: 0.044 Epoch 0 Batch 783/1077 - Train Accuracy: 0.928, Validation Accuracy: 0.942, Loss: 0.063 Epoch 0 Batch 784/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.948, Loss: 0.033 Epoch 0 Batch 785/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.938, Loss: 0.051 Epoch 0 Batch 786/1077 - Train Accuracy: 0.946, Validation Accuracy: 0.947, Loss: 0.040 Epoch 0 Batch 787/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.945, Loss: 0.043 Epoch 0 Batch 788/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.945, Loss: 0.041 Epoch 0 Batch 789/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.945, Loss: 0.049 Epoch 0 Batch 790/1077 - Train Accuracy: 0.897, Validation Accuracy: 0.944, Loss: 0.056 Epoch 0 Batch 791/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.943, Loss: 0.047 Epoch 0 Batch 792/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.944, Loss: 
0.052 Epoch 0 Batch 793/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.938, Loss: 0.047 Epoch 0 Batch 794/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.933, Loss: 0.031 Epoch 0 Batch 795/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.940, Loss: 0.057 Epoch 0 Batch 796/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.946, Loss: 0.042 Epoch 0 Batch 797/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.951, Loss: 0.041 Epoch 0 Batch 798/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.951, Loss: 0.055 Epoch 0 Batch 799/1077 - Train Accuracy: 0.937, Validation Accuracy: 0.941, Loss: 0.067 Epoch 0 Batch 800/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.929, Loss: 0.049 Epoch 0 Batch 801/1077 - Train Accuracy: 0.937, Validation Accuracy: 0.935, Loss: 0.053 Epoch 0 Batch 802/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.936, Loss: 0.047 Epoch 0 Batch 803/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.941, Loss: 0.051 Epoch 0 Batch 804/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.951, Loss: 0.046 Epoch 0 Batch 805/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.950, Loss: 0.046 Epoch 0 Batch 806/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.945, Loss: 0.043 Epoch 0 Batch 807/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.947, Loss: 0.043 Epoch 0 Batch 808/1077 - Train Accuracy: 0.942, Validation Accuracy: 0.952, Loss: 0.072 Epoch 0 Batch 809/1077 - Train Accuracy: 0.935, Validation Accuracy: 0.953, Loss: 0.067 Epoch 0 Batch 810/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.946, Loss: 0.038 Epoch 0 Batch 811/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.947, Loss: 0.046 Epoch 0 Batch 812/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.941, Loss: 0.053 Epoch 0 Batch 813/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.940, Loss: 0.046 Epoch 0 Batch 814/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.930, Loss: 0.044 Epoch 0 Batch 815/1077 - Train Accuracy: 0.950, 
Validation Accuracy: 0.935, Loss: 0.052 Epoch 0 Batch 816/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.951, Loss: 0.055 Epoch 0 Batch 817/1077 - Train Accuracy: 0.938, Validation Accuracy: 0.941, Loss: 0.049 Epoch 0 Batch 818/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.947, Loss: 0.051 Epoch 0 Batch 819/1077 - Train Accuracy: 0.941, Validation Accuracy: 0.940, Loss: 0.052 Epoch 0 Batch 820/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.940, Loss: 0.042 Epoch 0 Batch 821/1077 - Train Accuracy: 0.932, Validation Accuracy: 0.935, Loss: 0.046 Epoch 0 Batch 822/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.928, Loss: 0.051 Epoch 0 Batch 823/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.927, Loss: 0.050 Epoch 0 Batch 824/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.938, Loss: 0.044 Epoch 0 Batch 825/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.938, Loss: 0.031 Epoch 0 Batch 826/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.941, Loss: 0.045 Epoch 0 Batch 827/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.934, Loss: 0.050 Epoch 0 Batch 828/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.923, Loss: 0.043 Epoch 0 Batch 829/1077 - Train Accuracy: 0.930, Validation Accuracy: 0.934, Loss: 0.064 Epoch 0 Batch 830/1077 - Train Accuracy: 0.938, Validation Accuracy: 0.939, Loss: 0.060 Epoch 0 Batch 831/1077 - Train Accuracy: 0.909, Validation Accuracy: 0.945, Loss: 0.049 Epoch 0 Batch 832/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.942, Loss: 0.041 Epoch 0 Batch 833/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.945, Loss: 0.051 Epoch 0 Batch 834/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.944, Loss: 0.054 Epoch 0 Batch 835/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.943, Loss: 0.048 Epoch 0 Batch 836/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.946, Loss: 0.047 Epoch 0 Batch 837/1077 - Train Accuracy: 0.920, Validation Accuracy: 0.942, Loss: 0.069 Epoch 0 Batch 838/1077 
- Train Accuracy: 0.972, Validation Accuracy: 0.943, Loss: 0.044 Epoch 0 Batch 839/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.934, Loss: 0.039 Epoch 0 Batch 840/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.937, Loss: 0.036 Epoch 0 Batch 841/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.941, Loss: 0.050 Epoch 0 Batch 842/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.943, Loss: 0.039 Epoch 0 Batch 843/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.947, Loss: 0.038 Epoch 0 Batch 844/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.953, Loss: 0.041 Epoch 0 Batch 845/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.951, Loss: 0.037 Epoch 0 Batch 846/1077 - Train Accuracy: 0.932, Validation Accuracy: 0.957, Loss: 0.054 Epoch 0 Batch 847/1077 - Train Accuracy: 0.947, Validation Accuracy: 0.952, Loss: 0.059 Epoch 0 Batch 848/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.958, Loss: 0.041 Epoch 0 Batch 849/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.958, Loss: 0.042 Epoch 0 Batch 850/1077 - Train Accuracy: 0.949, Validation Accuracy: 0.963, Loss: 0.064 Epoch 0 Batch 851/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.966, Loss: 0.050 Epoch 0 Batch 852/1077 - Train Accuracy: 0.925, Validation Accuracy: 0.967, Loss: 0.065 Epoch 0 Batch 853/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.948, Loss: 0.040 Epoch 0 Batch 854/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.940, Loss: 0.054 Epoch 0 Batch 855/1077 - Train Accuracy: 0.931, Validation Accuracy: 0.938, Loss: 0.044 Epoch 0 Batch 856/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.938, Loss: 0.047 Epoch 0 Batch 857/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.934, Loss: 0.044 Epoch 0 Batch 858/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.935, Loss: 0.041 Epoch 0 Batch 859/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.953, Loss: 0.050 Epoch 0 Batch 860/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.949, Loss: 
0.047 Epoch 0 Batch 861/1077 - Train Accuracy: 0.938, Validation Accuracy: 0.946, Loss: 0.036 Epoch 0 Batch 862/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.945, Loss: 0.036 Epoch 0 Batch 863/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.946, Loss: 0.040 Epoch 0 Batch 864/1077 - Train Accuracy: 0.934, Validation Accuracy: 0.949, Loss: 0.049 Epoch 0 Batch 865/1077 - Train Accuracy: 0.941, Validation Accuracy: 0.952, Loss: 0.044 Epoch 0 Batch 866/1077 - Train Accuracy: 0.936, Validation Accuracy: 0.956, Loss: 0.054 Epoch 0 Batch 867/1077 - Train Accuracy: 0.916, Validation Accuracy: 0.961, Loss: 0.084 Epoch 0 Batch 868/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.952, Loss: 0.041 Epoch 0 Batch 869/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.938, Loss: 0.045 Epoch 0 Batch 870/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.942, Loss: 0.049 Epoch 0 Batch 871/1077 - Train Accuracy: 0.944, Validation Accuracy: 0.949, Loss: 0.034 Epoch 0 Batch 872/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.950, Loss: 0.045 Epoch 0 Batch 873/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.956, Loss: 0.048 Epoch 0 Batch 874/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.956, Loss: 0.057 Epoch 0 Batch 875/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.950, Loss: 0.045 Epoch 0 Batch 876/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.950, Loss: 0.040 Epoch 0 Batch 877/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.954, Loss: 0.034 Epoch 0 Batch 878/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.948, Loss: 0.046 Epoch 0 Batch 879/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.949, Loss: 0.034 Epoch 0 Batch 880/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.945, Loss: 0.054 Epoch 0 Batch 881/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.945, Loss: 0.049 Epoch 0 Batch 882/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.946, Loss: 0.041 Epoch 0 Batch 883/1077 - Train Accuracy: 0.932, 
Validation Accuracy: 0.954, Loss: 0.058 Epoch 0 Batch 884/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.954, Loss: 0.040 Epoch 0 Batch 885/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.955, Loss: 0.030 Epoch 0 Batch 886/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.954, Loss: 0.044 Epoch 0 Batch 887/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.954, Loss: 0.052 Epoch 0 Batch 888/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.954, Loss: 0.036 Epoch 0 Batch 889/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.953, Loss: 0.044 Epoch 0 Batch 890/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.955, Loss: 0.041 Epoch 0 Batch 891/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.953, Loss: 0.034 Epoch 0 Batch 892/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.958, Loss: 0.038 Epoch 0 Batch 893/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.958, Loss: 0.046 Epoch 0 Batch 894/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.953, Loss: 0.038 Epoch 0 Batch 895/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.955, Loss: 0.046 Epoch 0 Batch 896/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.951, Loss: 0.042 Epoch 0 Batch 897/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.951, Loss: 0.043 Epoch 0 Batch 898/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.949, Loss: 0.036 Epoch 0 Batch 899/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.939, Loss: 0.053 Epoch 0 Batch 900/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.948, Loss: 0.058 Epoch 0 Batch 901/1077 - Train Accuracy: 0.931, Validation Accuracy: 0.952, Loss: 0.061 Epoch 0 Batch 902/1077 - Train Accuracy: 0.949, Validation Accuracy: 0.952, Loss: 0.052 Epoch 0 Batch 903/1077 - Train Accuracy: 0.937, Validation Accuracy: 0.947, Loss: 0.052 Epoch 0 Batch 904/1077 - Train Accuracy: 0.949, Validation Accuracy: 0.953, Loss: 0.042 Epoch 0 Batch 905/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.961, Loss: 0.033 Epoch 0 Batch 906/1077 
- Train Accuracy: 0.954, Validation Accuracy: 0.961, Loss: 0.043 Epoch 0 Batch 907/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.955, Loss: 0.040 Epoch 0 Batch 908/1077 - Train Accuracy: 0.940, Validation Accuracy: 0.957, Loss: 0.056 Epoch 0 Batch 909/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.957, Loss: 0.057 Epoch 0 Batch 910/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.955, Loss: 0.046 Epoch 0 Batch 911/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.956, Loss: 0.052 Epoch 0 Batch 912/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.956, Loss: 0.041 Epoch 0 Batch 913/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.956, Loss: 0.062 Epoch 0 Batch 914/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.957, Loss: 0.078 Epoch 0 Batch 915/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.957, Loss: 0.033 Epoch 0 Batch 916/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.957, Loss: 0.055 Epoch 0 Batch 917/1077 - Train Accuracy: 0.947, Validation Accuracy: 0.957, Loss: 0.041 Epoch 0 Batch 918/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.966, Loss: 0.032 Epoch 0 Batch 919/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.961, Loss: 0.036 Epoch 0 Batch 920/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.960, Loss: 0.035 Epoch 0 Batch 921/1077 - Train Accuracy: 0.924, Validation Accuracy: 0.960, Loss: 0.048 Epoch 0 Batch 922/1077 - Train Accuracy: 0.941, Validation Accuracy: 0.960, Loss: 0.042 Epoch 0 Batch 923/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.960, Loss: 0.029 Epoch 0 Batch 924/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.959, Loss: 0.064 Epoch 0 Batch 925/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.951, Loss: 0.040 Epoch 0 Batch 926/1077 - Train Accuracy: 0.944, Validation Accuracy: 0.947, Loss: 0.049 Epoch 0 Batch 927/1077 - Train Accuracy: 0.939, Validation Accuracy: 0.948, Loss: 0.056 Epoch 0 Batch 928/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.950, Loss: 
0.037 Epoch 0 Batch 929/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.952, Loss: 0.037 Epoch 0 Batch 930/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.946, Loss: 0.036 Epoch 0 Batch 931/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.951, Loss: 0.036 Epoch 0 Batch 932/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.960, Loss: 0.035 Epoch 0 Batch 933/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.960, Loss: 0.038 Epoch 0 Batch 934/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.960, Loss: 0.032 Epoch 0 Batch 935/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.958, Loss: 0.044 Epoch 0 Batch 936/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.964, Loss: 0.049 Epoch 0 Batch 937/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.956, Loss: 0.046 Epoch 0 Batch 938/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.958, Loss: 0.062 Epoch 0 Batch 939/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.947, Loss: 0.057 Epoch 0 Batch 940/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.934, Loss: 0.034 Epoch 0 Batch 941/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.934, Loss: 0.033 Epoch 0 Batch 942/1077 - Train Accuracy: 0.944, Validation Accuracy: 0.941, Loss: 0.049 Epoch 0 Batch 943/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.961, Loss: 0.041 Epoch 0 Batch 944/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.961, Loss: 0.036 Epoch 0 Batch 945/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.959, Loss: 0.030 Epoch 0 Batch 946/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.956, Loss: 0.029 Epoch 0 Batch 947/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.956, Loss: 0.039 Epoch 0 Batch 948/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.961, Loss: 0.041 Epoch 0 Batch 949/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.961, Loss: 0.030 Epoch 0 Batch 950/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.960, Loss: 0.033 Epoch 0 Batch 951/1077 - Train Accuracy: 0.952, 
Validation Accuracy: 0.960, Loss: 0.063 Epoch 0 Batch 952/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.961, Loss: 0.034 Epoch 0 Batch 953/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.959, Loss: 0.041 Epoch 0 Batch 954/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.956, Loss: 0.043 Epoch 0 Batch 955/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.960, Loss: 0.052 Epoch 0 Batch 956/1077 - Train Accuracy: 0.934, Validation Accuracy: 0.960, Loss: 0.055 Epoch 0 Batch 957/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.951, Loss: 0.023 Epoch 0 Batch 958/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.947, Loss: 0.032 Epoch 0 Batch 959/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.944, Loss: 0.038 Epoch 0 Batch 960/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.952, Loss: 0.042 Epoch 0 Batch 961/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.971, Loss: 0.029 Epoch 0 Batch 962/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.975, Loss: 0.039 Epoch 0 Batch 963/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.973, Loss: 0.061 Epoch 0 Batch 964/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.973, Loss: 0.037 Epoch 0 Batch 965/1077 - Train Accuracy: 0.949, Validation Accuracy: 0.974, Loss: 0.050 Epoch 0 Batch 966/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.971, Loss: 0.030 Epoch 0 Batch 967/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.970, Loss: 0.044 Epoch 0 Batch 968/1077 - Train Accuracy: 0.927, Validation Accuracy: 0.960, Loss: 0.056 Epoch 0 Batch 969/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.949, Loss: 0.057 Epoch 0 Batch 970/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.949, Loss: 0.048 Epoch 0 Batch 971/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.950, Loss: 0.053 Epoch 0 Batch 972/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.950, Loss: 0.040 Epoch 0 Batch 973/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.954, Loss: 0.037 Epoch 0 Batch 974/1077 
- Train Accuracy: 0.959, Validation Accuracy: 0.962, Loss: 0.035 Epoch 0 Batch 975/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.976, Loss: 0.044 Epoch 0 Batch 976/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.969, Loss: 0.042 Epoch 0 Batch 977/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.962, Loss: 0.022 Epoch 0 Batch 978/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.960, Loss: 0.037 Epoch 0 Batch 979/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.960, Loss: 0.040 Epoch 0 Batch 980/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.958, Loss: 0.040 Epoch 0 Batch 981/1077 - Train Accuracy: 0.936, Validation Accuracy: 0.952, Loss: 0.034 Epoch 0 Batch 982/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.954, Loss: 0.033 Epoch 0 Batch 983/1077 - Train Accuracy: 0.944, Validation Accuracy: 0.954, Loss: 0.042 Epoch 0 Batch 984/1077 - Train Accuracy: 0.927, Validation Accuracy: 0.950, Loss: 0.054 Epoch 0 Batch 985/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.950, Loss: 0.032 Epoch 0 Batch 986/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.957, Loss: 0.037 Epoch 0 Batch 987/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.962, Loss: 0.031 Epoch 0 Batch 988/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.957, Loss: 0.049 Epoch 0 Batch 989/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.954, Loss: 0.042 Epoch 0 Batch 990/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.955, Loss: 0.048 Epoch 0 Batch 991/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.962, Loss: 0.033 Epoch 0 Batch 992/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.962, Loss: 0.047 Epoch 0 Batch 993/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.959, Loss: 0.031 Epoch 0 Batch 994/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.960, Loss: 0.038 Epoch 0 Batch 995/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.958, Loss: 0.044 Epoch 0 Batch 996/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.957, Loss: 
0.033 Epoch 0 Batch 997/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.963, Loss: 0.037 Epoch 0 Batch 998/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.958, Loss: 0.036 Epoch 0 Batch 999/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.951, Loss: 0.047 Epoch 0 Batch 1000/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.951, Loss: 0.038 Epoch 0 Batch 1001/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.951, Loss: 0.036 Epoch 0 Batch 1002/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.954, Loss: 0.025 Epoch 0 Batch 1003/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.955, Loss: 0.047 Epoch 0 Batch 1004/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.953, Loss: 0.045 Epoch 0 Batch 1005/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.949, Loss: 0.039 Epoch 0 Batch 1006/1077 - Train Accuracy: 0.936, Validation Accuracy: 0.956, Loss: 0.035 Epoch 0 Batch 1007/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.961, Loss: 0.032 Epoch 0 Batch 1008/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.958, Loss: 0.060 Epoch 0 Batch 1009/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.962, Loss: 0.024 Epoch 0 Batch 1010/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.961, Loss: 0.033 Epoch 0 Batch 1011/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.961, Loss: 0.032 Epoch 0 Batch 1012/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.948, Loss: 0.031 Epoch 0 Batch 1013/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.944, Loss: 0.023 Epoch 0 Batch 1014/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.941, Loss: 0.045 Epoch 0 Batch 1015/1077 - Train Accuracy: 0.928, Validation Accuracy: 0.945, Loss: 0.049 Epoch 0 Batch 1016/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.937, Loss: 0.044 Epoch 0 Batch 1017/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.948, Loss: 0.040 Epoch 0 Batch 1018/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.957, Loss: 0.037 Epoch 0 Batch 1019/1077 - Train 
Accuracy: 0.940, Validation Accuracy: 0.957, Loss: 0.053 Epoch 0 Batch 1020/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.962, Loss: 0.026 Epoch 0 Batch 1021/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.962, Loss: 0.038 Epoch 0 Batch 1022/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.955, Loss: 0.034 Epoch 0 Batch 1023/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.958, Loss: 0.039 Epoch 0 Batch 1024/1077 - Train Accuracy: 0.938, Validation Accuracy: 0.960, Loss: 0.048 Epoch 0 Batch 1025/1077 - Train Accuracy: 0.946, Validation Accuracy: 0.955, Loss: 0.035 Epoch 0 Batch 1026/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.966, Loss: 0.043 Epoch 0 Batch 1027/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.962, Loss: 0.037 Epoch 0 Batch 1028/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.961, Loss: 0.033 Epoch 0 Batch 1029/1077 - Train Accuracy: 0.944, Validation Accuracy: 0.961, Loss: 0.038 Epoch 0 Batch 1030/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.962, Loss: 0.035 Epoch 0 Batch 1031/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.962, Loss: 0.044 Epoch 0 Batch 1032/1077 - Train Accuracy: 0.936, Validation Accuracy: 0.971, Loss: 0.056 Epoch 0 Batch 1033/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.971, Loss: 0.040 Epoch 0 Batch 1034/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.965, Loss: 0.040 Epoch 0 Batch 1035/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.965, Loss: 0.035 Epoch 0 Batch 1036/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.960, Loss: 0.040 Epoch 0 Batch 1037/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.958, Loss: 0.036 Epoch 0 Batch 1038/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.954, Loss: 0.047 Epoch 0 Batch 1039/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.954, Loss: 0.043 Epoch 0 Batch 1040/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.948, Loss: 0.048 Epoch 0 Batch 1041/1077 - Train Accuracy: 0.950, Validation Accuracy: 
0.957, Loss: 0.049 Epoch 0 Batch 1042/1077 - Train Accuracy: 0.946, Validation Accuracy: 0.955, Loss: 0.042 Epoch 0 Batch 1043/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.951, Loss: 0.040 Epoch 0 Batch 1044/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.949, Loss: 0.049 Epoch 0 Batch 1045/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.949, Loss: 0.030 Epoch 0 Batch 1046/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.944, Loss: 0.026 Epoch 0 Batch 1047/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.944, Loss: 0.032 Epoch 0 Batch 1048/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.938, Loss: 0.041 Epoch 0 Batch 1049/1077 - Train Accuracy: 0.946, Validation Accuracy: 0.942, Loss: 0.046 Epoch 0 Batch 1050/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.950, Loss: 0.031 Epoch 0 Batch 1051/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.951, Loss: 0.038 Epoch 0 Batch 1052/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.951, Loss: 0.034 Epoch 0 Batch 1053/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.960, Loss: 0.038 Epoch 0 Batch 1054/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.960, Loss: 0.041 Epoch 0 Batch 1055/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.960, Loss: 0.037 Epoch 0 Batch 1056/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.964, Loss: 0.029 Epoch 0 Batch 1057/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.970, Loss: 0.048 Epoch 0 Batch 1058/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.965, Loss: 0.044 Epoch 0 Batch 1059/1077 - Train Accuracy: 0.926, Validation Accuracy: 0.965, Loss: 0.052 Epoch 0 Batch 1060/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.966, Loss: 0.030 Epoch 0 Batch 1061/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.969, Loss: 0.050 Epoch 0 Batch 1062/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.969, Loss: 0.028 Epoch 0 Batch 1063/1077 - Train Accuracy: 0.939, Validation Accuracy: 0.972, Loss: 0.046 Epoch 0 Batch 
1064/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.972, Loss: 0.035 Epoch 0 Batch 1065/1077 - Train Accuracy: 0.939, Validation Accuracy: 0.972, Loss: 0.036 Epoch 0 Batch 1066/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.974, Loss: 0.032 Epoch 0 Batch 1067/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.974, Loss: 0.047 Epoch 0 Batch 1068/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.979, Loss: 0.024 Epoch 0 Batch 1069/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.975, Loss: 0.023 Epoch 0 Batch 1070/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.975, Loss: 0.033 Epoch 0 Batch 1071/1077 - Train Accuracy: 0.949, Validation Accuracy: 0.975, Loss: 0.033 Epoch 0 Batch 1072/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.980, Loss: 0.037 Epoch 0 Batch 1073/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.980, Loss: 0.031 Epoch 0 Batch 1074/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.980, Loss: 0.047 Epoch 0 Batch 1075/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.974, Loss: 0.040 Epoch 1 Batch 0/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.973, Loss: 0.027 Epoch 1 Batch 1/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.973, Loss: 0.026 Epoch 1 Batch 2/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.969, Loss: 0.035 Epoch 1 Batch 3/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.969, Loss: 0.043 Epoch 1 Batch 4/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.969, Loss: 0.029 Epoch 1 Batch 5/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.963, Loss: 0.056 Epoch 1 Batch 6/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.963, Loss: 0.033 Epoch 1 Batch 7/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.965, Loss: 0.028 Epoch 1 Batch 8/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.965, Loss: 0.038 Epoch 1 Batch 9/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.955, Loss: 0.040 Epoch 1 Batch 10/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.957, Loss: 
0.038 Epoch 1 Batch 11/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.947, Loss: 0.042 Epoch 1 Batch 12/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.945, Loss: 0.031 Epoch 1 Batch 13/1077 - Train Accuracy: 0.949, Validation Accuracy: 0.945, Loss: 0.043 Epoch 1 Batch 14/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.955, Loss: 0.025 Epoch 1 Batch 15/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.955, Loss: 0.032 Epoch 1 Batch 16/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.954, Loss: 0.040 Epoch 1 Batch 17/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.960, Loss: 0.032 Epoch 1 Batch 18/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.968, Loss: 0.043 Epoch 1 Batch 19/1077 - Train Accuracy: 0.941, Validation Accuracy: 0.978, Loss: 0.041 Epoch 1 Batch 20/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.974, Loss: 0.030 Epoch 1 Batch 21/1077 - Train Accuracy: 0.934, Validation Accuracy: 0.964, Loss: 0.034 Epoch 1 Batch 22/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.946, Loss: 0.042 Epoch 1 Batch 23/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.944, Loss: 0.039 Epoch 1 Batch 24/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.942, Loss: 0.039 Epoch 1 Batch 25/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.945, Loss: 0.025 Epoch 1 Batch 26/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.963, Loss: 0.042 Epoch 1 Batch 27/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.956, Loss: 0.028 Epoch 1 Batch 28/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.956, Loss: 0.037 Epoch 1 Batch 29/1077 - Train Accuracy: 0.926, Validation Accuracy: 0.956, Loss: 0.040 Epoch 1 Batch 30/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.956, Loss: 0.028 Epoch 1 Batch 31/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.956, Loss: 0.032 Epoch 1 Batch 32/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.957, Loss: 0.039 Epoch 1 Batch 33/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.958, 
Loss: 0.032 Epoch 1 Batch 34/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.965, Loss: 0.030 Epoch 1 Batch 35/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.954, Loss: 0.036 Epoch 1 Batch 36/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.949, Loss: 0.030 Epoch 1 Batch 37/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.952, Loss: 0.040 Epoch 1 Batch 38/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.958, Loss: 0.050 Epoch 1 Batch 39/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.954, Loss: 0.048 Epoch 1 Batch 40/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.953, Loss: 0.026 Epoch 1 Batch 41/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.953, Loss: 0.029 Epoch 1 Batch 42/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.953, Loss: 0.049 Epoch 1 Batch 43/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.962, Loss: 0.020 Epoch 1 Batch 44/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.962, Loss: 0.022 Epoch 1 Batch 45/1077 - Train Accuracy: 0.941, Validation Accuracy: 0.962, Loss: 0.038 Epoch 1 Batch 46/1077 - Train Accuracy: 0.940, Validation Accuracy: 0.957, Loss: 0.033 Epoch 1 Batch 47/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.962, Loss: 0.037 Epoch 1 Batch 48/1077 - Train Accuracy: 0.949, Validation Accuracy: 0.963, Loss: 0.050 Epoch 1 Batch 49/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.958, Loss: 0.042 Epoch 1 Batch 50/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.959, Loss: 0.037 Epoch 1 Batch 51/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.966, Loss: 0.046 Epoch 1 Batch 52/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.971, Loss: 0.033 Epoch 1 Batch 53/1077 - Train Accuracy: 0.942, Validation Accuracy: 0.975, Loss: 0.035 Epoch 1 Batch 54/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.969, Loss: 0.048 Epoch 1 Batch 55/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.964, Loss: 0.033 Epoch 1 Batch 56/1077 - Train Accuracy: 0.959, Validation Accuracy: 
0.959, Loss: 0.027 Epoch 1 Batch 57/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.959, Loss: 0.034 Epoch 1 Batch 58/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.966, Loss: 0.040 Epoch 1 Batch 59/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.975, Loss: 0.030 Epoch 1 Batch 60/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.979, Loss: 0.026 Epoch 1 Batch 61/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.958, Loss: 0.037 Epoch 1 Batch 62/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.958, Loss: 0.029 Epoch 1 Batch 63/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.953, Loss: 0.027 Epoch 1 Batch 64/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.958, Loss: 0.028 Epoch 1 Batch 65/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.957, Loss: 0.028 Epoch 1 Batch 66/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.962, Loss: 0.021 Epoch 1 Batch 67/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.958, Loss: 0.039 Epoch 1 Batch 68/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.953, Loss: 0.042 Epoch 1 Batch 69/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.953, Loss: 0.044 Epoch 1 Batch 70/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.953, Loss: 0.037 Epoch 1 Batch 71/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.953, Loss: 0.019 Epoch 1 Batch 72/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.965, Loss: 0.033 Epoch 1 Batch 73/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.960, Loss: 0.028 Epoch 1 Batch 74/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.971, Loss: 0.028 Epoch 1 Batch 75/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.966, Loss: 0.044 Epoch 1 Batch 76/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.962, Loss: 0.024 Epoch 1 Batch 77/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.955, Loss: 0.029 Epoch 1 Batch 78/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.956, Loss: 0.029 Epoch 1 Batch 79/1077 - Train Accuracy: 0.966, Validation 
Accuracy: 0.956, Loss: 0.035 Epoch 1 Batch 80/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.962, Loss: 0.032 Epoch 1 Batch 81/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.963, Loss: 0.026 Epoch 1 Batch 82/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.962, Loss: 0.030 Epoch 1 Batch 83/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.958, Loss: 0.024 Epoch 1 Batch 84/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.953, Loss: 0.029 Epoch 1 Batch 85/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.953, Loss: 0.027 Epoch 1 Batch 86/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.958, Loss: 0.034 Epoch 1 Batch 87/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.957, Loss: 0.042 Epoch 1 Batch 88/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.957, Loss: 0.035 Epoch 1 Batch 89/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.951, Loss: 0.032 Epoch 1 Batch 90/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.951, Loss: 0.033 Epoch 1 Batch 91/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.957, Loss: 0.027 Epoch 1 Batch 92/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.957, Loss: 0.042 Epoch 1 Batch 93/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.952, Loss: 0.027 Epoch 1 Batch 94/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.952, Loss: 0.022 Epoch 1 Batch 95/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.952, Loss: 0.042 Epoch 1 Batch 96/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.954, Loss: 0.042 Epoch 1 Batch 97/1077 - Train Accuracy: 0.939, Validation Accuracy: 0.956, Loss: 0.035 Epoch 1 Batch 98/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.969, Loss: 0.035 Epoch 1 Batch 99/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.964, Loss: 0.037 Epoch 1 Batch 100/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.963, Loss: 0.027 Epoch 1 Batch 101/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.955, Loss: 0.027 Epoch 1 Batch 102/1077 - Train Accuracy: 0.975, 
Validation Accuracy: 0.955, Loss: 0.030 Epoch 1 Batch 103/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.956, Loss: 0.037 Epoch 1 Batch 104/1077 - Train Accuracy: 0.949, Validation Accuracy: 0.955, Loss: 0.035 Epoch 1 Batch 105/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.955, Loss: 0.029 Epoch 1 Batch 106/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.949, Loss: 0.041 Epoch 1 Batch 107/1077 - Train Accuracy: 0.942, Validation Accuracy: 0.956, Loss: 0.033 Epoch 1 Batch 108/1077 - Train Accuracy: 0.938, Validation Accuracy: 0.954, Loss: 0.032 Epoch 1 Batch 109/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.954, Loss: 0.030 Epoch 1 Batch 110/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.954, Loss: 0.021 Epoch 1 Batch 111/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.956, Loss: 0.028 Epoch 1 Batch 112/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.956, Loss: 0.030 Epoch 1 Batch 113/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.960, Loss: 0.039 Epoch 1 Batch 114/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.960, Loss: 0.025 Epoch 1 Batch 115/1077 - Train Accuracy: 0.947, Validation Accuracy: 0.960, Loss: 0.034 Epoch 1 Batch 116/1077 - Train Accuracy: 0.921, Validation Accuracy: 0.956, Loss: 0.045 Epoch 1 Batch 117/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.956, Loss: 0.029 Epoch 1 Batch 118/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.953, Loss: 0.025 Epoch 1 Batch 119/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.953, Loss: 0.026 Epoch 1 Batch 120/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.962, Loss: 0.031 Epoch 1 Batch 121/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.962, Loss: 0.032 Epoch 1 Batch 122/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.954, Loss: 0.025 Epoch 1 Batch 123/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.959, Loss: 0.026 Epoch 1 Batch 124/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.957, Loss: 0.037 Epoch 1 Batch 125/1077 
- Train Accuracy: 0.953, Validation Accuracy: 0.957, Loss: 0.045 [... per-batch log output for Epoch 1, Batches 126/1077 through 554/1077 elided; over this span train accuracy fluctuates roughly between 0.94 and 0.99, validation accuracy roughly between 0.94 and 0.98, and loss roughly between 0.012 and 0.069, with no sustained trend ...] Epoch 1 Batch 555/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.962, Loss:
0.019 Epoch 1 Batch 556/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.961, Loss: 0.019 Epoch 1 Batch 557/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.963, Loss: 0.025 Epoch 1 Batch 558/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.967, Loss: 0.016 Epoch 1 Batch 559/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.965, Loss: 0.026 Epoch 1 Batch 560/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.969, Loss: 0.024 Epoch 1 Batch 561/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.969, Loss: 0.020 Epoch 1 Batch 562/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.967, Loss: 0.018 Epoch 1 Batch 563/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.967, Loss: 0.027 Epoch 1 Batch 564/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.967, Loss: 0.026 Epoch 1 Batch 565/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.962, Loss: 0.028 Epoch 1 Batch 566/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.959, Loss: 0.025 Epoch 1 Batch 567/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.958, Loss: 0.024 Epoch 1 Batch 568/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.958, Loss: 0.024 Epoch 1 Batch 569/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.959, Loss: 0.024 Epoch 1 Batch 570/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.966, Loss: 0.034 Epoch 1 Batch 571/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.968, Loss: 0.023 Epoch 1 Batch 572/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.968, Loss: 0.023 Epoch 1 Batch 573/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.971, Loss: 0.038 Epoch 1 Batch 574/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.971, Loss: 0.029 Epoch 1 Batch 575/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.976, Loss: 0.018 Epoch 1 Batch 576/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.976, Loss: 0.015 Epoch 1 Batch 577/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.974, Loss: 0.029 Epoch 1 Batch 578/1077 - Train Accuracy: 0.975, 
Validation Accuracy: 0.974, Loss: 0.017 Epoch 1 Batch 579/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.972, Loss: 0.023 Epoch 1 Batch 580/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.972, Loss: 0.018 Epoch 1 Batch 581/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.970, Loss: 0.017 Epoch 1 Batch 582/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.971, Loss: 0.024 Epoch 1 Batch 583/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.971, Loss: 0.032 Epoch 1 Batch 584/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.974, Loss: 0.020 Epoch 1 Batch 585/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.969, Loss: 0.015 Epoch 1 Batch 586/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.970, Loss: 0.018 Epoch 1 Batch 587/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.971, Loss: 0.031 Epoch 1 Batch 588/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.973, Loss: 0.018 Epoch 1 Batch 589/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.977, Loss: 0.021 Epoch 1 Batch 590/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.972, Loss: 0.021 Epoch 1 Batch 591/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.970, Loss: 0.026 Epoch 1 Batch 592/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.970, Loss: 0.023 Epoch 1 Batch 593/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.974, Loss: 0.040 Epoch 1 Batch 594/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.970, Loss: 0.032 Epoch 1 Batch 595/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.970, Loss: 0.021 Epoch 1 Batch 596/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.965, Loss: 0.021 Epoch 1 Batch 597/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.961, Loss: 0.018 Epoch 1 Batch 598/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.961, Loss: 0.026 Epoch 1 Batch 599/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.961, Loss: 0.040 Epoch 1 Batch 600/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.958, Loss: 0.027 Epoch 1 Batch 601/1077 
- Train Accuracy: 0.968, Validation Accuracy: 0.958, Loss: 0.024 Epoch 1 Batch 602/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.970, Loss: 0.026 Epoch 1 Batch 603/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.974, Loss: 0.016 Epoch 1 Batch 604/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.984, Loss: 0.034 Epoch 1 Batch 605/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.984, Loss: 0.030 Epoch 1 Batch 606/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.983, Loss: 0.021 Epoch 1 Batch 607/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.989, Loss: 0.028 Epoch 1 Batch 608/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.988, Loss: 0.024 Epoch 1 Batch 609/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.989, Loss: 0.023 Epoch 1 Batch 610/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.988, Loss: 0.022 Epoch 1 Batch 611/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.982, Loss: 0.022 Epoch 1 Batch 612/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.978, Loss: 0.014 Epoch 1 Batch 613/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.969, Loss: 0.027 Epoch 1 Batch 614/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.958, Loss: 0.019 Epoch 1 Batch 615/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.958, Loss: 0.017 Epoch 1 Batch 616/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.968, Loss: 0.026 Epoch 1 Batch 617/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.973, Loss: 0.019 Epoch 1 Batch 618/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.978, Loss: 0.020 Epoch 1 Batch 619/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.973, Loss: 0.014 Epoch 1 Batch 620/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.974, Loss: 0.024 Epoch 1 Batch 621/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.972, Loss: 0.030 Epoch 1 Batch 622/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.972, Loss: 0.028 Epoch 1 Batch 623/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.968, Loss: 
0.024 Epoch 1 Batch 624/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.964, Loss: 0.023 Epoch 1 Batch 625/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.963, Loss: 0.021 Epoch 1 Batch 626/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.963, Loss: 0.020 Epoch 1 Batch 627/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.964, Loss: 0.026 Epoch 1 Batch 628/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.967, Loss: 0.031 Epoch 1 Batch 629/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.962, Loss: 0.029 Epoch 1 Batch 630/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.960, Loss: 0.028 Epoch 1 Batch 631/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.964, Loss: 0.021 Epoch 1 Batch 632/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.969, Loss: 0.018 Epoch 1 Batch 633/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.974, Loss: 0.025 Epoch 1 Batch 634/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.979, Loss: 0.019 Epoch 1 Batch 635/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.982, Loss: 0.031 Epoch 1 Batch 636/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.982, Loss: 0.018 Epoch 1 Batch 637/1077 - Train Accuracy: 0.948, Validation Accuracy: 0.982, Loss: 0.025 Epoch 1 Batch 638/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.981, Loss: 0.017 Epoch 1 Batch 639/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.981, Loss: 0.032 Epoch 1 Batch 640/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.976, Loss: 0.026 Epoch 1 Batch 641/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.971, Loss: 0.024 Epoch 1 Batch 642/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.971, Loss: 0.019 Epoch 1 Batch 643/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.971, Loss: 0.020 Epoch 1 Batch 644/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.973, Loss: 0.024 Epoch 1 Batch 645/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.973, Loss: 0.026 Epoch 1 Batch 646/1077 - Train Accuracy: 0.986, 
Validation Accuracy: 0.974, Loss: 0.020 Epoch 1 Batch 647/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.964, Loss: 0.023 Epoch 1 Batch 648/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.966, Loss: 0.015 Epoch 1 Batch 649/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.963, Loss: 0.021 Epoch 1 Batch 650/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.963, Loss: 0.028 Epoch 1 Batch 651/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.963, Loss: 0.025 Epoch 1 Batch 652/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.962, Loss: 0.025 Epoch 1 Batch 653/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.963, Loss: 0.021 Epoch 1 Batch 654/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.955, Loss: 0.014 Epoch 1 Batch 655/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.947, Loss: 0.028 Epoch 1 Batch 656/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.951, Loss: 0.023 Epoch 1 Batch 657/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.951, Loss: 0.019 Epoch 1 Batch 658/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.954, Loss: 0.020 Epoch 1 Batch 659/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.949, Loss: 0.021 Epoch 1 Batch 660/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.949, Loss: 0.023 Epoch 1 Batch 661/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.950, Loss: 0.021 Epoch 1 Batch 662/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.956, Loss: 0.027 Epoch 1 Batch 663/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.967, Loss: 0.022 Epoch 1 Batch 664/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.961, Loss: 0.022 Epoch 1 Batch 665/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.956, Loss: 0.016 Epoch 1 Batch 666/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.956, Loss: 0.033 Epoch 1 Batch 667/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.954, Loss: 0.028 Epoch 1 Batch 668/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.953, Loss: 0.022 Epoch 1 Batch 669/1077 
- Train Accuracy: 0.970, Validation Accuracy: 0.960, Loss: 0.021 Epoch 1 Batch 670/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.964, Loss: 0.019 Epoch 1 Batch 671/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.964, Loss: 0.038 Epoch 1 Batch 672/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.960, Loss: 0.020 Epoch 1 Batch 673/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.960, Loss: 0.027 Epoch 1 Batch 674/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.958, Loss: 0.025 Epoch 1 Batch 675/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.958, Loss: 0.032 Epoch 1 Batch 676/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.958, Loss: 0.021 Epoch 1 Batch 677/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.958, Loss: 0.027 Epoch 1 Batch 678/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.962, Loss: 0.018 Epoch 1 Batch 679/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.963, Loss: 0.021 Epoch 1 Batch 680/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.964, Loss: 0.026 Epoch 1 Batch 681/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.959, Loss: 0.022 Epoch 1 Batch 682/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.959, Loss: 0.024 Epoch 1 Batch 683/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.955, Loss: 0.019 Epoch 1 Batch 684/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.953, Loss: 0.027 Epoch 1 Batch 685/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.958, Loss: 0.034 Epoch 1 Batch 686/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.958, Loss: 0.019 Epoch 1 Batch 687/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.956, Loss: 0.022 Epoch 1 Batch 688/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.961, Loss: 0.020 Epoch 1 Batch 689/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.957, Loss: 0.013 Epoch 1 Batch 690/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.951, Loss: 0.027 Epoch 1 Batch 691/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.955, Loss: 
0.036 Epoch 1 Batch 692/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.964, Loss: 0.017 Epoch 1 Batch 693/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.960, Loss: 0.038 Epoch 1 Batch 694/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.964, Loss: 0.024 Epoch 1 Batch 695/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.962, Loss: 0.019 Epoch 1 Batch 696/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.962, Loss: 0.018 Epoch 1 Batch 697/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.964, Loss: 0.022 Epoch 1 Batch 698/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.969, Loss: 0.017 Epoch 1 Batch 699/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.964, Loss: 0.018 Epoch 1 Batch 700/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.960, Loss: 0.013 Epoch 1 Batch 701/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.966, Loss: 0.024 Epoch 1 Batch 702/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.970, Loss: 0.029 Epoch 1 Batch 703/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.972, Loss: 0.025 Epoch 1 Batch 704/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.967, Loss: 0.035 Epoch 1 Batch 705/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.972, Loss: 0.030 Epoch 1 Batch 706/1077 - Train Accuracy: 0.949, Validation Accuracy: 0.966, Loss: 0.058 Epoch 1 Batch 707/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.971, Loss: 0.027 Epoch 1 Batch 708/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.971, Loss: 0.025 Epoch 1 Batch 709/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.971, Loss: 0.024 Epoch 1 Batch 710/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.975, Loss: 0.016 Epoch 1 Batch 711/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.975, Loss: 0.036 Epoch 1 Batch 712/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.973, Loss: 0.016 Epoch 1 Batch 713/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.973, Loss: 0.017 Epoch 1 Batch 714/1077 - Train Accuracy: 0.973, 
Validation Accuracy: 0.968, Loss: 0.018 Epoch 1 Batch 715/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.968, Loss: 0.021 Epoch 1 Batch 716/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.968, Loss: 0.018 Epoch 1 Batch 717/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.968, Loss: 0.022 Epoch 1 Batch 718/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.968, Loss: 0.020 Epoch 1 Batch 719/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.967, Loss: 0.026 Epoch 1 Batch 720/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.967, Loss: 0.021 Epoch 1 Batch 721/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.965, Loss: 0.022 Epoch 1 Batch 722/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.960, Loss: 0.021 Epoch 1 Batch 723/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.960, Loss: 0.028 Epoch 1 Batch 724/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.960, Loss: 0.021 Epoch 1 Batch 725/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.961, Loss: 0.018 Epoch 1 Batch 726/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.961, Loss: 0.019 Epoch 1 Batch 727/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.961, Loss: 0.019 Epoch 1 Batch 728/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.961, Loss: 0.029 Epoch 1 Batch 729/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.961, Loss: 0.031 Epoch 1 Batch 730/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.963, Loss: 0.028 Epoch 1 Batch 731/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.959, Loss: 0.023 Epoch 1 Batch 732/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.959, Loss: 0.028 Epoch 1 Batch 733/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.959, Loss: 0.026 Epoch 1 Batch 734/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.959, Loss: 0.021 Epoch 1 Batch 735/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.958, Loss: 0.019 Epoch 1 Batch 736/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.959, Loss: 0.017 Epoch 1 Batch 737/1077 
- Train Accuracy: 0.986, Validation Accuracy: 0.959, Loss: 0.028 Epoch 1 Batch 738/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.959, Loss: 0.022 Epoch 1 Batch 739/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.959, Loss: 0.020 Epoch 1 Batch 740/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.959, Loss: 0.020 Epoch 1 Batch 741/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.959, Loss: 0.028 Epoch 1 Batch 742/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.959, Loss: 0.013 Epoch 1 Batch 743/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.954, Loss: 0.028 Epoch 1 Batch 744/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.952, Loss: 0.020 Epoch 1 Batch 745/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.961, Loss: 0.019 Epoch 1 Batch 746/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.964, Loss: 0.015 Epoch 1 Batch 747/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.961, Loss: 0.013 Epoch 1 Batch 748/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.963, Loss: 0.019 Epoch 1 Batch 749/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.958, Loss: 0.019 Epoch 1 Batch 750/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.959, Loss: 0.021 Epoch 1 Batch 751/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.963, Loss: 0.018 Epoch 1 Batch 752/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.958, Loss: 0.019 Epoch 1 Batch 753/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.964, Loss: 0.023 Epoch 1 Batch 754/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.962, Loss: 0.017 Epoch 1 Batch 755/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.961, Loss: 0.036 Epoch 1 Batch 756/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.958, Loss: 0.017 Epoch 1 Batch 757/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.949, Loss: 0.016 Epoch 1 Batch 758/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.949, Loss: 0.014 Epoch 1 Batch 759/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.958, Loss: 
0.025 Epoch 1 Batch 760/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.956, Loss: 0.025 Epoch 1 Batch 761/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.956, Loss: 0.024 Epoch 1 Batch 762/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.956, Loss: 0.018 Epoch 1 Batch 763/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.952, Loss: 0.026 Epoch 1 Batch 764/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.956, Loss: 0.019 Epoch 1 Batch 765/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.952, Loss: 0.030 Epoch 1 Batch 766/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.958, Loss: 0.018 Epoch 1 Batch 767/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.958, Loss: 0.021 Epoch 1 Batch 768/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.959, Loss: 0.014 Epoch 1 Batch 769/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.960, Loss: 0.026 Epoch 1 Batch 770/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.960, Loss: 0.024 Epoch 1 Batch 771/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.968, Loss: 0.018 Epoch 1 Batch 772/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.966, Loss: 0.015 Epoch 1 Batch 773/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.962, Loss: 0.020 Epoch 1 Batch 774/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.962, Loss: 0.025 Epoch 1 Batch 775/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.953, Loss: 0.020 Epoch 1 Batch 776/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.958, Loss: 0.022 Epoch 1 Batch 777/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.962, Loss: 0.018 Epoch 1 Batch 778/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.960, Loss: 0.024 Epoch 1 Batch 779/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.957, Loss: 0.027 Epoch 1 Batch 780/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.959, Loss: 0.038 Epoch 1 Batch 781/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.968, Loss: 0.014 Epoch 1 Batch 782/1077 - Train Accuracy: 0.965, 
Validation Accuracy: 0.968, Loss: 0.016 Epoch 1 Batch 783/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.968, Loss: 0.030 Epoch 1 Batch 784/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.968, Loss: 0.014 Epoch 1 Batch 785/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.968, Loss: 0.018 Epoch 1 Batch 786/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.968, Loss: 0.016 Epoch 1 Batch 787/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.968, Loss: 0.024 Epoch 1 Batch 788/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.968, Loss: 0.023 Epoch 1 Batch 789/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.968, Loss: 0.021 Epoch 1 Batch 790/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.963, Loss: 0.030 Epoch 1 Batch 791/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.966, Loss: 0.015 Epoch 1 Batch 792/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.966, Loss: 0.027 Epoch 1 Batch 793/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.966, Loss: 0.021 Epoch 1 Batch 794/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.975, Loss: 0.015 Epoch 1 Batch 795/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.975, Loss: 0.027 Epoch 1 Batch 796/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.971, Loss: 0.020 Epoch 1 Batch 797/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.975, Loss: 0.020 Epoch 1 Batch 798/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.974, Loss: 0.023 Epoch 1 Batch 799/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.976, Loss: 0.026 Epoch 1 Batch 800/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.965, Loss: 0.019 Epoch 1 Batch 801/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.965, Loss: 0.024 Epoch 1 Batch 802/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.963, Loss: 0.021 Epoch 1 Batch 803/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.963, Loss: 0.020 Epoch 1 Batch 804/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.963, Loss: 0.014 Epoch 1 Batch 805/1077 
- Train Accuracy: 0.973, Validation Accuracy: 0.963, Loss: 0.019 Epoch 1 Batch 806/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.963, Loss: 0.020 Epoch 1 Batch 807/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.962, Loss: 0.017 Epoch 1 Batch 808/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.962, Loss: 0.038 Epoch 1 Batch 809/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.962, Loss: 0.037 Epoch 1 Batch 810/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.960, Loss: 0.014 Epoch 1 Batch 811/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.960, Loss: 0.021 Epoch 1 Batch 812/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.964, Loss: 0.021 Epoch 1 Batch 813/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.960, Loss: 0.022 Epoch 1 Batch 814/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.955, Loss: 0.018 Epoch 1 Batch 815/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.961, Loss: 0.028 Epoch 1 Batch 816/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.961, Loss: 0.023 Epoch 1 Batch 817/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.961, Loss: 0.026 Epoch 1 Batch 818/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.956, Loss: 0.030 Epoch 1 Batch 819/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.956, Loss: 0.021 Epoch 1 Batch 820/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.956, Loss: 0.016 Epoch 1 Batch 821/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.956, Loss: 0.021 Epoch 1 Batch 822/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.956, Loss: 0.018 Epoch 1 Batch 823/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.956, Loss: 0.029 Epoch 1 Batch 824/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.960, Loss: 0.023 Epoch 1 Batch 825/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.960, Loss: 0.009 Epoch 1 Batch 826/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.961, Loss: 0.016 Epoch 1 Batch 827/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.961, Loss: 
0.021 Epoch 1 Batch 828/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.962, Loss: 0.021 Epoch 1 Batch 829/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.959, Loss: 0.037 Epoch 1 Batch 830/1077 - Train Accuracy: 0.946, Validation Accuracy: 0.972, Loss: 0.030 Epoch 1 Batch 831/1077 - Train Accuracy: 0.945, Validation Accuracy: 0.973, Loss: 0.031 Epoch 1 Batch 832/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.962, Loss: 0.021 Epoch 1 Batch 833/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.961, Loss: 0.025 Epoch 1 Batch 834/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.959, Loss: 0.025 Epoch 1 Batch 835/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.959, Loss: 0.021 Epoch 1 Batch 836/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.966, Loss: 0.025 Epoch 1 Batch 837/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.966, Loss: 0.028 Epoch 1 Batch 838/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.966, Loss: 0.017 Epoch 1 Batch 839/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.963, Loss: 0.022 Epoch 1 Batch 840/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.963, Loss: 0.019 Epoch 1 Batch 841/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.966, Loss: 0.028 Epoch 1 Batch 842/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.966, Loss: 0.023 Epoch 1 Batch 843/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.964, Loss: 0.020 Epoch 1 Batch 844/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.964, Loss: 0.017 Epoch 1 Batch 845/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.964, Loss: 0.017 Epoch 1 Batch 846/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.964, Loss: 0.031 Epoch 1 Batch 847/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.970, Loss: 0.028 Epoch 1 Batch 848/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.970, Loss: 0.017 Epoch 1 Batch 849/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.965, Loss: 0.017 Epoch 1 Batch 850/1077 - Train Accuracy: 0.966, 
Validation Accuracy: 0.965, Loss: 0.037 Epoch 1 Batch 851/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.965, Loss: 0.030 Epoch 1 Batch 852/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.967, Loss: 0.034 Epoch 1 Batch 853/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.972, Loss: 0.019 Epoch 1 Batch 854/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.972, Loss: 0.026 Epoch 1 Batch 855/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.972, Loss: 0.017 Epoch 1 Batch 856/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.969, Loss: 0.022 Epoch 1 Batch 857/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.969, Loss: 0.023 Epoch 1 Batch 858/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.965, Loss: 0.013 Epoch 1 Batch 859/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.965, Loss: 0.021 Epoch 1 Batch 860/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.972, Loss: 0.023 Epoch 1 Batch 861/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.974, Loss: 0.018 Epoch 1 Batch 862/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.970, Loss: 0.028 Epoch 1 Batch 863/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.978, Loss: 0.015 Epoch 1 Batch 864/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.977, Loss: 0.023 Epoch 1 Batch 865/1077 - Train Accuracy: 0.943, Validation Accuracy: 0.977, Loss: 0.026 Epoch 1 Batch 866/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.969, Loss: 0.025 Epoch 1 Batch 867/1077 - Train Accuracy: 0.952, Validation Accuracy: 0.974, Loss: 0.058 Epoch 1 Batch 868/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.974, Loss: 0.022 Epoch 1 Batch 869/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.974, Loss: 0.024 Epoch 1 Batch 870/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.966, Loss: 0.025 Epoch 1 Batch 871/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.962, Loss: 0.023 Epoch 1 Batch 872/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.968, Loss: 0.024 Epoch 1 Batch 873/1077 
- Train Accuracy: 0.972, Validation Accuracy: 0.972, Loss: 0.021 [training log condensed: Epoch 1 Batches 874–1075 and Epoch 2 Batches 0–250 of 1077; per-batch train accuracy ranged roughly 0.94–1.00, validation accuracy roughly 0.94–0.99, and loss roughly 0.008–0.051, with validation accuracy holding steady around 0.96–0.98 throughout] Epoch 2 Batch 251/1077 
- Train Accuracy: 0.971, Validation Accuracy: 0.974, Loss: 0.027 Epoch 2 Batch 252/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.979, Loss: 0.021 Epoch 2 Batch 253/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.979, Loss: 0.016 Epoch 2 Batch 254/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.979, Loss: 0.020 Epoch 2 Batch 255/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.979, Loss: 0.023 Epoch 2 Batch 256/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.979, Loss: 0.032 Epoch 2 Batch 257/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.979, Loss: 0.019 Epoch 2 Batch 258/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.982, Loss: 0.019 Epoch 2 Batch 259/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.981, Loss: 0.014 Epoch 2 Batch 260/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.980, Loss: 0.014 Epoch 2 Batch 261/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.975, Loss: 0.017 Epoch 2 Batch 262/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.974, Loss: 0.019 Epoch 2 Batch 263/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.974, Loss: 0.019 Epoch 2 Batch 264/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.975, Loss: 0.021 Epoch 2 Batch 265/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.975, Loss: 0.015 Epoch 2 Batch 266/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.975, Loss: 0.022 Epoch 2 Batch 267/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.975, Loss: 0.012 Epoch 2 Batch 268/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.975, Loss: 0.022 Epoch 2 Batch 269/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.978, Loss: 0.032 Epoch 2 Batch 270/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.978, Loss: 0.024 Epoch 2 Batch 271/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.973, Loss: 0.016 Epoch 2 Batch 272/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.974, Loss: 0.027 Epoch 2 Batch 273/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.968, Loss: 
0.016 Epoch 2 Batch 274/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.968, Loss: 0.018 Epoch 2 Batch 275/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.971, Loss: 0.011 Epoch 2 Batch 276/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.973, Loss: 0.028 Epoch 2 Batch 277/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.973, Loss: 0.016 Epoch 2 Batch 278/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.973, Loss: 0.024 Epoch 2 Batch 279/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.973, Loss: 0.021 Epoch 2 Batch 280/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.973, Loss: 0.023 Epoch 2 Batch 281/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.968, Loss: 0.021 Epoch 2 Batch 282/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.961, Loss: 0.038 Epoch 2 Batch 283/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.954, Loss: 0.017 Epoch 2 Batch 284/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.959, Loss: 0.022 Epoch 2 Batch 285/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.955, Loss: 0.017 Epoch 2 Batch 286/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.954, Loss: 0.023 Epoch 2 Batch 287/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.953, Loss: 0.024 Epoch 2 Batch 288/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.953, Loss: 0.021 Epoch 2 Batch 289/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.967, Loss: 0.020 Epoch 2 Batch 290/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.967, Loss: 0.035 Epoch 2 Batch 291/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.967, Loss: 0.034 Epoch 2 Batch 292/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.967, Loss: 0.021 Epoch 2 Batch 293/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.967, Loss: 0.017 Epoch 2 Batch 294/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.951, Loss: 0.022 Epoch 2 Batch 295/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.961, Loss: 0.020 Epoch 2 Batch 296/1077 - Train Accuracy: 0.978, 
Validation Accuracy: 0.959, Loss: 0.018 Epoch 2 Batch 297/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.961, Loss: 0.020 Epoch 2 Batch 298/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.961, Loss: 0.028 Epoch 2 Batch 299/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.956, Loss: 0.019 Epoch 2 Batch 300/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.953, Loss: 0.012 Epoch 2 Batch 301/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.947, Loss: 0.016 Epoch 2 Batch 302/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.951, Loss: 0.018 Epoch 2 Batch 303/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.951, Loss: 0.023 Epoch 2 Batch 304/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.951, Loss: 0.021 Epoch 2 Batch 305/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.952, Loss: 0.015 Epoch 2 Batch 306/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.961, Loss: 0.027 Epoch 2 Batch 307/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.957, Loss: 0.012 Epoch 2 Batch 308/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.952, Loss: 0.024 Epoch 2 Batch 309/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.952, Loss: 0.016 Epoch 2 Batch 310/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.961, Loss: 0.019 Epoch 2 Batch 311/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.961, Loss: 0.013 Epoch 2 Batch 312/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.956, Loss: 0.021 Epoch 2 Batch 313/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.956, Loss: 0.013 Epoch 2 Batch 314/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.961, Loss: 0.019 Epoch 2 Batch 315/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.956, Loss: 0.016 Epoch 2 Batch 316/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.959, Loss: 0.016 Epoch 2 Batch 317/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.959, Loss: 0.019 Epoch 2 Batch 318/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.959, Loss: 0.016 Epoch 2 Batch 319/1077 
- Train Accuracy: 0.964, Validation Accuracy: 0.957, Loss: 0.023 Epoch 2 Batch 320/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.957, Loss: 0.016 Epoch 2 Batch 321/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.957, Loss: 0.017 Epoch 2 Batch 322/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.955, Loss: 0.018 Epoch 2 Batch 323/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.964, Loss: 0.019 Epoch 2 Batch 324/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.964, Loss: 0.019 Epoch 2 Batch 325/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.960, Loss: 0.016 Epoch 2 Batch 326/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.960, Loss: 0.015 Epoch 2 Batch 327/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.960, Loss: 0.020 Epoch 2 Batch 328/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.960, Loss: 0.026 Epoch 2 Batch 329/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.966, Loss: 0.016 Epoch 2 Batch 330/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.969, Loss: 0.020 Epoch 2 Batch 331/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.964, Loss: 0.018 Epoch 2 Batch 332/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.964, Loss: 0.011 Epoch 2 Batch 333/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.964, Loss: 0.015 Epoch 2 Batch 334/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.964, Loss: 0.014 Epoch 2 Batch 335/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.964, Loss: 0.019 Epoch 2 Batch 336/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.964, Loss: 0.040 Epoch 2 Batch 337/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.968, Loss: 0.019 Epoch 2 Batch 338/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.968, Loss: 0.027 Epoch 2 Batch 339/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.968, Loss: 0.011 Epoch 2 Batch 340/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.968, Loss: 0.019 Epoch 2 Batch 341/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.968, Loss: 
0.019 Epoch 2 Batch 342/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.973, Loss: 0.009 Epoch 2 Batch 343/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.975, Loss: 0.016 Epoch 2 Batch 344/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.975, Loss: 0.018 Epoch 2 Batch 345/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.975, Loss: 0.010 Epoch 2 Batch 346/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.969, Loss: 0.019 Epoch 2 Batch 347/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.969, Loss: 0.008 Epoch 2 Batch 348/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.974, Loss: 0.017 Epoch 2 Batch 349/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.974, Loss: 0.019 Epoch 2 Batch 350/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.974, Loss: 0.017 Epoch 2 Batch 351/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.974, Loss: 0.020 Epoch 2 Batch 352/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.965, Loss: 0.014 Epoch 2 Batch 353/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.952, Loss: 0.021 Epoch 2 Batch 354/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.948, Loss: 0.018 Epoch 2 Batch 355/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.948, Loss: 0.020 Epoch 2 Batch 356/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.948, Loss: 0.014 Epoch 2 Batch 357/1077 - Train Accuracy: 0.950, Validation Accuracy: 0.948, Loss: 0.021 Epoch 2 Batch 358/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.953, Loss: 0.018 Epoch 2 Batch 359/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.958, Loss: 0.016 Epoch 2 Batch 360/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.962, Loss: 0.013 Epoch 2 Batch 361/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.962, Loss: 0.017 Epoch 2 Batch 362/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.962, Loss: 0.020 Epoch 2 Batch 363/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.967, Loss: 0.020 Epoch 2 Batch 364/1077 - Train Accuracy: 0.968, 
Validation Accuracy: 0.967, Loss: 0.024 Epoch 2 Batch 365/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.966, Loss: 0.011 Epoch 2 Batch 366/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.966, Loss: 0.020 Epoch 2 Batch 367/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.966, Loss: 0.016 Epoch 2 Batch 368/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.961, Loss: 0.017 Epoch 2 Batch 369/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.968, Loss: 0.015 Epoch 2 Batch 370/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.968, Loss: 0.019 Epoch 2 Batch 371/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.970, Loss: 0.013 Epoch 2 Batch 372/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.971, Loss: 0.011 Epoch 2 Batch 373/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.971, Loss: 0.012 Epoch 2 Batch 374/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.971, Loss: 0.016 Epoch 2 Batch 375/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.971, Loss: 0.015 Epoch 2 Batch 376/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.966, Loss: 0.021 Epoch 2 Batch 377/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.966, Loss: 0.015 Epoch 2 Batch 378/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.962, Loss: 0.012 Epoch 2 Batch 379/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.963, Loss: 0.025 Epoch 2 Batch 380/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.968, Loss: 0.015 Epoch 2 Batch 381/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.968, Loss: 0.017 Epoch 2 Batch 382/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.974, Loss: 0.025 Epoch 2 Batch 383/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.974, Loss: 0.018 Epoch 2 Batch 384/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.974, Loss: 0.013 Epoch 2 Batch 385/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.973, Loss: 0.020 Epoch 2 Batch 386/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.973, Loss: 0.018 Epoch 2 Batch 387/1077 
- Train Accuracy: 0.989, Validation Accuracy: 0.975, Loss: 0.011 Epoch 2 Batch 388/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.976, Loss: 0.018 Epoch 2 Batch 389/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.976, Loss: 0.024 Epoch 2 Batch 390/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.976, Loss: 0.025 Epoch 2 Batch 391/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.980, Loss: 0.024 Epoch 2 Batch 392/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.983, Loss: 0.018 Epoch 2 Batch 393/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.977, Loss: 0.014 Epoch 2 Batch 394/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.966, Loss: 0.017 Epoch 2 Batch 395/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.971, Loss: 0.016 Epoch 2 Batch 396/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.970, Loss: 0.021 Epoch 2 Batch 397/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.970, Loss: 0.023 Epoch 2 Batch 398/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.972, Loss: 0.016 Epoch 2 Batch 399/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.967, Loss: 0.017 Epoch 2 Batch 400/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.967, Loss: 0.024 Epoch 2 Batch 401/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.962, Loss: 0.015 Epoch 2 Batch 402/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.962, Loss: 0.012 Epoch 2 Batch 403/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.962, Loss: 0.027 Epoch 2 Batch 404/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.962, Loss: 0.019 Epoch 2 Batch 405/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.967, Loss: 0.016 Epoch 2 Batch 406/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.969, Loss: 0.013 Epoch 2 Batch 407/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.969, Loss: 0.020 Epoch 2 Batch 408/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.965, Loss: 0.016 Epoch 2 Batch 409/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.970, Loss: 
0.026 Epoch 2 Batch 410/1077 - Train Accuracy: 0.946, Validation Accuracy: 0.970, Loss: 0.027 Epoch 2 Batch 411/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.970, Loss: 0.019 Epoch 2 Batch 412/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.970, Loss: 0.020 Epoch 2 Batch 413/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.970, Loss: 0.011 Epoch 2 Batch 414/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.970, Loss: 0.021 Epoch 2 Batch 415/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.967, Loss: 0.032 Epoch 2 Batch 416/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.967, Loss: 0.021 Epoch 2 Batch 417/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.961, Loss: 0.038 Epoch 2 Batch 418/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.962, Loss: 0.014 Epoch 2 Batch 419/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.966, Loss: 0.024 Epoch 2 Batch 420/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.960, Loss: 0.014 Epoch 2 Batch 421/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.955, Loss: 0.028 Epoch 2 Batch 422/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.958, Loss: 0.017 Epoch 2 Batch 423/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.958, Loss: 0.028 Epoch 2 Batch 424/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.960, Loss: 0.021 Epoch 2 Batch 425/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.960, Loss: 0.011 Epoch 2 Batch 426/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.960, Loss: 0.019 Epoch 2 Batch 427/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.962, Loss: 0.016 Epoch 2 Batch 428/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.959, Loss: 0.014 Epoch 2 Batch 429/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.963, Loss: 0.010 Epoch 2 Batch 430/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.963, Loss: 0.019 Epoch 2 Batch 431/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.959, Loss: 0.013 Epoch 2 Batch 432/1077 - Train Accuracy: 0.988, 
Validation Accuracy: 0.962, Loss: 0.017 Epoch 2 Batch 433/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.960, Loss: 0.017 Epoch 2 Batch 434/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.961, Loss: 0.011 Epoch 2 Batch 435/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.951, Loss: 0.022 Epoch 2 Batch 436/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.951, Loss: 0.025 Epoch 2 Batch 437/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.949, Loss: 0.009 Epoch 2 Batch 438/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.951, Loss: 0.013 Epoch 2 Batch 439/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.951, Loss: 0.018 Epoch 2 Batch 440/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.951, Loss: 0.016 Epoch 2 Batch 441/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.951, Loss: 0.020 Epoch 2 Batch 442/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.961, Loss: 0.025 Epoch 2 Batch 443/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.958, Loss: 0.012 Epoch 2 Batch 444/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.957, Loss: 0.014 Epoch 2 Batch 445/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.957, Loss: 0.014 Epoch 2 Batch 446/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.957, Loss: 0.014 Epoch 2 Batch 447/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.962, Loss: 0.012 Epoch 2 Batch 448/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.963, Loss: 0.021 Epoch 2 Batch 449/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.961, Loss: 0.018 Epoch 2 Batch 450/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.965, Loss: 0.018 Epoch 2 Batch 451/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.961, Loss: 0.011 Epoch 2 Batch 452/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.960, Loss: 0.026 Epoch 2 Batch 453/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.961, Loss: 0.018 Epoch 2 Batch 454/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.967, Loss: 0.018 Epoch 2 Batch 455/1077 
- Train Accuracy: 0.973, Validation Accuracy: 0.967, Loss: 0.020 Epoch 2 Batch 456/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.968, Loss: 0.016 Epoch 2 Batch 457/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.968, Loss: 0.013 Epoch 2 Batch 458/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.971, Loss: 0.023 Epoch 2 Batch 459/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.971, Loss: 0.021 Epoch 2 Batch 460/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.972, Loss: 0.021 Epoch 2 Batch 461/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.972, Loss: 0.016 Epoch 2 Batch 462/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.971, Loss: 0.016 Epoch 2 Batch 463/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.971, Loss: 0.016 Epoch 2 Batch 464/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.967, Loss: 0.021 Epoch 2 Batch 465/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.967, Loss: 0.020 Epoch 2 Batch 466/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.967, Loss: 0.017 Epoch 2 Batch 467/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.965, Loss: 0.014 Epoch 2 Batch 468/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.965, Loss: 0.012 Epoch 2 Batch 469/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.965, Loss: 0.013 Epoch 2 Batch 470/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.961, Loss: 0.019 Epoch 2 Batch 471/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.961, Loss: 0.013 Epoch 2 Batch 472/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.961, Loss: 0.024 Epoch 2 Batch 473/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.970, Loss: 0.013 Epoch 2 Batch 474/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.969, Loss: 0.018 Epoch 2 Batch 475/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.962, Loss: 0.011 Epoch 2 Batch 476/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.960, Loss: 0.012 Epoch 2 Batch 477/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.960, Loss: 
0.021 Epoch 2 Batch 478/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.956, Loss: 0.014 Epoch 2 Batch 479/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.956, Loss: 0.019 Epoch 2 Batch 480/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.963, Loss: 0.015 Epoch 2 Batch 481/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.961, Loss: 0.015 Epoch 2 Batch 482/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.963, Loss: 0.023 Epoch 2 Batch 483/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.967, Loss: 0.029 Epoch 2 Batch 484/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.967, Loss: 0.030 Epoch 2 Batch 485/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.961, Loss: 0.022 Epoch 2 Batch 486/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.961, Loss: 0.018 Epoch 2 Batch 487/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.966, Loss: 0.012 Epoch 2 Batch 488/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.956, Loss: 0.015 Epoch 2 Batch 489/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.956, Loss: 0.012 Epoch 2 Batch 490/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.952, Loss: 0.018 Epoch 2 Batch 491/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.956, Loss: 0.026 Epoch 2 Batch 492/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.956, Loss: 0.025 Epoch 2 Batch 493/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.956, Loss: 0.014 Epoch 2 Batch 494/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.956, Loss: 0.015 Epoch 2 Batch 495/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.956, Loss: 0.017 Epoch 2 Batch 496/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.958, Loss: 0.018 Epoch 2 Batch 497/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.963, Loss: 0.023 Epoch 2 Batch 498/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.973, Loss: 0.027 Epoch 2 Batch 499/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.975, Loss: 0.014 Epoch 2 Batch 500/1077 - Train Accuracy: 0.977, 
Validation Accuracy: 0.975, Loss: 0.011 Epoch 2 Batch 501/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.975, Loss: 0.013 Epoch 2 Batch 502/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.975, Loss: 0.023 Epoch 2 Batch 503/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.980, Loss: 0.021 Epoch 2 Batch 504/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.978, Loss: 0.015 Epoch 2 Batch 505/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.978, Loss: 0.013 Epoch 2 Batch 506/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.982, Loss: 0.032 Epoch 2 Batch 507/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.982, Loss: 0.018 Epoch 2 Batch 508/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.982, Loss: 0.018 Epoch 2 Batch 509/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.982, Loss: 0.026 Epoch 2 Batch 510/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.977, Loss: 0.017 Epoch 2 Batch 511/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.977, Loss: 0.021 Epoch 2 Batch 512/1077 - Train Accuracy: 0.998, Validation Accuracy: 0.977, Loss: 0.012 Epoch 2 Batch 513/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.972, Loss: 0.022 Epoch 2 Batch 514/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.971, Loss: 0.022 Epoch 2 Batch 515/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.969, Loss: 0.018 Epoch 2 Batch 516/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.965, Loss: 0.018 Epoch 2 Batch 517/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.965, Loss: 0.030 Epoch 2 Batch 518/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.970, Loss: 0.014 Epoch 2 Batch 519/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.971, Loss: 0.018 Epoch 2 Batch 520/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.971, Loss: 0.013 Epoch 2 Batch 521/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.971, Loss: 0.019 Epoch 2 Batch 522/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.971, Loss: 0.027 Epoch 2 Batch 523/1077 
- Train Accuracy: 0.971, Validation Accuracy: 0.971, Loss: 0.017 Epoch 2 Batch 524/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.971, Loss: 0.028 Epoch 2 Batch 525/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.974, Loss: 0.021 Epoch 2 Batch 526/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.968, Loss: 0.019 Epoch 2 Batch 527/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.966, Loss: 0.016 Epoch 2 Batch 528/1077 - Train Accuracy: 0.957, Validation Accuracy: 0.977, Loss: 0.021 Epoch 2 Batch 529/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.972, Loss: 0.019 Epoch 2 Batch 530/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.972, Loss: 0.025 Epoch 2 Batch 531/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.972, Loss: 0.018 Epoch 2 Batch 532/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.972, Loss: 0.028 Epoch 2 Batch 533/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.972, Loss: 0.013 Epoch 2 Batch 534/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.968, Loss: 0.023 Epoch 2 Batch 535/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.968, Loss: 0.017 Epoch 2 Batch 536/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.968, Loss: 0.018 Epoch 2 Batch 537/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.969, Loss: 0.011 Epoch 2 Batch 538/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.972, Loss: 0.014 Epoch 2 Batch 539/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.972, Loss: 0.028 Epoch 2 Batch 540/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.972, Loss: 0.014 Epoch 2 Batch 541/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.972, Loss: 0.017 Epoch 2 Batch 542/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.972, Loss: 0.018 Epoch 2 Batch 543/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.972, Loss: 0.015 Epoch 2 Batch 544/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.972, Loss: 0.017 Epoch 2 Batch 545/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.972, Loss: 
0.013 Epoch 2 Batch 546/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.972, Loss: 0.023 Epoch 2 Batch 547/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.972, Loss: 0.018 Epoch 2 Batch 548/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.973, Loss: 0.026 Epoch 2 Batch 549/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.973, Loss: 0.021 Epoch 2 Batch 550/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.978, Loss: 0.017 Epoch 2 Batch 551/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.978, Loss: 0.017 Epoch 2 Batch 552/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.978, Loss: 0.027 Epoch 2 Batch 553/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.978, Loss: 0.031 Epoch 2 Batch 554/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.978, Loss: 0.013 Epoch 2 Batch 555/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.978, Loss: 0.012 Epoch 2 Batch 556/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.978, Loss: 0.013 Epoch 2 Batch 557/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.978, Loss: 0.017 Epoch 2 Batch 558/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.978, Loss: 0.010 Epoch 2 Batch 559/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.978, Loss: 0.020 Epoch 2 Batch 560/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.976, Loss: 0.018 Epoch 2 Batch 561/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.976, Loss: 0.012 Epoch 2 Batch 562/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.976, Loss: 0.011 Epoch 2 Batch 563/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.976, Loss: 0.019 Epoch 2 Batch 564/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.981, Loss: 0.022 Epoch 2 Batch 565/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.977, Loss: 0.022 Epoch 2 Batch 566/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.977, Loss: 0.016 Epoch 2 Batch 567/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.976, Loss: 0.022 Epoch 2 Batch 568/1077 - Train Accuracy: 0.992, 
Validation Accuracy: 0.971, Loss: 0.020 Epoch 2 Batch 569/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.971, Loss: 0.017 Epoch 2 Batch 570/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.971, Loss: 0.022 Epoch 2 Batch 571/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.975, Loss: 0.015 Epoch 2 Batch 572/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.973, Loss: 0.012 Epoch 2 Batch 573/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.973, Loss: 0.030 Epoch 2 Batch 574/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.973, Loss: 0.024 Epoch 2 Batch 575/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.970, Loss: 0.019 Epoch 2 Batch 576/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.970, Loss: 0.012 Epoch 2 Batch 577/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.970, Loss: 0.018 Epoch 2 Batch 578/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.968, Loss: 0.015 Epoch 2 Batch 579/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.968, Loss: 0.015 Epoch 2 Batch 580/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.968, Loss: 0.012 Epoch 2 Batch 581/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.970, Loss: 0.015 Epoch 2 Batch 582/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.970, Loss: 0.015 Epoch 2 Batch 583/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.970, Loss: 0.021 Epoch 2 Batch 584/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.970, Loss: 0.014 Epoch 2 Batch 585/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.970, Loss: 0.013 Epoch 2 Batch 586/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.970, Loss: 0.011 Epoch 2 Batch 587/1077 - Train Accuracy: 0.955, Validation Accuracy: 0.970, Loss: 0.024 Epoch 2 Batch 588/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.970, Loss: 0.013 Epoch 2 Batch 589/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.977, Loss: 0.011 Epoch 2 Batch 590/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.974, Loss: 0.014 Epoch 2 Batch 591/1077 
- Train Accuracy: 0.971, Validation Accuracy: 0.974, Loss: 0.023 Epoch 2 Batch 592/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.974, Loss: 0.018 Epoch 2 Batch 593/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.974, Loss: 0.024 Epoch 2 Batch 594/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.974, Loss: 0.028 Epoch 2 Batch 595/1077 - Train Accuracy: 0.995, Validation Accuracy: 0.974, Loss: 0.013 Epoch 2 Batch 596/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.974, Loss: 0.014 Epoch 2 Batch 597/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.974, Loss: 0.010 Epoch 2 Batch 598/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.972, Loss: 0.015 Epoch 2 Batch 599/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.972, Loss: 0.027 Epoch 2 Batch 600/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.968, Loss: 0.015 Epoch 2 Batch 601/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.968, Loss: 0.015 Epoch 2 Batch 602/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.972, Loss: 0.021 Epoch 2 Batch 603/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.971, Loss: 0.012 Epoch 2 Batch 604/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.971, Loss: 0.028 Epoch 2 Batch 605/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.977, Loss: 0.023 Epoch 2 Batch 606/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.977, Loss: 0.017 Epoch 2 Batch 607/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.977, Loss: 0.021 Epoch 2 Batch 608/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.977, Loss: 0.016 Epoch 2 Batch 609/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.977, Loss: 0.014 Epoch 2 Batch 610/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.977, Loss: 0.018 Epoch 2 Batch 611/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.982, Loss: 0.018 Epoch 2 Batch 612/1077 - Train Accuracy: 0.995, Validation Accuracy: 0.977, Loss: 0.012 Epoch 2 Batch 613/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.981, Loss: 
0.019 Epoch 2 Batch 614/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.981, Loss: 0.013 Epoch 2 Batch 615/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.981, Loss: 0.008 Epoch 2 Batch 616/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.981, Loss: 0.016 Epoch 2 Batch 617/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.981, Loss: 0.015 Epoch 2 Batch 618/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.981, Loss: 0.015 Epoch 2 Batch 619/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.981, Loss: 0.009 Epoch 2 Batch 620/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.981, Loss: 0.022 Epoch 2 Batch 621/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.981, Loss: 0.025 Epoch 2 Batch 622/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.981, Loss: 0.020 Epoch 2 Batch 623/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.981, Loss: 0.019 Epoch 2 Batch 624/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.981, Loss: 0.013 Epoch 2 Batch 625/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.981, Loss: 0.014 Epoch 2 Batch 626/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.981, Loss: 0.019 Epoch 2 Batch 627/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.981, Loss: 0.022 Epoch 2 Batch 628/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.981, Loss: 0.032 Epoch 2 Batch 629/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.970, Loss: 0.011 Epoch 2 Batch 630/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.970, Loss: 0.011 Epoch 2 Batch 631/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.965, Loss: 0.012 Epoch 2 Batch 632/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.965, Loss: 0.015 Epoch 2 Batch 633/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.969, Loss: 0.015 Epoch 2 Batch 634/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.974, Loss: 0.015 Epoch 2 Batch 635/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.979, Loss: 0.021 Epoch 2 Batch 636/1077 - Train Accuracy: 0.988, 
Validation Accuracy: 0.979, Loss: 0.015 Epoch 2 Batch 637/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.981, Loss: 0.018 Epoch 2 Batch 638/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.979, Loss: 0.013 Epoch 2 Batch 639/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.979, Loss: 0.022 Epoch 2 Batch 640/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.979, Loss: 0.012 Epoch 2 Batch 641/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.974, Loss: 0.020 Epoch 2 Batch 642/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.974, Loss: 0.014 Epoch 2 Batch 643/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.979, Loss: 0.016 Epoch 2 Batch 644/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.981, Loss: 0.015 Epoch 2 Batch 645/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.981, Loss: 0.023 Epoch 2 Batch 646/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.978, Loss: 0.018 Epoch 2 Batch 647/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.978, Loss: 0.015 Epoch 2 Batch 648/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.975, Loss: 0.011 Epoch 2 Batch 649/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.975, Loss: 0.019 Epoch 2 Batch 650/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.975, Loss: 0.015 Epoch 2 Batch 651/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.975, Loss: 0.016 Epoch 2 Batch 652/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.975, Loss: 0.020 Epoch 2 Batch 653/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.970, Loss: 0.016 Epoch 2 Batch 654/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.972, Loss: 0.011 Epoch 2 Batch 655/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.972, Loss: 0.021 Epoch 2 Batch 656/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.971, Loss: 0.011 Epoch 2 Batch 657/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.969, Loss: 0.016 Epoch 2 Batch 658/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.971, Loss: 0.016 Epoch 2 Batch 659/1077 
- Train Accuracy: 0.985, Validation Accuracy: 0.971, Loss: 0.014 Epoch 2 Batch 660/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.966, Loss: 0.011 Epoch 2 Batch 661/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.968, Loss: 0.016 Epoch 2 Batch 662/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.968, Loss: 0.019 Epoch 2 Batch 663/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.969, Loss: 0.013 Epoch 2 Batch 664/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.969, Loss: 0.017 Epoch 2 Batch 665/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.973, Loss: 0.011 Epoch 2 Batch 666/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.973, Loss: 0.015 Epoch 2 Batch 667/1077 - Train Accuracy: 0.995, Validation Accuracy: 0.977, Loss: 0.019 Epoch 2 Batch 668/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.977, Loss: 0.016 Epoch 2 Batch 669/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.978, Loss: 0.013 Epoch 2 Batch 670/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.978, Loss: 0.015 Epoch 2 Batch 671/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.978, Loss: 0.027 Epoch 2 Batch 672/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.977, Loss: 0.019 Epoch 2 Batch 673/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.978, Loss: 0.022 Epoch 2 Batch 674/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.973, Loss: 0.015 Epoch 2 Batch 675/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.973, Loss: 0.020 Epoch 2 Batch 676/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.973, Loss: 0.013 Epoch 2 Batch 677/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.978, Loss: 0.013 Epoch 2 Batch 678/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.983, Loss: 0.017 Epoch 2 Batch 679/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.977, Loss: 0.013 Epoch 2 Batch 680/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.977, Loss: 0.014 Epoch 2 Batch 681/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.977, Loss: 
0.014 Epoch 2 Batch 682/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.967, Loss: 0.019 Epoch 2 Batch 683/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.962, Loss: 0.013 Epoch 2 Batch 684/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.962, Loss: 0.021 Epoch 2 Batch 685/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.962, Loss: 0.030 Epoch 2 Batch 686/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.962, Loss: 0.015 Epoch 2 Batch 687/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.962, Loss: 0.014 Epoch 2 Batch 688/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.967, Loss: 0.012 Epoch 2 Batch 689/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.972, Loss: 0.009 Epoch 2 Batch 690/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.972, Loss: 0.017 Epoch 2 Batch 691/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.968, Loss: 0.019 Epoch 2 Batch 692/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.968, Loss: 0.013 Epoch 2 Batch 693/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.968, Loss: 0.018 Epoch 2 Batch 694/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.966, Loss: 0.016 Epoch 2 Batch 695/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.962, Loss: 0.014 Epoch 2 Batch 696/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.956, Loss: 0.011 Epoch 2 Batch 697/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.956, Loss: 0.018 Epoch 2 Batch 698/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.956, Loss: 0.014 Epoch 2 Batch 699/1077 - Train Accuracy: 0.996, Validation Accuracy: 0.956, Loss: 0.009 Epoch 2 Batch 700/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.957, Loss: 0.011 Epoch 2 Batch 701/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.955, Loss: 0.015 Epoch 2 Batch 702/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.959, Loss: 0.020 Epoch 2 Batch 703/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.959, Loss: 0.022 Epoch 2 Batch 704/1077 - Train Accuracy: 0.956, 
Validation Accuracy: 0.963, Loss: 0.031 Epoch 2 Batch 705/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.968, Loss: 0.018 Epoch 2 Batch 706/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.967, Loss: 0.044 Epoch 2 Batch 707/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.967, Loss: 0.021 Epoch 2 Batch 708/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.970, Loss: 0.014 Epoch 2 Batch 709/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.968, Loss: 0.020 Epoch 2 Batch 710/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.968, Loss: 0.014 Epoch 2 Batch 711/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.968, Loss: 0.028 Epoch 2 Batch 712/1077 - Train Accuracy: 0.995, Validation Accuracy: 0.966, Loss: 0.011 Epoch 2 Batch 713/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.966, Loss: 0.011 Epoch 2 Batch 714/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.961, Loss: 0.014 Epoch 2 Batch 715/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.961, Loss: 0.023 Epoch 2 Batch 716/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.962, Loss: 0.012 Epoch 2 Batch 717/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.966, Loss: 0.012 Epoch 2 Batch 718/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.970, Loss: 0.019 Epoch 2 Batch 719/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.970, Loss: 0.015 Epoch 2 Batch 720/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.971, Loss: 0.019 Epoch 2 Batch 721/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.977, Loss: 0.017 Epoch 2 Batch 722/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.977, Loss: 0.015 Epoch 2 Batch 723/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.977, Loss: 0.022 Epoch 2 Batch 724/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.977, Loss: 0.020 Epoch 2 Batch 725/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.977, Loss: 0.013 Epoch 2 Batch 726/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.977, Loss: 0.010 Epoch 2 Batch 727/1077 
- Train Accuracy: 0.980, Validation Accuracy: 0.965, Loss: 0.018 Epoch 2 Batch 728/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.965, Loss: 0.021 Epoch 2 Batch 729/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.962, Loss: 0.021 Epoch 2 Batch 730/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.962, Loss: 0.020 Epoch 2 Batch 731/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.962, Loss: 0.012 Epoch 2 Batch 732/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.962, Loss: 0.020 Epoch 2 Batch 733/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.964, Loss: 0.019 Epoch 2 Batch 734/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.964, Loss: 0.016 Epoch 2 Batch 735/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.964, Loss: 0.009 Epoch 2 Batch 736/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.964, Loss: 0.015 Epoch 2 Batch 737/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.964, Loss: 0.016 Epoch 2 Batch 738/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.964, Loss: 0.013 Epoch 2 Batch 739/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.964, Loss: 0.018 Epoch 2 Batch 740/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.960, Loss: 0.019 Epoch 2 Batch 741/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.964, Loss: 0.020 Epoch 2 Batch 742/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.964, Loss: 0.014 Epoch 2 Batch 743/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.964, Loss: 0.015 Epoch 2 Batch 744/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.962, Loss: 0.013 Epoch 2 Batch 745/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.967, Loss: 0.017 Epoch 2 Batch 746/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.971, Loss: 0.011 Epoch 2 Batch 747/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.965, Loss: 0.010 Epoch 2 Batch 748/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.965, Loss: 0.014 Epoch 2 Batch 749/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.961, Loss: 
0.014 Epoch 2 Batch 750/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.959, Loss: 0.020 Epoch 2 Batch 751/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.959, Loss: 0.014 Epoch 2 Batch 752/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.955, Loss: 0.015 Epoch 2 Batch 753/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.961, Loss: 0.017 Epoch 2 Batch 754/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.961, Loss: 0.013 Epoch 2 Batch 755/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.961, Loss: 0.027 Epoch 2 Batch 756/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.961, Loss: 0.013 Epoch 2 Batch 757/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.963, Loss: 0.014 Epoch 2 Batch 758/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.963, Loss: 0.009 Epoch 2 Batch 759/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.967, Loss: 0.022 Epoch 2 Batch 760/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.972, Loss: 0.017 Epoch 2 Batch 761/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.976, Loss: 0.020 Epoch 2 Batch 762/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.976, Loss: 0.012 Epoch 2 Batch 763/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.976, Loss: 0.017 Epoch 2 Batch 764/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.971, Loss: 0.017 Epoch 2 Batch 765/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.966, Loss: 0.022 Epoch 2 Batch 766/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.966, Loss: 0.011 Epoch 2 Batch 767/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.969, Loss: 0.016 Epoch 2 Batch 768/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.970, Loss: 0.012 Epoch 2 Batch 769/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.975, Loss: 0.014 Epoch 2 Batch 770/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.977, Loss: 0.013 Epoch 2 Batch 771/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.972, Loss: 0.013 Epoch 2 Batch 772/1077 - Train Accuracy: 0.988, 
Validation Accuracy: 0.974, Loss: 0.008 Epoch 2 Batch 773/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.972, Loss: 0.017 Epoch 2 Batch 774/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.966, Loss: 0.015 Epoch 2 Batch 775/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.966, Loss: 0.013 Epoch 2 Batch 776/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.966, Loss: 0.012 Epoch 2 Batch 777/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.966, Loss: 0.009 Epoch 2 Batch 778/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.966, Loss: 0.019 Epoch 2 Batch 779/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.961, Loss: 0.018 Epoch 2 Batch 780/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.961, Loss: 0.022 Epoch 2 Batch 781/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.962, Loss: 0.009 Epoch 2 Batch 782/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.961, Loss: 0.017 Epoch 2 Batch 783/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.956, Loss: 0.022 Epoch 2 Batch 784/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.956, Loss: 0.010 Epoch 2 Batch 785/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.954, Loss: 0.011 Epoch 2 Batch 786/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.950, Loss: 0.012 Epoch 2 Batch 787/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.950, Loss: 0.015 Epoch 2 Batch 788/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.947, Loss: 0.016 Epoch 2 Batch 789/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.952, Loss: 0.018 Epoch 2 Batch 790/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.953, Loss: 0.025 Epoch 2 Batch 791/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.963, Loss: 0.011 Epoch 2 Batch 792/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.970, Loss: 0.019 Epoch 2 Batch 793/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.970, Loss: 0.014 Epoch 2 Batch 794/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.970, Loss: 0.016 Epoch 2 Batch 795/1077 
- Train Accuracy: 0.978, Validation Accuracy: 0.969, Loss: 0.017 Epoch 2 Batch 796/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.967, Loss: 0.015 Epoch 2 Batch 797/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.972, Loss: 0.019 Epoch 2 Batch 798/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.979, Loss: 0.020 Epoch 2 Batch 799/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.974, Loss: 0.017 Epoch 2 Batch 800/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.974, Loss: 0.011 Epoch 2 Batch 801/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.974, Loss: 0.015 Epoch 2 Batch 802/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.974, Loss: 0.017 Epoch 2 Batch 803/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.974, Loss: 0.023 Epoch 2 Batch 804/1077 - Train Accuracy: 0.997, Validation Accuracy: 0.974, Loss: 0.008 Epoch 2 Batch 805/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.974, Loss: 0.014 Epoch 2 Batch 806/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.974, Loss: 0.015 Epoch 2 Batch 807/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.974, Loss: 0.011 Epoch 2 Batch 808/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.970, Loss: 0.030 Epoch 2 Batch 809/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.970, Loss: 0.019 Epoch 2 Batch 810/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.968, Loss: 0.010 Epoch 2 Batch 811/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.968, Loss: 0.019 Epoch 2 Batch 812/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.966, Loss: 0.015 Epoch 2 Batch 813/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.966, Loss: 0.014 Epoch 2 Batch 814/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.962, Loss: 0.015 Epoch 2 Batch 815/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.962, Loss: 0.017 Epoch 2 Batch 816/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.962, Loss: 0.016 Epoch 2 Batch 817/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.962, Loss: 
0.019 Epoch 2 Batch 818/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.966, Loss: 0.028 Epoch 2 Batch 819/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.966, Loss: 0.016 Epoch 2 Batch 820/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.968, Loss: 0.011 Epoch 2 Batch 821/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.968, Loss: 0.013 Epoch 2 Batch 822/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.968, Loss: 0.010 Epoch 2 Batch 823/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.968, Loss: 0.019 Epoch 2 Batch 824/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.968, Loss: 0.020 Epoch 2 Batch 825/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.969, Loss: 0.009 Epoch 2 Batch 826/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.969, Loss: 0.011 Epoch 2 Batch 827/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.969, Loss: 0.018 Epoch 2 Batch 828/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.969, Loss: 0.015 Epoch 2 Batch 829/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.969, Loss: 0.030 Epoch 2 Batch 830/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.972, Loss: 0.025 Epoch 2 Batch 831/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.967, Loss: 0.022 Epoch 2 Batch 832/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.972, Loss: 0.013 Epoch 2 Batch 833/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.969, Loss: 0.020 Epoch 2 Batch 834/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.969, Loss: 0.017 Epoch 2 Batch 835/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.963, Loss: 0.020 Epoch 2 Batch 836/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.963, Loss: 0.010 Epoch 2 Batch 837/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.963, Loss: 0.025 Epoch 2 Batch 838/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.963, Loss: 0.020 Epoch 2 Batch 839/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.963, Loss: 0.016 Epoch 2 Batch 840/1077 - Train Accuracy: 0.982, 
Validation Accuracy: 0.967, Loss: 0.016 Epoch 2 Batch 841/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.966, Loss: 0.022 Epoch 2 Batch 842/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.966, Loss: 0.014 Epoch 2 Batch 843/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.966, Loss: 0.015 Epoch 2 Batch 844/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.966, Loss: 0.013 Epoch 2 Batch 845/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.963, Loss: 0.016 Epoch 2 Batch 846/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.963, Loss: 0.021 Epoch 2 Batch 847/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.969, Loss: 0.019 Epoch 2 Batch 848/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.968, Loss: 0.018 Epoch 2 Batch 849/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.966, Loss: 0.013 Epoch 2 Batch 850/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.966, Loss: 0.025 Epoch 2 Batch 851/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.960, Loss: 0.022 Epoch 2 Batch 852/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.965, Loss: 0.029 Epoch 2 Batch 853/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.968, Loss: 0.018 Epoch 2 Batch 854/1077 - Train Accuracy: 0.953, Validation Accuracy: 0.968, Loss: 0.020 Epoch 2 Batch 855/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.968, Loss: 0.014 Epoch 2 Batch 856/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.968, Loss: 0.023 Epoch 2 Batch 857/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.963, Loss: 0.020 Epoch 2 Batch 858/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.970, Loss: 0.010 Epoch 2 Batch 859/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.970, Loss: 0.017 Epoch 2 Batch 860/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.962, Loss: 0.012 Epoch 2 Batch 861/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.966, Loss: 0.011 Epoch 2 Batch 862/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.970, Loss: 0.020 Epoch 2 Batch 863/1077 
- Train Accuracy: 0.966, Validation Accuracy: 0.972, Loss: 0.015 Epoch 2 Batch 864/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.972, Loss: 0.015 Epoch 2 Batch 865/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.972, Loss: 0.018 Epoch 2 Batch 866/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.967, Loss: 0.021 Epoch 2 Batch 867/1077 - Train Accuracy: 0.946, Validation Accuracy: 0.969, Loss: 0.041 Epoch 2 Batch 868/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.967, Loss: 0.020 Epoch 2 Batch 869/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.962, Loss: 0.012 Epoch 2 Batch 870/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.954, Loss: 0.019 Epoch 2 Batch 871/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.954, Loss: 0.011 Epoch 2 Batch 872/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.949, Loss: 0.017 Epoch 2 Batch 873/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.954, Loss: 0.015 Epoch 2 Batch 874/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.955, Loss: 0.023 Epoch 2 Batch 875/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.952, Loss: 0.018 Epoch 2 Batch 876/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.947, Loss: 0.016 Epoch 2 Batch 877/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.954, Loss: 0.010 Epoch 2 Batch 878/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.951, Loss: 0.016 Epoch 2 Batch 879/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.951, Loss: 0.011 Epoch 2 Batch 880/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.951, Loss: 0.023 Epoch 2 Batch 881/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.951, Loss: 0.015 Epoch 2 Batch 882/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.951, Loss: 0.019 Epoch 2 Batch 883/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.951, Loss: 0.017 Epoch 2 Batch 884/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.962, Loss: 0.018 Epoch 2 Batch 885/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.962, Loss: 
0.013 Epoch 2 Batch 886/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.966, Loss: 0.018 Epoch 2 Batch 887/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.966, Loss: 0.022 Epoch 2 Batch 888/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.966, Loss: 0.008 Epoch 2 Batch 889/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.966, Loss: 0.015 Epoch 2 Batch 890/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.964, Loss: 0.012 Epoch 2 Batch 891/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.963, Loss: 0.012 Epoch 2 Batch 892/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.963, Loss: 0.016 Epoch 2 Batch 893/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.962, Loss: 0.018 Epoch 2 Batch 894/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.962, Loss: 0.013 Epoch 2 Batch 895/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.957, Loss: 0.011 Epoch 2 Batch 896/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.957, Loss: 0.021 Epoch 2 Batch 897/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.957, Loss: 0.019 Epoch 2 Batch 898/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.962, Loss: 0.015 Epoch 2 Batch 899/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.962, Loss: 0.016 Epoch 2 Batch 900/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.967, Loss: 0.021 Epoch 2 Batch 901/1077 - Train Accuracy: 0.951, Validation Accuracy: 0.967, Loss: 0.035 Epoch 2 Batch 902/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.967, Loss: 0.019 Epoch 2 Batch 903/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.967, Loss: 0.016 Epoch 2 Batch 904/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.967, Loss: 0.011 Epoch 2 Batch 905/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.967, Loss: 0.014 Epoch 2 Batch 906/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.967, Loss: 0.015 Epoch 2 Batch 907/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.967, Loss: 0.014 Epoch 2 Batch 908/1077 - Train Accuracy: 0.973, 
Validation Accuracy: 0.967, Loss: 0.016 Epoch 2 Batch 909/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.971, Loss: 0.016 Epoch 2 Batch 910/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.970, Loss: 0.016 Epoch 2 Batch 911/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.961, Loss: 0.016 Epoch 2 Batch 912/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.961, Loss: 0.017 Epoch 2 Batch 913/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.961, Loss: 0.025 Epoch 2 Batch 914/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.961, Loss: 0.037 Epoch 2 Batch 915/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.961, Loss: 0.009 Epoch 2 Batch 916/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.961, Loss: 0.018 Epoch 2 Batch 917/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.962, Loss: 0.012 Epoch 2 Batch 918/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.962, Loss: 0.012 Epoch 2 Batch 919/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.962, Loss: 0.012 Epoch 2 Batch 920/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.962, Loss: 0.014 Epoch 2 Batch 921/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.962, Loss: 0.020 Epoch 2 Batch 922/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.962, Loss: 0.018 Epoch 2 Batch 923/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.961, Loss: 0.015 Epoch 2 Batch 924/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.961, Loss: 0.019 Epoch 2 Batch 925/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.961, Loss: 0.012 Epoch 2 Batch 926/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.963, Loss: 0.018 Epoch 2 Batch 927/1077 - Train Accuracy: 0.954, Validation Accuracy: 0.958, Loss: 0.024 Epoch 2 Batch 928/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.957, Loss: 0.015 Epoch 2 Batch 929/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.956, Loss: 0.019 Epoch 2 Batch 930/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.956, Loss: 0.014 Epoch 2 Batch 931/1077 
- Train Accuracy: 0.991, Validation Accuracy: 0.956, Loss: 0.013 Epoch 2 Batch 932/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.950, Loss: 0.011 Epoch 2 Batch 933/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.955, Loss: 0.017 Epoch 2 Batch 934/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.958, Loss: 0.015 Epoch 2 Batch 935/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.958, Loss: 0.010 Epoch 2 Batch 936/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.962, Loss: 0.015 Epoch 2 Batch 937/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.962, Loss: 0.017 Epoch 2 Batch 938/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.956, Loss: 0.018 Epoch 2 Batch 939/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.961, Loss: 0.017 Epoch 2 Batch 940/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.961, Loss: 0.013 Epoch 2 Batch 941/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.961, Loss: 0.022 Epoch 2 Batch 942/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.961, Loss: 0.017 Epoch 2 Batch 943/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.962, Loss: 0.012 Epoch 2 Batch 944/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.961, Loss: 0.015 Epoch 2 Batch 945/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.961, Loss: 0.010 Epoch 2 Batch 946/1077 - Train Accuracy: 0.997, Validation Accuracy: 0.961, Loss: 0.008 Epoch 2 Batch 947/1077 - Train Accuracy: 0.995, Validation Accuracy: 0.961, Loss: 0.012 Epoch 2 Batch 948/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.961, Loss: 0.011 Epoch 2 Batch 949/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.971, Loss: 0.017 Epoch 2 Batch 950/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.971, Loss: 0.011 Epoch 2 Batch 951/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.971, Loss: 0.012 Epoch 2 Batch 952/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.966, Loss: 0.012 Epoch 2 Batch 953/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.969, Loss: 
0.009 Epoch 2 Batch 954/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.969, Loss: 0.012 Epoch 2 Batch 955/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.969, Loss: 0.019 Epoch 2 Batch 956/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.964, Loss: 0.021 Epoch 2 Batch 957/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.964, Loss: 0.011 Epoch 2 Batch 958/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.969, Loss: 0.016 Epoch 2 Batch 959/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.975, Loss: 0.021 Epoch 2 Batch 960/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.972, Loss: 0.013 Epoch 2 Batch 961/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.967, Loss: 0.010 Epoch 2 Batch 962/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.963, Loss: 0.019 Epoch 2 Batch 963/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.956, Loss: 0.024 Epoch 2 Batch 964/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.956, Loss: 0.019 Epoch 2 Batch 965/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.959, Loss: 0.022 Epoch 2 Batch 966/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.964, Loss: 0.010 Epoch 2 Batch 967/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.968, Loss: 0.011 Epoch 2 Batch 968/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.967, Loss: 0.015 Epoch 2 Batch 969/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.967, Loss: 0.016 Epoch 2 Batch 970/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.967, Loss: 0.019 Epoch 2 Batch 971/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.967, Loss: 0.022 Epoch 2 Batch 972/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.967, Loss: 0.020 Epoch 2 Batch 973/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.968, Loss: 0.013 Epoch 2 Batch 974/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.968, Loss: 0.011 Epoch 2 Batch 975/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.968, Loss: 0.012 Epoch 2 Batch 976/1077 - Train Accuracy: 0.987, 
Validation Accuracy: 0.968, Loss: 0.015 Epoch 2 Batch 977/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.967, Loss: 0.007
[... training log truncated: Epoch 2 batches 978–1075 and Epoch 3 batches 0–353 of 1077; train accuracy ranged ~0.95–0.996, validation accuracy ~0.954–0.984, loss ~0.006–0.043 ...]
Epoch 3 Batch 354/1077 - Train Accuracy: 0.980,
Validation Accuracy: 0.966, Loss: 0.012 Epoch 3 Batch 355/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.972, Loss: 0.012 Epoch 3 Batch 356/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.972, Loss: 0.015 Epoch 3 Batch 357/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.970, Loss: 0.014 Epoch 3 Batch 358/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.970, Loss: 0.016 Epoch 3 Batch 359/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.974, Loss: 0.015 Epoch 3 Batch 360/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.974, Loss: 0.009 Epoch 3 Batch 361/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.974, Loss: 0.014 Epoch 3 Batch 362/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.974, Loss: 0.013 Epoch 3 Batch 363/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.974, Loss: 0.015 Epoch 3 Batch 364/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.974, Loss: 0.020 Epoch 3 Batch 365/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.974, Loss: 0.006 Epoch 3 Batch 366/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.974, Loss: 0.009 Epoch 3 Batch 367/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.970, Loss: 0.009 Epoch 3 Batch 368/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.970, Loss: 0.016 Epoch 3 Batch 369/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.977, Loss: 0.012 Epoch 3 Batch 370/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.977, Loss: 0.019 Epoch 3 Batch 371/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.977, Loss: 0.015 Epoch 3 Batch 372/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.977, Loss: 0.008 Epoch 3 Batch 373/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.979, Loss: 0.009 Epoch 3 Batch 374/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.979, Loss: 0.011 Epoch 3 Batch 375/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.979, Loss: 0.013 Epoch 3 Batch 376/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.979, Loss: 0.010 Epoch 3 Batch 377/1077 
- Train Accuracy: 0.973, Validation Accuracy: 0.979, Loss: 0.011 Epoch 3 Batch 378/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.979, Loss: 0.010 Epoch 3 Batch 379/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.979, Loss: 0.017 Epoch 3 Batch 380/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.979, Loss: 0.013 Epoch 3 Batch 381/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.979, Loss: 0.015 Epoch 3 Batch 382/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.979, Loss: 0.019 Epoch 3 Batch 383/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.979, Loss: 0.015 Epoch 3 Batch 384/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.976, Loss: 0.009 Epoch 3 Batch 385/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.976, Loss: 0.014 Epoch 3 Batch 386/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.976, Loss: 0.013 Epoch 3 Batch 387/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.976, Loss: 0.007 Epoch 3 Batch 388/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.976, Loss: 0.013 Epoch 3 Batch 389/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.976, Loss: 0.017 Epoch 3 Batch 390/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.976, Loss: 0.020 Epoch 3 Batch 391/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.976, Loss: 0.020 Epoch 3 Batch 392/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.976, Loss: 0.017 Epoch 3 Batch 393/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.977, Loss: 0.011 Epoch 3 Batch 394/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.977, Loss: 0.011 Epoch 3 Batch 395/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.982, Loss: 0.012 Epoch 3 Batch 396/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.982, Loss: 0.013 Epoch 3 Batch 397/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.982, Loss: 0.014 Epoch 3 Batch 398/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.982, Loss: 0.014 Epoch 3 Batch 399/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.981, Loss: 
0.021 Epoch 3 Batch 400/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.981, Loss: 0.014 Epoch 3 Batch 401/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.975, Loss: 0.009 Epoch 3 Batch 402/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.975, Loss: 0.008 Epoch 3 Batch 403/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.968, Loss: 0.020 Epoch 3 Batch 404/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.968, Loss: 0.013 Epoch 3 Batch 405/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.973, Loss: 0.012 Epoch 3 Batch 406/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.973, Loss: 0.010 Epoch 3 Batch 407/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.973, Loss: 0.014 Epoch 3 Batch 408/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.973, Loss: 0.020 Epoch 3 Batch 409/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.973, Loss: 0.014 Epoch 3 Batch 410/1077 - Train Accuracy: 0.956, Validation Accuracy: 0.973, Loss: 0.023 Epoch 3 Batch 411/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.973, Loss: 0.013 Epoch 3 Batch 412/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.971, Loss: 0.016 Epoch 3 Batch 413/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.973, Loss: 0.007 Epoch 3 Batch 414/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.973, Loss: 0.010 Epoch 3 Batch 415/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.977, Loss: 0.028 Epoch 3 Batch 416/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.984, Loss: 0.016 Epoch 3 Batch 417/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.984, Loss: 0.027 Epoch 3 Batch 418/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.984, Loss: 0.009 Epoch 3 Batch 419/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.984, Loss: 0.017 Epoch 3 Batch 420/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.986, Loss: 0.011 Epoch 3 Batch 421/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.986, Loss: 0.020 Epoch 3 Batch 422/1077 - Train Accuracy: 0.986, 
Validation Accuracy: 0.986, Loss: 0.009 Epoch 3 Batch 423/1077 - Train Accuracy: 0.963, Validation Accuracy: 0.981, Loss: 0.019 Epoch 3 Batch 424/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.976, Loss: 0.013 Epoch 3 Batch 425/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.976, Loss: 0.010 Epoch 3 Batch 426/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.981, Loss: 0.015 Epoch 3 Batch 427/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.979, Loss: 0.012 Epoch 3 Batch 428/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.975, Loss: 0.012 Epoch 3 Batch 429/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.975, Loss: 0.006 Epoch 3 Batch 430/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.975, Loss: 0.017 Epoch 3 Batch 431/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.975, Loss: 0.010 Epoch 3 Batch 432/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.973, Loss: 0.014 Epoch 3 Batch 433/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.973, Loss: 0.016 Epoch 3 Batch 434/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.973, Loss: 0.008 Epoch 3 Batch 435/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.973, Loss: 0.017 Epoch 3 Batch 436/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.968, Loss: 0.016 Epoch 3 Batch 437/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.964, Loss: 0.009 Epoch 3 Batch 438/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.964, Loss: 0.009 Epoch 3 Batch 439/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.966, Loss: 0.010 Epoch 3 Batch 440/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.966, Loss: 0.011 Epoch 3 Batch 441/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.966, Loss: 0.015 Epoch 3 Batch 442/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.960, Loss: 0.018 Epoch 3 Batch 443/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.961, Loss: 0.012 Epoch 3 Batch 444/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.957, Loss: 0.011 Epoch 3 Batch 445/1077 
- Train Accuracy: 0.991, Validation Accuracy: 0.957, Loss: 0.008 Epoch 3 Batch 446/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.957, Loss: 0.010 Epoch 3 Batch 447/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.957, Loss: 0.013 Epoch 3 Batch 448/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.957, Loss: 0.019 Epoch 3 Batch 449/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.957, Loss: 0.010 Epoch 3 Batch 450/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.957, Loss: 0.016 Epoch 3 Batch 451/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.957, Loss: 0.010 Epoch 3 Batch 452/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.963, Loss: 0.011 Epoch 3 Batch 453/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.965, Loss: 0.013 Epoch 3 Batch 454/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.965, Loss: 0.017 Epoch 3 Batch 455/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.965, Loss: 0.016 Epoch 3 Batch 456/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.965, Loss: 0.014 Epoch 3 Batch 457/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.968, Loss: 0.008 Epoch 3 Batch 458/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.967, Loss: 0.018 Epoch 3 Batch 459/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.968, Loss: 0.019 Epoch 3 Batch 460/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.967, Loss: 0.012 Epoch 3 Batch 461/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.967, Loss: 0.012 Epoch 3 Batch 462/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.971, Loss: 0.014 Epoch 3 Batch 463/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.971, Loss: 0.015 Epoch 3 Batch 464/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.966, Loss: 0.014 Epoch 3 Batch 465/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.966, Loss: 0.020 Epoch 3 Batch 466/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.966, Loss: 0.012 Epoch 3 Batch 467/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.966, Loss: 
0.010 Epoch 3 Batch 468/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.961, Loss: 0.011 Epoch 3 Batch 469/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.961, Loss: 0.008 Epoch 3 Batch 470/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.966, Loss: 0.012 Epoch 3 Batch 471/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.966, Loss: 0.008 Epoch 3 Batch 472/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.963, Loss: 0.012 Epoch 3 Batch 473/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.959, Loss: 0.009 Epoch 3 Batch 474/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.963, Loss: 0.010 Epoch 3 Batch 475/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.961, Loss: 0.011 Epoch 3 Batch 476/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.961, Loss: 0.006 Epoch 3 Batch 477/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.962, Loss: 0.016 Epoch 3 Batch 478/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.962, Loss: 0.013 Epoch 3 Batch 479/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.962, Loss: 0.012 Epoch 3 Batch 480/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.961, Loss: 0.019 Epoch 3 Batch 481/1077 - Train Accuracy: 0.965, Validation Accuracy: 0.963, Loss: 0.012 Epoch 3 Batch 482/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.965, Loss: 0.018 Epoch 3 Batch 483/1077 - Train Accuracy: 0.958, Validation Accuracy: 0.964, Loss: 0.020 Epoch 3 Batch 484/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.964, Loss: 0.021 Epoch 3 Batch 485/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.964, Loss: 0.014 Epoch 3 Batch 486/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.964, Loss: 0.012 Epoch 3 Batch 487/1077 - Train Accuracy: 0.997, Validation Accuracy: 0.964, Loss: 0.006 Epoch 3 Batch 488/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.960, Loss: 0.023 Epoch 3 Batch 489/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.960, Loss: 0.012 Epoch 3 Batch 490/1077 - Train Accuracy: 0.976, 
Validation Accuracy: 0.962, Loss: 0.013 Epoch 3 Batch 491/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.962, Loss: 0.022 Epoch 3 Batch 492/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.962, Loss: 0.020 Epoch 3 Batch 493/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.962, Loss: 0.015 Epoch 3 Batch 494/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.962, Loss: 0.009 Epoch 3 Batch 495/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.967, Loss: 0.014 Epoch 3 Batch 496/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.967, Loss: 0.017 Epoch 3 Batch 497/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.967, Loss: 0.025 Epoch 3 Batch 498/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.967, Loss: 0.020 Epoch 3 Batch 499/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.967, Loss: 0.014 Epoch 3 Batch 500/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.967, Loss: 0.007 Epoch 3 Batch 501/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.967, Loss: 0.010 Epoch 3 Batch 502/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.967, Loss: 0.015 Epoch 3 Batch 503/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.967, Loss: 0.013 Epoch 3 Batch 504/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.966, Loss: 0.008 Epoch 3 Batch 505/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.966, Loss: 0.009 Epoch 3 Batch 506/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.966, Loss: 0.026 Epoch 3 Batch 507/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.966, Loss: 0.014 Epoch 3 Batch 508/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.970, Loss: 0.007 Epoch 3 Batch 509/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.971, Loss: 0.014 Epoch 3 Batch 510/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.968, Loss: 0.019 Epoch 3 Batch 511/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.968, Loss: 0.014 Epoch 3 Batch 512/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.962, Loss: 0.007 Epoch 3 Batch 513/1077 
- Train Accuracy: 0.980, Validation Accuracy: 0.962, Loss: 0.010 Epoch 3 Batch 514/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.962, Loss: 0.018 Epoch 3 Batch 515/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.961, Loss: 0.016 Epoch 3 Batch 516/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.961, Loss: 0.013 Epoch 3 Batch 517/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.961, Loss: 0.018 Epoch 3 Batch 518/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.966, Loss: 0.014 Epoch 3 Batch 519/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.966, Loss: 0.016 Epoch 3 Batch 520/1077 - Train Accuracy: 0.996, Validation Accuracy: 0.962, Loss: 0.010 Epoch 3 Batch 521/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.962, Loss: 0.014 Epoch 3 Batch 522/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.962, Loss: 0.017 Epoch 3 Batch 523/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.959, Loss: 0.015 Epoch 3 Batch 524/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.966, Loss: 0.022 Epoch 3 Batch 525/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.966, Loss: 0.014 Epoch 3 Batch 526/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.966, Loss: 0.014 Epoch 3 Batch 527/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.967, Loss: 0.013 Epoch 3 Batch 528/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.967, Loss: 0.019 Epoch 3 Batch 529/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.971, Loss: 0.013 Epoch 3 Batch 530/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.967, Loss: 0.015 Epoch 3 Batch 531/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.967, Loss: 0.016 Epoch 3 Batch 532/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.972, Loss: 0.023 Epoch 3 Batch 533/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.972, Loss: 0.010 Epoch 3 Batch 534/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.972, Loss: 0.012 Epoch 3 Batch 535/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.972, Loss: 
0.013 Epoch 3 Batch 536/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.972, Loss: 0.015 Epoch 3 Batch 537/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.972, Loss: 0.011 Epoch 3 Batch 538/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.972, Loss: 0.012 Epoch 3 Batch 539/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.972, Loss: 0.019 Epoch 3 Batch 540/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.972, Loss: 0.014 Epoch 3 Batch 541/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.969, Loss: 0.014 Epoch 3 Batch 542/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.969, Loss: 0.013 Epoch 3 Batch 543/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.969, Loss: 0.006 Epoch 3 Batch 544/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.969, Loss: 0.014 Epoch 3 Batch 545/1077 - Train Accuracy: 0.995, Validation Accuracy: 0.969, Loss: 0.009 Epoch 3 Batch 546/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.969, Loss: 0.013 Epoch 3 Batch 547/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.969, Loss: 0.016 Epoch 3 Batch 548/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.974, Loss: 0.015 Epoch 3 Batch 549/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.969, Loss: 0.017 Epoch 3 Batch 550/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.969, Loss: 0.012 Epoch 3 Batch 551/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.969, Loss: 0.014 Epoch 3 Batch 552/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.969, Loss: 0.015 Epoch 3 Batch 553/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.972, Loss: 0.021 Epoch 3 Batch 554/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.977, Loss: 0.016 Epoch 3 Batch 555/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.977, Loss: 0.011 Epoch 3 Batch 556/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.977, Loss: 0.011 Epoch 3 Batch 557/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.977, Loss: 0.010 Epoch 3 Batch 558/1077 - Train Accuracy: 0.978, 
Validation Accuracy: 0.977, Loss: 0.010 Epoch 3 Batch 559/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.977, Loss: 0.014 Epoch 3 Batch 560/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.973, Loss: 0.009 Epoch 3 Batch 561/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.973, Loss: 0.009 Epoch 3 Batch 562/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.973, Loss: 0.008 Epoch 3 Batch 563/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.973, Loss: 0.013 Epoch 3 Batch 564/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.973, Loss: 0.012 Epoch 3 Batch 565/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.973, Loss: 0.016 Epoch 3 Batch 566/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.973, Loss: 0.012 Epoch 3 Batch 567/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.973, Loss: 0.016 Epoch 3 Batch 568/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.973, Loss: 0.018 Epoch 3 Batch 569/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.973, Loss: 0.009 Epoch 3 Batch 570/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.973, Loss: 0.017 Epoch 3 Batch 571/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.968, Loss: 0.014 Epoch 3 Batch 572/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.967, Loss: 0.011 Epoch 3 Batch 573/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.974, Loss: 0.026 Epoch 3 Batch 574/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.974, Loss: 0.016 Epoch 3 Batch 575/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.974, Loss: 0.011 Epoch 3 Batch 576/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.974, Loss: 0.011 Epoch 3 Batch 577/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.972, Loss: 0.011 Epoch 3 Batch 578/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.972, Loss: 0.011 Epoch 3 Batch 579/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.974, Loss: 0.008 Epoch 3 Batch 580/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.974, Loss: 0.016 Epoch 3 Batch 581/1077 
- Train Accuracy: 0.989, Validation Accuracy: 0.974, Loss: 0.011 Epoch 3 Batch 582/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.972, Loss: 0.012 Epoch 3 Batch 583/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.972, Loss: 0.013 Epoch 3 Batch 584/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.972, Loss: 0.011 Epoch 3 Batch 585/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.972, Loss: 0.005 Epoch 3 Batch 586/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.972, Loss: 0.011 Epoch 3 Batch 587/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.972, Loss: 0.017 Epoch 3 Batch 588/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.967, Loss: 0.010 Epoch 3 Batch 589/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.967, Loss: 0.008 Epoch 3 Batch 590/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.967, Loss: 0.008 Epoch 3 Batch 591/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.972, Loss: 0.019 Epoch 3 Batch 592/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.967, Loss: 0.011 Epoch 3 Batch 593/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.967, Loss: 0.020 Epoch 3 Batch 594/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.967, Loss: 0.023 Epoch 3 Batch 595/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.967, Loss: 0.015 Epoch 3 Batch 596/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.972, Loss: 0.014 Epoch 3 Batch 597/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.973, Loss: 0.007 Epoch 3 Batch 598/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.974, Loss: 0.014 Epoch 3 Batch 599/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.974, Loss: 0.012 Epoch 3 Batch 600/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.974, Loss: 0.014 Epoch 3 Batch 601/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.974, Loss: 0.014 Epoch 3 Batch 602/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.972, Loss: 0.016 Epoch 3 Batch 603/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.972, Loss: 
0.013 Epoch 3 Batch 604/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.972, Loss: 0.022 Epoch 3 Batch 605/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.973, Loss: 0.018 Epoch 3 Batch 606/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.973, Loss: 0.014 Epoch 3 Batch 607/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.968, Loss: 0.017 Epoch 3 Batch 608/1077 - Train Accuracy: 0.996, Validation Accuracy: 0.973, Loss: 0.007 Epoch 3 Batch 609/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.973, Loss: 0.012 Epoch 3 Batch 610/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.968, Loss: 0.013 Epoch 3 Batch 611/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.971, Loss: 0.012 Epoch 3 Batch 612/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.971, Loss: 0.009 Epoch 3 Batch 613/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.972, Loss: 0.013 Epoch 3 Batch 614/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.979, Loss: 0.008 Epoch 3 Batch 615/1077 - Train Accuracy: 0.996, Validation Accuracy: 0.974, Loss: 0.008 Epoch 3 Batch 616/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.982, Loss: 0.014 Epoch 3 Batch 617/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.982, Loss: 0.009 Epoch 3 Batch 618/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.982, Loss: 0.012 Epoch 3 Batch 619/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.982, Loss: 0.008 Epoch 3 Batch 620/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.982, Loss: 0.015 Epoch 3 Batch 621/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.982, Loss: 0.020 Epoch 3 Batch 622/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.982, Loss: 0.017 Epoch 3 Batch 623/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.967, Loss: 0.016 Epoch 3 Batch 624/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.967, Loss: 0.011 Epoch 3 Batch 625/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.964, Loss: 0.014 Epoch 3 Batch 626/1077 - Train Accuracy: 0.970, 
Validation Accuracy: 0.970, Loss: 0.011 Epoch 3 Batch 627/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.967, Loss: 0.014 Epoch 3 Batch 628/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.967, Loss: 0.011 Epoch 3 Batch 629/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.972, Loss: 0.016 Epoch 3 Batch 630/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.972, Loss: 0.010 Epoch 3 Batch 631/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.972, Loss: 0.014 Epoch 3 Batch 632/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.970, Loss: 0.013 Epoch 3 Batch 633/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.975, Loss: 0.017 Epoch 3 Batch 634/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.975, Loss: 0.008 Epoch 3 Batch 635/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.975, Loss: 0.019 Epoch 3 Batch 636/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.971, Loss: 0.011 Epoch 3 Batch 637/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.977, Loss: 0.018 Epoch 3 Batch 638/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.977, Loss: 0.011 Epoch 3 Batch 639/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.977, Loss: 0.019 Epoch 3 Batch 640/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.977, Loss: 0.012 Epoch 3 Batch 641/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.975, Loss: 0.013 Epoch 3 Batch 642/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.975, Loss: 0.009 Epoch 3 Batch 643/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.972, Loss: 0.011 Epoch 3 Batch 644/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.972, Loss: 0.012 Epoch 3 Batch 645/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.972, Loss: 0.021 Epoch 3 Batch 646/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.972, Loss: 0.011 Epoch 3 Batch 647/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.972, Loss: 0.012 Epoch 3 Batch 648/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.977, Loss: 0.013 Epoch 3 Batch 649/1077 
- Train Accuracy: 0.994, Validation Accuracy: 0.977, Loss: 0.010
    [... per-batch training log elided: Epoch 3 Batches 650-1075 and Epoch 4 Batches 0-24 repeat the same format; over this span Train Accuracy ranges 0.961-0.998, Validation Accuracy ranges 0.956-0.984, and Loss ranges 0.004-0.038 ...]
    Epoch 4 Batch 25/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.979, Loss: 0.007 Epoch 4 Batch 
26/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.979, Loss: 0.019 Epoch 4 Batch 27/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.977, Loss: 0.010 Epoch 4 Batch 28/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.977, Loss: 0.012 Epoch 4 Batch 29/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.977, Loss: 0.015 Epoch 4 Batch 30/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.982, Loss: 0.003 Epoch 4 Batch 31/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.982, Loss: 0.009 Epoch 4 Batch 32/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.982, Loss: 0.019 Epoch 4 Batch 33/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.977, Loss: 0.011 Epoch 4 Batch 34/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.975, Loss: 0.012 Epoch 4 Batch 35/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.975, Loss: 0.014 Epoch 4 Batch 36/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.975, Loss: 0.012 Epoch 4 Batch 37/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.975, Loss: 0.015 Epoch 4 Batch 38/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.979, Loss: 0.022 Epoch 4 Batch 39/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.976, Loss: 0.015 Epoch 4 Batch 40/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.978, Loss: 0.010 Epoch 4 Batch 41/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.983, Loss: 0.007 Epoch 4 Batch 42/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.985, Loss: 0.027 Epoch 4 Batch 43/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.985, Loss: 0.006 Epoch 4 Batch 44/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.975, Loss: 0.010 Epoch 4 Batch 45/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.975, Loss: 0.014 Epoch 4 Batch 46/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.976, Loss: 0.011 Epoch 4 Batch 47/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.976, Loss: 0.012 Epoch 4 Batch 48/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.976, Loss: 0.020 Epoch 4 
Batch 49/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.976, Loss: 0.015 Epoch 4 Batch 50/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.976, Loss: 0.011 Epoch 4 Batch 51/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.973, Loss: 0.017 Epoch 4 Batch 52/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.973, Loss: 0.012 Epoch 4 Batch 53/1077 - Train Accuracy: 0.960, Validation Accuracy: 0.978, Loss: 0.013 Epoch 4 Batch 54/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.978, Loss: 0.015 Epoch 4 Batch 55/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.978, Loss: 0.012 Epoch 4 Batch 56/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.980, Loss: 0.006 Epoch 4 Batch 57/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.980, Loss: 0.011 Epoch 4 Batch 58/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.980, Loss: 0.012 Epoch 4 Batch 59/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.980, Loss: 0.011 Epoch 4 Batch 60/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.975, Loss: 0.017 Epoch 4 Batch 61/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.975, Loss: 0.013 Epoch 4 Batch 62/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.975, Loss: 0.017 Epoch 4 Batch 63/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.975, Loss: 0.010 Epoch 4 Batch 64/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.975, Loss: 0.008 Epoch 4 Batch 65/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.975, Loss: 0.007 Epoch 4 Batch 66/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.975, Loss: 0.005 Epoch 4 Batch 67/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.973, Loss: 0.015 Epoch 4 Batch 68/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.973, Loss: 0.012 Epoch 4 Batch 69/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.973, Loss: 0.020 Epoch 4 Batch 70/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.973, Loss: 0.018 Epoch 4 Batch 71/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.973, Loss: 0.007 Epoch 
4 Batch 72/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.973, Loss: 0.014 Epoch 4 Batch 73/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.973, Loss: 0.010 Epoch 4 Batch 74/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.975, Loss: 0.008 Epoch 4 Batch 75/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.974, Loss: 0.017 Epoch 4 Batch 76/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.974, Loss: 0.008 Epoch 4 Batch 77/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.974, Loss: 0.013 Epoch 4 Batch 78/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.972, Loss: 0.009 Epoch 4 Batch 79/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.972, Loss: 0.006 Epoch 4 Batch 80/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.971, Loss: 0.012 Epoch 4 Batch 81/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.971, Loss: 0.006 Epoch 4 Batch 82/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.971, Loss: 0.011 Epoch 4 Batch 83/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.976, Loss: 0.011 Epoch 4 Batch 84/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.979, Loss: 0.014 Epoch 4 Batch 85/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.979, Loss: 0.010 Epoch 4 Batch 86/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.979, Loss: 0.013 Epoch 4 Batch 87/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.979, Loss: 0.019 Epoch 4 Batch 88/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.979, Loss: 0.010 Epoch 4 Batch 89/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.979, Loss: 0.009 Epoch 4 Batch 90/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.979, Loss: 0.013 Epoch 4 Batch 91/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.979, Loss: 0.009 Epoch 4 Batch 92/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.979, Loss: 0.012 Epoch 4 Batch 93/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.979, Loss: 0.005 Epoch 4 Batch 94/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.979, Loss: 0.009 
Epoch 4 Batch 95/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.979, Loss: 0.010 Epoch 4 Batch 96/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.980, Loss: 0.011 Epoch 4 Batch 97/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.980, Loss: 0.011 Epoch 4 Batch 98/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.980, Loss: 0.016 Epoch 4 Batch 99/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.980, Loss: 0.011 Epoch 4 Batch 100/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.980, Loss: 0.008 Epoch 4 Batch 101/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.980, Loss: 0.014 Epoch 4 Batch 102/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.982, Loss: 0.016 Epoch 4 Batch 103/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.977, Loss: 0.010 Epoch 4 Batch 104/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.977, Loss: 0.017 Epoch 4 Batch 105/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.972, Loss: 0.009 Epoch 4 Batch 106/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.972, Loss: 0.018 Epoch 4 Batch 107/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.972, Loss: 0.010 Epoch 4 Batch 108/1077 - Train Accuracy: 0.962, Validation Accuracy: 0.972, Loss: 0.016 Epoch 4 Batch 109/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.972, Loss: 0.019 Epoch 4 Batch 110/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.972, Loss: 0.007 Epoch 4 Batch 111/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.972, Loss: 0.013 Epoch 4 Batch 112/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.972, Loss: 0.007 Epoch 4 Batch 113/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.972, Loss: 0.014 Epoch 4 Batch 114/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.972, Loss: 0.012 Epoch 4 Batch 115/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.972, Loss: 0.016 Epoch 4 Batch 116/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.977, Loss: 0.015 Epoch 4 Batch 117/1077 - Train Accuracy: 0.988, Validation Accuracy: 
0.986, Loss: 0.007 Epoch 4 Batch 118/1077 - Train Accuracy: 0.995, Validation Accuracy: 0.986, Loss: 0.009 Epoch 4 Batch 119/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.986, Loss: 0.010 Epoch 4 Batch 120/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.986, Loss: 0.012 Epoch 4 Batch 121/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.986, Loss: 0.011 Epoch 4 Batch 122/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.981, Loss: 0.013 Epoch 4 Batch 123/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.981, Loss: 0.011 Epoch 4 Batch 124/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.981, Loss: 0.016 Epoch 4 Batch 125/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.981, Loss: 0.020 Epoch 4 Batch 126/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.981, Loss: 0.011 Epoch 4 Batch 127/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.977, Loss: 0.013 Epoch 4 Batch 128/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.977, Loss: 0.009 Epoch 4 Batch 129/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.977, Loss: 0.013 Epoch 4 Batch 130/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.979, Loss: 0.006 Epoch 4 Batch 131/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.984, Loss: 0.010 Epoch 4 Batch 132/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.984, Loss: 0.016 Epoch 4 Batch 133/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.983, Loss: 0.003 Epoch 4 Batch 134/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.981, Loss: 0.009 Epoch 4 Batch 135/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.981, Loss: 0.011 Epoch 4 Batch 136/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.981, Loss: 0.011 Epoch 4 Batch 137/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.981, Loss: 0.011 Epoch 4 Batch 138/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.983, Loss: 0.009 Epoch 4 Batch 139/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.983, Loss: 0.012 Epoch 4 Batch 140/1077 - Train Accuracy: 
0.993, Validation Accuracy: 0.983, Loss: 0.010 Epoch 4 Batch 141/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.983, Loss: 0.005 Epoch 4 Batch 142/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.983, Loss: 0.011 Epoch 4 Batch 143/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.983, Loss: 0.010 Epoch 4 Batch 144/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.983, Loss: 0.016 Epoch 4 Batch 145/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.984, Loss: 0.009 Epoch 4 Batch 146/1077 - Train Accuracy: 0.967, Validation Accuracy: 0.984, Loss: 0.023 Epoch 4 Batch 147/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.984, Loss: 0.007 Epoch 4 Batch 148/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.984, Loss: 0.016 Epoch 4 Batch 149/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.978, Loss: 0.009 Epoch 4 Batch 150/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.978, Loss: 0.010 Epoch 4 Batch 151/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.978, Loss: 0.011 Epoch 4 Batch 152/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.979, Loss: 0.012 Epoch 4 Batch 153/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.979, Loss: 0.012 Epoch 4 Batch 154/1077 - Train Accuracy: 0.998, Validation Accuracy: 0.974, Loss: 0.008 Epoch 4 Batch 155/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.974, Loss: 0.013 Epoch 4 Batch 156/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.979, Loss: 0.019 Epoch 4 Batch 157/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.979, Loss: 0.012 Epoch 4 Batch 158/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.979, Loss: 0.012 Epoch 4 Batch 159/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.977, Loss: 0.014 Epoch 4 Batch 160/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.982, Loss: 0.008 Epoch 4 Batch 161/1077 - Train Accuracy: 0.995, Validation Accuracy: 0.979, Loss: 0.011 Epoch 4 Batch 162/1077 - Train Accuracy: 0.995, Validation Accuracy: 0.979, Loss: 0.008 Epoch 4 Batch 
163/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.979, Loss: 0.014 Epoch 4 Batch 164/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.979, Loss: 0.007 Epoch 4 Batch 165/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.974, Loss: 0.012 Epoch 4 Batch 166/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.974, Loss: 0.017 Epoch 4 Batch 167/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.970, Loss: 0.007 Epoch 4 Batch 168/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.970, Loss: 0.015 Epoch 4 Batch 169/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.970, Loss: 0.015 Epoch 4 Batch 170/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.970, Loss: 0.019 Epoch 4 Batch 171/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.970, Loss: 0.010 Epoch 4 Batch 172/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.975, Loss: 0.008 Epoch 4 Batch 173/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.977, Loss: 0.014 Epoch 4 Batch 174/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.977, Loss: 0.010 Epoch 4 Batch 175/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.977, Loss: 0.010 Epoch 4 Batch 176/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.977, Loss: 0.009 Epoch 4 Batch 177/1077 - Train Accuracy: 0.999, Validation Accuracy: 0.977, Loss: 0.009 Epoch 4 Batch 178/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.977, Loss: 0.016 Epoch 4 Batch 179/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.977, Loss: 0.010 Epoch 4 Batch 180/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.977, Loss: 0.009 Epoch 4 Batch 181/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.977, Loss: 0.009 Epoch 4 Batch 182/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.979, Loss: 0.016 Epoch 4 Batch 183/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.979, Loss: 0.013 Epoch 4 Batch 184/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.979, Loss: 0.018 Epoch 4 Batch 185/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.973, 
Loss: 0.018 Epoch 4 Batch 186/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.977, Loss: 0.010 Epoch 4 Batch 187/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.977, Loss: 0.006 Epoch 4 Batch 188/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.977, Loss: 0.007 Epoch 4 Batch 189/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.975, Loss: 0.008 Epoch 4 Batch 190/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.975, Loss: 0.006 Epoch 4 Batch 191/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.971, Loss: 0.008 Epoch 4 Batch 192/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.971, Loss: 0.009 Epoch 4 Batch 193/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.975, Loss: 0.008 Epoch 4 Batch 194/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.975, Loss: 0.009 Epoch 4 Batch 195/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.975, Loss: 0.012 Epoch 4 Batch 196/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.979, Loss: 0.012 Epoch 4 Batch 197/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.979, Loss: 0.012 Epoch 4 Batch 198/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.979, Loss: 0.013 Epoch 4 Batch 199/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.979, Loss: 0.010 Epoch 4 Batch 200/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.979, Loss: 0.011 Epoch 4 Batch 201/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.979, Loss: 0.008 Epoch 4 Batch 202/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.979, Loss: 0.012 Epoch 4 Batch 203/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.975, Loss: 0.016 Epoch 4 Batch 204/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.977, Loss: 0.024 Epoch 4 Batch 205/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.977, Loss: 0.024 Epoch 4 Batch 206/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.977, Loss: 0.010 Epoch 4 Batch 207/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.976, Loss: 0.013 Epoch 4 Batch 208/1077 - Train Accuracy: 0.975, 
Validation Accuracy: 0.972, Loss: 0.014 Epoch 4 Batch 209/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.972, Loss: 0.005 Epoch 4 Batch 210/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.972, Loss: 0.016 Epoch 4 Batch 211/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.977, Loss: 0.008 Epoch 4 Batch 212/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.977, Loss: 0.008 Epoch 4 Batch 213/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.982, Loss: 0.009 Epoch 4 Batch 214/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.982, Loss: 0.010 Epoch 4 Batch 215/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.982, Loss: 0.020 Epoch 4 Batch 216/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.980, Loss: 0.009 Epoch 4 Batch 217/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.980, Loss: 0.006 Epoch 4 Batch 218/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.980, Loss: 0.016 Epoch 4 Batch 219/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.980, Loss: 0.005 Epoch 4 Batch 220/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.980, Loss: 0.017 Epoch 4 Batch 221/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.980, Loss: 0.013 Epoch 4 Batch 222/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.980, Loss: 0.011 Epoch 4 Batch 223/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.975, Loss: 0.008 Epoch 4 Batch 224/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.975, Loss: 0.013 Epoch 4 Batch 225/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.971, Loss: 0.020 Epoch 4 Batch 226/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.971, Loss: 0.009 Epoch 4 Batch 227/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.971, Loss: 0.013 Epoch 4 Batch 228/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.971, Loss: 0.019 Epoch 4 Batch 229/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.971, Loss: 0.010 Epoch 4 Batch 230/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.971, Loss: 0.009 Epoch 4 Batch 231/1077 
- Train Accuracy: 0.979, Validation Accuracy: 0.971, Loss: 0.017 Epoch 4 Batch 232/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.975, Loss: 0.009 Epoch 4 Batch 233/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.975, Loss: 0.016 Epoch 4 Batch 234/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.975, Loss: 0.013 Epoch 4 Batch 235/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.975, Loss: 0.010 Epoch 4 Batch 236/1077 - Train Accuracy: 0.995, Validation Accuracy: 0.975, Loss: 0.012 Epoch 4 Batch 237/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.975, Loss: 0.010 Epoch 4 Batch 238/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.975, Loss: 0.012 Epoch 4 Batch 239/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.975, Loss: 0.004 Epoch 4 Batch 240/1077 - Train Accuracy: 0.996, Validation Accuracy: 0.975, Loss: 0.009 Epoch 4 Batch 241/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.974, Loss: 0.009 Epoch 4 Batch 242/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.974, Loss: 0.013 Epoch 4 Batch 243/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.974, Loss: 0.013 Epoch 4 Batch 244/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.974, Loss: 0.011 Epoch 4 Batch 245/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.984, Loss: 0.015 Epoch 4 Batch 246/1077 - Train Accuracy: 0.968, Validation Accuracy: 0.984, Loss: 0.015 Epoch 4 Batch 247/1077 - Train Accuracy: 0.961, Validation Accuracy: 0.984, Loss: 0.016 Epoch 4 Batch 248/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.984, Loss: 0.008 Epoch 4 Batch 249/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.979, Loss: 0.009 Epoch 4 Batch 250/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.979, Loss: 0.015 Epoch 4 Batch 251/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.979, Loss: 0.016 Epoch 4 Batch 252/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.979, Loss: 0.012 Epoch 4 Batch 253/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.979, Loss: 
0.010 Epoch 4 Batch 254/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.979, Loss: 0.016 Epoch 4 Batch 255/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.979, Loss: 0.015 Epoch 4 Batch 256/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.984, Loss: 0.016 Epoch 4 Batch 257/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.984, Loss: 0.008 Epoch 4 Batch 258/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.979, Loss: 0.013 Epoch 4 Batch 259/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.979, Loss: 0.008 Epoch 4 Batch 260/1077 - Train Accuracy: 0.996, Validation Accuracy: 0.979, Loss: 0.006 Epoch 4 Batch 261/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.979, Loss: 0.014 Epoch 4 Batch 262/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.979, Loss: 0.008 Epoch 4 Batch 263/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.979, Loss: 0.012 Epoch 4 Batch 264/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.977, Loss: 0.013 Epoch 4 Batch 265/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.977, Loss: 0.011 Epoch 4 Batch 266/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.977, Loss: 0.017 Epoch 4 Batch 267/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.972, Loss: 0.008 Epoch 4 Batch 268/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.972, Loss: 0.016 Epoch 4 Batch 269/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.972, Loss: 0.019 Epoch 4 Batch 270/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.970, Loss: 0.013 Epoch 4 Batch 271/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.972, Loss: 0.019 Epoch 4 Batch 272/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.972, Loss: 0.026 Epoch 4 Batch 273/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.973, Loss: 0.013 Epoch 4 Batch 274/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.973, Loss: 0.012 Epoch 4 Batch 275/1077 - Train Accuracy: 0.996, Validation Accuracy: 0.973, Loss: 0.009 Epoch 4 Batch 276/1077 - Train Accuracy: 0.983, 
Validation Accuracy: 0.974, Loss: 0.018 Epoch 4 Batch 277/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.973, Loss: 0.011 Epoch 4 Batch 278/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.973, Loss: 0.009 Epoch 4 Batch 279/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.975, Loss: 0.017 Epoch 4 Batch 280/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.972, Loss: 0.012 Epoch 4 Batch 281/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.972, Loss: 0.015 Epoch 4 Batch 282/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.972, Loss: 0.020 Epoch 4 Batch 283/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.972, Loss: 0.009 Epoch 4 Batch 284/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.973, Loss: 0.014 Epoch 4 Batch 285/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.973, Loss: 0.015 Epoch 4 Batch 286/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.973, Loss: 0.015 Epoch 4 Batch 287/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.973, Loss: 0.014 Epoch 4 Batch 288/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.973, Loss: 0.017 Epoch 4 Batch 289/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.973, Loss: 0.012 Epoch 4 Batch 290/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.971, Loss: 0.020 Epoch 4 Batch 291/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.971, Loss: 0.019 Epoch 4 Batch 292/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.970, Loss: 0.016 Epoch 4 Batch 293/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.970, Loss: 0.009 Epoch 4 Batch 294/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.971, Loss: 0.010 Epoch 4 Batch 295/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.971, Loss: 0.013 Epoch 4 Batch 296/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.973, Loss: 0.008 Epoch 4 Batch 297/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.973, Loss: 0.013 Epoch 4 Batch 298/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.975, Loss: 0.017 Epoch 4 Batch 299/1077 
- Train Accuracy: 0.988, Validation Accuracy: 0.975, Loss: 0.010 Epoch 4 Batch 300/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.976, Loss: 0.009 Epoch 4 Batch 301/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.976, Loss: 0.010 Epoch 4 Batch 302/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.979, Loss: 0.013 Epoch 4 Batch 303/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.974, Loss: 0.018 Epoch 4 Batch 304/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.974, Loss: 0.010 Epoch 4 Batch 305/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.974, Loss: 0.011 Epoch 4 Batch 306/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.972, Loss: 0.014 Epoch 4 Batch 307/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.979, Loss: 0.008 Epoch 4 Batch 308/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.979, Loss: 0.012 Epoch 4 Batch 309/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.974, Loss: 0.009 Epoch 4 Batch 310/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.974, Loss: 0.011 Epoch 4 Batch 311/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.974, Loss: 0.009 Epoch 4 Batch 312/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.967, Loss: 0.011 Epoch 4 Batch 313/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.967, Loss: 0.013 Epoch 4 Batch 314/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.963, Loss: 0.010 Epoch 4 Batch 315/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.967, Loss: 0.017 Epoch 4 Batch 316/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.972, Loss: 0.012 Epoch 4 Batch 317/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.972, Loss: 0.009 Epoch 4 Batch 318/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.972, Loss: 0.006 Epoch 4 Batch 319/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.972, Loss: 0.014 Epoch 4 Batch 320/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.972, Loss: 0.012 Epoch 4 Batch 321/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.972, Loss: 
0.011 Epoch 4 Batch 322/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.972, Loss: 0.014 Epoch 4 Batch 323/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.977, Loss: 0.011 Epoch 4 Batch 324/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.977, Loss: 0.010 Epoch 4 Batch 325/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.977, Loss: 0.008 Epoch 4 Batch 326/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.977, Loss: 0.009 Epoch 4 Batch 327/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.977, Loss: 0.015 Epoch 4 Batch 328/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.977, Loss: 0.013 Epoch 4 Batch 329/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.977, Loss: 0.010 Epoch 4 Batch 330/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.975, Loss: 0.011 Epoch 4 Batch 331/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.975, Loss: 0.011 Epoch 4 Batch 332/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.975, Loss: 0.008 Epoch 4 Batch 333/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.977, Loss: 0.010 Epoch 4 Batch 334/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.977, Loss: 0.009 Epoch 4 Batch 335/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.977, Loss: 0.014 Epoch 4 Batch 336/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.979, Loss: 0.026 Epoch 4 Batch 337/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.974, Loss: 0.014 Epoch 4 Batch 338/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.974, Loss: 0.018 Epoch 4 Batch 339/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.972, Loss: 0.006 Epoch 4 Batch 340/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.972, Loss: 0.014 Epoch 4 Batch 341/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.972, Loss: 0.013 Epoch 4 Batch 342/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.972, Loss: 0.007 Epoch 4 Batch 343/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.972, Loss: 0.009 Epoch 4 Batch 344/1077 - Train Accuracy: 0.973, 
Validation Accuracy: 0.977, Loss: 0.013 Epoch 4 Batch 345/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.977, Loss: 0.009
[... per-batch training log truncated: Epoch 4, Batches 346–819/1077 — Train Accuracy 0.955–0.997, Validation Accuracy 0.957–0.985, Loss 0.004–0.032 ...]
Epoch 4 Batch 820/1077 - Train Accuracy: 0.985, 
Validation Accuracy: 0.970, Loss: 0.008 Epoch 4 Batch 821/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.970, Loss: 0.009 Epoch 4 Batch 822/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.970, Loss: 0.007 Epoch 4 Batch 823/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.973, Loss: 0.014 Epoch 4 Batch 824/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.973, Loss: 0.009 Epoch 4 Batch 825/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.973, Loss: 0.004 Epoch 4 Batch 826/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.973, Loss: 0.010 Epoch 4 Batch 827/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.977, Loss: 0.007 Epoch 4 Batch 828/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.979, Loss: 0.011 Epoch 4 Batch 829/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.979, Loss: 0.024 Epoch 4 Batch 830/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.979, Loss: 0.015 Epoch 4 Batch 831/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.979, Loss: 0.013 Epoch 4 Batch 832/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.979, Loss: 0.010 Epoch 4 Batch 833/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.979, Loss: 0.008 Epoch 4 Batch 834/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.974, Loss: 0.011 Epoch 4 Batch 835/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.975, Loss: 0.007 Epoch 4 Batch 836/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.975, Loss: 0.012 Epoch 4 Batch 837/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.975, Loss: 0.020 Epoch 4 Batch 838/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.975, Loss: 0.012 Epoch 4 Batch 839/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.975, Loss: 0.010 Epoch 4 Batch 840/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.979, Loss: 0.013 Epoch 4 Batch 841/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.979, Loss: 0.014 Epoch 4 Batch 842/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.979, Loss: 0.006 Epoch 4 Batch 843/1077 
- Train Accuracy: 0.994, Validation Accuracy: 0.979, Loss: 0.009 Epoch 4 Batch 844/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.979, Loss: 0.010 Epoch 4 Batch 845/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.979, Loss: 0.011 Epoch 4 Batch 846/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.979, Loss: 0.018 Epoch 4 Batch 847/1077 - Train Accuracy: 0.997, Validation Accuracy: 0.980, Loss: 0.012 Epoch 4 Batch 848/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.980, Loss: 0.016 Epoch 4 Batch 849/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.980, Loss: 0.007 Epoch 4 Batch 850/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.984, Loss: 0.015 Epoch 4 Batch 851/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.987, Loss: 0.017 Epoch 4 Batch 852/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.982, Loss: 0.022 Epoch 4 Batch 853/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.980, Loss: 0.012 Epoch 4 Batch 854/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.980, Loss: 0.009 Epoch 4 Batch 855/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.980, Loss: 0.011 Epoch 4 Batch 856/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.980, Loss: 0.013 Epoch 4 Batch 857/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.975, Loss: 0.010 Epoch 4 Batch 858/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.973, Loss: 0.009 Epoch 4 Batch 859/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.973, Loss: 0.015 Epoch 4 Batch 860/1077 - Train Accuracy: 0.996, Validation Accuracy: 0.973, Loss: 0.008 Epoch 4 Batch 861/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.973, Loss: 0.007 Epoch 4 Batch 862/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.973, Loss: 0.013 Epoch 4 Batch 863/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.971, Loss: 0.010 Epoch 4 Batch 864/1077 - Train Accuracy: 0.996, Validation Accuracy: 0.971, Loss: 0.010 Epoch 4 Batch 865/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.971, Loss: 
0.017 Epoch 4 Batch 866/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.971, Loss: 0.012 Epoch 4 Batch 867/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.975, Loss: 0.033 Epoch 4 Batch 868/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.975, Loss: 0.012 Epoch 4 Batch 869/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.975, Loss: 0.012 Epoch 4 Batch 870/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.974, Loss: 0.010 Epoch 4 Batch 871/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.974, Loss: 0.010 Epoch 4 Batch 872/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.974, Loss: 0.014 Epoch 4 Batch 873/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.974, Loss: 0.010 Epoch 4 Batch 874/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.979, Loss: 0.018 Epoch 4 Batch 875/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.974, Loss: 0.011 Epoch 4 Batch 876/1077 - Train Accuracy: 0.975, Validation Accuracy: 0.979, Loss: 0.014 Epoch 4 Batch 877/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.984, Loss: 0.006 Epoch 4 Batch 878/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.984, Loss: 0.009 Epoch 4 Batch 879/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.986, Loss: 0.010 Epoch 4 Batch 880/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.986, Loss: 0.013 Epoch 4 Batch 881/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.986, Loss: 0.015 Epoch 4 Batch 882/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.986, Loss: 0.012 Epoch 4 Batch 883/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.981, Loss: 0.013 Epoch 4 Batch 884/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.976, Loss: 0.018 Epoch 4 Batch 885/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.976, Loss: 0.010 Epoch 4 Batch 886/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.976, Loss: 0.015 Epoch 4 Batch 887/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.980, Loss: 0.017 Epoch 4 Batch 888/1077 - Train Accuracy: 0.983, 
Validation Accuracy: 0.980, Loss: 0.009 Epoch 4 Batch 889/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.979, Loss: 0.011 Epoch 4 Batch 890/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.979, Loss: 0.012 Epoch 4 Batch 891/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.974, Loss: 0.010 Epoch 4 Batch 892/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.974, Loss: 0.018 Epoch 4 Batch 893/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.974, Loss: 0.009 Epoch 4 Batch 894/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.974, Loss: 0.008 Epoch 4 Batch 895/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.969, Loss: 0.009 Epoch 4 Batch 896/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.969, Loss: 0.009 Epoch 4 Batch 897/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.969, Loss: 0.012 Epoch 4 Batch 898/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.969, Loss: 0.008 Epoch 4 Batch 899/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.974, Loss: 0.014 Epoch 4 Batch 900/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.974, Loss: 0.017 Epoch 4 Batch 901/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.974, Loss: 0.031 Epoch 4 Batch 902/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.974, Loss: 0.019 Epoch 4 Batch 903/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.974, Loss: 0.012 Epoch 4 Batch 904/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.979, Loss: 0.009 Epoch 4 Batch 905/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.984, Loss: 0.010 Epoch 4 Batch 906/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.979, Loss: 0.011 Epoch 4 Batch 907/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.977, Loss: 0.010 Epoch 4 Batch 908/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.977, Loss: 0.009 Epoch 4 Batch 909/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.977, Loss: 0.013 Epoch 4 Batch 910/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.975, Loss: 0.013 Epoch 4 Batch 911/1077 
- Train Accuracy: 0.974, Validation Accuracy: 0.975, Loss: 0.010 Epoch 4 Batch 912/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.970, Loss: 0.010 Epoch 4 Batch 913/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.977, Loss: 0.016 Epoch 4 Batch 914/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.977, Loss: 0.021 Epoch 4 Batch 915/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.983, Loss: 0.012 Epoch 4 Batch 916/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.978, Loss: 0.008 Epoch 4 Batch 917/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.978, Loss: 0.010 Epoch 4 Batch 918/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.978, Loss: 0.010 Epoch 4 Batch 919/1077 - Train Accuracy: 0.999, Validation Accuracy: 0.981, Loss: 0.006 Epoch 4 Batch 920/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.980, Loss: 0.009 Epoch 4 Batch 921/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.980, Loss: 0.016 Epoch 4 Batch 922/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.980, Loss: 0.012 Epoch 4 Batch 923/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.980, Loss: 0.007 Epoch 4 Batch 924/1077 - Train Accuracy: 0.976, Validation Accuracy: 0.980, Loss: 0.020 Epoch 4 Batch 925/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.981, Loss: 0.012 Epoch 4 Batch 926/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.976, Loss: 0.011 Epoch 4 Batch 927/1077 - Train Accuracy: 0.969, Validation Accuracy: 0.973, Loss: 0.019 Epoch 4 Batch 928/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.973, Loss: 0.013 Epoch 4 Batch 929/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.973, Loss: 0.007 Epoch 4 Batch 930/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.973, Loss: 0.012 Epoch 4 Batch 931/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.973, Loss: 0.009 Epoch 4 Batch 932/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.973, Loss: 0.008 Epoch 4 Batch 933/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.973, Loss: 
0.011 Epoch 4 Batch 934/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.972, Loss: 0.014 Epoch 4 Batch 935/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.975, Loss: 0.007 Epoch 4 Batch 936/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.975, Loss: 0.015 Epoch 4 Batch 937/1077 - Train Accuracy: 0.996, Validation Accuracy: 0.975, Loss: 0.013 Epoch 4 Batch 938/1077 - Train Accuracy: 0.996, Validation Accuracy: 0.979, Loss: 0.007 Epoch 4 Batch 939/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.979, Loss: 0.015 Epoch 4 Batch 940/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.979, Loss: 0.008 Epoch 4 Batch 941/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.979, Loss: 0.015 Epoch 4 Batch 942/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.979, Loss: 0.015 Epoch 4 Batch 943/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.979, Loss: 0.012 Epoch 4 Batch 944/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.974, Loss: 0.005 Epoch 4 Batch 945/1077 - Train Accuracy: 0.995, Validation Accuracy: 0.979, Loss: 0.008 Epoch 4 Batch 946/1077 - Train Accuracy: 0.995, Validation Accuracy: 0.977, Loss: 0.008 Epoch 4 Batch 947/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.977, Loss: 0.013 Epoch 4 Batch 948/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.982, Loss: 0.010 Epoch 4 Batch 949/1077 - Train Accuracy: 0.995, Validation Accuracy: 0.982, Loss: 0.015 Epoch 4 Batch 950/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.982, Loss: 0.008 Epoch 4 Batch 951/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.977, Loss: 0.012 Epoch 4 Batch 952/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.976, Loss: 0.009 Epoch 4 Batch 953/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.976, Loss: 0.010 Epoch 4 Batch 954/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.975, Loss: 0.010 Epoch 4 Batch 955/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.980, Loss: 0.013 Epoch 4 Batch 956/1077 - Train Accuracy: 0.970, 
Validation Accuracy: 0.980, Loss: 0.015 Epoch 4 Batch 957/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.975, Loss: 0.009 Epoch 4 Batch 958/1077 - Train Accuracy: 0.994, Validation Accuracy: 0.977, Loss: 0.006 Epoch 4 Batch 959/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.977, Loss: 0.010 Epoch 4 Batch 960/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.963, Loss: 0.010 Epoch 4 Batch 961/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.963, Loss: 0.004 Epoch 4 Batch 962/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.963, Loss: 0.014 Epoch 4 Batch 963/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.962, Loss: 0.014 Epoch 4 Batch 964/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.963, Loss: 0.017 Epoch 4 Batch 965/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.963, Loss: 0.024 Epoch 4 Batch 966/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.972, Loss: 0.010 Epoch 4 Batch 967/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.972, Loss: 0.013 Epoch 4 Batch 968/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.972, Loss: 0.013 Epoch 4 Batch 969/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.969, Loss: 0.014 Epoch 4 Batch 970/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.969, Loss: 0.012 Epoch 4 Batch 971/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.969, Loss: 0.013 Epoch 4 Batch 972/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.969, Loss: 0.010 Epoch 4 Batch 973/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.969, Loss: 0.012 Epoch 4 Batch 974/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.969, Loss: 0.008 Epoch 4 Batch 975/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.969, Loss: 0.009 Epoch 4 Batch 976/1077 - Train Accuracy: 0.997, Validation Accuracy: 0.969, Loss: 0.007 Epoch 4 Batch 977/1077 - Train Accuracy: 0.997, Validation Accuracy: 0.974, Loss: 0.006 Epoch 4 Batch 978/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.974, Loss: 0.011 Epoch 4 Batch 979/1077 
- Train Accuracy: 0.981, Validation Accuracy: 0.977, Loss: 0.013 Epoch 4 Batch 980/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.977, Loss: 0.009 Epoch 4 Batch 981/1077 - Train Accuracy: 0.966, Validation Accuracy: 0.972, Loss: 0.011 Epoch 4 Batch 982/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.972, Loss: 0.008 Epoch 4 Batch 983/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.972, Loss: 0.013 Epoch 4 Batch 984/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.972, Loss: 0.018 Epoch 4 Batch 985/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.972, Loss: 0.012 Epoch 4 Batch 986/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.977, Loss: 0.012 Epoch 4 Batch 987/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.977, Loss: 0.011 Epoch 4 Batch 988/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.977, Loss: 0.016 Epoch 4 Batch 989/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.977, Loss: 0.013 Epoch 4 Batch 990/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.972, Loss: 0.011 Epoch 4 Batch 991/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.972, Loss: 0.008 Epoch 4 Batch 992/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.972, Loss: 0.017 Epoch 4 Batch 993/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.972, Loss: 0.009 Epoch 4 Batch 994/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.972, Loss: 0.010 Epoch 4 Batch 995/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.972, Loss: 0.008 Epoch 4 Batch 996/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.977, Loss: 0.008 Epoch 4 Batch 997/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.973, Loss: 0.008 Epoch 4 Batch 998/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.973, Loss: 0.011 Epoch 4 Batch 999/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.973, Loss: 0.012 Epoch 4 Batch 1000/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.973, Loss: 0.007 Epoch 4 Batch 1001/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.970, Loss: 
0.014 Epoch 4 Batch 1002/1077 - Train Accuracy: 0.996, Validation Accuracy: 0.970, Loss: 0.007 Epoch 4 Batch 1003/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.970, Loss: 0.010 Epoch 4 Batch 1004/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.970, Loss: 0.012 Epoch 4 Batch 1005/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.970, Loss: 0.011 Epoch 4 Batch 1006/1077 - Train Accuracy: 0.978, Validation Accuracy: 0.970, Loss: 0.012 Epoch 4 Batch 1007/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.966, Loss: 0.009 Epoch 4 Batch 1008/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.974, Loss: 0.016 Epoch 4 Batch 1009/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.974, Loss: 0.008 Epoch 4 Batch 1010/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.974, Loss: 0.009 Epoch 4 Batch 1011/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.974, Loss: 0.009 Epoch 4 Batch 1012/1077 - Train Accuracy: 0.959, Validation Accuracy: 0.972, Loss: 0.013 Epoch 4 Batch 1013/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.979, Loss: 0.013 Epoch 4 Batch 1014/1077 - Train Accuracy: 0.973, Validation Accuracy: 0.982, Loss: 0.017 Epoch 4 Batch 1015/1077 - Train Accuracy: 0.991, Validation Accuracy: 0.982, Loss: 0.008 Epoch 4 Batch 1016/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.982, Loss: 0.011 Epoch 4 Batch 1017/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.982, Loss: 0.005 Epoch 4 Batch 1018/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.982, Loss: 0.010 Epoch 4 Batch 1019/1077 - Train Accuracy: 0.964, Validation Accuracy: 0.982, Loss: 0.027 Epoch 4 Batch 1020/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.982, Loss: 0.010 Epoch 4 Batch 1021/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.977, Loss: 0.008 Epoch 4 Batch 1022/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.977, Loss: 0.007 Epoch 4 Batch 1023/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.977, Loss: 0.014 Epoch 4 Batch 1024/1077 - Train 
Accuracy: 0.982, Validation Accuracy: 0.977, Loss: 0.016 Epoch 4 Batch 1025/1077 - Train Accuracy: 0.987, Validation Accuracy: 0.977, Loss: 0.010 Epoch 4 Batch 1026/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.977, Loss: 0.011 Epoch 4 Batch 1027/1077 - Train Accuracy: 0.972, Validation Accuracy: 0.977, Loss: 0.012 Epoch 4 Batch 1028/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.977, Loss: 0.013 Epoch 4 Batch 1029/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.977, Loss: 0.008 Epoch 4 Batch 1030/1077 - Train Accuracy: 0.995, Validation Accuracy: 0.977, Loss: 0.006 Epoch 4 Batch 1031/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.975, Loss: 0.014 Epoch 4 Batch 1032/1077 - Train Accuracy: 0.980, Validation Accuracy: 0.973, Loss: 0.012 Epoch 4 Batch 1033/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.973, Loss: 0.014 Epoch 4 Batch 1034/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.973, Loss: 0.010 Epoch 4 Batch 1035/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.975, Loss: 0.012 Epoch 4 Batch 1036/1077 - Train Accuracy: 0.996, Validation Accuracy: 0.975, Loss: 0.005 Epoch 4 Batch 1037/1077 - Train Accuracy: 0.970, Validation Accuracy: 0.973, Loss: 0.010 Epoch 4 Batch 1038/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.973, Loss: 0.017 Epoch 4 Batch 1039/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.973, Loss: 0.013 Epoch 4 Batch 1040/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.977, Loss: 0.011 Epoch 4 Batch 1041/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.977, Loss: 0.019 Epoch 4 Batch 1042/1077 - Train Accuracy: 0.992, Validation Accuracy: 0.977, Loss: 0.010 Epoch 4 Batch 1043/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.977, Loss: 0.006 Epoch 4 Batch 1044/1077 - Train Accuracy: 0.986, Validation Accuracy: 0.977, Loss: 0.013 Epoch 4 Batch 1045/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.973, Loss: 0.007 Epoch 4 Batch 1046/1077 - Train Accuracy: 0.983, Validation Accuracy: 
0.968, Loss: 0.004 Epoch 4 Batch 1047/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.970, Loss: 0.010 Epoch 4 Batch 1048/1077 - Train Accuracy: 0.989, Validation Accuracy: 0.970, Loss: 0.009 Epoch 4 Batch 1049/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.970, Loss: 0.009 Epoch 4 Batch 1050/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.970, Loss: 0.007 Epoch 4 Batch 1051/1077 - Train Accuracy: 0.974, Validation Accuracy: 0.970, Loss: 0.018 Epoch 4 Batch 1052/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.970, Loss: 0.007 Epoch 4 Batch 1053/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.973, Loss: 0.016 Epoch 4 Batch 1054/1077 - Train Accuracy: 0.983, Validation Accuracy: 0.973, Loss: 0.010 Epoch 4 Batch 1055/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.973, Loss: 0.010 Epoch 4 Batch 1056/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.977, Loss: 0.007 Epoch 4 Batch 1057/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.977, Loss: 0.012 Epoch 4 Batch 1058/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.978, Loss: 0.015 Epoch 4 Batch 1059/1077 - Train Accuracy: 0.977, Validation Accuracy: 0.978, Loss: 0.017 Epoch 4 Batch 1060/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.978, Loss: 0.013 Epoch 4 Batch 1061/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.982, Loss: 0.015 Epoch 4 Batch 1062/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.978, Loss: 0.008 Epoch 4 Batch 1063/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.978, Loss: 0.018 Epoch 4 Batch 1064/1077 - Train Accuracy: 0.993, Validation Accuracy: 0.973, Loss: 0.010 Epoch 4 Batch 1065/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.978, Loss: 0.010 Epoch 4 Batch 1066/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.978, Loss: 0.009 Epoch 4 Batch 1067/1077 - Train Accuracy: 0.971, Validation Accuracy: 0.982, Loss: 0.019 Epoch 4 Batch 1068/1077 - Train Accuracy: 0.979, Validation Accuracy: 0.982, Loss: 0.007 Epoch 4 Batch 
1069/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.978, Loss: 0.006 Epoch 4 Batch 1070/1077 - Train Accuracy: 0.982, Validation Accuracy: 0.978, Loss: 0.012 Epoch 4 Batch 1071/1077 - Train Accuracy: 0.988, Validation Accuracy: 0.978, Loss: 0.019 Epoch 4 Batch 1072/1077 - Train Accuracy: 0.985, Validation Accuracy: 0.978, Loss: 0.012 Epoch 4 Batch 1073/1077 - Train Accuracy: 0.990, Validation Accuracy: 0.978, Loss: 0.010 Epoch 4 Batch 1074/1077 - Train Accuracy: 0.984, Validation Accuracy: 0.982, Loss: 0.009 Epoch 4 Batch 1075/1077 - Train Accuracy: 0.981, Validation Accuracy: 0.982, Loss: 0.011
Model Trained and Saved
###Markdown Save Parameters
Save the `batch_size` and `save_path` parameters for inference.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
###Output
_____no_output_____
###Markdown Checkpoint
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
###Output
_____no_output_____
###Markdown Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.
- Convert the sentence to lowercase
- Convert words into ids using `vocab_to_int`
- Convert words not in the vocabulary to the `<UNK>` word id.
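###Markdown
The `<UNK>` fallback in the last step is just a dictionary lookup with a default value. A minimal sketch, assuming a toy vocabulary (`toy_vocab_to_int` below is made up; the real `vocab_to_int` comes from `helper.load_preprocess()`):

```python
# Toy vocabulary for illustration only; the real vocab_to_int is loaded from the preprocessed data
toy_vocab_to_int = {'<UNK>': 2, 'he': 5, 'saw': 7, 'truck': 9}

def to_ids(sentence, vocab_to_int):
    # Lowercase, split on whitespace, and fall back to the <UNK> id for unknown words
    return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]

print(to_ids('He saw a TRUCK', toy_vocab_to_int))  # 'a' is out of vocabulary, so it maps to 2
```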
###Code
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    # TODO: Implement Function
    # Lowercase the sentence and map each word to its id, using <UNK> for out-of-vocabulary words
    word_ids_seq = [vocab_to_int.get(word, vocab_to_int['<UNK>'])
                    for word in sentence.lower().split()]
    return word_ids_seq


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
###Output
Tests Passed
###Markdown Translate
This will translate `translate_sentence` from English to French.
###Code
translate_sentence = 'he saw a old yellow truck .'

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_path + '.meta')
    loader.restore(sess, load_path)

    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('logits:0')
    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

    translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]

print('Input')
print('  Word Ids:      {}'.format([i for i in translate_sentence]))
print('  English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))

print('\nPrediction')
print('  Word Ids:      {}'.format([i for i in np.argmax(translate_logits, 1)]))
print('  French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
###Output
Input
  Word Ids:      [145, 106, 89, 113, 164, 45, 160]
  English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.']

Prediction
  Word Ids:      [326, 219, 283, 281, 287, 19, 293, 143, 1]
  French Words: ['il', 'a', 'vu', 'un', 'vieux', 'camion', 'jaune', '.', '<EOS>']
###Markdown Language Translation
In this project, you'll take a peek into the realm of neural network machine translation. You'll train a sequence-to-sequence model on a dataset of English and French sentences, and the model will be able to translate new English sentences to French.
Get the Data
Since translating the whole English language to French would take a large amount of training time, we have provided you with a small portion of the English corpus.
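###Markdown
The Translate cell above turns decoder logits into words with `np.argmax` over the vocabulary axis at each step. A minimal sketch of that decoding step; the logits and the `int_to_vocab` mapping below are invented for illustration and are not the real model outputs:

```python
import numpy as np

# Hypothetical logits: 3 decoder steps over a 4-word vocabulary
translate_logits = np.array([
    [0.1, 2.0, 0.3, 0.1],  # step 0: highest score at id 1
    [0.0, 0.2, 3.1, 0.4],  # step 1: highest score at id 2
    [1.5, 0.1, 0.2, 0.9],  # step 2: highest score at id 0
])
int_to_vocab = {0: '<EOS>', 1: 'il', 2: 'dort', 3: 'bien'}  # toy mapping, not the real one

word_ids = np.argmax(translate_logits, 1)    # best word id per decoder step
print(list(word_ids))                        # [1, 2, 0]
print([int_to_vocab[i] for i in word_ids])   # ['il', 'dort', '<EOS>']
```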
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests from os.path import isdir if not isdir('checkpoints'): !mkdir checkpoints source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown 探索数据研究 view_sentence_range,查看并熟悉该数据的不同部分。 ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]] )) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape .
the united states is sometimes busy during january , and it is sometimes warm in november .

French sentences 0 to 10:
new jersey est parfois calme pendant l' automne , et il est neigeux en avril .
les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .
california est généralement calme en mars , et il est généralement chaud en juin .
les états-unis est parfois légère en juin , et il fait froid en septembre .
votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .
son fruit préféré est l'orange , mais mon préféré est le raisin .
paris est relaxant en décembre , mais il est généralement froid en juillet .
new jersey est occupé au printemps , et il est jamais chaude en mars .
notre fruit est moins aimé le citron , mais mon moins aimé est le raisin .
les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .
###Markdown Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must first turn the text into numbers so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of each sentence from `target_text`. This will help the neural network predict where the sentence should end.
You can get the `<EOS>` word id by doing:
```python
target_vocab_to_int['<EOS>']
```
You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`.
###Code
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function source_sentences = source_text.split('\n') target_sentences = target_text.split('\n') source_word_to_id, target_word_to_id = [], [] for sentence in source_sentences: words = sentence.split() source_word_to_id.append([source_vocab_to_int[word] for word in words]) for sentence in target_sentences: words = sentence.split() + ['<EOS>'] target_word_to_id.append([target_vocab_to_int[word] for word in words]) return source_word_to_id, target_word_to_id """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed
###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file.
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____
###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____
###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU.
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0. You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found.
Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.0.1
###Markdown Build the Neural NetworkYou'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions:- `model_inputs`- `process_decoding_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter (rank 2).- Targets placeholder (rank 2).- Learning rate placeholder (rank 0).- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter (rank 0).Return the placeholders in the following tuple: (input, targets, learning rate, keep probability)
###Code def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ # TODO: Implement Function input = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None]) learning_rate = tf.placeholder(tf.float32) keep_probability = tf.placeholder(tf.float32, name='keep_prob') return input, targets, learning_rate, keep_probability """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed
###Markdown Process Decoding InputImplement `process_decoding_input` using TensorFlow to remove the last word id from each batch in `target_data` and concat the GO ID to the beginning of each batch.
###Code def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function # tf.strided_slice(tensor, begin, end, strides) extracts a slice of a tensor # tf.fill() creates a tensor filled with a scalar value # tf.concat() with axis 1 concatenates the tensors column-wise ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) decod_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return decod_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) ###Output Tests Passed
###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn).
###Code def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # TODO: Implement Function lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob = keep_prob) enc_cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) encode_output, encode_state = tf.nn.dynamic_rnn(enc_cell, rnn_inputs, dtype = tf.float32) return encode_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed
###Markdown Decoding - TrainingCreate training logits using [`tf.contrib.seq2seq.simple_decoder_fn_train()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_train) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder). Apply the `output_fn` to the [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder) outputs.
###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TenorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ # TODO: Implement Function dec_fn_train =
tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob = keep_prob) outputs_dec_train, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(cell = dec_cell, decoder_fn = dec_fn_train, inputs = dec_embed_input, sequence_length = sequence_length, scope = decoding_scope) train_logits = output_fn(outputs_dec_train) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed
###Markdown Decoding - InferenceCreate inference logits using [`tf.contrib.seq2seq.simple_decoder_fn_inference()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_inference) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder).
###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ # TODO: Implement Function dec_fn_infer = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size) dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob = keep_prob) outputs_infer, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(cell = dec_cell, decoder_fn = dec_fn_infer, scope = decoding_scope)
infer_logits = outputs_infer return infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed
###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.- Create a decoding RNN cell using `rnn_size` and `num_layers`.- Create an output function using a [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) to transform its input, the logits, into class logits.- Use the `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)` function to get the training logits.- Use the `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
###Code def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob = keep_prob) dec_cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None) with tf.variable_scope('decoding') as decoding_scope: train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
sequence_length, decoding_scope, output_fn, keep_prob) with tf.variable_scope('decoding', reuse = True) as decoding_scope: inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, sequence_length, vocab_size, decoding_scope, output_fn, keep_prob) return train_logits, inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed
###Markdown Build the Neural NetworkApply the functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)`.- Process target data using the `process_decoding_input(target_data, target_vocab_to_int, batch_size)` function.- Apply embedding to the target data for the decoder.- Decode the encoded input using `decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)`.
###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function enc_input_data = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) enc_state = encoding_layer(enc_input_data, rnn_size, num_layers, keep_prob) dec_input = process_decoding_input(target_data,
target_vocab_to_int, batch_size) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_input_data = tf.nn.embedding_lookup(dec_embeddings, dec_input) training_dec_output, inference_dec_output = decoding_layer(dec_input_data, dec_embeddings, enc_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) train_logits, infer_logits = training_dec_output, inference_dec_output return train_logits, infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed
###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability.
###Code # Number of Epochs epochs = 10 # Batch Size batch_size = 256 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 128 decoding_embedding_size = 128 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.5 ###Output _____no_output_____
###Markdown Build the GraphBuild the graph using the neural network you implemented.
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size,
decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____
###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a low loss, check out our forums to see if anyone is having the same problem.
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 0/538 - Train Accuracy: 0.234, Validation Accuracy: 0.316, Loss: 5.892 Epoch 0 Batch 1/538 - Train Accuracy: 0.231, Validation Accuracy: 0.316, Loss: 5.578 Epoch 0 Batch 2/538 - Train Accuracy: 0.252, Validation Accuracy: 0.316, Loss: 5.195 Epoch 0 Batch 3/538 - Train Accuracy: 0.229, Validation Accuracy: 0.316, Loss: 4.896 Epoch 0 Batch 4/538 - Train Accuracy: 0.237, Validation Accuracy: 0.316, Loss: 4.701 Epoch 0 Batch 5/538 - Train Accuracy: 0.275, Validation Accuracy: 0.328, Loss: 4.500 Epoch 0 Batch 6/538 - Train Accuracy: 0.301, Validation Accuracy: 0.345, Loss: 4.346 Epoch 0 Batch 7/538 - Train Accuracy: 0.280, Validation Accuracy: 0.346, Loss: 4.297 Epoch 0 Batch 8/538 - Train Accuracy: 0.280, Validation Accuracy: 0.347, Loss: 4.201 Epoch 0 Batch 9/538 - Train Accuracy: 0.279, Validation Accuracy: 0.347, Loss: 4.090 Epoch 0 Batch 10/538 - Train Accuracy: 0.260, Validation Accuracy: 0.347, Loss: 4.080 Epoch 0 Batch 11/538 - Train Accuracy: 0.273, Validation Accuracy: 0.347, Loss: 3.954 Epoch 0 Batch 12/538 - Train Accuracy: 0.285, Validation Accuracy: 0.363, Loss: 3.913 Epoch 0 Batch 13/538 - Train Accuracy: 0.357, Validation Accuracy: 0.384, Loss: 3.605 Epoch 0 Batch 14/538 - Train Accuracy: 0.323, Validation Accuracy: 0.393, Loss: 3.724 Epoch 0 Batch 15/538 - Train Accuracy: 0.366, Validation Accuracy: 0.394, Loss: 3.471 Epoch 0 Batch 16/538 - Train Accuracy: 0.351, Validation Accuracy: 0.393, Loss: 3.455 Epoch 0 Batch 17/538 - Train Accuracy: 0.331, Validation Accuracy: 0.393, Loss: 3.494 Epoch 0 Batch 18/538 - Train Accuracy: 0.321, Validation 
Accuracy: 0.394, Loss: 3.510 Epoch 0 Batch 19/538 - Train Accuracy: 0.325, Validation Accuracy: 0.398, Loss: 3.460 Epoch 0 Batch 20/538 - Train Accuracy: 0.358, Validation Accuracy: 0.402, Loss: 3.273 Epoch 0 Batch 21/538 - Train Accuracy: 0.294, Validation Accuracy: 0.404, Loss: 3.522 Epoch 0 Batch 22/538 - Train Accuracy: 0.348, Validation Accuracy: 0.411, Loss: 3.304 Epoch 0 Batch 23/538 - Train Accuracy: 0.363, Validation Accuracy: 0.419, Loss: 3.229 Epoch 0 Batch 24/538 - Train Accuracy: 0.372, Validation Accuracy: 0.422, Loss: 3.168 Epoch 0 Batch 25/538 - Train Accuracy: 0.370, Validation Accuracy: 0.425, Loss: 3.213 Epoch 0 Batch 26/538 - Train Accuracy: 0.370, Validation Accuracy: 0.431, Loss: 3.193 Epoch 0 Batch 27/538 - Train Accuracy: 0.371, Validation Accuracy: 0.427, Loss: 3.142 Epoch 0 Batch 28/538 - Train Accuracy: 0.425, Validation Accuracy: 0.429, Loss: 2.877 Epoch 0 Batch 29/538 - Train Accuracy: 0.378, Validation Accuracy: 0.421, Loss: 3.063 Epoch 0 Batch 30/538 - Train Accuracy: 0.374, Validation Accuracy: 0.439, Loss: 3.105 Epoch 0 Batch 31/538 - Train Accuracy: 0.405, Validation Accuracy: 0.440, Loss: 2.923 Epoch 0 Batch 32/538 - Train Accuracy: 0.385, Validation Accuracy: 0.431, Loss: 3.030 Epoch 0 Batch 33/538 - Train Accuracy: 0.418, Validation Accuracy: 0.454, Loss: 2.944 Epoch 0 Batch 34/538 - Train Accuracy: 0.399, Validation Accuracy: 0.447, Loss: 2.957 Epoch 0 Batch 35/538 - Train Accuracy: 0.381, Validation Accuracy: 0.447, Loss: 3.009 Epoch 0 Batch 36/538 - Train Accuracy: 0.410, Validation Accuracy: 0.451, Loss: 2.844 Epoch 0 Batch 37/538 - Train Accuracy: 0.399, Validation Accuracy: 0.451, Loss: 2.894 Epoch 0 Batch 38/538 - Train Accuracy: 0.388, Validation Accuracy: 0.465, Loss: 2.977 Epoch 0 Batch 39/538 - Train Accuracy: 0.385, Validation Accuracy: 0.440, Loss: 2.905 Epoch 0 Batch 40/538 - Train Accuracy: 0.464, Validation Accuracy: 0.464, Loss: 2.658 Epoch 0 Batch 41/538 - Train Accuracy: 0.412, Validation Accuracy: 0.469, 
Loss: 2.873 Epoch 0 Batch 42/538 - Train Accuracy: 0.410, Validation Accuracy: 0.462, Loss: 2.826 Epoch 0 Batch 43/538 - Train Accuracy: 0.423, Validation Accuracy: 0.472, Loss: 2.836 Epoch 0 Batch 44/538 - Train Accuracy: 0.409, Validation Accuracy: 0.469, Loss: 2.876 Epoch 0 Batch 45/538 - Train Accuracy: 0.443, Validation Accuracy: 0.470, Loss: 2.668 Epoch 0 Batch 46/538 - Train Accuracy: 0.424, Validation Accuracy: 0.479, Loss: 2.791 Epoch 0 Batch 47/538 - Train Accuracy: 0.448, Validation Accuracy: 0.479, Loss: 2.638 Epoch 0 Batch 48/538 - Train Accuracy: 0.456, Validation Accuracy: 0.477, Loss: 2.614 Epoch 0 Batch 49/538 - Train Accuracy: 0.408, Validation Accuracy: 0.481, Loss: 2.808 Epoch 0 Batch 50/538 - Train Accuracy: 0.428, Validation Accuracy: 0.478, Loss: 2.668 Epoch 0 Batch 51/538 - Train Accuracy: 0.356, Validation Accuracy: 0.449, Loss: 2.942 Epoch 0 Batch 52/538 - Train Accuracy: 0.432, Validation Accuracy: 0.475, Loss: 2.706 Epoch 0 Batch 53/538 - Train Accuracy: 0.475, Validation Accuracy: 0.485, Loss: 2.471 Epoch 0 Batch 54/538 - Train Accuracy: 0.437, Validation Accuracy: 0.480, Loss: 2.647 Epoch 0 Batch 55/538 - Train Accuracy: 0.420, Validation Accuracy: 0.483, Loss: 2.679 Epoch 0 Batch 56/538 - Train Accuracy: 0.443, Validation Accuracy: 0.474, Loss: 2.568 Epoch 0 Batch 57/538 - Train Accuracy: 0.403, Validation Accuracy: 0.475, Loss: 2.700 Epoch 0 Batch 58/538 - Train Accuracy: 0.407, Validation Accuracy: 0.485, Loss: 2.694 Epoch 0 Batch 59/538 - Train Accuracy: 0.431, Validation Accuracy: 0.492, Loss: 2.639 Epoch 0 Batch 60/538 - Train Accuracy: 0.426, Validation Accuracy: 0.475, Loss: 2.602 Epoch 0 Batch 61/538 - Train Accuracy: 0.426, Validation Accuracy: 0.477, Loss: 2.557 Epoch 0 Batch 62/538 - Train Accuracy: 0.447, Validation Accuracy: 0.494, Loss: 2.511 Epoch 0 Batch 63/538 - Train Accuracy: 0.464, Validation Accuracy: 0.487, Loss: 2.420 Epoch 0 Batch 64/538 - Train Accuracy: 0.462, Validation Accuracy: 0.492, Loss: 2.430 Epoch 0 
Batch 65/538 - Train Accuracy: 0.406, Validation Accuracy: 0.473, Loss: 2.575 Epoch 0 Batch 66/538 - Train Accuracy: 0.441, Validation Accuracy: 0.477, Loss: 2.405 Epoch 0 Batch 67/538 - Train Accuracy: 0.427, Validation Accuracy: 0.486, Loss: 2.477 Epoch 0 Batch 68/538 - Train Accuracy: 0.469, Validation Accuracy: 0.494, Loss: 2.353 Epoch 0 Batch 69/538 - Train Accuracy: 0.373, Validation Accuracy: 0.441, Loss: 2.532 Epoch 0 Batch 70/538 - Train Accuracy: 0.430, Validation Accuracy: 0.454, Loss: 2.601 Epoch 0 Batch 71/538 - Train Accuracy: 0.415, Validation Accuracy: 0.471, Loss: 2.572 Epoch 0 Batch 72/538 - Train Accuracy: 0.425, Validation Accuracy: 0.456, Loss: 2.401 Epoch 0 Batch 73/538 - Train Accuracy: 0.392, Validation Accuracy: 0.464, Loss: 2.539 Epoch 0 Batch 74/538 - Train Accuracy: 0.470, Validation Accuracy: 0.490, Loss: 2.403 Epoch 0 Batch 75/538 - Train Accuracy: 0.460, Validation Accuracy: 0.481, Loss: 2.375 Epoch 0 Batch 76/538 - Train Accuracy: 0.422, Validation Accuracy: 0.490, Loss: 2.583 Epoch 0 Batch 77/538 - Train Accuracy: 0.392, Validation Accuracy: 0.462, Loss: 2.440 Epoch 0 Batch 78/538 - Train Accuracy: 0.450, Validation Accuracy: 0.485, Loss: 2.433 Epoch 0 Batch 79/538 - Train Accuracy: 0.445, Validation Accuracy: 0.479, Loss: 2.300 Epoch 0 Batch 80/538 - Train Accuracy: 0.439, Validation Accuracy: 0.488, Loss: 2.453 Epoch 0 Batch 81/538 - Train Accuracy: 0.425, Validation Accuracy: 0.490, Loss: 2.424 Epoch 0 Batch 82/538 - Train Accuracy: 0.417, Validation Accuracy: 0.472, Loss: 2.335 Epoch 0 Batch 83/538 - Train Accuracy: 0.430, Validation Accuracy: 0.495, Loss: 2.418 Epoch 0 Batch 84/538 - Train Accuracy: 0.451, Validation Accuracy: 0.484, Loss: 2.289 Epoch 0 Batch 85/538 - Train Accuracy: 0.468, Validation Accuracy: 0.493, Loss: 2.210 Epoch 0 Batch 86/538 - Train Accuracy: 0.407, Validation Accuracy: 0.466, Loss: 2.386 Epoch 0 Batch 87/538 - Train Accuracy: 0.398, Validation Accuracy: 0.459, Loss: 2.330 Epoch 0 Batch 88/538 - Train 
Accuracy: 0.432, Validation Accuracy: 0.481, Loss: 2.328 ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference.
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____
###Markdown Check Point
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____
###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int`- Convert words not in the vocabulary to the `<UNK>` word id
###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function list_word_id = [] for word in sentence.lower().split(): if word in vocab_to_int: list_word_id.append(vocab_to_int[word]) else: list_word_id.append(vocab_to_int['<UNK>']) return list_word_id """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed
###Markdown TranslateThis will translate `translate_sentence` from English to French.
###Code translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) ###Output _____no_output_____ ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. 
###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . 
california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .
###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of each sentence from `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```python target_vocab_to_int['<EOS>'] ```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`.
###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_text.split('\n')] target_id_text = [[target_vocab_to_int[word] for word in str(sentence + ' <EOS>').split()] for sentence in target_text.split('\n')] return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. 
Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.0.0 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions:- `model_inputs`- `process_decoding_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ inputs = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='target') learning_rate = tf.placeholder(tf.float32, name='learning_rate') keep_prob = tf.placeholder(tf.float32, name='keep_prob') return inputs, targets, learning_rate, keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoding InputImplement `process_decoding_input` using TensorFlow to remove the last word id from each batch in `target_data` and concatenate the `<GO>` ID to the beginning of each batch.
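As a hedged illustration of this transformation (plain NumPy rather than the TensorFlow ops used in the implementation, with the `<GO>` id assumed to be 0 for the toy batch), dropping the last column and prepending `<GO>` looks like:

```python
import numpy as np

# Toy batch of target word ids; go_id = 0 is an assumption for illustration.
target_batch = np.array([[4, 5, 6],
                         [7, 8, 9]])
go_id = 0

# Equivalent of the strided-slice + concat: drop last column, prepend <GO>.
ending = target_batch[:, :-1]
dec_input = np.concatenate(
    [np.full((target_batch.shape[0], 1), go_id), ending], axis=1)
print(dec_input)
# [[0 4 5]
#  [0 7 8]]
```

The graph version performs the same slice-and-concatenate with `tf.strided_slice` and `tf.concat`.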
###Code def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn). ###Code def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # Stack dropout-wrapped LSTM cells and return only the final encoder state. lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) _, state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32) return state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate training logits using [`tf.contrib.seq2seq.simple_decoder_fn_train()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_train) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder).
Apply the `output_fn` to the [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder) outputs. ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) outputs, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope) train_logits = output_fn(outputs) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference logits using [`tf.contrib.seq2seq.simple_decoder_fn_inference()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_inference) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder).
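For intuition only: inference decoding has no target inputs, so each step's predicted word is fed back in until `<EOS>` or a maximum length is reached. A toy sketch with a hypothetical `step` transition function (the ids and transitions here are made up, not the model's):

```python
GO, EOS, MAX_LEN = 0, 1, 10

def step(token):
    # Hypothetical decoder step: a deterministic toy transition table
    # standing in for "run the RNN cell and take the argmax".
    return {0: 5, 5: 6, 6: 1}.get(token, 1)

tokens, current = [], GO
for _ in range(MAX_LEN):
    current = step(current)      # feed the previous prediction back in
    if current == EOS:           # stop once <EOS> is produced
        break
    tokens.append(current)
print(tokens)
# [5, 6]
```

`simple_decoder_fn_inference` encapsulates exactly this feed-back loop, including the embedding lookup of each predicted id.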
###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS ID :param maximum_length: Maximum length of the decoded sequence :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings, \ start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size) inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decoder_fn, scope=decoding_scope) return inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.- Create an RNN cell for decoding using `rnn_size` and `num_layers`.- Create the output function using [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) to transform its input, the decoder outputs, into class logits.- Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)` function to get the training logits.- Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
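Conceptually, the `output_fn` is a single fully-connected projection from `rnn_size` to `vocab_size`, shared across every time step and (via the variable scope) between training and inference. A minimal NumPy sketch with made-up shapes:

```python
import numpy as np

# Hypothetical sizes chosen only for illustration.
rnn_size, vocab_size, batch, steps = 4, 6, 2, 3
rng = np.random.default_rng(0)
outputs = rng.normal(size=(batch, steps, rnn_size))  # decoder RNN outputs

# One weight matrix shared across all time steps, like the scoped dense layer.
W = rng.normal(size=(rnn_size, vocab_size))
b = np.zeros(vocab_size)
logits = outputs @ W + b  # class logits over the target vocabulary
print(logits.shape)
# (2, 3, 6)
```

Sharing `W` is what variable-scope reuse buys you: training and inference score words with the same projection.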
###Code def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # 1. Create the decoder RNN cell lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) drop_cell = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) dec_cell = tf.contrib.rnn.MultiRNNCell([drop_cell] * num_layers) # 2. Create the output function with tf.variable_scope('decoding') as decoding_scope: output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope) # 3. Get the training logits train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) # 4. Get the inference logits, reusing the decoding scope
with tf.variable_scope('decoding', reuse=True) as decoding_scope: inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], \ target_vocab_to_int['<EOS>'], sequence_length, vocab_size, decoding_scope, output_fn, keep_prob) return train_logits, inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)`.- Process target data using your `process_decoding_input(target_data, target_vocab_to_int, batch_size)` function.- Apply embedding to the target data for the decoder.- Decode the encoded input using your `decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)`. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ # 1. Apply embedding to the input data for the encoder.
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) # 2. Encode the embedded input. encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob) # 3. Process the target data for decoding. dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size) # 4. Apply embedding to the target data for the decoder. dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # 5. Decode the encoded input. train_logits, inference_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size,\ sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return train_logits, inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability ###Code # Number of Epochs epochs = 20 # Batch Size batch_size = 512 # RNN Size rnn_size = 200 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 200 decoding_embedding_size = 200 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.5 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented.
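One detail worth noting in the graph below is gradient clipping: each gradient tensor is clamped element-wise to [-1, 1] before Adam applies it, which guards against exploding gradients in the RNN. A NumPy stand-in for what `tf.clip_by_value` does to a single gradient (toy values, for illustration only):

```python
import numpy as np

grad = np.array([-3.0, -0.5, 0.2, 2.5])   # a hypothetical raw gradient
capped = np.clip(grad, -1.0, 1.0)          # element-wise clamp to [-1, 1]
print(capped)
# [-1.  -0.5  0.2  1. ]
```

Note this clips each element independently; it is not norm clipping.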
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target_batch, [(0,0),(0,max_seq - target_batch.shape[1]), (0,0)], 'constant') if max_seq - batch_train_logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 0/269 - Train Accuracy: 0.242, Validation Accuracy: 0.310, Loss: 5.937 Epoch 0 Batch 1/269 - Train Accuracy: 0.233, Validation Accuracy: 0.310, Loss: 5.651 Epoch 0 Batch 2/269 - Train Accuracy: 
0.266, Validation Accuracy: 0.310, Loss: 5.343 Epoch 0 Batch 3/269 - Train Accuracy: 0.244, Validation Accuracy: 0.310, Loss: 5.097 Epoch 0 Batch 4/269 - Train Accuracy: 0.232, Validation Accuracy: 0.310, Loss: 4.851 Epoch 0 Batch 5/269 - Train Accuracy: 0.233, Validation Accuracy: 0.310, Loss: 4.637 Epoch 0 Batch 6/269 - Train Accuracy: 0.290, Validation Accuracy: 0.321, Loss: 4.283 Epoch 0 Batch 7/269 - Train Accuracy: 0.292, Validation Accuracy: 0.326, Loss: 4.161 Epoch 0 Batch 8/269 - Train Accuracy: 0.276, Validation Accuracy: 0.341, Loss: 4.199 Epoch 0 Batch 9/269 - Train Accuracy: 0.301, Validation Accuracy: 0.342, Loss: 3.944 Epoch 0 Batch 10/269 - Train Accuracy: 0.271, Validation Accuracy: 0.343, Loss: 3.965 Epoch 0 Batch 11/269 - Train Accuracy: 0.307, Validation Accuracy: 0.343, Loss: 3.746 Epoch 0 Batch 12/269 - Train Accuracy: 0.280, Validation Accuracy: 0.343, Loss: 3.804 Epoch 0 Batch 13/269 - Train Accuracy: 0.344, Validation Accuracy: 0.343, Loss: 3.441 Epoch 0 Batch 14/269 - Train Accuracy: 0.329, Validation Accuracy: 0.363, Loss: 3.546 Epoch 0 Batch 15/269 - Train Accuracy: 0.327, Validation Accuracy: 0.371, Loss: 3.533 Epoch 0 Batch 16/269 - Train Accuracy: 0.342, Validation Accuracy: 0.372, Loss: 3.454 Epoch 0 Batch 17/269 - Train Accuracy: 0.334, Validation Accuracy: 0.372, Loss: 3.430 Epoch 0 Batch 18/269 - Train Accuracy: 0.303, Validation Accuracy: 0.372, Loss: 3.530 Epoch 0 Batch 19/269 - Train Accuracy: 0.370, Validation Accuracy: 0.372, Loss: 3.218 Epoch 0 Batch 20/269 - Train Accuracy: 0.307, Validation Accuracy: 0.369, Loss: 3.431 Epoch 0 Batch 21/269 - Train Accuracy: 0.310, Validation Accuracy: 0.370, Loss: 3.440 Epoch 0 Batch 22/269 - Train Accuracy: 0.346, Validation Accuracy: 0.372, Loss: 3.254 Epoch 0 Batch 23/269 - Train Accuracy: 0.353, Validation Accuracy: 0.372, Loss: 3.239 Epoch 0 Batch 24/269 - Train Accuracy: 0.305, Validation Accuracy: 0.372, Loss: 3.370 Epoch 0 Batch 25/269 - Train Accuracy: 0.312, Validation Accuracy: 
0.374, Loss: 3.350 Epoch 0 Batch 26/269 - Train Accuracy: 0.380, Validation Accuracy: 0.379, Loss: 3.046 Epoch 0 Batch 27/269 - Train Accuracy: 0.347, Validation Accuracy: 0.379, Loss: 3.163 Epoch 0 Batch 28/269 - Train Accuracy: 0.302, Validation Accuracy: 0.379, Loss: 3.307 Epoch 0 Batch 29/269 - Train Accuracy: 0.320, Validation Accuracy: 0.385, Loss: 3.255 Epoch 0 Batch 30/269 - Train Accuracy: 0.349, Validation Accuracy: 0.388, Loss: 3.085 Epoch 0 Batch 31/269 - Train Accuracy: 0.361, Validation Accuracy: 0.391, Loss: 3.046 Epoch 0 Batch 32/269 - Train Accuracy: 0.362, Validation Accuracy: 0.401, Loss: 3.056 Epoch 0 Batch 33/269 - Train Accuracy: 0.366, Validation Accuracy: 0.395, Loss: 2.976 Epoch 0 Batch 34/269 - Train Accuracy: 0.366, Validation Accuracy: 0.397, Loss: 2.977 Epoch 0 Batch 35/269 - Train Accuracy: 0.379, Validation Accuracy: 0.408, Loss: 2.961 Epoch 0 Batch 36/269 - Train Accuracy: 0.386, Validation Accuracy: 0.418, Loss: 2.963 Epoch 0 Batch 37/269 - Train Accuracy: 0.383, Validation Accuracy: 0.408, Loss: 2.944 Epoch 0 Batch 38/269 - Train Accuracy: 0.384, Validation Accuracy: 0.415, Loss: 2.935 Epoch 0 Batch 39/269 - Train Accuracy: 0.390, Validation Accuracy: 0.421, Loss: 2.894 Epoch 0 Batch 40/269 - Train Accuracy: 0.362, Validation Accuracy: 0.422, Loss: 3.030 Epoch 0 Batch 41/269 - Train Accuracy: 0.388, Validation Accuracy: 0.419, Loss: 2.873 Epoch 0 Batch 42/269 - Train Accuracy: 0.425, Validation Accuracy: 0.426, Loss: 2.736 Epoch 0 Batch 43/269 - Train Accuracy: 0.378, Validation Accuracy: 0.429, Loss: 2.944 Epoch 0 Batch 44/269 - Train Accuracy: 0.406, Validation Accuracy: 0.430, Loss: 2.811 Epoch 0 Batch 45/269 - Train Accuracy: 0.371, Validation Accuracy: 0.430, Loss: 2.938 Epoch 0 Batch 46/269 - Train Accuracy: 0.384, Validation Accuracy: 0.443, Loss: 2.946 Epoch 0 Batch 47/269 - Train Accuracy: 0.446, Validation Accuracy: 0.444, Loss: 2.637 Epoch 0 Batch 48/269 - Train Accuracy: 0.421, Validation Accuracy: 0.445, Loss: 2.730 
Epoch 0 Batch 49/269 - Train Accuracy: 0.390, Validation Accuracy: 0.444, Loss: 2.863 Epoch 0 Batch 50/269 - Train Accuracy: 0.393, Validation Accuracy: 0.443, Loss: 2.850 Epoch 0 Batch 51/269 - Train Accuracy: 0.422, Validation Accuracy: 0.452, Loss: 2.733 Epoch 0 Batch 52/269 - Train Accuracy: 0.431, Validation Accuracy: 0.452, Loss: 2.687 Epoch 0 Batch 53/269 - Train Accuracy: 0.397, Validation Accuracy: 0.449, Loss: 2.791 Epoch 0 Batch 54/269 - Train Accuracy: 0.398, Validation Accuracy: 0.449, Loss: 2.783 Epoch 0 Batch 55/269 - Train Accuracy: 0.431, Validation Accuracy: 0.455, Loss: 2.640 Epoch 0 Batch 56/269 - Train Accuracy: 0.435, Validation Accuracy: 0.454, Loss: 2.620 Epoch 0 Batch 57/269 - Train Accuracy: 0.435, Validation Accuracy: 0.458, Loss: 2.611 Epoch 0 Batch 58/269 - Train Accuracy: 0.435, Validation Accuracy: 0.456, Loss: 2.591 Epoch 0 Batch 59/269 - Train Accuracy: 0.429, Validation Accuracy: 0.456, Loss: 2.578 Epoch 0 Batch 60/269 - Train Accuracy: 0.445, Validation Accuracy: 0.462, Loss: 2.506 Epoch 0 Batch 61/269 - Train Accuracy: 0.468, Validation Accuracy: 0.463, Loss: 2.440 Epoch 0 Batch 62/269 - Train Accuracy: 0.465, Validation Accuracy: 0.467, Loss: 2.450 Epoch 0 Batch 63/269 - Train Accuracy: 0.444, Validation Accuracy: 0.471, Loss: 2.515 Epoch 0 Batch 64/269 - Train Accuracy: 0.446, Validation Accuracy: 0.474, Loss: 2.516 Epoch 0 Batch 65/269 - Train Accuracy: 0.450, Validation Accuracy: 0.474, Loss: 2.478 Epoch 0 Batch 66/269 - Train Accuracy: 0.467, Validation Accuracy: 0.479, Loss: 2.403 Epoch 0 Batch 67/269 - Train Accuracy: 0.449, Validation Accuracy: 0.474, Loss: 2.476 Epoch 0 Batch 68/269 - Train Accuracy: 0.445, Validation Accuracy: 0.477, Loss: 2.465 Epoch 0 Batch 69/269 - Train Accuracy: 0.414, Validation Accuracy: 0.474, Loss: 2.584 Epoch 0 Batch 70/269 - Train Accuracy: 0.460, Validation Accuracy: 0.482, Loss: 2.411 Epoch 0 Batch 71/269 - Train Accuracy: 0.440, Validation Accuracy: 0.487, Loss: 2.534 Epoch 0 Batch 72/269 
- Train Accuracy: 0.475, Validation Accuracy: 0.481, Loss: 2.321 Epoch 0 Batch 73/269 - Train Accuracy: 0.466, Validation Accuracy: 0.487, Loss: 2.384 Epoch 0 Batch 74/269 - Train Accuracy: 0.443, Validation Accuracy: 0.487, Loss: 2.448 Epoch 0 Batch 75/269 - Train Accuracy: 0.457, Validation Accuracy: 0.484, Loss: 2.339 Epoch 0 Batch 76/269 - Train Accuracy: 0.456, Validation Accuracy: 0.492, Loss: 2.378 Epoch 0 Batch 77/269 - Train Accuracy: 0.456, Validation Accuracy: 0.478, Loss: 2.322 Epoch 0 Batch 78/269 - Train Accuracy: 0.455, Validation Accuracy: 0.484, Loss: 2.340 Epoch 0 Batch 79/269 - Train Accuracy: 0.443, Validation Accuracy: 0.477, Loss: 2.307 Epoch 0 Batch 80/269 - Train Accuracy: 0.467, Validation Accuracy: 0.480, Loss: 2.233 Epoch 0 Batch 81/269 - Train Accuracy: 0.460, Validation Accuracy: 0.487, Loss: 2.297 Epoch 0 Batch 82/269 - Train Accuracy: 0.462, Validation Accuracy: 0.482, Loss: 2.222 Epoch 0 Batch 83/269 - Train Accuracy: 0.467, Validation Accuracy: 0.483, Loss: 2.207 Epoch 0 Batch 84/269 - Train Accuracy: 0.448, Validation Accuracy: 0.476, Loss: 2.235 Epoch 0 Batch 85/269 - Train Accuracy: 0.454, Validation Accuracy: 0.480, Loss: 2.247 Epoch 0 Batch 86/269 - Train Accuracy: 0.442, Validation Accuracy: 0.480, Loss: 2.239 Epoch 0 Batch 87/269 - Train Accuracy: 0.417, Validation Accuracy: 0.488, Loss: 2.362 Epoch 0 Batch 88/269 - Train Accuracy: 0.457, Validation Accuracy: 0.485, Loss: 2.191 Epoch 0 Batch 89/269 - Train Accuracy: 0.457, Validation Accuracy: 0.482, Loss: 2.171 Epoch 0 Batch 90/269 - Train Accuracy: 0.418, Validation Accuracy: 0.482, Loss: 2.289 Epoch 0 Batch 91/269 - Train Accuracy: 0.442, Validation Accuracy: 0.471, Loss: 2.155 Epoch 0 Batch 92/269 - Train Accuracy: 0.462, Validation Accuracy: 0.489, Loss: 2.143 Epoch 0 Batch 93/269 - Train Accuracy: 0.475, Validation Accuracy: 0.485, Loss: 2.054 Epoch 0 Batch 94/269 - Train Accuracy: 0.437, Validation Accuracy: 0.464, Loss: 2.136 Epoch 0 Batch 95/269 - Train Accuracy: 
0.471, Validation Accuracy: 0.490, Loss: 2.118 Epoch 0 Batch 96/269 - Train Accuracy: 0.448, Validation Accuracy: 0.474, Loss: 2.105 Epoch 0 Batch 97/269 - Train Accuracy: 0.449, Validation Accuracy: 0.478, Loss: 2.096 Epoch 0 Batch 98/269 - Train Accuracy: 0.479, Validation Accuracy: 0.493, Loss: 2.049 Epoch 0 Batch 99/269 - Train Accuracy: 0.429, Validation Accuracy: 0.481, Loss: 2.183 Epoch 0 Batch 100/269 - Train Accuracy: 0.471, Validation Accuracy: 0.480, Loss: 2.010 Epoch 0 Batch 101/269 - Train Accuracy: 0.444, Validation Accuracy: 0.495, Loss: 2.151 Epoch 0 Batch 102/269 - Train Accuracy: 0.461, Validation Accuracy: 0.489, Loss: 2.031 Epoch 0 Batch 103/269 - Train Accuracy: 0.452, Validation Accuracy: 0.482, Loss: 2.023 Epoch 0 Batch 104/269 - Train Accuracy: 0.457, Validation Accuracy: 0.487, Loss: 2.029 Epoch 0 Batch 105/269 - Train Accuracy: 0.455, Validation Accuracy: 0.483, Loss: 2.024 Epoch 0 Batch 106/269 - Train Accuracy: 0.452, Validation Accuracy: 0.487, Loss: 2.019 Epoch 0 Batch 107/269 - Train Accuracy: 0.424, Validation Accuracy: 0.491, Loss: 2.102 Epoch 0 Batch 108/269 - Train Accuracy: 0.446, Validation Accuracy: 0.481, Loss: 1.975 Epoch 0 Batch 109/269 - Train Accuracy: 0.444, Validation Accuracy: 0.477, Loss: 1.971 Epoch 0 Batch 110/269 - Train Accuracy: 0.457, Validation Accuracy: 0.490, Loss: 1.954 Epoch 0 Batch 111/269 - Train Accuracy: 0.421, Validation Accuracy: 0.486, Loss: 2.085 Epoch 0 Batch 112/269 - Train Accuracy: 0.458, Validation Accuracy: 0.483, Loss: 1.923 Epoch 0 Batch 113/269 - Train Accuracy: 0.484, Validation Accuracy: 0.490, Loss: 1.842 Epoch 0 Batch 114/269 - Train Accuracy: 0.442, Validation Accuracy: 0.473, Loss: 1.918 Epoch 0 Batch 115/269 - Train Accuracy: 0.420, Validation Accuracy: 0.479, Loss: 1.977 Epoch 0 Batch 116/269 - Train Accuracy: 0.465, Validation Accuracy: 0.486, Loss: 1.899 Epoch 0 Batch 117/269 - Train Accuracy: 0.438, Validation Accuracy: 0.473, Loss: 1.892 Epoch 0 Batch 118/269 - Train Accuracy: 
0.477, Validation Accuracy: 0.482, Loss: 1.819 Epoch 0 Batch 119/269 - Train Accuracy: 0.440, Validation Accuracy: 0.486, Loss: 1.950 Epoch 0 Batch 120/269 - Train Accuracy: 0.421, Validation Accuracy: 0.472, Loss: 1.930 Epoch 0 Batch 121/269 - Train Accuracy: 0.456, Validation Accuracy: 0.487, Loss: 1.839 Epoch 0 Batch 122/269 - Train Accuracy: 0.462, Validation Accuracy: 0.482, Loss: 1.819 Epoch 0 Batch 123/269 - Train Accuracy: 0.414, Validation Accuracy: 0.477, Loss: 1.918 Epoch 0 Batch 124/269 - Train Accuracy: 0.455, Validation Accuracy: 0.485, Loss: 1.789 Epoch 0 Batch 125/269 - Train Accuracy: 0.449, Validation Accuracy: 0.481, Loss: 1.771 Epoch 0 Batch 126/269 - Train Accuracy: 0.466, Validation Accuracy: 0.485, Loss: 1.762 Epoch 0 Batch 127/269 - Train Accuracy: 0.424, Validation Accuracy: 0.475, Loss: 1.871 Epoch 0 Batch 128/269 - Train Accuracy: 0.464, Validation Accuracy: 0.482, Loss: 1.764 Epoch 0 Batch 129/269 - Train Accuracy: 0.454, Validation Accuracy: 0.485, Loss: 1.787 Epoch 0 Batch 130/269 - Train Accuracy: 0.412, Validation Accuracy: 0.479, Loss: 1.880 Epoch 0 Batch 131/269 - Train Accuracy: 0.445, Validation Accuracy: 0.493, Loss: 1.823 Epoch 0 Batch 132/269 - Train Accuracy: 0.448, Validation Accuracy: 0.482, Loss: 1.747 Epoch 0 Batch 133/269 - Train Accuracy: 0.466, Validation Accuracy: 0.498, Loss: 1.722 Epoch 0 Batch 134/269 - Train Accuracy: 0.433, Validation Accuracy: 0.487, Loss: 1.784 Epoch 0 Batch 135/269 - Train Accuracy: 0.424, Validation Accuracy: 0.485, Loss: 1.824 Epoch 0 Batch 136/269 - Train Accuracy: 0.428, Validation Accuracy: 0.486, Loss: 1.804 Epoch 0 Batch 137/269 - Train Accuracy: 0.427, Validation Accuracy: 0.481, Loss: 1.771 Epoch 0 Batch 138/269 - Train Accuracy: 0.454, Validation Accuracy: 0.493, Loss: 1.720 Epoch 0 Batch 139/269 - Train Accuracy: 0.459, Validation Accuracy: 0.476, Loss: 1.663 Epoch 0 Batch 140/269 - Train Accuracy: 0.456, Validation Accuracy: 0.478, Loss: 1.672 Epoch 0 Batch 141/269 - Train 
Accuracy: 0.454, Validation Accuracy: 0.491, Loss: 1.693 Epoch 0 Batch 142/269 - Train Accuracy: 0.452, Validation Accuracy: 0.473, Loss: 1.649 Epoch 0 Batch 143/269 - Train Accuracy: 0.465, Validation Accuracy: 0.488, Loss: 1.663 Epoch 0 Batch 144/269 - Train Accuracy: 0.463, Validation Accuracy: 0.483, Loss: 1.625 Epoch 0 Batch 145/269 - Train Accuracy: 0.445, Validation Accuracy: 0.480, Loss: 1.636 Epoch 0 Batch 146/269 - Train Accuracy: 0.468, Validation Accuracy: 0.493, Loss: 1.627 Epoch 0 Batch 147/269 - Train Accuracy: 0.490, Validation Accuracy: 0.491, Loss: 1.550 Epoch 0 Batch 148/269 - Train Accuracy: 0.441, Validation Accuracy: 0.481, Loss: 1.652 Epoch 0 Batch 149/269 - Train Accuracy: 0.464, Validation Accuracy: 0.485, Loss: 1.603 Epoch 0 Batch 150/269 - Train Accuracy: 0.471, Validation Accuracy: 0.495, Loss: 1.602 Epoch 0 Batch 151/269 - Train Accuracy: 0.491, Validation Accuracy: 0.490, Loss: 1.518 Epoch 0 Batch 152/269 - Train Accuracy: 0.467, Validation Accuracy: 0.497, Loss: 1.586 Epoch 0 Batch 153/269 - Train Accuracy: 0.487, Validation Accuracy: 0.506, Loss: 1.569 Epoch 0 Batch 154/269 - Train Accuracy: 0.434, Validation Accuracy: 0.495, Loss: 1.655 Epoch 0 Batch 155/269 - Train Accuracy: 0.505, Validation Accuracy: 0.502, Loss: 1.480 Epoch 0 Batch 156/269 - Train Accuracy: 0.468, Validation Accuracy: 0.509, Loss: 1.591 Epoch 0 Batch 157/269 - Train Accuracy: 0.473, Validation Accuracy: 0.501, Loss: 1.546 Epoch 0 Batch 158/269 - Train Accuracy: 0.479, Validation Accuracy: 0.500, Loss: 1.519 Epoch 0 Batch 159/269 - Train Accuracy: 0.480, Validation Accuracy: 0.505, Loss: 1.534 Epoch 0 Batch 160/269 - Train Accuracy: 0.467, Validation Accuracy: 0.493, Loss: 1.548 Epoch 0 Batch 161/269 - Train Accuracy: 0.465, Validation Accuracy: 0.501, Loss: 1.551 Epoch 0 Batch 162/269 - Train Accuracy: 0.486, Validation Accuracy: 0.505, Loss: 1.506 Epoch 0 Batch 163/269 - Train Accuracy: 0.475, Validation Accuracy: 0.495, Loss: 1.507 Epoch 0 Batch 164/269 - 
Train Accuracy: 0.485, Validation Accuracy: 0.505, Loss: 1.504
Epoch 0 Batch 165/269 - Train Accuracy: 0.463, Validation Accuracy: 0.510, Loss: 1.553
Epoch 0 Batch 166/269 - Train Accuracy: 0.507, Validation Accuracy: 0.502, Loss: 1.402
Epoch 0 Batch 167/269 - Train Accuracy: 0.492, Validation Accuracy: 0.508, Loss: 1.480
Epoch 0 Batch 168/269 - Train Accuracy: 0.487, Validation Accuracy: 0.508, Loss: 1.496
Epoch 0 Batch 169/269 - Train Accuracy: 0.469, Validation Accuracy: 0.500, Loss: 1.465
Epoch 0 Batch 170/269 - Train Accuracy: 0.486, Validation Accuracy: 0.507, Loss: 1.464
Epoch 0 Batch 171/269 - Train Accuracy: 0.472, Validation Accuracy: 0.510, Loss: 1.504
Epoch 0 Batch 172/269 - Train Accuracy: 0.483, Validation Accuracy: 0.506, Loss: 1.461
Epoch 0 Batch 173/269 - Train Accuracy: 0.491, Validation Accuracy: 0.512, Loss: 1.450
Epoch 0 Batch 174/269 - Train Accuracy: 0.473, Validation Accuracy: 0.507, Loss: 1.443
Epoch 0 Batch 175/269 - Train Accuracy: 0.468, Validation Accuracy: 0.506, Loss: 1.446
Epoch 0 Batch 176/269 - Train Accuracy: 0.471, Validation Accuracy: 0.520, Loss: 1.505
Epoch 0 Batch 177/269 - Train Accuracy: 0.513, Validation Accuracy: 0.513, Loss: 1.379
Epoch 0 Batch 178/269 - Train Accuracy: 0.458, Validation Accuracy: 0.503, Loss: 1.468
Epoch 0 Batch 179/269 - Train Accuracy: 0.499, Validation Accuracy: 0.509, Loss: 1.419
Epoch 0 Batch 180/269 - Train Accuracy: 0.497, Validation Accuracy: 0.514, Loss: 1.395
Epoch 0 Batch 181/269 - Train Accuracy: 0.463, Validation Accuracy: 0.491, Loss: 1.404
Epoch 0 Batch 182/269 - Train Accuracy: 0.484, Validation Accuracy: 0.507, Loss: 1.417
Epoch 0 Batch 183/269 - Train Accuracy: 0.557, Validation Accuracy: 0.514, Loss: 1.205
Epoch 0 Batch 184/269 - Train Accuracy: 0.460, Validation Accuracy: 0.503, Loss: 1.438
Epoch 0 Batch 185/269 - Train Accuracy: 0.495, Validation Accuracy: 0.511, Loss: 1.370
Epoch 0 Batch 186/269 - Train Accuracy: 0.474, Validation Accuracy: 0.517, Loss: 1.421
Epoch 0 Batch 187/269 - Train Accuracy: 0.500, Validation Accuracy: 0.510, Loss: 1.342
Epoch 0 Batch 188/269 - Train Accuracy: 0.485, Validation Accuracy: 0.499, Loss: 1.330
Epoch 0 Batch 189/269 - Train Accuracy: 0.498, Validation Accuracy: 0.525, Loss: 1.352
Epoch 0 Batch 190/269 - Train Accuracy: 0.482, Validation Accuracy: 0.518, Loss: 1.342
Epoch 0 Batch 191/269 - Train Accuracy: 0.482, Validation Accuracy: 0.513, Loss: 1.346
Epoch 0 Batch 192/269 - Train Accuracy: 0.507, Validation Accuracy: 0.524, Loss: 1.353
Epoch 0 Batch 193/269 - Train Accuracy: 0.514, Validation Accuracy: 0.531, Loss: 1.343
Epoch 0 Batch 194/269 - Train Accuracy: 0.503, Validation Accuracy: 0.515, Loss: 1.338
Epoch 0 Batch 195/269 - Train Accuracy: 0.499, Validation Accuracy: 0.527, Loss: 1.356
Epoch 0 Batch 196/269 - Train Accuracy: 0.509, Validation Accuracy: 0.539, Loss: 1.317
Epoch 0 Batch 197/269 - Train Accuracy: 0.475, Validation Accuracy: 0.530, Loss: 1.371
Epoch 0 Batch 198/269 - Train Accuracy: 0.483, Validation Accuracy: 0.528, Loss: 1.404
Epoch 0 Batch 199/269 - Train Accuracy: 0.507, Validation Accuracy: 0.540, Loss: 1.337
Epoch 0 Batch 200/269 - Train Accuracy: 0.489, Validation Accuracy: 0.520, Loss: 1.344
Epoch 0 Batch 201/269 - Train Accuracy: 0.508, Validation Accuracy: 0.524, Loss: 1.299
Epoch 0 Batch 202/269 - Train Accuracy: 0.514, Validation Accuracy: 0.538, Loss: 1.300
Epoch 0 Batch 203/269 - Train Accuracy: 0.492, Validation Accuracy: 0.538, Loss: 1.350
Epoch 0 Batch 204/269 - Train Accuracy: 0.484, Validation Accuracy: 0.532, Loss: 1.330
Epoch 0 Batch 205/269 - Train Accuracy: 0.509, Validation Accuracy: 0.541, Loss: 1.272
Epoch 0 Batch 206/269 - Train Accuracy: 0.504, Validation Accuracy: 0.553, Loss: 1.361
Epoch 0 Batch 207/269 - Train Accuracy: 0.547, Validation Accuracy: 0.537, Loss: 1.220
Epoch 0 Batch 208/269 - Train Accuracy: 0.492, Validation Accuracy: 0.540, Loss: 1.357
Epoch 0 Batch 209/269 - Train Accuracy: 0.494, Validation Accuracy: 0.537, Loss: 1.294
Epoch 0 Batch 210/269 - Train Accuracy: 0.520, Validation Accuracy: 0.538, Loss: 1.241
Epoch 0 Batch 211/269 - Train Accuracy: 0.518, Validation Accuracy: 0.532, Loss: 1.251
Epoch 0 Batch 212/269 - Train Accuracy: 0.533, Validation Accuracy: 0.539, Loss: 1.221
Epoch 0 Batch 213/269 - Train Accuracy: 0.531, Validation Accuracy: 0.542, Loss: 1.221
Epoch 0 Batch 214/269 - Train Accuracy: 0.527, Validation Accuracy: 0.541, Loss: 1.221
Epoch 0 Batch 215/269 - Train Accuracy: 0.547, Validation Accuracy: 0.540, Loss: 1.160
Epoch 0 Batch 216/269 - Train Accuracy: 0.489, Validation Accuracy: 0.544, Loss: 1.317
Epoch 0 Batch 217/269 - Train Accuracy: 0.504, Validation Accuracy: 0.542, Loss: 1.281
Epoch 0 Batch 218/269 - Train Accuracy: 0.524, Validation Accuracy: 0.546, Loss: 1.281
Epoch 0 Batch 219/269 - Train Accuracy: 0.513, Validation Accuracy: 0.547, Loss: 1.256
Epoch 0 Batch 220/269 - Train Accuracy: 0.536, Validation Accuracy: 0.546, Loss: 1.167
Epoch 0 Batch 221/269 - Train Accuracy: 0.539, Validation Accuracy: 0.547, Loss: 1.200
Epoch 0 Batch 222/269 - Train Accuracy: 0.533, Validation Accuracy: 0.544, Loss: 1.160
Epoch 0 Batch 223/269 - Train Accuracy: 0.534, Validation Accuracy: 0.548, Loss: 1.157
Epoch 0 Batch 224/269 - Train Accuracy: 0.541, Validation Accuracy: 0.551, Loss: 1.212
Epoch 0 Batch 225/269 - Train Accuracy: 0.517, Validation Accuracy: 0.552, Loss: 1.232
Epoch 0 Batch 226/269 - Train Accuracy: 0.522, Validation Accuracy: 0.551, Loss: 1.188
Epoch 0 Batch 227/269 - Train Accuracy: 0.596, Validation Accuracy: 0.548, Loss: 1.037
Epoch 0 Batch 228/269 - Train Accuracy: 0.523, Validation Accuracy: 0.547, Loss: 1.184
Epoch 0 Batch 229/269 - Train Accuracy: 0.531, Validation Accuracy: 0.548, Loss: 1.173
Epoch 0 Batch 230/269 - Train Accuracy: 0.521, Validation Accuracy: 0.543, Loss: 1.177
Epoch 0 Batch 231/269 - Train Accuracy: 0.503, Validation Accuracy: 0.542, Loss: 1.228
Epoch 0 Batch 232/269 - Train Accuracy: 0.494, Validation Accuracy: 0.537, Loss: 1.205
Epoch 0 Batch 233/269 - Train Accuracy: 0.521, Validation Accuracy: 0.534, Loss: 1.171
Epoch 0 Batch 234/269 - Train Accuracy: 0.526, Validation Accuracy: 0.540, Loss: 1.163
Epoch 0 Batch 235/269 - Train Accuracy: 0.519, Validation Accuracy: 0.531, Loss: 1.154
Epoch 0 Batch 236/269 - Train Accuracy: 0.514, Validation Accuracy: 0.543, Loss: 1.150
Epoch 0 Batch 237/269 - Train Accuracy: 0.518, Validation Accuracy: 0.540, Loss: 1.138
Epoch 0 Batch 238/269 - Train Accuracy: 0.530, Validation Accuracy: 0.537, Loss: 1.146
Epoch 0 Batch 239/269 - Train Accuracy: 0.544, Validation Accuracy: 0.544, Loss: 1.124
Epoch 0 Batch 240/269 - Train Accuracy: 0.565, Validation Accuracy: 0.541, Loss: 1.061
Epoch 0 Batch 241/269 - Train Accuracy: 0.531, Validation Accuracy: 0.540, Loss: 1.128
Epoch 0 Batch 242/269 - Train Accuracy: 0.525, Validation Accuracy: 0.538, Loss: 1.120
Epoch 0 Batch 243/269 - Train Accuracy: 0.545, Validation Accuracy: 0.539, Loss: 1.087
Epoch 0 Batch 244/269 - Train Accuracy: 0.531, Validation Accuracy: 0.538, Loss: 1.102
Epoch 0 Batch 245/269 - Train Accuracy: 0.505, Validation Accuracy: 0.540, Loss: 1.174
Epoch 0 Batch 246/269 - Train Accuracy: 0.514, Validation Accuracy: 0.531, Loss: 1.123
Epoch 0 Batch 247/269 - Train Accuracy: 0.521, Validation Accuracy: 0.541, Loss: 1.152
Epoch 0 Batch 248/269 - Train Accuracy: 0.516, Validation Accuracy: 0.540, Loss: 1.104
Epoch 0 Batch 249/269 - Train Accuracy: 0.555, Validation Accuracy: 0.540, Loss: 1.056
Epoch 0 Batch 250/269 - Train Accuracy: 0.508, Validation Accuracy: 0.539, Loss: 1.127
Epoch 0 Batch 251/269 - Train Accuracy: 0.530, Validation Accuracy: 0.538, Loss: 1.080
Epoch 0 Batch 252/269 - Train Accuracy: 0.520, Validation Accuracy: 0.540, Loss: 1.102
Epoch 0 Batch 253/269 - Train Accuracy: 0.497, Validation Accuracy: 0.519, Loss: 1.097
Epoch 0 Batch 254/269 - Train Accuracy: 0.529, Validation Accuracy: 0.533, Loss: 1.078
Epoch 0 Batch 255/269 - Train Accuracy: 0.546, Validation Accuracy: 0.531, Loss: 1.035
Epoch 0 Batch 256/269 - Train Accuracy: 0.491, Validation Accuracy: 0.520, Loss: 1.093
Epoch 0 Batch 257/269 - Train Accuracy: 0.507, Validation Accuracy: 0.537, Loss: 1.085
Epoch 0 Batch 258/269 - Train Accuracy: 0.525, Validation Accuracy: 0.544, Loss: 1.080
Epoch 0 Batch 259/269 - Train Accuracy: 0.539, Validation Accuracy: 0.539, Loss: 1.068
Epoch 0 Batch 260/269 - Train Accuracy: 0.526, Validation Accuracy: 0.548, Loss: 1.120
Epoch 0 Batch 261/269 - Train Accuracy: 0.494, Validation Accuracy: 0.547, Loss: 1.128
Epoch 0 Batch 262/269 - Train Accuracy: 0.529, Validation Accuracy: 0.539, Loss: 1.063
Epoch 0 Batch 263/269 - Train Accuracy: 0.516, Validation Accuracy: 0.542, Loss: 1.099
Epoch 0 Batch 264/269 - Train Accuracy: 0.502, Validation Accuracy: 0.545, Loss: 1.104
Epoch 0 Batch 265/269 - Train Accuracy: 0.500, Validation Accuracy: 0.540, Loss: 1.076
Epoch 0 Batch 266/269 - Train Accuracy: 0.546, Validation Accuracy: 0.552, Loss: 1.033
Epoch 0 Batch 267/269 - Train Accuracy: 0.523, Validation Accuracy: 0.550, Loss: 1.066
Epoch 1 Batch 0/269 - Train Accuracy: 0.510, Validation Accuracy: 0.544, Loss: 1.091
Epoch 1 Batch 1/269 - Train Accuracy: 0.499, Validation Accuracy: 0.544, Loss: 1.071
Epoch 1 Batch 2/269 - Train Accuracy: 0.504, Validation Accuracy: 0.543, Loss: 1.051
Epoch 1 Batch 3/269 - Train Accuracy: 0.518, Validation Accuracy: 0.546, Loss: 1.072
Epoch 1 Batch 4/269 - Train Accuracy: 0.511, Validation Accuracy: 0.547, Loss: 1.062
Epoch 1 Batch 5/269 - Train Accuracy: 0.492, Validation Accuracy: 0.541, Loss: 1.081
Epoch 1 Batch 6/269 - Train Accuracy: 0.542, Validation Accuracy: 0.544, Loss: 0.985
Epoch 1 Batch 7/269 - Train Accuracy: 0.529, Validation Accuracy: 0.544, Loss: 1.009
Epoch 1 Batch 8/269 - Train Accuracy: 0.513, Validation Accuracy: 0.546, Loss: 1.063
Epoch 1 Batch 9/269 - Train Accuracy: 0.525, Validation Accuracy: 0.547, Loss: 1.020
Epoch 1 Batch 10/269 - Train Accuracy: 0.505, Validation Accuracy: 0.541, Loss: 1.029
Epoch 1 Batch 11/269 - Train Accuracy: 0.516, Validation Accuracy: 0.543, Loss: 1.018
Epoch 1 Batch 12/269 - Train Accuracy: 0.498, Validation Accuracy: 0.543, Loss: 1.051
Epoch 1 Batch 13/269 - Train Accuracy: 0.546, Validation Accuracy: 0.538, Loss: 0.943
Epoch 1 Batch 14/269 - Train Accuracy: 0.526, Validation Accuracy: 0.546, Loss: 1.010
Epoch 1 Batch 15/269 - Train Accuracy: 0.519, Validation Accuracy: 0.545, Loss: 0.987
Epoch 1 Batch 16/269 - Train Accuracy: 0.536, Validation Accuracy: 0.535, Loss: 1.002
Epoch 1 Batch 17/269 - Train Accuracy: 0.526, Validation Accuracy: 0.547, Loss: 0.978
Epoch 1 Batch 18/269 - Train Accuracy: 0.520, Validation Accuracy: 0.553, Loss: 1.014
Epoch 1 Batch 19/269 - Train Accuracy: 0.553, Validation Accuracy: 0.536, Loss: 0.948
Epoch 1 Batch 20/269 - Train Accuracy: 0.516, Validation Accuracy: 0.542, Loss: 1.020
Epoch 1 Batch 21/269 - Train Accuracy: 0.511, Validation Accuracy: 0.548, Loss: 1.055
Epoch 1 Batch 22/269 - Train Accuracy: 0.532, Validation Accuracy: 0.545, Loss: 0.963
Epoch 1 Batch 23/269 - Train Accuracy: 0.534, Validation Accuracy: 0.546, Loss: 0.988
Epoch 1 Batch 24/269 - Train Accuracy: 0.521, Validation Accuracy: 0.552, Loss: 1.023
Epoch 1 Batch 25/269 - Train Accuracy: 0.519, Validation Accuracy: 0.554, Loss: 1.029
Epoch 1 Batch 26/269 - Train Accuracy: 0.568, Validation Accuracy: 0.554, Loss: 0.913
Epoch 1 Batch 27/269 - Train Accuracy: 0.530, Validation Accuracy: 0.558, Loss: 0.965
Epoch 1 Batch 28/269 - Train Accuracy: 0.510, Validation Accuracy: 0.564, Loss: 1.032
Epoch 1 Batch 29/269 - Train Accuracy: 0.514, Validation Accuracy: 0.564, Loss: 1.001
Epoch 1 Batch 30/269 - Train Accuracy: 0.558, Validation Accuracy: 0.567, Loss: 0.949
Epoch 1 Batch 31/269 - Train Accuracy: 0.562, Validation Accuracy: 0.572, Loss: 0.953
Epoch 1 Batch 32/269 - Train Accuracy: 0.543, Validation Accuracy: 0.570, Loss: 0.954
Epoch 1 Batch 33/269 - Train Accuracy: 0.550, Validation Accuracy: 0.569, Loss: 0.921
Epoch 1 Batch 34/269 - Train Accuracy: 0.557, Validation Accuracy: 0.575, Loss: 0.947
Epoch 1 Batch 35/269 - Train Accuracy: 0.565, Validation Accuracy: 0.571, Loss: 0.956
Epoch 1 Batch 36/269 - Train Accuracy: 0.550, Validation Accuracy: 0.570, Loss: 0.952
Epoch 1 Batch 37/269 - Train Accuracy: 0.549, Validation Accuracy: 0.564, Loss: 0.957
Epoch 1 Batch 38/269 - Train Accuracy: 0.551, Validation Accuracy: 0.573, Loss: 0.949
Epoch 1 Batch 39/269 - Train Accuracy: 0.556, Validation Accuracy: 0.576, Loss: 0.931
Epoch 1 Batch 40/269 - Train Accuracy: 0.538, Validation Accuracy: 0.573, Loss: 0.973
Epoch 1 Batch 41/269 - Train Accuracy: 0.539, Validation Accuracy: 0.556, Loss: 0.948
Epoch 1 Batch 42/269 - Train Accuracy: 0.565, Validation Accuracy: 0.561, Loss: 0.898
Epoch 1 Batch 43/269 - Train Accuracy: 0.536, Validation Accuracy: 0.568, Loss: 0.970
Epoch 1 Batch 44/269 - Train Accuracy: 0.565, Validation Accuracy: 0.567, Loss: 0.940
Epoch 1 Batch 45/269 - Train Accuracy: 0.532, Validation Accuracy: 0.560, Loss: 0.977
Epoch 1 Batch 46/269 - Train Accuracy: 0.533, Validation Accuracy: 0.561, Loss: 0.961
Epoch 1 Batch 47/269 - Train Accuracy: 0.575, Validation Accuracy: 0.562, Loss: 0.868
Epoch 1 Batch 48/269 - Train Accuracy: 0.556, Validation Accuracy: 0.567, Loss: 0.903
Epoch 1 Batch 49/269 - Train Accuracy: 0.532, Validation Accuracy: 0.568, Loss: 0.945
Epoch 1 Batch 50/269 - Train Accuracy: 0.532, Validation Accuracy: 0.564, Loss: 0.957
Epoch 1 Batch 51/269 - Train Accuracy: 0.540, Validation Accuracy: 0.563, Loss: 0.935
Epoch 1 Batch 52/269 - Train Accuracy: 0.540, Validation Accuracy: 0.562, Loss: 0.895
Epoch 1 Batch 53/269 - Train Accuracy: 0.526, Validation Accuracy: 0.569, Loss: 0.962
Epoch 1 Batch 54/269 - Train Accuracy: 0.552, Validation Accuracy: 0.572, Loss: 0.949
Epoch 1 Batch 55/269 - Train Accuracy: 0.571, Validation Accuracy: 0.577, Loss: 0.900
Epoch 1 Batch 56/269 - Train Accuracy: 0.573, Validation Accuracy: 0.568, Loss: 0.913
Epoch 1 Batch 57/269 - Train Accuracy: 0.571, Validation Accuracy: 0.574, Loss: 0.913
Epoch 1 Batch 58/269 - Train Accuracy: 0.570, Validation Accuracy: 0.578, Loss: 0.894
Epoch 1 Batch 59/269 - Train Accuracy: 0.573, Validation Accuracy: 0.575, Loss: 0.878
Epoch 1 Batch 60/269 - Train Accuracy: 0.566, Validation Accuracy: 0.568, Loss: 0.865
Epoch 1 Batch 61/269 - Train Accuracy: 0.576, Validation Accuracy: 0.572, Loss: 0.854
Epoch 1 Batch 62/269 - Train Accuracy: 0.579, Validation Accuracy: 0.576, Loss: 0.865
Epoch 1 Batch 63/269 - Train Accuracy: 0.557, Validation Accuracy: 0.578, Loss: 0.899
Epoch 1 Batch 64/269 - Train Accuracy: 0.556, Validation Accuracy: 0.578, Loss: 0.889
Epoch 1 Batch 65/269 - Train Accuracy: 0.565, Validation Accuracy: 0.583, Loss: 0.882
Epoch 1 Batch 66/269 - Train Accuracy: 0.575, Validation Accuracy: 0.582, Loss: 0.858
Epoch 1 Batch 67/269 - Train Accuracy: 0.562, Validation Accuracy: 0.580, Loss: 0.899
Epoch 1 Batch 68/269 - Train Accuracy: 0.550, Validation Accuracy: 0.578, Loss: 0.893
Epoch 1 Batch 69/269 - Train Accuracy: 0.547, Validation Accuracy: 0.579, Loss: 0.965
Epoch 1 Batch 70/269 - Train Accuracy: 0.580, Validation Accuracy: 0.584, Loss: 0.892
Epoch 1 Batch 71/269 - Train Accuracy: 0.546, Validation Accuracy: 0.580, Loss: 0.917
Epoch 1 Batch 72/269 - Train Accuracy: 0.580, Validation Accuracy: 0.579, Loss: 0.869
Epoch 1 Batch 73/269 - Train Accuracy: 0.564, Validation Accuracy: 0.582, Loss: 0.897
Epoch 1 Batch 74/269 - Train Accuracy: 0.563, Validation Accuracy: 0.589, Loss: 0.892
Epoch 1 Batch 75/269 - Train Accuracy: 0.567, Validation Accuracy: 0.588, Loss: 0.873
Epoch 1 Batch 76/269 - Train Accuracy: 0.545, Validation Accuracy: 0.574, Loss: 0.894
Epoch 1 Batch 77/269 - Train Accuracy: 0.584, Validation Accuracy: 0.576, Loss: 0.864
Epoch 1 Batch 78/269 - Train Accuracy: 0.564, Validation Accuracy: 0.583, Loss: 0.864
Epoch 1 Batch 79/269 - Train Accuracy: 0.579, Validation Accuracy: 0.589, Loss: 0.862
Epoch 1 Batch 80/269 - Train Accuracy: 0.594, Validation Accuracy: 0.592, Loss: 0.853
Epoch 1 Batch 81/269 - Train Accuracy: 0.574, Validation Accuracy: 0.590, Loss: 0.890
Epoch 1 Batch 82/269 - Train Accuracy: 0.576, Validation Accuracy: 0.588, Loss: 0.838
Epoch 1 Batch 83/269 - Train Accuracy: 0.578, Validation Accuracy: 0.581, Loss: 0.856
Epoch 1 Batch 84/269 - Train Accuracy: 0.573, Validation Accuracy: 0.583, Loss: 0.840
Epoch 1 Batch 85/269 - Train Accuracy: 0.562, Validation Accuracy: 0.585, Loss: 0.862
Epoch 1 Batch 86/269 - Train Accuracy: 0.550, Validation Accuracy: 0.582, Loss: 0.858
Epoch 1 Batch 87/269 - Train Accuracy: 0.544, Validation Accuracy: 0.574, Loss: 0.911
Epoch 1 Batch 88/269 - Train Accuracy: 0.572, Validation Accuracy: 0.578, Loss: 0.861
Epoch 1 Batch 89/269 - Train Accuracy: 0.585, Validation Accuracy: 0.583, Loss: 0.857
Epoch 1 Batch 90/269 - Train Accuracy: 0.522, Validation Accuracy: 0.574, Loss: 0.902
Epoch 1 Batch 91/269 - Train Accuracy: 0.565, Validation Accuracy: 0.576, Loss: 0.835
Epoch 1 Batch 92/269 - Train Accuracy: 0.576, Validation Accuracy: 0.584, Loss: 0.837
Epoch 1 Batch 93/269 - Train Accuracy: 0.589, Validation Accuracy: 0.590, Loss: 0.811
Epoch 1 Batch 94/269 - Train Accuracy: 0.580, Validation Accuracy: 0.592, Loss: 0.863
Epoch 1 Batch 95/269 - Train Accuracy: 0.582, Validation Accuracy: 0.589, Loss: 0.846
Epoch 1 Batch 96/269 - Train Accuracy: 0.584, Validation Accuracy: 0.589, Loss: 0.844
Epoch 1 Batch 97/269 - Train Accuracy: 0.566, Validation Accuracy: 0.592, Loss: 0.845
Epoch 1 Batch 98/269 - Train Accuracy: 0.582, Validation Accuracy: 0.586, Loss: 0.836
Epoch 1 Batch 99/269 - Train Accuracy: 0.551, Validation Accuracy: 0.579, Loss: 0.878
Epoch 1 Batch 100/269 - Train Accuracy: 0.592, Validation Accuracy: 0.582, Loss: 0.820
Epoch 1 Batch 101/269 - Train Accuracy: 0.549, Validation Accuracy: 0.590, Loss: 0.884
Epoch 1 Batch 102/269 - Train Accuracy: 0.569, Validation Accuracy: 0.584, Loss: 0.839
Epoch 1 Batch 103/269 - Train Accuracy: 0.574, Validation Accuracy: 0.586, Loss: 0.849
Epoch 1 Batch 104/269 - Train Accuracy: 0.576, Validation Accuracy: 0.589, Loss: 0.829
Epoch 1 Batch 105/269 - Train Accuracy: 0.570, Validation Accuracy: 0.582, Loss: 0.861
Epoch 1 Batch 106/269 - Train Accuracy: 0.571, Validation Accuracy: 0.581, Loss: 0.831
Epoch 1 Batch 107/269 - Train Accuracy: 0.547, Validation Accuracy: 0.585, Loss: 0.875
Epoch 1 Batch 108/269 - Train Accuracy: 0.587, Validation Accuracy: 0.588, Loss: 0.821
Epoch 1 Batch 109/269 - Train Accuracy: 0.571, Validation Accuracy: 0.586, Loss: 0.846
Epoch 1 Batch 110/269 - Train Accuracy: 0.581, Validation Accuracy: 0.586, Loss: 0.809
Epoch 1 Batch 111/269 - Train Accuracy: 0.562, Validation Accuracy: 0.596, Loss: 0.892
Epoch 1 Batch 112/269 - Train Accuracy: 0.594, Validation Accuracy: 0.596, Loss: 0.812
Epoch 1 Batch 113/269 - Train Accuracy: 0.591, Validation Accuracy: 0.583, Loss: 0.790
Epoch 1 Batch 114/269 - Train Accuracy: 0.577, Validation Accuracy: 0.585, Loss: 0.817
Epoch 1 Batch 115/269 - Train Accuracy: 0.569, Validation Accuracy: 0.589, Loss: 0.839
Epoch 1 Batch 116/269 - Train Accuracy: 0.588, Validation Accuracy: 0.591, Loss: 0.828
Epoch 1 Batch 117/269 - Train Accuracy: 0.582, Validation Accuracy: 0.585, Loss: 0.819
Epoch 1 Batch 118/269 - Train Accuracy: 0.605, Validation Accuracy: 0.585, Loss: 0.785
Epoch 1 Batch 119/269 - Train Accuracy: 0.569, Validation Accuracy: 0.593, Loss: 0.859
Epoch 1 Batch 120/269 - Train Accuracy: 0.570, Validation Accuracy: 0.589, Loss: 0.841
Epoch 1 Batch 121/269 - Train Accuracy: 0.591, Validation Accuracy: 0.595, Loss: 0.805
Epoch 1 Batch 122/269 - Train Accuracy: 0.591, Validation Accuracy: 0.590, Loss: 0.801
Epoch 1 Batch 123/269 - Train Accuracy: 0.574, Validation Accuracy: 0.593, Loss: 0.844
Epoch 1 Batch 124/269 - Train Accuracy: 0.602, Validation Accuracy: 0.596, Loss: 0.790
Epoch 1 Batch 125/269 - Train Accuracy: 0.590, Validation Accuracy: 0.594, Loss: 0.784
Epoch 1 Batch 126/269 - Train Accuracy: 0.596, Validation Accuracy: 0.596, Loss: 0.783
Epoch 1 Batch 127/269 - Train Accuracy: 0.594, Validation Accuracy: 0.595, Loss: 0.848
Epoch 1 Batch 128/269 - Train Accuracy: 0.607, Validation Accuracy: 0.596, Loss: 0.792
Epoch 1 Batch 129/269 - Train Accuracy: 0.591, Validation Accuracy: 0.594, Loss: 0.810
Epoch 1 Batch 130/269 - Train Accuracy: 0.559, Validation Accuracy: 0.592, Loss: 0.839
Epoch 1 Batch 131/269 - Train Accuracy: 0.576, Validation Accuracy: 0.594, Loss: 0.822
Epoch 1 Batch 132/269 - Train Accuracy: 0.585, Validation Accuracy: 0.592, Loss: 0.808
Epoch 1 Batch 133/269 - Train Accuracy: 0.600, Validation Accuracy: 0.595, Loss: 0.772
Epoch 1 Batch 134/269 - Train Accuracy: 0.570, Validation Accuracy: 0.588, Loss: 0.817
Epoch 1 Batch 135/269 - Train Accuracy: 0.561, Validation Accuracy: 0.580, Loss: 0.854
Epoch 1 Batch 136/269 - Train Accuracy: 0.568, Validation Accuracy: 0.598, Loss: 0.845
Epoch 1 Batch 137/269 - Train Accuracy: 0.588, Validation Accuracy: 0.603, Loss: 0.827
Epoch 1 Batch 138/269 - Train Accuracy: 0.585, Validation Accuracy: 0.604, Loss: 0.807
Epoch 1 Batch 139/269 - Train Accuracy: 0.609, Validation Accuracy: 0.597, Loss: 0.770
Epoch 1 Batch 140/269 - Train Accuracy: 0.604, Validation Accuracy: 0.597, Loss: 0.793
Epoch 1 Batch 141/269 - Train Accuracy: 0.597, Validation Accuracy: 0.600, Loss: 0.810
Epoch 1 Batch 142/269 - Train Accuracy: 0.611, Validation Accuracy: 0.601, Loss: 0.767
Epoch 1 Batch 143/269 - Train Accuracy: 0.605, Validation Accuracy: 0.598, Loss: 0.785
Epoch 1 Batch 144/269 - Train Accuracy: 0.605, Validation Accuracy: 0.603, Loss: 0.766
Epoch 1 Batch 145/269 - Train Accuracy: 0.600, Validation Accuracy: 0.597, Loss: 0.766
Epoch 1 Batch 146/269 - Train Accuracy: 0.592, Validation Accuracy: 0.599, Loss: 0.769
Epoch 1 Batch 147/269 - Train Accuracy: 0.616, Validation Accuracy: 0.603, Loss: 0.748
Epoch 1 Batch 148/269 - Train Accuracy: 0.595, Validation Accuracy: 0.606, Loss: 0.784
Epoch 1 Batch 149/269 - Train Accuracy: 0.597, Validation Accuracy: 0.604, Loss: 0.780
Epoch 1 Batch 150/269 - Train Accuracy: 0.603, Validation Accuracy: 0.602, Loss: 0.786
Epoch 1 Batch 151/269 - Train Accuracy: 0.623, Validation Accuracy: 0.603, Loss: 0.746
Epoch 1 Batch 152/269 - Train Accuracy: 0.596, Validation Accuracy: 0.600, Loss: 0.773
Epoch 1 Batch 153/269 - Train Accuracy: 0.607, Validation Accuracy: 0.604, Loss: 0.764
Epoch 1 Batch 154/269 - Train Accuracy: 0.587, Validation Accuracy: 0.608, Loss: 0.788
Epoch 1 Batch 155/269 - Train Accuracy: 0.634, Validation Accuracy: 0.607, Loss: 0.731
Epoch 1 Batch 156/269 - Train Accuracy: 0.585, Validation Accuracy: 0.602, Loss: 0.800
Epoch 1 Batch 157/269 - Train Accuracy: 0.592, Validation Accuracy: 0.599, Loss: 0.767
Epoch 1 Batch 158/269 - Train Accuracy: 0.602, Validation Accuracy: 0.605, Loss: 0.756
Epoch 1 Batch 159/269 - Train Accuracy: 0.603, Validation Accuracy: 0.605, Loss: 0.769
Epoch 1 Batch 160/269 - Train Accuracy: 0.597, Validation Accuracy: 0.605, Loss: 0.764
Epoch 1 Batch 161/269 - Train Accuracy: 0.596, Validation Accuracy: 0.605, Loss: 0.765
Epoch 1 Batch 162/269 - Train Accuracy: 0.606, Validation Accuracy: 0.605, Loss: 0.761
Epoch 1 Batch 163/269 - Train Accuracy: 0.611, Validation Accuracy: 0.609, Loss: 0.755
Epoch 1 Batch 164/269 - Train Accuracy: 0.610, Validation Accuracy: 0.602, Loss: 0.753
Epoch 1 Batch 165/269 - Train Accuracy: 0.581, Validation Accuracy: 0.604, Loss: 0.780
Epoch 1 Batch 166/269 - Train Accuracy: 0.625, Validation Accuracy: 0.605, Loss: 0.720
Epoch 1 Batch 167/269 - Train Accuracy: 0.598, Validation Accuracy: 0.604, Loss: 0.760
Epoch 1 Batch 168/269 - Train Accuracy: 0.590, Validation Accuracy: 0.603, Loss: 0.763
Epoch 1 Batch 169/269 - Train Accuracy: 0.597, Validation Accuracy: 0.601, Loss: 0.757
Epoch 1 Batch 170/269 - Train Accuracy: 0.606, Validation Accuracy: 0.603, Loss: 0.747
Epoch 1 Batch 171/269 - Train Accuracy: 0.597, Validation Accuracy: 0.606, Loss: 0.786
Epoch 1 Batch 172/269 - Train Accuracy: 0.597, Validation Accuracy: 0.605, Loss: 0.761
Epoch 1 Batch 173/269 - Train Accuracy: 0.612, Validation Accuracy: 0.607, Loss: 0.743
Epoch 1 Batch 174/269 - Train Accuracy: 0.590, Validation Accuracy: 0.604, Loss: 0.754
Epoch 1 Batch 175/269 - Train Accuracy: 0.612, Validation Accuracy: 0.609, Loss: 0.762
Epoch 1 Batch 176/269 - Train Accuracy: 0.592, Validation Accuracy: 0.613, Loss: 0.789
Epoch 1 Batch 177/269 - Train Accuracy: 0.618, Validation Accuracy: 0.611, Loss: 0.723
Epoch 1 Batch 178/269 - Train Accuracy: 0.595, Validation Accuracy: 0.607, Loss: 0.760
Epoch 1 Batch 179/269 - Train Accuracy: 0.611, Validation Accuracy: 0.609, Loss: 0.749
Epoch 1 Batch 180/269 - Train Accuracy: 0.606, Validation Accuracy: 0.610, Loss: 0.740
Epoch 1 Batch 181/269 - Train Accuracy: 0.596, Validation Accuracy: 0.608, Loss: 0.747
Epoch 1 Batch 182/269 - Train Accuracy: 0.614, Validation Accuracy: 0.607, Loss: 0.745
Epoch 1 Batch 183/269 - Train Accuracy: 0.669, Validation Accuracy: 0.607, Loss: 0.644
Epoch 1 Batch 184/269 - Train Accuracy: 0.597, Validation Accuracy: 0.609, Loss: 0.770
Epoch 1 Batch 185/269 - Train Accuracy: 0.621, Validation Accuracy: 0.608, Loss: 0.736
Epoch 1 Batch 186/269 - Train Accuracy: 0.584, Validation Accuracy: 0.604, Loss: 0.762
Epoch 1 Batch 187/269 - Train Accuracy: 0.616, Validation Accuracy: 0.600, Loss: 0.724
Epoch 1 Batch 188/269 - Train Accuracy: 0.610, Validation Accuracy: 0.613, Loss: 0.715
Epoch 1 Batch 189/269 - Train Accuracy: 0.607, Validation Accuracy: 0.620, Loss: 0.719
Epoch 1 Batch 190/269 - Train Accuracy: 0.610, Validation Accuracy: 0.607, Loss: 0.723
Epoch 1 Batch 191/269 - Train Accuracy: 0.619, Validation Accuracy: 0.601, Loss: 0.724
Epoch 1 Batch 192/269 - Train Accuracy: 0.621, Validation Accuracy: 0.616, Loss: 0.733
Epoch 1 Batch 193/269 - Train Accuracy: 0.609, Validation Accuracy: 0.614, Loss: 0.732
Epoch 1 Batch 194/269 - Train Accuracy: 0.620, Validation Accuracy: 0.603, Loss: 0.741
Epoch 1 Batch 195/269 - Train Accuracy: 0.599, Validation Accuracy: 0.603, Loss: 0.734
Epoch 1 Batch 196/269 - Train Accuracy: 0.592, Validation Accuracy: 0.611, Loss: 0.730
Epoch 1 Batch 197/269 - Train Accuracy: 0.588, Validation Accuracy: 0.611, Loss: 0.760
Epoch 1 Batch 198/269 - Train Accuracy: 0.593, Validation Accuracy: 0.613, Loss: 0.773
Epoch 1 Batch 199/269 - Train Accuracy: 0.588, Validation Accuracy: 0.604, Loss: 0.744
Epoch 1 Batch 200/269 - Train Accuracy: 0.596, Validation Accuracy: 0.606, Loss: 0.748
Epoch 1 Batch 201/269 - Train Accuracy: 0.611, Validation Accuracy: 0.619, Loss: 0.722
Epoch 1 Batch 202/269 - Train Accuracy: 0.611, Validation Accuracy: 0.622, Loss: 0.719
Epoch 1 Batch 203/269 - Train Accuracy: 0.598, Validation Accuracy: 0.618, Loss: 0.770
Epoch 1 Batch 204/269 - Train Accuracy: 0.590, Validation Accuracy: 0.621, Loss: 0.744
Epoch 1 Batch 205/269 - Train Accuracy: 0.617, Validation Accuracy: 0.622, Loss: 0.712
Epoch 1 Batch 206/269 - Train Accuracy: 0.607, Validation Accuracy: 0.623, Loss: 0.751
Epoch 1 Batch 207/269 - Train Accuracy: 0.632, Validation Accuracy: 0.620, Loss: 0.705
Epoch 1 Batch 208/269 - Train Accuracy: 0.591, Validation Accuracy: 0.618, Loss: 0.748
Epoch 1 Batch 209/269 - Train Accuracy: 0.615, Validation Accuracy: 0.621, Loss: 0.726
Epoch 1 Batch 210/269 - Train Accuracy: 0.626, Validation Accuracy: 0.618, Loss: 0.700
Epoch 1 Batch 211/269 - Train Accuracy: 0.616, Validation Accuracy: 0.617, Loss: 0.724
Epoch 1 Batch 212/269 - Train Accuracy: 0.625, Validation Accuracy: 0.608, Loss: 0.707
Epoch 1 Batch 213/269 - Train Accuracy: 0.612, Validation Accuracy: 0.608, Loss: 0.714
Epoch 1 Batch 214/269 - Train Accuracy: 0.632, Validation Accuracy: 0.607, Loss: 0.703
Epoch 1 Batch 215/269 - Train Accuracy: 0.631, Validation Accuracy: 0.607, Loss: 0.673
Epoch 1 Batch 216/269 - Train Accuracy: 0.592, Validation Accuracy: 0.621, Loss: 0.764
Epoch 1 Batch 217/269 - Train Accuracy: 0.598, Validation Accuracy: 0.620, Loss: 0.746
Epoch 1 Batch 218/269 - Train Accuracy: 0.602, Validation Accuracy: 0.614, Loss: 0.739
Epoch 1 Batch 219/269 - Train Accuracy: 0.614, Validation Accuracy: 0.619, Loss: 0.733
Epoch 1 Batch 220/269 - Train Accuracy: 0.621, Validation Accuracy: 0.625, Loss: 0.668
Epoch 1 Batch 221/269 - Train Accuracy: 0.643, Validation Accuracy: 0.623, Loss: 0.706
Epoch 1 Batch 222/269 - Train Accuracy: 0.631, Validation Accuracy: 0.620, Loss: 0.683
Epoch 1 Batch 223/269 - Train Accuracy: 0.608, Validation Accuracy: 0.621, Loss: 0.688
Epoch 1 Batch 224/269 - Train Accuracy: 0.618, Validation Accuracy: 0.623, Loss: 0.721
Epoch 1 Batch 225/269 - Train Accuracy: 0.604, Validation Accuracy: 0.615, Loss: 0.720
Epoch 1 Batch 226/269 - Train Accuracy: 0.612, Validation Accuracy: 0.621, Loss: 0.706
Epoch 1 Batch 227/269 - Train Accuracy: 0.673, Validation Accuracy: 0.625, Loss: 0.623
Epoch 1 Batch 228/269 - Train Accuracy: 0.618, Validation Accuracy: 0.624, Loss: 0.701
Epoch 1 Batch 229/269 - Train Accuracy: 0.618, Validation Accuracy: 0.626, Loss: 0.688
Epoch 1 Batch 230/269 - Train Accuracy: 0.603, Validation Accuracy: 0.610, Loss: 0.701
Epoch 1 Batch 231/269 - Train Accuracy: 0.585, Validation Accuracy: 0.610, Loss: 0.737
Epoch 1 Batch 232/269 - Train Accuracy: 0.601, Validation Accuracy: 0.622, Loss: 0.733
Epoch 1 Batch 233/269 - Train Accuracy: 0.632, Validation Accuracy: 0.630, Loss: 0.706
Epoch 1 Batch 234/269 - Train Accuracy: 0.627, Validation Accuracy: 0.626, Loss: 0.700
Epoch 1 Batch 235/269 - Train Accuracy: 0.629, Validation Accuracy: 0.622, Loss: 0.689
Epoch 1 Batch 236/269 - Train Accuracy: 0.617, Validation Accuracy: 0.622, Loss: 0.689
Epoch 1 Batch 237/269 - Train Accuracy: 0.618, Validation Accuracy: 0.631, Loss: 0.690
Epoch 1 Batch 238/269 - Train Accuracy: 0.649, Validation Accuracy: 0.629, Loss: 0.681
Epoch 1 Batch 239/269 - Train Accuracy: 0.630, Validation Accuracy: 0.623, Loss: 0.684
Epoch 1 Batch 240/269 - Train Accuracy: 0.652, Validation Accuracy: 0.619, Loss: 0.634
Epoch 1 Batch 241/269 - Train Accuracy: 0.632, Validation Accuracy: 0.632, Loss: 0.696
Epoch 1 Batch 242/269 - Train Accuracy: 0.619, Validation Accuracy: 0.634, Loss: 0.690
Epoch 1 Batch 243/269 - Train Accuracy: 0.646, Validation Accuracy: 0.633, Loss: 0.659
Epoch 1 Batch 244/269 - Train Accuracy: 0.614, Validation Accuracy: 0.633, Loss: 0.687
Epoch 1 Batch 245/269 - Train Accuracy: 0.606, Validation Accuracy: 0.633, Loss: 0.724
Epoch 1 Batch 246/269 - Train Accuracy: 0.603, Validation Accuracy: 0.627, Loss: 0.690
Epoch 1 Batch 247/269 - Train Accuracy: 0.613, Validation Accuracy: 0.633, Loss: 0.714
Epoch 1 Batch 248/269 - Train Accuracy: 0.627, Validation Accuracy: 0.635, Loss: 0.682
Epoch 1 Batch 249/269 - Train Accuracy: 0.650, Validation Accuracy: 0.630, Loss: 0.648
Epoch 1 Batch 250/269 - Train Accuracy: 0.607, Validation Accuracy: 0.627, Loss: 0.701
Epoch 1 Batch 251/269 - Train Accuracy: 0.646, Validation Accuracy: 0.640, Loss: 0.670
Epoch 1 Batch 252/269 - Train Accuracy: 0.631, Validation Accuracy: 0.640, Loss: 0.685
Epoch 1 Batch 253/269 - Train Accuracy: 0.622, Validation Accuracy: 0.635, Loss: 0.694
Epoch 1 Batch 254/269 - Train Accuracy: 0.621, Validation Accuracy: 0.627, Loss: 0.676
Epoch 1 Batch 255/269 - Train Accuracy: 0.651, Validation Accuracy: 0.625, Loss: 0.653
Epoch 1 Batch 256/269 - Train Accuracy: 0.619, Validation Accuracy: 0.630, Loss: 0.688
Epoch 1 Batch 257/269 - Train Accuracy: 0.617, Validation Accuracy: 0.624, Loss: 0.680
Epoch 1 Batch 258/269 - Train Accuracy: 0.612, Validation Accuracy: 0.624, Loss: 0.682
Epoch 1 Batch 259/269 - Train Accuracy: 0.644, Validation Accuracy: 0.627, Loss: 0.677
Epoch 1 Batch 260/269 - Train Accuracy: 0.604, Validation Accuracy: 0.626, Loss: 0.712
Epoch 1 Batch 261/269 - Train Accuracy: 0.599, Validation Accuracy: 0.630, Loss: 0.714
Epoch 1 Batch 262/269 - Train Accuracy: 0.646, Validation Accuracy: 0.630, Loss: 0.673
Epoch 1 Batch 263/269 - Train Accuracy: 0.622, Validation Accuracy: 0.626, Loss: 0.687
Epoch 1 Batch 264/269 - Train Accuracy: 0.605, Validation Accuracy: 0.629, Loss: 0.703
Epoch 1 Batch 265/269 - Train Accuracy: 0.612, Validation Accuracy: 0.629, Loss: 0.699
Epoch 1 Batch 266/269 - Train Accuracy: 0.642, Validation Accuracy: 0.628, Loss: 0.659
Epoch 1 Batch 267/269 - Train Accuracy: 0.631, Validation Accuracy: 0.636, Loss: 0.685
Epoch 2 Batch 0/269 - Train Accuracy: 0.615, Validation Accuracy: 0.632, Loss: 0.704
Epoch 2 Batch 1/269 - Train Accuracy: 0.609, Validation Accuracy: 0.630, Loss: 0.682
Epoch 2 Batch 2/269 - Train Accuracy: 0.612, Validation Accuracy: 0.631, Loss: 0.673
Epoch 2 Batch 3/269 - Train Accuracy: 0.624, Validation Accuracy: 0.622, Loss: 0.676
Epoch 2 Batch 4/269 - Train Accuracy: 0.605, Validation Accuracy: 0.634, Loss: 0.688
Epoch 2 Batch 5/269 - Train Accuracy: 0.609, Validation Accuracy: 0.634, Loss: 0.692
Epoch 2 Batch 6/269 - Train Accuracy: 0.639, Validation Accuracy: 0.625, Loss: 0.637
Epoch 2 Batch 7/269 - Train Accuracy: 0.635, Validation Accuracy: 0.627, Loss: 0.649
Epoch 2 Batch 8/269 - Train Accuracy: 0.624, Validation Accuracy: 0.623, Loss: 0.689
Epoch 2 Batch 9/269 - Train Accuracy: 0.617, Validation Accuracy: 0.623, Loss: 0.677
Epoch 2 Batch 10/269 - Train Accuracy: 0.632, Validation Accuracy: 0.634, Loss: 0.684
Epoch 2 Batch 11/269 - Train Accuracy: 0.622, Validation Accuracy: 0.631, Loss: 0.667
Epoch 2 Batch 12/269 - Train Accuracy: 0.613, Validation Accuracy: 0.633, Loss: 0.692
Epoch 2 Batch 13/269 - Train Accuracy: 0.644, Validation Accuracy: 0.629, Loss: 0.618
Epoch 2 Batch 14/269 - Train Accuracy: 0.629, Validation Accuracy: 0.628, Loss: 0.658
Epoch 2 Batch 15/269 - Train Accuracy: 0.632, Validation Accuracy: 0.628, Loss: 0.648
Epoch 2 Batch 16/269 - Train Accuracy: 0.634, Validation Accuracy: 0.617, Loss: 0.653
Epoch 2 Batch 17/269 - Train Accuracy: 0.639, Validation Accuracy: 0.623, Loss: 0.645
Epoch 2 Batch 18/269 - Train Accuracy: 0.606, Validation Accuracy: 0.632, Loss: 0.670
Epoch 2 Batch 19/269 - Train Accuracy: 0.661, Validation Accuracy: 0.636, Loss: 0.615
Epoch 2 Batch 20/269 - Train Accuracy: 0.637, Validation Accuracy: 0.641, Loss: 0.663
Epoch 2 Batch 21/269 - Train Accuracy: 0.633, Validation Accuracy: 0.638, Loss: 0.684
Epoch 2 Batch 22/269 - Train Accuracy: 0.660, Validation Accuracy: 0.646, Loss: 0.637
Epoch 2 Batch 23/269 - Train Accuracy: 0.648, Validation Accuracy: 0.643, Loss: 0.645
Epoch 2 Batch 24/269 - Train Accuracy: 0.634, Validation Accuracy: 0.641, Loss: 0.677
Epoch 2 Batch 25/269 - Train Accuracy: 0.624, Validation Accuracy: 0.649, Loss: 0.688
Epoch 2 Batch 26/269 - Train Accuracy: 0.662, Validation Accuracy: 0.645, Loss: 0.599
Epoch 2 Batch 27/269 - Train Accuracy: 0.623, Validation Accuracy: 0.642, Loss: 0.634
Epoch 2 Batch 28/269 - Train Accuracy: 0.597, Validation Accuracy: 0.638, Loss: 0.690
Epoch 2 Batch 29/269 - Train Accuracy: 0.635, Validation Accuracy: 0.642, Loss: 0.671
Epoch 2 Batch 30/269 - Train Accuracy: 0.649, Validation Accuracy: 0.637, Loss: 0.634
Epoch 2 Batch 31/269 - Train Accuracy: 0.646, Validation Accuracy: 0.632, Loss: 0.627
Epoch 2 Batch 32/269 - Train Accuracy: 0.636, Validation Accuracy: 0.634, Loss: 0.632
Epoch 2 Batch 33/269 - Train Accuracy: 0.662, Validation Accuracy: 0.638, Loss: 0.622
Epoch 2 Batch 34/269 - Train Accuracy: 0.638, Validation Accuracy: 0.646, Loss: 0.633
Epoch 2 Batch 35/269 - Train Accuracy: 0.634, Validation Accuracy: 0.636, Loss: 0.642
Epoch 2 Batch 36/269 - Train Accuracy: 0.635, Validation Accuracy: 0.639, Loss: 0.641
Epoch 2 Batch 37/269 - Train Accuracy: 0.647, Validation Accuracy: 0.642, Loss: 0.633
Epoch 2 Batch 38/269 - Train Accuracy: 0.638, Validation Accuracy: 0.639, Loss: 0.632
Epoch 2 Batch 39/269 - Train Accuracy: 0.656, Validation Accuracy: 0.646, Loss: 0.634
Epoch 2 Batch 40/269 - Train Accuracy: 0.630, Validation Accuracy: 0.644, Loss: 0.661
Epoch 2 Batch 41/269 - Train Accuracy: 0.615, Validation Accuracy: 0.629, Loss: 0.653
Epoch 2 Batch 42/269 - Train Accuracy: 0.660, Validation Accuracy: 0.643, Loss: 0.613
Epoch 2 Batch 43/269 - Train Accuracy: 0.636, Validation Accuracy: 0.638, Loss: 0.655
Epoch 2 Batch 44/269 - Train Accuracy: 0.643, Validation Accuracy: 0.634, Loss: 0.644
Epoch 2 Batch 45/269 - Train Accuracy: 0.618, Validation Accuracy: 0.639, Loss: 0.657
Epoch 2 Batch 46/269 - Train Accuracy: 0.643, Validation Accuracy: 0.641, Loss: 0.659
Epoch 2 Batch 47/269 - Train Accuracy: 0.672, Validation Accuracy: 0.638, Loss: 0.583
Epoch 2 Batch 48/269 - Train Accuracy: 0.638, Validation Accuracy: 0.630, Loss: 0.622
Epoch 2 Batch 49/269 - Train Accuracy: 0.605, Validation Accuracy: 0.639, Loss: 0.645
Epoch 2 Batch 50/269 - Train Accuracy: 0.630, Validation Accuracy: 0.642, Loss: 0.656
Epoch 2 Batch 51/269 - Train Accuracy: 0.619, Validation Accuracy: 0.638, Loss: 0.633
Epoch 2 Batch 52/269 - Train Accuracy: 0.635, Validation Accuracy: 0.637, Loss: 0.604
Epoch 2 Batch 53/269 - Train Accuracy: 0.629, Validation Accuracy: 0.638, Loss: 0.654
Epoch 2 Batch 54/269 - Train Accuracy: 0.641, Validation Accuracy: 0.649, Loss: 0.646
Epoch 2 Batch 55/269 - Train Accuracy: 0.646, Validation Accuracy: 0.641, Loss: 0.612
Epoch 2 Batch 56/269 - Train Accuracy: 0.657, Validation Accuracy: 0.645, Loss: 0.627
Epoch 2 Batch 57/269 - Train Accuracy: 0.656, Validation Accuracy: 0.640, Loss: 0.636
Epoch 2 Batch 58/269 - Train Accuracy: 0.649, Validation Accuracy: 0.649, Loss: 0.614
Epoch 2 Batch 59/269 - Train Accuracy: 0.649, Validation Accuracy: 0.645, Loss: 0.595
Epoch 2 Batch 60/269 - Train Accuracy: 0.664, Validation Accuracy: 0.650, Loss: 0.592
Epoch 2 Batch 61/269 - Train Accuracy: 0.665, Validation Accuracy: 0.650, Loss: 0.575
Epoch 2 Batch 62/269 - Train Accuracy: 0.663, Validation Accuracy: 0.648, Loss: 0.594
Epoch 2 Batch 63/269 - Train Accuracy: 0.634, Validation Accuracy: 0.645, Loss: 0.622
Epoch 2 Batch 64/269 - Train Accuracy: 0.642, Validation Accuracy: 0.654, Loss: 0.610
Epoch 2 Batch 65/269 - Train Accuracy: 0.646, Validation Accuracy: 0.654, Loss: 0.613
Epoch 2 Batch 66/269 - Train Accuracy: 0.641, Validation Accuracy: 0.651, Loss: 0.596
Epoch 2 Batch 67/269 - Train Accuracy: 0.646, Validation Accuracy: 0.650, Loss: 0.625
Epoch 2 Batch 68/269 - Train Accuracy: 0.640, Validation Accuracy: 0.642, Loss: 0.615
Epoch 2 Batch 69/269 - Train Accuracy: 0.609, Validation Accuracy: 0.635, Loss: 0.666
Epoch 2 Batch 70/269 - Train Accuracy: 0.643, Validation Accuracy: 0.640, Loss: 0.616
Epoch 2 Batch 71/269 - Train Accuracy: 0.612, Validation Accuracy: 0.650, Loss: 0.638
Epoch 2 Batch 72/269 - Train Accuracy: 0.647, Validation Accuracy: 0.649, Loss: 0.597
Epoch 2 Batch 73/269 - Train Accuracy: 0.648, Validation Accuracy: 0.650, Loss: 0.620
Epoch 2 Batch 74/269 - Train Accuracy: 0.640, Validation Accuracy: 0.644, Loss: 0.621
Epoch 2 Batch 75/269 - Train Accuracy: 0.659, Validation Accuracy: 0.646, Loss: 0.607
Epoch 2 Batch 76/269 - Train Accuracy: 0.640, Validation Accuracy: 0.652, Loss: 0.618
Epoch 2 Batch 77/269 - Train Accuracy: 0.658, Validation Accuracy: 0.652, Loss: 0.600
Epoch 2 Batch 78/269 - Train Accuracy: 0.653, Validation Accuracy: 0.655, Loss: 0.596
Epoch 2 Batch 79/269 - Train Accuracy: 0.652, Validation Accuracy: 0.656, Loss: 0.596
Epoch 2 Batch 80/269 - Train Accuracy: 0.662, Validation Accuracy: 0.659, Loss: 0.598
Epoch 2 Batch 81/269 - Train Accuracy: 0.658, Validation Accuracy: 0.661, Loss: 0.619
Epoch 2 Batch 82/269 - Train Accuracy: 0.668, Validation Accuracy: 0.656, Loss: 0.570
Epoch 2 Batch 83/269 - Train Accuracy: 0.650, Validation Accuracy: 0.656, Loss: 0.605
Epoch 2 Batch 84/269 - Train Accuracy: 0.659, Validation Accuracy: 0.658, Loss: 0.585
Epoch 2 Batch 85/269 - Train Accuracy: 0.645, Validation Accuracy: 0.660, Loss: 0.592
Epoch 2 Batch 86/269 - Train Accuracy: 0.644, Validation Accuracy: 0.656, Loss: 0.594
Epoch 2 Batch 87/269 - Train Accuracy: 0.639, Validation Accuracy: 0.660, Loss: 0.629
Epoch 2 Batch 88/269 - Train Accuracy: 0.650, Validation Accuracy: 0.654, Loss: 0.595
Epoch 2 Batch 89/269 - Train Accuracy: 0.666, Validation Accuracy: 0.657, Loss: 0.596
Epoch 2 Batch 90/269 - Train Accuracy: 0.620, Validation Accuracy: 0.651, Loss: 0.632
Epoch 2 Batch 91/269 - Train Accuracy: 0.662, Validation Accuracy: 0.652, Loss: 0.574
Epoch 2 Batch 92/269 - Train Accuracy: 0.646, Validation Accuracy: 0.659, Loss: 0.588
Epoch 2 Batch 93/269 - Train Accuracy: 0.659, Validation Accuracy: 0.658, Loss: 0.564
Epoch 2 Batch 94/269 - Train Accuracy: 0.656, Validation Accuracy: 0.662, Loss: 0.603
Epoch 2 Batch 95/269 - Train Accuracy: 0.647, Validation Accuracy: 0.658, Loss: 0.596
Epoch 2 Batch 96/269 - Train Accuracy: 0.639, Validation Accuracy: 0.655, Loss: 0.585
Epoch 2 Batch 97/269 - Train Accuracy: 0.653, Validation Accuracy: 0.651, Loss: 0.584
Epoch 2 Batch 98/269 - Train Accuracy: 0.662, Validation Accuracy: 0.654, Loss: 0.586
Epoch 2 Batch 99/269 - Train Accuracy: 0.642, Validation Accuracy: 0.663, Loss: 0.609
Epoch 2 Batch 100/269 - Train Accuracy: 0.668, Validation Accuracy: 0.660, Loss: 0.573
Epoch 2 Batch 101/269 - Train Accuracy: 0.627, Validation Accuracy: 0.665, Loss: 0.627
Epoch 2 Batch 102/269 - Train Accuracy: 0.655, Validation Accuracy: 0.665, Loss: 0.584
Epoch 2 Batch 103/269 - Train Accuracy: 0.654, Validation Accuracy: 0.666, Loss: 0.579
Epoch 2 Batch 104/269 - Train Accuracy: 0.634, Validation Accuracy: 0.665, Loss: 0.584
Epoch 2 Batch 105/269 - Train Accuracy: 0.640, Validation Accuracy: 0.667, Loss: 0.590
Epoch 2 Batch 106/269 - Train Accuracy: 0.644, Validation Accuracy: 0.661, Loss: 0.577
Epoch 2 Batch 107/269 - Train Accuracy: 0.637, Validation Accuracy: 0.661, Loss: 0.616
Epoch 2 Batch 108/269 - Train Accuracy: 0.659, Validation Accuracy: 0.660, Loss: 0.587
Epoch 2 Batch 109/269 - Train Accuracy: 0.636, Validation Accuracy: 0.659, Loss: 0.588
Epoch 2 Batch 110/269 - Train Accuracy: 0.656, Validation Accuracy: 0.664, Loss: 0.573
Epoch 2 Batch 111/269 - Train Accuracy: 0.631, Validation Accuracy: 0.663, Loss: 0.615
Epoch 2 Batch 112/269 - Train Accuracy: 0.649,
Validation Accuracy: 0.655, Loss: 0.576 Epoch 2 Batch 113/269 - Train Accuracy: 0.661, Validation Accuracy: 0.661, Loss: 0.556 Epoch 2 Batch 114/269 - Train Accuracy: 0.668, Validation Accuracy: 0.662, Loss: 0.573 Epoch 2 Batch 115/269 - Train Accuracy: 0.629, Validation Accuracy: 0.654, Loss: 0.600 Epoch 2 Batch 116/269 - Train Accuracy: 0.655, Validation Accuracy: 0.664, Loss: 0.587 Epoch 2 Batch 117/269 - Train Accuracy: 0.652, Validation Accuracy: 0.655, Loss: 0.574 Epoch 2 Batch 118/269 - Train Accuracy: 0.674, Validation Accuracy: 0.657, Loss: 0.559 Epoch 2 Batch 119/269 - Train Accuracy: 0.644, Validation Accuracy: 0.659, Loss: 0.599 Epoch 2 Batch 120/269 - Train Accuracy: 0.657, Validation Accuracy: 0.662, Loss: 0.596 Epoch 2 Batch 121/269 - Train Accuracy: 0.663, Validation Accuracy: 0.666, Loss: 0.569 Epoch 2 Batch 122/269 - Train Accuracy: 0.666, Validation Accuracy: 0.665, Loss: 0.561 Epoch 2 Batch 123/269 - Train Accuracy: 0.653, Validation Accuracy: 0.666, Loss: 0.593 Epoch 2 Batch 124/269 - Train Accuracy: 0.645, Validation Accuracy: 0.664, Loss: 0.559 Epoch 2 Batch 125/269 - Train Accuracy: 0.670, Validation Accuracy: 0.663, Loss: 0.557 Epoch 2 Batch 126/269 - Train Accuracy: 0.669, Validation Accuracy: 0.673, Loss: 0.556 Epoch 2 Batch 127/269 - Train Accuracy: 0.642, Validation Accuracy: 0.663, Loss: 0.591 Epoch 2 Batch 128/269 - Train Accuracy: 0.664, Validation Accuracy: 0.665, Loss: 0.570 Epoch 2 Batch 129/269 - Train Accuracy: 0.650, Validation Accuracy: 0.669, Loss: 0.570 Epoch 2 Batch 130/269 - Train Accuracy: 0.654, Validation Accuracy: 0.678, Loss: 0.588 Epoch 2 Batch 131/269 - Train Accuracy: 0.642, Validation Accuracy: 0.661, Loss: 0.581 Epoch 2 Batch 132/269 - Train Accuracy: 0.643, Validation Accuracy: 0.664, Loss: 0.579 Epoch 2 Batch 133/269 - Train Accuracy: 0.675, Validation Accuracy: 0.666, Loss: 0.549 Epoch 2 Batch 134/269 - Train Accuracy: 0.638, Validation Accuracy: 0.658, Loss: 0.584 Epoch 2 Batch 135/269 - Train Accuracy: 
0.652, Validation Accuracy: 0.668, Loss: 0.606 Epoch 2 Batch 136/269 - Train Accuracy: 0.639, Validation Accuracy: 0.679, Loss: 0.602 Epoch 2 Batch 137/269 - Train Accuracy: 0.652, Validation Accuracy: 0.662, Loss: 0.594 Epoch 2 Batch 138/269 - Train Accuracy: 0.660, Validation Accuracy: 0.672, Loss: 0.581 Epoch 2 Batch 139/269 - Train Accuracy: 0.678, Validation Accuracy: 0.672, Loss: 0.553 Epoch 2 Batch 140/269 - Train Accuracy: 0.662, Validation Accuracy: 0.652, Loss: 0.574 Epoch 2 Batch 141/269 - Train Accuracy: 0.653, Validation Accuracy: 0.666, Loss: 0.572 Epoch 2 Batch 142/269 - Train Accuracy: 0.662, Validation Accuracy: 0.682, Loss: 0.551 Epoch 2 Batch 143/269 - Train Accuracy: 0.664, Validation Accuracy: 0.672, Loss: 0.562 Epoch 2 Batch 144/269 - Train Accuracy: 0.679, Validation Accuracy: 0.672, Loss: 0.539 Epoch 2 Batch 145/269 - Train Accuracy: 0.685, Validation Accuracy: 0.687, Loss: 0.548 Epoch 2 Batch 146/269 - Train Accuracy: 0.663, Validation Accuracy: 0.677, Loss: 0.550 Epoch 2 Batch 147/269 - Train Accuracy: 0.675, Validation Accuracy: 0.658, Loss: 0.533 Epoch 2 Batch 148/269 - Train Accuracy: 0.667, Validation Accuracy: 0.676, Loss: 0.560 Epoch 2 Batch 149/269 - Train Accuracy: 0.673, Validation Accuracy: 0.686, Loss: 0.567 Epoch 2 Batch 150/269 - Train Accuracy: 0.673, Validation Accuracy: 0.679, Loss: 0.550 Epoch 2 Batch 151/269 - Train Accuracy: 0.680, Validation Accuracy: 0.670, Loss: 0.526 Epoch 2 Batch 152/269 - Train Accuracy: 0.665, Validation Accuracy: 0.678, Loss: 0.557 Epoch 2 Batch 153/269 - Train Accuracy: 0.676, Validation Accuracy: 0.681, Loss: 0.543 Epoch 2 Batch 154/269 - Train Accuracy: 0.669, Validation Accuracy: 0.685, Loss: 0.566 Epoch 2 Batch 155/269 - Train Accuracy: 0.695, Validation Accuracy: 0.680, Loss: 0.522 Epoch 2 Batch 156/269 - Train Accuracy: 0.654, Validation Accuracy: 0.679, Loss: 0.578 Epoch 2 Batch 157/269 - Train Accuracy: 0.656, Validation Accuracy: 0.676, Loss: 0.548 Epoch 2 Batch 158/269 - Train 
Accuracy: 0.675, Validation Accuracy: 0.687, Loss: 0.551 Epoch 2 Batch 159/269 - Train Accuracy: 0.686, Validation Accuracy: 0.693, Loss: 0.552 Epoch 2 Batch 160/269 - Train Accuracy: 0.672, Validation Accuracy: 0.684, Loss: 0.549 Epoch 2 Batch 161/269 - Train Accuracy: 0.652, Validation Accuracy: 0.665, Loss: 0.549 Epoch 2 Batch 162/269 - Train Accuracy: 0.688, Validation Accuracy: 0.696, Loss: 0.545 Epoch 2 Batch 163/269 - Train Accuracy: 0.690, Validation Accuracy: 0.684, Loss: 0.544 Epoch 2 Batch 164/269 - Train Accuracy: 0.672, Validation Accuracy: 0.664, Loss: 0.541 Epoch 2 Batch 165/269 - Train Accuracy: 0.671, Validation Accuracy: 0.691, Loss: 0.564 Epoch 2 Batch 166/269 - Train Accuracy: 0.705, Validation Accuracy: 0.690, Loss: 0.516 Epoch 2 Batch 167/269 - Train Accuracy: 0.654, Validation Accuracy: 0.657, Loss: 0.543 Epoch 2 Batch 168/269 - Train Accuracy: 0.666, Validation Accuracy: 0.673, Loss: 0.554 Epoch 2 Batch 169/269 - Train Accuracy: 0.680, Validation Accuracy: 0.696, Loss: 0.553 Epoch 2 Batch 170/269 - Train Accuracy: 0.683, Validation Accuracy: 0.693, Loss: 0.538 Epoch 2 Batch 171/269 - Train Accuracy: 0.655, Validation Accuracy: 0.668, Loss: 0.567 Epoch 2 Batch 172/269 - Train Accuracy: 0.675, Validation Accuracy: 0.693, Loss: 0.553 Epoch 2 Batch 173/269 - Train Accuracy: 0.683, Validation Accuracy: 0.693, Loss: 0.534 Epoch 2 Batch 174/269 - Train Accuracy: 0.655, Validation Accuracy: 0.677, Loss: 0.541 Epoch 2 Batch 175/269 - Train Accuracy: 0.673, Validation Accuracy: 0.681, Loss: 0.564 Epoch 2 Batch 176/269 - Train Accuracy: 0.667, Validation Accuracy: 0.697, Loss: 0.573 Epoch 2 Batch 177/269 - Train Accuracy: 0.696, Validation Accuracy: 0.691, Loss: 0.524 Epoch 2 Batch 178/269 - Train Accuracy: 0.662, Validation Accuracy: 0.672, Loss: 0.547 Epoch 2 Batch 179/269 - Train Accuracy: 0.670, Validation Accuracy: 0.680, Loss: 0.543 Epoch 2 Batch 180/269 - Train Accuracy: 0.689, Validation Accuracy: 0.699, Loss: 0.537 Epoch 2 Batch 181/269 - 
Train Accuracy: 0.672, Validation Accuracy: 0.693, Loss: 0.549 Epoch 2 Batch 182/269 - Train Accuracy: 0.662, Validation Accuracy: 0.663, Loss: 0.541 Epoch 2 Batch 183/269 - Train Accuracy: 0.737, Validation Accuracy: 0.704, Loss: 0.465 Epoch 2 Batch 184/269 - Train Accuracy: 0.683, Validation Accuracy: 0.711, Loss: 0.558 Epoch 2 Batch 185/269 - Train Accuracy: 0.695, Validation Accuracy: 0.698, Loss: 0.530 Epoch 2 Batch 186/269 - Train Accuracy: 0.644, Validation Accuracy: 0.667, Loss: 0.545 Epoch 2 Batch 187/269 - Train Accuracy: 0.701, Validation Accuracy: 0.702, Loss: 0.522 Epoch 2 Batch 188/269 - Train Accuracy: 0.687, Validation Accuracy: 0.696, Loss: 0.520 Epoch 2 Batch 189/269 - Train Accuracy: 0.676, Validation Accuracy: 0.683, Loss: 0.517 Epoch 2 Batch 190/269 - Train Accuracy: 0.684, Validation Accuracy: 0.687, Loss: 0.521 Epoch 2 Batch 191/269 - Train Accuracy: 0.706, Validation Accuracy: 0.704, Loss: 0.526 Epoch 2 Batch 192/269 - Train Accuracy: 0.695, Validation Accuracy: 0.703, Loss: 0.532 Epoch 2 Batch 193/269 - Train Accuracy: 0.685, Validation Accuracy: 0.684, Loss: 0.528 Epoch 2 Batch 194/269 - Train Accuracy: 0.694, Validation Accuracy: 0.691, Loss: 0.535 Epoch 2 Batch 195/269 - Train Accuracy: 0.692, Validation Accuracy: 0.706, Loss: 0.522 Epoch 2 Batch 196/269 - Train Accuracy: 0.685, Validation Accuracy: 0.709, Loss: 0.518 Epoch 2 Batch 197/269 - Train Accuracy: 0.650, Validation Accuracy: 0.683, Loss: 0.554 Epoch 2 Batch 198/269 - Train Accuracy: 0.673, Validation Accuracy: 0.704, Loss: 0.554 Epoch 2 Batch 199/269 - Train Accuracy: 0.697, Validation Accuracy: 0.703, Loss: 0.533 Epoch 2 Batch 200/269 - Train Accuracy: 0.688, Validation Accuracy: 0.706, Loss: 0.542 Epoch 2 Batch 201/269 - Train Accuracy: 0.687, Validation Accuracy: 0.680, Loss: 0.520 Epoch 2 Batch 202/269 - Train Accuracy: 0.670, Validation Accuracy: 0.690, Loss: 0.523 Epoch 2 Batch 203/269 - Train Accuracy: 0.682, Validation Accuracy: 0.709, Loss: 0.558 Epoch 2 Batch 204/269 
- Train Accuracy: 0.682, Validation Accuracy: 0.699, Loss: 0.546 Epoch 2 Batch 205/269 - Train Accuracy: 0.681, Validation Accuracy: 0.683, Loss: 0.518 Epoch 2 Batch 206/269 - Train Accuracy: 0.680, Validation Accuracy: 0.694, Loss: 0.548 Epoch 2 Batch 207/269 - Train Accuracy: 0.705, Validation Accuracy: 0.710, Loss: 0.509 Epoch 2 Batch 208/269 - Train Accuracy: 0.666, Validation Accuracy: 0.687, Loss: 0.544 Epoch 2 Batch 209/269 - Train Accuracy: 0.703, Validation Accuracy: 0.700, Loss: 0.530 Epoch 2 Batch 210/269 - Train Accuracy: 0.706, Validation Accuracy: 0.703, Loss: 0.510 Epoch 2 Batch 211/269 - Train Accuracy: 0.685, Validation Accuracy: 0.706, Loss: 0.522 Epoch 2 Batch 212/269 - Train Accuracy: 0.695, Validation Accuracy: 0.708, Loss: 0.512 Epoch 2 Batch 213/269 - Train Accuracy: 0.703, Validation Accuracy: 0.701, Loss: 0.516 Epoch 2 Batch 214/269 - Train Accuracy: 0.712, Validation Accuracy: 0.718, Loss: 0.524 Epoch 2 Batch 215/269 - Train Accuracy: 0.720, Validation Accuracy: 0.713, Loss: 0.490 Epoch 2 Batch 216/269 - Train Accuracy: 0.670, Validation Accuracy: 0.705, Loss: 0.553 Epoch 2 Batch 217/269 - Train Accuracy: 0.678, Validation Accuracy: 0.709, Loss: 0.541 Epoch 2 Batch 218/269 - Train Accuracy: 0.698, Validation Accuracy: 0.714, Loss: 0.532 Epoch 2 Batch 219/269 - Train Accuracy: 0.707, Validation Accuracy: 0.709, Loss: 0.536 Epoch 2 Batch 220/269 - Train Accuracy: 0.702, Validation Accuracy: 0.712, Loss: 0.486 Epoch 2 Batch 221/269 - Train Accuracy: 0.722, Validation Accuracy: 0.709, Loss: 0.509 Epoch 2 Batch 222/269 - Train Accuracy: 0.728, Validation Accuracy: 0.708, Loss: 0.496 Epoch 2 Batch 223/269 - Train Accuracy: 0.690, Validation Accuracy: 0.706, Loss: 0.503 Epoch 2 Batch 224/269 - Train Accuracy: 0.711, Validation Accuracy: 0.718, Loss: 0.521 Epoch 2 Batch 225/269 - Train Accuracy: 0.704, Validation Accuracy: 0.721, Loss: 0.513 Epoch 2 Batch 226/269 - Train Accuracy: 0.713, Validation Accuracy: 0.726, Loss: 0.508 Epoch 2 Batch 
227/269 - Train Accuracy: 0.744, Validation Accuracy: 0.720, Loss: 0.452 Epoch 2 Batch 228/269 - Train Accuracy: 0.698, Validation Accuracy: 0.714, Loss: 0.509 Epoch 2 Batch 229/269 - Train Accuracy: 0.695, Validation Accuracy: 0.725, Loss: 0.496 Epoch 2 Batch 230/269 - Train Accuracy: 0.703, Validation Accuracy: 0.722, Loss: 0.500 Epoch 2 Batch 231/269 - Train Accuracy: 0.687, Validation Accuracy: 0.717, Loss: 0.540 Epoch 2 Batch 232/269 - Train Accuracy: 0.673, Validation Accuracy: 0.698, Loss: 0.532 Epoch 2 Batch 233/269 - Train Accuracy: 0.727, Validation Accuracy: 0.728, Loss: 0.520 Epoch 2 Batch 234/269 - Train Accuracy: 0.715, Validation Accuracy: 0.723, Loss: 0.499 Epoch 2 Batch 235/269 - Train Accuracy: 0.681, Validation Accuracy: 0.698, Loss: 0.501 Epoch 2 Batch 236/269 - Train Accuracy: 0.689, Validation Accuracy: 0.712, Loss: 0.499 Epoch 2 Batch 237/269 - Train Accuracy: 0.699, Validation Accuracy: 0.726, Loss: 0.509 Epoch 2 Batch 238/269 - Train Accuracy: 0.718, Validation Accuracy: 0.721, Loss: 0.489 Epoch 2 Batch 239/269 - Train Accuracy: 0.690, Validation Accuracy: 0.703, Loss: 0.499 Epoch 2 Batch 240/269 - Train Accuracy: 0.734, Validation Accuracy: 0.715, Loss: 0.465 Epoch 2 Batch 241/269 - Train Accuracy: 0.712, Validation Accuracy: 0.722, Loss: 0.499 Epoch 2 Batch 242/269 - Train Accuracy: 0.681, Validation Accuracy: 0.702, Loss: 0.499 Epoch 2 Batch 243/269 - Train Accuracy: 0.712, Validation Accuracy: 0.701, Loss: 0.490 Epoch 2 Batch 244/269 - Train Accuracy: 0.705, Validation Accuracy: 0.723, Loss: 0.495 Epoch 2 Batch 245/269 - Train Accuracy: 0.688, Validation Accuracy: 0.723, Loss: 0.536 Epoch 2 Batch 246/269 - Train Accuracy: 0.668, Validation Accuracy: 0.695, Loss: 0.504 Epoch 2 Batch 247/269 - Train Accuracy: 0.684, Validation Accuracy: 0.692, Loss: 0.514 Epoch 2 Batch 248/269 - Train Accuracy: 0.710, Validation Accuracy: 0.722, Loss: 0.494 Epoch 2 Batch 249/269 - Train Accuracy: 0.732, Validation Accuracy: 0.709, Loss: 0.476 Epoch 2 
Batch 250/269 - Train Accuracy: 0.692, Validation Accuracy: 0.695, Loss: 0.510 Epoch 2 Batch 251/269 - Train Accuracy: 0.735, Validation Accuracy: 0.723, Loss: 0.491 Epoch 2 Batch 252/269 - Train Accuracy: 0.717, Validation Accuracy: 0.722, Loss: 0.506 Epoch 2 Batch 253/269 - Train Accuracy: 0.674, Validation Accuracy: 0.712, Loss: 0.498 Epoch 2 Batch 254/269 - Train Accuracy: 0.714, Validation Accuracy: 0.720, Loss: 0.492 Epoch 2 Batch 255/269 - Train Accuracy: 0.721, Validation Accuracy: 0.725, Loss: 0.478 Epoch 2 Batch 256/269 - Train Accuracy: 0.690, Validation Accuracy: 0.708, Loss: 0.506 Epoch 2 Batch 257/269 - Train Accuracy: 0.681, Validation Accuracy: 0.696, Loss: 0.506 Epoch 2 Batch 258/269 - Train Accuracy: 0.710, Validation Accuracy: 0.733, Loss: 0.494 Epoch 2 Batch 259/269 - Train Accuracy: 0.716, Validation Accuracy: 0.732, Loss: 0.497 Epoch 2 Batch 260/269 - Train Accuracy: 0.667, Validation Accuracy: 0.697, Loss: 0.516 Epoch 2 Batch 261/269 - Train Accuracy: 0.672, Validation Accuracy: 0.718, Loss: 0.529 Epoch 2 Batch 262/269 - Train Accuracy: 0.724, Validation Accuracy: 0.736, Loss: 0.492 Epoch 2 Batch 263/269 - Train Accuracy: 0.714, Validation Accuracy: 0.733, Loss: 0.503 Epoch 2 Batch 264/269 - Train Accuracy: 0.681, Validation Accuracy: 0.705, Loss: 0.511 Epoch 2 Batch 265/269 - Train Accuracy: 0.701, Validation Accuracy: 0.716, Loss: 0.504 Epoch 2 Batch 266/269 - Train Accuracy: 0.724, Validation Accuracy: 0.728, Loss: 0.478 Epoch 2 Batch 267/269 - Train Accuracy: 0.703, Validation Accuracy: 0.726, Loss: 0.496 Epoch 3 Batch 0/269 - Train Accuracy: 0.691, Validation Accuracy: 0.707, Loss: 0.509 Epoch 3 Batch 1/269 - Train Accuracy: 0.703, Validation Accuracy: 0.731, Loss: 0.502 Epoch 3 Batch 2/269 - Train Accuracy: 0.707, Validation Accuracy: 0.726, Loss: 0.490 Epoch 3 Batch 3/269 - Train Accuracy: 0.697, Validation Accuracy: 0.713, Loss: 0.495 Epoch 3 Batch 4/269 - Train Accuracy: 0.676, Validation Accuracy: 0.721, Loss: 0.505 Epoch 3 Batch 
5/269 - Train Accuracy: 0.687, Validation Accuracy: 0.731, Loss: 0.502 Epoch 3 Batch 6/269 - Train Accuracy: 0.730, Validation Accuracy: 0.728, Loss: 0.473 Epoch 3 Batch 7/269 - Train Accuracy: 0.709, Validation Accuracy: 0.723, Loss: 0.477 Epoch 3 Batch 8/269 - Train Accuracy: 0.703, Validation Accuracy: 0.736, Loss: 0.506 Epoch 3 Batch 9/269 - Train Accuracy: 0.721, Validation Accuracy: 0.735, Loss: 0.492 Epoch 3 Batch 10/269 - Train Accuracy: 0.706, Validation Accuracy: 0.722, Loss: 0.501 Epoch 3 Batch 11/269 - Train Accuracy: 0.723, Validation Accuracy: 0.725, Loss: 0.491 Epoch 3 Batch 12/269 - Train Accuracy: 0.708, Validation Accuracy: 0.735, Loss: 0.505 Epoch 3 Batch 13/269 - Train Accuracy: 0.726, Validation Accuracy: 0.731, Loss: 0.451 Epoch 3 Batch 14/269 - Train Accuracy: 0.691, Validation Accuracy: 0.707, Loss: 0.475 Epoch 3 Batch 15/269 - Train Accuracy: 0.713, Validation Accuracy: 0.721, Loss: 0.469 Epoch 3 Batch 16/269 - Train Accuracy: 0.712, Validation Accuracy: 0.730, Loss: 0.479 Epoch 3 Batch 17/269 - Train Accuracy: 0.712, Validation Accuracy: 0.726, Loss: 0.469 Epoch 3 Batch 18/269 - Train Accuracy: 0.683, Validation Accuracy: 0.717, Loss: 0.489 Epoch 3 Batch 19/269 - Train Accuracy: 0.737, Validation Accuracy: 0.725, Loss: 0.444 Epoch 3 Batch 20/269 - Train Accuracy: 0.703, Validation Accuracy: 0.733, Loss: 0.482 Epoch 3 Batch 21/269 - Train Accuracy: 0.698, Validation Accuracy: 0.733, Loss: 0.514 Epoch 3 Batch 22/269 - Train Accuracy: 0.725, Validation Accuracy: 0.718, Loss: 0.462 Epoch 3 Batch 23/269 - Train Accuracy: 0.714, Validation Accuracy: 0.722, Loss: 0.464 Epoch 3 Batch 24/269 - Train Accuracy: 0.723, Validation Accuracy: 0.735, Loss: 0.493 Epoch 3 Batch 25/269 - Train Accuracy: 0.702, Validation Accuracy: 0.726, Loss: 0.509 Epoch 3 Batch 26/269 - Train Accuracy: 0.737, Validation Accuracy: 0.730, Loss: 0.439 Epoch 3 Batch 27/269 - Train Accuracy: 0.712, Validation Accuracy: 0.730, Loss: 0.464 Epoch 3 Batch 28/269 - Train Accuracy: 
0.676, Validation Accuracy: 0.727, Loss: 0.509 Epoch 3 Batch 29/269 - Train Accuracy: 0.714, Validation Accuracy: 0.738, Loss: 0.495 Epoch 3 Batch 30/269 - Train Accuracy: 0.726, Validation Accuracy: 0.726, Loss: 0.462 Epoch 3 Batch 31/269 - Train Accuracy: 0.724, Validation Accuracy: 0.729, Loss: 0.471 Epoch 3 Batch 32/269 - Train Accuracy: 0.715, Validation Accuracy: 0.732, Loss: 0.462 Epoch 3 Batch 33/269 - Train Accuracy: 0.744, Validation Accuracy: 0.734, Loss: 0.453 Epoch 3 Batch 34/269 - Train Accuracy: 0.737, Validation Accuracy: 0.736, Loss: 0.460 Epoch 3 Batch 35/269 - Train Accuracy: 0.722, Validation Accuracy: 0.739, Loss: 0.480 Epoch 3 Batch 36/269 - Train Accuracy: 0.713, Validation Accuracy: 0.732, Loss: 0.465 Epoch 3 Batch 37/269 - Train Accuracy: 0.730, Validation Accuracy: 0.739, Loss: 0.467 Epoch 3 Batch 38/269 - Train Accuracy: 0.718, Validation Accuracy: 0.735, Loss: 0.462 Epoch 3 Batch 39/269 - Train Accuracy: 0.731, Validation Accuracy: 0.740, Loss: 0.461 Epoch 3 Batch 40/269 - Train Accuracy: 0.712, Validation Accuracy: 0.731, Loss: 0.481 Epoch 3 Batch 41/269 - Train Accuracy: 0.706, Validation Accuracy: 0.737, Loss: 0.474 Epoch 3 Batch 42/269 - Train Accuracy: 0.746, Validation Accuracy: 0.737, Loss: 0.436 Epoch 3 Batch 43/269 - Train Accuracy: 0.720, Validation Accuracy: 0.731, Loss: 0.479 Epoch 3 Batch 44/269 - Train Accuracy: 0.715, Validation Accuracy: 0.732, Loss: 0.466 Epoch 3 Batch 45/269 - Train Accuracy: 0.729, Validation Accuracy: 0.741, Loss: 0.479 Epoch 3 Batch 46/269 - Train Accuracy: 0.709, Validation Accuracy: 0.737, Loss: 0.479 Epoch 3 Batch 47/269 - Train Accuracy: 0.722, Validation Accuracy: 0.728, Loss: 0.433 Epoch 3 Batch 48/269 - Train Accuracy: 0.741, Validation Accuracy: 0.736, Loss: 0.449 Epoch 3 Batch 49/269 - Train Accuracy: 0.736, Validation Accuracy: 0.744, Loss: 0.474 Epoch 3 Batch 50/269 - Train Accuracy: 0.687, Validation Accuracy: 0.734, Loss: 0.485 Epoch 3 Batch 51/269 - Train Accuracy: 0.702, Validation 
Accuracy: 0.730, Loss: 0.457 Epoch 3 Batch 52/269 - Train Accuracy: 0.733, Validation Accuracy: 0.750, Loss: 0.437 Epoch 3 Batch 53/269 - Train Accuracy: 0.724, Validation Accuracy: 0.745, Loss: 0.481 Epoch 3 Batch 54/269 - Train Accuracy: 0.724, Validation Accuracy: 0.733, Loss: 0.473 Epoch 3 Batch 55/269 - Train Accuracy: 0.733, Validation Accuracy: 0.722, Loss: 0.447 Epoch 3 Batch 56/269 - Train Accuracy: 0.730, Validation Accuracy: 0.743, Loss: 0.459 Epoch 3 Batch 57/269 - Train Accuracy: 0.734, Validation Accuracy: 0.744, Loss: 0.468 Epoch 3 Batch 58/269 - Train Accuracy: 0.740, Validation Accuracy: 0.736, Loss: 0.449 Epoch 3 Batch 59/269 - Train Accuracy: 0.730, Validation Accuracy: 0.733, Loss: 0.432 Epoch 3 Batch 60/269 - Train Accuracy: 0.744, Validation Accuracy: 0.749, Loss: 0.437 Epoch 3 Batch 61/269 - Train Accuracy: 0.738, Validation Accuracy: 0.744, Loss: 0.423 Epoch 3 Batch 62/269 - Train Accuracy: 0.730, Validation Accuracy: 0.718, Loss: 0.435 Epoch 3 Batch 63/269 - Train Accuracy: 0.729, Validation Accuracy: 0.739, Loss: 0.463 Epoch 3 Batch 64/269 - Train Accuracy: 0.745, Validation Accuracy: 0.745, Loss: 0.452 Epoch 3 Batch 65/269 - Train Accuracy: 0.734, Validation Accuracy: 0.750, Loss: 0.454 Epoch 3 Batch 66/269 - Train Accuracy: 0.728, Validation Accuracy: 0.736, Loss: 0.433 Epoch 3 Batch 67/269 - Train Accuracy: 0.718, Validation Accuracy: 0.735, Loss: 0.464 Epoch 3 Batch 68/269 - Train Accuracy: 0.727, Validation Accuracy: 0.749, Loss: 0.463 Epoch 3 Batch 69/269 - Train Accuracy: 0.700, Validation Accuracy: 0.739, Loss: 0.499 Epoch 3 Batch 70/269 - Train Accuracy: 0.728, Validation Accuracy: 0.725, Loss: 0.451 Epoch 3 Batch 71/269 - Train Accuracy: 0.728, Validation Accuracy: 0.741, Loss: 0.473 Epoch 3 Batch 72/269 - Train Accuracy: 0.723, Validation Accuracy: 0.742, Loss: 0.446 Epoch 3 Batch 73/269 - Train Accuracy: 0.729, Validation Accuracy: 0.741, Loss: 0.465 Epoch 3 Batch 74/269 - Train Accuracy: 0.725, Validation Accuracy: 0.734, 
Loss: 0.457 Epoch 3 Batch 75/269 - Train Accuracy: 0.739, Validation Accuracy: 0.737, Loss: 0.448 Epoch 3 Batch 76/269 - Train Accuracy: 0.723, Validation Accuracy: 0.748, Loss: 0.454 Epoch 3 Batch 77/269 - Train Accuracy: 0.738, Validation Accuracy: 0.745, Loss: 0.443 Epoch 3 Batch 78/269 - Train Accuracy: 0.732, Validation Accuracy: 0.729, Loss: 0.436 Epoch 3 Batch 79/269 - Train Accuracy: 0.728, Validation Accuracy: 0.737, Loss: 0.444 Epoch 3 Batch 80/269 - Train Accuracy: 0.743, Validation Accuracy: 0.751, Loss: 0.440 Epoch 3 Batch 81/269 - Train Accuracy: 0.731, Validation Accuracy: 0.744, Loss: 0.454 Epoch 3 Batch 82/269 - Train Accuracy: 0.747, Validation Accuracy: 0.733, Loss: 0.424 Epoch 3 Batch 83/269 - Train Accuracy: 0.735, Validation Accuracy: 0.738, Loss: 0.447 Epoch 3 Batch 84/269 - Train Accuracy: 0.735, Validation Accuracy: 0.740, Loss: 0.437 Epoch 3 Batch 85/269 - Train Accuracy: 0.719, Validation Accuracy: 0.742, Loss: 0.446 Epoch 3 Batch 86/269 - Train Accuracy: 0.741, Validation Accuracy: 0.743, Loss: 0.438 Epoch 3 Batch 87/269 - Train Accuracy: 0.735, Validation Accuracy: 0.743, Loss: 0.473 Epoch 3 Batch 88/269 - Train Accuracy: 0.729, Validation Accuracy: 0.747, Loss: 0.439 Epoch 3 Batch 89/269 - Train Accuracy: 0.761, Validation Accuracy: 0.745, Loss: 0.445 Epoch 3 Batch 90/269 - Train Accuracy: 0.709, Validation Accuracy: 0.742, Loss: 0.468 Epoch 3 Batch 91/269 - Train Accuracy: 0.753, Validation Accuracy: 0.751, Loss: 0.434 Epoch 3 Batch 92/269 - Train Accuracy: 0.738, Validation Accuracy: 0.742, Loss: 0.432 Epoch 3 Batch 93/269 - Train Accuracy: 0.730, Validation Accuracy: 0.737, Loss: 0.427 Epoch 3 Batch 94/269 - Train Accuracy: 0.742, Validation Accuracy: 0.746, Loss: 0.458 Epoch 3 Batch 95/269 - Train Accuracy: 0.739, Validation Accuracy: 0.736, Loss: 0.440 Epoch 3 Batch 96/269 - Train Accuracy: 0.729, Validation Accuracy: 0.750, Loss: 0.449 Epoch 3 Batch 97/269 - Train Accuracy: 0.721, Validation Accuracy: 0.737, Loss: 0.436 Epoch 3 
Batch 98/269 - Train Accuracy: 0.749, Validation Accuracy: 0.756, Loss: 0.449 Epoch 3 Batch 99/269 - Train Accuracy: 0.727, Validation Accuracy: 0.741, Loss: 0.454 Epoch 3 Batch 100/269 - Train Accuracy: 0.750, Validation Accuracy: 0.742, Loss: 0.433 Epoch 3 Batch 101/269 - Train Accuracy: 0.714, Validation Accuracy: 0.742, Loss: 0.465 Epoch 3 Batch 102/269 - Train Accuracy: 0.731, Validation Accuracy: 0.744, Loss: 0.441 Epoch 3 Batch 103/269 - Train Accuracy: 0.746, Validation Accuracy: 0.746, Loss: 0.435 Epoch 3 Batch 104/269 - Train Accuracy: 0.739, Validation Accuracy: 0.753, Loss: 0.432 Epoch 3 Batch 105/269 - Train Accuracy: 0.719, Validation Accuracy: 0.751, Loss: 0.444 Epoch 3 Batch 106/269 - Train Accuracy: 0.722, Validation Accuracy: 0.746, Loss: 0.422 Epoch 3 Batch 107/269 - Train Accuracy: 0.725, Validation Accuracy: 0.756, Loss: 0.459 Epoch 3 Batch 108/269 - Train Accuracy: 0.728, Validation Accuracy: 0.745, Loss: 0.435 Epoch 3 Batch 109/269 - Train Accuracy: 0.720, Validation Accuracy: 0.745, Loss: 0.442 Epoch 3 Batch 110/269 - Train Accuracy: 0.755, Validation Accuracy: 0.754, Loss: 0.419 Epoch 3 Batch 111/269 - Train Accuracy: 0.735, Validation Accuracy: 0.752, Loss: 0.463 Epoch 3 Batch 112/269 - Train Accuracy: 0.735, Validation Accuracy: 0.739, Loss: 0.432 Epoch 3 Batch 113/269 - Train Accuracy: 0.748, Validation Accuracy: 0.743, Loss: 0.421 Epoch 3 Batch 114/269 - Train Accuracy: 0.752, Validation Accuracy: 0.747, Loss: 0.429 Epoch 3 Batch 115/269 - Train Accuracy: 0.729, Validation Accuracy: 0.749, Loss: 0.452 Epoch 3 Batch 116/269 - Train Accuracy: 0.738, Validation Accuracy: 0.748, Loss: 0.438 Epoch 3 Batch 117/269 - Train Accuracy: 0.755, Validation Accuracy: 0.762, Loss: 0.429 Epoch 3 Batch 118/269 - Train Accuracy: 0.763, Validation Accuracy: 0.761, Loss: 0.424 Epoch 3 Batch 119/269 - Train Accuracy: 0.728, Validation Accuracy: 0.749, Loss: 0.454 Epoch 3 Batch 120/269 - Train Accuracy: 0.756, Validation Accuracy: 0.753, Loss: 0.440 Epoch 3 
Batch 121/269 - Train Accuracy: 0.751, Validation Accuracy: 0.763, Loss: 0.426 Epoch 3 Batch 122/269 - Train Accuracy: 0.757, Validation Accuracy: 0.764, Loss: 0.419 Epoch 3 Batch 123/269 - Train Accuracy: 0.747, Validation Accuracy: 0.754, Loss: 0.444 Epoch 3 Batch 124/269 - Train Accuracy: 0.748, Validation Accuracy: 0.763, Loss: 0.421 Epoch 3 Batch 125/269 - Train Accuracy: 0.760, Validation Accuracy: 0.756, Loss: 0.411 Epoch 3 Batch 126/269 - Train Accuracy: 0.729, Validation Accuracy: 0.759, Loss: 0.418 Epoch 3 Batch 127/269 - Train Accuracy: 0.745, Validation Accuracy: 0.752, Loss: 0.441 Epoch 3 Batch 128/269 - Train Accuracy: 0.760, Validation Accuracy: 0.761, Loss: 0.425 Epoch 3 Batch 129/269 - Train Accuracy: 0.754, Validation Accuracy: 0.762, Loss: 0.419 Epoch 3 Batch 130/269 - Train Accuracy: 0.741, Validation Accuracy: 0.765, Loss: 0.437 Epoch 3 Batch 131/269 - Train Accuracy: 0.735, Validation Accuracy: 0.749, Loss: 0.435 Epoch 3 Batch 132/269 - Train Accuracy: 0.742, Validation Accuracy: 0.758, Loss: 0.433 Epoch 3 Batch 133/269 - Train Accuracy: 0.756, Validation Accuracy: 0.763, Loss: 0.408 Epoch 3 Batch 134/269 - Train Accuracy: 0.726, Validation Accuracy: 0.763, Loss: 0.436 Epoch 3 Batch 135/269 - Train Accuracy: 0.730, Validation Accuracy: 0.769, Loss: 0.447 Epoch 3 Batch 136/269 - Train Accuracy: 0.730, Validation Accuracy: 0.768, Loss: 0.448 Epoch 3 Batch 137/269 - Train Accuracy: 0.729, Validation Accuracy: 0.764, Loss: 0.450 Epoch 3 Batch 138/269 - Train Accuracy: 0.742, Validation Accuracy: 0.748, Loss: 0.431 Epoch 3 Batch 139/269 - Train Accuracy: 0.747, Validation Accuracy: 0.760, Loss: 0.412 Epoch 3 Batch 140/269 - Train Accuracy: 0.752, Validation Accuracy: 0.767, Loss: 0.429 Epoch 3 Batch 141/269 - Train Accuracy: 0.754, Validation Accuracy: 0.767, Loss: 0.433 Epoch 3 Batch 142/269 - Train Accuracy: 0.730, Validation Accuracy: 0.748, Loss: 0.410 Epoch 3 Batch 143/269 - Train Accuracy: 0.741, Validation Accuracy: 0.744, Loss: 0.417 Epoch 
3 Batch 144/269 - Train Accuracy: 0.771, Validation Accuracy: 0.764, Loss: 0.402
Epoch 3 Batch 145/269 - Train Accuracy: 0.777, Validation Accuracy: 0.759, Loss: 0.412
[... per-batch log truncated ...]
Epoch 3 Batch 267/269 - Train Accuracy: 0.783, Validation Accuracy: 0.796, Loss: 0.360
Epoch 4 Batch 0/269 - Train Accuracy: 0.792, Validation Accuracy: 0.798, Loss: 0.380
[... per-batch log truncated ...]
Epoch 4 Batch 267/269 - Train Accuracy: 0.837, Validation Accuracy: 0.837, Loss: 0.256
Epoch 5 Batch 0/269 - Train Accuracy: 0.859, Validation Accuracy: 0.863, Loss: 0.271
[... per-batch log truncated ...]
Epoch 5
Batch 92/269 - Train Accuracy: 0.873, Validation Accuracy: 0.866, Loss: 0.213 Epoch 5 Batch 93/269 - Train Accuracy: 0.852, Validation Accuracy: 0.859, Loss: 0.214 Epoch 5 Batch 94/269 - Train Accuracy: 0.866, Validation Accuracy: 0.865, Loss: 0.238 Epoch 5 Batch 95/269 - Train Accuracy: 0.854, Validation Accuracy: 0.843, Loss: 0.218 Epoch 5 Batch 96/269 - Train Accuracy: 0.837, Validation Accuracy: 0.864, Loss: 0.222 Epoch 5 Batch 97/269 - Train Accuracy: 0.861, Validation Accuracy: 0.871, Loss: 0.218 Epoch 5 Batch 98/269 - Train Accuracy: 0.866, Validation Accuracy: 0.868, Loss: 0.221 Epoch 5 Batch 99/269 - Train Accuracy: 0.860, Validation Accuracy: 0.869, Loss: 0.230 Epoch 5 Batch 100/269 - Train Accuracy: 0.871, Validation Accuracy: 0.869, Loss: 0.212 Epoch 5 Batch 101/269 - Train Accuracy: 0.845, Validation Accuracy: 0.871, Loss: 0.234 Epoch 5 Batch 102/269 - Train Accuracy: 0.864, Validation Accuracy: 0.875, Loss: 0.223 Epoch 5 Batch 103/269 - Train Accuracy: 0.858, Validation Accuracy: 0.865, Loss: 0.219 Epoch 5 Batch 104/269 - Train Accuracy: 0.860, Validation Accuracy: 0.857, Loss: 0.214 Epoch 5 Batch 105/269 - Train Accuracy: 0.858, Validation Accuracy: 0.868, Loss: 0.219 Epoch 5 Batch 106/269 - Train Accuracy: 0.864, Validation Accuracy: 0.868, Loss: 0.213 Epoch 5 Batch 107/269 - Train Accuracy: 0.864, Validation Accuracy: 0.864, Loss: 0.228 Epoch 5 Batch 108/269 - Train Accuracy: 0.868, Validation Accuracy: 0.869, Loss: 0.223 Epoch 5 Batch 109/269 - Train Accuracy: 0.842, Validation Accuracy: 0.868, Loss: 0.225 Epoch 5 Batch 110/269 - Train Accuracy: 0.847, Validation Accuracy: 0.864, Loss: 0.215 Epoch 5 Batch 111/269 - Train Accuracy: 0.862, Validation Accuracy: 0.862, Loss: 0.233 Epoch 5 Batch 112/269 - Train Accuracy: 0.863, Validation Accuracy: 0.873, Loss: 0.218 Epoch 5 Batch 113/269 - Train Accuracy: 0.854, Validation Accuracy: 0.867, Loss: 0.207 Epoch 5 Batch 114/269 - Train Accuracy: 0.868, Validation Accuracy: 0.865, Loss: 0.213 Epoch 5 Batch 
115/269 - Train Accuracy: 0.851, Validation Accuracy: 0.859, Loss: 0.210 Epoch 5 Batch 116/269 - Train Accuracy: 0.869, Validation Accuracy: 0.872, Loss: 0.223 Epoch 5 Batch 117/269 - Train Accuracy: 0.855, Validation Accuracy: 0.875, Loss: 0.210 Epoch 5 Batch 118/269 - Train Accuracy: 0.877, Validation Accuracy: 0.871, Loss: 0.208 Epoch 5 Batch 119/269 - Train Accuracy: 0.845, Validation Accuracy: 0.863, Loss: 0.220 Epoch 5 Batch 120/269 - Train Accuracy: 0.862, Validation Accuracy: 0.873, Loss: 0.218 Epoch 5 Batch 121/269 - Train Accuracy: 0.870, Validation Accuracy: 0.873, Loss: 0.201 Epoch 5 Batch 122/269 - Train Accuracy: 0.859, Validation Accuracy: 0.868, Loss: 0.203 Epoch 5 Batch 123/269 - Train Accuracy: 0.845, Validation Accuracy: 0.854, Loss: 0.219 Epoch 5 Batch 124/269 - Train Accuracy: 0.863, Validation Accuracy: 0.860, Loss: 0.204 Epoch 5 Batch 125/269 - Train Accuracy: 0.887, Validation Accuracy: 0.874, Loss: 0.202 Epoch 5 Batch 126/269 - Train Accuracy: 0.859, Validation Accuracy: 0.870, Loss: 0.217 Epoch 5 Batch 127/269 - Train Accuracy: 0.856, Validation Accuracy: 0.856, Loss: 0.213 Epoch 5 Batch 128/269 - Train Accuracy: 0.880, Validation Accuracy: 0.874, Loss: 0.211 Epoch 5 Batch 129/269 - Train Accuracy: 0.863, Validation Accuracy: 0.876, Loss: 0.208 Epoch 5 Batch 130/269 - Train Accuracy: 0.859, Validation Accuracy: 0.864, Loss: 0.216 Epoch 5 Batch 131/269 - Train Accuracy: 0.839, Validation Accuracy: 0.853, Loss: 0.218 Epoch 5 Batch 132/269 - Train Accuracy: 0.860, Validation Accuracy: 0.875, Loss: 0.212 Epoch 5 Batch 133/269 - Train Accuracy: 0.870, Validation Accuracy: 0.875, Loss: 0.198 Epoch 5 Batch 134/269 - Train Accuracy: 0.844, Validation Accuracy: 0.869, Loss: 0.216 Epoch 5 Batch 135/269 - Train Accuracy: 0.866, Validation Accuracy: 0.877, Loss: 0.216 Epoch 5 Batch 136/269 - Train Accuracy: 0.853, Validation Accuracy: 0.878, Loss: 0.223 Epoch 5 Batch 137/269 - Train Accuracy: 0.857, Validation Accuracy: 0.870, Loss: 0.221 Epoch 5 
Batch 138/269 - Train Accuracy: 0.862, Validation Accuracy: 0.882, Loss: 0.203 Epoch 5 Batch 139/269 - Train Accuracy: 0.868, Validation Accuracy: 0.877, Loss: 0.197 Epoch 5 Batch 140/269 - Train Accuracy: 0.848, Validation Accuracy: 0.857, Loss: 0.220 Epoch 5 Batch 141/269 - Train Accuracy: 0.855, Validation Accuracy: 0.876, Loss: 0.215 Epoch 5 Batch 142/269 - Train Accuracy: 0.857, Validation Accuracy: 0.871, Loss: 0.196 Epoch 5 Batch 143/269 - Train Accuracy: 0.869, Validation Accuracy: 0.871, Loss: 0.198 Epoch 5 Batch 144/269 - Train Accuracy: 0.884, Validation Accuracy: 0.866, Loss: 0.187 Epoch 5 Batch 145/269 - Train Accuracy: 0.879, Validation Accuracy: 0.869, Loss: 0.195 Epoch 5 Batch 146/269 - Train Accuracy: 0.870, Validation Accuracy: 0.868, Loss: 0.195 Epoch 5 Batch 147/269 - Train Accuracy: 0.864, Validation Accuracy: 0.861, Loss: 0.195 Epoch 5 Batch 148/269 - Train Accuracy: 0.865, Validation Accuracy: 0.877, Loss: 0.208 Epoch 5 Batch 149/269 - Train Accuracy: 0.858, Validation Accuracy: 0.876, Loss: 0.214 Epoch 5 Batch 150/269 - Train Accuracy: 0.859, Validation Accuracy: 0.869, Loss: 0.202 Epoch 5 Batch 151/269 - Train Accuracy: 0.870, Validation Accuracy: 0.880, Loss: 0.196 Epoch 5 Batch 152/269 - Train Accuracy: 0.876, Validation Accuracy: 0.882, Loss: 0.200 Epoch 5 Batch 153/269 - Train Accuracy: 0.871, Validation Accuracy: 0.874, Loss: 0.193 Epoch 5 Batch 154/269 - Train Accuracy: 0.888, Validation Accuracy: 0.869, Loss: 0.197 Epoch 5 Batch 155/269 - Train Accuracy: 0.865, Validation Accuracy: 0.878, Loss: 0.195 Epoch 5 Batch 156/269 - Train Accuracy: 0.858, Validation Accuracy: 0.875, Loss: 0.213 Epoch 5 Batch 157/269 - Train Accuracy: 0.857, Validation Accuracy: 0.878, Loss: 0.198 Epoch 5 Batch 158/269 - Train Accuracy: 0.869, Validation Accuracy: 0.873, Loss: 0.191 Epoch 5 Batch 159/269 - Train Accuracy: 0.856, Validation Accuracy: 0.873, Loss: 0.200 Epoch 5 Batch 160/269 - Train Accuracy: 0.874, Validation Accuracy: 0.881, Loss: 0.199 Epoch 
5 Batch 161/269 - Train Accuracy: 0.870, Validation Accuracy: 0.878, Loss: 0.195 Epoch 5 Batch 162/269 - Train Accuracy: 0.889, Validation Accuracy: 0.882, Loss: 0.195 Epoch 5 Batch 163/269 - Train Accuracy: 0.872, Validation Accuracy: 0.875, Loss: 0.198 Epoch 5 Batch 164/269 - Train Accuracy: 0.888, Validation Accuracy: 0.876, Loss: 0.192 Epoch 5 Batch 165/269 - Train Accuracy: 0.877, Validation Accuracy: 0.876, Loss: 0.197 Epoch 5 Batch 166/269 - Train Accuracy: 0.869, Validation Accuracy: 0.876, Loss: 0.190 Epoch 5 Batch 167/269 - Train Accuracy: 0.882, Validation Accuracy: 0.869, Loss: 0.193 Epoch 5 Batch 168/269 - Train Accuracy: 0.864, Validation Accuracy: 0.871, Loss: 0.205 Epoch 5 Batch 169/269 - Train Accuracy: 0.853, Validation Accuracy: 0.865, Loss: 0.201 Epoch 5 Batch 170/269 - Train Accuracy: 0.875, Validation Accuracy: 0.874, Loss: 0.193 Epoch 5 Batch 171/269 - Train Accuracy: 0.875, Validation Accuracy: 0.876, Loss: 0.198 Epoch 5 Batch 172/269 - Train Accuracy: 0.852, Validation Accuracy: 0.871, Loss: 0.205 Epoch 5 Batch 173/269 - Train Accuracy: 0.873, Validation Accuracy: 0.869, Loss: 0.189 Epoch 5 Batch 174/269 - Train Accuracy: 0.872, Validation Accuracy: 0.885, Loss: 0.194 Epoch 5 Batch 175/269 - Train Accuracy: 0.854, Validation Accuracy: 0.874, Loss: 0.213 Epoch 5 Batch 176/269 - Train Accuracy: 0.855, Validation Accuracy: 0.879, Loss: 0.211 Epoch 5 Batch 177/269 - Train Accuracy: 0.871, Validation Accuracy: 0.872, Loss: 0.184 Epoch 5 Batch 178/269 - Train Accuracy: 0.866, Validation Accuracy: 0.869, Loss: 0.198 Epoch 5 Batch 179/269 - Train Accuracy: 0.859, Validation Accuracy: 0.879, Loss: 0.193 Epoch 5 Batch 180/269 - Train Accuracy: 0.882, Validation Accuracy: 0.882, Loss: 0.187 Epoch 5 Batch 181/269 - Train Accuracy: 0.860, Validation Accuracy: 0.865, Loss: 0.201 Epoch 5 Batch 182/269 - Train Accuracy: 0.878, Validation Accuracy: 0.873, Loss: 0.192 Epoch 5 Batch 183/269 - Train Accuracy: 0.888, Validation Accuracy: 0.877, Loss: 0.163 
Epoch 5 Batch 184/269 - Train Accuracy: 0.878, Validation Accuracy: 0.880, Loss: 0.202 Epoch 5 Batch 185/269 - Train Accuracy: 0.874, Validation Accuracy: 0.874, Loss: 0.188 Epoch 5 Batch 186/269 - Train Accuracy: 0.865, Validation Accuracy: 0.877, Loss: 0.192 Epoch 5 Batch 187/269 - Train Accuracy: 0.868, Validation Accuracy: 0.882, Loss: 0.188 Epoch 5 Batch 188/269 - Train Accuracy: 0.894, Validation Accuracy: 0.884, Loss: 0.188 Epoch 5 Batch 189/269 - Train Accuracy: 0.882, Validation Accuracy: 0.868, Loss: 0.178 Epoch 5 Batch 190/269 - Train Accuracy: 0.875, Validation Accuracy: 0.869, Loss: 0.187 Epoch 5 Batch 191/269 - Train Accuracy: 0.870, Validation Accuracy: 0.879, Loss: 0.183 Epoch 5 Batch 192/269 - Train Accuracy: 0.868, Validation Accuracy: 0.884, Loss: 0.196 Epoch 5 Batch 193/269 - Train Accuracy: 0.878, Validation Accuracy: 0.880, Loss: 0.184 Epoch 5 Batch 194/269 - Train Accuracy: 0.866, Validation Accuracy: 0.875, Loss: 0.189 Epoch 5 Batch 195/269 - Train Accuracy: 0.870, Validation Accuracy: 0.879, Loss: 0.187 Epoch 5 Batch 196/269 - Train Accuracy: 0.873, Validation Accuracy: 0.877, Loss: 0.180 Epoch 5 Batch 197/269 - Train Accuracy: 0.868, Validation Accuracy: 0.874, Loss: 0.197 Epoch 5 Batch 198/269 - Train Accuracy: 0.872, Validation Accuracy: 0.874, Loss: 0.193 Epoch 5 Batch 199/269 - Train Accuracy: 0.867, Validation Accuracy: 0.876, Loss: 0.195 Epoch 5 Batch 200/269 - Train Accuracy: 0.875, Validation Accuracy: 0.867, Loss: 0.194 Epoch 5 Batch 201/269 - Train Accuracy: 0.871, Validation Accuracy: 0.881, Loss: 0.187 Epoch 5 Batch 202/269 - Train Accuracy: 0.863, Validation Accuracy: 0.874, Loss: 0.183 Epoch 5 Batch 203/269 - Train Accuracy: 0.865, Validation Accuracy: 0.876, Loss: 0.205 Epoch 5 Batch 204/269 - Train Accuracy: 0.874, Validation Accuracy: 0.878, Loss: 0.201 Epoch 5 Batch 205/269 - Train Accuracy: 0.877, Validation Accuracy: 0.871, Loss: 0.186 Epoch 5 Batch 206/269 - Train Accuracy: 0.854, Validation Accuracy: 0.880, Loss: 
0.209 Epoch 5 Batch 207/269 - Train Accuracy: 0.871, Validation Accuracy: 0.883, Loss: 0.188 Epoch 5 Batch 208/269 - Train Accuracy: 0.874, Validation Accuracy: 0.887, Loss: 0.194 Epoch 5 Batch 209/269 - Train Accuracy: 0.886, Validation Accuracy: 0.885, Loss: 0.185 Epoch 5 Batch 210/269 - Train Accuracy: 0.877, Validation Accuracy: 0.886, Loss: 0.177 Epoch 5 Batch 211/269 - Train Accuracy: 0.872, Validation Accuracy: 0.881, Loss: 0.189 Epoch 5 Batch 212/269 - Train Accuracy: 0.883, Validation Accuracy: 0.879, Loss: 0.191 Epoch 5 Batch 213/269 - Train Accuracy: 0.871, Validation Accuracy: 0.883, Loss: 0.184 Epoch 5 Batch 214/269 - Train Accuracy: 0.871, Validation Accuracy: 0.887, Loss: 0.192 Epoch 5 Batch 215/269 - Train Accuracy: 0.896, Validation Accuracy: 0.896, Loss: 0.172 Epoch 5 Batch 216/269 - Train Accuracy: 0.845, Validation Accuracy: 0.892, Loss: 0.203 Epoch 5 Batch 217/269 - Train Accuracy: 0.868, Validation Accuracy: 0.886, Loss: 0.192 Epoch 5 Batch 218/269 - Train Accuracy: 0.893, Validation Accuracy: 0.883, Loss: 0.187 Epoch 5 Batch 219/269 - Train Accuracy: 0.884, Validation Accuracy: 0.888, Loss: 0.187 Epoch 5 Batch 220/269 - Train Accuracy: 0.893, Validation Accuracy: 0.885, Loss: 0.172 Epoch 5 Batch 221/269 - Train Accuracy: 0.883, Validation Accuracy: 0.883, Loss: 0.187 Epoch 5 Batch 222/269 - Train Accuracy: 0.900, Validation Accuracy: 0.887, Loss: 0.173 Epoch 5 Batch 223/269 - Train Accuracy: 0.878, Validation Accuracy: 0.889, Loss: 0.177 Epoch 5 Batch 224/269 - Train Accuracy: 0.878, Validation Accuracy: 0.890, Loss: 0.202 Epoch 5 Batch 225/269 - Train Accuracy: 0.876, Validation Accuracy: 0.888, Loss: 0.189 Epoch 5 Batch 226/269 - Train Accuracy: 0.889, Validation Accuracy: 0.881, Loss: 0.183 Epoch 5 Batch 227/269 - Train Accuracy: 0.891, Validation Accuracy: 0.884, Loss: 0.176 Epoch 5 Batch 228/269 - Train Accuracy: 0.877, Validation Accuracy: 0.884, Loss: 0.182 Epoch 5 Batch 229/269 - Train Accuracy: 0.871, Validation Accuracy: 0.883, 
Loss: 0.181 Epoch 5 Batch 230/269 - Train Accuracy: 0.883, Validation Accuracy: 0.887, Loss: 0.173 Epoch 5 Batch 231/269 - Train Accuracy: 0.866, Validation Accuracy: 0.881, Loss: 0.185 Epoch 5 Batch 232/269 - Train Accuracy: 0.869, Validation Accuracy: 0.884, Loss: 0.180 Epoch 5 Batch 233/269 - Train Accuracy: 0.897, Validation Accuracy: 0.889, Loss: 0.181 Epoch 5 Batch 234/269 - Train Accuracy: 0.878, Validation Accuracy: 0.882, Loss: 0.171 Epoch 5 Batch 235/269 - Train Accuracy: 0.894, Validation Accuracy: 0.886, Loss: 0.172 Epoch 5 Batch 236/269 - Train Accuracy: 0.865, Validation Accuracy: 0.880, Loss: 0.172 Epoch 5 Batch 237/269 - Train Accuracy: 0.878, Validation Accuracy: 0.884, Loss: 0.178 Epoch 5 Batch 238/269 - Train Accuracy: 0.884, Validation Accuracy: 0.889, Loss: 0.185 Epoch 5 Batch 239/269 - Train Accuracy: 0.875, Validation Accuracy: 0.883, Loss: 0.173 Epoch 5 Batch 240/269 - Train Accuracy: 0.884, Validation Accuracy: 0.879, Loss: 0.165 Epoch 5 Batch 241/269 - Train Accuracy: 0.863, Validation Accuracy: 0.885, Loss: 0.185 Epoch 5 Batch 242/269 - Train Accuracy: 0.883, Validation Accuracy: 0.890, Loss: 0.166 Epoch 5 Batch 243/269 - Train Accuracy: 0.896, Validation Accuracy: 0.886, Loss: 0.160 Epoch 5 Batch 244/269 - Train Accuracy: 0.875, Validation Accuracy: 0.891, Loss: 0.172 Epoch 5 Batch 245/269 - Train Accuracy: 0.878, Validation Accuracy: 0.882, Loss: 0.177 Epoch 5 Batch 246/269 - Train Accuracy: 0.851, Validation Accuracy: 0.875, Loss: 0.171 Epoch 5 Batch 247/269 - Train Accuracy: 0.881, Validation Accuracy: 0.880, Loss: 0.173 Epoch 5 Batch 248/269 - Train Accuracy: 0.895, Validation Accuracy: 0.887, Loss: 0.169 Epoch 5 Batch 249/269 - Train Accuracy: 0.893, Validation Accuracy: 0.880, Loss: 0.159 Epoch 5 Batch 250/269 - Train Accuracy: 0.891, Validation Accuracy: 0.881, Loss: 0.172 Epoch 5 Batch 251/269 - Train Accuracy: 0.901, Validation Accuracy: 0.879, Loss: 0.166 Epoch 5 Batch 252/269 - Train Accuracy: 0.887, Validation Accuracy: 
0.889, Loss: 0.163 Epoch 5 Batch 253/269 - Train Accuracy: 0.876, Validation Accuracy: 0.886, Loss: 0.177 Epoch 5 Batch 254/269 - Train Accuracy: 0.893, Validation Accuracy: 0.893, Loss: 0.167 Epoch 5 Batch 255/269 - Train Accuracy: 0.891, Validation Accuracy: 0.890, Loss: 0.166 Epoch 5 Batch 256/269 - Train Accuracy: 0.856, Validation Accuracy: 0.885, Loss: 0.172 Epoch 5 Batch 257/269 - Train Accuracy: 0.872, Validation Accuracy: 0.886, Loss: 0.176 Epoch 5 Batch 258/269 - Train Accuracy: 0.878, Validation Accuracy: 0.892, Loss: 0.170 Epoch 5 Batch 259/269 - Train Accuracy: 0.881, Validation Accuracy: 0.890, Loss: 0.169 Epoch 5 Batch 260/269 - Train Accuracy: 0.875, Validation Accuracy: 0.885, Loss: 0.182 Epoch 5 Batch 261/269 - Train Accuracy: 0.871, Validation Accuracy: 0.887, Loss: 0.173 Epoch 5 Batch 262/269 - Train Accuracy: 0.884, Validation Accuracy: 0.892, Loss: 0.172 Epoch 5 Batch 263/269 - Train Accuracy: 0.887, Validation Accuracy: 0.887, Loss: 0.178 Epoch 5 Batch 264/269 - Train Accuracy: 0.855, Validation Accuracy: 0.893, Loss: 0.184 Epoch 5 Batch 265/269 - Train Accuracy: 0.891, Validation Accuracy: 0.888, Loss: 0.167 Epoch 5 Batch 266/269 - Train Accuracy: 0.881, Validation Accuracy: 0.883, Loss: 0.169 Epoch 5 Batch 267/269 - Train Accuracy: 0.880, Validation Accuracy: 0.891, Loss: 0.185 Epoch 6 Batch 0/269 - Train Accuracy: 0.890, Validation Accuracy: 0.890, Loss: 0.179 Epoch 6 Batch 1/269 - Train Accuracy: 0.878, Validation Accuracy: 0.895, Loss: 0.167 Epoch 6 Batch 2/269 - Train Accuracy: 0.876, Validation Accuracy: 0.890, Loss: 0.168 Epoch 6 Batch 3/269 - Train Accuracy: 0.893, Validation Accuracy: 0.890, Loss: 0.174 Epoch 6 Batch 4/269 - Train Accuracy: 0.874, Validation Accuracy: 0.885, Loss: 0.166 Epoch 6 Batch 5/269 - Train Accuracy: 0.870, Validation Accuracy: 0.890, Loss: 0.172 Epoch 6 Batch 6/269 - Train Accuracy: 0.903, Validation Accuracy: 0.885, Loss: 0.159 Epoch 6 Batch 7/269 - Train Accuracy: 0.875, Validation Accuracy: 0.878, Loss: 
0.162 Epoch 6 Batch 8/269 - Train Accuracy: 0.889, Validation Accuracy: 0.882, Loss: 0.176 Epoch 6 Batch 9/269 - Train Accuracy: 0.876, Validation Accuracy: 0.886, Loss: 0.173 Epoch 6 Batch 10/269 - Train Accuracy: 0.892, Validation Accuracy: 0.888, Loss: 0.161 Epoch 6 Batch 11/269 - Train Accuracy: 0.895, Validation Accuracy: 0.888, Loss: 0.179 Epoch 6 Batch 12/269 - Train Accuracy: 0.868, Validation Accuracy: 0.889, Loss: 0.176 Epoch 6 Batch 13/269 - Train Accuracy: 0.883, Validation Accuracy: 0.886, Loss: 0.150 Epoch 6 Batch 14/269 - Train Accuracy: 0.878, Validation Accuracy: 0.887, Loss: 0.165 Epoch 6 Batch 15/269 - Train Accuracy: 0.890, Validation Accuracy: 0.893, Loss: 0.144 Epoch 6 Batch 16/269 - Train Accuracy: 0.883, Validation Accuracy: 0.891, Loss: 0.170 Epoch 6 Batch 17/269 - Train Accuracy: 0.886, Validation Accuracy: 0.897, Loss: 0.158 Epoch 6 Batch 18/269 - Train Accuracy: 0.884, Validation Accuracy: 0.893, Loss: 0.163 Epoch 6 Batch 19/269 - Train Accuracy: 0.893, Validation Accuracy: 0.888, Loss: 0.148 Epoch 6 Batch 20/269 - Train Accuracy: 0.882, Validation Accuracy: 0.890, Loss: 0.160 Epoch 6 Batch 21/269 - Train Accuracy: 0.872, Validation Accuracy: 0.893, Loss: 0.182 Epoch 6 Batch 22/269 - Train Accuracy: 0.910, Validation Accuracy: 0.894, Loss: 0.158 Epoch 6 Batch 23/269 - Train Accuracy: 0.873, Validation Accuracy: 0.897, Loss: 0.172 Epoch 6 Batch 24/269 - Train Accuracy: 0.876, Validation Accuracy: 0.897, Loss: 0.166 Epoch 6 Batch 25/269 - Train Accuracy: 0.890, Validation Accuracy: 0.894, Loss: 0.172 Epoch 6 Batch 26/269 - Train Accuracy: 0.898, Validation Accuracy: 0.899, Loss: 0.150 Epoch 6 Batch 27/269 - Train Accuracy: 0.889, Validation Accuracy: 0.897, Loss: 0.158 Epoch 6 Batch 28/269 - Train Accuracy: 0.854, Validation Accuracy: 0.897, Loss: 0.174 Epoch 6 Batch 29/269 - Train Accuracy: 0.903, Validation Accuracy: 0.894, Loss: 0.166 Epoch 6 Batch 30/269 - Train Accuracy: 0.880, Validation Accuracy: 0.892, Loss: 0.158 Epoch 6 Batch 
31/269 - Train Accuracy: 0.901, Validation Accuracy: 0.898, Loss: 0.158 Epoch 6 Batch 32/269 - Train Accuracy: 0.885, Validation Accuracy: 0.894, Loss: 0.159 Epoch 6 Batch 33/269 - Train Accuracy: 0.894, Validation Accuracy: 0.892, Loss: 0.151 Epoch 6 Batch 34/269 - Train Accuracy: 0.891, Validation Accuracy: 0.893, Loss: 0.150 Epoch 6 Batch 35/269 - Train Accuracy: 0.885, Validation Accuracy: 0.893, Loss: 0.172 Epoch 6 Batch 36/269 - Train Accuracy: 0.870, Validation Accuracy: 0.893, Loss: 0.162 Epoch 6 Batch 37/269 - Train Accuracy: 0.889, Validation Accuracy: 0.892, Loss: 0.159 Epoch 6 Batch 38/269 - Train Accuracy: 0.889, Validation Accuracy: 0.896, Loss: 0.158 Epoch 6 Batch 39/269 - Train Accuracy: 0.871, Validation Accuracy: 0.891, Loss: 0.152 Epoch 6 Batch 40/269 - Train Accuracy: 0.879, Validation Accuracy: 0.892, Loss: 0.168 Epoch 6 Batch 41/269 - Train Accuracy: 0.874, Validation Accuracy: 0.896, Loss: 0.163 Epoch 6 Batch 42/269 - Train Accuracy: 0.904, Validation Accuracy: 0.899, Loss: 0.146 Epoch 6 Batch 43/269 - Train Accuracy: 0.875, Validation Accuracy: 0.891, Loss: 0.160 Epoch 6 Batch 44/269 - Train Accuracy: 0.882, Validation Accuracy: 0.892, Loss: 0.164 Epoch 6 Batch 45/269 - Train Accuracy: 0.890, Validation Accuracy: 0.894, Loss: 0.152 Epoch 6 Batch 46/269 - Train Accuracy: 0.893, Validation Accuracy: 0.893, Loss: 0.156 Epoch 6 Batch 47/269 - Train Accuracy: 0.897, Validation Accuracy: 0.897, Loss: 0.146 Epoch 6 Batch 48/269 - Train Accuracy: 0.903, Validation Accuracy: 0.900, Loss: 0.154 Epoch 6 Batch 49/269 - Train Accuracy: 0.890, Validation Accuracy: 0.895, Loss: 0.153 Epoch 6 Batch 50/269 - Train Accuracy: 0.871, Validation Accuracy: 0.899, Loss: 0.163 Epoch 6 Batch 51/269 - Train Accuracy: 0.895, Validation Accuracy: 0.892, Loss: 0.159 Epoch 6 Batch 52/269 - Train Accuracy: 0.872, Validation Accuracy: 0.891, Loss: 0.138 Epoch 6 Batch 53/269 - Train Accuracy: 0.888, Validation Accuracy: 0.906, Loss: 0.165 Epoch 6 Batch 54/269 - Train 
Accuracy: 0.887, Validation Accuracy: 0.899, Loss: 0.151 Epoch 6 Batch 55/269 - Train Accuracy: 0.912, Validation Accuracy: 0.901, Loss: 0.151 Epoch 6 Batch 56/269 - Train Accuracy: 0.889, Validation Accuracy: 0.896, Loss: 0.154 Epoch 6 Batch 57/269 - Train Accuracy: 0.888, Validation Accuracy: 0.899, Loss: 0.164 Epoch 6 Batch 58/269 - Train Accuracy: 0.893, Validation Accuracy: 0.897, Loss: 0.155 Epoch 6 Batch 59/269 - Train Accuracy: 0.898, Validation Accuracy: 0.895, Loss: 0.131 Epoch 6 Batch 60/269 - Train Accuracy: 0.893, Validation Accuracy: 0.889, Loss: 0.146 Epoch 6 Batch 61/269 - Train Accuracy: 0.906, Validation Accuracy: 0.899, Loss: 0.142 Epoch 6 Batch 62/269 - Train Accuracy: 0.898, Validation Accuracy: 0.901, Loss: 0.158 Epoch 6 Batch 63/269 - Train Accuracy: 0.881, Validation Accuracy: 0.896, Loss: 0.156 Epoch 6 Batch 64/269 - Train Accuracy: 0.892, Validation Accuracy: 0.894, Loss: 0.147 Epoch 6 Batch 65/269 - Train Accuracy: 0.871, Validation Accuracy: 0.902, Loss: 0.152 Epoch 6 Batch 66/269 - Train Accuracy: 0.878, Validation Accuracy: 0.903, Loss: 0.150 Epoch 6 Batch 67/269 - Train Accuracy: 0.893, Validation Accuracy: 0.901, Loss: 0.158 Epoch 6 Batch 68/269 - Train Accuracy: 0.867, Validation Accuracy: 0.899, Loss: 0.170 Epoch 6 Batch 69/269 - Train Accuracy: 0.869, Validation Accuracy: 0.892, Loss: 0.182 Epoch 6 Batch 70/269 - Train Accuracy: 0.901, Validation Accuracy: 0.895, Loss: 0.155 Epoch 6 Batch 71/269 - Train Accuracy: 0.876, Validation Accuracy: 0.882, Loss: 0.157 Epoch 6 Batch 72/269 - Train Accuracy: 0.869, Validation Accuracy: 0.895, Loss: 0.168 Epoch 6 Batch 73/269 - Train Accuracy: 0.877, Validation Accuracy: 0.891, Loss: 0.163 Epoch 6 Batch 74/269 - Train Accuracy: 0.889, Validation Accuracy: 0.888, Loss: 0.152 Epoch 6 Batch 75/269 - Train Accuracy: 0.896, Validation Accuracy: 0.893, Loss: 0.168 Epoch 6 Batch 76/269 - Train Accuracy: 0.875, Validation Accuracy: 0.891, Loss: 0.154 Epoch 6 Batch 77/269 - Train Accuracy: 0.908, 
Validation Accuracy: 0.892, Loss: 0.143 Epoch 6 Batch 78/269 - Train Accuracy: 0.897, Validation Accuracy: 0.894, Loss: 0.153 Epoch 6 Batch 79/269 - Train Accuracy: 0.883, Validation Accuracy: 0.896, Loss: 0.153 Epoch 6 Batch 80/269 - Train Accuracy: 0.893, Validation Accuracy: 0.896, Loss: 0.158 Epoch 6 Batch 81/269 - Train Accuracy: 0.876, Validation Accuracy: 0.887, Loss: 0.162 Epoch 6 Batch 82/269 - Train Accuracy: 0.905, Validation Accuracy: 0.890, Loss: 0.145 Epoch 6 Batch 83/269 - Train Accuracy: 0.885, Validation Accuracy: 0.888, Loss: 0.161 Epoch 6 Batch 84/269 - Train Accuracy: 0.900, Validation Accuracy: 0.894, Loss: 0.150 Epoch 6 Batch 85/269 - Train Accuracy: 0.890, Validation Accuracy: 0.900, Loss: 0.151 Epoch 6 Batch 86/269 - Train Accuracy: 0.891, Validation Accuracy: 0.891, Loss: 0.144 Epoch 6 Batch 87/269 - Train Accuracy: 0.883, Validation Accuracy: 0.889, Loss: 0.160 Epoch 6 Batch 88/269 - Train Accuracy: 0.885, Validation Accuracy: 0.896, Loss: 0.155 Epoch 6 Batch 89/269 - Train Accuracy: 0.889, Validation Accuracy: 0.886, Loss: 0.149 Epoch 6 Batch 90/269 - Train Accuracy: 0.883, Validation Accuracy: 0.890, Loss: 0.158 Epoch 6 Batch 91/269 - Train Accuracy: 0.888, Validation Accuracy: 0.890, Loss: 0.146 Epoch 6 Batch 92/269 - Train Accuracy: 0.911, Validation Accuracy: 0.900, Loss: 0.140 Epoch 6 Batch 93/269 - Train Accuracy: 0.900, Validation Accuracy: 0.893, Loss: 0.143 Epoch 6 Batch 94/269 - Train Accuracy: 0.893, Validation Accuracy: 0.891, Loss: 0.158 Epoch 6 Batch 95/269 - Train Accuracy: 0.897, Validation Accuracy: 0.891, Loss: 0.149 Epoch 6 Batch 96/269 - Train Accuracy: 0.881, Validation Accuracy: 0.892, Loss: 0.145 Epoch 6 Batch 97/269 - Train Accuracy: 0.895, Validation Accuracy: 0.887, Loss: 0.146 Epoch 6 Batch 98/269 - Train Accuracy: 0.885, Validation Accuracy: 0.884, Loss: 0.148 Epoch 6 Batch 99/269 - Train Accuracy: 0.886, Validation Accuracy: 0.890, Loss: 0.148 Epoch 6 Batch 100/269 - Train Accuracy: 0.900, Validation Accuracy: 
0.892, Loss: 0.143 Epoch 6 Batch 101/269 - Train Accuracy: 0.877, Validation Accuracy: 0.890, Loss: 0.161 Epoch 6 Batch 102/269 - Train Accuracy: 0.890, Validation Accuracy: 0.894, Loss: 0.138 Epoch 6 Batch 103/269 - Train Accuracy: 0.891, Validation Accuracy: 0.900, Loss: 0.153 Epoch 6 Batch 104/269 - Train Accuracy: 0.890, Validation Accuracy: 0.894, Loss: 0.145 Epoch 6 Batch 105/269 - Train Accuracy: 0.883, Validation Accuracy: 0.891, Loss: 0.149 Epoch 6 Batch 106/269 - Train Accuracy: 0.899, Validation Accuracy: 0.896, Loss: 0.142 Epoch 6 Batch 107/269 - Train Accuracy: 0.897, Validation Accuracy: 0.900, Loss: 0.152 Epoch 6 Batch 108/269 - Train Accuracy: 0.889, Validation Accuracy: 0.899, Loss: 0.143 Epoch 6 Batch 109/269 - Train Accuracy: 0.873, Validation Accuracy: 0.894, Loss: 0.151 Epoch 6 Batch 110/269 - Train Accuracy: 0.888, Validation Accuracy: 0.901, Loss: 0.143 Epoch 6 Batch 111/269 - Train Accuracy: 0.901, Validation Accuracy: 0.901, Loss: 0.158 Epoch 6 Batch 112/269 - Train Accuracy: 0.900, Validation Accuracy: 0.900, Loss: 0.149 Epoch 6 Batch 113/269 - Train Accuracy: 0.887, Validation Accuracy: 0.898, Loss: 0.144 Epoch 6 Batch 114/269 - Train Accuracy: 0.884, Validation Accuracy: 0.884, Loss: 0.144 Epoch 6 Batch 115/269 - Train Accuracy: 0.875, Validation Accuracy: 0.882, Loss: 0.148 Epoch 6 Batch 116/269 - Train Accuracy: 0.899, Validation Accuracy: 0.891, Loss: 0.149 Epoch 6 Batch 117/269 - Train Accuracy: 0.893, Validation Accuracy: 0.888, Loss: 0.138 Epoch 6 Batch 118/269 - Train Accuracy: 0.907, Validation Accuracy: 0.891, Loss: 0.140 Epoch 6 Batch 119/269 - Train Accuracy: 0.881, Validation Accuracy: 0.895, Loss: 0.157 Epoch 6 Batch 120/269 - Train Accuracy: 0.886, Validation Accuracy: 0.894, Loss: 0.144 Epoch 6 Batch 121/269 - Train Accuracy: 0.904, Validation Accuracy: 0.898, Loss: 0.140 Epoch 6 Batch 122/269 - Train Accuracy: 0.897, Validation Accuracy: 0.906, Loss: 0.142 Epoch 6 Batch 123/269 - Train Accuracy: 0.908, Validation 
Accuracy: 0.901, Loss: 0.146
Epoch 6 Batch 124/269 - Train Accuracy: 0.891, Validation Accuracy: 0.907, Loss: 0.136
...
Epoch 6 Batch 267/269 - Train Accuracy: 0.916, Validation Accuracy: 0.908, Loss: 0.130
Epoch 7 Batch 0/269 - Train Accuracy: 0.918, Validation Accuracy: 0.909, Loss: 0.131
...
Epoch 7 Batch 267/269 - Train Accuracy: 0.926, Validation Accuracy: 0.924, Loss: 0.103
Epoch 8 Batch 0/269 - Train Accuracy: 0.932, Validation Accuracy: 0.924, Loss: 0.105
...
Epoch 8 Batch 47/269 - Train Accuracy: 0.930, Validation Accuracy: 0.927, Loss: 0.080
Epoch 8 Batch 48/269 - Train
Accuracy: 0.929, Validation Accuracy: 0.926, Loss: 0.092 Epoch 8 Batch 49/269 - Train Accuracy: 0.918, Validation Accuracy: 0.928, Loss: 0.089 Epoch 8 Batch 50/269 - Train Accuracy: 0.905, Validation Accuracy: 0.928, Loss: 0.102 Epoch 8 Batch 51/269 - Train Accuracy: 0.927, Validation Accuracy: 0.929, Loss: 0.092 Epoch 8 Batch 52/269 - Train Accuracy: 0.904, Validation Accuracy: 0.928, Loss: 0.082 Epoch 8 Batch 53/269 - Train Accuracy: 0.910, Validation Accuracy: 0.923, Loss: 0.098 Epoch 8 Batch 54/269 - Train Accuracy: 0.916, Validation Accuracy: 0.928, Loss: 0.089 Epoch 8 Batch 55/269 - Train Accuracy: 0.921, Validation Accuracy: 0.926, Loss: 0.086 Epoch 8 Batch 56/269 - Train Accuracy: 0.921, Validation Accuracy: 0.925, Loss: 0.095 Epoch 8 Batch 57/269 - Train Accuracy: 0.917, Validation Accuracy: 0.927, Loss: 0.098 Epoch 8 Batch 58/269 - Train Accuracy: 0.923, Validation Accuracy: 0.929, Loss: 0.094 Epoch 8 Batch 59/269 - Train Accuracy: 0.940, Validation Accuracy: 0.929, Loss: 0.079 Epoch 8 Batch 60/269 - Train Accuracy: 0.927, Validation Accuracy: 0.926, Loss: 0.086 Epoch 8 Batch 61/269 - Train Accuracy: 0.928, Validation Accuracy: 0.929, Loss: 0.081 Epoch 8 Batch 62/269 - Train Accuracy: 0.916, Validation Accuracy: 0.930, Loss: 0.092 Epoch 8 Batch 63/269 - Train Accuracy: 0.917, Validation Accuracy: 0.925, Loss: 0.100 Epoch 8 Batch 64/269 - Train Accuracy: 0.926, Validation Accuracy: 0.928, Loss: 0.085 Epoch 8 Batch 65/269 - Train Accuracy: 0.919, Validation Accuracy: 0.926, Loss: 0.086 Epoch 8 Batch 66/269 - Train Accuracy: 0.909, Validation Accuracy: 0.926, Loss: 0.092 Epoch 8 Batch 67/269 - Train Accuracy: 0.916, Validation Accuracy: 0.932, Loss: 0.100 Epoch 8 Batch 68/269 - Train Accuracy: 0.909, Validation Accuracy: 0.925, Loss: 0.101 Epoch 8 Batch 69/269 - Train Accuracy: 0.908, Validation Accuracy: 0.929, Loss: 0.114 Epoch 8 Batch 70/269 - Train Accuracy: 0.929, Validation Accuracy: 0.923, Loss: 0.098 Epoch 8 Batch 71/269 - Train Accuracy: 0.917, 
Validation Accuracy: 0.926, Loss: 0.095 Epoch 8 Batch 72/269 - Train Accuracy: 0.916, Validation Accuracy: 0.925, Loss: 0.104 Epoch 8 Batch 73/269 - Train Accuracy: 0.907, Validation Accuracy: 0.927, Loss: 0.099 Epoch 8 Batch 74/269 - Train Accuracy: 0.929, Validation Accuracy: 0.933, Loss: 0.083 Epoch 8 Batch 75/269 - Train Accuracy: 0.924, Validation Accuracy: 0.924, Loss: 0.097 Epoch 8 Batch 76/269 - Train Accuracy: 0.919, Validation Accuracy: 0.926, Loss: 0.090 Epoch 8 Batch 77/269 - Train Accuracy: 0.928, Validation Accuracy: 0.925, Loss: 0.090 Epoch 8 Batch 78/269 - Train Accuracy: 0.917, Validation Accuracy: 0.925, Loss: 0.089 Epoch 8 Batch 79/269 - Train Accuracy: 0.913, Validation Accuracy: 0.924, Loss: 0.090 Epoch 8 Batch 80/269 - Train Accuracy: 0.926, Validation Accuracy: 0.928, Loss: 0.089 Epoch 8 Batch 81/269 - Train Accuracy: 0.915, Validation Accuracy: 0.927, Loss: 0.097 Epoch 8 Batch 82/269 - Train Accuracy: 0.926, Validation Accuracy: 0.925, Loss: 0.086 Epoch 8 Batch 83/269 - Train Accuracy: 0.907, Validation Accuracy: 0.918, Loss: 0.099 Epoch 8 Batch 84/269 - Train Accuracy: 0.920, Validation Accuracy: 0.923, Loss: 0.091 Epoch 8 Batch 85/269 - Train Accuracy: 0.929, Validation Accuracy: 0.920, Loss: 0.090 Epoch 8 Batch 86/269 - Train Accuracy: 0.927, Validation Accuracy: 0.920, Loss: 0.086 Epoch 8 Batch 87/269 - Train Accuracy: 0.922, Validation Accuracy: 0.925, Loss: 0.093 Epoch 8 Batch 88/269 - Train Accuracy: 0.910, Validation Accuracy: 0.926, Loss: 0.095 Epoch 8 Batch 89/269 - Train Accuracy: 0.929, Validation Accuracy: 0.926, Loss: 0.089 Epoch 8 Batch 90/269 - Train Accuracy: 0.923, Validation Accuracy: 0.926, Loss: 0.096 Epoch 8 Batch 91/269 - Train Accuracy: 0.927, Validation Accuracy: 0.923, Loss: 0.086 Epoch 8 Batch 92/269 - Train Accuracy: 0.935, Validation Accuracy: 0.925, Loss: 0.079 Epoch 8 Batch 93/269 - Train Accuracy: 0.934, Validation Accuracy: 0.919, Loss: 0.084 Epoch 8 Batch 94/269 - Train Accuracy: 0.926, Validation Accuracy: 
0.919, Loss: 0.103 Epoch 8 Batch 95/269 - Train Accuracy: 0.922, Validation Accuracy: 0.917, Loss: 0.087 Epoch 8 Batch 96/269 - Train Accuracy: 0.904, Validation Accuracy: 0.925, Loss: 0.096 Epoch 8 Batch 97/269 - Train Accuracy: 0.921, Validation Accuracy: 0.922, Loss: 0.088 Epoch 8 Batch 98/269 - Train Accuracy: 0.920, Validation Accuracy: 0.918, Loss: 0.092 Epoch 8 Batch 99/269 - Train Accuracy: 0.921, Validation Accuracy: 0.922, Loss: 0.092 Epoch 8 Batch 100/269 - Train Accuracy: 0.919, Validation Accuracy: 0.924, Loss: 0.090 Epoch 8 Batch 101/269 - Train Accuracy: 0.911, Validation Accuracy: 0.920, Loss: 0.103 Epoch 8 Batch 102/269 - Train Accuracy: 0.919, Validation Accuracy: 0.922, Loss: 0.093 Epoch 8 Batch 103/269 - Train Accuracy: 0.919, Validation Accuracy: 0.923, Loss: 0.098 Epoch 8 Batch 104/269 - Train Accuracy: 0.911, Validation Accuracy: 0.924, Loss: 0.091 Epoch 8 Batch 105/269 - Train Accuracy: 0.918, Validation Accuracy: 0.922, Loss: 0.091 Epoch 8 Batch 106/269 - Train Accuracy: 0.929, Validation Accuracy: 0.923, Loss: 0.083 Epoch 8 Batch 107/269 - Train Accuracy: 0.930, Validation Accuracy: 0.928, Loss: 0.091 Epoch 8 Batch 108/269 - Train Accuracy: 0.924, Validation Accuracy: 0.926, Loss: 0.087 Epoch 8 Batch 109/269 - Train Accuracy: 0.909, Validation Accuracy: 0.926, Loss: 0.094 Epoch 8 Batch 110/269 - Train Accuracy: 0.918, Validation Accuracy: 0.923, Loss: 0.084 Epoch 8 Batch 111/269 - Train Accuracy: 0.923, Validation Accuracy: 0.920, Loss: 0.096 Epoch 8 Batch 112/269 - Train Accuracy: 0.921, Validation Accuracy: 0.921, Loss: 0.089 Epoch 8 Batch 113/269 - Train Accuracy: 0.914, Validation Accuracy: 0.922, Loss: 0.090 Epoch 8 Batch 114/269 - Train Accuracy: 0.916, Validation Accuracy: 0.922, Loss: 0.088 Epoch 8 Batch 115/269 - Train Accuracy: 0.919, Validation Accuracy: 0.918, Loss: 0.090 Epoch 8 Batch 116/269 - Train Accuracy: 0.927, Validation Accuracy: 0.925, Loss: 0.089 Epoch 8 Batch 117/269 - Train Accuracy: 0.927, Validation Accuracy: 
0.930, Loss: 0.079 Epoch 8 Batch 118/269 - Train Accuracy: 0.923, Validation Accuracy: 0.929, Loss: 0.080 Epoch 8 Batch 119/269 - Train Accuracy: 0.916, Validation Accuracy: 0.927, Loss: 0.095 Epoch 8 Batch 120/269 - Train Accuracy: 0.916, Validation Accuracy: 0.930, Loss: 0.089 Epoch 8 Batch 121/269 - Train Accuracy: 0.920, Validation Accuracy: 0.925, Loss: 0.083 Epoch 8 Batch 122/269 - Train Accuracy: 0.930, Validation Accuracy: 0.925, Loss: 0.084 Epoch 8 Batch 123/269 - Train Accuracy: 0.928, Validation Accuracy: 0.925, Loss: 0.087 Epoch 8 Batch 124/269 - Train Accuracy: 0.932, Validation Accuracy: 0.924, Loss: 0.079 Epoch 8 Batch 125/269 - Train Accuracy: 0.931, Validation Accuracy: 0.920, Loss: 0.080 Epoch 8 Batch 126/269 - Train Accuracy: 0.902, Validation Accuracy: 0.926, Loss: 0.090 Epoch 8 Batch 127/269 - Train Accuracy: 0.923, Validation Accuracy: 0.926, Loss: 0.089 Epoch 8 Batch 128/269 - Train Accuracy: 0.924, Validation Accuracy: 0.927, Loss: 0.087 Epoch 8 Batch 129/269 - Train Accuracy: 0.920, Validation Accuracy: 0.926, Loss: 0.090 Epoch 8 Batch 130/269 - Train Accuracy: 0.922, Validation Accuracy: 0.922, Loss: 0.090 Epoch 8 Batch 131/269 - Train Accuracy: 0.894, Validation Accuracy: 0.923, Loss: 0.089 Epoch 8 Batch 132/269 - Train Accuracy: 0.918, Validation Accuracy: 0.929, Loss: 0.094 Epoch 8 Batch 133/269 - Train Accuracy: 0.929, Validation Accuracy: 0.931, Loss: 0.082 Epoch 8 Batch 134/269 - Train Accuracy: 0.923, Validation Accuracy: 0.926, Loss: 0.091 Epoch 8 Batch 135/269 - Train Accuracy: 0.926, Validation Accuracy: 0.931, Loss: 0.089 Epoch 8 Batch 136/269 - Train Accuracy: 0.901, Validation Accuracy: 0.929, Loss: 0.091 Epoch 8 Batch 137/269 - Train Accuracy: 0.925, Validation Accuracy: 0.928, Loss: 0.094 Epoch 8 Batch 138/269 - Train Accuracy: 0.922, Validation Accuracy: 0.925, Loss: 0.087 Epoch 8 Batch 139/269 - Train Accuracy: 0.931, Validation Accuracy: 0.925, Loss: 0.081 Epoch 8 Batch 140/269 - Train Accuracy: 0.912, Validation 
Accuracy: 0.929, Loss: 0.097 Epoch 8 Batch 141/269 - Train Accuracy: 0.921, Validation Accuracy: 0.928, Loss: 0.090 Epoch 8 Batch 142/269 - Train Accuracy: 0.921, Validation Accuracy: 0.930, Loss: 0.084 Epoch 8 Batch 143/269 - Train Accuracy: 0.924, Validation Accuracy: 0.925, Loss: 0.084 Epoch 8 Batch 144/269 - Train Accuracy: 0.925, Validation Accuracy: 0.922, Loss: 0.079 Epoch 8 Batch 145/269 - Train Accuracy: 0.927, Validation Accuracy: 0.927, Loss: 0.082 Epoch 8 Batch 146/269 - Train Accuracy: 0.918, Validation Accuracy: 0.926, Loss: 0.091 Epoch 8 Batch 147/269 - Train Accuracy: 0.914, Validation Accuracy: 0.927, Loss: 0.091 Epoch 8 Batch 148/269 - Train Accuracy: 0.914, Validation Accuracy: 0.925, Loss: 0.092 Epoch 8 Batch 149/269 - Train Accuracy: 0.909, Validation Accuracy: 0.927, Loss: 0.101 Epoch 8 Batch 150/269 - Train Accuracy: 0.909, Validation Accuracy: 0.921, Loss: 0.090 Epoch 8 Batch 151/269 - Train Accuracy: 0.924, Validation Accuracy: 0.928, Loss: 0.092 Epoch 8 Batch 152/269 - Train Accuracy: 0.924, Validation Accuracy: 0.930, Loss: 0.090 Epoch 8 Batch 153/269 - Train Accuracy: 0.923, Validation Accuracy: 0.927, Loss: 0.087 Epoch 8 Batch 154/269 - Train Accuracy: 0.932, Validation Accuracy: 0.927, Loss: 0.086 Epoch 8 Batch 155/269 - Train Accuracy: 0.914, Validation Accuracy: 0.921, Loss: 0.087 Epoch 8 Batch 156/269 - Train Accuracy: 0.925, Validation Accuracy: 0.924, Loss: 0.092 Epoch 8 Batch 157/269 - Train Accuracy: 0.911, Validation Accuracy: 0.927, Loss: 0.082 Epoch 8 Batch 158/269 - Train Accuracy: 0.924, Validation Accuracy: 0.930, Loss: 0.088 Epoch 8 Batch 159/269 - Train Accuracy: 0.912, Validation Accuracy: 0.924, Loss: 0.090 Epoch 8 Batch 160/269 - Train Accuracy: 0.911, Validation Accuracy: 0.927, Loss: 0.090 Epoch 8 Batch 161/269 - Train Accuracy: 0.922, Validation Accuracy: 0.923, Loss: 0.086 Epoch 8 Batch 162/269 - Train Accuracy: 0.934, Validation Accuracy: 0.925, Loss: 0.083 Epoch 8 Batch 163/269 - Train Accuracy: 0.933, 
Validation Accuracy: 0.927, Loss: 0.089 Epoch 8 Batch 164/269 - Train Accuracy: 0.933, Validation Accuracy: 0.932, Loss: 0.086 Epoch 8 Batch 165/269 - Train Accuracy: 0.924, Validation Accuracy: 0.928, Loss: 0.085 Epoch 8 Batch 166/269 - Train Accuracy: 0.933, Validation Accuracy: 0.926, Loss: 0.081 Epoch 8 Batch 167/269 - Train Accuracy: 0.933, Validation Accuracy: 0.924, Loss: 0.086 Epoch 8 Batch 168/269 - Train Accuracy: 0.921, Validation Accuracy: 0.925, Loss: 0.092 Epoch 8 Batch 169/269 - Train Accuracy: 0.911, Validation Accuracy: 0.927, Loss: 0.089 Epoch 8 Batch 170/269 - Train Accuracy: 0.922, Validation Accuracy: 0.928, Loss: 0.080 Epoch 8 Batch 171/269 - Train Accuracy: 0.927, Validation Accuracy: 0.929, Loss: 0.090 Epoch 8 Batch 172/269 - Train Accuracy: 0.908, Validation Accuracy: 0.927, Loss: 0.094 Epoch 8 Batch 173/269 - Train Accuracy: 0.935, Validation Accuracy: 0.925, Loss: 0.079 Epoch 8 Batch 174/269 - Train Accuracy: 0.928, Validation Accuracy: 0.925, Loss: 0.084 Epoch 8 Batch 175/269 - Train Accuracy: 0.907, Validation Accuracy: 0.929, Loss: 0.108 Epoch 8 Batch 176/269 - Train Accuracy: 0.912, Validation Accuracy: 0.930, Loss: 0.092 Epoch 8 Batch 177/269 - Train Accuracy: 0.920, Validation Accuracy: 0.930, Loss: 0.083 Epoch 8 Batch 178/269 - Train Accuracy: 0.930, Validation Accuracy: 0.933, Loss: 0.081 Epoch 8 Batch 179/269 - Train Accuracy: 0.912, Validation Accuracy: 0.929, Loss: 0.085 Epoch 8 Batch 180/269 - Train Accuracy: 0.922, Validation Accuracy: 0.928, Loss: 0.084 Epoch 8 Batch 181/269 - Train Accuracy: 0.917, Validation Accuracy: 0.926, Loss: 0.088 Epoch 8 Batch 182/269 - Train Accuracy: 0.923, Validation Accuracy: 0.928, Loss: 0.084 Epoch 8 Batch 183/269 - Train Accuracy: 0.937, Validation Accuracy: 0.927, Loss: 0.069 Epoch 8 Batch 184/269 - Train Accuracy: 0.930, Validation Accuracy: 0.930, Loss: 0.081 Epoch 8 Batch 185/269 - Train Accuracy: 0.932, Validation Accuracy: 0.929, Loss: 0.082 Epoch 8 Batch 186/269 - Train Accuracy: 
0.921, Validation Accuracy: 0.928, Loss: 0.080 Epoch 8 Batch 187/269 - Train Accuracy: 0.915, Validation Accuracy: 0.930, Loss: 0.082 Epoch 8 Batch 188/269 - Train Accuracy: 0.931, Validation Accuracy: 0.931, Loss: 0.082 Epoch 8 Batch 189/269 - Train Accuracy: 0.925, Validation Accuracy: 0.933, Loss: 0.081 Epoch 8 Batch 190/269 - Train Accuracy: 0.925, Validation Accuracy: 0.930, Loss: 0.084 Epoch 8 Batch 191/269 - Train Accuracy: 0.917, Validation Accuracy: 0.930, Loss: 0.082 Epoch 8 Batch 192/269 - Train Accuracy: 0.926, Validation Accuracy: 0.932, Loss: 0.088 Epoch 8 Batch 193/269 - Train Accuracy: 0.920, Validation Accuracy: 0.936, Loss: 0.082 Epoch 8 Batch 194/269 - Train Accuracy: 0.917, Validation Accuracy: 0.930, Loss: 0.085 Epoch 8 Batch 195/269 - Train Accuracy: 0.908, Validation Accuracy: 0.932, Loss: 0.082 Epoch 8 Batch 196/269 - Train Accuracy: 0.920, Validation Accuracy: 0.933, Loss: 0.083 Epoch 8 Batch 197/269 - Train Accuracy: 0.923, Validation Accuracy: 0.932, Loss: 0.086 Epoch 8 Batch 198/269 - Train Accuracy: 0.925, Validation Accuracy: 0.937, Loss: 0.089 Epoch 8 Batch 199/269 - Train Accuracy: 0.919, Validation Accuracy: 0.929, Loss: 0.087 Epoch 8 Batch 200/269 - Train Accuracy: 0.915, Validation Accuracy: 0.931, Loss: 0.080 Epoch 8 Batch 201/269 - Train Accuracy: 0.920, Validation Accuracy: 0.931, Loss: 0.088 Epoch 8 Batch 202/269 - Train Accuracy: 0.925, Validation Accuracy: 0.930, Loss: 0.078 Epoch 8 Batch 203/269 - Train Accuracy: 0.915, Validation Accuracy: 0.929, Loss: 0.094 Epoch 8 Batch 204/269 - Train Accuracy: 0.922, Validation Accuracy: 0.933, Loss: 0.089 Epoch 8 Batch 205/269 - Train Accuracy: 0.932, Validation Accuracy: 0.929, Loss: 0.084 Epoch 8 Batch 206/269 - Train Accuracy: 0.904, Validation Accuracy: 0.929, Loss: 0.094 Epoch 8 Batch 207/269 - Train Accuracy: 0.926, Validation Accuracy: 0.928, Loss: 0.084 Epoch 8 Batch 208/269 - Train Accuracy: 0.929, Validation Accuracy: 0.929, Loss: 0.090 Epoch 8 Batch 209/269 - Train 
Accuracy: 0.941, Validation Accuracy: 0.937, Loss: 0.079 Epoch 8 Batch 210/269 - Train Accuracy: 0.932, Validation Accuracy: 0.932, Loss: 0.084 Epoch 8 Batch 211/269 - Train Accuracy: 0.926, Validation Accuracy: 0.933, Loss: 0.085 Epoch 8 Batch 212/269 - Train Accuracy: 0.933, Validation Accuracy: 0.933, Loss: 0.092 Epoch 8 Batch 213/269 - Train Accuracy: 0.913, Validation Accuracy: 0.933, Loss: 0.083 Epoch 8 Batch 214/269 - Train Accuracy: 0.927, Validation Accuracy: 0.934, Loss: 0.086 Epoch 8 Batch 215/269 - Train Accuracy: 0.933, Validation Accuracy: 0.931, Loss: 0.081 Epoch 8 Batch 216/269 - Train Accuracy: 0.928, Validation Accuracy: 0.933, Loss: 0.098 Epoch 8 Batch 217/269 - Train Accuracy: 0.917, Validation Accuracy: 0.938, Loss: 0.086 Epoch 8 Batch 218/269 - Train Accuracy: 0.932, Validation Accuracy: 0.934, Loss: 0.081 Epoch 8 Batch 219/269 - Train Accuracy: 0.928, Validation Accuracy: 0.929, Loss: 0.083 Epoch 8 Batch 220/269 - Train Accuracy: 0.932, Validation Accuracy: 0.930, Loss: 0.076 Epoch 8 Batch 221/269 - Train Accuracy: 0.922, Validation Accuracy: 0.934, Loss: 0.085 Epoch 8 Batch 222/269 - Train Accuracy: 0.939, Validation Accuracy: 0.932, Loss: 0.072 Epoch 8 Batch 223/269 - Train Accuracy: 0.932, Validation Accuracy: 0.931, Loss: 0.078 Epoch 8 Batch 224/269 - Train Accuracy: 0.920, Validation Accuracy: 0.930, Loss: 0.088 Epoch 8 Batch 225/269 - Train Accuracy: 0.912, Validation Accuracy: 0.931, Loss: 0.078 Epoch 8 Batch 226/269 - Train Accuracy: 0.929, Validation Accuracy: 0.935, Loss: 0.083 Epoch 8 Batch 227/269 - Train Accuracy: 0.927, Validation Accuracy: 0.924, Loss: 0.084 Epoch 8 Batch 228/269 - Train Accuracy: 0.927, Validation Accuracy: 0.929, Loss: 0.079 Epoch 8 Batch 229/269 - Train Accuracy: 0.918, Validation Accuracy: 0.936, Loss: 0.078 Epoch 8 Batch 230/269 - Train Accuracy: 0.935, Validation Accuracy: 0.931, Loss: 0.080 Epoch 8 Batch 231/269 - Train Accuracy: 0.920, Validation Accuracy: 0.932, Loss: 0.084 Epoch 8 Batch 232/269 - 
Train Accuracy: 0.913, Validation Accuracy: 0.932, Loss: 0.081 Epoch 8 Batch 233/269 - Train Accuracy: 0.932, Validation Accuracy: 0.935, Loss: 0.086 Epoch 8 Batch 234/269 - Train Accuracy: 0.933, Validation Accuracy: 0.930, Loss: 0.079 Epoch 8 Batch 235/269 - Train Accuracy: 0.949, Validation Accuracy: 0.930, Loss: 0.073 Epoch 8 Batch 236/269 - Train Accuracy: 0.917, Validation Accuracy: 0.927, Loss: 0.080 Epoch 8 Batch 237/269 - Train Accuracy: 0.938, Validation Accuracy: 0.926, Loss: 0.081 Epoch 8 Batch 238/269 - Train Accuracy: 0.918, Validation Accuracy: 0.924, Loss: 0.080 Epoch 8 Batch 239/269 - Train Accuracy: 0.927, Validation Accuracy: 0.924, Loss: 0.079 Epoch 8 Batch 240/269 - Train Accuracy: 0.931, Validation Accuracy: 0.920, Loss: 0.072 Epoch 8 Batch 241/269 - Train Accuracy: 0.913, Validation Accuracy: 0.923, Loss: 0.090 Epoch 8 Batch 242/269 - Train Accuracy: 0.929, Validation Accuracy: 0.928, Loss: 0.079 Epoch 8 Batch 243/269 - Train Accuracy: 0.943, Validation Accuracy: 0.929, Loss: 0.069 Epoch 8 Batch 244/269 - Train Accuracy: 0.914, Validation Accuracy: 0.927, Loss: 0.085 Epoch 8 Batch 245/269 - Train Accuracy: 0.918, Validation Accuracy: 0.927, Loss: 0.088 Epoch 8 Batch 246/269 - Train Accuracy: 0.910, Validation Accuracy: 0.931, Loss: 0.084 Epoch 8 Batch 247/269 - Train Accuracy: 0.930, Validation Accuracy: 0.932, Loss: 0.078 Epoch 8 Batch 248/269 - Train Accuracy: 0.929, Validation Accuracy: 0.931, Loss: 0.080 Epoch 8 Batch 249/269 - Train Accuracy: 0.926, Validation Accuracy: 0.931, Loss: 0.076 Epoch 8 Batch 250/269 - Train Accuracy: 0.934, Validation Accuracy: 0.931, Loss: 0.085 Epoch 8 Batch 251/269 - Train Accuracy: 0.944, Validation Accuracy: 0.930, Loss: 0.073 Epoch 8 Batch 252/269 - Train Accuracy: 0.935, Validation Accuracy: 0.930, Loss: 0.073 Epoch 8 Batch 253/269 - Train Accuracy: 0.913, Validation Accuracy: 0.930, Loss: 0.084 Epoch 8 Batch 254/269 - Train Accuracy: 0.921, Validation Accuracy: 0.928, Loss: 0.084 Epoch 8 Batch 255/269 
- Train Accuracy: 0.921, Validation Accuracy: 0.924, Loss: 0.081 Epoch 8 Batch 256/269 - Train Accuracy: 0.910, Validation Accuracy: 0.927, Loss: 0.077 Epoch 8 Batch 257/269 - Train Accuracy: 0.912, Validation Accuracy: 0.930, Loss: 0.087 Epoch 8 Batch 258/269 - Train Accuracy: 0.917, Validation Accuracy: 0.934, Loss: 0.086 Epoch 8 Batch 259/269 - Train Accuracy: 0.928, Validation Accuracy: 0.933, Loss: 0.080 Epoch 8 Batch 260/269 - Train Accuracy: 0.919, Validation Accuracy: 0.932, Loss: 0.087 Epoch 8 Batch 261/269 - Train Accuracy: 0.929, Validation Accuracy: 0.926, Loss: 0.078 Epoch 8 Batch 262/269 - Train Accuracy: 0.930, Validation Accuracy: 0.930, Loss: 0.089 Epoch 8 Batch 263/269 - Train Accuracy: 0.937, Validation Accuracy: 0.937, Loss: 0.082 Epoch 8 Batch 264/269 - Train Accuracy: 0.893, Validation Accuracy: 0.934, Loss: 0.092 Epoch 8 Batch 265/269 - Train Accuracy: 0.927, Validation Accuracy: 0.932, Loss: 0.081 Epoch 8 Batch 266/269 - Train Accuracy: 0.934, Validation Accuracy: 0.927, Loss: 0.078 Epoch 8 Batch 267/269 - Train Accuracy: 0.937, Validation Accuracy: 0.932, Loss: 0.089 Epoch 9 Batch 0/269 - Train Accuracy: 0.943, Validation Accuracy: 0.935, Loss: 0.085 Epoch 9 Batch 1/269 - Train Accuracy: 0.927, Validation Accuracy: 0.933, Loss: 0.082 Epoch 9 Batch 2/269 - Train Accuracy: 0.928, Validation Accuracy: 0.930, Loss: 0.080 Epoch 9 Batch 3/269 - Train Accuracy: 0.930, Validation Accuracy: 0.925, Loss: 0.081 Epoch 9 Batch 4/269 - Train Accuracy: 0.912, Validation Accuracy: 0.926, Loss: 0.082 Epoch 9 Batch 5/269 - Train Accuracy: 0.929, Validation Accuracy: 0.929, Loss: 0.085 Epoch 9 Batch 6/269 - Train Accuracy: 0.932, Validation Accuracy: 0.928, Loss: 0.080 Epoch 9 Batch 7/269 - Train Accuracy: 0.933, Validation Accuracy: 0.929, Loss: 0.076 Epoch 9 Batch 8/269 - Train Accuracy: 0.937, Validation Accuracy: 0.930, Loss: 0.083 Epoch 9 Batch 9/269 - Train Accuracy: 0.925, Validation Accuracy: 0.928, Loss: 0.084 Epoch 9 Batch 10/269 - Train Accuracy: 
0.930, Validation Accuracy: 0.925, Loss: 0.071 Epoch 9 Batch 11/269 - Train Accuracy: 0.935, Validation Accuracy: 0.925, Loss: 0.089 Epoch 9 Batch 12/269 - Train Accuracy: 0.913, Validation Accuracy: 0.928, Loss: 0.090 Epoch 9 Batch 13/269 - Train Accuracy: 0.921, Validation Accuracy: 0.931, Loss: 0.074 Epoch 9 Batch 14/269 - Train Accuracy: 0.930, Validation Accuracy: 0.932, Loss: 0.085 Epoch 9 Batch 15/269 - Train Accuracy: 0.935, Validation Accuracy: 0.933, Loss: 0.070 Epoch 9 Batch 16/269 - Train Accuracy: 0.921, Validation Accuracy: 0.932, Loss: 0.086 Epoch 9 Batch 17/269 - Train Accuracy: 0.929, Validation Accuracy: 0.932, Loss: 0.073 Epoch 9 Batch 18/269 - Train Accuracy: 0.920, Validation Accuracy: 0.936, Loss: 0.082 Epoch 9 Batch 19/269 - Train Accuracy: 0.940, Validation Accuracy: 0.932, Loss: 0.070 Epoch 9 Batch 20/269 - Train Accuracy: 0.924, Validation Accuracy: 0.936, Loss: 0.077 Epoch 9 Batch 21/269 - Train Accuracy: 0.911, Validation Accuracy: 0.931, Loss: 0.093 Epoch 9 Batch 22/269 - Train Accuracy: 0.940, Validation Accuracy: 0.937, Loss: 0.078 Epoch 9 Batch 23/269 - Train Accuracy: 0.930, Validation Accuracy: 0.928, Loss: 0.084 Epoch 9 Batch 24/269 - Train Accuracy: 0.918, Validation Accuracy: 0.929, Loss: 0.082 Epoch 9 Batch 25/269 - Train Accuracy: 0.916, Validation Accuracy: 0.933, Loss: 0.088 Epoch 9 Batch 26/269 - Train Accuracy: 0.936, Validation Accuracy: 0.937, Loss: 0.075 Epoch 9 Batch 27/269 - Train Accuracy: 0.919, Validation Accuracy: 0.935, Loss: 0.078 Epoch 9 Batch 28/269 - Train Accuracy: 0.904, Validation Accuracy: 0.935, Loss: 0.085 Epoch 9 Batch 29/269 - Train Accuracy: 0.925, Validation Accuracy: 0.929, Loss: 0.082 Epoch 9 Batch 30/269 - Train Accuracy: 0.924, Validation Accuracy: 0.927, Loss: 0.076 Epoch 9 Batch 31/269 - Train Accuracy: 0.942, Validation Accuracy: 0.933, Loss: 0.077 Epoch 9 Batch 32/269 - Train Accuracy: 0.925, Validation Accuracy: 0.929, Loss: 0.074 Epoch 9 Batch 33/269 - Train Accuracy: 0.927, Validation 
Accuracy: 0.931, Loss: 0.075 Epoch 9 Batch 34/269 - Train Accuracy: 0.918, Validation Accuracy: 0.928, Loss: 0.078 Epoch 9 Batch 35/269 - Train Accuracy: 0.926, Validation Accuracy: 0.928, Loss: 0.087 Epoch 9 Batch 36/269 - Train Accuracy: 0.928, Validation Accuracy: 0.934, Loss: 0.082 Epoch 9 Batch 37/269 - Train Accuracy: 0.921, Validation Accuracy: 0.931, Loss: 0.080 Epoch 9 Batch 38/269 - Train Accuracy: 0.924, Validation Accuracy: 0.935, Loss: 0.081 Epoch 9 Batch 39/269 - Train Accuracy: 0.929, Validation Accuracy: 0.934, Loss: 0.072 Epoch 9 Batch 40/269 - Train Accuracy: 0.916, Validation Accuracy: 0.932, Loss: 0.087 Epoch 9 Batch 41/269 - Train Accuracy: 0.916, Validation Accuracy: 0.935, Loss: 0.085 Epoch 9 Batch 42/269 - Train Accuracy: 0.937, Validation Accuracy: 0.930, Loss: 0.067 Epoch 9 Batch 43/269 - Train Accuracy: 0.929, Validation Accuracy: 0.928, Loss: 0.082 Epoch 9 Batch 44/269 - Train Accuracy: 0.918, Validation Accuracy: 0.932, Loss: 0.079 Epoch 9 Batch 45/269 - Train Accuracy: 0.926, Validation Accuracy: 0.926, Loss: 0.081 Epoch 9 Batch 46/269 - Train Accuracy: 0.927, Validation Accuracy: 0.929, Loss: 0.077 Epoch 9 Batch 47/269 - Train Accuracy: 0.933, Validation Accuracy: 0.922, Loss: 0.071 Epoch 9 Batch 48/269 - Train Accuracy: 0.933, Validation Accuracy: 0.926, Loss: 0.077 Epoch 9 Batch 49/269 - Train Accuracy: 0.918, Validation Accuracy: 0.929, Loss: 0.071 Epoch 9 Batch 50/269 - Train Accuracy: 0.914, Validation Accuracy: 0.927, Loss: 0.084 Epoch 9 Batch 51/269 - Train Accuracy: 0.936, Validation Accuracy: 0.933, Loss: 0.078 Epoch 9 Batch 52/269 - Train Accuracy: 0.922, Validation Accuracy: 0.929, Loss: 0.068 Epoch 9 Batch 53/269 - Train Accuracy: 0.923, Validation Accuracy: 0.934, Loss: 0.087 Epoch 9 Batch 54/269 - Train Accuracy: 0.933, Validation Accuracy: 0.935, Loss: 0.077 Epoch 9 Batch 55/269 - Train Accuracy: 0.930, Validation Accuracy: 0.932, Loss: 0.072 Epoch 9 Batch 56/269 - Train Accuracy: 0.925, Validation Accuracy: 0.931, 
Loss: 0.078 Epoch 9 Batch 57/269 - Train Accuracy: 0.925, Validation Accuracy: 0.933, Loss: 0.086 Epoch 9 Batch 58/269 - Train Accuracy: 0.931, Validation Accuracy: 0.934, Loss: 0.073 Epoch 9 Batch 59/269 - Train Accuracy: 0.946, Validation Accuracy: 0.931, Loss: 0.063 Epoch 9 Batch 60/269 - Train Accuracy: 0.937, Validation Accuracy: 0.938, Loss: 0.070 Epoch 9 Batch 61/269 - Train Accuracy: 0.934, Validation Accuracy: 0.935, Loss: 0.071 Epoch 9 Batch 62/269 - Train Accuracy: 0.923, Validation Accuracy: 0.933, Loss: 0.074 Epoch 9 Batch 63/269 - Train Accuracy: 0.918, Validation Accuracy: 0.938, Loss: 0.085 Epoch 9 Batch 64/269 - Train Accuracy: 0.938, Validation Accuracy: 0.938, Loss: 0.070 Epoch 9 Batch 65/269 - Train Accuracy: 0.925, Validation Accuracy: 0.938, Loss: 0.071 Epoch 9 Batch 66/269 - Train Accuracy: 0.917, Validation Accuracy: 0.934, Loss: 0.077 Epoch 9 Batch 67/269 - Train Accuracy: 0.922, Validation Accuracy: 0.935, Loss: 0.084 Epoch 9 Batch 68/269 - Train Accuracy: 0.923, Validation Accuracy: 0.935, Loss: 0.083 Epoch 9 Batch 69/269 - Train Accuracy: 0.912, Validation Accuracy: 0.931, Loss: 0.091 Epoch 9 Batch 70/269 - Train Accuracy: 0.943, Validation Accuracy: 0.931, Loss: 0.082 Epoch 9 Batch 71/269 - Train Accuracy: 0.924, Validation Accuracy: 0.937, Loss: 0.087 Epoch 9 Batch 72/269 - Train Accuracy: 0.920, Validation Accuracy: 0.935, Loss: 0.085 Epoch 9 Batch 73/269 - Train Accuracy: 0.923, Validation Accuracy: 0.930, Loss: 0.081 Epoch 9 Batch 74/269 - Train Accuracy: 0.937, Validation Accuracy: 0.929, Loss: 0.074 Epoch 9 Batch 75/269 - Train Accuracy: 0.925, Validation Accuracy: 0.930, Loss: 0.081 Epoch 9 Batch 76/269 - Train Accuracy: 0.922, Validation Accuracy: 0.936, Loss: 0.079 Epoch 9 Batch 77/269 - Train Accuracy: 0.943, Validation Accuracy: 0.935, Loss: 0.071 Epoch 9 Batch 78/269 - Train Accuracy: 0.935, Validation Accuracy: 0.934, Loss: 0.077 Epoch 9 Batch 79/269 - Train Accuracy: 0.926, Validation Accuracy: 0.925, Loss: 0.074 Epoch 9 
Batch 80/269 - Train Accuracy: 0.930, Validation Accuracy: 0.929, Loss: 0.079 Epoch 9 Batch 81/269 - Train Accuracy: 0.917, Validation Accuracy: 0.925, Loss: 0.085 Epoch 9 Batch 82/269 - Train Accuracy: 0.933, Validation Accuracy: 0.925, Loss: 0.071 Epoch 9 Batch 83/269 - Train Accuracy: 0.920, Validation Accuracy: 0.932, Loss: 0.089 Epoch 9 Batch 84/269 - Train Accuracy: 0.933, Validation Accuracy: 0.934, Loss: 0.077 Epoch 9 Batch 85/269 - Train Accuracy: 0.932, Validation Accuracy: 0.928, Loss: 0.076 Epoch 9 Batch 86/269 - Train Accuracy: 0.925, Validation Accuracy: 0.927, Loss: 0.075 Epoch 9 Batch 87/269 - Train Accuracy: 0.929, Validation Accuracy: 0.931, Loss: 0.079 Epoch 9 Batch 88/269 - Train Accuracy: 0.907, Validation Accuracy: 0.928, Loss: 0.075 Epoch 9 Batch 89/269 - Train Accuracy: 0.942, Validation Accuracy: 0.930, Loss: 0.078 Epoch 9 Batch 90/269 - Train Accuracy: 0.940, Validation Accuracy: 0.931, Loss: 0.078 Epoch 9 Batch 91/269 - Train Accuracy: 0.935, Validation Accuracy: 0.935, Loss: 0.070 Epoch 9 Batch 92/269 - Train Accuracy: 0.938, Validation Accuracy: 0.930, Loss: 0.068 Epoch 9 Batch 93/269 - Train Accuracy: 0.934, Validation Accuracy: 0.927, Loss: 0.070 Epoch 9 Batch 94/269 - Train Accuracy: 0.927, Validation Accuracy: 0.929, Loss: 0.089 Epoch 9 Batch 95/269 - Train Accuracy: 0.933, Validation Accuracy: 0.931, Loss: 0.071 Epoch 9 Batch 96/269 - Train Accuracy: 0.909, Validation Accuracy: 0.926, Loss: 0.080 Epoch 9 Batch 97/269 - Train Accuracy: 0.936, Validation Accuracy: 0.929, Loss: 0.077 Epoch 9 Batch 98/269 - Train Accuracy: 0.929, Validation Accuracy: 0.926, Loss: 0.078 Epoch 9 Batch 99/269 - Train Accuracy: 0.928, Validation Accuracy: 0.931, Loss: 0.073 Epoch 9 Batch 100/269 - Train Accuracy: 0.931, Validation Accuracy: 0.929, Loss: 0.075 Epoch 9 Batch 101/269 - Train Accuracy: 0.920, Validation Accuracy: 0.924, Loss: 0.090 Epoch 9 Batch 102/269 - Train Accuracy: 0.929, Validation Accuracy: 0.931, Loss: 0.071 Epoch 9 Batch 103/269 - 
Train Accuracy: 0.925, Validation Accuracy: 0.928, Loss: 0.086
Epoch 9 Batch 104/269 - Train Accuracy: 0.933, Validation Accuracy: 0.930, Loss: 0.070
Epoch 9 Batch 105/269 - Train Accuracy: 0.935, Validation Accuracy: 0.933, Loss: 0.077
Epoch 9 Batch 106/269 - Train Accuracy: 0.936, Validation Accuracy: 0.935, Loss: 0.070
[... per-batch log elided: Epoch 9 Batches 107-267, Epoch 10 Batches 0-267, and Epoch 11 Batches 0-21, each of the form "Epoch E Batch B/269 - Train Accuracy: ..., Validation Accuracy: ..., Loss: ..."; over this span train accuracy rises from ~0.90-0.94 to ~0.92-0.96, validation accuracy from ~0.925-0.940 to ~0.937-0.950, and loss falls from ~0.086 to ~0.050 ...]
Epoch 11 Batch 22/269 - Train Accuracy: 0.951, Validation Accuracy: 0.948, Loss: 0.056
Epoch 11 Batch 23/269 - Train Accuracy: 0.933, Validation Accuracy: 0.943, Loss: 0.064
Epoch 11 Batch 24/269 - Train
Accuracy: 0.934, Validation Accuracy: 0.947, Loss: 0.057 Epoch 11 Batch 25/269 - Train Accuracy: 0.937, Validation Accuracy: 0.943, Loss: 0.069 Epoch 11 Batch 26/269 - Train Accuracy: 0.926, Validation Accuracy: 0.945, Loss: 0.056 Epoch 11 Batch 27/269 - Train Accuracy: 0.928, Validation Accuracy: 0.949, Loss: 0.057 Epoch 11 Batch 28/269 - Train Accuracy: 0.925, Validation Accuracy: 0.948, Loss: 0.063 Epoch 11 Batch 29/269 - Train Accuracy: 0.940, Validation Accuracy: 0.948, Loss: 0.064 Epoch 11 Batch 30/269 - Train Accuracy: 0.941, Validation Accuracy: 0.945, Loss: 0.056 Epoch 11 Batch 31/269 - Train Accuracy: 0.951, Validation Accuracy: 0.946, Loss: 0.058 Epoch 11 Batch 32/269 - Train Accuracy: 0.942, Validation Accuracy: 0.945, Loss: 0.056 Epoch 11 Batch 33/269 - Train Accuracy: 0.941, Validation Accuracy: 0.936, Loss: 0.054 Epoch 11 Batch 34/269 - Train Accuracy: 0.935, Validation Accuracy: 0.936, Loss: 0.057 Epoch 11 Batch 35/269 - Train Accuracy: 0.939, Validation Accuracy: 0.937, Loss: 0.071 Epoch 11 Batch 36/269 - Train Accuracy: 0.935, Validation Accuracy: 0.941, Loss: 0.063 Epoch 11 Batch 37/269 - Train Accuracy: 0.930, Validation Accuracy: 0.939, Loss: 0.059 Epoch 11 Batch 38/269 - Train Accuracy: 0.934, Validation Accuracy: 0.936, Loss: 0.055 Epoch 11 Batch 39/269 - Train Accuracy: 0.944, Validation Accuracy: 0.948, Loss: 0.058 Epoch 11 Batch 40/269 - Train Accuracy: 0.934, Validation Accuracy: 0.941, Loss: 0.064 Epoch 11 Batch 41/269 - Train Accuracy: 0.923, Validation Accuracy: 0.943, Loss: 0.065 Epoch 11 Batch 42/269 - Train Accuracy: 0.953, Validation Accuracy: 0.947, Loss: 0.052 Epoch 11 Batch 43/269 - Train Accuracy: 0.932, Validation Accuracy: 0.944, Loss: 0.058 Epoch 11 Batch 44/269 - Train Accuracy: 0.934, Validation Accuracy: 0.945, Loss: 0.061 Epoch 11 Batch 45/269 - Train Accuracy: 0.938, Validation Accuracy: 0.940, Loss: 0.062 Epoch 11 Batch 46/269 - Train Accuracy: 0.937, Validation Accuracy: 0.942, Loss: 0.063 Epoch 11 Batch 47/269 - 
Train Accuracy: 0.943, Validation Accuracy: 0.943, Loss: 0.053 Epoch 11 Batch 48/269 - Train Accuracy: 0.946, Validation Accuracy: 0.943, Loss: 0.053 Epoch 11 Batch 49/269 - Train Accuracy: 0.936, Validation Accuracy: 0.943, Loss: 0.058 Epoch 11 Batch 50/269 - Train Accuracy: 0.935, Validation Accuracy: 0.946, Loss: 0.065 Epoch 11 Batch 51/269 - Train Accuracy: 0.949, Validation Accuracy: 0.948, Loss: 0.058 Epoch 11 Batch 52/269 - Train Accuracy: 0.937, Validation Accuracy: 0.946, Loss: 0.051 Epoch 11 Batch 53/269 - Train Accuracy: 0.944, Validation Accuracy: 0.945, Loss: 0.064 Epoch 11 Batch 54/269 - Train Accuracy: 0.956, Validation Accuracy: 0.946, Loss: 0.060 Epoch 11 Batch 55/269 - Train Accuracy: 0.944, Validation Accuracy: 0.943, Loss: 0.057 Epoch 11 Batch 56/269 - Train Accuracy: 0.939, Validation Accuracy: 0.941, Loss: 0.060 Epoch 11 Batch 57/269 - Train Accuracy: 0.934, Validation Accuracy: 0.943, Loss: 0.063 Epoch 11 Batch 58/269 - Train Accuracy: 0.944, Validation Accuracy: 0.945, Loss: 0.060 Epoch 11 Batch 59/269 - Train Accuracy: 0.957, Validation Accuracy: 0.943, Loss: 0.051 Epoch 11 Batch 60/269 - Train Accuracy: 0.943, Validation Accuracy: 0.939, Loss: 0.057 Epoch 11 Batch 61/269 - Train Accuracy: 0.938, Validation Accuracy: 0.939, Loss: 0.055 Epoch 11 Batch 62/269 - Train Accuracy: 0.933, Validation Accuracy: 0.938, Loss: 0.066 Epoch 11 Batch 63/269 - Train Accuracy: 0.930, Validation Accuracy: 0.940, Loss: 0.065 Epoch 11 Batch 64/269 - Train Accuracy: 0.946, Validation Accuracy: 0.945, Loss: 0.050 Epoch 11 Batch 65/269 - Train Accuracy: 0.942, Validation Accuracy: 0.943, Loss: 0.057 Epoch 11 Batch 66/269 - Train Accuracy: 0.933, Validation Accuracy: 0.945, Loss: 0.064 Epoch 11 Batch 67/269 - Train Accuracy: 0.935, Validation Accuracy: 0.944, Loss: 0.070 Epoch 11 Batch 68/269 - Train Accuracy: 0.930, Validation Accuracy: 0.933, Loss: 0.065 Epoch 11 Batch 69/269 - Train Accuracy: 0.915, Validation Accuracy: 0.944, Loss: 0.083 Epoch 11 Batch 70/269 
- Train Accuracy: 0.946, Validation Accuracy: 0.936, Loss: 0.062 Epoch 11 Batch 71/269 - Train Accuracy: 0.943, Validation Accuracy: 0.942, Loss: 0.072 Epoch 11 Batch 72/269 - Train Accuracy: 0.935, Validation Accuracy: 0.938, Loss: 0.066 Epoch 11 Batch 73/269 - Train Accuracy: 0.919, Validation Accuracy: 0.938, Loss: 0.072 Epoch 11 Batch 74/269 - Train Accuracy: 0.942, Validation Accuracy: 0.937, Loss: 0.057 Epoch 11 Batch 75/269 - Train Accuracy: 0.926, Validation Accuracy: 0.933, Loss: 0.090 Epoch 11 Batch 76/269 - Train Accuracy: 0.926, Validation Accuracy: 0.938, Loss: 0.112 Epoch 11 Batch 77/269 - Train Accuracy: 0.929, Validation Accuracy: 0.929, Loss: 0.066 Epoch 11 Batch 78/269 - Train Accuracy: 0.934, Validation Accuracy: 0.934, Loss: 0.104 Epoch 11 Batch 79/269 - Train Accuracy: 0.912, Validation Accuracy: 0.915, Loss: 0.069 Epoch 11 Batch 80/269 - Train Accuracy: 0.939, Validation Accuracy: 0.939, Loss: 0.122 Epoch 11 Batch 81/269 - Train Accuracy: 0.885, Validation Accuracy: 0.918, Loss: 0.085 Epoch 11 Batch 82/269 - Train Accuracy: 0.936, Validation Accuracy: 0.931, Loss: 0.116 Epoch 11 Batch 83/269 - Train Accuracy: 0.906, Validation Accuracy: 0.916, Loss: 0.088 Epoch 11 Batch 84/269 - Train Accuracy: 0.924, Validation Accuracy: 0.928, Loss: 0.123 Epoch 11 Batch 85/269 - Train Accuracy: 0.928, Validation Accuracy: 0.927, Loss: 0.075 Epoch 11 Batch 86/269 - Train Accuracy: 0.922, Validation Accuracy: 0.909, Loss: 0.090 Epoch 11 Batch 87/269 - Train Accuracy: 0.937, Validation Accuracy: 0.925, Loss: 0.111 Epoch 11 Batch 88/269 - Train Accuracy: 0.902, Validation Accuracy: 0.907, Loss: 0.075 Epoch 11 Batch 89/269 - Train Accuracy: 0.935, Validation Accuracy: 0.930, Loss: 0.098 Epoch 11 Batch 90/269 - Train Accuracy: 0.937, Validation Accuracy: 0.931, Loss: 0.110 Epoch 11 Batch 91/269 - Train Accuracy: 0.930, Validation Accuracy: 0.922, Loss: 0.071 Epoch 11 Batch 92/269 - Train Accuracy: 0.949, Validation Accuracy: 0.933, Loss: 0.093 Epoch 11 Batch 
93/269 - Train Accuracy: 0.930, Validation Accuracy: 0.925, Loss: 0.078 Epoch 11 Batch 94/269 - Train Accuracy: 0.931, Validation Accuracy: 0.934, Loss: 0.107 Epoch 11 Batch 95/269 - Train Accuracy: 0.937, Validation Accuracy: 0.928, Loss: 0.079 Epoch 11 Batch 96/269 - Train Accuracy: 0.913, Validation Accuracy: 0.914, Loss: 0.079 Epoch 11 Batch 97/269 - Train Accuracy: 0.936, Validation Accuracy: 0.927, Loss: 0.088 Epoch 11 Batch 98/269 - Train Accuracy: 0.932, Validation Accuracy: 0.931, Loss: 0.075 Epoch 11 Batch 99/269 - Train Accuracy: 0.927, Validation Accuracy: 0.937, Loss: 0.078 Epoch 11 Batch 100/269 - Train Accuracy: 0.937, Validation Accuracy: 0.937, Loss: 0.079 Epoch 11 Batch 101/269 - Train Accuracy: 0.929, Validation Accuracy: 0.933, Loss: 0.087 Epoch 11 Batch 102/269 - Train Accuracy: 0.923, Validation Accuracy: 0.925, Loss: 0.068 Epoch 11 Batch 103/269 - Train Accuracy: 0.934, Validation Accuracy: 0.927, Loss: 0.076 Epoch 11 Batch 104/269 - Train Accuracy: 0.938, Validation Accuracy: 0.933, Loss: 0.070 Epoch 11 Batch 105/269 - Train Accuracy: 0.932, Validation Accuracy: 0.933, Loss: 0.070 Epoch 11 Batch 106/269 - Train Accuracy: 0.929, Validation Accuracy: 0.927, Loss: 0.066 Epoch 11 Batch 107/269 - Train Accuracy: 0.941, Validation Accuracy: 0.940, Loss: 0.072 Epoch 11 Batch 108/269 - Train Accuracy: 0.948, Validation Accuracy: 0.940, Loss: 0.058 Epoch 11 Batch 109/269 - Train Accuracy: 0.922, Validation Accuracy: 0.940, Loss: 0.074 Epoch 11 Batch 110/269 - Train Accuracy: 0.930, Validation Accuracy: 0.939, Loss: 0.067 Epoch 11 Batch 111/269 - Train Accuracy: 0.937, Validation Accuracy: 0.938, Loss: 0.076 Epoch 11 Batch 112/269 - Train Accuracy: 0.944, Validation Accuracy: 0.937, Loss: 0.064 Epoch 11 Batch 113/269 - Train Accuracy: 0.926, Validation Accuracy: 0.941, Loss: 0.064 Epoch 11 Batch 114/269 - Train Accuracy: 0.936, Validation Accuracy: 0.944, Loss: 0.066 Epoch 11 Batch 115/269 - Train Accuracy: 0.928, Validation Accuracy: 0.938, Loss: 
0.068 Epoch 11 Batch 116/269 - Train Accuracy: 0.942, Validation Accuracy: 0.938, Loss: 0.066 Epoch 11 Batch 117/269 - Train Accuracy: 0.936, Validation Accuracy: 0.936, Loss: 0.060 Epoch 11 Batch 118/269 - Train Accuracy: 0.929, Validation Accuracy: 0.938, Loss: 0.055 Epoch 11 Batch 119/269 - Train Accuracy: 0.938, Validation Accuracy: 0.941, Loss: 0.065 Epoch 11 Batch 120/269 - Train Accuracy: 0.932, Validation Accuracy: 0.939, Loss: 0.063 Epoch 11 Batch 121/269 - Train Accuracy: 0.940, Validation Accuracy: 0.943, Loss: 0.057 Epoch 11 Batch 122/269 - Train Accuracy: 0.940, Validation Accuracy: 0.939, Loss: 0.061 Epoch 11 Batch 123/269 - Train Accuracy: 0.935, Validation Accuracy: 0.936, Loss: 0.062 Epoch 11 Batch 124/269 - Train Accuracy: 0.942, Validation Accuracy: 0.934, Loss: 0.056 Epoch 11 Batch 125/269 - Train Accuracy: 0.936, Validation Accuracy: 0.936, Loss: 0.058 Epoch 11 Batch 126/269 - Train Accuracy: 0.926, Validation Accuracy: 0.936, Loss: 0.063 Epoch 11 Batch 127/269 - Train Accuracy: 0.941, Validation Accuracy: 0.939, Loss: 0.060 Epoch 11 Batch 128/269 - Train Accuracy: 0.946, Validation Accuracy: 0.939, Loss: 0.058 Epoch 11 Batch 129/269 - Train Accuracy: 0.938, Validation Accuracy: 0.937, Loss: 0.060 Epoch 11 Batch 130/269 - Train Accuracy: 0.943, Validation Accuracy: 0.936, Loss: 0.060 Epoch 11 Batch 131/269 - Train Accuracy: 0.924, Validation Accuracy: 0.943, Loss: 0.062 Epoch 11 Batch 132/269 - Train Accuracy: 0.930, Validation Accuracy: 0.944, Loss: 0.064 Epoch 11 Batch 133/269 - Train Accuracy: 0.938, Validation Accuracy: 0.943, Loss: 0.052 Epoch 11 Batch 134/269 - Train Accuracy: 0.942, Validation Accuracy: 0.938, Loss: 0.061 Epoch 11 Batch 135/269 - Train Accuracy: 0.943, Validation Accuracy: 0.938, Loss: 0.057 Epoch 11 Batch 136/269 - Train Accuracy: 0.926, Validation Accuracy: 0.934, Loss: 0.066 Epoch 11 Batch 137/269 - Train Accuracy: 0.937, Validation Accuracy: 0.939, Loss: 0.067 Epoch 11 Batch 138/269 - Train Accuracy: 0.934, 
Validation Accuracy: 0.938, Loss: 0.056 Epoch 11 Batch 139/269 - Train Accuracy: 0.947, Validation Accuracy: 0.940, Loss: 0.055 Epoch 11 Batch 140/269 - Train Accuracy: 0.932, Validation Accuracy: 0.945, Loss: 0.065 Epoch 11 Batch 141/269 - Train Accuracy: 0.935, Validation Accuracy: 0.939, Loss: 0.061 Epoch 11 Batch 142/269 - Train Accuracy: 0.939, Validation Accuracy: 0.940, Loss: 0.054 Epoch 11 Batch 143/269 - Train Accuracy: 0.948, Validation Accuracy: 0.942, Loss: 0.053 Epoch 11 Batch 144/269 - Train Accuracy: 0.954, Validation Accuracy: 0.939, Loss: 0.048 Epoch 11 Batch 145/269 - Train Accuracy: 0.948, Validation Accuracy: 0.942, Loss: 0.056 Epoch 11 Batch 146/269 - Train Accuracy: 0.943, Validation Accuracy: 0.943, Loss: 0.058 Epoch 11 Batch 147/269 - Train Accuracy: 0.941, Validation Accuracy: 0.943, Loss: 0.064 Epoch 11 Batch 148/269 - Train Accuracy: 0.941, Validation Accuracy: 0.942, Loss: 0.058 Epoch 11 Batch 149/269 - Train Accuracy: 0.938, Validation Accuracy: 0.940, Loss: 0.065 Epoch 11 Batch 150/269 - Train Accuracy: 0.942, Validation Accuracy: 0.935, Loss: 0.059 Epoch 11 Batch 151/269 - Train Accuracy: 0.940, Validation Accuracy: 0.939, Loss: 0.054 Epoch 11 Batch 152/269 - Train Accuracy: 0.940, Validation Accuracy: 0.938, Loss: 0.059 Epoch 11 Batch 153/269 - Train Accuracy: 0.950, Validation Accuracy: 0.943, Loss: 0.056 Epoch 11 Batch 154/269 - Train Accuracy: 0.954, Validation Accuracy: 0.945, Loss: 0.056 Epoch 11 Batch 155/269 - Train Accuracy: 0.937, Validation Accuracy: 0.945, Loss: 0.052 Epoch 11 Batch 156/269 - Train Accuracy: 0.938, Validation Accuracy: 0.944, Loss: 0.060 Epoch 11 Batch 157/269 - Train Accuracy: 0.931, Validation Accuracy: 0.945, Loss: 0.056 Epoch 11 Batch 158/269 - Train Accuracy: 0.939, Validation Accuracy: 0.949, Loss: 0.060 Epoch 11 Batch 159/269 - Train Accuracy: 0.939, Validation Accuracy: 0.951, Loss: 0.057 Epoch 11 Batch 160/269 - Train Accuracy: 0.934, Validation Accuracy: 0.948, Loss: 0.056 Epoch 11 Batch 161/269 
- Train Accuracy: 0.946, Validation Accuracy: 0.941, Loss: 0.051 Epoch 11 Batch 162/269 - Train Accuracy: 0.950, Validation Accuracy: 0.940, Loss: 0.056 Epoch 11 Batch 163/269 - Train Accuracy: 0.949, Validation Accuracy: 0.940, Loss: 0.055 Epoch 11 Batch 164/269 - Train Accuracy: 0.948, Validation Accuracy: 0.939, Loss: 0.054 Epoch 11 Batch 165/269 - Train Accuracy: 0.951, Validation Accuracy: 0.940, Loss: 0.054 Epoch 11 Batch 166/269 - Train Accuracy: 0.943, Validation Accuracy: 0.943, Loss: 0.057 Epoch 11 Batch 167/269 - Train Accuracy: 0.947, Validation Accuracy: 0.946, Loss: 0.056 Epoch 11 Batch 168/269 - Train Accuracy: 0.942, Validation Accuracy: 0.949, Loss: 0.057 Epoch 11 Batch 169/269 - Train Accuracy: 0.931, Validation Accuracy: 0.947, Loss: 0.061 Epoch 11 Batch 170/269 - Train Accuracy: 0.932, Validation Accuracy: 0.946, Loss: 0.055 Epoch 11 Batch 171/269 - Train Accuracy: 0.946, Validation Accuracy: 0.944, Loss: 0.057 Epoch 11 Batch 172/269 - Train Accuracy: 0.921, Validation Accuracy: 0.942, Loss: 0.061 Epoch 11 Batch 173/269 - Train Accuracy: 0.943, Validation Accuracy: 0.945, Loss: 0.053 Epoch 11 Batch 174/269 - Train Accuracy: 0.948, Validation Accuracy: 0.942, Loss: 0.054 Epoch 11 Batch 175/269 - Train Accuracy: 0.933, Validation Accuracy: 0.941, Loss: 0.070 Epoch 11 Batch 176/269 - Train Accuracy: 0.930, Validation Accuracy: 0.940, Loss: 0.061 Epoch 11 Batch 177/269 - Train Accuracy: 0.947, Validation Accuracy: 0.939, Loss: 0.055 Epoch 11 Batch 178/269 - Train Accuracy: 0.946, Validation Accuracy: 0.940, Loss: 0.055 Epoch 11 Batch 179/269 - Train Accuracy: 0.931, Validation Accuracy: 0.945, Loss: 0.056 Epoch 11 Batch 180/269 - Train Accuracy: 0.949, Validation Accuracy: 0.945, Loss: 0.054 Epoch 11 Batch 181/269 - Train Accuracy: 0.943, Validation Accuracy: 0.943, Loss: 0.062 Epoch 11 Batch 182/269 - Train Accuracy: 0.941, Validation Accuracy: 0.945, Loss: 0.056 Epoch 11 Batch 183/269 - Train Accuracy: 0.948, Validation Accuracy: 0.943, Loss: 
0.046 Epoch 11 Batch 184/269 - Train Accuracy: 0.949, Validation Accuracy: 0.947, Loss: 0.051 Epoch 11 Batch 185/269 - Train Accuracy: 0.941, Validation Accuracy: 0.946, Loss: 0.054 Epoch 11 Batch 186/269 - Train Accuracy: 0.939, Validation Accuracy: 0.949, Loss: 0.057 Epoch 11 Batch 187/269 - Train Accuracy: 0.935, Validation Accuracy: 0.949, Loss: 0.058 Epoch 11 Batch 188/269 - Train Accuracy: 0.950, Validation Accuracy: 0.948, Loss: 0.051 Epoch 11 Batch 189/269 - Train Accuracy: 0.942, Validation Accuracy: 0.948, Loss: 0.051 Epoch 11 Batch 190/269 - Train Accuracy: 0.942, Validation Accuracy: 0.943, Loss: 0.055 Epoch 11 Batch 191/269 - Train Accuracy: 0.931, Validation Accuracy: 0.947, Loss: 0.054 Epoch 11 Batch 192/269 - Train Accuracy: 0.942, Validation Accuracy: 0.943, Loss: 0.054 Epoch 11 Batch 193/269 - Train Accuracy: 0.941, Validation Accuracy: 0.945, Loss: 0.056 Epoch 11 Batch 194/269 - Train Accuracy: 0.934, Validation Accuracy: 0.946, Loss: 0.057 Epoch 11 Batch 195/269 - Train Accuracy: 0.944, Validation Accuracy: 0.945, Loss: 0.056 Epoch 11 Batch 196/269 - Train Accuracy: 0.935, Validation Accuracy: 0.945, Loss: 0.052 Epoch 11 Batch 197/269 - Train Accuracy: 0.939, Validation Accuracy: 0.947, Loss: 0.056 Epoch 11 Batch 198/269 - Train Accuracy: 0.944, Validation Accuracy: 0.945, Loss: 0.057 Epoch 11 Batch 199/269 - Train Accuracy: 0.942, Validation Accuracy: 0.943, Loss: 0.060 Epoch 11 Batch 200/269 - Train Accuracy: 0.937, Validation Accuracy: 0.945, Loss: 0.055 Epoch 11 Batch 201/269 - Train Accuracy: 0.939, Validation Accuracy: 0.947, Loss: 0.059 Epoch 11 Batch 202/269 - Train Accuracy: 0.945, Validation Accuracy: 0.949, Loss: 0.055 Epoch 11 Batch 203/269 - Train Accuracy: 0.935, Validation Accuracy: 0.949, Loss: 0.060 Epoch 11 Batch 204/269 - Train Accuracy: 0.946, Validation Accuracy: 0.945, Loss: 0.057 Epoch 11 Batch 205/269 - Train Accuracy: 0.947, Validation Accuracy: 0.948, Loss: 0.051 Epoch 11 Batch 206/269 - Train Accuracy: 0.928, 
Validation Accuracy: 0.945, Loss: 0.067 Epoch 11 Batch 207/269 - Train Accuracy: 0.942, Validation Accuracy: 0.947, Loss: 0.053 Epoch 11 Batch 208/269 - Train Accuracy: 0.948, Validation Accuracy: 0.946, Loss: 0.061 Epoch 11 Batch 209/269 - Train Accuracy: 0.953, Validation Accuracy: 0.947, Loss: 0.051 Epoch 11 Batch 210/269 - Train Accuracy: 0.946, Validation Accuracy: 0.946, Loss: 0.049 Epoch 11 Batch 211/269 - Train Accuracy: 0.945, Validation Accuracy: 0.948, Loss: 0.060 Epoch 11 Batch 212/269 - Train Accuracy: 0.943, Validation Accuracy: 0.949, Loss: 0.063 Epoch 11 Batch 213/269 - Train Accuracy: 0.932, Validation Accuracy: 0.941, Loss: 0.056 Epoch 11 Batch 214/269 - Train Accuracy: 0.944, Validation Accuracy: 0.941, Loss: 0.056 Epoch 11 Batch 215/269 - Train Accuracy: 0.942, Validation Accuracy: 0.943, Loss: 0.055 Epoch 11 Batch 216/269 - Train Accuracy: 0.935, Validation Accuracy: 0.945, Loss: 0.067 Epoch 11 Batch 217/269 - Train Accuracy: 0.937, Validation Accuracy: 0.945, Loss: 0.061 Epoch 11 Batch 218/269 - Train Accuracy: 0.949, Validation Accuracy: 0.945, Loss: 0.053 Epoch 11 Batch 219/269 - Train Accuracy: 0.952, Validation Accuracy: 0.943, Loss: 0.056 Epoch 11 Batch 220/269 - Train Accuracy: 0.932, Validation Accuracy: 0.942, Loss: 0.052 Epoch 11 Batch 221/269 - Train Accuracy: 0.945, Validation Accuracy: 0.942, Loss: 0.061 Epoch 11 Batch 222/269 - Train Accuracy: 0.951, Validation Accuracy: 0.946, Loss: 0.050 Epoch 11 Batch 223/269 - Train Accuracy: 0.935, Validation Accuracy: 0.949, Loss: 0.053 Epoch 11 Batch 224/269 - Train Accuracy: 0.935, Validation Accuracy: 0.946, Loss: 0.063 Epoch 11 Batch 225/269 - Train Accuracy: 0.940, Validation Accuracy: 0.946, Loss: 0.054 Epoch 11 Batch 226/269 - Train Accuracy: 0.946, Validation Accuracy: 0.946, Loss: 0.060 Epoch 11 Batch 227/269 - Train Accuracy: 0.947, Validation Accuracy: 0.941, Loss: 0.065 Epoch 11 Batch 228/269 - Train Accuracy: 0.949, Validation Accuracy: 0.938, Loss: 0.054 Epoch 11 Batch 229/269 
- Train Accuracy: 0.941, Validation Accuracy: 0.937, Loss: 0.056 Epoch 11 Batch 230/269 - Train Accuracy: 0.952, Validation Accuracy: 0.948, Loss: 0.054 Epoch 11 Batch 231/269 - Train Accuracy: 0.933, Validation Accuracy: 0.945, Loss: 0.053 Epoch 11 Batch 232/269 - Train Accuracy: 0.938, Validation Accuracy: 0.942, Loss: 0.056 Epoch 11 Batch 233/269 - Train Accuracy: 0.954, Validation Accuracy: 0.945, Loss: 0.061 Epoch 11 Batch 234/269 - Train Accuracy: 0.952, Validation Accuracy: 0.944, Loss: 0.055 Epoch 11 Batch 235/269 - Train Accuracy: 0.966, Validation Accuracy: 0.941, Loss: 0.046 Epoch 11 Batch 236/269 - Train Accuracy: 0.932, Validation Accuracy: 0.942, Loss: 0.054 Epoch 11 Batch 237/269 - Train Accuracy: 0.949, Validation Accuracy: 0.948, Loss: 0.054 Epoch 11 Batch 238/269 - Train Accuracy: 0.936, Validation Accuracy: 0.945, Loss: 0.058 Epoch 11 Batch 239/269 - Train Accuracy: 0.947, Validation Accuracy: 0.945, Loss: 0.054 Epoch 11 Batch 240/269 - Train Accuracy: 0.946, Validation Accuracy: 0.945, Loss: 0.052 Epoch 11 Batch 241/269 - Train Accuracy: 0.933, Validation Accuracy: 0.942, Loss: 0.062 Epoch 11 Batch 242/269 - Train Accuracy: 0.950, Validation Accuracy: 0.941, Loss: 0.051 Epoch 11 Batch 243/269 - Train Accuracy: 0.953, Validation Accuracy: 0.943, Loss: 0.050 Epoch 11 Batch 244/269 - Train Accuracy: 0.934, Validation Accuracy: 0.947, Loss: 0.054 Epoch 11 Batch 245/269 - Train Accuracy: 0.939, Validation Accuracy: 0.947, Loss: 0.057 Epoch 11 Batch 246/269 - Train Accuracy: 0.930, Validation Accuracy: 0.950, Loss: 0.060 Epoch 11 Batch 247/269 - Train Accuracy: 0.947, Validation Accuracy: 0.950, Loss: 0.053 Epoch 11 Batch 248/269 - Train Accuracy: 0.950, Validation Accuracy: 0.944, Loss: 0.051 Epoch 11 Batch 249/269 - Train Accuracy: 0.955, Validation Accuracy: 0.943, Loss: 0.049 Epoch 11 Batch 250/269 - Train Accuracy: 0.944, Validation Accuracy: 0.945, Loss: 0.056 Epoch 11 Batch 251/269 - Train Accuracy: 0.962, Validation Accuracy: 0.943, Loss: 
0.050 Epoch 11 Batch 252/269 - Train Accuracy: 0.952, Validation Accuracy: 0.946, Loss: 0.046 Epoch 11 Batch 253/269 - Train Accuracy: 0.939, Validation Accuracy: 0.945, Loss: 0.057 Epoch 11 Batch 254/269 - Train Accuracy: 0.939, Validation Accuracy: 0.947, Loss: 0.056 Epoch 11 Batch 255/269 - Train Accuracy: 0.941, Validation Accuracy: 0.947, Loss: 0.054 Epoch 11 Batch 256/269 - Train Accuracy: 0.937, Validation Accuracy: 0.945, Loss: 0.055 Epoch 11 Batch 257/269 - Train Accuracy: 0.936, Validation Accuracy: 0.944, Loss: 0.064 Epoch 11 Batch 258/269 - Train Accuracy: 0.939, Validation Accuracy: 0.946, Loss: 0.052 Epoch 11 Batch 259/269 - Train Accuracy: 0.962, Validation Accuracy: 0.946, Loss: 0.054 Epoch 11 Batch 260/269 - Train Accuracy: 0.942, Validation Accuracy: 0.948, Loss: 0.059 Epoch 11 Batch 261/269 - Train Accuracy: 0.952, Validation Accuracy: 0.950, Loss: 0.056 Epoch 11 Batch 262/269 - Train Accuracy: 0.946, Validation Accuracy: 0.949, Loss: 0.056 Epoch 11 Batch 263/269 - Train Accuracy: 0.944, Validation Accuracy: 0.950, Loss: 0.054 Epoch 11 Batch 264/269 - Train Accuracy: 0.929, Validation Accuracy: 0.953, Loss: 0.056 Epoch 11 Batch 265/269 - Train Accuracy: 0.945, Validation Accuracy: 0.952, Loss: 0.051 Epoch 11 Batch 266/269 - Train Accuracy: 0.952, Validation Accuracy: 0.953, Loss: 0.049 Epoch 11 Batch 267/269 - Train Accuracy: 0.951, Validation Accuracy: 0.956, Loss: 0.060 Epoch 12 Batch 0/269 - Train Accuracy: 0.959, Validation Accuracy: 0.953, Loss: 0.061 Epoch 12 Batch 1/269 - Train Accuracy: 0.956, Validation Accuracy: 0.952, Loss: 0.054 Epoch 12 Batch 2/269 - Train Accuracy: 0.948, Validation Accuracy: 0.954, Loss: 0.056 Epoch 12 Batch 3/269 - Train Accuracy: 0.946, Validation Accuracy: 0.955, Loss: 0.054 Epoch 12 Batch 4/269 - Train Accuracy: 0.934, Validation Accuracy: 0.954, Loss: 0.055 Epoch 12 Batch 5/269 - Train Accuracy: 0.946, Validation Accuracy: 0.951, Loss: 0.058 Epoch 12 Batch 6/269 - Train Accuracy: 0.955, Validation Accuracy: 
0.948, Loss: 0.054 Epoch 12 Batch 7/269 - Train Accuracy: 0.943, Validation Accuracy: 0.945, Loss: 0.053 Epoch 12 Batch 8/269 - Train Accuracy: 0.953, Validation Accuracy: 0.947, Loss: 0.059 Epoch 12 Batch 9/269 - Train Accuracy: 0.941, Validation Accuracy: 0.944, Loss: 0.059 Epoch 12 Batch 10/269 - Train Accuracy: 0.949, Validation Accuracy: 0.944, Loss: 0.050 Epoch 12 Batch 11/269 - Train Accuracy: 0.945, Validation Accuracy: 0.942, Loss: 0.062 Epoch 12 Batch 12/269 - Train Accuracy: 0.936, Validation Accuracy: 0.943, Loss: 0.058 Epoch 12 Batch 13/269 - Train Accuracy: 0.947, Validation Accuracy: 0.944, Loss: 0.049 Epoch 12 Batch 14/269 - Train Accuracy: 0.952, Validation Accuracy: 0.947, Loss: 0.049 Epoch 12 Batch 15/269 - Train Accuracy: 0.954, Validation Accuracy: 0.946, Loss: 0.041 Epoch 12 Batch 16/269 - Train Accuracy: 0.938, Validation Accuracy: 0.949, Loss: 0.059 Epoch 12 Batch 17/269 - Train Accuracy: 0.951, Validation Accuracy: 0.946, Loss: 0.045 Epoch 12 Batch 18/269 - Train Accuracy: 0.936, Validation Accuracy: 0.950, Loss: 0.055 Epoch 12 Batch 19/269 - Train Accuracy: 0.946, Validation Accuracy: 0.952, Loss: 0.051 Epoch 12 Batch 20/269 - Train Accuracy: 0.942, Validation Accuracy: 0.956, Loss: 0.050 Epoch 12 Batch 21/269 - Train Accuracy: 0.926, Validation Accuracy: 0.953, Loss: 0.063 Epoch 12 Batch 22/269 - Train Accuracy: 0.957, Validation Accuracy: 0.952, Loss: 0.052 Epoch 12 Batch 23/269 - Train Accuracy: 0.944, Validation Accuracy: 0.949, Loss: 0.062 Epoch 12 Batch 24/269 - Train Accuracy: 0.936, Validation Accuracy: 0.951, Loss: 0.052 Epoch 12 Batch 25/269 - Train Accuracy: 0.946, Validation Accuracy: 0.956, Loss: 0.058 Epoch 12 Batch 26/269 - Train Accuracy: 0.944, Validation Accuracy: 0.955, Loss: 0.048 Epoch 12 Batch 27/269 - Train Accuracy: 0.938, Validation Accuracy: 0.955, Loss: 0.051 Epoch 12 Batch 28/269 - Train Accuracy: 0.933, Validation Accuracy: 0.952, Loss: 0.057 Epoch 12 Batch 29/269 - Train Accuracy: 0.943, Validation Accuracy: 
0.955, Loss: 0.058 Epoch 12 Batch 30/269 - Train Accuracy: 0.946, Validation Accuracy: 0.949, Loss: 0.053 Epoch 12 Batch 31/269 - Train Accuracy: 0.949, Validation Accuracy: 0.948, Loss: 0.049 Epoch 12 Batch 32/269 - Train Accuracy: 0.954, Validation Accuracy: 0.945, Loss: 0.048 Epoch 12 Batch 33/269 - Train Accuracy: 0.943, Validation Accuracy: 0.944, Loss: 0.044 Epoch 12 Batch 34/269 - Train Accuracy: 0.941, Validation Accuracy: 0.946, Loss: 0.049 Epoch 12 Batch 35/269 - Train Accuracy: 0.949, Validation Accuracy: 0.945, Loss: 0.062 Epoch 12 Batch 36/269 - Train Accuracy: 0.945, Validation Accuracy: 0.946, Loss: 0.057 Epoch 12 Batch 37/269 - Train Accuracy: 0.936, Validation Accuracy: 0.949, Loss: 0.055 Epoch 12 Batch 38/269 - Train Accuracy: 0.936, Validation Accuracy: 0.949, Loss: 0.052 Epoch 12 Batch 39/269 - Train Accuracy: 0.946, Validation Accuracy: 0.949, Loss: 0.048 Epoch 12 Batch 40/269 - Train Accuracy: 0.925, Validation Accuracy: 0.948, Loss: 0.056 Epoch 12 Batch 41/269 - Train Accuracy: 0.928, Validation Accuracy: 0.950, Loss: 0.056 Epoch 12 Batch 42/269 - Train Accuracy: 0.952, Validation Accuracy: 0.950, Loss: 0.045 Epoch 12 Batch 43/269 - Train Accuracy: 0.943, Validation Accuracy: 0.948, Loss: 0.056 Epoch 12 Batch 44/269 - Train Accuracy: 0.949, Validation Accuracy: 0.948, Loss: 0.055 Epoch 12 Batch 45/269 - Train Accuracy: 0.941, Validation Accuracy: 0.952, Loss: 0.057 Epoch 12 Batch 46/269 - Train Accuracy: 0.938, Validation Accuracy: 0.950, Loss: 0.051 Epoch 12 Batch 47/269 - Train Accuracy: 0.949, Validation Accuracy: 0.951, Loss: 0.047 Epoch 12 Batch 48/269 - Train Accuracy: 0.957, Validation Accuracy: 0.950, Loss: 0.048 Epoch 12 Batch 49/269 - Train Accuracy: 0.947, Validation Accuracy: 0.951, Loss: 0.053 Epoch 12 Batch 50/269 - Train Accuracy: 0.944, Validation Accuracy: 0.953, Loss: 0.059 Epoch 12 Batch 51/269 - Train Accuracy: 0.955, Validation Accuracy: 0.946, Loss: 0.048 Epoch 12 Batch 52/269 - Train Accuracy: 0.947, Validation 
Accuracy: 0.947, Loss: 0.043 Epoch 12 Batch 53/269 - Train Accuracy: 0.942, Validation Accuracy: 0.947, Loss: 0.055 Epoch 12 Batch 54/269 - Train Accuracy: 0.958, Validation Accuracy: 0.947, Loss: 0.050 Epoch 12 Batch 55/269 - Train Accuracy: 0.949, Validation Accuracy: 0.950, Loss: 0.049 Epoch 12 Batch 56/269 - Train Accuracy: 0.939, Validation Accuracy: 0.945, Loss: 0.053 Epoch 12 Batch 57/269 - Train Accuracy: 0.940, Validation Accuracy: 0.949, Loss: 0.059 Epoch 12 Batch 58/269 - Train Accuracy: 0.949, Validation Accuracy: 0.951, Loss: 0.054 Epoch 12 Batch 59/269 - Train Accuracy: 0.958, Validation Accuracy: 0.952, Loss: 0.044 Epoch 12 Batch 60/269 - Train Accuracy: 0.955, Validation Accuracy: 0.947, Loss: 0.047 Epoch 12 Batch 61/269 - Train Accuracy: 0.944, Validation Accuracy: 0.947, Loss: 0.052 Epoch 12 Batch 62/269 - Train Accuracy: 0.942, Validation Accuracy: 0.951, Loss: 0.057 Epoch 12 Batch 63/269 - Train Accuracy: 0.937, Validation Accuracy: 0.948, Loss: 0.061 Epoch 12 Batch 64/269 - Train Accuracy: 0.952, Validation Accuracy: 0.947, Loss: 0.047 Epoch 12 Batch 65/269 - Train Accuracy: 0.946, Validation Accuracy: 0.948, Loss: 0.050 Epoch 12 Batch 66/269 - Train Accuracy: 0.937, Validation Accuracy: 0.948, Loss: 0.054 Epoch 12 Batch 67/269 - Train Accuracy: 0.937, Validation Accuracy: 0.948, Loss: 0.058 Epoch 12 Batch 68/269 - Train Accuracy: 0.949, Validation Accuracy: 0.948, Loss: 0.057 Epoch 12 Batch 69/269 - Train Accuracy: 0.928, Validation Accuracy: 0.946, Loss: 0.064 Epoch 12 Batch 70/269 - Train Accuracy: 0.954, Validation Accuracy: 0.948, Loss: 0.056 Epoch 12 Batch 71/269 - Train Accuracy: 0.944, Validation Accuracy: 0.948, Loss: 0.057 Epoch 12 Batch 72/269 - Train Accuracy: 0.946, Validation Accuracy: 0.945, Loss: 0.058 Epoch 12 Batch 73/269 - Train Accuracy: 0.937, Validation Accuracy: 0.945, Loss: 0.059 Epoch 12 Batch 74/269 - Train Accuracy: 0.956, Validation Accuracy: 0.943, Loss: 0.047 Epoch 12 Batch 75/269 - Train Accuracy: 0.945, 
Validation Accuracy: 0.946, Loss: 0.055
Epoch 12 Batch 76/269 - Train Accuracy: 0.939, Validation Accuracy: 0.946, Loss: 0.050
...
Epoch 12 Batch 267/269 - Train Accuracy: 0.952, Validation Accuracy: 0.952, Loss: 0.052
Epoch 13 Batch 0/269 - Train Accuracy: 0.963, Validation Accuracy: 0.952, Loss: 0.054
...
Epoch 13 Batch 267/269 - Train Accuracy: 0.963, Validation Accuracy: 0.959, Loss: 0.045
Epoch 14 Batch 0/269 - Train Accuracy: 0.966, Validation Accuracy: 0.958, Loss: 0.050
...
Epoch 14 Batch 16/269 - Train Accuracy: 0.946, Validation Accuracy: 0.961, Loss: 0.052
Epoch 14 Batch 17/269 - Train Accuracy: 0.957,
Validation Accuracy: 0.952, Loss: 0.039 Epoch 14 Batch 18/269 - Train Accuracy: 0.947, Validation Accuracy: 0.956, Loss: 0.052 Epoch 14 Batch 19/269 - Train Accuracy: 0.956, Validation Accuracy: 0.958, Loss: 0.036 Epoch 14 Batch 20/269 - Train Accuracy: 0.952, Validation Accuracy: 0.958, Loss: 0.045 Epoch 14 Batch 21/269 - Train Accuracy: 0.931, Validation Accuracy: 0.959, Loss: 0.054 Epoch 14 Batch 22/269 - Train Accuracy: 0.963, Validation Accuracy: 0.963, Loss: 0.041 Epoch 14 Batch 23/269 - Train Accuracy: 0.952, Validation Accuracy: 0.961, Loss: 0.045 Epoch 14 Batch 24/269 - Train Accuracy: 0.954, Validation Accuracy: 0.960, Loss: 0.045 Epoch 14 Batch 25/269 - Train Accuracy: 0.956, Validation Accuracy: 0.961, Loss: 0.049 Epoch 14 Batch 26/269 - Train Accuracy: 0.949, Validation Accuracy: 0.957, Loss: 0.041 Epoch 14 Batch 27/269 - Train Accuracy: 0.952, Validation Accuracy: 0.957, Loss: 0.039 Epoch 14 Batch 28/269 - Train Accuracy: 0.940, Validation Accuracy: 0.961, Loss: 0.047 Epoch 14 Batch 29/269 - Train Accuracy: 0.950, Validation Accuracy: 0.960, Loss: 0.052 Epoch 14 Batch 30/269 - Train Accuracy: 0.948, Validation Accuracy: 0.958, Loss: 0.044 Epoch 14 Batch 31/269 - Train Accuracy: 0.956, Validation Accuracy: 0.956, Loss: 0.043 Epoch 14 Batch 32/269 - Train Accuracy: 0.962, Validation Accuracy: 0.957, Loss: 0.037 Epoch 14 Batch 33/269 - Train Accuracy: 0.950, Validation Accuracy: 0.956, Loss: 0.041 Epoch 14 Batch 34/269 - Train Accuracy: 0.947, Validation Accuracy: 0.954, Loss: 0.043 Epoch 14 Batch 35/269 - Train Accuracy: 0.958, Validation Accuracy: 0.953, Loss: 0.052 Epoch 14 Batch 36/269 - Train Accuracy: 0.949, Validation Accuracy: 0.953, Loss: 0.043 Epoch 14 Batch 37/269 - Train Accuracy: 0.950, Validation Accuracy: 0.951, Loss: 0.050 Epoch 14 Batch 38/269 - Train Accuracy: 0.944, Validation Accuracy: 0.951, Loss: 0.041 Epoch 14 Batch 39/269 - Train Accuracy: 0.959, Validation Accuracy: 0.958, Loss: 0.041 Epoch 14 Batch 40/269 - Train Accuracy: 
0.935, Validation Accuracy: 0.960, Loss: 0.049 Epoch 14 Batch 41/269 - Train Accuracy: 0.945, Validation Accuracy: 0.957, Loss: 0.044 Epoch 14 Batch 42/269 - Train Accuracy: 0.963, Validation Accuracy: 0.959, Loss: 0.035 Epoch 14 Batch 43/269 - Train Accuracy: 0.952, Validation Accuracy: 0.958, Loss: 0.047 Epoch 14 Batch 44/269 - Train Accuracy: 0.949, Validation Accuracy: 0.960, Loss: 0.044 Epoch 14 Batch 45/269 - Train Accuracy: 0.954, Validation Accuracy: 0.959, Loss: 0.048 Epoch 14 Batch 46/269 - Train Accuracy: 0.949, Validation Accuracy: 0.958, Loss: 0.039 Epoch 14 Batch 47/269 - Train Accuracy: 0.959, Validation Accuracy: 0.960, Loss: 0.038 Epoch 14 Batch 48/269 - Train Accuracy: 0.965, Validation Accuracy: 0.960, Loss: 0.040 Epoch 14 Batch 49/269 - Train Accuracy: 0.954, Validation Accuracy: 0.960, Loss: 0.045 Epoch 14 Batch 50/269 - Train Accuracy: 0.953, Validation Accuracy: 0.959, Loss: 0.048 Epoch 14 Batch 51/269 - Train Accuracy: 0.958, Validation Accuracy: 0.961, Loss: 0.041 Epoch 14 Batch 52/269 - Train Accuracy: 0.950, Validation Accuracy: 0.956, Loss: 0.040 Epoch 14 Batch 53/269 - Train Accuracy: 0.948, Validation Accuracy: 0.951, Loss: 0.048 Epoch 14 Batch 54/269 - Train Accuracy: 0.963, Validation Accuracy: 0.951, Loss: 0.039 Epoch 14 Batch 55/269 - Train Accuracy: 0.953, Validation Accuracy: 0.953, Loss: 0.041 Epoch 14 Batch 56/269 - Train Accuracy: 0.950, Validation Accuracy: 0.953, Loss: 0.047 Epoch 14 Batch 57/269 - Train Accuracy: 0.949, Validation Accuracy: 0.958, Loss: 0.051 Epoch 14 Batch 58/269 - Train Accuracy: 0.955, Validation Accuracy: 0.953, Loss: 0.042 Epoch 14 Batch 59/269 - Train Accuracy: 0.963, Validation Accuracy: 0.955, Loss: 0.035 Epoch 14 Batch 60/269 - Train Accuracy: 0.965, Validation Accuracy: 0.962, Loss: 0.047 Epoch 14 Batch 61/269 - Train Accuracy: 0.946, Validation Accuracy: 0.959, Loss: 0.045 Epoch 14 Batch 62/269 - Train Accuracy: 0.949, Validation Accuracy: 0.959, Loss: 0.049 Epoch 14 Batch 63/269 - Train 
Accuracy: 0.953, Validation Accuracy: 0.956, Loss: 0.048 Epoch 14 Batch 64/269 - Train Accuracy: 0.957, Validation Accuracy: 0.956, Loss: 0.041 Epoch 14 Batch 65/269 - Train Accuracy: 0.953, Validation Accuracy: 0.954, Loss: 0.044 Epoch 14 Batch 66/269 - Train Accuracy: 0.939, Validation Accuracy: 0.954, Loss: 0.046 Epoch 14 Batch 67/269 - Train Accuracy: 0.948, Validation Accuracy: 0.958, Loss: 0.048 Epoch 14 Batch 68/269 - Train Accuracy: 0.954, Validation Accuracy: 0.953, Loss: 0.047 Epoch 14 Batch 69/269 - Train Accuracy: 0.937, Validation Accuracy: 0.950, Loss: 0.052 Epoch 14 Batch 70/269 - Train Accuracy: 0.956, Validation Accuracy: 0.952, Loss: 0.044 Epoch 14 Batch 71/269 - Train Accuracy: 0.955, Validation Accuracy: 0.955, Loss: 0.050 Epoch 14 Batch 72/269 - Train Accuracy: 0.948, Validation Accuracy: 0.954, Loss: 0.047 Epoch 14 Batch 73/269 - Train Accuracy: 0.940, Validation Accuracy: 0.957, Loss: 0.050 Epoch 14 Batch 74/269 - Train Accuracy: 0.967, Validation Accuracy: 0.953, Loss: 0.039 Epoch 14 Batch 75/269 - Train Accuracy: 0.949, Validation Accuracy: 0.955, Loss: 0.049 Epoch 14 Batch 76/269 - Train Accuracy: 0.953, Validation Accuracy: 0.957, Loss: 0.043 Epoch 14 Batch 77/269 - Train Accuracy: 0.959, Validation Accuracy: 0.956, Loss: 0.042 Epoch 14 Batch 78/269 - Train Accuracy: 0.961, Validation Accuracy: 0.960, Loss: 0.045 Epoch 14 Batch 79/269 - Train Accuracy: 0.949, Validation Accuracy: 0.959, Loss: 0.044 Epoch 14 Batch 80/269 - Train Accuracy: 0.959, Validation Accuracy: 0.956, Loss: 0.040 Epoch 14 Batch 81/269 - Train Accuracy: 0.946, Validation Accuracy: 0.955, Loss: 0.051 Epoch 14 Batch 82/269 - Train Accuracy: 0.956, Validation Accuracy: 0.958, Loss: 0.039 Epoch 14 Batch 83/269 - Train Accuracy: 0.940, Validation Accuracy: 0.956, Loss: 0.056 Epoch 14 Batch 84/269 - Train Accuracy: 0.958, Validation Accuracy: 0.958, Loss: 0.040 Epoch 14 Batch 85/269 - Train Accuracy: 0.952, Validation Accuracy: 0.959, Loss: 0.040 Epoch 14 Batch 86/269 - 
Train Accuracy: 0.956, Validation Accuracy: 0.961, Loss: 0.040 Epoch 14 Batch 87/269 - Train Accuracy: 0.963, Validation Accuracy: 0.959, Loss: 0.043 Epoch 14 Batch 88/269 - Train Accuracy: 0.947, Validation Accuracy: 0.953, Loss: 0.044 Epoch 14 Batch 89/269 - Train Accuracy: 0.960, Validation Accuracy: 0.954, Loss: 0.045 Epoch 14 Batch 90/269 - Train Accuracy: 0.967, Validation Accuracy: 0.961, Loss: 0.044 Epoch 14 Batch 91/269 - Train Accuracy: 0.962, Validation Accuracy: 0.961, Loss: 0.042 Epoch 14 Batch 92/269 - Train Accuracy: 0.967, Validation Accuracy: 0.961, Loss: 0.039 Epoch 14 Batch 93/269 - Train Accuracy: 0.968, Validation Accuracy: 0.961, Loss: 0.037 Epoch 14 Batch 94/269 - Train Accuracy: 0.948, Validation Accuracy: 0.960, Loss: 0.053 Epoch 14 Batch 95/269 - Train Accuracy: 0.957, Validation Accuracy: 0.960, Loss: 0.041 Epoch 14 Batch 96/269 - Train Accuracy: 0.937, Validation Accuracy: 0.956, Loss: 0.048 Epoch 14 Batch 97/269 - Train Accuracy: 0.952, Validation Accuracy: 0.958, Loss: 0.047 Epoch 14 Batch 98/269 - Train Accuracy: 0.957, Validation Accuracy: 0.961, Loss: 0.044 Epoch 14 Batch 99/269 - Train Accuracy: 0.957, Validation Accuracy: 0.959, Loss: 0.043 Epoch 14 Batch 100/269 - Train Accuracy: 0.955, Validation Accuracy: 0.960, Loss: 0.048 Epoch 14 Batch 101/269 - Train Accuracy: 0.951, Validation Accuracy: 0.958, Loss: 0.051 Epoch 14 Batch 102/269 - Train Accuracy: 0.955, Validation Accuracy: 0.958, Loss: 0.043 Epoch 14 Batch 103/269 - Train Accuracy: 0.960, Validation Accuracy: 0.960, Loss: 0.047 Epoch 14 Batch 104/269 - Train Accuracy: 0.954, Validation Accuracy: 0.961, Loss: 0.044 Epoch 14 Batch 105/269 - Train Accuracy: 0.962, Validation Accuracy: 0.963, Loss: 0.043 Epoch 14 Batch 106/269 - Train Accuracy: 0.962, Validation Accuracy: 0.964, Loss: 0.036 Epoch 14 Batch 107/269 - Train Accuracy: 0.966, Validation Accuracy: 0.966, Loss: 0.044 Epoch 14 Batch 108/269 - Train Accuracy: 0.963, Validation Accuracy: 0.964, Loss: 0.041 Epoch 14 
Batch 109/269 - Train Accuracy: 0.955, Validation Accuracy: 0.963, Loss: 0.047 Epoch 14 Batch 110/269 - Train Accuracy: 0.949, Validation Accuracy: 0.959, Loss: 0.037 Epoch 14 Batch 111/269 - Train Accuracy: 0.947, Validation Accuracy: 0.958, Loss: 0.048 Epoch 14 Batch 112/269 - Train Accuracy: 0.947, Validation Accuracy: 0.957, Loss: 0.043 Epoch 14 Batch 113/269 - Train Accuracy: 0.949, Validation Accuracy: 0.952, Loss: 0.044 Epoch 14 Batch 114/269 - Train Accuracy: 0.947, Validation Accuracy: 0.953, Loss: 0.048 Epoch 14 Batch 115/269 - Train Accuracy: 0.950, Validation Accuracy: 0.954, Loss: 0.043 Epoch 14 Batch 116/269 - Train Accuracy: 0.961, Validation Accuracy: 0.951, Loss: 0.043 Epoch 14 Batch 117/269 - Train Accuracy: 0.948, Validation Accuracy: 0.952, Loss: 0.037 Epoch 14 Batch 118/269 - Train Accuracy: 0.955, Validation Accuracy: 0.959, Loss: 0.040 Epoch 14 Batch 119/269 - Train Accuracy: 0.954, Validation Accuracy: 0.958, Loss: 0.046 Epoch 14 Batch 120/269 - Train Accuracy: 0.957, Validation Accuracy: 0.954, Loss: 0.041 Epoch 14 Batch 121/269 - Train Accuracy: 0.956, Validation Accuracy: 0.954, Loss: 0.041 Epoch 14 Batch 122/269 - Train Accuracy: 0.951, Validation Accuracy: 0.960, Loss: 0.044 Epoch 14 Batch 123/269 - Train Accuracy: 0.947, Validation Accuracy: 0.956, Loss: 0.044 Epoch 14 Batch 124/269 - Train Accuracy: 0.958, Validation Accuracy: 0.954, Loss: 0.037 Epoch 14 Batch 125/269 - Train Accuracy: 0.951, Validation Accuracy: 0.955, Loss: 0.040 Epoch 14 Batch 126/269 - Train Accuracy: 0.949, Validation Accuracy: 0.959, Loss: 0.046 Epoch 14 Batch 127/269 - Train Accuracy: 0.961, Validation Accuracy: 0.959, Loss: 0.042 Epoch 14 Batch 128/269 - Train Accuracy: 0.960, Validation Accuracy: 0.959, Loss: 0.041 Epoch 14 Batch 129/269 - Train Accuracy: 0.950, Validation Accuracy: 0.964, Loss: 0.046 Epoch 14 Batch 130/269 - Train Accuracy: 0.961, Validation Accuracy: 0.960, Loss: 0.042 Epoch 14 Batch 131/269 - Train Accuracy: 0.946, Validation Accuracy: 
0.955, Loss: 0.044 Epoch 14 Batch 132/269 - Train Accuracy: 0.942, Validation Accuracy: 0.953, Loss: 0.051 Epoch 14 Batch 133/269 - Train Accuracy: 0.954, Validation Accuracy: 0.959, Loss: 0.037 Epoch 14 Batch 134/269 - Train Accuracy: 0.956, Validation Accuracy: 0.958, Loss: 0.045 Epoch 14 Batch 135/269 - Train Accuracy: 0.956, Validation Accuracy: 0.958, Loss: 0.042 Epoch 14 Batch 136/269 - Train Accuracy: 0.941, Validation Accuracy: 0.960, Loss: 0.049 Epoch 14 Batch 137/269 - Train Accuracy: 0.949, Validation Accuracy: 0.961, Loss: 0.051 Epoch 14 Batch 138/269 - Train Accuracy: 0.953, Validation Accuracy: 0.961, Loss: 0.039 Epoch 14 Batch 139/269 - Train Accuracy: 0.955, Validation Accuracy: 0.961, Loss: 0.038 Epoch 14 Batch 140/269 - Train Accuracy: 0.954, Validation Accuracy: 0.962, Loss: 0.047 Epoch 14 Batch 141/269 - Train Accuracy: 0.954, Validation Accuracy: 0.959, Loss: 0.048 Epoch 14 Batch 142/269 - Train Accuracy: 0.952, Validation Accuracy: 0.965, Loss: 0.041 Epoch 14 Batch 143/269 - Train Accuracy: 0.957, Validation Accuracy: 0.961, Loss: 0.037 Epoch 14 Batch 144/269 - Train Accuracy: 0.965, Validation Accuracy: 0.961, Loss: 0.039 Epoch 14 Batch 145/269 - Train Accuracy: 0.967, Validation Accuracy: 0.959, Loss: 0.041 Epoch 14 Batch 146/269 - Train Accuracy: 0.953, Validation Accuracy: 0.958, Loss: 0.041 Epoch 14 Batch 147/269 - Train Accuracy: 0.950, Validation Accuracy: 0.957, Loss: 0.049 Epoch 14 Batch 148/269 - Train Accuracy: 0.959, Validation Accuracy: 0.956, Loss: 0.040 Epoch 14 Batch 149/269 - Train Accuracy: 0.950, Validation Accuracy: 0.951, Loss: 0.052 Epoch 14 Batch 150/269 - Train Accuracy: 0.954, Validation Accuracy: 0.956, Loss: 0.041 Epoch 14 Batch 151/269 - Train Accuracy: 0.956, Validation Accuracy: 0.957, Loss: 0.046 Epoch 14 Batch 152/269 - Train Accuracy: 0.957, Validation Accuracy: 0.956, Loss: 0.041 Epoch 14 Batch 153/269 - Train Accuracy: 0.962, Validation Accuracy: 0.960, Loss: 0.041 Epoch 14 Batch 154/269 - Train Accuracy: 
0.965, Validation Accuracy: 0.959, Loss: 0.041 Epoch 14 Batch 155/269 - Train Accuracy: 0.951, Validation Accuracy: 0.957, Loss: 0.039 Epoch 14 Batch 156/269 - Train Accuracy: 0.953, Validation Accuracy: 0.960, Loss: 0.046 Epoch 14 Batch 157/269 - Train Accuracy: 0.953, Validation Accuracy: 0.958, Loss: 0.036 Epoch 14 Batch 158/269 - Train Accuracy: 0.953, Validation Accuracy: 0.958, Loss: 0.042 Epoch 14 Batch 159/269 - Train Accuracy: 0.945, Validation Accuracy: 0.960, Loss: 0.045 Epoch 14 Batch 160/269 - Train Accuracy: 0.944, Validation Accuracy: 0.954, Loss: 0.042 Epoch 14 Batch 161/269 - Train Accuracy: 0.963, Validation Accuracy: 0.955, Loss: 0.038 Epoch 14 Batch 162/269 - Train Accuracy: 0.958, Validation Accuracy: 0.959, Loss: 0.045 Epoch 14 Batch 163/269 - Train Accuracy: 0.957, Validation Accuracy: 0.963, Loss: 0.039 Epoch 14 Batch 164/269 - Train Accuracy: 0.962, Validation Accuracy: 0.959, Loss: 0.039 Epoch 14 Batch 165/269 - Train Accuracy: 0.959, Validation Accuracy: 0.960, Loss: 0.041 Epoch 14 Batch 166/269 - Train Accuracy: 0.964, Validation Accuracy: 0.961, Loss: 0.040 Epoch 14 Batch 167/269 - Train Accuracy: 0.958, Validation Accuracy: 0.957, Loss: 0.042 Epoch 14 Batch 168/269 - Train Accuracy: 0.961, Validation Accuracy: 0.957, Loss: 0.043 Epoch 14 Batch 169/269 - Train Accuracy: 0.944, Validation Accuracy: 0.963, Loss: 0.043 Epoch 14 Batch 170/269 - Train Accuracy: 0.954, Validation Accuracy: 0.963, Loss: 0.039 Epoch 14 Batch 171/269 - Train Accuracy: 0.957, Validation Accuracy: 0.960, Loss: 0.046 Epoch 14 Batch 172/269 - Train Accuracy: 0.959, Validation Accuracy: 0.963, Loss: 0.044 Epoch 14 Batch 173/269 - Train Accuracy: 0.956, Validation Accuracy: 0.964, Loss: 0.041 Epoch 14 Batch 174/269 - Train Accuracy: 0.959, Validation Accuracy: 0.961, Loss: 0.036 Epoch 14 Batch 175/269 - Train Accuracy: 0.946, Validation Accuracy: 0.963, Loss: 0.057 Epoch 14 Batch 176/269 - Train Accuracy: 0.950, Validation Accuracy: 0.966, Loss: 0.045 Epoch 14 Batch 
177/269 - Train Accuracy: 0.959, Validation Accuracy: 0.961, Loss: 0.043 Epoch 14 Batch 178/269 - Train Accuracy: 0.964, Validation Accuracy: 0.963, Loss: 0.037 Epoch 14 Batch 179/269 - Train Accuracy: 0.950, Validation Accuracy: 0.963, Loss: 0.043 Epoch 14 Batch 180/269 - Train Accuracy: 0.966, Validation Accuracy: 0.961, Loss: 0.043 Epoch 14 Batch 181/269 - Train Accuracy: 0.955, Validation Accuracy: 0.961, Loss: 0.048 Epoch 14 Batch 182/269 - Train Accuracy: 0.957, Validation Accuracy: 0.958, Loss: 0.040 Epoch 14 Batch 183/269 - Train Accuracy: 0.964, Validation Accuracy: 0.960, Loss: 0.034 Epoch 14 Batch 184/269 - Train Accuracy: 0.961, Validation Accuracy: 0.959, Loss: 0.043 Epoch 14 Batch 185/269 - Train Accuracy: 0.965, Validation Accuracy: 0.958, Loss: 0.043 Epoch 14 Batch 186/269 - Train Accuracy: 0.961, Validation Accuracy: 0.959, Loss: 0.037 Epoch 14 Batch 187/269 - Train Accuracy: 0.952, Validation Accuracy: 0.960, Loss: 0.041 Epoch 14 Batch 188/269 - Train Accuracy: 0.961, Validation Accuracy: 0.963, Loss: 0.042 Epoch 14 Batch 189/269 - Train Accuracy: 0.958, Validation Accuracy: 0.961, Loss: 0.039 Epoch 14 Batch 190/269 - Train Accuracy: 0.955, Validation Accuracy: 0.964, Loss: 0.043 Epoch 14 Batch 191/269 - Train Accuracy: 0.952, Validation Accuracy: 0.962, Loss: 0.040 Epoch 14 Batch 192/269 - Train Accuracy: 0.964, Validation Accuracy: 0.959, Loss: 0.042 Epoch 14 Batch 193/269 - Train Accuracy: 0.959, Validation Accuracy: 0.961, Loss: 0.040 Epoch 14 Batch 194/269 - Train Accuracy: 0.945, Validation Accuracy: 0.961, Loss: 0.045 Epoch 14 Batch 195/269 - Train Accuracy: 0.955, Validation Accuracy: 0.960, Loss: 0.040 Epoch 14 Batch 196/269 - Train Accuracy: 0.955, Validation Accuracy: 0.960, Loss: 0.044 Epoch 14 Batch 197/269 - Train Accuracy: 0.965, Validation Accuracy: 0.963, Loss: 0.047 Epoch 14 Batch 198/269 - Train Accuracy: 0.959, Validation Accuracy: 0.962, Loss: 0.046 Epoch 14 Batch 199/269 - Train Accuracy: 0.958, Validation Accuracy: 0.966, 
Loss: 0.051 Epoch 14 Batch 200/269 - Train Accuracy: 0.964, Validation Accuracy: 0.964, Loss: 0.039 Epoch 14 Batch 201/269 - Train Accuracy: 0.963, Validation Accuracy: 0.961, Loss: 0.046 Epoch 14 Batch 202/269 - Train Accuracy: 0.960, Validation Accuracy: 0.957, Loss: 0.041 Epoch 14 Batch 203/269 - Train Accuracy: 0.954, Validation Accuracy: 0.956, Loss: 0.049 Epoch 14 Batch 204/269 - Train Accuracy: 0.963, Validation Accuracy: 0.958, Loss: 0.043 Epoch 14 Batch 205/269 - Train Accuracy: 0.963, Validation Accuracy: 0.957, Loss: 0.040 Epoch 14 Batch 206/269 - Train Accuracy: 0.934, Validation Accuracy: 0.959, Loss: 0.050 Epoch 14 Batch 207/269 - Train Accuracy: 0.957, Validation Accuracy: 0.955, Loss: 0.043 Epoch 14 Batch 208/269 - Train Accuracy: 0.954, Validation Accuracy: 0.956, Loss: 0.044 Epoch 14 Batch 209/269 - Train Accuracy: 0.959, Validation Accuracy: 0.954, Loss: 0.041 Epoch 14 Batch 210/269 - Train Accuracy: 0.948, Validation Accuracy: 0.955, Loss: 0.039 Epoch 14 Batch 211/269 - Train Accuracy: 0.960, Validation Accuracy: 0.955, Loss: 0.041 Epoch 14 Batch 212/269 - Train Accuracy: 0.957, Validation Accuracy: 0.959, Loss: 0.049 Epoch 14 Batch 213/269 - Train Accuracy: 0.945, Validation Accuracy: 0.954, Loss: 0.040 Epoch 14 Batch 214/269 - Train Accuracy: 0.958, Validation Accuracy: 0.953, Loss: 0.043 Epoch 14 Batch 215/269 - Train Accuracy: 0.948, Validation Accuracy: 0.962, Loss: 0.043 Epoch 14 Batch 216/269 - Train Accuracy: 0.945, Validation Accuracy: 0.958, Loss: 0.055 Epoch 14 Batch 217/269 - Train Accuracy: 0.952, Validation Accuracy: 0.957, Loss: 0.044 Epoch 14 Batch 218/269 - Train Accuracy: 0.962, Validation Accuracy: 0.955, Loss: 0.039 Epoch 14 Batch 219/269 - Train Accuracy: 0.965, Validation Accuracy: 0.956, Loss: 0.045 Epoch 14 Batch 220/269 - Train Accuracy: 0.950, Validation Accuracy: 0.956, Loss: 0.042 Epoch 14 Batch 221/269 - Train Accuracy: 0.951, Validation Accuracy: 0.957, Loss: 0.043 Epoch 14 Batch 222/269 - Train Accuracy: 0.965, 
Validation Accuracy: 0.959, Loss: 0.036 Epoch 14 Batch 223/269 - Train Accuracy: 0.949, Validation Accuracy: 0.959, Loss: 0.039 Epoch 14 Batch 224/269 - Train Accuracy: 0.959, Validation Accuracy: 0.956, Loss: 0.047 Epoch 14 Batch 225/269 - Train Accuracy: 0.952, Validation Accuracy: 0.958, Loss: 0.039 Epoch 14 Batch 226/269 - Train Accuracy: 0.962, Validation Accuracy: 0.956, Loss: 0.047 Epoch 14 Batch 227/269 - Train Accuracy: 0.959, Validation Accuracy: 0.954, Loss: 0.047 Epoch 14 Batch 228/269 - Train Accuracy: 0.965, Validation Accuracy: 0.954, Loss: 0.039 Epoch 14 Batch 229/269 - Train Accuracy: 0.959, Validation Accuracy: 0.957, Loss: 0.039 Epoch 14 Batch 230/269 - Train Accuracy: 0.962, Validation Accuracy: 0.959, Loss: 0.041 Epoch 14 Batch 231/269 - Train Accuracy: 0.953, Validation Accuracy: 0.958, Loss: 0.042 Epoch 14 Batch 232/269 - Train Accuracy: 0.950, Validation Accuracy: 0.957, Loss: 0.039 Epoch 14 Batch 233/269 - Train Accuracy: 0.964, Validation Accuracy: 0.957, Loss: 0.046 Epoch 14 Batch 234/269 - Train Accuracy: 0.956, Validation Accuracy: 0.958, Loss: 0.040 Epoch 14 Batch 235/269 - Train Accuracy: 0.984, Validation Accuracy: 0.956, Loss: 0.036 Epoch 14 Batch 236/269 - Train Accuracy: 0.944, Validation Accuracy: 0.957, Loss: 0.042 Epoch 14 Batch 237/269 - Train Accuracy: 0.971, Validation Accuracy: 0.961, Loss: 0.040 Epoch 14 Batch 238/269 - Train Accuracy: 0.961, Validation Accuracy: 0.958, Loss: 0.043 Epoch 14 Batch 239/269 - Train Accuracy: 0.956, Validation Accuracy: 0.958, Loss: 0.040 Epoch 14 Batch 240/269 - Train Accuracy: 0.961, Validation Accuracy: 0.960, Loss: 0.033 Epoch 14 Batch 241/269 - Train Accuracy: 0.950, Validation Accuracy: 0.959, Loss: 0.043 Epoch 14 Batch 242/269 - Train Accuracy: 0.964, Validation Accuracy: 0.960, Loss: 0.041 Epoch 14 Batch 243/269 - Train Accuracy: 0.961, Validation Accuracy: 0.962, Loss: 0.035 Epoch 14 Batch 244/269 - Train Accuracy: 0.951, Validation Accuracy: 0.963, Loss: 0.040 Epoch 14 Batch 245/269 
- Train Accuracy: 0.953, Validation Accuracy: 0.963, Loss: 0.042 Epoch 14 Batch 246/269 - Train Accuracy: 0.957, Validation Accuracy: 0.962, Loss: 0.043 Epoch 14 Batch 247/269 - Train Accuracy: 0.961, Validation Accuracy: 0.961, Loss: 0.040 Epoch 14 Batch 248/269 - Train Accuracy: 0.957, Validation Accuracy: 0.962, Loss: 0.039 Epoch 14 Batch 249/269 - Train Accuracy: 0.964, Validation Accuracy: 0.962, Loss: 0.037 Epoch 14 Batch 250/269 - Train Accuracy: 0.959, Validation Accuracy: 0.960, Loss: 0.041 Epoch 14 Batch 251/269 - Train Accuracy: 0.969, Validation Accuracy: 0.959, Loss: 0.037 Epoch 14 Batch 252/269 - Train Accuracy: 0.964, Validation Accuracy: 0.961, Loss: 0.035 Epoch 14 Batch 253/269 - Train Accuracy: 0.951, Validation Accuracy: 0.962, Loss: 0.043 Epoch 14 Batch 254/269 - Train Accuracy: 0.956, Validation Accuracy: 0.960, Loss: 0.041 Epoch 14 Batch 255/269 - Train Accuracy: 0.960, Validation Accuracy: 0.963, Loss: 0.039 Epoch 14 Batch 256/269 - Train Accuracy: 0.953, Validation Accuracy: 0.959, Loss: 0.041 Epoch 14 Batch 257/269 - Train Accuracy: 0.950, Validation Accuracy: 0.961, Loss: 0.044 Epoch 14 Batch 258/269 - Train Accuracy: 0.962, Validation Accuracy: 0.962, Loss: 0.039 Epoch 14 Batch 259/269 - Train Accuracy: 0.960, Validation Accuracy: 0.961, Loss: 0.038 Epoch 14 Batch 260/269 - Train Accuracy: 0.958, Validation Accuracy: 0.960, Loss: 0.042 Epoch 14 Batch 261/269 - Train Accuracy: 0.962, Validation Accuracy: 0.961, Loss: 0.041 Epoch 14 Batch 262/269 - Train Accuracy: 0.950, Validation Accuracy: 0.960, Loss: 0.043 Epoch 14 Batch 263/269 - Train Accuracy: 0.965, Validation Accuracy: 0.960, Loss: 0.040 Epoch 14 Batch 264/269 - Train Accuracy: 0.949, Validation Accuracy: 0.962, Loss: 0.043 Epoch 14 Batch 265/269 - Train Accuracy: 0.959, Validation Accuracy: 0.957, Loss: 0.039 Epoch 14 Batch 266/269 - Train Accuracy: 0.959, Validation Accuracy: 0.959, Loss: 0.036 Epoch 14 Batch 267/269 - Train Accuracy: 0.962, Validation Accuracy: 0.955, Loss: 
0.045 Epoch 15 Batch 0/269 - Train Accuracy: 0.962, Validation Accuracy: 0.953, Loss: 0.046 Epoch 15 Batch 1/269 - Train Accuracy: 0.970, Validation Accuracy: 0.953, Loss: 0.039 Epoch 15 Batch 2/269 - Train Accuracy: 0.957, Validation Accuracy: 0.955, Loss: 0.041 Epoch 15 Batch 3/269 - Train Accuracy: 0.956, Validation Accuracy: 0.956, Loss: 0.041 Epoch 15 Batch 4/269 - Train Accuracy: 0.941, Validation Accuracy: 0.959, Loss: 0.040 Epoch 15 Batch 5/269 - Train Accuracy: 0.956, Validation Accuracy: 0.955, Loss: 0.040 Epoch 15 Batch 6/269 - Train Accuracy: 0.960, Validation Accuracy: 0.953, Loss: 0.038 Epoch 15 Batch 7/269 - Train Accuracy: 0.964, Validation Accuracy: 0.956, Loss: 0.036 Epoch 15 Batch 8/269 - Train Accuracy: 0.967, Validation Accuracy: 0.960, Loss: 0.042 Epoch 15 Batch 9/269 - Train Accuracy: 0.953, Validation Accuracy: 0.958, Loss: 0.045 Epoch 15 Batch 10/269 - Train Accuracy: 0.965, Validation Accuracy: 0.958, Loss: 0.038 Epoch 15 Batch 11/269 - Train Accuracy: 0.966, Validation Accuracy: 0.958, Loss: 0.046 Epoch 15 Batch 12/269 - Train Accuracy: 0.940, Validation Accuracy: 0.957, Loss: 0.048 Epoch 15 Batch 13/269 - Train Accuracy: 0.958, Validation Accuracy: 0.955, Loss: 0.039 Epoch 15 Batch 14/269 - Train Accuracy: 0.960, Validation Accuracy: 0.959, Loss: 0.037 Epoch 15 Batch 15/269 - Train Accuracy: 0.963, Validation Accuracy: 0.961, Loss: 0.033 Epoch 15 Batch 16/269 - Train Accuracy: 0.956, Validation Accuracy: 0.961, Loss: 0.048 Epoch 15 Batch 17/269 - Train Accuracy: 0.962, Validation Accuracy: 0.961, Loss: 0.034 Epoch 15 Batch 18/269 - Train Accuracy: 0.949, Validation Accuracy: 0.961, Loss: 0.040 Epoch 15 Batch 19/269 - Train Accuracy: 0.963, Validation Accuracy: 0.959, Loss: 0.037 Epoch 15 Batch 20/269 - Train Accuracy: 0.964, Validation Accuracy: 0.960, Loss: 0.038 Epoch 15 Batch 21/269 - Train Accuracy: 0.944, Validation Accuracy: 0.963, Loss: 0.046 Epoch 15 Batch 22/269 - Train Accuracy: 0.964, Validation Accuracy: 0.962, Loss: 0.036 
Epoch 15 Batch 23/269 - Train Accuracy: 0.955, Validation Accuracy: 0.963, Loss: 0.042 Epoch 15 Batch 24/269 - Train Accuracy: 0.957, Validation Accuracy: 0.964, Loss: 0.039 Epoch 15 Batch 25/269 - Train Accuracy: 0.959, Validation Accuracy: 0.961, Loss: 0.045 Epoch 15 Batch 26/269 - Train Accuracy: 0.954, Validation Accuracy: 0.961, Loss: 0.036 Epoch 15 Batch 27/269 - Train Accuracy: 0.954, Validation Accuracy: 0.961, Loss: 0.038 Epoch 15 Batch 28/269 - Train Accuracy: 0.945, Validation Accuracy: 0.962, Loss: 0.047 Epoch 15 Batch 29/269 - Train Accuracy: 0.954, Validation Accuracy: 0.962, Loss: 0.039 Epoch 15 Batch 30/269 - Train Accuracy: 0.956, Validation Accuracy: 0.963, Loss: 0.039 Epoch 15 Batch 31/269 - Train Accuracy: 0.962, Validation Accuracy: 0.960, Loss: 0.039 Epoch 15 Batch 32/269 - Train Accuracy: 0.971, Validation Accuracy: 0.965, Loss: 0.035 Epoch 15 Batch 33/269 - Train Accuracy: 0.967, Validation Accuracy: 0.961, Loss: 0.035 Epoch 15 Batch 34/269 - Train Accuracy: 0.959, Validation Accuracy: 0.965, Loss: 0.038 Epoch 15 Batch 35/269 - Train Accuracy: 0.961, Validation Accuracy: 0.962, Loss: 0.049 Epoch 15 Batch 36/269 - Train Accuracy: 0.956, Validation Accuracy: 0.952, Loss: 0.040 Epoch 15 Batch 37/269 - Train Accuracy: 0.956, Validation Accuracy: 0.954, Loss: 0.040 Epoch 15 Batch 38/269 - Train Accuracy: 0.955, Validation Accuracy: 0.957, Loss: 0.040 Epoch 15 Batch 39/269 - Train Accuracy: 0.954, Validation Accuracy: 0.957, Loss: 0.037 Epoch 15 Batch 40/269 - Train Accuracy: 0.942, Validation Accuracy: 0.957, Loss: 0.043 Epoch 15 Batch 41/269 - Train Accuracy: 0.954, Validation Accuracy: 0.960, Loss: 0.043 Epoch 15 Batch 42/269 - Train Accuracy: 0.963, Validation Accuracy: 0.962, Loss: 0.033 Epoch 15 Batch 43/269 - Train Accuracy: 0.963, Validation Accuracy: 0.961, Loss: 0.043 Epoch 15 Batch 44/269 - Train Accuracy: 0.954, Validation Accuracy: 0.961, Loss: 0.039 Epoch 15 Batch 45/269 - Train Accuracy: 0.955, Validation Accuracy: 0.957, Loss: 
0.042
    Epoch 15 Batch 46/269 - Train Accuracy: 0.960, Validation Accuracy: 0.958, Loss: 0.037
    Epoch 15 Batch 47/269 - Train Accuracy: 0.958, Validation Accuracy: 0.960, Loss: 0.038
    Epoch 15 Batch 48/269 - Train Accuracy: 0.964, Validation Accuracy: 0.962, Loss: 0.032
    ... (per-batch log lines omitted; Epochs 15-16 hold steady at train accuracy ~0.94-0.98, validation accuracy ~0.95-0.97, loss ~0.03-0.05) ...
    Epoch 16 Batch 230/269 - Train Accuracy: 0.968, Validation Accuracy: 0.969, Loss: 0.035
    Epoch 16 Batch 231/269 - Train Accuracy: 0.955, Validation Accuracy: 0.966, Loss: 0.037
    Epoch 16 Batch 232/269 - Train Accuracy: 0.968, Validation Accuracy: 0.966, Loss: 0.035
    Epoch 16 Batch 233/269
- Train Accuracy: 0.969, Validation Accuracy: 0.970, Loss: 0.041 Epoch 16 Batch 234/269 - Train Accuracy: 0.965, Validation Accuracy: 0.975, Loss: 0.036 Epoch 16 Batch 235/269 - Train Accuracy: 0.983, Validation Accuracy: 0.969, Loss: 0.025 Epoch 16 Batch 236/269 - Train Accuracy: 0.950, Validation Accuracy: 0.974, Loss: 0.039 Epoch 16 Batch 237/269 - Train Accuracy: 0.969, Validation Accuracy: 0.971, Loss: 0.036 Epoch 16 Batch 238/269 - Train Accuracy: 0.964, Validation Accuracy: 0.971, Loss: 0.036 Epoch 16 Batch 239/269 - Train Accuracy: 0.968, Validation Accuracy: 0.971, Loss: 0.031 Epoch 16 Batch 240/269 - Train Accuracy: 0.968, Validation Accuracy: 0.973, Loss: 0.034 Epoch 16 Batch 241/269 - Train Accuracy: 0.958, Validation Accuracy: 0.970, Loss: 0.040 Epoch 16 Batch 242/269 - Train Accuracy: 0.970, Validation Accuracy: 0.973, Loss: 0.035 Epoch 16 Batch 243/269 - Train Accuracy: 0.964, Validation Accuracy: 0.970, Loss: 0.033 Epoch 16 Batch 244/269 - Train Accuracy: 0.953, Validation Accuracy: 0.966, Loss: 0.039 Epoch 16 Batch 245/269 - Train Accuracy: 0.962, Validation Accuracy: 0.969, Loss: 0.034 Epoch 16 Batch 246/269 - Train Accuracy: 0.957, Validation Accuracy: 0.970, Loss: 0.039 Epoch 16 Batch 247/269 - Train Accuracy: 0.970, Validation Accuracy: 0.970, Loss: 0.034 Epoch 16 Batch 248/269 - Train Accuracy: 0.969, Validation Accuracy: 0.967, Loss: 0.034 Epoch 16 Batch 249/269 - Train Accuracy: 0.969, Validation Accuracy: 0.967, Loss: 0.034 Epoch 16 Batch 250/269 - Train Accuracy: 0.958, Validation Accuracy: 0.967, Loss: 0.034 Epoch 16 Batch 251/269 - Train Accuracy: 0.978, Validation Accuracy: 0.969, Loss: 0.033 Epoch 16 Batch 252/269 - Train Accuracy: 0.968, Validation Accuracy: 0.972, Loss: 0.028 Epoch 16 Batch 253/269 - Train Accuracy: 0.962, Validation Accuracy: 0.970, Loss: 0.038 Epoch 16 Batch 254/269 - Train Accuracy: 0.965, Validation Accuracy: 0.966, Loss: 0.035 Epoch 16 Batch 255/269 - Train Accuracy: 0.963, Validation Accuracy: 0.966, Loss: 
0.036 Epoch 16 Batch 256/269 - Train Accuracy: 0.966, Validation Accuracy: 0.970, Loss: 0.031 Epoch 16 Batch 257/269 - Train Accuracy: 0.958, Validation Accuracy: 0.968, Loss: 0.039 Epoch 16 Batch 258/269 - Train Accuracy: 0.965, Validation Accuracy: 0.966, Loss: 0.036 Epoch 16 Batch 259/269 - Train Accuracy: 0.965, Validation Accuracy: 0.966, Loss: 0.034 Epoch 16 Batch 260/269 - Train Accuracy: 0.966, Validation Accuracy: 0.970, Loss: 0.038 Epoch 16 Batch 261/269 - Train Accuracy: 0.970, Validation Accuracy: 0.970, Loss: 0.034 Epoch 16 Batch 262/269 - Train Accuracy: 0.959, Validation Accuracy: 0.967, Loss: 0.036 Epoch 16 Batch 263/269 - Train Accuracy: 0.971, Validation Accuracy: 0.969, Loss: 0.036 Epoch 16 Batch 264/269 - Train Accuracy: 0.956, Validation Accuracy: 0.967, Loss: 0.039 Epoch 16 Batch 265/269 - Train Accuracy: 0.961, Validation Accuracy: 0.966, Loss: 0.033 Epoch 16 Batch 266/269 - Train Accuracy: 0.961, Validation Accuracy: 0.969, Loss: 0.031 Epoch 16 Batch 267/269 - Train Accuracy: 0.970, Validation Accuracy: 0.969, Loss: 0.037 Epoch 17 Batch 0/269 - Train Accuracy: 0.972, Validation Accuracy: 0.965, Loss: 0.040 Epoch 17 Batch 1/269 - Train Accuracy: 0.974, Validation Accuracy: 0.965, Loss: 0.032 Epoch 17 Batch 2/269 - Train Accuracy: 0.961, Validation Accuracy: 0.965, Loss: 0.035 Epoch 17 Batch 3/269 - Train Accuracy: 0.960, Validation Accuracy: 0.965, Loss: 0.040 Epoch 17 Batch 4/269 - Train Accuracy: 0.953, Validation Accuracy: 0.960, Loss: 0.036 Epoch 17 Batch 5/269 - Train Accuracy: 0.959, Validation Accuracy: 0.955, Loss: 0.038 Epoch 17 Batch 6/269 - Train Accuracy: 0.964, Validation Accuracy: 0.956, Loss: 0.032 Epoch 17 Batch 7/269 - Train Accuracy: 0.969, Validation Accuracy: 0.959, Loss: 0.032 Epoch 17 Batch 8/269 - Train Accuracy: 0.966, Validation Accuracy: 0.969, Loss: 0.038 Epoch 17 Batch 9/269 - Train Accuracy: 0.957, Validation Accuracy: 0.963, Loss: 0.035 Epoch 17 Batch 10/269 - Train Accuracy: 0.966, Validation Accuracy: 0.966, 
Loss: 0.031 Epoch 17 Batch 11/269 - Train Accuracy: 0.966, Validation Accuracy: 0.968, Loss: 0.036 Epoch 17 Batch 12/269 - Train Accuracy: 0.951, Validation Accuracy: 0.963, Loss: 0.039 Epoch 17 Batch 13/269 - Train Accuracy: 0.956, Validation Accuracy: 0.962, Loss: 0.032 Epoch 17 Batch 14/269 - Train Accuracy: 0.968, Validation Accuracy: 0.962, Loss: 0.032 Epoch 17 Batch 15/269 - Train Accuracy: 0.968, Validation Accuracy: 0.964, Loss: 0.029 Epoch 17 Batch 16/269 - Train Accuracy: 0.956, Validation Accuracy: 0.962, Loss: 0.040 Epoch 17 Batch 17/269 - Train Accuracy: 0.963, Validation Accuracy: 0.961, Loss: 0.029 Epoch 17 Batch 18/269 - Train Accuracy: 0.960, Validation Accuracy: 0.963, Loss: 0.034 Epoch 17 Batch 19/269 - Train Accuracy: 0.966, Validation Accuracy: 0.963, Loss: 0.031 Epoch 17 Batch 20/269 - Train Accuracy: 0.966, Validation Accuracy: 0.959, Loss: 0.034 Epoch 17 Batch 21/269 - Train Accuracy: 0.960, Validation Accuracy: 0.966, Loss: 0.038 Epoch 17 Batch 22/269 - Train Accuracy: 0.966, Validation Accuracy: 0.964, Loss: 0.034 Epoch 17 Batch 23/269 - Train Accuracy: 0.958, Validation Accuracy: 0.962, Loss: 0.041 Epoch 17 Batch 24/269 - Train Accuracy: 0.964, Validation Accuracy: 0.962, Loss: 0.035 Epoch 17 Batch 25/269 - Train Accuracy: 0.965, Validation Accuracy: 0.956, Loss: 0.037 Epoch 17 Batch 26/269 - Train Accuracy: 0.955, Validation Accuracy: 0.964, Loss: 0.034 Epoch 17 Batch 27/269 - Train Accuracy: 0.962, Validation Accuracy: 0.961, Loss: 0.032 Epoch 17 Batch 28/269 - Train Accuracy: 0.951, Validation Accuracy: 0.962, Loss: 0.038 Epoch 17 Batch 29/269 - Train Accuracy: 0.963, Validation Accuracy: 0.962, Loss: 0.034 Epoch 17 Batch 30/269 - Train Accuracy: 0.963, Validation Accuracy: 0.961, Loss: 0.035 Epoch 17 Batch 31/269 - Train Accuracy: 0.967, Validation Accuracy: 0.962, Loss: 0.034 Epoch 17 Batch 32/269 - Train Accuracy: 0.967, Validation Accuracy: 0.958, Loss: 0.027 Epoch 17 Batch 33/269 - Train Accuracy: 0.964, Validation Accuracy: 
0.964, Loss: 0.031 Epoch 17 Batch 34/269 - Train Accuracy: 0.964, Validation Accuracy: 0.963, Loss: 0.037 Epoch 17 Batch 35/269 - Train Accuracy: 0.965, Validation Accuracy: 0.957, Loss: 0.039 Epoch 17 Batch 36/269 - Train Accuracy: 0.968, Validation Accuracy: 0.962, Loss: 0.036 Epoch 17 Batch 37/269 - Train Accuracy: 0.961, Validation Accuracy: 0.966, Loss: 0.037 Epoch 17 Batch 38/269 - Train Accuracy: 0.963, Validation Accuracy: 0.963, Loss: 0.029 Epoch 17 Batch 39/269 - Train Accuracy: 0.968, Validation Accuracy: 0.962, Loss: 0.036 Epoch 17 Batch 40/269 - Train Accuracy: 0.958, Validation Accuracy: 0.965, Loss: 0.038 Epoch 17 Batch 41/269 - Train Accuracy: 0.956, Validation Accuracy: 0.968, Loss: 0.036 Epoch 17 Batch 42/269 - Train Accuracy: 0.970, Validation Accuracy: 0.968, Loss: 0.031 Epoch 17 Batch 43/269 - Train Accuracy: 0.969, Validation Accuracy: 0.969, Loss: 0.034 Epoch 17 Batch 44/269 - Train Accuracy: 0.964, Validation Accuracy: 0.968, Loss: 0.034 Epoch 17 Batch 45/269 - Train Accuracy: 0.959, Validation Accuracy: 0.967, Loss: 0.035 Epoch 17 Batch 46/269 - Train Accuracy: 0.964, Validation Accuracy: 0.968, Loss: 0.030 Epoch 17 Batch 47/269 - Train Accuracy: 0.964, Validation Accuracy: 0.968, Loss: 0.027 Epoch 17 Batch 48/269 - Train Accuracy: 0.968, Validation Accuracy: 0.967, Loss: 0.031 Epoch 17 Batch 49/269 - Train Accuracy: 0.970, Validation Accuracy: 0.968, Loss: 0.033 Epoch 17 Batch 50/269 - Train Accuracy: 0.963, Validation Accuracy: 0.970, Loss: 0.039 Epoch 17 Batch 51/269 - Train Accuracy: 0.970, Validation Accuracy: 0.969, Loss: 0.030 Epoch 17 Batch 52/269 - Train Accuracy: 0.960, Validation Accuracy: 0.968, Loss: 0.029 Epoch 17 Batch 53/269 - Train Accuracy: 0.965, Validation Accuracy: 0.969, Loss: 0.037 Epoch 17 Batch 54/269 - Train Accuracy: 0.975, Validation Accuracy: 0.967, Loss: 0.029 Epoch 17 Batch 55/269 - Train Accuracy: 0.963, Validation Accuracy: 0.967, Loss: 0.035 Epoch 17 Batch 56/269 - Train Accuracy: 0.962, Validation 
Accuracy: 0.966, Loss: 0.033 Epoch 17 Batch 57/269 - Train Accuracy: 0.963, Validation Accuracy: 0.965, Loss: 0.038 Epoch 17 Batch 58/269 - Train Accuracy: 0.973, Validation Accuracy: 0.964, Loss: 0.035 Epoch 17 Batch 59/269 - Train Accuracy: 0.974, Validation Accuracy: 0.968, Loss: 0.025 Epoch 17 Batch 60/269 - Train Accuracy: 0.968, Validation Accuracy: 0.967, Loss: 0.034 Epoch 17 Batch 61/269 - Train Accuracy: 0.952, Validation Accuracy: 0.965, Loss: 0.031 Epoch 17 Batch 62/269 - Train Accuracy: 0.960, Validation Accuracy: 0.965, Loss: 0.040 Epoch 17 Batch 63/269 - Train Accuracy: 0.956, Validation Accuracy: 0.964, Loss: 0.039 Epoch 17 Batch 64/269 - Train Accuracy: 0.971, Validation Accuracy: 0.964, Loss: 0.030 Epoch 17 Batch 65/269 - Train Accuracy: 0.966, Validation Accuracy: 0.966, Loss: 0.033 Epoch 17 Batch 66/269 - Train Accuracy: 0.957, Validation Accuracy: 0.966, Loss: 0.037 Epoch 17 Batch 67/269 - Train Accuracy: 0.964, Validation Accuracy: 0.966, Loss: 0.040 Epoch 17 Batch 68/269 - Train Accuracy: 0.968, Validation Accuracy: 0.963, Loss: 0.035 Epoch 17 Batch 69/269 - Train Accuracy: 0.960, Validation Accuracy: 0.964, Loss: 0.043 Epoch 17 Batch 70/269 - Train Accuracy: 0.972, Validation Accuracy: 0.967, Loss: 0.036 Epoch 17 Batch 71/269 - Train Accuracy: 0.962, Validation Accuracy: 0.967, Loss: 0.039 Epoch 17 Batch 72/269 - Train Accuracy: 0.965, Validation Accuracy: 0.969, Loss: 0.037 Epoch 17 Batch 73/269 - Train Accuracy: 0.955, Validation Accuracy: 0.969, Loss: 0.035 Epoch 17 Batch 74/269 - Train Accuracy: 0.973, Validation Accuracy: 0.968, Loss: 0.033 Epoch 17 Batch 75/269 - Train Accuracy: 0.960, Validation Accuracy: 0.966, Loss: 0.039 Epoch 17 Batch 76/269 - Train Accuracy: 0.958, Validation Accuracy: 0.966, Loss: 0.033 Epoch 17 Batch 77/269 - Train Accuracy: 0.967, Validation Accuracy: 0.965, Loss: 0.031 Epoch 17 Batch 78/269 - Train Accuracy: 0.973, Validation Accuracy: 0.968, Loss: 0.032 Epoch 17 Batch 79/269 - Train Accuracy: 0.963, 
Validation Accuracy: 0.968, Loss: 0.032 Epoch 17 Batch 80/269 - Train Accuracy: 0.965, Validation Accuracy: 0.966, Loss: 0.034 Epoch 17 Batch 81/269 - Train Accuracy: 0.961, Validation Accuracy: 0.964, Loss: 0.041 Epoch 17 Batch 82/269 - Train Accuracy: 0.965, Validation Accuracy: 0.964, Loss: 0.032 Epoch 17 Batch 83/269 - Train Accuracy: 0.948, Validation Accuracy: 0.961, Loss: 0.046 Epoch 17 Batch 84/269 - Train Accuracy: 0.964, Validation Accuracy: 0.960, Loss: 0.033 Epoch 17 Batch 85/269 - Train Accuracy: 0.965, Validation Accuracy: 0.963, Loss: 0.034 Epoch 17 Batch 86/269 - Train Accuracy: 0.963, Validation Accuracy: 0.965, Loss: 0.029 Epoch 17 Batch 87/269 - Train Accuracy: 0.968, Validation Accuracy: 0.965, Loss: 0.033 Epoch 17 Batch 88/269 - Train Accuracy: 0.961, Validation Accuracy: 0.969, Loss: 0.038 Epoch 17 Batch 89/269 - Train Accuracy: 0.975, Validation Accuracy: 0.966, Loss: 0.033 Epoch 17 Batch 90/269 - Train Accuracy: 0.964, Validation Accuracy: 0.965, Loss: 0.034 Epoch 17 Batch 91/269 - Train Accuracy: 0.971, Validation Accuracy: 0.965, Loss: 0.035 Epoch 17 Batch 92/269 - Train Accuracy: 0.975, Validation Accuracy: 0.965, Loss: 0.030 Epoch 17 Batch 93/269 - Train Accuracy: 0.971, Validation Accuracy: 0.964, Loss: 0.030 Epoch 17 Batch 94/269 - Train Accuracy: 0.966, Validation Accuracy: 0.962, Loss: 0.039 Epoch 17 Batch 95/269 - Train Accuracy: 0.968, Validation Accuracy: 0.966, Loss: 0.034 Epoch 17 Batch 96/269 - Train Accuracy: 0.955, Validation Accuracy: 0.971, Loss: 0.040 Epoch 17 Batch 97/269 - Train Accuracy: 0.958, Validation Accuracy: 0.965, Loss: 0.040 Epoch 17 Batch 98/269 - Train Accuracy: 0.963, Validation Accuracy: 0.962, Loss: 0.036 Epoch 17 Batch 99/269 - Train Accuracy: 0.964, Validation Accuracy: 0.962, Loss: 0.031 Epoch 17 Batch 100/269 - Train Accuracy: 0.969, Validation Accuracy: 0.963, Loss: 0.039 Epoch 17 Batch 101/269 - Train Accuracy: 0.965, Validation Accuracy: 0.965, Loss: 0.038 Epoch 17 Batch 102/269 - Train Accuracy: 
0.960, Validation Accuracy: 0.967, Loss: 0.033 Epoch 17 Batch 103/269 - Train Accuracy: 0.962, Validation Accuracy: 0.968, Loss: 0.038 Epoch 17 Batch 104/269 - Train Accuracy: 0.966, Validation Accuracy: 0.968, Loss: 0.035 Epoch 17 Batch 105/269 - Train Accuracy: 0.969, Validation Accuracy: 0.968, Loss: 0.036 Epoch 17 Batch 106/269 - Train Accuracy: 0.974, Validation Accuracy: 0.966, Loss: 0.027 Epoch 17 Batch 107/269 - Train Accuracy: 0.965, Validation Accuracy: 0.968, Loss: 0.032 Epoch 17 Batch 108/269 - Train Accuracy: 0.969, Validation Accuracy: 0.970, Loss: 0.033 Epoch 17 Batch 109/269 - Train Accuracy: 0.965, Validation Accuracy: 0.963, Loss: 0.038 Epoch 17 Batch 110/269 - Train Accuracy: 0.956, Validation Accuracy: 0.954, Loss: 0.030 Epoch 17 Batch 111/269 - Train Accuracy: 0.962, Validation Accuracy: 0.960, Loss: 0.037 Epoch 17 Batch 112/269 - Train Accuracy: 0.960, Validation Accuracy: 0.962, Loss: 0.034 Epoch 17 Batch 113/269 - Train Accuracy: 0.960, Validation Accuracy: 0.964, Loss: 0.037 Epoch 17 Batch 114/269 - Train Accuracy: 0.957, Validation Accuracy: 0.962, Loss: 0.037 Epoch 17 Batch 115/269 - Train Accuracy: 0.962, Validation Accuracy: 0.964, Loss: 0.036 Epoch 17 Batch 116/269 - Train Accuracy: 0.968, Validation Accuracy: 0.959, Loss: 0.031 Epoch 17 Batch 117/269 - Train Accuracy: 0.969, Validation Accuracy: 0.962, Loss: 0.036 Epoch 17 Batch 118/269 - Train Accuracy: 0.968, Validation Accuracy: 0.960, Loss: 0.032 Epoch 17 Batch 119/269 - Train Accuracy: 0.961, Validation Accuracy: 0.961, Loss: 0.037 Epoch 17 Batch 120/269 - Train Accuracy: 0.964, Validation Accuracy: 0.960, Loss: 0.036 Epoch 17 Batch 121/269 - Train Accuracy: 0.961, Validation Accuracy: 0.964, Loss: 0.032 Epoch 17 Batch 122/269 - Train Accuracy: 0.962, Validation Accuracy: 0.962, Loss: 0.036 Epoch 17 Batch 123/269 - Train Accuracy: 0.956, Validation Accuracy: 0.965, Loss: 0.036 Epoch 17 Batch 124/269 - Train Accuracy: 0.965, Validation Accuracy: 0.964, Loss: 0.033 Epoch 17 Batch 
125/269 - Train Accuracy: 0.963, Validation Accuracy: 0.965, Loss: 0.029 Epoch 17 Batch 126/269 - Train Accuracy: 0.952, Validation Accuracy: 0.965, Loss: 0.034 Epoch 17 Batch 127/269 - Train Accuracy: 0.975, Validation Accuracy: 0.964, Loss: 0.032 Epoch 17 Batch 128/269 - Train Accuracy: 0.974, Validation Accuracy: 0.963, Loss: 0.032 Epoch 17 Batch 129/269 - Train Accuracy: 0.960, Validation Accuracy: 0.967, Loss: 0.033 Epoch 17 Batch 130/269 - Train Accuracy: 0.963, Validation Accuracy: 0.968, Loss: 0.036 Epoch 17 Batch 131/269 - Train Accuracy: 0.959, Validation Accuracy: 0.970, Loss: 0.036 Epoch 17 Batch 132/269 - Train Accuracy: 0.954, Validation Accuracy: 0.969, Loss: 0.038 Epoch 17 Batch 133/269 - Train Accuracy: 0.960, Validation Accuracy: 0.960, Loss: 0.031 Epoch 17 Batch 134/269 - Train Accuracy: 0.967, Validation Accuracy: 0.963, Loss: 0.037 Epoch 17 Batch 135/269 - Train Accuracy: 0.961, Validation Accuracy: 0.964, Loss: 0.033 Epoch 17 Batch 136/269 - Train Accuracy: 0.949, Validation Accuracy: 0.964, Loss: 0.038 Epoch 17 Batch 137/269 - Train Accuracy: 0.956, Validation Accuracy: 0.965, Loss: 0.043 Epoch 17 Batch 138/269 - Train Accuracy: 0.963, Validation Accuracy: 0.967, Loss: 0.033 Epoch 17 Batch 139/269 - Train Accuracy: 0.964, Validation Accuracy: 0.968, Loss: 0.032 Epoch 17 Batch 140/269 - Train Accuracy: 0.965, Validation Accuracy: 0.968, Loss: 0.036 Epoch 17 Batch 141/269 - Train Accuracy: 0.966, Validation Accuracy: 0.968, Loss: 0.033 Epoch 17 Batch 142/269 - Train Accuracy: 0.960, Validation Accuracy: 0.964, Loss: 0.036 Epoch 17 Batch 143/269 - Train Accuracy: 0.966, Validation Accuracy: 0.966, Loss: 0.029 Epoch 17 Batch 144/269 - Train Accuracy: 0.972, Validation Accuracy: 0.967, Loss: 0.027 Epoch 17 Batch 145/269 - Train Accuracy: 0.971, Validation Accuracy: 0.967, Loss: 0.031 Epoch 17 Batch 146/269 - Train Accuracy: 0.961, Validation Accuracy: 0.970, Loss: 0.033 Epoch 17 Batch 147/269 - Train Accuracy: 0.960, Validation Accuracy: 0.969, 
Loss: 0.034 Epoch 17 Batch 148/269 - Train Accuracy: 0.970, Validation Accuracy: 0.963, Loss: 0.031 Epoch 17 Batch 149/269 - Train Accuracy: 0.961, Validation Accuracy: 0.963, Loss: 0.038 Epoch 17 Batch 150/269 - Train Accuracy: 0.964, Validation Accuracy: 0.966, Loss: 0.035 Epoch 17 Batch 151/269 - Train Accuracy: 0.961, Validation Accuracy: 0.968, Loss: 0.036 Epoch 17 Batch 152/269 - Train Accuracy: 0.969, Validation Accuracy: 0.970, Loss: 0.032 Epoch 17 Batch 153/269 - Train Accuracy: 0.973, Validation Accuracy: 0.969, Loss: 0.031 Epoch 17 Batch 154/269 - Train Accuracy: 0.962, Validation Accuracy: 0.971, Loss: 0.034 Epoch 17 Batch 155/269 - Train Accuracy: 0.958, Validation Accuracy: 0.967, Loss: 0.034 Epoch 17 Batch 156/269 - Train Accuracy: 0.961, Validation Accuracy: 0.964, Loss: 0.039 Epoch 17 Batch 157/269 - Train Accuracy: 0.961, Validation Accuracy: 0.967, Loss: 0.031 Epoch 17 Batch 158/269 - Train Accuracy: 0.955, Validation Accuracy: 0.966, Loss: 0.034 Epoch 17 Batch 159/269 - Train Accuracy: 0.955, Validation Accuracy: 0.963, Loss: 0.033 Epoch 17 Batch 160/269 - Train Accuracy: 0.962, Validation Accuracy: 0.963, Loss: 0.029 Epoch 17 Batch 161/269 - Train Accuracy: 0.967, Validation Accuracy: 0.965, Loss: 0.033 Epoch 17 Batch 162/269 - Train Accuracy: 0.967, Validation Accuracy: 0.965, Loss: 0.032 Epoch 17 Batch 163/269 - Train Accuracy: 0.973, Validation Accuracy: 0.965, Loss: 0.030 Epoch 17 Batch 164/269 - Train Accuracy: 0.971, Validation Accuracy: 0.964, Loss: 0.030 Epoch 17 Batch 165/269 - Train Accuracy: 0.968, Validation Accuracy: 0.968, Loss: 0.031 Epoch 17 Batch 166/269 - Train Accuracy: 0.971, Validation Accuracy: 0.966, Loss: 0.030 Epoch 17 Batch 167/269 - Train Accuracy: 0.964, Validation Accuracy: 0.971, Loss: 0.032 Epoch 17 Batch 168/269 - Train Accuracy: 0.966, Validation Accuracy: 0.967, Loss: 0.034 Epoch 17 Batch 169/269 - Train Accuracy: 0.962, Validation Accuracy: 0.972, Loss: 0.029 Epoch 17 Batch 170/269 - Train Accuracy: 0.961, 
Validation Accuracy: 0.972, Loss: 0.031 Epoch 17 Batch 171/269 - Train Accuracy: 0.973, Validation Accuracy: 0.967, Loss: 0.034 Epoch 17 Batch 172/269 - Train Accuracy: 0.964, Validation Accuracy: 0.963, Loss: 0.036 Epoch 17 Batch 173/269 - Train Accuracy: 0.963, Validation Accuracy: 0.967, Loss: 0.031 Epoch 17 Batch 174/269 - Train Accuracy: 0.972, Validation Accuracy: 0.970, Loss: 0.029 Epoch 17 Batch 175/269 - Train Accuracy: 0.960, Validation Accuracy: 0.968, Loss: 0.044 Epoch 17 Batch 176/269 - Train Accuracy: 0.962, Validation Accuracy: 0.968, Loss: 0.035 Epoch 17 Batch 177/269 - Train Accuracy: 0.969, Validation Accuracy: 0.966, Loss: 0.032 Epoch 17 Batch 178/269 - Train Accuracy: 0.969, Validation Accuracy: 0.968, Loss: 0.030 Epoch 17 Batch 179/269 - Train Accuracy: 0.961, Validation Accuracy: 0.965, Loss: 0.033 Epoch 17 Batch 180/269 - Train Accuracy: 0.976, Validation Accuracy: 0.964, Loss: 0.030 Epoch 17 Batch 181/269 - Train Accuracy: 0.957, Validation Accuracy: 0.966, Loss: 0.037 Epoch 17 Batch 182/269 - Train Accuracy: 0.961, Validation Accuracy: 0.967, Loss: 0.032 Epoch 17 Batch 183/269 - Train Accuracy: 0.971, Validation Accuracy: 0.970, Loss: 0.027 Epoch 17 Batch 184/269 - Train Accuracy: 0.965, Validation Accuracy: 0.970, Loss: 0.033 Epoch 17 Batch 185/269 - Train Accuracy: 0.968, Validation Accuracy: 0.970, Loss: 0.033 Epoch 17 Batch 186/269 - Train Accuracy: 0.959, Validation Accuracy: 0.972, Loss: 0.029 Epoch 17 Batch 187/269 - Train Accuracy: 0.964, Validation Accuracy: 0.971, Loss: 0.033 Epoch 17 Batch 188/269 - Train Accuracy: 0.966, Validation Accuracy: 0.970, Loss: 0.034 Epoch 17 Batch 189/269 - Train Accuracy: 0.971, Validation Accuracy: 0.969, Loss: 0.032 Epoch 17 Batch 190/269 - Train Accuracy: 0.967, Validation Accuracy: 0.963, Loss: 0.036 Epoch 17 Batch 191/269 - Train Accuracy: 0.965, Validation Accuracy: 0.963, Loss: 0.033 Epoch 17 Batch 192/269 - Train Accuracy: 0.965, Validation Accuracy: 0.967, Loss: 0.031 Epoch 17 Batch 193/269 
- Train Accuracy: 0.965, Validation Accuracy: 0.967, Loss: 0.031 Epoch 17 Batch 194/269 - Train Accuracy: 0.959, Validation Accuracy: 0.969, Loss: 0.037 Epoch 17 Batch 195/269 - Train Accuracy: 0.964, Validation Accuracy: 0.965, Loss: 0.029 Epoch 17 Batch 196/269 - Train Accuracy: 0.960, Validation Accuracy: 0.957, Loss: 0.032 Epoch 17 Batch 197/269 - Train Accuracy: 0.966, Validation Accuracy: 0.970, Loss: 0.032 Epoch 17 Batch 198/269 - Train Accuracy: 0.962, Validation Accuracy: 0.970, Loss: 0.034 Epoch 17 Batch 199/269 - Train Accuracy: 0.968, Validation Accuracy: 0.968, Loss: 0.035 Epoch 17 Batch 200/269 - Train Accuracy: 0.967, Validation Accuracy: 0.967, Loss: 0.035 Epoch 17 Batch 201/269 - Train Accuracy: 0.968, Validation Accuracy: 0.968, Loss: 0.036 Epoch 17 Batch 202/269 - Train Accuracy: 0.964, Validation Accuracy: 0.966, Loss: 0.035 Epoch 17 Batch 203/269 - Train Accuracy: 0.956, Validation Accuracy: 0.966, Loss: 0.038 Epoch 17 Batch 204/269 - Train Accuracy: 0.968, Validation Accuracy: 0.960, Loss: 0.036 Epoch 17 Batch 205/269 - Train Accuracy: 0.968, Validation Accuracy: 0.960, Loss: 0.031 Epoch 17 Batch 206/269 - Train Accuracy: 0.951, Validation Accuracy: 0.960, Loss: 0.037 Epoch 17 Batch 207/269 - Train Accuracy: 0.970, Validation Accuracy: 0.969, Loss: 0.033 Epoch 17 Batch 208/269 - Train Accuracy: 0.966, Validation Accuracy: 0.965, Loss: 0.034 Epoch 17 Batch 209/269 - Train Accuracy: 0.973, Validation Accuracy: 0.968, Loss: 0.036 Epoch 17 Batch 210/269 - Train Accuracy: 0.965, Validation Accuracy: 0.969, Loss: 0.033 Epoch 17 Batch 211/269 - Train Accuracy: 0.961, Validation Accuracy: 0.969, Loss: 0.038 Epoch 17 Batch 212/269 - Train Accuracy: 0.962, Validation Accuracy: 0.971, Loss: 0.036 Epoch 17 Batch 213/269 - Train Accuracy: 0.963, Validation Accuracy: 0.971, Loss: 0.033 Epoch 17 Batch 214/269 - Train Accuracy: 0.965, Validation Accuracy: 0.969, Loss: 0.032 Epoch 17 Batch 215/269 - Train Accuracy: 0.953, Validation Accuracy: 0.972, Loss: 
0.035 Epoch 17 Batch 216/269 - Train Accuracy: 0.954, Validation Accuracy: 0.970, Loss: 0.041 Epoch 17 Batch 217/269 - Train Accuracy: 0.960, Validation Accuracy: 0.969, Loss: 0.037 Epoch 17 Batch 218/269 - Train Accuracy: 0.971, Validation Accuracy: 0.965, Loss: 0.031 Epoch 17 Batch 219/269 - Train Accuracy: 0.969, Validation Accuracy: 0.960, Loss: 0.032 Epoch 17 Batch 220/269 - Train Accuracy: 0.956, Validation Accuracy: 0.958, Loss: 0.033 Epoch 17 Batch 221/269 - Train Accuracy: 0.957, Validation Accuracy: 0.962, Loss: 0.035 Epoch 17 Batch 222/269 - Train Accuracy: 0.970, Validation Accuracy: 0.966, Loss: 0.030 Epoch 17 Batch 223/269 - Train Accuracy: 0.954, Validation Accuracy: 0.967, Loss: 0.033 Epoch 17 Batch 224/269 - Train Accuracy: 0.961, Validation Accuracy: 0.972, Loss: 0.037 Epoch 17 Batch 225/269 - Train Accuracy: 0.962, Validation Accuracy: 0.966, Loss: 0.030 Epoch 17 Batch 226/269 - Train Accuracy: 0.969, Validation Accuracy: 0.965, Loss: 0.037 Epoch 17 Batch 227/269 - Train Accuracy: 0.959, Validation Accuracy: 0.965, Loss: 0.041 Epoch 17 Batch 228/269 - Train Accuracy: 0.973, Validation Accuracy: 0.962, Loss: 0.031 Epoch 17 Batch 229/269 - Train Accuracy: 0.969, Validation Accuracy: 0.964, Loss: 0.033 Epoch 17 Batch 230/269 - Train Accuracy: 0.970, Validation Accuracy: 0.968, Loss: 0.034 Epoch 17 Batch 231/269 - Train Accuracy: 0.958, Validation Accuracy: 0.965, Loss: 0.032 Epoch 17 Batch 232/269 - Train Accuracy: 0.963, Validation Accuracy: 0.966, Loss: 0.032 Epoch 17 Batch 233/269 - Train Accuracy: 0.974, Validation Accuracy: 0.969, Loss: 0.037 Epoch 17 Batch 234/269 - Train Accuracy: 0.956, Validation Accuracy: 0.966, Loss: 0.033 Epoch 17 Batch 235/269 - Train Accuracy: 0.983, Validation Accuracy: 0.965, Loss: 0.029 Epoch 17 Batch 236/269 - Train Accuracy: 0.953, Validation Accuracy: 0.970, Loss: 0.034 Epoch 17 Batch 237/269 - Train Accuracy: 0.970, Validation Accuracy: 0.971, Loss: 0.032 Epoch 17 Batch 238/269 - Train Accuracy: 0.973, 
Validation Accuracy: 0.970, Loss: 0.031 Epoch 17 Batch 239/269 - Train Accuracy: 0.970, Validation Accuracy: 0.965, Loss: 0.033 Epoch 17 Batch 240/269 - Train Accuracy: 0.971, Validation Accuracy: 0.964, Loss: 0.028 Epoch 17 Batch 241/269 - Train Accuracy: 0.957, Validation Accuracy: 0.964, Loss: 0.038 Epoch 17 Batch 242/269 - Train Accuracy: 0.972, Validation Accuracy: 0.966, Loss: 0.034 Epoch 17 Batch 243/269 - Train Accuracy: 0.976, Validation Accuracy: 0.964, Loss: 0.029 Epoch 17 Batch 244/269 - Train Accuracy: 0.958, Validation Accuracy: 0.962, Loss: 0.033 Epoch 17 Batch 245/269 - Train Accuracy: 0.961, Validation Accuracy: 0.965, Loss: 0.031 Epoch 17 Batch 246/269 - Train Accuracy: 0.960, Validation Accuracy: 0.970, Loss: 0.035 Epoch 17 Batch 247/269 - Train Accuracy: 0.965, Validation Accuracy: 0.973, Loss: 0.032 Epoch 17 Batch 248/269 - Train Accuracy: 0.963, Validation Accuracy: 0.966, Loss: 0.032 Epoch 17 Batch 249/269 - Train Accuracy: 0.966, Validation Accuracy: 0.965, Loss: 0.028 Epoch 17 Batch 250/269 - Train Accuracy: 0.959, Validation Accuracy: 0.966, Loss: 0.033 Epoch 17 Batch 251/269 - Train Accuracy: 0.973, Validation Accuracy: 0.969, Loss: 0.030 Epoch 17 Batch 252/269 - Train Accuracy: 0.970, Validation Accuracy: 0.969, Loss: 0.028 Epoch 17 Batch 253/269 - Train Accuracy: 0.958, Validation Accuracy: 0.966, Loss: 0.034 Epoch 17 Batch 254/269 - Train Accuracy: 0.965, Validation Accuracy: 0.963, Loss: 0.032 Epoch 17 Batch 255/269 - Train Accuracy: 0.966, Validation Accuracy: 0.961, Loss: 0.032 Epoch 17 Batch 256/269 - Train Accuracy: 0.966, Validation Accuracy: 0.967, Loss: 0.033 Epoch 17 Batch 257/269 - Train Accuracy: 0.958, Validation Accuracy: 0.965, Loss: 0.037 Epoch 17 Batch 258/269 - Train Accuracy: 0.970, Validation Accuracy: 0.966, Loss: 0.034 Epoch 17 Batch 259/269 - Train Accuracy: 0.968, Validation Accuracy: 0.962, Loss: 0.032 Epoch 17 Batch 260/269 - Train Accuracy: 0.971, Validation Accuracy: 0.960, Loss: 0.035 Epoch 17 Batch 261/269 
- Train Accuracy: 0.967, Validation Accuracy: 0.959, Loss: 0.034 Epoch 17 Batch 262/269 - Train Accuracy: 0.957, Validation Accuracy: 0.961, Loss: 0.036 Epoch 17 Batch 263/269 - Train Accuracy: 0.973, Validation Accuracy: 0.967, Loss: 0.031 Epoch 17 Batch 264/269 - Train Accuracy: 0.957, Validation Accuracy: 0.965, Loss: 0.033 Epoch 17 Batch 265/269 - Train Accuracy: 0.966, Validation Accuracy: 0.968, Loss: 0.035 Epoch 17 Batch 266/269 - Train Accuracy: 0.965, Validation Accuracy: 0.965, Loss: 0.027 Epoch 17 Batch 267/269 - Train Accuracy: 0.974, Validation Accuracy: 0.960, Loss: 0.035 Epoch 18 Batch 0/269 - Train Accuracy: 0.973, Validation Accuracy: 0.957, Loss: 0.037 Epoch 18 Batch 1/269 - Train Accuracy: 0.978, Validation Accuracy: 0.964, Loss: 0.031 Epoch 18 Batch 2/269 - Train Accuracy: 0.960, Validation Accuracy: 0.965, Loss: 0.034 Epoch 18 Batch 3/269 - Train Accuracy: 0.970, Validation Accuracy: 0.968, Loss: 0.032 Epoch 18 Batch 4/269 - Train Accuracy: 0.951, Validation Accuracy: 0.962, Loss: 0.038 Epoch 18 Batch 5/269 - Train Accuracy: 0.957, Validation Accuracy: 0.965, Loss: 0.035 Epoch 18 Batch 6/269 - Train Accuracy: 0.968, Validation Accuracy: 0.960, Loss: 0.029 Epoch 18 Batch 7/269 - Train Accuracy: 0.964, Validation Accuracy: 0.966, Loss: 0.029 Epoch 18 Batch 8/269 - Train Accuracy: 0.967, Validation Accuracy: 0.964, Loss: 0.039 Epoch 18 Batch 9/269 - Train Accuracy: 0.965, Validation Accuracy: 0.965, Loss: 0.035 Epoch 18 Batch 10/269 - Train Accuracy: 0.968, Validation Accuracy: 0.968, Loss: 0.028 Epoch 18 Batch 11/269 - Train Accuracy: 0.972, Validation Accuracy: 0.969, Loss: 0.039 Epoch 18 Batch 12/269 - Train Accuracy: 0.954, Validation Accuracy: 0.969, Loss: 0.035 Epoch 18 Batch 13/269 - Train Accuracy: 0.970, Validation Accuracy: 0.969, Loss: 0.029 Epoch 18 Batch 14/269 - Train Accuracy: 0.970, Validation Accuracy: 0.972, Loss: 0.029 Epoch 18 Batch 15/269 - Train Accuracy: 0.974, Validation Accuracy: 0.971, Loss: 0.024 Epoch 18 Batch 16/269 - 
Train Accuracy: 0.965, Validation Accuracy: 0.969, Loss: 0.037
Epoch 18 Batch 17/269 - Train Accuracy: 0.972, Validation Accuracy: 0.969, Loss: 0.034
Epoch 18 Batch 18/269 - Train Accuracy: 0.963, Validation Accuracy: 0.965, Loss: 0.033
Epoch 18 Batch 19/269 - Train Accuracy: 0.971, Validation Accuracy: 0.962, Loss: 0.025
Epoch 18 Batch 20/269 - Train Accuracy: 0.968, Validation Accuracy: 0.964, Loss: 0.031
Epoch 18 Batch 21/269 - Train Accuracy: 0.960, Validation Accuracy: 0.965, Loss: 0.034
Epoch 18 Batch 22/269 - Train Accuracy: 0.966, Validation Accuracy: 0.962, Loss: 0.029
Epoch 18 Batch 23/269 - Train Accuracy: 0.960, Validation Accuracy: 0.964, Loss: 0.039
Epoch 18 Batch 24/269 - Train Accuracy: 0.962, Validation Accuracy: 0.962, Loss: 0.030
Epoch 18 Batch 25/269 - Train Accuracy: 0.959, Validation Accuracy: 0.946, Loss: 0.034
...
Epoch 18 Batch 264/269 - Train Accuracy: 0.960, Validation Accuracy: 0.968, Loss: 0.031
Epoch 18 Batch 265/269 - Train Accuracy: 0.966, Validation Accuracy: 0.972, Loss: 0.031
Epoch 18 Batch 266/269 - Train Accuracy: 0.972, Validation Accuracy: 0.973, Loss: 0.026
Epoch 18 Batch 267/269 - Train Accuracy: 0.979, Validation Accuracy: 0.970, Loss: 0.033
Epoch 19 Batch 0/269 - Train Accuracy: 0.975, Validation Accuracy: 0.972, Loss: 0.037
Epoch 19 Batch 1/269 - Train Accuracy: 0.977, Validation Accuracy: 0.968, Loss: 0.030
Epoch 19 Batch 2/269 - Train Accuracy: 0.968, Validation Accuracy: 0.966, Loss: 0.029
Epoch 19 Batch 3/269 - Train Accuracy: 0.964, Validation Accuracy: 0.964, Loss: 0.031
Epoch 19 Batch 4/269 - Train Accuracy: 0.963, Validation Accuracy: 0.962, Loss: 0.030
Epoch 19 Batch 5/269 - Train Accuracy: 0.964, Validation Accuracy: 0.965, Loss: 0.034
...
Epoch 19 Batch 200/269 - Train Accuracy: 0.977, Validation Accuracy: 0.969, Loss: 0.028
Epoch 19 Batch 201/269 - Train Accuracy: 0.975, Validation Accuracy: 0.967, Loss: 0.032
Epoch 19 Batch 202/269 - Train Accuracy: 0.969, Validation Accuracy: 0.965, Loss: 0.028
Epoch 19 Batch 203/269 - Train Accuracy: 0.962, Validation Accuracy: 0.965, Loss: 0.035
Epoch 19 Batch 204/269
- Train Accuracy: 0.973, Validation Accuracy: 0.963, Loss: 0.028 Epoch 19 Batch 205/269 - Train Accuracy: 0.976, Validation Accuracy: 0.963, Loss: 0.028 Epoch 19 Batch 206/269 - Train Accuracy: 0.962, Validation Accuracy: 0.968, Loss: 0.033 Epoch 19 Batch 207/269 - Train Accuracy: 0.979, Validation Accuracy: 0.968, Loss: 0.027 Epoch 19 Batch 208/269 - Train Accuracy: 0.968, Validation Accuracy: 0.963, Loss: 0.030 Epoch 19 Batch 209/269 - Train Accuracy: 0.974, Validation Accuracy: 0.962, Loss: 0.028 Epoch 19 Batch 210/269 - Train Accuracy: 0.968, Validation Accuracy: 0.966, Loss: 0.026 Epoch 19 Batch 211/269 - Train Accuracy: 0.969, Validation Accuracy: 0.967, Loss: 0.032 Epoch 19 Batch 212/269 - Train Accuracy: 0.968, Validation Accuracy: 0.970, Loss: 0.034 Epoch 19 Batch 213/269 - Train Accuracy: 0.972, Validation Accuracy: 0.971, Loss: 0.028 Epoch 19 Batch 214/269 - Train Accuracy: 0.966, Validation Accuracy: 0.972, Loss: 0.030 Epoch 19 Batch 215/269 - Train Accuracy: 0.965, Validation Accuracy: 0.970, Loss: 0.031 Epoch 19 Batch 216/269 - Train Accuracy: 0.964, Validation Accuracy: 0.965, Loss: 0.039 Epoch 19 Batch 217/269 - Train Accuracy: 0.961, Validation Accuracy: 0.967, Loss: 0.034 Epoch 19 Batch 218/269 - Train Accuracy: 0.970, Validation Accuracy: 0.970, Loss: 0.029 Epoch 19 Batch 219/269 - Train Accuracy: 0.975, Validation Accuracy: 0.971, Loss: 0.029 Epoch 19 Batch 220/269 - Train Accuracy: 0.964, Validation Accuracy: 0.972, Loss: 0.026 Epoch 19 Batch 221/269 - Train Accuracy: 0.967, Validation Accuracy: 0.973, Loss: 0.029 Epoch 19 Batch 222/269 - Train Accuracy: 0.968, Validation Accuracy: 0.968, Loss: 0.025 Epoch 19 Batch 223/269 - Train Accuracy: 0.958, Validation Accuracy: 0.966, Loss: 0.028 Epoch 19 Batch 224/269 - Train Accuracy: 0.964, Validation Accuracy: 0.969, Loss: 0.034 Epoch 19 Batch 225/269 - Train Accuracy: 0.972, Validation Accuracy: 0.970, Loss: 0.029 Epoch 19 Batch 226/269 - Train Accuracy: 0.965, Validation Accuracy: 0.970, Loss: 
0.030 Epoch 19 Batch 227/269 - Train Accuracy: 0.971, Validation Accuracy: 0.970, Loss: 0.035 Epoch 19 Batch 228/269 - Train Accuracy: 0.975, Validation Accuracy: 0.971, Loss: 0.023 Epoch 19 Batch 229/269 - Train Accuracy: 0.971, Validation Accuracy: 0.974, Loss: 0.024 Epoch 19 Batch 230/269 - Train Accuracy: 0.972, Validation Accuracy: 0.976, Loss: 0.029 Epoch 19 Batch 231/269 - Train Accuracy: 0.970, Validation Accuracy: 0.973, Loss: 0.028 Epoch 19 Batch 232/269 - Train Accuracy: 0.968, Validation Accuracy: 0.972, Loss: 0.028 Epoch 19 Batch 233/269 - Train Accuracy: 0.979, Validation Accuracy: 0.972, Loss: 0.033 Epoch 19 Batch 234/269 - Train Accuracy: 0.967, Validation Accuracy: 0.970, Loss: 0.033 Epoch 19 Batch 235/269 - Train Accuracy: 0.986, Validation Accuracy: 0.970, Loss: 0.022 Epoch 19 Batch 236/269 - Train Accuracy: 0.964, Validation Accuracy: 0.968, Loss: 0.026 Epoch 19 Batch 237/269 - Train Accuracy: 0.974, Validation Accuracy: 0.966, Loss: 0.027 Epoch 19 Batch 238/269 - Train Accuracy: 0.975, Validation Accuracy: 0.965, Loss: 0.029 Epoch 19 Batch 239/269 - Train Accuracy: 0.969, Validation Accuracy: 0.965, Loss: 0.027 Epoch 19 Batch 240/269 - Train Accuracy: 0.970, Validation Accuracy: 0.967, Loss: 0.027 Epoch 19 Batch 241/269 - Train Accuracy: 0.960, Validation Accuracy: 0.968, Loss: 0.033 Epoch 19 Batch 242/269 - Train Accuracy: 0.974, Validation Accuracy: 0.969, Loss: 0.029 Epoch 19 Batch 243/269 - Train Accuracy: 0.972, Validation Accuracy: 0.967, Loss: 0.023 Epoch 19 Batch 244/269 - Train Accuracy: 0.960, Validation Accuracy: 0.968, Loss: 0.030 Epoch 19 Batch 245/269 - Train Accuracy: 0.963, Validation Accuracy: 0.966, Loss: 0.028 Epoch 19 Batch 246/269 - Train Accuracy: 0.963, Validation Accuracy: 0.965, Loss: 0.030 Epoch 19 Batch 247/269 - Train Accuracy: 0.960, Validation Accuracy: 0.962, Loss: 0.024 Epoch 19 Batch 248/269 - Train Accuracy: 0.974, Validation Accuracy: 0.965, Loss: 0.027 Epoch 19 Batch 249/269 - Train Accuracy: 0.973, 
Validation Accuracy: 0.967, Loss: 0.024 Epoch 19 Batch 250/269 - Train Accuracy: 0.962, Validation Accuracy: 0.970, Loss: 0.028 Epoch 19 Batch 251/269 - Train Accuracy: 0.976, Validation Accuracy: 0.971, Loss: 0.025 Epoch 19 Batch 252/269 - Train Accuracy: 0.976, Validation Accuracy: 0.971, Loss: 0.022 Epoch 19 Batch 253/269 - Train Accuracy: 0.970, Validation Accuracy: 0.965, Loss: 0.026 Epoch 19 Batch 254/269 - Train Accuracy: 0.972, Validation Accuracy: 0.967, Loss: 0.028 Epoch 19 Batch 255/269 - Train Accuracy: 0.971, Validation Accuracy: 0.967, Loss: 0.027 Epoch 19 Batch 256/269 - Train Accuracy: 0.974, Validation Accuracy: 0.966, Loss: 0.026 Epoch 19 Batch 257/269 - Train Accuracy: 0.966, Validation Accuracy: 0.966, Loss: 0.032 Epoch 19 Batch 258/269 - Train Accuracy: 0.975, Validation Accuracy: 0.966, Loss: 0.028 Epoch 19 Batch 259/269 - Train Accuracy: 0.973, Validation Accuracy: 0.970, Loss: 0.030 Epoch 19 Batch 260/269 - Train Accuracy: 0.971, Validation Accuracy: 0.971, Loss: 0.026 Epoch 19 Batch 261/269 - Train Accuracy: 0.971, Validation Accuracy: 0.973, Loss: 0.029 Epoch 19 Batch 262/269 - Train Accuracy: 0.962, Validation Accuracy: 0.972, Loss: 0.029 Epoch 19 Batch 263/269 - Train Accuracy: 0.976, Validation Accuracy: 0.970, Loss: 0.028 Epoch 19 Batch 264/269 - Train Accuracy: 0.959, Validation Accuracy: 0.968, Loss: 0.032 Epoch 19 Batch 265/269 - Train Accuracy: 0.970, Validation Accuracy: 0.972, Loss: 0.028 Epoch 19 Batch 266/269 - Train Accuracy: 0.975, Validation Accuracy: 0.971, Loss: 0.022 Epoch 19 Batch 267/269 - Train Accuracy: 0.974, Validation Accuracy: 0.972, Loss: 0.032 Model Trained and Saved ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary, to the `` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ sentence = sentence.lower().split() for index, word in enumerate(sentence): if word not in vocab_to_int: sentence[index] = '<UNK>' sentence_to_id = [vocab_to_int[word] for word in sentence] return sentence_to_id """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .' 
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) ###Output Input Word Ids: [213, 151, 227, 209, 179, 37, 167] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [130, 90, 325, 179, 119, 344, 246, 245, 1] French Words: ['il', 'a', 'vu', 'la', 'vieux', 'camion', 'jaune', '.', '<EOS>'] ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of each sentence from `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids_(text, lookup): lines = text.split('\n') ids = [[lookup[w] for w in words if w != ''] for words in [l.split(' ') for l in lines]] return ids def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ src = text_to_ids_(source_text, source_vocab_to_int) target = text_to_ids_(target_text.replace('.', '. <EOS>'), target_vocab_to_int) return src, target """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0. You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found.
Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.0.1 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoding_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ input_ = tf.placeholder(tf.int32, [None, None], "input") target_ = tf.placeholder(tf.int32, [None, None], "target") learn_rate = tf.placeholder(tf.float32, name="learning_rate") keep_prob = tf.placeholder(tf.float32, name="keep_prob") return input_, target_, learn_rate, keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoding InputImplement `process_decoding_input` using TensorFlow to remove the last word id from each batch in `target_data` and concat the GO ID to the beginning of each batch.
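Before implementing this with TensorFlow ops, it can help to see the target-side transformation on plain Python lists. The id values below, including `1` standing in for `<GO>` and `3` for `<EOS>`, are made up purely for the illustration:

```python
def process_decoding_input_py(target_batch, go_id):
    # Drop the last id of each sequence, then prepend the <GO> id.
    # This mirrors the tf.strided_slice + tf.fill + tf.concat version.
    return [[go_id] + seq[:-1] for seq in target_batch]

batch = [[10, 11, 12, 3],   # 3 stands in for <EOS>
         [20, 21, 22, 3]]
print(process_decoding_input_py(batch, go_id=1))
# → [[1, 10, 11, 12], [1, 20, 21, 22]]
```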
###Code def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ tail = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1,1]) head = tf.fill([batch_size,1], target_vocab_to_int['<GO>']) res = tf.concat([head, tail], 1) return res """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn). ###Code def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) cell = tf.contrib.rnn.MultiRNNCell([cell]*num_layers) cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) out, state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32) return state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate training logits using [`tf.contrib.seq2seq.simple_decoder_fn_train()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_train) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder).
Apply the `output_fn` to the [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder) outputs. ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) # apply dropout to the decoder cell during training dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob) pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, fn, dec_embed_input, sequence_length, scope=decoding_scope) logits = output_fn(pred) return logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference logits using [`tf.contrib.seq2seq.simple_decoder_fn_inference()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_inference) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder).
###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ inf = tf.contrib.seq2seq.simple_decoder_fn_inference( output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size) dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob) pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, inf, scope=decoding_scope) return pred """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.- Create RNN cell for decoding using `rnn_size` and `num_layers`.- Create the output function using [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) to transform its input, logits, to class logits.- Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)` function to get the training logits.- Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)` function to get the inference logits.Note: You'll need to use
[tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. ###Code def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) cell = tf.contrib.rnn.MultiRNNCell([lstm]*num_layers) with tf.variable_scope('decoding') as decoding_scope: output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope) train_logits = decoding_layer_train(encoder_state, cell, dec_embed_input, sequence_length,decoding_scope, output_fn, keep_prob) with tf.variable_scope('decoding', reuse=True) as decoding_scope: infer_logits = decoding_layer_infer(encoder_state, cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, sequence_length, vocab_size, decoding_scope, output_fn, keep_prob) return train_logits, infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)`.- Process target data using your `process_decoding_input(target_data, 
target_vocab_to_int, batch_size)` function.- Apply embedding to the target data for the decoder.- Decode the encoded input using your `decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)`. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ enc_embed_input = tf.contrib.layers.embed_sequence( input_data, source_vocab_size, enc_embedding_size) enc_state = encoding_layer( enc_embed_input, rnn_size, num_layers, keep_prob) proc_target_data = process_decoding_input(target_data, target_vocab_to_int, batch_size) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, proc_target_data) train_logits, infer_logits = decoding_layer( dec_embed_input, dec_embeddings, enc_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return train_logits, infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training
HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability ###Code # Number of Epochs epochs = 10 # Batch Size batch_size = 256 # RNN Size rnn_size = 512 # Number of Layers num_layers = 4 # Embedding Size encoding_embedding_size = 128 decoding_embedding_size = 128 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.75 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = 
[(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) #
Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 0/538 - Train Accuracy: 0.247, Validation Accuracy: 0.327, Loss: 5.882 Epoch 0 Batch 1/538 - Train Accuracy: 0.231, Validation Accuracy: 0.316, Loss: 5.478 Epoch 0 Batch 2/538 - Train Accuracy: 0.279, Validation Accuracy: 0.339, Loss: 4.625 ... Epoch 0 Batch 44/538 - Train Accuracy: 0.423, Validation Accuracy: 0.478, Loss: 2.501 Epoch 0 Batch 45/538 - Train
Accuracy: 0.447, Validation Accuracy: 0.474, Loss: 2.345 Epoch 0 Batch 46/538 - Train Accuracy: 0.428, Validation Accuracy: 0.476, Loss: 2.437 Epoch 0 Batch 47/538 - Train Accuracy: 0.446, Validation Accuracy: 0.480, Loss: 2.340 Epoch 0 Batch 48/538 - Train Accuracy: 0.460, Validation Accuracy: 0.483, Loss: 2.291 Epoch 0 Batch 49/538 - Train Accuracy: 0.416, Validation Accuracy: 0.481, Loss: 2.450 Epoch 0 Batch 50/538 - Train Accuracy: 0.431, Validation Accuracy: 0.471, Loss: 2.336 Epoch 0 Batch 51/538 - Train Accuracy: 0.370, Validation Accuracy: 0.480, Loss: 2.577 Epoch 0 Batch 52/538 - Train Accuracy: 0.434, Validation Accuracy: 0.481, Loss: 2.365 Epoch 0 Batch 53/538 - Train Accuracy: 0.460, Validation Accuracy: 0.469, Loss: 2.212 Epoch 0 Batch 54/538 - Train Accuracy: 0.439, Validation Accuracy: 0.479, Loss: 2.366 Epoch 0 Batch 55/538 - Train Accuracy: 0.416, Validation Accuracy: 0.476, Loss: 2.380 Epoch 0 Batch 56/538 - Train Accuracy: 0.440, Validation Accuracy: 0.476, Loss: 2.313 Epoch 0 Batch 57/538 - Train Accuracy: 0.401, Validation Accuracy: 0.477, Loss: 2.426 Epoch 0 Batch 58/538 - Train Accuracy: 0.404, Validation Accuracy: 0.479, Loss: 2.399 Epoch 0 Batch 59/538 - Train Accuracy: 0.419, Validation Accuracy: 0.486, Loss: 2.358 Epoch 0 Batch 60/538 - Train Accuracy: 0.438, Validation Accuracy: 0.488, Loss: 2.316 Epoch 0 Batch 61/538 - Train Accuracy: 0.416, Validation Accuracy: 0.474, Loss: 2.303 Epoch 0 Batch 62/538 - Train Accuracy: 0.427, Validation Accuracy: 0.479, Loss: 2.260 Epoch 0 Batch 63/538 - Train Accuracy: 0.449, Validation Accuracy: 0.478, Loss: 2.241 Epoch 0 Batch 64/538 - Train Accuracy: 0.425, Validation Accuracy: 0.458, Loss: 2.213 Epoch 0 Batch 65/538 - Train Accuracy: 0.384, Validation Accuracy: 0.468, Loss: 2.408 Epoch 0 Batch 66/538 - Train Accuracy: 0.454, Validation Accuracy: 0.484, Loss: 2.329 Epoch 0 Batch 67/538 - Train Accuracy: 0.395, Validation Accuracy: 0.439, Loss: 2.254 Epoch 0 Batch 68/538 - Train Accuracy: 0.450, 
Validation Accuracy: 0.481, Loss: 2.284 Epoch 0 Batch 69/538 - Train Accuracy: 0.420, Validation Accuracy: 0.480, Loss: 2.345 Epoch 0 Batch 70/538 - Train Accuracy: 0.448, Validation Accuracy: 0.485, Loss: 2.287 Epoch 0 Batch 71/538 - Train Accuracy: 0.400, Validation Accuracy: 0.463, Loss: 2.279 Epoch 0 Batch 72/538 - Train Accuracy: 0.475, Validation Accuracy: 0.493, Loss: 2.233 Epoch 0 Batch 73/538 - Train Accuracy: 0.417, Validation Accuracy: 0.478, Loss: 2.296 Epoch 0 Batch 74/538 - Train Accuracy: 0.465, Validation Accuracy: 0.496, Loss: 2.253 Epoch 0 Batch 75/538 - Train Accuracy: 0.443, Validation Accuracy: 0.466, Loss: 2.163 Epoch 0 Batch 76/538 - Train Accuracy: 0.440, Validation Accuracy: 0.496, Loss: 2.388 Epoch 0 Batch 77/538 - Train Accuracy: 0.422, Validation Accuracy: 0.492, Loss: 2.239 Epoch 0 Batch 78/538 - Train Accuracy: 0.474, Validation Accuracy: 0.497, Loss: 2.243 Epoch 0 Batch 79/538 - Train Accuracy: 0.439, Validation Accuracy: 0.471, Loss: 2.116 Epoch 0 Batch 80/538 - Train Accuracy: 0.448, Validation Accuracy: 0.497, Loss: 2.283 Epoch 0 Batch 81/538 - Train Accuracy: 0.442, Validation Accuracy: 0.491, Loss: 2.212 Epoch 0 Batch 82/538 - Train Accuracy: 0.450, Validation Accuracy: 0.499, Loss: 2.228 Epoch 0 Batch 83/538 - Train Accuracy: 0.410, Validation Accuracy: 0.473, Loss: 2.244 Epoch 0 Batch 84/538 - Train Accuracy: 0.461, Validation Accuracy: 0.492, Loss: 2.186 Epoch 0 Batch 85/538 - Train Accuracy: 0.477, Validation Accuracy: 0.496, Loss: 2.057 Epoch 0 Batch 86/538 - Train Accuracy: 0.446, Validation Accuracy: 0.493, Loss: 2.222 Epoch 0 Batch 87/538 - Train Accuracy: 0.435, Validation Accuracy: 0.492, Loss: 2.199 Epoch 0 Batch 88/538 - Train Accuracy: 0.455, Validation Accuracy: 0.504, Loss: 2.216 ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ sentence = sentence.lower() unk = vocab_to_int['<UNK>'] # Fall back to the <UNK> id for out-of-vocabulary words ids = [vocab_to_int.get(w, unk) for w in sentence.split(' ')] return ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) ###Output Input Word Ids: [222, 94, 59, 214, 206, 205, 22] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [190, 188, 221, 26, 262, 176, 274, 1] French Words: ['il', 'a', 'vu', 'un', 'vieux', 'camion', '.', '<EOS>'] ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ eos = target_vocab_to_int['<EOS>'] source_sentences = [sentence for sentence in source_text.split('\n')] target_sentences = [sentence for sentence in target_text.split('\n')] source_text_id = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_sentences] target_text_id = [[target_vocab_to_int[word] for word in sentence.split()] + [eos] for sentence in target_sentences] return source_text_id, target_text_id """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.3.0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ input_ = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='target') learning_rate = tf.placeholder(tf.float32, None, 'learning_rate') keep_prob = tf.placeholder(tf.float32, None, 'keep_prob') target_sequence_length = tf.placeholder(tf.int32, [None], 'target_sequence_length') max_target_len = tf.reduce_max(target_sequence_length, name='max_target_len') source_sequence_length = tf.placeholder(tf.int32, [None], 'source_sequence_length') return input_, targets, learning_rate, keep_prob, target_sequence_length, max_target_len, source_sequence_length """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output ERROR:tensorflow:================================== Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>): <tf.Operation 'assert_rank_2/Assert/Assert' type=Assert> If you want to mark it as used call its "mark_used()" method. 
It was originally created here: ['File "/home/carnd/anaconda3/envs/dl/lib/python3.5/runpy.py", line 184, in _run_module_as_main\n "__main__", mod_spec)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/runpy.py", line 85, in _run_code\n exec(code, run_globals)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/ipykernel/__main__.py", line 3, in <module>\n app.launch_new_instance()', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/traitlets/config/application.py", line 658, in launch_instance\n app.start()', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 474, in start\n ioloop.IOLoop.instance().start()', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/zmq/eventloop/ioloop.py", line 177, in start\n super(ZMQIOLoop, self).start()', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tornado/ioloop.py", line 887, in start\n handler_func(fd_obj, events)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tornado/stack_context.py", line 275, in null_wrapper\n return fn(*args, **kwargs)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events\n self._handle_recv()', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv\n self._run_callback(callback, msg)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback\n callback(*args, **kwargs)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tornado/stack_context.py", line 275, in null_wrapper\n return fn(*args, **kwargs)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 276, in dispatcher\n return self.dispatch_shell(stream, msg)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 228, in 
dispatch_shell\n handler(stream, idents, msg)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 390, in execute_request\n user_expressions, allow_stdin)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/ipykernel/ipkernel.py", line 196, in do_execute\n res = shell.run_cell(code, store_history=store_history, silent=silent)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/ipykernel/zmqshell.py", line 501, in run_cell\n return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2717, in run_cell\n interactivity=interactivity, compiler=compiler, result=result)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2827, in run_ast_nodes\n if self.run_code(code, result):', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code\n exec(code_obj, self.user_global_ns, self.user_ns)', 'File "<ipython-input-7-4a615b60ab0c>", line 20, in <module>\n tests.test_model_inputs(model_inputs)', 'File "/home/carnd/deep-learning/language-translation/problem_unittests.py", line 106, in test_model_inputs\n assert tf.assert_rank(lr, 0, message=\'Learning Rate has wrong rank\')', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tensorflow/python/ops/check_ops.py", line 617, in assert_rank\n dynamic_condition, data, summarize)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tensorflow/python/ops/check_ops.py", line 571, in _assert_rank_condition\n return control_flow_ops.Assert(condition, data, summarize=summarize)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py", line 175, in wrapped\n return _add_should_use_warning(fn(*args, **kwargs))', 'File 
"/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py", line 144, in _add_should_use_warning\n wrapped = TFShouldUseWarningWrapper(x)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py", line 101, in __init__\n stack = [s.strip() for s in traceback.format_stack()]'] ================================== ERROR:tensorflow:================================== Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>): <tf.Operation 'assert_rank_3/Assert/Assert' type=Assert> If you want to mark it as used call its "mark_used()" method. It was originally created here: ['File "/home/carnd/anaconda3/envs/dl/lib/python3.5/runpy.py", line 184, in _run_module_as_main\n "__main__", mod_spec)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/runpy.py", line 85, in _run_code\n exec(code, run_globals)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/ipykernel/__main__.py", line 3, in <module>\n app.launch_new_instance()', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/traitlets/config/application.py", line 658, in launch_instance\n app.start()', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 474, in start\n ioloop.IOLoop.instance().start()', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/zmq/eventloop/ioloop.py", line 177, in start\n super(ZMQIOLoop, self).start()', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tornado/ioloop.py", line 887, in start\n handler_func(fd_obj, events)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tornado/stack_context.py", line 275, in null_wrapper\n return fn(*args, **kwargs)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events\n self._handle_recv()', 'File 
"/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv\n self._run_callback(callback, msg)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback\n callback(*args, **kwargs)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tornado/stack_context.py", line 275, in null_wrapper\n return fn(*args, **kwargs)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 276, in dispatcher\n return self.dispatch_shell(stream, msg)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 228, in dispatch_shell\n handler(stream, idents, msg)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 390, in execute_request\n user_expressions, allow_stdin)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/ipykernel/ipkernel.py", line 196, in do_execute\n res = shell.run_cell(code, store_history=store_history, silent=silent)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/ipykernel/zmqshell.py", line 501, in run_cell\n return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2717, in run_cell\n interactivity=interactivity, compiler=compiler, result=result)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2827, in run_ast_nodes\n if self.run_code(code, result):', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code\n exec(code_obj, self.user_global_ns, self.user_ns)', 'File "<ipython-input-7-4a615b60ab0c>", line 20, in <module>\n tests.test_model_inputs(model_inputs)', 'File "/home/carnd/deep-learning/language-translation/problem_unittests.py", 
line 107, in test_model_inputs\n assert tf.assert_rank(keep_prob, 0, message=\'Keep Probability has wrong rank\')', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tensorflow/python/ops/check_ops.py", line 617, in assert_rank\n dynamic_condition, data, summarize)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tensorflow/python/ops/check_ops.py", line 571, in _assert_rank_condition\n return control_flow_ops.Assert(condition, data, summarize=summarize)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py", line 175, in wrapped\n return _add_should_use_warning(fn(*args, **kwargs))', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py", line 144, in _add_should_use_warning\n wrapped = TFShouldUseWarningWrapper(x)', 'File "/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py", line 101, in __init__\n stack = [s.strip() for s in traceback.format_stack()]'] ================================== Tests Passed ###Markdown Process Decoder InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
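Before wiring this up with TensorFlow ops, the transformation is easy to check in plain Python on a toy batch. The word ids and the `<GO>` id of 1 below are made up for illustration:

```python
# Toy sketch of the decoder-input preprocessing: drop the last word id of
# every sequence in the batch, then prepend the <GO> id (assumed to be 1 here).
GO_ID = 1

def process_decoder_input_py(target_batch, go_id):
    """Pure-Python equivalent of the tf.strided_slice + tf.concat version."""
    return [[go_id] + seq[:-1] for seq in target_batch]

batch = [[12, 7, 3],   # two toy target sequences, each ending in <EOS> (id 3)
         [9, 5, 3]]
print(process_decoder_input_py(batch, GO_ID))  # → [[1, 12, 7], [1, 9, 5]]
```

The TensorFlow implementation in the next cell does the same thing batch-wide: `tf.strided_slice` drops the last column and `tf.concat` prepends a column of `<GO>` ids.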
###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ ending = tf.strided_slice(target_data, [0,0], [batch_size, -1], [1,1]) return tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ # Encoder Input enc_embed_input =
tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) #Make RNN Cell def make_cell(rnn_size): enc_cell = tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)), keep_prob) return enc_cell enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32) return enc_output, enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function # Helper for the training process. Used by BasicDecoder to read inputs. 
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) # Basic decoder training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length)[0] return training_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS ID :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32),
[batch_size], name='start_tokens') # Helper for the inference process. inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) # Basic decoder inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)[0] return inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. 
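The first bullet above — embedding the target sequences — is conceptually just a table lookup: each word id selects one row of an embedding matrix. A minimal plain-Python sketch of that idea (the matrix values are made up for illustration; this is not the actual TF op):

```python
# Toy illustration of what an embedding lookup does: a word id indexes
# a row of the embedding matrix. All numbers here are made up.
embeddings = [
    [0.1, 0.2],   # vector for word id 0
    [0.3, 0.4],   # vector for word id 1
    [0.5, 0.6],   # vector for word id 2
]

def embed(ids):
    """Map a list of word ids to their embedding vectors."""
    return [embeddings[i] for i in ids]

print(embed([2, 0, 1]))  # [[0.5, 0.6], [0.1, 0.2], [0.3, 0.4]]
```

In the real layer, `tf.nn.embedding_lookup` performs the same indexing on a trainable `tf.Variable`, so the rows are learned during training.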
###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # 1. Decoder Embedding dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # 2. Construct the decoder cell def make_cell(rnn_size): dec_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return dec_cell dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) # 3. Dense layer to translate the decoder's output at each time # step into a choice from the target vocabulary output_layer = Dense(target_vocab_size, kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1)) # 4. 
Set up a training decoder and an inference decoder # Training Decoder with tf.variable_scope("decode"): train_decoder = decoding_layer_train(encoder_state, dec_cell, dec_embed_input,target_sequence_length, max_target_sequence_length, output_layer, keep_prob) # Inference Decoder start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] # Reuses the same parameters trained by the training process with tf.variable_scope("decode", reuse=True): inference_decoder = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return train_decoder, inference_decoder """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. 
###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # Pass the input data through the encoder. 
_, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) # Prepare the target sequences we'll feed to the decoder in training mode dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) # Pass encoder state and decoder inputs to the decoders training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 15 # Batch Size batch_size = 512 # RNN Size rnn_size = 256 # Number of Layers num_layers = 3 # Embedding Size encoding_embedding_size = 256 decoding_embedding_size = 256 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.75 display_step = 100 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for 
sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. 
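Before training, it can help to sanity-check the padding logic above on a toy batch. This is a pure-Python replica of `pad_sentence_batch`, with a made-up `<PAD>` id of 0:

```python
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad each sentence with pad_int so every sentence in the batch
    has the length of the longest one."""
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

batch = [[5, 6], [7, 8, 9, 10], [11]]
print(pad_sentence_batch(batch, 0))
# [[5, 6, 0, 0], [7, 8, 9, 10], [11, 0, 0, 0]]
```

Every returned sentence now has the same length, which is what lets `get_batches` stack them into a rectangular NumPy array.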
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 100/269 - Train Accuracy: 0.5335, Validation Accuracy: 0.5375, Loss: 1.6010 Epoch 0 Batch 200/269 - Train Accuracy: 0.5559, Validation Accuracy: 0.5691, Loss: 1.1872 Epoch 1 Batch 100/269 - Train Accuracy: 0.6413, Validation Accuracy: 0.6266, Loss: 0.7214 Epoch 1 Batch 200/269 - Train Accuracy: 0.6604, Validation Accuracy: 0.6644, Loss: 0.5832 Epoch 2 Batch 100/269 - Train Accuracy: 0.8077, Validation Accuracy: 0.7915, Loss: 0.3490 Epoch 2 Batch 200/269 - Train Accuracy: 0.8610, Validation Accuracy: 0.8530, Loss: 0.2406 Epoch 3 Batch 100/269 - Train Accuracy: 0.9097, Validation Accuracy: 0.9039, Loss: 0.1199 Epoch 3 Batch 200/269 - Train Accuracy: 0.9203, Validation Accuracy: 0.9237, Loss: 0.0918 Epoch 4 Batch 100/269 - Train Accuracy: 0.9429, Validation Accuracy: 0.9383, Loss: 0.0632 Epoch 4 Batch 200/269 - Train Accuracy: 0.9433, Validation Accuracy: 0.9440, Loss: 0.0527 Epoch 5 Batch 100/269 - Train Accuracy: 0.9574, Validation Accuracy: 0.9459, Loss: 0.0439 Epoch 5 Batch 200/269 - Train Accuracy: 0.9505, Validation Accuracy: 0.9669, Loss: 0.0370 Epoch 6 Batch 100/269 - Train Accuracy: 0.9657, Validation Accuracy: 0.9713, Loss: 0.0333 Epoch 6 Batch 200/269 - Train Accuracy: 0.9648, Validation Accuracy: 0.9658, Loss: 0.0282 Epoch 7 Batch 100/269 - Train Accuracy: 0.9756, Validation Accuracy: 0.9733, Loss: 0.0270 Epoch 7 Batch 200/269 - Train Accuracy: 0.9774, Validation Accuracy: 0.9718, Loss: 0.0206 Epoch 8 Batch 100/269 - Train Accuracy: 0.9746, Validation Accuracy: 0.9708, Loss: 0.0218 Epoch 8 Batch 200/269 - Train Accuracy: 0.9802, Validation Accuracy: 0.9729, Loss: 0.0170 Epoch 9 Batch 100/269 - Train Accuracy: 0.9778, Validation Accuracy: 0.9763, Loss: 0.0180 Epoch 9 Batch 200/269 - Train Accuracy: 0.9801, 
Validation Accuracy: 0.9723, Loss: 0.0150 Epoch 10 Batch 100/269 - Train Accuracy: 0.9803, Validation Accuracy: 0.9714, Loss: 0.0150 Epoch 10 Batch 200/269 - Train Accuracy: 0.9812, Validation Accuracy: 0.9717, Loss: 0.0130 Epoch 11 Batch 100/269 - Train Accuracy: 0.9863, Validation Accuracy: 0.9730, Loss: 0.0136 Epoch 11 Batch 200/269 - Train Accuracy: 0.9848, Validation Accuracy: 0.9698, Loss: 0.0110 Epoch 12 Batch 100/269 - Train Accuracy: 0.9888, Validation Accuracy: 0.9771, Loss: 0.0116 Epoch 12 Batch 200/269 - Train Accuracy: 0.9899, Validation Accuracy: 0.9727, Loss: 0.0082 Epoch 13 Batch 100/269 - Train Accuracy: 0.9906, Validation Accuracy: 0.9789, Loss: 0.0106 Epoch 13 Batch 200/269 - Train Accuracy: 0.9951, Validation Accuracy: 0.9755, Loss: 0.0078 Epoch 14 Batch 100/269 - Train Accuracy: 0.9919, Validation Accuracy: 0.9788, Loss: 0.0092 Epoch 14 Batch 200/269 - Train Accuracy: 0.9911, Validation Accuracy: 0.9718, Loss: 0.0079 Model Trained and Saved ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary, to the `<UNK>` word id. 
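The three bullets above amount to a lowercase-split-lookup pipeline with an unknown-word fallback. On a toy vocabulary (all ids below are made up for illustration) it behaves like this:

```python
# Toy check of the preprocessing described above: lowercase the sentence,
# map each word to its id, and fall back to the <UNK> id for unknown words.
# The vocabulary and its ids are made up for this sketch.
vocab_to_int = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4}

def sentence_to_seq(sentence, vocab_to_int):
    unk = vocab_to_int['<UNK>']
    return [vocab_to_int.get(w, unk) for w in sentence.lower().split()]

print(sentence_to_seq('He saw a YELLOW truck', vocab_to_int))
# [1, 2, 3, 0, 4]  -- 'yellow' is out of vocabulary, so it maps to <UNK>
```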
###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ unk = vocab_to_int['<UNK>'] sentence = sentence.lower() return [vocab_to_int.get(w, unk) for w in sentence.split()] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code #translate_sentence = 'he saw a old yellow truck .' translate_sentence = 'your least liked fruit is the grape , but my least liked is the apple .' #translate_sentence = 'our least liked fruit is the lemon , but my least liked is the grape .' """ DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in 
translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . 
the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. 
This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function ###source_sent = [ sent for sent in source_text.split("\n") ] ###target_sent = [ sent + ' <EOS>' for sent in target_text.split("\n") ] ###source_ids = [ [ source_vocab_to_int[word] for word in sent.split() ] for sent in source_sent ] ###target_ids = [ [ target_vocab_to_int[word] for word in sent.split() ] for sent in target_sent ] # Advice from Udacity Reviewer target_ids = [[target_vocab_to_int[w] for w in s.split()] + [target_vocab_to_int['<EOS>']] for s in target_text.split('\n')] source_ids = [[source_vocab_to_int[w] for w in s.split()] for s in source_text.split('\n')] return source_ids, target_ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
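As a quick recap before the checkpoint: the key detail of `text_to_ids` is appending the `<EOS>` id to every target sentence. A self-contained toy check (the vocabularies and ids below are made up for illustration):

```python
# Toy illustration of text_to_ids: target sentences get the <EOS> id
# appended so the decoder can learn when to stop. Ids are made up.
source_vocab_to_int = {'new': 10, 'jersey': 11}
target_vocab_to_int = {'<EOS>': 1, 'new': 20, 'jersey': 21}

def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    source_ids = [[source_vocab_to_int[w] for w in s.split()]
                  for s in source_text.split('\n')]
    target_ids = [[target_vocab_to_int[w] for w in s.split()] + [target_vocab_to_int['<EOS>']]
                  for s in target_text.split('\n')]
    return source_ids, target_ids

src, tgt = text_to_ids('new jersey', 'new jersey', source_vocab_to_int, target_vocab_to_int)
print(src)  # [[10, 11]]
print(tgt)  # [[20, 21, 1]]  -- note the trailing <EOS> id
```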
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.1.0 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function input_ = tf.placeholder( tf.int32, [None, None], name = "input" ) target_ = tf.placeholder( tf.int32, [None, None], name = "target" ) learn_rate_ = tf.placeholder( tf.float32, None, name = "learn_rate" ) keep_prob_ = tf.placeholder( tf.float32, None, name = "keep_prob" ) target_sequence_length = tf.placeholder( tf.int32, [None], name="target_sequence_length" ) max_target_sequence_length = tf.reduce_max( target_sequence_length, name="max_target_len" ) source_sequence_length = tf.placeholder( tf.int32, [None], name="source_sequence_length" ) return input_, target_, learn_rate_, keep_prob_, target_sequence_length, max_target_sequence_length, source_sequence_length """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoder InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch. 
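In plain Python, that transformation is simply "drop the last id, prepend `<GO>`". A toy sketch (made-up ids; the real implementation below does the same thing on tensors with `tf.strided_slice`, `tf.fill`, and `tf.concat`):

```python
# Toy picture of process_decoder_input: for each target row, drop the
# final word id and prepend the <GO> id. All ids here are made up.
GO = 1
batch = [[4, 5, 6], [7, 8, 9]]
decoder_input = [[GO] + row[:-1] for row in batch]
print(decoder_input)  # [[1, 4, 5], [1, 7, 8]]
```

The last id is safe to drop because the decoder never needs to *read* the final token — it only has to predict it.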
###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function go_id = target_vocab_to_int[ '<GO>' ] ending_text = tf.strided_slice( target_data, [0, 0], [batch_size, -1], [1, 1] ) decoded_text = tf.concat( [ tf.fill([batch_size, 1], go_id), ending_text ], 1) return decoded_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.mdstacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple 
(RNN output, RNN state) """ # TODO: Implement Function encod_inputs = tf.contrib.layers.embed_sequence( rnn_inputs, source_vocab_size, encoding_embedding_size ) rnn_cell = tf.contrib.rnn.MultiRNNCell( [ tf.contrib.rnn.LSTMCell( rnn_size ) for _ in range(num_layers) ] ) # Adding dropout layer rnn_cell = tf.contrib.rnn.DropoutWrapper( rnn_cell, output_keep_prob = keep_prob ) rnn_output, rnn_state = tf.nn.dynamic_rnn( rnn_cell, encod_inputs, source_sequence_length, dtype = tf.float32 ) return rnn_output, rnn_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function decode_helper = tf.contrib.seq2seq.TrainingHelper( dec_embed_input, target_sequence_length ) decoder = tf.contrib.seq2seq.BasicDecoder( dec_cell, decode_helper, encoder_state, output_layer ) decoder_outputs, decoder_state = 
tf.contrib.seq2seq.dynamic_decode( decoder, impute_finished=True, maximum_iterations= max_summary_length ) return decoder_outputs """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function start_tokens = tf.tile( tf.constant( [start_of_sequence_id], dtype=tf.int32), [ batch_size ], name = "start_tokens" ) decode_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper( dec_embeddings, start_tokens, end_of_sequence_id ) decoder = tf.contrib.seq2seq.BasicDecoder( dec_cell, decode_helper, encoder_state, output_layer = output_layer ) decoder_outputs, decoder_state = tf.contrib.seq2seq.dynamic_decode( decoder, 
impute_finished=True, maximum_iterations = max_target_sequence_length ) return decoder_outputs """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
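As a quick aside before wiring this up: embedding the target sequences (the first bullet above) amounts to gathering rows from an embedding matrix by word id. A minimal NumPy sketch of that lookup — the matrix shape and ids below are invented for illustration, not taken from the project data:

```python
import numpy as np

# Hypothetical embedding matrix: vocabulary of 5 words, embedding size 3.
embeddings = np.arange(15, dtype=np.float32).reshape(5, 3)

# A batch of target word ids, shape (batch=2, time=2).
ids = np.array([[0, 4], [2, 2]])

# tf.nn.embedding_lookup(embeddings, ids) behaves like fancy indexing:
# each id is replaced by the corresponding row of the matrix.
looked_up = embeddings[ids]

print(looked_up.shape)  # (2, 2, 3)
```

Each id becomes one embedding vector, so a (batch, time) id tensor comes out with shape (batch, time, embedding_size).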
###Code from tensorflow.python.layers import core as layers_core def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function decode_embed = tf.Variable( tf.random_uniform( [ target_vocab_size, decoding_embedding_size ] ) ) decode_embed_input = tf.nn.embedding_lookup( decode_embed, dec_input ) decode_cell = tf.contrib.rnn.MultiRNNCell( [ tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers) ] ) # Adding dropout layer decode_cell = tf.contrib.rnn.DropoutWrapper( decode_cell, output_keep_prob = keep_prob ) output_layer = layers_core.Dense( target_vocab_size, kernel_initializer = tf.truncated_normal_initializer( mean = 0.0, stddev=0.1 ) ) with tf.variable_scope( "decoding" ) as decoding_scope: decode_outputs_train = decoding_layer_train( encoder_state, decode_cell, decode_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob ) SOS_id = target_vocab_to_int[ "<GO>" ] EOS_id = target_vocab_to_int[ "<EOS>" ] with tf.variable_scope( "decoding", reuse=True) as decoding_scope: decode_outputs_infer = decoding_layer_infer( encoder_state, decode_cell, decode_embed, SOS_id,EOS_id, max_target_sequence_length,target_vocab_size, output_layer, batch_size, keep_prob ) return 
decode_outputs_train, decode_outputs_infer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Apply embedding to the target data for the decoder.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function encode_output,
encode_state = encoding_layer( input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size ) decode_input = process_decoder_input( target_data, target_vocab_to_int, batch_size ) decode_outputs_train, decode_outputs_infer = decoding_layer( decode_input, encode_state, target_sequence_length, tf.reduce_max( target_sequence_length ), rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size ) return decode_outputs_train, decode_outputs_infer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 10 # Batch Size batch_size = 256 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 128 decoding_embedding_size = 128 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.8 display_step = 10 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
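One detail worth previewing: the graph-building cell masks the loss with `tf.sequence_mask`, so that `<PAD>` positions do not count toward the cost. Its behavior can be sketched in plain Python (the lengths below are made up for illustration):

```python
def sequence_mask(lengths, maxlen):
    # Each row has 1.0 up to that sequence's true length, then 0.0 —
    # the same pattern tf.sequence_mask(lengths, maxlen, dtype=tf.float32)
    # produces for weighting tf.contrib.seq2seq.sequence_loss.
    return [[1.0 if t < n else 0.0 for t in range(maxlen)] for n in lengths]

mask = sequence_mask([2, 4, 3], maxlen=4)
print(mask)
# [[1.0, 1.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 0.0]]
```

Multiplying the per-timestep cross-entropy by this mask means padded timesteps contribute nothing to the averaged loss.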
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for 
sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
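A related detail when judging training progress: the accuracy helper in the next cell pads targets and predictions to a common width before comparing them elementwise. A small NumPy sketch of that alignment, with invented arrays:

```python
import numpy as np

def pad_to(a, width):
    # Right-pad a 2-D array with zeros to the given number of columns.
    return np.pad(a, [(0, 0), (0, width - a.shape[1])], 'constant')

target = np.array([[1, 2, 3]])
logits = np.array([[1, 2]])  # prediction one timestep shorter

width = max(target.shape[1], logits.shape[1])
acc = np.mean(np.equal(pad_to(target, width), pad_to(logits, width)))
print(acc)  # 2 of 3 positions agree, so roughly 0.667
```

Note that zero-padding both sides can make trailing `<PAD>` positions count as "correct", so this accuracy is an optimistic proxy rather than an exact sequence match.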
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 10/538 - Train Accuracy: 0.2750, Validation Accuracy: 0.3580, Loss: 3.5907 Epoch 0 Batch 20/538 - Train Accuracy: 0.3421, Validation Accuracy: 0.3867, Loss: 3.1406 Epoch 0 Batch 30/538 - Train Accuracy: 0.3754, Validation Accuracy: 0.4334, Loss: 2.7792 Epoch 0 Batch 40/538 - Train Accuracy: 0.4814, Validation Accuracy: 0.4862, Loss: 2.2332 Epoch 0 Batch 50/538 - Train Accuracy: 0.4697, Validation Accuracy: 0.4993, Loss: 2.1344 Epoch 0 Batch 60/538 - Train Accuracy: 0.4504, Validation Accuracy: 0.4885, Loss: 1.9996 Epoch 0 Batch 70/538 - Train Accuracy: 0.5130, Validation Accuracy: 0.5126, Loss: 1.6135 Epoch 0 Batch 80/538 - Train Accuracy: 0.4852, Validation Accuracy: 0.5353, Loss: 1.4580 Epoch 0 Batch 90/538 - Train Accuracy: 0.5355, Validation Accuracy: 0.5605, Loss: 1.1581 Epoch 0 Batch 100/538 - Train Accuracy: 0.5424, Validation Accuracy: 0.5543, Loss: 0.9910 Epoch 0 Batch 110/538 - Train Accuracy: 0.5148, Validation Accuracy: 0.5625, Loss: 0.9339 Epoch 0 Batch 120/538 - Train Accuracy: 0.5584, Validation Accuracy: 0.5742, Loss: 0.7966 Epoch 0 Batch 130/538 - Train Accuracy: 0.5582, Validation Accuracy: 0.5444, Loss: 0.7359 Epoch 0 Batch 140/538 - Train Accuracy: 0.5469, Validation Accuracy: 0.5595, Loss: 0.7922 Epoch 0 Batch 150/538 - Train Accuracy: 0.5795, Validation Accuracy: 0.5767, Loss: 0.6854 Epoch 0 Batch 160/538 - Train Accuracy: 0.6034, Validation Accuracy: 0.5964, Loss: 0.6193 Epoch 0 Batch 170/538 - Train Accuracy: 0.6055, Validation Accuracy: 0.5906, Loss: 0.6137 Epoch 0 Batch 180/538 - Train Accuracy: 0.6546, Validation Accuracy: 0.6289, Loss: 0.5773 Epoch 0 Batch 190/538 - Train Accuracy: 0.6404, Validation Accuracy: 0.6341, Loss: 0.5809 Epoch 0 Batch 200/538 - Train Accuracy: 0.6432, Validation 
Accuracy: 0.6296, Loss: 0.5532 Epoch 0 Batch 210/538 - Train Accuracy: 0.6235, Validation Accuracy: 0.6472, Loss: 0.5421 Epoch 0 Batch 220/538 - Train Accuracy: 0.6328, Validation Accuracy: 0.6506, Loss: 0.5062 Epoch 0 Batch 230/538 - Train Accuracy: 0.6570, Validation Accuracy: 0.6483, Loss: 0.5024 Epoch 0 Batch 240/538 - Train Accuracy: 0.6400, Validation Accuracy: 0.6619, Loss: 0.4988 Epoch 0 Batch 250/538 - Train Accuracy: 0.6844, Validation Accuracy: 0.6564, Loss: 0.4717 Epoch 0 Batch 260/538 - Train Accuracy: 0.6663, Validation Accuracy: 0.6758, Loss: 0.4429 Epoch 0 Batch 270/538 - Train Accuracy: 0.6801, Validation Accuracy: 0.6866, Loss: 0.4447 Epoch 0 Batch 280/538 - Train Accuracy: 0.7115, Validation Accuracy: 0.6799, Loss: 0.4020 Epoch 0 Batch 290/538 - Train Accuracy: 0.7143, Validation Accuracy: 0.6900, Loss: 0.3985 Epoch 0 Batch 300/538 - Train Accuracy: 0.7178, Validation Accuracy: 0.7040, Loss: 0.3906 Epoch 0 Batch 310/538 - Train Accuracy: 0.7291, Validation Accuracy: 0.6976, Loss: 0.3852 Epoch 0 Batch 320/538 - Train Accuracy: 0.6804, Validation Accuracy: 0.7053, Loss: 0.3711 Epoch 0 Batch 330/538 - Train Accuracy: 0.7347, Validation Accuracy: 0.7072, Loss: 0.3398 Epoch 0 Batch 340/538 - Train Accuracy: 0.7008, Validation Accuracy: 0.7310, Loss: 0.3530 Epoch 0 Batch 350/538 - Train Accuracy: 0.7502, Validation Accuracy: 0.7347, Loss: 0.3443 Epoch 0 Batch 360/538 - Train Accuracy: 0.7365, Validation Accuracy: 0.7227, Loss: 0.3216 Epoch 0 Batch 370/538 - Train Accuracy: 0.7541, Validation Accuracy: 0.7445, Loss: 0.3156 Epoch 0 Batch 380/538 - Train Accuracy: 0.7707, Validation Accuracy: 0.7202, Loss: 0.2817 Epoch 0 Batch 390/538 - Train Accuracy: 0.7814, Validation Accuracy: 0.7443, Loss: 0.2698 Epoch 0 Batch 400/538 - Train Accuracy: 0.7684, Validation Accuracy: 0.7591, Loss: 0.2666 Epoch 0 Batch 410/538 - Train Accuracy: 0.8049, Validation Accuracy: 0.7523, Loss: 0.2599 Epoch 0 Batch 420/538 - Train Accuracy: 0.7945, Validation Accuracy: 0.7624, 
Loss: 0.2545 Epoch 0 Batch 430/538 - Train Accuracy: 0.7766, Validation Accuracy: 0.7694, Loss: 0.2349 Epoch 0 Batch 440/538 - Train Accuracy: 0.7918, Validation Accuracy: 0.7766, Loss: 0.2389 Epoch 0 Batch 450/538 - Train Accuracy: 0.8155, Validation Accuracy: 0.7919, Loss: 0.2342 Epoch 0 Batch 460/538 - Train Accuracy: 0.7889, Validation Accuracy: 0.8061, Loss: 0.2134 Epoch 0 Batch 470/538 - Train Accuracy: 0.8451, Validation Accuracy: 0.7985, Loss: 0.1917 Epoch 0 Batch 480/538 - Train Accuracy: 0.8562, Validation Accuracy: 0.8232, Loss: 0.1809 Epoch 0 Batch 490/538 - Train Accuracy: 0.8415, Validation Accuracy: 0.8192, Loss: 0.1679 Epoch 0 Batch 500/538 - Train Accuracy: 0.8686, Validation Accuracy: 0.8358, Loss: 0.1494 Epoch 0 Batch 510/538 - Train Accuracy: 0.8722, Validation Accuracy: 0.8303, Loss: 0.1552 Epoch 0 Batch 520/538 - Train Accuracy: 0.8535, Validation Accuracy: 0.8473, Loss: 0.1692 Epoch 0 Batch 530/538 - Train Accuracy: 0.8426, Validation Accuracy: 0.8466, Loss: 0.1530 Epoch 1 Batch 10/538 - Train Accuracy: 0.8812, Validation Accuracy: 0.8565, Loss: 0.1379 Epoch 1 Batch 20/538 - Train Accuracy: 0.8906, Validation Accuracy: 0.8491, Loss: 0.1281 Epoch 1 Batch 30/538 - Train Accuracy: 0.8645, Validation Accuracy: 0.8601, Loss: 0.1342 Epoch 1 Batch 40/538 - Train Accuracy: 0.8958, Validation Accuracy: 0.8688, Loss: 0.1084 Epoch 1 Batch 50/538 - Train Accuracy: 0.8926, Validation Accuracy: 0.8649, Loss: 0.1135 Epoch 1 Batch 60/538 - Train Accuracy: 0.9018, Validation Accuracy: 0.8841, Loss: 0.1106 Epoch 1 Batch 70/538 - Train Accuracy: 0.8904, Validation Accuracy: 0.8645, Loss: 0.1062 Epoch 1 Batch 80/538 - Train Accuracy: 0.9000, Validation Accuracy: 0.8915, Loss: 0.1092 Epoch 1 Batch 90/538 - Train Accuracy: 0.8938, Validation Accuracy: 0.8752, Loss: 0.1103 Epoch 1 Batch 100/538 - Train Accuracy: 0.9051, Validation Accuracy: 0.8961, Loss: 0.0889 Epoch 1 Batch 110/538 - Train Accuracy: 0.9010, Validation Accuracy: 0.9023, Loss: 0.0901 Epoch 1 Batch 
120/538 - Train Accuracy: 0.9471, Validation Accuracy: 0.9022, Loss: 0.0723 Epoch 1 Batch 130/538 - Train Accuracy: 0.9107, Validation Accuracy: 0.9070, Loss: 0.0847 Epoch 1 Batch 140/538 - Train Accuracy: 0.8945, Validation Accuracy: 0.8841, Loss: 0.1078 Epoch 1 Batch 150/538 - Train Accuracy: 0.9164, Validation Accuracy: 0.8988, Loss: 0.0760 Epoch 1 Batch 160/538 - Train Accuracy: 0.9252, Validation Accuracy: 0.8972, Loss: 0.0708 Epoch 1 Batch 170/538 - Train Accuracy: 0.9118, Validation Accuracy: 0.8794, Loss: 0.0827 Epoch 1 Batch 180/538 - Train Accuracy: 0.9169, Validation Accuracy: 0.8835, Loss: 0.0807 Epoch 1 Batch 190/538 - Train Accuracy: 0.9152, Validation Accuracy: 0.8931, Loss: 0.0961 Epoch 1 Batch 200/538 - Train Accuracy: 0.9229, Validation Accuracy: 0.9134, Loss: 0.0689 Epoch 1 Batch 210/538 - Train Accuracy: 0.8899, Validation Accuracy: 0.8977, Loss: 0.0671 Epoch 1 Batch 220/538 - Train Accuracy: 0.9219, Validation Accuracy: 0.9054, Loss: 0.0774 Epoch 1 Batch 230/538 - Train Accuracy: 0.9039, Validation Accuracy: 0.9190, Loss: 0.0704 Epoch 1 Batch 240/538 - Train Accuracy: 0.9270, Validation Accuracy: 0.9148, Loss: 0.0717 Epoch 1 Batch 250/538 - Train Accuracy: 0.9326, Validation Accuracy: 0.9158, Loss: 0.0714 Epoch 1 Batch 260/538 - Train Accuracy: 0.9001, Validation Accuracy: 0.9153, Loss: 0.0723 Epoch 1 Batch 270/538 - Train Accuracy: 0.9205, Validation Accuracy: 0.9082, Loss: 0.0630 Epoch 1 Batch 280/538 - Train Accuracy: 0.9271, Validation Accuracy: 0.9153, Loss: 0.0583 Epoch 1 Batch 290/538 - Train Accuracy: 0.9494, Validation Accuracy: 0.9112, Loss: 0.0597 Epoch 1 Batch 300/538 - Train Accuracy: 0.9345, Validation Accuracy: 0.9263, Loss: 0.0668 Epoch 1 Batch 310/538 - Train Accuracy: 0.9580, Validation Accuracy: 0.9297, Loss: 0.0659 Epoch 1 Batch 320/538 - Train Accuracy: 0.9204, Validation Accuracy: 0.9213, Loss: 0.0587 Epoch 1 Batch 330/538 - Train Accuracy: 0.9619, Validation Accuracy: 0.9171, Loss: 0.0582 Epoch 1 Batch 340/538 - Train 
Accuracy: 0.9309, Validation Accuracy: 0.9254, Loss: 0.0586 Epoch 1 Batch 350/538 - Train Accuracy: 0.9384, Validation Accuracy: 0.9110, Loss: 0.0708 Epoch 1 Batch 360/538 - Train Accuracy: 0.9318, Validation Accuracy: 0.9387, Loss: 0.0581 ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function sequence = [ vocab_to_int.get( word, vocab_to_int[ "<UNK>"] ) for word in sentence.lower().split() ] return sequence """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [7, 158, 180, 33, 179, 172, 89] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [330, 167, 283, 316, 69, 21, 208, 122, 1] French Words: il a vu la petite voiture jaune . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end. You can get the `<EOS>` word id by doing: ```python target_vocab_to_int['<EOS>'] ``` You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ source_id_text = [[source_vocab_to_int[word] for word in n.split()] for n in source_text.split("\n")] target_id_text = [[target_vocab_to_int[word] for word in n.split()]+[target_vocab_to_int['<EOS>']] for n in target_text.split("\n")] return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. 
Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.1.0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ input = tf.placeholder(tf.int32,[None,None],name='input') target = tf.placeholder(tf.int32,[None,None]) target_sequence_len = tf.placeholder(tf.int32, [None], name='target_sequence_length') max_target_len = tf.reduce_max(target_sequence_len, name='max_target_len') source_sequence_len = tf.placeholder(tf.int32, [None], name='source_sequence_length') learning_rate = tf.placeholder(tf.float32) keep_prob = tf.placeholder(tf.float32, name='keep_prob') return input, target, learning_rate, keep_prob, target_sequence_len, max_target_len, source_sequence_len """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoder InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
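Before implementing this with `tf.strided_slice` and `tf.concat`, the transformation is easy to preview on plain Python lists — drop each sequence's last id and prepend the GO id. The ids below are invented for illustration; in the notebook the GO id comes from `target_vocab_to_int['<GO>']`:

```python
GO_ID = 1  # hypothetical id standing in for target_vocab_to_int['<GO>']

def process_decoder_input_py(target_batch, go_id):
    # Remove the final word id of each sequence, then prepend the GO id.
    return [[go_id] + seq[:-1] for seq in target_batch]

batch = [[11, 12, 13], [21, 22, 23]]
print(process_decoder_input_py(batch, GO_ID))  # [[1, 11, 12], [1, 21, 22]]
```

The decoder is trained to emit each word given the previous *target* word, so its input is the target sequence shifted right by one, starting from `<GO>`.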
###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) decoder_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return decoder_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ encoder_inputs =
tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(rnn_size), output_keep_prob=keep_prob) for _ in range(num_layers)]) RNN_output, RNN_state = tf.nn.dynamic_rnn(cell, encoder_inputs, sequence_length=source_sequence_length, dtype=tf.float32) return RNN_output, RNN_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) basic_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,training_helper,encoder_state,output_layer) training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(basic_decoder,impute_finished=True,maximum_iterations= 
max_summary_length) return training_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS ID :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens') inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True, maximum_iterations=max_target_sequence_length) return inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
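The greedy inference decoding that `decoding_layer_infer` delegates to `GreedyEmbeddingHelper` and `dynamic_decode` can be sketched in plain Python: feed each prediction back in, take the argmax, and stop at `<EOS>` or at a length cap. The toy `step` function and the token ids below are illustrative assumptions, not part of the project code:

```python
# Plain-Python sketch of greedy decoding over a 5-word toy vocabulary.
def greedy_decode(step, start_id, eos_id, max_len):
    """step(prev_id) -> list of scores over the vocabulary."""
    output, prev = [], start_id
    for _ in range(max_len):
        scores = step(prev)
        prev = max(range(len(scores)), key=scores.__getitem__)  # argmax
        output.append(prev)
        if prev == eos_id:   # stop as soon as <EOS> is emitted
            break
    return output

# Toy "model": always score the next vocabulary id highest, then emit <EOS> (id 1).
def toy_step(prev_id):
    nxt = prev_id + 1 if prev_id < 4 else 1
    return [1.0 if i == nxt else 0.0 for i in range(5)]

print(greedy_decode(toy_step, start_id=2, eos_id=1, max_len=10))
# [3, 4, 1]
```

`dynamic_decode` performs the same loop batch-wide on the GPU, with `maximum_iterations` playing the role of `max_len`.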
###Code from tensorflow.python.layers import core as layers_core def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # embedding target sequence dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # construct decoder lstm cell dec_cell = tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)]),output_keep_prob=keep_prob) # create output layer to map the outputs of the decoder to the elements of our vocabulary output_layer = layers_core.Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) # decoder train with tf.variable_scope("decoding") as decoding_scope: dec_outputs_train = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) # decoder inference start_of_sequence_id = target_vocab_to_int["<GO>"] end_of_sequence_id = target_vocab_to_int["<EOS>"] with tf.variable_scope("decoding", reuse=True) as decoding_scope: dec_outputs_infer = 
decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return dec_outputs_train, dec_outputs_infer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training
BasicDecoderOutput, Inference BasicDecoderOutput) """ enc_output, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) # process target data dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) # embedding and decoding dec_outputs_train, dec_outputs_infer = decoding_layer(dec_input, enc_state, target_sequence_length, tf.reduce_max(max_target_sentence_length), rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return dec_outputs_train, dec_outputs_infer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 5 # Batch Size batch_size = 256 # RNN Size rnn_size = 512 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 256 decoding_embedding_size = 256 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.8 # Display Step display_step = 10 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
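With `batch_size = 256` set in the hyperparameter cell above, the `Batch x/538` counter seen in the training log follows directly from the corpus size (137,861 sentences, per the data-exploration stats) via the integer division used in the training loop:

```python
# Batches per epoch printed in the training log:
# len(source_int_text) // batch_size with the 137,861-sentence corpus.
num_sentences = 137861
batch_size = 256
print(num_sentences // batch_size)  # 538
```

The 133 leftover sentences that do not fill a final batch are simply dropped by `get_batches`.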
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for 
sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
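The `<PAD>`-based padding performed by `pad_sentence_batch` above can be spot-checked on a toy batch; this standalone sketch mirrors the list comprehension in that cell, with pad id 0 as an illustrative assumption:

```python
# Pure-Python version of pad_sentence_batch: right-pad every sentence to the
# length of the longest sentence in the batch.
def pad_sentence_batch_toy(sentence_batch, pad_int):
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

print(pad_sentence_batch_toy([[5, 6, 7], [8], [9, 10]], pad_int=0))
# [[5, 6, 7], [8, 0, 0], [9, 10, 0]]
```

Note that padding is per batch, not global: each yielded batch is only as wide as its own longest sentence.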
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 10/538 - Train Accuracy: 0.2686, Validation Accuracy: 0.3530, Loss: 3.5477 Epoch 0 Batch 20/538 - Train Accuracy: 0.3469, Validation Accuracy: 0.3858, Loss: 2.8958 Epoch 0 Batch 30/538 - Train Accuracy: 0.3609, Validation Accuracy: 0.4206, Loss: 2.7392 Epoch 0 Batch 40/538 - Train Accuracy: 0.4526, Validation Accuracy: 0.4474, Loss: 2.2537 Epoch 0 Batch 50/538 - Train Accuracy: 0.4592, Validation Accuracy: 0.5014, Loss: 2.1874 Epoch 0 Batch 60/538 - Train Accuracy: 0.4574, Validation Accuracy: 0.5158, Loss: 2.0013 Epoch 0 Batch 70/538 - Train Accuracy: 0.4933, Validation Accuracy: 0.5217, Loss: 1.7485 Epoch 0 Batch 80/538 - Train Accuracy: 0.4713, Validation Accuracy: 0.5220, Loss: 1.6875 Epoch 0 Batch 90/538 - Train Accuracy: 0.5179, Validation Accuracy: 0.5384, Loss: 1.3981 Epoch 0 Batch 100/538 - Train Accuracy: 0.4977, Validation Accuracy: 0.5236, Loss: 1.2250 Epoch 0 Batch 110/538 - Train Accuracy: 0.5145, Validation Accuracy: 0.5447, Loss: 1.1504 Epoch 0 Batch 120/538 - Train Accuracy: 0.5012, Validation Accuracy: 0.5444, Loss: 0.9934 Epoch 0 Batch 130/538 - Train Accuracy: 0.5147, Validation Accuracy: 0.5323, Loss: 0.8930 Epoch 0 Batch 140/538 - Train Accuracy: 0.5141, Validation Accuracy: 0.5558, Loss: 0.9148 Epoch 0 Batch 150/538 - Train Accuracy: 0.5449, Validation Accuracy: 0.5501, Loss: 0.8247 Epoch 0 Batch 160/538 - Train Accuracy: 0.5552, Validation Accuracy: 0.5506, Loss: 0.7543 Epoch 0 Batch 170/538 - Train Accuracy: 0.5863, Validation Accuracy: 0.5636, Loss: 0.7417 Epoch 0 Batch 180/538 - Train Accuracy: 0.6047, Validation Accuracy: 0.5815, Loss: 0.6984 Epoch 0 Batch 190/538 - Train Accuracy: 0.6153, Validation Accuracy: 0.6003, Loss: 0.6839 Epoch 0 Batch 200/538 - Train Accuracy: 0.6086, Validation 
Accuracy: 0.6060, Loss: 0.6629 Epoch 0 Batch 210/538 - Train Accuracy: 0.5869, Validation Accuracy: 0.5875, Loss: 0.6389 Epoch 0 Batch 220/538 - Train Accuracy: 0.5932, Validation Accuracy: 0.5895, Loss: 0.6169 Epoch 0 Batch 230/538 - Train Accuracy: 0.6188, Validation Accuracy: 0.6314, Loss: 0.6339 Epoch 0 Batch 240/538 - Train Accuracy: 0.6227, Validation Accuracy: 0.6108, Loss: 0.6212 Epoch 0 Batch 250/538 - Train Accuracy: 0.6289, Validation Accuracy: 0.6223, Loss: 0.5924 Epoch 0 Batch 260/538 - Train Accuracy: 0.6291, Validation Accuracy: 0.6557, Loss: 0.5761 Epoch 0 Batch 270/538 - Train Accuracy: 0.6504, Validation Accuracy: 0.6614, Loss: 0.5793 Epoch 0 Batch 280/538 - Train Accuracy: 0.6734, Validation Accuracy: 0.6491, Loss: 0.5349 Epoch 0 Batch 290/538 - Train Accuracy: 0.6643, Validation Accuracy: 0.6628, Loss: 0.5370 Epoch 0 Batch 300/538 - Train Accuracy: 0.6750, Validation Accuracy: 0.6781, Loss: 0.5152 Epoch 0 Batch 310/538 - Train Accuracy: 0.6947, Validation Accuracy: 0.6921, Loss: 0.5109 Epoch 0 Batch 320/538 - Train Accuracy: 0.6888, Validation Accuracy: 0.6926, Loss: 0.4903 Epoch 0 Batch 330/538 - Train Accuracy: 0.6778, Validation Accuracy: 0.6614, Loss: 0.4667 Epoch 0 Batch 340/538 - Train Accuracy: 0.6771, Validation Accuracy: 0.6836, Loss: 0.4912 Epoch 0 Batch 350/538 - Train Accuracy: 0.7065, Validation Accuracy: 0.6939, Loss: 0.4590 Epoch 0 Batch 360/538 - Train Accuracy: 0.6912, Validation Accuracy: 0.7108, Loss: 0.4437 Epoch 0 Batch 370/538 - Train Accuracy: 0.7111, Validation Accuracy: 0.7131, Loss: 0.4454 Epoch 0 Batch 380/538 - Train Accuracy: 0.7559, Validation Accuracy: 0.7314, Loss: 0.4108 Epoch 0 Batch 390/538 - Train Accuracy: 0.7338, Validation Accuracy: 0.7085, Loss: 0.3934 Epoch 0 Batch 400/538 - Train Accuracy: 0.7180, Validation Accuracy: 0.7095, Loss: 0.3924 Epoch 0 Batch 410/538 - Train Accuracy: 0.7432, Validation Accuracy: 0.7280, Loss: 0.3879 Epoch 0 Batch 420/538 - Train Accuracy: 0.7344, Validation Accuracy: 0.7049, 
Loss: 0.3823 Epoch 0 Batch 430/538 - Train Accuracy: 0.7375, Validation Accuracy: 0.7156, Loss: 0.3530 Epoch 0 Batch 440/538 - Train Accuracy: 0.7438, Validation Accuracy: 0.7360, Loss: 0.3512 Epoch 0 Batch 450/538 - Train Accuracy: 0.7359, Validation Accuracy: 0.7244, Loss: 0.3419 Epoch 0 Batch 460/538 - Train Accuracy: 0.7480, Validation Accuracy: 0.7573, Loss: 0.3144 Epoch 0 Batch 470/538 - Train Accuracy: 0.7660, Validation Accuracy: 0.7495, Loss: 0.3089 Epoch 0 Batch 480/538 - Train Accuracy: 0.7826, Validation Accuracy: 0.7775, Loss: 0.2863 Epoch 0 Batch 490/538 - Train Accuracy: 0.7984, Validation Accuracy: 0.7995, Loss: 0.2710 Epoch 0 Batch 500/538 - Train Accuracy: 0.8349, Validation Accuracy: 0.7905, Loss: 0.2391 Epoch 0 Batch 510/538 - Train Accuracy: 0.8263, Validation Accuracy: 0.8042, Loss: 0.2474 Epoch 0 Batch 520/538 - Train Accuracy: 0.7877, Validation Accuracy: 0.8086, Loss: 0.2489 Epoch 0 Batch 530/538 - Train Accuracy: 0.7967, Validation Accuracy: 0.8184, Loss: 0.2509 Epoch 1 Batch 10/538 - Train Accuracy: 0.8295, Validation Accuracy: 0.8265, Loss: 0.2437 Epoch 1 Batch 20/538 - Train Accuracy: 0.8354, Validation Accuracy: 0.8345, Loss: 0.2117 Epoch 1 Batch 30/538 - Train Accuracy: 0.8449, Validation Accuracy: 0.8530, Loss: 0.2079 Epoch 1 Batch 40/538 - Train Accuracy: 0.8484, Validation Accuracy: 0.8638, Loss: 0.1758 Epoch 1 Batch 50/538 - Train Accuracy: 0.8684, Validation Accuracy: 0.8697, Loss: 0.1792 Epoch 1 Batch 60/538 - Train Accuracy: 0.8436, Validation Accuracy: 0.8485, Loss: 0.1796 Epoch 1 Batch 70/538 - Train Accuracy: 0.8785, Validation Accuracy: 0.8537, Loss: 0.1685 Epoch 1 Batch 80/538 - Train Accuracy: 0.8742, Validation Accuracy: 0.8810, Loss: 0.1764 Epoch 1 Batch 90/538 - Train Accuracy: 0.8728, Validation Accuracy: 0.8665, Loss: 0.1729 Epoch 1 Batch 100/538 - Train Accuracy: 0.8797, Validation Accuracy: 0.8700, Loss: 0.1473 Epoch 1 Batch 110/538 - Train Accuracy: 0.8697, Validation Accuracy: 0.8707, Loss: 0.1598 Epoch 1 Batch 
120/538 - Train Accuracy: 0.8965, Validation Accuracy: 0.8983, Loss: 0.1258 Epoch 1 Batch 130/538 - Train Accuracy: 0.8687, Validation Accuracy: 0.8869, Loss: 0.1378 Epoch 1 Batch 140/538 - Train Accuracy: 0.8727, Validation Accuracy: 0.8945, Loss: 0.1474 Epoch 1 Batch 150/538 - Train Accuracy: 0.8996, Validation Accuracy: 0.8952, Loss: 0.1254 Epoch 1 Batch 160/538 - Train Accuracy: 0.8824, Validation Accuracy: 0.8915, Loss: 0.1177 Epoch 1 Batch 170/538 - Train Accuracy: 0.8945, Validation Accuracy: 0.8883, Loss: 0.1253 Epoch 1 Batch 180/538 - Train Accuracy: 0.9118, Validation Accuracy: 0.8833, Loss: 0.1180 Epoch 1 Batch 190/538 - Train Accuracy: 0.9022, Validation Accuracy: 0.9228, Loss: 0.1296 Epoch 1 Batch 200/538 - Train Accuracy: 0.9053, Validation Accuracy: 0.9015, Loss: 0.0992 Epoch 1 Batch 210/538 - Train Accuracy: 0.9023, Validation Accuracy: 0.9102, Loss: 0.1065 Epoch 1 Batch 220/538 - Train Accuracy: 0.8977, Validation Accuracy: 0.8960, Loss: 0.1068 Epoch 1 Batch 230/538 - Train Accuracy: 0.8939, Validation Accuracy: 0.8991, Loss: 0.1004 Epoch 1 Batch 240/538 - Train Accuracy: 0.9062, Validation Accuracy: 0.9283, Loss: 0.0942 Epoch 1 Batch 250/538 - Train Accuracy: 0.9158, Validation Accuracy: 0.9025, Loss: 0.0928 Epoch 1 Batch 260/538 - Train Accuracy: 0.8897, Validation Accuracy: 0.8922, Loss: 0.0985 Epoch 1 Batch 270/538 - Train Accuracy: 0.9146, Validation Accuracy: 0.8961, Loss: 0.0865 Epoch 1 Batch 280/538 - Train Accuracy: 0.9224, Validation Accuracy: 0.9185, Loss: 0.0831 Epoch 1 Batch 290/538 - Train Accuracy: 0.9295, Validation Accuracy: 0.9066, Loss: 0.0773 Epoch 1 Batch 300/538 - Train Accuracy: 0.9139, Validation Accuracy: 0.9089, Loss: 0.0850 Epoch 1 Batch 310/538 - Train Accuracy: 0.9293, Validation Accuracy: 0.9116, Loss: 0.0869 Epoch 1 Batch 320/538 - Train Accuracy: 0.9087, Validation Accuracy: 0.9105, Loss: 0.0792 Epoch 1 Batch 330/538 - Train Accuracy: 0.9276, Validation Accuracy: 0.9146, Loss: 0.0716 Epoch 1 Batch 340/538 - Train 
Accuracy: 0.9238, Validation Accuracy: 0.9199, Loss: 0.0729 Epoch 1 Batch 350/538 - Train Accuracy: 0.9262, Validation Accuracy: 0.9277, Loss: 0.0827 Epoch 1 Batch 360/538 - Train Accuracy: 0.9064, Validation Accuracy: 0.9308, Loss: 0.0755 ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary, to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ sentence = sentence.lower() words = sentence.split() word_id_list = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in words] return word_id_list """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [206, 30, 51, 17, 150, 197, 101] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [137, 17, 228, 50, 29, 40, 123, 115, 1] French Words: il a vu un vieux camion noir . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of each sentence from `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_text.split('\n')] eos_word = target_vocab_to_int['<EOS>'] target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] + [eos_word] for sentence in target_text.split('\n')] #print(target_id_text) return (source_id_text, target_id_text) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
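As a sanity check, the `text_to_ids` logic implemented above behaves like this on a tiny hand-made vocabulary (the word-to-id mappings are illustrative assumptions):

```python
# Toy check of the text_to_ids logic: split on newlines/spaces, map words to
# ids, and append the <EOS> id to every target sentence.
def text_to_ids_toy(source_text, target_text, source_vocab, target_vocab):
    source_ids = [[source_vocab[w] for w in line.split()]
                  for line in source_text.split('\n')]
    eos = target_vocab['<EOS>']
    target_ids = [[target_vocab[w] for w in line.split()] + [eos]
                  for line in target_text.split('\n')]
    return source_ids, target_ids

src_vocab = {'hello': 4, 'world': 5}
tgt_vocab = {'bonjour': 6, 'monde': 7, '<EOS>': 1}
print(text_to_ids_toy('hello world', 'bonjour monde', src_vocab, tgt_vocab))
# ([[4, 5]], [[6, 7, 1]])
```

Only the target side gets `<EOS>`; the source side is consumed whole by the encoder, so no end marker is needed there.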
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0. You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.0.0 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoding_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability) """ # TODO: Implement Function data_input = tf.placeholder(tf.int32, shape=(None, None), name='input') data_target = tf.placeholder(tf.int32, shape=(None, None), name='targets') learn_rate = tf.placeholder(tf.float32, name='learning_rate') keep_prob = tf.placeholder(tf.float32, name='keep_prob') return (data_input, data_target, learn_rate, keep_prob) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoding InputImplement `process_decoding_input` using TensorFlow to remove the last word id from each batch in `target_data` and concat the GO ID to the beginning of each batch. ###Code def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function # Drop the last word id from each batch, then prepend the <GO> id (pattern from the lesson). ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn). ###Code def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # TODO: Implement Function # Initial state of the LSTM memory.
stacked_lstm_cells = tf.contrib.rnn.MultiRNNCell([build_rnn_cell(rnn_size, keep_prob) for _ in range(num_layers)], state_is_tuple=True) outputs, final_state = tf.nn.dynamic_rnn(stacked_lstm_cells, rnn_inputs, dtype=tf.float32) return final_state # Using TensorFlow 1.1 with a fresh cell per layer solves a variable-sharing error, as you will see below def build_rnn_cell(rnn_size, keep_prob): lstm_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=True) # Apply dropout to the cell outputs drop = tf.contrib.rnn.DropoutWrapper(lstm_cell, input_keep_prob=1.0, output_keep_prob=keep_prob) return drop """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate training logits using [`tf.contrib.seq2seq.simple_decoder_fn_train()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_train) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder). Apply the `output_fn` to the [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder) outputs.
###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ # TODO: Implement Function train_dynamic_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) train_outputs, train_final_state, train_final_ctx_state = tf.contrib.seq2seq.dynamic_rnn_decoder(cell=dec_cell, decoder_fn=train_dynamic_fn, inputs=dec_embed_input, sequence_length=sequence_length, scope=decoding_scope) train_logits = output_fn(train_outputs) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference logits using [`tf.contrib.seq2seq.simple_decoder_fn_inference()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_inference) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder). 
###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ # TODO: Implement Function infer_dynamic_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size) infer_outputs, infer_final_state, infer_final_ctx_state = tf.contrib.seq2seq.dynamic_rnn_decoder(cell=dec_cell, decoder_fn=infer_dynamic_fn, scope=decoding_scope) infer_logits = infer_outputs return infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.- Create RNN cell for decoding using `rnn_size` and `num_layers`.- Create the output function using [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) to transform its input, logits, to class logits.- Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)` function to get the training logits.- Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)` function to get the inference logits.Note: You'll need to use
[tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. ###Code def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function # Initial state of the LSTM memory. dec_stacked_cells = tf.contrib.rnn.MultiRNNCell([build_rnn_cell(rnn_size, keep_prob) for _ in range(num_layers)], state_is_tuple=True) end_seq_id = target_vocab_to_int['<EOS>'] start_seq_id = target_vocab_to_int['<GO>'] with tf.variable_scope("decoding", reuse=None) as decoding_scope: # as shown in problem_unittests.py output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope) with tf.variable_scope("decoding", reuse=None) as decoding_scope: train_logits = decoding_layer_train(encoder_state, dec_stacked_cells, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) # Reuse the decoding variables for inference with tf.variable_scope("decoding", reuse=True) as decoding_scope: infer_logits = decoding_layer_infer(encoder_state, dec_stacked_cells, dec_embeddings, start_seq_id, end_seq_id, sequence_length-1, vocab_size, decoding_scope, output_fn, keep_prob) return (train_logits, infer_logits) # Using TensorFlow 1.1 with a fresh cell per layer solves a variable-sharing error, as you will see below def build_rnn_cell(rnn_size, keep_prob): lstm_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=True) # Apply dropout to the cell outputs drop =
tf.contrib.rnn.DropoutWrapper(lstm_cell, input_keep_prob=1.0, output_keep_prob=keep_prob) return drop """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)`.- Process target data using your `process_decoding_input(target_data, target_vocab_to_int, batch_size)` function.- Apply embedding to the target data for the decoder.- Decode the encoded input using your `decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)`. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function #1 rnn_inputs = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) #2 input_encoded_state = encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob) #3 target_output = process_decoding_input(target_data, target_vocab_to_int, batch_size) #4 dec_embeds =
tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embeds_input = tf.nn.embedding_lookup(dec_embeds, target_output) #5 train_logits, infer_logits = decoding_layer(dec_embeds_input, dec_embeds, input_encoded_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return (train_logits, infer_logits) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability ###Code # Number of Epochs epochs = 6 # Batch Size batch_size = 128 # RNN Size rnn_size = 128 # Number of Layers num_layers = 1 # Embedding Size encoding_embedding_size = 256 decoding_embedding_size = 256 # Learning Rate learning_rate = 0.007 # Dropout Keep Probability keep_probability = 0.5 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
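The optimization step in the graph cell that follows clips every gradient component into [-1, 1] with `tf.clip_by_value` before applying the update. A pure-Python sketch of that clamping (illustrative only, not the TF op):

```python
# Sketch of per-component gradient clipping: every value is clamped
# into [low, high], matching what tf.clip_by_value(grad, -1., 1.) does
# element-wise in the graph below.
def clip(values, low=-1.0, high=1.0):
    return [max(low, min(high, v)) for v in values]

print(clip([-2.5, -0.3, 0.0, 0.9, 4.2]))  # [-1.0, -0.3, 0.0, 0.9, 1.0]
```

Clipping keeps a single exploding gradient from dominating an update step, which helps RNN training stay stable.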
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. 
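The `get_accuracy` helper in the training cell pads the shorter of the target and prediction arrays with zeros before comparing them element-wise. A simplified one-dimensional, pure-Python version of that idea (the ids are made up for the illustration):

```python
# Simplified sketch of the accuracy computation: pad both sequences to the
# same length with zeros, then take the fraction of matching positions.
def pad_to(seq, length, pad=0):
    return seq + [pad] * (length - len(seq))

target = [5, 6, 7]
pred = [5, 6, 9, 0]
n = max(len(target), len(pred))
matches = [a == b for a, b in zip(pad_to(target, n), pad_to(pred, n))]
accuracy = sum(matches) / n
print(accuracy)  # 0.75
```

The real helper does the same thing with `np.pad` on 2-D batches and takes `np.argmax` over the logits axis first.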
###Code import matplotlib.pyplot as plt """ DON'T MODIFY ANYTHING IN THIS CELL """ import sys import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) losses = {'train':[], 'validation':[]} for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() sys.stdout.write('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) losses['train'].append(train_acc)
losses['validation'].append(valid_acc) print("") # plot accuracy plt.plot(losses['train'], label='Training accuracy') plt.plot(losses['validation'], label='Validation accuracy') plt.legend() # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') plt.show() ###Output _____no_output_____ ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int`- Convert words not in the vocabulary, to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function words_lower_case = [word.lower() for word in sentence.split()] # default to '<UNK>' in case word is non-existent in domain seq_word_id = [vocab_to_int.get(l_case_word, vocab_to_int['<UNK>']) for l_case_word in words_lower_case] return seq_word_id """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French.
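The `sentence_to_seq()` preprocessing above can be illustrated with a toy vocabulary. The function name and the ids below are hypothetical, chosen only for the example:

```python
# Toy illustration of sentence_to_seq: lowercase each word, look it up in
# the vocabulary, and fall back to the <UNK> id for unknown words via dict.get.
vocab_to_int = {"<UNK>": 3, "he": 10, "saw": 11, "a": 12, "truck": 13}

def sentence_to_seq_demo(sentence, vocab_to_int):
    return [vocab_to_int.get(w.lower(), vocab_to_int["<UNK>"])
            for w in sentence.split()]

print(sentence_to_seq_demo("He saw a YELLOW truck", vocab_to_int))
# [10, 11, 12, 3, 13]  -- "yellow" is not in the toy vocab, so it maps to <UNK>
```

The `<UNK>` fallback matters at inference time because user input can contain words the model never saw during training.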
###Code translate_sentence = 'he saw a old yellow truck .' """ DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) ###Output Input Word Ids: [100, 46, 16, 184, 197, 111, 105] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [163, 10, 351, 237, 169, 111, 120, 306, 1] French Words: ['il', 'a', 'vu', 'un', 'vieux', 'camion', 'jaune', '.', '<EOS>'] ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (2000, 2010) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 2000 to 2010: she saw the shiny red automobile . new jersey is pleasant during july , and it is sometimes freezing in april . the peach is your most loved fruit , but the lemon is our most loved . he likes grapes , strawberries , and oranges. new jersey is usually cold during november , but it is never warm in october . the peach is her least favorite fruit , but the grapefruit is my least favorite . china is never pleasant during spring , and it is usually dry in july . california is never dry during fall , but it is never wonderful in autumn . how was your visit to china last autumn ? 
china is sometimes chilly during may , but it is sometimes cold in spring . French sentences 2000 to 2010: elle a vu la brillante voiture rouge . new jersey est agréable en juillet , et il est parfois le gel en avril . la pêche est votre fruit le plus aimé , mais le citron est notre plus aimé . il aime les raisins , les fraises et les oranges . new jersey est généralement froid en novembre , mais il est jamais chaud en octobre . la pêche est moins son fruit préféré , mais le pamplemousse est mon préféré moins . chine est jamais agréable au printemps , et il est généralement sec en juillet . californie est jamais à sec pendant l' automne , mais il est jamais merveilleux à l' automne . comment était votre visite en chine l' automne dernier ? la chine est parfois frisquet en mai , mais il est parfois froid au printemps . ###Markdown Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of each sentence from `target_text`. This will help the neural network predict when the sentence should end. You can get the `<EOS>` word id by doing: ```python target_vocab_to_int['<EOS>'] ``` You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function source_sentences = source_text.split('\n') target_sentences = target_text.split('\n') source_id_text = [] target_id_text = [] for sentence in source_sentences: source_sentence_id_text = [] for word in sentence.split(): source_sentence_id_text.append(source_vocab_to_int[word]) source_id_text.append(source_sentence_id_text) for sentence in target_sentences: target_sentence_id_text = [] for word in sentence.split(): target_sentence_id_text.append(target_vocab_to_int[word]) target_sentence_id_text.append(target_vocab_to_int['<EOS>']) target_id_text.append(target_sentence_id_text) return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint.
If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.0.0 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoding_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ # TODO: Implement Function inputs = tf.placeholder(tf.int32, [None, None], name="input") targets = tf.placeholder(tf.int32, [None, None]) learning_rate = tf.placeholder(tf.float32) keep_prob = tf.placeholder(tf.float32, name="keep_prob") return inputs, targets, learning_rate, keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoding InputImplement `process_decoding_input` using TensorFlow to remove the last word id from each batch in `target_data` and concat the GO ID to the beginning of each batch.
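The transformation that `process_decoding_input` performs on each batch can be sketched in pure Python: drop the last id of every target sequence and prepend the `<GO>` id. The ids below are illustrative, not taken from the real vocabulary.

```python
# Pure-Python sketch of process_decoding_input: for each row of the target
# batch, remove the final id and prepend the <GO> id, so the decoder input
# at step t is the target word from step t-1.
GO_ID = 2
batch = [[10, 11, 12, 1],   # each row ends with an <EOS> id (= 1 here)
         [20, 21, 22, 1]]

decoder_input = [[GO_ID] + row[:-1] for row in batch]
print(decoder_input)  # [[2, 10, 11, 12], [2, 20, 21, 22]]
```

The TensorFlow version in the next cell does the same shift on a tensor with `tf.strided_slice`, `tf.fill`, and `tf.concat`.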
###Code def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) decoded_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return decoded_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn). ###Code def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # TODO: Implement Function LSTM_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) LSTM_cell = tf.contrib.rnn.DropoutWrapper(LSTM_cell, keep_prob) encoded_cell = tf.contrib.rnn.MultiRNNCell([LSTM_cell] * num_layers) _, RNN_state = tf.nn.dynamic_rnn(encoded_cell, rnn_inputs, dtype=tf.float32) return RNN_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate training logits using [`tf.contrib.seq2seq.simple_decoder_fn_train()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_train) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder).
Apply the `output_fn` to the [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder) outputs. ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ # TODO: Implement Function train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder( dec_cell, train_decoder_fn, inputs=dec_embed_input, sequence_length=sequence_length, scope=decoding_scope) train_logits = output_fn(train_pred) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference logits using [`tf.contrib.seq2seq.simple_decoder_fn_inference()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_inference) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder).
###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ # TODO: Implement Function inference_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference( output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size) inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder( dec_cell, inference_decoder_fn, scope=decoding_scope) return inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.- Create RNN cell for decoding using `rnn_size` and `num_layers`.- Create the output fuction using [`lambda`](https://docs.python.org/3/tutorial/controlflow.htmllambda-expressions) to transform it's input, logits, to class logits.- Use the your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)` function to get the training logits.- Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)` function to get the inference logits.Note: You'll need to use 
[tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. ###Code def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function # Create RNN cell for decoding using rnn_size and num_layers dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers) # Create output function using lambda to transform inputs, logits, to class logits with tf.variable_scope("decoding") as decoding_scope: output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope) # Use decoding_layer_train() function to get training logits with tf.variable_scope("decoding") as decoding_scope: training_logits = decoding_layer_train( encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) # Use decoding_layer_infer() function to get inference logits with tf.variable_scope("decoding", reuse=True) as decoding_scope: inference_logits = decoding_layer_infer( encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], sequence_length - 1, vocab_size, decoding_scope, output_fn, keep_prob) return training_logits, inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the 
functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)`.- Process target data using your `process_decoding_input(target_data, target_vocab_to_int, batch_size)` function.- Apply embedding to the target data for the decoder.- Decode the encoded input using your `decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)`. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function # Apply embedding to input data enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) # Encode the input using the encoding_layer() function. encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob) # Process target data using process_decoding_input() function. decoded_input = process_decoding_input(target_data, target_vocab_to_int, batch_size) # Apply embedding to target data for the decoder. 
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, decoded_input) # Decode the encoded input using decoding_layer() function. training_logits, inference_logits = decoding_layer( dec_embed_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return training_logits, inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability ###Code # Number of Epochs epochs = 30 # Batch Size batch_size = 256 # RNN Size rnn_size = 100 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 100 decoding_embedding_size = 100 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.9 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
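The optimization block in the graph below clips every gradient value into [-1, 1] with `tf.clip_by_value` before applying updates, which keeps a single exploding gradient from derailing training. Numerically, value clipping is just an element-wise clamp; a minimal pure-Python sketch:

```python
def clip_values(grads, low=-1.0, high=1.0):
    """Element-wise value clipping, as tf.clip_by_value applies to each gradient."""
    return [min(max(g, low), high) for g in grads]

print(clip_values([-3.5, 0.2, 2.0]))  # [-1.0, 0.2, 1.0]
```

Note this clamps each value independently; it is not norm clipping, which would rescale the whole gradient vector instead.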
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. 
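The `get_accuracy` helper in the training cell below zero-pads whichever of target or prediction is shorter before comparing token by token. The same idea in a list-based sketch (pure Python, no NumPy; names are illustrative):

```python
def token_accuracy(target, prediction, pad=0):
    """Pad the shorter sequence with `pad`, then score exact token matches."""
    n = max(len(target), len(prediction))
    target = target + [pad] * (n - len(target))
    prediction = prediction + [pad] * (n - len(prediction))
    return sum(t == p for t, p in zip(target, prediction)) / n

print(token_accuracy([1, 2, 3], [1, 9, 3]))  # 2 of 3 tokens match
```

Padding both sides to the same length is what lets sequences of different lengths be compared position-wise at all; mismatched trailing padding simply counts as wrong tokens.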
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 536/538 - Train Accuracy: 0.529, Validation Accuracy: 0.553, Loss: 1.020 Epoch 1 Batch 536/538 - Train Accuracy: 0.626, Validation Accuracy: 0.639, Loss: 0.605 Epoch 2 Batch 536/538 - Train Accuracy: 0.772, Validation Accuracy: 
0.741, Loss: 0.380 Epoch 3 Batch 536/538 - Train Accuracy: 0.863, Validation Accuracy: 0.840, Loss: 0.224 Epoch 4 Batch 536/538 - Train Accuracy: 0.901, Validation Accuracy: 0.881, Loss: 0.127 Epoch 5 Batch 536/538 - Train Accuracy: 0.921, Validation Accuracy: 0.906, Loss: 0.086 Epoch 6 Batch 536/538 - Train Accuracy: 0.931, Validation Accuracy: 0.929, Loss: 0.066 Epoch 7 Batch 536/538 - Train Accuracy: 0.943, Validation Accuracy: 0.929, Loss: 0.054 Epoch 8 Batch 536/538 - Train Accuracy: 0.949, Validation Accuracy: 0.941, Loss: 0.043 Epoch 9 Batch 536/538 - Train Accuracy: 0.956, Validation Accuracy: 0.947, Loss: 0.037 Epoch 10 Batch 536/538 - Train Accuracy: 0.959, Validation Accuracy: 0.953, Loss: 0.032 Epoch 11 Batch 536/538 - Train Accuracy: 0.957, Validation Accuracy: 0.946, Loss: 0.027 Epoch 12 Batch 536/538 - Train Accuracy: 0.962, Validation Accuracy: 0.946, Loss: 0.024 Epoch 13 Batch 536/538 - Train Accuracy: 0.964, Validation Accuracy: 0.948, Loss: 0.022 Epoch 14 Batch 536/538 - Train Accuracy: 0.965, Validation Accuracy: 0.955, Loss: 0.020 Epoch 15 Batch 536/538 - Train Accuracy: 0.971, Validation Accuracy: 0.953, Loss: 0.017 Epoch 16 Batch 536/538 - Train Accuracy: 0.971, Validation Accuracy: 0.947, Loss: 0.016 Epoch 17 Batch 536/538 - Train Accuracy: 0.974, Validation Accuracy: 0.958, Loss: 0.016 Epoch 18 Batch 536/538 - Train Accuracy: 0.969, Validation Accuracy: 0.959, Loss: 0.014 Epoch 19 Batch 536/538 - Train Accuracy: 0.967, Validation Accuracy: 0.961, Loss: 0.013 Epoch 20 Batch 536/538 - Train Accuracy: 0.974, Validation Accuracy: 0.965, Loss: 0.012 Epoch 21 Batch 536/538 - Train Accuracy: 0.975, Validation Accuracy: 0.966, Loss: 0.011 Epoch 22 Batch 536/538 - Train Accuracy: 0.977, Validation Accuracy: 0.966, Loss: 0.010 Epoch 23 Batch 536/538 - Train Accuracy: 0.975, Validation Accuracy: 0.965, Loss: 0.010 Epoch 24 Batch 536/538 - Train Accuracy: 0.975, Validation Accuracy: 0.963, Loss: 0.011 Epoch 25 Batch 536/538 - Train Accuracy: 0.972, 
Validation Accuracy: 0.973, Loss: 0.010 Epoch 26 Batch 536/538 - Train Accuracy: 0.977, Validation Accuracy: 0.966, Loss: 0.008 Epoch 27 Batch 536/538 - Train Accuracy: 0.976, Validation Accuracy: 0.968, Loss: 0.009 Epoch 28 Batch 536/538 - Train Accuracy: 0.978, Validation Accuracy: 0.959, Loss: 0.009 Epoch 29 Batch 536/538 - Train Accuracy: 0.975, Validation Accuracy: 0.965, Loss: 0.009 Model Trained and Saved ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function # Convert sentence to lowercase. sentence_l = sentence.lower() # Convert words to ids using vocab_to_int. 
word_ids = [] for word in sentence_l.split(): if word in vocab_to_int.keys(): word_ids.append(vocab_to_int[word]) else: word_ids.append(vocab_to_int['<UNK>']) return word_ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'france is lovely in the spring .' """ DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) ###Output Input Word Ids: [175, 134, 2, 174, 146, 116, 56] English Words: ['france', 'is', '<UNK>', 'in', 'the', 'spring', '.'] Prediction Word Ids: [333, 157, 121, 335, 297, 141, 179, 110, 1] French Words: ['la', 'france', 'est', 'jamais', 'sec', 'au', 'printemps', '.', '<EOS>'] ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. 
Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . 
paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. 
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ source_list = ( [vocab for vocab in sentence.split(' ') if len(vocab) > 0] for sentence in source_text.split('\n')) source_text_ints = [ [source_vocab_to_int[vocab] for vocab in sentence] for sentence in source_list] target_list = ( [vocab for vocab in sentence.split(' ') if len(vocab) > 0] + ['<EOS>'] for sentence in target_text.split('\n')) target_text_ints = [ [target_vocab_to_int[vocab] for vocab in sentence] for sentence in target_list] return (source_text_ints, target_text_ints) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
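As a reminder of what the saved preprocessed data looks like: `text_to_ids` maps every word to its id and appends `<EOS>` to each target sentence. A minimal single-sentence-pair sketch (the tiny vocabularies here are invented for illustration only):

```python
def pair_to_ids(source, target, source_vocab, target_vocab):
    """Convert one sentence pair to id lists, appending <EOS> to the target."""
    source_ids = [source_vocab[w] for w in source.split()]
    target_ids = [target_vocab[w] for w in target.split()] + [target_vocab['<EOS>']]
    return source_ids, target_ids

src_vocab = {'new': 4, 'jersey': 5}
tgt_vocab = {'<EOS>': 1, 'new': 6, 'jersey': 7}
print(pair_to_ids('new jersey', 'new jersey', src_vocab, tgt_vocab))  # ([4, 5], [6, 7, 1])
```

Only the target gets `<EOS>`: the decoder must learn to emit it, while the encoder simply consumes the source to the end.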
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.1.0 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ inputs = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input') targets = tf.placeholder(dtype=tf.int32, shape=[None, None], name='target') learning_rate = tf.placeholder(dtype=tf.float32, shape=[], name='learning_rate') keep_prob = tf.placeholder(dtype=tf.float32, shape=[], name='keep_prob') target_sequence_length = tf.placeholder(dtype=tf.int32, shape=[None], name='target_sequence_length') max_target = tf.reduce_max(target_sequence_length, name='max_target_len') source_sequence = tf.placeholder(dtype=tf.int32, shape=[None], name='source_sequence_length') return (inputs, targets, learning_rate, keep_prob, target_sequence_length, max_target, source_sequence) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoder InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch. 
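Per batch row, the `tf.strided_slice` drops the last token and the `tf.concat` with `tf.fill` prepends the `<GO>` id. The same transformation in plain Python, with nested lists standing in for the tensor (sketch only; the real op works on a `tf.int32` tensor):

```python
def process_decoder_input_py(target_batch, go_id):
    """Drop each row's last token and prepend the <GO> id, as the TF ops do."""
    return [[go_id] + row[:-1] for row in target_batch]

batch = [[10, 20, 30], [40, 50, 60]]
print(process_decoder_input_py(batch, go_id=1))  # [[1, 10, 20], [1, 40, 50]]
```

The last token is safe to drop because it is either `<EOS>` or padding; the decoder never needs it as an *input*, only as a prediction target.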
###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ def build_dropout_lstm_cell(rnn_size, 
keep_prob): cell = tf.contrib.rnn.LSTMCell( num_units=rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1)) dropout_cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) return dropout_cell embed = tf.contrib.layers.embed_sequence( ids=rnn_inputs, vocab_size=source_vocab_size, embed_dim=encoding_embedding_size) cells = [build_dropout_lstm_cell(rnn_size, keep_prob) for _ in range(num_layers)] multicell = tf.contrib.rnn.MultiRNNCell(cells) (output, state) = tf.nn.dynamic_rnn( cell=multicell, inputs=embed, sequence_length=source_sequence_length, dtype=tf.float32) return (output, state) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ training_helper = tf.contrib.seq2seq.TrainingHelper( inputs=dec_embed_input, sequence_length=target_sequence_length) basic_decoder = 
tf.contrib.seq2seq.BasicDecoder( cell=tf.contrib.rnn.DropoutWrapper(dec_cell, input_keep_prob=keep_prob), helper=training_helper, initial_state=encoder_state, output_layer=output_layer) (final_outputs, _) = tf.contrib.seq2seq.dynamic_decode( decoder=basic_decoder, maximum_iterations=max_summary_length) return final_outputs """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ start_tokens = tf.tile( input=tf.constant([start_of_sequence_id], dtype=tf.int32), multiples=[batch_size], name='start_tokens') inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper( embedding=dec_embeddings, start_tokens=start_tokens, 
end_token=end_of_sequence_id) inference_decoder = tf.contrib.seq2seq.BasicDecoder( cell=dec_cell, helper=inference_helper, initial_state=encoder_state, output_layer=output_layer) (inference_decoder_output, _) = tf.contrib.seq2seq.dynamic_decode( decoder=inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) return inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. 
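The two branches of the decoding layer differ only in where the decoder's inputs come from: the training branch feeds the ground-truth tokens (teacher forcing, via `TrainingHelper`), while the inference branch feeds each of its own predictions back in (via `GreedyEmbeddingHelper`). A toy, framework-free contrast using a made-up next-token scorer (illustration only):

```python
def argmax(scores):
    return max(range(len(scores)), key=scores.__getitem__)

def teacher_forced(step_fn, gold_inputs):
    """Training branch: every decoder input is a ground-truth token."""
    return [argmax(step_fn(tok)) for tok in gold_inputs]

def greedy(step_fn, start_id, num_steps):
    """Inference branch: each prediction becomes the next input."""
    tok, out = start_id, []
    for _ in range(num_steps):
        tok = argmax(step_fn(tok))
        out.append(tok)
    return out

scores = {0: [0, 1, 0], 1: [0, 0, 1], 2: [1, 0, 0]}  # toy scorer: 0 -> 1 -> 2 -> 0
print(teacher_forced(scores.__getitem__, [0, 1, 2]))  # [1, 2, 0]
print(greedy(scores.__getitem__, 0, 3))               # [1, 2, 0]
```

Because both branches run the same cell and output layer, the variables must be shared, which is what the `tf.variable_scope('decode')` / `reuse=True` pairing accomplishes.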
###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ def build_lstm_cell(rnn_size): cell = tf.contrib.rnn.LSTMCell( num_units=rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1)) return cell embeddings = tf.Variable( initial_value=tf.random_uniform([target_vocab_size, decoding_embedding_size])) embed_input = tf.nn.embedding_lookup( params=embeddings, ids=dec_input) multicell = tf.contrib.rnn.MultiRNNCell([build_lstm_cell(rnn_size) for _ in range(num_layers)]) output_layer = Dense( units=target_vocab_size, kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1)) with tf.variable_scope('decode'): training_decoder_output = decoding_layer_train( encoder_state, multicell, embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) with tf.variable_scope('decode', reuse=True): inference_decoder_output = decoding_layer_infer( encoder_state, multicell, embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW 
THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Decoder embedding size :param dec_embedding_size: Encoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ (encoded_output, encoded_state) = encoding_layer( input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) decoded_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) 
(training_decoder_output, inference_decoder_output) = decoding_layer( decoded_input, encoded_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return (training_decoder_output, inference_decoder_output) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 8 # Batch Size batch_size = 128 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 128 decoding_embedding_size = 128 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.5 display_step = True ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
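One detail worth previewing before the graph cell below: the cost there is `tf.contrib.seq2seq.sequence_loss` weighted by `tf.sequence_mask`, so positions past each target's true length contribute nothing to the loss. A NumPy sketch of the same masking rule (the example lengths are made up for illustration):

```python
import numpy as np

def sequence_mask(lengths, maxlen):
    # Row i is True for the first lengths[i] positions, False for the padding.
    return np.arange(maxlen) < np.asarray(lengths)[:, None]

# Three target sequences of (true) lengths 2, 4 and 3, padded to length 4.
mask = sequence_mask([2, 4, 3], maxlen=4).astype(np.float32)
# Padded positions get weight 0.0, so their cross-entropy terms are dropped.
```

This mirrors what `tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32)` produces for the `weights` argument of the sequence loss.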
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for 
sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
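A quick sanity check of the padding helper defined above: each batch is padded only to its own longest sentence, not to a global maximum. This is a standalone re-statement of `pad_sentence_batch` with toy word ids; the pad id 0 is an assumption here (the real id comes from `target_vocab_to_int['<PAD>']`):

```python
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad sentences with <PAD> so each sentence in the batch has the same length."""
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

# Toy word ids; the longest sentence (length 4) sets the batch width.
batch = pad_sentence_batch([[5, 6], [7, 8, 9, 10], [11]], pad_int=0)
# batch == [[5, 6, 0, 0], [7, 8, 9, 10], [11, 0, 0, 0]]
```

Per-batch padding keeps the decoder from wasting steps on padding that only the corpus's longest sentence would need.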
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 1/1077 - Train Accuracy: 0.2230, Validation Accuracy: 0.3054, Loss: 5.3245 Epoch 0 Batch 2/1077 - Train Accuracy: 0.2085, Validation Accuracy: 0.3054, Loss: 4.9486 Epoch 0 Batch 3/1077 - Train Accuracy: 0.2410, Validation Accuracy: 0.3068, Loss: 4.5461 Epoch 0 Batch 4/1077 - Train Accuracy: 0.2375, Validation Accuracy: 0.3189, Loss: 4.4503 Epoch 0 Batch 5/1077 - Train Accuracy: 0.2750, Validation Accuracy: 0.3267, Loss: 4.0730 Epoch 0 Batch 6/1077 - Train Accuracy: 0.2883, Validation Accuracy: 0.3438, Loss: 3.9914 Epoch 0 Batch 7/1077 - Train Accuracy: 0.2844, Validation Accuracy: 0.3622, Loss: 3.9432 Epoch 0 Batch 8/1077 - Train Accuracy: 0.3000, Validation Accuracy: 0.3690, Loss: 3.7570 Epoch 0 Batch 9/1077 - Train Accuracy: 0.3137, Validation Accuracy: 0.3754, Loss: 3.5977 Epoch 0 Batch 10/1077 - Train Accuracy: 0.2817, Validation Accuracy: 0.3768, Loss: 3.6676 Epoch 0 Batch 11/1077 - Train Accuracy: 0.3344, Validation Accuracy: 0.3782, Loss: 3.3227 Epoch 0 Batch 12/1077 - Train Accuracy: 0.3168, Validation Accuracy: 0.3825, Loss: 3.3905 Epoch 0 Batch 13/1077 - Train Accuracy: 0.3724, Validation Accuracy: 0.3935, Loss: 3.1151 Epoch 0 Batch 14/1077 - Train Accuracy: 0.3460, Validation Accuracy: 0.3931, Loss: 3.0984 Epoch 0 Batch 15/1077 - Train Accuracy: 0.3359, Validation Accuracy: 0.3988, Loss: 3.1902 Epoch 0 Batch 16/1077 - Train Accuracy: 0.3512, Validation Accuracy: 0.3991, Loss: 3.0848 Epoch 0 Batch 17/1077 - Train Accuracy: 0.3719, Validation Accuracy: 0.4134, Loss: 3.0270 Epoch 0 Batch 18/1077 - Train Accuracy: 0.3648, Validation Accuracy: 0.4325, Loss: 3.0561 Epoch 0 Batch 19/1077 - Train Accuracy: 0.3645, Validation Accuracy: 0.4094, Loss: 2.9342 Epoch 0 Batch 20/1077 - Train Accuracy: 0.3730, Validation 
Accuracy: 0.4339, Loss: 2.9496 Epoch 0 Batch 21/1077 - Train Accuracy: 0.3566, Validation Accuracy: 0.4315, Loss: 2.9805 Epoch 0 Batch 22/1077 - Train Accuracy: 0.3754, Validation Accuracy: 0.4403, Loss: 2.9383 Epoch 0 Batch 23/1077 - Train Accuracy: 0.3824, Validation Accuracy: 0.4396, Loss: 2.8817 Epoch 0 Batch 24/1077 - Train Accuracy: 0.3824, Validation Accuracy: 0.4393, Loss: 2.7968 Epoch 0 Batch 25/1077 - Train Accuracy: 0.3812, Validation Accuracy: 0.4393, Loss: 2.8634 Epoch 0 Batch 26/1077 - Train Accuracy: 0.3688, Validation Accuracy: 0.4396, Loss: 2.8331 Epoch 0 Batch 27/1077 - Train Accuracy: 0.4323, Validation Accuracy: 0.4464, Loss: 2.5767 Epoch 0 Batch 28/1077 - Train Accuracy: 0.3988, Validation Accuracy: 0.4485, Loss: 2.6985 Epoch 0 Batch 29/1077 - Train Accuracy: 0.4051, Validation Accuracy: 0.4538, Loss: 2.6687 Epoch 0 Batch 30/1077 - Train Accuracy: 0.3945, Validation Accuracy: 0.4560, Loss: 2.6849 Epoch 0 Batch 31/1077 - Train Accuracy: 0.3984, Validation Accuracy: 0.4574, Loss: 2.7034 Epoch 0 Batch 32/1077 - Train Accuracy: 0.4427, Validation Accuracy: 0.4563, Loss: 2.4714 Epoch 0 Batch 33/1077 - Train Accuracy: 0.4070, Validation Accuracy: 0.4290, Loss: 2.5313 Epoch 0 Batch 34/1077 - Train Accuracy: 0.3922, Validation Accuracy: 0.4513, Loss: 2.6299 Epoch 0 Batch 35/1077 - Train Accuracy: 0.4145, Validation Accuracy: 0.4705, Loss: 2.6032 Epoch 0 Batch 36/1077 - Train Accuracy: 0.4242, Validation Accuracy: 0.4602, Loss: 2.5399 Epoch 0 Batch 37/1077 - Train Accuracy: 0.4199, Validation Accuracy: 0.4624, Loss: 2.6160 Epoch 0 Batch 38/1077 - Train Accuracy: 0.3845, Validation Accuracy: 0.4705, Loss: 2.7528 Epoch 0 Batch 39/1077 - Train Accuracy: 0.3984, Validation Accuracy: 0.4691, Loss: 2.5900 Epoch 0 Batch 40/1077 - Train Accuracy: 0.4203, Validation Accuracy: 0.4822, Loss: 2.5604 Epoch 0 Batch 41/1077 - Train Accuracy: 0.4356, Validation Accuracy: 0.4648, Loss: 2.4518 Epoch 0 Batch 42/1077 - Train Accuracy: 0.4160, Validation Accuracy: 0.4680, 
Loss: 2.4951 Epoch 0 Batch 43/1077 - Train Accuracy: 0.4260, Validation Accuracy: 0.4897, Loss: 2.5534 Epoch 0 Batch 44/1077 - Train Accuracy: 0.3923, Validation Accuracy: 0.4929, Loss: 2.6234 Epoch 0 Batch 45/1077 - Train Accuracy: 0.4031, Validation Accuracy: 0.4680, Loss: 2.4989 Epoch 0 Batch 46/1077 - Train Accuracy: 0.4322, Validation Accuracy: 0.4933, Loss: 2.5245 Epoch 0 Batch 47/1077 - Train Accuracy: 0.4422, Validation Accuracy: 0.4840, Loss: 2.3874 Epoch 0 Batch 48/1077 - Train Accuracy: 0.4301, Validation Accuracy: 0.4670, Loss: 2.4439 Epoch 0 Batch 49/1077 - Train Accuracy: 0.4219, Validation Accuracy: 0.4755, Loss: 2.4452 Epoch 0 Batch 50/1077 - Train Accuracy: 0.3984, Validation Accuracy: 0.4780, Loss: 2.4888 Epoch 0 Batch 51/1077 - Train Accuracy: 0.4430, Validation Accuracy: 0.4812, Loss: 2.3377 Epoch 0 Batch 52/1077 - Train Accuracy: 0.4109, Validation Accuracy: 0.4570, Loss: 2.4386 Epoch 0 Batch 53/1077 - Train Accuracy: 0.4412, Validation Accuracy: 0.4858, Loss: 2.4220 Epoch 0 Batch 54/1077 - Train Accuracy: 0.3965, Validation Accuracy: 0.4762, Loss: 2.5019 Epoch 0 Batch 55/1077 - Train Accuracy: 0.4758, Validation Accuracy: 0.4975, Loss: 2.3012 Epoch 0 Batch 56/1077 - Train Accuracy: 0.4051, Validation Accuracy: 0.4386, Loss: 2.3172 Epoch 0 Batch 57/1077 - Train Accuracy: 0.4975, Validation Accuracy: 0.4879, Loss: 2.1437 Epoch 0 Batch 58/1077 - Train Accuracy: 0.4250, Validation Accuracy: 0.4801, Loss: 2.3092 Epoch 0 Batch 59/1077 - Train Accuracy: 0.4062, Validation Accuracy: 0.5039, Loss: 2.4620 Epoch 0 Batch 60/1077 - Train Accuracy: 0.4412, Validation Accuracy: 0.4755, Loss: 2.2443 Epoch 0 Batch 61/1077 - Train Accuracy: 0.4371, Validation Accuracy: 0.4947, Loss: 2.3300 Epoch 0 Batch 62/1077 - Train Accuracy: 0.3923, Validation Accuracy: 0.4911, Loss: 2.4111 Epoch 0 Batch 63/1077 - Train Accuracy: 0.4587, Validation Accuracy: 0.4837, Loss: 2.1440 Epoch 0 Batch 64/1077 - Train Accuracy: 0.4328, Validation Accuracy: 0.4858, Loss: 2.2868 Epoch 
0 Batch 65/1077 - Train Accuracy: 0.4120, Validation Accuracy: 0.4741, Loss: 2.3531 Epoch 0 Batch 66/1077 - Train Accuracy: 0.4379, Validation Accuracy: 0.5007, Loss: 2.2254 Epoch 0 Batch 67/1077 - Train Accuracy: 0.4673, Validation Accuracy: 0.4815, Loss: 2.1138 Epoch 0 Batch 68/1077 - Train Accuracy: 0.4219, Validation Accuracy: 0.4957, Loss: 2.2909 Epoch 0 Batch 69/1077 - Train Accuracy: 0.4715, Validation Accuracy: 0.4996, Loss: 2.1543 Epoch 0 Batch 70/1077 - Train Accuracy: 0.4305, Validation Accuracy: 0.5021, Loss: 2.2467 Epoch 0 Batch 71/1077 - Train Accuracy: 0.4344, Validation Accuracy: 0.4968, Loss: 2.1386 Epoch 0 Batch 72/1077 - Train Accuracy: 0.4469, Validation Accuracy: 0.5092, Loss: 2.1122 Epoch 0 Batch 73/1077 - Train Accuracy: 0.4676, Validation Accuracy: 0.5046, Loss: 2.1262 Epoch 0 Batch 74/1077 - Train Accuracy: 0.4781, Validation Accuracy: 0.5114, Loss: 1.9826 Epoch 0 Batch 75/1077 - Train Accuracy: 0.4762, Validation Accuracy: 0.5192, Loss: 2.0234 Epoch 0 Batch 76/1077 - Train Accuracy: 0.4641, Validation Accuracy: 0.5146, Loss: 2.0389 Epoch 0 Batch 77/1077 - Train Accuracy: 0.4641, Validation Accuracy: 0.5291, Loss: 2.0928 Epoch 0 Batch 78/1077 - Train Accuracy: 0.4297, Validation Accuracy: 0.5167, Loss: 2.1159 Epoch 0 Batch 79/1077 - Train Accuracy: 0.4359, Validation Accuracy: 0.4925, Loss: 2.0610 Epoch 0 Batch 80/1077 - Train Accuracy: 0.4117, Validation Accuracy: 0.4773, Loss: 2.0134 Epoch 0 Batch 81/1077 - Train Accuracy: 0.4852, Validation Accuracy: 0.5160, Loss: 1.9553 Epoch 0 Batch 82/1077 - Train Accuracy: 0.5387, Validation Accuracy: 0.5352, Loss: 1.7825 Epoch 0 Batch 83/1077 - Train Accuracy: 0.4457, Validation Accuracy: 0.5234, Loss: 2.1020 Epoch 0 Batch 84/1077 - Train Accuracy: 0.4430, Validation Accuracy: 0.4986, Loss: 1.9460 Epoch 0 Batch 85/1077 - Train Accuracy: 0.4754, Validation Accuracy: 0.5227, Loss: 1.8847 Epoch 0 Batch 86/1077 - Train Accuracy: 0.4730, Validation Accuracy: 0.5178, Loss: 1.9477 Epoch 0 Batch 87/1077 - 
Train Accuracy: 0.4531, Validation Accuracy: 0.5085, Loss: 1.9519 Epoch 0 Batch 88/1077 - Train Accuracy: 0.4242, Validation Accuracy: 0.4741, Loss: 1.8971 Epoch 0 Batch 89/1077 - Train Accuracy: 0.4352, Validation Accuracy: 0.4961, Loss: 1.9377 Epoch 0 Batch 90/1077 - Train Accuracy: 0.4742, Validation Accuracy: 0.5210, Loss: 1.9432 Epoch 0 Batch 91/1077 - Train Accuracy: 0.4981, Validation Accuracy: 0.5057, Loss: 1.7639 Epoch 0 Batch 92/1077 - Train Accuracy: 0.4408, Validation Accuracy: 0.4847, Loss: 1.8284 Epoch 0 Batch 93/1077 - Train Accuracy: 0.4605, Validation Accuracy: 0.5167, Loss: 1.9212 Epoch 0 Batch 94/1077 - Train Accuracy: 0.4953, Validation Accuracy: 0.5288, Loss: 1.7884 Epoch 0 Batch 95/1077 - Train Accuracy: 0.4978, Validation Accuracy: 0.5199, Loss: 1.8232 Epoch 0 Batch 96/1077 - Train Accuracy: 0.4316, Validation Accuracy: 0.4805, Loss: 1.8108 Epoch 0 Batch 97/1077 - Train Accuracy: 0.4500, Validation Accuracy: 0.4886, Loss: 1.7995 Epoch 0 Batch 98/1077 - Train Accuracy: 0.5063, Validation Accuracy: 0.5014, Loss: 1.7103 Epoch 0 Batch 99/1077 - Train Accuracy: 0.4316, Validation Accuracy: 0.4918, Loss: 1.8778 Epoch 0 Batch 100/1077 - Train Accuracy: 0.4441, Validation Accuracy: 0.4869, Loss: 1.7855 Epoch 0 Batch 101/1077 - Train Accuracy: 0.4492, Validation Accuracy: 0.5078, Loss: 1.7428 Epoch 0 Batch 102/1077 - Train Accuracy: 0.4750, Validation Accuracy: 0.5199, Loss: 1.7219 Epoch 0 Batch 103/1077 - Train Accuracy: 0.4104, Validation Accuracy: 0.4982, Loss: 1.8666 Epoch 0 Batch 104/1077 - Train Accuracy: 0.3951, Validation Accuracy: 0.5039, Loss: 1.8715 Epoch 0 Batch 105/1077 - Train Accuracy: 0.4629, Validation Accuracy: 0.5064, Loss: 1.7258 Epoch 0 Batch 106/1077 - Train Accuracy: 0.4322, Validation Accuracy: 0.4858, Loss: 1.8288 Epoch 0 Batch 107/1077 - Train Accuracy: 0.4494, Validation Accuracy: 0.4879, Loss: 1.6425 Epoch 0 Batch 108/1077 - Train Accuracy: 0.5156, Validation Accuracy: 0.5213, Loss: 1.5608 Epoch 0 Batch 109/1077 - Train 
Accuracy: 0.4777, Validation Accuracy: 0.5210, Loss: 1.6934 Epoch 0 Batch 110/1077 - Train Accuracy: 0.4723, Validation Accuracy: 0.4886, Loss: 1.6320 Epoch 0 Batch 111/1077 - Train Accuracy: 0.4375, Validation Accuracy: 0.4975, Loss: 1.6867 Epoch 0 Batch 112/1077 - Train Accuracy: 0.4715, Validation Accuracy: 0.5298, Loss: 1.6987 Epoch 0 Batch 113/1077 - Train Accuracy: 0.4453, Validation Accuracy: 0.5046, Loss: 1.7190 Epoch 0 Batch 114/1077 - Train Accuracy: 0.4628, Validation Accuracy: 0.4762, Loss: 1.5999 Epoch 0 Batch 115/1077 - Train Accuracy: 0.4391, Validation Accuracy: 0.4968, Loss: 1.7198 Epoch 0 Batch 116/1077 - Train Accuracy: 0.4156, Validation Accuracy: 0.4972, Loss: 1.7161 Epoch 0 Batch 117/1077 - Train Accuracy: 0.4227, Validation Accuracy: 0.4972, Loss: 1.7352 Epoch 0 Batch 118/1077 - Train Accuracy: 0.3935, Validation Accuracy: 0.4759, Loss: 1.7019 Epoch 0 Batch 119/1077 - Train Accuracy: 0.4602, Validation Accuracy: 0.5000, Loss: 1.6261 Epoch 0 Batch 120/1077 - Train Accuracy: 0.4465, Validation Accuracy: 0.4929, Loss: 1.6251 Epoch 0 Batch 121/1077 - Train Accuracy: 0.4344, Validation Accuracy: 0.4805, Loss: 1.5957 Epoch 0 Batch 122/1077 - Train Accuracy: 0.4488, Validation Accuracy: 0.4964, Loss: 1.5898 Epoch 0 Batch 123/1077 - Train Accuracy: 0.4824, Validation Accuracy: 0.5206, Loss: 1.5562 Epoch 0 Batch 124/1077 - Train Accuracy: 0.4473, Validation Accuracy: 0.5085, Loss: 1.6091 Epoch 0 Batch 125/1077 - Train Accuracy: 0.4751, Validation Accuracy: 0.5036, Loss: 1.5587 Epoch 0 Batch 126/1077 - Train Accuracy: 0.4900, Validation Accuracy: 0.5121, Loss: 1.5072 Epoch 0 Batch 127/1077 - Train Accuracy: 0.4836, Validation Accuracy: 0.5149, Loss: 1.5688 Epoch 0 Batch 128/1077 - Train Accuracy: 0.4810, Validation Accuracy: 0.4940, Loss: 1.4821 Epoch 0 Batch 129/1077 - Train Accuracy: 0.4617, Validation Accuracy: 0.4975, Loss: 1.5916 Epoch 0 Batch 130/1077 - Train Accuracy: 0.4643, Validation Accuracy: 0.5071, Loss: 1.4743 Epoch 0 Batch 131/1077 - 
Train Accuracy: 0.4543, Validation Accuracy: 0.5170, Loss: 1.5650 Epoch 0 Batch 132/1077 - Train Accuracy: 0.4437, Validation Accuracy: 0.5103, Loss: 1.6001 Epoch 0 Batch 133/1077 - Train Accuracy: 0.4568, Validation Accuracy: 0.5174, Loss: 1.5571 Epoch 0 Batch 134/1077 - Train Accuracy: 0.5134, Validation Accuracy: 0.5419, Loss: 1.4846 Epoch 0 Batch 135/1077 - Train Accuracy: 0.4725, Validation Accuracy: 0.5252, Loss: 1.5916 Epoch 0 Batch 136/1077 - Train Accuracy: 0.4629, Validation Accuracy: 0.5135, Loss: 1.4947 Epoch 0 Batch 137/1077 - Train Accuracy: 0.5104, Validation Accuracy: 0.5078, Loss: 1.3849 Epoch 0 Batch 138/1077 - Train Accuracy: 0.4699, Validation Accuracy: 0.5131, Loss: 1.4543 Epoch 0 Batch 139/1077 - Train Accuracy: 0.4520, Validation Accuracy: 0.4918, Loss: 1.4718 Epoch 0 Batch 140/1077 - Train Accuracy: 0.4100, Validation Accuracy: 0.4837, Loss: 1.5694 Epoch 0 Batch 141/1077 - Train Accuracy: 0.4840, Validation Accuracy: 0.5199, Loss: 1.5058 Epoch 0 Batch 142/1077 - Train Accuracy: 0.5160, Validation Accuracy: 0.5323, Loss: 1.3853 Epoch 0 Batch 143/1077 - Train Accuracy: 0.4699, Validation Accuracy: 0.5078, Loss: 1.4727 Epoch 0 Batch 144/1077 - Train Accuracy: 0.4124, Validation Accuracy: 0.4961, Loss: 1.5384 Epoch 0 Batch 145/1077 - Train Accuracy: 0.5168, Validation Accuracy: 0.5234, Loss: 1.4102 Epoch 0 Batch 146/1077 - Train Accuracy: 0.5190, Validation Accuracy: 0.5366, Loss: 1.4257 Epoch 0 Batch 147/1077 - Train Accuracy: 0.4559, Validation Accuracy: 0.5082, Loss: 1.4719 Epoch 0 Batch 148/1077 - Train Accuracy: 0.4641, Validation Accuracy: 0.5018, Loss: 1.4311 Epoch 0 Batch 149/1077 - Train Accuracy: 0.4816, Validation Accuracy: 0.5188, Loss: 1.4797 Epoch 0 Batch 150/1077 - Train Accuracy: 0.5108, Validation Accuracy: 0.5284, Loss: 1.3665 Epoch 0 Batch 151/1077 - Train Accuracy: 0.4680, Validation Accuracy: 0.4986, Loss: 1.3661 Epoch 0 Batch 152/1077 - Train Accuracy: 0.4387, Validation Accuracy: 0.4957, Loss: 1.4399 Epoch 0 Batch 153/1077 
- Train Accuracy: 0.4809, Validation Accuracy: 0.5220, Loss: 1.4796 Epoch 0 Batch 154/1077 - Train Accuracy: 0.4387, Validation Accuracy: 0.5114, Loss: 1.4927 Epoch 0 Batch 155/1077 - Train Accuracy: 0.4574, Validation Accuracy: 0.4933, Loss: 1.3994 Epoch 0 Batch 156/1077 - Train Accuracy: 0.4574, Validation Accuracy: 0.4975, Loss: 1.3815 Epoch 0 Batch 157/1077 - Train Accuracy: 0.4820, Validation Accuracy: 0.5124, Loss: 1.4037 Epoch 0 Batch 158/1077 - Train Accuracy: 0.4914, Validation Accuracy: 0.5156, Loss: 1.3967 Epoch 0 Batch 159/1077 - Train Accuracy: 0.5086, Validation Accuracy: 0.5359, Loss: 1.3148 Epoch 0 Batch 160/1077 - Train Accuracy: 0.4512, Validation Accuracy: 0.5011, Loss: 1.3450 Epoch 0 Batch 161/1077 - Train Accuracy: 0.4695, Validation Accuracy: 0.5199, Loss: 1.3886 Epoch 0 Batch 162/1077 - Train Accuracy: 0.4980, Validation Accuracy: 0.5298, Loss: 1.3728 Epoch 0 Batch 163/1077 - Train Accuracy: 0.4457, Validation Accuracy: 0.5284, Loss: 1.4181 Epoch 0 Batch 164/1077 - Train Accuracy: 0.4547, Validation Accuracy: 0.5075, Loss: 1.3661 Epoch 0 Batch 165/1077 - Train Accuracy: 0.4477, Validation Accuracy: 0.5121, Loss: 1.3079 Epoch 0 Batch 166/1077 - Train Accuracy: 0.4797, Validation Accuracy: 0.4893, Loss: 1.3229 Epoch 0 Batch 167/1077 - Train Accuracy: 0.4789, Validation Accuracy: 0.5281, Loss: 1.3830 Epoch 0 Batch 168/1077 - Train Accuracy: 0.4679, Validation Accuracy: 0.5234, Loss: 1.3853 Epoch 0 Batch 169/1077 - Train Accuracy: 0.5112, Validation Accuracy: 0.5135, Loss: 1.2840 Epoch 0 Batch 170/1077 - Train Accuracy: 0.4172, Validation Accuracy: 0.4936, Loss: 1.3556 Epoch 0 Batch 171/1077 - Train Accuracy: 0.4964, Validation Accuracy: 0.4904, Loss: 1.2334 Epoch 0 Batch 172/1077 - Train Accuracy: 0.4914, Validation Accuracy: 0.5224, Loss: 1.2347 Epoch 0 Batch 173/1077 - Train Accuracy: 0.4404, Validation Accuracy: 0.5167, Loss: 1.4106 Epoch 0 Batch 174/1077 - Train Accuracy: 0.5090, Validation Accuracy: 0.5249, Loss: 1.2903 Epoch 0 Batch 
175/1077 - Train Accuracy: 0.4820, Validation Accuracy: 0.5135, Loss: 1.2732 Epoch 0 Batch 176/1077 - Train Accuracy: 0.4746, Validation Accuracy: 0.5234, Loss: 1.2877 Epoch 0 Batch 177/1077 - Train Accuracy: 0.4622, Validation Accuracy: 0.5199, Loss: 1.3549 Epoch 0 Batch 178/1077 - Train Accuracy: 0.5062, Validation Accuracy: 0.5270, Loss: 1.2622 Epoch 0 Batch 179/1077 - Train Accuracy: 0.5095, Validation Accuracy: 0.5437, Loss: 1.3276 Epoch 0 Batch 180/1077 - Train Accuracy: 0.5023, Validation Accuracy: 0.5423, Loss: 1.2808 Epoch 0 Batch 181/1077 - Train Accuracy: 0.4754, Validation Accuracy: 0.5089, Loss: 1.2885 Epoch 0 Batch 182/1077 - Train Accuracy: 0.4914, Validation Accuracy: 0.5156, Loss: 1.2449 Epoch 0 Batch 183/1077 - Train Accuracy: 0.4512, Validation Accuracy: 0.5138, Loss: 1.2960 Epoch 0 Batch 184/1077 - Train Accuracy: 0.5305, Validation Accuracy: 0.5423, Loss: 1.2166 Epoch 0 Batch 185/1077 - Train Accuracy: 0.4867, Validation Accuracy: 0.5231, Loss: 1.2474 Epoch 0 Batch 186/1077 - Train Accuracy: 0.4901, Validation Accuracy: 0.5281, Loss: 1.2942 Epoch 0 Batch 187/1077 - Train Accuracy: 0.4750, Validation Accuracy: 0.5096, Loss: 1.2248 Epoch 0 Batch 188/1077 - Train Accuracy: 0.4969, Validation Accuracy: 0.5412, Loss: 1.2290 Epoch 0 Batch 189/1077 - Train Accuracy: 0.5125, Validation Accuracy: 0.5369, Loss: 1.2485 Epoch 0 Batch 190/1077 - Train Accuracy: 0.5055, Validation Accuracy: 0.5263, Loss: 1.2257 Epoch 0 Batch 191/1077 - Train Accuracy: 0.4851, Validation Accuracy: 0.4862, Loss: 1.1206 Epoch 0 Batch 192/1077 - Train Accuracy: 0.4828, Validation Accuracy: 0.5014, Loss: 1.2375 Epoch 0 Batch 193/1077 - Train Accuracy: 0.5074, Validation Accuracy: 0.5323, Loss: 1.1859 Epoch 0 Batch 194/1077 - Train Accuracy: 0.5365, Validation Accuracy: 0.5451, Loss: 1.1170 Epoch 0 Batch 195/1077 - Train Accuracy: 0.4645, Validation Accuracy: 0.5391, Loss: 1.2011 Epoch 0 Batch 196/1077 - Train Accuracy: 0.4988, Validation Accuracy: 0.5249, Loss: 1.1566 Epoch 0 
Batch 197/1077 - Train Accuracy: 0.5004, Validation Accuracy: 0.5277, Loss: 1.1791
Epoch 0 Batch 198/1077 - Train Accuracy: 0.5290, Validation Accuracy: 0.5284, Loss: 1.0783
Epoch 0 Batch 199/1077 - Train Accuracy: 0.4813, Validation Accuracy: 0.5142, Loss: 1.1910
Epoch 0 Batch 200/1077 - Train Accuracy: 0.5008, Validation Accuracy: 0.5312, Loss: 1.1992
Epoch 0 Batch 201/1077 - Train Accuracy: 0.5191, Validation Accuracy: 0.5312, Loss: 1.1257
Epoch 0 Batch 202/1077 - Train Accuracy: 0.5270, Validation Accuracy: 0.5281, Loss: 1.1654
Epoch 0 Batch 203/1077 - Train Accuracy: 0.5102, Validation Accuracy: 0.5316, Loss: 1.1733
Epoch 0 Batch 204/1077 - Train Accuracy: 0.5004, Validation Accuracy: 0.5387, Loss: 1.1696
...
Epoch 0 Batch 652/1077 - Train Accuracy: 0.6986, Validation Accuracy: 0.6992, Loss: 0.5077
Epoch 0 Batch 653/1077 - Train Accuracy: 0.6996, Validation Accuracy: 0.6907, Loss: 0.4897
Epoch 0 Batch 654/1077 - Train Accuracy: 0.7242, Validation Accuracy: 0.7049, Loss: 0.4708
Epoch 0 Batch 655/1077 - Train Accuracy: 0.7223, Validation Accuracy: 0.6932, Loss: 0.4902
Epoch 0 Batch 656/1077 - Train Accuracy: 0.7074, Validation Accuracy: 0.7099, Loss: 0.4880
Epoch 0 Batch 657/1077 - Train Accuracy:
0.7220, Validation Accuracy: 0.6992, Loss: 0.4979 Epoch 0 Batch 658/1077 - Train Accuracy: 0.7124, Validation Accuracy: 0.6942, Loss: 0.4639 Epoch 0 Batch 659/1077 - Train Accuracy: 0.7586, Validation Accuracy: 0.7010, Loss: 0.4637 Epoch 0 Batch 660/1077 - Train Accuracy: 0.7020, Validation Accuracy: 0.7021, Loss: 0.5092 Epoch 0 Batch 661/1077 - Train Accuracy: 0.7109, Validation Accuracy: 0.6967, Loss: 0.4535 Epoch 0 Batch 662/1077 - Train Accuracy: 0.7355, Validation Accuracy: 0.6996, Loss: 0.4625 Epoch 0 Batch 663/1077 - Train Accuracy: 0.6879, Validation Accuracy: 0.7067, Loss: 0.4619 Epoch 0 Batch 664/1077 - Train Accuracy: 0.7129, Validation Accuracy: 0.7010, Loss: 0.4681 Epoch 0 Batch 665/1077 - Train Accuracy: 0.6547, Validation Accuracy: 0.6900, Loss: 0.4775 Epoch 0 Batch 666/1077 - Train Accuracy: 0.7097, Validation Accuracy: 0.7053, Loss: 0.5287 Epoch 0 Batch 667/1077 - Train Accuracy: 0.6694, Validation Accuracy: 0.6960, Loss: 0.5030 Epoch 0 Batch 668/1077 - Train Accuracy: 0.6998, Validation Accuracy: 0.7134, Loss: 0.4674 Epoch 0 Batch 669/1077 - Train Accuracy: 0.7168, Validation Accuracy: 0.7244, Loss: 0.4699 Epoch 0 Batch 670/1077 - Train Accuracy: 0.7287, Validation Accuracy: 0.7241, Loss: 0.4518 Epoch 0 Batch 671/1077 - Train Accuracy: 0.6863, Validation Accuracy: 0.7227, Loss: 0.5079 Epoch 0 Batch 672/1077 - Train Accuracy: 0.7202, Validation Accuracy: 0.7234, Loss: 0.4570 Epoch 0 Batch 673/1077 - Train Accuracy: 0.7046, Validation Accuracy: 0.7230, Loss: 0.4530 Epoch 0 Batch 674/1077 - Train Accuracy: 0.7387, Validation Accuracy: 0.7188, Loss: 0.4762 Epoch 0 Batch 675/1077 - Train Accuracy: 0.7128, Validation Accuracy: 0.7202, Loss: 0.4734 Epoch 0 Batch 676/1077 - Train Accuracy: 0.6871, Validation Accuracy: 0.7514, Loss: 0.4951 Epoch 0 Batch 677/1077 - Train Accuracy: 0.6789, Validation Accuracy: 0.7333, Loss: 0.4954 Epoch 0 Batch 678/1077 - Train Accuracy: 0.7180, Validation Accuracy: 0.6996, Loss: 0.4591 Epoch 0 Batch 679/1077 - Train 
Accuracy: 0.7360, Validation Accuracy: 0.7134, Loss: 0.4948 Epoch 0 Batch 680/1077 - Train Accuracy: 0.7117, Validation Accuracy: 0.7124, Loss: 0.4649 Epoch 0 Batch 681/1077 - Train Accuracy: 0.7098, Validation Accuracy: 0.7056, Loss: 0.4835 Epoch 0 Batch 682/1077 - Train Accuracy: 0.7180, Validation Accuracy: 0.7085, Loss: 0.4736 Epoch 0 Batch 683/1077 - Train Accuracy: 0.6672, Validation Accuracy: 0.6964, Loss: 0.4729 Epoch 0 Batch 684/1077 - Train Accuracy: 0.7059, Validation Accuracy: 0.6974, Loss: 0.4677 Epoch 0 Batch 685/1077 - Train Accuracy: 0.6859, Validation Accuracy: 0.6808, Loss: 0.4818 Epoch 0 Batch 686/1077 - Train Accuracy: 0.6819, Validation Accuracy: 0.6776, Loss: 0.4602 Epoch 0 Batch 687/1077 - Train Accuracy: 0.7457, Validation Accuracy: 0.7191, Loss: 0.4987 Epoch 0 Batch 688/1077 - Train Accuracy: 0.7441, Validation Accuracy: 0.7127, Loss: 0.4646 Epoch 0 Batch 689/1077 - Train Accuracy: 0.7402, Validation Accuracy: 0.7369, Loss: 0.4624 Epoch 0 Batch 690/1077 - Train Accuracy: 0.7504, Validation Accuracy: 0.7330, Loss: 0.4617 Epoch 0 Batch 691/1077 - Train Accuracy: 0.7521, Validation Accuracy: 0.7294, Loss: 0.5004 Epoch 0 Batch 692/1077 - Train Accuracy: 0.7362, Validation Accuracy: 0.7223, Loss: 0.4397 Epoch 0 Batch 693/1077 - Train Accuracy: 0.6735, Validation Accuracy: 0.7255, Loss: 0.5414 Epoch 0 Batch 694/1077 - Train Accuracy: 0.7545, Validation Accuracy: 0.7156, Loss: 0.4373 Epoch 0 Batch 695/1077 - Train Accuracy: 0.7363, Validation Accuracy: 0.7248, Loss: 0.4420 Epoch 0 Batch 696/1077 - Train Accuracy: 0.7093, Validation Accuracy: 0.7188, Loss: 0.5105 Epoch 0 Batch 697/1077 - Train Accuracy: 0.7312, Validation Accuracy: 0.7202, Loss: 0.4481 Epoch 0 Batch 698/1077 - Train Accuracy: 0.6935, Validation Accuracy: 0.7045, Loss: 0.4392 Epoch 0 Batch 699/1077 - Train Accuracy: 0.7216, Validation Accuracy: 0.7063, Loss: 0.4606 Epoch 0 Batch 700/1077 - Train Accuracy: 0.7211, Validation Accuracy: 0.7188, Loss: 0.4458 Epoch 0 Batch 701/1077 - 
Train Accuracy: 0.7434, Validation Accuracy: 0.7330, Loss: 0.4742 Epoch 0 Batch 702/1077 - Train Accuracy: 0.7087, Validation Accuracy: 0.7298, Loss: 0.4611 Epoch 0 Batch 703/1077 - Train Accuracy: 0.7043, Validation Accuracy: 0.7148, Loss: 0.4634 Epoch 0 Batch 704/1077 - Train Accuracy: 0.6687, Validation Accuracy: 0.7109, Loss: 0.4674 Epoch 0 Batch 705/1077 - Train Accuracy: 0.7002, Validation Accuracy: 0.7195, Loss: 0.5054 Epoch 0 Batch 706/1077 - Train Accuracy: 0.6994, Validation Accuracy: 0.7354, Loss: 0.4534 Epoch 0 Batch 707/1077 - Train Accuracy: 0.7555, Validation Accuracy: 0.7322, Loss: 0.4614 Epoch 0 Batch 708/1077 - Train Accuracy: 0.7391, Validation Accuracy: 0.7145, Loss: 0.4582 Epoch 0 Batch 709/1077 - Train Accuracy: 0.6820, Validation Accuracy: 0.7074, Loss: 0.4807 Epoch 0 Batch 710/1077 - Train Accuracy: 0.7277, Validation Accuracy: 0.7053, Loss: 0.4549 Epoch 0 Batch 711/1077 - Train Accuracy: 0.7352, Validation Accuracy: 0.7120, Loss: 0.4639 Epoch 0 Batch 712/1077 - Train Accuracy: 0.7438, Validation Accuracy: 0.7219, Loss: 0.4313 Epoch 0 Batch 713/1077 - Train Accuracy: 0.7401, Validation Accuracy: 0.7290, Loss: 0.3992 Epoch 0 Batch 714/1077 - Train Accuracy: 0.7318, Validation Accuracy: 0.6857, Loss: 0.4704 Epoch 0 Batch 715/1077 - Train Accuracy: 0.6875, Validation Accuracy: 0.7259, Loss: 0.5146 Epoch 0 Batch 716/1077 - Train Accuracy: 0.7285, Validation Accuracy: 0.6900, Loss: 0.4167 Epoch 0 Batch 717/1077 - Train Accuracy: 0.7159, Validation Accuracy: 0.6978, Loss: 0.4766 Epoch 0 Batch 718/1077 - Train Accuracy: 0.6895, Validation Accuracy: 0.6843, Loss: 0.4516 Epoch 0 Batch 719/1077 - Train Accuracy: 0.7214, Validation Accuracy: 0.6811, Loss: 0.4469 Epoch 0 Batch 720/1077 - Train Accuracy: 0.6891, Validation Accuracy: 0.6868, Loss: 0.4775 Epoch 0 Batch 721/1077 - Train Accuracy: 0.7188, Validation Accuracy: 0.7053, Loss: 0.4574 Epoch 0 Batch 722/1077 - Train Accuracy: 0.6973, Validation Accuracy: 0.7188, Loss: 0.4395 Epoch 0 Batch 723/1077 
- Train Accuracy: 0.7429, Validation Accuracy: 0.7092, Loss: 0.4522 Epoch 0 Batch 724/1077 - Train Accuracy: 0.7492, Validation Accuracy: 0.7010, Loss: 0.4713 Epoch 0 Batch 725/1077 - Train Accuracy: 0.7411, Validation Accuracy: 0.7028, Loss: 0.4118 Epoch 0 Batch 726/1077 - Train Accuracy: 0.7426, Validation Accuracy: 0.6989, Loss: 0.4258 Epoch 0 Batch 727/1077 - Train Accuracy: 0.7301, Validation Accuracy: 0.6953, Loss: 0.4239 Epoch 0 Batch 728/1077 - Train Accuracy: 0.7072, Validation Accuracy: 0.7053, Loss: 0.4409 Epoch 0 Batch 729/1077 - Train Accuracy: 0.7156, Validation Accuracy: 0.7202, Loss: 0.4784 Epoch 0 Batch 730/1077 - Train Accuracy: 0.7316, Validation Accuracy: 0.7241, Loss: 0.4386 Epoch 0 Batch 731/1077 - Train Accuracy: 0.6879, Validation Accuracy: 0.7298, Loss: 0.4387 Epoch 0 Batch 732/1077 - Train Accuracy: 0.7171, Validation Accuracy: 0.7070, Loss: 0.4752 Epoch 0 Batch 733/1077 - Train Accuracy: 0.7336, Validation Accuracy: 0.7049, Loss: 0.4373 Epoch 0 Batch 734/1077 - Train Accuracy: 0.7253, Validation Accuracy: 0.7063, Loss: 0.4644 Epoch 0 Batch 735/1077 - Train Accuracy: 0.7238, Validation Accuracy: 0.7124, Loss: 0.4246 Epoch 0 Batch 736/1077 - Train Accuracy: 0.7673, Validation Accuracy: 0.7120, Loss: 0.4112 Epoch 0 Batch 737/1077 - Train Accuracy: 0.7074, Validation Accuracy: 0.7099, Loss: 0.4613 Epoch 0 Batch 738/1077 - Train Accuracy: 0.7585, Validation Accuracy: 0.7085, Loss: 0.3774 Epoch 0 Batch 739/1077 - Train Accuracy: 0.7289, Validation Accuracy: 0.7191, Loss: 0.4182 Epoch 0 Batch 740/1077 - Train Accuracy: 0.7293, Validation Accuracy: 0.7287, Loss: 0.4198 Epoch 0 Batch 741/1077 - Train Accuracy: 0.7211, Validation Accuracy: 0.7198, Loss: 0.4430 Epoch 0 Batch 742/1077 - Train Accuracy: 0.7547, Validation Accuracy: 0.7344, Loss: 0.4237 Epoch 0 Batch 743/1077 - Train Accuracy: 0.7590, Validation Accuracy: 0.7255, Loss: 0.4235 Epoch 0 Batch 744/1077 - Train Accuracy: 0.7597, Validation Accuracy: 0.7266, Loss: 0.4001 Epoch 0 Batch 
745/1077 - Train Accuracy: 0.7508, Validation Accuracy: 0.7251, Loss: 0.4234 Epoch 0 Batch 746/1077 - Train Accuracy: 0.7836, Validation Accuracy: 0.7244, Loss: 0.4136 Epoch 0 Batch 747/1077 - Train Accuracy: 0.7664, Validation Accuracy: 0.7191, Loss: 0.3960 Epoch 0 Batch 748/1077 - Train Accuracy: 0.7445, Validation Accuracy: 0.7273, Loss: 0.4141 Epoch 0 Batch 749/1077 - Train Accuracy: 0.7473, Validation Accuracy: 0.7191, Loss: 0.4282 Epoch 0 Batch 750/1077 - Train Accuracy: 0.7520, Validation Accuracy: 0.7035, Loss: 0.4169 Epoch 0 Batch 751/1077 - Train Accuracy: 0.7570, Validation Accuracy: 0.7088, Loss: 0.4167 Epoch 0 Batch 752/1077 - Train Accuracy: 0.7407, Validation Accuracy: 0.7088, Loss: 0.3821 Epoch 0 Batch 753/1077 - Train Accuracy: 0.7840, Validation Accuracy: 0.7085, Loss: 0.4028 Epoch 0 Batch 754/1077 - Train Accuracy: 0.7328, Validation Accuracy: 0.6996, Loss: 0.4183 Epoch 0 Batch 755/1077 - Train Accuracy: 0.7457, Validation Accuracy: 0.7088, Loss: 0.4146 Epoch 0 Batch 756/1077 - Train Accuracy: 0.7508, Validation Accuracy: 0.7156, Loss: 0.4102 Epoch 0 Batch 757/1077 - Train Accuracy: 0.7401, Validation Accuracy: 0.7099, Loss: 0.4251 Epoch 0 Batch 758/1077 - Train Accuracy: 0.7686, Validation Accuracy: 0.7042, Loss: 0.3821 Epoch 0 Batch 759/1077 - Train Accuracy: 0.7868, Validation Accuracy: 0.7085, Loss: 0.3758 Epoch 0 Batch 760/1077 - Train Accuracy: 0.7207, Validation Accuracy: 0.7195, Loss: 0.4316 Epoch 0 Batch 761/1077 - Train Accuracy: 0.7159, Validation Accuracy: 0.7205, Loss: 0.4218 Epoch 0 Batch 762/1077 - Train Accuracy: 0.7820, Validation Accuracy: 0.7092, Loss: 0.3947 Epoch 0 Batch 763/1077 - Train Accuracy: 0.7452, Validation Accuracy: 0.7109, Loss: 0.3891 Epoch 0 Batch 764/1077 - Train Accuracy: 0.7615, Validation Accuracy: 0.7283, Loss: 0.4181 Epoch 0 Batch 765/1077 - Train Accuracy: 0.7742, Validation Accuracy: 0.7315, Loss: 0.3837 Epoch 0 Batch 766/1077 - Train Accuracy: 0.7129, Validation Accuracy: 0.7457, Loss: 0.4220 Epoch 0 
Batch 767/1077 - Train Accuracy: 0.7656, Validation Accuracy: 0.7493, Loss: 0.4004 Epoch 0 Batch 768/1077 - Train Accuracy: 0.7195, Validation Accuracy: 0.7532, Loss: 0.4060 Epoch 0 Batch 769/1077 - Train Accuracy: 0.7660, Validation Accuracy: 0.7411, Loss: 0.4082 Epoch 0 Batch 770/1077 - Train Accuracy: 0.7600, Validation Accuracy: 0.7308, Loss: 0.3842 Epoch 0 Batch 771/1077 - Train Accuracy: 0.7664, Validation Accuracy: 0.7177, Loss: 0.4112 Epoch 0 Batch 772/1077 - Train Accuracy: 0.7794, Validation Accuracy: 0.7259, Loss: 0.3782 Epoch 0 Batch 773/1077 - Train Accuracy: 0.7469, Validation Accuracy: 0.7383, Loss: 0.3853 Epoch 0 Batch 774/1077 - Train Accuracy: 0.7805, Validation Accuracy: 0.7422, Loss: 0.4159 Epoch 0 Batch 775/1077 - Train Accuracy: 0.7672, Validation Accuracy: 0.7425, Loss: 0.3965 Epoch 0 Batch 776/1077 - Train Accuracy: 0.7715, Validation Accuracy: 0.7315, Loss: 0.3932 Epoch 0 Batch 777/1077 - Train Accuracy: 0.7395, Validation Accuracy: 0.7408, Loss: 0.4059 Epoch 0 Batch 778/1077 - Train Accuracy: 0.7760, Validation Accuracy: 0.7429, Loss: 0.3711 Epoch 0 Batch 779/1077 - Train Accuracy: 0.7379, Validation Accuracy: 0.7543, Loss: 0.4036 Epoch 0 Batch 780/1077 - Train Accuracy: 0.7074, Validation Accuracy: 0.7592, Loss: 0.4136 Epoch 0 Batch 781/1077 - Train Accuracy: 0.8173, Validation Accuracy: 0.7482, Loss: 0.3632 Epoch 0 Batch 782/1077 - Train Accuracy: 0.7418, Validation Accuracy: 0.7496, Loss: 0.3874 Epoch 0 Batch 783/1077 - Train Accuracy: 0.7537, Validation Accuracy: 0.7507, Loss: 0.4022 Epoch 0 Batch 784/1077 - Train Accuracy: 0.7773, Validation Accuracy: 0.7390, Loss: 0.3814 Epoch 0 Batch 785/1077 - Train Accuracy: 0.7861, Validation Accuracy: 0.7454, Loss: 0.3662 Epoch 0 Batch 786/1077 - Train Accuracy: 0.7277, Validation Accuracy: 0.7461, Loss: 0.3888 Epoch 0 Batch 787/1077 - Train Accuracy: 0.7682, Validation Accuracy: 0.7543, Loss: 0.3690 Epoch 0 Batch 788/1077 - Train Accuracy: 0.7617, Validation Accuracy: 0.7514, Loss: 0.3787 Epoch 
0 Batch 789/1077 - Train Accuracy: 0.7508, Validation Accuracy: 0.7408, Loss: 0.4128 Epoch 0 Batch 790/1077 - Train Accuracy: 0.7055, Validation Accuracy: 0.7369, Loss: 0.4171 Epoch 0 Batch 791/1077 - Train Accuracy: 0.7496, Validation Accuracy: 0.7418, Loss: 0.4030 Epoch 0 Batch 792/1077 - Train Accuracy: 0.7617, Validation Accuracy: 0.7319, Loss: 0.3938 Epoch 0 Batch 793/1077 - Train Accuracy: 0.7668, Validation Accuracy: 0.7326, Loss: 0.3807 Epoch 0 Batch 794/1077 - Train Accuracy: 0.7449, Validation Accuracy: 0.7315, Loss: 0.3749 Epoch 0 Batch 795/1077 - Train Accuracy: 0.7340, Validation Accuracy: 0.7386, Loss: 0.4008 Epoch 0 Batch 796/1077 - Train Accuracy: 0.7730, Validation Accuracy: 0.7429, Loss: 0.3854 Epoch 0 Batch 797/1077 - Train Accuracy: 0.7469, Validation Accuracy: 0.7511, Loss: 0.3678 Epoch 0 Batch 798/1077 - Train Accuracy: 0.7457, Validation Accuracy: 0.7464, Loss: 0.3995 Epoch 0 Batch 799/1077 - Train Accuracy: 0.7418, Validation Accuracy: 0.7482, Loss: 0.4264 Epoch 0 Batch 800/1077 - Train Accuracy: 0.7410, Validation Accuracy: 0.7266, Loss: 0.3892 Epoch 0 Batch 801/1077 - Train Accuracy: 0.7539, Validation Accuracy: 0.7393, Loss: 0.3956 Epoch 0 Batch 802/1077 - Train Accuracy: 0.7679, Validation Accuracy: 0.7450, Loss: 0.3745 Epoch 0 Batch 803/1077 - Train Accuracy: 0.7832, Validation Accuracy: 0.7429, Loss: 0.3947 Epoch 0 Batch 804/1077 - Train Accuracy: 0.7527, Validation Accuracy: 0.7354, Loss: 0.3751 Epoch 0 Batch 805/1077 - Train Accuracy: 0.7594, Validation Accuracy: 0.7372, Loss: 0.3842 Epoch 0 Batch 806/1077 - Train Accuracy: 0.7660, Validation Accuracy: 0.7397, Loss: 0.3623 Epoch 0 Batch 807/1077 - Train Accuracy: 0.7508, Validation Accuracy: 0.7344, Loss: 0.3589 Epoch 0 Batch 808/1077 - Train Accuracy: 0.7930, Validation Accuracy: 0.7305, Loss: 0.4022 Epoch 0 Batch 809/1077 - Train Accuracy: 0.7463, Validation Accuracy: 0.7365, Loss: 0.4173 Epoch 0 Batch 810/1077 - Train Accuracy: 0.7604, Validation Accuracy: 0.7326, Loss: 0.3450 
Epoch 0 Batch 811/1077 - Train Accuracy: 0.7809, Validation Accuracy: 0.7411, Loss: 0.3532 Epoch 0 Batch 812/1077 - Train Accuracy: 0.7473, Validation Accuracy: 0.7138, Loss: 0.3787 Epoch 0 Batch 813/1077 - Train Accuracy: 0.7597, Validation Accuracy: 0.7024, Loss: 0.3501 Epoch 0 Batch 814/1077 - Train Accuracy: 0.7707, Validation Accuracy: 0.7195, Loss: 0.3739 Epoch 0 Batch 815/1077 - Train Accuracy: 0.7438, Validation Accuracy: 0.7376, Loss: 0.3651 Epoch 0 Batch 816/1077 - Train Accuracy: 0.8026, Validation Accuracy: 0.7408, Loss: 0.3902 Epoch 0 Batch 817/1077 - Train Accuracy: 0.7496, Validation Accuracy: 0.7401, Loss: 0.3952 Epoch 0 Batch 818/1077 - Train Accuracy: 0.7566, Validation Accuracy: 0.7425, Loss: 0.3787 Epoch 0 Batch 819/1077 - Train Accuracy: 0.7898, Validation Accuracy: 0.7436, Loss: 0.3631 Epoch 0 Batch 820/1077 - Train Accuracy: 0.7027, Validation Accuracy: 0.7301, Loss: 0.3816 Epoch 0 Batch 821/1077 - Train Accuracy: 0.7836, Validation Accuracy: 0.7337, Loss: 0.3684 Epoch 0 Batch 822/1077 - Train Accuracy: 0.7762, Validation Accuracy: 0.7454, Loss: 0.3814 Epoch 0 Batch 823/1077 - Train Accuracy: 0.7809, Validation Accuracy: 0.7585, Loss: 0.3647 Epoch 0 Batch 824/1077 - Train Accuracy: 0.7697, Validation Accuracy: 0.7603, Loss: 0.3591 Epoch 0 Batch 825/1077 - Train Accuracy: 0.8133, Validation Accuracy: 0.7500, Loss: 0.3541 Epoch 0 Batch 826/1077 - Train Accuracy: 0.7638, Validation Accuracy: 0.7539, Loss: 0.3430 Epoch 0 Batch 827/1077 - Train Accuracy: 0.7418, Validation Accuracy: 0.7575, Loss: 0.3837 Epoch 0 Batch 828/1077 - Train Accuracy: 0.7742, Validation Accuracy: 0.7642, Loss: 0.3517 Epoch 0 Batch 829/1077 - Train Accuracy: 0.7629, Validation Accuracy: 0.7713, Loss: 0.3757 Epoch 0 Batch 830/1077 - Train Accuracy: 0.7867, Validation Accuracy: 0.7670, Loss: 0.3587 Epoch 0 Batch 831/1077 - Train Accuracy: 0.7363, Validation Accuracy: 0.7589, Loss: 0.3581 Epoch 0 Batch 832/1077 - Train Accuracy: 0.7504, Validation Accuracy: 0.7635, Loss: 
0.3641 Epoch 0 Batch 833/1077 - Train Accuracy: 0.7801, Validation Accuracy: 0.7553, Loss: 0.3678 Epoch 0 Batch 834/1077 - Train Accuracy: 0.8026, Validation Accuracy: 0.7518, Loss: 0.3468 Epoch 0 Batch 835/1077 - Train Accuracy: 0.7918, Validation Accuracy: 0.7560, Loss: 0.3528 Epoch 0 Batch 836/1077 - Train Accuracy: 0.8150, Validation Accuracy: 0.7528, Loss: 0.3654 Epoch 0 Batch 837/1077 - Train Accuracy: 0.7883, Validation Accuracy: 0.7443, Loss: 0.3765 Epoch 0 Batch 838/1077 - Train Accuracy: 0.7750, Validation Accuracy: 0.7358, Loss: 0.3320 Epoch 0 Batch 839/1077 - Train Accuracy: 0.8070, Validation Accuracy: 0.7351, Loss: 0.3367 Epoch 0 Batch 840/1077 - Train Accuracy: 0.7777, Validation Accuracy: 0.7351, Loss: 0.3353 Epoch 0 Batch 841/1077 - Train Accuracy: 0.8359, Validation Accuracy: 0.7390, Loss: 0.3399 Epoch 0 Batch 842/1077 - Train Accuracy: 0.8016, Validation Accuracy: 0.7578, Loss: 0.3313 Epoch 0 Batch 843/1077 - Train Accuracy: 0.7939, Validation Accuracy: 0.7617, Loss: 0.3242 Epoch 0 Batch 844/1077 - Train Accuracy: 0.7846, Validation Accuracy: 0.7443, Loss: 0.3265 Epoch 0 Batch 845/1077 - Train Accuracy: 0.7848, Validation Accuracy: 0.7344, Loss: 0.3367 Epoch 0 Batch 846/1077 - Train Accuracy: 0.7727, Validation Accuracy: 0.7319, Loss: 0.3565 Epoch 0 Batch 847/1077 - Train Accuracy: 0.7953, Validation Accuracy: 0.7454, Loss: 0.3537 Epoch 0 Batch 848/1077 - Train Accuracy: 0.8059, Validation Accuracy: 0.7614, Loss: 0.3463 Epoch 0 Batch 849/1077 - Train Accuracy: 0.7699, Validation Accuracy: 0.7628, Loss: 0.3310 Epoch 0 Batch 850/1077 - Train Accuracy: 0.7693, Validation Accuracy: 0.7695, Loss: 0.3774 Epoch 0 Batch 851/1077 - Train Accuracy: 0.7764, Validation Accuracy: 0.7638, Loss: 0.3464 Epoch 0 Batch 852/1077 - Train Accuracy: 0.7762, Validation Accuracy: 0.7667, Loss: 0.3608 Epoch 0 Batch 853/1077 - Train Accuracy: 0.7859, Validation Accuracy: 0.7557, Loss: 0.3202 Epoch 0 Batch 854/1077 - Train Accuracy: 0.7895, Validation Accuracy: 0.7589, 
Loss: 0.3436 Epoch 0 Batch 855/1077 - Train Accuracy: 0.7711, Validation Accuracy: 0.7635, Loss: 0.3267 Epoch 0 Batch 856/1077 - Train Accuracy: 0.7703, Validation Accuracy: 0.7543, Loss: 0.3532 Epoch 0 Batch 857/1077 - Train Accuracy: 0.8215, Validation Accuracy: 0.7592, Loss: 0.3315 Epoch 0 Batch 858/1077 - Train Accuracy: 0.7898, Validation Accuracy: 0.7525, Loss: 0.3294 Epoch 0 Batch 859/1077 - Train Accuracy: 0.7551, Validation Accuracy: 0.7596, Loss: 0.3541 Epoch 0 Batch 860/1077 - Train Accuracy: 0.7958, Validation Accuracy: 0.7766, Loss: 0.3354 Epoch 0 Batch 861/1077 - Train Accuracy: 0.8102, Validation Accuracy: 0.7702, Loss: 0.3314 Epoch 0 Batch 862/1077 - Train Accuracy: 0.8121, Validation Accuracy: 0.7710, Loss: 0.3243 Epoch 0 Batch 863/1077 - Train Accuracy: 0.8227, Validation Accuracy: 0.7663, Loss: 0.3250 Epoch 0 Batch 864/1077 - Train Accuracy: 0.7785, Validation Accuracy: 0.7507, Loss: 0.3327 Epoch 0 Batch 865/1077 - Train Accuracy: 0.8072, Validation Accuracy: 0.7330, Loss: 0.3108 Epoch 0 Batch 866/1077 - Train Accuracy: 0.8069, Validation Accuracy: 0.7479, Loss: 0.3314 Epoch 0 Batch 867/1077 - Train Accuracy: 0.7715, Validation Accuracy: 0.7496, Loss: 0.3751 Epoch 0 Batch 868/1077 - Train Accuracy: 0.8281, Validation Accuracy: 0.7546, Loss: 0.3364 Epoch 0 Batch 869/1077 - Train Accuracy: 0.7809, Validation Accuracy: 0.7592, Loss: 0.3356 Epoch 0 Batch 870/1077 - Train Accuracy: 0.7410, Validation Accuracy: 0.7670, Loss: 0.3379 Epoch 0 Batch 871/1077 - Train Accuracy: 0.7699, Validation Accuracy: 0.7749, Loss: 0.3146 Epoch 0 Batch 872/1077 - Train Accuracy: 0.8008, Validation Accuracy: 0.7702, Loss: 0.3180 Epoch 0 Batch 873/1077 - Train Accuracy: 0.7754, Validation Accuracy: 0.7674, Loss: 0.3373 Epoch 0 Batch 874/1077 - Train Accuracy: 0.7992, Validation Accuracy: 0.7688, Loss: 0.3383 Epoch 0 Batch 875/1077 - Train Accuracy: 0.7937, Validation Accuracy: 0.7805, Loss: 0.3383 Epoch 0 Batch 876/1077 - Train Accuracy: 0.7855, Validation Accuracy: 
0.7706, Loss: 0.3331 Epoch 0 Batch 877/1077 - Train Accuracy: 0.7973, Validation Accuracy: 0.7592, Loss: 0.3103 Epoch 0 Batch 878/1077 - Train Accuracy: 0.7930, Validation Accuracy: 0.7678, Loss: 0.3278 Epoch 0 Batch 879/1077 - Train Accuracy: 0.8063, Validation Accuracy: 0.7827, Loss: 0.3005 Epoch 0 Batch 880/1077 - Train Accuracy: 0.8340, Validation Accuracy: 0.7713, Loss: 0.3253 Epoch 0 Batch 881/1077 - Train Accuracy: 0.7855, Validation Accuracy: 0.7681, Loss: 0.3503 Epoch 0 Batch 882/1077 - Train Accuracy: 0.7582, Validation Accuracy: 0.7717, Loss: 0.3500 Epoch 0 Batch 883/1077 - Train Accuracy: 0.7714, Validation Accuracy: 0.7692, Loss: 0.3803 Epoch 0 Batch 884/1077 - Train Accuracy: 0.8012, Validation Accuracy: 0.7599, Loss: 0.2972 Epoch 0 Batch 885/1077 - Train Accuracy: 0.8246, Validation Accuracy: 0.7617, Loss: 0.2826 Epoch 0 Batch 886/1077 - Train Accuracy: 0.7902, Validation Accuracy: 0.7546, Loss: 0.3154 Epoch 0 Batch 887/1077 - Train Accuracy: 0.8004, Validation Accuracy: 0.7564, Loss: 0.3464 Epoch 0 Batch 888/1077 - Train Accuracy: 0.8121, Validation Accuracy: 0.7379, Loss: 0.3278 Epoch 0 Batch 889/1077 - Train Accuracy: 0.8133, Validation Accuracy: 0.7585, Loss: 0.3189 Epoch 0 Batch 890/1077 - Train Accuracy: 0.8378, Validation Accuracy: 0.7589, Loss: 0.3076 Epoch 0 Batch 891/1077 - Train Accuracy: 0.8010, Validation Accuracy: 0.7717, Loss: 0.3365 Epoch 0 Batch 892/1077 - Train Accuracy: 0.7930, Validation Accuracy: 0.7628, Loss: 0.3017 Epoch 0 Batch 893/1077 - Train Accuracy: 0.7859, Validation Accuracy: 0.7578, Loss: 0.3182 Epoch 0 Batch 894/1077 - Train Accuracy: 0.8419, Validation Accuracy: 0.7624, Loss: 0.3093 Epoch 0 Batch 895/1077 - Train Accuracy: 0.8043, Validation Accuracy: 0.7628, Loss: 0.3004 Epoch 0 Batch 896/1077 - Train Accuracy: 0.7882, Validation Accuracy: 0.7855, Loss: 0.3430 Epoch 0 Batch 897/1077 - Train Accuracy: 0.7827, Validation Accuracy: 0.7749, Loss: 0.2791 Epoch 0 Batch 898/1077 - Train Accuracy: 0.7928, Validation 
Accuracy: 0.7766, Loss: 0.2955 Epoch 0 Batch 899/1077 - Train Accuracy: 0.7992, Validation Accuracy: 0.7781, Loss: 0.3257 Epoch 0 Batch 900/1077 - Train Accuracy: 0.8008, Validation Accuracy: 0.7702, Loss: 0.3330 Epoch 0 Batch 901/1077 - Train Accuracy: 0.8196, Validation Accuracy: 0.7745, Loss: 0.3291 Epoch 0 Batch 902/1077 - Train Accuracy: 0.8214, Validation Accuracy: 0.7706, Loss: 0.3155 Epoch 0 Batch 903/1077 - Train Accuracy: 0.7770, Validation Accuracy: 0.7699, Loss: 0.3222 Epoch 0 Batch 904/1077 - Train Accuracy: 0.7453, Validation Accuracy: 0.7699, Loss: 0.3154 Epoch 0 Batch 905/1077 - Train Accuracy: 0.8211, Validation Accuracy: 0.7738, Loss: 0.2889 Epoch 0 Batch 906/1077 - Train Accuracy: 0.8223, Validation Accuracy: 0.7663, Loss: 0.3003 Epoch 0 Batch 907/1077 - Train Accuracy: 0.8066, Validation Accuracy: 0.7656, Loss: 0.3062 Epoch 0 Batch 908/1077 - Train Accuracy: 0.7984, Validation Accuracy: 0.7791, Loss: 0.3231 Epoch 0 Batch 909/1077 - Train Accuracy: 0.8063, Validation Accuracy: 0.7756, Loss: 0.3105 Epoch 0 Batch 910/1077 - Train Accuracy: 0.8025, Validation Accuracy: 0.7646, Loss: 0.3012 Epoch 0 Batch 911/1077 - Train Accuracy: 0.8221, Validation Accuracy: 0.7614, Loss: 0.2862 Epoch 0 Batch 912/1077 - Train Accuracy: 0.7832, Validation Accuracy: 0.7660, Loss: 0.3064 Epoch 0 Batch 913/1077 - Train Accuracy: 0.8063, Validation Accuracy: 0.7560, Loss: 0.3276 Epoch 0 Batch 914/1077 - Train Accuracy: 0.8356, Validation Accuracy: 0.7699, Loss: 0.2951 Epoch 0 Batch 915/1077 - Train Accuracy: 0.7664, Validation Accuracy: 0.7745, Loss: 0.3164 Epoch 0 Batch 916/1077 - Train Accuracy: 0.7930, Validation Accuracy: 0.7695, Loss: 0.3336 Epoch 0 Batch 917/1077 - Train Accuracy: 0.7941, Validation Accuracy: 0.7685, Loss: 0.2823 Epoch 0 Batch 918/1077 - Train Accuracy: 0.8490, Validation Accuracy: 0.7759, Loss: 0.2792 Epoch 0 Batch 919/1077 - Train Accuracy: 0.8413, Validation Accuracy: 0.7873, Loss: 0.2828 Epoch 0 Batch 920/1077 - Train Accuracy: 0.8137, 
Validation Accuracy: 0.7869, Loss: 0.3109 Epoch 0 Batch 921/1077 - Train Accuracy: 0.8039, Validation Accuracy: 0.7823, Loss: 0.2976 Epoch 0 Batch 922/1077 - Train Accuracy: 0.7719, Validation Accuracy: 0.7784, Loss: 0.3104 Epoch 0 Batch 923/1077 - Train Accuracy: 0.8269, Validation Accuracy: 0.7869, Loss: 0.2990 Epoch 0 Batch 924/1077 - Train Accuracy: 0.7981, Validation Accuracy: 0.7674, Loss: 0.3236 Epoch 0 Batch 925/1077 - Train Accuracy: 0.8367, Validation Accuracy: 0.7873, Loss: 0.2897 Epoch 0 Batch 926/1077 - Train Accuracy: 0.7992, Validation Accuracy: 0.7944, Loss: 0.2961 Epoch 0 Batch 927/1077 - Train Accuracy: 0.8066, Validation Accuracy: 0.8004, Loss: 0.3023 Epoch 0 Batch 928/1077 - Train Accuracy: 0.8145, Validation Accuracy: 0.7812, Loss: 0.2998 Epoch 0 Batch 929/1077 - Train Accuracy: 0.8137, Validation Accuracy: 0.7628, Loss: 0.3033 Epoch 0 Batch 930/1077 - Train Accuracy: 0.7973, Validation Accuracy: 0.7763, Loss: 0.2885 Epoch 0 Batch 931/1077 - Train Accuracy: 0.8316, Validation Accuracy: 0.7745, Loss: 0.2804 Epoch 0 Batch 932/1077 - Train Accuracy: 0.7727, Validation Accuracy: 0.7859, Loss: 0.2908 Epoch 0 Batch 933/1077 - Train Accuracy: 0.8395, Validation Accuracy: 0.7823, Loss: 0.2966 Epoch 0 Batch 934/1077 - Train Accuracy: 0.8113, Validation Accuracy: 0.7741, Loss: 0.2899 Epoch 0 Batch 935/1077 - Train Accuracy: 0.8375, Validation Accuracy: 0.7635, Loss: 0.3039 Epoch 0 Batch 936/1077 - Train Accuracy: 0.8322, Validation Accuracy: 0.7773, Loss: 0.2870 Epoch 0 Batch 937/1077 - Train Accuracy: 0.8076, Validation Accuracy: 0.7791, Loss: 0.3094 Epoch 0 Batch 938/1077 - Train Accuracy: 0.8598, Validation Accuracy: 0.7866, Loss: 0.2954 Epoch 0 Batch 939/1077 - Train Accuracy: 0.8082, Validation Accuracy: 0.7784, Loss: 0.3009 Epoch 0 Batch 940/1077 - Train Accuracy: 0.8090, Validation Accuracy: 0.7816, Loss: 0.2782 Epoch 0 Batch 941/1077 - Train Accuracy: 0.7987, Validation Accuracy: 0.7848, Loss: 0.2675 Epoch 0 Batch 942/1077 - Train Accuracy: 
[Per-batch training log condensed — repetitive output omitted. Representative lines:]
Epoch 0 Batch 943/1077 - Train Accuracy: 0.8160, Validation Accuracy: 0.7770, Loss: 0.2962
...
Epoch 0 Batch 1075/1077 - Train Accuracy: 0.8618, Validation Accuracy: 0.8249, Loss: 0.2334
Epoch 1 Batch 1/1077 - Train Accuracy: 0.8820, Validation Accuracy: 0.8352, Loss: 0.1908
...
Epoch 1 Batch 327/1077 - Train Accuracy: 0.8863, Validation Accuracy: 0.8810, Loss: 0.1146
[Over this span, validation accuracy climbed from roughly 0.77 to 0.88 and loss fell from roughly 0.30 to 0.11. Epoch 1 Batch 328/1077 - Train
Accuracy: 0.9059, Validation Accuracy: 0.8761, Loss: 0.1233 Epoch 1 Batch 329/1077 - Train Accuracy: 0.8988, Validation Accuracy: 0.8569, Loss: 0.1107 Epoch 1 Batch 330/1077 - Train Accuracy: 0.8914, Validation Accuracy: 0.8651, Loss: 0.1081 Epoch 1 Batch 331/1077 - Train Accuracy: 0.8972, Validation Accuracy: 0.8754, Loss: 0.1135 Epoch 1 Batch 332/1077 - Train Accuracy: 0.9089, Validation Accuracy: 0.8736, Loss: 0.0767 Epoch 1 Batch 333/1077 - Train Accuracy: 0.9256, Validation Accuracy: 0.8597, Loss: 0.0961 Epoch 1 Batch 334/1077 - Train Accuracy: 0.9238, Validation Accuracy: 0.8555, Loss: 0.1086 Epoch 1 Batch 335/1077 - Train Accuracy: 0.9230, Validation Accuracy: 0.8647, Loss: 0.0948 Epoch 1 Batch 336/1077 - Train Accuracy: 0.9117, Validation Accuracy: 0.8775, Loss: 0.1181 Epoch 1 Batch 337/1077 - Train Accuracy: 0.8906, Validation Accuracy: 0.8800, Loss: 0.1108 Epoch 1 Batch 338/1077 - Train Accuracy: 0.8754, Validation Accuracy: 0.8828, Loss: 0.1209 Epoch 1 Batch 339/1077 - Train Accuracy: 0.9289, Validation Accuracy: 0.8860, Loss: 0.0980 Epoch 1 Batch 340/1077 - Train Accuracy: 0.9165, Validation Accuracy: 0.8888, Loss: 0.0997 Epoch 1 Batch 341/1077 - Train Accuracy: 0.8883, Validation Accuracy: 0.8892, Loss: 0.1261 Epoch 1 Batch 342/1077 - Train Accuracy: 0.8888, Validation Accuracy: 0.8881, Loss: 0.0860 Epoch 1 Batch 343/1077 - Train Accuracy: 0.9105, Validation Accuracy: 0.8821, Loss: 0.1044 Epoch 1 Batch 344/1077 - Train Accuracy: 0.9051, Validation Accuracy: 0.8789, Loss: 0.0948 Epoch 1 Batch 345/1077 - Train Accuracy: 0.9178, Validation Accuracy: 0.8817, Loss: 0.0867 Epoch 1 Batch 346/1077 - Train Accuracy: 0.8984, Validation Accuracy: 0.8906, Loss: 0.1034 Epoch 1 Batch 347/1077 - Train Accuracy: 0.9103, Validation Accuracy: 0.8853, Loss: 0.0865 Epoch 1 Batch 348/1077 - Train Accuracy: 0.8988, Validation Accuracy: 0.8771, Loss: 0.0900 Epoch 1 Batch 349/1077 - Train Accuracy: 0.8840, Validation Accuracy: 0.8821, Loss: 0.0962 Epoch 1 Batch 350/1077 - 
Train Accuracy: 0.8969, Validation Accuracy: 0.8874, Loss: 0.1059 Epoch 1 Batch 351/1077 - Train Accuracy: 0.8906, Validation Accuracy: 0.8974, Loss: 0.0968 Epoch 1 Batch 352/1077 - Train Accuracy: 0.9023, Validation Accuracy: 0.8917, Loss: 0.0985 Epoch 1 Batch 353/1077 - Train Accuracy: 0.8997, Validation Accuracy: 0.8960, Loss: 0.1113 Epoch 1 Batch 354/1077 - Train Accuracy: 0.9012, Validation Accuracy: 0.9130, Loss: 0.1167 Epoch 1 Batch 355/1077 - Train Accuracy: 0.9133, Validation Accuracy: 0.9013, Loss: 0.0985 Epoch 1 Batch 356/1077 - Train Accuracy: 0.9254, Validation Accuracy: 0.8935, Loss: 0.1022 Epoch 1 Batch 357/1077 - Train Accuracy: 0.8973, Validation Accuracy: 0.8825, Loss: 0.0950 Epoch 1 Batch 358/1077 - Train Accuracy: 0.9001, Validation Accuracy: 0.8757, Loss: 0.1061 Epoch 1 Batch 359/1077 - Train Accuracy: 0.9035, Validation Accuracy: 0.8636, Loss: 0.1034 Epoch 1 Batch 360/1077 - Train Accuracy: 0.9359, Validation Accuracy: 0.8722, Loss: 0.0877 Epoch 1 Batch 361/1077 - Train Accuracy: 0.9301, Validation Accuracy: 0.8729, Loss: 0.1028 Epoch 1 Batch 362/1077 - Train Accuracy: 0.9178, Validation Accuracy: 0.8651, Loss: 0.1051 Epoch 1 Batch 363/1077 - Train Accuracy: 0.9086, Validation Accuracy: 0.8661, Loss: 0.1178 Epoch 1 Batch 364/1077 - Train Accuracy: 0.8875, Validation Accuracy: 0.8743, Loss: 0.1071 Epoch 1 Batch 365/1077 - Train Accuracy: 0.8930, Validation Accuracy: 0.8860, Loss: 0.0834 Epoch 1 Batch 366/1077 - Train Accuracy: 0.9152, Validation Accuracy: 0.8853, Loss: 0.0921 Epoch 1 Batch 367/1077 - Train Accuracy: 0.9289, Validation Accuracy: 0.9038, Loss: 0.0793 Epoch 1 Batch 368/1077 - Train Accuracy: 0.9262, Validation Accuracy: 0.8864, Loss: 0.0922 Epoch 1 Batch 369/1077 - Train Accuracy: 0.9176, Validation Accuracy: 0.8878, Loss: 0.0906 Epoch 1 Batch 370/1077 - Train Accuracy: 0.9260, Validation Accuracy: 0.8970, Loss: 0.0935 Epoch 1 Batch 371/1077 - Train Accuracy: 0.9246, Validation Accuracy: 0.8949, Loss: 0.0804 Epoch 1 Batch 372/1077 
- Train Accuracy: 0.9406, Validation Accuracy: 0.8949, Loss: 0.0823 Epoch 1 Batch 373/1077 - Train Accuracy: 0.9189, Validation Accuracy: 0.8991, Loss: 0.0808 Epoch 1 Batch 374/1077 - Train Accuracy: 0.8938, Validation Accuracy: 0.8906, Loss: 0.1058 Epoch 1 Batch 375/1077 - Train Accuracy: 0.9219, Validation Accuracy: 0.8956, Loss: 0.0884 Epoch 1 Batch 376/1077 - Train Accuracy: 0.9094, Validation Accuracy: 0.9070, Loss: 0.0969 Epoch 1 Batch 377/1077 - Train Accuracy: 0.8961, Validation Accuracy: 0.9038, Loss: 0.0951 Epoch 1 Batch 378/1077 - Train Accuracy: 0.9355, Validation Accuracy: 0.9016, Loss: 0.0797 Epoch 1 Batch 379/1077 - Train Accuracy: 0.9285, Validation Accuracy: 0.8963, Loss: 0.1052 Epoch 1 Batch 380/1077 - Train Accuracy: 0.9184, Validation Accuracy: 0.8896, Loss: 0.0895 Epoch 1 Batch 381/1077 - Train Accuracy: 0.8945, Validation Accuracy: 0.8974, Loss: 0.1057 Epoch 1 Batch 382/1077 - Train Accuracy: 0.8735, Validation Accuracy: 0.8835, Loss: 0.1321 Epoch 1 Batch 383/1077 - Train Accuracy: 0.9230, Validation Accuracy: 0.8864, Loss: 0.0915 Epoch 1 Batch 384/1077 - Train Accuracy: 0.9242, Validation Accuracy: 0.8938, Loss: 0.0950 Epoch 1 Batch 385/1077 - Train Accuracy: 0.9184, Validation Accuracy: 0.8991, Loss: 0.0838 Epoch 1 Batch 386/1077 - Train Accuracy: 0.9230, Validation Accuracy: 0.9055, Loss: 0.0999 Epoch 1 Batch 387/1077 - Train Accuracy: 0.9023, Validation Accuracy: 0.9006, Loss: 0.0994 Epoch 1 Batch 388/1077 - Train Accuracy: 0.9096, Validation Accuracy: 0.9009, Loss: 0.0935 Epoch 1 Batch 389/1077 - Train Accuracy: 0.9223, Validation Accuracy: 0.8988, Loss: 0.0906 Epoch 1 Batch 390/1077 - Train Accuracy: 0.8531, Validation Accuracy: 0.8942, Loss: 0.1044 Epoch 1 Batch 391/1077 - Train Accuracy: 0.9222, Validation Accuracy: 0.8970, Loss: 0.0973 Epoch 1 Batch 392/1077 - Train Accuracy: 0.9184, Validation Accuracy: 0.8846, Loss: 0.0959 Epoch 1 Batch 393/1077 - Train Accuracy: 0.9040, Validation Accuracy: 0.8878, Loss: 0.0878 Epoch 1 Batch 
394/1077 - Train Accuracy: 0.8973, Validation Accuracy: 0.8782, Loss: 0.0881 Epoch 1 Batch 395/1077 - Train Accuracy: 0.9144, Validation Accuracy: 0.8789, Loss: 0.0897 Epoch 1 Batch 396/1077 - Train Accuracy: 0.9090, Validation Accuracy: 0.8881, Loss: 0.0960 Epoch 1 Batch 397/1077 - Train Accuracy: 0.9025, Validation Accuracy: 0.8832, Loss: 0.0898 Epoch 1 Batch 398/1077 - Train Accuracy: 0.9391, Validation Accuracy: 0.8771, Loss: 0.0983 Epoch 1 Batch 399/1077 - Train Accuracy: 0.8775, Validation Accuracy: 0.8842, Loss: 0.0982 Epoch 1 Batch 400/1077 - Train Accuracy: 0.9293, Validation Accuracy: 0.8821, Loss: 0.1011 Epoch 1 Batch 401/1077 - Train Accuracy: 0.9125, Validation Accuracy: 0.8768, Loss: 0.0880 Epoch 1 Batch 402/1077 - Train Accuracy: 0.9316, Validation Accuracy: 0.8817, Loss: 0.0766 Epoch 1 Batch 403/1077 - Train Accuracy: 0.8871, Validation Accuracy: 0.8789, Loss: 0.1077 Epoch 1 Batch 404/1077 - Train Accuracy: 0.9338, Validation Accuracy: 0.8825, Loss: 0.0856 Epoch 1 Batch 405/1077 - Train Accuracy: 0.8976, Validation Accuracy: 0.8810, Loss: 0.0983 Epoch 1 Batch 406/1077 - Train Accuracy: 0.9404, Validation Accuracy: 0.8810, Loss: 0.0888 Epoch 1 Batch 407/1077 - Train Accuracy: 0.9184, Validation Accuracy: 0.8771, Loss: 0.0994 Epoch 1 Batch 408/1077 - Train Accuracy: 0.9098, Validation Accuracy: 0.8793, Loss: 0.0994 Epoch 1 Batch 409/1077 - Train Accuracy: 0.8934, Validation Accuracy: 0.8849, Loss: 0.1013 Epoch 1 Batch 410/1077 - Train Accuracy: 0.8894, Validation Accuracy: 0.8825, Loss: 0.1041 Epoch 1 Batch 411/1077 - Train Accuracy: 0.9174, Validation Accuracy: 0.8764, Loss: 0.0997 Epoch 1 Batch 412/1077 - Train Accuracy: 0.9172, Validation Accuracy: 0.8729, Loss: 0.0723 Epoch 1 Batch 413/1077 - Train Accuracy: 0.9242, Validation Accuracy: 0.8814, Loss: 0.0860 Epoch 1 Batch 414/1077 - Train Accuracy: 0.9105, Validation Accuracy: 0.8711, Loss: 0.0988 Epoch 1 Batch 415/1077 - Train Accuracy: 0.9215, Validation Accuracy: 0.8864, Loss: 0.0997 Epoch 1 
Batch 416/1077 - Train Accuracy: 0.9242, Validation Accuracy: 0.8839, Loss: 0.0965 Epoch 1 Batch 417/1077 - Train Accuracy: 0.9230, Validation Accuracy: 0.8793, Loss: 0.1209 Epoch 1 Batch 418/1077 - Train Accuracy: 0.9516, Validation Accuracy: 0.8857, Loss: 0.0799 Epoch 1 Batch 419/1077 - Train Accuracy: 0.9293, Validation Accuracy: 0.8924, Loss: 0.0781 Epoch 1 Batch 420/1077 - Train Accuracy: 0.9539, Validation Accuracy: 0.8984, Loss: 0.0748 Epoch 1 Batch 421/1077 - Train Accuracy: 0.8898, Validation Accuracy: 0.8931, Loss: 0.1013 Epoch 1 Batch 422/1077 - Train Accuracy: 0.8757, Validation Accuracy: 0.8892, Loss: 0.0830 Epoch 1 Batch 423/1077 - Train Accuracy: 0.9297, Validation Accuracy: 0.8796, Loss: 0.1090 Epoch 1 Batch 424/1077 - Train Accuracy: 0.8863, Validation Accuracy: 0.8817, Loss: 0.0890 Epoch 1 Batch 425/1077 - Train Accuracy: 0.9144, Validation Accuracy: 0.8853, Loss: 0.0749 Epoch 1 Batch 426/1077 - Train Accuracy: 0.9152, Validation Accuracy: 0.8800, Loss: 0.1013 Epoch 1 Batch 427/1077 - Train Accuracy: 0.8847, Validation Accuracy: 0.8832, Loss: 0.0920 Epoch 1 Batch 428/1077 - Train Accuracy: 0.9219, Validation Accuracy: 0.8881, Loss: 0.0762 Epoch 1 Batch 429/1077 - Train Accuracy: 0.9098, Validation Accuracy: 0.8860, Loss: 0.0797 Epoch 1 Batch 430/1077 - Train Accuracy: 0.9055, Validation Accuracy: 0.8810, Loss: 0.0782 Epoch 1 Batch 431/1077 - Train Accuracy: 0.9176, Validation Accuracy: 0.8935, Loss: 0.0737 Epoch 1 Batch 432/1077 - Train Accuracy: 0.9328, Validation Accuracy: 0.8924, Loss: 0.0905 Epoch 1 Batch 433/1077 - Train Accuracy: 0.9430, Validation Accuracy: 0.8977, Loss: 0.0896 Epoch 1 Batch 434/1077 - Train Accuracy: 0.9199, Validation Accuracy: 0.9027, Loss: 0.0833 Epoch 1 Batch 435/1077 - Train Accuracy: 0.9280, Validation Accuracy: 0.9045, Loss: 0.0916 Epoch 1 Batch 436/1077 - Train Accuracy: 0.8988, Validation Accuracy: 0.8910, Loss: 0.0961 Epoch 1 Batch 437/1077 - Train Accuracy: 0.9445, Validation Accuracy: 0.8935, Loss: 0.0724 Epoch 
1 Batch 438/1077 - Train Accuracy: 0.8930, Validation Accuracy: 0.8949, Loss: 0.0792 Epoch 1 Batch 439/1077 - Train Accuracy: 0.9043, Validation Accuracy: 0.8853, Loss: 0.0971 Epoch 1 Batch 440/1077 - Train Accuracy: 0.8895, Validation Accuracy: 0.8928, Loss: 0.1027 Epoch 1 Batch 441/1077 - Train Accuracy: 0.9145, Validation Accuracy: 0.8974, Loss: 0.0827 Epoch 1 Batch 442/1077 - Train Accuracy: 0.8717, Validation Accuracy: 0.8988, Loss: 0.0922 Epoch 1 Batch 443/1077 - Train Accuracy: 0.9539, Validation Accuracy: 0.8938, Loss: 0.0789 Epoch 1 Batch 444/1077 - Train Accuracy: 0.9055, Validation Accuracy: 0.8793, Loss: 0.0835 Epoch 1 Batch 445/1077 - Train Accuracy: 0.8849, Validation Accuracy: 0.8743, Loss: 0.0894 Epoch 1 Batch 446/1077 - Train Accuracy: 0.9077, Validation Accuracy: 0.8711, Loss: 0.0740 Epoch 1 Batch 447/1077 - Train Accuracy: 0.9219, Validation Accuracy: 0.8714, Loss: 0.0882 Epoch 1 Batch 448/1077 - Train Accuracy: 0.8914, Validation Accuracy: 0.8700, Loss: 0.1071 Epoch 1 Batch 449/1077 - Train Accuracy: 0.8855, Validation Accuracy: 0.8743, Loss: 0.0905 Epoch 1 Batch 450/1077 - Train Accuracy: 0.9262, Validation Accuracy: 0.8828, Loss: 0.0884 Epoch 1 Batch 451/1077 - Train Accuracy: 0.9427, Validation Accuracy: 0.8871, Loss: 0.0839 Epoch 1 Batch 452/1077 - Train Accuracy: 0.9215, Validation Accuracy: 0.8825, Loss: 0.0929 Epoch 1 Batch 453/1077 - Train Accuracy: 0.9070, Validation Accuracy: 0.8757, Loss: 0.0799 Epoch 1 Batch 454/1077 - Train Accuracy: 0.9156, Validation Accuracy: 0.8718, Loss: 0.0899 Epoch 1 Batch 455/1077 - Train Accuracy: 0.8885, Validation Accuracy: 0.8825, Loss: 0.0880 Epoch 1 Batch 456/1077 - Train Accuracy: 0.9281, Validation Accuracy: 0.8874, Loss: 0.0879 Epoch 1 Batch 457/1077 - Train Accuracy: 0.9129, Validation Accuracy: 0.8817, Loss: 0.0741 Epoch 1 Batch 458/1077 - Train Accuracy: 0.8863, Validation Accuracy: 0.8871, Loss: 0.0930 Epoch 1 Batch 459/1077 - Train Accuracy: 0.9118, Validation Accuracy: 0.8768, Loss: 0.0794 
Epoch 1 Batch 460/1077 - Train Accuracy: 0.9012, Validation Accuracy: 0.8757, Loss: 0.0895 Epoch 1 Batch 461/1077 - Train Accuracy: 0.8938, Validation Accuracy: 0.8587, Loss: 0.0807 Epoch 1 Batch 462/1077 - Train Accuracy: 0.9176, Validation Accuracy: 0.8661, Loss: 0.0898 Epoch 1 Batch 463/1077 - Train Accuracy: 0.9016, Validation Accuracy: 0.8562, Loss: 0.0852 Epoch 1 Batch 464/1077 - Train Accuracy: 0.9187, Validation Accuracy: 0.8672, Loss: 0.0815 Epoch 1 Batch 465/1077 - Train Accuracy: 0.9161, Validation Accuracy: 0.8778, Loss: 0.0939 Epoch 1 Batch 466/1077 - Train Accuracy: 0.9027, Validation Accuracy: 0.8931, Loss: 0.0764 Epoch 1 Batch 467/1077 - Train Accuracy: 0.9141, Validation Accuracy: 0.8825, Loss: 0.0896 Epoch 1 Batch 468/1077 - Train Accuracy: 0.9182, Validation Accuracy: 0.8924, Loss: 0.0868 Epoch 1 Batch 469/1077 - Train Accuracy: 0.9195, Validation Accuracy: 0.8849, Loss: 0.0916 Epoch 1 Batch 470/1077 - Train Accuracy: 0.9519, Validation Accuracy: 0.8810, Loss: 0.0851 Epoch 1 Batch 471/1077 - Train Accuracy: 0.9285, Validation Accuracy: 0.8853, Loss: 0.0662 Epoch 1 Batch 472/1077 - Train Accuracy: 0.9226, Validation Accuracy: 0.8853, Loss: 0.0752 Epoch 1 Batch 473/1077 - Train Accuracy: 0.9187, Validation Accuracy: 0.8839, Loss: 0.0840 Epoch 1 Batch 474/1077 - Train Accuracy: 0.9078, Validation Accuracy: 0.8857, Loss: 0.0802 Epoch 1 Batch 475/1077 - Train Accuracy: 0.9180, Validation Accuracy: 0.9027, Loss: 0.0785 Epoch 1 Batch 476/1077 - Train Accuracy: 0.9313, Validation Accuracy: 0.8977, Loss: 0.0671 Epoch 1 Batch 477/1077 - Train Accuracy: 0.9353, Validation Accuracy: 0.8832, Loss: 0.0839 Epoch 1 Batch 478/1077 - Train Accuracy: 0.9194, Validation Accuracy: 0.8778, Loss: 0.0796 Epoch 1 Batch 479/1077 - Train Accuracy: 0.8844, Validation Accuracy: 0.8825, Loss: 0.1036 Epoch 1 Batch 480/1077 - Train Accuracy: 0.9206, Validation Accuracy: 0.8828, Loss: 0.0787 Epoch 1 Batch 481/1077 - Train Accuracy: 0.9094, Validation Accuracy: 0.8867, Loss: 
0.0800 Epoch 1 Batch 482/1077 - Train Accuracy: 0.9013, Validation Accuracy: 0.8803, Loss: 0.1019 Epoch 1 Batch 483/1077 - Train Accuracy: 0.9207, Validation Accuracy: 0.8956, Loss: 0.0750 Epoch 1 Batch 484/1077 - Train Accuracy: 0.9320, Validation Accuracy: 0.8896, Loss: 0.0883 Epoch 1 Batch 485/1077 - Train Accuracy: 0.9414, Validation Accuracy: 0.8995, Loss: 0.0819 Epoch 1 Batch 486/1077 - Train Accuracy: 0.9354, Validation Accuracy: 0.8995, Loss: 0.0694 Epoch 1 Batch 487/1077 - Train Accuracy: 0.9174, Validation Accuracy: 0.9045, Loss: 0.0856 Epoch 1 Batch 488/1077 - Train Accuracy: 0.9149, Validation Accuracy: 0.8931, Loss: 0.0700 Epoch 1 Batch 489/1077 - Train Accuracy: 0.9100, Validation Accuracy: 0.8750, Loss: 0.0669 Epoch 1 Batch 490/1077 - Train Accuracy: 0.8922, Validation Accuracy: 0.8700, Loss: 0.0843 Epoch 1 Batch 491/1077 - Train Accuracy: 0.8680, Validation Accuracy: 0.8732, Loss: 0.0986 Epoch 1 Batch 492/1077 - Train Accuracy: 0.9234, Validation Accuracy: 0.8704, Loss: 0.0909 Epoch 1 Batch 493/1077 - Train Accuracy: 0.9386, Validation Accuracy: 0.8711, Loss: 0.0635 Epoch 1 Batch 494/1077 - Train Accuracy: 0.9078, Validation Accuracy: 0.8864, Loss: 0.0664 Epoch 1 Batch 495/1077 - Train Accuracy: 0.9105, Validation Accuracy: 0.8817, Loss: 0.0781 Epoch 1 Batch 496/1077 - Train Accuracy: 0.9121, Validation Accuracy: 0.8768, Loss: 0.0811 Epoch 1 Batch 497/1077 - Train Accuracy: 0.9211, Validation Accuracy: 0.8828, Loss: 0.0842 Epoch 1 Batch 498/1077 - Train Accuracy: 0.9277, Validation Accuracy: 0.8942, Loss: 0.0789 Epoch 1 Batch 499/1077 - Train Accuracy: 0.9022, Validation Accuracy: 0.8977, Loss: 0.0678 Epoch 1 Batch 500/1077 - Train Accuracy: 0.9371, Validation Accuracy: 0.9027, Loss: 0.0733 Epoch 1 Batch 501/1077 - Train Accuracy: 0.9430, Validation Accuracy: 0.9027, Loss: 0.0662 Epoch 1 Batch 502/1077 - Train Accuracy: 0.9285, Validation Accuracy: 0.9038, Loss: 0.0811 Epoch 1 Batch 503/1077 - Train Accuracy: 0.9469, Validation Accuracy: 0.8981, 
Loss: 0.0684 Epoch 1 Batch 504/1077 - Train Accuracy: 0.9168, Validation Accuracy: 0.9006, Loss: 0.0795 Epoch 1 Batch 505/1077 - Train Accuracy: 0.9453, Validation Accuracy: 0.8991, Loss: 0.0651 Epoch 1 Batch 506/1077 - Train Accuracy: 0.9336, Validation Accuracy: 0.8984, Loss: 0.0736 Epoch 1 Batch 507/1077 - Train Accuracy: 0.8883, Validation Accuracy: 0.8810, Loss: 0.0742 Epoch 1 Batch 508/1077 - Train Accuracy: 0.9301, Validation Accuracy: 0.8803, Loss: 0.0697 Epoch 1 Batch 509/1077 - Train Accuracy: 0.8930, Validation Accuracy: 0.8825, Loss: 0.0875 Epoch 1 Batch 510/1077 - Train Accuracy: 0.9195, Validation Accuracy: 0.8899, Loss: 0.0755 Epoch 1 Batch 511/1077 - Train Accuracy: 0.9149, Validation Accuracy: 0.9055, Loss: 0.0741 Epoch 1 Batch 512/1077 - Train Accuracy: 0.9480, Validation Accuracy: 0.8956, Loss: 0.0698 Epoch 1 Batch 513/1077 - Train Accuracy: 0.8926, Validation Accuracy: 0.9105, Loss: 0.0882 Epoch 1 Batch 514/1077 - Train Accuracy: 0.9016, Validation Accuracy: 0.8977, Loss: 0.0779 Epoch 1 Batch 515/1077 - Train Accuracy: 0.9180, Validation Accuracy: 0.8970, Loss: 0.0833 Epoch 1 Batch 516/1077 - Train Accuracy: 0.9007, Validation Accuracy: 0.9031, Loss: 0.0829 Epoch 1 Batch 517/1077 - Train Accuracy: 0.9066, Validation Accuracy: 0.9052, Loss: 0.0863 Epoch 1 Batch 518/1077 - Train Accuracy: 0.9238, Validation Accuracy: 0.8896, Loss: 0.0722 Epoch 1 Batch 519/1077 - Train Accuracy: 0.9211, Validation Accuracy: 0.9013, Loss: 0.0710 Epoch 1 Batch 520/1077 - Train Accuracy: 0.9449, Validation Accuracy: 0.8892, Loss: 0.0712 Epoch 1 Batch 521/1077 - Train Accuracy: 0.8836, Validation Accuracy: 0.8913, Loss: 0.0791 Epoch 1 Batch 522/1077 - Train Accuracy: 0.8473, Validation Accuracy: 0.8977, Loss: 0.0942 Epoch 1 Batch 523/1077 - Train Accuracy: 0.9277, Validation Accuracy: 0.9077, Loss: 0.0910 Epoch 1 Batch 524/1077 - Train Accuracy: 0.9293, Validation Accuracy: 0.8970, Loss: 0.0741 Epoch 1 Batch 525/1077 - Train Accuracy: 0.8996, Validation Accuracy: 
0.8952, Loss: 0.0797 Epoch 1 Batch 526/1077 - Train Accuracy: 0.9270, Validation Accuracy: 0.9027, Loss: 0.0696 Epoch 1 Batch 527/1077 - Train Accuracy: 0.9038, Validation Accuracy: 0.9002, Loss: 0.0836 Epoch 1 Batch 528/1077 - Train Accuracy: 0.8988, Validation Accuracy: 0.9031, Loss: 0.0810 Epoch 1 Batch 529/1077 - Train Accuracy: 0.8980, Validation Accuracy: 0.9119, Loss: 0.0765 Epoch 1 Batch 530/1077 - Train Accuracy: 0.9172, Validation Accuracy: 0.9119, Loss: 0.0812 Epoch 1 Batch 531/1077 - Train Accuracy: 0.9180, Validation Accuracy: 0.9016, Loss: 0.0744 Epoch 1 Batch 532/1077 - Train Accuracy: 0.8855, Validation Accuracy: 0.9080, Loss: 0.0939 Epoch 1 Batch 533/1077 - Train Accuracy: 0.9031, Validation Accuracy: 0.9038, Loss: 0.0885 Epoch 1 Batch 534/1077 - Train Accuracy: 0.9085, Validation Accuracy: 0.9084, Loss: 0.0776 Epoch 1 Batch 535/1077 - Train Accuracy: 0.9250, Validation Accuracy: 0.8981, Loss: 0.0824 Epoch 1 Batch 536/1077 - Train Accuracy: 0.9117, Validation Accuracy: 0.9261, Loss: 0.0841 Epoch 1 Batch 537/1077 - Train Accuracy: 0.9375, Validation Accuracy: 0.9283, Loss: 0.0659 Epoch 1 Batch 538/1077 - Train Accuracy: 0.9528, Validation Accuracy: 0.9311, Loss: 0.0595 Epoch 1 Batch 539/1077 - Train Accuracy: 0.9223, Validation Accuracy: 0.9229, Loss: 0.0887 Epoch 1 Batch 540/1077 - Train Accuracy: 0.9395, Validation Accuracy: 0.9176, Loss: 0.0670 Epoch 1 Batch 541/1077 - Train Accuracy: 0.9285, Validation Accuracy: 0.9091, Loss: 0.0726 Epoch 1 Batch 542/1077 - Train Accuracy: 0.9070, Validation Accuracy: 0.8928, Loss: 0.0765 Epoch 1 Batch 543/1077 - Train Accuracy: 0.9043, Validation Accuracy: 0.9027, Loss: 0.0780 Epoch 1 Batch 544/1077 - Train Accuracy: 0.9305, Validation Accuracy: 0.9226, Loss: 0.0530 Epoch 1 Batch 545/1077 - Train Accuracy: 0.9219, Validation Accuracy: 0.9197, Loss: 0.0894 Epoch 1 Batch 546/1077 - Train Accuracy: 0.9230, Validation Accuracy: 0.9233, Loss: 0.0787 Epoch 1 Batch 547/1077 - Train Accuracy: 0.9430, Validation 
Accuracy: 0.9130, Loss: 0.0744 Epoch 1 Batch 548/1077 - Train Accuracy: 0.9078, Validation Accuracy: 0.9130, Loss: 0.0914 Epoch 1 Batch 549/1077 - Train Accuracy: 0.8988, Validation Accuracy: 0.9226, Loss: 0.0914 Epoch 1 Batch 550/1077 - Train Accuracy: 0.8836, Validation Accuracy: 0.9173, Loss: 0.0716 Epoch 1 Batch 551/1077 - Train Accuracy: 0.9148, Validation Accuracy: 0.9084, Loss: 0.0816 Epoch 1 Batch 552/1077 - Train Accuracy: 0.9195, Validation Accuracy: 0.9162, Loss: 0.0879 Epoch 1 Batch 553/1077 - Train Accuracy: 0.9258, Validation Accuracy: 0.9173, Loss: 0.0825 Epoch 1 Batch 554/1077 - Train Accuracy: 0.9012, Validation Accuracy: 0.9013, Loss: 0.0698 Epoch 1 Batch 555/1077 - Train Accuracy: 0.9227, Validation Accuracy: 0.8984, Loss: 0.0730 Epoch 1 Batch 556/1077 - Train Accuracy: 0.9426, Validation Accuracy: 0.9137, Loss: 0.0623 Epoch 1 Batch 557/1077 - Train Accuracy: 0.9207, Validation Accuracy: 0.9144, Loss: 0.0689 Epoch 1 Batch 558/1077 - Train Accuracy: 0.9168, Validation Accuracy: 0.9048, Loss: 0.0663 Epoch 1 Batch 559/1077 - Train Accuracy: 0.9180, Validation Accuracy: 0.9077, Loss: 0.0778 Epoch 1 Batch 560/1077 - Train Accuracy: 0.8941, Validation Accuracy: 0.9059, Loss: 0.0707 Epoch 1 Batch 561/1077 - Train Accuracy: 0.9245, Validation Accuracy: 0.8984, Loss: 0.0684 Epoch 1 Batch 562/1077 - Train Accuracy: 0.9293, Validation Accuracy: 0.9031, Loss: 0.0654 Epoch 1 Batch 563/1077 - Train Accuracy: 0.9250, Validation Accuracy: 0.9066, Loss: 0.0765 Epoch 1 Batch 564/1077 - Train Accuracy: 0.9342, Validation Accuracy: 0.9048, Loss: 0.0803 Epoch 1 Batch 565/1077 - Train Accuracy: 0.9070, Validation Accuracy: 0.9073, Loss: 0.0771 Epoch 1 Batch 566/1077 - Train Accuracy: 0.8910, Validation Accuracy: 0.9002, Loss: 0.0712 Epoch 1 Batch 567/1077 - Train Accuracy: 0.9125, Validation Accuracy: 0.8991, Loss: 0.0722 Epoch 1 Batch 568/1077 - Train Accuracy: 0.9258, Validation Accuracy: 0.9023, Loss: 0.0714 Epoch 1 Batch 569/1077 - Train Accuracy: 0.9203, 
Validation Accuracy: 0.9013, Loss: 0.0788 Epoch 1 Batch 570/1077 - Train Accuracy: 0.9137, Validation Accuracy: 0.9055, Loss: 0.0877 Epoch 1 Batch 571/1077 - Train Accuracy: 0.9256, Validation Accuracy: 0.9105, Loss: 0.0593 Epoch 1 Batch 572/1077 - Train Accuracy: 0.9334, Validation Accuracy: 0.9077, Loss: 0.0700 Epoch 1 Batch 573/1077 - Train Accuracy: 0.9230, Validation Accuracy: 0.9031, Loss: 0.0897 Epoch 1 Batch 574/1077 - Train Accuracy: 0.9100, Validation Accuracy: 0.8995, Loss: 0.0767 Epoch 1 Batch 575/1077 - Train Accuracy: 0.9334, Validation Accuracy: 0.8995, Loss: 0.0623 Epoch 1 Batch 576/1077 - Train Accuracy: 0.9449, Validation Accuracy: 0.8999, Loss: 0.0696 Epoch 1 Batch 577/1077 - Train Accuracy: 0.9153, Validation Accuracy: 0.8960, Loss: 0.0776 Epoch 1 Batch 578/1077 - Train Accuracy: 0.9477, Validation Accuracy: 0.9041, Loss: 0.0622 Epoch 1 Batch 579/1077 - Train Accuracy: 0.9227, Validation Accuracy: 0.9087, Loss: 0.0624 Epoch 1 Batch 580/1077 - Train Accuracy: 0.9416, Validation Accuracy: 0.9119, Loss: 0.0598 Epoch 1 Batch 581/1077 - Train Accuracy: 0.9387, Validation Accuracy: 0.9045, Loss: 0.0552 Epoch 1 Batch 582/1077 - Train Accuracy: 0.9535, Validation Accuracy: 0.9126, Loss: 0.0765 Epoch 1 Batch 583/1077 - Train Accuracy: 0.9219, Validation Accuracy: 0.9123, Loss: 0.0788 Epoch 1 Batch 584/1077 - Train Accuracy: 0.9293, Validation Accuracy: 0.9119, Loss: 0.0691 Epoch 1 Batch 585/1077 - Train Accuracy: 0.9397, Validation Accuracy: 0.9123, Loss: 0.0556 Epoch 1 Batch 586/1077 - Train Accuracy: 0.9211, Validation Accuracy: 0.9194, Loss: 0.0732 Epoch 1 Batch 587/1077 - Train Accuracy: 0.9222, Validation Accuracy: 0.9261, Loss: 0.0788 Epoch 1 Batch 588/1077 - Train Accuracy: 0.9305, Validation Accuracy: 0.9173, Loss: 0.0680 Epoch 1 Batch 589/1077 - Train Accuracy: 0.9400, Validation Accuracy: 0.9151, Loss: 0.0682 Epoch 1 Batch 590/1077 - Train Accuracy: 0.8746, Validation Accuracy: 0.9183, Loss: 0.0825 Epoch 1 Batch 591/1077 - Train Accuracy: 
0.9151, Validation Accuracy: 0.9094, Loss: 0.0716 Epoch 1 Batch 592/1077 - Train Accuracy: 0.9305, Validation Accuracy: 0.9109, Loss: 0.0752 Epoch 1 Batch 593/1077 - Train Accuracy: 0.9193, Validation Accuracy: 0.9059, Loss: 0.0825 Epoch 1 Batch 594/1077 - Train Accuracy: 0.9387, Validation Accuracy: 0.9077, Loss: 0.0809 Epoch 1 Batch 595/1077 - Train Accuracy: 0.9305, Validation Accuracy: 0.8970, Loss: 0.0688 Epoch 1 Batch 596/1077 - Train Accuracy: 0.9297, Validation Accuracy: 0.9027, Loss: 0.0767 Epoch 1 Batch 597/1077 - Train Accuracy: 0.9102, Validation Accuracy: 0.9006, Loss: 0.0709 Epoch 1 Batch 598/1077 - Train Accuracy: 0.9174, Validation Accuracy: 0.9077, Loss: 0.0783 Epoch 1 Batch 599/1077 - Train Accuracy: 0.8879, Validation Accuracy: 0.9119, Loss: 0.0957 Epoch 1 Batch 600/1077 - Train Accuracy: 0.9353, Validation Accuracy: 0.9087, Loss: 0.0717 Epoch 1 Batch 601/1077 - Train Accuracy: 0.9156, Validation Accuracy: 0.9020, Loss: 0.0772 Epoch 1 Batch 602/1077 - Train Accuracy: 0.9234, Validation Accuracy: 0.9016, Loss: 0.0721 Epoch 1 Batch 603/1077 - Train Accuracy: 0.9122, Validation Accuracy: 0.9016, Loss: 0.0748 Epoch 1 Batch 604/1077 - Train Accuracy: 0.9117, Validation Accuracy: 0.9002, Loss: 0.0773 Epoch 1 Batch 605/1077 - Train Accuracy: 0.9219, Validation Accuracy: 0.9052, Loss: 0.0911 Epoch 1 Batch 606/1077 - Train Accuracy: 0.9156, Validation Accuracy: 0.9105, Loss: 0.0572 Epoch 1 Batch 607/1077 - Train Accuracy: 0.9283, Validation Accuracy: 0.9144, Loss: 0.0774 Epoch 1 Batch 608/1077 - Train Accuracy: 0.9250, Validation Accuracy: 0.9144, Loss: 0.0805 Epoch 1 Batch 609/1077 - Train Accuracy: 0.9129, Validation Accuracy: 0.9087, Loss: 0.0779 Epoch 1 Batch 610/1077 - Train Accuracy: 0.9391, Validation Accuracy: 0.9059, Loss: 0.0747 Epoch 1 Batch 611/1077 - Train Accuracy: 0.9297, Validation Accuracy: 0.9016, Loss: 0.0657 Epoch 1 Batch 612/1077 - Train Accuracy: 0.9483, Validation Accuracy: 0.8967, Loss: 0.0615 Epoch 1 Batch 613/1077 - Train 
Accuracy: 0.9074, Validation Accuracy: 0.8981, Loss: 0.0851 Epoch 1 Batch 614/1077 - Train Accuracy: 0.9267, Validation Accuracy: 0.9073, Loss: 0.0664 Epoch 1 Batch 615/1077 - Train Accuracy: 0.9430, Validation Accuracy: 0.9176, Loss: 0.0678 Epoch 1 Batch 616/1077 - Train Accuracy: 0.9087, Validation Accuracy: 0.9112, Loss: 0.0790 Epoch 1 Batch 617/1077 - Train Accuracy: 0.9349, Validation Accuracy: 0.9116, Loss: 0.0688 Epoch 1 Batch 618/1077 - Train Accuracy: 0.9176, Validation Accuracy: 0.9233, Loss: 0.0677 Epoch 1 Batch 619/1077 - Train Accuracy: 0.9186, Validation Accuracy: 0.9137, Loss: 0.0618 Epoch 1 Batch 620/1077 - Train Accuracy: 0.9535, Validation Accuracy: 0.9130, Loss: 0.0587 Epoch 1 Batch 621/1077 - Train Accuracy: 0.9410, Validation Accuracy: 0.9062, Loss: 0.0637 Epoch 1 Batch 622/1077 - Train Accuracy: 0.9182, Validation Accuracy: 0.9073, Loss: 0.0798 Epoch 1 Batch 623/1077 - Train Accuracy: 0.9105, Validation Accuracy: 0.9080, Loss: 0.0736 Epoch 1 Batch 624/1077 - Train Accuracy: 0.9349, Validation Accuracy: 0.9105, Loss: 0.0736 Epoch 1 Batch 625/1077 - Train Accuracy: 0.9199, Validation Accuracy: 0.9034, Loss: 0.0678 Epoch 1 Batch 626/1077 - Train Accuracy: 0.8981, Validation Accuracy: 0.9027, Loss: 0.0648 Epoch 1 Batch 627/1077 - Train Accuracy: 0.9102, Validation Accuracy: 0.9137, Loss: 0.0623 Epoch 1 Batch 628/1077 - Train Accuracy: 0.9371, Validation Accuracy: 0.9141, Loss: 0.0791 Epoch 1 Batch 629/1077 - Train Accuracy: 0.8968, Validation Accuracy: 0.9148, Loss: 0.0715 Epoch 1 Batch 630/1077 - Train Accuracy: 0.9379, Validation Accuracy: 0.9091, Loss: 0.0685 Epoch 1 Batch 631/1077 - Train Accuracy: 0.9059, Validation Accuracy: 0.9038, Loss: 0.0686 Epoch 1 Batch 632/1077 - Train Accuracy: 0.9309, Validation Accuracy: 0.9045, Loss: 0.0606 Epoch 1 Batch 633/1077 - Train Accuracy: 0.9215, Validation Accuracy: 0.9038, Loss: 0.0654 Epoch 1 Batch 634/1077 - Train Accuracy: 0.9040, Validation Accuracy: 0.8956, Loss: 0.0509 Epoch 1 Batch 635/1077 - 
Epoch 1 Batch 636/1077 - Train Accuracy: 0.9336, Validation Accuracy: 0.9169, Loss: 0.0626
...
Epoch 1 Batch 1072/1077 - Train Accuracy: 0.9401, Validation Accuracy: 0.9414, Loss: 0.0518
[Per-batch training log condensed: batches 636-1072 of epoch 1. Over this span train accuracy rises from roughly 0.91 to 0.95, validation accuracy from roughly 0.90 to 0.94, and loss falls from about 0.07 to 0.04.]
Batch 1073/1077 - Train Accuracy: 0.9449, Validation Accuracy: 0.9556, Loss: 0.0522 Epoch 1 Batch 1074/1077 - Train Accuracy: 0.9516, Validation Accuracy: 0.9577, Loss: 0.0678 Epoch 1 Batch 1075/1077 - Train Accuracy: 0.9338, Validation Accuracy: 0.9595, Loss: 0.0553 Epoch 2 Batch 1/1077 - Train Accuracy: 0.9609, Validation Accuracy: 0.9421, Loss: 0.0364 Epoch 2 Batch 2/1077 - Train Accuracy: 0.9445, Validation Accuracy: 0.9407, Loss: 0.0436 Epoch 2 Batch 3/1077 - Train Accuracy: 0.9578, Validation Accuracy: 0.9290, Loss: 0.0499 Epoch 2 Batch 4/1077 - Train Accuracy: 0.9504, Validation Accuracy: 0.9290, Loss: 0.0430 Epoch 2 Batch 5/1077 - Train Accuracy: 0.9242, Validation Accuracy: 0.9350, Loss: 0.0664 Epoch 2 Batch 6/1077 - Train Accuracy: 0.9590, Validation Accuracy: 0.9407, Loss: 0.0486 Epoch 2 Batch 7/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9403, Loss: 0.0409 Epoch 2 Batch 8/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9510, Loss: 0.0456 Epoch 2 Batch 9/1077 - Train Accuracy: 0.9422, Validation Accuracy: 0.9549, Loss: 0.0458 Epoch 2 Batch 10/1077 - Train Accuracy: 0.9391, Validation Accuracy: 0.9474, Loss: 0.0495 Epoch 2 Batch 11/1077 - Train Accuracy: 0.9349, Validation Accuracy: 0.9396, Loss: 0.0582 Epoch 2 Batch 12/1077 - Train Accuracy: 0.9480, Validation Accuracy: 0.9336, Loss: 0.0472 Epoch 2 Batch 13/1077 - Train Accuracy: 0.9483, Validation Accuracy: 0.9371, Loss: 0.0528 Epoch 2 Batch 14/1077 - Train Accuracy: 0.9565, Validation Accuracy: 0.9354, Loss: 0.0329 Epoch 2 Batch 15/1077 - Train Accuracy: 0.9648, Validation Accuracy: 0.9382, Loss: 0.0419 Epoch 2 Batch 16/1077 - Train Accuracy: 0.9512, Validation Accuracy: 0.9386, Loss: 0.0476 Epoch 2 Batch 17/1077 - Train Accuracy: 0.9496, Validation Accuracy: 0.9343, Loss: 0.0461 Epoch 2 Batch 18/1077 - Train Accuracy: 0.9551, Validation Accuracy: 0.9389, Loss: 0.0489 Epoch 2 Batch 19/1077 - Train Accuracy: 0.9250, Validation Accuracy: 0.9421, Loss: 0.0452 Epoch 2 Batch 20/1077 - Train 
Accuracy: 0.9301, Validation Accuracy: 0.9492, Loss: 0.0413 Epoch 2 Batch 21/1077 - Train Accuracy: 0.9402, Validation Accuracy: 0.9325, Loss: 0.0482 Epoch 2 Batch 22/1077 - Train Accuracy: 0.9437, Validation Accuracy: 0.9084, Loss: 0.0482 Epoch 2 Batch 23/1077 - Train Accuracy: 0.9520, Validation Accuracy: 0.9148, Loss: 0.0446 Epoch 2 Batch 24/1077 - Train Accuracy: 0.9500, Validation Accuracy: 0.9229, Loss: 0.0526 Epoch 2 Batch 25/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9403, Loss: 0.0341 Epoch 2 Batch 26/1077 - Train Accuracy: 0.9406, Validation Accuracy: 0.9322, Loss: 0.0540 Epoch 2 Batch 27/1077 - Train Accuracy: 0.9505, Validation Accuracy: 0.9311, Loss: 0.0385 Epoch 2 Batch 28/1077 - Train Accuracy: 0.9500, Validation Accuracy: 0.9308, Loss: 0.0471 Epoch 2 Batch 29/1077 - Train Accuracy: 0.9437, Validation Accuracy: 0.9357, Loss: 0.0454 Epoch 2 Batch 30/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9226, Loss: 0.0421 Epoch 2 Batch 31/1077 - Train Accuracy: 0.9613, Validation Accuracy: 0.9290, Loss: 0.0427 Epoch 2 Batch 32/1077 - Train Accuracy: 0.9412, Validation Accuracy: 0.9325, Loss: 0.0470 Epoch 2 Batch 33/1077 - Train Accuracy: 0.9368, Validation Accuracy: 0.9290, Loss: 0.0413 Epoch 2 Batch 34/1077 - Train Accuracy: 0.9473, Validation Accuracy: 0.9322, Loss: 0.0408 Epoch 2 Batch 35/1077 - Train Accuracy: 0.9641, Validation Accuracy: 0.9251, Loss: 0.0451 Epoch 2 Batch 36/1077 - Train Accuracy: 0.9453, Validation Accuracy: 0.9276, Loss: 0.0427 Epoch 2 Batch 37/1077 - Train Accuracy: 0.9473, Validation Accuracy: 0.9382, Loss: 0.0416 Epoch 2 Batch 38/1077 - Train Accuracy: 0.9564, Validation Accuracy: 0.9428, Loss: 0.0649 Epoch 2 Batch 39/1077 - Train Accuracy: 0.9387, Validation Accuracy: 0.9478, Loss: 0.0569 Epoch 2 Batch 40/1077 - Train Accuracy: 0.9563, Validation Accuracy: 0.9304, Loss: 0.0375 Epoch 2 Batch 41/1077 - Train Accuracy: 0.9420, Validation Accuracy: 0.9297, Loss: 0.0462 Epoch 2 Batch 42/1077 - Train Accuracy: 0.9414, 
Validation Accuracy: 0.9350, Loss: 0.0529 Epoch 2 Batch 43/1077 - Train Accuracy: 0.9634, Validation Accuracy: 0.9308, Loss: 0.0295 Epoch 2 Batch 44/1077 - Train Accuracy: 0.9696, Validation Accuracy: 0.9308, Loss: 0.0386 Epoch 2 Batch 45/1077 - Train Accuracy: 0.9195, Validation Accuracy: 0.9368, Loss: 0.0449 Epoch 2 Batch 46/1077 - Train Accuracy: 0.9309, Validation Accuracy: 0.9347, Loss: 0.0443 Epoch 2 Batch 47/1077 - Train Accuracy: 0.9480, Validation Accuracy: 0.9261, Loss: 0.0453 Epoch 2 Batch 48/1077 - Train Accuracy: 0.9465, Validation Accuracy: 0.9233, Loss: 0.0650 Epoch 2 Batch 49/1077 - Train Accuracy: 0.9308, Validation Accuracy: 0.9272, Loss: 0.0499 Epoch 2 Batch 50/1077 - Train Accuracy: 0.9363, Validation Accuracy: 0.9297, Loss: 0.0505 Epoch 2 Batch 51/1077 - Train Accuracy: 0.9375, Validation Accuracy: 0.9368, Loss: 0.0458 Epoch 2 Batch 52/1077 - Train Accuracy: 0.9410, Validation Accuracy: 0.9347, Loss: 0.0467 Epoch 2 Batch 53/1077 - Train Accuracy: 0.9297, Validation Accuracy: 0.9347, Loss: 0.0407 Epoch 2 Batch 54/1077 - Train Accuracy: 0.9313, Validation Accuracy: 0.9354, Loss: 0.0680 Epoch 2 Batch 55/1077 - Train Accuracy: 0.9570, Validation Accuracy: 0.9329, Loss: 0.0388 Epoch 2 Batch 56/1077 - Train Accuracy: 0.9504, Validation Accuracy: 0.9251, Loss: 0.0415 Epoch 2 Batch 57/1077 - Train Accuracy: 0.9279, Validation Accuracy: 0.9229, Loss: 0.0452 Epoch 2 Batch 58/1077 - Train Accuracy: 0.9523, Validation Accuracy: 0.9180, Loss: 0.0488 Epoch 2 Batch 59/1077 - Train Accuracy: 0.9375, Validation Accuracy: 0.9112, Loss: 0.0420 Epoch 2 Batch 60/1077 - Train Accuracy: 0.9554, Validation Accuracy: 0.9176, Loss: 0.0363 Epoch 2 Batch 61/1077 - Train Accuracy: 0.9348, Validation Accuracy: 0.9151, Loss: 0.0553 Epoch 2 Batch 62/1077 - Train Accuracy: 0.9190, Validation Accuracy: 0.9197, Loss: 0.0489 Epoch 2 Batch 63/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9208, Loss: 0.0377 Epoch 2 Batch 64/1077 - Train Accuracy: 0.9449, Validation 
Accuracy: 0.9229, Loss: 0.0463 Epoch 2 Batch 65/1077 - Train Accuracy: 0.9416, Validation Accuracy: 0.9201, Loss: 0.0401 Epoch 2 Batch 66/1077 - Train Accuracy: 0.9587, Validation Accuracy: 0.9251, Loss: 0.0279 Epoch 2 Batch 67/1077 - Train Accuracy: 0.9401, Validation Accuracy: 0.9240, Loss: 0.0429 Epoch 2 Batch 68/1077 - Train Accuracy: 0.9500, Validation Accuracy: 0.9354, Loss: 0.0520 Epoch 2 Batch 69/1077 - Train Accuracy: 0.9336, Validation Accuracy: 0.9332, Loss: 0.0600 Epoch 2 Batch 70/1077 - Train Accuracy: 0.9178, Validation Accuracy: 0.9272, Loss: 0.0494 Epoch 2 Batch 71/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9297, Loss: 0.0300 Epoch 2 Batch 72/1077 - Train Accuracy: 0.9203, Validation Accuracy: 0.9375, Loss: 0.0445 Epoch 2 Batch 73/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9400, Loss: 0.0390 Epoch 2 Batch 74/1077 - Train Accuracy: 0.9501, Validation Accuracy: 0.9272, Loss: 0.0393 Epoch 2 Batch 75/1077 - Train Accuracy: 0.9227, Validation Accuracy: 0.9237, Loss: 0.0545 Epoch 2 Batch 76/1077 - Train Accuracy: 0.9551, Validation Accuracy: 0.9190, Loss: 0.0358 Epoch 2 Batch 77/1077 - Train Accuracy: 0.9430, Validation Accuracy: 0.9158, Loss: 0.0484 Epoch 2 Batch 78/1077 - Train Accuracy: 0.9256, Validation Accuracy: 0.9126, Loss: 0.0408 Epoch 2 Batch 79/1077 - Train Accuracy: 0.9555, Validation Accuracy: 0.9176, Loss: 0.0429 Epoch 2 Batch 80/1077 - Train Accuracy: 0.9391, Validation Accuracy: 0.9151, Loss: 0.0396 Epoch 2 Batch 81/1077 - Train Accuracy: 0.9477, Validation Accuracy: 0.9233, Loss: 0.0358 Epoch 2 Batch 82/1077 - Train Accuracy: 0.9585, Validation Accuracy: 0.9194, Loss: 0.0372 Epoch 2 Batch 83/1077 - Train Accuracy: 0.9363, Validation Accuracy: 0.9300, Loss: 0.0409 Epoch 2 Batch 84/1077 - Train Accuracy: 0.9395, Validation Accuracy: 0.9205, Loss: 0.0410 Epoch 2 Batch 85/1077 - Train Accuracy: 0.9590, Validation Accuracy: 0.9233, Loss: 0.0339 Epoch 2 Batch 86/1077 - Train Accuracy: 0.9430, Validation Accuracy: 0.9162, 
Loss: 0.0410 Epoch 2 Batch 87/1077 - Train Accuracy: 0.9336, Validation Accuracy: 0.9137, Loss: 0.0557 Epoch 2 Batch 88/1077 - Train Accuracy: 0.9332, Validation Accuracy: 0.9116, Loss: 0.0505 Epoch 2 Batch 89/1077 - Train Accuracy: 0.9480, Validation Accuracy: 0.9109, Loss: 0.0457 Epoch 2 Batch 90/1077 - Train Accuracy: 0.9125, Validation Accuracy: 0.9240, Loss: 0.0404 Epoch 2 Batch 91/1077 - Train Accuracy: 0.9438, Validation Accuracy: 0.9190, Loss: 0.0397 Epoch 2 Batch 92/1077 - Train Accuracy: 0.9587, Validation Accuracy: 0.9148, Loss: 0.0453 Epoch 2 Batch 93/1077 - Train Accuracy: 0.9582, Validation Accuracy: 0.9109, Loss: 0.0435 Epoch 2 Batch 94/1077 - Train Accuracy: 0.9527, Validation Accuracy: 0.9205, Loss: 0.0379 Epoch 2 Batch 95/1077 - Train Accuracy: 0.9628, Validation Accuracy: 0.9293, Loss: 0.0476 Epoch 2 Batch 96/1077 - Train Accuracy: 0.9344, Validation Accuracy: 0.9450, Loss: 0.0455 Epoch 2 Batch 97/1077 - Train Accuracy: 0.9398, Validation Accuracy: 0.9428, Loss: 0.0491 Epoch 2 Batch 98/1077 - Train Accuracy: 0.9516, Validation Accuracy: 0.9489, Loss: 0.0481 Epoch 2 Batch 99/1077 - Train Accuracy: 0.9582, Validation Accuracy: 0.9379, Loss: 0.0396 Epoch 2 Batch 100/1077 - Train Accuracy: 0.9367, Validation Accuracy: 0.9339, Loss: 0.0410 Epoch 2 Batch 101/1077 - Train Accuracy: 0.9320, Validation Accuracy: 0.9535, Loss: 0.0433 Epoch 2 Batch 102/1077 - Train Accuracy: 0.9566, Validation Accuracy: 0.9538, Loss: 0.0382 Epoch 2 Batch 103/1077 - Train Accuracy: 0.9391, Validation Accuracy: 0.9439, Loss: 0.0511 Epoch 2 Batch 104/1077 - Train Accuracy: 0.9511, Validation Accuracy: 0.9563, Loss: 0.0477 Epoch 2 Batch 105/1077 - Train Accuracy: 0.9590, Validation Accuracy: 0.9517, Loss: 0.0382 Epoch 2 Batch 106/1077 - Train Accuracy: 0.9396, Validation Accuracy: 0.9435, Loss: 0.0484 Epoch 2 Batch 107/1077 - Train Accuracy: 0.9364, Validation Accuracy: 0.9407, Loss: 0.0419 Epoch 2 Batch 108/1077 - Train Accuracy: 0.9233, Validation Accuracy: 0.9293, Loss: 
0.0451 Epoch 2 Batch 109/1077 - Train Accuracy: 0.9391, Validation Accuracy: 0.9258, Loss: 0.0470 Epoch 2 Batch 110/1077 - Train Accuracy: 0.9633, Validation Accuracy: 0.9261, Loss: 0.0325 Epoch 2 Batch 111/1077 - Train Accuracy: 0.9207, Validation Accuracy: 0.9261, Loss: 0.0433 Epoch 2 Batch 112/1077 - Train Accuracy: 0.9383, Validation Accuracy: 0.9279, Loss: 0.0392 Epoch 2 Batch 113/1077 - Train Accuracy: 0.9297, Validation Accuracy: 0.9261, Loss: 0.0455 Epoch 2 Batch 114/1077 - Train Accuracy: 0.9650, Validation Accuracy: 0.9347, Loss: 0.0349 Epoch 2 Batch 115/1077 - Train Accuracy: 0.9551, Validation Accuracy: 0.9396, Loss: 0.0476 Epoch 2 Batch 116/1077 - Train Accuracy: 0.9160, Validation Accuracy: 0.9379, Loss: 0.0627 Epoch 2 Batch 117/1077 - Train Accuracy: 0.9594, Validation Accuracy: 0.9308, Loss: 0.0376 Epoch 2 Batch 118/1077 - Train Accuracy: 0.9350, Validation Accuracy: 0.9300, Loss: 0.0444 Epoch 2 Batch 119/1077 - Train Accuracy: 0.9316, Validation Accuracy: 0.9300, Loss: 0.0427 Epoch 2 Batch 120/1077 - Train Accuracy: 0.9547, Validation Accuracy: 0.9343, Loss: 0.0481 Epoch 2 Batch 121/1077 - Train Accuracy: 0.9305, Validation Accuracy: 0.9336, Loss: 0.0492 Epoch 2 Batch 122/1077 - Train Accuracy: 0.9480, Validation Accuracy: 0.9361, Loss: 0.0437 Epoch 2 Batch 123/1077 - Train Accuracy: 0.9527, Validation Accuracy: 0.9343, Loss: 0.0356 Epoch 2 Batch 124/1077 - Train Accuracy: 0.9418, Validation Accuracy: 0.9293, Loss: 0.0494 Epoch 2 Batch 125/1077 - Train Accuracy: 0.9427, Validation Accuracy: 0.9240, Loss: 0.0514 Epoch 2 Batch 126/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9315, Loss: 0.0403 Epoch 2 Batch 127/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9368, Loss: 0.0379 Epoch 2 Batch 128/1077 - Train Accuracy: 0.9751, Validation Accuracy: 0.9311, Loss: 0.0419 Epoch 2 Batch 129/1077 - Train Accuracy: 0.9500, Validation Accuracy: 0.9304, Loss: 0.0520 Epoch 2 Batch 130/1077 - Train Accuracy: 0.9561, Validation Accuracy: 0.9283, 
Loss: 0.0366 Epoch 2 Batch 131/1077 - Train Accuracy: 0.9527, Validation Accuracy: 0.9300, Loss: 0.0428 Epoch 2 Batch 132/1077 - Train Accuracy: 0.9418, Validation Accuracy: 0.9247, Loss: 0.0423 Epoch 2 Batch 133/1077 - Train Accuracy: 0.9605, Validation Accuracy: 0.9371, Loss: 0.0325 Epoch 2 Batch 134/1077 - Train Accuracy: 0.9542, Validation Accuracy: 0.9368, Loss: 0.0369 Epoch 2 Batch 135/1077 - Train Accuracy: 0.9305, Validation Accuracy: 0.9268, Loss: 0.0390 Epoch 2 Batch 136/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9272, Loss: 0.0407 Epoch 2 Batch 137/1077 - Train Accuracy: 0.9524, Validation Accuracy: 0.9183, Loss: 0.0318 Epoch 2 Batch 138/1077 - Train Accuracy: 0.9113, Validation Accuracy: 0.9215, Loss: 0.0415 Epoch 2 Batch 139/1077 - Train Accuracy: 0.9379, Validation Accuracy: 0.9041, Loss: 0.0516 Epoch 2 Batch 140/1077 - Train Accuracy: 0.9420, Validation Accuracy: 0.9091, Loss: 0.0368 Epoch 2 Batch 141/1077 - Train Accuracy: 0.9551, Validation Accuracy: 0.9151, Loss: 0.0406 Epoch 2 Batch 142/1077 - Train Accuracy: 0.9330, Validation Accuracy: 0.9336, Loss: 0.0424 Epoch 2 Batch 143/1077 - Train Accuracy: 0.9453, Validation Accuracy: 0.9375, Loss: 0.0441 Epoch 2 Batch 144/1077 - Train Accuracy: 0.9100, Validation Accuracy: 0.9368, Loss: 0.0561 Epoch 2 Batch 145/1077 - Train Accuracy: 0.9535, Validation Accuracy: 0.9467, Loss: 0.0420 Epoch 2 Batch 146/1077 - Train Accuracy: 0.9312, Validation Accuracy: 0.9354, Loss: 0.0679 Epoch 2 Batch 147/1077 - Train Accuracy: 0.9563, Validation Accuracy: 0.9450, Loss: 0.0383 Epoch 2 Batch 148/1077 - Train Accuracy: 0.9344, Validation Accuracy: 0.9403, Loss: 0.0483 Epoch 2 Batch 149/1077 - Train Accuracy: 0.9230, Validation Accuracy: 0.9396, Loss: 0.0456 Epoch 2 Batch 150/1077 - Train Accuracy: 0.9598, Validation Accuracy: 0.9418, Loss: 0.0443 Epoch 2 Batch 151/1077 - Train Accuracy: 0.9308, Validation Accuracy: 0.9339, Loss: 0.0357 Epoch 2 Batch 152/1077 - Train Accuracy: 0.9309, Validation Accuracy: 
0.9325, Loss: 0.0565 Epoch 2 Batch 153/1077 - Train Accuracy: 0.9512, Validation Accuracy: 0.9251, Loss: 0.0466 Epoch 2 Batch 154/1077 - Train Accuracy: 0.9457, Validation Accuracy: 0.9197, Loss: 0.0413 Epoch 2 Batch 155/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9403, Loss: 0.0429 Epoch 2 Batch 156/1077 - Train Accuracy: 0.9570, Validation Accuracy: 0.9357, Loss: 0.0306 Epoch 2 Batch 157/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9308, Loss: 0.0378 Epoch 2 Batch 158/1077 - Train Accuracy: 0.9148, Validation Accuracy: 0.9368, Loss: 0.0546 Epoch 2 Batch 159/1077 - Train Accuracy: 0.9516, Validation Accuracy: 0.9407, Loss: 0.0415 Epoch 2 Batch 160/1077 - Train Accuracy: 0.9426, Validation Accuracy: 0.9411, Loss: 0.0462 Epoch 2 Batch 161/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9361, Loss: 0.0355 Epoch 2 Batch 162/1077 - Train Accuracy: 0.9480, Validation Accuracy: 0.9428, Loss: 0.0559 Epoch 2 Batch 163/1077 - Train Accuracy: 0.9490, Validation Accuracy: 0.9432, Loss: 0.0499 Epoch 2 Batch 164/1077 - Train Accuracy: 0.9488, Validation Accuracy: 0.9411, Loss: 0.0410 Epoch 2 Batch 165/1077 - Train Accuracy: 0.9652, Validation Accuracy: 0.9311, Loss: 0.0336 Epoch 2 Batch 166/1077 - Train Accuracy: 0.9406, Validation Accuracy: 0.9315, Loss: 0.0449 Epoch 2 Batch 167/1077 - Train Accuracy: 0.9594, Validation Accuracy: 0.9386, Loss: 0.0404 Epoch 2 Batch 168/1077 - Train Accuracy: 0.9515, Validation Accuracy: 0.9407, Loss: 0.0500 Epoch 2 Batch 169/1077 - Train Accuracy: 0.9412, Validation Accuracy: 0.9354, Loss: 0.0484 Epoch 2 Batch 170/1077 - Train Accuracy: 0.9336, Validation Accuracy: 0.9425, Loss: 0.0502 Epoch 2 Batch 171/1077 - Train Accuracy: 0.9450, Validation Accuracy: 0.9421, Loss: 0.0418 Epoch 2 Batch 172/1077 - Train Accuracy: 0.9568, Validation Accuracy: 0.9311, Loss: 0.0347 Epoch 2 Batch 173/1077 - Train Accuracy: 0.9560, Validation Accuracy: 0.9222, Loss: 0.0476 Epoch 2 Batch 174/1077 - Train Accuracy: 0.9383, Validation 
Accuracy: 0.9276, Loss: 0.0358 Epoch 2 Batch 175/1077 - Train Accuracy: 0.9199, Validation Accuracy: 0.9379, Loss: 0.0500 Epoch 2 Batch 176/1077 - Train Accuracy: 0.9484, Validation Accuracy: 0.9379, Loss: 0.0377 Epoch 2 Batch 177/1077 - Train Accuracy: 0.9778, Validation Accuracy: 0.9474, Loss: 0.0468 Epoch 2 Batch 178/1077 - Train Accuracy: 0.9445, Validation Accuracy: 0.9425, Loss: 0.0411 Epoch 2 Batch 179/1077 - Train Accuracy: 0.9535, Validation Accuracy: 0.9347, Loss: 0.0440 Epoch 2 Batch 180/1077 - Train Accuracy: 0.9473, Validation Accuracy: 0.9364, Loss: 0.0370 Epoch 2 Batch 181/1077 - Train Accuracy: 0.9523, Validation Accuracy: 0.9219, Loss: 0.0467 Epoch 2 Batch 182/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9347, Loss: 0.0463 Epoch 2 Batch 183/1077 - Train Accuracy: 0.9406, Validation Accuracy: 0.9343, Loss: 0.0407 Epoch 2 Batch 184/1077 - Train Accuracy: 0.9496, Validation Accuracy: 0.9446, Loss: 0.0428 Epoch 2 Batch 185/1077 - Train Accuracy: 0.9324, Validation Accuracy: 0.9421, Loss: 0.0454 Epoch 2 Batch 186/1077 - Train Accuracy: 0.9502, Validation Accuracy: 0.9521, Loss: 0.0406 Epoch 2 Batch 187/1077 - Train Accuracy: 0.9535, Validation Accuracy: 0.9450, Loss: 0.0321 Epoch 2 Batch 188/1077 - Train Accuracy: 0.9406, Validation Accuracy: 0.9329, Loss: 0.0442 Epoch 2 Batch 189/1077 - Train Accuracy: 0.9078, Validation Accuracy: 0.9268, Loss: 0.0412 Epoch 2 Batch 190/1077 - Train Accuracy: 0.9469, Validation Accuracy: 0.9418, Loss: 0.0388 Epoch 2 Batch 191/1077 - Train Accuracy: 0.9265, Validation Accuracy: 0.9396, Loss: 0.0367 Epoch 2 Batch 192/1077 - Train Accuracy: 0.9211, Validation Accuracy: 0.9400, Loss: 0.0483 Epoch 2 Batch 193/1077 - Train Accuracy: 0.9645, Validation Accuracy: 0.9283, Loss: 0.0388 Epoch 2 Batch 194/1077 - Train Accuracy: 0.9743, Validation Accuracy: 0.9379, Loss: 0.0320 Epoch 2 Batch 195/1077 - Train Accuracy: 0.9234, Validation Accuracy: 0.9386, Loss: 0.0328 Epoch 2 Batch 196/1077 - Train Accuracy: 0.9660, 
Validation Accuracy: 0.9545, Loss: 0.0344 Epoch 2 Batch 197/1077 - Train Accuracy: 0.9410, Validation Accuracy: 0.9560, Loss: 0.0432 Epoch 2 Batch 198/1077 - Train Accuracy: 0.9319, Validation Accuracy: 0.9453, Loss: 0.0490 Epoch 2 Batch 199/1077 - Train Accuracy: 0.9547, Validation Accuracy: 0.9165, Loss: 0.0374 Epoch 2 Batch 200/1077 - Train Accuracy: 0.9477, Validation Accuracy: 0.9052, Loss: 0.0418 Epoch 2 Batch 201/1077 - Train Accuracy: 0.9344, Validation Accuracy: 0.9300, Loss: 0.0339 Epoch 2 Batch 202/1077 - Train Accuracy: 0.9527, Validation Accuracy: 0.9386, Loss: 0.0398 Epoch 2 Batch 203/1077 - Train Accuracy: 0.9453, Validation Accuracy: 0.9304, Loss: 0.0372 Epoch 2 Batch 204/1077 - Train Accuracy: 0.9559, Validation Accuracy: 0.9165, Loss: 0.0546 Epoch 2 Batch 205/1077 - Train Accuracy: 0.8914, Validation Accuracy: 0.9265, Loss: 0.0594 Epoch 2 Batch 206/1077 - Train Accuracy: 0.9629, Validation Accuracy: 0.9226, Loss: 0.0393 Epoch 2 Batch 207/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9091, Loss: 0.0384 Epoch 2 Batch 208/1077 - Train Accuracy: 0.9423, Validation Accuracy: 0.9151, Loss: 0.0441 Epoch 2 Batch 209/1077 - Train Accuracy: 0.9665, Validation Accuracy: 0.9229, Loss: 0.0318 Epoch 2 Batch 210/1077 - Train Accuracy: 0.9528, Validation Accuracy: 0.9304, Loss: 0.0448 Epoch 2 Batch 211/1077 - Train Accuracy: 0.9359, Validation Accuracy: 0.9347, Loss: 0.0402 Epoch 2 Batch 212/1077 - Train Accuracy: 0.9371, Validation Accuracy: 0.9339, Loss: 0.0324 Epoch 2 Batch 213/1077 - Train Accuracy: 0.9527, Validation Accuracy: 0.9233, Loss: 0.0273 Epoch 2 Batch 214/1077 - Train Accuracy: 0.9293, Validation Accuracy: 0.9119, Loss: 0.0352 Epoch 2 Batch 215/1077 - Train Accuracy: 0.9117, Validation Accuracy: 0.9276, Loss: 0.0444 Epoch 2 Batch 216/1077 - Train Accuracy: 0.9527, Validation Accuracy: 0.9308, Loss: 0.0410 Epoch 2 Batch 217/1077 - Train Accuracy: 0.9406, Validation Accuracy: 0.9368, Loss: 0.0366 Epoch 2 Batch 218/1077 - Train Accuracy: 
0.9498, Validation Accuracy: 0.9418, Loss: 0.0498 Epoch 2 Batch 219/1077 - Train Accuracy: 0.9563, Validation Accuracy: 0.9361, Loss: 0.0341 Epoch 2 Batch 220/1077 - Train Accuracy: 0.9441, Validation Accuracy: 0.9407, Loss: 0.0429 Epoch 2 Batch 221/1077 - Train Accuracy: 0.9535, Validation Accuracy: 0.9371, Loss: 0.0418 Epoch 2 Batch 222/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9268, Loss: 0.0363 Epoch 2 Batch 223/1077 - Train Accuracy: 0.9408, Validation Accuracy: 0.9219, Loss: 0.0339 Epoch 2 Batch 224/1077 - Train Accuracy: 0.9652, Validation Accuracy: 0.9247, Loss: 0.0400 Epoch 2 Batch 225/1077 - Train Accuracy: 0.9469, Validation Accuracy: 0.9251, Loss: 0.0523 Epoch 2 Batch 226/1077 - Train Accuracy: 0.9418, Validation Accuracy: 0.9144, Loss: 0.0356 Epoch 2 Batch 227/1077 - Train Accuracy: 0.9207, Validation Accuracy: 0.9180, Loss: 0.0453 Epoch 2 Batch 228/1077 - Train Accuracy: 0.9555, Validation Accuracy: 0.9279, Loss: 0.0388 Epoch 2 Batch 229/1077 - Train Accuracy: 0.9434, Validation Accuracy: 0.9176, Loss: 0.0393 Epoch 2 Batch 230/1077 - Train Accuracy: 0.9568, Validation Accuracy: 0.9265, Loss: 0.0395 Epoch 2 Batch 231/1077 - Train Accuracy: 0.9551, Validation Accuracy: 0.9304, Loss: 0.0390 Epoch 2 Batch 232/1077 - Train Accuracy: 0.9712, Validation Accuracy: 0.9382, Loss: 0.0312 Epoch 2 Batch 233/1077 - Train Accuracy: 0.9480, Validation Accuracy: 0.9169, Loss: 0.0562 Epoch 2 Batch 234/1077 - Train Accuracy: 0.9561, Validation Accuracy: 0.9212, Loss: 0.0428 Epoch 2 Batch 235/1077 - Train Accuracy: 0.9338, Validation Accuracy: 0.9286, Loss: 0.0466 Epoch 2 Batch 236/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9286, Loss: 0.0502 Epoch 2 Batch 237/1077 - Train Accuracy: 0.9617, Validation Accuracy: 0.9290, Loss: 0.0319 Epoch 2 Batch 238/1077 - Train Accuracy: 0.9492, Validation Accuracy: 0.9265, Loss: 0.0400 Epoch 2 Batch 239/1077 - Train Accuracy: 0.9795, Validation Accuracy: 0.9315, Loss: 0.0302 Epoch 2 Batch 240/1077 - Train 
Accuracy: 0.9617, Validation Accuracy: 0.9460, Loss: 0.0396 Epoch 2 Batch 241/1077 - Train Accuracy: 0.9504, Validation Accuracy: 0.9442, Loss: 0.0305 Epoch 2 Batch 242/1077 - Train Accuracy: 0.9641, Validation Accuracy: 0.9489, Loss: 0.0296 Epoch 2 Batch 243/1077 - Train Accuracy: 0.9652, Validation Accuracy: 0.9400, Loss: 0.0443 Epoch 2 Batch 244/1077 - Train Accuracy: 0.9563, Validation Accuracy: 0.9471, Loss: 0.0321 Epoch 2 Batch 245/1077 - Train Accuracy: 0.9237, Validation Accuracy: 0.9457, Loss: 0.0404 Epoch 2 Batch 246/1077 - Train Accuracy: 0.9496, Validation Accuracy: 0.9343, Loss: 0.0411 Epoch 2 Batch 247/1077 - Train Accuracy: 0.9524, Validation Accuracy: 0.9393, Loss: 0.0381 Epoch 2 Batch 248/1077 - Train Accuracy: 0.9504, Validation Accuracy: 0.9375, Loss: 0.0396 Epoch 2 Batch 249/1077 - Train Accuracy: 0.9570, Validation Accuracy: 0.9329, Loss: 0.0337 Epoch 2 Batch 250/1077 - Train Accuracy: 0.9414, Validation Accuracy: 0.9379, Loss: 0.0353 Epoch 2 Batch 251/1077 - Train Accuracy: 0.9315, Validation Accuracy: 0.9361, Loss: 0.0501 Epoch 2 Batch 252/1077 - Train Accuracy: 0.9523, Validation Accuracy: 0.9418, Loss: 0.0413 Epoch 2 Batch 253/1077 - Train Accuracy: 0.9254, Validation Accuracy: 0.9336, Loss: 0.0405 Epoch 2 Batch 254/1077 - Train Accuracy: 0.9461, Validation Accuracy: 0.9332, Loss: 0.0480 Epoch 2 Batch 255/1077 - Train Accuracy: 0.9566, Validation Accuracy: 0.9403, Loss: 0.0337 Epoch 2 Batch 256/1077 - Train Accuracy: 0.9355, Validation Accuracy: 0.9425, Loss: 0.0571 Epoch 2 Batch 257/1077 - Train Accuracy: 0.9338, Validation Accuracy: 0.9442, Loss: 0.0443 Epoch 2 Batch 258/1077 - Train Accuracy: 0.9702, Validation Accuracy: 0.9425, Loss: 0.0376 Epoch 2 Batch 259/1077 - Train Accuracy: 0.9582, Validation Accuracy: 0.9425, Loss: 0.0352 Epoch 2 Batch 260/1077 - Train Accuracy: 0.9382, Validation Accuracy: 0.9474, Loss: 0.0353 Epoch 2 Batch 261/1077 - Train Accuracy: 0.9509, Validation Accuracy: 0.9457, Loss: 0.0422 Epoch 2 Batch 262/1077 - 
Train Accuracy: 0.9617, Validation Accuracy: 0.9503, Loss: 0.0295 Epoch 2 Batch 263/1077 - Train Accuracy: 0.9652, Validation Accuracy: 0.9506, Loss: 0.0276 Epoch 2 Batch 264/1077 - Train Accuracy: 0.9574, Validation Accuracy: 0.9503, Loss: 0.0399 Epoch 2 Batch 265/1077 - Train Accuracy: 0.9281, Validation Accuracy: 0.9446, Loss: 0.0405 Epoch 2 Batch 266/1077 - Train Accuracy: 0.9542, Validation Accuracy: 0.9393, Loss: 0.0401 Epoch 2 Batch 267/1077 - Train Accuracy: 0.9609, Validation Accuracy: 0.9446, Loss: 0.0356 Epoch 2 Batch 268/1077 - Train Accuracy: 0.9141, Validation Accuracy: 0.9357, Loss: 0.0469 Epoch 2 Batch 269/1077 - Train Accuracy: 0.9433, Validation Accuracy: 0.9354, Loss: 0.0478 Epoch 2 Batch 270/1077 - Train Accuracy: 0.9313, Validation Accuracy: 0.9371, Loss: 0.0496 Epoch 2 Batch 271/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9393, Loss: 0.0385 Epoch 2 Batch 272/1077 - Train Accuracy: 0.9516, Validation Accuracy: 0.9336, Loss: 0.0584 Epoch 2 Batch 273/1077 - Train Accuracy: 0.9632, Validation Accuracy: 0.9364, Loss: 0.0312 Epoch 2 Batch 274/1077 - Train Accuracy: 0.9613, Validation Accuracy: 0.9311, Loss: 0.0429 Epoch 2 Batch 275/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9261, Loss: 0.0457 Epoch 2 Batch 276/1077 - Train Accuracy: 0.9109, Validation Accuracy: 0.9265, Loss: 0.0563 Epoch 2 Batch 277/1077 - Train Accuracy: 0.9661, Validation Accuracy: 0.9258, Loss: 0.0359 Epoch 2 Batch 278/1077 - Train Accuracy: 0.9434, Validation Accuracy: 0.9379, Loss: 0.0513 Epoch 2 Batch 279/1077 - Train Accuracy: 0.9508, Validation Accuracy: 0.9446, Loss: 0.0418 Epoch 2 Batch 280/1077 - Train Accuracy: 0.9414, Validation Accuracy: 0.9396, Loss: 0.0444 Epoch 2 Batch 281/1077 - Train Accuracy: 0.9324, Validation Accuracy: 0.9396, Loss: 0.0507 Epoch 2 Batch 282/1077 - Train Accuracy: 0.9160, Validation Accuracy: 0.9329, Loss: 0.0569 Epoch 2 Batch 283/1077 - Train Accuracy: 0.9555, Validation Accuracy: 0.9357, Loss: 0.0467 Epoch 2 Batch 284/1077 
- Train Accuracy: 0.9332, Validation Accuracy: 0.9304, Loss: 0.0493
Epoch 2 Batch 285/1077 - Train Accuracy: 0.9535, Validation Accuracy: 0.9158, Loss: 0.0424
[... output truncated: Epoch 2 Batches 286-742 omitted; over this span Train Accuracy ranged 0.894-0.988, Validation Accuracy 0.908-0.976, Loss 0.021-0.061 ...]
Epoch 2 Batch 743/1077 - Train Accuracy: 0.9543, Validation Accuracy: 0.9492, Loss: 0.0399
Accuracy: 0.9439, Loss: 0.0295 Epoch 2 Batch 745/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9442, Loss: 0.0300 Epoch 2 Batch 746/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9450, Loss: 0.0272 Epoch 2 Batch 747/1077 - Train Accuracy: 0.9578, Validation Accuracy: 0.9361, Loss: 0.0258 Epoch 2 Batch 748/1077 - Train Accuracy: 0.9488, Validation Accuracy: 0.9453, Loss: 0.0262 Epoch 2 Batch 749/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9428, Loss: 0.0257 Epoch 2 Batch 750/1077 - Train Accuracy: 0.9531, Validation Accuracy: 0.9432, Loss: 0.0276 Epoch 2 Batch 751/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9524, Loss: 0.0275 Epoch 2 Batch 752/1077 - Train Accuracy: 0.9498, Validation Accuracy: 0.9620, Loss: 0.0323 Epoch 2 Batch 753/1077 - Train Accuracy: 0.9695, Validation Accuracy: 0.9670, Loss: 0.0267 Epoch 2 Batch 754/1077 - Train Accuracy: 0.9445, Validation Accuracy: 0.9545, Loss: 0.0368 Epoch 2 Batch 755/1077 - Train Accuracy: 0.9688, Validation Accuracy: 0.9450, Loss: 0.0348 Epoch 2 Batch 756/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9478, Loss: 0.0269 Epoch 2 Batch 757/1077 - Train Accuracy: 0.9634, Validation Accuracy: 0.9581, Loss: 0.0238 Epoch 2 Batch 758/1077 - Train Accuracy: 0.9643, Validation Accuracy: 0.9585, Loss: 0.0275 Epoch 2 Batch 759/1077 - Train Accuracy: 0.9542, Validation Accuracy: 0.9535, Loss: 0.0318 Epoch 2 Batch 760/1077 - Train Accuracy: 0.9645, Validation Accuracy: 0.9599, Loss: 0.0355 Epoch 2 Batch 761/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9599, Loss: 0.0295 Epoch 2 Batch 762/1077 - Train Accuracy: 0.9590, Validation Accuracy: 0.9645, Loss: 0.0255 Epoch 2 Batch 763/1077 - Train Accuracy: 0.9795, Validation Accuracy: 0.9673, Loss: 0.0280 Epoch 2 Batch 764/1077 - Train Accuracy: 0.9753, Validation Accuracy: 0.9609, Loss: 0.0293 Epoch 2 Batch 765/1077 - Train Accuracy: 0.9340, Validation Accuracy: 0.9624, Loss: 0.0341 Epoch 2 Batch 766/1077 - Train Accuracy: 0.9633, 
Validation Accuracy: 0.9531, Loss: 0.0310 Epoch 2 Batch 767/1077 - Train Accuracy: 0.9770, Validation Accuracy: 0.9638, Loss: 0.0271 Epoch 2 Batch 768/1077 - Train Accuracy: 0.9602, Validation Accuracy: 0.9506, Loss: 0.0271 Epoch 2 Batch 769/1077 - Train Accuracy: 0.9422, Validation Accuracy: 0.9506, Loss: 0.0341 Epoch 2 Batch 770/1077 - Train Accuracy: 0.9513, Validation Accuracy: 0.9560, Loss: 0.0326 Epoch 2 Batch 771/1077 - Train Accuracy: 0.9637, Validation Accuracy: 0.9570, Loss: 0.0352 Epoch 2 Batch 772/1077 - Train Accuracy: 0.9613, Validation Accuracy: 0.9542, Loss: 0.0238 Epoch 2 Batch 773/1077 - Train Accuracy: 0.9551, Validation Accuracy: 0.9592, Loss: 0.0311 Epoch 2 Batch 774/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9545, Loss: 0.0330 Epoch 2 Batch 775/1077 - Train Accuracy: 0.9410, Validation Accuracy: 0.9549, Loss: 0.0320 Epoch 2 Batch 776/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9567, Loss: 0.0235 Epoch 2 Batch 777/1077 - Train Accuracy: 0.9672, Validation Accuracy: 0.9485, Loss: 0.0309 Epoch 2 Batch 778/1077 - Train Accuracy: 0.9784, Validation Accuracy: 0.9403, Loss: 0.0266 Epoch 2 Batch 779/1077 - Train Accuracy: 0.9340, Validation Accuracy: 0.9364, Loss: 0.0337 Epoch 2 Batch 780/1077 - Train Accuracy: 0.9441, Validation Accuracy: 0.9467, Loss: 0.0417 Epoch 2 Batch 781/1077 - Train Accuracy: 0.9736, Validation Accuracy: 0.9513, Loss: 0.0231 Epoch 2 Batch 782/1077 - Train Accuracy: 0.9546, Validation Accuracy: 0.9556, Loss: 0.0312 Epoch 2 Batch 783/1077 - Train Accuracy: 0.9360, Validation Accuracy: 0.9542, Loss: 0.0348 Epoch 2 Batch 784/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9670, Loss: 0.0199 Epoch 2 Batch 785/1077 - Train Accuracy: 0.9732, Validation Accuracy: 0.9535, Loss: 0.0302 Epoch 2 Batch 786/1077 - Train Accuracy: 0.9676, Validation Accuracy: 0.9531, Loss: 0.0237 Epoch 2 Batch 787/1077 - Train Accuracy: 0.9784, Validation Accuracy: 0.9428, Loss: 0.0280 Epoch 2 Batch 788/1077 - Train Accuracy: 
0.9602, Validation Accuracy: 0.9485, Loss: 0.0281 Epoch 2 Batch 789/1077 - Train Accuracy: 0.9659, Validation Accuracy: 0.9574, Loss: 0.0294 Epoch 2 Batch 790/1077 - Train Accuracy: 0.9488, Validation Accuracy: 0.9613, Loss: 0.0356 Epoch 2 Batch 791/1077 - Train Accuracy: 0.9613, Validation Accuracy: 0.9588, Loss: 0.0315 Epoch 2 Batch 792/1077 - Train Accuracy: 0.9512, Validation Accuracy: 0.9624, Loss: 0.0351 Epoch 2 Batch 793/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9656, Loss: 0.0250 Epoch 2 Batch 794/1077 - Train Accuracy: 0.9367, Validation Accuracy: 0.9595, Loss: 0.0257 Epoch 2 Batch 795/1077 - Train Accuracy: 0.9473, Validation Accuracy: 0.9560, Loss: 0.0354 Epoch 2 Batch 796/1077 - Train Accuracy: 0.9746, Validation Accuracy: 0.9638, Loss: 0.0325 Epoch 2 Batch 797/1077 - Train Accuracy: 0.9746, Validation Accuracy: 0.9705, Loss: 0.0246 Epoch 2 Batch 798/1077 - Train Accuracy: 0.9551, Validation Accuracy: 0.9542, Loss: 0.0311 Epoch 2 Batch 799/1077 - Train Accuracy: 0.9488, Validation Accuracy: 0.9542, Loss: 0.0434 Epoch 2 Batch 800/1077 - Train Accuracy: 0.9602, Validation Accuracy: 0.9435, Loss: 0.0288 Epoch 2 Batch 801/1077 - Train Accuracy: 0.9695, Validation Accuracy: 0.9421, Loss: 0.0337 Epoch 2 Batch 802/1077 - Train Accuracy: 0.9628, Validation Accuracy: 0.9418, Loss: 0.0320 Epoch 2 Batch 803/1077 - Train Accuracy: 0.9582, Validation Accuracy: 0.9418, Loss: 0.0309 Epoch 2 Batch 804/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9450, Loss: 0.0242 Epoch 2 Batch 805/1077 - Train Accuracy: 0.9566, Validation Accuracy: 0.9542, Loss: 0.0287 Epoch 2 Batch 806/1077 - Train Accuracy: 0.9753, Validation Accuracy: 0.9549, Loss: 0.0261 Epoch 2 Batch 807/1077 - Train Accuracy: 0.9672, Validation Accuracy: 0.9602, Loss: 0.0242 Epoch 2 Batch 808/1077 - Train Accuracy: 0.9441, Validation Accuracy: 0.9574, Loss: 0.0489 Epoch 2 Batch 809/1077 - Train Accuracy: 0.9585, Validation Accuracy: 0.9521, Loss: 0.0490 Epoch 2 Batch 810/1077 - Train 
Accuracy: 0.9795, Validation Accuracy: 0.9524, Loss: 0.0241 Epoch 2 Batch 811/1077 - Train Accuracy: 0.9635, Validation Accuracy: 0.9641, Loss: 0.0307 Epoch 2 Batch 812/1077 - Train Accuracy: 0.9566, Validation Accuracy: 0.9627, Loss: 0.0289 Epoch 2 Batch 813/1077 - Train Accuracy: 0.9513, Validation Accuracy: 0.9616, Loss: 0.0318 Epoch 2 Batch 814/1077 - Train Accuracy: 0.9617, Validation Accuracy: 0.9616, Loss: 0.0289 Epoch 2 Batch 815/1077 - Train Accuracy: 0.9605, Validation Accuracy: 0.9705, Loss: 0.0321 Epoch 2 Batch 816/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9652, Loss: 0.0314 Epoch 2 Batch 817/1077 - Train Accuracy: 0.9578, Validation Accuracy: 0.9705, Loss: 0.0338 Epoch 2 Batch 818/1077 - Train Accuracy: 0.9574, Validation Accuracy: 0.9680, Loss: 0.0323 Epoch 2 Batch 819/1077 - Train Accuracy: 0.9359, Validation Accuracy: 0.9677, Loss: 0.0365 Epoch 2 Batch 820/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9673, Loss: 0.0258 Epoch 2 Batch 821/1077 - Train Accuracy: 0.9605, Validation Accuracy: 0.9556, Loss: 0.0371 Epoch 2 Batch 822/1077 - Train Accuracy: 0.9602, Validation Accuracy: 0.9595, Loss: 0.0294 Epoch 2 Batch 823/1077 - Train Accuracy: 0.9570, Validation Accuracy: 0.9567, Loss: 0.0288 Epoch 2 Batch 824/1077 - Train Accuracy: 0.9661, Validation Accuracy: 0.9428, Loss: 0.0350 Epoch 2 Batch 825/1077 - Train Accuracy: 0.9949, Validation Accuracy: 0.9428, Loss: 0.0196 Epoch 2 Batch 826/1077 - Train Accuracy: 0.9516, Validation Accuracy: 0.9478, Loss: 0.0303 Epoch 2 Batch 827/1077 - Train Accuracy: 0.9498, Validation Accuracy: 0.9528, Loss: 0.0325 Epoch 2 Batch 828/1077 - Train Accuracy: 0.9582, Validation Accuracy: 0.9560, Loss: 0.0257 Epoch 2 Batch 829/1077 - Train Accuracy: 0.9340, Validation Accuracy: 0.9585, Loss: 0.0451 Epoch 2 Batch 830/1077 - Train Accuracy: 0.9352, Validation Accuracy: 0.9563, Loss: 0.0340 Epoch 2 Batch 831/1077 - Train Accuracy: 0.9410, Validation Accuracy: 0.9602, Loss: 0.0321 Epoch 2 Batch 832/1077 - 
Train Accuracy: 0.9594, Validation Accuracy: 0.9631, Loss: 0.0282 Epoch 2 Batch 833/1077 - Train Accuracy: 0.9508, Validation Accuracy: 0.9606, Loss: 0.0312 Epoch 2 Batch 834/1077 - Train Accuracy: 0.9822, Validation Accuracy: 0.9627, Loss: 0.0319 Epoch 2 Batch 835/1077 - Train Accuracy: 0.9707, Validation Accuracy: 0.9599, Loss: 0.0315 Epoch 2 Batch 836/1077 - Train Accuracy: 0.9712, Validation Accuracy: 0.9570, Loss: 0.0273 Epoch 2 Batch 837/1077 - Train Accuracy: 0.9418, Validation Accuracy: 0.9652, Loss: 0.0364 Epoch 2 Batch 838/1077 - Train Accuracy: 0.9664, Validation Accuracy: 0.9631, Loss: 0.0308 Epoch 2 Batch 839/1077 - Train Accuracy: 0.9812, Validation Accuracy: 0.9680, Loss: 0.0254 Epoch 2 Batch 840/1077 - Train Accuracy: 0.9645, Validation Accuracy: 0.9702, Loss: 0.0275 Epoch 2 Batch 841/1077 - Train Accuracy: 0.9672, Validation Accuracy: 0.9648, Loss: 0.0350 Epoch 2 Batch 842/1077 - Train Accuracy: 0.9551, Validation Accuracy: 0.9652, Loss: 0.0231 Epoch 2 Batch 843/1077 - Train Accuracy: 0.9602, Validation Accuracy: 0.9606, Loss: 0.0287 Epoch 2 Batch 844/1077 - Train Accuracy: 0.9706, Validation Accuracy: 0.9599, Loss: 0.0242 Epoch 2 Batch 845/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9602, Loss: 0.0235 Epoch 2 Batch 846/1077 - Train Accuracy: 0.9348, Validation Accuracy: 0.9560, Loss: 0.0391 Epoch 2 Batch 847/1077 - Train Accuracy: 0.9734, Validation Accuracy: 0.9471, Loss: 0.0339 Epoch 2 Batch 848/1077 - Train Accuracy: 0.9680, Validation Accuracy: 0.9542, Loss: 0.0292 Epoch 2 Batch 849/1077 - Train Accuracy: 0.9594, Validation Accuracy: 0.9535, Loss: 0.0240 Epoch 2 Batch 850/1077 - Train Accuracy: 0.9408, Validation Accuracy: 0.9560, Loss: 0.0505 Epoch 2 Batch 851/1077 - Train Accuracy: 0.9669, Validation Accuracy: 0.9645, Loss: 0.0380 Epoch 2 Batch 852/1077 - Train Accuracy: 0.9457, Validation Accuracy: 0.9691, Loss: 0.0455 Epoch 2 Batch 853/1077 - Train Accuracy: 0.9590, Validation Accuracy: 0.9659, Loss: 0.0276 Epoch 2 Batch 854/1077 
- Train Accuracy: 0.9469, Validation Accuracy: 0.9712, Loss: 0.0331 Epoch 2 Batch 855/1077 - Train Accuracy: 0.9414, Validation Accuracy: 0.9755, Loss: 0.0309 Epoch 2 Batch 856/1077 - Train Accuracy: 0.9668, Validation Accuracy: 0.9592, Loss: 0.0299 Epoch 2 Batch 857/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9599, Loss: 0.0289 Epoch 2 Batch 858/1077 - Train Accuracy: 0.9728, Validation Accuracy: 0.9549, Loss: 0.0253 Epoch 2 Batch 859/1077 - Train Accuracy: 0.9816, Validation Accuracy: 0.9549, Loss: 0.0302 Epoch 2 Batch 860/1077 - Train Accuracy: 0.9695, Validation Accuracy: 0.9606, Loss: 0.0275 Epoch 2 Batch 861/1077 - Train Accuracy: 0.9598, Validation Accuracy: 0.9567, Loss: 0.0240 Epoch 2 Batch 862/1077 - Train Accuracy: 0.9523, Validation Accuracy: 0.9517, Loss: 0.0310 Epoch 2 Batch 863/1077 - Train Accuracy: 0.9613, Validation Accuracy: 0.9606, Loss: 0.0280 Epoch 2 Batch 864/1077 - Train Accuracy: 0.9270, Validation Accuracy: 0.9606, Loss: 0.0309 Epoch 2 Batch 865/1077 - Train Accuracy: 0.9489, Validation Accuracy: 0.9513, Loss: 0.0339 Epoch 2 Batch 866/1077 - Train Accuracy: 0.9464, Validation Accuracy: 0.9638, Loss: 0.0335 Epoch 2 Batch 867/1077 - Train Accuracy: 0.9441, Validation Accuracy: 0.9545, Loss: 0.0631 Epoch 2 Batch 868/1077 - Train Accuracy: 0.9566, Validation Accuracy: 0.9513, Loss: 0.0327 Epoch 2 Batch 869/1077 - Train Accuracy: 0.9301, Validation Accuracy: 0.9521, Loss: 0.0308 Epoch 2 Batch 870/1077 - Train Accuracy: 0.9572, Validation Accuracy: 0.9517, Loss: 0.0288 Epoch 2 Batch 871/1077 - Train Accuracy: 0.9570, Validation Accuracy: 0.9560, Loss: 0.0278 Epoch 2 Batch 872/1077 - Train Accuracy: 0.9535, Validation Accuracy: 0.9606, Loss: 0.0302 Epoch 2 Batch 873/1077 - Train Accuracy: 0.9539, Validation Accuracy: 0.9585, Loss: 0.0330 Epoch 2 Batch 874/1077 - Train Accuracy: 0.9363, Validation Accuracy: 0.9517, Loss: 0.0383 Epoch 2 Batch 875/1077 - Train Accuracy: 0.9660, Validation Accuracy: 0.9517, Loss: 0.0330 Epoch 2 Batch 
876/1077 - Train Accuracy: 0.9629, Validation Accuracy: 0.9577, Loss: 0.0301 Epoch 2 Batch 877/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9567, Loss: 0.0243 Epoch 2 Batch 878/1077 - Train Accuracy: 0.9445, Validation Accuracy: 0.9567, Loss: 0.0282 Epoch 2 Batch 879/1077 - Train Accuracy: 0.9660, Validation Accuracy: 0.9563, Loss: 0.0230 Epoch 2 Batch 880/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9648, Loss: 0.0375 Epoch 2 Batch 881/1077 - Train Accuracy: 0.9336, Validation Accuracy: 0.9556, Loss: 0.0322 Epoch 2 Batch 882/1077 - Train Accuracy: 0.9703, Validation Accuracy: 0.9503, Loss: 0.0283 Epoch 2 Batch 883/1077 - Train Accuracy: 0.9511, Validation Accuracy: 0.9542, Loss: 0.0419 Epoch 2 Batch 884/1077 - Train Accuracy: 0.9441, Validation Accuracy: 0.9535, Loss: 0.0298 Epoch 2 Batch 885/1077 - Train Accuracy: 0.9695, Validation Accuracy: 0.9595, Loss: 0.0204 Epoch 2 Batch 886/1077 - Train Accuracy: 0.9605, Validation Accuracy: 0.9595, Loss: 0.0297 Epoch 2 Batch 887/1077 - Train Accuracy: 0.9609, Validation Accuracy: 0.9535, Loss: 0.0319 Epoch 2 Batch 888/1077 - Train Accuracy: 0.9654, Validation Accuracy: 0.9489, Loss: 0.0247 Epoch 2 Batch 889/1077 - Train Accuracy: 0.9547, Validation Accuracy: 0.9538, Loss: 0.0279 Epoch 2 Batch 890/1077 - Train Accuracy: 0.9650, Validation Accuracy: 0.9560, Loss: 0.0287 Epoch 2 Batch 891/1077 - Train Accuracy: 0.9889, Validation Accuracy: 0.9574, Loss: 0.0201 Epoch 2 Batch 892/1077 - Train Accuracy: 0.9645, Validation Accuracy: 0.9574, Loss: 0.0223 Epoch 2 Batch 893/1077 - Train Accuracy: 0.9664, Validation Accuracy: 0.9492, Loss: 0.0246 Epoch 2 Batch 894/1077 - Train Accuracy: 0.9606, Validation Accuracy: 0.9492, Loss: 0.0268 Epoch 2 Batch 895/1077 - Train Accuracy: 0.9680, Validation Accuracy: 0.9538, Loss: 0.0230 Epoch 2 Batch 896/1077 - Train Accuracy: 0.9572, Validation Accuracy: 0.9510, Loss: 0.0294 Epoch 2 Batch 897/1077 - Train Accuracy: 0.9639, Validation Accuracy: 0.9510, Loss: 0.0238 Epoch 2 
Batch 898/1077 - Train Accuracy: 0.9643, Validation Accuracy: 0.9567, Loss: 0.0243 Epoch 2 Batch 899/1077 - Train Accuracy: 0.9816, Validation Accuracy: 0.9595, Loss: 0.0307 Epoch 2 Batch 900/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9606, Loss: 0.0332 Epoch 2 Batch 901/1077 - Train Accuracy: 0.9401, Validation Accuracy: 0.9581, Loss: 0.0463 Epoch 2 Batch 902/1077 - Train Accuracy: 0.9420, Validation Accuracy: 0.9613, Loss: 0.0344 Epoch 2 Batch 903/1077 - Train Accuracy: 0.9531, Validation Accuracy: 0.9524, Loss: 0.0294 Epoch 2 Batch 904/1077 - Train Accuracy: 0.9457, Validation Accuracy: 0.9524, Loss: 0.0258 Epoch 2 Batch 905/1077 - Train Accuracy: 0.9512, Validation Accuracy: 0.9563, Loss: 0.0282 Epoch 2 Batch 906/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9567, Loss: 0.0254 Epoch 2 Batch 907/1077 - Train Accuracy: 0.9629, Validation Accuracy: 0.9506, Loss: 0.0280 Epoch 2 Batch 908/1077 - Train Accuracy: 0.9570, Validation Accuracy: 0.9510, Loss: 0.0310 Epoch 2 Batch 909/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9446, Loss: 0.0380 Epoch 2 Batch 910/1077 - Train Accuracy: 0.9654, Validation Accuracy: 0.9428, Loss: 0.0326 Epoch 2 Batch 911/1077 - Train Accuracy: 0.9474, Validation Accuracy: 0.9460, Loss: 0.0364 Epoch 2 Batch 912/1077 - Train Accuracy: 0.9695, Validation Accuracy: 0.9499, Loss: 0.0253 Epoch 2 Batch 913/1077 - Train Accuracy: 0.9465, Validation Accuracy: 0.9513, Loss: 0.0369 Epoch 2 Batch 914/1077 - Train Accuracy: 0.9554, Validation Accuracy: 0.9510, Loss: 0.0554 Epoch 2 Batch 915/1077 - Train Accuracy: 0.9618, Validation Accuracy: 0.9513, Loss: 0.0268 Epoch 2 Batch 916/1077 - Train Accuracy: 0.9652, Validation Accuracy: 0.9567, Loss: 0.0312 Epoch 2 Batch 917/1077 - Train Accuracy: 0.9480, Validation Accuracy: 0.9574, Loss: 0.0302 Epoch 2 Batch 918/1077 - Train Accuracy: 0.9710, Validation Accuracy: 0.9510, Loss: 0.0222 Epoch 2 Batch 919/1077 - Train Accuracy: 0.9815, Validation Accuracy: 0.9513, Loss: 0.0214 Epoch 
2 Batch 920/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9460, Loss: 0.0249 Epoch 2 Batch 921/1077 - Train Accuracy: 0.9504, Validation Accuracy: 0.9467, Loss: 0.0322 Epoch 2 Batch 922/1077 - Train Accuracy: 0.9598, Validation Accuracy: 0.9535, Loss: 0.0319 Epoch 2 Batch 923/1077 - Train Accuracy: 0.9778, Validation Accuracy: 0.9489, Loss: 0.0202 Epoch 2 Batch 924/1077 - Train Accuracy: 0.9650, Validation Accuracy: 0.9506, Loss: 0.0456 Epoch 2 Batch 925/1077 - Train Accuracy: 0.9661, Validation Accuracy: 0.9609, Loss: 0.0258 Epoch 2 Batch 926/1077 - Train Accuracy: 0.9387, Validation Accuracy: 0.9613, Loss: 0.0329 Epoch 2 Batch 927/1077 - Train Accuracy: 0.9340, Validation Accuracy: 0.9648, Loss: 0.0373 Epoch 2 Batch 928/1077 - Train Accuracy: 0.9492, Validation Accuracy: 0.9691, Loss: 0.0265 Epoch 2 Batch 929/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9691, Loss: 0.0251 Epoch 2 Batch 930/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9698, Loss: 0.0234 Epoch 2 Batch 931/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9634, Loss: 0.0229 Epoch 2 Batch 932/1077 - Train Accuracy: 0.9551, Validation Accuracy: 0.9581, Loss: 0.0277 Epoch 2 Batch 933/1077 - Train Accuracy: 0.9664, Validation Accuracy: 0.9602, Loss: 0.0249 Epoch 2 Batch 934/1077 - Train Accuracy: 0.9625, Validation Accuracy: 0.9545, Loss: 0.0225 Epoch 2 Batch 935/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9489, Loss: 0.0274 Epoch 2 Batch 936/1077 - Train Accuracy: 0.9550, Validation Accuracy: 0.9560, Loss: 0.0380 Epoch 2 Batch 937/1077 - Train Accuracy: 0.9737, Validation Accuracy: 0.9549, Loss: 0.0327 Epoch 2 Batch 938/1077 - Train Accuracy: 0.9695, Validation Accuracy: 0.9567, Loss: 0.0329 Epoch 2 Batch 939/1077 - Train Accuracy: 0.9484, Validation Accuracy: 0.9549, Loss: 0.0345 Epoch 2 Batch 940/1077 - Train Accuracy: 0.9707, Validation Accuracy: 0.9499, Loss: 0.0258 Epoch 2 Batch 941/1077 - Train Accuracy: 0.9628, Validation Accuracy: 0.9496, Loss: 0.0242 
Epoch 2 Batch 942/1077 - Train Accuracy: 0.9461, Validation Accuracy: 0.9602, Loss: 0.0306 Epoch 2 Batch 943/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9563, Loss: 0.0284 Epoch 2 Batch 944/1077 - Train Accuracy: 0.9606, Validation Accuracy: 0.9567, Loss: 0.0238 Epoch 2 Batch 945/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9510, Loss: 0.0219 Epoch 2 Batch 946/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9467, Loss: 0.0242 Epoch 2 Batch 947/1077 - Train Accuracy: 0.9535, Validation Accuracy: 0.9496, Loss: 0.0301 Epoch 2 Batch 948/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9496, Loss: 0.0251 Epoch 2 Batch 949/1077 - Train Accuracy: 0.9911, Validation Accuracy: 0.9506, Loss: 0.0221 Epoch 2 Batch 950/1077 - Train Accuracy: 0.9714, Validation Accuracy: 0.9613, Loss: 0.0223 Epoch 2 Batch 951/1077 - Train Accuracy: 0.9643, Validation Accuracy: 0.9620, Loss: 0.0373 Epoch 2 Batch 952/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9570, Loss: 0.0179 Epoch 2 Batch 953/1077 - Train Accuracy: 0.9819, Validation Accuracy: 0.9460, Loss: 0.0213 Epoch 2 Batch 954/1077 - Train Accuracy: 0.9746, Validation Accuracy: 0.9460, Loss: 0.0310 Epoch 2 Batch 955/1077 - Train Accuracy: 0.9457, Validation Accuracy: 0.9460, Loss: 0.0373 Epoch 2 Batch 956/1077 - Train Accuracy: 0.9668, Validation Accuracy: 0.9457, Loss: 0.0329 Epoch 2 Batch 957/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9432, Loss: 0.0190 Epoch 2 Batch 958/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9453, Loss: 0.0257 Epoch 2 Batch 959/1077 - Train Accuracy: 0.9613, Validation Accuracy: 0.9496, Loss: 0.0291 Epoch 2 Batch 960/1077 - Train Accuracy: 0.9654, Validation Accuracy: 0.9428, Loss: 0.0254 Epoch 2 Batch 961/1077 - Train Accuracy: 0.9594, Validation Accuracy: 0.9489, Loss: 0.0223 Epoch 2 Batch 962/1077 - Train Accuracy: 0.9598, Validation Accuracy: 0.9489, Loss: 0.0263 Epoch 2 Batch 963/1077 - Train Accuracy: 0.9757, Validation Accuracy: 0.9489, Loss: 
0.0356 Epoch 2 Batch 964/1077 - Train Accuracy: 0.9509, Validation Accuracy: 0.9588, Loss: 0.0264 Epoch 2 Batch 965/1077 - Train Accuracy: 0.9457, Validation Accuracy: 0.9585, Loss: 0.0317 Epoch 2 Batch 966/1077 - Train Accuracy: 0.9783, Validation Accuracy: 0.9641, Loss: 0.0228 Epoch 2 Batch 967/1077 - Train Accuracy: 0.9332, Validation Accuracy: 0.9648, Loss: 0.0255 Epoch 2 Batch 968/1077 - Train Accuracy: 0.9426, Validation Accuracy: 0.9659, Loss: 0.0348 Epoch 2 Batch 969/1077 - Train Accuracy: 0.9516, Validation Accuracy: 0.9613, Loss: 0.0413 Epoch 2 Batch 970/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9563, Loss: 0.0282 Epoch 2 Batch 971/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9446, Loss: 0.0326 Epoch 2 Batch 972/1077 - Train Accuracy: 0.9648, Validation Accuracy: 0.9521, Loss: 0.0276 Epoch 2 Batch 973/1077 - Train Accuracy: 0.9847, Validation Accuracy: 0.9457, Loss: 0.0216 Epoch 2 Batch 974/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9457, Loss: 0.0219 Epoch 2 Batch 975/1077 - Train Accuracy: 0.9710, Validation Accuracy: 0.9560, Loss: 0.0242 Epoch 2 Batch 976/1077 - Train Accuracy: 0.9648, Validation Accuracy: 0.9556, Loss: 0.0222 Epoch 2 Batch 977/1077 - Train Accuracy: 0.9613, Validation Accuracy: 0.9453, Loss: 0.0178 Epoch 2 Batch 978/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9553, Loss: 0.0272 Epoch 2 Batch 979/1077 - Train Accuracy: 0.9683, Validation Accuracy: 0.9403, Loss: 0.0285 Epoch 2 Batch 980/1077 - Train Accuracy: 0.9414, Validation Accuracy: 0.9407, Loss: 0.0355 Epoch 2 Batch 981/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9474, Loss: 0.0276 Epoch 2 Batch 982/1077 - Train Accuracy: 0.9602, Validation Accuracy: 0.9521, Loss: 0.0263 Epoch 2 Batch 983/1077 - Train Accuracy: 0.9449, Validation Accuracy: 0.9574, Loss: 0.0282 Epoch 2 Batch 984/1077 - Train Accuracy: 0.9375, Validation Accuracy: 0.9545, Loss: 0.0350 Epoch 2 Batch 985/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9599, 
Loss: 0.0222 Epoch 2 Batch 986/1077 - Train Accuracy: 0.9406, Validation Accuracy: 0.9553, Loss: 0.0284 Epoch 2 Batch 987/1077 - Train Accuracy: 0.9583, Validation Accuracy: 0.9556, Loss: 0.0208 Epoch 2 Batch 988/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9553, Loss: 0.0313 Epoch 2 Batch 989/1077 - Train Accuracy: 0.9504, Validation Accuracy: 0.9336, Loss: 0.0296 Epoch 2 Batch 990/1077 - Train Accuracy: 0.9642, Validation Accuracy: 0.9386, Loss: 0.0300 Epoch 2 Batch 991/1077 - Train Accuracy: 0.9703, Validation Accuracy: 0.9393, Loss: 0.0216 Epoch 2 Batch 992/1077 - Train Accuracy: 0.9484, Validation Accuracy: 0.9311, Loss: 0.0344 Epoch 2 Batch 993/1077 - Train Accuracy: 0.9723, Validation Accuracy: 0.9283, Loss: 0.0221 Epoch 2 Batch 994/1077 - Train Accuracy: 0.9672, Validation Accuracy: 0.9293, Loss: 0.0220 Epoch 2 Batch 995/1077 - Train Accuracy: 0.9557, Validation Accuracy: 0.9297, Loss: 0.0322 Epoch 2 Batch 996/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9350, Loss: 0.0233 Epoch 2 Batch 997/1077 - Train Accuracy: 0.9807, Validation Accuracy: 0.9293, Loss: 0.0275 Epoch 2 Batch 998/1077 - Train Accuracy: 0.9688, Validation Accuracy: 0.9386, Loss: 0.0238 Epoch 2 Batch 999/1077 - Train Accuracy: 0.9566, Validation Accuracy: 0.9400, Loss: 0.0317 Epoch 2 Batch 1000/1077 - Train Accuracy: 0.9688, Validation Accuracy: 0.9396, Loss: 0.0280 Epoch 2 Batch 1001/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9460, Loss: 0.0218 Epoch 2 Batch 1002/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9411, Loss: 0.0199 Epoch 2 Batch 1003/1077 - Train Accuracy: 0.9679, Validation Accuracy: 0.9421, Loss: 0.0259 Epoch 2 Batch 1004/1077 - Train Accuracy: 0.9754, Validation Accuracy: 0.9421, Loss: 0.0296 Epoch 2 Batch 1005/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9382, Loss: 0.0241 Epoch 2 Batch 1006/1077 - Train Accuracy: 0.9383, Validation Accuracy: 0.9382, Loss: 0.0231 Epoch 2 Batch 1007/1077 - Train Accuracy: 0.9829, Validation 
Accuracy: 0.9499, Loss: 0.0260 Epoch 2 Batch 1008/1077 - Train Accuracy: 0.9711, Validation Accuracy: 0.9478, Loss: 0.0369 Epoch 2 Batch 1009/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9474, Loss: 0.0180 Epoch 2 Batch 1010/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9460, Loss: 0.0227 Epoch 2 Batch 1011/1077 - Train Accuracy: 0.9645, Validation Accuracy: 0.9457, Loss: 0.0237 Epoch 2 Batch 1012/1077 - Train Accuracy: 0.9654, Validation Accuracy: 0.9450, Loss: 0.0225 Epoch 2 Batch 1013/1077 - Train Accuracy: 0.9821, Validation Accuracy: 0.9450, Loss: 0.0191 Epoch 2 Batch 1014/1077 - Train Accuracy: 0.9711, Validation Accuracy: 0.9542, Loss: 0.0237 Epoch 2 Batch 1015/1077 - Train Accuracy: 0.9297, Validation Accuracy: 0.9489, Loss: 0.0363 Epoch 2 Batch 1016/1077 - Train Accuracy: 0.9602, Validation Accuracy: 0.9442, Loss: 0.0238 Epoch 2 Batch 1017/1077 - Train Accuracy: 0.9650, Validation Accuracy: 0.9471, Loss: 0.0248 Epoch 2 Batch 1018/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9492, Loss: 0.0251 Epoch 2 Batch 1019/1077 - Train Accuracy: 0.9301, Validation Accuracy: 0.9492, Loss: 0.0401 Epoch 2 Batch 1020/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9489, Loss: 0.0222 Epoch 2 Batch 1021/1077 - Train Accuracy: 0.9803, Validation Accuracy: 0.9595, Loss: 0.0238 Epoch 2 Batch 1022/1077 - Train Accuracy: 0.9751, Validation Accuracy: 0.9595, Loss: 0.0253 Epoch 2 Batch 1023/1077 - Train Accuracy: 0.9386, Validation Accuracy: 0.9645, Loss: 0.0297 Epoch 2 Batch 1024/1077 - Train Accuracy: 0.9594, Validation Accuracy: 0.9648, Loss: 0.0357 Epoch 2 Batch 1025/1077 - Train Accuracy: 0.9513, Validation Accuracy: 0.9648, Loss: 0.0248 Epoch 2 Batch 1026/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9677, Loss: 0.0368 Epoch 2 Batch 1027/1077 - Train Accuracy: 0.9559, Validation Accuracy: 0.9570, Loss: 0.0256 Epoch 2 Batch 1028/1077 - Train Accuracy: 0.9554, Validation Accuracy: 0.9560, Loss: 0.0246 Epoch 2 Batch 1029/1077 - Train 
Accuracy: 0.9719, Validation Accuracy: 0.9545, Loss: 0.0230
Epoch 2 Batch 1030/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9549, Loss: 0.0280
Epoch 2 Batch 1031/1077 - Train Accuracy: 0.9692, Validation Accuracy: 0.9595, Loss: 0.0316
[... per-batch training log truncated: Epoch 2 Batches 1032-1075 and Epoch 3 Batches 1-412, with Train Accuracy ~0.91-0.99, Validation Accuracy ~0.92-0.98, Loss ~0.013-0.048 ...]
Epoch 3 Batch 413/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9545, Loss: 0.0233
Epoch 3 Batch 414/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9570, Loss: 0.0242
Epoch 3 Batch 415/1077 - Train Accuracy: 0.9859,
Validation Accuracy: 0.9567, Loss: 0.0293 Epoch 3 Batch 416/1077 - Train Accuracy: 0.9688, Validation Accuracy: 0.9542, Loss: 0.0198 Epoch 3 Batch 417/1077 - Train Accuracy: 0.9703, Validation Accuracy: 0.9606, Loss: 0.0378 Epoch 3 Batch 418/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9606, Loss: 0.0186 Epoch 3 Batch 419/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9606, Loss: 0.0212 Epoch 3 Batch 420/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9542, Loss: 0.0167 Epoch 3 Batch 421/1077 - Train Accuracy: 0.9629, Validation Accuracy: 0.9489, Loss: 0.0334 Epoch 3 Batch 422/1077 - Train Accuracy: 0.9609, Validation Accuracy: 0.9503, Loss: 0.0203 Epoch 3 Batch 423/1077 - Train Accuracy: 0.9598, Validation Accuracy: 0.9503, Loss: 0.0302 Epoch 3 Batch 424/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9503, Loss: 0.0227 Epoch 3 Batch 425/1077 - Train Accuracy: 0.9557, Validation Accuracy: 0.9460, Loss: 0.0196 Epoch 3 Batch 426/1077 - Train Accuracy: 0.9645, Validation Accuracy: 0.9411, Loss: 0.0234 Epoch 3 Batch 427/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9414, Loss: 0.0235 Epoch 3 Batch 428/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9460, Loss: 0.0177 Epoch 3 Batch 429/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9478, Loss: 0.0179 Epoch 3 Batch 430/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9492, Loss: 0.0187 Epoch 3 Batch 431/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9471, Loss: 0.0193 Epoch 3 Batch 432/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9464, Loss: 0.0219 Epoch 3 Batch 433/1077 - Train Accuracy: 0.9754, Validation Accuracy: 0.9521, Loss: 0.0327 Epoch 3 Batch 434/1077 - Train Accuracy: 0.9648, Validation Accuracy: 0.9524, Loss: 0.0206 Epoch 3 Batch 435/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9567, Loss: 0.0270 Epoch 3 Batch 436/1077 - Train Accuracy: 0.9736, Validation Accuracy: 0.9670, Loss: 0.0221 Epoch 3 Batch 437/1077 - Train Accuracy: 
0.9871, Validation Accuracy: 0.9553, Loss: 0.0157 Epoch 3 Batch 438/1077 - Train Accuracy: 0.9676, Validation Accuracy: 0.9553, Loss: 0.0228 Epoch 3 Batch 439/1077 - Train Accuracy: 0.9754, Validation Accuracy: 0.9556, Loss: 0.0244 Epoch 3 Batch 440/1077 - Train Accuracy: 0.9563, Validation Accuracy: 0.9560, Loss: 0.0250 Epoch 3 Batch 441/1077 - Train Accuracy: 0.9551, Validation Accuracy: 0.9577, Loss: 0.0218 Epoch 3 Batch 442/1077 - Train Accuracy: 0.9516, Validation Accuracy: 0.9428, Loss: 0.0303 Epoch 3 Batch 443/1077 - Train Accuracy: 0.9784, Validation Accuracy: 0.9432, Loss: 0.0169 Epoch 3 Batch 444/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9478, Loss: 0.0222 Epoch 3 Batch 445/1077 - Train Accuracy: 0.9720, Validation Accuracy: 0.9439, Loss: 0.0230 Epoch 3 Batch 446/1077 - Train Accuracy: 0.9751, Validation Accuracy: 0.9439, Loss: 0.0175 Epoch 3 Batch 447/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9442, Loss: 0.0178 Epoch 3 Batch 448/1077 - Train Accuracy: 0.9656, Validation Accuracy: 0.9535, Loss: 0.0265 Epoch 3 Batch 449/1077 - Train Accuracy: 0.9707, Validation Accuracy: 0.9574, Loss: 0.0216 Epoch 3 Batch 450/1077 - Train Accuracy: 0.9676, Validation Accuracy: 0.9574, Loss: 0.0240 Epoch 3 Batch 451/1077 - Train Accuracy: 0.9818, Validation Accuracy: 0.9641, Loss: 0.0227 Epoch 3 Batch 452/1077 - Train Accuracy: 0.9680, Validation Accuracy: 0.9627, Loss: 0.0217 Epoch 3 Batch 453/1077 - Train Accuracy: 0.9732, Validation Accuracy: 0.9631, Loss: 0.0227 Epoch 3 Batch 454/1077 - Train Accuracy: 0.9613, Validation Accuracy: 0.9645, Loss: 0.0285 Epoch 3 Batch 455/1077 - Train Accuracy: 0.9641, Validation Accuracy: 0.9648, Loss: 0.0267 Epoch 3 Batch 456/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9652, Loss: 0.0241 Epoch 3 Batch 457/1077 - Train Accuracy: 0.9591, Validation Accuracy: 0.9680, Loss: 0.0183 Epoch 3 Batch 458/1077 - Train Accuracy: 0.9605, Validation Accuracy: 0.9631, Loss: 0.0263 Epoch 3 Batch 459/1077 - Train 
Accuracy: 0.9572, Validation Accuracy: 0.9535, Loss: 0.0299 Epoch 3 Batch 460/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9535, Loss: 0.0258 Epoch 3 Batch 461/1077 - Train Accuracy: 0.9457, Validation Accuracy: 0.9535, Loss: 0.0245 Epoch 3 Batch 462/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9403, Loss: 0.0250 Epoch 3 Batch 463/1077 - Train Accuracy: 0.9637, Validation Accuracy: 0.9403, Loss: 0.0230 Epoch 3 Batch 464/1077 - Train Accuracy: 0.9645, Validation Accuracy: 0.9364, Loss: 0.0224 Epoch 3 Batch 465/1077 - Train Accuracy: 0.9618, Validation Accuracy: 0.9364, Loss: 0.0258 Epoch 3 Batch 466/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9329, Loss: 0.0216 Epoch 3 Batch 467/1077 - Train Accuracy: 0.9821, Validation Accuracy: 0.9279, Loss: 0.0224 Epoch 3 Batch 468/1077 - Train Accuracy: 0.9635, Validation Accuracy: 0.9389, Loss: 0.0249 Epoch 3 Batch 469/1077 - Train Accuracy: 0.9754, Validation Accuracy: 0.9414, Loss: 0.0216 Epoch 3 Batch 470/1077 - Train Accuracy: 0.9827, Validation Accuracy: 0.9407, Loss: 0.0207 Epoch 3 Batch 471/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9403, Loss: 0.0170 Epoch 3 Batch 472/1077 - Train Accuracy: 0.9769, Validation Accuracy: 0.9403, Loss: 0.0197 Epoch 3 Batch 473/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9446, Loss: 0.0188 Epoch 3 Batch 474/1077 - Train Accuracy: 0.9461, Validation Accuracy: 0.9450, Loss: 0.0220 Epoch 3 Batch 475/1077 - Train Accuracy: 0.9590, Validation Accuracy: 0.9450, Loss: 0.0237 Epoch 3 Batch 476/1077 - Train Accuracy: 0.9704, Validation Accuracy: 0.9553, Loss: 0.0143 Epoch 3 Batch 477/1077 - Train Accuracy: 0.9747, Validation Accuracy: 0.9553, Loss: 0.0254 Epoch 3 Batch 478/1077 - Train Accuracy: 0.9733, Validation Accuracy: 0.9528, Loss: 0.0209 Epoch 3 Batch 479/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9528, Loss: 0.0265 Epoch 3 Batch 480/1077 - Train Accuracy: 0.9725, Validation Accuracy: 0.9577, Loss: 0.0219 Epoch 3 Batch 481/1077 - 
Train Accuracy: 0.9668, Validation Accuracy: 0.9577, Loss: 0.0204 Epoch 3 Batch 482/1077 - Train Accuracy: 0.9597, Validation Accuracy: 0.9531, Loss: 0.0259 Epoch 3 Batch 483/1077 - Train Accuracy: 0.9641, Validation Accuracy: 0.9375, Loss: 0.0231 Epoch 3 Batch 484/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9400, Loss: 0.0248 Epoch 3 Batch 485/1077 - Train Accuracy: 0.9801, Validation Accuracy: 0.9400, Loss: 0.0221 Epoch 3 Batch 486/1077 - Train Accuracy: 0.9671, Validation Accuracy: 0.9354, Loss: 0.0189 Epoch 3 Batch 487/1077 - Train Accuracy: 0.9782, Validation Accuracy: 0.9403, Loss: 0.0186 Epoch 3 Batch 488/1077 - Train Accuracy: 0.9725, Validation Accuracy: 0.9339, Loss: 0.0198 Epoch 3 Batch 489/1077 - Train Accuracy: 0.9714, Validation Accuracy: 0.9446, Loss: 0.0188 Epoch 3 Batch 490/1077 - Train Accuracy: 0.9609, Validation Accuracy: 0.9528, Loss: 0.0211 Epoch 3 Batch 491/1077 - Train Accuracy: 0.9648, Validation Accuracy: 0.9549, Loss: 0.0310 Epoch 3 Batch 492/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9545, Loss: 0.0249 Epoch 3 Batch 493/1077 - Train Accuracy: 0.9847, Validation Accuracy: 0.9524, Loss: 0.0143 Epoch 3 Batch 494/1077 - Train Accuracy: 0.9441, Validation Accuracy: 0.9506, Loss: 0.0195 Epoch 3 Batch 495/1077 - Train Accuracy: 0.9582, Validation Accuracy: 0.9506, Loss: 0.0179 Epoch 3 Batch 496/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9499, Loss: 0.0222 Epoch 3 Batch 497/1077 - Train Accuracy: 0.9741, Validation Accuracy: 0.9499, Loss: 0.0258 Epoch 3 Batch 498/1077 - Train Accuracy: 0.9555, Validation Accuracy: 0.9442, Loss: 0.0250 Epoch 3 Batch 499/1077 - Train Accuracy: 0.9706, Validation Accuracy: 0.9492, Loss: 0.0196 Epoch 3 Batch 500/1077 - Train Accuracy: 0.9688, Validation Accuracy: 0.9535, Loss: 0.0173 Epoch 3 Batch 501/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9599, Loss: 0.0173 Epoch 3 Batch 502/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9652, Loss: 0.0279 Epoch 3 Batch 503/1077 
- Train Accuracy: 0.9734, Validation Accuracy: 0.9620, Loss: 0.0193 Epoch 3 Batch 504/1077 - Train Accuracy: 0.9645, Validation Accuracy: 0.9620, Loss: 0.0182 Epoch 3 Batch 505/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9563, Loss: 0.0178 Epoch 3 Batch 506/1077 - Train Accuracy: 0.9504, Validation Accuracy: 0.9535, Loss: 0.0292 Epoch 3 Batch 507/1077 - Train Accuracy: 0.9582, Validation Accuracy: 0.9585, Loss: 0.0250 Epoch 3 Batch 508/1077 - Train Accuracy: 0.9784, Validation Accuracy: 0.9585, Loss: 0.0148 Epoch 3 Batch 509/1077 - Train Accuracy: 0.9625, Validation Accuracy: 0.9581, Loss: 0.0286 Epoch 3 Batch 510/1077 - Train Accuracy: 0.9566, Validation Accuracy: 0.9648, Loss: 0.0230 Epoch 3 Batch 511/1077 - Train Accuracy: 0.9782, Validation Accuracy: 0.9712, Loss: 0.0195 Epoch 3 Batch 512/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9712, Loss: 0.0154 Epoch 3 Batch 513/1077 - Train Accuracy: 0.9723, Validation Accuracy: 0.9716, Loss: 0.0268 Epoch 3 Batch 514/1077 - Train Accuracy: 0.9594, Validation Accuracy: 0.9613, Loss: 0.0216 Epoch 3 Batch 515/1077 - Train Accuracy: 0.9707, Validation Accuracy: 0.9524, Loss: 0.0220 Epoch 3 Batch 516/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9517, Loss: 0.0224 Epoch 3 Batch 517/1077 - Train Accuracy: 0.9736, Validation Accuracy: 0.9517, Loss: 0.0242 Epoch 3 Batch 518/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9517, Loss: 0.0182 Epoch 3 Batch 519/1077 - Train Accuracy: 0.9660, Validation Accuracy: 0.9567, Loss: 0.0227 Epoch 3 Batch 520/1077 - Train Accuracy: 0.9847, Validation Accuracy: 0.9411, Loss: 0.0154 Epoch 3 Batch 521/1077 - Train Accuracy: 0.9650, Validation Accuracy: 0.9403, Loss: 0.0212 Epoch 3 Batch 522/1077 - Train Accuracy: 0.9508, Validation Accuracy: 0.9467, Loss: 0.0255 Epoch 3 Batch 523/1077 - Train Accuracy: 0.9445, Validation Accuracy: 0.9467, Loss: 0.0258 Epoch 3 Batch 524/1077 - Train Accuracy: 0.9625, Validation Accuracy: 0.9496, Loss: 0.0262 Epoch 3 Batch 
525/1077 - Train Accuracy: 0.9516, Validation Accuracy: 0.9453, Loss: 0.0239 Epoch 3 Batch 526/1077 - Train Accuracy: 0.9770, Validation Accuracy: 0.9450, Loss: 0.0181 Epoch 3 Batch 527/1077 - Train Accuracy: 0.9692, Validation Accuracy: 0.9538, Loss: 0.0223 Epoch 3 Batch 528/1077 - Train Accuracy: 0.9500, Validation Accuracy: 0.9549, Loss: 0.0231 Epoch 3 Batch 529/1077 - Train Accuracy: 0.9668, Validation Accuracy: 0.9599, Loss: 0.0241 Epoch 3 Batch 530/1077 - Train Accuracy: 0.9672, Validation Accuracy: 0.9574, Loss: 0.0266 Epoch 3 Batch 531/1077 - Train Accuracy: 0.9574, Validation Accuracy: 0.9670, Loss: 0.0239 Epoch 3 Batch 532/1077 - Train Accuracy: 0.9492, Validation Accuracy: 0.9624, Loss: 0.0347 Epoch 3 Batch 533/1077 - Train Accuracy: 0.9711, Validation Accuracy: 0.9648, Loss: 0.0256 Epoch 3 Batch 534/1077 - Train Accuracy: 0.9453, Validation Accuracy: 0.9595, Loss: 0.0274 Epoch 3 Batch 535/1077 - Train Accuracy: 0.9652, Validation Accuracy: 0.9549, Loss: 0.0249 Epoch 3 Batch 536/1077 - Train Accuracy: 0.9539, Validation Accuracy: 0.9453, Loss: 0.0252 Epoch 3 Batch 537/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9496, Loss: 0.0169 Epoch 3 Batch 538/1077 - Train Accuracy: 0.9688, Validation Accuracy: 0.9489, Loss: 0.0174 Epoch 3 Batch 539/1077 - Train Accuracy: 0.9512, Validation Accuracy: 0.9393, Loss: 0.0310 Epoch 3 Batch 540/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9354, Loss: 0.0172 Epoch 3 Batch 541/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9513, Loss: 0.0232 Epoch 3 Batch 542/1077 - Train Accuracy: 0.9496, Validation Accuracy: 0.9556, Loss: 0.0270 Epoch 3 Batch 543/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9670, Loss: 0.0186 Epoch 3 Batch 544/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9627, Loss: 0.0141 Epoch 3 Batch 545/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9773, Loss: 0.0262 Epoch 3 Batch 546/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9602, Loss: 0.0231 Epoch 3 
Batch 547/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9560, Loss: 0.0214 Epoch 3 Batch 548/1077 - Train Accuracy: 0.9492, Validation Accuracy: 0.9517, Loss: 0.0274 Epoch 3 Batch 549/1077 - Train Accuracy: 0.9582, Validation Accuracy: 0.9581, Loss: 0.0237 Epoch 3 Batch 550/1077 - Train Accuracy: 0.9563, Validation Accuracy: 0.9581, Loss: 0.0225 Epoch 3 Batch 551/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9581, Loss: 0.0209 Epoch 3 Batch 552/1077 - Train Accuracy: 0.9574, Validation Accuracy: 0.9581, Loss: 0.0295 Epoch 3 Batch 553/1077 - Train Accuracy: 0.9641, Validation Accuracy: 0.9684, Loss: 0.0331 Epoch 3 Batch 554/1077 - Train Accuracy: 0.9602, Validation Accuracy: 0.9688, Loss: 0.0210 Epoch 3 Batch 555/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9663, Loss: 0.0192 Epoch 3 Batch 556/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9656, Loss: 0.0203 Epoch 3 Batch 557/1077 - Train Accuracy: 0.9664, Validation Accuracy: 0.9684, Loss: 0.0209 Epoch 3 Batch 558/1077 - Train Accuracy: 0.9563, Validation Accuracy: 0.9709, Loss: 0.0210 Epoch 3 Batch 559/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9712, Loss: 0.0219 Epoch 3 Batch 560/1077 - Train Accuracy: 0.9586, Validation Accuracy: 0.9577, Loss: 0.0210 Epoch 3 Batch 561/1077 - Train Accuracy: 0.9676, Validation Accuracy: 0.9599, Loss: 0.0203 Epoch 3 Batch 562/1077 - Train Accuracy: 0.9673, Validation Accuracy: 0.9673, Loss: 0.0207 Epoch 3 Batch 563/1077 - Train Accuracy: 0.9664, Validation Accuracy: 0.9652, Loss: 0.0261 Epoch 3 Batch 564/1077 - Train Accuracy: 0.9671, Validation Accuracy: 0.9659, Loss: 0.0262 Epoch 3 Batch 565/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9712, Loss: 0.0241 Epoch 3 Batch 566/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9659, Loss: 0.0194 Epoch 3 Batch 567/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9712, Loss: 0.0209 Epoch 3 Batch 568/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9641, Loss: 0.0199 Epoch 
3 Batch 569/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9609, Loss: 0.0235 Epoch 3 Batch 570/1077 - Train Accuracy: 0.9494, Validation Accuracy: 0.9702, Loss: 0.0297 Epoch 3 Batch 571/1077 - Train Accuracy: 0.9647, Validation Accuracy: 0.9602, Loss: 0.0167 Epoch 3 Batch 572/1077 - Train Accuracy: 0.9736, Validation Accuracy: 0.9553, Loss: 0.0265 Epoch 3 Batch 573/1077 - Train Accuracy: 0.9512, Validation Accuracy: 0.9361, Loss: 0.0365 Epoch 3 Batch 574/1077 - Train Accuracy: 0.9716, Validation Accuracy: 0.9261, Loss: 0.0258 Epoch 3 Batch 575/1077 - Train Accuracy: 0.9792, Validation Accuracy: 0.9276, Loss: 0.0176 Epoch 3 Batch 576/1077 - Train Accuracy: 0.9790, Validation Accuracy: 0.9474, Loss: 0.0175 Epoch 3 Batch 577/1077 - Train Accuracy: 0.9655, Validation Accuracy: 0.9506, Loss: 0.0247 Epoch 3 Batch 578/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9613, Loss: 0.0162 Epoch 3 Batch 579/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9656, Loss: 0.0173 Epoch 3 Batch 580/1077 - Train Accuracy: 0.9673, Validation Accuracy: 0.9659, Loss: 0.0188 Epoch 3 Batch 581/1077 - Train Accuracy: 0.9664, Validation Accuracy: 0.9673, Loss: 0.0176 Epoch 3 Batch 582/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9677, Loss: 0.0219 Epoch 3 Batch 583/1077 - Train Accuracy: 0.9650, Validation Accuracy: 0.9730, Loss: 0.0232 Epoch 3 Batch 584/1077 - Train Accuracy: 0.9874, Validation Accuracy: 0.9730, Loss: 0.0193 Epoch 3 Batch 585/1077 - Train Accuracy: 0.9669, Validation Accuracy: 0.9680, Loss: 0.0142 Epoch 3 Batch 586/1077 - Train Accuracy: 0.9675, Validation Accuracy: 0.9688, Loss: 0.0192 Epoch 3 Batch 587/1077 - Train Accuracy: 0.9535, Validation Accuracy: 0.9542, Loss: 0.0270 Epoch 3 Batch 588/1077 - Train Accuracy: 0.9613, Validation Accuracy: 0.9542, Loss: 0.0183 Epoch 3 Batch 589/1077 - Train Accuracy: 0.9803, Validation Accuracy: 0.9542, Loss: 0.0195 Epoch 3 Batch 590/1077 - Train Accuracy: 0.9704, Validation Accuracy: 0.9496, Loss: 0.0239 
Epoch 3 Batch 591/1077 - Train Accuracy: 0.9482, Validation Accuracy: 0.9446, Loss: 0.0245 Epoch 3 Batch 592/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9425, Loss: 0.0239 Epoch 3 Batch 593/1077 - Train Accuracy: 0.9661, Validation Accuracy: 0.9485, Loss: 0.0356 Epoch 3 Batch 594/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9343, Loss: 0.0279 Epoch 3 Batch 595/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9439, Loss: 0.0199 Epoch 3 Batch 596/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9489, Loss: 0.0224 Epoch 3 Batch 597/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9478, Loss: 0.0211 Epoch 3 Batch 598/1077 - Train Accuracy: 0.9665, Validation Accuracy: 0.9482, Loss: 0.0217 Epoch 3 Batch 599/1077 - Train Accuracy: 0.9602, Validation Accuracy: 0.9581, Loss: 0.0327 Epoch 3 Batch 600/1077 - Train Accuracy: 0.9568, Validation Accuracy: 0.9691, Loss: 0.0316 Epoch 3 Batch 601/1077 - Train Accuracy: 0.9799, Validation Accuracy: 0.9695, Loss: 0.0228 Epoch 3 Batch 602/1077 - Train Accuracy: 0.9695, Validation Accuracy: 0.9730, Loss: 0.0222 Epoch 3 Batch 603/1077 - Train Accuracy: 0.9792, Validation Accuracy: 0.9663, Loss: 0.0217 Epoch 3 Batch 604/1077 - Train Accuracy: 0.9492, Validation Accuracy: 0.9641, Loss: 0.0303 Epoch 3 Batch 605/1077 - Train Accuracy: 0.9700, Validation Accuracy: 0.9716, Loss: 0.0297 Epoch 3 Batch 606/1077 - Train Accuracy: 0.9609, Validation Accuracy: 0.9712, Loss: 0.0160 Epoch 3 Batch 607/1077 - Train Accuracy: 0.9748, Validation Accuracy: 0.9741, Loss: 0.0296 Epoch 3 Batch 608/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9712, Loss: 0.0255 Epoch 3 Batch 609/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9666, Loss: 0.0253 Epoch 3 Batch 610/1077 - Train Accuracy: 0.9589, Validation Accuracy: 0.9691, Loss: 0.0273 Epoch 3 Batch 611/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9691, Loss: 0.0213 Epoch 3 Batch 612/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9677, Loss: 
0.0159 Epoch 3 Batch 613/1077 - Train Accuracy: 0.9664, Validation Accuracy: 0.9631, Loss: 0.0282 Epoch 3 Batch 614/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9638, Loss: 0.0181 Epoch 3 Batch 615/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9719, Loss: 0.0173 Epoch 3 Batch 616/1077 - Train Accuracy: 0.9794, Validation Accuracy: 0.9734, Loss: 0.0235 Epoch 3 Batch 617/1077 - Train Accuracy: 0.9847, Validation Accuracy: 0.9759, Loss: 0.0191 Epoch 3 Batch 618/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9759, Loss: 0.0177 Epoch 3 Batch 619/1077 - Train Accuracy: 0.9831, Validation Accuracy: 0.9638, Loss: 0.0153 Epoch 3 Batch 620/1077 - Train Accuracy: 0.9637, Validation Accuracy: 0.9453, Loss: 0.0240 Epoch 3 Batch 621/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9528, Loss: 0.0216 Epoch 3 Batch 622/1077 - Train Accuracy: 0.9794, Validation Accuracy: 0.9563, Loss: 0.0311 Epoch 3 Batch 623/1077 - Train Accuracy: 0.9574, Validation Accuracy: 0.9517, Loss: 0.0233 Epoch 3 Batch 624/1077 - Train Accuracy: 0.9598, Validation Accuracy: 0.9403, Loss: 0.0228 Epoch 3 Batch 625/1077 - Train Accuracy: 0.9816, Validation Accuracy: 0.9425, Loss: 0.0178 Epoch 3 Batch 626/1077 - Train Accuracy: 0.9457, Validation Accuracy: 0.9471, Loss: 0.0201 Epoch 3 Batch 627/1077 - Train Accuracy: 0.9437, Validation Accuracy: 0.9538, Loss: 0.0256 Epoch 3 Batch 628/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9609, Loss: 0.0257 Epoch 3 Batch 629/1077 - Train Accuracy: 0.9708, Validation Accuracy: 0.9659, Loss: 0.0234 Epoch 3 Batch 630/1077 - Train Accuracy: 0.9707, Validation Accuracy: 0.9616, Loss: 0.0218 Epoch 3 Batch 631/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9616, Loss: 0.0196 Epoch 3 Batch 632/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9688, Loss: 0.0130 Epoch 3 Batch 633/1077 - Train Accuracy: 0.9645, Validation Accuracy: 0.9641, Loss: 0.0215 Epoch 3 Batch 634/1077 - Train Accuracy: 0.9688, Validation Accuracy: 0.9744, 
Loss: 0.0123 Epoch 3 Batch 635/1077 - Train Accuracy: 0.9741, Validation Accuracy: 0.9741, Loss: 0.0219 Epoch 3 Batch 636/1077 - Train Accuracy: 0.9605, Validation Accuracy: 0.9698, Loss: 0.0220 Epoch 3 Batch 637/1077 - Train Accuracy: 0.9523, Validation Accuracy: 0.9648, Loss: 0.0233 Epoch 3 Batch 638/1077 - Train Accuracy: 0.9702, Validation Accuracy: 0.9620, Loss: 0.0200 Epoch 3 Batch 639/1077 - Train Accuracy: 0.9516, Validation Accuracy: 0.9620, Loss: 0.0371 Epoch 3 Batch 640/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9659, Loss: 0.0204 Epoch 3 Batch 641/1077 - Train Accuracy: 0.9617, Validation Accuracy: 0.9695, Loss: 0.0218 Epoch 3 Batch 642/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9641, Loss: 0.0176 Epoch 3 Batch 643/1077 - Train Accuracy: 0.9810, Validation Accuracy: 0.9570, Loss: 0.0167 Epoch 3 Batch 644/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9567, Loss: 0.0209 Epoch 3 Batch 645/1077 - Train Accuracy: 0.9706, Validation Accuracy: 0.9567, Loss: 0.0280 Epoch 3 Batch 646/1077 - Train Accuracy: 0.9803, Validation Accuracy: 0.9634, Loss: 0.0245 Epoch 3 Batch 647/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9588, Loss: 0.0192 Epoch 3 Batch 648/1077 - Train Accuracy: 0.9702, Validation Accuracy: 0.9702, Loss: 0.0168 Epoch 3 Batch 649/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9602, Loss: 0.0175 Epoch 3 Batch 650/1077 - Train Accuracy: 0.9723, Validation Accuracy: 0.9627, Loss: 0.0201 Epoch 3 Batch 651/1077 - Train Accuracy: 0.9788, Validation Accuracy: 0.9620, Loss: 0.0198 Epoch 3 Batch 652/1077 - Train Accuracy: 0.9790, Validation Accuracy: 0.9549, Loss: 0.0241 Epoch 3 Batch 653/1077 - Train Accuracy: 0.9668, Validation Accuracy: 0.9521, Loss: 0.0216 Epoch 3 Batch 654/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9563, Loss: 0.0177 Epoch 3 Batch 655/1077 - Train Accuracy: 0.9574, Validation Accuracy: 0.9560, Loss: 0.0253 Epoch 3 Batch 656/1077 - Train Accuracy: 0.9734, Validation Accuracy: 
0.9606, Loss: 0.0237 Epoch 3 Batch 657/1077 - Train Accuracy: 0.9782, Validation Accuracy: 0.9695, Loss: 0.0217 Epoch 3 Batch 658/1077 - Train Accuracy: 0.9725, Validation Accuracy: 0.9648, Loss: 0.0169 Epoch 3 Batch 659/1077 - Train Accuracy: 0.9866, Validation Accuracy: 0.9631, Loss: 0.0223 Epoch 3 Batch 660/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9531, Loss: 0.0208 Epoch 3 Batch 661/1077 - Train Accuracy: 0.9888, Validation Accuracy: 0.9499, Loss: 0.0196 Epoch 3 Batch 662/1077 - Train Accuracy: 0.9792, Validation Accuracy: 0.9503, Loss: 0.0172 Epoch 3 Batch 663/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9499, Loss: 0.0179 Epoch 3 Batch 664/1077 - Train Accuracy: 0.9734, Validation Accuracy: 0.9457, Loss: 0.0195 Epoch 3 Batch 665/1077 - Train Accuracy: 0.9652, Validation Accuracy: 0.9450, Loss: 0.0191 Epoch 3 Batch 666/1077 - Train Accuracy: 0.9683, Validation Accuracy: 0.9517, Loss: 0.0312 Epoch 3 Batch 667/1077 - Train Accuracy: 0.9757, Validation Accuracy: 0.9517, Loss: 0.0268 Epoch 3 Batch 668/1077 - Train Accuracy: 0.9665, Validation Accuracy: 0.9517, Loss: 0.0181 Epoch 3 Batch 669/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9496, Loss: 0.0207 Epoch 3 Batch 670/1077 - Train Accuracy: 0.9698, Validation Accuracy: 0.9453, Loss: 0.0212 Epoch 3 Batch 671/1077 - Train Accuracy: 0.9539, Validation Accuracy: 0.9439, Loss: 0.0295 Epoch 3 Batch 672/1077 - Train Accuracy: 0.9833, Validation Accuracy: 0.9506, Loss: 0.0158 Epoch 3 Batch 673/1077 - Train Accuracy: 0.9721, Validation Accuracy: 0.9570, Loss: 0.0174 Epoch 3 Batch 674/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9549, Loss: 0.0230 Epoch 3 Batch 675/1077 - Train Accuracy: 0.9617, Validation Accuracy: 0.9506, Loss: 0.0279 Epoch 3 Batch 676/1077 - Train Accuracy: 0.9712, Validation Accuracy: 0.9620, Loss: 0.0166 Epoch 3 Batch 677/1077 - Train Accuracy: 0.9559, Validation Accuracy: 0.9599, Loss: 0.0268 Epoch 3 Batch 678/1077 - Train Accuracy: 0.9788, Validation 
Accuracy: 0.9592, Loss: 0.0178 Epoch 3 Batch 679/1077 - Train Accuracy: 0.9692, Validation Accuracy: 0.9531, Loss: 0.0186 Epoch 3 Batch 680/1077 - Train Accuracy: 0.9688, Validation Accuracy: 0.9524, Loss: 0.0230 Epoch 3 Batch 681/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9474, Loss: 0.0193 Epoch 3 Batch 682/1077 - Train Accuracy: 0.9641, Validation Accuracy: 0.9542, Loss: 0.0187 Epoch 3 Batch 683/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9581, Loss: 0.0166 Epoch 3 Batch 684/1077 - Train Accuracy: 0.9645, Validation Accuracy: 0.9609, Loss: 0.0244 Epoch 3 Batch 685/1077 - Train Accuracy: 0.9617, Validation Accuracy: 0.9563, Loss: 0.0233 Epoch 3 Batch 686/1077 - Train Accuracy: 0.9747, Validation Accuracy: 0.9570, Loss: 0.0141 Epoch 3 Batch 687/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9563, Loss: 0.0240 Epoch 3 Batch 688/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9563, Loss: 0.0162 Epoch 3 Batch 689/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9567, Loss: 0.0127 Epoch 3 Batch 690/1077 - Train Accuracy: 0.9555, Validation Accuracy: 0.9588, Loss: 0.0229 Epoch 3 Batch 691/1077 - Train Accuracy: 0.9679, Validation Accuracy: 0.9588, Loss: 0.0291 Epoch 3 Batch 692/1077 - Train Accuracy: 0.9747, Validation Accuracy: 0.9588, Loss: 0.0183 Epoch 3 Batch 693/1077 - Train Accuracy: 0.9519, Validation Accuracy: 0.9613, Loss: 0.0313 Epoch 3 Batch 694/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9616, Loss: 0.0191 Epoch 3 Batch 695/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9663, Loss: 0.0214 Epoch 3 Batch 696/1077 - Train Accuracy: 0.9630, Validation Accuracy: 0.9716, Loss: 0.0236 Epoch 3 Batch 697/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9588, Loss: 0.0197 Epoch 3 Batch 698/1077 - Train Accuracy: 0.9769, Validation Accuracy: 0.9641, Loss: 0.0151 Epoch 3 Batch 699/1077 - Train Accuracy: 0.9720, Validation Accuracy: 0.9588, Loss: 0.0170 Epoch 3 Batch 700/1077 - Train Accuracy: 0.9875, 
Validation Accuracy: 0.9641, Loss: 0.0145 [... per-batch training log truncated: Epoch 3 Batches 701-1075 and Epoch 4 Batches 1-63 of 1077; Train Accuracy ranges roughly 0.94-1.00, Validation Accuracy roughly 0.94-0.99, Loss roughly 0.01-0.05 ...] Epoch 4 Batch 63/1077 - Train Accuracy: 0.9714, Validation Accuracy: 0.9553, Loss: 0.0148 Epoch 
4 Batch 64/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9585, Loss: 0.0179 Epoch 4 Batch 65/1077 - Train Accuracy: 0.9811, Validation Accuracy: 0.9613, Loss: 0.0156 Epoch 4 Batch 66/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9663, Loss: 0.0137 Epoch 4 Batch 67/1077 - Train Accuracy: 0.9676, Validation Accuracy: 0.9648, Loss: 0.0185 Epoch 4 Batch 68/1077 - Train Accuracy: 0.9637, Validation Accuracy: 0.9542, Loss: 0.0233 Epoch 4 Batch 69/1077 - Train Accuracy: 0.9437, Validation Accuracy: 0.9588, Loss: 0.0274 Epoch 4 Batch 70/1077 - Train Accuracy: 0.9683, Validation Accuracy: 0.9581, Loss: 0.0184 Epoch 4 Batch 71/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9656, Loss: 0.0125 Epoch 4 Batch 72/1077 - Train Accuracy: 0.9637, Validation Accuracy: 0.9560, Loss: 0.0193 Epoch 4 Batch 73/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9616, Loss: 0.0160 Epoch 4 Batch 74/1077 - Train Accuracy: 0.9673, Validation Accuracy: 0.9641, Loss: 0.0196 Epoch 4 Batch 75/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9670, Loss: 0.0232 Epoch 4 Batch 76/1077 - Train Accuracy: 0.9707, Validation Accuracy: 0.9695, Loss: 0.0143 Epoch 4 Batch 77/1077 - Train Accuracy: 0.9668, Validation Accuracy: 0.9645, Loss: 0.0213 Epoch 4 Batch 78/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9645, Loss: 0.0142 Epoch 4 Batch 79/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9645, Loss: 0.0131 Epoch 4 Batch 80/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9595, Loss: 0.0162 Epoch 4 Batch 81/1077 - Train Accuracy: 0.9637, Validation Accuracy: 0.9592, Loss: 0.0157 Epoch 4 Batch 82/1077 - Train Accuracy: 0.9794, Validation Accuracy: 0.9482, Loss: 0.0179 Epoch 4 Batch 83/1077 - Train Accuracy: 0.9720, Validation Accuracy: 0.9482, Loss: 0.0163 Epoch 4 Batch 84/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9503, Loss: 0.0180 Epoch 4 Batch 85/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9560, Loss: 0.0152 Epoch 4 Batch 86/1077 - 
Train Accuracy: 0.9816, Validation Accuracy: 0.9677, Loss: 0.0182 Epoch 4 Batch 87/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9709, Loss: 0.0169 Epoch 4 Batch 88/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9705, Loss: 0.0198 Epoch 4 Batch 89/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9705, Loss: 0.0188 Epoch 4 Batch 90/1077 - Train Accuracy: 0.9648, Validation Accuracy: 0.9698, Loss: 0.0181 Epoch 4 Batch 91/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9695, Loss: 0.0163 Epoch 4 Batch 92/1077 - Train Accuracy: 0.9453, Validation Accuracy: 0.9698, Loss: 0.0221 Epoch 4 Batch 93/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9698, Loss: 0.0140 Epoch 4 Batch 94/1077 - Train Accuracy: 0.9707, Validation Accuracy: 0.9698, Loss: 0.0165 Epoch 4 Batch 95/1077 - Train Accuracy: 0.9847, Validation Accuracy: 0.9702, Loss: 0.0170 Epoch 4 Batch 96/1077 - Train Accuracy: 0.9598, Validation Accuracy: 0.9741, Loss: 0.0227 Epoch 4 Batch 97/1077 - Train Accuracy: 0.9660, Validation Accuracy: 0.9741, Loss: 0.0186 Epoch 4 Batch 98/1077 - Train Accuracy: 0.9568, Validation Accuracy: 0.9705, Loss: 0.0200 Epoch 4 Batch 99/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9705, Loss: 0.0152 Epoch 4 Batch 100/1077 - Train Accuracy: 0.9746, Validation Accuracy: 0.9709, Loss: 0.0143 Epoch 4 Batch 101/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9659, Loss: 0.0148 Epoch 4 Batch 102/1077 - Train Accuracy: 0.9754, Validation Accuracy: 0.9577, Loss: 0.0163 Epoch 4 Batch 103/1077 - Train Accuracy: 0.9745, Validation Accuracy: 0.9577, Loss: 0.0201 Epoch 4 Batch 104/1077 - Train Accuracy: 0.9815, Validation Accuracy: 0.9553, Loss: 0.0185 Epoch 4 Batch 105/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9553, Loss: 0.0140 Epoch 4 Batch 106/1077 - Train Accuracy: 0.9803, Validation Accuracy: 0.9553, Loss: 0.0192 Epoch 4 Batch 107/1077 - Train Accuracy: 0.9498, Validation Accuracy: 0.9599, Loss: 0.0168 Epoch 4 Batch 108/1077 - Train 
Accuracy: 0.9563, Validation Accuracy: 0.9574, Loss: 0.0211 Epoch 4 Batch 109/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9624, Loss: 0.0193 Epoch 4 Batch 110/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9624, Loss: 0.0109 Epoch 4 Batch 111/1077 - Train Accuracy: 0.9625, Validation Accuracy: 0.9670, Loss: 0.0155 Epoch 4 Batch 112/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9684, Loss: 0.0149 Epoch 4 Batch 113/1077 - Train Accuracy: 0.9613, Validation Accuracy: 0.9684, Loss: 0.0199 Epoch 4 Batch 114/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9684, Loss: 0.0147 Epoch 4 Batch 115/1077 - Train Accuracy: 0.9637, Validation Accuracy: 0.9712, Loss: 0.0206 Epoch 4 Batch 116/1077 - Train Accuracy: 0.9402, Validation Accuracy: 0.9709, Loss: 0.0365 Epoch 4 Batch 117/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9641, Loss: 0.0166 Epoch 4 Batch 118/1077 - Train Accuracy: 0.9860, Validation Accuracy: 0.9645, Loss: 0.0134 Epoch 4 Batch 119/1077 - Train Accuracy: 0.9645, Validation Accuracy: 0.9645, Loss: 0.0190 Epoch 4 Batch 120/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9645, Loss: 0.0175 Epoch 4 Batch 121/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9698, Loss: 0.0219 Epoch 4 Batch 122/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9698, Loss: 0.0166 Epoch 4 Batch 123/1077 - Train Accuracy: 0.9816, Validation Accuracy: 0.9673, Loss: 0.0170 Epoch 4 Batch 124/1077 - Train Accuracy: 0.9656, Validation Accuracy: 0.9613, Loss: 0.0200 Epoch 4 Batch 125/1077 - Train Accuracy: 0.9568, Validation Accuracy: 0.9616, Loss: 0.0312 Epoch 4 Batch 126/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9616, Loss: 0.0159 Epoch 4 Batch 127/1077 - Train Accuracy: 0.9703, Validation Accuracy: 0.9620, Loss: 0.0170 Epoch 4 Batch 128/1077 - Train Accuracy: 0.9732, Validation Accuracy: 0.9563, Loss: 0.0203 Epoch 4 Batch 129/1077 - Train Accuracy: 0.9734, Validation Accuracy: 0.9581, Loss: 0.0187 Epoch 4 Batch 130/1077 - 
Train Accuracy: 0.9699, Validation Accuracy: 0.9560, Loss: 0.0173 Epoch 4 Batch 131/1077 - Train Accuracy: 0.9551, Validation Accuracy: 0.9499, Loss: 0.0183 Epoch 4 Batch 132/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9517, Loss: 0.0141 Epoch 4 Batch 133/1077 - Train Accuracy: 0.9745, Validation Accuracy: 0.9517, Loss: 0.0145 Epoch 4 Batch 134/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9585, Loss: 0.0154 Epoch 4 Batch 135/1077 - Train Accuracy: 0.9700, Validation Accuracy: 0.9616, Loss: 0.0109 Epoch 4 Batch 136/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9595, Loss: 0.0179 Epoch 4 Batch 137/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9545, Loss: 0.0135 Epoch 4 Batch 138/1077 - Train Accuracy: 0.9660, Validation Accuracy: 0.9545, Loss: 0.0169 Epoch 4 Batch 139/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9595, Loss: 0.0195 Epoch 4 Batch 140/1077 - Train Accuracy: 0.9873, Validation Accuracy: 0.9602, Loss: 0.0163 Epoch 4 Batch 141/1077 - Train Accuracy: 0.9977, Validation Accuracy: 0.9592, Loss: 0.0135 Epoch 4 Batch 142/1077 - Train Accuracy: 0.9587, Validation Accuracy: 0.9595, Loss: 0.0166 Epoch 4 Batch 143/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9577, Loss: 0.0138 Epoch 4 Batch 144/1077 - Train Accuracy: 0.9650, Validation Accuracy: 0.9602, Loss: 0.0284 Epoch 4 Batch 145/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9599, Loss: 0.0168 Epoch 4 Batch 146/1077 - Train Accuracy: 0.9710, Validation Accuracy: 0.9538, Loss: 0.0408 Epoch 4 Batch 147/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9482, Loss: 0.0187 Epoch 4 Batch 148/1077 - Train Accuracy: 0.9680, Validation Accuracy: 0.9595, Loss: 0.0217 Epoch 4 Batch 149/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9638, Loss: 0.0167 Epoch 4 Batch 150/1077 - Train Accuracy: 0.9851, Validation Accuracy: 0.9641, Loss: 0.0183 Epoch 4 Batch 151/1077 - Train Accuracy: 0.9725, Validation Accuracy: 0.9695, Loss: 0.0137 Epoch 4 Batch 152/1077 
- Train Accuracy: 0.9836, Validation Accuracy: 0.9595, Loss: 0.0248 Epoch 4 Batch 153/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9599, Loss: 0.0230 Epoch 4 Batch 154/1077 - Train Accuracy: 0.9807, Validation Accuracy: 0.9574, Loss: 0.0173 Epoch 4 Batch 155/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9737, Loss: 0.0202 Epoch 4 Batch 156/1077 - Train Accuracy: 0.9703, Validation Accuracy: 0.9741, Loss: 0.0110 Epoch 4 Batch 157/1077 - Train Accuracy: 0.9680, Validation Accuracy: 0.9741, Loss: 0.0169 Epoch 4 Batch 158/1077 - Train Accuracy: 0.9665, Validation Accuracy: 0.9741, Loss: 0.0230 Epoch 4 Batch 159/1077 - Train Accuracy: 0.9706, Validation Accuracy: 0.9730, Loss: 0.0156 Epoch 4 Batch 160/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9762, Loss: 0.0176 Epoch 4 Batch 161/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9695, Loss: 0.0116 Epoch 4 Batch 162/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9670, Loss: 0.0170 Epoch 4 Batch 163/1077 - Train Accuracy: 0.9786, Validation Accuracy: 0.9609, Loss: 0.0248 Epoch 4 Batch 164/1077 - Train Accuracy: 0.9754, Validation Accuracy: 0.9599, Loss: 0.0172 Epoch 4 Batch 165/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9638, Loss: 0.0169 Epoch 4 Batch 166/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9616, Loss: 0.0202 Epoch 4 Batch 167/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9609, Loss: 0.0149 Epoch 4 Batch 168/1077 - Train Accuracy: 0.9700, Validation Accuracy: 0.9606, Loss: 0.0239 Epoch 4 Batch 169/1077 - Train Accuracy: 0.9695, Validation Accuracy: 0.9560, Loss: 0.0200 Epoch 4 Batch 170/1077 - Train Accuracy: 0.9617, Validation Accuracy: 0.9609, Loss: 0.0206 Epoch 4 Batch 171/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9585, Loss: 0.0176 Epoch 4 Batch 172/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9588, Loss: 0.0155 Epoch 4 Batch 173/1077 - Train Accuracy: 0.9696, Validation Accuracy: 0.9592, Loss: 0.0174 Epoch 4 Batch 
174/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9613, Loss: 0.0136 Epoch 4 Batch 175/1077 - Train Accuracy: 0.9477, Validation Accuracy: 0.9634, Loss: 0.0240 Epoch 4 Batch 176/1077 - Train Accuracy: 0.9578, Validation Accuracy: 0.9645, Loss: 0.0140 Epoch 4 Batch 177/1077 - Train Accuracy: 0.9856, Validation Accuracy: 0.9648, Loss: 0.0190 Epoch 4 Batch 178/1077 - Train Accuracy: 0.9711, Validation Accuracy: 0.9602, Loss: 0.0171 Epoch 4 Batch 179/1077 - Train Accuracy: 0.9720, Validation Accuracy: 0.9595, Loss: 0.0173 Epoch 4 Batch 180/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9620, Loss: 0.0129 Epoch 4 Batch 181/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9620, Loss: 0.0197 Epoch 4 Batch 182/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9648, Loss: 0.0240 Epoch 4 Batch 183/1077 - Train Accuracy: 0.9707, Validation Accuracy: 0.9648, Loss: 0.0193 Epoch 4 Batch 184/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9698, Loss: 0.0168 Epoch 4 Batch 185/1077 - Train Accuracy: 0.9801, Validation Accuracy: 0.9695, Loss: 0.0191 Epoch 4 Batch 186/1077 - Train Accuracy: 0.9704, Validation Accuracy: 0.9670, Loss: 0.0152 Epoch 4 Batch 187/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9606, Loss: 0.0146 Epoch 4 Batch 188/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9602, Loss: 0.0165 Epoch 4 Batch 189/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9577, Loss: 0.0152 Epoch 4 Batch 190/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9624, Loss: 0.0129 Epoch 4 Batch 191/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9624, Loss: 0.0142 Epoch 4 Batch 192/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9627, Loss: 0.0161 Epoch 4 Batch 193/1077 - Train Accuracy: 0.9770, Validation Accuracy: 0.9592, Loss: 0.0160 Epoch 4 Batch 194/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9659, Loss: 0.0138 Epoch 4 Batch 195/1077 - Train Accuracy: 0.9711, Validation Accuracy: 0.9673, Loss: 0.0140 Epoch 4 
Batch 196/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9663, Loss: 0.0138 Epoch 4 Batch 197/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9663, Loss: 0.0170 Epoch 4 Batch 198/1077 - Train Accuracy: 0.9706, Validation Accuracy: 0.9663, Loss: 0.0216 Epoch 4 Batch 199/1077 - Train Accuracy: 0.9668, Validation Accuracy: 0.9684, Loss: 0.0179 Epoch 4 Batch 200/1077 - Train Accuracy: 0.9613, Validation Accuracy: 0.9702, Loss: 0.0204 Epoch 4 Batch 201/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9659, Loss: 0.0122 Epoch 4 Batch 202/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9659, Loss: 0.0137 Epoch 4 Batch 203/1077 - Train Accuracy: 0.9707, Validation Accuracy: 0.9613, Loss: 0.0156 Epoch 4 Batch 204/1077 - Train Accuracy: 0.9734, Validation Accuracy: 0.9577, Loss: 0.0261 Epoch 4 Batch 205/1077 - Train Accuracy: 0.9406, Validation Accuracy: 0.9577, Loss: 0.0296 Epoch 4 Batch 206/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9574, Loss: 0.0134 Epoch 4 Batch 207/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9620, Loss: 0.0150 Epoch 4 Batch 208/1077 - Train Accuracy: 0.9740, Validation Accuracy: 0.9620, Loss: 0.0211 Epoch 4 Batch 209/1077 - Train Accuracy: 0.9829, Validation Accuracy: 0.9634, Loss: 0.0138 Epoch 4 Batch 210/1077 - Train Accuracy: 0.9825, Validation Accuracy: 0.9638, Loss: 0.0167 Epoch 4 Batch 211/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9663, Loss: 0.0149 Epoch 4 Batch 212/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9634, Loss: 0.0105 Epoch 4 Batch 213/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9688, Loss: 0.0141 Epoch 4 Batch 214/1077 - Train Accuracy: 0.9676, Validation Accuracy: 0.9677, Loss: 0.0135 Epoch 4 Batch 215/1077 - Train Accuracy: 0.9570, Validation Accuracy: 0.9627, Loss: 0.0258 Epoch 4 Batch 216/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9567, Loss: 0.0159 Epoch 4 Batch 217/1077 - Train Accuracy: 0.9707, Validation Accuracy: 0.9542, Loss: 0.0140 Epoch 
4 Batch 218/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9549, Loss: 0.0179 Epoch 4 Batch 219/1077 - Train Accuracy: 0.9695, Validation Accuracy: 0.9545, Loss: 0.0133 Epoch 4 Batch 220/1077 - Train Accuracy: 0.9683, Validation Accuracy: 0.9549, Loss: 0.0183 Epoch 4 Batch 221/1077 - Train Accuracy: 0.9729, Validation Accuracy: 0.9588, Loss: 0.0182 Epoch 4 Batch 222/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9588, Loss: 0.0188 Epoch 4 Batch 223/1077 - Train Accuracy: 0.9658, Validation Accuracy: 0.9585, Loss: 0.0168 Epoch 4 Batch 224/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9585, Loss: 0.0151 Epoch 4 Batch 225/1077 - Train Accuracy: 0.9633, Validation Accuracy: 0.9585, Loss: 0.0225 Epoch 4 Batch 226/1077 - Train Accuracy: 0.9754, Validation Accuracy: 0.9638, Loss: 0.0171 Epoch 4 Batch 227/1077 - Train Accuracy: 0.9539, Validation Accuracy: 0.9585, Loss: 0.0216 Epoch 4 Batch 228/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9535, Loss: 0.0159 Epoch 4 Batch 229/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9535, Loss: 0.0132 Epoch 4 Batch 230/1077 - Train Accuracy: 0.9847, Validation Accuracy: 0.9545, Loss: 0.0153 Epoch 4 Batch 231/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9545, Loss: 0.0170 Epoch 4 Batch 232/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9648, Loss: 0.0101 Epoch 4 Batch 233/1077 - Train Accuracy: 0.9590, Validation Accuracy: 0.9702, Loss: 0.0243 Epoch 4 Batch 234/1077 - Train Accuracy: 0.9706, Validation Accuracy: 0.9748, Loss: 0.0198 Epoch 4 Batch 235/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9751, Loss: 0.0190 Epoch 4 Batch 236/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9751, Loss: 0.0238 Epoch 4 Batch 237/1077 - Train Accuracy: 0.9829, Validation Accuracy: 0.9751, Loss: 0.0131 Epoch 4 Batch 238/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9755, Loss: 0.0144 Epoch 4 Batch 239/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9755, Loss: 0.0111 
Epoch 4 Batch 240/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9705, Loss: 0.0134 Epoch 4 Batch 241/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9702, Loss: 0.0100 Epoch 4 Batch 242/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9602, Loss: 0.0121 Epoch 4 Batch 243/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9638, Loss: 0.0208 Epoch 4 Batch 244/1077 - Train Accuracy: 0.9865, Validation Accuracy: 0.9638, Loss: 0.0135 Epoch 4 Batch 245/1077 - Train Accuracy: 0.9661, Validation Accuracy: 0.9652, Loss: 0.0166 Epoch 4 Batch 246/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9670, Loss: 0.0212 Epoch 4 Batch 247/1077 - Train Accuracy: 0.9509, Validation Accuracy: 0.9620, Loss: 0.0196 Epoch 4 Batch 248/1077 - Train Accuracy: 0.9660, Validation Accuracy: 0.9666, Loss: 0.0180 Epoch 4 Batch 249/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9716, Loss: 0.0124 Epoch 4 Batch 250/1077 - Train Accuracy: 0.9780, Validation Accuracy: 0.9670, Loss: 0.0188 Epoch 4 Batch 251/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9712, Loss: 0.0273 Epoch 4 Batch 252/1077 - Train Accuracy: 0.9770, Validation Accuracy: 0.9712, Loss: 0.0171 Epoch 4 Batch 253/1077 - Train Accuracy: 0.9599, Validation Accuracy: 0.9702, Loss: 0.0156 Epoch 4 Batch 254/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9702, Loss: 0.0185 Epoch 4 Batch 255/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9702, Loss: 0.0159 Epoch 4 Batch 256/1077 - Train Accuracy: 0.9656, Validation Accuracy: 0.9702, Loss: 0.0280 Epoch 4 Batch 257/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9702, Loss: 0.0149 Epoch 4 Batch 258/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9705, Loss: 0.0161 Epoch 4 Batch 259/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9705, Loss: 0.0129 Epoch 4 Batch 260/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9705, Loss: 0.0123 Epoch 4 Batch 261/1077 - Train Accuracy: 0.9877, Validation Accuracy: 0.9656, Loss: 
0.0181 Epoch 4 Batch 262/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9702, Loss: 0.0169 Epoch 4 Batch 263/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9727, Loss: 0.0117 Epoch 4 Batch 264/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9727, Loss: 0.0178 Epoch 4 Batch 265/1077 - Train Accuracy: 0.9711, Validation Accuracy: 0.9727, Loss: 0.0157 Epoch 4 Batch 266/1077 - Train Accuracy: 0.9632, Validation Accuracy: 0.9727, Loss: 0.0164 Epoch 4 Batch 267/1077 - Train Accuracy: 0.9830, Validation Accuracy: 0.9712, Loss: 0.0164 Epoch 4 Batch 268/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9709, Loss: 0.0208 Epoch 4 Batch 269/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9723, Loss: 0.0237 Epoch 4 Batch 270/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9723, Loss: 0.0230 Epoch 4 Batch 271/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9723, Loss: 0.0178 Epoch 4 Batch 272/1077 - Train Accuracy: 0.9788, Validation Accuracy: 0.9627, Loss: 0.0283 Epoch 4 Batch 273/1077 - Train Accuracy: 0.9825, Validation Accuracy: 0.9627, Loss: 0.0142 Epoch 4 Batch 274/1077 - Train Accuracy: 0.9728, Validation Accuracy: 0.9631, Loss: 0.0158 Epoch 4 Batch 275/1077 - Train Accuracy: 0.9732, Validation Accuracy: 0.9535, Loss: 0.0207 Epoch 4 Batch 276/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9485, Loss: 0.0236 Epoch 4 Batch 277/1077 - Train Accuracy: 0.9829, Validation Accuracy: 0.9489, Loss: 0.0160 Epoch 4 Batch 278/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9517, Loss: 0.0221 Epoch 4 Batch 279/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9577, Loss: 0.0182 Epoch 4 Batch 280/1077 - Train Accuracy: 0.9680, Validation Accuracy: 0.9602, Loss: 0.0176 Epoch 4 Batch 281/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9545, Loss: 0.0230 Epoch 4 Batch 282/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9613, Loss: 0.0266 Epoch 4 Batch 283/1077 - Train Accuracy: 0.9938, Validation Accuracy: 0.9577, 
Loss: 0.0195 Epoch 4 Batch 284/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9631, Loss: 0.0241 Epoch 4 Batch 285/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9624, Loss: 0.0183 Epoch 4 Batch 286/1077 - Train Accuracy: 0.9728, Validation Accuracy: 0.9620, Loss: 0.0207 Epoch 4 Batch 287/1077 - Train Accuracy: 0.9457, Validation Accuracy: 0.9574, Loss: 0.0282 Epoch 4 Batch 288/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9606, Loss: 0.0242 Epoch 4 Batch 289/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9506, Loss: 0.0170 Epoch 4 Batch 290/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9485, Loss: 0.0263 Epoch 4 Batch 291/1077 - Train Accuracy: 0.9587, Validation Accuracy: 0.9411, Loss: 0.0268 Epoch 4 Batch 292/1077 - Train Accuracy: 0.9710, Validation Accuracy: 0.9563, Loss: 0.0213 Epoch 4 Batch 293/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9595, Loss: 0.0158 Epoch 4 Batch 294/1077 - Train Accuracy: 0.9723, Validation Accuracy: 0.9659, Loss: 0.0143 Epoch 4 Batch 295/1077 - Train Accuracy: 0.9725, Validation Accuracy: 0.9592, Loss: 0.0232 Epoch 4 Batch 296/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9588, Loss: 0.0166 Epoch 4 Batch 297/1077 - Train Accuracy: 0.9492, Validation Accuracy: 0.9588, Loss: 0.0226 Epoch 4 Batch 298/1077 - Train Accuracy: 0.9602, Validation Accuracy: 0.9570, Loss: 0.0217 Epoch 4 Batch 299/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9670, Loss: 0.0187 Epoch 4 Batch 300/1077 - Train Accuracy: 0.9782, Validation Accuracy: 0.9624, Loss: 0.0145 Epoch 4 Batch 301/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9677, Loss: 0.0160 Epoch 4 Batch 302/1077 - Train Accuracy: 0.9953, Validation Accuracy: 0.9577, Loss: 0.0163 Epoch 4 Batch 303/1077 - Train Accuracy: 0.9590, Validation Accuracy: 0.9577, Loss: 0.0214 Epoch 4 Batch 304/1077 - Train Accuracy: 0.9725, Validation Accuracy: 0.9531, Loss: 0.0235 Epoch 4 Batch 305/1077 - Train Accuracy: 0.9836, Validation Accuracy: 
0.9648, Loss: 0.0172 Epoch 4 Batch 306/1077 - Train Accuracy: 0.9792, Validation Accuracy: 0.9695, Loss: 0.0219 Epoch 4 Batch 307/1077 - Train Accuracy: 0.9609, Validation Accuracy: 0.9684, Loss: 0.0168 Epoch 4 Batch 308/1077 - Train Accuracy: 0.9633, Validation Accuracy: 0.9641, Loss: 0.0225 Epoch 4 Batch 309/1077 - Train Accuracy: 0.9784, Validation Accuracy: 0.9641, Loss: 0.0145 Epoch 4 Batch 310/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9613, Loss: 0.0212 Epoch 4 Batch 311/1077 - Train Accuracy: 0.9673, Validation Accuracy: 0.9595, Loss: 0.0174 Epoch 4 Batch 312/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9599, Loss: 0.0219 Epoch 4 Batch 313/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9648, Loss: 0.0111 Epoch 4 Batch 314/1077 - Train Accuracy: 0.9734, Validation Accuracy: 0.9624, Loss: 0.0192 Epoch 4 Batch 315/1077 - Train Accuracy: 0.9833, Validation Accuracy: 0.9641, Loss: 0.0176 Epoch 4 Batch 316/1077 - Train Accuracy: 0.9751, Validation Accuracy: 0.9641, Loss: 0.0182 Epoch 4 Batch 317/1077 - Train Accuracy: 0.9831, Validation Accuracy: 0.9609, Loss: 0.0196 Epoch 4 Batch 318/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9563, Loss: 0.0126 Epoch 4 Batch 319/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9588, Loss: 0.0180 Epoch 4 Batch 320/1077 - Train Accuracy: 0.9711, Validation Accuracy: 0.9435, Loss: 0.0230 Epoch 4 Batch 321/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9435, Loss: 0.0155 Epoch 4 Batch 322/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9428, Loss: 0.0152 Epoch 4 Batch 323/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9446, Loss: 0.0170 Epoch 4 Batch 324/1077 - Train Accuracy: 0.9770, Validation Accuracy: 0.9450, Loss: 0.0143 Epoch 4 Batch 325/1077 - Train Accuracy: 0.9714, Validation Accuracy: 0.9567, Loss: 0.0197 Epoch 4 Batch 326/1077 - Train Accuracy: 0.9885, Validation Accuracy: 0.9609, Loss: 0.0148 Epoch 4 Batch 327/1077 - Train Accuracy: 0.9816, Validation 
Accuracy: 0.9663, Loss: 0.0233 Epoch 4 Batch 328/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9666, Loss: 0.0241 Epoch 4 Batch 329/1077 - Train Accuracy: 0.9734, Validation Accuracy: 0.9666, Loss: 0.0179 Epoch 4 Batch 330/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9670, Loss: 0.0153 Epoch 4 Batch 331/1077 - Train Accuracy: 0.9786, Validation Accuracy: 0.9670, Loss: 0.0163 Epoch 4 Batch 332/1077 - Train Accuracy: 0.9814, Validation Accuracy: 0.9616, Loss: 0.0090 Epoch 4 Batch 333/1077 - Train Accuracy: 0.9753, Validation Accuracy: 0.9521, Loss: 0.0133 Epoch 4 Batch 334/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9485, Loss: 0.0145 Epoch 4 Batch 335/1077 - Train Accuracy: 0.9818, Validation Accuracy: 0.9485, Loss: 0.0216 Epoch 4 Batch 336/1077 - Train Accuracy: 0.9578, Validation Accuracy: 0.9485, Loss: 0.0317 Epoch 4 Batch 337/1077 - Train Accuracy: 0.9660, Validation Accuracy: 0.9531, Loss: 0.0197 Epoch 4 Batch 338/1077 - Train Accuracy: 0.9445, Validation Accuracy: 0.9531, Loss: 0.0250 Epoch 4 Batch 339/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9517, Loss: 0.0133 Epoch 4 Batch 340/1077 - Train Accuracy: 0.9868, Validation Accuracy: 0.9513, Loss: 0.0172 Epoch 4 Batch 341/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9588, Loss: 0.0218 Epoch 4 Batch 342/1077 - Train Accuracy: 0.9892, Validation Accuracy: 0.9588, Loss: 0.0136 Epoch 4 Batch 343/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9585, Loss: 0.0155 Epoch 4 Batch 344/1077 - Train Accuracy: 0.9746, Validation Accuracy: 0.9585, Loss: 0.0178 Epoch 4 Batch 345/1077 - Train Accuracy: 0.9795, Validation Accuracy: 0.9556, Loss: 0.0132 Epoch 4 Batch 346/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9531, Loss: 0.0158 Epoch 4 Batch 347/1077 - Train Accuracy: 0.9903, Validation Accuracy: 0.9474, Loss: 0.0121 Epoch 4 Batch 348/1077 - Train Accuracy: 0.9725, Validation Accuracy: 0.9588, Loss: 0.0178 Epoch 4 Batch 349/1077 - Train Accuracy: 0.9582, 
Validation Accuracy: 0.9592, Loss: 0.0177 Epoch 4 Batch 350/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9592, Loss: 0.0155 Epoch 4 Batch 351/1077 - Train Accuracy: 0.9749, Validation Accuracy: 0.9553, Loss: 0.0167 Epoch 4 Batch 352/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9549, Loss: 0.0109 Epoch 4 Batch 353/1077 - Train Accuracy: 0.9819, Validation Accuracy: 0.9613, Loss: 0.0198 Epoch 4 Batch 354/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9613, Loss: 0.0224 Epoch 4 Batch 355/1077 - Train Accuracy: 0.9784, Validation Accuracy: 0.9613, Loss: 0.0137 Epoch 4 Batch 356/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9645, Loss: 0.0182 Epoch 4 Batch 357/1077 - Train Accuracy: 0.9635, Validation Accuracy: 0.9666, Loss: 0.0168 Epoch 4 Batch 358/1077 - Train Accuracy: 0.9618, Validation Accuracy: 0.9666, Loss: 0.0221 Epoch 4 Batch 359/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9659, Loss: 0.0148 Epoch 4 Batch 360/1077 - Train Accuracy: 0.9652, Validation Accuracy: 0.9656, Loss: 0.0143 Epoch 4 Batch 361/1077 - Train Accuracy: 0.9745, Validation Accuracy: 0.9602, Loss: 0.0219 Epoch 4 Batch 362/1077 - Train Accuracy: 0.9754, Validation Accuracy: 0.9588, Loss: 0.0215 Epoch 4 Batch 363/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9613, Loss: 0.0178 Epoch 4 Batch 364/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9616, Loss: 0.0205 Epoch 4 Batch 365/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9659, Loss: 0.0125 Epoch 4 Batch 366/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9616, Loss: 0.0151 Epoch 4 Batch 367/1077 - Train Accuracy: 0.9821, Validation Accuracy: 0.9616, Loss: 0.0112 Epoch 4 Batch 368/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9545, Loss: 0.0178 Epoch 4 Batch 369/1077 - Train Accuracy: 0.9598, Validation Accuracy: 0.9588, Loss: 0.0173 Epoch 4 Batch 370/1077 - Train Accuracy: 0.9892, Validation Accuracy: 0.9588, Loss: 0.0177 Epoch 4 Batch 371/1077 - Train Accuracy: 
Epoch 4 Batches 372–831/1077 (training log abridged) - Train Accuracy: ~0.95–0.99, Validation Accuracy: ~0.94–0.98, Loss: ~0.008–0.036
0.0149 Epoch 4 Batch 832/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9556, Loss: 0.0138 Epoch 4 Batch 833/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9556, Loss: 0.0166 Epoch 4 Batch 834/1077 - Train Accuracy: 0.9876, Validation Accuracy: 0.9492, Loss: 0.0148 Epoch 4 Batch 835/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9492, Loss: 0.0151 Epoch 4 Batch 836/1077 - Train Accuracy: 0.9864, Validation Accuracy: 0.9542, Loss: 0.0111 Epoch 4 Batch 837/1077 - Train Accuracy: 0.9672, Validation Accuracy: 0.9542, Loss: 0.0202 Epoch 4 Batch 838/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9606, Loss: 0.0175 Epoch 4 Batch 839/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9581, Loss: 0.0159 Epoch 4 Batch 840/1077 - Train Accuracy: 0.9770, Validation Accuracy: 0.9581, Loss: 0.0164 Epoch 4 Batch 841/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9680, Loss: 0.0242 Epoch 4 Batch 842/1077 - Train Accuracy: 0.9801, Validation Accuracy: 0.9695, Loss: 0.0115 Epoch 4 Batch 843/1077 - Train Accuracy: 0.9792, Validation Accuracy: 0.9691, Loss: 0.0107 Epoch 4 Batch 844/1077 - Train Accuracy: 0.9862, Validation Accuracy: 0.9691, Loss: 0.0112 Epoch 4 Batch 845/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9691, Loss: 0.0114 Epoch 4 Batch 846/1077 - Train Accuracy: 0.9664, Validation Accuracy: 0.9616, Loss: 0.0203 Epoch 4 Batch 847/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9666, Loss: 0.0184 Epoch 4 Batch 848/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9663, Loss: 0.0127 Epoch 4 Batch 849/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9663, Loss: 0.0122 Epoch 4 Batch 850/1077 - Train Accuracy: 0.9717, Validation Accuracy: 0.9727, Loss: 0.0268 Epoch 4 Batch 851/1077 - Train Accuracy: 0.9799, Validation Accuracy: 0.9677, Loss: 0.0228 Epoch 4 Batch 852/1077 - Train Accuracy: 0.9613, Validation Accuracy: 0.9677, Loss: 0.0226 Epoch 4 Batch 853/1077 - Train Accuracy: 0.9746, Validation Accuracy: 0.9698, 
Loss: 0.0181 Epoch 4 Batch 854/1077 - Train Accuracy: 0.9695, Validation Accuracy: 0.9794, Loss: 0.0193 Epoch 4 Batch 855/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9794, Loss: 0.0172 Epoch 4 Batch 856/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9794, Loss: 0.0158 Epoch 4 Batch 857/1077 - Train Accuracy: 0.9660, Validation Accuracy: 0.9787, Loss: 0.0160 Epoch 4 Batch 858/1077 - Train Accuracy: 0.9851, Validation Accuracy: 0.9734, Loss: 0.0080 Epoch 4 Batch 859/1077 - Train Accuracy: 0.9816, Validation Accuracy: 0.9719, Loss: 0.0154 Epoch 4 Batch 860/1077 - Train Accuracy: 0.9732, Validation Accuracy: 0.9709, Loss: 0.0181 Epoch 4 Batch 861/1077 - Train Accuracy: 0.9723, Validation Accuracy: 0.9730, Loss: 0.0126 Epoch 4 Batch 862/1077 - Train Accuracy: 0.9590, Validation Accuracy: 0.9780, Loss: 0.0214 Epoch 4 Batch 863/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9755, Loss: 0.0148 Epoch 4 Batch 864/1077 - Train Accuracy: 0.9734, Validation Accuracy: 0.9737, Loss: 0.0161 Epoch 4 Batch 865/1077 - Train Accuracy: 0.9560, Validation Accuracy: 0.9812, Loss: 0.0223 Epoch 4 Batch 866/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9830, Loss: 0.0166 Epoch 4 Batch 867/1077 - Train Accuracy: 0.9613, Validation Accuracy: 0.9833, Loss: 0.0467 Epoch 4 Batch 868/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9883, Loss: 0.0175 Epoch 4 Batch 869/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9833, Loss: 0.0178 Epoch 4 Batch 870/1077 - Train Accuracy: 0.9786, Validation Accuracy: 0.9680, Loss: 0.0130 Epoch 4 Batch 871/1077 - Train Accuracy: 0.9648, Validation Accuracy: 0.9631, Loss: 0.0141 Epoch 4 Batch 872/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9588, Loss: 0.0157 Epoch 4 Batch 873/1077 - Train Accuracy: 0.9648, Validation Accuracy: 0.9585, Loss: 0.0163 Epoch 4 Batch 874/1077 - Train Accuracy: 0.9437, Validation Accuracy: 0.9684, Loss: 0.0278 Epoch 4 Batch 875/1077 - Train Accuracy: 0.9883, Validation Accuracy: 
0.9751, Loss: 0.0153 Epoch 4 Batch 876/1077 - Train Accuracy: 0.9754, Validation Accuracy: 0.9751, Loss: 0.0165 Epoch 4 Batch 877/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9730, Loss: 0.0120 Epoch 4 Batch 878/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9734, Loss: 0.0129 Epoch 4 Batch 879/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9787, Loss: 0.0100 Epoch 4 Batch 880/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9759, Loss: 0.0178 Epoch 4 Batch 881/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9759, Loss: 0.0200 Epoch 4 Batch 882/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9759, Loss: 0.0138 Epoch 4 Batch 883/1077 - Train Accuracy: 0.9613, Validation Accuracy: 0.9759, Loss: 0.0208 Epoch 4 Batch 884/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9631, Loss: 0.0173 Epoch 4 Batch 885/1077 - Train Accuracy: 0.9769, Validation Accuracy: 0.9585, Loss: 0.0102 Epoch 4 Batch 886/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9627, Loss: 0.0185 Epoch 4 Batch 887/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9627, Loss: 0.0164 Epoch 4 Batch 888/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9638, Loss: 0.0149 Epoch 4 Batch 889/1077 - Train Accuracy: 0.9707, Validation Accuracy: 0.9595, Loss: 0.0149 Epoch 4 Batch 890/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9645, Loss: 0.0157 Epoch 4 Batch 891/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9638, Loss: 0.0092 Epoch 4 Batch 892/1077 - Train Accuracy: 0.9688, Validation Accuracy: 0.9688, Loss: 0.0134 Epoch 4 Batch 893/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9737, Loss: 0.0175 Epoch 4 Batch 894/1077 - Train Accuracy: 0.9825, Validation Accuracy: 0.9712, Loss: 0.0135 Epoch 4 Batch 895/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9663, Loss: 0.0128 Epoch 4 Batch 896/1077 - Train Accuracy: 0.9815, Validation Accuracy: 0.9620, Loss: 0.0125 Epoch 4 Batch 897/1077 - Train Accuracy: 0.9632, Validation 
Accuracy: 0.9663, Loss: 0.0174 Epoch 4 Batch 898/1077 - Train Accuracy: 0.9792, Validation Accuracy: 0.9663, Loss: 0.0140 Epoch 4 Batch 899/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9670, Loss: 0.0187 Epoch 4 Batch 900/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9670, Loss: 0.0196 Epoch 4 Batch 901/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9719, Loss: 0.0212 Epoch 4 Batch 902/1077 - Train Accuracy: 0.9706, Validation Accuracy: 0.9723, Loss: 0.0173 Epoch 4 Batch 903/1077 - Train Accuracy: 0.9629, Validation Accuracy: 0.9741, Loss: 0.0203 Epoch 4 Batch 904/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9691, Loss: 0.0157 Epoch 4 Batch 905/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9716, Loss: 0.0130 Epoch 4 Batch 906/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9741, Loss: 0.0156 Epoch 4 Batch 907/1077 - Train Accuracy: 0.9668, Validation Accuracy: 0.9759, Loss: 0.0180 Epoch 4 Batch 908/1077 - Train Accuracy: 0.9801, Validation Accuracy: 0.9759, Loss: 0.0194 Epoch 4 Batch 909/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9808, Loss: 0.0184 Epoch 4 Batch 910/1077 - Train Accuracy: 0.9888, Validation Accuracy: 0.9851, Loss: 0.0195 Epoch 4 Batch 911/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9854, Loss: 0.0224 Epoch 4 Batch 912/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9854, Loss: 0.0125 Epoch 4 Batch 913/1077 - Train Accuracy: 0.9625, Validation Accuracy: 0.9808, Loss: 0.0211 Epoch 4 Batch 914/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9805, Loss: 0.0441 Epoch 4 Batch 915/1077 - Train Accuracy: 0.9745, Validation Accuracy: 0.9808, Loss: 0.0125 Epoch 4 Batch 916/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9808, Loss: 0.0138 Epoch 4 Batch 917/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9808, Loss: 0.0157 Epoch 4 Batch 918/1077 - Train Accuracy: 0.9903, Validation Accuracy: 0.9808, Loss: 0.0116 Epoch 4 Batch 919/1077 - Train Accuracy: 0.9942, 
Validation Accuracy: 0.9808, Loss: 0.0093 Epoch 4 Batch 920/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9808, Loss: 0.0132 Epoch 4 Batch 921/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9801, Loss: 0.0180 Epoch 4 Batch 922/1077 - Train Accuracy: 0.9721, Validation Accuracy: 0.9801, Loss: 0.0156 Epoch 4 Batch 923/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9734, Loss: 0.0088 Epoch 4 Batch 924/1077 - Train Accuracy: 0.9725, Validation Accuracy: 0.9691, Loss: 0.0291 Epoch 4 Batch 925/1077 - Train Accuracy: 0.9874, Validation Accuracy: 0.9695, Loss: 0.0122 Epoch 4 Batch 926/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9695, Loss: 0.0133 Epoch 4 Batch 927/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9790, Loss: 0.0235 Epoch 4 Batch 928/1077 - Train Accuracy: 0.9746, Validation Accuracy: 0.9787, Loss: 0.0145 Epoch 4 Batch 929/1077 - Train Accuracy: 0.9973, Validation Accuracy: 0.9787, Loss: 0.0115 Epoch 4 Batch 930/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9833, Loss: 0.0144 Epoch 4 Batch 931/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9837, Loss: 0.0131 Epoch 4 Batch 932/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9833, Loss: 0.0135 Epoch 4 Batch 933/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9858, Loss: 0.0119 Epoch 4 Batch 934/1077 - Train Accuracy: 0.9812, Validation Accuracy: 0.9808, Loss: 0.0155 Epoch 4 Batch 935/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9822, Loss: 0.0115 Epoch 4 Batch 936/1077 - Train Accuracy: 0.9784, Validation Accuracy: 0.9819, Loss: 0.0186 Epoch 4 Batch 937/1077 - Train Accuracy: 0.9823, Validation Accuracy: 0.9748, Loss: 0.0187 Epoch 4 Batch 938/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9751, Loss: 0.0192 Epoch 4 Batch 939/1077 - Train Accuracy: 0.9754, Validation Accuracy: 0.9748, Loss: 0.0183 Epoch 4 Batch 940/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9759, Loss: 0.0143 Epoch 4 Batch 941/1077 - Train Accuracy: 
0.9833, Validation Accuracy: 0.9716, Loss: 0.0115 Epoch 4 Batch 942/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9716, Loss: 0.0194 Epoch 4 Batch 943/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9712, Loss: 0.0157 Epoch 4 Batch 944/1077 - Train Accuracy: 0.9847, Validation Accuracy: 0.9705, Loss: 0.0101 Epoch 4 Batch 945/1077 - Train Accuracy: 0.9965, Validation Accuracy: 0.9759, Loss: 0.0103 Epoch 4 Batch 946/1077 - Train Accuracy: 0.9984, Validation Accuracy: 0.9759, Loss: 0.0097 Epoch 4 Batch 947/1077 - Train Accuracy: 0.9951, Validation Accuracy: 0.9705, Loss: 0.0138 Epoch 4 Batch 948/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9695, Loss: 0.0151 Epoch 4 Batch 949/1077 - Train Accuracy: 0.9795, Validation Accuracy: 0.9698, Loss: 0.0147 Epoch 4 Batch 950/1077 - Train Accuracy: 0.9862, Validation Accuracy: 0.9680, Loss: 0.0105 Epoch 4 Batch 951/1077 - Train Accuracy: 0.9833, Validation Accuracy: 0.9748, Loss: 0.0182 Epoch 4 Batch 952/1077 - Train Accuracy: 0.9961, Validation Accuracy: 0.9748, Loss: 0.0096 Epoch 4 Batch 953/1077 - Train Accuracy: 0.9803, Validation Accuracy: 0.9755, Loss: 0.0128 Epoch 4 Batch 954/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9780, Loss: 0.0188 Epoch 4 Batch 955/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9780, Loss: 0.0192 Epoch 4 Batch 956/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9759, Loss: 0.0216 Epoch 4 Batch 957/1077 - Train Accuracy: 0.9993, Validation Accuracy: 0.9712, Loss: 0.0090 Epoch 4 Batch 958/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9641, Loss: 0.0155 Epoch 4 Batch 959/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9666, Loss: 0.0171 Epoch 4 Batch 960/1077 - Train Accuracy: 0.9885, Validation Accuracy: 0.9613, Loss: 0.0107 Epoch 4 Batch 961/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9677, Loss: 0.0131 Epoch 4 Batch 962/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9702, Loss: 0.0158 Epoch 4 Batch 963/1077 - Train 
Accuracy: 0.9844, Validation Accuracy: 0.9751, Loss: 0.0195 Epoch 4 Batch 964/1077 - Train Accuracy: 0.9903, Validation Accuracy: 0.9751, Loss: 0.0174 Epoch 4 Batch 965/1077 - Train Accuracy: 0.9725, Validation Accuracy: 0.9748, Loss: 0.0194 Epoch 4 Batch 966/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9751, Loss: 0.0092 Epoch 4 Batch 967/1077 - Train Accuracy: 0.9645, Validation Accuracy: 0.9801, Loss: 0.0142 Epoch 4 Batch 968/1077 - Train Accuracy: 0.9648, Validation Accuracy: 0.9748, Loss: 0.0185 Epoch 4 Batch 969/1077 - Train Accuracy: 0.9587, Validation Accuracy: 0.9741, Loss: 0.0230 Epoch 4 Batch 970/1077 - Train Accuracy: 0.9652, Validation Accuracy: 0.9741, Loss: 0.0151 Epoch 4 Batch 971/1077 - Train Accuracy: 0.9907, Validation Accuracy: 0.9627, Loss: 0.0166 Epoch 4 Batch 972/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9581, Loss: 0.0128 Epoch 4 Batch 973/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9609, Loss: 0.0118 Epoch 4 Batch 974/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9609, Loss: 0.0085 Epoch 4 Batch 975/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9609, Loss: 0.0126 Epoch 4 Batch 976/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9648, Loss: 0.0124 Epoch 4 Batch 977/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9787, Loss: 0.0085 Epoch 4 Batch 978/1077 - Train Accuracy: 0.9734, Validation Accuracy: 0.9723, Loss: 0.0182 Epoch 4 Batch 979/1077 - Train Accuracy: 0.9745, Validation Accuracy: 0.9719, Loss: 0.0136 Epoch 4 Batch 980/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9691, Loss: 0.0142 Epoch 4 Batch 981/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9645, Loss: 0.0154 Epoch 4 Batch 982/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9645, Loss: 0.0118 Epoch 4 Batch 983/1077 - Train Accuracy: 0.9679, Validation Accuracy: 0.9645, Loss: 0.0137 Epoch 4 Batch 984/1077 - Train Accuracy: 0.9531, Validation Accuracy: 0.9592, Loss: 0.0177 Epoch 4 Batch 985/1077 - 
Train Accuracy: 0.9934, Validation Accuracy: 0.9702, Loss: 0.0093 Epoch 4 Batch 986/1077 - Train Accuracy: 0.9754, Validation Accuracy: 0.9698, Loss: 0.0138 Epoch 4 Batch 987/1077 - Train Accuracy: 0.9661, Validation Accuracy: 0.9581, Loss: 0.0112 Epoch 4 Batch 988/1077 - Train Accuracy: 0.9801, Validation Accuracy: 0.9624, Loss: 0.0204 Epoch 4 Batch 989/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9574, Loss: 0.0125 Epoch 4 Batch 990/1077 - Train Accuracy: 0.9819, Validation Accuracy: 0.9549, Loss: 0.0128 Epoch 4 Batch 991/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9595, Loss: 0.0130 Epoch 4 Batch 992/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9556, Loss: 0.0198 Epoch 4 Batch 993/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9471, Loss: 0.0116 Epoch 4 Batch 994/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9478, Loss: 0.0113 Epoch 4 Batch 995/1077 - Train Accuracy: 0.9784, Validation Accuracy: 0.9425, Loss: 0.0134 Epoch 4 Batch 996/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9467, Loss: 0.0122 Epoch 4 Batch 997/1077 - Train Accuracy: 0.9864, Validation Accuracy: 0.9467, Loss: 0.0134 Epoch 4 Batch 998/1077 - Train Accuracy: 0.9609, Validation Accuracy: 0.9489, Loss: 0.0186 Epoch 4 Batch 999/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9592, Loss: 0.0156 Epoch 4 Batch 1000/1077 - Train Accuracy: 0.9803, Validation Accuracy: 0.9627, Loss: 0.0125 Epoch 4 Batch 1001/1077 - Train Accuracy: 0.9833, Validation Accuracy: 0.9648, Loss: 0.0126 Epoch 4 Batch 1002/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9648, Loss: 0.0110 Epoch 4 Batch 1003/1077 - Train Accuracy: 0.9786, Validation Accuracy: 0.9648, Loss: 0.0157 Epoch 4 Batch 1004/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9702, Loss: 0.0122 Epoch 4 Batch 1005/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9624, Loss: 0.0134 Epoch 4 Batch 1006/1077 - Train Accuracy: 0.9680, Validation Accuracy: 0.9567, Loss: 0.0141 Epoch 4 Batch 
1007/1077 - Train Accuracy: 0.9940, Validation Accuracy: 0.9585, Loss: 0.0134 Epoch 4 Batch 1008/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9585, Loss: 0.0222 Epoch 4 Batch 1009/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9585, Loss: 0.0112 Epoch 4 Batch 1010/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9585, Loss: 0.0125 Epoch 4 Batch 1011/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9513, Loss: 0.0116 Epoch 4 Batch 1012/1077 - Train Accuracy: 0.9751, Validation Accuracy: 0.9581, Loss: 0.0126 Epoch 4 Batch 1013/1077 - Train Accuracy: 0.9933, Validation Accuracy: 0.9581, Loss: 0.0116 Epoch 4 Batch 1014/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9631, Loss: 0.0125 Epoch 4 Batch 1015/1077 - Train Accuracy: 0.9641, Validation Accuracy: 0.9719, Loss: 0.0178 Epoch 4 Batch 1016/1077 - Train Accuracy: 0.9747, Validation Accuracy: 0.9645, Loss: 0.0144 Epoch 4 Batch 1017/1077 - Train Accuracy: 0.9868, Validation Accuracy: 0.9645, Loss: 0.0123 Epoch 4 Batch 1018/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9627, Loss: 0.0128 Epoch 4 Batch 1019/1077 - Train Accuracy: 0.9544, Validation Accuracy: 0.9627, Loss: 0.0232 Epoch 4 Batch 1020/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9680, Loss: 0.0137 Epoch 4 Batch 1021/1077 - Train Accuracy: 0.9888, Validation Accuracy: 0.9631, Loss: 0.0141 Epoch 4 Batch 1022/1077 - Train Accuracy: 0.9833, Validation Accuracy: 0.9730, Loss: 0.0134 Epoch 4 Batch 1023/1077 - Train Accuracy: 0.9801, Validation Accuracy: 0.9783, Loss: 0.0159 Epoch 4 Batch 1024/1077 - Train Accuracy: 0.9672, Validation Accuracy: 0.9780, Loss: 0.0188 Epoch 4 Batch 1025/1077 - Train Accuracy: 0.9747, Validation Accuracy: 0.9773, Loss: 0.0127 Epoch 4 Batch 1026/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9748, Loss: 0.0223 Epoch 4 Batch 1027/1077 - Train Accuracy: 0.9746, Validation Accuracy: 0.9769, Loss: 0.0176 Epoch 4 Batch 1028/1077 - Train Accuracy: 0.9751, Validation Accuracy: 0.9769, 
Loss: 0.0142 Epoch 4 Batch 1029/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9819, Loss: 0.0146 Epoch 4 Batch 1030/1077 - Train Accuracy: 0.9949, Validation Accuracy: 0.9819, Loss: 0.0116 Epoch 4 Batch 1031/1077 - Train Accuracy: 0.9774, Validation Accuracy: 0.9769, Loss: 0.0162 Epoch 4 Batch 1032/1077 - Train Accuracy: 0.9732, Validation Accuracy: 0.9780, Loss: 0.0211 Epoch 4 Batch 1033/1077 - Train Accuracy: 0.9680, Validation Accuracy: 0.9684, Loss: 0.0170 Epoch 4 Batch 1034/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9734, Loss: 0.0136 Epoch 4 Batch 1035/1077 - Train Accuracy: 0.9881, Validation Accuracy: 0.9734, Loss: 0.0113 Epoch 4 Batch 1036/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9734, Loss: 0.0171 Epoch 4 Batch 1037/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9766, Loss: 0.0169 Epoch 4 Batch 1038/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9666, Loss: 0.0180 Epoch 4 Batch 1039/1077 - Train Accuracy: 0.9929, Validation Accuracy: 0.9666, Loss: 0.0131 Epoch 4 Batch 1040/1077 - Train Accuracy: 0.9786, Validation Accuracy: 0.9716, Loss: 0.0156 Epoch 4 Batch 1041/1077 - Train Accuracy: 0.9629, Validation Accuracy: 0.9716, Loss: 0.0218 Epoch 4 Batch 1042/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9666, Loss: 0.0113 Epoch 4 Batch 1043/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9716, Loss: 0.0158 Epoch 4 Batch 1044/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9716, Loss: 0.0153 Epoch 4 Batch 1045/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9716, Loss: 0.0163 Epoch 4 Batch 1046/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9574, Loss: 0.0071 Epoch 4 Batch 1047/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9542, Loss: 0.0113 Epoch 4 Batch 1048/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9528, Loss: 0.0136 Epoch 4 Batch 1049/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9588, Loss: 0.0136 Epoch 4 Batch 1050/1077 - Train Accuracy: 0.9852, 
Validation Accuracy: 0.9588, Loss: 0.0088 Epoch 4 Batch 1051/1077 - Train Accuracy: 0.9829, Validation Accuracy: 0.9588, Loss: 0.0192 Epoch 4 Batch 1052/1077 - Train Accuracy: 0.9896, Validation Accuracy: 0.9588, Loss: 0.0130 Epoch 4 Batch 1053/1077 - Train Accuracy: 0.9650, Validation Accuracy: 0.9563, Loss: 0.0148 Epoch 4 Batch 1054/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9506, Loss: 0.0169 Epoch 4 Batch 1055/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9506, Loss: 0.0139 Epoch 4 Batch 1056/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9503, Loss: 0.0110 Epoch 4 Batch 1057/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9503, Loss: 0.0151 Epoch 4 Batch 1058/1077 - Train Accuracy: 0.9799, Validation Accuracy: 0.9503, Loss: 0.0175 Epoch 4 Batch 1059/1077 - Train Accuracy: 0.9663, Validation Accuracy: 0.9535, Loss: 0.0217 Epoch 4 Batch 1060/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9535, Loss: 0.0139 Epoch 4 Batch 1061/1077 - Train Accuracy: 0.9703, Validation Accuracy: 0.9535, Loss: 0.0184 Epoch 4 Batch 1062/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9645, Loss: 0.0136 Epoch 4 Batch 1063/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9698, Loss: 0.0156 Epoch 4 Batch 1064/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9719, Loss: 0.0130 Epoch 4 Batch 1065/1077 - Train Accuracy: 0.9609, Validation Accuracy: 0.9620, Loss: 0.0139 Epoch 4 Batch 1066/1077 - Train Accuracy: 0.9711, Validation Accuracy: 0.9624, Loss: 0.0131 Epoch 4 Batch 1067/1077 - Train Accuracy: 0.9695, Validation Accuracy: 0.9680, Loss: 0.0162 Epoch 4 Batch 1068/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9780, Loss: 0.0099 Epoch 4 Batch 1069/1077 - Train Accuracy: 0.9862, Validation Accuracy: 0.9783, Loss: 0.0102 Epoch 4 Batch 1070/1077 - Train Accuracy: 0.9680, Validation Accuracy: 0.9776, Loss: 0.0104 Epoch 4 Batch 1071/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9776, Loss: 0.0154 Epoch 4 Batch 1072/1077 - 
Train Accuracy: 0.9754, Validation Accuracy: 0.9773, Loss: 0.0181 Epoch 4 Batch 1073/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9773, Loss: 0.0148 Epoch 4 Batch 1074/1077 - Train Accuracy: 0.9847, Validation Accuracy: 0.9773, Loss: 0.0165 Epoch 4 Batch 1075/1077 - Train Accuracy: 0.9753, Validation Accuracy: 0.9773, Loss: 0.0167 Epoch 5 Batch 1/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9801, Loss: 0.0093 Epoch 5 Batch 2/1077 - Train Accuracy: 0.9881, Validation Accuracy: 0.9801, Loss: 0.0111 Epoch 5 Batch 3/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9730, Loss: 0.0158 Epoch 5 Batch 4/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9680, Loss: 0.0122 Epoch 5 Batch 5/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9673, Loss: 0.0210 Epoch 5 Batch 6/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9727, Loss: 0.0097 Epoch 5 Batch 7/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9680, Loss: 0.0126 Epoch 5 Batch 8/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9737, Loss: 0.0154 Epoch 5 Batch 9/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9780, Loss: 0.0116 Epoch 5 Batch 10/1077 - Train Accuracy: 0.9856, Validation Accuracy: 0.9780, Loss: 0.0176 Epoch 5 Batch 11/1077 - Train Accuracy: 0.9647, Validation Accuracy: 0.9798, Loss: 0.0210 Epoch 5 Batch 12/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9798, Loss: 0.0120 Epoch 5 Batch 13/1077 - Train Accuracy: 0.9825, Validation Accuracy: 0.9798, Loss: 0.0159 Epoch 5 Batch 14/1077 - Train Accuracy: 0.9885, Validation Accuracy: 0.9755, Loss: 0.0082 Epoch 5 Batch 15/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9751, Loss: 0.0107 Epoch 5 Batch 16/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9787, Loss: 0.0168 Epoch 5 Batch 17/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9737, Loss: 0.0132 Epoch 5 Batch 18/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9716, Loss: 0.0124 Epoch 5 Batch 19/1077 - Train Accuracy: 
0.9695, Validation Accuracy: 0.9719, Loss: 0.0118 Epoch 5 Batch 20/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9769, Loss: 0.0150 Epoch 5 Batch 21/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9769, Loss: 0.0146 Epoch 5 Batch 22/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9819, Loss: 0.0148 Epoch 5 Batch 23/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9819, Loss: 0.0126 Epoch 5 Batch 24/1077 - Train Accuracy: 0.9754, Validation Accuracy: 0.9727, Loss: 0.0164 Epoch 5 Batch 25/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9684, Loss: 0.0080 Epoch 5 Batch 26/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9719, Loss: 0.0208 Epoch 5 Batch 27/1077 - Train Accuracy: 0.9911, Validation Accuracy: 0.9712, Loss: 0.0143 Epoch 5 Batch 28/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9688, Loss: 0.0131 Epoch 5 Batch 29/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9737, Loss: 0.0152 Epoch 5 Batch 30/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9719, Loss: 0.0122 Epoch 5 Batch 31/1077 - Train Accuracy: 0.9816, Validation Accuracy: 0.9719, Loss: 0.0144 Epoch 5 Batch 32/1077 - Train Accuracy: 0.9784, Validation Accuracy: 0.9695, Loss: 0.0135 Epoch 5 Batch 33/1077 - Train Accuracy: 0.9807, Validation Accuracy: 0.9695, Loss: 0.0111 Epoch 5 Batch 34/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9645, Loss: 0.0163 Epoch 5 Batch 35/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9648, Loss: 0.0161 Epoch 5 Batch 36/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9680, Loss: 0.0138 Epoch 5 Batch 37/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9677, Loss: 0.0180 Epoch 5 Batch 38/1077 - Train Accuracy: 0.9729, Validation Accuracy: 0.9702, Loss: 0.0249 Epoch 5 Batch 39/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9730, Loss: 0.0193 Epoch 5 Batch 40/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9727, Loss: 0.0130 Epoch 5 Batch 41/1077 - Train Accuracy: 0.9948, Validation 
Accuracy: 0.9748, Loss: 0.0138 Epoch 5 Batch 42/1077 - Train Accuracy: 0.9746, Validation Accuracy: 0.9748, Loss: 0.0214 Epoch 5 Batch 43/1077 - Train Accuracy: 0.9889, Validation Accuracy: 0.9744, Loss: 0.0078 Epoch 5 Batch 44/1077 - Train Accuracy: 0.9885, Validation Accuracy: 0.9790, Loss: 0.0111 Epoch 5 Batch 45/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9790, Loss: 0.0163 Epoch 5 Batch 46/1077 - Train Accuracy: 0.9786, Validation Accuracy: 0.9741, Loss: 0.0139 Epoch 5 Batch 47/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9741, Loss: 0.0133 Epoch 5 Batch 48/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9719, Loss: 0.0173 Epoch 5 Batch 49/1077 - Train Accuracy: 0.9807, Validation Accuracy: 0.9648, Loss: 0.0208 Epoch 5 Batch 50/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9652, Loss: 0.0134 Epoch 5 Batch 51/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9648, Loss: 0.0173 Epoch 5 Batch 52/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9585, Loss: 0.0150 Epoch 5 Batch 53/1077 - Train Accuracy: 0.9539, Validation Accuracy: 0.9585, Loss: 0.0160 Epoch 5 Batch 54/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9528, Loss: 0.0208 Epoch 5 Batch 55/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9577, Loss: 0.0139 Epoch 5 Batch 56/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9670, Loss: 0.0086 Epoch 5 Batch 57/1077 - Train Accuracy: 0.9709, Validation Accuracy: 0.9599, Loss: 0.0120 Epoch 5 Batch 58/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9641, Loss: 0.0150 Epoch 5 Batch 59/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9695, Loss: 0.0129 Epoch 5 Batch 60/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9698, Loss: 0.0104 Epoch 5 Batch 61/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9648, Loss: 0.0147 Epoch 5 Batch 62/1077 - Train Accuracy: 0.9741, Validation Accuracy: 0.9645, Loss: 0.0172 Epoch 5 Batch 63/1077 - Train Accuracy: 0.9821, Validation Accuracy: 0.9744, 
Loss: 0.0099
Epoch 5 Batch 64/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9744, Loss: 0.0122
[... training log truncated: Epoch 5, Batches 65-501 of 1077 - Train Accuracy ~0.95-1.00, Validation Accuracy ~0.95-0.98, Loss ~0.006-0.037 ...]
Epoch 5 Batch 502/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9698,
Loss: 0.0159 Epoch 5 Batch 503/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9688, Loss: 0.0126 Epoch 5 Batch 504/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9684, Loss: 0.0109 Epoch 5 Batch 505/1077 - Train Accuracy: 0.9974, Validation Accuracy: 0.9688, Loss: 0.0091 Epoch 5 Batch 506/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9638, Loss: 0.0220 Epoch 5 Batch 507/1077 - Train Accuracy: 0.9617, Validation Accuracy: 0.9638, Loss: 0.0170 Epoch 5 Batch 508/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9638, Loss: 0.0077 Epoch 5 Batch 509/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9641, Loss: 0.0210 Epoch 5 Batch 510/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9698, Loss: 0.0112 Epoch 5 Batch 511/1077 - Train Accuracy: 0.9889, Validation Accuracy: 0.9698, Loss: 0.0105 Epoch 5 Batch 512/1077 - Train Accuracy: 0.9980, Validation Accuracy: 0.9698, Loss: 0.0107 Epoch 5 Batch 513/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9702, Loss: 0.0142 Epoch 5 Batch 514/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9702, Loss: 0.0155 Epoch 5 Batch 515/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9702, Loss: 0.0131 Epoch 5 Batch 516/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9688, Loss: 0.0161 Epoch 5 Batch 517/1077 - Train Accuracy: 0.9725, Validation Accuracy: 0.9624, Loss: 0.0141 Epoch 5 Batch 518/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9595, Loss: 0.0097 Epoch 5 Batch 519/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9656, Loss: 0.0141 Epoch 5 Batch 520/1077 - Train Accuracy: 0.9851, Validation Accuracy: 0.9638, Loss: 0.0099 Epoch 5 Batch 521/1077 - Train Accuracy: 0.9896, Validation Accuracy: 0.9666, Loss: 0.0129 Epoch 5 Batch 522/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9727, Loss: 0.0163 Epoch 5 Batch 523/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9723, Loss: 0.0122 Epoch 5 Batch 524/1077 - Train Accuracy: 0.9852, Validation Accuracy: 
0.9719, Loss: 0.0156 Epoch 5 Batch 525/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9716, Loss: 0.0159 Epoch 5 Batch 526/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9695, Loss: 0.0165 Epoch 5 Batch 527/1077 - Train Accuracy: 0.9655, Validation Accuracy: 0.9648, Loss: 0.0170 Epoch 5 Batch 528/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9648, Loss: 0.0146 Epoch 5 Batch 529/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9659, Loss: 0.0152 Epoch 5 Batch 530/1077 - Train Accuracy: 0.9801, Validation Accuracy: 0.9705, Loss: 0.0168 Epoch 5 Batch 531/1077 - Train Accuracy: 0.9723, Validation Accuracy: 0.9673, Loss: 0.0140 Epoch 5 Batch 532/1077 - Train Accuracy: 0.9707, Validation Accuracy: 0.9680, Loss: 0.0190 Epoch 5 Batch 533/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9684, Loss: 0.0127 Epoch 5 Batch 534/1077 - Train Accuracy: 0.9788, Validation Accuracy: 0.9691, Loss: 0.0153 Epoch 5 Batch 535/1077 - Train Accuracy: 0.9668, Validation Accuracy: 0.9691, Loss: 0.0161 Epoch 5 Batch 536/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9688, Loss: 0.0145 Epoch 5 Batch 537/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9663, Loss: 0.0064 Epoch 5 Batch 538/1077 - Train Accuracy: 0.9818, Validation Accuracy: 0.9670, Loss: 0.0137 Epoch 5 Batch 539/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9719, Loss: 0.0170 Epoch 5 Batch 540/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9670, Loss: 0.0113 Epoch 5 Batch 541/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9680, Loss: 0.0098 Epoch 5 Batch 542/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9684, Loss: 0.0140 Epoch 5 Batch 543/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9680, Loss: 0.0099 Epoch 5 Batch 544/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9574, Loss: 0.0083 Epoch 5 Batch 545/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9631, Loss: 0.0126 Epoch 5 Batch 546/1077 - Train Accuracy: 0.9879, Validation 
Accuracy: 0.9677, Loss: 0.0134 Epoch 5 Batch 547/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9670, Loss: 0.0144 Epoch 5 Batch 548/1077 - Train Accuracy: 0.9680, Validation Accuracy: 0.9677, Loss: 0.0165 Epoch 5 Batch 549/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9677, Loss: 0.0141 Epoch 5 Batch 550/1077 - Train Accuracy: 0.9656, Validation Accuracy: 0.9634, Loss: 0.0158 Epoch 5 Batch 551/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9588, Loss: 0.0126 Epoch 5 Batch 552/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9588, Loss: 0.0168 Epoch 5 Batch 553/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9567, Loss: 0.0250 Epoch 5 Batch 554/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9624, Loss: 0.0118 Epoch 5 Batch 555/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9638, Loss: 0.0123 Epoch 5 Batch 556/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9634, Loss: 0.0118 Epoch 5 Batch 557/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9602, Loss: 0.0119 Epoch 5 Batch 558/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9595, Loss: 0.0094 Epoch 5 Batch 559/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9698, Loss: 0.0126 Epoch 5 Batch 560/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9702, Loss: 0.0143 Epoch 5 Batch 561/1077 - Train Accuracy: 0.9888, Validation Accuracy: 0.9670, Loss: 0.0118 Epoch 5 Batch 562/1077 - Train Accuracy: 0.9833, Validation Accuracy: 0.9670, Loss: 0.0121 Epoch 5 Batch 563/1077 - Train Accuracy: 0.9734, Validation Accuracy: 0.9716, Loss: 0.0129 Epoch 5 Batch 564/1077 - Train Accuracy: 0.9864, Validation Accuracy: 0.9727, Loss: 0.0131 Epoch 5 Batch 565/1077 - Train Accuracy: 0.9833, Validation Accuracy: 0.9677, Loss: 0.0136 Epoch 5 Batch 566/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9719, Loss: 0.0112 Epoch 5 Batch 567/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9719, Loss: 0.0129 Epoch 5 Batch 568/1077 - Train Accuracy: 0.9867, 
Validation Accuracy: 0.9734, Loss: 0.0114 Epoch 5 Batch 569/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9762, Loss: 0.0118 Epoch 5 Batch 570/1077 - Train Accuracy: 0.9893, Validation Accuracy: 0.9716, Loss: 0.0111 Epoch 5 Batch 571/1077 - Train Accuracy: 0.9900, Validation Accuracy: 0.9716, Loss: 0.0078 Epoch 5 Batch 572/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9709, Loss: 0.0135 Epoch 5 Batch 573/1077 - Train Accuracy: 0.9703, Validation Accuracy: 0.9680, Loss: 0.0255 Epoch 5 Batch 574/1077 - Train Accuracy: 0.9753, Validation Accuracy: 0.9680, Loss: 0.0169 Epoch 5 Batch 575/1077 - Train Accuracy: 0.9769, Validation Accuracy: 0.9680, Loss: 0.0095 Epoch 5 Batch 576/1077 - Train Accuracy: 0.9881, Validation Accuracy: 0.9695, Loss: 0.0099 Epoch 5 Batch 577/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9695, Loss: 0.0135 Epoch 5 Batch 578/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9744, Loss: 0.0107 Epoch 5 Batch 579/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9744, Loss: 0.0135 Epoch 5 Batch 580/1077 - Train Accuracy: 0.9747, Validation Accuracy: 0.9748, Loss: 0.0124 Epoch 5 Batch 581/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9751, Loss: 0.0097 Epoch 5 Batch 582/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9776, Loss: 0.0122 Epoch 5 Batch 583/1077 - Train Accuracy: 0.9823, Validation Accuracy: 0.9776, Loss: 0.0132 Epoch 5 Batch 584/1077 - Train Accuracy: 0.9900, Validation Accuracy: 0.9780, Loss: 0.0095 Epoch 5 Batch 585/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9730, Loss: 0.0083 Epoch 5 Batch 586/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9776, Loss: 0.0127 Epoch 5 Batch 587/1077 - Train Accuracy: 0.9807, Validation Accuracy: 0.9773, Loss: 0.0130 Epoch 5 Batch 588/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9822, Loss: 0.0096 Epoch 5 Batch 589/1077 - Train Accuracy: 0.9868, Validation Accuracy: 0.9798, Loss: 0.0125 Epoch 5 Batch 590/1077 - Train Accuracy: 
0.9848, Validation Accuracy: 0.9798, Loss: 0.0122 Epoch 5 Batch 591/1077 - Train Accuracy: 0.9801, Validation Accuracy: 0.9751, Loss: 0.0147 Epoch 5 Batch 592/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9751, Loss: 0.0131 Epoch 5 Batch 593/1077 - Train Accuracy: 0.9717, Validation Accuracy: 0.9709, Loss: 0.0246 Epoch 5 Batch 594/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9705, Loss: 0.0183 Epoch 5 Batch 595/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9680, Loss: 0.0128 Epoch 5 Batch 596/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9705, Loss: 0.0121 Epoch 5 Batch 597/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9712, Loss: 0.0114 Epoch 5 Batch 598/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9737, Loss: 0.0175 Epoch 5 Batch 599/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9688, Loss: 0.0207 Epoch 5 Batch 600/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9688, Loss: 0.0158 Epoch 5 Batch 601/1077 - Train Accuracy: 0.9784, Validation Accuracy: 0.9688, Loss: 0.0155 Epoch 5 Batch 602/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9780, Loss: 0.0181 Epoch 5 Batch 603/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9780, Loss: 0.0130 Epoch 5 Batch 604/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9755, Loss: 0.0157 Epoch 5 Batch 605/1077 - Train Accuracy: 0.9893, Validation Accuracy: 0.9801, Loss: 0.0201 Epoch 5 Batch 606/1077 - Train Accuracy: 0.9847, Validation Accuracy: 0.9751, Loss: 0.0117 Epoch 5 Batch 607/1077 - Train Accuracy: 0.9876, Validation Accuracy: 0.9801, Loss: 0.0170 Epoch 5 Batch 608/1077 - Train Accuracy: 0.9953, Validation Accuracy: 0.9773, Loss: 0.0112 Epoch 5 Batch 609/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9773, Loss: 0.0157 Epoch 5 Batch 610/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9794, Loss: 0.0166 Epoch 5 Batch 611/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9744, Loss: 0.0135 Epoch 5 Batch 612/1077 - Train 
Accuracy: 0.9929, Validation Accuracy: 0.9794, Loss: 0.0058 Epoch 5 Batch 613/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9769, Loss: 0.0154 Epoch 5 Batch 614/1077 - Train Accuracy: 0.9892, Validation Accuracy: 0.9773, Loss: 0.0074 Epoch 5 Batch 615/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9819, Loss: 0.0081 Epoch 5 Batch 616/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9773, Loss: 0.0156 Epoch 5 Batch 617/1077 - Train Accuracy: 0.9903, Validation Accuracy: 0.9769, Loss: 0.0119 Epoch 5 Batch 618/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9769, Loss: 0.0147 Epoch 5 Batch 619/1077 - Train Accuracy: 0.9757, Validation Accuracy: 0.9769, Loss: 0.0101 Epoch 5 Batch 620/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9790, Loss: 0.0148 Epoch 5 Batch 621/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9790, Loss: 0.0128 Epoch 5 Batch 622/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9790, Loss: 0.0183 Epoch 5 Batch 623/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9790, Loss: 0.0151 Epoch 5 Batch 624/1077 - Train Accuracy: 0.9829, Validation Accuracy: 0.9741, Loss: 0.0154 Epoch 5 Batch 625/1077 - Train Accuracy: 0.9969, Validation Accuracy: 0.9790, Loss: 0.0092 Epoch 5 Batch 626/1077 - Train Accuracy: 0.9822, Validation Accuracy: 0.9741, Loss: 0.0136 Epoch 5 Batch 627/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9691, Loss: 0.0170 Epoch 5 Batch 628/1077 - Train Accuracy: 0.9664, Validation Accuracy: 0.9691, Loss: 0.0140 Epoch 5 Batch 629/1077 - Train Accuracy: 0.9901, Validation Accuracy: 0.9737, Loss: 0.0125 Epoch 5 Batch 630/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9716, Loss: 0.0107 Epoch 5 Batch 631/1077 - Train Accuracy: 0.9933, Validation Accuracy: 0.9741, Loss: 0.0098 Epoch 5 Batch 632/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9837, Loss: 0.0112 Epoch 5 Batch 633/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9837, Loss: 0.0134 Epoch 5 Batch 634/1077 - 
Train Accuracy: 0.9751, Validation Accuracy: 0.9862, Loss: 0.0094 Epoch 5 Batch 635/1077 - Train Accuracy: 0.9951, Validation Accuracy: 0.9812, Loss: 0.0123 Epoch 5 Batch 636/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9862, Loss: 0.0106 Epoch 5 Batch 637/1077 - Train Accuracy: 0.9664, Validation Accuracy: 0.9862, Loss: 0.0124 Epoch 5 Batch 638/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9865, Loss: 0.0108 Epoch 5 Batch 639/1077 - Train Accuracy: 0.9645, Validation Accuracy: 0.9865, Loss: 0.0195 Epoch 5 Batch 640/1077 - Train Accuracy: 0.9829, Validation Accuracy: 0.9869, Loss: 0.0118 Epoch 5 Batch 641/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9776, Loss: 0.0148 Epoch 5 Batch 642/1077 - Train Accuracy: 0.9952, Validation Accuracy: 0.9744, Loss: 0.0095 Epoch 5 Batch 643/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9744, Loss: 0.0093 Epoch 5 Batch 644/1077 - Train Accuracy: 0.9953, Validation Accuracy: 0.9744, Loss: 0.0146 Epoch 5 Batch 645/1077 - Train Accuracy: 0.9833, Validation Accuracy: 0.9744, Loss: 0.0155 Epoch 5 Batch 646/1077 - Train Accuracy: 0.9903, Validation Accuracy: 0.9744, Loss: 0.0120 Epoch 5 Batch 647/1077 - Train Accuracy: 0.9801, Validation Accuracy: 0.9744, Loss: 0.0128 Epoch 5 Batch 648/1077 - Train Accuracy: 0.9911, Validation Accuracy: 0.9744, Loss: 0.0114 Epoch 5 Batch 649/1077 - Train Accuracy: 0.9988, Validation Accuracy: 0.9727, Loss: 0.0123 Epoch 5 Batch 650/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9727, Loss: 0.0084 Epoch 5 Batch 651/1077 - Train Accuracy: 0.9747, Validation Accuracy: 0.9727, Loss: 0.0112 Epoch 5 Batch 652/1077 - Train Accuracy: 0.9823, Validation Accuracy: 0.9727, Loss: 0.0198 Epoch 5 Batch 653/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9727, Loss: 0.0121 Epoch 5 Batch 654/1077 - Train Accuracy: 0.9953, Validation Accuracy: 0.9727, Loss: 0.0100 Epoch 5 Batch 655/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9727, Loss: 0.0127 Epoch 5 Batch 656/1077 
- Train Accuracy: 0.9891, Validation Accuracy: 0.9769, Loss: 0.0104 Epoch 5 Batch 657/1077 - Train Accuracy: 0.9815, Validation Accuracy: 0.9766, Loss: 0.0120 Epoch 5 Batch 658/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9688, Loss: 0.0092 Epoch 5 Batch 659/1077 - Train Accuracy: 0.9937, Validation Accuracy: 0.9737, Loss: 0.0115 Epoch 5 Batch 660/1077 - Train Accuracy: 0.9961, Validation Accuracy: 0.9744, Loss: 0.0103 Epoch 5 Batch 661/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9744, Loss: 0.0097 Epoch 5 Batch 662/1077 - Train Accuracy: 0.9803, Validation Accuracy: 0.9744, Loss: 0.0139 Epoch 5 Batch 663/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9744, Loss: 0.0093 Epoch 5 Batch 664/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9744, Loss: 0.0090 Epoch 5 Batch 665/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9812, Loss: 0.0085 Epoch 5 Batch 666/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9812, Loss: 0.0167 Epoch 5 Batch 667/1077 - Train Accuracy: 0.9782, Validation Accuracy: 0.9812, Loss: 0.0193 Epoch 5 Batch 668/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9762, Loss: 0.0124 Epoch 5 Batch 669/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9762, Loss: 0.0122 Epoch 5 Batch 670/1077 - Train Accuracy: 0.9915, Validation Accuracy: 0.9712, Loss: 0.0113 Epoch 5 Batch 671/1077 - Train Accuracy: 0.9782, Validation Accuracy: 0.9648, Loss: 0.0168 Epoch 5 Batch 672/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9648, Loss: 0.0092 Epoch 5 Batch 673/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9648, Loss: 0.0106 Epoch 5 Batch 674/1077 - Train Accuracy: 0.9816, Validation Accuracy: 0.9606, Loss: 0.0163 Epoch 5 Batch 675/1077 - Train Accuracy: 0.9825, Validation Accuracy: 0.9606, Loss: 0.0193 Epoch 5 Batch 676/1077 - Train Accuracy: 0.9700, Validation Accuracy: 0.9645, Loss: 0.0109 Epoch 5 Batch 677/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9627, Loss: 0.0139 Epoch 5 Batch 
678/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9719, Loss: 0.0120 Epoch 5 Batch 679/1077 - Train Accuracy: 0.9881, Validation Accuracy: 0.9719, Loss: 0.0078 Epoch 5 Batch 680/1077 - Train Accuracy: 0.9728, Validation Accuracy: 0.9719, Loss: 0.0115 Epoch 5 Batch 681/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9719, Loss: 0.0104 Epoch 5 Batch 682/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9719, Loss: 0.0109 Epoch 5 Batch 683/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9719, Loss: 0.0067 Epoch 5 Batch 684/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9716, Loss: 0.0133 Epoch 5 Batch 685/1077 - Train Accuracy: 0.9625, Validation Accuracy: 0.9716, Loss: 0.0142 Epoch 5 Batch 686/1077 - Train Accuracy: 0.9661, Validation Accuracy: 0.9716, Loss: 0.0096 Epoch 5 Batch 687/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9719, Loss: 0.0184 Epoch 5 Batch 688/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9769, Loss: 0.0097 Epoch 5 Batch 689/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9773, Loss: 0.0056 Epoch 5 Batch 690/1077 - Train Accuracy: 0.9801, Validation Accuracy: 0.9773, Loss: 0.0160 Epoch 5 Batch 691/1077 - Train Accuracy: 0.9881, Validation Accuracy: 0.9751, Loss: 0.0134 Epoch 5 Batch 692/1077 - Train Accuracy: 0.9866, Validation Accuracy: 0.9719, Loss: 0.0110 Epoch 5 Batch 693/1077 - Train Accuracy: 0.9753, Validation Accuracy: 0.9769, Loss: 0.0176 Epoch 5 Batch 694/1077 - Train Accuracy: 0.9963, Validation Accuracy: 0.9741, Loss: 0.0085 Epoch 5 Batch 695/1077 - Train Accuracy: 0.9816, Validation Accuracy: 0.9695, Loss: 0.0130 Epoch 5 Batch 696/1077 - Train Accuracy: 0.9683, Validation Accuracy: 0.9695, Loss: 0.0102 Epoch 5 Batch 697/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9695, Loss: 0.0121 Epoch 5 Batch 698/1077 - Train Accuracy: 0.9896, Validation Accuracy: 0.9631, Loss: 0.0083 Epoch 5 Batch 699/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9627, Loss: 0.0096 Epoch 5 
Batch 700/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9727, Loss: 0.0116 Epoch 5 Batch 701/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9734, Loss: 0.0090 Epoch 5 Batch 702/1077 - Train Accuracy: 0.9769, Validation Accuracy: 0.9762, Loss: 0.0234 Epoch 5 Batch 703/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9709, Loss: 0.0128 Epoch 5 Batch 704/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9659, Loss: 0.0166 Epoch 5 Batch 705/1077 - Train Accuracy: 0.9737, Validation Accuracy: 0.9709, Loss: 0.0193 Epoch 5 Batch 706/1077 - Train Accuracy: 0.9628, Validation Accuracy: 0.9730, Loss: 0.0341 Epoch 5 Batch 707/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9780, Loss: 0.0131 Epoch 5 Batch 708/1077 - Train Accuracy: 0.9746, Validation Accuracy: 0.9780, Loss: 0.0122 Epoch 5 Batch 709/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9801, Loss: 0.0131 Epoch 5 Batch 710/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9801, Loss: 0.0096 Epoch 5 Batch 711/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9801, Loss: 0.0161 Epoch 5 Batch 712/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9801, Loss: 0.0085 Epoch 5 Batch 713/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9755, Loss: 0.0094 Epoch 5 Batch 714/1077 - Train Accuracy: 0.9903, Validation Accuracy: 0.9751, Loss: 0.0145 Epoch 5 Batch 715/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9769, Loss: 0.0125 Epoch 5 Batch 716/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9769, Loss: 0.0099 Epoch 5 Batch 717/1077 - Train Accuracy: 0.9988, Validation Accuracy: 0.9769, Loss: 0.0071 Epoch 5 Batch 718/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9769, Loss: 0.0110 Epoch 5 Batch 719/1077 - Train Accuracy: 0.9877, Validation Accuracy: 0.9645, Loss: 0.0112 Epoch 5 Batch 720/1077 - Train Accuracy: 0.9971, Validation Accuracy: 0.9648, Loss: 0.0113 Epoch 5 Batch 721/1077 - Train Accuracy: 0.9703, Validation Accuracy: 0.9645, Loss: 0.0157 Epoch 
5 Batch 722/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9645, Loss: 0.0127 Epoch 5 Batch 723/1077 - Train Accuracy: 0.9795, Validation Accuracy: 0.9645, Loss: 0.0162 Epoch 5 Batch 724/1077 - Train Accuracy: 0.9947, Validation Accuracy: 0.9695, Loss: 0.0121 Epoch 5 Batch 725/1077 - Train Accuracy: 0.9821, Validation Accuracy: 0.9695, Loss: 0.0111 Epoch 5 Batch 726/1077 - Train Accuracy: 0.9961, Validation Accuracy: 0.9695, Loss: 0.0088 Epoch 5 Batch 727/1077 - Train Accuracy: 0.9984, Validation Accuracy: 0.9691, Loss: 0.0092 Epoch 5 Batch 728/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9688, Loss: 0.0143 Epoch 5 Batch 729/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9634, Loss: 0.0146 Epoch 5 Batch 730/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9666, Loss: 0.0151 Epoch 5 Batch 731/1077 - Train Accuracy: 0.9799, Validation Accuracy: 0.9666, Loss: 0.0109 Epoch 5 Batch 732/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9666, Loss: 0.0177 Epoch 5 Batch 733/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9616, Loss: 0.0129 Epoch 5 Batch 734/1077 - Train Accuracy: 0.9831, Validation Accuracy: 0.9616, Loss: 0.0117 Epoch 5 Batch 735/1077 - Train Accuracy: 0.9988, Validation Accuracy: 0.9659, Loss: 0.0077 Epoch 5 Batch 736/1077 - Train Accuracy: 0.9794, Validation Accuracy: 0.9656, Loss: 0.0124 Epoch 5 Batch 737/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9712, Loss: 0.0154 Epoch 5 Batch 738/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9680, Loss: 0.0096 Epoch 5 Batch 739/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9680, Loss: 0.0143 Epoch 5 Batch 740/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9680, Loss: 0.0107 Epoch 5 Batch 741/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9624, Loss: 0.0125 Epoch 5 Batch 742/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9624, Loss: 0.0116 Epoch 5 Batch 743/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9673, Loss: 0.0134 
Epoch 5 Batch 744/1077 - Train Accuracy: 0.9818, Validation Accuracy: 0.9727, Loss: 0.0123 Epoch 5 Batch 745/1077 - Train Accuracy: 0.9812, Validation Accuracy: 0.9712, Loss: 0.0170 Epoch 5 Batch 746/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9705, Loss: 0.0123 Epoch 5 Batch 747/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9659, Loss: 0.0079 Epoch 5 Batch 748/1077 - Train Accuracy: 0.9746, Validation Accuracy: 0.9659, Loss: 0.0118 Epoch 5 Batch 749/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9677, Loss: 0.0097 Epoch 5 Batch 750/1077 - Train Accuracy: 0.9711, Validation Accuracy: 0.9673, Loss: 0.0119 Epoch 5 Batch 751/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9766, Loss: 0.0108 Epoch 5 Batch 752/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9766, Loss: 0.0104 Epoch 5 Batch 753/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9762, Loss: 0.0085 Epoch 5 Batch 754/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9744, Loss: 0.0168 Epoch 5 Batch 755/1077 - Train Accuracy: 0.9711, Validation Accuracy: 0.9673, Loss: 0.0190 Epoch 5 Batch 756/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9673, Loss: 0.0145 Epoch 5 Batch 757/1077 - Train Accuracy: 0.9774, Validation Accuracy: 0.9723, Loss: 0.0117 Epoch 5 Batch 758/1077 - Train Accuracy: 0.9940, Validation Accuracy: 0.9727, Loss: 0.0075 Epoch 5 Batch 759/1077 - Train Accuracy: 0.9911, Validation Accuracy: 0.9730, Loss: 0.0165 Epoch 5 Batch 760/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9780, Loss: 0.0134 Epoch 5 Batch 761/1077 - Train Accuracy: 0.9811, Validation Accuracy: 0.9762, Loss: 0.0128 Epoch 5 Batch 762/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9762, Loss: 0.0110 Epoch 5 Batch 763/1077 - Train Accuracy: 0.9885, Validation Accuracy: 0.9787, Loss: 0.0136 Epoch 5 Batch 764/1077 - Train Accuracy: 0.9942, Validation Accuracy: 0.9759, Loss: 0.0152 Epoch 5 Batch 765/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9759, Loss: 
0.0179 Epoch 5 Batch 766/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9759, Loss: 0.0119 Epoch 5 Batch 767/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9783, Loss: 0.0087 Epoch 5 Batch 768/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9812, Loss: 0.0113 Epoch 5 Batch 769/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9684, Loss: 0.0138 Epoch 5 Batch 770/1077 - Train Accuracy: 0.9896, Validation Accuracy: 0.9698, Loss: 0.0128 Epoch 5 Batch 771/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9663, Loss: 0.0111 Epoch 5 Batch 772/1077 - Train Accuracy: 0.9803, Validation Accuracy: 0.9709, Loss: 0.0106 Epoch 5 Batch 773/1077 - Train Accuracy: 0.9812, Validation Accuracy: 0.9744, Loss: 0.0103 Epoch 5 Batch 774/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9723, Loss: 0.0121 Epoch 5 Batch 775/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9723, Loss: 0.0121 Epoch 5 Batch 776/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9730, Loss: 0.0129 Epoch 5 Batch 777/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9730, Loss: 0.0098 Epoch 5 Batch 778/1077 - Train Accuracy: 0.9788, Validation Accuracy: 0.9727, Loss: 0.0140 Epoch 5 Batch 779/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9748, Loss: 0.0164 Epoch 5 Batch 780/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9798, Loss: 0.0163 Epoch 5 Batch 781/1077 - Train Accuracy: 0.9792, Validation Accuracy: 0.9748, Loss: 0.0102 Epoch 5 Batch 782/1077 - Train Accuracy: 0.9673, Validation Accuracy: 0.9744, Loss: 0.0130 Epoch 5 Batch 783/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9769, Loss: 0.0143 Epoch 5 Batch 784/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9716, Loss: 0.0072 Epoch 5 Batch 785/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9716, Loss: 0.0083 Epoch 5 Batch 786/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9648, Loss: 0.0112 Epoch 5 Batch 787/1077 - Train Accuracy: 0.9888, Validation Accuracy: 0.9744, 
Epoch 5 Batch 788/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9744, Loss: 0.0141
Epoch 5 Batch 789/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9631, Loss: 0.0102
[... per-batch log truncated: Epoch 5 Batches 790-1075 and Epoch 6 Batches 1-171 report Train Accuracy between roughly 0.95 and 1.00, Validation Accuracy between roughly 0.95 and 0.99, and Loss between roughly 0.004 and 0.04 ...]
Epoch 6 Batch 172/1077 - Train Accuracy: 0.9747, Validation Accuracy: 0.9609, Loss: 0.0097
Epoch 6 Batch 173/1077 - Train Accuracy: 0.9885, Validation
Accuracy: 0.9634, Loss: 0.0108 Epoch 6 Batch 174/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9680, Loss: 0.0083 Epoch 6 Batch 175/1077 - Train Accuracy: 0.9680, Validation Accuracy: 0.9705, Loss: 0.0143 Epoch 6 Batch 176/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9631, Loss: 0.0104 Epoch 6 Batch 177/1077 - Train Accuracy: 0.9938, Validation Accuracy: 0.9680, Loss: 0.0126 Epoch 6 Batch 178/1077 - Train Accuracy: 0.9746, Validation Accuracy: 0.9680, Loss: 0.0125 Epoch 6 Batch 179/1077 - Train Accuracy: 0.9790, Validation Accuracy: 0.9680, Loss: 0.0126 Epoch 6 Batch 180/1077 - Train Accuracy: 0.9938, Validation Accuracy: 0.9727, Loss: 0.0080 Epoch 6 Batch 181/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9794, Loss: 0.0153 Epoch 6 Batch 182/1077 - Train Accuracy: 0.9799, Validation Accuracy: 0.9730, Loss: 0.0136 Epoch 6 Batch 183/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9727, Loss: 0.0146 Epoch 6 Batch 184/1077 - Train Accuracy: 0.9684, Validation Accuracy: 0.9723, Loss: 0.0110 Epoch 6 Batch 185/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9723, Loss: 0.0119 Epoch 6 Batch 186/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9723, Loss: 0.0118 Epoch 6 Batch 187/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9723, Loss: 0.0067 Epoch 6 Batch 188/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9709, Loss: 0.0115 Epoch 6 Batch 189/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9776, Loss: 0.0082 Epoch 6 Batch 190/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9776, Loss: 0.0096 Epoch 6 Batch 191/1077 - Train Accuracy: 0.9688, Validation Accuracy: 0.9688, Loss: 0.0076 Epoch 6 Batch 192/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9688, Loss: 0.0105 Epoch 6 Batch 193/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9702, Loss: 0.0095 Epoch 6 Batch 194/1077 - Train Accuracy: 0.9967, Validation Accuracy: 0.9702, Loss: 0.0074 Epoch 6 Batch 195/1077 - Train Accuracy: 0.9727, 
Validation Accuracy: 0.9727, Loss: 0.0100 Epoch 6 Batch 196/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9769, Loss: 0.0076 Epoch 6 Batch 197/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9769, Loss: 0.0123 Epoch 6 Batch 198/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9794, Loss: 0.0136 Epoch 6 Batch 199/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9844, Loss: 0.0109 Epoch 6 Batch 200/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9844, Loss: 0.0101 Epoch 6 Batch 201/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9844, Loss: 0.0076 Epoch 6 Batch 202/1077 - Train Accuracy: 0.9938, Validation Accuracy: 0.9847, Loss: 0.0090 Epoch 6 Batch 203/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9794, Loss: 0.0088 Epoch 6 Batch 204/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9741, Loss: 0.0163 Epoch 6 Batch 205/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9790, Loss: 0.0187 Epoch 6 Batch 206/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9790, Loss: 0.0087 Epoch 6 Batch 207/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9790, Loss: 0.0097 Epoch 6 Batch 208/1077 - Train Accuracy: 0.9881, Validation Accuracy: 0.9741, Loss: 0.0142 Epoch 6 Batch 209/1077 - Train Accuracy: 0.9933, Validation Accuracy: 0.9659, Loss: 0.0068 Epoch 6 Batch 210/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9709, Loss: 0.0137 Epoch 6 Batch 211/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9688, Loss: 0.0107 Epoch 6 Batch 212/1077 - Train Accuracy: 0.9866, Validation Accuracy: 0.9691, Loss: 0.0089 Epoch 6 Batch 213/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9723, Loss: 0.0098 Epoch 6 Batch 214/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9719, Loss: 0.0092 Epoch 6 Batch 215/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9719, Loss: 0.0194 Epoch 6 Batch 216/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9677, Loss: 0.0088 Epoch 6 Batch 217/1077 - Train Accuracy: 
0.9914, Validation Accuracy: 0.9677, Loss: 0.0079 Epoch 6 Batch 218/1077 - Train Accuracy: 0.9951, Validation Accuracy: 0.9677, Loss: 0.0132 Epoch 6 Batch 219/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9677, Loss: 0.0092 Epoch 6 Batch 220/1077 - Train Accuracy: 0.9856, Validation Accuracy: 0.9648, Loss: 0.0136 Epoch 6 Batch 221/1077 - Train Accuracy: 0.9988, Validation Accuracy: 0.9652, Loss: 0.0103 Epoch 6 Batch 222/1077 - Train Accuracy: 0.9801, Validation Accuracy: 0.9652, Loss: 0.0152 Epoch 6 Batch 223/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9641, Loss: 0.0100 Epoch 6 Batch 224/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9641, Loss: 0.0086 Epoch 6 Batch 225/1077 - Train Accuracy: 0.9695, Validation Accuracy: 0.9688, Loss: 0.0156 Epoch 6 Batch 226/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9688, Loss: 0.0106 Epoch 6 Batch 227/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9688, Loss: 0.0135 Epoch 6 Batch 228/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9673, Loss: 0.0100 Epoch 6 Batch 229/1077 - Train Accuracy: 0.9656, Validation Accuracy: 0.9670, Loss: 0.0108 Epoch 6 Batch 230/1077 - Train Accuracy: 0.9810, Validation Accuracy: 0.9698, Loss: 0.0095 Epoch 6 Batch 231/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9698, Loss: 0.0101 Epoch 6 Batch 232/1077 - Train Accuracy: 0.9967, Validation Accuracy: 0.9698, Loss: 0.0075 Epoch 6 Batch 233/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9695, Loss: 0.0178 Epoch 6 Batch 234/1077 - Train Accuracy: 0.9829, Validation Accuracy: 0.9695, Loss: 0.0143 Epoch 6 Batch 235/1077 - Train Accuracy: 0.9710, Validation Accuracy: 0.9691, Loss: 0.0123 Epoch 6 Batch 236/1077 - Train Accuracy: 0.9961, Validation Accuracy: 0.9695, Loss: 0.0159 Epoch 6 Batch 237/1077 - Train Accuracy: 0.9885, Validation Accuracy: 0.9695, Loss: 0.0091 Epoch 6 Batch 238/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9695, Loss: 0.0109 Epoch 6 Batch 239/1077 - Train 
Accuracy: 0.9911, Validation Accuracy: 0.9695, Loss: 0.0085 Epoch 6 Batch 240/1077 - Train Accuracy: 0.9938, Validation Accuracy: 0.9695, Loss: 0.0090 Epoch 6 Batch 241/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9748, Loss: 0.0072 Epoch 6 Batch 242/1077 - Train Accuracy: 0.9977, Validation Accuracy: 0.9698, Loss: 0.0082 Epoch 6 Batch 243/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9684, Loss: 0.0106 Epoch 6 Batch 244/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9691, Loss: 0.0105 Epoch 6 Batch 245/1077 - Train Accuracy: 0.9702, Validation Accuracy: 0.9705, Loss: 0.0119 Epoch 6 Batch 246/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9656, Loss: 0.0152 Epoch 6 Batch 247/1077 - Train Accuracy: 0.9706, Validation Accuracy: 0.9656, Loss: 0.0142 Epoch 6 Batch 248/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9656, Loss: 0.0125 Epoch 6 Batch 249/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9656, Loss: 0.0093 Epoch 6 Batch 250/1077 - Train Accuracy: 0.9822, Validation Accuracy: 0.9656, Loss: 0.0135 Epoch 6 Batch 251/1077 - Train Accuracy: 0.9874, Validation Accuracy: 0.9656, Loss: 0.0147 Epoch 6 Batch 252/1077 - Train Accuracy: 0.9770, Validation Accuracy: 0.9702, Loss: 0.0136 Epoch 6 Batch 253/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9673, Loss: 0.0125 Epoch 6 Batch 254/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9723, Loss: 0.0154 Epoch 6 Batch 255/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9695, Loss: 0.0121 Epoch 6 Batch 256/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9723, Loss: 0.0182 Epoch 6 Batch 257/1077 - Train Accuracy: 0.9877, Validation Accuracy: 0.9723, Loss: 0.0103 Epoch 6 Batch 258/1077 - Train Accuracy: 0.9881, Validation Accuracy: 0.9723, Loss: 0.0106 Epoch 6 Batch 259/1077 - Train Accuracy: 0.9949, Validation Accuracy: 0.9716, Loss: 0.0068 Epoch 6 Batch 260/1077 - Train Accuracy: 0.9944, Validation Accuracy: 0.9673, Loss: 0.0061 Epoch 6 Batch 261/1077 - 
Train Accuracy: 0.9937, Validation Accuracy: 0.9727, Loss: 0.0118 Epoch 6 Batch 262/1077 - Train Accuracy: 0.9801, Validation Accuracy: 0.9769, Loss: 0.0109 Epoch 6 Batch 263/1077 - Train Accuracy: 0.9969, Validation Accuracy: 0.9769, Loss: 0.0078 Epoch 6 Batch 264/1077 - Train Accuracy: 0.9969, Validation Accuracy: 0.9769, Loss: 0.0092 Epoch 6 Batch 265/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9794, Loss: 0.0094 Epoch 6 Batch 266/1077 - Train Accuracy: 0.9721, Validation Accuracy: 0.9794, Loss: 0.0122 Epoch 6 Batch 267/1077 - Train Accuracy: 0.9929, Validation Accuracy: 0.9794, Loss: 0.0100 Epoch 6 Batch 268/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9776, Loss: 0.0135 Epoch 6 Batch 269/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9773, Loss: 0.0145 Epoch 6 Batch 270/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9773, Loss: 0.0147 Epoch 6 Batch 271/1077 - Train Accuracy: 0.9816, Validation Accuracy: 0.9776, Loss: 0.0115 Epoch 6 Batch 272/1077 - Train Accuracy: 0.9885, Validation Accuracy: 0.9680, Loss: 0.0178 Epoch 6 Batch 273/1077 - Train Accuracy: 0.9866, Validation Accuracy: 0.9677, Loss: 0.0109 Epoch 6 Batch 274/1077 - Train Accuracy: 0.9862, Validation Accuracy: 0.9677, Loss: 0.0098 Epoch 6 Batch 275/1077 - Train Accuracy: 0.9981, Validation Accuracy: 0.9631, Loss: 0.0071 Epoch 6 Batch 276/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9631, Loss: 0.0204 Epoch 6 Batch 277/1077 - Train Accuracy: 0.9754, Validation Accuracy: 0.9680, Loss: 0.0115 Epoch 6 Batch 278/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9680, Loss: 0.0120 Epoch 6 Batch 279/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9659, Loss: 0.0151 Epoch 6 Batch 280/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9688, Loss: 0.0095 Epoch 6 Batch 281/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9688, Loss: 0.0136 Epoch 6 Batch 282/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9688, Loss: 0.0152 Epoch 6 Batch 283/1077 
- Train Accuracy: 0.9938, Validation Accuracy: 0.9709, Loss: 0.0110 Epoch 6 Batch 284/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9755, Loss: 0.0127 Epoch 6 Batch 285/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9801, Loss: 0.0120 Epoch 6 Batch 286/1077 - Train Accuracy: 0.9881, Validation Accuracy: 0.9798, Loss: 0.0124 Epoch 6 Batch 287/1077 - Train Accuracy: 0.9648, Validation Accuracy: 0.9801, Loss: 0.0199 Epoch 6 Batch 288/1077 - Train Accuracy: 0.9812, Validation Accuracy: 0.9808, Loss: 0.0202 Epoch 6 Batch 289/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9808, Loss: 0.0102 Epoch 6 Batch 290/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9759, Loss: 0.0176 Epoch 6 Batch 291/1077 - Train Accuracy: 0.9594, Validation Accuracy: 0.9648, Loss: 0.0227 Epoch 6 Batch 292/1077 - Train Accuracy: 0.9788, Validation Accuracy: 0.9648, Loss: 0.0113 Epoch 6 Batch 293/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9645, Loss: 0.0076 Epoch 6 Batch 294/1077 - Train Accuracy: 0.9755, Validation Accuracy: 0.9645, Loss: 0.0097 Epoch 6 Batch 295/1077 - Train Accuracy: 0.9868, Validation Accuracy: 0.9641, Loss: 0.0110 Epoch 6 Batch 296/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9695, Loss: 0.0094 Epoch 6 Batch 297/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9695, Loss: 0.0112 Epoch 6 Batch 298/1077 - Train Accuracy: 0.9801, Validation Accuracy: 0.9712, Loss: 0.0134 Epoch 6 Batch 299/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9727, Loss: 0.0104 Epoch 6 Batch 300/1077 - Train Accuracy: 0.9811, Validation Accuracy: 0.9776, Loss: 0.0089 Epoch 6 Batch 301/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9776, Loss: 0.0089 Epoch 6 Batch 302/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9737, Loss: 0.0105 Epoch 6 Batch 303/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9737, Loss: 0.0128 Epoch 6 Batch 304/1077 - Train Accuracy: 0.9866, Validation Accuracy: 0.9684, Loss: 0.0156 Epoch 6 Batch 
305/1077 - Train Accuracy: 0.9957, Validation Accuracy: 0.9684, Loss: 0.0073 Epoch 6 Batch 306/1077 - Train Accuracy: 0.9788, Validation Accuracy: 0.9634, Loss: 0.0137 Epoch 6 Batch 307/1077 - Train Accuracy: 0.9632, Validation Accuracy: 0.9620, Loss: 0.0092 Epoch 6 Batch 308/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9620, Loss: 0.0102 Epoch 6 Batch 309/1077 - Train Accuracy: 0.9892, Validation Accuracy: 0.9670, Loss: 0.0084 Epoch 6 Batch 310/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9666, Loss: 0.0106 Epoch 6 Batch 311/1077 - Train Accuracy: 0.9795, Validation Accuracy: 0.9666, Loss: 0.0121 Epoch 6 Batch 312/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9648, Loss: 0.0126 Epoch 6 Batch 313/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9691, Loss: 0.0081 Epoch 6 Batch 314/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9677, Loss: 0.0117 Epoch 6 Batch 315/1077 - Train Accuracy: 0.9888, Validation Accuracy: 0.9677, Loss: 0.0109 Epoch 6 Batch 316/1077 - Train Accuracy: 0.9900, Validation Accuracy: 0.9677, Loss: 0.0105 Epoch 6 Batch 317/1077 - Train Accuracy: 0.9873, Validation Accuracy: 0.9677, Loss: 0.0113 Epoch 6 Batch 318/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9577, Loss: 0.0059 Epoch 6 Batch 319/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9592, Loss: 0.0125 Epoch 6 Batch 320/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9592, Loss: 0.0115 Epoch 6 Batch 321/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9592, Loss: 0.0126 Epoch 6 Batch 322/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9691, Loss: 0.0109 Epoch 6 Batch 323/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9741, Loss: 0.0100 Epoch 6 Batch 324/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9741, Loss: 0.0102 Epoch 6 Batch 325/1077 - Train Accuracy: 0.9788, Validation Accuracy: 0.9741, Loss: 0.0099 Epoch 6 Batch 326/1077 - Train Accuracy: 0.9955, Validation Accuracy: 0.9755, Loss: 0.0083 Epoch 6 
Batch 327/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9755, Loss: 0.0138 Epoch 6 Batch 328/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9705, Loss: 0.0123 Epoch 6 Batch 329/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9705, Loss: 0.0085 Epoch 6 Batch 330/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9734, Loss: 0.0105 Epoch 6 Batch 331/1077 - Train Accuracy: 0.9860, Validation Accuracy: 0.9705, Loss: 0.0102 Epoch 6 Batch 332/1077 - Train Accuracy: 0.9896, Validation Accuracy: 0.9705, Loss: 0.0066 Epoch 6 Batch 333/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9705, Loss: 0.0090 Epoch 6 Batch 334/1077 - Train Accuracy: 0.9992, Validation Accuracy: 0.9723, Loss: 0.0067 Epoch 6 Batch 335/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9723, Loss: 0.0140 Epoch 6 Batch 336/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9723, Loss: 0.0198 Epoch 6 Batch 337/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9723, Loss: 0.0138 Epoch 6 Batch 338/1077 - Train Accuracy: 0.9652, Validation Accuracy: 0.9751, Loss: 0.0164 Epoch 6 Batch 339/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9801, Loss: 0.0066 Epoch 6 Batch 340/1077 - Train Accuracy: 0.9905, Validation Accuracy: 0.9773, Loss: 0.0137 Epoch 6 Batch 341/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9773, Loss: 0.0151 Epoch 6 Batch 342/1077 - Train Accuracy: 0.9911, Validation Accuracy: 0.9773, Loss: 0.0062 Epoch 6 Batch 343/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9794, Loss: 0.0099 Epoch 6 Batch 344/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9748, Loss: 0.0111 Epoch 6 Batch 345/1077 - Train Accuracy: 0.9911, Validation Accuracy: 0.9748, Loss: 0.0086 Epoch 6 Batch 346/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9727, Loss: 0.0114 Epoch 6 Batch 347/1077 - Train Accuracy: 0.9900, Validation Accuracy: 0.9727, Loss: 0.0085 Epoch 6 Batch 348/1077 - Train Accuracy: 0.9900, Validation Accuracy: 0.9705, Loss: 0.0094 Epoch 
6 Batch 349/1077 - Train Accuracy: 0.9770, Validation Accuracy: 0.9645, Loss: 0.0111 Epoch 6 Batch 350/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9645, Loss: 0.0098 Epoch 6 Batch 351/1077 - Train Accuracy: 0.9959, Validation Accuracy: 0.9645, Loss: 0.0104 Epoch 6 Batch 352/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9645, Loss: 0.0070 Epoch 6 Batch 353/1077 - Train Accuracy: 0.9856, Validation Accuracy: 0.9602, Loss: 0.0152 Epoch 6 Batch 354/1077 - Train Accuracy: 0.9812, Validation Accuracy: 0.9602, Loss: 0.0126 Epoch 6 Batch 355/1077 - Train Accuracy: 0.9825, Validation Accuracy: 0.9648, Loss: 0.0081 Epoch 6 Batch 356/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9648, Loss: 0.0114 Epoch 6 Batch 357/1077 - Train Accuracy: 0.9795, Validation Accuracy: 0.9648, Loss: 0.0109 Epoch 6 Batch 358/1077 - Train Accuracy: 0.9864, Validation Accuracy: 0.9695, Loss: 0.0128 Epoch 6 Batch 359/1077 - Train Accuracy: 0.9938, Validation Accuracy: 0.9695, Loss: 0.0087 Epoch 6 Batch 360/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9695, Loss: 0.0068 Epoch 6 Batch 361/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9645, Loss: 0.0138 Epoch 6 Batch 362/1077 - Train Accuracy: 0.9825, Validation Accuracy: 0.9645, Loss: 0.0125 Epoch 6 Batch 363/1077 - Train Accuracy: 0.9660, Validation Accuracy: 0.9645, Loss: 0.0121 Epoch 6 Batch 364/1077 - Train Accuracy: 0.9664, Validation Accuracy: 0.9709, Loss: 0.0148 Epoch 6 Batch 365/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9709, Loss: 0.0062 Epoch 6 Batch 366/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9716, Loss: 0.0111 Epoch 6 Batch 367/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9716, Loss: 0.0071 Epoch 6 Batch 368/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9759, Loss: 0.0109 Epoch 6 Batch 369/1077 - Train Accuracy: 0.9816, Validation Accuracy: 0.9709, Loss: 0.0100 Epoch 6 Batch 370/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9709, Loss: 0.0091 
Epoch 6 Batch 371/1077 - Train Accuracy: 0.9965, Validation Accuracy: 0.9709, Loss: 0.0084 Epoch 6 Batch 372/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9688, Loss: 0.0044 Epoch 6 Batch 373/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9688, Loss: 0.0095 Epoch 6 Batch 374/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9759, Loss: 0.0116 Epoch 6 Batch 375/1077 - Train Accuracy: 0.9897, Validation Accuracy: 0.9755, Loss: 0.0083 Epoch 6 Batch 376/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9751, Loss: 0.0098 Epoch 6 Batch 377/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9702, Loss: 0.0092 Epoch 6 Batch 378/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9705, Loss: 0.0099 Epoch 6 Batch 379/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9702, Loss: 0.0130 Epoch 6 Batch 380/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9719, Loss: 0.0082 Epoch 6 Batch 381/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9719, Loss: 0.0126 Epoch 6 Batch 382/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9723, Loss: 0.0162 Epoch 6 Batch 383/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9773, Loss: 0.0109 Epoch 6 Batch 384/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9773, Loss: 0.0068 Epoch 6 Batch 385/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9776, Loss: 0.0111 Epoch 6 Batch 386/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9776, Loss: 0.0096 Epoch 6 Batch 387/1077 - Train Accuracy: 0.9965, Validation Accuracy: 0.9776, Loss: 0.0071 Epoch 6 Batch 388/1077 - Train Accuracy: 0.9609, Validation Accuracy: 0.9776, Loss: 0.0143 Epoch 6 Batch 389/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9762, Loss: 0.0146 Epoch 6 Batch 390/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9730, Loss: 0.0136 Epoch 6 Batch 391/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9734, Loss: 0.0117 Epoch 6 Batch 392/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9734, Loss: 
0.0095 Epoch 6 Batch 393/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9759, Loss: 0.0076 Epoch 6 Batch 394/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9712, Loss: 0.0104 Epoch 6 Batch 395/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9712, Loss: 0.0079 Epoch 6 Batch 396/1077 - Train Accuracy: 0.9711, Validation Accuracy: 0.9659, Loss: 0.0127 Epoch 6 Batch 397/1077 - Train Accuracy: 0.9803, Validation Accuracy: 0.9709, Loss: 0.0159 Epoch 6 Batch 398/1077 - Train Accuracy: 0.9811, Validation Accuracy: 0.9709, Loss: 0.0100 Epoch 6 Batch 399/1077 - Train Accuracy: 0.9712, Validation Accuracy: 0.9709, Loss: 0.0115 Epoch 6 Batch 400/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9716, Loss: 0.0143 Epoch 6 Batch 401/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9698, Loss: 0.0091 Epoch 6 Batch 402/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9695, Loss: 0.0071 Epoch 6 Batch 403/1077 - Train Accuracy: 0.9746, Validation Accuracy: 0.9673, Loss: 0.0252 Epoch 6 Batch 404/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9670, Loss: 0.0161 Epoch 6 Batch 405/1077 - Train Accuracy: 0.9881, Validation Accuracy: 0.9666, Loss: 0.0122 Epoch 6 Batch 406/1077 - Train Accuracy: 0.9667, Validation Accuracy: 0.9712, Loss: 0.0149 Epoch 6 Batch 407/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9712, Loss: 0.0102 Epoch 6 Batch 408/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9716, Loss: 0.0125 Epoch 6 Batch 409/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9695, Loss: 0.0162 Epoch 6 Batch 410/1077 - Train Accuracy: 0.9745, Validation Accuracy: 0.9719, Loss: 0.0217 Epoch 6 Batch 411/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9627, Loss: 0.0135 Epoch 6 Batch 412/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9556, Loss: 0.0073 Epoch 6 Batch 413/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9553, Loss: 0.0087 Epoch 6 Batch 414/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9553, 
Loss: 0.0123 Epoch 6 Batch 415/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9567, Loss: 0.0137 Epoch 6 Batch 416/1077 - Train Accuracy: 0.9703, Validation Accuracy: 0.9613, Loss: 0.0106 Epoch 6 Batch 417/1077 - Train Accuracy: 0.9734, Validation Accuracy: 0.9595, Loss: 0.0202 Epoch 6 Batch 418/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9638, Loss: 0.0079 Epoch 6 Batch 419/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9666, Loss: 0.0117 Epoch 6 Batch 420/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9666, Loss: 0.0078 Epoch 6 Batch 421/1077 - Train Accuracy: 0.9957, Validation Accuracy: 0.9684, Loss: 0.0136 Epoch 6 Batch 422/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9684, Loss: 0.0116 Epoch 6 Batch 423/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9688, Loss: 0.0162 Epoch 6 Batch 424/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9688, Loss: 0.0100 Epoch 6 Batch 425/1077 - Train Accuracy: 0.9788, Validation Accuracy: 0.9688, Loss: 0.0071 Epoch 6 Batch 426/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9638, Loss: 0.0102 Epoch 6 Batch 427/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9616, Loss: 0.0102 Epoch 6 Batch 428/1077 - Train Accuracy: 0.9825, Validation Accuracy: 0.9595, Loss: 0.0126 Epoch 6 Batch 429/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9595, Loss: 0.0055 Epoch 6 Batch 430/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9595, Loss: 0.0088 Epoch 6 Batch 431/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9624, Loss: 0.0103 Epoch 6 Batch 432/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9673, Loss: 0.0127 Epoch 6 Batch 433/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9744, Loss: 0.0143 Epoch 6 Batch 434/1077 - Train Accuracy: 0.9812, Validation Accuracy: 0.9734, Loss: 0.0109 Epoch 6 Batch 435/1077 - Train Accuracy: 0.9885, Validation Accuracy: 0.9787, Loss: 0.0126 Epoch 6 Batch 436/1077 - Train Accuracy: 0.9885, Validation Accuracy: 
0.9716, Loss: 0.0152 Epoch 6 Batch 437/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9666, Loss: 0.0076 Epoch 6 Batch 438/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9634, Loss: 0.0079 Epoch 6 Batch 439/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9634, Loss: 0.0110 Epoch 6 Batch 440/1077 - Train Accuracy: 0.9734, Validation Accuracy: 0.9634, Loss: 0.0122 Epoch 6 Batch 441/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9677, Loss: 0.0102 Epoch 6 Batch 442/1077 - Train Accuracy: 0.9788, Validation Accuracy: 0.9744, Loss: 0.0118 Epoch 6 Batch 443/1077 - Train Accuracy: 0.9885, Validation Accuracy: 0.9748, Loss: 0.0085 Epoch 6 Batch 444/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9719, Loss: 0.0097 Epoch 6 Batch 445/1077 - Train Accuracy: 0.9873, Validation Accuracy: 0.9677, Loss: 0.0100 Epoch 6 Batch 446/1077 - Train Accuracy: 0.9944, Validation Accuracy: 0.9677, Loss: 0.0086 Epoch 6 Batch 447/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9677, Loss: 0.0092 Epoch 6 Batch 448/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9677, Loss: 0.0187 Epoch 6 Batch 449/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9585, Loss: 0.0122 Epoch 6 Batch 450/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9631, Loss: 0.0168 Epoch 6 Batch 451/1077 - Train Accuracy: 0.9851, Validation Accuracy: 0.9631, Loss: 0.0077 Epoch 6 Batch 452/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9631, Loss: 0.0134 Epoch 6 Batch 453/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9631, Loss: 0.0142 Epoch 6 Batch 454/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9631, Loss: 0.0111 Epoch 6 Batch 455/1077 - Train Accuracy: 0.9858, Validation Accuracy: 0.9656, Loss: 0.0150 Epoch 6 Batch 456/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9606, Loss: 0.0110 Epoch 6 Batch 457/1077 - Train Accuracy: 0.9847, Validation Accuracy: 0.9606, Loss: 0.0078 Epoch 6 Batch 458/1077 - Train Accuracy: 0.9797, Validation 
Accuracy: 0.9638, Loss: 0.0143
Epoch 6 Batch 459/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9638, Loss: 0.0115
Epoch 6 Batch 460/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9588, Loss: 0.0114
Epoch 6 Batch 461/1077 - Train Accuracy: 0.9746, Validation Accuracy: 0.9560, Loss: 0.0130
Epoch 6 Batch 462/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9517, Loss: 0.0095
...
Epoch 6 Batch 915/1077 - Train Accuracy: 0.9811, Validation Accuracy: 0.9688, Loss: 0.0078
Epoch 6 Batch 916/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9688, Loss: 0.0110
Epoch 6 Batch 917/1077 - Train Accuracy: 0.9812, Validation Accuracy: 0.9737, Loss: 0.0111
Epoch 6 Batch 918/1077 - Train Accuracy: 0.9940, Validation Accuracy: 0.9741, Loss: 0.0082
Epoch
6 Batch 919/1077 - Train Accuracy: 0.9951, Validation Accuracy: 0.9695, Loss: 0.0061 Epoch 6 Batch 920/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9744, Loss: 0.0080 Epoch 6 Batch 921/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9737, Loss: 0.0111 Epoch 6 Batch 922/1077 - Train Accuracy: 0.9799, Validation Accuracy: 0.9798, Loss: 0.0118 Epoch 6 Batch 923/1077 - Train Accuracy: 0.9984, Validation Accuracy: 0.9748, Loss: 0.0080 Epoch 6 Batch 924/1077 - Train Accuracy: 0.9778, Validation Accuracy: 0.9759, Loss: 0.0179 Epoch 6 Batch 925/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9762, Loss: 0.0089 Epoch 6 Batch 926/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9709, Loss: 0.0090 Epoch 6 Batch 927/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9734, Loss: 0.0146 Epoch 6 Batch 928/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9787, Loss: 0.0117 Epoch 6 Batch 929/1077 - Train Accuracy: 0.9969, Validation Accuracy: 0.9801, Loss: 0.0068 Epoch 6 Batch 930/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9801, Loss: 0.0113 Epoch 6 Batch 931/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9801, Loss: 0.0083 Epoch 6 Batch 932/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9826, Loss: 0.0061 Epoch 6 Batch 933/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9872, Loss: 0.0108 Epoch 6 Batch 934/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9869, Loss: 0.0119 Epoch 6 Batch 935/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9886, Loss: 0.0079 Epoch 6 Batch 936/1077 - Train Accuracy: 0.9877, Validation Accuracy: 0.9876, Loss: 0.0110 Epoch 6 Batch 937/1077 - Train Accuracy: 0.9868, Validation Accuracy: 0.9826, Loss: 0.0140 Epoch 6 Batch 938/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9858, Loss: 0.0105 Epoch 6 Batch 939/1077 - Train Accuracy: 0.9770, Validation Accuracy: 0.9762, Loss: 0.0138 Epoch 6 Batch 940/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9773, Loss: 0.0087 
Epoch 6 Batch 941/1077 - Train Accuracy: 0.9851, Validation Accuracy: 0.9773, Loss: 0.0110 Epoch 6 Batch 942/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9822, Loss: 0.0128 Epoch 6 Batch 943/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9819, Loss: 0.0097 Epoch 6 Batch 944/1077 - Train Accuracy: 0.9896, Validation Accuracy: 0.9773, Loss: 0.0074 Epoch 6 Batch 945/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9826, Loss: 0.0069 Epoch 6 Batch 946/1077 - Train Accuracy: 0.9992, Validation Accuracy: 0.9808, Loss: 0.0067 Epoch 6 Batch 947/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9808, Loss: 0.0093 Epoch 6 Batch 948/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9819, Loss: 0.0123 Epoch 6 Batch 949/1077 - Train Accuracy: 0.9885, Validation Accuracy: 0.9819, Loss: 0.0103 Epoch 6 Batch 950/1077 - Train Accuracy: 0.9892, Validation Accuracy: 0.9819, Loss: 0.0056 Epoch 6 Batch 951/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9766, Loss: 0.0109 Epoch 6 Batch 952/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9716, Loss: 0.0058 Epoch 6 Batch 953/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9716, Loss: 0.0100 Epoch 6 Batch 954/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9769, Loss: 0.0108 Epoch 6 Batch 955/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9822, Loss: 0.0134 Epoch 6 Batch 956/1077 - Train Accuracy: 0.9680, Validation Accuracy: 0.9822, Loss: 0.0171 Epoch 6 Batch 957/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9822, Loss: 0.0040 Epoch 6 Batch 958/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9773, Loss: 0.0079 Epoch 6 Batch 959/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9723, Loss: 0.0104 Epoch 6 Batch 960/1077 - Train Accuracy: 0.9929, Validation Accuracy: 0.9723, Loss: 0.0084 Epoch 6 Batch 961/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9751, Loss: 0.0082 Epoch 6 Batch 962/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9751, Loss: 
0.0094 Epoch 6 Batch 963/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9751, Loss: 0.0126 Epoch 6 Batch 964/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9751, Loss: 0.0109 Epoch 6 Batch 965/1077 - Train Accuracy: 0.9749, Validation Accuracy: 0.9776, Loss: 0.0163 Epoch 6 Batch 966/1077 - Train Accuracy: 0.9878, Validation Accuracy: 0.9776, Loss: 0.0088 Epoch 6 Batch 967/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9776, Loss: 0.0094 Epoch 6 Batch 968/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9776, Loss: 0.0124 Epoch 6 Batch 969/1077 - Train Accuracy: 0.9654, Validation Accuracy: 0.9826, Loss: 0.0170 Epoch 6 Batch 970/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9826, Loss: 0.0111 Epoch 6 Batch 971/1077 - Train Accuracy: 0.9885, Validation Accuracy: 0.9826, Loss: 0.0108 Epoch 6 Batch 972/1077 - Train Accuracy: 0.9812, Validation Accuracy: 0.9755, Loss: 0.0110 Epoch 6 Batch 973/1077 - Train Accuracy: 0.9900, Validation Accuracy: 0.9755, Loss: 0.0073 Epoch 6 Batch 974/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9751, Loss: 0.0068 Epoch 6 Batch 975/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9751, Loss: 0.0077 Epoch 6 Batch 976/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9737, Loss: 0.0082 Epoch 6 Batch 977/1077 - Train Accuracy: 0.9957, Validation Accuracy: 0.9734, Loss: 0.0047 Epoch 6 Batch 978/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9734, Loss: 0.0101 Epoch 6 Batch 979/1077 - Train Accuracy: 0.9737, Validation Accuracy: 0.9734, Loss: 0.0134 Epoch 6 Batch 980/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9787, Loss: 0.0090 Epoch 6 Batch 981/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9787, Loss: 0.0109 Epoch 6 Batch 982/1077 - Train Accuracy: 0.9874, Validation Accuracy: 0.9741, Loss: 0.0076 Epoch 6 Batch 983/1077 - Train Accuracy: 0.9889, Validation Accuracy: 0.9741, Loss: 0.0097 Epoch 6 Batch 984/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9734, 
Loss: 0.0119 Epoch 6 Batch 985/1077 - Train Accuracy: 0.9992, Validation Accuracy: 0.9737, Loss: 0.0070 Epoch 6 Batch 986/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9741, Loss: 0.0104 Epoch 6 Batch 987/1077 - Train Accuracy: 0.9821, Validation Accuracy: 0.9741, Loss: 0.0066 Epoch 6 Batch 988/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9741, Loss: 0.0127 Epoch 6 Batch 989/1077 - Train Accuracy: 0.9668, Validation Accuracy: 0.9737, Loss: 0.0114 Epoch 6 Batch 990/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9751, Loss: 0.0068 Epoch 6 Batch 991/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9751, Loss: 0.0078 Epoch 6 Batch 992/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9709, Loss: 0.0163 Epoch 6 Batch 993/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9709, Loss: 0.0072 Epoch 6 Batch 994/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9613, Loss: 0.0116 Epoch 6 Batch 995/1077 - Train Accuracy: 0.9896, Validation Accuracy: 0.9659, Loss: 0.0075 Epoch 6 Batch 996/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9680, Loss: 0.0081 Epoch 6 Batch 997/1077 - Train Accuracy: 0.9951, Validation Accuracy: 0.9723, Loss: 0.0096 Epoch 6 Batch 998/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9656, Loss: 0.0114 Epoch 6 Batch 999/1077 - Train Accuracy: 0.9973, Validation Accuracy: 0.9705, Loss: 0.0103 Epoch 6 Batch 1000/1077 - Train Accuracy: 0.9810, Validation Accuracy: 0.9705, Loss: 0.0092 Epoch 6 Batch 1001/1077 - Train Accuracy: 0.9890, Validation Accuracy: 0.9656, Loss: 0.0076 Epoch 6 Batch 1002/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9659, Loss: 0.0050 Epoch 6 Batch 1003/1077 - Train Accuracy: 0.9889, Validation Accuracy: 0.9606, Loss: 0.0091 Epoch 6 Batch 1004/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9606, Loss: 0.0132 Epoch 6 Batch 1005/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9606, Loss: 0.0098 Epoch 6 Batch 1006/1077 - Train Accuracy: 0.9906, Validation 
Accuracy: 0.9606, Loss: 0.0107 Epoch 6 Batch 1007/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9602, Loss: 0.0091 Epoch 6 Batch 1008/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9602, Loss: 0.0123 Epoch 6 Batch 1009/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9666, Loss: 0.0087 Epoch 6 Batch 1010/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9670, Loss: 0.0088 Epoch 6 Batch 1011/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9613, Loss: 0.0109 Epoch 6 Batch 1012/1077 - Train Accuracy: 0.9821, Validation Accuracy: 0.9613, Loss: 0.0091 Epoch 6 Batch 1013/1077 - Train Accuracy: 0.9937, Validation Accuracy: 0.9574, Loss: 0.0095 Epoch 6 Batch 1014/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9602, Loss: 0.0113 Epoch 6 Batch 1015/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9599, Loss: 0.0126 Epoch 6 Batch 1016/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9599, Loss: 0.0106 Epoch 6 Batch 1017/1077 - Train Accuracy: 0.9877, Validation Accuracy: 0.9531, Loss: 0.0063 Epoch 6 Batch 1018/1077 - Train Accuracy: 0.9866, Validation Accuracy: 0.9581, Loss: 0.0085 Epoch 6 Batch 1019/1077 - Train Accuracy: 0.9720, Validation Accuracy: 0.9563, Loss: 0.0184 Epoch 6 Batch 1020/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9666, Loss: 0.0127 Epoch 6 Batch 1021/1077 - Train Accuracy: 0.9814, Validation Accuracy: 0.9663, Loss: 0.0084 Epoch 6 Batch 1022/1077 - Train Accuracy: 0.9955, Validation Accuracy: 0.9716, Loss: 0.0126 Epoch 6 Batch 1023/1077 - Train Accuracy: 0.9737, Validation Accuracy: 0.9741, Loss: 0.0165 Epoch 6 Batch 1024/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9645, Loss: 0.0106 Epoch 6 Batch 1025/1077 - Train Accuracy: 0.9661, Validation Accuracy: 0.9641, Loss: 0.0129 Epoch 6 Batch 1026/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9656, Loss: 0.0157 Epoch 6 Batch 1027/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9680, Loss: 0.0118 Epoch 6 Batch 1028/1077 - Train 
Accuracy: 0.9851, Validation Accuracy: 0.9798, Loss: 0.0119 Epoch 6 Batch 1029/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9851, Loss: 0.0101 Epoch 6 Batch 1030/1077 - Train Accuracy: 0.9973, Validation Accuracy: 0.9851, Loss: 0.0070 Epoch 6 Batch 1031/1077 - Train Accuracy: 0.9799, Validation Accuracy: 0.9801, Loss: 0.0113 Epoch 6 Batch 1032/1077 - Train Accuracy: 0.9743, Validation Accuracy: 0.9801, Loss: 0.0154 Epoch 6 Batch 1033/1077 - Train Accuracy: 0.9792, Validation Accuracy: 0.9741, Loss: 0.0115 Epoch 6 Batch 1034/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9688, Loss: 0.0095 Epoch 6 Batch 1035/1077 - Train Accuracy: 0.9851, Validation Accuracy: 0.9730, Loss: 0.0091 Epoch 6 Batch 1036/1077 - Train Accuracy: 0.9829, Validation Accuracy: 0.9730, Loss: 0.0088 Epoch 6 Batch 1037/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9773, Loss: 0.0120 Epoch 6 Batch 1038/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9723, Loss: 0.0128 Epoch 6 Batch 1039/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9723, Loss: 0.0108 Epoch 6 Batch 1040/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9702, Loss: 0.0117 Epoch 6 Batch 1041/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9705, Loss: 0.0191 Epoch 6 Batch 1042/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9698, Loss: 0.0099 Epoch 6 Batch 1043/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9602, Loss: 0.0102 Epoch 6 Batch 1044/1077 - Train Accuracy: 0.9977, Validation Accuracy: 0.9602, Loss: 0.0106 Epoch 6 Batch 1045/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9602, Loss: 0.0101 Epoch 6 Batch 1046/1077 - Train Accuracy: 0.9816, Validation Accuracy: 0.9599, Loss: 0.0079 Epoch 6 Batch 1047/1077 - Train Accuracy: 0.9980, Validation Accuracy: 0.9602, Loss: 0.0099 Epoch 6 Batch 1048/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9599, Loss: 0.0100 Epoch 6 Batch 1049/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9556, Loss: 0.0077 Epoch 6 
Batch 1050/1077 - Train Accuracy: 0.9980, Validation Accuracy: 0.9560, Loss: 0.0054 Epoch 6 Batch 1051/1077 - Train Accuracy: 0.9833, Validation Accuracy: 0.9560, Loss: 0.0138 Epoch 6 Batch 1052/1077 - Train Accuracy: 0.9911, Validation Accuracy: 0.9560, Loss: 0.0095 Epoch 6 Batch 1053/1077 - Train Accuracy: 0.9795, Validation Accuracy: 0.9560, Loss: 0.0138 Epoch 6 Batch 1054/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9620, Loss: 0.0114 Epoch 6 Batch 1055/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9620, Loss: 0.0081 Epoch 6 Batch 1056/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9641, Loss: 0.0065 Epoch 6 Batch 1057/1077 - Train Accuracy: 0.9947, Validation Accuracy: 0.9641, Loss: 0.0123 Epoch 6 Batch 1058/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9641, Loss: 0.0117 Epoch 6 Batch 1059/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9645, Loss: 0.0109 Epoch 6 Batch 1060/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9645, Loss: 0.0085 Epoch 6 Batch 1061/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9645, Loss: 0.0133 Epoch 6 Batch 1062/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9645, Loss: 0.0114 Epoch 6 Batch 1063/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9734, Loss: 0.0118 Epoch 6 Batch 1064/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9730, Loss: 0.0102 Epoch 6 Batch 1065/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9734, Loss: 0.0114 Epoch 6 Batch 1066/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9734, Loss: 0.0097 Epoch 6 Batch 1067/1077 - Train Accuracy: 0.9707, Validation Accuracy: 0.9734, Loss: 0.0125 Epoch 6 Batch 1068/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9734, Loss: 0.0075 Epoch 6 Batch 1069/1077 - Train Accuracy: 0.9933, Validation Accuracy: 0.9734, Loss: 0.0089 Epoch 6 Batch 1070/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9734, Loss: 0.0078 Epoch 6 Batch 1071/1077 - Train Accuracy: 0.9965, Validation Accuracy: 
0.9663, Loss: 0.0099 Epoch 6 Batch 1072/1077 - Train Accuracy: 0.9814, Validation Accuracy: 0.9712, Loss: 0.0133 Epoch 6 Batch 1073/1077 - Train Accuracy: 0.9816, Validation Accuracy: 0.9712, Loss: 0.0117 Epoch 6 Batch 1074/1077 - Train Accuracy: 0.9959, Validation Accuracy: 0.9663, Loss: 0.0102 Epoch 6 Batch 1075/1077 - Train Accuracy: 0.9803, Validation Accuracy: 0.9712, Loss: 0.0127 Epoch 7 Batch 1/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9659, Loss: 0.0103 Epoch 7 Batch 2/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9663, Loss: 0.0081 Epoch 7 Batch 3/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9616, Loss: 0.0118 Epoch 7 Batch 4/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9613, Loss: 0.0083 Epoch 7 Batch 5/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9659, Loss: 0.0144 Epoch 7 Batch 6/1077 - Train Accuracy: 0.9957, Validation Accuracy: 0.9659, Loss: 0.0058 Epoch 7 Batch 7/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9613, Loss: 0.0088 Epoch 7 Batch 8/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9613, Loss: 0.0086 Epoch 7 Batch 9/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9609, Loss: 0.0114 Epoch 7 Batch 10/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9609, Loss: 0.0102 Epoch 7 Batch 11/1077 - Train Accuracy: 0.9833, Validation Accuracy: 0.9609, Loss: 0.0145 Epoch 7 Batch 12/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9634, Loss: 0.0086 Epoch 7 Batch 13/1077 - Train Accuracy: 0.9937, Validation Accuracy: 0.9723, Loss: 0.0081 Epoch 7 Batch 14/1077 - Train Accuracy: 0.9940, Validation Accuracy: 0.9723, Loss: 0.0081 Epoch 7 Batch 15/1077 - Train Accuracy: 0.9965, Validation Accuracy: 0.9723, Loss: 0.0088 Epoch 7 Batch 16/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9670, Loss: 0.0120 Epoch 7 Batch 17/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9695, Loss: 0.0088 Epoch 7 Batch 18/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9695, Loss: 0.0083 
Epoch 7 Batch 19/1077 - Train Accuracy: 0.9754, Validation Accuracy: 0.9695, Loss: 0.0103 Epoch 7 Batch 20/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9727, Loss: 0.0082 Epoch 7 Batch 21/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9730, Loss: 0.0092 Epoch 7 Batch 22/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9730, Loss: 0.0100 Epoch 7 Batch 23/1077 - Train Accuracy: 0.9938, Validation Accuracy: 0.9730, Loss: 0.0100 Epoch 7 Batch 24/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9730, Loss: 0.0119 Epoch 7 Batch 25/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9730, Loss: 0.0056 Epoch 7 Batch 26/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9730, Loss: 0.0134 Epoch 7 Batch 27/1077 - Train Accuracy: 0.9866, Validation Accuracy: 0.9730, Loss: 0.0078 Epoch 7 Batch 28/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9727, Loss: 0.0099 Epoch 7 Batch 29/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9744, Loss: 0.0123 Epoch 7 Batch 30/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9744, Loss: 0.0045 Epoch 7 Batch 31/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9698, Loss: 0.0090 Epoch 7 Batch 32/1077 - Train Accuracy: 0.9743, Validation Accuracy: 0.9744, Loss: 0.0109 Epoch 7 Batch 33/1077 - Train Accuracy: 0.9862, Validation Accuracy: 0.9748, Loss: 0.0112 Epoch 7 Batch 34/1077 - Train Accuracy: 0.9961, Validation Accuracy: 0.9748, Loss: 0.0112 Epoch 7 Batch 35/1077 - Train Accuracy: 0.9961, Validation Accuracy: 0.9748, Loss: 0.0090 Epoch 7 Batch 36/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9751, Loss: 0.0093 Epoch 7 Batch 37/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9751, Loss: 0.0119 Epoch 7 Batch 38/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9751, Loss: 0.0134 Epoch 7 Batch 39/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9748, Loss: 0.0101 Epoch 7 Batch 40/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9748, Loss: 0.0119 Epoch 7 Batch 
41/1077 - Train Accuracy: 0.9888, Validation Accuracy: 0.9748, Loss: 0.0086 Epoch 7 Batch 42/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9744, Loss: 0.0183 Epoch 7 Batch 43/1077 - Train Accuracy: 0.9897, Validation Accuracy: 0.9744, Loss: 0.0062 Epoch 7 Batch 44/1077 - Train Accuracy: 0.9979, Validation Accuracy: 0.9744, Loss: 0.0058 Epoch 7 Batch 45/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9773, Loss: 0.0120 Epoch 7 Batch 46/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9773, Loss: 0.0121 Epoch 7 Batch 47/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9773, Loss: 0.0094 Epoch 7 Batch 48/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9773, Loss: 0.0145 Epoch 7 Batch 49/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9773, Loss: 0.0157 Epoch 7 Batch 50/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9773, Loss: 0.0094 Epoch 7 Batch 51/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9773, Loss: 0.0102 Epoch 7 Batch 52/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9727, Loss: 0.0092 Epoch 7 Batch 53/1077 - Train Accuracy: 0.9716, Validation Accuracy: 0.9727, Loss: 0.0120 Epoch 7 Batch 54/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9727, Loss: 0.0144 Epoch 7 Batch 55/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9723, Loss: 0.0118 Epoch 7 Batch 56/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9727, Loss: 0.0051 Epoch 7 Batch 57/1077 - Train Accuracy: 0.9854, Validation Accuracy: 0.9727, Loss: 0.0055 Epoch 7 Batch 58/1077 - Train Accuracy: 0.9988, Validation Accuracy: 0.9709, Loss: 0.0104 Epoch 7 Batch 59/1077 - Train Accuracy: 0.9745, Validation Accuracy: 0.9712, Loss: 0.0086 Epoch 7 Batch 60/1077 - Train Accuracy: 0.9866, Validation Accuracy: 0.9684, Loss: 0.0054 Epoch 7 Batch 61/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9684, Loss: 0.0100 Epoch 7 Batch 62/1077 - Train Accuracy: 0.9770, Validation Accuracy: 0.9684, Loss: 0.0152 Epoch 7 Batch 63/1077 - Train 
Accuracy: 0.9922, Validation Accuracy: 0.9684, Loss: 0.0073 Epoch 7 Batch 64/1077 - Train Accuracy: 0.9992, Validation Accuracy: 0.9684, Loss: 0.0075 Epoch 7 Batch 65/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9734, Loss: 0.0097 Epoch 7 Batch 66/1077 - Train Accuracy: 0.9955, Validation Accuracy: 0.9734, Loss: 0.0043 Epoch 7 Batch 67/1077 - Train Accuracy: 0.9751, Validation Accuracy: 0.9734, Loss: 0.0138 Epoch 7 Batch 68/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9762, Loss: 0.0112 Epoch 7 Batch 69/1077 - Train Accuracy: 0.9816, Validation Accuracy: 0.9762, Loss: 0.0168 Epoch 7 Batch 70/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9695, Loss: 0.0091 Epoch 7 Batch 71/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9691, Loss: 0.0064 Epoch 7 Batch 72/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9688, Loss: 0.0077 Epoch 7 Batch 73/1077 - Train Accuracy: 0.9992, Validation Accuracy: 0.9638, Loss: 0.0054 Epoch 7 Batch 74/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9638, Loss: 0.0087 Epoch 7 Batch 75/1077 - Train Accuracy: 0.9672, Validation Accuracy: 0.9638, Loss: 0.0166 Epoch 7 Batch 76/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9656, Loss: 0.0096 Epoch 7 Batch 77/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9656, Loss: 0.0096 Epoch 7 Batch 78/1077 - Train Accuracy: 0.9873, Validation Accuracy: 0.9673, Loss: 0.0093 Epoch 7 Batch 79/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9727, Loss: 0.0084 Epoch 7 Batch 80/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9727, Loss: 0.0089 Epoch 7 Batch 81/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9719, Loss: 0.0076 Epoch 7 Batch 82/1077 - Train Accuracy: 0.9893, Validation Accuracy: 0.9719, Loss: 0.0096 Epoch 7 Batch 83/1077 - Train Accuracy: 0.9774, Validation Accuracy: 0.9719, Loss: 0.0078 Epoch 7 Batch 84/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9723, Loss: 0.0105 Epoch 7 Batch 85/1077 - Train Accuracy: 0.9914, 
Validation Accuracy: 0.9723, Loss: 0.0073 Epoch 7 Batch 86/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9723, Loss: 0.0083 Epoch 7 Batch 87/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9723, Loss: 0.0113 Epoch 7 Batch 88/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9727, Loss: 0.0091 Epoch 7 Batch 89/1077 - Train Accuracy: 0.9969, Validation Accuracy: 0.9677, Loss: 0.0057 Epoch 7 Batch 90/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9673, Loss: 0.0077 Epoch 7 Batch 91/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9673, Loss: 0.0088 Epoch 7 Batch 92/1077 - Train Accuracy: 0.9825, Validation Accuracy: 0.9727, Loss: 0.0087 Epoch 7 Batch 93/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9727, Loss: 0.0066 Epoch 7 Batch 94/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9723, Loss: 0.0076 Epoch 7 Batch 95/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9709, Loss: 0.0070 Epoch 7 Batch 96/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9712, Loss: 0.0118 Epoch 7 Batch 97/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9712, Loss: 0.0097 Epoch 7 Batch 98/1077 - Train Accuracy: 0.9874, Validation Accuracy: 0.9663, Loss: 0.0098 Epoch 7 Batch 99/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9645, Loss: 0.0060 Epoch 7 Batch 100/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9645, Loss: 0.0083 Epoch 7 Batch 101/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9645, Loss: 0.0083 Epoch 7 Batch 102/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9645, Loss: 0.0069 Epoch 7 Batch 103/1077 - Train Accuracy: 0.9951, Validation Accuracy: 0.9695, Loss: 0.0088 Epoch 7 Batch 104/1077 - Train Accuracy: 0.9737, Validation Accuracy: 0.9645, Loss: 0.0129 Epoch 7 Batch 105/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9645, Loss: 0.0064 Epoch 7 Batch 106/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9677, Loss: 0.0104 Epoch 7 Batch 107/1077 - Train Accuracy: 0.9788, Validation 
Accuracy: 0.9677, Loss: 0.0113 Epoch 7 Batch 108/1077 - Train Accuracy: 0.9702, Validation Accuracy: 0.9656, Loss: 0.0118 Epoch 7 Batch 109/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9705, Loss: 0.0113 Epoch 7 Batch 110/1077 - Train Accuracy: 0.9938, Validation Accuracy: 0.9705, Loss: 0.0067 Epoch 7 Batch 111/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9705, Loss: 0.0086 Epoch 7 Batch 112/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9705, Loss: 0.0074 Epoch 7 Batch 113/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9705, Loss: 0.0109 Epoch 7 Batch 114/1077 - Train Accuracy: 0.9792, Validation Accuracy: 0.9709, Loss: 0.0069 Epoch 7 Batch 115/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9798, Loss: 0.0107 Epoch 7 Batch 116/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9798, Loss: 0.0141 Epoch 7 Batch 117/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9794, Loss: 0.0057 Epoch 7 Batch 118/1077 - Train Accuracy: 0.9984, Validation Accuracy: 0.9748, Loss: 0.0054 Epoch 7 Batch 119/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9691, Loss: 0.0088 Epoch 7 Batch 120/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9691, Loss: 0.0108 Epoch 7 Batch 121/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9691, Loss: 0.0115 Epoch 7 Batch 122/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9688, Loss: 0.0097 Epoch 7 Batch 123/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9688, Loss: 0.0076 Epoch 7 Batch 124/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9684, Loss: 0.0109 Epoch 7 Batch 125/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9684, Loss: 0.0172 Epoch 7 Batch 126/1077 - Train Accuracy: 0.9933, Validation Accuracy: 0.9688, Loss: 0.0087 Epoch 7 Batch 127/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9638, Loss: 0.0087 Epoch 7 Batch 128/1077 - Train Accuracy: 0.9974, Validation Accuracy: 0.9634, Loss: 0.0103 Epoch 7 Batch 129/1077 - Train Accuracy: 0.9906, 
Validation Accuracy: 0.9691, Loss: 0.0088 Epoch 7 Batch 130/1077 - Train Accuracy: 0.9952, Validation Accuracy: 0.9663, Loss: 0.0076 [per-batch log truncated for Epoch 7, batches 131–566: Train Accuracy ranged ~0.955–1.000, Validation Accuracy ~0.946–0.988, Loss ~0.004–0.024] Epoch 7 Batch 567/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9801, Loss: 0.0092 Epoch
7 Batch 568/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9801, Loss: 0.0061 Epoch 7 Batch 569/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9751, Loss: 0.0060 Epoch 7 Batch 570/1077 - Train Accuracy: 0.9947, Validation Accuracy: 0.9751, Loss: 0.0100 Epoch 7 Batch 571/1077 - Train Accuracy: 0.9866, Validation Accuracy: 0.9751, Loss: 0.0077 Epoch 7 Batch 572/1077 - Train Accuracy: 0.9892, Validation Accuracy: 0.9751, Loss: 0.0074 Epoch 7 Batch 573/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9755, Loss: 0.0183 Epoch 7 Batch 574/1077 - Train Accuracy: 0.9811, Validation Accuracy: 0.9755, Loss: 0.0130 Epoch 7 Batch 575/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9705, Loss: 0.0087 Epoch 7 Batch 576/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9755, Loss: 0.0086 Epoch 7 Batch 577/1077 - Train Accuracy: 0.9893, Validation Accuracy: 0.9751, Loss: 0.0093 Epoch 7 Batch 578/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9748, Loss: 0.0081 Epoch 7 Batch 579/1077 - Train Accuracy: 0.9961, Validation Accuracy: 0.9748, Loss: 0.0089 Epoch 7 Batch 580/1077 - Train Accuracy: 0.9877, Validation Accuracy: 0.9748, Loss: 0.0076 Epoch 7 Batch 581/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9748, Loss: 0.0049 Epoch 7 Batch 582/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9766, Loss: 0.0074 Epoch 7 Batch 583/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9766, Loss: 0.0082 Epoch 7 Batch 584/1077 - Train Accuracy: 0.9963, Validation Accuracy: 0.9766, Loss: 0.0058 Epoch 7 Batch 585/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9766, Loss: 0.0048 Epoch 7 Batch 586/1077 - Train Accuracy: 0.9951, Validation Accuracy: 0.9769, Loss: 0.0068 Epoch 7 Batch 587/1077 - Train Accuracy: 0.9907, Validation Accuracy: 0.9769, Loss: 0.0131 Epoch 7 Batch 588/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9698, Loss: 0.0079 Epoch 7 Batch 589/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9705, Loss: 0.0081 
Epoch 7 Batch 590/1077 - Train Accuracy: 0.9692, Validation Accuracy: 0.9702, Loss: 0.0087 Epoch 7 Batch 591/1077 - Train Accuracy: 0.9819, Validation Accuracy: 0.9702, Loss: 0.0084 Epoch 7 Batch 592/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9702, Loss: 0.0121 Epoch 7 Batch 593/1077 - Train Accuracy: 0.9661, Validation Accuracy: 0.9748, Loss: 0.0193 Epoch 7 Batch 594/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9751, Loss: 0.0145 Epoch 7 Batch 595/1077 - Train Accuracy: 0.9977, Validation Accuracy: 0.9751, Loss: 0.0060 Epoch 7 Batch 596/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9751, Loss: 0.0082 Epoch 7 Batch 597/1077 - Train Accuracy: 0.9988, Validation Accuracy: 0.9751, Loss: 0.0084 Epoch 7 Batch 598/1077 - Train Accuracy: 0.9799, Validation Accuracy: 0.9776, Loss: 0.0093 Epoch 7 Batch 599/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9709, Loss: 0.0139 Epoch 7 Batch 600/1077 - Train Accuracy: 0.9974, Validation Accuracy: 0.9709, Loss: 0.0096 Epoch 7 Batch 601/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9709, Loss: 0.0096 Epoch 7 Batch 602/1077 - Train Accuracy: 0.9988, Validation Accuracy: 0.9709, Loss: 0.0076 Epoch 7 Batch 603/1077 - Train Accuracy: 0.9907, Validation Accuracy: 0.9709, Loss: 0.0067 Epoch 7 Batch 604/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9709, Loss: 0.0106 Epoch 7 Batch 605/1077 - Train Accuracy: 0.9799, Validation Accuracy: 0.9773, Loss: 0.0144 Epoch 7 Batch 606/1077 - Train Accuracy: 0.9821, Validation Accuracy: 0.9773, Loss: 0.0106 Epoch 7 Batch 607/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9773, Loss: 0.0139 Epoch 7 Batch 608/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9773, Loss: 0.0106 Epoch 7 Batch 609/1077 - Train Accuracy: 0.9957, Validation Accuracy: 0.9773, Loss: 0.0114 Epoch 7 Batch 610/1077 - Train Accuracy: 0.9827, Validation Accuracy: 0.9794, Loss: 0.0092 Epoch 7 Batch 611/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9794, Loss: 
0.0111 Epoch 7 Batch 612/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9798, Loss: 0.0041 Epoch 7 Batch 613/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9769, Loss: 0.0117 Epoch 7 Batch 614/1077 - Train Accuracy: 0.9881, Validation Accuracy: 0.9798, Loss: 0.0076 Epoch 7 Batch 615/1077 - Train Accuracy: 0.9992, Validation Accuracy: 0.9798, Loss: 0.0066 Epoch 7 Batch 616/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9798, Loss: 0.0115 Epoch 7 Batch 617/1077 - Train Accuracy: 0.9866, Validation Accuracy: 0.9798, Loss: 0.0086 Epoch 7 Batch 618/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9741, Loss: 0.0113 Epoch 7 Batch 619/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9741, Loss: 0.0054 Epoch 7 Batch 620/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9737, Loss: 0.0134 Epoch 7 Batch 621/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9737, Loss: 0.0115 Epoch 7 Batch 622/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9737, Loss: 0.0127 Epoch 7 Batch 623/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9737, Loss: 0.0119 Epoch 7 Batch 624/1077 - Train Accuracy: 0.9892, Validation Accuracy: 0.9741, Loss: 0.0109 Epoch 7 Batch 625/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9741, Loss: 0.0081 Epoch 7 Batch 626/1077 - Train Accuracy: 0.9837, Validation Accuracy: 0.9741, Loss: 0.0095 Epoch 7 Batch 627/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9741, Loss: 0.0085 Epoch 7 Batch 628/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9691, Loss: 0.0098 Epoch 7 Batch 629/1077 - Train Accuracy: 0.9971, Validation Accuracy: 0.9737, Loss: 0.0075 Epoch 7 Batch 630/1077 - Train Accuracy: 0.9953, Validation Accuracy: 0.9737, Loss: 0.0087 Epoch 7 Batch 631/1077 - Train Accuracy: 0.9970, Validation Accuracy: 0.9741, Loss: 0.0078 Epoch 7 Batch 632/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9691, Loss: 0.0062 Epoch 7 Batch 633/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9691, 
Loss: 0.0083 Epoch 7 Batch 634/1077 - Train Accuracy: 0.9881, Validation Accuracy: 0.9691, Loss: 0.0050 Epoch 7 Batch 635/1077 - Train Accuracy: 0.9897, Validation Accuracy: 0.9688, Loss: 0.0103 Epoch 7 Batch 636/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9680, Loss: 0.0092 Epoch 7 Batch 637/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9680, Loss: 0.0097 Epoch 7 Batch 638/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9680, Loss: 0.0058 Epoch 7 Batch 639/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9684, Loss: 0.0164 Epoch 7 Batch 640/1077 - Train Accuracy: 0.9866, Validation Accuracy: 0.9638, Loss: 0.0106 Epoch 7 Batch 641/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9656, Loss: 0.0083 Epoch 7 Batch 642/1077 - Train Accuracy: 0.9989, Validation Accuracy: 0.9698, Loss: 0.0059 Epoch 7 Batch 643/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9698, Loss: 0.0098 Epoch 7 Batch 644/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9698, Loss: 0.0103 Epoch 7 Batch 645/1077 - Train Accuracy: 0.9877, Validation Accuracy: 0.9698, Loss: 0.0098 Epoch 7 Batch 646/1077 - Train Accuracy: 0.9903, Validation Accuracy: 0.9698, Loss: 0.0067 Epoch 7 Batch 647/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9698, Loss: 0.0092 Epoch 7 Batch 648/1077 - Train Accuracy: 0.9948, Validation Accuracy: 0.9698, Loss: 0.0070 Epoch 7 Batch 649/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9684, Loss: 0.0110 Epoch 7 Batch 650/1077 - Train Accuracy: 0.9980, Validation Accuracy: 0.9734, Loss: 0.0046 Epoch 7 Batch 651/1077 - Train Accuracy: 0.9888, Validation Accuracy: 0.9748, Loss: 0.0070 Epoch 7 Batch 652/1077 - Train Accuracy: 0.9799, Validation Accuracy: 0.9790, Loss: 0.0107 Epoch 7 Batch 653/1077 - Train Accuracy: 0.9754, Validation Accuracy: 0.9751, Loss: 0.0108 Epoch 7 Batch 654/1077 - Train Accuracy: 0.9988, Validation Accuracy: 0.9744, Loss: 0.0080 Epoch 7 Batch 655/1077 - Train Accuracy: 0.9836, Validation Accuracy: 
0.9744, Loss: 0.0116 Epoch 7 Batch 656/1077 - Train Accuracy: 0.9980, Validation Accuracy: 0.9741, Loss: 0.0073 Epoch 7 Batch 657/1077 - Train Accuracy: 0.9959, Validation Accuracy: 0.9741, Loss: 0.0071 Epoch 7 Batch 658/1077 - Train Accuracy: 0.9989, Validation Accuracy: 0.9744, Loss: 0.0053 Epoch 7 Batch 659/1077 - Train Accuracy: 0.9933, Validation Accuracy: 0.9744, Loss: 0.0076 Epoch 7 Batch 660/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9744, Loss: 0.0086 Epoch 7 Batch 661/1077 - Train Accuracy: 0.9948, Validation Accuracy: 0.9798, Loss: 0.0081 Epoch 7 Batch 662/1077 - Train Accuracy: 0.9933, Validation Accuracy: 0.9840, Loss: 0.0111 Epoch 7 Batch 663/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9840, Loss: 0.0056 Epoch 7 Batch 664/1077 - Train Accuracy: 0.9961, Validation Accuracy: 0.9840, Loss: 0.0084 Epoch 7 Batch 665/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9790, Loss: 0.0070 Epoch 7 Batch 666/1077 - Train Accuracy: 0.9975, Validation Accuracy: 0.9790, Loss: 0.0101 Epoch 7 Batch 667/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9790, Loss: 0.0096 Epoch 7 Batch 668/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9787, Loss: 0.0139 Epoch 7 Batch 669/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9762, Loss: 0.0081 Epoch 7 Batch 670/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9790, Loss: 0.0062 Epoch 7 Batch 671/1077 - Train Accuracy: 0.9794, Validation Accuracy: 0.9790, Loss: 0.0109 Epoch 7 Batch 672/1077 - Train Accuracy: 0.9847, Validation Accuracy: 0.9830, Loss: 0.0106 Epoch 7 Batch 673/1077 - Train Accuracy: 0.9751, Validation Accuracy: 0.9780, Loss: 0.0080 Epoch 7 Batch 674/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9780, Loss: 0.0098 Epoch 7 Batch 675/1077 - Train Accuracy: 0.9829, Validation Accuracy: 0.9730, Loss: 0.0130 Epoch 7 Batch 676/1077 - Train Accuracy: 0.9823, Validation Accuracy: 0.9730, Loss: 0.0087 Epoch 7 Batch 677/1077 - Train Accuracy: 0.9918, Validation 
Accuracy: 0.9730, Loss: 0.0089 Epoch 7 Batch 678/1077 - Train Accuracy: 0.9937, Validation Accuracy: 0.9730, Loss: 0.0070 Epoch 7 Batch 679/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9741, Loss: 0.0069 Epoch 7 Batch 680/1077 - Train Accuracy: 0.9825, Validation Accuracy: 0.9741, Loss: 0.0108 Epoch 7 Batch 681/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9741, Loss: 0.0110 Epoch 7 Batch 682/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9723, Loss: 0.0115 Epoch 7 Batch 683/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9727, Loss: 0.0071 Epoch 7 Batch 684/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9719, Loss: 0.0096 Epoch 7 Batch 685/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9723, Loss: 0.0111 Epoch 7 Batch 686/1077 - Train Accuracy: 0.9933, Validation Accuracy: 0.9723, Loss: 0.0051 Epoch 7 Batch 687/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9723, Loss: 0.0103 Epoch 7 Batch 688/1077 - Train Accuracy: 0.9961, Validation Accuracy: 0.9723, Loss: 0.0075 Epoch 7 Batch 689/1077 - Train Accuracy: 0.9938, Validation Accuracy: 0.9673, Loss: 0.0046 Epoch 7 Batch 690/1077 - Train Accuracy: 0.9688, Validation Accuracy: 0.9723, Loss: 0.0154 Epoch 7 Batch 691/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9723, Loss: 0.0102 Epoch 7 Batch 692/1077 - Train Accuracy: 0.9874, Validation Accuracy: 0.9723, Loss: 0.0075 Epoch 7 Batch 693/1077 - Train Accuracy: 0.9959, Validation Accuracy: 0.9723, Loss: 0.0084 Epoch 7 Batch 694/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9727, Loss: 0.0079 Epoch 7 Batch 695/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9727, Loss: 0.0104 Epoch 7 Batch 696/1077 - Train Accuracy: 0.9873, Validation Accuracy: 0.9727, Loss: 0.0089 Epoch 7 Batch 697/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9776, Loss: 0.0073 Epoch 7 Batch 698/1077 - Train Accuracy: 0.9970, Validation Accuracy: 0.9762, Loss: 0.0080 Epoch 7 Batch 699/1077 - Train Accuracy: 0.9984, 
Validation Accuracy: 0.9808, Loss: 0.0063 Epoch 7 Batch 700/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9808, Loss: 0.0062 Epoch 7 Batch 701/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9808, Loss: 0.0076 Epoch 7 Batch 702/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9815, Loss: 0.0136 Epoch 7 Batch 703/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9815, Loss: 0.0110 Epoch 7 Batch 704/1077 - Train Accuracy: 0.9723, Validation Accuracy: 0.9815, Loss: 0.0112 Epoch 7 Batch 705/1077 - Train Accuracy: 0.9881, Validation Accuracy: 0.9815, Loss: 0.0107 Epoch 7 Batch 706/1077 - Train Accuracy: 0.9821, Validation Accuracy: 0.9815, Loss: 0.0226 Epoch 7 Batch 707/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9815, Loss: 0.0093 Epoch 7 Batch 708/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9830, Loss: 0.0100 Epoch 7 Batch 709/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9780, Loss: 0.0103 Epoch 7 Batch 710/1077 - Train Accuracy: 0.9953, Validation Accuracy: 0.9776, Loss: 0.0052 Epoch 7 Batch 711/1077 - Train Accuracy: 0.9957, Validation Accuracy: 0.9776, Loss: 0.0113 Epoch 7 Batch 712/1077 - Train Accuracy: 0.9949, Validation Accuracy: 0.9776, Loss: 0.0059 Epoch 7 Batch 713/1077 - Train Accuracy: 0.9869, Validation Accuracy: 0.9776, Loss: 0.0048 Epoch 7 Batch 714/1077 - Train Accuracy: 0.9814, Validation Accuracy: 0.9776, Loss: 0.0087 Epoch 7 Batch 715/1077 - Train Accuracy: 0.9727, Validation Accuracy: 0.9776, Loss: 0.0102 Epoch 7 Batch 716/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9727, Loss: 0.0064 Epoch 7 Batch 717/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9727, Loss: 0.0055 Epoch 7 Batch 718/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9727, Loss: 0.0074 Epoch 7 Batch 719/1077 - Train Accuracy: 0.9818, Validation Accuracy: 0.9801, Loss: 0.0102 Epoch 7 Batch 720/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9751, Loss: 0.0095 Epoch 7 Batch 721/1077 - Train Accuracy: 
0.9855, Validation Accuracy: 0.9727, Loss: 0.0058 Epoch 7 Batch 722/1077 - Train Accuracy: 0.9957, Validation Accuracy: 0.9727, Loss: 0.0063 Epoch 7 Batch 723/1077 - Train Accuracy: 0.9847, Validation Accuracy: 0.9727, Loss: 0.0131 Epoch 7 Batch 724/1077 - Train Accuracy: 0.9901, Validation Accuracy: 0.9727, Loss: 0.0094 Epoch 7 Batch 725/1077 - Train Accuracy: 0.9803, Validation Accuracy: 0.9702, Loss: 0.0073 Epoch 7 Batch 726/1077 - Train Accuracy: 0.9965, Validation Accuracy: 0.9702, Loss: 0.0068 Epoch 7 Batch 727/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9702, Loss: 0.0091 Epoch 7 Batch 728/1077 - Train Accuracy: 0.9903, Validation Accuracy: 0.9751, Loss: 0.0096 Epoch 7 Batch 729/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9798, Loss: 0.0112 Epoch 7 Batch 730/1077 - Train Accuracy: 0.9953, Validation Accuracy: 0.9787, Loss: 0.0113 Epoch 7 Batch 731/1077 - Train Accuracy: 0.9948, Validation Accuracy: 0.9780, Loss: 0.0064 Epoch 7 Batch 732/1077 - Train Accuracy: 0.9856, Validation Accuracy: 0.9730, Loss: 0.0138 Epoch 7 Batch 733/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9730, Loss: 0.0072 Epoch 7 Batch 734/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9751, Loss: 0.0084 Epoch 7 Batch 735/1077 - Train Accuracy: 0.9957, Validation Accuracy: 0.9698, Loss: 0.0064 Epoch 7 Batch 736/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9648, Loss: 0.0071 Epoch 7 Batch 737/1077 - Train Accuracy: 0.9836, Validation Accuracy: 0.9659, Loss: 0.0120 Epoch 7 Batch 738/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9656, Loss: 0.0069 Epoch 7 Batch 739/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9656, Loss: 0.0090 Epoch 7 Batch 740/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9673, Loss: 0.0087 Epoch 7 Batch 741/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9624, Loss: 0.0117 Epoch 7 Batch 742/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9727, Loss: 0.0080 Epoch 7 Batch 743/1077 - Train 
Accuracy: 0.9961, Validation Accuracy: 0.9727, Loss: 0.0077 Epoch 7 Batch 744/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9727, Loss: 0.0092 Epoch 7 Batch 745/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9727, Loss: 0.0100 Epoch 7 Batch 746/1077 - Train Accuracy: 0.9961, Validation Accuracy: 0.9727, Loss: 0.0079 Epoch 7 Batch 747/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9723, Loss: 0.0062 Epoch 7 Batch 748/1077 - Train Accuracy: 0.9773, Validation Accuracy: 0.9723, Loss: 0.0095 Epoch 7 Batch 749/1077 - Train Accuracy: 0.9969, Validation Accuracy: 0.9723, Loss: 0.0055 Epoch 7 Batch 750/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9769, Loss: 0.0094 Epoch 7 Batch 751/1077 - Train Accuracy: 0.9969, Validation Accuracy: 0.9769, Loss: 0.0070 Epoch 7 Batch 752/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9769, Loss: 0.0067 Epoch 7 Batch 753/1077 - Train Accuracy: 0.9953, Validation Accuracy: 0.9769, Loss: 0.0049 Epoch 7 Batch 754/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9769, Loss: 0.0111 Epoch 7 Batch 755/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9716, Loss: 0.0145 Epoch 7 Batch 756/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9716, Loss: 0.0092 Epoch 7 Batch 757/1077 - Train Accuracy: 0.9947, Validation Accuracy: 0.9716, Loss: 0.0056 Epoch 7 Batch 758/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9716, Loss: 0.0041 Epoch 7 Batch 759/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9695, Loss: 0.0109 Epoch 7 Batch 760/1077 - Train Accuracy: 0.9953, Validation Accuracy: 0.9695, Loss: 0.0085 Epoch 7 Batch 761/1077 - Train Accuracy: 0.9942, Validation Accuracy: 0.9698, Loss: 0.0096 Epoch 7 Batch 762/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9748, Loss: 0.0062 Epoch 7 Batch 763/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9748, Loss: 0.0118 Epoch 7 Batch 764/1077 - Train Accuracy: 0.9963, Validation Accuracy: 0.9748, Loss: 0.0050 Epoch 7 Batch 765/1077 - 
Train Accuracy: 0.9816, Validation Accuracy: 0.9751, Loss: 0.0113 Epoch 7 Batch 766/1077 - Train Accuracy: 0.9949, Validation Accuracy: 0.9751, Loss: 0.0070 Epoch 7 Batch 767/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9751, Loss: 0.0074 Epoch 7 Batch 768/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9751, Loss: 0.0080 Epoch 7 Batch 769/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9751, Loss: 0.0093 Epoch 7 Batch 770/1077 - Train Accuracy: 0.9900, Validation Accuracy: 0.9751, Loss: 0.0084 Epoch 7 Batch 771/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9748, Loss: 0.0098 Epoch 7 Batch 772/1077 - Train Accuracy: 0.9952, Validation Accuracy: 0.9748, Loss: 0.0062 Epoch 7 Batch 773/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9748, Loss: 0.0068 Epoch 7 Batch 774/1077 - Train Accuracy: 0.9957, Validation Accuracy: 0.9751, Loss: 0.0083 Epoch 7 Batch 775/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9698, Loss: 0.0080 Epoch 7 Batch 776/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9680, Loss: 0.0068 Epoch 7 Batch 777/1077 - Train Accuracy: 0.9957, Validation Accuracy: 0.9684, Loss: 0.0059 Epoch 7 Batch 778/1077 - Train Accuracy: 0.9885, Validation Accuracy: 0.9734, Loss: 0.0090 Epoch 7 Batch 779/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9734, Loss: 0.0086 Epoch 7 Batch 780/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9688, Loss: 0.0123 Epoch 7 Batch 781/1077 - Train Accuracy: 0.9989, Validation Accuracy: 0.9688, Loss: 0.0064 Epoch 7 Batch 782/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9688, Loss: 0.0070 Epoch 7 Batch 783/1077 - Train Accuracy: 0.9769, Validation Accuracy: 0.9588, Loss: 0.0096 Epoch 7 Batch 784/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9606, Loss: 0.0075 Epoch 7 Batch 785/1077 - Train Accuracy: 0.9970, Validation Accuracy: 0.9606, Loss: 0.0042 Epoch 7 Batch 786/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9631, Loss: 0.0057 Epoch 7 Batch 787/1077 
- Train Accuracy: 0.9885, Validation Accuracy: 0.9627, Loss: 0.0106 Epoch 7 Batch 788/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9698, Loss: 0.0099 Epoch 7 Batch 789/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9719, Loss: 0.0081 Epoch 7 Batch 790/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9812, Loss: 0.0122 Epoch 7 Batch 791/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9805, Loss: 0.0083 Epoch 7 Batch 792/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9783, Loss: 0.0093 Epoch 7 Batch 793/1077 - Train Accuracy: 0.9938, Validation Accuracy: 0.9727, Loss: 0.0105 Epoch 7 Batch 794/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9727, Loss: 0.0105 Epoch 7 Batch 795/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9730, Loss: 0.0105 Epoch 7 Batch 796/1077 - Train Accuracy: 0.9977, Validation Accuracy: 0.9780, Loss: 0.0076 Epoch 7 Batch 797/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9783, Loss: 0.0063 Epoch 7 Batch 798/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9790, Loss: 0.0085 Epoch 7 Batch 799/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9695, Loss: 0.0118 Epoch 7 Batch 800/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9698, Loss: 0.0060 Epoch 7 Batch 801/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9698, Loss: 0.0098 Epoch 7 Batch 802/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9719, Loss: 0.0087 Epoch 7 Batch 803/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9719, Loss: 0.0088 Epoch 7 Batch 804/1077 - Train Accuracy: 0.9965, Validation Accuracy: 0.9719, Loss: 0.0049 Epoch 7 Batch 805/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9716, Loss: 0.0073 Epoch 7 Batch 806/1077 - Train Accuracy: 0.9992, Validation Accuracy: 0.9716, Loss: 0.0091 Epoch 7 Batch 807/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9712, Loss: 0.0041 Epoch 7 Batch 808/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9798, Loss: 0.0155 Epoch 7 Batch 
809/1077 - Train Accuracy: 0.9778, Validation Accuracy: 0.9819, Loss: 0.0134 Epoch 7 Batch 810/1077 - Train Accuracy: 0.9933, Validation Accuracy: 0.9819, Loss: 0.0068 Epoch 7 Batch 811/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9826, Loss: 0.0077 Epoch 7 Batch 812/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9773, Loss: 0.0091 Epoch 7 Batch 813/1077 - Train Accuracy: 0.9885, Validation Accuracy: 0.9773, Loss: 0.0106 Epoch 7 Batch 814/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9734, Loss: 0.0091 Epoch 7 Batch 815/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9709, Loss: 0.0101 Epoch 7 Batch 816/1077 - Train Accuracy: 0.9942, Validation Accuracy: 0.9709, Loss: 0.0087 Epoch 7 Batch 817/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9648, Loss: 0.0087 Epoch 7 Batch 818/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9663, Loss: 0.0121 Epoch 7 Batch 819/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9663, Loss: 0.0112 Epoch 7 Batch 820/1077 - Train Accuracy: 0.9949, Validation Accuracy: 0.9616, Loss: 0.0068 Epoch 7 Batch 821/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9616, Loss: 0.0086 Epoch 7 Batch 822/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9616, Loss: 0.0068 Epoch 7 Batch 823/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9616, Loss: 0.0113 Epoch 7 Batch 824/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9677, Loss: 0.0127 Epoch 7 Batch 825/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9624, Loss: 0.0033 Epoch 7 Batch 826/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9670, Loss: 0.0045 Epoch 7 Batch 827/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9670, Loss: 0.0081 Epoch 7 Batch 828/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9670, Loss: 0.0066 Epoch 7 Batch 829/1077 - Train Accuracy: 0.9707, Validation Accuracy: 0.9670, Loss: 0.0172 Epoch 7 Batch 830/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9670, Loss: 0.0104 Epoch 7 
Batch 831/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9677, Loss: 0.0111 Epoch 7 Batch 832/1077 - Train Accuracy: 0.9980, Validation Accuracy: 0.9677, Loss: 0.0067 Epoch 7 Batch 833/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9616, Loss: 0.0076 Epoch 7 Batch 834/1077 - Train Accuracy: 0.9993, Validation Accuracy: 0.9616, Loss: 0.0069 Epoch 7 Batch 835/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9666, Loss: 0.0085 Epoch 7 Batch 836/1077 - Train Accuracy: 0.9901, Validation Accuracy: 0.9666, Loss: 0.0087 Epoch 7 Batch 837/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9659, Loss: 0.0128 Epoch 7 Batch 838/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9659, Loss: 0.0102 Epoch 7 Batch 839/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9588, Loss: 0.0082 Epoch 7 Batch 840/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9588, Loss: 0.0090 Epoch 7 Batch 841/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9645, Loss: 0.0155 Epoch 7 Batch 842/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9645, Loss: 0.0082 Epoch 7 Batch 843/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9702, Loss: 0.0078 Epoch 7 Batch 844/1077 - Train Accuracy: 0.9862, Validation Accuracy: 0.9702, Loss: 0.0074 Epoch 7 Batch 845/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9702, Loss: 0.0114 Epoch 7 Batch 846/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9702, Loss: 0.0101 Epoch 7 Batch 847/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9702, Loss: 0.0091 Epoch 7 Batch 848/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9723, Loss: 0.0076 Epoch 7 Batch 849/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9723, Loss: 0.0102 Epoch 7 Batch 850/1077 - Train Accuracy: 0.9903, Validation Accuracy: 0.9773, Loss: 0.0144 Epoch 7 Batch 851/1077 - Train Accuracy: 0.9929, Validation Accuracy: 0.9723, Loss: 0.0142 Epoch 7 Batch 852/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9773, Loss: 0.0121 Epoch 
7 Batch 853/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9773, Loss: 0.0087 Epoch 7 Batch 854/1077 - Train Accuracy: 0.9699, Validation Accuracy: 0.9769, Loss: 0.0133 Epoch 7 Batch 855/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9773, Loss: 0.0138 Epoch 7 Batch 856/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9780, Loss: 0.0083 Epoch 7 Batch 857/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9780, Loss: 0.0078 Epoch 7 Batch 858/1077 - Train Accuracy: 0.9978, Validation Accuracy: 0.9755, Loss: 0.0037 Epoch 7 Batch 859/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9755, Loss: 0.0112 Epoch 7 Batch 860/1077 - Train Accuracy: 0.9948, Validation Accuracy: 0.9751, Loss: 0.0071 Epoch 7 Batch 861/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9751, Loss: 0.0080 Epoch 7 Batch 862/1077 - Train Accuracy: 0.9730, Validation Accuracy: 0.9751, Loss: 0.0122 Epoch 7 Batch 863/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9737, Loss: 0.0068 Epoch 7 Batch 864/1077 - Train Accuracy: 0.9941, Validation Accuracy: 0.9762, Loss: 0.0094 Epoch 7 Batch 865/1077 - Train Accuracy: 0.9666, Validation Accuracy: 0.9805, Loss: 0.0127 Epoch 7 Batch 866/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9730, Loss: 0.0112 Epoch 7 Batch 867/1077 - Train Accuracy: 0.9621, Validation Accuracy: 0.9734, Loss: 0.0316 Epoch 7 Batch 868/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9751, Loss: 0.0115 Epoch 7 Batch 869/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9698, Loss: 0.0073 Epoch 7 Batch 870/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9712, Loss: 0.0072 Epoch 7 Batch 871/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9716, Loss: 0.0073 Epoch 7 Batch 872/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9670, Loss: 0.0107 Epoch 7 Batch 873/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9670, Loss: 0.0072 Epoch 7 Batch 874/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9691, Loss: 0.0173 
Epoch 7 Batch 875/1077 - Train Accuracy: 0.9949, Validation Accuracy: 0.9663, Loss: 0.0070 Epoch 7 Batch 876/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9719, Loss: 0.0090 Epoch 7 Batch 877/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9769, Loss: 0.0055 Epoch 7 Batch 878/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9776, Loss: 0.0065 Epoch 7 Batch 879/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9773, Loss: 0.0057 Epoch 7 Batch 880/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9790, Loss: 0.0137 Epoch 7 Batch 881/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9794, Loss: 0.0113 Epoch 7 Batch 882/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9744, Loss: 0.0084 Epoch 7 Batch 883/1077 - Train Accuracy: 0.9811, Validation Accuracy: 0.9744, Loss: 0.0130 Epoch 7 Batch 884/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9741, Loss: 0.0095 Epoch 7 Batch 885/1077 - Train Accuracy: 0.9904, Validation Accuracy: 0.9741, Loss: 0.0058 Epoch 7 Batch 886/1077 - Train Accuracy: 0.9805, Validation Accuracy: 0.9790, Loss: 0.0155 Epoch 7 Batch 887/1077 - Train Accuracy: 0.9707, Validation Accuracy: 0.9766, Loss: 0.0130 Epoch 7 Batch 888/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9744, Loss: 0.0097 Epoch 7 Batch 889/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9744, Loss: 0.0106 Epoch 7 Batch 890/1077 - Train Accuracy: 0.9952, Validation Accuracy: 0.9680, Loss: 0.0104 Epoch 7 Batch 891/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9684, Loss: 0.0070 Epoch 7 Batch 892/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9734, Loss: 0.0108 Epoch 7 Batch 893/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9688, Loss: 0.0082 Epoch 7 Batch 894/1077 - Train Accuracy: 0.9874, Validation Accuracy: 0.9748, Loss: 0.0114 Epoch 7 Batch 895/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9819, Loss: 0.0093 Epoch 7 Batch 896/1077 - Train Accuracy: 0.9864, Validation Accuracy: 0.9819, Loss: 
0.0115 Epoch 7 Batch 897/1077 - Train Accuracy: 0.9769, Validation Accuracy: 0.9815, Loss: 0.0072 Epoch 7 Batch 898/1077 - Train Accuracy: 0.9881, Validation Accuracy: 0.9741, Loss: 0.0076 Epoch 7 Batch 899/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9741, Loss: 0.0086 Epoch 7 Batch 900/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9741, Loss: 0.0094 Epoch 7 Batch 901/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9766, Loss: 0.0194 Epoch 7 Batch 902/1077 - Train Accuracy: 0.9821, Validation Accuracy: 0.9766, Loss: 0.0110 Epoch 7 Batch 903/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9759, Loss: 0.0095 Epoch 7 Batch 904/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9755, Loss: 0.0068 Epoch 7 Batch 905/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9755, Loss: 0.0058 Epoch 7 Batch 906/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9755, Loss: 0.0102 Epoch 7 Batch 907/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9755, Loss: 0.0098 Epoch 7 Batch 908/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9755, Loss: 0.0082 Epoch 7 Batch 909/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9751, Loss: 0.0080 Epoch 7 Batch 910/1077 - Train Accuracy: 0.9881, Validation Accuracy: 0.9755, Loss: 0.0096 Epoch 7 Batch 911/1077 - Train Accuracy: 0.9893, Validation Accuracy: 0.9755, Loss: 0.0083 Epoch 7 Batch 912/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9755, Loss: 0.0077 Epoch 7 Batch 913/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9751, Loss: 0.0113 Epoch 7 Batch 914/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9755, Loss: 0.0296 Epoch 7 Batch 915/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9759, Loss: 0.0053 Epoch 7 Batch 916/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9755, Loss: 0.0075 Epoch 7 Batch 917/1077 - Train Accuracy: 0.9949, Validation Accuracy: 0.9759, Loss: 0.0096 Epoch 7 Batch 918/1077 - Train Accuracy: 0.9900, Validation Accuracy: 0.9805, 
Loss: 0.0074 Epoch 7 Batch 919/1077 - Train Accuracy: 0.9951, Validation Accuracy: 0.9805, Loss: 0.0059 Epoch 7 Batch 920/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9759, Loss: 0.0051 Epoch 7 Batch 921/1077 - Train Accuracy: 0.9828, Validation Accuracy: 0.9751, Loss: 0.0130 Epoch 7 Batch 922/1077 - Train Accuracy: 0.9874, Validation Accuracy: 0.9755, Loss: 0.0081 Epoch 7 Batch 923/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9766, Loss: 0.0053 Epoch 7 Batch 924/1077 - Train Accuracy: 0.9790, Validation Accuracy: 0.9769, Loss: 0.0143 Epoch 7 Batch 925/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9769, Loss: 0.0067 Epoch 7 Batch 926/1077 - Train Accuracy: 0.9973, Validation Accuracy: 0.9769, Loss: 0.0072 Epoch 7 Batch 927/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9719, Loss: 0.0155 Epoch 7 Batch 928/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9719, Loss: 0.0098 Epoch 7 Batch 929/1077 - Train Accuracy: 0.9973, Validation Accuracy: 0.9719, Loss: 0.0057 Epoch 7 Batch 930/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9716, Loss: 0.0079 Epoch 7 Batch 931/1077 - Train Accuracy: 0.9957, Validation Accuracy: 0.9716, Loss: 0.0064 Epoch 7 Batch 932/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9712, Loss: 0.0074 Epoch 7 Batch 933/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9709, Loss: 0.0072 Epoch 7 Batch 934/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9709, Loss: 0.0125 Epoch 7 Batch 935/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9709, Loss: 0.0057 Epoch 7 Batch 936/1077 - Train Accuracy: 0.9933, Validation Accuracy: 0.9709, Loss: 0.0108 Epoch 7 Batch 937/1077 - Train Accuracy: 0.9979, Validation Accuracy: 0.9709, Loss: 0.0120 Epoch 7 Batch 938/1077 - Train Accuracy: 0.9957, Validation Accuracy: 0.9741, Loss: 0.0067 Epoch 7 Batch 939/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9741, Loss: 0.0113 Epoch 7 Batch 940/1077 - Train Accuracy: 0.9918, Validation Accuracy: 
0.9727, Loss: 0.0092 Epoch 7 Batch 941/1077 - Train Accuracy: 0.9911, Validation Accuracy: 0.9727, Loss: 0.0088 Epoch 7 Batch 942/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9727, Loss: 0.0096 Epoch 7 Batch 943/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9727, Loss: 0.0066 Epoch 7 Batch 944/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9773, Loss: 0.0053 Epoch 7 Batch 945/1077 - Train Accuracy: 0.9953, Validation Accuracy: 0.9744, Loss: 0.0053 Epoch 7 Batch 946/1077 - Train Accuracy: 0.9967, Validation Accuracy: 0.9727, Loss: 0.0065 Epoch 7 Batch 947/1077 - Train Accuracy: 0.9856, Validation Accuracy: 0.9776, Loss: 0.0082 Epoch 7 Batch 948/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9780, Loss: 0.0073 Epoch 7 Batch 949/1077 - Train Accuracy: 0.9807, Validation Accuracy: 0.9780, Loss: 0.0100 Epoch 7 Batch 950/1077 - Train Accuracy: 0.9989, Validation Accuracy: 0.9705, Loss: 0.0064 Epoch 7 Batch 951/1077 - Train Accuracy: 0.9807, Validation Accuracy: 0.9734, Loss: 0.0121 Epoch 7 Batch 952/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9734, Loss: 0.0057 Epoch 7 Batch 953/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9734, Loss: 0.0059 Epoch 7 Batch 954/1077 - Train Accuracy: 0.9977, Validation Accuracy: 0.9734, Loss: 0.0061 Epoch 7 Batch 955/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9734, Loss: 0.0125 Epoch 7 Batch 956/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9734, Loss: 0.0161 Epoch 7 Batch 957/1077 - Train Accuracy: 0.9944, Validation Accuracy: 0.9734, Loss: 0.0076 Epoch 7 Batch 958/1077 - Train Accuracy: 0.9938, Validation Accuracy: 0.9748, Loss: 0.0079 Epoch 7 Batch 959/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9727, Loss: 0.0121 Epoch 7 Batch 960/1077 - Train Accuracy: 0.9870, Validation Accuracy: 0.9727, Loss: 0.0062 Epoch 7 Batch 961/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9776, Loss: 0.0076 Epoch 7 Batch 962/1077 - Train Accuracy: 0.9847, Validation 
Accuracy: 0.9776, Loss: 0.0070 Epoch 7 Batch 963/1077 - Train Accuracy: 0.9900, Validation Accuracy: 0.9776, Loss: 0.0123 Epoch 7 Batch 964/1077 - Train Accuracy: 0.9866, Validation Accuracy: 0.9776, Loss: 0.0112 Epoch 7 Batch 965/1077 - Train Accuracy: 0.9774, Validation Accuracy: 0.9776, Loss: 0.0114 Epoch 7 Batch 966/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9776, Loss: 0.0047 Epoch 7 Batch 967/1077 - Train Accuracy: 0.9816, Validation Accuracy: 0.9709, Loss: 0.0078 Epoch 7 Batch 968/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9776, Loss: 0.0094 Epoch 7 Batch 969/1077 - Train Accuracy: 0.9847, Validation Accuracy: 0.9776, Loss: 0.0154 Epoch 7 Batch 970/1077 - Train Accuracy: 0.9785, Validation Accuracy: 0.9737, Loss: 0.0098 Epoch 7 Batch 971/1077 - Train Accuracy: 0.9955, Validation Accuracy: 0.9737, Loss: 0.0083 Epoch 7 Batch 972/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9737, Loss: 0.0097 Epoch 7 Batch 973/1077 - Train Accuracy: 0.9903, Validation Accuracy: 0.9730, Loss: 0.0066 Epoch 7 Batch 974/1077 - Train Accuracy: 0.9949, Validation Accuracy: 0.9730, Loss: 0.0085 Epoch 7 Batch 975/1077 - Train Accuracy: 0.9978, Validation Accuracy: 0.9730, Loss: 0.0063 Epoch 7 Batch 976/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9737, Loss: 0.0092 Epoch 7 Batch 977/1077 - Train Accuracy: 0.9965, Validation Accuracy: 0.9737, Loss: 0.0053 Epoch 7 Batch 978/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9826, Loss: 0.0078 Epoch 7 Batch 979/1077 - Train Accuracy: 0.9757, Validation Accuracy: 0.9826, Loss: 0.0126 Epoch 7 Batch 980/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9826, Loss: 0.0080 Epoch 7 Batch 981/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9826, Loss: 0.0101 Epoch 7 Batch 982/1077 - Train Accuracy: 0.9866, Validation Accuracy: 0.9826, Loss: 0.0068 Epoch 7 Batch 983/1077 - Train Accuracy: 0.9770, Validation Accuracy: 0.9826, Loss: 0.0098 Epoch 7 Batch 984/1077 - Train Accuracy: 0.9895, 
Validation Accuracy: 0.9826, Loss: 0.0095 Epoch 7 Batch 985/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9826, Loss: 0.0071 Epoch 7 Batch 986/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9826, Loss: 0.0090 Epoch 7 Batch 987/1077 - Train Accuracy: 0.9892, Validation Accuracy: 0.9826, Loss: 0.0071 Epoch 7 Batch 988/1077 - Train Accuracy: 0.9871, Validation Accuracy: 0.9826, Loss: 0.0122 Epoch 7 Batch 989/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9826, Loss: 0.0067 Epoch 7 Batch 990/1077 - Train Accuracy: 0.9947, Validation Accuracy: 0.9826, Loss: 0.0062 Epoch 7 Batch 991/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9826, Loss: 0.0061 Epoch 7 Batch 992/1077 - Train Accuracy: 0.9766, Validation Accuracy: 0.9826, Loss: 0.0102 Epoch 7 Batch 993/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9822, Loss: 0.0062 Epoch 7 Batch 994/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9776, Loss: 0.0072 Epoch 7 Batch 995/1077 - Train Accuracy: 0.9970, Validation Accuracy: 0.9776, Loss: 0.0064 Epoch 7 Batch 996/1077 - Train Accuracy: 0.9824, Validation Accuracy: 0.9776, Loss: 0.0096 Epoch 7 Batch 997/1077 - Train Accuracy: 0.9897, Validation Accuracy: 0.9776, Loss: 0.0075 Epoch 7 Batch 998/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9776, Loss: 0.0119 Epoch 7 Batch 999/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9776, Loss: 0.0094 Epoch 7 Batch 1000/1077 - Train Accuracy: 0.9803, Validation Accuracy: 0.9780, Loss: 0.0083 Epoch 7 Batch 1001/1077 - Train Accuracy: 0.9830, Validation Accuracy: 0.9734, Loss: 0.0066 Epoch 7 Batch 1002/1077 - Train Accuracy: 0.9996, Validation Accuracy: 0.9776, Loss: 0.0048 Epoch 7 Batch 1003/1077 - Train Accuracy: 0.9942, Validation Accuracy: 0.9776, Loss: 0.0070 Epoch 7 Batch 1004/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9769, Loss: 0.0119 Epoch 7 Batch 1005/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9712, Loss: 0.0077 Epoch 7 Batch 1006/1077 - Train 
Accuracy: 0.9934, Validation Accuracy: 0.9712, Loss: 0.0076 Epoch 7 Batch 1007/1077 - Train Accuracy: 0.9981, Validation Accuracy: 0.9712, Loss: 0.0059 Epoch 7 Batch 1008/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9652, Loss: 0.0085 Epoch 7 Batch 1009/1077 - Train Accuracy: 0.9934, Validation Accuracy: 0.9659, Loss: 0.0047 Epoch 7 Batch 1010/1077 - Train Accuracy: 0.9984, Validation Accuracy: 0.9659, Loss: 0.0047 Epoch 7 Batch 1011/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9659, Loss: 0.0052 Epoch 7 Batch 1012/1077 - Train Accuracy: 0.9736, Validation Accuracy: 0.9741, Loss: 0.0100 Epoch 7 Batch 1013/1077 - Train Accuracy: 0.9851, Validation Accuracy: 0.9741, Loss: 0.0078 Epoch 7 Batch 1014/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9762, Loss: 0.0085 Epoch 7 Batch 1015/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9712, Loss: 0.0101 Epoch 7 Batch 1016/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9712, Loss: 0.0059 Epoch 7 Batch 1017/1077 - Train Accuracy: 0.9885, Validation Accuracy: 0.9769, Loss: 0.0048 Epoch 7 Batch 1018/1077 - Train Accuracy: 0.9978, Validation Accuracy: 0.9769, Loss: 0.0068 Epoch 7 Batch 1019/1077 - Train Accuracy: 0.9753, Validation Accuracy: 0.9819, Loss: 0.0157 Epoch 7 Batch 1020/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9819, Loss: 0.0069 Epoch 7 Batch 1021/1077 - Train Accuracy: 0.9829, Validation Accuracy: 0.9819, Loss: 0.0083 Epoch 7 Batch 1022/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9869, Loss: 0.0052 Epoch 7 Batch 1023/1077 - Train Accuracy: 0.9812, Validation Accuracy: 0.9869, Loss: 0.0113 Epoch 7 Batch 1024/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9869, Loss: 0.0099 Epoch 7 Batch 1025/1077 - Train Accuracy: 0.9702, Validation Accuracy: 0.9869, Loss: 0.0094 Epoch 7 Batch 1026/1077 - Train Accuracy: 0.9955, Validation Accuracy: 0.9819, Loss: 0.0110 Epoch 7 Batch 1027/1077 - Train Accuracy: 0.9695, Validation Accuracy: 0.9815, Loss: 0.0111 Epoch 7 
Batch 1028/1077 - Train Accuracy: 0.9911, Validation Accuracy: 0.9869, Loss: 0.0090 Epoch 7 Batch 1029/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9869, Loss: 0.0113 Epoch 7 Batch 1030/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9869, Loss: 0.0046 Epoch 7 Batch 1031/1077 - Train Accuracy: 0.9811, Validation Accuracy: 0.9872, Loss: 0.0102 Epoch 7 Batch 1032/1077 - Train Accuracy: 0.9900, Validation Accuracy: 0.9776, Loss: 0.0132 Epoch 7 Batch 1033/1077 - Train Accuracy: 0.9732, Validation Accuracy: 0.9776, Loss: 0.0123 Epoch 7 Batch 1034/1077 - Train Accuracy: 0.9953, Validation Accuracy: 0.9776, Loss: 0.0069 Epoch 7 Batch 1035/1077 - Train Accuracy: 0.9877, Validation Accuracy: 0.9755, Loss: 0.0064 Epoch 7 Batch 1036/1077 - Train Accuracy: 0.9847, Validation Accuracy: 0.9652, Loss: 0.0105 Epoch 7 Batch 1037/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9698, Loss: 0.0104 Epoch 7 Batch 1038/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9698, Loss: 0.0111 Epoch 7 Batch 1039/1077 - Train Accuracy: 0.9881, Validation Accuracy: 0.9698, Loss: 0.0079 Epoch 7 Batch 1040/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9652, Loss: 0.0116 Epoch 7 Batch 1041/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9751, Loss: 0.0159 Epoch 7 Batch 1042/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9751, Loss: 0.0116 Epoch 7 Batch 1043/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9748, Loss: 0.0092 Epoch 7 Batch 1044/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9723, Loss: 0.0110 Epoch 7 Batch 1045/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9727, Loss: 0.0091 Epoch 7 Batch 1046/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9723, Loss: 0.0045 Epoch 7 Batch 1047/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9702, Loss: 0.0063 Epoch 7 Batch 1048/1077 - Train Accuracy: 0.9816, Validation Accuracy: 0.9656, Loss: 0.0067 Epoch 7 Batch 1049/1077 - Train Accuracy: 0.9855, Validation Accuracy: 
0.9680, Loss: 0.0064 Epoch 7 Batch 1050/1077 - Train Accuracy: 0.9980, Validation Accuracy: 0.9638, Loss: 0.0057 Epoch 7 Batch 1051/1077 - Train Accuracy: 0.9866, Validation Accuracy: 0.9695, Loss: 0.0114 Epoch 7 Batch 1052/1077 - Train Accuracy: 0.9840, Validation Accuracy: 0.9698, Loss: 0.0081 Epoch 7 Batch 1053/1077 - Train Accuracy: 0.9795, Validation Accuracy: 0.9719, Loss: 0.0090 Epoch 7 Batch 1054/1077 - Train Accuracy: 0.9770, Validation Accuracy: 0.9719, Loss: 0.0137 Epoch 7 Batch 1055/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9719, Loss: 0.0103 Epoch 7 Batch 1056/1077 - Train Accuracy: 0.9855, Validation Accuracy: 0.9766, Loss: 0.0071 Epoch 7 Batch 1057/1077 - Train Accuracy: 0.9938, Validation Accuracy: 0.9769, Loss: 0.0073 Epoch 7 Batch 1058/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9794, Loss: 0.0129 Epoch 7 Batch 1059/1077 - Train Accuracy: 0.9823, Validation Accuracy: 0.9794, Loss: 0.0125 Epoch 7 Batch 1060/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9602, Loss: 0.0083 Epoch 7 Batch 1061/1077 - Train Accuracy: 0.9863, Validation Accuracy: 0.9645, Loss: 0.0088 Epoch 7 Batch 1062/1077 - Train Accuracy: 0.9895, Validation Accuracy: 0.9645, Loss: 0.0077 Epoch 7 Batch 1063/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9645, Loss: 0.0116 Epoch 7 Batch 1064/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9688, Loss: 0.0078 Epoch 7 Batch 1065/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9634, Loss: 0.0080 Epoch 7 Batch 1066/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9577, Loss: 0.0077 Epoch 7 Batch 1067/1077 - Train Accuracy: 0.9750, Validation Accuracy: 0.9577, Loss: 0.0121 Epoch 7 Batch 1068/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9627, Loss: 0.0051 Epoch 7 Batch 1069/1077 - Train Accuracy: 0.9944, Validation Accuracy: 0.9627, Loss: 0.0049 Epoch 7 Batch 1070/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9627, Loss: 0.0085 Epoch 7 Batch 1071/1077 - Train Accuracy: 
0.9910, Validation Accuracy: 0.9670, Loss: 0.0086
Epoch 7 Batch 1072/1077 - Train Accuracy: 0.9818, Validation Accuracy: 0.9616, Loss: 0.0095
Epoch 7 Batch 1073/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9624, Loss: 0.0090
Epoch 7 Batch 1074/1077 - Train Accuracy: 0.9948, Validation Accuracy: 0.9656, Loss: 0.0077
Epoch 7 Batch 1075/1077 - Train Accuracy: 0.9799, Validation Accuracy: 0.9734, Loss: 0.0153
Model Trained and Saved
###Markdown
Save Parameters
Save the `batch_size` and `save_path` parameters for inference.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
###Output
_____no_output_____
###Markdown
Checkpoint
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
###Output
_____no_output_____
###Markdown
Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.
- Convert the sentence to lowercase
- Convert words into ids using `vocab_to_int`
- Convert words not in the vocabulary to the `<UNK>` word id.
###Code
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    unknown_id = vocab_to_int['<UNK>']
    words = (word.lower() for word in sentence.split(' ') if len(word) > 0)
    ids = [vocab_to_int.get(word, unknown_id) for word in words]
    return ids

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
###Output
Tests Passed
###Markdown
Translate
This will translate `translate_sentence` from English to French.
###Code translate_sentence = 'he saw a old yellow truck .' """ DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [155, 61, 23, 226, 206, 160, 77] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [86, 117, 18, 119, 305, 62, 69, 221, 1] French Words: il a vu un vieux camion jaune . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. 
Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . 
paris is relaxing during december , but it is usually chilly in july .
new jersey is busy during spring , and it is never hot in march .
our least liked fruit is the lemon , but my least liked is the grape .
the united states is sometimes busy during january , and it is sometimes warm in november .

French sentences 0 to 10:
new jersey est parfois calme pendant l' automne , et il est neigeux en avril .
les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .
california est généralement calme en mars , et il est généralement chaud en juin .
les états-unis est parfois légère en juin , et il fait froid en septembre .
votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .
son fruit préféré est l'orange , mais mon préféré est le raisin .
paris est relaxant en décembre , mais il est généralement froid en juillet .
new jersey est occupé au printemps , et il est jamais chaude en mars .
notre fruit est moins aimé le citron , mais mon moins aimé est le raisin .
les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .
###Markdown
Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.
You can get the `<EOS>` word id by doing:
```python
target_vocab_to_int['<EOS>']
```
You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`.
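The `<EOS>` handling can be sketched with a toy example. The tiny vocabulary dicts and ids below are invented for this illustration only; they are not the project's real vocabularies.

```python
# Toy vocabularies, made up for this sketch.
toy_source_vocab = {'new': 4, 'jersey': 5, 'is': 6, 'quiet': 7}
toy_target_vocab = {'<EOS>': 1, 'new': 8, 'jersey': 9, 'est': 10, 'calme': 11}

source_ids = [toy_source_vocab[word] for word in 'new jersey is quiet'.split()]
# The target sequence gets the <EOS> id appended so the decoder learns to stop.
target_ids = [toy_target_vocab[word] for word in 'new jersey est calme'.split()]
target_ids.append(toy_target_vocab['<EOS>'])

print(source_ids)  # [4, 5, 6, 7]
print(target_ids)  # [8, 9, 10, 11, 1]
```

Only the target side is terminated with `<EOS>`; the encoder side is padded later by the batching code instead.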
###Code def text_to_id(text, vocab_to_int): sentences = [word for word in text.split("\n")] return list(map(lambda sentence: [vocab_to_int[word] for word in sentence.split()], sentences)) def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ def add_eos(ids): ids.append(target_vocab_to_int['<EOS>']) return ids return text_to_id(source_text, source_vocab_to_int), \ [add_eos(ids) for ids in text_to_id(target_text, target_vocab_to_int)] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.1.0 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.
- Source sequence length placeholder named "source_sequence_length" with rank 1

Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
###Code
def model_inputs():
    """
    Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
    :return: Tuple (input, targets, learning rate, keep probability, target sequence length,
             max target sequence length, source sequence length)
    """
    inputs = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input')
    targets = tf.placeholder(dtype=tf.int32, shape=[None, None], name='target')
    learning_rate = tf.placeholder(dtype=tf.float32, name='learning_rate')
    keep_prob = tf.placeholder(dtype=tf.float32, name='keep_prob')
    target_seq_length = tf.placeholder(dtype=tf.int32, shape=[None], name='target_sequence_length')
    max_target_len = tf.reduce_max(target_seq_length, name='max_target_len')
    # Sequence lengths are counts, so use an integer placeholder
    # (tf.nn.dynamic_rnn would otherwise have to cast a float32 tensor to int32).
    source_sequence_length = tf.placeholder(dtype=tf.int32, shape=[None], name='source_sequence_length')
    return inputs, targets, learning_rate, keep_prob, target_seq_length, max_target_len, source_sequence_length

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
###Output
Tests Passed
###Markdown
Process Decoder Input
Implement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
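The transformation can be sketched with plain Python lists before doing it in TensorFlow; the `GO_ID` value and the batch contents below are made-up ids for this illustration.

```python
# Plain-Python sketch: drop each sequence's last word id and prepend <GO>.
GO_ID = 2  # made-up id for this illustration

target_batch = [
    [8, 9, 10, 1],    # e.g. "new jersey est <EOS>"
    [12, 13, 14, 1],
]

decoder_input = [[GO_ID] + seq[:-1] for seq in target_batch]
print(decoder_input)  # [[2, 8, 9, 10], [2, 12, 13, 14]]
```

The TF implementation expresses the same two steps as a `tf.strided_slice` (drop the last column) followed by a `tf.concat` with a `<GO>` column.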
###Code
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for encoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
    return dec_input

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
###Output
Tests Passed
###Markdown
Encoding
Implement `encoding_layer()` to create an Encoder RNN layer:
* Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence)
* Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper)
* Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)
###Code
from imp import reload
reload(tests)

def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
                   source_sequence_length, source_vocab_size,
                   encoding_embedding_size):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :param source_sequence_length: a list of the lengths of each sequence in the batch
    :param source_vocab_size: vocabulary size of source data
    :param encoding_embedding_size: embedding size of source data
    :return: tuple (RNN output, RNN state)
    """
    enc_embed_input =
tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) def single_cell(): return tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size), output_keep_prob=keep_prob) cell = tf.contrib.rnn.MultiRNNCell([single_cell() for _ in range(num_layers)]) return tf.nn.dynamic_rnn(cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # Helper for the training process. Used by BasicDecoder to read inputs. 
    training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
                                                        sequence_length=target_sequence_length,
                                                        time_major=False)
    # Basic decoder
    training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer)
    # Perform dynamic decoding using the decoder
    training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,
                                                                   impute_finished=True,
                                                                   maximum_iterations=max_summary_length)
    return training_decoder_output

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
###Output
Tests Passed
###Markdown
Decoding - Inference
Create inference decoder:
* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)
* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)
* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode)
###Code
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
                         end_of_sequence_id, max_target_sequence_length,
                         vocab_size, output_layer, batch_size, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS ID
    :param max_target_sequence_length: Maximum length of target sequences
    :param vocab_size: Size of decoder/target vocabulary
    :param output_layer: Function to apply the output layer
    :param batch_size: Batch size
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing inference logits and sample_id
    """
    start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size],
name='start_tokens') # Helper for the inference process. inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) # Basic decoder inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) return inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
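The split between the two decoders is easier to see outside of TensorFlow. Below is a minimal NumPy sketch of the same idea, not the notebook's TF graph: `step`, `decode_train`, and `decode_infer` are hypothetical stand-ins, where training feeds the ground-truth previous token (teacher forcing) and inference feeds back its own argmax until the end-of-sequence id.

```python
import numpy as np

def step(prev_token, state, W):
    """Toy decoder step: a row lookup as 'logits'. Hypothetical stand-in
    for one LSTM-cell-plus-output-layer step."""
    logits = W[prev_token] + state
    return logits, state  # state kept fixed in this toy example

def decode_train(targets, state, W):
    # Teacher forcing: each input is the ground-truth previous token,
    # regardless of what the model would have predicted.
    return [step(tok, state, W)[0] for tok in targets[:-1]]

def decode_infer(start_token, eos_token, max_len, state, W):
    # Greedy decoding: feed back our own argmax until <EOS> or max_len.
    outputs, tok = [], start_token
    for _ in range(max_len):
        logits, state = step(tok, state, W)
        tok = int(np.argmax(logits))
        outputs.append(tok)
        if tok == eos_token:
            break
    return outputs
```

With a toy 4-token vocabulary (0 = GO, 3 = EOS) and `W = np.eye(4)[[1, 2, 3, 0]]`, `decode_infer(0, 3, 10, 0.0, W)` walks 1 → 2 → 3 and stops at EOS, which is exactly the role `GreedyEmbeddingHelper` plays in the real graph.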
###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # 1. Decoder Embedding dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # 2. Construct the decoder cell def make_cell(): return tf.contrib.rnn.DropoutWrapper( tf.contrib.rnn.LSTMCell( rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)), output_keep_prob=keep_prob) dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell() for _ in range(num_layers)]) # 3. Dense layer to translate the decoder's output at each time # step into a choice from the target vocabulary output_layer = Dense(target_vocab_size, kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1)) # 4. Set up a training decoder and an inference decoder # Training Decoder with tf.variable_scope("decode"): # Helper for the training process. Used by BasicDecoder to read inputs. 
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) # Basic decoder training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) # 5. Inference Decoder # Reuses the same parameters trained by the training process with tf.variable_scope("decode", reuse=True): start_tokens = tf.tile(tf.constant([target_vocab_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens') # Helper for the inference process. inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, target_vocab_to_int['<EOS>']) # Basic decoder inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Apply embedding to the target data for the decoder.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, 
target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # Pass the input data through the encoder. 
We'll ignore the encoder output, but use the state _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) # Prepare the target sequences we'll feed to the decoder in training mode dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) # Pass encoder state and decoder inputs to the decoders training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 8 # Batch Size batch_size = 512 # RNN Size rnn_size = 512 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 200 decoding_embedding_size = 200 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.5 display_step = 1 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
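Two small pieces of the graph built in the next cell are worth isolating: the loss is weighted by a `tf.sequence_mask` over real (non-padding) positions, and every gradient is clipped to [-1, 1] before being applied. A NumPy sketch of both steps, illustrative only and not the notebook's graph:

```python
import numpy as np

def clip_gradients_by_value(grads, clip_min=-1.0, clip_max=1.0):
    # Element-wise clipping, mirroring the tf.clip_by_value step in the
    # training graph: one outlier batch can't produce an arbitrarily
    # large weight update.
    return [np.clip(g, clip_min, clip_max) for g in grads]

def sequence_mask(lengths, maxlen):
    # NumPy analogue of tf.sequence_mask: 1.0 over real tokens, 0.0 over
    # padding, so padded positions contribute nothing to the sequence loss.
    return (np.arange(maxlen) < np.asarray(lengths)[:, None]).astype(np.float32)

grads = [np.array([0.3, -2.5, 4.0])]
clipped = clip_gradients_by_value(grads)  # caps to [0.3, -1.0, 1.0]
mask = sequence_mask([2, 3], maxlen=4)    # rows [1,1,0,0] and [1,1,1,0]
```

Clipping by value is the simplest scheme; clipping the global gradient norm is a common alternative, but the cell below uses per-element value clipping.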
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for 
sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
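The `pad_sentence_batch` helper above is small enough to sanity-check on its own. This standalone restatement (assuming a `<PAD>` id of 0, which is what the `source_vocab_to_int['<PAD>']` lookup supplies in practice) pads every sentence in a batch to the batch's longest sentence:

```python
def pad_sentence_batch(sentence_batch, pad_int):
    # Pad each sentence with pad_int so all sentences in the batch
    # share one length -- the length of the longest sentence.
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

batch = [[5, 6], [7, 8, 9, 10], [11]]
padded = pad_sentence_batch(batch, 0)  # every row padded to length 4
```

Note that padding is per batch, not global: a batch of short sentences wastes no space on the corpus-wide maximum length.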
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 1/269 - Train Accuracy: 0.0647, Validation Accuracy: 0.0543, Loss: 7.4632 Epoch 0 Batch 2/269 - Train Accuracy: 0.2768, Validation Accuracy: 0.3220, Loss: 4.8513 Epoch 0 Batch 3/269 - Train Accuracy: 0.1073, Validation Accuracy: 0.0959, Loss: 4.5128 Epoch 0 Batch 4/269 - Train Accuracy: 0.1257, Validation Accuracy: 0.1113, Loss: 4.3018 Epoch 0 Batch 5/269 - Train Accuracy: 0.2315, Validation Accuracy: 0.3085, Loss: 4.1339 Epoch 0 Batch 6/269 - Train Accuracy: 0.3173, Validation Accuracy: 0.3471, Loss: 3.9303 Epoch 0 Batch 7/269 - Train Accuracy: 0.3157, Validation Accuracy: 0.3455, Loss: 3.7237 Epoch 0 Batch 8/269 - Train Accuracy: 0.2832, Validation Accuracy: 0.3454, Loss: 3.6791 Epoch 0 Batch 9/269 - Train Accuracy: 0.3090, Validation Accuracy: 0.3479, Loss: 3.5153 Epoch 0 Batch 10/269 - Train Accuracy: 0.2809, Validation Accuracy: 0.3515, Loss: 3.4832 Epoch 0 Batch 11/269 - Train Accuracy: 0.3448, Validation Accuracy: 0.3778, Loss: 3.3030 Epoch 0 Batch 12/269 - Train Accuracy: 0.3267, Validation Accuracy: 0.3850, Loss: 3.3671 Epoch 0 Batch 13/269 - Train Accuracy: 0.3841, Validation Accuracy: 0.3807, Loss: 3.0089 Epoch 0 Batch 14/269 - Train Accuracy: 0.3442, Validation Accuracy: 0.3811, Loss: 3.1348 Epoch 0 Batch 15/269 - Train Accuracy: 0.3437, Validation Accuracy: 0.3862, Loss: 3.1166 Epoch 0 Batch 16/269 - Train Accuracy: 0.3556, Validation Accuracy: 0.3833, Loss: 3.0293 Epoch 0 Batch 17/269 - Train Accuracy: 0.3446, Validation Accuracy: 0.3819, Loss: 2.9874 Epoch 0 Batch 18/269 - Train Accuracy: 0.3212, Validation Accuracy: 0.3875, Loss: 3.0841 Epoch 0 Batch 19/269 - Train Accuracy: 0.3954, Validation Accuracy: 0.3963, Loss: 2.7804 Epoch 0 Batch 20/269 - Train Accuracy: 0.3390, Validation Accuracy: 0.3999, Loss: 
2.9828 Epoch 0 Batch 21/269 - Train Accuracy: 0.3739, Validation Accuracy: 0.4268, Loss: 3.0080 Epoch 0 Batch 22/269 - Train Accuracy: 0.4016, Validation Accuracy: 0.4240, Loss: 2.8177 Epoch 0 Batch 23/269 - Train Accuracy: 0.3923, Validation Accuracy: 0.4111, Loss: 2.7814 Epoch 0 Batch 24/269 - Train Accuracy: 0.3608, Validation Accuracy: 0.4213, Loss: 2.9058 Epoch 0 Batch 25/269 - Train Accuracy: 0.3745, Validation Accuracy: 0.4310, Loss: 2.8798 Epoch 0 Batch 26/269 - Train Accuracy: 0.4329, Validation Accuracy: 0.4290, Loss: 2.5936 Epoch 0 Batch 27/269 - Train Accuracy: 0.4136, Validation Accuracy: 0.4363, Loss: 2.7220 Epoch 0 Batch 28/269 - Train Accuracy: 0.3795, Validation Accuracy: 0.4439, Loss: 2.8614 Epoch 0 Batch 29/269 - Train Accuracy: 0.3970, Validation Accuracy: 0.4452, Loss: 2.7851 Epoch 0 Batch 30/269 - Train Accuracy: 0.4159, Validation Accuracy: 0.4466, Loss: 2.6471 Epoch 0 Batch 31/269 - Train Accuracy: 0.4279, Validation Accuracy: 0.4512, Loss: 2.6038 Epoch 0 Batch 32/269 - Train Accuracy: 0.4220, Validation Accuracy: 0.4532, Loss: 2.6186 Epoch 0 Batch 33/269 - Train Accuracy: 0.4366, Validation Accuracy: 0.4576, Loss: 2.5396 Epoch 0 Batch 34/269 - Train Accuracy: 0.4166, Validation Accuracy: 0.4450, Loss: 2.5414 Epoch 0 Batch 35/269 - Train Accuracy: 0.4311, Validation Accuracy: 0.4553, Loss: 2.5223 Epoch 0 Batch 36/269 - Train Accuracy: 0.4410, Validation Accuracy: 0.4628, Loss: 2.5200 Epoch 0 Batch 37/269 - Train Accuracy: 0.4362, Validation Accuracy: 0.4609, Loss: 2.5019 Epoch 0 Batch 38/269 - Train Accuracy: 0.4273, Validation Accuracy: 0.4550, Loss: 2.4758 Epoch 0 Batch 39/269 - Train Accuracy: 0.4399, Validation Accuracy: 0.4634, Loss: 2.4427 Epoch 0 Batch 40/269 - Train Accuracy: 0.4225, Validation Accuracy: 0.4663, Loss: 2.5267 Epoch 0 Batch 41/269 - Train Accuracy: 0.4489, Validation Accuracy: 0.4730, Loss: 2.4047 Epoch 0 Batch 42/269 - Train Accuracy: 0.4775, Validation Accuracy: 0.4754, Loss: 2.2843 Epoch 0 Batch 43/269 - Train 
Accuracy: 0.4232, Validation Accuracy: 0.4720, Loss: 2.4341 Epoch 0 Batch 44/269 - Train Accuracy: 0.4516, Validation Accuracy: 0.4665, Loss: 2.3223 Epoch 0 Batch 45/269 - Train Accuracy: 0.3907, Validation Accuracy: 0.4466, Loss: 2.4262 Epoch 0 Batch 46/269 - Train Accuracy: 0.4339, Validation Accuracy: 0.4854, Loss: 2.5309 Epoch 0 Batch 47/269 - Train Accuracy: 0.4550, Validation Accuracy: 0.4461, Loss: 2.1602 Epoch 0 Batch 48/269 - Train Accuracy: 0.4371, Validation Accuracy: 0.4635, Loss: 2.3183 Epoch 0 Batch 49/269 - Train Accuracy: 0.4280, Validation Accuracy: 0.4721, Loss: 2.3729 Epoch 0 Batch 50/269 - Train Accuracy: 0.4560, Validation Accuracy: 0.5004, Loss: 2.3914 Epoch 0 Batch 51/269 - Train Accuracy: 0.4546, Validation Accuracy: 0.4822, Loss: 2.2883 Epoch 0 Batch 52/269 - Train Accuracy: 0.4562, Validation Accuracy: 0.4757, Loss: 2.2015 Epoch 0 Batch 53/269 - Train Accuracy: 0.4186, Validation Accuracy: 0.4648, Loss: 2.2967 Epoch 0 Batch 54/269 - Train Accuracy: 0.4412, Validation Accuracy: 0.4851, Loss: 2.2910 Epoch 0 Batch 55/269 - Train Accuracy: 0.4677, Validation Accuracy: 0.4896, Loss: 2.1397 Epoch 0 Batch 56/269 - Train Accuracy: 0.4780, Validation Accuracy: 0.4939, Loss: 2.1382 Epoch 0 Batch 57/269 - Train Accuracy: 0.4838, Validation Accuracy: 0.5010, Loss: 2.1263 Epoch 0 Batch 58/269 - Train Accuracy: 0.4701, Validation Accuracy: 0.4866, Loss: 2.0905 Epoch 0 Batch 59/269 - Train Accuracy: 0.4556, Validation Accuracy: 0.4765, Loss: 2.0540 Epoch 0 Batch 60/269 - Train Accuracy: 0.4769, Validation Accuracy: 0.4861, Loss: 2.0112 Epoch 0 Batch 61/269 - Train Accuracy: 0.5120, Validation Accuracy: 0.4987, Loss: 1.9459 Epoch 0 Batch 62/269 - Train Accuracy: 0.5050, Validation Accuracy: 0.5016, Loss: 1.9562 Epoch 0 Batch 63/269 - Train Accuracy: 0.4728, Validation Accuracy: 0.4938, Loss: 2.0149 Epoch 0 Batch 64/269 - Train Accuracy: 0.4740, Validation Accuracy: 0.4922, Loss: 2.0106 Epoch 0 Batch 65/269 - Train Accuracy: 0.4753, Validation Accuracy: 
0.4904, Loss: 1.9874 Epoch 0 Batch 66/269 - Train Accuracy: 0.4953, Validation Accuracy: 0.4971, Loss: 1.9142 Epoch 0 Batch 67/269 - Train Accuracy: 0.4716, Validation Accuracy: 0.4927, Loss: 1.9732 Epoch 0 Batch 68/269 - Train Accuracy: 0.4822, Validation Accuracy: 0.5025, Loss: 1.9683 Epoch 0 Batch 69/269 - Train Accuracy: 0.4688, Validation Accuracy: 0.5119, Loss: 2.0719 Epoch 0 Batch 70/269 - Train Accuracy: 0.5100, Validation Accuracy: 0.5203, Loss: 1.9024 Epoch 0 Batch 71/269 - Train Accuracy: 0.4776, Validation Accuracy: 0.5202, Loss: 2.0005 Epoch 0 Batch 72/269 - Train Accuracy: 0.5122, Validation Accuracy: 0.5153, Loss: 1.8255 Epoch 0 Batch 73/269 - Train Accuracy: 0.4758, Validation Accuracy: 0.5049, Loss: 1.9043 Epoch 0 Batch 74/269 - Train Accuracy: 0.4827, Validation Accuracy: 0.5249, Loss: 1.9515 Epoch 0 Batch 75/269 - Train Accuracy: 0.4987, Validation Accuracy: 0.5233, Loss: 1.8361 Epoch 0 Batch 76/269 - Train Accuracy: 0.4706, Validation Accuracy: 0.5067, Loss: 1.8502 Epoch 0 Batch 77/269 - Train Accuracy: 0.5081, Validation Accuracy: 0.5133, Loss: 1.8091 Epoch 0 Batch 78/269 - Train Accuracy: 0.4976, Validation Accuracy: 0.5244, Loss: 1.8077 Epoch 0 Batch 79/269 - Train Accuracy: 0.5015, Validation Accuracy: 0.5245, Loss: 1.7613 Epoch 0 Batch 80/269 - Train Accuracy: 0.5165, Validation Accuracy: 0.5265, Loss: 1.7057 Epoch 0 Batch 81/269 - Train Accuracy: 0.5025, Validation Accuracy: 0.5231, Loss: 1.7522 Epoch 0 Batch 82/269 - Train Accuracy: 0.5185, Validation Accuracy: 0.5346, Loss: 1.6739 Epoch 0 Batch 83/269 - Train Accuracy: 0.5099, Validation Accuracy: 0.5214, Loss: 1.6534 Epoch 0 Batch 84/269 - Train Accuracy: 0.5020, Validation Accuracy: 0.5222, Loss: 1.6497 Epoch 0 Batch 85/269 - Train Accuracy: 0.5053, Validation Accuracy: 0.5281, Loss: 1.6543 Epoch 0 Batch 86/269 - Train Accuracy: 0.4973, Validation Accuracy: 0.5352, Loss: 1.6567 Epoch 0 Batch 87/269 - Train Accuracy: 0.4847, Validation Accuracy: 0.5431, Loss: 1.7522 Epoch 0 Batch 88/269 
- Train Accuracy: 0.4974, Validation Accuracy: 0.5112, Loss: 1.6162 Epoch 0 Batch 89/269 - Train Accuracy: 0.5017, Validation Accuracy: 0.5194, Loss: 1.5810 ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int`- Convert words not in the vocabulary to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ unk = vocab_to_int['<UNK>'] sentence = sentence.lower() return [vocab_to_int.get(w, unk) for w in sentence.split()] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [190, 217, 140, 170, 169, 184, 90] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [142, 63, 334, 262, 163, 123, 347, 311, 1] French Words: il a vu un vieux camion jaune . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function def split(text, vocab): for seq in text.split('\n'): result = [] for word in seq.split(): result.append(vocab[word]) yield result source_id_text = list(split(source_text, source_vocab_to_int)) target_id_text = [] for sentence in split(target_text, target_vocab_to_int): sentence.append(target_vocab_to_int['<EOS>']) target_id_text.append(sentence) return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
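The preprocessed file stores exactly what `text_to_ids` produced: source sentences as lists of word ids, and target sentences with the `<EOS>` id appended. A toy run of the same conversion with a made-up vocabulary (the ids here are hypothetical, not the project's real ones):

```python
def text_to_ids_toy(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    # Same shape of conversion as text_to_ids above:
    # words -> ids for both sides, plus <EOS> on every target sentence.
    source_ids = [[source_vocab_to_int[w] for w in line.split()]
                  for line in source_text.split('\n')]
    target_ids = [[target_vocab_to_int[w] for w in line.split()] + [target_vocab_to_int['<EOS>']]
                  for line in target_text.split('\n')]
    return source_ids, target_ids

src_vocab = {'hello': 4, 'world': 5}
tgt_vocab = {'bonjour': 7, 'monde': 8, '<EOS>': 1}
src_ids, tgt_ids = text_to_ids_toy('hello world', 'bonjour monde', src_vocab, tgt_vocab)
# src_ids == [[4, 5]]; tgt_ids == [[7, 8, 1]]
```

Only the targets carry `<EOS>`: the decoder must learn to emit it, while the encoder never needs to see it.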
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.2.1 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function input_tf = tf.placeholder(tf.int32, shape=[None, None], name="input") targets = tf.placeholder(tf.int32, shape=[None, None], name="targets") learning_rate = tf.placeholder(tf.float32, name="learning_rate") keep_prob = tf.placeholder(tf.float32, name="keep_prob") target_sequence_length = tf.placeholder(tf.int32, shape=[None,], name="target_sequence_length") max_target_len = tf.reduce_max(target_sequence_length, name="max_target_len") source_sequence_length = tf.placeholder(tf.int32, shape=[None,], name="source_sequence_length") return input_tf, targets, learning_rate, keep_prob, target_sequence_length, max_target_len, source_sequence_length """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output ERROR:tensorflow:================================== Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>): <tf.Operation 'assert_rank_2/Assert/Assert' type=Assert> If you want to mark it as used call its "mark_used()" method. 
It was originally created here: [long ipykernel / TensorFlow traceback omitted] ================================== ERROR:tensorflow:================================== Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>): <tf.Operation 'assert_rank_3/Assert/Assert' type=Assert> If you want to mark it as used call its "mark_used()" method. It was originally created here: [long ipykernel / TensorFlow traceback omitted] ================================== Tests Passed ###Markdown Process Decoder InputImplement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch. (The two "Object was never used" warnings in the previous cell's output come from `tf.assert_rank` calls inside the unit test; they are harmless and the test passes.) 
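Before implementing the transformation in TensorFlow, it helps to see it on a toy batch. A pure-Python sketch (the GO id of 1 and the token ids below are made up for illustration):

```python
# Decoder input = target sequence shifted right: drop the last id in each
# row, prepend the <GO> id. This mirrors the strided_slice + concat approach.
GO_ID = 1  # assumed id for '<GO>' in this toy example

def shift_for_decoder(target_batch, go_id=GO_ID):
    return [[go_id] + seq[:-1] for seq in target_batch]

batch = [[10, 11, 12, 3],   # 3 playing the role of <EOS>
         [20, 21, 22, 3]]
print(shift_for_decoder(batch))  # [[1, 10, 11, 12], [1, 20, 21, 22]]
```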
###Code def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ###Code from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ # TODO: 
Implement Function # Encoder embedding enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) # RNN cell def make_cell(rnn_size): enc_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return enc_cell enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) #enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size)] * num_layers) enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32) return enc_output, enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function train_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length) decoder 
= tf.contrib.seq2seq.BasicDecoder(cell=dec_cell, helper=train_helper, initial_state=encoder_state, output_layer=output_layer) # TensorFlow 1.2 returns a 3-value tuple # Returns (final_outputs, final_state, final_sequence_lengths) final_outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder=decoder, impute_finished=True, maximum_iterations=max_summary_length) return final_outputs """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate an inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS ID :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens') greedy = 
tf.contrib.seq2seq.GreedyEmbeddingHelper(embedding=dec_embeddings, start_tokens=start_tokens, end_token=end_of_sequence_id) decoder = tf.contrib.seq2seq.BasicDecoder(cell=dec_cell, helper=greedy, initial_state=encoder_state, output_layer=output_layer) # TF 1.2 returns (final_outputs, final_state, final_sequence_lengths) final_outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder=decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) return final_outputs """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. 
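What `GreedyEmbeddingHelper` plus `dynamic_decode` do at inference time boils down to a short loop: feed the previous prediction back in, take the argmax, and stop at `<EOS>` or after a maximum number of steps. A framework-free sketch (the `step` function and its toy logits are invented for illustration, not the notebook's trained model):

```python
# Toy logits table standing in for one decoder-cell step over a 3-word
# vocabulary. Entirely made up; real logits come from the trained RNN.
TOY_LOGITS = {
    0: [0.1, 0.7, 0.2],   # after <GO> (id 0), token 1 scores highest
    1: [0.0, 0.2, 0.8],   # after token 1, token 2 (our <EOS>) scores highest
}

def step(token):
    # Fake decoder step: previous token -> logits over the vocabulary.
    return TOY_LOGITS.get(token, [1.0, 0.0, 0.0])

def greedy_decode(start_id, eos_id, max_len=10):
    token, out = start_id, []
    for _ in range(max_len):
        logits = step(token)
        token = max(range(len(logits)), key=logits.__getitem__)  # argmax, as in GreedyEmbeddingHelper
        out.append(token)
        if token == eos_id:  # early stop, as dynamic_decode does at <EOS>
            break
    return out

print(greedy_decode(start_id=0, eos_id=2))  # [1, 2]
```

Training decoding differs only in that the helper feeds in the ground-truth target tokens instead of the model's own argmax predictions.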
###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function # 1. Decoder Embedding dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # 2. Construct the decoder cell def make_cell(rnn_size): dec_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return dec_cell dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) # 3. Dense layer to translate the decoder's output at each time # step into a choice from the target vocabulary output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) # 4. Set up a training decoder and an inference decoder # Training Decoder with tf.variable_scope("decode"): # Helper for the training process. Used by BasicDecoder to read inputs. 
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) # Basic decoder training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder # TF 1.2 returns (final_outputs, final_state, final_sequence_lengths) training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) # 5. Inference Decoder # Reuses the same parameters trained by the training process with tf.variable_scope("decode", reuse=True): start_tokens = tf.tile(tf.constant([target_vocab_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens') # Helper for the inference process. inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, target_vocab_to_int['<EOS>']) # Basic decoder inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder # TF 1.2 returns (final_outputs, final_state, final_sequence_lengths) inference_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Apply embedding to the target data for the decoder.- Decode the encoded input 
using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function # Pass the input data through the encoder. 
We'll ignore the encoder output, but use the state _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) # Prepare the target sequences we'll feed to the decoder in training mode dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) # Pass encoder state and decoder inputs to the decoders training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 20 # Batch Size batch_size = 128 # RNN Size rnn_size = 256 # Number of Layers num_layers = 1 # Embedding Size encoding_embedding_size = 128 decoding_embedding_size = 128 # Learning Rate learning_rate = 1e-3 # Dropout Keep Probability keep_probability = 0.5 display_step = 512 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
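The training graph in the next cell masks padded time steps out of the loss with `tf.sequence_mask`. The mask itself is easy to picture in numpy (a standalone sketch, not part of the graded graph):

```python
import numpy as np

def sequence_mask(lengths, maxlen):
    # 1.0 for real time steps, 0.0 for padding -- numpy analogue of
    # tf.sequence_mask(lengths, maxlen, dtype=tf.float32)
    return (np.arange(maxlen)[None, :] < np.asarray(lengths)[:, None]).astype(np.float32)

print(sequence_mask([2, 4], 5))
# [[1. 1. 0. 0. 0.]
#  [1. 1. 1. 1. 0.]]
```

`sequence_loss` then averages the cross-entropy only over positions where the mask is 1, so `<PAD>` tokens don't drag the gradient.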
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for 
sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. 
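Before kicking off training, it can help to sanity-check the batching helpers by hand. A standalone replica of `pad_sentence_batch` (same logic, reproduced here so it runs on its own):

```python
def pad_sentence_batch(sentence_batch, pad_int):
    # Pad every sentence with pad_int up to the longest sentence in the batch.
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

batch = [[5, 6], [7, 8, 9, 10]]
print(pad_sentence_batch(batch, 0))  # [[5, 6, 0, 0], [7, 8, 9, 10]]
```

Because padding is per batch (to the longest sentence in that batch, not in the corpus), short batches waste less computation on `<PAD>` steps.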
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 512/1077 - Train Accuracy: 0.7379, Validation Accuracy: 0.6712, Loss: 0.4576 Epoch 0 Batch 1024/1077 - Train Accuracy: 0.8738, Validation Accuracy: 0.8746, Loss: 0.1436 Epoch 1 Batch 512/1077 - Train Accuracy: 0.9598, Validation Accuracy: 0.9094, Loss: 0.0450 Epoch 1 Batch 1024/1077 - Train Accuracy: 0.9430, Validation Accuracy: 0.9244, Loss: 0.0472 Epoch 2 Batch 512/1077 - Train Accuracy: 0.9859, Validation Accuracy: 0.9549, Loss: 0.0196 Epoch 2 Batch 1024/1077 - Train Accuracy: 0.9641, Validation Accuracy: 0.9627, Loss: 0.0280 Epoch 3 Batch 512/1077 - Train Accuracy: 0.9906, Validation Accuracy: 0.9595, Loss: 0.0123 Epoch 3 Batch 1024/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9762, Loss: 0.0208 Epoch 4 Batch 512/1077 - Train Accuracy: 0.9914, Validation Accuracy: 0.9719, Loss: 0.0099 Epoch 4 Batch 1024/1077 - Train Accuracy: 0.9812, Validation Accuracy: 0.9833, Loss: 0.0149 Epoch 5 Batch 512/1077 - Train Accuracy: 0.9891, Validation Accuracy: 0.9755, Loss: 0.0086 Epoch 5 Batch 1024/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9890, Loss: 0.0116 Epoch 6 Batch 512/1077 - Train Accuracy: 0.9961, Validation Accuracy: 0.9730, Loss: 0.0050 Epoch 6 Batch 1024/1077 - Train Accuracy: 0.9918, Validation Accuracy: 0.9862, Loss: 0.0104 Epoch 7 Batch 512/1077 - Train Accuracy: 0.9957, Validation Accuracy: 0.9798, Loss: 0.0065 Epoch 7 Batch 1024/1077 - Train Accuracy: 0.9879, Validation Accuracy: 0.9709, Loss: 0.0102 Epoch 8 Batch 512/1077 - Train Accuracy: 0.9977, Validation Accuracy: 0.9851, Loss: 0.0050 Epoch 8 Batch 1024/1077 - Train Accuracy: 0.9953, Validation Accuracy: 0.9712, Loss: 0.0086 Epoch 9 Batch 512/1077 - Train Accuracy: 0.9992, Validation Accuracy: 0.9719, Loss: 0.0036 Epoch 9 Batch 1024/1077 - 
Train Accuracy: 0.9922, Validation Accuracy: 0.9727, Loss: 0.0094 Epoch 10 Batch 512/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9826, Loss: 0.0033 Epoch 10 Batch 1024/1077 - Train Accuracy: 0.9953, Validation Accuracy: 0.9844, Loss: 0.0058 Epoch 11 Batch 512/1077 - Train Accuracy: 0.9969, Validation Accuracy: 0.9872, Loss: 0.0043 Epoch 11 Batch 1024/1077 - Train Accuracy: 0.9965, Validation Accuracy: 0.9833, Loss: 0.0046 Epoch 12 Batch 512/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9830, Loss: 0.0025 Epoch 12 Batch 1024/1077 - Train Accuracy: 0.9965, Validation Accuracy: 0.9851, Loss: 0.0050 Epoch 13 Batch 512/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9833, Loss: 0.0017 Epoch 13 Batch 1024/1077 - Train Accuracy: 0.9977, Validation Accuracy: 0.9837, Loss: 0.0053 Epoch 14 Batch 512/1077 - Train Accuracy: 0.9949, Validation Accuracy: 0.9826, Loss: 0.0027 Epoch 14 Batch 1024/1077 - Train Accuracy: 0.9977, Validation Accuracy: 0.9897, Loss: 0.0061 Epoch 15 Batch 512/1077 - Train Accuracy: 0.9969, Validation Accuracy: 0.9794, Loss: 0.0041 Epoch 15 Batch 1024/1077 - Train Accuracy: 0.9977, Validation Accuracy: 0.9837, Loss: 0.0041 Epoch 16 Batch 512/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9755, Loss: 0.0021 Epoch 16 Batch 1024/1077 - Train Accuracy: 0.9988, Validation Accuracy: 0.9830, Loss: 0.0026 Epoch 17 Batch 512/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9830, Loss: 0.0031 Epoch 17 Batch 1024/1077 - Train Accuracy: 0.9988, Validation Accuracy: 0.9929, Loss: 0.0025 Epoch 18 Batch 512/1077 - Train Accuracy: 1.0000, Validation Accuracy: 0.9826, Loss: 0.0047 Epoch 18 Batch 1024/1077 - Train Accuracy: 0.9988, Validation Accuracy: 0.9854, Loss: 0.0029 Epoch 19 Batch 512/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9759, Loss: 0.0032 Epoch 19 Batch 1024/1077 - Train Accuracy: 0.9988, Validation Accuracy: 0.9893, Loss: 0.0030 Model Trained and Saved ###Markdown Save ParametersSave the `batch_size` and 
`save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary, to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function sentence = sentence.lower() sentence_ids = [vocab_to_int[word] if word in vocab_to_int.keys() else vocab_to_int['<UNK>'] for word in sentence.split() ] return sentence_ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [130, 229, 105, 230, 226, 131, 41] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [233, 35, 122, 52, 312, 45, 153, 11, 1] French Words: il a vu un vieux camion jaune . <EOS> ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (20, 21) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 20 to 21: china is usually pleasant during november , and it is never quiet in october . French sentences 20 to 21: chine est généralement agréable en novembre , et il est jamais tranquille en octobre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of each sentence from `target_text`.
This will help the neural network predict when the sentence should end. You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function source_id_text = [] target_id_text = [] word_source_sentence_id = [] word_target_sentence_id = [] # Source ID transformation for sentence_source in source_text.split('\n'): #break data into sentences for word_source in sentence_source.split(' '): #for each word in each sentence find its ID if word_source in source_vocab_to_int: word_source_sentence_id.append(source_vocab_to_int[word_source]) #append all IDs for all words in a given sentence source_id_text.append(word_source_sentence_id) #append sequence of IDs for all sentences word_source_sentence_id = [] # Target ID transformation for sentence_target in target_text.split('\n'): #break data into sentences for word_target in sentence_target.split(' '): #for each word in each sentence find its ID if word_target in target_vocab_to_int: word_target_sentence_id.append(target_vocab_to_int[word_target]) #append all IDs for all words in a given sentence word_target_sentence_id.append(target_vocab_to_int['<EOS>']) #append <EOS> to each target sentence target_id_text.append(word_target_sentence_id) #append sequence of IDs for all sentences word_target_sentence_id = [] return (source_id_text,target_id_text) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.0.1 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoding_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ # TODO: Implement Function input_ = tf.placeholder(tf.int32, [None, None],name='input') target_ = tf.placeholder(tf.int32, [None, None]) learning_rate = tf.placeholder(tf.float32) keep_probability = tf.placeholder(tf.float32,name='keep_prob') return (input_,target_,learning_rate,keep_probability) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoding InputImplement `process_decoding_input` using TensorFlow to remove the last word id from each batch in `target_data` and concat the GO ID to the beginning of each batch.
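The "drop the last id, prepend `<GO>`" transform can be sketched without TensorFlow. This is an illustrative stand-in with made-up toy ids (1 playing the role of `<GO>`, 3 the role of `<EOS>`), not the graded TF implementation:

```python
# Toy sketch of process_decoding_input using plain Python lists instead
# of tf.strided_slice / tf.fill / tf.concat. All ids here are invented.
def process_decoding_input_sketch(target_batch, go_id):
    # For each row: drop the final word id and prepend the <GO> id.
    return [[go_id] + row[:-1] for row in target_batch]

batch = [[4, 5, 6, 3],   # 3 standing in for <EOS>
         [7, 8, 9, 3]]
print(process_decoding_input_sketch(batch, go_id=1))
# [[1, 4, 5, 6], [1, 7, 8, 9]]
```

The decoder never needs to see the final id of each row during training, because there is nothing left to predict after it; shifting the batch right by one and inserting `<GO>` gives the decoder its inputs.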
###Code def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) #remove the last word id from each batch dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) #prepend the <GO> id return dec_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn). ###Code def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # TODO: Implement Function LSTM = tf.contrib.rnn.BasicLSTMCell(rnn_size) cell = tf.contrib.rnn.MultiRNNCell([LSTM]*num_layers) enc_cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) _, enc_state = tf.nn.dynamic_rnn(enc_cell, rnn_inputs, dtype=tf.float32) return enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate training logits using [`tf.contrib.seq2seq.simple_decoder_fn_train()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_train) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder).
Apply the `output_fn` to the [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder) outputs. ###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ # TODO: Implement Function train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) dec_cell=tf.contrib.rnn.DropoutWrapper(dec_cell,input_keep_prob=keep_prob) outputs, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope) train_logits = output_fn(outputs) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference logits using [`tf.contrib.seq2seq.simple_decoder_fn_inference()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_inference) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder).
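At a high level, inference decoding is a greedy feed-back loop: start from the `<GO>` id, take the argmax of the decoder's scores at every step, feed that prediction back in, and stop at `<EOS>` or `maximum_length`. A framework-free sketch of that loop, where `step` is a hypothetical stand-in for one decoder-cell step (the real `simple_decoder_fn_inference` additionally embeds the fed-back id and threads RNN state):

```python
def greedy_decode(step, start_id, end_id, maximum_length):
    """Greedy inference loop: feed each prediction back in as the next input."""
    ids = [start_id]
    for _ in range(maximum_length):
        scores = step(ids)                                         # one decoder step
        next_id = max(range(len(scores)), key=scores.__getitem__)  # argmax
        ids.append(next_id)
        if next_id == end_id:                                      # stop at <EOS>
            break
    return ids[1:]                                                 # drop <GO>

# Toy "decoder" scripted to emit ids 5, 6, then <EOS> (id 1):
def scripted_step(ids, script=(5, 6, 1)):
    target = script[len(ids) - 1]
    return [1.0 if i == target else 0.0 for i in range(8)]

print(greedy_decode(scripted_step, start_id=2, end_id=1, maximum_length=10))
# [5, 6, 1]
```

This also makes clear why no ground-truth `dec_embed_input` is passed at inference time: the decoder's own previous output is its next input.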
###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ # TODO: Implement Function infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length,vocab_size) dec_cell=tf.contrib.rnn.DropoutWrapper(dec_cell,input_keep_prob=keep_prob) inf_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decoder_fn=infer_decoder_fn, sequence_length=maximum_length, scope=decoding_scope) return inf_pred """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.- Create RNN cell for decoding using `rnn_size` and `num_layers`.- Create the output function using [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) to transform its input, logits, to class logits.- Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)` function to get the training logits.- Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)` function to get the inference
logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. ###Code def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function # Decoder RNNs lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) cell = tf.contrib.rnn.MultiRNNCell([lstm]*num_layers) dec_cell = tf.contrib.rnn.DropoutWrapper(cell, keep_prob) with tf.variable_scope("decoding") as decoding_scope: output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope) Training_Logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) with tf.variable_scope("decoding", reuse=True) as decoding_scope: Inference_Logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], sequence_length, vocab_size, decoding_scope, output_fn, keep_prob) return (Training_Logits, Inference_Logits) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)`.- Process target data using your 
`process_decoding_input(target_data, target_vocab_to_int, batch_size)` function.- Apply embedding to the target data for the decoder.- Decode the encoded input using your `decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)`. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) enc_layer = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob) dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) Training_Logits, Inference_Logits = decoding_layer(dec_embed_input, dec_embeddings, enc_layer, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return (Training_Logits, Inference_Logits) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed
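Both embedding steps above (`embed_sequence` for the encoder and `embedding_lookup` for the decoder) amount to the same operation: each word id indexes one row of an embedding matrix. A toy sketch with an invented 3-word vocabulary and made-up weights:

```python
# Toy sketch of what tf.nn.embedding_lookup does inside seq2seq_model:
# a word id selects a row of the embedding matrix (all values invented).
embeddings = [[0.1, 0.2],   # id 0
              [0.3, 0.4],   # id 1
              [0.5, 0.6]]   # id 2

def embedding_lookup_sketch(embeddings, ids):
    # ids is a batch of id sequences; each id becomes its embedding row.
    return [[embeddings[i] for i in row] for row in ids]

print(embedding_lookup_sketch(embeddings, [[2, 0]]))
# [[[0.5, 0.6], [0.1, 0.2]]]
```

In the real model the matrix rows are trainable variables, so these vectors are learned jointly with the rest of the network rather than fixed.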
###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability ###Code # Number of Epochs epochs = 5 # Batch Size batch_size = 256 # RNN Size rnn_size = 128 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 256 decoding_embedding_size = 256 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.9 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = 
optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i,
len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 0/538 - Train Accuracy: 0.234, Validation Accuracy: 0.316, Loss: 5.910 Epoch 0 Batch 1/538 - Train Accuracy: 0.231, Validation Accuracy: 0.316, Loss: 4.967 Epoch 0 Batch 2/538 - Train Accuracy: 0.252, Validation Accuracy: 0.316, Loss: 4.153 Epoch 0 Batch 3/538 - Train Accuracy: 0.264, Validation Accuracy: 0.346, Loss: 3.852 Epoch 0 Batch 4/538 - Train Accuracy: 0.276, Validation Accuracy: 0.347, Loss: 3.661 Epoch 0 Batch 5/538 - Train Accuracy: 0.299, Validation Accuracy: 0.347, Loss: 3.483 Epoch 0 Batch 6/538 - Train Accuracy: 0.305, Validation Accuracy: 0.347, Loss: 3.419 Epoch 0 Batch 7/538 - Train Accuracy: 0.287, Validation Accuracy: 0.352, Loss: 3.431 Epoch 0 Batch 8/538 - Train Accuracy: 0.293, Validation Accuracy: 0.357, Loss: 3.350 Epoch 0 Batch 9/538 - Train Accuracy: 0.292, Validation Accuracy: 0.358, Loss: 3.223 Epoch 0 Batch 10/538 - Train Accuracy: 0.282, Validation Accuracy: 0.371, Loss: 3.263 Epoch 0 Batch 11/538 - Train Accuracy: 0.293, Validation Accuracy: 0.366, Loss: 3.151 Epoch 0 Batch 12/538 - Train Accuracy: 0.312, Validation Accuracy: 0.389, Loss: 3.144 Epoch 0 Batch 13/538 - Train Accuracy: 0.360, Validation Accuracy: 0.385, Loss: 2.879 Epoch 0 Batch 14/538 - Train Accuracy: 0.340, Validation Accuracy: 0.404, Loss: 2.955 Epoch 0 Batch 15/538 - Train Accuracy: 0.377, Validation Accuracy: 0.401, Loss: 2.798 Epoch 0 Batch 16/538 - Train Accuracy: 0.366, Validation Accuracy: 0.402, Loss: 2.773 Epoch 0 Batch 17/538 - Train Accuracy: 0.352, Validation Accuracy: 0.406, Loss: 2.802 Epoch 0 Batch 18/538 - Train Accuracy: 0.337, Validation Accuracy: 0.410, Loss: 2.838 Epoch 0 Batch 19/538 - Train Accuracy: 0.354, Validation Accuracy: 0.426, Loss: 2.789 Epoch 0 Batch 20/538 - Train Accuracy: 0.388, Validation Accuracy: 0.430, Loss: 2.638 Epoch 0 Batch 21/538 - Train 
Accuracy: 0.338, Validation Accuracy: 0.436, Loss: 2.798 Epoch 0 Batch 22/538 - Train Accuracy: 0.400, Validation Accuracy: 0.460, Loss: 2.632 Epoch 0 Batch 23/538 - Train Accuracy: 0.413, Validation Accuracy: 0.464, Loss: 2.600 Epoch 0 Batch 24/538 - Train Accuracy: 0.410, Validation Accuracy: 0.461, Loss: 2.576 Epoch 0 Batch 25/538 - Train Accuracy: 0.407, Validation Accuracy: 0.457, Loss: 2.546 Epoch 0 Batch 26/538 - Train Accuracy: 0.411, Validation Accuracy: 0.467, Loss: 2.545 Epoch 0 Batch 27/538 - Train Accuracy: 0.430, Validation Accuracy: 0.475, Loss: 2.472 Epoch 0 Batch 28/538 - Train Accuracy: 0.474, Validation Accuracy: 0.474, Loss: 2.261 Epoch 0 Batch 29/538 - Train Accuracy: 0.446, Validation Accuracy: 0.489, Loss: 2.402 Epoch 0 Batch 30/538 - Train Accuracy: 0.416, Validation Accuracy: 0.468, Loss: 2.474 Epoch 0 Batch 31/538 - Train Accuracy: 0.466, Validation Accuracy: 0.497, Loss: 2.346 Epoch 0 Batch 32/538 - Train Accuracy: 0.451, Validation Accuracy: 0.499, Loss: 2.373 Epoch 0 Batch 33/538 - Train Accuracy: 0.471, Validation Accuracy: 0.499, Loss: 2.309 Epoch 0 Batch 34/538 - Train Accuracy: 0.460, Validation Accuracy: 0.500, Loss: 2.327 Epoch 0 Batch 35/538 - Train Accuracy: 0.451, Validation Accuracy: 0.499, Loss: 2.324 Epoch 0 Batch 36/538 - Train Accuracy: 0.467, Validation Accuracy: 0.502, Loss: 2.215 Epoch 0 Batch 37/538 - Train Accuracy: 0.478, Validation Accuracy: 0.514, Loss: 2.241 Epoch 0 Batch 38/538 - Train Accuracy: 0.432, Validation Accuracy: 0.493, Loss: 2.286 Epoch 0 Batch 39/538 - Train Accuracy: 0.463, Validation Accuracy: 0.520, Loss: 2.272 Epoch 0 Batch 40/538 - Train Accuracy: 0.535, Validation Accuracy: 0.526, Loss: 2.037 Epoch 0 Batch 41/538 - Train Accuracy: 0.460, Validation Accuracy: 0.511, Loss: 2.219 Epoch 0 Batch 42/538 - Train Accuracy: 0.477, Validation Accuracy: 0.517, Loss: 2.154 Epoch 0 Batch 43/538 - Train Accuracy: 0.481, Validation Accuracy: 0.521, Loss: 2.195 Epoch 0 Batch 44/538 - Train Accuracy: 0.483, 
Validation Accuracy: 0.527, Loss: 2.216 Epoch 0 Batch 45/538 - Train Accuracy: 0.501, Validation Accuracy: 0.527, Loss: 2.026 Epoch 0 Batch 46/538 - Train Accuracy: 0.478, Validation Accuracy: 0.523, Loss: 2.107 Epoch 0 Batch 47/538 - Train Accuracy: 0.508, Validation Accuracy: 0.535, Loss: 2.032 Epoch 0 Batch 48/538 - Train Accuracy: 0.522, Validation Accuracy: 0.530, Loss: 1.965 Epoch 0 Batch 49/538 - Train Accuracy: 0.487, Validation Accuracy: 0.517, Loss: 2.126 Epoch 0 Batch 50/538 - Train Accuracy: 0.498, Validation Accuracy: 0.530, Loss: 1.993 Epoch 0 Batch 51/538 - Train Accuracy: 0.428, Validation Accuracy: 0.518, Loss: 2.182 Epoch 0 Batch 52/538 - Train Accuracy: 0.515, Validation Accuracy: 0.553, Loss: 2.005 Epoch 0 Batch 53/538 - Train Accuracy: 0.511, Validation Accuracy: 0.528, Loss: 1.810 Epoch 0 Batch 54/538 - Train Accuracy: 0.497, Validation Accuracy: 0.534, Loss: 1.917 Epoch 0 Batch 55/538 - Train Accuracy: 0.488, Validation Accuracy: 0.547, Loss: 1.978 Epoch 0 Batch 56/538 - Train Accuracy: 0.515, Validation Accuracy: 0.545, Loss: 1.861 Epoch 0 Batch 57/538 - Train Accuracy: 0.484, Validation Accuracy: 0.554, Loss: 1.960 Epoch 0 Batch 58/538 - Train Accuracy: 0.449, Validation Accuracy: 0.526, Loss: 1.923 Epoch 0 Batch 59/538 - Train Accuracy: 0.488, Validation Accuracy: 0.534, Loss: 1.868 Epoch 0 Batch 60/538 - Train Accuracy: 0.521, Validation Accuracy: 0.549, Loss: 1.815 Epoch 0 Batch 61/538 - Train Accuracy: 0.511, Validation Accuracy: 0.551, Loss: 1.772 Epoch 0 Batch 62/538 - Train Accuracy: 0.510, Validation Accuracy: 0.544, Loss: 1.726 Epoch 0 Batch 63/538 - Train Accuracy: 0.527, Validation Accuracy: 0.542, Loss: 1.672 Epoch 0 Batch 64/538 - Train Accuracy: 0.545, Validation Accuracy: 0.553, Loss: 1.638 Epoch 0 Batch 65/538 - Train Accuracy: 0.480, Validation Accuracy: 0.541, Loss: 1.741 Epoch 0 Batch 66/538 - Train Accuracy: 0.517, Validation Accuracy: 0.548, Loss: 1.572 Epoch 0 Batch 67/538 - Train Accuracy: 0.529, Validation Accuracy: 
0.565, Loss: 1.615 Epoch 0 Batch 68/538 - Train Accuracy: 0.543, Validation Accuracy: 0.553, Loss: 1.501 Epoch 0 Batch 69/538 - Train Accuracy: 0.513, Validation Accuracy: 0.549, Loss: 1.572 Epoch 0 Batch 70/538 - Train Accuracy: 0.535, Validation Accuracy: 0.549, Loss: 1.486 Epoch 0 Batch 71/538 - Train Accuracy: 0.457, Validation Accuracy: 0.515, Loss: 1.540 Epoch 0 Batch 72/538 - Train Accuracy: 0.529, Validation Accuracy: 0.537, Loss: 1.432 Epoch 0 Batch 73/538 - Train Accuracy: 0.508, Validation Accuracy: 0.552, Loss: 1.489 Epoch 0 Batch 74/538 - Train Accuracy: 0.514, Validation Accuracy: 0.548, Loss: 1.414 Epoch 0 Batch 75/538 - Train Accuracy: 0.531, Validation Accuracy: 0.551, Loss: 1.370 Epoch 0 Batch 76/538 - Train Accuracy: 0.514, Validation Accuracy: 0.560, Loss: 1.446 Epoch 0 Batch 77/538 - Train Accuracy: 0.485, Validation Accuracy: 0.540, Loss: 1.383 Epoch 0 Batch 78/538 - Train Accuracy: 0.492, Validation Accuracy: 0.534, Loss: 1.352 Epoch 0 Batch 79/538 - Train Accuracy: 0.504, Validation Accuracy: 0.534, Loss: 1.271 Epoch 0 Batch 80/538 - Train Accuracy: 0.483, Validation Accuracy: 0.536, Loss: 1.356 Epoch 0 Batch 81/538 - Train Accuracy: 0.473, Validation Accuracy: 0.528, Loss: 1.326 Epoch 0 Batch 82/538 - Train Accuracy: 0.504, Validation Accuracy: 0.538, Loss: 1.279 Epoch 0 Batch 83/538 - Train Accuracy: 0.499, Validation Accuracy: 0.542, Loss: 1.295 Epoch 0 Batch 84/538 - Train Accuracy: 0.519, Validation Accuracy: 0.541, Loss: 1.256 Epoch 0 Batch 85/538 - Train Accuracy: 0.534, Validation Accuracy: 0.540, Loss: 1.160 Epoch 0 Batch 86/538 - Train Accuracy: 0.518, Validation Accuracy: 0.543, Loss: 1.254 Epoch 0 Batch 87/538 - Train Accuracy: 0.523, Validation Accuracy: 0.548, Loss: 1.207 Epoch 0 Batch 88/538 - Train Accuracy: 0.482, Validation Accuracy: 0.512, Loss: 1.211 ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary, to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function word_ids = [] sentence = sentence.lower() for word in sentence.split(' '): #for each word in each sentence find its ID if word in vocab_to_int: word_ids.append(vocab_to_int[word]) else: word_ids.append(vocab_to_int['<UNK>']) return word_ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw an old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) ###Output Input Word Ids: [81, 230, 2, 145, 33, 116, 164] English Words: ['he', 'saw', '<UNK>', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [214, 183, 302, 110, 139, 348, 335, 245, 1] French Words: ['il', 'a', 'vu', 'le', 'vieux', 'camion', 'octobre', '.', '<EOS>'] ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of each sentence from `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function target_lines = [( line + ' <EOS>') for line in target_text.split('\n')] target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] for sentence in target_lines] source_id_text = [[source_vocab_to_int[word] for word in line.split()] for line in source_text.split('\n')] return (source_id_text, target_id_text) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output TensorFlow Version: 1.0.0 Default GPU Device: /gpu:0 ###Markdown Build the Neural NetworkYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoding_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.Return the placeholders as the following tuple: (Input, Targets, Learning Rate, Keep Probability) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability) """ # TODO: Implement Function input_data = tf.placeholder(tf.int32, [None, None], name="input") targets = tf.placeholder(tf.int32, [None, None], name="targets") lr = tf.placeholder(tf.float32, name="lr") keep_prob = tf.placeholder(tf.float32, name="keep_prob") return (input_data, targets, lr, keep_prob) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoding InputImplement `process_decoding_input` using TensorFlow to remove the last word id from each batch in `target_data` and concat the GO ID to the beginning of each batch. ###Code def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function target_data = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) proc_target = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), target_data], 1) return proc_target """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn).
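Before moving on to the encoder, the decoder-input transformation implemented in `process_decoding_input` above can be sanity-checked without TensorFlow. The sketch below is an illustration only: NumPy stand-ins for `tf.strided_slice` and `tf.concat`, with arbitrary toy word ids.

```python
import numpy as np

def process_decoding_input_np(target_data, go_id):
    """Drop the last token of every row and prepend the <GO> id --
    a NumPy mirror of the tf.strided_slice + tf.concat above."""
    batch_size = target_data.shape[0]
    trimmed = target_data[:, :-1]                    # remove last word id per batch row
    go_column = np.full((batch_size, 1), go_id)      # one <GO> id per row
    return np.concatenate([go_column, trimmed], axis=1)

# Toy batch: two target sentences, 3 standing in for <EOS> (ids are made up)
targets = np.array([[10, 11, 12, 3],
                    [20, 21, 22, 3]])
print(process_decoding_input_np(targets, go_id=1))
# [[ 1 10 11 12]
#  [ 1 20 21 22]]
```

In this toy batch the dropped token is the `<EOS>` id: the decoder is trained to *emit* the end-of-sequence marker, not to consume it as input.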
###Code def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # TODO: Implement Function #drop = tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size), # output_keep_prob=keep_prob) #cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) encoding_cell = tf.contrib.rnn.MultiRNNCell( [tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers ) encoding_dropout = tf.contrib.rnn.DropoutWrapper( encoding_cell, keep_prob ) _, rnn_state = tf.nn.dynamic_rnn( cell = encoding_dropout, inputs = rnn_inputs, dtype=tf.float32 ) return rnn_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate training logits using [`tf.contrib.seq2seq.simple_decoder_fn_train()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_train) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder). Apply the `output_fn` to the [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder) outputs. 
###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ # TODO: Implement Function with tf.variable_scope("decoding_scope") as decoding_scope: # Training decoder train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) # Apply dropout to the decoder cell dropout = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob) # Decode with tf.contrib.seq2seq.dynamic_rnn_decoder train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder( dropout, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope) # Apply the output function to get the training logits train_logits = output_fn(train_pred) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference logits using [`tf.contrib.seq2seq.simple_decoder_fn_inference()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_inference) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder).
###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS ID :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ # TODO: Implement Function with tf.variable_scope("decoding_scope", reuse=None) as decoding_scope: # Inference Decoder infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference( output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, dtype=tf.int32) inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope) return inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.- Create an RNN cell for decoding using `rnn_size` and `num_layers`.- Create the output function using [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) to transform its input, logits, to class logits.- Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)` function to get the training logits.- Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)` function to get the
inference logits.Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. ###Code def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) dec_cell = tf.contrib.rnn.MultiRNNCell([lstm]*num_layers) start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] with tf.variable_scope("decoding") as decoding_scope: output_fn = lambda x: tf.contrib.layers.fully_connected( x, vocab_size, None, scope=decoding_scope ) train_logits = decoding_layer_train( encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob ) decoding_scope.reuse_variables() #with tf.variable_scope('decoding', reuse=True) as decoding_scope: inference_logits = decoding_layer_infer( encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, sequence_length, vocab_size, decoding_scope, output_fn, keep_prob) return (train_logits, inference_logits) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, 
num_layers, keep_prob)`.- Process target data using your `process_decoding_input(target_data, target_vocab_to_int, batch_size)` function.- Apply embedding to the target data for the decoder.- Decode the encoded input using your `decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)`. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ proc_target = process_decoding_input( target_data, target_vocab_to_int, batch_size ) enc_embed_input = tf.contrib.layers.embed_sequence( input_data, source_vocab_size, enc_embedding_size ) encoder_state = encoding_layer( enc_embed_input, rnn_size, num_layers, keep_prob ) dec_embeddings = tf.Variable( tf.random_uniform([target_vocab_size, dec_embedding_size]) ) dec_embed_input = tf.nn.embedding_lookup( dec_embeddings, proc_target ) train_logits, inference_logits = decoding_layer( dec_embed_input,
dec_embeddings, encoder_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob ) return (train_logits, inference_logits) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability ###Code # Number of Epochs epochs = 5 # Batch Size batch_size = 256 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 256 decoding_embedding_size = 256 # Learning Rate learning_rate = 0.003 # Dropout Keep Probability keep_probability = 0.50 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
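The graph-building cell that follows clips every gradient to [-1, 1] before the optimizer applies it. `tf.clip_by_value` is just an elementwise bound; a NumPy picture of the same operation (toy gradient values, not taken from the model):

```python
import numpy as np

grads = np.array([-3.5, -0.2, 0.7, 4.0])   # pretend per-element gradients
capped = np.clip(grads, -1.0, 1.0)         # what tf.clip_by_value(grad, -1., 1.) does per tensor
print(capped)                              # [-1.  -0.2  0.7  1. ]
```

Clipping keeps a single exploding gradient from blowing up an update while leaving well-behaved gradients untouched.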
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
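The `get_accuracy` helper in the training cell below zero-pads the target and prediction batches to a common length before comparing them elementwise (the real helper additionally takes an `argmax` over the logits first). A simplified NumPy sketch of the padding idea, with made-up word ids:

```python
import numpy as np

def padded_accuracy(target, predicted):
    """Zero-pad the shorter batch along the time axis,
    then take the mean elementwise match."""
    max_seq = max(target.shape[1], predicted.shape[1])
    target = np.pad(target, [(0, 0), (0, max_seq - target.shape[1])], 'constant')
    predicted = np.pad(predicted, [(0, 0), (0, max_seq - predicted.shape[1])], 'constant')
    return np.mean(target == predicted)

target = np.array([[5, 6, 7]])
predicted = np.array([[5, 6, 7, 0]])       # one step longer; target is padded to match
print(padded_accuracy(target, predicted))  # 1.0 -- the padded zeros also count as matches
```

Because padding uses zeros (conventionally the `<PAD>` id), matching pad positions can slightly inflate the score; it is still a perfectly serviceable metric for monitoring training progress.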
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 0/538 - Train Accuracy: 0.234, Validation Accuracy: 0.316, Loss: 5.893 Epoch 0 Batch 1/538 - Train Accuracy: 0.231, Validation Accuracy: 0.316, Loss: 5.052 Epoch 0 Batch 2/538 - Train Accuracy: 0.285, Validation Accuracy: 
0.345, Loss: 4.389 Epoch 0 Batch 3/538 - Train Accuracy: 0.265, Validation Accuracy: 0.347, Loss: 4.101 Epoch 0 Batch 4/538 - Train Accuracy: 0.276, Validation Accuracy: 0.347, Loss: 3.826 Epoch 0 Batch 5/538 - Train Accuracy: 0.313, Validation Accuracy: 0.365, Loss: 3.505 Epoch 0 Batch 6/538 - Train Accuracy: 0.333, Validation Accuracy: 0.378, Loss: 3.343 Epoch 0 Batch 7/538 - Train Accuracy: 0.319, Validation Accuracy: 0.386, Loss: 3.369 Epoch 0 Batch 8/538 - Train Accuracy: 0.332, Validation Accuracy: 0.394, Loss: 3.252 Epoch 0 Batch 9/538 - Train Accuracy: 0.311, Validation Accuracy: 0.376, Loss: 3.135 Epoch 0 Batch 10/538 - Train Accuracy: 0.314, Validation Accuracy: 0.399, Loss: 3.160 Epoch 0 Batch 11/538 - Train Accuracy: 0.338, Validation Accuracy: 0.405, Loss: 3.075 Epoch 0 Batch 12/538 - Train Accuracy: 0.343, Validation Accuracy: 0.415, Loss: 3.064 Epoch 0 Batch 13/538 - Train Accuracy: 0.399, Validation Accuracy: 0.420, Loss: 2.777 Epoch 0 Batch 14/538 - Train Accuracy: 0.366, Validation Accuracy: 0.431, Loss: 2.885 Epoch 0 Batch 15/538 - Train Accuracy: 0.408, Validation Accuracy: 0.436, Loss: 2.731 Epoch 0 Batch 16/538 - Train Accuracy: 0.389, Validation Accuracy: 0.428, Loss: 2.688 Epoch 0 Batch 17/538 - Train Accuracy: 0.370, Validation Accuracy: 0.428, Loss: 2.740 Epoch 0 Batch 18/538 - Train Accuracy: 0.373, Validation Accuracy: 0.438, Loss: 2.780 Epoch 0 Batch 19/538 - Train Accuracy: 0.376, Validation Accuracy: 0.445, Loss: 2.745 Epoch 0 Batch 20/538 - Train Accuracy: 0.416, Validation Accuracy: 0.450, Loss: 2.585 Epoch 0 Batch 21/538 - Train Accuracy: 0.355, Validation Accuracy: 0.454, Loss: 2.759 Epoch 0 Batch 22/538 - Train Accuracy: 0.416, Validation Accuracy: 0.470, Loss: 2.596 Epoch 0 Batch 23/538 - Train Accuracy: 0.417, Validation Accuracy: 0.474, Loss: 2.590 Epoch 0 Batch 24/538 - Train Accuracy: 0.415, Validation Accuracy: 0.464, Loss: 2.571 Epoch 0 Batch 25/538 - Train Accuracy: 0.404, Validation Accuracy: 0.465, Loss: 2.550 Epoch 0 
Batch 26/538 - Train Accuracy: 0.414, Validation Accuracy: 0.474, Loss: 2.537 Epoch 0 Batch 27/538 - Train Accuracy: 0.426, Validation Accuracy: 0.473, Loss: 2.469 Epoch 0 Batch 28/538 - Train Accuracy: 0.470, Validation Accuracy: 0.473, Loss: 2.277 Epoch 0 Batch 29/538 - Train Accuracy: 0.439, Validation Accuracy: 0.480, Loss: 2.417 Epoch 0 Batch 30/538 - Train Accuracy: 0.414, Validation Accuracy: 0.471, Loss: 2.462 Epoch 0 Batch 31/538 - Train Accuracy: 0.436, Validation Accuracy: 0.475, Loss: 2.353 Epoch 0 Batch 32/538 - Train Accuracy: 0.436, Validation Accuracy: 0.485, Loss: 2.438 Epoch 0 Batch 33/538 - Train Accuracy: 0.451, Validation Accuracy: 0.490, Loss: 2.345 Epoch 0 Batch 34/538 - Train Accuracy: 0.429, Validation Accuracy: 0.482, Loss: 2.379 Epoch 0 Batch 35/538 - Train Accuracy: 0.435, Validation Accuracy: 0.497, Loss: 2.400 Epoch 0 Batch 36/538 - Train Accuracy: 0.440, Validation Accuracy: 0.482, Loss: 2.256 Epoch 0 Batch 37/538 - Train Accuracy: 0.433, Validation Accuracy: 0.489, Loss: 2.312 Epoch 0 Batch 38/538 - Train Accuracy: 0.428, Validation Accuracy: 0.501, Loss: 2.387 Epoch 0 Batch 39/538 - Train Accuracy: 0.444, Validation Accuracy: 0.492, Loss: 2.329 Epoch 0 Batch 40/538 - Train Accuracy: 0.489, Validation Accuracy: 0.492, Loss: 2.115 Epoch 0 Batch 41/538 - Train Accuracy: 0.466, Validation Accuracy: 0.513, Loss: 2.305 Epoch 0 Batch 42/538 - Train Accuracy: 0.430, Validation Accuracy: 0.476, Loss: 2.218 Epoch 0 Batch 43/538 - Train Accuracy: 0.440, Validation Accuracy: 0.498, Loss: 2.294 Epoch 0 Batch 44/538 - Train Accuracy: 0.455, Validation Accuracy: 0.508, Loss: 2.327 Epoch 0 Batch 45/538 - Train Accuracy: 0.454, Validation Accuracy: 0.477, Loss: 2.117 Epoch 0 Batch 46/538 - Train Accuracy: 0.453, Validation Accuracy: 0.508, Loss: 2.267 Epoch 0 Batch 47/538 - Train Accuracy: 0.488, Validation Accuracy: 0.518, Loss: 2.135 Epoch 0 Batch 48/538 - Train Accuracy: 0.478, Validation Accuracy: 0.488, Loss: 2.076 Epoch 0 Batch 49/538 - Train 
Accuracy: 0.466, Validation Accuracy: 0.525, Loss: 2.270 Epoch 0 Batch 50/538 - Train Accuracy: 0.482, Validation Accuracy: 0.521, Loss: 2.098 Epoch 0 Batch 51/538 - Train Accuracy: 0.380, Validation Accuracy: 0.489, Loss: 2.335 Epoch 0 Batch 52/538 - Train Accuracy: 0.484, Validation Accuracy: 0.518, Loss: 2.149 Epoch 0 Batch 53/538 - Train Accuracy: 0.520, Validation Accuracy: 0.520, Loss: 1.942 Epoch 0 Batch 54/538 - Train Accuracy: 0.499, Validation Accuracy: 0.520, Loss: 2.073 Epoch 0 Batch 55/538 - Train Accuracy: 0.442, Validation Accuracy: 0.499, Loss: 2.119 Epoch 0 Batch 56/538 - Train Accuracy: 0.485, Validation Accuracy: 0.522, Loss: 2.045 Epoch 0 Batch 57/538 - Train Accuracy: 0.459, Validation Accuracy: 0.528, Loss: 2.135 Epoch 0 Batch 58/538 - Train Accuracy: 0.436, Validation Accuracy: 0.516, Loss: 2.125 Epoch 0 Batch 59/538 - Train Accuracy: 0.457, Validation Accuracy: 0.516, Loss: 2.096 Epoch 0 Batch 60/538 - Train Accuracy: 0.477, Validation Accuracy: 0.527, Loss: 2.038 Epoch 0 Batch 61/538 - Train Accuracy: 0.457, Validation Accuracy: 0.502, Loss: 1.986 Epoch 0 Batch 62/538 - Train Accuracy: 0.474, Validation Accuracy: 0.514, Loss: 1.996 Epoch 0 Batch 63/538 - Train Accuracy: 0.501, Validation Accuracy: 0.519, Loss: 1.883 Epoch 0 Batch 64/538 - Train Accuracy: 0.482, Validation Accuracy: 0.504, Loss: 1.885 Epoch 0 Batch 65/538 - Train Accuracy: 0.456, Validation Accuracy: 0.530, Loss: 2.057 Epoch 0 Batch 66/538 - Train Accuracy: 0.483, Validation Accuracy: 0.515, Loss: 1.815 Epoch 0 Batch 67/538 - Train Accuracy: 0.493, Validation Accuracy: 0.530, Loss: 1.906 Epoch 0 Batch 68/538 - Train Accuracy: 0.515, Validation Accuracy: 0.538, Loss: 1.782 Epoch 0 Batch 69/538 - Train Accuracy: 0.418, Validation Accuracy: 0.479, Loss: 1.891 Epoch 0 Batch 70/538 - Train Accuracy: 0.496, Validation Accuracy: 0.528, Loss: 1.831 Epoch 0 Batch 71/538 - Train Accuracy: 0.489, Validation Accuracy: 0.534, Loss: 1.829 Epoch 0 Batch 72/538 - Train Accuracy: 0.476, 
Validation Accuracy: 0.494, Loss: 1.748
[Per-batch training log condensed: across Epoch 0 (batches 73 to 536 of 538) train accuracy rose from roughly 0.45 to 0.85, validation accuracy from roughly 0.49 to 0.81, and loss fell from about 1.85 to 0.21; the improvement continued into the first batches of Epoch 1, reaching about 0.85 train and 0.82 validation accuracy.]
Epoch 1 Batch 17/538 - Train Accuracy: 0.848,
Validation Accuracy: 0.830, Loss: 0.201 Epoch 1 Batch 18/538 - Train Accuracy: 0.853, Validation Accuracy: 0.819, Loss: 0.212 Epoch 1 Batch 19/538 - Train Accuracy: 0.842, Validation Accuracy: 0.819, Loss: 0.215 Epoch 1 Batch 20/538 - Train Accuracy: 0.847, Validation Accuracy: 0.830, Loss: 0.216 Epoch 1 Batch 21/538 - Train Accuracy: 0.871, Validation Accuracy: 0.827, Loss: 0.196 Epoch 1 Batch 22/538 - Train Accuracy: 0.821, Validation Accuracy: 0.827, Loss: 0.200 Epoch 1 Batch 23/538 - Train Accuracy: 0.833, Validation Accuracy: 0.822, Loss: 0.215 Epoch 1 Batch 24/538 - Train Accuracy: 0.854, Validation Accuracy: 0.840, Loss: 0.190 Epoch 1 Batch 25/538 - Train Accuracy: 0.838, Validation Accuracy: 0.836, Loss: 0.195 Epoch 1 Batch 26/538 - Train Accuracy: 0.843, Validation Accuracy: 0.839, Loss: 0.208 Epoch 1 Batch 27/538 - Train Accuracy: 0.858, Validation Accuracy: 0.824, Loss: 0.186 Epoch 1 Batch 28/538 - Train Accuracy: 0.850, Validation Accuracy: 0.831, Loss: 0.179 Epoch 1 Batch 29/538 - Train Accuracy: 0.869, Validation Accuracy: 0.830, Loss: 0.182 Epoch 1 Batch 30/538 - Train Accuracy: 0.841, Validation Accuracy: 0.818, Loss: 0.203 Epoch 1 Batch 31/538 - Train Accuracy: 0.864, Validation Accuracy: 0.809, Loss: 0.172 Epoch 1 Batch 32/538 - Train Accuracy: 0.874, Validation Accuracy: 0.823, Loss: 0.167 Epoch 1 Batch 33/538 - Train Accuracy: 0.864, Validation Accuracy: 0.837, Loss: 0.181 Epoch 1 Batch 34/538 - Train Accuracy: 0.870, Validation Accuracy: 0.841, Loss: 0.192 Epoch 1 Batch 35/538 - Train Accuracy: 0.862, Validation Accuracy: 0.856, Loss: 0.178 Epoch 1 Batch 36/538 - Train Accuracy: 0.877, Validation Accuracy: 0.841, Loss: 0.171 Epoch 1 Batch 37/538 - Train Accuracy: 0.881, Validation Accuracy: 0.848, Loss: 0.182 Epoch 1 Batch 38/538 - Train Accuracy: 0.855, Validation Accuracy: 0.854, Loss: 0.183 Epoch 1 Batch 39/538 - Train Accuracy: 0.872, Validation Accuracy: 0.851, Loss: 0.181 Epoch 1 Batch 40/538 - Train Accuracy: 0.883, Validation Accuracy: 
0.843, Loss: 0.154 Epoch 1 Batch 41/538 - Train Accuracy: 0.867, Validation Accuracy: 0.836, Loss: 0.171 Epoch 1 Batch 42/538 - Train Accuracy: 0.871, Validation Accuracy: 0.846, Loss: 0.178 Epoch 1 Batch 43/538 - Train Accuracy: 0.848, Validation Accuracy: 0.841, Loss: 0.196 Epoch 1 Batch 44/538 - Train Accuracy: 0.848, Validation Accuracy: 0.837, Loss: 0.186 Epoch 1 Batch 45/538 - Train Accuracy: 0.879, Validation Accuracy: 0.833, Loss: 0.165 Epoch 1 Batch 46/538 - Train Accuracy: 0.885, Validation Accuracy: 0.844, Loss: 0.167 Epoch 1 Batch 47/538 - Train Accuracy: 0.854, Validation Accuracy: 0.855, Loss: 0.174 Epoch 1 Batch 48/538 - Train Accuracy: 0.862, Validation Accuracy: 0.857, Loss: 0.165 Epoch 1 Batch 49/538 - Train Accuracy: 0.883, Validation Accuracy: 0.846, Loss: 0.176 Epoch 1 Batch 50/538 - Train Accuracy: 0.877, Validation Accuracy: 0.852, Loss: 0.159 Epoch 1 Batch 51/538 - Train Accuracy: 0.869, Validation Accuracy: 0.839, Loss: 0.181 Epoch 1 Batch 52/538 - Train Accuracy: 0.865, Validation Accuracy: 0.836, Loss: 0.177 Epoch 1 Batch 53/538 - Train Accuracy: 0.867, Validation Accuracy: 0.852, Loss: 0.156 Epoch 1 Batch 54/538 - Train Accuracy: 0.887, Validation Accuracy: 0.859, Loss: 0.158 Epoch 1 Batch 55/538 - Train Accuracy: 0.872, Validation Accuracy: 0.856, Loss: 0.168 Epoch 1 Batch 56/538 - Train Accuracy: 0.874, Validation Accuracy: 0.857, Loss: 0.156 Epoch 1 Batch 57/538 - Train Accuracy: 0.847, Validation Accuracy: 0.852, Loss: 0.177 Epoch 1 Batch 58/538 - Train Accuracy: 0.861, Validation Accuracy: 0.849, Loss: 0.163 Epoch 1 Batch 59/538 - Train Accuracy: 0.873, Validation Accuracy: 0.849, Loss: 0.168 Epoch 1 Batch 60/538 - Train Accuracy: 0.904, Validation Accuracy: 0.847, Loss: 0.162 Epoch 1 Batch 61/538 - Train Accuracy: 0.889, Validation Accuracy: 0.854, Loss: 0.157 Epoch 1 Batch 62/538 - Train Accuracy: 0.895, Validation Accuracy: 0.854, Loss: 0.150 Epoch 1 Batch 63/538 - Train Accuracy: 0.886, Validation Accuracy: 0.847, Loss: 0.146 
Epoch 1 Batch 64/538 - Train Accuracy: 0.873, Validation Accuracy: 0.843, Loss: 0.149 Epoch 1 Batch 65/538 - Train Accuracy: 0.860, Validation Accuracy: 0.841, Loss: 0.156 Epoch 1 Batch 66/538 - Train Accuracy: 0.896, Validation Accuracy: 0.843, Loss: 0.136 Epoch 1 Batch 67/538 - Train Accuracy: 0.869, Validation Accuracy: 0.839, Loss: 0.148 Epoch 1 Batch 68/538 - Train Accuracy: 0.884, Validation Accuracy: 0.842, Loss: 0.135 Epoch 1 Batch 69/538 - Train Accuracy: 0.881, Validation Accuracy: 0.851, Loss: 0.147 Epoch 1 Batch 70/538 - Train Accuracy: 0.892, Validation Accuracy: 0.856, Loss: 0.145 Epoch 1 Batch 71/538 - Train Accuracy: 0.873, Validation Accuracy: 0.856, Loss: 0.155 Epoch 1 Batch 72/538 - Train Accuracy: 0.904, Validation Accuracy: 0.853, Loss: 0.148 Epoch 1 Batch 73/538 - Train Accuracy: 0.859, Validation Accuracy: 0.854, Loss: 0.145 Epoch 1 Batch 74/538 - Train Accuracy: 0.870, Validation Accuracy: 0.870, Loss: 0.145 Epoch 1 Batch 75/538 - Train Accuracy: 0.885, Validation Accuracy: 0.865, Loss: 0.139 Epoch 1 Batch 76/538 - Train Accuracy: 0.891, Validation Accuracy: 0.868, Loss: 0.147 Epoch 1 Batch 77/538 - Train Accuracy: 0.885, Validation Accuracy: 0.864, Loss: 0.138 Epoch 1 Batch 78/538 - Train Accuracy: 0.862, Validation Accuracy: 0.873, Loss: 0.151 Epoch 1 Batch 79/538 - Train Accuracy: 0.905, Validation Accuracy: 0.869, Loss: 0.132 Epoch 1 Batch 80/538 - Train Accuracy: 0.888, Validation Accuracy: 0.864, Loss: 0.146 Epoch 1 Batch 81/538 - Train Accuracy: 0.886, Validation Accuracy: 0.869, Loss: 0.139 Epoch 1 Batch 82/538 - Train Accuracy: 0.870, Validation Accuracy: 0.872, Loss: 0.141 Epoch 1 Batch 83/538 - Train Accuracy: 0.896, Validation Accuracy: 0.876, Loss: 0.141 Epoch 1 Batch 84/538 - Train Accuracy: 0.877, Validation Accuracy: 0.869, Loss: 0.139 Epoch 1 Batch 85/538 - Train Accuracy: 0.912, Validation Accuracy: 0.869, Loss: 0.122 Epoch 1 Batch 86/538 - Train Accuracy: 0.887, Validation Accuracy: 0.857, Loss: 0.135 Epoch 1 Batch 87/538 
- Train Accuracy: 0.888, Validation Accuracy: 0.876, Loss: 0.136 Epoch 1 Batch 88/538 - Train Accuracy: 0.886, Validation Accuracy: 0.865, Loss: 0.139 Epoch 1 Batch 89/538 - Train Accuracy: 0.879, Validation Accuracy: 0.864, Loss: 0.131 Epoch 1 Batch 90/538 - Train Accuracy: 0.877, Validation Accuracy: 0.861, Loss: 0.144 Epoch 1 Batch 91/538 - Train Accuracy: 0.882, Validation Accuracy: 0.867, Loss: 0.130 Epoch 1 Batch 92/538 - Train Accuracy: 0.893, Validation Accuracy: 0.871, Loss: 0.134 Epoch 1 Batch 93/538 - Train Accuracy: 0.885, Validation Accuracy: 0.875, Loss: 0.128 Epoch 1 Batch 94/538 - Train Accuracy: 0.902, Validation Accuracy: 0.875, Loss: 0.125 Epoch 1 Batch 95/538 - Train Accuracy: 0.875, Validation Accuracy: 0.874, Loss: 0.124 Epoch 1 Batch 96/538 - Train Accuracy: 0.908, Validation Accuracy: 0.879, Loss: 0.111 Epoch 1 Batch 97/538 - Train Accuracy: 0.894, Validation Accuracy: 0.882, Loss: 0.118 Epoch 1 Batch 98/538 - Train Accuracy: 0.898, Validation Accuracy: 0.874, Loss: 0.131 Epoch 1 Batch 99/538 - Train Accuracy: 0.901, Validation Accuracy: 0.867, Loss: 0.125 Epoch 1 Batch 100/538 - Train Accuracy: 0.889, Validation Accuracy: 0.877, Loss: 0.120 Epoch 1 Batch 101/538 - Train Accuracy: 0.864, Validation Accuracy: 0.874, Loss: 0.136 Epoch 1 Batch 102/538 - Train Accuracy: 0.885, Validation Accuracy: 0.870, Loss: 0.131 Epoch 1 Batch 103/538 - Train Accuracy: 0.906, Validation Accuracy: 0.871, Loss: 0.118 Epoch 1 Batch 104/538 - Train Accuracy: 0.878, Validation Accuracy: 0.862, Loss: 0.117 Epoch 1 Batch 105/538 - Train Accuracy: 0.893, Validation Accuracy: 0.872, Loss: 0.113 Epoch 1 Batch 106/538 - Train Accuracy: 0.875, Validation Accuracy: 0.861, Loss: 0.112 Epoch 1 Batch 107/538 - Train Accuracy: 0.868, Validation Accuracy: 0.857, Loss: 0.131 Epoch 1 Batch 108/538 - Train Accuracy: 0.894, Validation Accuracy: 0.868, Loss: 0.122 Epoch 1 Batch 109/538 - Train Accuracy: 0.922, Validation Accuracy: 0.865, Loss: 0.108 Epoch 1 Batch 110/538 - Train 
Accuracy: 0.899, Validation Accuracy: 0.878, Loss: 0.120 Epoch 1 Batch 111/538 - Train Accuracy: 0.891, Validation Accuracy: 0.873, Loss: 0.115 Epoch 1 Batch 112/538 - Train Accuracy: 0.891, Validation Accuracy: 0.873, Loss: 0.122 Epoch 1 Batch 113/538 - Train Accuracy: 0.872, Validation Accuracy: 0.876, Loss: 0.121 Epoch 1 Batch 114/538 - Train Accuracy: 0.892, Validation Accuracy: 0.865, Loss: 0.116 Epoch 1 Batch 115/538 - Train Accuracy: 0.890, Validation Accuracy: 0.866, Loss: 0.112 Epoch 1 Batch 116/538 - Train Accuracy: 0.884, Validation Accuracy: 0.875, Loss: 0.128 Epoch 1 Batch 117/538 - Train Accuracy: 0.883, Validation Accuracy: 0.869, Loss: 0.118 Epoch 1 Batch 118/538 - Train Accuracy: 0.909, Validation Accuracy: 0.873, Loss: 0.103 Epoch 1 Batch 119/538 - Train Accuracy: 0.928, Validation Accuracy: 0.867, Loss: 0.094 Epoch 1 Batch 120/538 - Train Accuracy: 0.917, Validation Accuracy: 0.868, Loss: 0.097 Epoch 1 Batch 121/538 - Train Accuracy: 0.898, Validation Accuracy: 0.862, Loss: 0.108 Epoch 1 Batch 122/538 - Train Accuracy: 0.902, Validation Accuracy: 0.871, Loss: 0.104 Epoch 1 Batch 123/538 - Train Accuracy: 0.913, Validation Accuracy: 0.883, Loss: 0.105 Epoch 1 Batch 124/538 - Train Accuracy: 0.917, Validation Accuracy: 0.881, Loss: 0.097 Epoch 1 Batch 125/538 - Train Accuracy: 0.898, Validation Accuracy: 0.884, Loss: 0.111 Epoch 1 Batch 126/538 - Train Accuracy: 0.901, Validation Accuracy: 0.886, Loss: 0.103 Epoch 1 Batch 127/538 - Train Accuracy: 0.897, Validation Accuracy: 0.887, Loss: 0.116 Epoch 1 Batch 128/538 - Train Accuracy: 0.915, Validation Accuracy: 0.890, Loss: 0.106 Epoch 1 Batch 129/538 - Train Accuracy: 0.906, Validation Accuracy: 0.900, Loss: 0.097 Epoch 1 Batch 130/538 - Train Accuracy: 0.916, Validation Accuracy: 0.900, Loss: 0.098 Epoch 1 Batch 131/538 - Train Accuracy: 0.925, Validation Accuracy: 0.897, Loss: 0.100 Epoch 1 Batch 132/538 - Train Accuracy: 0.883, Validation Accuracy: 0.900, Loss: 0.103 Epoch 1 Batch 133/538 - 
Train Accuracy: 0.908, Validation Accuracy: 0.898, Loss: 0.097 Epoch 1 Batch 134/538 - Train Accuracy: 0.898, Validation Accuracy: 0.894, Loss: 0.119 Epoch 1 Batch 135/538 - Train Accuracy: 0.910, Validation Accuracy: 0.893, Loss: 0.116 Epoch 1 Batch 136/538 - Train Accuracy: 0.905, Validation Accuracy: 0.876, Loss: 0.098 Epoch 1 Batch 137/538 - Train Accuracy: 0.886, Validation Accuracy: 0.877, Loss: 0.115 Epoch 1 Batch 138/538 - Train Accuracy: 0.906, Validation Accuracy: 0.883, Loss: 0.105 Epoch 1 Batch 139/538 - Train Accuracy: 0.899, Validation Accuracy: 0.888, Loss: 0.113 Epoch 1 Batch 140/538 - Train Accuracy: 0.895, Validation Accuracy: 0.896, Loss: 0.114 Epoch 1 Batch 141/538 - Train Accuracy: 0.905, Validation Accuracy: 0.886, Loss: 0.110 Epoch 1 Batch 142/538 - Train Accuracy: 0.914, Validation Accuracy: 0.888, Loss: 0.103 Epoch 1 Batch 143/538 - Train Accuracy: 0.906, Validation Accuracy: 0.890, Loss: 0.102 Epoch 1 Batch 144/538 - Train Accuracy: 0.908, Validation Accuracy: 0.891, Loss: 0.114 Epoch 1 Batch 145/538 - Train Accuracy: 0.885, Validation Accuracy: 0.889, Loss: 0.113 Epoch 1 Batch 146/538 - Train Accuracy: 0.889, Validation Accuracy: 0.897, Loss: 0.098 Epoch 1 Batch 147/538 - Train Accuracy: 0.904, Validation Accuracy: 0.895, Loss: 0.104 Epoch 1 Batch 148/538 - Train Accuracy: 0.889, Validation Accuracy: 0.899, Loss: 0.111 Epoch 1 Batch 149/538 - Train Accuracy: 0.930, Validation Accuracy: 0.892, Loss: 0.094 Epoch 1 Batch 150/538 - Train Accuracy: 0.913, Validation Accuracy: 0.893, Loss: 0.092 Epoch 1 Batch 151/538 - Train Accuracy: 0.922, Validation Accuracy: 0.885, Loss: 0.097 Epoch 1 Batch 152/538 - Train Accuracy: 0.912, Validation Accuracy: 0.882, Loss: 0.104 Epoch 1 Batch 153/538 - Train Accuracy: 0.888, Validation Accuracy: 0.875, Loss: 0.103 Epoch 1 Batch 154/538 - Train Accuracy: 0.914, Validation Accuracy: 0.875, Loss: 0.090 Epoch 1 Batch 155/538 - Train Accuracy: 0.896, Validation Accuracy: 0.892, Loss: 0.099 Epoch 1 Batch 156/538 
- Train Accuracy: 0.912, Validation Accuracy: 0.892, Loss: 0.086 Epoch 1 Batch 157/538 - Train Accuracy: 0.922, Validation Accuracy: 0.891, Loss: 0.089 Epoch 1 Batch 158/538 - Train Accuracy: 0.910, Validation Accuracy: 0.891, Loss: 0.101 Epoch 1 Batch 159/538 - Train Accuracy: 0.899, Validation Accuracy: 0.895, Loss: 0.110 Epoch 1 Batch 160/538 - Train Accuracy: 0.889, Validation Accuracy: 0.895, Loss: 0.087 Epoch 1 Batch 161/538 - Train Accuracy: 0.919, Validation Accuracy: 0.902, Loss: 0.092 Epoch 1 Batch 162/538 - Train Accuracy: 0.896, Validation Accuracy: 0.895, Loss: 0.089 Epoch 1 Batch 163/538 - Train Accuracy: 0.911, Validation Accuracy: 0.887, Loss: 0.103 Epoch 1 Batch 164/538 - Train Accuracy: 0.889, Validation Accuracy: 0.886, Loss: 0.101 Epoch 1 Batch 165/538 - Train Accuracy: 0.903, Validation Accuracy: 0.890, Loss: 0.084 Epoch 1 Batch 166/538 - Train Accuracy: 0.931, Validation Accuracy: 0.887, Loss: 0.085 Epoch 1 Batch 167/538 - Train Accuracy: 0.920, Validation Accuracy: 0.887, Loss: 0.088 Epoch 1 Batch 168/538 - Train Accuracy: 0.879, Validation Accuracy: 0.890, Loss: 0.105 Epoch 1 Batch 169/538 - Train Accuracy: 0.926, Validation Accuracy: 0.889, Loss: 0.082 Epoch 1 Batch 170/538 - Train Accuracy: 0.908, Validation Accuracy: 0.898, Loss: 0.095 Epoch 1 Batch 171/538 - Train Accuracy: 0.922, Validation Accuracy: 0.874, Loss: 0.088 Epoch 1 Batch 172/538 - Train Accuracy: 0.898, Validation Accuracy: 0.882, Loss: 0.084 Epoch 1 Batch 173/538 - Train Accuracy: 0.932, Validation Accuracy: 0.876, Loss: 0.082 Epoch 1 Batch 174/538 - Train Accuracy: 0.911, Validation Accuracy: 0.884, Loss: 0.090 Epoch 1 Batch 175/538 - Train Accuracy: 0.896, Validation Accuracy: 0.890, Loss: 0.085 Epoch 1 Batch 176/538 - Train Accuracy: 0.885, Validation Accuracy: 0.891, Loss: 0.098 Epoch 1 Batch 177/538 - Train Accuracy: 0.904, Validation Accuracy: 0.892, Loss: 0.089 Epoch 1 Batch 178/538 - Train Accuracy: 0.887, Validation Accuracy: 0.892, Loss: 0.092 Epoch 1 Batch 
179/538 - Train Accuracy: 0.918, Validation Accuracy: 0.894, Loss: 0.087 Epoch 1 Batch 180/538 - Train Accuracy: 0.913, Validation Accuracy: 0.892, Loss: 0.085 Epoch 1 Batch 181/538 - Train Accuracy: 0.899, Validation Accuracy: 0.895, Loss: 0.095 Epoch 1 Batch 182/538 - Train Accuracy: 0.930, Validation Accuracy: 0.895, Loss: 0.076 Epoch 1 Batch 183/538 - Train Accuracy: 0.929, Validation Accuracy: 0.898, Loss: 0.078 Epoch 1 Batch 184/538 - Train Accuracy: 0.934, Validation Accuracy: 0.900, Loss: 0.083 Epoch 1 Batch 185/538 - Train Accuracy: 0.934, Validation Accuracy: 0.902, Loss: 0.077 Epoch 1 Batch 186/538 - Train Accuracy: 0.934, Validation Accuracy: 0.895, Loss: 0.085 Epoch 1 Batch 187/538 - Train Accuracy: 0.915, Validation Accuracy: 0.898, Loss: 0.088 Epoch 1 Batch 188/538 - Train Accuracy: 0.921, Validation Accuracy: 0.898, Loss: 0.080 Epoch 1 Batch 189/538 - Train Accuracy: 0.930, Validation Accuracy: 0.904, Loss: 0.083 Epoch 1 Batch 190/538 - Train Accuracy: 0.895, Validation Accuracy: 0.912, Loss: 0.105 Epoch 1 Batch 191/538 - Train Accuracy: 0.923, Validation Accuracy: 0.914, Loss: 0.077 Epoch 1 Batch 192/538 - Train Accuracy: 0.909, Validation Accuracy: 0.918, Loss: 0.078 Epoch 1 Batch 193/538 - Train Accuracy: 0.914, Validation Accuracy: 0.912, Loss: 0.085 Epoch 1 Batch 194/538 - Train Accuracy: 0.879, Validation Accuracy: 0.906, Loss: 0.088 Epoch 1 Batch 195/538 - Train Accuracy: 0.917, Validation Accuracy: 0.909, Loss: 0.089 Epoch 1 Batch 196/538 - Train Accuracy: 0.925, Validation Accuracy: 0.901, Loss: 0.076 Epoch 1 Batch 197/538 - Train Accuracy: 0.930, Validation Accuracy: 0.903, Loss: 0.083 Epoch 1 Batch 198/538 - Train Accuracy: 0.918, Validation Accuracy: 0.911, Loss: 0.080 Epoch 1 Batch 199/538 - Train Accuracy: 0.906, Validation Accuracy: 0.901, Loss: 0.081 Epoch 1 Batch 200/538 - Train Accuracy: 0.911, Validation Accuracy: 0.893, Loss: 0.076 Epoch 1 Batch 201/538 - Train Accuracy: 0.900, Validation Accuracy: 0.892, Loss: 0.084 Epoch 1 
Batch 202/538 - Train Accuracy: 0.925, Validation Accuracy: 0.887, Loss: 0.081 Epoch 1 Batch 203/538 - Train Accuracy: 0.905, Validation Accuracy: 0.888, Loss: 0.093 Epoch 1 Batch 204/538 - Train Accuracy: 0.903, Validation Accuracy: 0.893, Loss: 0.083 Epoch 1 Batch 205/538 - Train Accuracy: 0.925, Validation Accuracy: 0.899, Loss: 0.076 Epoch 1 Batch 206/538 - Train Accuracy: 0.924, Validation Accuracy: 0.905, Loss: 0.075 Epoch 1 Batch 207/538 - Train Accuracy: 0.936, Validation Accuracy: 0.906, Loss: 0.081 Epoch 1 Batch 208/538 - Train Accuracy: 0.901, Validation Accuracy: 0.900, Loss: 0.095 Epoch 1 Batch 209/538 - Train Accuracy: 0.934, Validation Accuracy: 0.890, Loss: 0.072 Epoch 1 Batch 210/538 - Train Accuracy: 0.908, Validation Accuracy: 0.892, Loss: 0.077 Epoch 1 Batch 211/538 - Train Accuracy: 0.910, Validation Accuracy: 0.894, Loss: 0.083 Epoch 1 Batch 212/538 - Train Accuracy: 0.908, Validation Accuracy: 0.905, Loss: 0.081 Epoch 1 Batch 213/538 - Train Accuracy: 0.909, Validation Accuracy: 0.900, Loss: 0.073 Epoch 1 Batch 214/538 - Train Accuracy: 0.913, Validation Accuracy: 0.894, Loss: 0.073 Epoch 1 Batch 215/538 - Train Accuracy: 0.909, Validation Accuracy: 0.890, Loss: 0.077 Epoch 1 Batch 216/538 - Train Accuracy: 0.938, Validation Accuracy: 0.879, Loss: 0.078 Epoch 1 Batch 217/538 - Train Accuracy: 0.916, Validation Accuracy: 0.886, Loss: 0.081 Epoch 1 Batch 218/538 - Train Accuracy: 0.919, Validation Accuracy: 0.888, Loss: 0.071 Epoch 1 Batch 219/538 - Train Accuracy: 0.884, Validation Accuracy: 0.896, Loss: 0.093 Epoch 1 Batch 220/538 - Train Accuracy: 0.898, Validation Accuracy: 0.896, Loss: 0.079 Epoch 1 Batch 221/538 - Train Accuracy: 0.925, Validation Accuracy: 0.903, Loss: 0.073 Epoch 1 Batch 222/538 - Train Accuracy: 0.897, Validation Accuracy: 0.900, Loss: 0.070 Epoch 1 Batch 223/538 - Train Accuracy: 0.916, Validation Accuracy: 0.885, Loss: 0.089 Epoch 1 Batch 224/538 - Train Accuracy: 0.912, Validation Accuracy: 0.888, Loss: 0.080 Epoch 
1 Batch 225/538 - Train Accuracy: 0.920, Validation Accuracy: 0.895, Loss: 0.074 Epoch 1 Batch 226/538 - Train Accuracy: 0.925, Validation Accuracy: 0.901, Loss: 0.077 Epoch 1 Batch 227/538 - Train Accuracy: 0.926, Validation Accuracy: 0.914, Loss: 0.080 Epoch 1 Batch 228/538 - Train Accuracy: 0.922, Validation Accuracy: 0.917, Loss: 0.071 Epoch 1 Batch 229/538 - Train Accuracy: 0.922, Validation Accuracy: 0.911, Loss: 0.075 Epoch 1 Batch 230/538 - Train Accuracy: 0.902, Validation Accuracy: 0.906, Loss: 0.075 Epoch 1 Batch 231/538 - Train Accuracy: 0.919, Validation Accuracy: 0.904, Loss: 0.075 Epoch 1 Batch 232/538 - Train Accuracy: 0.917, Validation Accuracy: 0.896, Loss: 0.072 Epoch 1 Batch 233/538 - Train Accuracy: 0.914, Validation Accuracy: 0.887, Loss: 0.082 Epoch 1 Batch 234/538 - Train Accuracy: 0.930, Validation Accuracy: 0.901, Loss: 0.076 Epoch 1 Batch 235/538 - Train Accuracy: 0.938, Validation Accuracy: 0.898, Loss: 0.063 Epoch 1 Batch 236/538 - Train Accuracy: 0.924, Validation Accuracy: 0.899, Loss: 0.074 Epoch 1 Batch 237/538 - Train Accuracy: 0.924, Validation Accuracy: 0.910, Loss: 0.065 Epoch 1 Batch 238/538 - Train Accuracy: 0.933, Validation Accuracy: 0.912, Loss: 0.066 Epoch 1 Batch 239/538 - Train Accuracy: 0.904, Validation Accuracy: 0.913, Loss: 0.073 Epoch 1 Batch 240/538 - Train Accuracy: 0.920, Validation Accuracy: 0.914, Loss: 0.078 Epoch 1 Batch 241/538 - Train Accuracy: 0.918, Validation Accuracy: 0.908, Loss: 0.076 Epoch 1 Batch 242/538 - Train Accuracy: 0.930, Validation Accuracy: 0.899, Loss: 0.070 Epoch 1 Batch 243/538 - Train Accuracy: 0.926, Validation Accuracy: 0.900, Loss: 0.068 Epoch 1 Batch 244/538 - Train Accuracy: 0.918, Validation Accuracy: 0.901, Loss: 0.066 Epoch 1 Batch 245/538 - Train Accuracy: 0.922, Validation Accuracy: 0.902, Loss: 0.078 Epoch 1 Batch 246/538 - Train Accuracy: 0.914, Validation Accuracy: 0.904, Loss: 0.060 Epoch 1 Batch 247/538 - Train Accuracy: 0.914, Validation Accuracy: 0.897, Loss: 0.068 
Epoch 1 Batch 248/538 - Train Accuracy: 0.926, Validation Accuracy: 0.899, Loss: 0.064 Epoch 1 Batch 249/538 - Train Accuracy: 0.915, Validation Accuracy: 0.906, Loss: 0.060 Epoch 1 Batch 250/538 - Train Accuracy: 0.920, Validation Accuracy: 0.909, Loss: 0.067 Epoch 1 Batch 251/538 - Train Accuracy: 0.935, Validation Accuracy: 0.912, Loss: 0.068 Epoch 1 Batch 252/538 - Train Accuracy: 0.936, Validation Accuracy: 0.910, Loss: 0.063 Epoch 1 Batch 253/538 - Train Accuracy: 0.884, Validation Accuracy: 0.896, Loss: 0.067 Epoch 1 Batch 254/538 - Train Accuracy: 0.902, Validation Accuracy: 0.906, Loss: 0.074 Epoch 1 Batch 255/538 - Train Accuracy: 0.920, Validation Accuracy: 0.905, Loss: 0.064 Epoch 1 Batch 256/538 - Train Accuracy: 0.897, Validation Accuracy: 0.909, Loss: 0.074 Epoch 1 Batch 257/538 - Train Accuracy: 0.925, Validation Accuracy: 0.910, Loss: 0.073 Epoch 1 Batch 258/538 - Train Accuracy: 0.908, Validation Accuracy: 0.918, Loss: 0.067 Epoch 1 Batch 259/538 - Train Accuracy: 0.942, Validation Accuracy: 0.922, Loss: 0.063 Epoch 1 Batch 260/538 - Train Accuracy: 0.894, Validation Accuracy: 0.919, Loss: 0.072 Epoch 1 Batch 261/538 - Train Accuracy: 0.924, Validation Accuracy: 0.915, Loss: 0.070 Epoch 1 Batch 262/538 - Train Accuracy: 0.935, Validation Accuracy: 0.908, Loss: 0.066 Epoch 1 Batch 263/538 - Train Accuracy: 0.910, Validation Accuracy: 0.911, Loss: 0.069 Epoch 1 Batch 264/538 - Train Accuracy: 0.922, Validation Accuracy: 0.914, Loss: 0.069 Epoch 1 Batch 265/538 - Train Accuracy: 0.923, Validation Accuracy: 0.912, Loss: 0.074 Epoch 1 Batch 266/538 - Train Accuracy: 0.894, Validation Accuracy: 0.905, Loss: 0.070 Epoch 1 Batch 267/538 - Train Accuracy: 0.898, Validation Accuracy: 0.908, Loss: 0.067 Epoch 1 Batch 268/538 - Train Accuracy: 0.930, Validation Accuracy: 0.907, Loss: 0.055 Epoch 1 Batch 269/538 - Train Accuracy: 0.920, Validation Accuracy: 0.905, Loss: 0.072 Epoch 1 Batch 270/538 - Train Accuracy: 0.930, Validation Accuracy: 0.904, Loss: 
0.064 Epoch 1 Batch 271/538 - Train Accuracy: 0.934, Validation Accuracy: 0.907, Loss: 0.058 Epoch 1 Batch 272/538 - Train Accuracy: 0.927, Validation Accuracy: 0.895, Loss: 0.073 Epoch 1 Batch 273/538 - Train Accuracy: 0.918, Validation Accuracy: 0.897, Loss: 0.071 Epoch 1 Batch 274/538 - Train Accuracy: 0.893, Validation Accuracy: 0.895, Loss: 0.073 Epoch 1 Batch 275/538 - Train Accuracy: 0.917, Validation Accuracy: 0.881, Loss: 0.071 Epoch 1 Batch 276/538 - Train Accuracy: 0.900, Validation Accuracy: 0.896, Loss: 0.073 Epoch 1 Batch 277/538 - Train Accuracy: 0.927, Validation Accuracy: 0.894, Loss: 0.059 Epoch 1 Batch 278/538 - Train Accuracy: 0.915, Validation Accuracy: 0.911, Loss: 0.060 Epoch 1 Batch 279/538 - Train Accuracy: 0.920, Validation Accuracy: 0.920, Loss: 0.063 Epoch 1 Batch 280/538 - Train Accuracy: 0.936, Validation Accuracy: 0.920, Loss: 0.055 Epoch 1 Batch 281/538 - Train Accuracy: 0.917, Validation Accuracy: 0.923, Loss: 0.070 Epoch 1 Batch 282/538 - Train Accuracy: 0.923, Validation Accuracy: 0.902, Loss: 0.071 Epoch 1 Batch 283/538 - Train Accuracy: 0.931, Validation Accuracy: 0.900, Loss: 0.069 Epoch 1 Batch 284/538 - Train Accuracy: 0.917, Validation Accuracy: 0.896, Loss: 0.070 Epoch 1 Batch 285/538 - Train Accuracy: 0.923, Validation Accuracy: 0.892, Loss: 0.057 Epoch 1 Batch 286/538 - Train Accuracy: 0.899, Validation Accuracy: 0.902, Loss: 0.075 Epoch 1 Batch 287/538 - Train Accuracy: 0.941, Validation Accuracy: 0.898, Loss: 0.056 Epoch 1 Batch 288/538 - Train Accuracy: 0.940, Validation Accuracy: 0.900, Loss: 0.062 Epoch 1 Batch 289/538 - Train Accuracy: 0.930, Validation Accuracy: 0.899, Loss: 0.058 Epoch 1 Batch 290/538 - Train Accuracy: 0.930, Validation Accuracy: 0.899, Loss: 0.060 Epoch 1 Batch 291/538 - Train Accuracy: 0.921, Validation Accuracy: 0.908, Loss: 0.065 Epoch 1 Batch 292/538 - Train Accuracy: 0.926, Validation Accuracy: 0.906, Loss: 0.057 Epoch 1 Batch 293/538 - Train Accuracy: 0.927, Validation Accuracy: 0.900, 
Loss: 0.065 Epoch 1 Batch 294/538 - Train Accuracy: 0.907, Validation Accuracy: 0.905, Loss: 0.067 Epoch 1 Batch 295/538 - Train Accuracy: 0.930, Validation Accuracy: 0.906, Loss: 0.063 Epoch 1 Batch 296/538 - Train Accuracy: 0.927, Validation Accuracy: 0.906, Loss: 0.072 Epoch 1 Batch 297/538 - Train Accuracy: 0.929, Validation Accuracy: 0.908, Loss: 0.066 Epoch 1 Batch 298/538 - Train Accuracy: 0.916, Validation Accuracy: 0.903, Loss: 0.064 Epoch 1 Batch 299/538 - Train Accuracy: 0.915, Validation Accuracy: 0.907, Loss: 0.076 Epoch 1 Batch 300/538 - Train Accuracy: 0.926, Validation Accuracy: 0.906, Loss: 0.069 Epoch 1 Batch 301/538 - Train Accuracy: 0.909, Validation Accuracy: 0.915, Loss: 0.067 Epoch 1 Batch 302/538 - Train Accuracy: 0.935, Validation Accuracy: 0.924, Loss: 0.061 Epoch 1 Batch 303/538 - Train Accuracy: 0.940, Validation Accuracy: 0.920, Loss: 0.067 Epoch 1 Batch 304/538 - Train Accuracy: 0.924, Validation Accuracy: 0.921, Loss: 0.065 Epoch 1 Batch 305/538 - Train Accuracy: 0.938, Validation Accuracy: 0.920, Loss: 0.058 Epoch 1 Batch 306/538 - Train Accuracy: 0.922, Validation Accuracy: 0.915, Loss: 0.066 Epoch 1 Batch 307/538 - Train Accuracy: 0.937, Validation Accuracy: 0.912, Loss: 0.059 Epoch 1 Batch 308/538 - Train Accuracy: 0.932, Validation Accuracy: 0.917, Loss: 0.059 Epoch 1 Batch 309/538 - Train Accuracy: 0.930, Validation Accuracy: 0.932, Loss: 0.057 Epoch 1 Batch 310/538 - Train Accuracy: 0.930, Validation Accuracy: 0.923, Loss: 0.068 Epoch 1 Batch 311/538 - Train Accuracy: 0.913, Validation Accuracy: 0.931, Loss: 0.067 Epoch 1 Batch 312/538 - Train Accuracy: 0.924, Validation Accuracy: 0.929, Loss: 0.055 Epoch 1 Batch 313/538 - Train Accuracy: 0.926, Validation Accuracy: 0.928, Loss: 0.063 Epoch 1 Batch 314/538 - Train Accuracy: 0.927, Validation Accuracy: 0.928, Loss: 0.060 Epoch 1 Batch 315/538 - Train Accuracy: 0.926, Validation Accuracy: 0.932, Loss: 0.054 Epoch 1 Batch 316/538 - Train Accuracy: 0.926, Validation Accuracy: 
0.932, Loss: 0.050 Epoch 1 Batch 317/538 - Train Accuracy: 0.938, Validation Accuracy: 0.923, Loss: 0.061 [... per-batch training log truncated: Epoch 1 Batches 318–536 and Epoch 2 Batches 0–261; over this span train accuracy climbs from roughly 0.92 to 0.96, validation accuracy from roughly 0.92 to 0.95, and loss falls from roughly 0.06 to 0.03 ...] Epoch 2 Batch 262/538 - Train Accuracy: 0.957,
Validation Accuracy: 0.949, Loss: 0.029 Epoch 2 Batch 263/538 - Train Accuracy: 0.938, Validation Accuracy: 0.943, Loss: 0.031 Epoch 2 Batch 264/538 - Train Accuracy: 0.944, Validation Accuracy: 0.947, Loss: 0.032 Epoch 2 Batch 265/538 - Train Accuracy: 0.951, Validation Accuracy: 0.945, Loss: 0.036 Epoch 2 Batch 266/538 - Train Accuracy: 0.940, Validation Accuracy: 0.951, Loss: 0.030 Epoch 2 Batch 267/538 - Train Accuracy: 0.936, Validation Accuracy: 0.941, Loss: 0.030 Epoch 2 Batch 268/538 - Train Accuracy: 0.967, Validation Accuracy: 0.942, Loss: 0.023 Epoch 2 Batch 269/538 - Train Accuracy: 0.949, Validation Accuracy: 0.944, Loss: 0.036 Epoch 2 Batch 270/538 - Train Accuracy: 0.966, Validation Accuracy: 0.945, Loss: 0.025 Epoch 2 Batch 271/538 - Train Accuracy: 0.962, Validation Accuracy: 0.948, Loss: 0.024 Epoch 2 Batch 272/538 - Train Accuracy: 0.955, Validation Accuracy: 0.949, Loss: 0.033 Epoch 2 Batch 273/538 - Train Accuracy: 0.960, Validation Accuracy: 0.949, Loss: 0.036 Epoch 2 Batch 274/538 - Train Accuracy: 0.938, Validation Accuracy: 0.949, Loss: 0.033 Epoch 2 Batch 275/538 - Train Accuracy: 0.953, Validation Accuracy: 0.952, Loss: 0.031 Epoch 2 Batch 276/538 - Train Accuracy: 0.943, Validation Accuracy: 0.947, Loss: 0.038 Epoch 2 Batch 277/538 - Train Accuracy: 0.956, Validation Accuracy: 0.942, Loss: 0.025 Epoch 2 Batch 278/538 - Train Accuracy: 0.951, Validation Accuracy: 0.941, Loss: 0.027 Epoch 2 Batch 279/538 - Train Accuracy: 0.949, Validation Accuracy: 0.947, Loss: 0.029 Epoch 2 Batch 280/538 - Train Accuracy: 0.966, Validation Accuracy: 0.947, Loss: 0.024 Epoch 2 Batch 281/538 - Train Accuracy: 0.944, Validation Accuracy: 0.943, Loss: 0.032 Epoch 2 Batch 282/538 - Train Accuracy: 0.959, Validation Accuracy: 0.954, Loss: 0.033 Epoch 2 Batch 283/538 - Train Accuracy: 0.970, Validation Accuracy: 0.954, Loss: 0.030 Epoch 2 Batch 284/538 - Train Accuracy: 0.946, Validation Accuracy: 0.951, Loss: 0.031 Epoch 2 Batch 285/538 - Train Accuracy: 
0.961, Validation Accuracy: 0.954, Loss: 0.025 Epoch 2 Batch 286/538 - Train Accuracy: 0.954, Validation Accuracy: 0.945, Loss: 0.043 Epoch 2 Batch 287/538 - Train Accuracy: 0.970, Validation Accuracy: 0.953, Loss: 0.023 Epoch 2 Batch 288/538 - Train Accuracy: 0.966, Validation Accuracy: 0.952, Loss: 0.027 Epoch 2 Batch 289/538 - Train Accuracy: 0.954, Validation Accuracy: 0.954, Loss: 0.027 Epoch 2 Batch 290/538 - Train Accuracy: 0.974, Validation Accuracy: 0.957, Loss: 0.024 Epoch 2 Batch 291/538 - Train Accuracy: 0.972, Validation Accuracy: 0.963, Loss: 0.028 Epoch 2 Batch 292/538 - Train Accuracy: 0.965, Validation Accuracy: 0.964, Loss: 0.023 Epoch 2 Batch 293/538 - Train Accuracy: 0.946, Validation Accuracy: 0.964, Loss: 0.028 Epoch 2 Batch 294/538 - Train Accuracy: 0.958, Validation Accuracy: 0.966, Loss: 0.029 Epoch 2 Batch 295/538 - Train Accuracy: 0.952, Validation Accuracy: 0.961, Loss: 0.029 Epoch 2 Batch 296/538 - Train Accuracy: 0.940, Validation Accuracy: 0.961, Loss: 0.040 Epoch 2 Batch 297/538 - Train Accuracy: 0.960, Validation Accuracy: 0.955, Loss: 0.030 Epoch 2 Batch 298/538 - Train Accuracy: 0.947, Validation Accuracy: 0.953, Loss: 0.026 Epoch 2 Batch 299/538 - Train Accuracy: 0.951, Validation Accuracy: 0.952, Loss: 0.038 Epoch 2 Batch 300/538 - Train Accuracy: 0.952, Validation Accuracy: 0.949, Loss: 0.031 Epoch 2 Batch 301/538 - Train Accuracy: 0.955, Validation Accuracy: 0.944, Loss: 0.032 Epoch 2 Batch 302/538 - Train Accuracy: 0.964, Validation Accuracy: 0.953, Loss: 0.027 Epoch 2 Batch 303/538 - Train Accuracy: 0.956, Validation Accuracy: 0.953, Loss: 0.034 Epoch 2 Batch 304/538 - Train Accuracy: 0.961, Validation Accuracy: 0.956, Loss: 0.032 Epoch 2 Batch 305/538 - Train Accuracy: 0.969, Validation Accuracy: 0.950, Loss: 0.026 Epoch 2 Batch 306/538 - Train Accuracy: 0.967, Validation Accuracy: 0.952, Loss: 0.031 Epoch 2 Batch 307/538 - Train Accuracy: 0.968, Validation Accuracy: 0.954, Loss: 0.028 Epoch 2 Batch 308/538 - Train 
Accuracy: 0.964, Validation Accuracy: 0.957, Loss: 0.026 Epoch 2 Batch 309/538 - Train Accuracy: 0.962, Validation Accuracy: 0.950, Loss: 0.025 Epoch 2 Batch 310/538 - Train Accuracy: 0.964, Validation Accuracy: 0.949, Loss: 0.035 Epoch 2 Batch 311/538 - Train Accuracy: 0.942, Validation Accuracy: 0.953, Loss: 0.032 Epoch 2 Batch 312/538 - Train Accuracy: 0.963, Validation Accuracy: 0.958, Loss: 0.024 Epoch 2 Batch 313/538 - Train Accuracy: 0.948, Validation Accuracy: 0.958, Loss: 0.028 Epoch 2 Batch 314/538 - Train Accuracy: 0.966, Validation Accuracy: 0.964, Loss: 0.030 Epoch 2 Batch 315/538 - Train Accuracy: 0.956, Validation Accuracy: 0.972, Loss: 0.024 Epoch 2 Batch 316/538 - Train Accuracy: 0.955, Validation Accuracy: 0.972, Loss: 0.023 Epoch 2 Batch 317/538 - Train Accuracy: 0.948, Validation Accuracy: 0.963, Loss: 0.029 Epoch 2 Batch 318/538 - Train Accuracy: 0.952, Validation Accuracy: 0.959, Loss: 0.028 Epoch 2 Batch 319/538 - Train Accuracy: 0.956, Validation Accuracy: 0.957, Loss: 0.030 Epoch 2 Batch 320/538 - Train Accuracy: 0.959, Validation Accuracy: 0.954, Loss: 0.026 Epoch 2 Batch 321/538 - Train Accuracy: 0.948, Validation Accuracy: 0.962, Loss: 0.027 Epoch 2 Batch 322/538 - Train Accuracy: 0.964, Validation Accuracy: 0.963, Loss: 0.029 Epoch 2 Batch 323/538 - Train Accuracy: 0.970, Validation Accuracy: 0.961, Loss: 0.023 Epoch 2 Batch 324/538 - Train Accuracy: 0.967, Validation Accuracy: 0.955, Loss: 0.027 Epoch 2 Batch 325/538 - Train Accuracy: 0.959, Validation Accuracy: 0.954, Loss: 0.026 Epoch 2 Batch 326/538 - Train Accuracy: 0.960, Validation Accuracy: 0.956, Loss: 0.027 Epoch 2 Batch 327/538 - Train Accuracy: 0.947, Validation Accuracy: 0.955, Loss: 0.033 Epoch 2 Batch 328/538 - Train Accuracy: 0.976, Validation Accuracy: 0.949, Loss: 0.022 Epoch 2 Batch 329/538 - Train Accuracy: 0.957, Validation Accuracy: 0.948, Loss: 0.029 Epoch 2 Batch 330/538 - Train Accuracy: 0.979, Validation Accuracy: 0.947, Loss: 0.023 Epoch 2 Batch 331/538 - 
Train Accuracy: 0.968, Validation Accuracy: 0.944, Loss: 0.025 Epoch 2 Batch 332/538 - Train Accuracy: 0.966, Validation Accuracy: 0.941, Loss: 0.026 Epoch 2 Batch 333/538 - Train Accuracy: 0.964, Validation Accuracy: 0.949, Loss: 0.029 Epoch 2 Batch 334/538 - Train Accuracy: 0.963, Validation Accuracy: 0.954, Loss: 0.025 Epoch 2 Batch 335/538 - Train Accuracy: 0.949, Validation Accuracy: 0.955, Loss: 0.029 Epoch 2 Batch 336/538 - Train Accuracy: 0.962, Validation Accuracy: 0.957, Loss: 0.028 Epoch 2 Batch 337/538 - Train Accuracy: 0.953, Validation Accuracy: 0.956, Loss: 0.029 Epoch 2 Batch 338/538 - Train Accuracy: 0.960, Validation Accuracy: 0.958, Loss: 0.029 Epoch 2 Batch 339/538 - Train Accuracy: 0.961, Validation Accuracy: 0.954, Loss: 0.024 Epoch 2 Batch 340/538 - Train Accuracy: 0.957, Validation Accuracy: 0.953, Loss: 0.026 Epoch 2 Batch 341/538 - Train Accuracy: 0.961, Validation Accuracy: 0.956, Loss: 0.027 Epoch 2 Batch 342/538 - Train Accuracy: 0.945, Validation Accuracy: 0.961, Loss: 0.027 Epoch 2 Batch 343/538 - Train Accuracy: 0.975, Validation Accuracy: 0.963, Loss: 0.027 Epoch 2 Batch 344/538 - Train Accuracy: 0.960, Validation Accuracy: 0.959, Loss: 0.027 Epoch 2 Batch 345/538 - Train Accuracy: 0.944, Validation Accuracy: 0.958, Loss: 0.032 Epoch 2 Batch 346/538 - Train Accuracy: 0.952, Validation Accuracy: 0.950, Loss: 0.033 Epoch 2 Batch 347/538 - Train Accuracy: 0.955, Validation Accuracy: 0.948, Loss: 0.025 Epoch 2 Batch 348/538 - Train Accuracy: 0.958, Validation Accuracy: 0.940, Loss: 0.023 Epoch 2 Batch 349/538 - Train Accuracy: 0.971, Validation Accuracy: 0.941, Loss: 0.020 Epoch 2 Batch 350/538 - Train Accuracy: 0.959, Validation Accuracy: 0.941, Loss: 0.035 Epoch 2 Batch 351/538 - Train Accuracy: 0.951, Validation Accuracy: 0.942, Loss: 0.033 Epoch 2 Batch 352/538 - Train Accuracy: 0.935, Validation Accuracy: 0.951, Loss: 0.047 Epoch 2 Batch 353/538 - Train Accuracy: 0.949, Validation Accuracy: 0.956, Loss: 0.030 Epoch 2 Batch 354/538 
- Train Accuracy: 0.957, Validation Accuracy: 0.952, Loss: 0.027 Epoch 2 Batch 355/538 - Train Accuracy: 0.960, Validation Accuracy: 0.956, Loss: 0.029 Epoch 2 Batch 356/538 - Train Accuracy: 0.965, Validation Accuracy: 0.956, Loss: 0.023 Epoch 2 Batch 357/538 - Train Accuracy: 0.958, Validation Accuracy: 0.957, Loss: 0.026 Epoch 2 Batch 358/538 - Train Accuracy: 0.968, Validation Accuracy: 0.959, Loss: 0.022 Epoch 2 Batch 359/538 - Train Accuracy: 0.957, Validation Accuracy: 0.961, Loss: 0.030 Epoch 2 Batch 360/538 - Train Accuracy: 0.956, Validation Accuracy: 0.961, Loss: 0.026 Epoch 2 Batch 361/538 - Train Accuracy: 0.973, Validation Accuracy: 0.961, Loss: 0.026 Epoch 2 Batch 362/538 - Train Accuracy: 0.968, Validation Accuracy: 0.958, Loss: 0.026 Epoch 2 Batch 363/538 - Train Accuracy: 0.958, Validation Accuracy: 0.956, Loss: 0.028 Epoch 2 Batch 364/538 - Train Accuracy: 0.954, Validation Accuracy: 0.956, Loss: 0.038 Epoch 2 Batch 365/538 - Train Accuracy: 0.946, Validation Accuracy: 0.961, Loss: 0.029 Epoch 2 Batch 366/538 - Train Accuracy: 0.953, Validation Accuracy: 0.957, Loss: 0.029 Epoch 2 Batch 367/538 - Train Accuracy: 0.958, Validation Accuracy: 0.957, Loss: 0.025 Epoch 2 Batch 368/538 - Train Accuracy: 0.967, Validation Accuracy: 0.957, Loss: 0.022 Epoch 2 Batch 369/538 - Train Accuracy: 0.951, Validation Accuracy: 0.955, Loss: 0.023 Epoch 2 Batch 370/538 - Train Accuracy: 0.968, Validation Accuracy: 0.951, Loss: 0.027 Epoch 2 Batch 371/538 - Train Accuracy: 0.963, Validation Accuracy: 0.951, Loss: 0.028 Epoch 2 Batch 372/538 - Train Accuracy: 0.970, Validation Accuracy: 0.943, Loss: 0.026 Epoch 2 Batch 373/538 - Train Accuracy: 0.952, Validation Accuracy: 0.949, Loss: 0.024 Epoch 2 Batch 374/538 - Train Accuracy: 0.964, Validation Accuracy: 0.955, Loss: 0.024 Epoch 2 Batch 375/538 - Train Accuracy: 0.957, Validation Accuracy: 0.954, Loss: 0.025 Epoch 2 Batch 376/538 - Train Accuracy: 0.954, Validation Accuracy: 0.953, Loss: 0.028 Epoch 2 Batch 
377/538 - Train Accuracy: 0.958, Validation Accuracy: 0.957, Loss: 0.028 Epoch 2 Batch 378/538 - Train Accuracy: 0.969, Validation Accuracy: 0.957, Loss: 0.021 Epoch 2 Batch 379/538 - Train Accuracy: 0.963, Validation Accuracy: 0.958, Loss: 0.027 Epoch 2 Batch 380/538 - Train Accuracy: 0.953, Validation Accuracy: 0.956, Loss: 0.026 Epoch 2 Batch 381/538 - Train Accuracy: 0.974, Validation Accuracy: 0.951, Loss: 0.027 Epoch 2 Batch 382/538 - Train Accuracy: 0.961, Validation Accuracy: 0.953, Loss: 0.029 Epoch 2 Batch 383/538 - Train Accuracy: 0.968, Validation Accuracy: 0.962, Loss: 0.025 Epoch 2 Batch 384/538 - Train Accuracy: 0.961, Validation Accuracy: 0.967, Loss: 0.027 Epoch 2 Batch 385/538 - Train Accuracy: 0.963, Validation Accuracy: 0.965, Loss: 0.027 Epoch 2 Batch 386/538 - Train Accuracy: 0.973, Validation Accuracy: 0.965, Loss: 0.024 Epoch 2 Batch 387/538 - Train Accuracy: 0.961, Validation Accuracy: 0.965, Loss: 0.023 Epoch 2 Batch 388/538 - Train Accuracy: 0.967, Validation Accuracy: 0.961, Loss: 0.027 Epoch 2 Batch 389/538 - Train Accuracy: 0.936, Validation Accuracy: 0.956, Loss: 0.034 Epoch 2 Batch 390/538 - Train Accuracy: 0.962, Validation Accuracy: 0.952, Loss: 0.023 Epoch 2 Batch 391/538 - Train Accuracy: 0.962, Validation Accuracy: 0.945, Loss: 0.022 Epoch 2 Batch 392/538 - Train Accuracy: 0.959, Validation Accuracy: 0.950, Loss: 0.022 Epoch 2 Batch 393/538 - Train Accuracy: 0.972, Validation Accuracy: 0.950, Loss: 0.024 Epoch 2 Batch 394/538 - Train Accuracy: 0.941, Validation Accuracy: 0.953, Loss: 0.029 Epoch 2 Batch 395/538 - Train Accuracy: 0.966, Validation Accuracy: 0.949, Loss: 0.027 Epoch 2 Batch 396/538 - Train Accuracy: 0.962, Validation Accuracy: 0.940, Loss: 0.022 Epoch 2 Batch 397/538 - Train Accuracy: 0.955, Validation Accuracy: 0.945, Loss: 0.027 Epoch 2 Batch 398/538 - Train Accuracy: 0.953, Validation Accuracy: 0.950, Loss: 0.026 Epoch 2 Batch 399/538 - Train Accuracy: 0.956, Validation Accuracy: 0.949, Loss: 0.027 Epoch 2 
Batch 400/538 - Train Accuracy: 0.958, Validation Accuracy: 0.951, Loss: 0.028 Epoch 2 Batch 401/538 - Train Accuracy: 0.966, Validation Accuracy: 0.954, Loss: 0.021 Epoch 2 Batch 402/538 - Train Accuracy: 0.962, Validation Accuracy: 0.951, Loss: 0.023 Epoch 2 Batch 403/538 - Train Accuracy: 0.963, Validation Accuracy: 0.949, Loss: 0.028 Epoch 2 Batch 404/538 - Train Accuracy: 0.956, Validation Accuracy: 0.950, Loss: 0.028 Epoch 2 Batch 405/538 - Train Accuracy: 0.963, Validation Accuracy: 0.953, Loss: 0.028 Epoch 2 Batch 406/538 - Train Accuracy: 0.970, Validation Accuracy: 0.948, Loss: 0.026 Epoch 2 Batch 407/538 - Train Accuracy: 0.971, Validation Accuracy: 0.953, Loss: 0.028 Epoch 2 Batch 408/538 - Train Accuracy: 0.944, Validation Accuracy: 0.958, Loss: 0.030 Epoch 2 Batch 409/538 - Train Accuracy: 0.956, Validation Accuracy: 0.959, Loss: 0.025 Epoch 2 Batch 410/538 - Train Accuracy: 0.969, Validation Accuracy: 0.960, Loss: 0.025 Epoch 2 Batch 411/538 - Train Accuracy: 0.961, Validation Accuracy: 0.955, Loss: 0.028 Epoch 2 Batch 412/538 - Train Accuracy: 0.961, Validation Accuracy: 0.951, Loss: 0.021 Epoch 2 Batch 413/538 - Train Accuracy: 0.973, Validation Accuracy: 0.944, Loss: 0.025 Epoch 2 Batch 414/538 - Train Accuracy: 0.949, Validation Accuracy: 0.947, Loss: 0.033 Epoch 2 Batch 415/538 - Train Accuracy: 0.962, Validation Accuracy: 0.951, Loss: 0.025 Epoch 2 Batch 416/538 - Train Accuracy: 0.968, Validation Accuracy: 0.947, Loss: 0.025 Epoch 2 Batch 417/538 - Train Accuracy: 0.966, Validation Accuracy: 0.950, Loss: 0.024 Epoch 2 Batch 418/538 - Train Accuracy: 0.966, Validation Accuracy: 0.956, Loss: 0.032 Epoch 2 Batch 419/538 - Train Accuracy: 0.974, Validation Accuracy: 0.957, Loss: 0.023 Epoch 2 Batch 420/538 - Train Accuracy: 0.966, Validation Accuracy: 0.955, Loss: 0.026 Epoch 2 Batch 421/538 - Train Accuracy: 0.967, Validation Accuracy: 0.955, Loss: 0.023 Epoch 2 Batch 422/538 - Train Accuracy: 0.948, Validation Accuracy: 0.959, Loss: 0.029 Epoch 
2 Batch 423/538 - Train Accuracy: 0.970, Validation Accuracy: 0.956, Loss: 0.026 Epoch 2 Batch 424/538 - Train Accuracy: 0.951, Validation Accuracy: 0.960, Loss: 0.033 Epoch 2 Batch 425/538 - Train Accuracy: 0.958, Validation Accuracy: 0.964, Loss: 0.036 Epoch 2 Batch 426/538 - Train Accuracy: 0.956, Validation Accuracy: 0.960, Loss: 0.030 Epoch 2 Batch 427/538 - Train Accuracy: 0.947, Validation Accuracy: 0.951, Loss: 0.028 Epoch 2 Batch 428/538 - Train Accuracy: 0.966, Validation Accuracy: 0.948, Loss: 0.020 Epoch 2 Batch 429/538 - Train Accuracy: 0.971, Validation Accuracy: 0.950, Loss: 0.027 Epoch 2 Batch 430/538 - Train Accuracy: 0.958, Validation Accuracy: 0.954, Loss: 0.028 Epoch 2 Batch 431/538 - Train Accuracy: 0.958, Validation Accuracy: 0.949, Loss: 0.022 Epoch 2 Batch 432/538 - Train Accuracy: 0.948, Validation Accuracy: 0.951, Loss: 0.030 Epoch 2 Batch 433/538 - Train Accuracy: 0.954, Validation Accuracy: 0.948, Loss: 0.044 Epoch 2 Batch 434/538 - Train Accuracy: 0.956, Validation Accuracy: 0.941, Loss: 0.026 Epoch 2 Batch 435/538 - Train Accuracy: 0.950, Validation Accuracy: 0.944, Loss: 0.028 Epoch 2 Batch 436/538 - Train Accuracy: 0.945, Validation Accuracy: 0.939, Loss: 0.031 Epoch 2 Batch 437/538 - Train Accuracy: 0.976, Validation Accuracy: 0.936, Loss: 0.023 Epoch 2 Batch 438/538 - Train Accuracy: 0.962, Validation Accuracy: 0.945, Loss: 0.021 Epoch 2 Batch 439/538 - Train Accuracy: 0.965, Validation Accuracy: 0.946, Loss: 0.030 Epoch 2 Batch 440/538 - Train Accuracy: 0.966, Validation Accuracy: 0.942, Loss: 0.029 Epoch 2 Batch 441/538 - Train Accuracy: 0.939, Validation Accuracy: 0.953, Loss: 0.034 Epoch 2 Batch 442/538 - Train Accuracy: 0.966, Validation Accuracy: 0.955, Loss: 0.021 Epoch 2 Batch 443/538 - Train Accuracy: 0.957, Validation Accuracy: 0.953, Loss: 0.026 Epoch 2 Batch 444/538 - Train Accuracy: 0.962, Validation Accuracy: 0.955, Loss: 0.024 Epoch 2 Batch 445/538 - Train Accuracy: 0.973, Validation Accuracy: 0.950, Loss: 0.025 
Epoch 2 Batch 446/538 - Train Accuracy: 0.956, Validation Accuracy: 0.946, Loss: 0.025 Epoch 2 Batch 447/538 - Train Accuracy: 0.959, Validation Accuracy: 0.948, Loss: 0.027 Epoch 2 Batch 448/538 - Train Accuracy: 0.958, Validation Accuracy: 0.953, Loss: 0.024 Epoch 2 Batch 449/538 - Train Accuracy: 0.978, Validation Accuracy: 0.955, Loss: 0.030 Epoch 2 Batch 450/538 - Train Accuracy: 0.940, Validation Accuracy: 0.961, Loss: 0.036 Epoch 2 Batch 451/538 - Train Accuracy: 0.953, Validation Accuracy: 0.961, Loss: 0.030 Epoch 2 Batch 452/538 - Train Accuracy: 0.968, Validation Accuracy: 0.955, Loss: 0.022 Epoch 2 Batch 453/538 - Train Accuracy: 0.972, Validation Accuracy: 0.953, Loss: 0.029 Epoch 2 Batch 454/538 - Train Accuracy: 0.959, Validation Accuracy: 0.949, Loss: 0.031 Epoch 2 Batch 455/538 - Train Accuracy: 0.969, Validation Accuracy: 0.947, Loss: 0.027 Epoch 2 Batch 456/538 - Train Accuracy: 0.971, Validation Accuracy: 0.952, Loss: 0.034 Epoch 2 Batch 457/538 - Train Accuracy: 0.958, Validation Accuracy: 0.947, Loss: 0.024 Epoch 2 Batch 458/538 - Train Accuracy: 0.969, Validation Accuracy: 0.947, Loss: 0.023 Epoch 2 Batch 459/538 - Train Accuracy: 0.978, Validation Accuracy: 0.950, Loss: 0.020 Epoch 2 Batch 460/538 - Train Accuracy: 0.952, Validation Accuracy: 0.949, Loss: 0.027 Epoch 2 Batch 461/538 - Train Accuracy: 0.967, Validation Accuracy: 0.951, Loss: 0.027 Epoch 2 Batch 462/538 - Train Accuracy: 0.972, Validation Accuracy: 0.952, Loss: 0.022 Epoch 2 Batch 463/538 - Train Accuracy: 0.959, Validation Accuracy: 0.951, Loss: 0.027 Epoch 2 Batch 464/538 - Train Accuracy: 0.961, Validation Accuracy: 0.955, Loss: 0.022 Epoch 2 Batch 465/538 - Train Accuracy: 0.957, Validation Accuracy: 0.952, Loss: 0.023 Epoch 2 Batch 466/538 - Train Accuracy: 0.962, Validation Accuracy: 0.953, Loss: 0.024 Epoch 2 Batch 467/538 - Train Accuracy: 0.970, Validation Accuracy: 0.950, Loss: 0.024 Epoch 2 Batch 468/538 - Train Accuracy: 0.966, Validation Accuracy: 0.956, Loss: 
0.031 Epoch 2 Batch 469/538 - Train Accuracy: 0.965, Validation Accuracy: 0.955, Loss: 0.026 Epoch 2 Batch 470/538 - Train Accuracy: 0.953, Validation Accuracy: 0.957, Loss: 0.025 Epoch 2 Batch 471/538 - Train Accuracy: 0.969, Validation Accuracy: 0.952, Loss: 0.018 Epoch 2 Batch 472/538 - Train Accuracy: 0.984, Validation Accuracy: 0.953, Loss: 0.017 Epoch 2 Batch 473/538 - Train Accuracy: 0.960, Validation Accuracy: 0.953, Loss: 0.025 Epoch 2 Batch 474/538 - Train Accuracy: 0.959, Validation Accuracy: 0.959, Loss: 0.022 Epoch 2 Batch 475/538 - Train Accuracy: 0.970, Validation Accuracy: 0.957, Loss: 0.024 Epoch 2 Batch 476/538 - Train Accuracy: 0.973, Validation Accuracy: 0.960, Loss: 0.025 Epoch 2 Batch 477/538 - Train Accuracy: 0.955, Validation Accuracy: 0.962, Loss: 0.032 Epoch 2 Batch 478/538 - Train Accuracy: 0.972, Validation Accuracy: 0.957, Loss: 0.021 Epoch 2 Batch 479/538 - Train Accuracy: 0.955, Validation Accuracy: 0.957, Loss: 0.026 Epoch 2 Batch 480/538 - Train Accuracy: 0.973, Validation Accuracy: 0.952, Loss: 0.023 Epoch 2 Batch 481/538 - Train Accuracy: 0.963, Validation Accuracy: 0.950, Loss: 0.026 Epoch 2 Batch 482/538 - Train Accuracy: 0.954, Validation Accuracy: 0.952, Loss: 0.023 Epoch 2 Batch 483/538 - Train Accuracy: 0.938, Validation Accuracy: 0.960, Loss: 0.030 Epoch 2 Batch 484/538 - Train Accuracy: 0.947, Validation Accuracy: 0.963, Loss: 0.030 Epoch 2 Batch 485/538 - Train Accuracy: 0.963, Validation Accuracy: 0.959, Loss: 0.028 Epoch 2 Batch 486/538 - Train Accuracy: 0.975, Validation Accuracy: 0.955, Loss: 0.019 Epoch 2 Batch 487/538 - Train Accuracy: 0.964, Validation Accuracy: 0.949, Loss: 0.022 Epoch 2 Batch 488/538 - Train Accuracy: 0.963, Validation Accuracy: 0.951, Loss: 0.021 Epoch 2 Batch 489/538 - Train Accuracy: 0.955, Validation Accuracy: 0.953, Loss: 0.027 Epoch 2 Batch 490/538 - Train Accuracy: 0.959, Validation Accuracy: 0.951, Loss: 0.022 Epoch 2 Batch 491/538 - Train Accuracy: 0.958, Validation Accuracy: 0.952, 
Loss: 0.026 Epoch 2 Batch 492/538 - Train Accuracy: 0.955, Validation Accuracy: 0.953, Loss: 0.023 Epoch 2 Batch 493/538 - Train Accuracy: 0.960, Validation Accuracy: 0.959, Loss: 0.023 Epoch 2 Batch 494/538 - Train Accuracy: 0.960, Validation Accuracy: 0.955, Loss: 0.026 Epoch 2 Batch 495/538 - Train Accuracy: 0.958, Validation Accuracy: 0.948, Loss: 0.027 Epoch 2 Batch 496/538 - Train Accuracy: 0.968, Validation Accuracy: 0.944, Loss: 0.020 Epoch 2 Batch 497/538 - Train Accuracy: 0.971, Validation Accuracy: 0.950, Loss: 0.023 Epoch 2 Batch 498/538 - Train Accuracy: 0.963, Validation Accuracy: 0.954, Loss: 0.023 Epoch 2 Batch 499/538 - Train Accuracy: 0.957, Validation Accuracy: 0.953, Loss: 0.026 Epoch 2 Batch 500/538 - Train Accuracy: 0.979, Validation Accuracy: 0.953, Loss: 0.017 Epoch 2 Batch 501/538 - Train Accuracy: 0.962, Validation Accuracy: 0.950, Loss: 0.027 Epoch 2 Batch 502/538 - Train Accuracy: 0.960, Validation Accuracy: 0.952, Loss: 0.021 Epoch 2 Batch 503/538 - Train Accuracy: 0.970, Validation Accuracy: 0.955, Loss: 0.028 Epoch 2 Batch 504/538 - Train Accuracy: 0.977, Validation Accuracy: 0.958, Loss: 0.019 Epoch 2 Batch 505/538 - Train Accuracy: 0.963, Validation Accuracy: 0.961, Loss: 0.019 Epoch 2 Batch 506/538 - Train Accuracy: 0.968, Validation Accuracy: 0.963, Loss: 0.019 Epoch 2 Batch 507/538 - Train Accuracy: 0.954, Validation Accuracy: 0.964, Loss: 0.025 Epoch 2 Batch 508/538 - Train Accuracy: 0.967, Validation Accuracy: 0.957, Loss: 0.021 Epoch 2 Batch 509/538 - Train Accuracy: 0.958, Validation Accuracy: 0.960, Loss: 0.027 Epoch 2 Batch 510/538 - Train Accuracy: 0.976, Validation Accuracy: 0.961, Loss: 0.020 Epoch 2 Batch 511/538 - Train Accuracy: 0.950, Validation Accuracy: 0.960, Loss: 0.027 Epoch 2 Batch 512/538 - Train Accuracy: 0.964, Validation Accuracy: 0.966, Loss: 0.024 Epoch 2 Batch 513/538 - Train Accuracy: 0.943, Validation Accuracy: 0.967, Loss: 0.023 Epoch 2 Batch 514/538 - Train Accuracy: 0.969, Validation Accuracy: 
0.967, Loss: 0.022 Epoch 2 Batch 515/538 - Train Accuracy: 0.945, Validation Accuracy: 0.966, Loss: 0.028 Epoch 2 Batch 516/538 - Train Accuracy: 0.962, Validation Accuracy: 0.971, Loss: 0.023 Epoch 2 Batch 517/538 - Train Accuracy: 0.955, Validation Accuracy: 0.958, Loss: 0.022 Epoch 2 Batch 518/538 - Train Accuracy: 0.957, Validation Accuracy: 0.961, Loss: 0.029 Epoch 2 Batch 519/538 - Train Accuracy: 0.967, Validation Accuracy: 0.956, Loss: 0.024 Epoch 2 Batch 520/538 - Train Accuracy: 0.974, Validation Accuracy: 0.961, Loss: 0.025 Epoch 2 Batch 521/538 - Train Accuracy: 0.969, Validation Accuracy: 0.958, Loss: 0.031 Epoch 2 Batch 522/538 - Train Accuracy: 0.962, Validation Accuracy: 0.958, Loss: 0.022 Epoch 2 Batch 523/538 - Train Accuracy: 0.961, Validation Accuracy: 0.953, Loss: 0.024 Epoch 2 Batch 524/538 - Train Accuracy: 0.966, Validation Accuracy: 0.956, Loss: 0.020 Epoch 2 Batch 525/538 - Train Accuracy: 0.966, Validation Accuracy: 0.957, Loss: 0.026 Epoch 2 Batch 526/538 - Train Accuracy: 0.958, Validation Accuracy: 0.958, Loss: 0.027 Epoch 2 Batch 527/538 - Train Accuracy: 0.970, Validation Accuracy: 0.965, Loss: 0.021 Epoch 2 Batch 528/538 - Train Accuracy: 0.963, Validation Accuracy: 0.966, Loss: 0.027 Epoch 2 Batch 529/538 - Train Accuracy: 0.947, Validation Accuracy: 0.971, Loss: 0.026 Epoch 2 Batch 530/538 - Train Accuracy: 0.948, Validation Accuracy: 0.967, Loss: 0.027 Epoch 2 Batch 531/538 - Train Accuracy: 0.953, Validation Accuracy: 0.967, Loss: 0.027 Epoch 2 Batch 532/538 - Train Accuracy: 0.956, Validation Accuracy: 0.968, Loss: 0.021 Epoch 2 Batch 533/538 - Train Accuracy: 0.961, Validation Accuracy: 0.959, Loss: 0.022 Epoch 2 Batch 534/538 - Train Accuracy: 0.966, Validation Accuracy: 0.957, Loss: 0.017 Epoch 2 Batch 535/538 - Train Accuracy: 0.966, Validation Accuracy: 0.954, Loss: 0.025 Epoch 2 Batch 536/538 - Train Accuracy: 0.968, Validation Accuracy: 0.951, Loss: 0.027 Epoch 3 Batch 0/538 - Train Accuracy: 0.974, Validation Accuracy: 
0.953, Loss: 0.020 Epoch 3 Batch 1/538 - Train Accuracy: 0.979, Validation Accuracy: 0.952, Loss: 0.026 Epoch 3 Batch 2/538 - Train Accuracy: 0.962, Validation Accuracy: 0.954, Loss: 0.026 Epoch 3 Batch 3/538 - Train Accuracy: 0.974, Validation Accuracy: 0.956, Loss: 0.023 Epoch 3 Batch 4/538 - Train Accuracy: 0.969, Validation Accuracy: 0.957, Loss: 0.023 Epoch 3 Batch 5/538 - Train Accuracy: 0.960, Validation Accuracy: 0.958, Loss: 0.026 Epoch 3 Batch 6/538 - Train Accuracy: 0.963, Validation Accuracy: 0.957, Loss: 0.022 Epoch 3 Batch 7/538 - Train Accuracy: 0.972, Validation Accuracy: 0.956, Loss: 0.025 Epoch 3 Batch 8/538 - Train Accuracy: 0.955, Validation Accuracy: 0.951, Loss: 0.025 Epoch 3 Batch 9/538 - Train Accuracy: 0.958, Validation Accuracy: 0.952, Loss: 0.022 Epoch 3 Batch 10/538 - Train Accuracy: 0.956, Validation Accuracy: 0.955, Loss: 0.024 Epoch 3 Batch 11/538 - Train Accuracy: 0.966, Validation Accuracy: 0.955, Loss: 0.024 Epoch 3 Batch 12/538 - Train Accuracy: 0.964, Validation Accuracy: 0.958, Loss: 0.024 Epoch 3 Batch 13/538 - Train Accuracy: 0.970, Validation Accuracy: 0.963, Loss: 0.024 Epoch 3 Batch 14/538 - Train Accuracy: 0.972, Validation Accuracy: 0.962, Loss: 0.022 Epoch 3 Batch 15/538 - Train Accuracy: 0.969, Validation Accuracy: 0.962, Loss: 0.019 Epoch 3 Batch 16/538 - Train Accuracy: 0.970, Validation Accuracy: 0.960, Loss: 0.024 Epoch 3 Batch 17/538 - Train Accuracy: 0.965, Validation Accuracy: 0.956, Loss: 0.026 Epoch 3 Batch 18/538 - Train Accuracy: 0.963, Validation Accuracy: 0.956, Loss: 0.027 Epoch 3 Batch 19/538 - Train Accuracy: 0.968, Validation Accuracy: 0.959, Loss: 0.025 Epoch 3 Batch 20/538 - Train Accuracy: 0.962, Validation Accuracy: 0.954, Loss: 0.027 Epoch 3 Batch 21/538 - Train Accuracy: 0.982, Validation Accuracy: 0.949, Loss: 0.015 Epoch 3 Batch 22/538 - Train Accuracy: 0.963, Validation Accuracy: 0.951, Loss: 0.026 Epoch 3 Batch 23/538 - Train Accuracy: 0.954, Validation Accuracy: 0.957, Loss: 0.028 Epoch 3 
Batch 24/538 - Train Accuracy: 0.976, Validation Accuracy: 0.956, Loss: 0.026
    [... Epoch 3, Batches 25–505 elided — metrics held steady in these ranges: Train Accuracy 0.936–0.990, Validation Accuracy 0.936–0.976, Loss 0.012–0.035 ...]
    Epoch 3 Batch 506/538 - Train
Accuracy: 0.972, Validation Accuracy: 0.964, Loss: 0.015 Epoch 3 Batch 507/538 - Train Accuracy: 0.980, Validation Accuracy: 0.961, Loss: 0.016 Epoch 3 Batch 508/538 - Train Accuracy: 0.968, Validation Accuracy: 0.967, Loss: 0.018 Epoch 3 Batch 509/538 - Train Accuracy: 0.966, Validation Accuracy: 0.967, Loss: 0.021 Epoch 3 Batch 510/538 - Train Accuracy: 0.981, Validation Accuracy: 0.967, Loss: 0.013 Epoch 3 Batch 511/538 - Train Accuracy: 0.961, Validation Accuracy: 0.970, Loss: 0.020 Epoch 3 Batch 512/538 - Train Accuracy: 0.958, Validation Accuracy: 0.970, Loss: 0.020 Epoch 3 Batch 513/538 - Train Accuracy: 0.961, Validation Accuracy: 0.962, Loss: 0.018 Epoch 3 Batch 514/538 - Train Accuracy: 0.976, Validation Accuracy: 0.960, Loss: 0.017 Epoch 3 Batch 515/538 - Train Accuracy: 0.970, Validation Accuracy: 0.962, Loss: 0.020 Epoch 3 Batch 516/538 - Train Accuracy: 0.975, Validation Accuracy: 0.955, Loss: 0.016 Epoch 3 Batch 517/538 - Train Accuracy: 0.979, Validation Accuracy: 0.952, Loss: 0.015 Epoch 3 Batch 518/538 - Train Accuracy: 0.973, Validation Accuracy: 0.950, Loss: 0.021 Epoch 3 Batch 519/538 - Train Accuracy: 0.976, Validation Accuracy: 0.957, Loss: 0.020 Epoch 3 Batch 520/538 - Train Accuracy: 0.977, Validation Accuracy: 0.957, Loss: 0.020 Epoch 3 Batch 521/538 - Train Accuracy: 0.974, Validation Accuracy: 0.963, Loss: 0.020 Epoch 3 Batch 522/538 - Train Accuracy: 0.970, Validation Accuracy: 0.963, Loss: 0.014 Epoch 3 Batch 523/538 - Train Accuracy: 0.975, Validation Accuracy: 0.961, Loss: 0.021 Epoch 3 Batch 524/538 - Train Accuracy: 0.978, Validation Accuracy: 0.958, Loss: 0.014 Epoch 3 Batch 525/538 - Train Accuracy: 0.969, Validation Accuracy: 0.952, Loss: 0.019 Epoch 3 Batch 526/538 - Train Accuracy: 0.968, Validation Accuracy: 0.951, Loss: 0.018 Epoch 3 Batch 527/538 - Train Accuracy: 0.980, Validation Accuracy: 0.951, Loss: 0.017 Epoch 3 Batch 528/538 - Train Accuracy: 0.964, Validation Accuracy: 0.960, Loss: 0.022 Epoch 3 Batch 529/538 - 
Train Accuracy: 0.962, Validation Accuracy: 0.963, Loss: 0.021 Epoch 3 Batch 530/538 - Train Accuracy: 0.968, Validation Accuracy: 0.962, Loss: 0.019 Epoch 3 Batch 531/538 - Train Accuracy: 0.966, Validation Accuracy: 0.960, Loss: 0.020 Epoch 3 Batch 532/538 - Train Accuracy: 0.973, Validation Accuracy: 0.967, Loss: 0.016 Epoch 3 Batch 533/538 - Train Accuracy: 0.975, Validation Accuracy: 0.964, Loss: 0.017 Epoch 3 Batch 534/538 - Train Accuracy: 0.970, Validation Accuracy: 0.963, Loss: 0.012 Epoch 3 Batch 535/538 - Train Accuracy: 0.970, Validation Accuracy: 0.958, Loss: 0.019 Epoch 3 Batch 536/538 - Train Accuracy: 0.980, Validation Accuracy: 0.951, Loss: 0.020 Epoch 4 Batch 0/538 - Train Accuracy: 0.972, Validation Accuracy: 0.956, Loss: 0.016 Epoch 4 Batch 1/538 - Train Accuracy: 0.979, Validation Accuracy: 0.961, Loss: 0.019 Epoch 4 Batch 2/538 - Train Accuracy: 0.975, Validation Accuracy: 0.966, Loss: 0.021 Epoch 4 Batch 3/538 - Train Accuracy: 0.974, Validation Accuracy: 0.965, Loss: 0.016 Epoch 4 Batch 4/538 - Train Accuracy: 0.967, Validation Accuracy: 0.967, Loss: 0.017 Epoch 4 Batch 5/538 - Train Accuracy: 0.964, Validation Accuracy: 0.968, Loss: 0.021 Epoch 4 Batch 6/538 - Train Accuracy: 0.965, Validation Accuracy: 0.964, Loss: 0.017 Epoch 4 Batch 7/538 - Train Accuracy: 0.979, Validation Accuracy: 0.960, Loss: 0.017 Epoch 4 Batch 8/538 - Train Accuracy: 0.974, Validation Accuracy: 0.964, Loss: 0.019 Epoch 4 Batch 9/538 - Train Accuracy: 0.969, Validation Accuracy: 0.959, Loss: 0.016 Epoch 4 Batch 10/538 - Train Accuracy: 0.970, Validation Accuracy: 0.961, Loss: 0.018 Epoch 4 Batch 11/538 - Train Accuracy: 0.978, Validation Accuracy: 0.961, Loss: 0.018 Epoch 4 Batch 12/538 - Train Accuracy: 0.974, Validation Accuracy: 0.962, Loss: 0.018 Epoch 4 Batch 13/538 - Train Accuracy: 0.975, Validation Accuracy: 0.966, Loss: 0.018 Epoch 4 Batch 14/538 - Train Accuracy: 0.971, Validation Accuracy: 0.965, Loss: 0.015 Epoch 4 Batch 15/538 - Train Accuracy: 0.966, 
Validation Accuracy: 0.962, Loss: 0.016 Epoch 4 Batch 16/538 - Train Accuracy: 0.982, Validation Accuracy: 0.962, Loss: 0.016 Epoch 4 Batch 17/538 - Train Accuracy: 0.973, Validation Accuracy: 0.961, Loss: 0.020 Epoch 4 Batch 18/538 - Train Accuracy: 0.966, Validation Accuracy: 0.968, Loss: 0.021 Epoch 4 Batch 19/538 - Train Accuracy: 0.973, Validation Accuracy: 0.966, Loss: 0.019 Epoch 4 Batch 20/538 - Train Accuracy: 0.969, Validation Accuracy: 0.967, Loss: 0.020 Epoch 4 Batch 21/538 - Train Accuracy: 0.985, Validation Accuracy: 0.965, Loss: 0.012 Epoch 4 Batch 22/538 - Train Accuracy: 0.978, Validation Accuracy: 0.963, Loss: 0.019 Epoch 4 Batch 23/538 - Train Accuracy: 0.970, Validation Accuracy: 0.958, Loss: 0.025 Epoch 4 Batch 24/538 - Train Accuracy: 0.975, Validation Accuracy: 0.959, Loss: 0.019 Epoch 4 Batch 25/538 - Train Accuracy: 0.974, Validation Accuracy: 0.958, Loss: 0.020 Epoch 4 Batch 26/538 - Train Accuracy: 0.964, Validation Accuracy: 0.952, Loss: 0.020 Epoch 4 Batch 27/538 - Train Accuracy: 0.972, Validation Accuracy: 0.954, Loss: 0.014 Epoch 4 Batch 28/538 - Train Accuracy: 0.974, Validation Accuracy: 0.955, Loss: 0.016 Epoch 4 Batch 29/538 - Train Accuracy: 0.958, Validation Accuracy: 0.952, Loss: 0.015 Epoch 4 Batch 30/538 - Train Accuracy: 0.965, Validation Accuracy: 0.954, Loss: 0.019 Epoch 4 Batch 31/538 - Train Accuracy: 0.979, Validation Accuracy: 0.960, Loss: 0.014 Epoch 4 Batch 32/538 - Train Accuracy: 0.980, Validation Accuracy: 0.956, Loss: 0.012 Epoch 4 Batch 33/538 - Train Accuracy: 0.953, Validation Accuracy: 0.957, Loss: 0.024 Epoch 4 Batch 34/538 - Train Accuracy: 0.973, Validation Accuracy: 0.957, Loss: 0.022 Epoch 4 Batch 35/538 - Train Accuracy: 0.966, Validation Accuracy: 0.959, Loss: 0.014 Epoch 4 Batch 36/538 - Train Accuracy: 0.973, Validation Accuracy: 0.961, Loss: 0.013 Epoch 4 Batch 37/538 - Train Accuracy: 0.971, Validation Accuracy: 0.963, Loss: 0.020 Epoch 4 Batch 38/538 - Train Accuracy: 0.968, Validation Accuracy: 
0.966, Loss: 0.015 Epoch 4 Batch 39/538 - Train Accuracy: 0.977, Validation Accuracy: 0.961, Loss: 0.015 Epoch 4 Batch 40/538 - Train Accuracy: 0.965, Validation Accuracy: 0.963, Loss: 0.014 Epoch 4 Batch 41/538 - Train Accuracy: 0.971, Validation Accuracy: 0.963, Loss: 0.017 Epoch 4 Batch 42/538 - Train Accuracy: 0.977, Validation Accuracy: 0.965, Loss: 0.015 Epoch 4 Batch 43/538 - Train Accuracy: 0.956, Validation Accuracy: 0.961, Loss: 0.019 Epoch 4 Batch 44/538 - Train Accuracy: 0.964, Validation Accuracy: 0.965, Loss: 0.019 Epoch 4 Batch 45/538 - Train Accuracy: 0.970, Validation Accuracy: 0.966, Loss: 0.019 Epoch 4 Batch 46/538 - Train Accuracy: 0.967, Validation Accuracy: 0.966, Loss: 0.014 Epoch 4 Batch 47/538 - Train Accuracy: 0.965, Validation Accuracy: 0.968, Loss: 0.018 Epoch 4 Batch 48/538 - Train Accuracy: 0.965, Validation Accuracy: 0.968, Loss: 0.021 Epoch 4 Batch 49/538 - Train Accuracy: 0.978, Validation Accuracy: 0.968, Loss: 0.014 Epoch 4 Batch 50/538 - Train Accuracy: 0.977, Validation Accuracy: 0.964, Loss: 0.017 Epoch 4 Batch 51/538 - Train Accuracy: 0.972, Validation Accuracy: 0.959, Loss: 0.019 Epoch 4 Batch 52/538 - Train Accuracy: 0.978, Validation Accuracy: 0.955, Loss: 0.015 Epoch 4 Batch 53/538 - Train Accuracy: 0.945, Validation Accuracy: 0.953, Loss: 0.019 Epoch 4 Batch 54/538 - Train Accuracy: 0.972, Validation Accuracy: 0.963, Loss: 0.017 Epoch 4 Batch 55/538 - Train Accuracy: 0.968, Validation Accuracy: 0.966, Loss: 0.017 Epoch 4 Batch 56/538 - Train Accuracy: 0.961, Validation Accuracy: 0.968, Loss: 0.018 Epoch 4 Batch 57/538 - Train Accuracy: 0.946, Validation Accuracy: 0.971, Loss: 0.025 Epoch 4 Batch 58/538 - Train Accuracy: 0.973, Validation Accuracy: 0.975, Loss: 0.016 Epoch 4 Batch 59/538 - Train Accuracy: 0.968, Validation Accuracy: 0.970, Loss: 0.016 Epoch 4 Batch 60/538 - Train Accuracy: 0.970, Validation Accuracy: 0.971, Loss: 0.019 Epoch 4 Batch 61/538 - Train Accuracy: 0.971, Validation Accuracy: 0.971, Loss: 0.016 
Epoch 4 Batch 62/538 - Train Accuracy: 0.960, Validation Accuracy: 0.963, Loss: 0.022 Epoch 4 Batch 63/538 - Train Accuracy: 0.969, Validation Accuracy: 0.962, Loss: 0.016 Epoch 4 Batch 64/538 - Train Accuracy: 0.970, Validation Accuracy: 0.959, Loss: 0.016 Epoch 4 Batch 65/538 - Train Accuracy: 0.957, Validation Accuracy: 0.960, Loss: 0.017 Epoch 4 Batch 66/538 - Train Accuracy: 0.985, Validation Accuracy: 0.960, Loss: 0.011 Epoch 4 Batch 67/538 - Train Accuracy: 0.983, Validation Accuracy: 0.966, Loss: 0.014 Epoch 4 Batch 68/538 - Train Accuracy: 0.972, Validation Accuracy: 0.968, Loss: 0.013 Epoch 4 Batch 69/538 - Train Accuracy: 0.984, Validation Accuracy: 0.968, Loss: 0.016 Epoch 4 Batch 70/538 - Train Accuracy: 0.971, Validation Accuracy: 0.968, Loss: 0.013 Epoch 4 Batch 71/538 - Train Accuracy: 0.968, Validation Accuracy: 0.967, Loss: 0.020 Epoch 4 Batch 72/538 - Train Accuracy: 0.971, Validation Accuracy: 0.964, Loss: 0.025 Epoch 4 Batch 73/538 - Train Accuracy: 0.960, Validation Accuracy: 0.969, Loss: 0.017 Epoch 4 Batch 74/538 - Train Accuracy: 0.970, Validation Accuracy: 0.966, Loss: 0.016 Epoch 4 Batch 75/538 - Train Accuracy: 0.973, Validation Accuracy: 0.965, Loss: 0.017 Epoch 4 Batch 76/538 - Train Accuracy: 0.969, Validation Accuracy: 0.960, Loss: 0.016 Epoch 4 Batch 77/538 - Train Accuracy: 0.978, Validation Accuracy: 0.955, Loss: 0.014 Epoch 4 Batch 78/538 - Train Accuracy: 0.964, Validation Accuracy: 0.952, Loss: 0.018 Epoch 4 Batch 79/538 - Train Accuracy: 0.982, Validation Accuracy: 0.953, Loss: 0.013 Epoch 4 Batch 80/538 - Train Accuracy: 0.974, Validation Accuracy: 0.958, Loss: 0.014 Epoch 4 Batch 81/538 - Train Accuracy: 0.960, Validation Accuracy: 0.969, Loss: 0.019 Epoch 4 Batch 82/538 - Train Accuracy: 0.976, Validation Accuracy: 0.972, Loss: 0.020 Epoch 4 Batch 83/538 - Train Accuracy: 0.972, Validation Accuracy: 0.972, Loss: 0.016 Epoch 4 Batch 84/538 - Train Accuracy: 0.966, Validation Accuracy: 0.967, Loss: 0.019 Epoch 4 Batch 85/538 
- Train Accuracy: 0.985, Validation Accuracy: 0.964, Loss: 0.013 Epoch 4 Batch 86/538 - Train Accuracy: 0.976, Validation Accuracy: 0.958, Loss: 0.014 Epoch 4 Batch 87/538 - Train Accuracy: 0.966, Validation Accuracy: 0.956, Loss: 0.016 Epoch 4 Batch 88/538 - Train Accuracy: 0.976, Validation Accuracy: 0.952, Loss: 0.015 Epoch 4 Batch 89/538 - Train Accuracy: 0.974, Validation Accuracy: 0.956, Loss: 0.014 Epoch 4 Batch 90/538 - Train Accuracy: 0.971, Validation Accuracy: 0.960, Loss: 0.017 Epoch 4 Batch 91/538 - Train Accuracy: 0.969, Validation Accuracy: 0.960, Loss: 0.017 Epoch 4 Batch 92/538 - Train Accuracy: 0.962, Validation Accuracy: 0.963, Loss: 0.017 Epoch 4 Batch 93/538 - Train Accuracy: 0.963, Validation Accuracy: 0.970, Loss: 0.016 Epoch 4 Batch 94/538 - Train Accuracy: 0.976, Validation Accuracy: 0.970, Loss: 0.013 Epoch 4 Batch 95/538 - Train Accuracy: 0.953, Validation Accuracy: 0.973, Loss: 0.017 Epoch 4 Batch 96/538 - Train Accuracy: 0.977, Validation Accuracy: 0.969, Loss: 0.013 Epoch 4 Batch 97/538 - Train Accuracy: 0.974, Validation Accuracy: 0.966, Loss: 0.012 Epoch 4 Batch 98/538 - Train Accuracy: 0.968, Validation Accuracy: 0.971, Loss: 0.018 Epoch 4 Batch 99/538 - Train Accuracy: 0.974, Validation Accuracy: 0.970, Loss: 0.015 Epoch 4 Batch 100/538 - Train Accuracy: 0.975, Validation Accuracy: 0.967, Loss: 0.014 Epoch 4 Batch 101/538 - Train Accuracy: 0.956, Validation Accuracy: 0.970, Loss: 0.023 Epoch 4 Batch 102/538 - Train Accuracy: 0.965, Validation Accuracy: 0.968, Loss: 0.020 Epoch 4 Batch 103/538 - Train Accuracy: 0.969, Validation Accuracy: 0.970, Loss: 0.016 Epoch 4 Batch 104/538 - Train Accuracy: 0.978, Validation Accuracy: 0.974, Loss: 0.016 Epoch 4 Batch 105/538 - Train Accuracy: 0.974, Validation Accuracy: 0.979, Loss: 0.014 Epoch 4 Batch 106/538 - Train Accuracy: 0.974, Validation Accuracy: 0.977, Loss: 0.014 Epoch 4 Batch 107/538 - Train Accuracy: 0.966, Validation Accuracy: 0.980, Loss: 0.017 Epoch 4 Batch 108/538 - Train 
Accuracy: 0.968, Validation Accuracy: 0.976, Loss: 0.015 Epoch 4 Batch 109/538 - Train Accuracy: 0.966, Validation Accuracy: 0.974, Loss: 0.014 Epoch 4 Batch 110/538 - Train Accuracy: 0.971, Validation Accuracy: 0.974, Loss: 0.015 Epoch 4 Batch 111/538 - Train Accuracy: 0.972, Validation Accuracy: 0.970, Loss: 0.014 Epoch 4 Batch 112/538 - Train Accuracy: 0.969, Validation Accuracy: 0.967, Loss: 0.019 Epoch 4 Batch 113/538 - Train Accuracy: 0.957, Validation Accuracy: 0.965, Loss: 0.017 Epoch 4 Batch 114/538 - Train Accuracy: 0.964, Validation Accuracy: 0.966, Loss: 0.014 Epoch 4 Batch 115/538 - Train Accuracy: 0.979, Validation Accuracy: 0.966, Loss: 0.017 Epoch 4 Batch 116/538 - Train Accuracy: 0.965, Validation Accuracy: 0.965, Loss: 0.021 Epoch 4 Batch 117/538 - Train Accuracy: 0.977, Validation Accuracy: 0.971, Loss: 0.017 Epoch 4 Batch 118/538 - Train Accuracy: 0.978, Validation Accuracy: 0.973, Loss: 0.014 Epoch 4 Batch 119/538 - Train Accuracy: 0.985, Validation Accuracy: 0.964, Loss: 0.011 Epoch 4 Batch 120/538 - Train Accuracy: 0.978, Validation Accuracy: 0.963, Loss: 0.012 Epoch 4 Batch 121/538 - Train Accuracy: 0.972, Validation Accuracy: 0.962, Loss: 0.015 Epoch 4 Batch 122/538 - Train Accuracy: 0.959, Validation Accuracy: 0.961, Loss: 0.019 Epoch 4 Batch 123/538 - Train Accuracy: 0.963, Validation Accuracy: 0.958, Loss: 0.018 Epoch 4 Batch 124/538 - Train Accuracy: 0.970, Validation Accuracy: 0.958, Loss: 0.015 Epoch 4 Batch 125/538 - Train Accuracy: 0.962, Validation Accuracy: 0.956, Loss: 0.021 Epoch 4 Batch 126/538 - Train Accuracy: 0.976, Validation Accuracy: 0.956, Loss: 0.018 Epoch 4 Batch 127/538 - Train Accuracy: 0.970, Validation Accuracy: 0.961, Loss: 0.024 Epoch 4 Batch 128/538 - Train Accuracy: 0.970, Validation Accuracy: 0.965, Loss: 0.015 Epoch 4 Batch 129/538 - Train Accuracy: 0.978, Validation Accuracy: 0.967, Loss: 0.013 Epoch 4 Batch 130/538 - Train Accuracy: 0.967, Validation Accuracy: 0.970, Loss: 0.016 Epoch 4 Batch 131/538 - 
Train Accuracy: 0.982, Validation Accuracy: 0.973, Loss: 0.014 Epoch 4 Batch 132/538 - Train Accuracy: 0.964, Validation Accuracy: 0.974, Loss: 0.018 Epoch 4 Batch 133/538 - Train Accuracy: 0.974, Validation Accuracy: 0.968, Loss: 0.017 Epoch 4 Batch 134/538 - Train Accuracy: 0.972, Validation Accuracy: 0.967, Loss: 0.020 Epoch 4 Batch 135/538 - Train Accuracy: 0.969, Validation Accuracy: 0.955, Loss: 0.023 Epoch 4 Batch 136/538 - Train Accuracy: 0.965, Validation Accuracy: 0.941, Loss: 0.017 Epoch 4 Batch 137/538 - Train Accuracy: 0.973, Validation Accuracy: 0.943, Loss: 0.019 Epoch 4 Batch 138/538 - Train Accuracy: 0.967, Validation Accuracy: 0.947, Loss: 0.016 Epoch 4 Batch 139/538 - Train Accuracy: 0.954, Validation Accuracy: 0.952, Loss: 0.020 Epoch 4 Batch 140/538 - Train Accuracy: 0.960, Validation Accuracy: 0.957, Loss: 0.022 Epoch 4 Batch 141/538 - Train Accuracy: 0.974, Validation Accuracy: 0.956, Loss: 0.016 Epoch 4 Batch 142/538 - Train Accuracy: 0.975, Validation Accuracy: 0.958, Loss: 0.017 Epoch 4 Batch 143/538 - Train Accuracy: 0.971, Validation Accuracy: 0.958, Loss: 0.020 Epoch 4 Batch 144/538 - Train Accuracy: 0.975, Validation Accuracy: 0.960, Loss: 0.019 Epoch 4 Batch 145/538 - Train Accuracy: 0.967, Validation Accuracy: 0.960, Loss: 0.021 Epoch 4 Batch 146/538 - Train Accuracy: 0.978, Validation Accuracy: 0.960, Loss: 0.013 Epoch 4 Batch 147/538 - Train Accuracy: 0.978, Validation Accuracy: 0.961, Loss: 0.017 Epoch 4 Batch 148/538 - Train Accuracy: 0.964, Validation Accuracy: 0.963, Loss: 0.023 Epoch 4 Batch 149/538 - Train Accuracy: 0.976, Validation Accuracy: 0.960, Loss: 0.014 Epoch 4 Batch 150/538 - Train Accuracy: 0.972, Validation Accuracy: 0.958, Loss: 0.014 Epoch 4 Batch 151/538 - Train Accuracy: 0.964, Validation Accuracy: 0.963, Loss: 0.019 Epoch 4 Batch 152/538 - Train Accuracy: 0.974, Validation Accuracy: 0.970, Loss: 0.019 Epoch 4 Batch 153/538 - Train Accuracy: 0.962, Validation Accuracy: 0.968, Loss: 0.018 Epoch 4 Batch 154/538 
- Train Accuracy: 0.962, Validation Accuracy: 0.970, Loss: 0.016 Epoch 4 Batch 155/538 - Train Accuracy: 0.981, Validation Accuracy: 0.970, Loss: 0.016 Epoch 4 Batch 156/538 - Train Accuracy: 0.985, Validation Accuracy: 0.970, Loss: 0.017 Epoch 4 Batch 157/538 - Train Accuracy: 0.980, Validation Accuracy: 0.971, Loss: 0.016 Epoch 4 Batch 158/538 - Train Accuracy: 0.979, Validation Accuracy: 0.971, Loss: 0.015 Epoch 4 Batch 159/538 - Train Accuracy: 0.964, Validation Accuracy: 0.967, Loss: 0.022 Epoch 4 Batch 160/538 - Train Accuracy: 0.969, Validation Accuracy: 0.954, Loss: 0.015 Epoch 4 Batch 161/538 - Train Accuracy: 0.966, Validation Accuracy: 0.953, Loss: 0.015 Epoch 4 Batch 162/538 - Train Accuracy: 0.978, Validation Accuracy: 0.954, Loss: 0.017 Epoch 4 Batch 163/538 - Train Accuracy: 0.965, Validation Accuracy: 0.962, Loss: 0.022 Epoch 4 Batch 164/538 - Train Accuracy: 0.975, Validation Accuracy: 0.960, Loss: 0.018 Epoch 4 Batch 165/538 - Train Accuracy: 0.972, Validation Accuracy: 0.956, Loss: 0.013 Epoch 4 Batch 166/538 - Train Accuracy: 0.980, Validation Accuracy: 0.958, Loss: 0.012 Epoch 4 Batch 167/538 - Train Accuracy: 0.966, Validation Accuracy: 0.958, Loss: 0.023 Epoch 4 Batch 168/538 - Train Accuracy: 0.957, Validation Accuracy: 0.959, Loss: 0.022 Epoch 4 Batch 169/538 - Train Accuracy: 0.981, Validation Accuracy: 0.962, Loss: 0.013 Epoch 4 Batch 170/538 - Train Accuracy: 0.965, Validation Accuracy: 0.961, Loss: 0.017 Epoch 4 Batch 171/538 - Train Accuracy: 0.971, Validation Accuracy: 0.957, Loss: 0.017 Epoch 4 Batch 172/538 - Train Accuracy: 0.968, Validation Accuracy: 0.956, Loss: 0.015 Epoch 4 Batch 173/538 - Train Accuracy: 0.979, Validation Accuracy: 0.954, Loss: 0.012 Epoch 4 Batch 174/538 - Train Accuracy: 0.973, Validation Accuracy: 0.949, Loss: 0.014 Epoch 4 Batch 175/538 - Train Accuracy: 0.968, Validation Accuracy: 0.942, Loss: 0.016 Epoch 4 Batch 176/538 - Train Accuracy: 0.963, Validation Accuracy: 0.948, Loss: 0.021 Epoch 4 Batch 
177/538 - Train Accuracy: 0.978, Validation Accuracy: 0.953, Loss: 0.015 Epoch 4 Batch 178/538 - Train Accuracy: 0.964, Validation Accuracy: 0.952, Loss: 0.017 Epoch 4 Batch 179/538 - Train Accuracy: 0.980, Validation Accuracy: 0.961, Loss: 0.012 Epoch 4 Batch 180/538 - Train Accuracy: 0.973, Validation Accuracy: 0.964, Loss: 0.016 Epoch 4 Batch 181/538 - Train Accuracy: 0.968, Validation Accuracy: 0.964, Loss: 0.020 Epoch 4 Batch 182/538 - Train Accuracy: 0.972, Validation Accuracy: 0.966, Loss: 0.013 Epoch 4 Batch 183/538 - Train Accuracy: 0.980, Validation Accuracy: 0.964, Loss: 0.015 Epoch 4 Batch 184/538 - Train Accuracy: 0.973, Validation Accuracy: 0.962, Loss: 0.017 Epoch 4 Batch 185/538 - Train Accuracy: 0.984, Validation Accuracy: 0.960, Loss: 0.011 Epoch 4 Batch 186/538 - Train Accuracy: 0.977, Validation Accuracy: 0.956, Loss: 0.016 Epoch 4 Batch 187/538 - Train Accuracy: 0.974, Validation Accuracy: 0.954, Loss: 0.016 Epoch 4 Batch 188/538 - Train Accuracy: 0.971, Validation Accuracy: 0.954, Loss: 0.013 Epoch 4 Batch 189/538 - Train Accuracy: 0.971, Validation Accuracy: 0.952, Loss: 0.020 Epoch 4 Batch 190/538 - Train Accuracy: 0.965, Validation Accuracy: 0.955, Loss: 0.022 Epoch 4 Batch 191/538 - Train Accuracy: 0.986, Validation Accuracy: 0.954, Loss: 0.014 Epoch 4 Batch 192/538 - Train Accuracy: 0.977, Validation Accuracy: 0.954, Loss: 0.015 Epoch 4 Batch 193/538 - Train Accuracy: 0.968, Validation Accuracy: 0.956, Loss: 0.018 Epoch 4 Batch 194/538 - Train Accuracy: 0.961, Validation Accuracy: 0.956, Loss: 0.022 Epoch 4 Batch 195/538 - Train Accuracy: 0.977, Validation Accuracy: 0.955, Loss: 0.020 Epoch 4 Batch 196/538 - Train Accuracy: 0.969, Validation Accuracy: 0.958, Loss: 0.016 Epoch 4 Batch 197/538 - Train Accuracy: 0.969, Validation Accuracy: 0.959, Loss: 0.016 Epoch 4 Batch 198/538 - Train Accuracy: 0.968, Validation Accuracy: 0.956, Loss: 0.016 Epoch 4 Batch 199/538 - Train Accuracy: 0.964, Validation Accuracy: 0.953, Loss: 0.018 Epoch 4 
Batch 200/538 - Train Accuracy: 0.985, Validation Accuracy: 0.960, Loss: 0.012 Epoch 4 Batch 201/538 - Train Accuracy: 0.972, Validation Accuracy: 0.962, Loss: 0.021 Epoch 4 Batch 202/538 - Train Accuracy: 0.970, Validation Accuracy: 0.962, Loss: 0.017 Epoch 4 Batch 203/538 - Train Accuracy: 0.974, Validation Accuracy: 0.962, Loss: 0.013 Epoch 4 Batch 204/538 - Train Accuracy: 0.966, Validation Accuracy: 0.962, Loss: 0.021 Epoch 4 Batch 205/538 - Train Accuracy: 0.969, Validation Accuracy: 0.962, Loss: 0.015 Epoch 4 Batch 206/538 - Train Accuracy: 0.978, Validation Accuracy: 0.962, Loss: 0.013 Epoch 4 Batch 207/538 - Train Accuracy: 0.981, Validation Accuracy: 0.958, Loss: 0.017 Epoch 4 Batch 208/538 - Train Accuracy: 0.972, Validation Accuracy: 0.962, Loss: 0.022 Epoch 4 Batch 209/538 - Train Accuracy: 0.982, Validation Accuracy: 0.962, Loss: 0.014 Epoch 4 Batch 210/538 - Train Accuracy: 0.974, Validation Accuracy: 0.967, Loss: 0.018 Epoch 4 Batch 211/538 - Train Accuracy: 0.962, Validation Accuracy: 0.955, Loss: 0.019 Epoch 4 Batch 212/538 - Train Accuracy: 0.977, Validation Accuracy: 0.959, Loss: 0.014 Epoch 4 Batch 213/538 - Train Accuracy: 0.974, Validation Accuracy: 0.960, Loss: 0.015 Epoch 4 Batch 214/538 - Train Accuracy: 0.988, Validation Accuracy: 0.960, Loss: 0.010 Epoch 4 Batch 215/538 - Train Accuracy: 0.974, Validation Accuracy: 0.961, Loss: 0.012 Epoch 4 Batch 216/538 - Train Accuracy: 0.965, Validation Accuracy: 0.959, Loss: 0.015 Epoch 4 Batch 217/538 - Train Accuracy: 0.974, Validation Accuracy: 0.958, Loss: 0.018 Epoch 4 Batch 218/538 - Train Accuracy: 0.974, Validation Accuracy: 0.959, Loss: 0.012 Epoch 4 Batch 219/538 - Train Accuracy: 0.967, Validation Accuracy: 0.961, Loss: 0.020 Epoch 4 Batch 220/538 - Train Accuracy: 0.959, Validation Accuracy: 0.956, Loss: 0.018 Epoch 4 Batch 221/538 - Train Accuracy: 0.977, Validation Accuracy: 0.963, Loss: 0.012 Epoch 4 Batch 222/538 - Train Accuracy: 0.971, Validation Accuracy: 0.961, Loss: 0.012 Epoch 
4 Batch 223/538 - Train Accuracy: 0.971, Validation Accuracy: 0.956, Loss: 0.014 Epoch 4 Batch 224/538 - Train Accuracy: 0.968, Validation Accuracy: 0.952, Loss: 0.016 Epoch 4 Batch 225/538 - Train Accuracy: 0.972, Validation Accuracy: 0.952, Loss: 0.018 Epoch 4 Batch 226/538 - Train Accuracy: 0.969, Validation Accuracy: 0.956, Loss: 0.018 Epoch 4 Batch 227/538 - Train Accuracy: 0.968, Validation Accuracy: 0.958, Loss: 0.018 Epoch 4 Batch 228/538 - Train Accuracy: 0.961, Validation Accuracy: 0.959, Loss: 0.015 Epoch 4 Batch 229/538 - Train Accuracy: 0.979, Validation Accuracy: 0.957, Loss: 0.015 Epoch 4 Batch 230/538 - Train Accuracy: 0.973, Validation Accuracy: 0.963, Loss: 0.013 Epoch 4 Batch 231/538 - Train Accuracy: 0.968, Validation Accuracy: 0.960, Loss: 0.016 Epoch 4 Batch 232/538 - Train Accuracy: 0.958, Validation Accuracy: 0.959, Loss: 0.020 Epoch 4 Batch 233/538 - Train Accuracy: 0.974, Validation Accuracy: 0.958, Loss: 0.014 Epoch 4 Batch 234/538 - Train Accuracy: 0.978, Validation Accuracy: 0.950, Loss: 0.012 Epoch 4 Batch 235/538 - Train Accuracy: 0.975, Validation Accuracy: 0.950, Loss: 0.013 Epoch 4 Batch 236/538 - Train Accuracy: 0.976, Validation Accuracy: 0.949, Loss: 0.013 Epoch 4 Batch 237/538 - Train Accuracy: 0.973, Validation Accuracy: 0.949, Loss: 0.013 Epoch 4 Batch 238/538 - Train Accuracy: 0.982, Validation Accuracy: 0.954, Loss: 0.014 Epoch 4 Batch 239/538 - Train Accuracy: 0.970, Validation Accuracy: 0.959, Loss: 0.014 Epoch 4 Batch 240/538 - Train Accuracy: 0.968, Validation Accuracy: 0.958, Loss: 0.016 Epoch 4 Batch 241/538 - Train Accuracy: 0.957, Validation Accuracy: 0.961, Loss: 0.018 Epoch 4 Batch 242/538 - Train Accuracy: 0.977, Validation Accuracy: 0.961, Loss: 0.014 Epoch 4 Batch 243/538 - Train Accuracy: 0.973, Validation Accuracy: 0.962, Loss: 0.014 Epoch 4 Batch 244/538 - Train Accuracy: 0.964, Validation Accuracy: 0.955, Loss: 0.014 Epoch 4 Batch 245/538 - Train Accuracy: 0.965, Validation Accuracy: 0.951, Loss: 0.021 
Epoch 4 Batch 246/538 - Train Accuracy: 0.965, Validation Accuracy: 0.953, Loss: 0.012 Epoch 4 Batch 247/538 - Train Accuracy: 0.979, Validation Accuracy: 0.952, Loss: 0.015 Epoch 4 Batch 248/538 - Train Accuracy: 0.965, Validation Accuracy: 0.950, Loss: 0.020 Epoch 4 Batch 249/538 - Train Accuracy: 0.975, Validation Accuracy: 0.959, Loss: 0.013 Epoch 4 Batch 250/538 - Train Accuracy: 0.988, Validation Accuracy: 0.961, Loss: 0.014 Epoch 4 Batch 251/538 - Train Accuracy: 0.972, Validation Accuracy: 0.964, Loss: 0.014 Epoch 4 Batch 252/538 - Train Accuracy: 0.967, Validation Accuracy: 0.969, Loss: 0.017 Epoch 4 Batch 253/538 - Train Accuracy: 0.960, Validation Accuracy: 0.965, Loss: 0.017 Epoch 4 Batch 254/538 - Train Accuracy: 0.949, Validation Accuracy: 0.964, Loss: 0.021 Epoch 4 Batch 255/538 - Train Accuracy: 0.986, Validation Accuracy: 0.968, Loss: 0.015 Epoch 4 Batch 256/538 - Train Accuracy: 0.979, Validation Accuracy: 0.966, Loss: 0.017 Epoch 4 Batch 257/538 - Train Accuracy: 0.977, Validation Accuracy: 0.966, Loss: 0.016 Epoch 4 Batch 258/538 - Train Accuracy: 0.967, Validation Accuracy: 0.965, Loss: 0.017 Epoch 4 Batch 259/538 - Train Accuracy: 0.979, Validation Accuracy: 0.968, Loss: 0.016 Epoch 4 Batch 260/538 - Train Accuracy: 0.963, Validation Accuracy: 0.963, Loss: 0.019 Epoch 4 Batch 261/538 - Train Accuracy: 0.973, Validation Accuracy: 0.958, Loss: 0.017 Epoch 4 Batch 262/538 - Train Accuracy: 0.977, Validation Accuracy: 0.955, Loss: 0.018 Epoch 4 Batch 263/538 - Train Accuracy: 0.953, Validation Accuracy: 0.958, Loss: 0.017 Epoch 4 Batch 264/538 - Train Accuracy: 0.964, Validation Accuracy: 0.963, Loss: 0.019 Epoch 4 Batch 265/538 - Train Accuracy: 0.953, Validation Accuracy: 0.967, Loss: 0.021 Epoch 4 Batch 266/538 - Train Accuracy: 0.966, Validation Accuracy: 0.966, Loss: 0.017 Epoch 4 Batch 267/538 - Train Accuracy: 0.962, Validation Accuracy: 0.962, Loss: 0.016 Epoch 4 Batch 268/538 - Train Accuracy: 0.980, Validation Accuracy: 0.959, Loss: 
0.010 Epoch 4 Batch 269/538 - Train Accuracy: 0.966, Validation Accuracy: 0.961, Loss: 0.018 [... per-batch training log for Epoch 4, Batches 270-535 omitted: Train Accuracy stays within 0.95-0.99, Validation Accuracy within 0.95-0.98, Loss within 0.009-0.031 ...] Epoch 4 Batch 536/538 - Train Accuracy: 0.971, Validation Accuracy: 0.969, Loss: 0.015 Model Trained and Saved ###Markdown Save ParametersSave the `batch_size` and `save_path` parameters for inference.
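`helper.save_params()` and `helper.load_params()` ship with the project; their implementation is not shown here. A minimal, hypothetical stand-in (simple pickle persistence of the checkpoint path, names assumed) might look like:

```python
import pickle

# Hypothetical stand-in for the project's helper.save_params/load_params:
# persist the checkpoint path so the inference section can reload it later.
def save_params(params, path='params.p'):
    with open(path, 'wb') as f:
        pickle.dump(params, f)

def load_params(path='params.p'):
    with open(path, 'rb') as f:
        return pickle.load(f)

save_params('checkpoints/dev')
print(load_params())  # checkpoints/dev
```

The real helper module may store additional values (such as `batch_size`); only the round-trip idea matters here.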
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to SequenceTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id. ###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function sentence = sentence.lower() word_ids = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.split()] return word_ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) ###Output Input Word Ids: [58, 203, 136, 80, 65, 165, 60] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [357, 246, 345, 217, 158, 130, 301, 1] French Words: ['il', 'a', 'vu', 'un', 'camion', 'jaune', '.', '<EOS>'] ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence-to-sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince training on the entire English-to-French corpus would take a long time, we have provided you with a small portion of the English corpus.
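`helper.load_data()` is provided by the project. Assuming it simply reads a corpus file into one string, one sentence per line (an assumption, since the helper source is not shown), a stand-in could be:

```python
import os

# Hypothetical stand-in for helper.load_data(): read a corpus file as one string.
def load_data(path):
    with open(path, 'r', encoding='utf-8') as f:
        return f.read()

# Tiny self-contained check with a throwaway file:
with open('demo_corpus.txt', 'w', encoding='utf-8') as f:
    f.write('new jersey is sometimes quiet .\nparis is relaxing .')
text = load_data('demo_corpus.txt')
print(text.split('\n'))  # ['new jersey is sometimes quiet .', 'paris is relaxing .']
os.remove('demo_corpus.txt')
```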
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape . the united states is sometimes busy during january , and it is sometimes warm in november . French sentences 0 to 10: new jersey est parfois calme pendant l' automne , et il est neigeux en avril . les états-unis est généralement froid en juillet , et il gèle habituellement en novembre . california est généralement calme en mars , et il est généralement chaud en juin . les états-unis est parfois légère en juin , et il fait froid en septembre . votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme . son fruit préféré est l'orange , mais mon préféré est le raisin . paris est relaxant en décembre , mais il est généralement froid en juillet . new jersey est occupé au printemps , et il est jamais chaude en mars . notre fruit est moins aimé le citron , mais mon moins aimé est le raisin . les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre . ###Markdown Implement Preprocessing Function Text to Word IdsAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of each sentence from `target_text`. This will help the neural network predict when the sentence should end.You can get the `<EOS>` word id by doing:```pythontarget_vocab_to_int['<EOS>']```You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ###Code def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function # The <EOS> id appended to targets must come from the target vocabulary eos = target_vocab_to_int['<EOS>'] source_id_text = [[source_vocab_to_int[word] for word in sentence.split()]\ for sentence in source_text.split('\n')] target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] + [eos]\ for sentence in target_text.split('\n')] return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ###Output Tests Passed ###Markdown Preprocess all the data and save itRunning the code cell below will preprocess all the data and save it to file. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ###Output _____no_output_____ ###Markdown Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
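The word-to-id conversion performed by `text_to_ids()` above can be sanity-checked on a toy example. The mini-vocabularies and ids below are made up purely for illustration:

```python
# Toy check of the text_to_ids() idea: map words to ids line by line,
# appending the <EOS> id to every target sequence (ids are arbitrary).
source_vocab = {'new': 4, 'jersey': 5, 'is': 6, 'quiet': 7}
target_vocab = {'<EOS>': 1, 'new': 4, 'jersey': 5, 'est': 6, 'calme': 7}

def to_ids(text, vocab, eos_id=None):
    lines = [[vocab[w] for w in line.split()] for line in text.split('\n')]
    if eos_id is not None:
        lines = [line + [eos_id] for line in lines]
    return lines

src = to_ids('new jersey is quiet', source_vocab)
tgt = to_ids('new jersey est calme', target_vocab, eos_id=target_vocab['<EOS>'])
print(src)  # [[4, 5, 6, 7]]
print(tgt)  # [[4, 5, 6, 7, 1]]
```

Note that only the target sequences get `<EOS>`; the source sequences are fed to the encoder unchanged.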
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ###Output _____no_output_____ ###Markdown Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0. You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) print(tf.__version__) ###Output 1.0.0 ###Markdown Build the Neural NetworkYou'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoding_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` InputImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability) ###Code def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and keep probability.
:return: Tuple (input, targets, learning rate, keep probability) """ # TODO: Implement Function inputs = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='target') learning_rate = tf.placeholder(tf.float32, name="learning_rate") keep_prob = tf.placeholder(tf.float32, name="keep_prob") return inputs, targets, learning_rate, keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ###Output Tests Passed ###Markdown Process Decoding InputImplement `process_decoding_input` using TensorFlow to remove the last word id from each batch in `target_data` and concat the GO ID to the beginning of each batch. ###Code def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function # Drop the last word id from every sequence, then prepend the <GO> id without_last = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), without_last], 1) return dec_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) ###Output Tests Passed ###Markdown EncodingImplement `encoding_layer()` to create an Encoder RNN layer using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn).
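Before moving on to the encoder, the transformation that `process_decoding_input()` performs above is easy to verify in plain Python, without TensorFlow (the ids below are made up for illustration):

```python
# Drop the last id of each target sequence, then prepend the <GO> id —
# exactly what tf.strided_slice + tf.concat do on the target batch.
GO = 2  # hypothetical id for '<GO>'; 1 stands in for '<EOS>'
target_batch = [[10, 11, 12, 1],
                [20, 21, 22, 1]]
dec_input = [[GO] + seq[:-1] for seq in target_batch]
print(dec_input)  # [[2, 10, 11, 12], [2, 20, 21, 22]]
```

The decoder thus reads a sequence shifted one step to the right: at each time step it sees the previous target token, starting from `<GO>`.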
###Code def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # TODO: Implement Function lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers) cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) output, rnn_state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32) return rnn_state # Note: under TensorFlow 1.1.0 the code above raises a ValueError # ("Attempt to reuse RNNCell with a different variable scope than its first use"); # in that case, build a fresh cell per layer instead: #multicell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(rnn_size),\ # output_keep_prob=keep_prob) for _ in range(num_layers)]) #outputs, state = tf.nn.dynamic_rnn(multicell, rnn_inputs, dtype=tf.float32) #return state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ###Output Tests Passed ###Markdown Decoding - TrainingCreate training logits using [`tf.contrib.seq2seq.simple_decoder_fn_train()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_train) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder). Apply the `output_fn` to the [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder) outputs.
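The training decoder relies on teacher forcing: at each step it reads the ground-truth previous token (the `<GO>`-shifted targets), not its own prediction. A plain-Python illustration — the tokens and their pairing below are illustrative only, not taken from the dataset:

```python
# Teacher forcing in a nutshell: pair each decoder input token with the
# target token the logits at that step are scored against.
dec_input = ['<GO>', 'il', 'a', 'vu']   # shifted target sequence (decoder input)
targets   = ['il', 'a', 'vu', '<EOS>']  # what each step should predict
steps = list(zip(dec_input, targets))
for inp, tgt in steps:
    print(f'decoder reads {inp!r} -> should predict {tgt!r}')
```

This is why the training and inference decoders share weights but need different graph wiring: only at inference time does the decoder consume its own output.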
###Code def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ # TODO: Implement Function decoder_fn_train = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) outputs,final_state,final_context_state = tf.contrib.seq2seq.dynamic_rnn_decoder\ (dec_cell,decoder_fn_train,dec_embed_input,sequence_length,scope=decoding_scope) train_logits = output_fn(outputs) train_logits = tf.nn.dropout(train_logits,keep_prob) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference logits using [`tf.contrib.seq2seq.simple_decoder_fn_inference()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_inference) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder). 
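At inference time there are no target tokens to feed in, so `simple_decoder_fn_inference` feeds each predicted word back into the decoder until it emits `<EOS>` or hits `maximum_length`. The feedback loop can be sketched as a toy greedy decoder in NumPy; the transition logits here are invented purely for illustration:

```python
import numpy as np

EOS = 0
# next_logits[w] = made-up scores over a 4-word vocabulary given previous word w.
next_logits = np.array([
    [9.0, 0.0, 0.0, 0.0],  # after <EOS>: stay on <EOS>
    [0.0, 0.0, 9.0, 0.0],  # after word 1 -> word 2
    [0.0, 0.0, 0.0, 9.0],  # after word 2 -> word 3
    [9.0, 0.0, 0.0, 0.0],  # after word 3 -> <EOS>
])

def greedy_decode(start_word, maximum_length):
    word, decoded = start_word, []
    for _ in range(maximum_length):
        word = int(np.argmax(next_logits[word]))  # feed the prediction back in
        if word == EOS:                           # stop at end-of-sequence
            break
        decoded.append(word)
    return decoded

print(greedy_decode(1, maximum_length=10))  # [2, 3]
```

The real decoder does the same thing one step up: it looks up the predicted id in `dec_embeddings` and runs another RNN step, instead of indexing a fixed table.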
###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS ID :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ # TODO: Implement Function decoder_fn_inference = tf.contrib.seq2seq.simple_decoder_fn_inference\ (output_fn,encoder_state,dec_embeddings,start_of_sequence_id,end_of_sequence_id,maximum_length,vocab_size) outputs_logits,_,_ = tf.contrib.seq2seq.dynamic_rnn_decoder\ (dec_cell,decoder_fn_inference,scope=decoding_scope) inference_logits = tf.nn.dropout(outputs_logits, tf.to_float(keep_prob)) return inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding LayerImplement `decoding_layer()` to create a Decoder RNN layer.- Create RNN cell for decoding using `rnn_size` and `num_layers`.- Create the output function using [`lambda`](https://docs.python.org/3/tutorial/controlflow.htmllambda-expressions) to transform its input, logits, to class logits.- Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)` function to get the training logits.- Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)` function to get the inference logits.Note: You'll need to use 
[tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. ###Code def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function #Decoding rnn cell dec_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell,output_keep_prob=keep_prob) dec_cell = tf.contrib.rnn.MultiRNNCell([dec_cell] * num_layers) #transforming its input, logits, to class logits with tf.variable_scope('decoding') as decoding_scope: output_fn = lambda x: tf.contrib.layers.fully_connected(x,vocab_size,None,scope=decoding_scope) #getting the training logits training_logits = decoding_layer_train(encoder_state,dec_cell,dec_embed_input,\ sequence_length,decoding_scope,output_fn,keep_prob) #getting the inference logits with tf.variable_scope("decoding", reuse=True) as decoding_scope: inference_logits = decoding_layer_infer\ (encoder_state,dec_cell,dec_embeddings,target_vocab_to_int['<GO>'],target_vocab_to_int['<EOS>'],\ sequence_length,vocab_size,decoding_scope,output_fn,keep_prob) return training_logits,inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural NetworkApply the functions you implemented above to:- Apply embedding to the input data for the encoder.- Encode the input using your 
`encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)`.- Process target data using your `process_decoding_input(target_data, target_vocab_to_int, batch_size)` function.- Apply embedding to the target data for the decoder.- Decode the encoded input using your `decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)`. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function #Applying embedding to the input data for the encoder encoding_input = tf.contrib.layers.embed_sequence(input_data,source_vocab_size,enc_embedding_size) #Encoding the input encoder_state = encoding_layer(encoding_input,rnn_size,num_layers,keep_prob) #Process target data target_data = process_decoding_input(target_data,target_vocab_to_int,batch_size) #Applying embedding to the target data for the decoder dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, target_data) #Decoding the encoded input training_logits,inference_logits = decoding_layer\ 
(dec_embed_input,dec_embeddings,encoder_state,target_vocab_size,sequence_length,rnn_size,num_layers,\ target_vocab_to_int,keep_prob) return training_logits,inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability ###Code # Number of Epochs epochs = 3 #10 # Batch Size batch_size = 256 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 200 decoding_embedding_size = 200 # Learning Rate learning_rate = 0.01 #0.001 # Dropout Keep Probability keep_probability = 0.8 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
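The graph cell applies gradient clipping: every gradient is squeezed into [-1, 1] with `tf.clip_by_value` before `apply_gradients`, so a single exploding gradient cannot derail training. The same element-wise operation in NumPy, with made-up gradient values:

```python
import numpy as np

gradients = np.array([0.3, -2.5, 7.0, -0.1])  # illustrative raw gradients
capped = np.clip(gradients, -1.0, 1.0)        # like tf.clip_by_value(grad, -1., 1.)
# In-range values pass through unchanged; out-of-range values are pinned to +/-1.
print(capped)
```

Note this clips each element independently (value clipping), which is simpler but cruder than norm-based clipping: it changes the gradient's direction whenever any component saturates.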
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown TrainTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 0/538 - Train Accuracy: 0.234, Validation Accuracy: 0.316, Loss: 5.901 Epoch 0 Batch 1/538 - Train Accuracy: 0.169, Validation Accuracy: 0.238, Loss: 4.838 Epoch 0 Batch 2/538 - Train Accuracy: 0.165, Validation Accuracy: 
0.238, Loss: 5.141 Epoch 0 Batch 3/538 - Train Accuracy: 0.241, Validation Accuracy: 0.328, Loss: 4.593 Epoch 0 Batch 4/538 - Train Accuracy: 0.185, Validation Accuracy: 0.253, Loss: 4.467 Epoch 0 Batch 5/538 - Train Accuracy: 0.304, Validation Accuracy: 0.350, Loss: 4.247 Epoch 0 Batch 6/538 - Train Accuracy: 0.312, Validation Accuracy: 0.353, Loss: 3.992 Epoch 0 Batch 7/538 - Train Accuracy: 0.287, Validation Accuracy: 0.352, Loss: 3.988 Epoch 0 Batch 8/538 - Train Accuracy: 0.284, Validation Accuracy: 0.352, Loss: 3.918 Epoch 0 Batch 9/538 - Train Accuracy: 0.286, Validation Accuracy: 0.352, Loss: 3.891 Epoch 0 Batch 10/538 - Train Accuracy: 0.267, Validation Accuracy: 0.352, Loss: 3.911 Epoch 0 Batch 11/538 - Train Accuracy: 0.279, Validation Accuracy: 0.352, Loss: 3.832 Epoch 0 Batch 12/538 - Train Accuracy: 0.277, Validation Accuracy: 0.353, Loss: 3.777 Epoch 0 Batch 13/538 - Train Accuracy: 0.355, Validation Accuracy: 0.382, Loss: 3.552 Epoch 0 Batch 14/538 - Train Accuracy: 0.321, Validation Accuracy: 0.385, Loss: 3.612 Epoch 0 Batch 15/538 - Train Accuracy: 0.363, Validation Accuracy: 0.389, Loss: 3.513 Epoch 0 Batch 16/538 - Train Accuracy: 0.347, Validation Accuracy: 0.388, Loss: 3.487 Epoch 0 Batch 17/538 - Train Accuracy: 0.334, Validation Accuracy: 0.393, Loss: 3.537 Epoch 0 Batch 18/538 - Train Accuracy: 0.322, Validation Accuracy: 0.390, Loss: 3.559 Epoch 0 Batch 19/538 - Train Accuracy: 0.328, Validation Accuracy: 0.397, Loss: 3.527 Epoch 0 Batch 20/538 - Train Accuracy: 0.375, Validation Accuracy: 0.414, Loss: 3.404 Epoch 0 Batch 21/538 - Train Accuracy: 0.308, Validation Accuracy: 0.406, Loss: 3.532 Epoch 0 Batch 22/538 - Train Accuracy: 0.343, Validation Accuracy: 0.406, Loss: 3.433 Epoch 0 Batch 23/538 - Train Accuracy: 0.353, Validation Accuracy: 0.410, Loss: 3.431 Epoch 0 Batch 24/538 - Train Accuracy: 0.379, Validation Accuracy: 0.417, Loss: 3.336 Epoch 0 Batch 25/538 - Train Accuracy: 0.368, Validation Accuracy: 0.421, Loss: 3.321 Epoch 0 
Batch 26/538 - Train Accuracy: 0.377, Validation Accuracy: 0.435, Loss: 3.348 Epoch 0 Batch 27/538 - Train Accuracy: 0.384, Validation Accuracy: 0.433, Loss: 3.347 Epoch 0 Batch 28/538 - Train Accuracy: 0.433, Validation Accuracy: 0.437, Loss: 3.119 Epoch 0 Batch 29/538 - Train Accuracy: 0.403, Validation Accuracy: 0.445, Loss: 3.212 Epoch 0 Batch 30/538 - Train Accuracy: 0.392, Validation Accuracy: 0.456, Loss: 3.302 Epoch 0 Batch 31/538 - Train Accuracy: 0.426, Validation Accuracy: 0.458, Loss: 3.131 Epoch 0 Batch 32/538 - Train Accuracy: 0.419, Validation Accuracy: 0.472, Loss: 3.183 Epoch 0 Batch 33/538 - Train Accuracy: 0.423, Validation Accuracy: 0.463, Loss: 3.086 Epoch 0 Batch 34/538 - Train Accuracy: 0.422, Validation Accuracy: 0.476, Loss: 3.176 Epoch 0 Batch 35/538 - Train Accuracy: 0.409, Validation Accuracy: 0.471, Loss: 3.150 Epoch 0 Batch 36/538 - Train Accuracy: 0.435, Validation Accuracy: 0.476, Loss: 3.009 Epoch 0 Batch 37/538 - Train Accuracy: 0.450, Validation Accuracy: 0.487, Loss: 3.045 Epoch 0 Batch 38/538 - Train Accuracy: 0.429, Validation Accuracy: 0.491, Loss: 3.101 Epoch 0 Batch 39/538 - Train Accuracy: 0.438, Validation Accuracy: 0.494, Loss: 3.058 Epoch 0 Batch 40/538 - Train Accuracy: 0.499, Validation Accuracy: 0.498, Loss: 2.849 Epoch 0 Batch 41/538 - Train Accuracy: 0.435, Validation Accuracy: 0.497, Loss: 3.053 Epoch 0 Batch 42/538 - Train Accuracy: 0.437, Validation Accuracy: 0.488, Loss: 2.964 Epoch 0 Batch 43/538 - Train Accuracy: 0.474, Validation Accuracy: 0.512, Loss: 2.975 Epoch 0 Batch 44/538 - Train Accuracy: 0.441, Validation Accuracy: 0.504, Loss: 3.028 Epoch 0 Batch 45/538 - Train Accuracy: 0.466, Validation Accuracy: 0.499, Loss: 2.831 Epoch 0 Batch 46/538 - Train Accuracy: 0.464, Validation Accuracy: 0.506, Loss: 2.915 Epoch 0 Batch 47/538 - Train Accuracy: 0.484, Validation Accuracy: 0.509, Loss: 2.832 Epoch 0 Batch 48/538 - Train Accuracy: 0.472, Validation Accuracy: 0.493, Loss: 2.745 Epoch 0 Batch 49/538 - Train 
Accuracy: 0.457, Validation Accuracy: 0.512, Loss: 2.908 Epoch 0 Batch 50/538 - Train Accuracy: 0.484, Validation Accuracy: 0.516, Loss: 2.791 Epoch 0 Batch 51/538 - Train Accuracy: 0.384, Validation Accuracy: 0.499, Loss: 2.945 Epoch 0 Batch 52/538 - Train Accuracy: 0.474, Validation Accuracy: 0.518, Loss: 2.764 Epoch 0 Batch 53/538 - Train Accuracy: 0.514, Validation Accuracy: 0.522, Loss: 2.604 Epoch 0 Batch 54/538 - Train Accuracy: 0.452, Validation Accuracy: 0.510, Loss: 2.689 Epoch 0 Batch 55/538 - Train Accuracy: 0.455, Validation Accuracy: 0.515, Loss: 2.767 Epoch 0 Batch 56/538 - Train Accuracy: 0.489, Validation Accuracy: 0.526, Loss: 2.675 Epoch 0 Batch 57/538 - Train Accuracy: 0.448, Validation Accuracy: 0.525, Loss: 2.727 Epoch 0 Batch 58/538 - Train Accuracy: 0.439, Validation Accuracy: 0.515, Loss: 2.701 Epoch 0 Batch 59/538 - Train Accuracy: 0.469, Validation Accuracy: 0.529, Loss: 2.613 Epoch 0 Batch 60/538 - Train Accuracy: 0.475, Validation Accuracy: 0.527, Loss: 2.561 Epoch 0 Batch 61/538 - Train Accuracy: 0.459, Validation Accuracy: 0.520, Loss: 2.545 Epoch 0 Batch 62/538 - Train Accuracy: 0.499, Validation Accuracy: 0.529, Loss: 2.485 Epoch 0 Batch 63/538 - Train Accuracy: 0.494, Validation Accuracy: 0.531, Loss: 2.403 Epoch 0 Batch 64/538 - Train Accuracy: 0.486, Validation Accuracy: 0.514, Loss: 2.413 Epoch 0 Batch 65/538 - Train Accuracy: 0.459, Validation Accuracy: 0.530, Loss: 2.468 Epoch 0 Batch 66/538 - Train Accuracy: 0.506, Validation Accuracy: 0.538, Loss: 2.306 Epoch 0 Batch 67/538 - Train Accuracy: 0.497, Validation Accuracy: 0.522, Loss: 2.400 Epoch 0 Batch 68/538 - Train Accuracy: 0.505, Validation Accuracy: 0.528, Loss: 2.223 Epoch 0 Batch 69/538 - Train Accuracy: 0.438, Validation Accuracy: 0.493, Loss: 2.302 Epoch 0 Batch 70/538 - Train Accuracy: 0.445, Validation Accuracy: 0.481, Loss: 2.247 Epoch 0 Batch 71/538 - Train Accuracy: 0.426, Validation Accuracy: 0.481, Loss: 2.220 Epoch 0 Batch 72/538 - Train Accuracy: 0.462, 
Validation Accuracy: 0.480, Loss: 2.104 Epoch 0 Batch 73/538 - Train Accuracy: 0.413, Validation Accuracy: 0.474, Loss: 2.194 Epoch 0 Batch 74/538 - Train Accuracy: 0.465, Validation Accuracy: 0.485, Loss: 2.082 Epoch 0 Batch 75/538 - Train Accuracy: 0.445, Validation Accuracy: 0.454, Loss: 2.097 Epoch 0 Batch 76/538 - Train Accuracy: 0.414, Validation Accuracy: 0.457, Loss: 2.105 Epoch 0 Batch 77/538 - Train Accuracy: 0.400, Validation Accuracy: 0.449, Loss: 2.040 Epoch 0 Batch 78/538 - Train Accuracy: 0.437, Validation Accuracy: 0.450, Loss: 2.047 Epoch 0 Batch 79/538 - Train Accuracy: 0.425, Validation Accuracy: 0.453, Loss: 1.973 Epoch 0 Batch 80/538 - Train Accuracy: 0.407, Validation Accuracy: 0.463, Loss: 2.045 Epoch 0 Batch 81/538 - Train Accuracy: 0.402, Validation Accuracy: 0.464, Loss: 1.997 Epoch 0 Batch 82/538 - Train Accuracy: 0.482, Validation Accuracy: 0.504, Loss: 1.970 Epoch 0 Batch 83/538 - Train Accuracy: 0.457, Validation Accuracy: 0.499, Loss: 2.001 Epoch 0 Batch 84/538 - Train Accuracy: 0.500, Validation Accuracy: 0.506, Loss: 1.925 Epoch 0 Batch 85/538 - Train Accuracy: 0.477, Validation Accuracy: 0.484, Loss: 1.847 Epoch 0 Batch 86/538 - Train Accuracy: 0.404, Validation Accuracy: 0.460, Loss: 1.921 Epoch 0 Batch 87/538 - Train Accuracy: 0.415, Validation Accuracy: 0.464, Loss: 1.884 Epoch 0 Batch 88/538 - Train Accuracy: 0.471, Validation Accuracy: 0.514, Loss: 1.894 Epoch 0 Batch 89/538 - Train Accuracy: 0.460, Validation Accuracy: 0.505, Loss: 1.857 Epoch 0 Batch 90/538 - Train Accuracy: 0.504, Validation Accuracy: 0.529, Loss: 1.856 Epoch 0 Batch 91/538 - Train Accuracy: 0.501, Validation Accuracy: 0.544, Loss: 1.859 Epoch 0 Batch 92/538 - Train Accuracy: 0.491, Validation Accuracy: 0.534, Loss: 1.826 Epoch 0 Batch 93/538 - Train Accuracy: 0.502, Validation Accuracy: 0.550, Loss: 1.829 Epoch 0 Batch 94/538 - Train Accuracy: 0.489, Validation Accuracy: 0.540, Loss: 1.856 Epoch 0 Batch 95/538 - Train Accuracy: 0.552, Validation Accuracy: 
0.544, Loss: 1.740 Epoch 0 Batch 96/538 - Train Accuracy: 0.537, Validation Accuracy: 0.543, Loss: 1.752 Epoch 0 Batch 97/538 - Train Accuracy: 0.514, Validation Accuracy: 0.541, Loss: 1.756 Epoch 0 Batch 98/538 - Train Accuracy: 0.526, Validation Accuracy: 0.534, Loss: 1.740 Epoch 0 Batch 99/538 - Train Accuracy: 0.491, Validation Accuracy: 0.544, Loss: 1.777 Epoch 0 Batch 100/538 - Train Accuracy: 0.526, Validation Accuracy: 0.543, Loss: 1.763 Epoch 0 Batch 101/538 - Train Accuracy: 0.508, Validation Accuracy: 0.533, Loss: 1.800 Epoch 0 Batch 102/538 - Train Accuracy: 0.522, Validation Accuracy: 0.553, Loss: 1.769 Epoch 0 Batch 103/538 - Train Accuracy: 0.517, Validation Accuracy: 0.552, Loss: 1.756 Epoch 0 Batch 104/538 - Train Accuracy: 0.539, Validation Accuracy: 0.559, Loss: 1.689 Epoch 0 Batch 105/538 - Train Accuracy: 0.528, Validation Accuracy: 0.556, Loss: 1.722 Epoch 0 Batch 106/538 - Train Accuracy: 0.504, Validation Accuracy: 0.562, Loss: 1.708 Epoch 0 Batch 107/538 - Train Accuracy: 0.486, Validation Accuracy: 0.559, Loss: 1.736 Epoch 0 Batch 108/538 - Train Accuracy: 0.536, Validation Accuracy: 0.567, Loss: 1.669 Epoch 0 Batch 109/538 - Train Accuracy: 0.551, Validation Accuracy: 0.568, Loss: 1.662 Epoch 0 Batch 110/538 - Train Accuracy: 0.526, Validation Accuracy: 0.559, Loss: 1.736 Epoch 0 Batch 111/538 - Train Accuracy: 0.547, Validation Accuracy: 0.548, Loss: 1.650 Epoch 0 Batch 112/538 - Train Accuracy: 0.549, Validation Accuracy: 0.555, Loss: 1.639 Epoch 0 Batch 113/538 - Train Accuracy: 0.528, Validation Accuracy: 0.567, Loss: 1.689 Epoch 0 Batch 114/538 - Train Accuracy: 0.575, Validation Accuracy: 0.566, Loss: 1.665 Epoch 0 Batch 115/538 - Train Accuracy: 0.520, Validation Accuracy: 0.563, Loss: 1.712 Epoch 0 Batch 116/538 - Train Accuracy: 0.529, Validation Accuracy: 0.567, Loss: 1.675 Epoch 0 Batch 117/538 - Train Accuracy: 0.548, Validation Accuracy: 0.572, Loss: 1.623 Epoch 0 Batch 118/538 - Train Accuracy: 0.564, Validation Accuracy: 
0.573, Loss: 1.672 Epoch 0 Batch 119/538 - Train Accuracy: 0.569, Validation Accuracy: 0.577, Loss: 1.600 Epoch 0 Batch 120/538 - Train Accuracy: 0.546, Validation Accuracy: 0.573, Loss: 1.610 Epoch 0 Batch 121/538 - Train Accuracy: 0.548, Validation Accuracy: 0.571, Loss: 1.623 Epoch 0 Batch 122/538 - Train Accuracy: 0.549, Validation Accuracy: 0.579, Loss: 1.623 Epoch 0 Batch 123/538 - Train Accuracy: 0.574, Validation Accuracy: 0.583, Loss: 1.607 Epoch 0 Batch 124/538 - Train Accuracy: 0.552, Validation Accuracy: 0.585, Loss: 1.572 Epoch 0 Batch 125/538 - Train Accuracy: 0.566, Validation Accuracy: 0.578, Loss: 1.648 Epoch 0 Batch 126/538 - Train Accuracy: 0.593, Validation Accuracy: 0.576, Loss: 1.597 Epoch 0 Batch 127/538 - Train Accuracy: 0.482, Validation Accuracy: 0.542, Loss: 1.667 Epoch 0 Batch 128/538 - Train Accuracy: 0.544, Validation Accuracy: 0.563, Loss: 1.603 Epoch 0 Batch 129/538 - Train Accuracy: 0.542, Validation Accuracy: 0.581, Loss: 1.599 Epoch 0 Batch 130/538 - Train Accuracy: 0.556, Validation Accuracy: 0.582, Loss: 1.626 Epoch 0 Batch 131/538 - Train Accuracy: 0.536, Validation Accuracy: 0.586, Loss: 1.595 Epoch 0 Batch 132/538 - Train Accuracy: 0.560, Validation Accuracy: 0.576, Loss: 1.621 Epoch 0 Batch 133/538 - Train Accuracy: 0.579, Validation Accuracy: 0.585, Loss: 1.553 Epoch 0 Batch 134/538 - Train Accuracy: 0.524, Validation Accuracy: 0.586, Loss: 1.653 Epoch 0 Batch 135/538 - Train Accuracy: 0.552, Validation Accuracy: 0.586, Loss: 1.566 Epoch 0 Batch 136/538 - Train Accuracy: 0.559, Validation Accuracy: 0.587, Loss: 1.560 Epoch 0 Batch 137/538 - Train Accuracy: 0.555, Validation Accuracy: 0.579, Loss: 1.538 Epoch 0 Batch 138/538 - Train Accuracy: 0.561, Validation Accuracy: 0.602, Loss: 1.551 Epoch 0 Batch 139/538 - Train Accuracy: 0.566, Validation Accuracy: 0.613, Loss: 1.640 Epoch 0 Batch 140/538 - Train Accuracy: 0.547, Validation Accuracy: 0.609, Loss: 1.671 Epoch 0 Batch 141/538 - Train Accuracy: 0.574, Validation 
Accuracy: 0.603, Loss: 1.601 Epoch 0 Batch 142/538 - Train Accuracy: 0.609, Validation Accuracy: 0.605, Loss: 1.543 Epoch 0 Batch 143/538 - Train Accuracy: 0.558, Validation Accuracy: 0.608, Loss: 1.560 Epoch 0 Batch 144/538 - Train Accuracy: 0.586, Validation Accuracy: 0.612, Loss: 1.656 Epoch 0 Batch 145/538 - Train Accuracy: 0.584, Validation Accuracy: 0.608, Loss: 1.572 Epoch 0 Batch 146/538 - Train Accuracy: 0.600, Validation Accuracy: 0.613, Loss: 1.530 Epoch 0 Batch 147/538 - Train Accuracy: 0.612, Validation Accuracy: 0.606, Loss: 1.570 Epoch 0 Batch 148/538 - Train Accuracy: 0.550, Validation Accuracy: 0.606, Loss: 1.634 Epoch 0 Batch 149/538 - Train Accuracy: 0.576, Validation Accuracy: 0.600, Loss: 1.525 Epoch 0 Batch 150/538 - Train Accuracy: 0.544, Validation Accuracy: 0.587, Loss: 1.575 Epoch 0 Batch 151/538 - Train Accuracy: 0.592, Validation Accuracy: 0.609, Loss: 1.546 Epoch 0 Batch 152/538 - Train Accuracy: 0.613, Validation Accuracy: 0.616, Loss: 1.528 Epoch 0 Batch 153/538 - Train Accuracy: 0.572, Validation Accuracy: 0.606, Loss: 1.535 Epoch 0 Batch 154/538 - Train Accuracy: 0.611, Validation Accuracy: 0.618, Loss: 1.552 Epoch 0 Batch 155/538 - Train Accuracy: 0.619, Validation Accuracy: 0.615, Loss: 1.496 Epoch 0 Batch 156/538 - Train Accuracy: 0.580, Validation Accuracy: 0.617, Loss: 1.535 Epoch 0 Batch 157/538 - Train Accuracy: 0.594, Validation Accuracy: 0.622, Loss: 1.516 Epoch 0 Batch 158/538 - Train Accuracy: 0.571, Validation Accuracy: 0.610, Loss: 1.546 Epoch 0 Batch 159/538 - Train Accuracy: 0.595, Validation Accuracy: 0.613, Loss: 1.564 Epoch 0 Batch 160/538 - Train Accuracy: 0.586, Validation Accuracy: 0.613, Loss: 1.485 Epoch 0 Batch 161/538 - Train Accuracy: 0.606, Validation Accuracy: 0.611, Loss: 1.475 Epoch 0 Batch 162/538 - Train Accuracy: 0.627, Validation Accuracy: 0.620, Loss: 1.528 Epoch 0 Batch 163/538 - Train Accuracy: 0.608, Validation Accuracy: 0.613, Loss: 1.517 Epoch 0 Batch 164/538 - Train Accuracy: 0.596, 
Validation Accuracy: 0.619, Loss: 1.575 Epoch 0 Batch 165/538 - Train Accuracy: 0.616, Validation Accuracy: 0.624, Loss: 1.461 Epoch 0 Batch 166/538 - Train Accuracy: 0.626, Validation Accuracy: 0.623, Loss: 1.489 Epoch 0 Batch 167/538 - Train Accuracy: 0.617, Validation Accuracy: 0.612, Loss: 1.483 Epoch 0 Batch 168/538 - Train Accuracy: 0.602, Validation Accuracy: 0.625, Loss: 1.559 Epoch 0 Batch 169/538 - Train Accuracy: 0.603, Validation Accuracy: 0.621, Loss: 1.567 Epoch 0 Batch 170/538 - Train Accuracy: 0.623, Validation Accuracy: 0.614, Loss: 1.537 Epoch 0 Batch 171/538 - Train Accuracy: 0.588, Validation Accuracy: 0.626, Loss: 1.539 Epoch 0 Batch 172/538 - Train Accuracy: 0.610, Validation Accuracy: 0.624, Loss: 1.454 Epoch 0 Batch 173/538 - Train Accuracy: 0.610, Validation Accuracy: 0.625, Loss: 1.476 Epoch 0 Batch 174/538 - Train Accuracy: 0.583, Validation Accuracy: 0.608, Loss: 1.497 Epoch 0 Batch 175/538 - Train Accuracy: 0.574, Validation Accuracy: 0.615, Loss: 1.501 Epoch 0 Batch 176/538 - Train Accuracy: 0.605, Validation Accuracy: 0.629, Loss: 1.505 Epoch 0 Batch 177/538 - Train Accuracy: 0.613, Validation Accuracy: 0.636, Loss: 1.454 Epoch 0 Batch 178/538 - Train Accuracy: 0.637, Validation Accuracy: 0.633, Loss: 1.475 Epoch 0 Batch 179/538 - Train Accuracy: 0.624, Validation Accuracy: 0.635, Loss: 1.568 Epoch 0 Batch 180/538 - Train Accuracy: 0.658, Validation Accuracy: 0.642, Loss: 1.483 Epoch 0 Batch 181/538 - Train Accuracy: 0.606, Validation Accuracy: 0.635, Loss: 1.500 Epoch 0 Batch 182/538 - Train Accuracy: 0.597, Validation Accuracy: 0.620, Loss: 1.495 Epoch 0 Batch 183/538 - Train Accuracy: 0.647, Validation Accuracy: 0.616, Loss: 1.457 Epoch 0 Batch 184/538 - Train Accuracy: 0.627, Validation Accuracy: 0.611, Loss: 1.426 Epoch 0 Batch 185/538 - Train Accuracy: 0.614, Validation Accuracy: 0.615, Loss: 1.434 Epoch 0 Batch 186/538 - Train Accuracy: 0.616, Validation Accuracy: 0.623, Loss: 1.446 Epoch 0 Batch 187/538 - Train Accuracy: 
0.632, Validation Accuracy: 0.631, Loss: 1.444 Epoch 0 Batch 188/538 - Train Accuracy: 0.612, Validation Accuracy: 0.640, Loss: 1.468 Epoch 0 Batch 189/538 - Train Accuracy: 0.606, Validation Accuracy: 0.632, Loss: 1.491 Epoch 0 Batch 190/538 - Train Accuracy: 0.626, Validation Accuracy: 0.629, Loss: 1.473 Epoch 0 Batch 191/538 - Train Accuracy: 0.622, Validation Accuracy: 0.633, Loss: 1.462 Epoch 0 Batch 192/538 - Train Accuracy: 0.623, Validation Accuracy: 0.637, Loss: 1.452 Epoch 0 Batch 193/538 - Train Accuracy: 0.648, Validation Accuracy: 0.645, Loss: 1.435 Epoch 0 Batch 194/538 - Train Accuracy: 0.611, Validation Accuracy: 0.646, Loss: 1.491 Epoch 0 Batch 195/538 - Train Accuracy: 0.667, Validation Accuracy: 0.655, Loss: 1.456 Epoch 0 Batch 196/538 - Train Accuracy: 0.642, Validation Accuracy: 0.655, Loss: 1.422 Epoch 0 Batch 197/538 - Train Accuracy: 0.651, Validation Accuracy: 0.653, Loss: 1.419 Epoch 0 Batch 198/538 - Train Accuracy: 0.652, Validation Accuracy: 0.654, Loss: 1.445 Epoch 0 Batch 199/538 - Train Accuracy: 0.623, Validation Accuracy: 0.656, Loss: 1.445 Epoch 0 Batch 200/538 - Train Accuracy: 0.654, Validation Accuracy: 0.640, Loss: 1.409 Epoch 0 Batch 201/538 - Train Accuracy: 0.640, Validation Accuracy: 0.641, Loss: 1.466 Epoch 0 Batch 202/538 - Train Accuracy: 0.632, Validation Accuracy: 0.625, Loss: 1.487 Epoch 0 Batch 203/538 - Train Accuracy: 0.618, Validation Accuracy: 0.646, Loss: 1.463 Epoch 0 Batch 204/538 - Train Accuracy: 0.627, Validation Accuracy: 0.661, Loss: 1.497 Epoch 0 Batch 205/538 - Train Accuracy: 0.663, Validation Accuracy: 0.664, Loss: 1.415 Epoch 0 Batch 206/538 - Train Accuracy: 0.608, Validation Accuracy: 0.625, Loss: 1.465 Epoch 0 Batch 207/538 - Train Accuracy: 0.635, Validation Accuracy: 0.646, Loss: 1.409 Epoch 0 Batch 208/538 - Train Accuracy: 0.635, Validation Accuracy: 0.630, Loss: 1.443 Epoch 0 Batch 209/538 - Train Accuracy: 0.636, Validation Accuracy: 0.640, Loss: 1.443 Epoch 0 Batch 210/538 - Train 
Accuracy: 0.627, Validation Accuracy: 0.657, Loss: 1.461 Epoch 0 Batch 211/538 - Train Accuracy: 0.601, Validation Accuracy: 0.655, Loss: 1.450 Epoch 0 Batch 212/538 - Train Accuracy: 0.644, Validation Accuracy: 0.661, Loss: 1.456 Epoch 0 Batch 213/538 - Train Accuracy: 0.644, Validation Accuracy: 0.655, Loss: 1.419 Epoch 0 Batch 214/538 - Train Accuracy: 0.635, Validation Accuracy: 0.652, Loss: 1.450 Epoch 0 Batch 215/538 - Train Accuracy: 0.640, Validation Accuracy: 0.658, Loss: 1.471 Epoch 0 Batch 216/538 - Train Accuracy: 0.620, Validation Accuracy: 0.635, Loss: 1.497 Epoch 0 Batch 217/538 - Train Accuracy: 0.649, Validation Accuracy: 0.638, Loss: 1.389 Epoch 0 Batch 218/538 - Train Accuracy: 0.618, Validation Accuracy: 0.642, Loss: 1.468 Epoch 0 Batch 219/538 - Train Accuracy: 0.624, Validation Accuracy: 0.656, Loss: 1.493 Epoch 0 Batch 220/538 - Train Accuracy: 0.636, Validation Accuracy: 0.657, Loss: 1.395 Epoch 0 Batch 221/538 - Train Accuracy: 0.652, Validation Accuracy: 0.646, Loss: 1.350 Epoch 0 Batch 222/538 - Train Accuracy: 0.652, Validation Accuracy: 0.647, Loss: 1.384 Epoch 0 Batch 223/538 - Train Accuracy: 0.632, Validation Accuracy: 0.647, Loss: 1.450 Epoch 0 Batch 224/538 - Train Accuracy: 0.619, Validation Accuracy: 0.640, Loss: 1.432 Epoch 0 Batch 225/538 - Train Accuracy: 0.649, Validation Accuracy: 0.646, Loss: 1.393 Epoch 0 Batch 226/538 - Train Accuracy: 0.643, Validation Accuracy: 0.658, Loss: 1.398 Epoch 0 Batch 227/538 - Train Accuracy: 0.667, Validation Accuracy: 0.662, Loss: 1.392 Epoch 0 Batch 228/538 - Train Accuracy: 0.643, Validation Accuracy: 0.675, Loss: 1.426 Epoch 0 Batch 229/538 - Train Accuracy: 0.658, Validation Accuracy: 0.669, Loss: 1.433 Epoch 0 Batch 230/538 - Train Accuracy: 0.640, Validation Accuracy: 0.677, Loss: 1.380 Epoch 0 Batch 231/538 - Train Accuracy: 0.641, Validation Accuracy: 0.661, Loss: 1.409 Epoch 0 Batch 232/538 - Train Accuracy: 0.639, Validation Accuracy: 0.661, Loss: 1.436 Epoch 0 Batch 233/538 - 
Train Accuracy: 0.699, Validation Accuracy: 0.675, Loss: 1.349
    Epoch 0 Batch 234/538 - Train Accuracy: 0.634, Validation Accuracy: 0.667, Loss: 1.448
    Epoch 0 Batch 235/538 - Train Accuracy: 0.667, Validation Accuracy: 0.665, Loss: 1.383
    Epoch 0 Batch 236/538 - Train Accuracy: 0.635, Validation Accuracy: 0.665, Loss: 1.462
    ...
    Epoch 0 Batch 534/538 - Train Accuracy: 0.851, Validation Accuracy: 0.847, Loss: 1.076
    Epoch 0 Batch 535/538 - Train Accuracy: 0.861, Validation Accuracy: 0.853, Loss: 1.060
    Epoch 0 Batch 536/538 - Train Accuracy: 0.842, Validation Accuracy: 0.843, Loss: 1.070
    Epoch 1 Batch 0/538 - Train Accuracy: 0.859, Validation Accuracy: 0.859, Loss: 1.076
    Epoch 1 Batch 1/538 - Train Accuracy: 0.862, Validation Accuracy: 0.864, Loss: 1.124
    ...
    Epoch 1 Batch 154/538 - Train Accuracy: 0.895, Validation Accuracy: 0.895, Loss: 0.987
    Epoch 1 Batch 155/538 - Train Accuracy: 0.865, Validation Accuracy: 0.898, Loss: 1.013
    Epoch 1 Batch 156/538
- Train Accuracy: 0.905, Validation Accuracy: 0.902, Loss: 0.985 Epoch 1 Batch 157/538 - Train Accuracy: 0.906, Validation Accuracy: 0.891, Loss: 1.011 Epoch 1 Batch 158/538 - Train Accuracy: 0.899, Validation Accuracy: 0.888, Loss: 1.007 Epoch 1 Batch 159/538 - Train Accuracy: 0.901, Validation Accuracy: 0.878, Loss: 1.015 Epoch 1 Batch 160/538 - Train Accuracy: 0.884, Validation Accuracy: 0.893, Loss: 0.933 Epoch 1 Batch 161/538 - Train Accuracy: 0.898, Validation Accuracy: 0.893, Loss: 0.973 Epoch 1 Batch 162/538 - Train Accuracy: 0.900, Validation Accuracy: 0.887, Loss: 1.016 Epoch 1 Batch 163/538 - Train Accuracy: 0.910, Validation Accuracy: 0.888, Loss: 1.003 Epoch 1 Batch 164/538 - Train Accuracy: 0.886, Validation Accuracy: 0.887, Loss: 0.995 Epoch 1 Batch 165/538 - Train Accuracy: 0.905, Validation Accuracy: 0.887, Loss: 0.997 Epoch 1 Batch 166/538 - Train Accuracy: 0.904, Validation Accuracy: 0.880, Loss: 0.986 Epoch 1 Batch 167/538 - Train Accuracy: 0.892, Validation Accuracy: 0.871, Loss: 0.973 Epoch 1 Batch 168/538 - Train Accuracy: 0.872, Validation Accuracy: 0.865, Loss: 0.974 Epoch 1 Batch 169/538 - Train Accuracy: 0.912, Validation Accuracy: 0.874, Loss: 0.993 Epoch 1 Batch 170/538 - Train Accuracy: 0.880, Validation Accuracy: 0.877, Loss: 0.974 Epoch 1 Batch 171/538 - Train Accuracy: 0.887, Validation Accuracy: 0.882, Loss: 1.011 Epoch 1 Batch 172/538 - Train Accuracy: 0.887, Validation Accuracy: 0.883, Loss: 0.960 Epoch 1 Batch 173/538 - Train Accuracy: 0.903, Validation Accuracy: 0.879, Loss: 0.982 Epoch 1 Batch 174/538 - Train Accuracy: 0.887, Validation Accuracy: 0.873, Loss: 0.996 Epoch 1 Batch 175/538 - Train Accuracy: 0.908, Validation Accuracy: 0.881, Loss: 0.974 Epoch 1 Batch 176/538 - Train Accuracy: 0.877, Validation Accuracy: 0.897, Loss: 1.036 Epoch 1 Batch 177/538 - Train Accuracy: 0.897, Validation Accuracy: 0.894, Loss: 0.990 Epoch 1 Batch 178/538 - Train Accuracy: 0.875, Validation Accuracy: 0.893, Loss: 0.992 Epoch 1 Batch 
179/538 - Train Accuracy: 0.907, Validation Accuracy: 0.900, Loss: 1.001 Epoch 1 Batch 180/538 - Train Accuracy: 0.915, Validation Accuracy: 0.898, Loss: 0.963 Epoch 1 Batch 181/538 - Train Accuracy: 0.879, Validation Accuracy: 0.902, Loss: 0.970 Epoch 1 Batch 182/538 - Train Accuracy: 0.919, Validation Accuracy: 0.901, Loss: 0.974 Epoch 1 Batch 183/538 - Train Accuracy: 0.914, Validation Accuracy: 0.889, Loss: 0.965 Epoch 1 Batch 184/538 - Train Accuracy: 0.901, Validation Accuracy: 0.893, Loss: 0.993 Epoch 1 Batch 185/538 - Train Accuracy: 0.917, Validation Accuracy: 0.889, Loss: 0.963 Epoch 1 Batch 186/538 - Train Accuracy: 0.892, Validation Accuracy: 0.884, Loss: 0.992 Epoch 1 Batch 187/538 - Train Accuracy: 0.905, Validation Accuracy: 0.879, Loss: 0.971 Epoch 1 Batch 188/538 - Train Accuracy: 0.904, Validation Accuracy: 0.882, Loss: 1.017 Epoch 1 Batch 189/538 - Train Accuracy: 0.911, Validation Accuracy: 0.875, Loss: 0.996 Epoch 1 Batch 190/538 - Train Accuracy: 0.896, Validation Accuracy: 0.879, Loss: 1.034 Epoch 1 Batch 191/538 - Train Accuracy: 0.906, Validation Accuracy: 0.888, Loss: 1.027 Epoch 1 Batch 192/538 - Train Accuracy: 0.900, Validation Accuracy: 0.893, Loss: 0.986 Epoch 1 Batch 193/538 - Train Accuracy: 0.900, Validation Accuracy: 0.897, Loss: 0.980 Epoch 1 Batch 194/538 - Train Accuracy: 0.868, Validation Accuracy: 0.903, Loss: 1.020 Epoch 1 Batch 195/538 - Train Accuracy: 0.896, Validation Accuracy: 0.900, Loss: 0.982 Epoch 1 Batch 196/538 - Train Accuracy: 0.887, Validation Accuracy: 0.897, Loss: 0.980 Epoch 1 Batch 197/538 - Train Accuracy: 0.910, Validation Accuracy: 0.887, Loss: 0.980 Epoch 1 Batch 198/538 - Train Accuracy: 0.897, Validation Accuracy: 0.892, Loss: 0.955 Epoch 1 Batch 199/538 - Train Accuracy: 0.898, Validation Accuracy: 0.892, Loss: 0.957 Epoch 1 Batch 200/538 - Train Accuracy: 0.887, Validation Accuracy: 0.900, Loss: 1.015 Epoch 1 Batch 201/538 - Train Accuracy: 0.888, Validation Accuracy: 0.892, Loss: 0.949 Epoch 1 
Batch 202/538 - Train Accuracy: 0.914, Validation Accuracy: 0.893, Loss: 0.983 Epoch 1 Batch 203/538 - Train Accuracy: 0.887, Validation Accuracy: 0.890, Loss: 0.993 Epoch 1 Batch 204/538 - Train Accuracy: 0.886, Validation Accuracy: 0.892, Loss: 0.993 Epoch 1 Batch 205/538 - Train Accuracy: 0.912, Validation Accuracy: 0.894, Loss: 0.934 Epoch 1 Batch 206/538 - Train Accuracy: 0.883, Validation Accuracy: 0.893, Loss: 0.959 Epoch 1 Batch 207/538 - Train Accuracy: 0.896, Validation Accuracy: 0.894, Loss: 0.965 Epoch 1 Batch 208/538 - Train Accuracy: 0.898, Validation Accuracy: 0.887, Loss: 1.009 Epoch 1 Batch 209/538 - Train Accuracy: 0.919, Validation Accuracy: 0.887, Loss: 0.944 Epoch 1 Batch 210/538 - Train Accuracy: 0.894, Validation Accuracy: 0.882, Loss: 1.005 Epoch 1 Batch 211/538 - Train Accuracy: 0.907, Validation Accuracy: 0.882, Loss: 1.010 Epoch 1 Batch 212/538 - Train Accuracy: 0.891, Validation Accuracy: 0.879, Loss: 1.001 Epoch 1 Batch 213/538 - Train Accuracy: 0.916, Validation Accuracy: 0.881, Loss: 0.983 Epoch 1 Batch 214/538 - Train Accuracy: 0.904, Validation Accuracy: 0.884, Loss: 0.948 Epoch 1 Batch 215/538 - Train Accuracy: 0.912, Validation Accuracy: 0.881, Loss: 0.984 Epoch 1 Batch 216/538 - Train Accuracy: 0.887, Validation Accuracy: 0.884, Loss: 0.951 Epoch 1 Batch 217/538 - Train Accuracy: 0.898, Validation Accuracy: 0.884, Loss: 0.953 Epoch 1 Batch 218/538 - Train Accuracy: 0.902, Validation Accuracy: 0.879, Loss: 0.964 Epoch 1 Batch 219/538 - Train Accuracy: 0.868, Validation Accuracy: 0.887, Loss: 0.962 Epoch 1 Batch 220/538 - Train Accuracy: 0.892, Validation Accuracy: 0.887, Loss: 0.954 Epoch 1 Batch 221/538 - Train Accuracy: 0.918, Validation Accuracy: 0.887, Loss: 0.990 Epoch 1 Batch 222/538 - Train Accuracy: 0.889, Validation Accuracy: 0.896, Loss: 0.937 Epoch 1 Batch 223/538 - Train Accuracy: 0.891, Validation Accuracy: 0.885, Loss: 1.016 Epoch 1 Batch 224/538 - Train Accuracy: 0.894, Validation Accuracy: 0.884, Loss: 0.970 Epoch 
1 Batch 225/538 - Train Accuracy: 0.914, Validation Accuracy: 0.887, Loss: 1.010 Epoch 1 Batch 226/538 - Train Accuracy: 0.909, Validation Accuracy: 0.898, Loss: 0.967 Epoch 1 Batch 227/538 - Train Accuracy: 0.922, Validation Accuracy: 0.893, Loss: 0.944 Epoch 1 Batch 228/538 - Train Accuracy: 0.891, Validation Accuracy: 0.897, Loss: 0.965 Epoch 1 Batch 229/538 - Train Accuracy: 0.911, Validation Accuracy: 0.904, Loss: 0.959 Epoch 1 Batch 230/538 - Train Accuracy: 0.891, Validation Accuracy: 0.894, Loss: 0.962 Epoch 1 Batch 231/538 - Train Accuracy: 0.894, Validation Accuracy: 0.892, Loss: 1.000 Epoch 1 Batch 232/538 - Train Accuracy: 0.902, Validation Accuracy: 0.902, Loss: 0.971 Epoch 1 Batch 233/538 - Train Accuracy: 0.911, Validation Accuracy: 0.899, Loss: 0.948 Epoch 1 Batch 234/538 - Train Accuracy: 0.905, Validation Accuracy: 0.894, Loss: 0.982 Epoch 1 Batch 235/538 - Train Accuracy: 0.903, Validation Accuracy: 0.893, Loss: 0.973 Epoch 1 Batch 236/538 - Train Accuracy: 0.886, Validation Accuracy: 0.879, Loss: 1.004 Epoch 1 Batch 237/538 - Train Accuracy: 0.911, Validation Accuracy: 0.888, Loss: 0.948 Epoch 1 Batch 238/538 - Train Accuracy: 0.937, Validation Accuracy: 0.895, Loss: 0.986 Epoch 1 Batch 239/538 - Train Accuracy: 0.895, Validation Accuracy: 0.908, Loss: 0.968 Epoch 1 Batch 240/538 - Train Accuracy: 0.920, Validation Accuracy: 0.910, Loss: 0.949 Epoch 1 Batch 241/538 - Train Accuracy: 0.896, Validation Accuracy: 0.912, Loss: 0.961 Epoch 1 Batch 242/538 - Train Accuracy: 0.920, Validation Accuracy: 0.904, Loss: 0.965 Epoch 1 Batch 243/538 - Train Accuracy: 0.921, Validation Accuracy: 0.898, Loss: 0.971 Epoch 1 Batch 244/538 - Train Accuracy: 0.903, Validation Accuracy: 0.896, Loss: 0.980 Epoch 1 Batch 245/538 - Train Accuracy: 0.904, Validation Accuracy: 0.900, Loss: 0.958 Epoch 1 Batch 246/538 - Train Accuracy: 0.906, Validation Accuracy: 0.900, Loss: 0.986 Epoch 1 Batch 247/538 - Train Accuracy: 0.885, Validation Accuracy: 0.891, Loss: 0.918 
Epoch 1 Batch 248/538 - Train Accuracy: 0.910, Validation Accuracy: 0.898, Loss: 1.000 Epoch 1 Batch 249/538 - Train Accuracy: 0.905, Validation Accuracy: 0.902, Loss: 0.988 Epoch 1 Batch 250/538 - Train Accuracy: 0.912, Validation Accuracy: 0.904, Loss: 0.979 Epoch 1 Batch 251/538 - Train Accuracy: 0.910, Validation Accuracy: 0.896, Loss: 0.986 Epoch 1 Batch 252/538 - Train Accuracy: 0.920, Validation Accuracy: 0.897, Loss: 0.964 Epoch 1 Batch 253/538 - Train Accuracy: 0.892, Validation Accuracy: 0.894, Loss: 0.967 Epoch 1 Batch 254/538 - Train Accuracy: 0.879, Validation Accuracy: 0.895, Loss: 0.984 Epoch 1 Batch 255/538 - Train Accuracy: 0.900, Validation Accuracy: 0.898, Loss: 0.930 Epoch 1 Batch 256/538 - Train Accuracy: 0.882, Validation Accuracy: 0.895, Loss: 0.968 Epoch 1 Batch 257/538 - Train Accuracy: 0.914, Validation Accuracy: 0.896, Loss: 0.942 Epoch 1 Batch 258/538 - Train Accuracy: 0.908, Validation Accuracy: 0.892, Loss: 0.961 Epoch 1 Batch 259/538 - Train Accuracy: 0.918, Validation Accuracy: 0.894, Loss: 0.977 Epoch 1 Batch 260/538 - Train Accuracy: 0.887, Validation Accuracy: 0.903, Loss: 1.001 Epoch 1 Batch 261/538 - Train Accuracy: 0.899, Validation Accuracy: 0.899, Loss: 0.970 Epoch 1 Batch 262/538 - Train Accuracy: 0.918, Validation Accuracy: 0.894, Loss: 0.978 Epoch 1 Batch 263/538 - Train Accuracy: 0.903, Validation Accuracy: 0.903, Loss: 0.984 Epoch 1 Batch 264/538 - Train Accuracy: 0.892, Validation Accuracy: 0.911, Loss: 0.997 Epoch 1 Batch 265/538 - Train Accuracy: 0.911, Validation Accuracy: 0.905, Loss: 0.945 Epoch 1 Batch 266/538 - Train Accuracy: 0.900, Validation Accuracy: 0.897, Loss: 0.985 Epoch 1 Batch 267/538 - Train Accuracy: 0.901, Validation Accuracy: 0.898, Loss: 0.932 Epoch 1 Batch 268/538 - Train Accuracy: 0.930, Validation Accuracy: 0.904, Loss: 0.925 Epoch 1 Batch 269/538 - Train Accuracy: 0.907, Validation Accuracy: 0.914, Loss: 0.973 Epoch 1 Batch 270/538 - Train Accuracy: 0.911, Validation Accuracy: 0.912, Loss: 
0.978 Epoch 1 Batch 271/538 - Train Accuracy: 0.911, Validation Accuracy: 0.909, Loss: 0.925 Epoch 1 Batch 272/538 - Train Accuracy: 0.893, Validation Accuracy: 0.903, Loss: 0.968 Epoch 1 Batch 273/538 - Train Accuracy: 0.903, Validation Accuracy: 0.900, Loss: 0.980 Epoch 1 Batch 274/538 - Train Accuracy: 0.875, Validation Accuracy: 0.898, Loss: 0.975 Epoch 1 Batch 275/538 - Train Accuracy: 0.890, Validation Accuracy: 0.886, Loss: 0.956 Epoch 1 Batch 276/538 - Train Accuracy: 0.905, Validation Accuracy: 0.893, Loss: 0.942 Epoch 1 Batch 277/538 - Train Accuracy: 0.909, Validation Accuracy: 0.889, Loss: 0.979 Epoch 1 Batch 278/538 - Train Accuracy: 0.912, Validation Accuracy: 0.888, Loss: 0.959 Epoch 1 Batch 279/538 - Train Accuracy: 0.925, Validation Accuracy: 0.891, Loss: 0.938 Epoch 1 Batch 280/538 - Train Accuracy: 0.924, Validation Accuracy: 0.890, Loss: 0.978 Epoch 1 Batch 281/538 - Train Accuracy: 0.923, Validation Accuracy: 0.886, Loss: 0.990 Epoch 1 Batch 282/538 - Train Accuracy: 0.913, Validation Accuracy: 0.891, Loss: 1.023 Epoch 1 Batch 283/538 - Train Accuracy: 0.905, Validation Accuracy: 0.890, Loss: 0.940 Epoch 1 Batch 284/538 - Train Accuracy: 0.911, Validation Accuracy: 0.893, Loss: 0.951 Epoch 1 Batch 285/538 - Train Accuracy: 0.904, Validation Accuracy: 0.898, Loss: 0.918 Epoch 1 Batch 286/538 - Train Accuracy: 0.900, Validation Accuracy: 0.901, Loss: 0.957 Epoch 1 Batch 287/538 - Train Accuracy: 0.936, Validation Accuracy: 0.913, Loss: 0.949 Epoch 1 Batch 288/538 - Train Accuracy: 0.915, Validation Accuracy: 0.910, Loss: 0.958 Epoch 1 Batch 289/538 - Train Accuracy: 0.914, Validation Accuracy: 0.911, Loss: 0.935 Epoch 1 Batch 290/538 - Train Accuracy: 0.921, Validation Accuracy: 0.893, Loss: 0.952 Epoch 1 Batch 291/538 - Train Accuracy: 0.918, Validation Accuracy: 0.890, Loss: 0.971 Epoch 1 Batch 292/538 - Train Accuracy: 0.919, Validation Accuracy: 0.892, Loss: 0.923 Epoch 1 Batch 293/538 - Train Accuracy: 0.909, Validation Accuracy: 0.901, 
Loss: 0.936 Epoch 1 Batch 294/538 - Train Accuracy: 0.899, Validation Accuracy: 0.900, Loss: 0.946 Epoch 1 Batch 295/538 - Train Accuracy: 0.903, Validation Accuracy: 0.898, Loss: 0.982 Epoch 1 Batch 296/538 - Train Accuracy: 0.902, Validation Accuracy: 0.897, Loss: 0.980 Epoch 1 Batch 297/538 - Train Accuracy: 0.925, Validation Accuracy: 0.904, Loss: 0.983 Epoch 1 Batch 298/538 - Train Accuracy: 0.906, Validation Accuracy: 0.913, Loss: 0.966 Epoch 1 Batch 299/538 - Train Accuracy: 0.903, Validation Accuracy: 0.911, Loss: 0.988 Epoch 1 Batch 300/538 - Train Accuracy: 0.906, Validation Accuracy: 0.913, Loss: 0.945 Epoch 1 Batch 301/538 - Train Accuracy: 0.907, Validation Accuracy: 0.920, Loss: 0.968 Epoch 1 Batch 302/538 - Train Accuracy: 0.916, Validation Accuracy: 0.909, Loss: 1.025 Epoch 1 Batch 303/538 - Train Accuracy: 0.926, Validation Accuracy: 0.898, Loss: 0.910 Epoch 1 Batch 304/538 - Train Accuracy: 0.911, Validation Accuracy: 0.897, Loss: 0.957 Epoch 1 Batch 305/538 - Train Accuracy: 0.916, Validation Accuracy: 0.902, Loss: 0.924 Epoch 1 Batch 306/538 - Train Accuracy: 0.916, Validation Accuracy: 0.903, Loss: 0.929 Epoch 1 Batch 307/538 - Train Accuracy: 0.933, Validation Accuracy: 0.903, Loss: 0.925 Epoch 1 Batch 308/538 - Train Accuracy: 0.930, Validation Accuracy: 0.901, Loss: 0.961 Epoch 1 Batch 309/538 - Train Accuracy: 0.916, Validation Accuracy: 0.894, Loss: 0.921 Epoch 1 Batch 310/538 - Train Accuracy: 0.941, Validation Accuracy: 0.900, Loss: 0.933 Epoch 1 Batch 311/538 - Train Accuracy: 0.905, Validation Accuracy: 0.896, Loss: 0.976 Epoch 1 Batch 312/538 - Train Accuracy: 0.911, Validation Accuracy: 0.900, Loss: 0.979 Epoch 1 Batch 313/538 - Train Accuracy: 0.923, Validation Accuracy: 0.903, Loss: 0.962 Epoch 1 Batch 314/538 - Train Accuracy: 0.913, Validation Accuracy: 0.906, Loss: 0.947 Epoch 1 Batch 315/538 - Train Accuracy: 0.920, Validation Accuracy: 0.902, Loss: 0.909 Epoch 1 Batch 316/538 - Train Accuracy: 0.914, Validation Accuracy: 
0.901, Loss: 0.938 Epoch 1 Batch 317/538 - Train Accuracy: 0.915, Validation Accuracy: 0.905, Loss: 0.965 Epoch 1 Batch 318/538 - Train Accuracy: 0.895, Validation Accuracy: 0.910, Loss: 0.937 Epoch 1 Batch 319/538 - Train Accuracy: 0.910, Validation Accuracy: 0.913, Loss: 0.969 Epoch 1 Batch 320/538 - Train Accuracy: 0.916, Validation Accuracy: 0.910, Loss: 0.996 Epoch 1 Batch 321/538 - Train Accuracy: 0.900, Validation Accuracy: 0.903, Loss: 0.918 Epoch 1 Batch 322/538 - Train Accuracy: 0.915, Validation Accuracy: 0.902, Loss: 0.972 Epoch 1 Batch 323/538 - Train Accuracy: 0.928, Validation Accuracy: 0.916, Loss: 0.949 Epoch 1 Batch 324/538 - Train Accuracy: 0.911, Validation Accuracy: 0.910, Loss: 0.982 Epoch 1 Batch 325/538 - Train Accuracy: 0.919, Validation Accuracy: 0.908, Loss: 0.967 Epoch 1 Batch 326/538 - Train Accuracy: 0.926, Validation Accuracy: 0.906, Loss: 0.921 Epoch 1 Batch 327/538 - Train Accuracy: 0.912, Validation Accuracy: 0.912, Loss: 0.987 Epoch 1 Batch 328/538 - Train Accuracy: 0.926, Validation Accuracy: 0.924, Loss: 0.943 Epoch 1 Batch 329/538 - Train Accuracy: 0.933, Validation Accuracy: 0.909, Loss: 0.915 Epoch 1 Batch 330/538 - Train Accuracy: 0.925, Validation Accuracy: 0.900, Loss: 0.891 Epoch 1 Batch 331/538 - Train Accuracy: 0.927, Validation Accuracy: 0.901, Loss: 0.950 Epoch 1 Batch 332/538 - Train Accuracy: 0.911, Validation Accuracy: 0.898, Loss: 0.948 Epoch 1 Batch 333/538 - Train Accuracy: 0.923, Validation Accuracy: 0.897, Loss: 0.914 Epoch 1 Batch 334/538 - Train Accuracy: 0.924, Validation Accuracy: 0.892, Loss: 0.958 Epoch 1 Batch 335/538 - Train Accuracy: 0.918, Validation Accuracy: 0.910, Loss: 0.930 Epoch 1 Batch 336/538 - Train Accuracy: 0.919, Validation Accuracy: 0.912, Loss: 0.931 Epoch 1 Batch 337/538 - Train Accuracy: 0.918, Validation Accuracy: 0.915, Loss: 0.981 Epoch 1 Batch 338/538 - Train Accuracy: 0.896, Validation Accuracy: 0.920, Loss: 0.962 Epoch 1 Batch 339/538 - Train Accuracy: 0.915, Validation 
Accuracy: 0.912, Loss: 0.953 Epoch 1 Batch 340/538 - Train Accuracy: 0.910, Validation Accuracy: 0.916, Loss: 0.914 Epoch 1 Batch 341/538 - Train Accuracy: 0.919, Validation Accuracy: 0.912, Loss: 0.936 Epoch 1 Batch 342/538 - Train Accuracy: 0.911, Validation Accuracy: 0.918, Loss: 0.937 Epoch 1 Batch 343/538 - Train Accuracy: 0.926, Validation Accuracy: 0.915, Loss: 0.941 Epoch 1 Batch 344/538 - Train Accuracy: 0.931, Validation Accuracy: 0.909, Loss: 0.923 Epoch 1 Batch 345/538 - Train Accuracy: 0.913, Validation Accuracy: 0.911, Loss: 0.963 Epoch 1 Batch 346/538 - Train Accuracy: 0.897, Validation Accuracy: 0.916, Loss: 0.960 Epoch 1 Batch 347/538 - Train Accuracy: 0.918, Validation Accuracy: 0.915, Loss: 0.900 Epoch 1 Batch 348/538 - Train Accuracy: 0.916, Validation Accuracy: 0.914, Loss: 0.906 Epoch 1 Batch 349/538 - Train Accuracy: 0.922, Validation Accuracy: 0.909, Loss: 0.972 Epoch 1 Batch 350/538 - Train Accuracy: 0.909, Validation Accuracy: 0.905, Loss: 0.962 Epoch 1 Batch 351/538 - Train Accuracy: 0.908, Validation Accuracy: 0.906, Loss: 0.941 Epoch 1 Batch 352/538 - Train Accuracy: 0.904, Validation Accuracy: 0.918, Loss: 0.941 Epoch 1 Batch 353/538 - Train Accuracy: 0.888, Validation Accuracy: 0.919, Loss: 0.974 Epoch 1 Batch 354/538 - Train Accuracy: 0.912, Validation Accuracy: 0.921, Loss: 0.977 Epoch 1 Batch 355/538 - Train Accuracy: 0.924, Validation Accuracy: 0.922, Loss: 0.992 Epoch 1 Batch 356/538 - Train Accuracy: 0.925, Validation Accuracy: 0.919, Loss: 0.972 Epoch 1 Batch 357/538 - Train Accuracy: 0.921, Validation Accuracy: 0.915, Loss: 0.967 Epoch 1 Batch 358/538 - Train Accuracy: 0.922, Validation Accuracy: 0.926, Loss: 0.930 Epoch 1 Batch 359/538 - Train Accuracy: 0.911, Validation Accuracy: 0.928, Loss: 0.954 Epoch 1 Batch 360/538 - Train Accuracy: 0.909, Validation Accuracy: 0.925, Loss: 0.916 Epoch 1 Batch 361/538 - Train Accuracy: 0.928, Validation Accuracy: 0.923, Loss: 0.955 Epoch 1 Batch 362/538 - Train Accuracy: 0.933, 
Validation Accuracy: 0.924, Loss: 0.931 Epoch 1 Batch 363/538 - Train Accuracy: 0.914, Validation Accuracy: 0.917, Loss: 0.931 Epoch 1 Batch 364/538 - Train Accuracy: 0.908, Validation Accuracy: 0.920, Loss: 0.970 Epoch 1 Batch 365/538 - Train Accuracy: 0.914, Validation Accuracy: 0.922, Loss: 0.998 Epoch 1 Batch 366/538 - Train Accuracy: 0.932, Validation Accuracy: 0.918, Loss: 0.952 Epoch 1 Batch 367/538 - Train Accuracy: 0.941, Validation Accuracy: 0.928, Loss: 0.875 Epoch 1 Batch 368/538 - Train Accuracy: 0.935, Validation Accuracy: 0.918, Loss: 0.905 Epoch 1 Batch 369/538 - Train Accuracy: 0.933, Validation Accuracy: 0.921, Loss: 0.944 Epoch 1 Batch 370/538 - Train Accuracy: 0.918, Validation Accuracy: 0.922, Loss: 0.980 Epoch 1 Batch 371/538 - Train Accuracy: 0.928, Validation Accuracy: 0.920, Loss: 0.932 Epoch 1 Batch 372/538 - Train Accuracy: 0.923, Validation Accuracy: 0.919, Loss: 0.965 Epoch 1 Batch 373/538 - Train Accuracy: 0.913, Validation Accuracy: 0.919, Loss: 0.914 Epoch 1 Batch 374/538 - Train Accuracy: 0.916, Validation Accuracy: 0.917, Loss: 0.929 Epoch 1 Batch 375/538 - Train Accuracy: 0.916, Validation Accuracy: 0.921, Loss: 0.921 Epoch 1 Batch 376/538 - Train Accuracy: 0.916, Validation Accuracy: 0.918, Loss: 0.938 Epoch 1 Batch 377/538 - Train Accuracy: 0.932, Validation Accuracy: 0.917, Loss: 0.918 Epoch 1 Batch 378/538 - Train Accuracy: 0.921, Validation Accuracy: 0.915, Loss: 0.915 Epoch 1 Batch 379/538 - Train Accuracy: 0.916, Validation Accuracy: 0.908, Loss: 0.934 Epoch 1 Batch 380/538 - Train Accuracy: 0.909, Validation Accuracy: 0.918, Loss: 0.925 Epoch 1 Batch 381/538 - Train Accuracy: 0.922, Validation Accuracy: 0.919, Loss: 0.940 Epoch 1 Batch 382/538 - Train Accuracy: 0.913, Validation Accuracy: 0.924, Loss: 0.922 Epoch 1 Batch 383/538 - Train Accuracy: 0.930, Validation Accuracy: 0.926, Loss: 0.946 Epoch 1 Batch 384/538 - Train Accuracy: 0.906, Validation Accuracy: 0.931, Loss: 0.986 Epoch 1 Batch 385/538 - Train Accuracy: 
0.931, Validation Accuracy: 0.929, Loss: 0.936 Epoch 1 Batch 386/538 - Train Accuracy: 0.925, Validation Accuracy: 0.920, Loss: 0.919 Epoch 1 Batch 387/538 - Train Accuracy: 0.914, Validation Accuracy: 0.921, Loss: 0.954 Epoch 1 Batch 388/538 - Train Accuracy: 0.923, Validation Accuracy: 0.927, Loss: 0.956 Epoch 1 Batch 389/538 - Train Accuracy: 0.868, Validation Accuracy: 0.930, Loss: 0.933 Epoch 1 Batch 390/538 - Train Accuracy: 0.920, Validation Accuracy: 0.919, Loss: 0.928 Epoch 1 Batch 391/538 - Train Accuracy: 0.928, Validation Accuracy: 0.915, Loss: 0.954 Epoch 1 Batch 392/538 - Train Accuracy: 0.895, Validation Accuracy: 0.912, Loss: 0.936 Epoch 1 Batch 393/538 - Train Accuracy: 0.921, Validation Accuracy: 0.913, Loss: 0.895 Epoch 1 Batch 394/538 - Train Accuracy: 0.900, Validation Accuracy: 0.908, Loss: 0.959 Epoch 1 Batch 395/538 - Train Accuracy: 0.913, Validation Accuracy: 0.912, Loss: 0.918 Epoch 1 Batch 396/538 - Train Accuracy: 0.906, Validation Accuracy: 0.909, Loss: 0.942 Epoch 1 Batch 397/538 - Train Accuracy: 0.929, Validation Accuracy: 0.904, Loss: 0.942 Epoch 1 Batch 398/538 - Train Accuracy: 0.931, Validation Accuracy: 0.910, Loss: 0.962 Epoch 1 Batch 399/538 - Train Accuracy: 0.887, Validation Accuracy: 0.909, Loss: 0.965 Epoch 1 Batch 400/538 - Train Accuracy: 0.909, Validation Accuracy: 0.915, Loss: 0.980 Epoch 1 Batch 401/538 - Train Accuracy: 0.931, Validation Accuracy: 0.914, Loss: 0.951 Epoch 1 Batch 402/538 - Train Accuracy: 0.921, Validation Accuracy: 0.911, Loss: 0.953 Epoch 1 Batch 403/538 - Train Accuracy: 0.936, Validation Accuracy: 0.902, Loss: 0.943 Epoch 1 Batch 404/538 - Train Accuracy: 0.907, Validation Accuracy: 0.901, Loss: 0.952 Epoch 1 Batch 405/538 - Train Accuracy: 0.923, Validation Accuracy: 0.905, Loss: 0.950 Epoch 1 Batch 406/538 - Train Accuracy: 0.900, Validation Accuracy: 0.907, Loss: 0.924 Epoch 1 Batch 407/538 - Train Accuracy: 0.932, Validation Accuracy: 0.910, Loss: 0.961 Epoch 1 Batch 408/538 - Train 
Accuracy: 0.929, Validation Accuracy: 0.915, Loss: 0.923 Epoch 1 Batch 409/538 - Train Accuracy: 0.903, Validation Accuracy: 0.914, Loss: 0.932 Epoch 1 Batch 410/538 - Train Accuracy: 0.935, Validation Accuracy: 0.902, Loss: 0.992 Epoch 1 Batch 411/538 - Train Accuracy: 0.928, Validation Accuracy: 0.907, Loss: 0.927 Epoch 1 Batch 412/538 - Train Accuracy: 0.921, Validation Accuracy: 0.901, Loss: 0.878 Epoch 1 Batch 413/538 - Train Accuracy: 0.903, Validation Accuracy: 0.900, Loss: 0.942 Epoch 1 Batch 414/538 - Train Accuracy: 0.866, Validation Accuracy: 0.903, Loss: 0.972 Epoch 1 Batch 415/538 - Train Accuracy: 0.904, Validation Accuracy: 0.908, Loss: 0.945 Epoch 1 Batch 416/538 - Train Accuracy: 0.928, Validation Accuracy: 0.909, Loss: 0.926 Epoch 1 Batch 417/538 - Train Accuracy: 0.939, Validation Accuracy: 0.911, Loss: 0.953 Epoch 1 Batch 418/538 - Train Accuracy: 0.916, Validation Accuracy: 0.915, Loss: 0.903 Epoch 1 Batch 419/538 - Train Accuracy: 0.923, Validation Accuracy: 0.917, Loss: 0.907 Epoch 1 Batch 420/538 - Train Accuracy: 0.936, Validation Accuracy: 0.916, Loss: 0.961 Epoch 1 Batch 421/538 - Train Accuracy: 0.919, Validation Accuracy: 0.914, Loss: 0.887 Epoch 1 Batch 422/538 - Train Accuracy: 0.912, Validation Accuracy: 0.913, Loss: 0.962 Epoch 1 Batch 423/538 - Train Accuracy: 0.917, Validation Accuracy: 0.911, Loss: 0.953 Epoch 1 Batch 424/538 - Train Accuracy: 0.911, Validation Accuracy: 0.913, Loss: 0.954 Epoch 1 Batch 425/538 - Train Accuracy: 0.911, Validation Accuracy: 0.914, Loss: 0.979 Epoch 1 Batch 426/538 - Train Accuracy: 0.939, Validation Accuracy: 0.913, Loss: 0.927 Epoch 1 Batch 427/538 - Train Accuracy: 0.912, Validation Accuracy: 0.911, Loss: 0.938 Epoch 1 Batch 428/538 - Train Accuracy: 0.935, Validation Accuracy: 0.908, Loss: 0.938 Epoch 1 Batch 429/538 - Train Accuracy: 0.935, Validation Accuracy: 0.912, Loss: 0.919 Epoch 1 Batch 430/538 - Train Accuracy: 0.919, Validation Accuracy: 0.911, Loss: 0.958 Epoch 1 Batch 431/538 - 
Train Accuracy: 0.923, Validation Accuracy: 0.911, Loss: 0.909 Epoch 1 Batch 432/538 - Train Accuracy: 0.916, Validation Accuracy: 0.904, Loss: 0.938 Epoch 1 Batch 433/538 - Train Accuracy: 0.916, Validation Accuracy: 0.897, Loss: 0.926 Epoch 1 Batch 434/538 - Train Accuracy: 0.914, Validation Accuracy: 0.898, Loss: 0.942 Epoch 1 Batch 435/538 - Train Accuracy: 0.922, Validation Accuracy: 0.895, Loss: 0.964 Epoch 1 Batch 436/538 - Train Accuracy: 0.911, Validation Accuracy: 0.894, Loss: 0.966 Epoch 1 Batch 437/538 - Train Accuracy: 0.921, Validation Accuracy: 0.892, Loss: 0.949 Epoch 1 Batch 438/538 - Train Accuracy: 0.929, Validation Accuracy: 0.894, Loss: 0.961 Epoch 1 Batch 439/538 - Train Accuracy: 0.939, Validation Accuracy: 0.903, Loss: 0.915 Epoch 1 Batch 440/538 - Train Accuracy: 0.930, Validation Accuracy: 0.904, Loss: 0.952 Epoch 1 Batch 441/538 - Train Accuracy: 0.920, Validation Accuracy: 0.903, Loss: 0.949 Epoch 1 Batch 442/538 - Train Accuracy: 0.911, Validation Accuracy: 0.903, Loss: 0.921 Epoch 1 Batch 443/538 - Train Accuracy: 0.908, Validation Accuracy: 0.906, Loss: 0.947 Epoch 1 Batch 444/538 - Train Accuracy: 0.923, Validation Accuracy: 0.904, Loss: 0.974 Epoch 1 Batch 445/538 - Train Accuracy: 0.935, Validation Accuracy: 0.900, Loss: 0.929 Epoch 1 Batch 446/538 - Train Accuracy: 0.938, Validation Accuracy: 0.901, Loss: 0.927 Epoch 1 Batch 447/538 - Train Accuracy: 0.911, Validation Accuracy: 0.902, Loss: 0.910 Epoch 1 Batch 448/538 - Train Accuracy: 0.916, Validation Accuracy: 0.903, Loss: 0.926 Epoch 1 Batch 449/538 - Train Accuracy: 0.937, Validation Accuracy: 0.910, Loss: 0.936 Epoch 1 Batch 450/538 - Train Accuracy: 0.908, Validation Accuracy: 0.908, Loss: 0.945 Epoch 1 Batch 451/538 - Train Accuracy: 0.899, Validation Accuracy: 0.907, Loss: 0.983 Epoch 1 Batch 452/538 - Train Accuracy: 0.920, Validation Accuracy: 0.896, Loss: 0.909 Epoch 1 Batch 453/538 - Train Accuracy: 0.936, Validation Accuracy: 0.912, Loss: 0.921 Epoch 1 Batch 454/538 
- Train Accuracy: 0.921, Validation Accuracy: 0.914, Loss: 0.936 Epoch 1 Batch 455/538 - Train Accuracy: 0.925, Validation Accuracy: 0.917, Loss: 0.891 [... training log truncated: Epoch 1 Batches 456–536 and Epoch 2 Batches 0–376; Train Accuracy ≈ 0.89–0.96, Validation Accuracy ≈ 0.90–0.95, Loss ≈ 0.85–1.00 ...] Epoch 2
Batch 400/538 - Train Accuracy: 0.941, Validation Accuracy: 0.937, Loss: 0.871 Epoch 2 Batch 401/538 - Train Accuracy: 0.960, Validation Accuracy: 0.938, Loss: 0.933 Epoch 2 Batch 402/538 - Train Accuracy: 0.929, Validation Accuracy: 0.932, Loss: 0.885 Epoch 2 Batch 403/538 - Train Accuracy: 0.935, Validation Accuracy: 0.935, Loss: 0.933 Epoch 2 Batch 404/538 - Train Accuracy: 0.942, Validation Accuracy: 0.931, Loss: 0.889 Epoch 2 Batch 405/538 - Train Accuracy: 0.941, Validation Accuracy: 0.924, Loss: 0.950 Epoch 2 Batch 406/538 - Train Accuracy: 0.921, Validation Accuracy: 0.923, Loss: 0.913 Epoch 2 Batch 407/538 - Train Accuracy: 0.952, Validation Accuracy: 0.917, Loss: 0.933 Epoch 2 Batch 408/538 - Train Accuracy: 0.926, Validation Accuracy: 0.919, Loss: 0.902 Epoch 2 Batch 409/538 - Train Accuracy: 0.929, Validation Accuracy: 0.929, Loss: 0.920 Epoch 2 Batch 410/538 - Train Accuracy: 0.954, Validation Accuracy: 0.930, Loss: 0.950 Epoch 2 Batch 411/538 - Train Accuracy: 0.934, Validation Accuracy: 0.933, Loss: 0.931 Epoch 2 Batch 412/538 - Train Accuracy: 0.936, Validation Accuracy: 0.931, Loss: 0.937 Epoch 2 Batch 413/538 - Train Accuracy: 0.923, Validation Accuracy: 0.935, Loss: 0.921 Epoch 2 Batch 414/538 - Train Accuracy: 0.908, Validation Accuracy: 0.929, Loss: 0.955 Epoch 2 Batch 415/538 - Train Accuracy: 0.922, Validation Accuracy: 0.931, Loss: 0.913 Epoch 2 Batch 416/538 - Train Accuracy: 0.942, Validation Accuracy: 0.929, Loss: 0.897 Epoch 2 Batch 417/538 - Train Accuracy: 0.947, Validation Accuracy: 0.927, Loss: 0.908 Epoch 2 Batch 418/538 - Train Accuracy: 0.933, Validation Accuracy: 0.933, Loss: 0.851 Epoch 2 Batch 419/538 - Train Accuracy: 0.939, Validation Accuracy: 0.927, Loss: 0.876 Epoch 2 Batch 420/538 - Train Accuracy: 0.947, Validation Accuracy: 0.922, Loss: 0.901 Epoch 2 Batch 421/538 - Train Accuracy: 0.933, Validation Accuracy: 0.931, Loss: 0.903 Epoch 2 Batch 422/538 - Train Accuracy: 0.918, Validation Accuracy: 0.923, Loss: 0.891 Epoch 
2 Batch 423/538 - Train Accuracy: 0.946, Validation Accuracy: 0.923, Loss: 0.923 Epoch 2 Batch 424/538 - Train Accuracy: 0.928, Validation Accuracy: 0.921, Loss: 0.913 Epoch 2 Batch 425/538 - Train Accuracy: 0.943, Validation Accuracy: 0.923, Loss: 0.910 Epoch 2 Batch 426/538 - Train Accuracy: 0.936, Validation Accuracy: 0.922, Loss: 0.886 Epoch 2 Batch 427/538 - Train Accuracy: 0.922, Validation Accuracy: 0.925, Loss: 0.887 Epoch 2 Batch 428/538 - Train Accuracy: 0.951, Validation Accuracy: 0.928, Loss: 0.925 Epoch 2 Batch 429/538 - Train Accuracy: 0.939, Validation Accuracy: 0.931, Loss: 0.934 Epoch 2 Batch 430/538 - Train Accuracy: 0.930, Validation Accuracy: 0.934, Loss: 0.930 Epoch 2 Batch 431/538 - Train Accuracy: 0.932, Validation Accuracy: 0.932, Loss: 0.914 Epoch 2 Batch 432/538 - Train Accuracy: 0.924, Validation Accuracy: 0.930, Loss: 0.908 Epoch 2 Batch 433/538 - Train Accuracy: 0.930, Validation Accuracy: 0.923, Loss: 0.958 Epoch 2 Batch 434/538 - Train Accuracy: 0.936, Validation Accuracy: 0.927, Loss: 0.970 Epoch 2 Batch 435/538 - Train Accuracy: 0.932, Validation Accuracy: 0.919, Loss: 0.930 Epoch 2 Batch 436/538 - Train Accuracy: 0.927, Validation Accuracy: 0.918, Loss: 0.907 Epoch 2 Batch 437/538 - Train Accuracy: 0.935, Validation Accuracy: 0.920, Loss: 0.923 Epoch 2 Batch 438/538 - Train Accuracy: 0.944, Validation Accuracy: 0.923, Loss: 0.899 Epoch 2 Batch 439/538 - Train Accuracy: 0.948, Validation Accuracy: 0.929, Loss: 0.931 Epoch 2 Batch 440/538 - Train Accuracy: 0.941, Validation Accuracy: 0.920, Loss: 0.947 Epoch 2 Batch 441/538 - Train Accuracy: 0.928, Validation Accuracy: 0.926, Loss: 0.934 Epoch 2 Batch 442/538 - Train Accuracy: 0.938, Validation Accuracy: 0.921, Loss: 0.903 Epoch 2 Batch 443/538 - Train Accuracy: 0.931, Validation Accuracy: 0.921, Loss: 0.904 Epoch 2 Batch 444/538 - Train Accuracy: 0.948, Validation Accuracy: 0.918, Loss: 0.895 Epoch 2 Batch 445/538 - Train Accuracy: 0.956, Validation Accuracy: 0.917, Loss: 0.867 
Epoch 2 Batch 446/538 - Train Accuracy: 0.943, Validation Accuracy: 0.917, Loss: 0.914 Epoch 2 Batch 447/538 - Train Accuracy: 0.935, Validation Accuracy: 0.919, Loss: 0.937 Epoch 2 Batch 448/538 - Train Accuracy: 0.938, Validation Accuracy: 0.922, Loss: 0.921 Epoch 2 Batch 449/538 - Train Accuracy: 0.943, Validation Accuracy: 0.919, Loss: 0.902 Epoch 2 Batch 450/538 - Train Accuracy: 0.921, Validation Accuracy: 0.924, Loss: 0.930 Epoch 2 Batch 451/538 - Train Accuracy: 0.925, Validation Accuracy: 0.925, Loss: 0.874 Epoch 2 Batch 452/538 - Train Accuracy: 0.937, Validation Accuracy: 0.928, Loss: 0.895 Epoch 2 Batch 453/538 - Train Accuracy: 0.942, Validation Accuracy: 0.929, Loss: 0.942 Epoch 2 Batch 454/538 - Train Accuracy: 0.928, Validation Accuracy: 0.927, Loss: 0.920 Epoch 2 Batch 455/538 - Train Accuracy: 0.933, Validation Accuracy: 0.930, Loss: 0.918 Epoch 2 Batch 456/538 - Train Accuracy: 0.929, Validation Accuracy: 0.936, Loss: 0.922 Epoch 2 Batch 457/538 - Train Accuracy: 0.933, Validation Accuracy: 0.934, Loss: 0.875 Epoch 2 Batch 458/538 - Train Accuracy: 0.943, Validation Accuracy: 0.931, Loss: 0.918 Epoch 2 Batch 459/538 - Train Accuracy: 0.942, Validation Accuracy: 0.930, Loss: 0.907 Epoch 2 Batch 460/538 - Train Accuracy: 0.912, Validation Accuracy: 0.923, Loss: 0.941 Epoch 2 Batch 461/538 - Train Accuracy: 0.951, Validation Accuracy: 0.919, Loss: 0.904 Epoch 2 Batch 462/538 - Train Accuracy: 0.940, Validation Accuracy: 0.921, Loss: 0.897 Epoch 2 Batch 463/538 - Train Accuracy: 0.910, Validation Accuracy: 0.931, Loss: 0.972 Epoch 2 Batch 464/538 - Train Accuracy: 0.942, Validation Accuracy: 0.927, Loss: 0.897 Epoch 2 Batch 465/538 - Train Accuracy: 0.942, Validation Accuracy: 0.919, Loss: 0.912 Epoch 2 Batch 466/538 - Train Accuracy: 0.937, Validation Accuracy: 0.917, Loss: 0.863 Epoch 2 Batch 467/538 - Train Accuracy: 0.931, Validation Accuracy: 0.916, Loss: 0.877 Epoch 2 Batch 468/538 - Train Accuracy: 0.944, Validation Accuracy: 0.925, Loss: 
0.938 Epoch 2 Batch 469/538 - Train Accuracy: 0.954, Validation Accuracy: 0.924, Loss: 0.944 Epoch 2 Batch 470/538 - Train Accuracy: 0.939, Validation Accuracy: 0.928, Loss: 0.896 Epoch 2 Batch 471/538 - Train Accuracy: 0.941, Validation Accuracy: 0.933, Loss: 0.925 Epoch 2 Batch 472/538 - Train Accuracy: 0.974, Validation Accuracy: 0.942, Loss: 0.877 Epoch 2 Batch 473/538 - Train Accuracy: 0.917, Validation Accuracy: 0.939, Loss: 0.917 Epoch 2 Batch 474/538 - Train Accuracy: 0.951, Validation Accuracy: 0.940, Loss: 0.903 Epoch 2 Batch 475/538 - Train Accuracy: 0.953, Validation Accuracy: 0.947, Loss: 0.889 Epoch 2 Batch 476/538 - Train Accuracy: 0.936, Validation Accuracy: 0.945, Loss: 0.887 Epoch 2 Batch 477/538 - Train Accuracy: 0.948, Validation Accuracy: 0.942, Loss: 0.925 Epoch 2 Batch 478/538 - Train Accuracy: 0.956, Validation Accuracy: 0.939, Loss: 0.897 Epoch 2 Batch 479/538 - Train Accuracy: 0.951, Validation Accuracy: 0.933, Loss: 0.910 Epoch 2 Batch 480/538 - Train Accuracy: 0.949, Validation Accuracy: 0.932, Loss: 0.935 Epoch 2 Batch 481/538 - Train Accuracy: 0.940, Validation Accuracy: 0.933, Loss: 0.918 Epoch 2 Batch 482/538 - Train Accuracy: 0.948, Validation Accuracy: 0.938, Loss: 0.909 Epoch 2 Batch 483/538 - Train Accuracy: 0.925, Validation Accuracy: 0.941, Loss: 0.923 Epoch 2 Batch 484/538 - Train Accuracy: 0.943, Validation Accuracy: 0.942, Loss: 0.938 Epoch 2 Batch 485/538 - Train Accuracy: 0.946, Validation Accuracy: 0.939, Loss: 0.913 Epoch 2 Batch 486/538 - Train Accuracy: 0.953, Validation Accuracy: 0.945, Loss: 0.911 Epoch 2 Batch 487/538 - Train Accuracy: 0.944, Validation Accuracy: 0.946, Loss: 0.904 Epoch 2 Batch 488/538 - Train Accuracy: 0.950, Validation Accuracy: 0.947, Loss: 0.875 Epoch 2 Batch 489/538 - Train Accuracy: 0.929, Validation Accuracy: 0.947, Loss: 0.917 Epoch 2 Batch 490/538 - Train Accuracy: 0.941, Validation Accuracy: 0.942, Loss: 0.916 Epoch 2 Batch 491/538 - Train Accuracy: 0.921, Validation Accuracy: 0.943, 
Loss: 0.895 Epoch 2 Batch 492/538 - Train Accuracy: 0.935, Validation Accuracy: 0.940, Loss: 0.883 Epoch 2 Batch 493/538 - Train Accuracy: 0.915, Validation Accuracy: 0.946, Loss: 0.905 Epoch 2 Batch 494/538 - Train Accuracy: 0.960, Validation Accuracy: 0.940, Loss: 0.952 Epoch 2 Batch 495/538 - Train Accuracy: 0.939, Validation Accuracy: 0.936, Loss: 0.942 Epoch 2 Batch 496/538 - Train Accuracy: 0.954, Validation Accuracy: 0.937, Loss: 0.937 Epoch 2 Batch 497/538 - Train Accuracy: 0.949, Validation Accuracy: 0.939, Loss: 0.888 Epoch 2 Batch 498/538 - Train Accuracy: 0.940, Validation Accuracy: 0.938, Loss: 0.937 Epoch 2 Batch 499/538 - Train Accuracy: 0.939, Validation Accuracy: 0.931, Loss: 0.912 Epoch 2 Batch 500/538 - Train Accuracy: 0.958, Validation Accuracy: 0.928, Loss: 0.880 Epoch 2 Batch 501/538 - Train Accuracy: 0.955, Validation Accuracy: 0.925, Loss: 0.890 Epoch 2 Batch 502/538 - Train Accuracy: 0.936, Validation Accuracy: 0.920, Loss: 0.919 Epoch 2 Batch 503/538 - Train Accuracy: 0.958, Validation Accuracy: 0.925, Loss: 0.885 Epoch 2 Batch 504/538 - Train Accuracy: 0.952, Validation Accuracy: 0.923, Loss: 0.900 Epoch 2 Batch 505/538 - Train Accuracy: 0.956, Validation Accuracy: 0.919, Loss: 0.879 Epoch 2 Batch 506/538 - Train Accuracy: 0.955, Validation Accuracy: 0.915, Loss: 0.864 Epoch 2 Batch 507/538 - Train Accuracy: 0.918, Validation Accuracy: 0.914, Loss: 0.902 Epoch 2 Batch 508/538 - Train Accuracy: 0.940, Validation Accuracy: 0.917, Loss: 0.885 Epoch 2 Batch 509/538 - Train Accuracy: 0.945, Validation Accuracy: 0.921, Loss: 0.935 Epoch 2 Batch 510/538 - Train Accuracy: 0.950, Validation Accuracy: 0.931, Loss: 0.910 Epoch 2 Batch 511/538 - Train Accuracy: 0.924, Validation Accuracy: 0.931, Loss: 0.890 Epoch 2 Batch 512/538 - Train Accuracy: 0.939, Validation Accuracy: 0.932, Loss: 0.886 Epoch 2 Batch 513/538 - Train Accuracy: 0.930, Validation Accuracy: 0.929, Loss: 0.882 Epoch 2 Batch 514/538 - Train Accuracy: 0.962, Validation Accuracy: 
0.935, Loss: 0.918 Epoch 2 Batch 515/538 - Train Accuracy: 0.934, Validation Accuracy: 0.929, Loss: 0.953 Epoch 2 Batch 516/538 - Train Accuracy: 0.930, Validation Accuracy: 0.916, Loss: 0.942 Epoch 2 Batch 517/538 - Train Accuracy: 0.943, Validation Accuracy: 0.913, Loss: 0.887 Epoch 2 Batch 518/538 - Train Accuracy: 0.934, Validation Accuracy: 0.915, Loss: 0.889 Epoch 2 Batch 519/538 - Train Accuracy: 0.937, Validation Accuracy: 0.919, Loss: 0.927 Epoch 2 Batch 520/538 - Train Accuracy: 0.933, Validation Accuracy: 0.922, Loss: 0.934 Epoch 2 Batch 521/538 - Train Accuracy: 0.952, Validation Accuracy: 0.918, Loss: 0.913 Epoch 2 Batch 522/538 - Train Accuracy: 0.929, Validation Accuracy: 0.923, Loss: 0.901 Epoch 2 Batch 523/538 - Train Accuracy: 0.936, Validation Accuracy: 0.921, Loss: 0.918 Epoch 2 Batch 524/538 - Train Accuracy: 0.941, Validation Accuracy: 0.924, Loss: 0.923 Epoch 2 Batch 525/538 - Train Accuracy: 0.940, Validation Accuracy: 0.922, Loss: 0.913 Epoch 2 Batch 526/538 - Train Accuracy: 0.939, Validation Accuracy: 0.929, Loss: 0.893 Epoch 2 Batch 527/538 - Train Accuracy: 0.947, Validation Accuracy: 0.930, Loss: 0.902 Epoch 2 Batch 528/538 - Train Accuracy: 0.938, Validation Accuracy: 0.935, Loss: 0.891 Epoch 2 Batch 529/538 - Train Accuracy: 0.929, Validation Accuracy: 0.936, Loss: 0.902 Epoch 2 Batch 530/538 - Train Accuracy: 0.925, Validation Accuracy: 0.940, Loss: 0.932 Epoch 2 Batch 531/538 - Train Accuracy: 0.941, Validation Accuracy: 0.944, Loss: 0.898 Epoch 2 Batch 532/538 - Train Accuracy: 0.936, Validation Accuracy: 0.932, Loss: 0.899 Epoch 2 Batch 533/538 - Train Accuracy: 0.942, Validation Accuracy: 0.934, Loss: 0.876 Epoch 2 Batch 534/538 - Train Accuracy: 0.945, Validation Accuracy: 0.937, Loss: 0.888 Epoch 2 Batch 535/538 - Train Accuracy: 0.953, Validation Accuracy: 0.934, Loss: 0.875 Epoch 2 Batch 536/538 - Train Accuracy: 0.941, Validation Accuracy: 0.938, Loss: 0.942 Model Trained and Saved ###Markdown Save ParametersSave the 
`save_path` parameter for inference.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
###Output
_____no_output_____
###Markdown
Checkpoint
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
###Output
_____no_output_____
###Markdown
Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.
- Convert the sentence to lowercase
- Convert words into ids using `vocab_to_int`
- Convert words not in the vocabulary to the `<UNK>` word id.
###Code
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    word_ids = []
    for word in sentence.lower().split():
        # Fall back to the <UNK> id for out-of-vocabulary words
        word_ids.append(vocab_to_int.get(word, vocab_to_int['<UNK>']))
    return word_ids


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
###Output
Tests Passed
###Markdown
Translate
This will translate `translate_sentence` from English to French.
###Code
translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) ###Output Input Word Ids: [192, 171, 197, 79, 99, 229, 206] English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [5, 348, 82, 215, 100, 218, 175, 180, 1] French Words: ['il', 'a', 'vu', 'un', 'vieux', 'camion', 'jaune', '.', '<EOS>'] ###Markdown Language TranslationIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the DataSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) ###Output _____no_output_____ ###Markdown Explore the DataPlay around with view_sentence_range to view different parts of the data. ###Code view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ###Output Dataset Stats Roughly the number of unique words: 227 Number of sentences: 137861 Average number of words in a sentence: 13.225277634719028 English sentences 0 to 10: new jersey is sometimes quiet during autumn , and it is snowy in april . the united states is usually chilly during july , and it is usually freezing in november . california is usually quiet during march , and it is usually hot in june . the united states is sometimes mild during june , and it is cold in september . your least liked fruit is the grape , but my least liked is the apple . his favorite fruit is the orange , but my favorite is the grape . paris is relaxing during december , but it is usually chilly in july . new jersey is busy during spring , and it is never hot in march . 
our least liked fruit is the lemon , but my least liked is the grape .
the united states is sometimes busy during january , and it is sometimes warm in november .

French sentences 0 to 10:
new jersey est parfois calme pendant l' automne , et il est neigeux en avril .
les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .
california est généralement calme en mars , et il est généralement chaud en juin .
les états-unis est parfois légère en juin , et il fait froid en septembre .
votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .
son fruit préféré est l'orange , mais mon préféré est le raisin .
paris est relaxant en décembre , mais il est généralement froid en juillet .
new jersey est occupé au printemps , et il est jamais chaude en mars .
notre fruit est moins aimé le citron , mais mon moins aimé est le raisin .
les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .
###Markdown
Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.

You can get the `<EOS>` word id by doing:
```python
target_vocab_to_int['<EOS>']
```
You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`.
###Code
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
    eos_id = target_vocab_to_int['<EOS>']

    def split_text(text, vocab_to_int, append_eos):
        id_text = []
        for sentence in text.split('\n'):
            ids = [vocab_to_int[word] for word in sentence.split()]
            if append_eos:
                # Only target sentences get the trailing <EOS> id
                ids.append(eos_id)
            id_text.append(ids)
        return id_text

    source_id_text = split_text(source_text, source_vocab_to_int, False)
    target_id_text = split_text(target_text, target_vocab_to_int, True)
    return source_id_text, target_id_text


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
###Output
Tests Passed
###Markdown
Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
###Output
_____no_output_____
###Markdown
Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
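As a quick sanity check of the `text_to_ids` convention, here is a self-contained toy version. The vocabularies below are made up for the example; the real ones come from the preprocessing step.

```python
# Toy illustration of the text_to_ids convention: every target sentence
# gets an <EOS> id appended, while source sentences do not.
# These vocabularies are hypothetical, invented for this example only.
source_vocab = {'<PAD>': 0, '<UNK>': 1, 'hello': 2, 'world': 3}
target_vocab = {'<PAD>': 0, '<UNK>': 1, '<EOS>': 2, 'bonjour': 3, 'monde': 4}

def toy_text_to_ids(source_text, target_text):
    source_ids = [[source_vocab.get(w, source_vocab['<UNK>']) for w in line.split()]
                  for line in source_text.split('\n')]
    target_ids = [[target_vocab.get(w, target_vocab['<UNK>']) for w in line.split()]
                  + [target_vocab['<EOS>']]
                  for line in target_text.split('\n')]
    return source_ids, target_ids

src, tgt = toy_text_to_ids('hello world', 'bonjour monde')
print(src)  # [[2, 3]]
print(tgt)  # [[3, 4, 2]] -- note the trailing <EOS> id (2)
```

The only asymmetry between source and target is that trailing `<EOS>` id, which is what lets the decoder learn when to stop.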
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper

(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
###Output
TensorFlow Version: 1.1.0
Default GPU Device: /gpu:0
###Markdown
Build the Neural Network
You'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions:
- `model_inputs`
- `process_decoder_input`
- `encoding_layer`
- `decoding_layer_train`
- `decoding_layer_infer`
- `decoding_layer`
- `seq2seq_model`
Input
Implement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
- Targets placeholder with rank 2.
- Learning rate placeholder with rank 0.
- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
- Target sequence length placeholder named "target_sequence_length" with rank 1
- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder.
Rank 0.
- Source sequence length placeholder named "source_sequence_length" with rank 1

Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
###Code
def model_inputs():
    """
    Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
    :return: Tuple (input, targets, learning rate, keep probability, target sequence length,
             max target sequence length, source sequence length)
    """
    input_placeholder = tf.placeholder(name="input", shape=[None, None], dtype=tf.int32)
    target_placeholder = tf.placeholder(name="target", shape=[None, None], dtype=tf.int32)
    learning_rate = tf.placeholder(name="learning_rate", dtype=tf.float32)
    keep_prob = tf.placeholder(name="keep_prob", dtype=tf.float32)
    target_seq_len = tf.placeholder(name="target_sequence_length", shape=[None], dtype=tf.int32)
    max_target_len = tf.reduce_max(name="max_target_len", input_tensor=target_seq_len)
    source_seq_len = tf.placeholder(name="source_sequence_length", shape=[None], dtype=tf.int32)
    return input_placeholder, target_placeholder, learning_rate, keep_prob, target_seq_len, max_target_len, source_seq_len


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
###Output
Tests Passed
###Markdown
Process Decoder Input
Implement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
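The `tf.strided_slice`/`tf.concat` combination used for this can be hard to picture; here is a NumPy sketch of the same transformation on a toy batch (all ids below are made up for the example):

```python
import numpy as np

# NumPy analogue of process_decoder_input: drop the last id of every
# batch row, then prepend the <GO> id. Ids are hypothetical.
GO = 7
target_batch = np.array([[11, 12, 13, 2],    # 2 = <EOS>
                         [21, 22,  2,  0]])  # 0 = <PAD>

ending = target_batch[:, :-1]                        # ~ tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
go_column = np.full((target_batch.shape[0], 1), GO)  # ~ tf.fill([batch_size, 1], go_id)
decoder_input = np.concatenate([go_column, ending], axis=1)  # ~ tf.concat(..., 1)
print(decoder_input.tolist())  # [[7, 11, 12, 13], [7, 21, 22, 2]]
```

Shifting the targets right by one token like this is what sets up teacher forcing: at step t the decoder sees the ground-truth token from step t-1.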
###Code
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for decoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    go_id = target_vocab_to_int["<GO>"]
    # Drop the last word id of every batch row, then prepend the <GO> id
    ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    return tf.concat([tf.fill([batch_size, 1], go_id), ending], 1)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
###Output
Tests Passed
###Markdown
Encoding
Implement `encoding_layer()` to create an Encoder RNN layer:
* Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence)
* Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper)
* Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)
###Code
from imp import reload
reload(tests)

def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
                   source_sequence_length, source_vocab_size,
                   encoding_embedding_size):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :param source_sequence_length: a list of the lengths of each sequence in the batch
    :param source_vocab_size: vocabulary size of source data
    :param encoding_embedding_size: embedding size of source data
    :return: tuple (RNN output, RNN state)
    """
    embedded_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)

    def lstm_cell_gen(rnn_size, keep_prob):
        lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
        return tf.contrib.rnn.DropoutWrapper(lstm_cell, keep_prob)

    multi_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell_gen(rnn_size, keep_prob) for _ in range(num_layers)])
    rnn_output, rnn_state = tf.nn.dynamic_rnn(multi_cell, embedded_input, dtype=tf.float32,
                                              sequence_length=source_sequence_length)
    return rnn_output, rnn_state


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
###Output
Tests Passed
###Markdown
Decoding - Training
Create a training decoding layer:
* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper)
* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)
* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode)
###Code
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
                         target_sequence_length, max_summary_length,
                         output_layer, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_summary_length: The length of the longest sequence in the batch
    :param output_layer: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing training logits and sample_id
    """
    training_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
    basic_decoder =
tf.contrib.seq2seq.BasicDecoder(dec_cell,training_helper,encoder_state,output_layer) decoder_output,final_state = tf.contrib.seq2seq.dynamic_decode(basic_decoder,impute_finished=True, maximum_iterations=max_summary_length) return decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ###Output Tests Passed ###Markdown Decoding - InferenceCreate inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ###Code def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TenorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function start = tf.tile(tf.constant([start_of_sequence_id],dtype=tf.int32),[batch_size]) embedding_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,start,end_of_sequence_id) basic_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,embedding_helper,encoder_state,output_layer) decoder_output,final_state 
= tf.contrib.seq2seq.dynamic_decode(basic_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) return decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ###Output Tests Passed ###Markdown Build the Decoding Layer. Implement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits. Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
###Code def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function start_of_sequence_id, end_of_sequence_id = target_vocab_to_int["<GO>"],target_vocab_to_int["<EOS>"] dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) def lstm_cell_gn(rnn_size,keep_prob): lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size,initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) dropout = tf.contrib.rnn.DropoutWrapper(lstm_cell,keep_prob) return dropout dec_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell_gn(rnn_size,keep_prob) for _ in range(num_layers)]) output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) with tf.variable_scope("decode"): logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) with tf.variable_scope("decode", reuse=True): inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, 
target_vocab_size, output_layer, batch_size, keep_prob) return logits, inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ###Output Tests Passed ###Markdown Build the Neural Network. Apply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. ###Code def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function _,enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size, enc_embedding_size) dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) logits,infer_logits = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return logits,infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ###Output Tests Passed ###Markdown Neural Network Training HyperparametersTune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability- Set `display_step` to state how many steps between each debug output statement ###Code # Number of Epochs epochs = 6 # Batch Size batch_size = 512 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 128 decoding_embedding_size = 128 # Learning Rate learning_rate = 0.005 # Dropout Keep Probability keep_probability = 0.7 display_step = 50 ###Output _____no_output_____ ###Markdown Build the GraphBuild the graph using the neural network you implemented. 
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ###Output _____no_output_____ ###Markdown Batch and pad the source and target sequences ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for 
sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ###Output _____no_output_____ ###Markdown Train. Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ###Output Epoch 0 Batch 50/269 - Train Accuracy: 0.4459, Validation Accuracy: 0.4928, Loss: 2.0323 Epoch 0 Batch 100/269 - Train Accuracy: 0.5401, Validation Accuracy: 0.5330, Loss: 0.9660 Epoch 0 Batch 150/269 - Train Accuracy: 0.6091, Validation Accuracy: 0.6162, Loss: 0.6983 Epoch 0 Batch 200/269 - Train Accuracy: 0.6346, Validation Accuracy: 0.6451, Loss: 0.5972 Epoch 0 Batch 250/269 - Train Accuracy: 0.6526, Validation Accuracy: 0.6679, Loss: 0.5019 Epoch 1 Batch 50/269 - Train Accuracy: 0.6933, Validation Accuracy: 0.7024, Loss: 0.4035 Epoch 1 Batch 100/269 - Train Accuracy: 0.7592, Validation Accuracy: 0.7414, Loss: 0.3057 Epoch 1 Batch 150/269 - Train Accuracy: 0.7833, Validation Accuracy: 0.7820, Loss: 0.2332 Epoch 1 Batch 200/269 - Train Accuracy: 0.8191, Validation Accuracy: 0.8168, Loss: 0.1742 Epoch 1 Batch 250/269 - Train Accuracy: 0.8665, Validation Accuracy: 0.8493, Loss: 0.1269 Epoch 2 Batch 50/269 - Train Accuracy: 0.8863, Validation Accuracy: 0.9219, Loss: 0.0935 Epoch 2 Batch 100/269 - Train Accuracy: 0.9172, Validation Accuracy: 0.9198, Loss: 0.0637 Epoch 2 Batch 150/269 - Train Accuracy: 0.9328, Validation Accuracy: 0.9332, Loss: 0.0554 Epoch 2 Batch 200/269 - Train Accuracy: 0.9521, Validation Accuracy: 0.9321, Loss: 0.0447 Epoch 2 Batch 250/269 - Train Accuracy: 0.9477, Validation Accuracy: 0.9416, Loss: 0.0410 Epoch 3 Batch 50/269 - Train Accuracy: 0.9349, Validation Accuracy: 0.9555, Loss: 0.0424 Epoch 3 Batch 100/269 - Train Accuracy: 0.9556, Validation Accuracy: 0.9517, Loss: 0.0348 Epoch 3 Batch 150/269 - Train Accuracy: 0.9549, Validation Accuracy: 0.9463, Loss: 0.0328 Epoch 3 Batch 200/269 - Train Accuracy: 0.9658, Validation Accuracy: 0.9569, Loss: 0.0262 Epoch 3 Batch 250/269 - Train Accuracy: 0.9643, 
Validation Accuracy: 0.9595, Loss: 0.0270 Epoch 4 Batch 50/269 - Train Accuracy: 0.9483, Validation Accuracy: 0.9631, Loss: 0.0303 Epoch 4 Batch 100/269 - Train Accuracy: 0.9621, Validation Accuracy: 0.9641, Loss: 0.0255 Epoch 4 Batch 150/269 - Train Accuracy: 0.9683, Validation Accuracy: 0.9607, Loss: 0.0257 Epoch 4 Batch 200/269 - Train Accuracy: 0.9784, Validation Accuracy: 0.9651, Loss: 0.0182 Epoch 4 Batch 250/269 - Train Accuracy: 0.9659, Validation Accuracy: 0.9631, Loss: 0.0223 Epoch 5 Batch 50/269 - Train Accuracy: 0.9554, Validation Accuracy: 0.9697, Loss: 0.0255 Epoch 5 Batch 100/269 - Train Accuracy: 0.9727, Validation Accuracy: 0.9721, Loss: 0.0208 Epoch 5 Batch 150/269 - Train Accuracy: 0.9741, Validation Accuracy: 0.9617, Loss: 0.0200 Epoch 5 Batch 200/269 - Train Accuracy: 0.9821, Validation Accuracy: 0.9713, Loss: 0.0144 Epoch 5 Batch 250/269 - Train Accuracy: 0.9726, Validation Accuracy: 0.9699, Loss: 0.0184 Model Trained and Saved ###Markdown Save Parameters. Save the `batch_size` and `save_path` parameters for inference. ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ###Output _____no_output_____ ###Markdown Checkpoint ###Code """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ###Output _____no_output_____ ###Markdown Sentence to Sequence. To feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id.
###Code def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function id_sentence = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()] return id_sentence """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ###Output Tests Passed ###Markdown TranslateThis will translate `translate_sentence` from English to French. ###Code translate_sentence = 'he saw a old yellow truck .' """ DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ###Output INFO:tensorflow:Restoring parameters from checkpoints/dev Input Word Ids: [29, 107, 96, 188, 40, 173, 134] English Words: ['he', 'saw', 
'a', 'old', 'yellow', 'truck', '.'] Prediction Word Ids: [108, 302, 289, 255, 204, 194, 155, 1] French Words: il a vu un camion jaune . <EOS>
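The prediction above still carries the `<EOS>` id at the end. As a minimal, self-contained sketch of the usual post-processing step (using a hypothetical toy vocabulary, not the notebook's real `target_int_to_vocab`), the predicted ids can be mapped back to words and truncated at the first `<EOS>`:

```python
# Sketch: turn greedy-decoder ids back into words, stopping at <EOS>.
# The vocabulary here is made up for illustration.

def ids_to_words(ids, int_to_vocab, eos_id):
    """Map predicted ids to words, truncating at the first <EOS> token."""
    words = []
    for i in ids:
        if i == eos_id:
            break
        words.append(int_to_vocab[i])
    return words

int_to_vocab = {0: '<PAD>', 1: '<EOS>', 2: 'il', 3: 'a', 4: 'vu'}
print(ids_to_words([2, 3, 4, 1, 0, 0], int_to_vocab, eos_id=1))  # ['il', 'a', 'vu']
```

The same idea applied to `translate_logits` would strip the trailing `<EOS>` (and any padding) before joining the French words.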
notebooks/000-Instalando_Python.ipynb
###Markdown Introduction to Python course: data processing and analysis. The best way to learn to program is by building something useful, so this introduction to Python revolves around a common task: _data analysis_. In this hands-on workshop we will briefly review the basic concepts of programming with the goal of automating processes, covering Python syntax (together with NumPy and matplotlib). To do so, we will follow the materials from [Software-Carpentry](https://software-carpentry.org/) ([see the notes](http://swcarpentry.github.io/python-novice-inflammation/)). __Our fundamental working tool is the Jupyter Notebook__; you will learn more about it in the upcoming classes. Over the course you will become familiar with it and learn how to use it (this document was generated from a notebook). In this opening session, we will go through the steps you need to follow to __install Python and start learning at your own pace.__ Steps to follow: 1. Downloading Python. Installing Python, the Notebook and all the packages we will use one by one can be an arduous, exhausting task, but don't worry: someone has already done the hard work! __[Anaconda](https://continuum.io/anaconda/) is a Python distribution that bundles many of the libraries needed in scientific computing__ and, of course, all the ones we will need in this course. It also __includes tools for programming in Python, such as [Jupyter Notebook](http://jupyter.org/) or [Spyder](https://github.com/spyder-ide/spyderspyder---the-scientific-python-development-environment)__ (a MATLAB-style IDE). All you need to do is:* Go to the [Anaconda download page](http://continuum.io/downloads).* Select your operating system (Windows, OSX, Linux).* Download Anaconda (we will use Python 3.X). 2. Installing Python.
Check the __[installation instructions](http://docs.continuum.io/anaconda/install.html)__ for Anaconda for your operating system. On Windows and OS X you will find the usual graphical installers you are already used to. If you are on Linux, you will have to run the installation script from the command line, so remember to check that you have bash installed and to give the script execute permissions. __Important: make sure you install Anaconda only for your user and without administrator permissions; they are not necessary and can cause problems later if you do not always have access rights.__ Great! You now have it installed, but where?* On __Windows__, from `Start > Anaconda` you will see a set of tools now at your disposal; don't be afraid to open them! * On __OS X__, you can access a launcher with the same tools from the `anaconda` folder inside your home folder. * On __Linux__, given the huge number of distribution-plus-desktop combinations, you will not have those graphical shortcuts (which does not mean you cannot create them yourself afterwards) but, as you will see, they are not needed at all and are not part of how we work in this course. Now, let's __update Anaconda__ to make sure our Python distribution and all its packages are up to date. Open a __command window__ (Command Prompt on Windows, or a terminal on OS X) and run the following update commands (confirming if new packages have to be installed):

```
conda update conda
conda update --all
```

If you run into any kind of problem during this process, [uninstall your Anaconda distribution](http://docs.continuum.io/anaconda/install.html) and install it again somewhere you can count on a stable internet connection. Finally, check that Jupyter Notebook works correctly.
Type this in a command window and wait for the browser to open:

```
jupyter notebook
```

You should see [this interface](https://try.jupyter.org/) (although without files). We now have our Python distribution with every package we need (and practically every one we might need in the future). If you have any doubts along the way, ask us, and remember that __internet search engines are your best friends!__ _Let's get to work!_ ---Video lesson, part of the [Curso de Python para científicos e ingenieros](http://cacheme.org/curso-online-python-cientifico-ingenieros/) recorded at the Escuela Politécnica Superior of the University of Alicante. ###Code from IPython.display import YouTubeVideo YouTubeVideo("x4xegDME5C0", width=560, height=315, list="PLGBbVX_WvN7as_DnOGcpkSsUyXB1G_wqb") ###Output _____no_output_____
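As a quick sanity check after installing, the sketch below verifies that the core scientific packages the course relies on can be found in the environment. This is an illustrative snippet, not part of the Anaconda installer; the package names checked (`numpy`, `matplotlib`, `notebook`) are the usual distribution names for the tools mentioned above.

```python
# Sketch: verify the installation by checking that key packages are importable,
# without actually importing them (find_spec only locates the module).
import importlib.util

def check_packages(names):
    """Return a dict mapping each package name to True if it is installed."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

status = check_packages(["numpy", "matplotlib", "notebook"])
for name, ok in status.items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")
```

If anything prints `MISSING`, rerunning `conda update --all` (or `conda install <package>`) should fix it.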
01_merge_resid.ipynb
###Markdown File names per sheet- Estimated sales: sales_2018.csv- Resident population: resid.csv- Commercial-district change indicator: regional_attraction.csv- Apartments: apt.csv- Stores: store.csv- Working population: employee.csv- Income/consumption of the trade-area hinterland: income.csv- Customer-attracting facilities in the trade area: facilities.csv- Estimated floating population: floating_pop.csv ###Code # Sales data sales = pd.read_csv("sales_2018.csv", encoding='euc-kr') sales.head(10) sales_merge= sales.copy() sales_merge.tail(3) sales_merge.columns monthly_sale = sales_merge[sales_merge.columns[:5]] monthly_sale.to_csv('monthly_sale.csv',sep=",",index= False, encoding='euc-kr') ###Output _____no_output_____ ###Markdown Build a df with a monthly-sales / store-count column- To be used in case the monthly sales amount is not already the per-store sales amount ###Code sales_merge=sales_merge[['기준_년_코드','기준_분기_코드','상권_코드','서비스_업종_코드','당월_매출_금액','점포수']] sales_merge.tail(3) sales_merge['sales']=round(sales_merge['당월_매출_금액']/sales_merge['점포수'] , 0) sales_merge.tail() sale1 = sales_merge[sales_merge.columns[:-3]] sale2 = sales_merge['sales'] sales_result = pd.concat([sale1,sale2], axis=1) sales_result.tail() # Save sales_result.to_csv('sales_result.csv',sep=",",index= False, encoding='euc-kr') # Confirm that rows with the same period and same commercial district but different sales exist monthly_sale[monthly_sale.duplicated(['기준_년_코드','기준_분기_코드','상권_코드'])] # No rows with the same period, district and service-type code but different sales monthly_sale[monthly_sale.duplicated(['기준_년_코드','기준_분기_코드','상권_코드',"서비스_업종_코드"])] ###Output _____no_output_____ ###Markdown Resident population ###Code resid = pd.read_csv("resid.csv", encoding='euc-kr') resid.head(1) resid_merge = resid.copy() resid_merge.columns resid_merge=resid_merge[['기준_년_코드', '기준_분기_코드','상권 코드','총 상주인구 수', '총 가구 수']] resid_merge.tail() resid_merge = resid_merge[resid_merge['기준_년_코드']==2018] resid_merge.tail() pd.Series.value_counts(resid_merge['상권 코드']) # No duplicate rows with the same period and same district but different population resid_merge[resid_merge.duplicated(['기준_분기_코드','상권 코드'])] resid_merge.to_csv("resid.csv", sep=",", encoding = 'euc-kr', index=False) ###Output _____no_output_____ ###Markdown Merging the sales and resident-population data ###Code # Rename columns so the merge key names match resid_merge.columns = ['기준_년_코드', '기준_분기_코드', '상권_코드', '총 상주인구 수', '총 가구 수'] df=pd.merge(monthly_sale, resid_merge, on=['기준_년_코드', '기준_분기_코드', '상권_코드']) df.head() ###Output _____no_output_____
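The rename-then-merge pattern above can be sketched on tiny frames. The English column names below are simplified stand-ins (not the real Korean dataset columns), and `rename` is used instead of assigning `.columns` directly; `indicator=True` is a handy way to audit which rows of each table actually matched on the key.

```python
# Sketch of the rename-then-merge pattern with an audit column.
import pandas as pd

sales = pd.DataFrame({"year": [2018, 2018], "quarter": [1, 1],
                      "district_code": [100, 200], "sales": [50, 70]})
resid = pd.DataFrame({"year": [2018, 2018], "quarter": [1, 1],
                      "district code": [100, 300], "population": [1000, 2000]})

# Align the key name before merging (the step done above with `resid_merge.columns = ...`).
resid = resid.rename(columns={"district code": "district_code"})

# An outer merge with indicator=True keeps unmatched rows and labels each row
# 'both', 'left_only' or 'right_only' in a `_merge` column.
df = pd.merge(sales, resid, on=["year", "quarter", "district_code"],
              how="outer", indicator=True)
print(df)
```

Running the audit first, then switching back to the default inner join, makes it easy to see how many districts are silently dropped by the merge.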
NEERAJAP2001/CLUSTERING-FOR-A-MALL-master/KMEANS CLUSTERING.ipynb
###Markdown KMEANS CLUSTERING This project follows the CRISP-DM process while analyzing the data. PROBLEM: PREDICT THE CLUSTER OF CUSTOMERS BASED ON ANNUAL INCOME AND SPENDING SCORE TO BRING VALUABLE INSIGHTS FOR THE MALL. Questions : 1. Which cluster has both a good spending score and a good income? 2. On which cluster should the company concentrate to increase sales? 3. Which cluster has the maximum probability of moving into a high spending score? IMPORTING THE DATASET AND LIBRARIES ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.preprocessing import LabelEncoder,OneHotEncoder # Importing the dataset dataset = pd.read_csv(r'C:\Users\neeraj\OneDrive\Desktop\data challenge\Mall_Customers.csv') X=dataset.iloc[:,:].values ###Output _____no_output_____ ###Markdown Explore the Dataset ###Code dataset.head() dataset.info() dataset.isnull().sum() ###Output _____no_output_____ ###Markdown Check for categories in the object (categorical) variable ###Code dataset['Genre'].value_counts() ###Output _____no_output_____ ###Markdown Replace the categories by label encoding. This method works fine here as there are only 2 categories in the object variable ###Code labelencoder_X=LabelEncoder() X[:,1]= labelencoder_X.fit_transform(X[:,1]) Data=pd.DataFrame(X) ###Output _____no_output_____ ###Markdown Now check for remaining categorical values, if any ###Code Data.head() ###Output _____no_output_____ ###Markdown Selecting the features for clustering (Annual Income and Spending Score) ###Code x= dataset.iloc[:, [3,4]].values Final=pd.DataFrame(x) Final.head() ###Output _____no_output_____ ###Markdown USING ELBOW METHOD FOR OPTIMAL CLUSTERS Here we loop over values of 'i' (the number of clusters) and plot 'i' against the WCSS (the sum of squared distances within clusters) ###Code from sklearn.cluster import KMeans wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i, init = 'k-means++', random_state = 42) kmeans.fit(x) wcss.append(kmeans.inertia_) plt.plot(range(1, 11), wcss) plt.title('The Elbow Method')
plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown Training the model ###Code kmeans = KMeans(n_clusters = 5, init = 'k-means++', random_state = 42) y_kmeans = kmeans.fit_predict(x) print(y_kmeans) ###Output [2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 0 2 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 1 4 0 4 1 4 1 4 0 4 1 4 1 4 1 4 1 4 0 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4 1 4] ###Markdown LETS VISUALISE OUR RESULT ###Code # Visualising the clusters plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1], s = 100, c = 'red', label = 'Cluster 1') plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1], s = 100, c = 'blue', label = 'Cluster 2') plt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1], s = 100, c = 'green', label = 'Cluster 3') plt.scatter(x[y_kmeans == 3, 0], x[y_kmeans == 3, 1], s = 100, c = 'cyan', label = 'Cluster 4') plt.scatter(x[y_kmeans == 4, 0], x[y_kmeans == 4, 1], s = 100, c = 'magenta', label = 'Cluster 5') plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s = 300, c = 'yellow', label = 'Centroids') plt.title('Clusters of customers') plt.xlabel('Annual Income (k$)') plt.ylabel('Spending Score (1-100)') plt.legend() plt.show() ###Output _____no_output_____
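Once a `KMeans` model has been fitted as above, new customers can be assigned to a cluster with `predict`, which is how the mall could score future customers. The sketch below uses synthetic stand-in data (two obvious income/spending blobs), not the real `Mall_Customers.csv` file, so the cluster indices it produces are illustrative only.

```python
# Sketch: assigning a new customer to a fitted KMeans cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(42)
# Two well-separated blobs: low income/low spending, high income/high spending.
low = rng.normal(20, 2, size=(50, 2))
high = rng.normal(90, 2, size=(50, 2))
X = np.vstack([low, high])

km = KMeans(n_clusters=2, init='k-means++', n_init=10, random_state=42).fit(X)

# A new customer with annual income ~88 and spending score ~92 should land
# in the same cluster as the "high" blob.
new_customer = np.array([[88, 92]])
print("assigned cluster:", km.predict(new_customer)[0])
```

On the real data the same call would answer the project's questions operationally: score each incoming customer and target the marketing effort at the cluster they fall into.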
assignments/assignment04/TheoryAndPracticeEx02.ipynb
###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. ###Code # Add your filename and uncomment the following line: Image(filename='graph2.JPG') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. ###Code Image(filename='PythonImage2.png') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. 
###Code # Add your filename and uncomment the following line: Image(filename='surface-temperatures.png') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. ###Code # Add your filename and uncomment the following line: Image(filename='Assignment04b.png') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. 
###Code Image(filename='141027150129-gender-gap-infographic-1024x576.jpg') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. ###Code Image(filename='infograph2.png') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. 
###Code # Add your filename and uncomment the following line: Image(filename='TheoryAndPracticeEx02graph.png') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. ###Code # Add your filename and uncomment the following line: # Image(filename='yourfile.png') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. 
###Code # Add your filename and uncomment the following line. # Note: only the last bare expression in a cell is auto-rendered, so wrap each image in display(): from IPython.display import display display(Image(filename='confusing graph part 1.PNG')) display(Image(filename='confusing graph part 2.PNG')) display(Image(filename='confusing graph part 3.PNG')) display(Image(filename='confusing graph part 4.PNG')) ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. ###Code Image(filename='tax-dollars.jpg') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. 
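A minimal, self-contained sketch of the multi-image pattern above: in a notebook cell, only the last bare expression is auto-displayed, so each `Image` must be passed to `IPython.display.display` explicitly. A tiny base64-encoded 1x1 PNG stands in for the uploaded screenshot files, which is an assumption for illustration; in the exercise you would pass your own `filename=` instead.

```python
import base64
from IPython.display import Image, display

# 1x1 placeholder PNG so the sketch runs without any uploaded files
# (hypothetical stand-in for e.g. 'confusing graph part 1.PNG').
png_bytes = base64.b64decode(
    "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJ"
    "AAAADUlEQVR42mP8z8BQDwAEhQGAhKmMIQAAAABJRU5ErkJggg=="
)

# Four Image objects, as in the four-part screenshot above.
images = [Image(data=png_bytes) for _ in range(4)]

for img in images:
    display(img)  # display() renders every image, not just the last expression
```

Outside a running kernel, `display` falls back to printing the objects' text representations, so the loop is safe to run in plain Python as well.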
###Code Image(filename='food-oil-graph.png') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. ###Code # Add your filename and uncomment the following line: # Image(filename='yourfile.png') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. 
###Code Image(filename='foxnewsgraph.png') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. ###Code # Add your filename and uncomment the following line: Image(filename='bad graph.jpg') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. 
###Code Image(filename='badgraphic.png') # Second picture should be attached now ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. ###Code # Add your filename and uncomment the following line: # Image(filename='yourfile.png') Image('netfli.png') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. 
###Code from IPython.display import Image Image(filename='How-Every-UFC-Fight-Ended-in-2013-no-background-4x3.png', width='100%') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. I received a zero for the section where I was supposed to upload an image, but the image I described below, where my description got a perfect score, shows up when I run this code. ###Code # Add your filename and uncomment the following line: Image(filename='bad data viz.png') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. 
###Code # Add your filename and uncomment the following line: Image(filename='150420092841-premarket-stocks-trading-780x439.png') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. ###Code # Add your filename and uncomment the following line: Image(filename='graph2.jpg') ###Output _____no_output_____ ###Markdown Theory and Practice of Visualization Exercise 2 Imports ###Code from IPython.display import Image ###Output _____no_output_____ ###Markdown Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a *negative* example of the principles that Tufte describes in *The Visual Display of Quantitative Information*.* [CNN](http://www.cnn.com/)* [Fox News](http://www.foxnews.com/)* [Time](http://time.com/)Upload the image for the visualization to this directory and display the image inline in this notebook. ###Code # Add your filename and uncomment the following line: Image(filename='StockPicture.png') ###Output _____no_output_____
modules/layer_wise_learning_rate.ipynb
###Markdown Layer-wise learning rate settingsIn this tutorial, we show how to select or filter network layers and set layer-specific learning rate values for transfer learning. MONAI provides a utility function for these requirements: `generate_param_groups`, for example:```pynet = Unet(dimensions=3, in_channels=1, out_channels=3, channels=[2, 2, 2], strides=[1, 1, 1])print(net) # print out network components to select expected itemsprint(net.named_parameters()) # print out all the named parameters to filter out expected itemsparams = generate_param_groups( network=net, layer_matches=[lambda x: x.model[-1], lambda x: "conv.weight" in x], match_types=["select", "filter"], lr_values=[1e-2, 1e-3],)optimizer = torch.optim.Adam(params, 1e-4)```[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/layer_wise_learning_rate.ipynb) Setup environment ###Code !python -c "import monai" || pip install -q "monai[pillow, ignite, tqdm]" !python -c "import matplotlib" || pip install -q matplotlib %matplotlib inline from monai.transforms import ( AddChanneld, Compose, LoadImaged, ScaleIntensityd, ToTensord, ) from monai.optimizers import generate_param_groups from monai.networks.nets import densenet121 from monai.inferers import SimpleInferer from monai.handlers import StatsHandler from monai.engines import SupervisedTrainer from monai.data import DataLoader from monai.config import print_config from monai.apps import MedNISTDataset import torch import matplotlib.pyplot as plt from ignite.engine import Engine, Events from ignite.metrics import Accuracy import tempfile import sys import shutil import os import logging ###Output _____no_output_____ ###Markdown Setup imports ###Code # Copyright 2020 MONAI Consortium # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. print_config() ###Output MONAI version: 0.4.0+35.g6adbcde Numpy version: 1.19.5 Pytorch version: 1.7.1 MONAI flags: HAS_EXT = False, USE_COMPILED = False MONAI rev id: 6adbcdee45c16f18f5b713575af3410437177311 Optional dependencies: Pytorch Ignite version: 0.4.2 Nibabel version: 3.2.1 scikit-image version: 0.18.1 Pillow version: 7.0.0 Tensorboard version: 2.4.0 gdown version: 3.12.2 TorchVision version: 0.8.2 ITK version: 5.1.2 tqdm version: 4.51.0 lmdb version: 1.0.0 psutil version: 5.8.0 For details about installing the optional dependencies, please visit: https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies ###Markdown Setup data directoryYou can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used. ###Code directory = os.environ.get("MONAI_DATA_DIRECTORY") root_dir = tempfile.mkdtemp() if directory is None else directory print(root_dir) ###Output /workspace/data/medical ###Markdown Setup logging ###Code logging.basicConfig(stream=sys.stdout, level=logging.INFO) ###Output _____no_output_____ ###Markdown Create training experiment with MedNISTDataset and workflowThe MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions), [the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4), and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).
Set up pre-processing transforms ###Code transform = Compose( [ LoadImaged(keys="image"), AddChanneld(keys="image"), ScaleIntensityd(keys="image"), ToTensord(keys="image"), ] ) ###Output _____no_output_____ ###Markdown Create MedNISTDataset for training`MedNISTDataset` inherits from MONAI `CacheDataset` and provides rich parameters to automatically download and extract the dataset, and it acts as a normal PyTorch Dataset with a cache mechanism. ###Code train_ds = MedNISTDataset( root_dir=root_dir, transform=transform, section="training", download=True) # the dataset can work seamlessly with the pytorch native dataset loader, # but using monai.data.DataLoader has additional benefits of multi-process # random-seed handling and customized collate functions train_loader = DataLoader(train_ds, batch_size=300, shuffle=True, num_workers=10) ###Output Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d. file /workspace/data/medical/MedNIST.tar.gz exists, skip downloading. extracted file /workspace/data/medical/MedNIST exists, skip extracting. ###Markdown Pick images from MedNISTDataset to visualize and check ###Code plt.subplots(3, 3, figsize=(8, 8)) for i in range(9): plt.subplot(3, 3, i + 1) plt.imshow(train_ds[i * 5000]["image"][0].detach().cpu(), cmap="gray") plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown Create training components - device, network, loss function ###Code device = torch.device("cuda:0") net = densenet121(pretrained=True, progress=False, spatial_dims=2, in_channels=1, out_channels=6).to(device) loss = torch.nn.CrossEntropyLoss() ###Output _____no_output_____ ###Markdown Set different learning rate values for layersPlease refer to the appendix at the end of this notebook for the layers of `DenseNet121`.1. Set LR=1e-3 for the selected `class_layers` block.2. Set LR=1e-4 for convolution layers, based on the filter where `conv.weight` is in the layer name.3. Set LR=1e-5 for all other layers.
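The numbered scheme above can be sketched without MONAI or PyTorch: partition parameters by name-matching rules, give each matched group its own learning rate, and let unmatched parameters fall through to a default group. This is a minimal illustration of the "filter"-style matching, not the actual `generate_param_groups` implementation, and the helper and parameter names below are hypothetical.

```python
# Illustrative sketch (not MONAI's implementation): partition parameters
# by name-matching rules; unmatched parameters fall into a default group
# that will pick up the optimizer's base learning rate.
def partition_params(named_params, rules):
    """named_params: mapping name -> parameter; rules: list of (predicate, lr)."""
    groups = [{"params": [], "lr": lr} for _, lr in rules]
    default = {"params": []}  # no "lr" key: uses the optimizer default
    for name, param in named_params.items():
        for group, (predicate, _) in zip(groups, rules):
            if predicate(name):
                group["params"].append(param)
                break
        else:  # no rule matched this parameter
            default["params"].append(param)
    return groups + [default]

# Hypothetical parameter names, loosely modeled on the DenseNet layout:
named_params = {
    "class_layers.out.weight": "w_cls",
    "features.conv0.weight": "w_conv",
    "features.norm0.weight": "w_norm",
}
groups = partition_params(
    named_params,
    rules=[
        (lambda n: n.startswith("class_layers"), 1e-3),
        (lambda n: "conv" in n and n.endswith("weight"), 1e-4),
    ],
)
# groups[0] holds the class_layers parameter at lr=1e-3, groups[1] the
# conv weight at lr=1e-4, and groups[2] everything else (no "lr" key).
```

A list shaped like this is exactly what `torch.optim.Adam(params, 1e-5)` accepts: per-group `lr` entries override the base learning rate passed to the optimizer, while groups without an `lr` key use it.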
###Code params = generate_param_groups( network=net, layer_matches=[lambda x: x.class_layers, lambda x: "conv.weight" in x], match_types=["select", "filter"], lr_values=[1e-3, 1e-4], ) ###Output _____no_output_____ ###Markdown Define the optimizer based on the parameter groups ###Code opt = torch.optim.Adam(params, 1e-5) ###Output _____no_output_____ ###Markdown Define the easiest training workflow and runUse MONAI SupervisedTrainer handlers to quickly set up a training workflow. ###Code trainer = SupervisedTrainer( device=device, max_epochs=5, train_data_loader=train_loader, network=net, optimizer=opt, loss_function=loss, inferer=SimpleInferer(), key_train_metric={ "train_acc": Accuracy( output_transform=lambda x: (x["pred"], x["label"])) }, train_handlers=StatsHandler( tag_name="train_loss", output_transform=lambda x: x["loss"]), ) ###Output _____no_output_____ ###Markdown Define an ignite handler to adjust LR at runtime ###Code class LrScheduler: def attach(self, engine: Engine) -> None: engine.add_event_handler(Events.EPOCH_COMPLETED, self) def __call__(self, engine: Engine) -> None: for i, param_group in enumerate(engine.optimizer.param_groups): if i == 0: param_group["lr"] *= 0.1 elif i == 1: param_group["lr"] *= 0.5 print("LR values of 3 parameter groups: ", [ g["lr"] for g in engine.optimizer.param_groups]) LrScheduler().attach(trainer) ###Output _____no_output_____ ###Markdown Execute the training ###Code trainer.run() ###Output INFO:ignite.engine.engine.SupervisedTrainer:Engine run resuming from iteration 0, epoch 0 until 1 epochs INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 1/157 -- train_loss: 1.8304 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 2/157 -- train_loss: 1.7026 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 3/157 -- train_loss: 1.6544 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 4/157 -- train_loss: 1.5711 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 5/157 --
train_loss: 1.4678 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 6/157 -- train_loss: 1.3817 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 7/157 -- train_loss: 1.3329 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 8/157 -- train_loss: 1.2512 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 9/157 -- train_loss: 1.1802 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 10/157 -- train_loss: 1.1098 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 11/157 -- train_loss: 1.0461 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 12/157 -- train_loss: 1.0530 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 13/157 -- train_loss: 0.9845 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 14/157 -- train_loss: 0.8910 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 15/157 -- train_loss: 0.9101 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 16/157 -- train_loss: 0.8881 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 17/157 -- train_loss: 0.7921 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 18/157 -- train_loss: 0.7147 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 19/157 -- train_loss: 0.6718 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 20/157 -- train_loss: 0.6844 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 21/157 -- train_loss: 0.6290 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 22/157 -- train_loss: 0.7280 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 23/157 -- train_loss: 0.6159 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 24/157 -- train_loss: 0.5863 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 25/157 -- train_loss: 0.5241 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 26/157 -- train_loss: 0.5349 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 27/157 -- 
train_loss: 0.4609 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 28/157 -- train_loss: 0.4156 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 29/157 -- train_loss: 0.4626 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 30/157 -- train_loss: 0.4041 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 31/157 -- train_loss: 0.3877 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 32/157 -- train_loss: 0.3813 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 33/157 -- train_loss: 0.4080 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 34/157 -- train_loss: 0.3451 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 35/157 -- train_loss: 0.3205 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 36/157 -- train_loss: 0.3162 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 37/157 -- train_loss: 0.3033 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 38/157 -- train_loss: 0.2810 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 39/157 -- train_loss: 0.2591 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 40/157 -- train_loss: 0.2244 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 41/157 -- train_loss: 0.2436 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 42/157 -- train_loss: 0.2618 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 43/157 -- train_loss: 0.2264 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 44/157 -- train_loss: 0.2437 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 45/157 -- train_loss: 0.2080 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 46/157 -- train_loss: 0.1706 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 47/157 -- train_loss: 0.2043 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 48/157 -- train_loss: 0.1685 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 49/157 
-- train_loss: 0.1606 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 50/157 -- train_loss: 0.2219 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 51/157 -- train_loss: 0.1919 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 52/157 -- train_loss: 0.1748 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 53/157 -- train_loss: 0.1301 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 54/157 -- train_loss: 0.1477 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 55/157 -- train_loss: 0.1473 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 56/157 -- train_loss: 0.1911 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 57/157 -- train_loss: 0.1643 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 58/157 -- train_loss: 0.1959 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 59/157 -- train_loss: 0.1822 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 60/157 -- train_loss: 0.1220 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 61/157 -- train_loss: 0.1198 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 62/157 -- train_loss: 0.1211 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 63/157 -- train_loss: 0.1026 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 64/157 -- train_loss: 0.0996 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 65/157 -- train_loss: 0.1122 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 66/157 -- train_loss: 0.1218 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 67/157 -- train_loss: 0.0756 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 68/157 -- train_loss: 0.1237 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 69/157 -- train_loss: 0.1331 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 70/157 -- train_loss: 0.1379 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 
71/157 -- train_loss: 0.0877 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 72/157 -- train_loss: 0.0932 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 73/157 -- train_loss: 0.0888 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 74/157 -- train_loss: 0.1115 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 75/157 -- train_loss: 0.0967 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 76/157 -- train_loss: 0.0718 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 77/157 -- train_loss: 0.0881 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 78/157 -- train_loss: 0.0954 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 79/157 -- train_loss: 0.1165 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 80/157 -- train_loss: 0.0679 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 81/157 -- train_loss: 0.0691 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 82/157 -- train_loss: 0.0555 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 83/157 -- train_loss: 0.0829 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 84/157 -- train_loss: 0.0530 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 85/157 -- train_loss: 0.0945 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 86/157 -- train_loss: 0.0960 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 87/157 -- train_loss: 0.0897 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 88/157 -- train_loss: 0.0553 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 89/157 -- train_loss: 0.0509 ###Markdown Cleanup data directoryRemove directory if a temporary was used. 
###Code if directory is None: shutil.rmtree(root_dir) ###Output _____no_output_____ ###Markdown Appendix: layers of DenseNet 121 network ###Code print(net) ###Output DenseNet( (features): Sequential( (conv0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (norm0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu0): ReLU(inplace=True) (pool0): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (denseblock1): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(96, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition1): _Transition( (norm): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock2): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition2): _Transition( (norm): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock3): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, 
eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer17): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer18): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer19): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer20): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer21): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer22): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer23): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer24): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition3): _Transition( (norm): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock4): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (norm5): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (class_layers): Sequential( (relu): ReLU(inplace=True) (pool): AdaptiveAvgPool2d(output_size=1) (flatten): Flatten(start_dim=1, end_dim=-1) (out): Linear(in_features=1024, out_features=6, bias=True) ) ) 
###Markdown Layer-wise learning rate settings
In this tutorial, we show how to select or filter out network layers and set specific learning rate values for transfer learning. MONAI provides a utility function for this requirement: `generate_param_groups`. For example:
```py
net = Unet(dimensions=3, in_channels=1, out_channels=3, channels=[2, 2, 2], strides=[1, 1, 1])
print(net)  # print out network components to select expected items
print(net.named_parameters())  # print out all the named parameters to filter out expected items
params = generate_param_groups(
    network=net,
    layer_matches=[lambda x: x.model[-1], lambda x: "conv.weight" in x],
    match_types=["select", "filter"],
    lr_values=[1e-2, 1e-3],
)
optimizer = torch.optim.Adam(params, 1e-4)
```
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/layer_wise_learning_rate.ipynb)

Setup environment

###Code
%pip install -q "monai[pillow, ignite, tqdm]"
%pip install -q matplotlib
%matplotlib inline

###Output
Note: you may need to restart the kernel to use updated packages.

###Markdown Setup imports

###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
import shutil
import sys
import tempfile

from ignite.metrics import Accuracy
from ignite.engine import Engine, Events
import matplotlib.pyplot as plt
import torch

from monai.apps import MedNISTDataset
from monai.config import print_config
from monai.data import DataLoader
from monai.engines import SupervisedTrainer
from monai.handlers import StatsHandler
from monai.inferers import SimpleInferer
from monai.networks.nets import densenet121
from monai.optimizers import generate_param_groups
from monai.transforms import (
    AddChanneld,
    Compose,
    LoadImaged,
    ScaleIntensityd,
    ToTensord,
)

print_config()

###Output
MONAI version: 0.4.0
Numpy version: 1.19.1
Pytorch version: 1.7.0a0+7036e91
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 0563a4467fa602feca92d91c7f47261868d171a1

Optional dependencies:
Pytorch Ignite version: 0.4.2
Nibabel version: 3.2.1
scikit-image version: 0.15.0
Pillow version: 7.0.0
Tensorboard version: 2.2.0
gdown version: 3.12.2
TorchVision version: 0.8.0a0
ITK version: 5.1.0
tqdm version: 4.54.1
lmdb version: 1.0.0
psutil version: 5.7.2

For details about installing the optional dependencies, please visit:
    https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies

###Markdown Setup data directory
You can specify a directory with the MONAI_DATA_DIRECTORY environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)

###Output
/workspace/data/medical

###Markdown Setup logging

###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)

###Output
_____no_output_____

###Markdown Create training experiment with MedNISTDataset and workflow
The MedNIST dataset was gathered from several datasets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions), [the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4), and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).

Set up pre-processing transforms

###Code
transform = Compose(
    [
        LoadImaged(keys="image"),
        AddChanneld(keys="image"),
        ScaleIntensityd(keys="image"),
        ToTensord(keys="image"),
    ]
)

###Output
_____no_output_____

###Markdown Create MedNISTDataset for training
`MedNISTDataset` inherits from the MONAI `CacheDataset`, provides rich parameters to automatically download and extract the dataset, and acts as a normal PyTorch Dataset with a caching mechanism.

###Code
train_ds = MedNISTDataset(root_dir=root_dir, transform=transform, section="training", download=True)

# the dataset can work seamlessly with the PyTorch native DataLoader,
# but using monai.data.DataLoader has the additional benefits of
# multi-process random seed handling and customized collate functions
train_loader = DataLoader(train_ds, batch_size=300, shuffle=True, num_workers=10)

###Output
Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.
file /workspace/data/medical/MedNIST.tar.gz exists, skip downloading.
extracted file /workspace/data/medical/MedNIST exists, skip extracting.
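###Markdown The caching behavior described above can be illustrated with a minimal plain-Python sketch. This is a hypothetical simplification, not MONAI's `CacheDataset` implementation (which, among other things, only caches the deterministic part of a transform chain): a deterministic transform is applied once per item, and later epochs are served from the cache.

```python
class TinyCacheDataset:
    """Minimal sketch of a caching dataset: apply a deterministic
    transform once per item and serve cached results afterwards."""

    def __init__(self, data, transform):
        self.data = data
        self.transform = transform
        self.cache = {}  # index -> transformed item

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        # compute and store the transformed item on first access only
        if index not in self.cache:
            self.cache[index] = self.transform(self.data[index])
        return self.cache[index]


calls = 0

def expensive_transform(x):
    # stand-in for loading and pre-processing an image
    global calls
    calls += 1
    return x * 2

ds = TinyCacheDataset([1, 2, 3], expensive_transform)
for _ in range(2):  # two "epochs" over the dataset
    items = [ds[i] for i in range(len(ds))]

print(items)  # [2, 4, 6]
print(calls)  # 3 -- the transform ran once per item, not once per epoch
```

The second pass over `ds` never re-runs the transform, which is the reason cached datasets speed up training when pre-processing dominates the iteration time.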
###Markdown Pick images from MedNISTDataset to visualize and check

###Code
plt.subplots(3, 3, figsize=(8, 8))
for i in range(9):
    plt.subplot(3, 3, i + 1)
    plt.imshow(train_ds[i * 5000]["image"][0].detach().cpu(), cmap="gray")
plt.tight_layout()
plt.show()

###Output
_____no_output_____

###Markdown Create training components - device, network, loss function

###Code
device = torch.device("cuda:0")
net = densenet121(pretrained=True, progress=False, spatial_dims=2, in_channels=1, out_channels=6).to(device)
loss = torch.nn.CrossEntropyLoss()

###Output
Downloading: "https://download.pytorch.org/models/densenet121-a639ec97.pth" to /root/.cache/torch/hub/checkpoints/densenet121-a639ec97.pth

###Markdown Set different learning rate values for layers
Please refer to the appendix at the end of this notebook for the layers of `DenseNet121`.
1. Set LR=1e-3 for the selected `class_layers` block.
2. Set LR=1e-4 for the convolution layers, filtered by `conv.weight` in the layer name.
3. Set LR=1e-5 (the optimizer default) for all other layers.

###Code
params = generate_param_groups(
    network=net,
    layer_matches=[lambda x: x.class_layers, lambda x: "conv.weight" in x],
    match_types=["select", "filter"],
    lr_values=[1e-3, 1e-4],
)

###Output
_____no_output_____

###Markdown Define the optimizer based on the parameter groups

###Code
opt = torch.optim.Adam(params, 1e-5)

###Output
_____no_output_____

###Markdown Define a simple training workflow and run it
Use the MONAI `SupervisedTrainer` and handlers to quickly set up a training workflow.
###Code
trainer = SupervisedTrainer(
    device=device,
    max_epochs=5,
    train_data_loader=train_loader,
    network=net,
    optimizer=opt,
    loss_function=loss,
    inferer=SimpleInferer(),
    key_train_metric={
        "train_acc": Accuracy(output_transform=lambda x: (x["pred"], x["label"]))
    },
    train_handlers=StatsHandler(tag_name="train_loss", output_transform=lambda x: x["loss"]),
)

###Output
_____no_output_____

###Markdown Define an Ignite handler to adjust the LR at runtime

###Code
class LrScheduler:
    def attach(self, engine: Engine) -> None:
        # run this handler at the end of every epoch
        engine.add_event_handler(Events.EPOCH_COMPLETED, self)

    def __call__(self, engine: Engine) -> None:
        for i, param_group in enumerate(engine.optimizer.param_groups):
            if i == 0:
                # decay the LR of the selected `class_layers` group by 10x
                param_group["lr"] *= 0.1
            elif i == 1:
                # decay the LR of the filtered convolution layers by 2x
                param_group["lr"] *= 0.5
        print("LR values of 3 parameter groups: ", [g["lr"] for g in engine.optimizer.param_groups])

LrScheduler().attach(trainer)

###Output
_____no_output_____

###Markdown Execute the training

###Code
trainer.run()

###Output
_____no_output_____

###Markdown Cleanup data directory
Remove the directory if a temporary one was used.
###Code if directory is None: shutil.rmtree(root_dir) ###Output _____no_output_____ ###Markdown Appendix: layers of DenseNet 121 network ###Code print(net) ###Output DenseNet( (features): Sequential( (conv0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (norm0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu0): ReLU(inplace=True) (pool0): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (denseblock1): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(96, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition1): _Transition( (norm): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock2): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition2): _Transition( (norm): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock3): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, 
eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer17): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer18): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer19): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer20): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer21): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer22): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer23): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer24): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition3): _Transition( (norm): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock4): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (norm5): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (class_layers): Sequential( (relu): ReLU(inplace=True) (pool): AdaptiveAvgPool2d(output_size=1) (flatten): Flatten(start_dim=1, end_dim=-1) (out): Linear(in_features=1024, out_features=6, bias=True) ) ) 
###Markdown Layer wise learning rate settings

In this tutorial, we introduce how to select or filter network layers and set specific learning rate values for transfer learning. MONAI provides a utility function to achieve this requirement: `generate_param_groups`, for example:

```py
net = Unet(spatial_dims=3, in_channels=1, out_channels=3, channels=[2, 2, 2], strides=[1, 1, 1])
print(net)  # print out network components to select expected items
print(net.named_parameters())  # print out all the named parameters to filter out expected items
params = generate_param_groups(
    network=net,
    layer_matches=[lambda x: x.model[0], lambda x: "2.0.conv" in x[0]],
    match_types=["select", "filter"],
    lr_values=[1e-2, 1e-3],
)
optimizer = torch.optim.Adam(params, 1e-4)
```

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/main/modules/layer_wise_learning_rate.ipynb)

Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[pillow, ignite, tqdm]"
!python -c "import matplotlib" || pip install -q matplotlib
%matplotlib inline

from monai.transforms import (
    AddChanneld,
    Compose,
    LoadImaged,
    ScaleIntensityd,
    EnsureTyped,
)
from monai.optimizers import generate_param_groups
from monai.networks.nets import DenseNet121
from monai.inferers import SimpleInferer
from monai.handlers import StatsHandler, from_engine
from monai.engines import SupervisedTrainer
from monai.data import DataLoader
from monai.config import print_config
from monai.apps import MedNISTDataset
import torch
import matplotlib.pyplot as plt
from ignite.engine import Engine, Events
from ignite.metrics import Accuracy
import tempfile
import sys
import shutil
import os
import logging
###Output
_____no_output_____
###Markdown Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

print_config()
###Output
MONAI version: 0.6.0rc1+23.gc6793fd0
Numpy version: 1.20.3
Pytorch version: 1.9.0a0+c3d40fd
MONAI flags: HAS_EXT = True, USE_COMPILED = False
MONAI rev id: c6793fd0f316a448778d0047664aaf8c1895fe1c
Optional dependencies:
Pytorch Ignite version: 0.4.5
Nibabel version: 3.2.1
scikit-image version: 0.15.0
Pillow version: 7.0.0
Tensorboard version: 2.5.0
gdown version: 3.13.0
TorchVision version: 0.10.0a0
ITK version: 5.1.2
tqdm version: 4.53.0
lmdb version: 1.2.1
psutil version: 5.8.0
pandas version: 1.1.4
einops version: 0.3.0
For details about installing the optional dependencies, please visit:
    https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown Setup data directory

You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output
_____no_output_____
###Markdown Setup logging
###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output
_____no_output_____
###Markdown Create training experiment with MedNISTDataset and workflow

The MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions), [the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4), and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).

Set up pre-processing transforms
###Code
transform = Compose(
    [
        LoadImaged(keys="image"),
        AddChanneld(keys="image"),
        ScaleIntensityd(keys="image"),
        EnsureTyped(keys="image"),
    ]
)
###Output
_____no_output_____
###Markdown Create MedNISTDataset for training

`MedNISTDataset` inherits from MONAI `CacheDataset` and provides rich parameters to automatically download and extract the dataset; it acts as a normal PyTorch Dataset with a caching mechanism.
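As an aside, the cache-on-access idea can be illustrated with a small standalone sketch. This is plain Python with no MONAI; the class and its transform are invented for illustration, and note that the real `CacheDataset` pre-computes its cache of deterministic transform results up front rather than lazily:

```python
# Minimal sketch of a caching dataset: transformed items are computed once
# and served from the cache on later accesses. (Illustrative only; real
# CacheDataset caches at construction time, not on first access.)
class CachingDataset:
    def __init__(self, data, transform):
        self.data = data
        self.transform = transform
        self.cache = {}          # index -> transformed item
        self.transform_calls = 0

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        if index not in self.cache:
            self.transform_calls += 1
            self.cache[index] = self.transform(self.data[index])
        return self.cache[index]


ds = CachingDataset(data=[1, 2, 3], transform=lambda x: x * 10)
first = [ds[i] for i in range(len(ds))]   # transforms run here
second = [ds[i] for i in range(len(ds))]  # served from the cache
print(first, second, ds.transform_calls)  # -> [10, 20, 30] [10, 20, 30] 3
```

Accessing each item a second time does not re-run the transform, which is why caching pays off when the same deterministic pre-processing is repeated every epoch.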
###Code
train_ds = MedNISTDataset(root_dir=root_dir, transform=transform, section="training", download=True)

# the dataset can work seamlessly with the PyTorch native dataset loader,
# but using monai.data.DataLoader has the additional benefits of multi-process
# random-seed handling and customized collate functions
train_loader = DataLoader(train_ds, batch_size=300, shuffle=True, num_workers=10)
###Output
_____no_output_____
###Markdown Pick images from MedNISTDataset to visualize and check
###Code
plt.subplots(3, 3, figsize=(8, 8))
for i in range(9):
    plt.subplot(3, 3, i + 1)
    plt.imshow(train_ds[i * 5000]["image"][0].detach().cpu(), cmap="gray")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown Create training components - device, network, loss function
###Code
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net = DenseNet121(pretrained=True, progress=False, spatial_dims=2, in_channels=1, out_channels=6).to(device)
loss = torch.nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown Set different learning rate values for layers

Please refer to the appendix at the end of this notebook for the layers of `DenseNet121`.

1. Set LR=1e-3 for the selected `class_layers` block.
2. Set LR=1e-4 for convolution layers, filtered by `conv.weight` appearing in the layer name.
3. Set LR=1e-5 for all other layers.
###Code
params = generate_param_groups(
    network=net,
    layer_matches=[lambda x: x.class_layers, lambda x: "conv.weight" in x[0]],
    match_types=["select", "filter"],
    lr_values=[1e-3, 1e-4],
)
###Output
_____no_output_____
###Markdown Define the optimizer based on the parameter groups
###Code
opt = torch.optim.Adam(params, 1e-5)
###Output
_____no_output_____
###Markdown Define the simplest training workflow and run

Use the MONAI `SupervisedTrainer` and handlers to quickly set up a training workflow.
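To make the effect of the `select`/`filter` match types above concrete, here is a toy, framework-free sketch of the grouping idea. The helper `group_params` and the parameter names below are invented for illustration; `generate_param_groups` itself operates on `torch.nn` modules and returns groups consumable by `torch.optim`:

```python
# Toy sketch of "select"/"filter" parameter grouping: explicitly selected
# names go to the first group, names matching a predicate go to the second,
# everything else falls back to a default-LR group.
def group_params(named_params, selected_names, filter_fn, lr_values, default_lr):
    groups = [{"params": [], "lr": lr} for lr in lr_values]
    default_group = {"params": [], "lr": default_lr}
    for name, value in named_params:
        if name in selected_names:        # "select": explicit block match
            groups[0]["params"].append(name)
        elif filter_fn((name, value)):    # "filter": predicate on (name, value)
            groups[1]["params"].append(name)
        else:                             # remaining parameters, default LR
            default_group["params"].append(name)
    return groups + [default_group]


named = [("class_layers.out", 0), ("features.conv0.weight", 0), ("features.norm0", 0)]
groups = group_params(
    named,
    selected_names={"class_layers.out"},
    filter_fn=lambda p: "conv" in p[0] and p[0].endswith("weight"),
    lr_values=[1e-3, 1e-4],
    default_lr=1e-5,
)
print([(g["params"], g["lr"]) for g in groups])
```

The result mirrors the three groups used in this tutorial: one at LR=1e-3, one at LR=1e-4, and a catch-all at the optimizer default of 1e-5.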
###Code
trainer = SupervisedTrainer(
    device=device,
    max_epochs=5,
    train_data_loader=train_loader,
    network=net,
    optimizer=opt,
    loss_function=loss,
    inferer=SimpleInferer(),
    key_train_metric={
        "train_acc": Accuracy(output_transform=from_engine(["pred", "label"]))
    },
    train_handlers=StatsHandler(
        tag_name="train_loss", output_transform=from_engine(["loss"], first=True)),
)
###Output
_____no_output_____
###Markdown Define an Ignite handler to adjust the LR at runtime
###Code
class LrScheduler:
    def attach(self, engine: Engine) -> None:
        engine.add_event_handler(Events.EPOCH_COMPLETED, self)

    def __call__(self, engine: Engine) -> None:
        for i, param_group in enumerate(engine.optimizer.param_groups):
            if i == 0:
                param_group["lr"] *= 0.1
            elif i == 1:
                param_group["lr"] *= 0.5
        print("LR values of 3 parameter groups: ", [
            g["lr"] for g in engine.optimizer.param_groups])


LrScheduler().attach(trainer)
###Output
_____no_output_____
###Markdown Execute the training
###Code
trainer.run()
###Output
_____no_output_____
###Markdown Cleanup data directory

Remove the directory if a temporary one was used.
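The per-group decay applied by the handler above can be checked in isolation with plain Python. The `param_groups` list of dicts below mimics `optimizer.param_groups` after `generate_param_groups` plus `Adam(params, 1e-5)`; this is a sketch, not Ignite code:

```python
import math

# Stand-in for optimizer.param_groups: three groups with their initial LRs.
param_groups = [{"lr": 1e-3}, {"lr": 1e-4}, {"lr": 1e-5}]

def on_epoch_completed(groups):
    # mirrors the LrScheduler handler: decay group 0 by 10x, group 1 by 2x,
    # and leave the remaining (default) group unchanged
    for i, group in enumerate(groups):
        if i == 0:
            group["lr"] *= 0.1
        elif i == 1:
            group["lr"] *= 0.5

on_epoch_completed(param_groups)
print([g["lr"] for g in param_groups])
```

After one simulated epoch the LRs become roughly 1e-4, 5e-5, and 1e-5, so the selected `class_layers` block cools down fastest while the default group keeps its rate.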
###Code if directory is None: shutil.rmtree(root_dir) ###Output _____no_output_____ ###Markdown Appendix: layers of DenseNet 121 network ###Code print(net) ###Output DenseNet121( (features): Sequential( (conv0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (norm0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu0): ReLU(inplace=True) (pool0): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (denseblock1): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(96, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition1): _Transition( (norm): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock2): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition2): _Transition( (norm): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock3): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, 
eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer17): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer18): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer19): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer20): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer21): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer22): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer23): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer24): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition3): _Transition( (norm): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock4): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (norm5): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (class_layers): Sequential( (relu): ReLU(inplace=True) (pool): AdaptiveAvgPool2d(output_size=1) (flatten): Flatten(start_dim=1, end_dim=-1) (out): Linear(in_features=1024, out_features=6, bias=True) ) ) 
###Markdown Layer-wise learning rate settings

In this tutorial, we show how to easily select or filter out network layers and set specific learning rate values for transfer learning. MONAI provides a utility function for this: `generate_param_groups`. For example:

```py
net = Unet(spatial_dims=3, in_channels=1, out_channels=3, channels=[2, 2, 2], strides=[1, 1, 1])
print(net)  # print out network components to select expected items
print(net.named_parameters())  # print out all the named parameters to filter out expected items
params = generate_param_groups(
    network=net,
    layer_matches=[lambda x: x.model[0], lambda x: "2.0.conv" in x[0]],
    match_types=["select", "filter"],
    lr_values=[1e-2, 1e-3],
)
optimizer = torch.optim.Adam(params, 1e-4)
```

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/layer_wise_learning_rate.ipynb)

Setup environment

###Code
!python -c "import monai" || pip install -q "monai-weekly[pillow, ignite, tqdm]"
!python -c "import matplotlib" || pip install -q matplotlib
%matplotlib inline

from monai.transforms import (
    AddChanneld,
    Compose,
    LoadImaged,
    ScaleIntensityd,
    EnsureTyped,
)
from monai.optimizers import generate_param_groups
from monai.networks.nets import DenseNet121
from monai.inferers import SimpleInferer
from monai.handlers import StatsHandler, from_engine
from monai.engines import SupervisedTrainer
from monai.data import DataLoader
from monai.config import print_config
from monai.apps import MedNISTDataset
import torch
import matplotlib.pyplot as plt
from ignite.engine import Engine, Events
from ignite.metrics import Accuracy
import tempfile
import sys
import shutil
import os
import logging
###Output
_____no_output_____
###Markdown Setup imports

###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the
License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

print_config()
###Output
MONAI version: 0.6.0rc1+23.gc6793fd0
Numpy version: 1.20.3
Pytorch version: 1.9.0a0+c3d40fd
MONAI flags: HAS_EXT = True, USE_COMPILED = False
MONAI rev id: c6793fd0f316a448778d0047664aaf8c1895fe1c

Optional dependencies:
Pytorch Ignite version: 0.4.5
Nibabel version: 3.2.1
scikit-image version: 0.15.0
Pillow version: 7.0.0
Tensorboard version: 2.5.0
gdown version: 3.13.0
TorchVision version: 0.10.0a0
ITK version: 5.1.2
tqdm version: 4.53.0
lmdb version: 1.2.1
psutil version: 5.8.0
pandas version: 1.1.4
einops version: 0.3.0

For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies

###Markdown Setup data directory

You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output
_____no_output_____
###Markdown Setup logging

###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output
_____no_output_____
###Markdown Create training experiment with MedNISTDataset and workflow

The MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions), [the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4), and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).

Set up pre-processing transforms

###Code
transform = Compose(
    [
        LoadImaged(keys="image"),
        AddChanneld(keys="image"),
        ScaleIntensityd(keys="image"),
        EnsureTyped(keys="image"),
    ]
)
###Output
_____no_output_____
###Markdown Create MedNISTDataset for training

`MedNISTDataset` inherits from MONAI `CacheDataset` and provides rich parameters to automatically download and extract the dataset; it acts as a normal PyTorch Dataset with a caching mechanism.
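The caching behaviour described above can be illustrated with a minimal, framework-free sketch: a dataset that runs its (deterministic) transform once per item and memoizes the result, so later epochs reuse the cached value. This is only a conceptual sketch — the `CachingDataset` class here is hypothetical, not MONAI's actual `CacheDataset` implementation:

```python
class CachingDataset:
    """Sketch of a dataset that caches transformed items.

    A deterministic transform runs once per index; later accesses
    return the memoized result, which is conceptually how a
    cache-backed dataset avoids re-running preprocessing each epoch.
    """

    def __init__(self, data, transform):
        self.data = data
        self.transform = transform
        self._cache = {}
        self.transform_calls = 0  # instrumentation for this sketch only

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        if index not in self._cache:
            self.transform_calls += 1
            self._cache[index] = self.transform(self.data[index])
        return self._cache[index]


ds = CachingDataset([1, 2, 3], transform=lambda x: x * 10)
first_epoch = [ds[i] for i in range(len(ds))]
second_epoch = [ds[i] for i in range(len(ds))]
print(first_epoch, ds.transform_calls)  # [10, 20, 30] 3 -- transform ran once per item
```

Note that in the real `CacheDataset`, only the deterministic head of the transform chain is cached; random transforms still run on every access.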
###Code
train_ds = MedNISTDataset(
    root_dir=root_dir, transform=transform, section="training", download=True)
# the dataset can work seamlessly with the PyTorch native dataset loader,
# but using monai.data.DataLoader has the additional benefits of multi-process
# random seed handling and customized collate functions
train_loader = DataLoader(train_ds, batch_size=300, shuffle=True, num_workers=10)
###Output
_____no_output_____
###Markdown Pick images from MedNISTDataset to visualize and check

###Code
plt.subplots(3, 3, figsize=(8, 8))
for i in range(9):
    plt.subplot(3, 3, i + 1)
    plt.imshow(train_ds[i * 5000]["image"][0].detach().cpu(), cmap="gray")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown Create training components - device, network, loss function

###Code
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net = DenseNet121(pretrained=True, progress=False,
                  spatial_dims=2, in_channels=1, out_channels=6).to(device)
loss = torch.nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown Set different learning rate values for layers

Please refer to the appendix at the end of this notebook for the layers of `DenseNet121`.

1. Set LR=1e-3 for the selected `class_layers` block.
2. Set LR=1e-4 for convolution layers, based on the filter that checks whether `conv.weight` is in the layer name.
3. Set LR=1e-5 for all other layers.

###Code
params = generate_param_groups(
    network=net,
    layer_matches=[lambda x: x.class_layers, lambda x: "conv.weight" in x[0]],
    match_types=["select", "filter"],
    lr_values=[1e-3, 1e-4],
)
###Output
_____no_output_____
###Markdown Define a minimal training workflow and run it

Use MONAI SupervisedTrainer handlers to quickly set up a training workflow.
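The `params` object built above follows the standard PyTorch optimizer parameter-group format: a list of dicts, each carrying its own `params` and `lr`, while parameters matched by no rule fall back to the LR passed to the optimizer (1e-5 here). Below is a framework-free sketch of that kind of partitioning; the `partition_params` helper and the parameter names are illustrative, not MONAI's implementation:

```python
def partition_params(named_params, matchers, lr_values, default_lr):
    """Partition (name, param) pairs into optimizer-style groups.

    Each matcher is a predicate over the parameter name; the first
    matching rule claims the parameter, and unmatched parameters
    fall into a trailing group with the default learning rate.
    """
    groups = [{"params": [], "lr": lr} for lr in lr_values]
    default_group = {"params": [], "lr": default_lr}
    for name, param in named_params:
        for group, matcher in zip(groups, matchers):
            if matcher(name):
                group["params"].append(param)
                break
        else:  # no matcher claimed this parameter
            default_group["params"].append(param)
    return groups + [default_group]


# Hypothetical parameter names echoing the DenseNet121 layout above.
named = [
    ("class_layers.out.weight", "w0"),
    ("features.conv0.weight", "w1"),
    ("features.norm0.weight", "w2"),
]
groups = partition_params(
    named,
    matchers=[lambda n: n.startswith("class_layers"), lambda n: "conv" in n],
    lr_values=[1e-3, 1e-4],
    default_lr=1e-5,
)
print([(g["lr"], g["params"]) for g in groups])
# → [(0.001, ['w0']), (0.0001, ['w1']), (1e-05, ['w2'])]
```

A list of such dicts is exactly what `torch.optim.Adam(params, 1e-5)` accepts in place of a flat parameter iterable.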
###Code
trainer = SupervisedTrainer(
    device=device,
    max_epochs=5,
    train_data_loader=train_loader,
    network=net,
    optimizer=opt,
    loss_function=loss,
    inferer=SimpleInferer(),
    key_train_metric={
        "train_acc": Accuracy(
            output_transform=from_engine(["pred", "label"]))
    },
    train_handlers=StatsHandler(
        tag_name="train_loss",
        output_transform=from_engine(["loss"], first=True)),
)
###Output
_____no_output_____
###Markdown Define an Ignite handler to adjust the LR at runtime

###Code
class LrScheduler:
    def attach(self, engine: Engine) -> None:
        engine.add_event_handler(Events.EPOCH_COMPLETED, self)

    def __call__(self, engine: Engine) -> None:
        for i, param_group in enumerate(engine.optimizer.param_groups):
            if i == 0:
                param_group["lr"] *= 0.1
            elif i == 1:
                param_group["lr"] *= 0.5
        print("LR values of 3 parameter groups: ", [
            g["lr"] for g in engine.optimizer.param_groups])


LrScheduler().attach(trainer)
###Output
_____no_output_____
###Markdown Execute the training

###Code
trainer.run()
###Output
_____no_output_____
###Markdown Cleanup data directory

Remove the directory if a temporary one was used.
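For reference, the `LrScheduler` handler defined earlier multiplies group 0's LR by 0.1 and group 1's by 0.5 at every epoch end, leaving the last group unchanged — geometric decay at group-specific speeds. The resulting schedule can be traced without running any training; `decayed_lrs` is a hypothetical helper, not part of MONAI or Ignite:

```python
def decayed_lrs(initial_lrs, factors, epochs):
    """Apply per-group multiplicative LR decay once per epoch."""
    lrs = list(initial_lrs)
    for _ in range(epochs):
        lrs = [lr * factor for lr, factor in zip(lrs, factors)]
    return lrs


# Groups start at 1e-3, 1e-4, 1e-5; group 0 decays by 0.1 per epoch,
# group 1 by 0.5, and group 2 keeps its rate (factor 1.0), matching
# the handler's behaviour over the 5 training epochs.
print(decayed_lrs([1e-3, 1e-4, 1e-5], [0.1, 0.5, 1.0], epochs=5))
```

After five epochs the first group's rate has shrunk to about 1e-8, so the selected `class_layers` block has effectively stopped adapting, while the filtered convolution weights still train at roughly 3.1e-6.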
###Code if directory is None: shutil.rmtree(root_dir) ###Output _____no_output_____ ###Markdown Appendix: layers of DenseNet 121 network ###Code print(net) ###Output DenseNet121( (features): Sequential( (conv0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (norm0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu0): ReLU(inplace=True) (pool0): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (denseblock1): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(96, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition1): _Transition( (norm): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock2): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition2): _Transition( (norm): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock3): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, 
eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer17): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer18): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer19): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer20): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer21): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer22): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer23): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer24): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition3): _Transition( (norm): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock4): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (norm5): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (class_layers): Sequential( (relu): ReLU(inplace=True) (pool): AdaptiveAvgPool2d(output_size=1) (flatten): Flatten(start_dim=1, end_dim=-1) (out): Linear(in_features=1024, out_features=6, bias=True) ) ) 
###Markdown Layer-wise learning rate settings

In this tutorial, we introduce how to easily select or filter out network layers and set specific learning rate values for transfer learning. MONAI provides a utility function for this requirement: `generate_param_groups`, for example:

```py
net = Unet(dimensions=3, in_channels=1, out_channels=3, channels=[2, 2, 2], strides=[1, 1, 1])
print(net)  # print out network components to select expected items
print(net.named_parameters())  # print out all the named parameters to filter out expected items
params = generate_param_groups(
    network=net,
    layer_matches=[lambda x: x.model[-1], lambda x: "conv.weight" in x],
    match_types=["select", "filter"],
    lr_values=[1e-2, 1e-3],
)
optimizer = torch.optim.Adam(params, 1e-4)
```

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/layer_wise_learning_rate.ipynb)

Setup environment

###Code
!python -c "import monai" || pip install -q "monai-weekly[pillow, ignite, tqdm]"
!python -c "import matplotlib" || pip install -q matplotlib
%matplotlib inline

from monai.transforms import (
    AddChanneld,
    Compose,
    LoadImaged,
    ScaleIntensityd,
    ToTensord,
)
from monai.optimizers import generate_param_groups
from monai.networks.nets import densenet121
from monai.inferers import SimpleInferer
from monai.handlers import StatsHandler
from monai.engines import SupervisedTrainer
from monai.data import DataLoader
from monai.config import print_config
from monai.apps import MedNISTDataset
import torch
import matplotlib.pyplot as plt
from ignite.engine import Engine, Events
from ignite.metrics import Accuracy
import tempfile
import sys
import shutil
import os
import logging
###Output _____no_output_____
###Markdown Setup imports

###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

print_config()
###Output MONAI version: 0.4.0+35.g6adbcde Numpy version: 1.19.5 Pytorch version: 1.7.1 MONAI flags: HAS_EXT = False, USE_COMPILED = False MONAI rev id: 6adbcdee45c16f18f5b713575af3410437177311 Optional dependencies: Pytorch Ignite version: 0.4.2 Nibabel version: 3.2.1 scikit-image version: 0.18.1 Pillow version: 7.0.0 Tensorboard version: 2.4.0 gdown version: 3.12.2 TorchVision version: 0.8.2 ITK version: 5.1.2 tqdm version: 4.51.0 lmdb version: 1.0.0 psutil version: 5.8.0 For details about installing the optional dependencies, please visit: https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown Setup data directory

You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.

###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output /workspace/data/medical
###Markdown Setup logging

###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output _____no_output_____
###Markdown Create training experiment with MedNISTDataset and workflow

The MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions), [the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4), and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).
Set up pre-processing transforms

###Code
transform = Compose(
    [
        LoadImaged(keys="image"),
        AddChanneld(keys="image"),
        ScaleIntensityd(keys="image"),
        ToTensord(keys="image"),
    ]
)
###Output _____no_output_____
###Markdown Create MedNISTDataset for training

`MedNISTDataset` inherits from the MONAI `CacheDataset`, provides rich parameters to automatically download and extract the dataset, and acts as a normal PyTorch Dataset with a caching mechanism.

###Code
train_ds = MedNISTDataset(
    root_dir=root_dir, transform=transform, section="training", download=True)

# the dataset can work seamlessly with the pytorch native dataset loader,
# but using monai.data.DataLoader has the additional benefits of multi-process
# random seed handling and customized collate functions
train_loader = DataLoader(train_ds, batch_size=300, shuffle=True, num_workers=10)
###Output Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d. file /workspace/data/medical/MedNIST.tar.gz exists, skip downloading. extracted file /workspace/data/medical/MedNIST exists, skip extracting.
###Markdown Pick images from MedNISTDataset to visualize and check

###Code
plt.subplots(3, 3, figsize=(8, 8))
for i in range(9):
    plt.subplot(3, 3, i + 1)
    plt.imshow(train_ds[i * 5000]["image"][0].detach().cpu(), cmap="gray")
plt.tight_layout()
plt.show()
###Output _____no_output_____
###Markdown Create training components - device, network, loss function

###Code
device = torch.device("cuda:0")
net = densenet121(pretrained=True, progress=False,
                  spatial_dims=2, in_channels=1,
                  out_channels=6).to(device)
loss = torch.nn.CrossEntropyLoss()
###Output _____no_output_____
###Markdown Set different learning rate values for layers

Please refer to the appendix at the end of this notebook for the layers of `DenseNet121`.
1. Set LR=1e-3 for the selected `class_layers` block.
2. Set LR=1e-4 for convolution layers, based on the filter where `conv.weight` is in the layer name.
3. Set LR=1e-5 for all other layers.
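Before applying `generate_param_groups` to the real network, the "select"/"filter" matching idea can be illustrated with a small self-contained sketch. This is plain Python with no MONAI dependency, and the parameter names below are illustrative examples, not taken from the actual model:

```python
def group_params(named_params, selected_names, filter_substring):
    """Partition parameter names into three groups:
    1. names explicitly selected ("select" match),
    2. remaining names containing filter_substring ("filter" match),
    3. everything else (these fall back to the optimizer's default LR).
    """
    group_select, group_filter, group_rest = [], [], []
    for name in named_params:
        if name in selected_names:
            group_select.append(name)
        elif filter_substring in name:
            group_filter.append(name)
        else:
            group_rest.append(name)
    return group_select, group_filter, group_rest


# illustrative parameter names in the style of a DenseNet state dict
named = [
    "class_layers.out.weight",
    "features.denseblock1.denselayer1.layers.conv1.weight",
    "features.norm0.weight",
]
sel, filt, rest = group_params(named, {"class_layers.out.weight"}, "conv")
print(sel)   # ['class_layers.out.weight']            -> e.g. LR=1e-3
print(filt)  # ['features...conv1.weight']            -> e.g. LR=1e-4
print(rest)  # ['features.norm0.weight']              -> default LR, e.g. 1e-5
```

Each resulting group would then be handed to the optimizer with its own `lr` entry, which is exactly the structure `generate_param_groups` produces for `torch.optim.Adam`.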
###Code
params = generate_param_groups(
    network=net,
    layer_matches=[lambda x: x.class_layers, lambda x: "conv.weight" in x],
    match_types=["select", "filter"],
    lr_values=[1e-3, 1e-4],
)
###Output _____no_output_____
###Markdown Define the optimizer based on the parameter groups

###Code
opt = torch.optim.Adam(params, 1e-5)
###Output _____no_output_____
###Markdown Define the simplest training workflow and run

Use the MONAI `SupervisedTrainer` and its handlers to quickly set up a training workflow.

###Code
trainer = SupervisedTrainer(
    device=device,
    max_epochs=5,
    train_data_loader=train_loader,
    network=net,
    optimizer=opt,
    loss_function=loss,
    inferer=SimpleInferer(),
    key_train_metric={
        "train_acc": Accuracy(
            output_transform=lambda x: (x["pred"], x["label"]))
    },
    train_handlers=StatsHandler(
        tag_name="train_loss", output_transform=lambda x: x["loss"]),
)
###Output _____no_output_____
###Markdown Define an Ignite handler to adjust the LR at runtime

###Code
class LrScheduler:
    def attach(self, engine: Engine) -> None:
        engine.add_event_handler(Events.EPOCH_COMPLETED, self)

    def __call__(self, engine: Engine) -> None:
        for i, param_group in enumerate(engine.optimizer.param_groups):
            if i == 0:
                param_group["lr"] *= 0.1
            elif i == 1:
                param_group["lr"] *= 0.5
        print("LR values of 3 parameter groups: ", [
              g["lr"] for g in engine.optimizer.param_groups])


LrScheduler().attach(trainer)
###Output _____no_output_____
###Markdown Execute the training

###Code
trainer.run()
###Output INFO:ignite.engine.engine.SupervisedTrainer:Engine run resuming from iteration 0, epoch 0 until 1 epochs INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 1/157 -- train_loss: 1.8304 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 2/157 -- train_loss: 1.7026 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 3/157 -- train_loss: 1.6544 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 4/157 -- train_loss: 1.5711 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 5/157 --
train_loss: 1.4678 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 6/157 -- train_loss: 1.3817 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 7/157 -- train_loss: 1.3329 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 8/157 -- train_loss: 1.2512 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 9/157 -- train_loss: 1.1802 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 10/157 -- train_loss: 1.1098 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 11/157 -- train_loss: 1.0461 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 12/157 -- train_loss: 1.0530 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 13/157 -- train_loss: 0.9845 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 14/157 -- train_loss: 0.8910 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 15/157 -- train_loss: 0.9101 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 16/157 -- train_loss: 0.8881 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 17/157 -- train_loss: 0.7921 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 18/157 -- train_loss: 0.7147 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 19/157 -- train_loss: 0.6718 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 20/157 -- train_loss: 0.6844 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 21/157 -- train_loss: 0.6290 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 22/157 -- train_loss: 0.7280 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 23/157 -- train_loss: 0.6159 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 24/157 -- train_loss: 0.5863 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 25/157 -- train_loss: 0.5241 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 26/157 -- train_loss: 0.5349 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 27/157 -- 
train_loss: 0.4609 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 28/157 -- train_loss: 0.4156 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 29/157 -- train_loss: 0.4626 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 30/157 -- train_loss: 0.4041 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 31/157 -- train_loss: 0.3877 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 32/157 -- train_loss: 0.3813 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 33/157 -- train_loss: 0.4080 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 34/157 -- train_loss: 0.3451 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 35/157 -- train_loss: 0.3205 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 36/157 -- train_loss: 0.3162 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 37/157 -- train_loss: 0.3033 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 38/157 -- train_loss: 0.2810 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 39/157 -- train_loss: 0.2591 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 40/157 -- train_loss: 0.2244 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 41/157 -- train_loss: 0.2436 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 42/157 -- train_loss: 0.2618 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 43/157 -- train_loss: 0.2264 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 44/157 -- train_loss: 0.2437 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 45/157 -- train_loss: 0.2080 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 46/157 -- train_loss: 0.1706 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 47/157 -- train_loss: 0.2043 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 48/157 -- train_loss: 0.1685 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 49/157 
-- train_loss: 0.1606 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 50/157 -- train_loss: 0.2219 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 51/157 -- train_loss: 0.1919 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 52/157 -- train_loss: 0.1748 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 53/157 -- train_loss: 0.1301 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 54/157 -- train_loss: 0.1477 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 55/157 -- train_loss: 0.1473 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 56/157 -- train_loss: 0.1911 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 57/157 -- train_loss: 0.1643 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 58/157 -- train_loss: 0.1959 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 59/157 -- train_loss: 0.1822 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 60/157 -- train_loss: 0.1220 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 61/157 -- train_loss: 0.1198 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 62/157 -- train_loss: 0.1211 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 63/157 -- train_loss: 0.1026 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 64/157 -- train_loss: 0.0996 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 65/157 -- train_loss: 0.1122 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 66/157 -- train_loss: 0.1218 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 67/157 -- train_loss: 0.0756 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 68/157 -- train_loss: 0.1237 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 69/157 -- train_loss: 0.1331 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 70/157 -- train_loss: 0.1379 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 
71/157 -- train_loss: 0.0877 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 72/157 -- train_loss: 0.0932 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 73/157 -- train_loss: 0.0888 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 74/157 -- train_loss: 0.1115 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 75/157 -- train_loss: 0.0967 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 76/157 -- train_loss: 0.0718 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 77/157 -- train_loss: 0.0881 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 78/157 -- train_loss: 0.0954 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 79/157 -- train_loss: 0.1165 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 80/157 -- train_loss: 0.0679 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 81/157 -- train_loss: 0.0691 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 82/157 -- train_loss: 0.0555 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 83/157 -- train_loss: 0.0829 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 84/157 -- train_loss: 0.0530 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 85/157 -- train_loss: 0.0945 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 86/157 -- train_loss: 0.0960 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 87/157 -- train_loss: 0.0897 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 88/157 -- train_loss: 0.0553 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 89/157 -- train_loss: 0.0509 ###Markdown Cleanup data directory Remove the directory if a temporary one was used. 
###Code if directory is None: shutil.rmtree(root_dir) ###Output _____no_output_____ ###Markdown Appendix: layers of DenseNet 121 network ###Code print(net) ###Output DenseNet( (features): Sequential( (conv0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (norm0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu0): ReLU(inplace=True) (pool0): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (denseblock1): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(96, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition1): _Transition( (norm): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock2): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition2): _Transition( (norm): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock3): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, 
eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer17): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer18): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer19): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer20): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer21): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer22): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer23): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer24): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition3): _Transition( (norm): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock4): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (norm5): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (class_layers): Sequential( (relu): ReLU(inplace=True) (pool): AdaptiveAvgPool2d(output_size=1) (flatten): Flatten(start_dim=1, end_dim=-1) (out): Linear(in_features=1024, out_features=6, bias=True) ) ) 
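###Markdown The `class_layers` block at the end of the network printout above is what the first `layer_matches` callable selects into its own parameter group. The per-group decay applied by the `LrScheduler` handler during training can be sketched without PyTorch as plain dictionary updates (a minimal stand-in with the same hard-coded decay factors and illustrative LR values, not the library code):

```python
# Three parameter groups, mirroring the groups produced by generate_param_groups:
# group 0 (class_layers), group 1 (conv.weight matches), group 2 (the rest).
param_groups = [{"lr": 1e-3}, {"lr": 1e-4}, {"lr": 1e-5}]

def on_epoch_completed(groups):
    # Same rule as the handler: decay group 0 by 10x, group 1 by 2x,
    # leave the remaining parameters at their base LR.
    for i, group in enumerate(groups):
        if i == 0:
            group["lr"] *= 0.1
        elif i == 1:
            group["lr"] *= 0.5

on_epoch_completed(param_groups)
print("LR values of 3 parameter groups:", [g["lr"] for g in param_groups])
```

With a real optimizer, `param_groups` is simply `optimizer.param_groups`, and the update runs once per `Events.EPOCH_COMPLETED` event.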
###Markdown Layer-wise learning rate settings

In this tutorial, we introduce how to easily select or filter out network layers and set specific learning rate values for transfer learning. MONAI provides a utility function to achieve this requirement: `generate_param_groups`. For example:

```py
net = Unet(dimensions=3, in_channels=1, out_channels=3, channels=[2, 2, 2], strides=[1, 1, 1])
print(net)  # print out network components to select expected items
print(net.named_parameters())  # print out all the named parameters to filter out expected items
params = generate_param_groups(
    network=net,
    layer_matches=[lambda x: x.model[-1], lambda x: "conv.weight" in x],
    match_types=["select", "filter"],
    lr_values=[1e-2, 1e-3],
)
optimizer = torch.optim.Adam(params, 1e-4)
```

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/layer_wise_learning_rate.ipynb)

Setup environment

###Code
!python -c "import monai" || pip install -q "monai-weekly[pillow, ignite, tqdm]"
!python -c "import matplotlib" || pip install -q matplotlib
%matplotlib inline

from monai.transforms import (
    AddChanneld,
    Compose,
    LoadImaged,
    ScaleIntensityd,
    ToTensord,
)
from monai.optimizers import generate_param_groups
from monai.networks.nets import DenseNet121
from monai.inferers import SimpleInferer
from monai.handlers import StatsHandler
from monai.engines import SupervisedTrainer
from monai.data import DataLoader
from monai.config import print_config
from monai.apps import MedNISTDataset
import torch
import matplotlib.pyplot as plt
from ignite.engine import Engine, Events
from ignite.metrics import Accuracy
import tempfile
import sys
import shutil
import os
import logging
###Output
_____no_output_____
###Markdown Setup imports

###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

print_config()
###Output
MONAI version: 0.4.0+35.g6adbcde
Numpy version: 1.19.5
Pytorch version: 1.7.1
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 6adbcdee45c16f18f5b713575af3410437177311
Optional dependencies:
Pytorch Ignite version: 0.4.2
Nibabel version: 3.2.1
scikit-image version: 0.18.1
Pillow version: 7.0.0
Tensorboard version: 2.4.0
gdown version: 3.12.2
TorchVision version: 0.8.2
ITK version: 5.1.2
tqdm version: 4.51.0
lmdb version: 1.0.0
psutil version: 5.8.0
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown Setup data directory

You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.

###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output
/workspace/data/medical
###Markdown Setup logging

###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output
_____no_output_____
###Markdown Create training experiment with MedNISTDataset and workflow

The MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions), [the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4), and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).
Set up pre-processing transforms

###Code
transform = Compose(
    [
        LoadImaged(keys="image"),
        AddChanneld(keys="image"),
        ScaleIntensityd(keys="image"),
        ToTensord(keys="image"),
    ]
)
###Output
_____no_output_____
###Markdown Create MedNISTDataset for training

`MedNISTDataset` inherits from the MONAI `CacheDataset`; it provides rich parameters to automatically download and extract the dataset, and acts as a normal PyTorch Dataset with a caching mechanism.

###Code
train_ds = MedNISTDataset(root_dir=root_dir, transform=transform, section="training", download=True)

# the dataset can work seamlessly with the PyTorch native dataset loader,
# but using monai.data.DataLoader has the additional benefits of multi-process
# random seed handling and customized collate functions
train_loader = DataLoader(train_ds, batch_size=300, shuffle=True, num_workers=10)
###Output
Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.
file /workspace/data/medical/MedNIST.tar.gz exists, skip downloading.
extracted file /workspace/data/medical/MedNIST exists, skip extracting.
###Markdown Pick images from MedNISTDataset to visualize and check

###Code
plt.subplots(3, 3, figsize=(8, 8))
for i in range(9):
    plt.subplot(3, 3, i + 1)
    plt.imshow(train_ds[i * 5000]["image"][0].detach().cpu(), cmap="gray")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown Create training components - device, network, loss function

###Code
device = torch.device("cuda:0")
net = DenseNet121(pretrained=True, progress=False, spatial_dims=2, in_channels=1, out_channels=6).to(device)
loss = torch.nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown Set different learning rate values for layers

Please refer to the appendix at the end of this notebook for the layers of `DenseNet121`.

1. Set LR=1e-3 for the selected `class_layers` block.
2. Set LR=1e-4 for convolution layers, based on the filter where `conv.weight` is in the layer name.
3. Set LR=1e-5 for all other layers.
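To make the select/filter semantics concrete, here is a minimal, dependency-free sketch of the grouping idea. This is a hypothetical simplification for illustration only, not MONAI's actual implementation of `generate_param_groups`; the names `group_params` and the toy parameter list are invented for this sketch.

```python
# Hypothetical, simplified sketch of "filter"-style parameter grouping:
# a predicate over parameter names decides group membership, and every
# unmatched parameter falls into a default group with its own LR.
def group_params(named_params, match_fn, lr_match, lr_default):
    matched, rest = [], []
    for name, param in named_params:
        (matched if match_fn(name) else rest).append(param)
    return [
        {"params": matched, "lr": lr_match},   # e.g. conv weights at 1e-4
        {"params": rest, "lr": lr_default},    # everything else at 1e-5
    ]

# Toy "named parameters" standing in for net.named_parameters()
named = [
    ("features.conv0.weight", "p0"),
    ("denseblock1.conv.weight", "p1"),
    ("class_layers.out.weight", "p2"),
]
groups = group_params(named, lambda n: "conv.weight" in n, 1e-4, 1e-5)
print([len(g["params"]) for g in groups])  # group sizes
print([g["lr"] for g in groups])
```

The returned list of dicts follows the PyTorch per-parameter-options convention, so it can be passed straight to an optimizer constructor.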
###Code
params = generate_param_groups(
    network=net,
    layer_matches=[lambda x: x.class_layers, lambda x: "conv.weight" in x],
    match_types=["select", "filter"],
    lr_values=[1e-3, 1e-4],
)
###Output
_____no_output_____
###Markdown Define the optimizer based on the parameter groups

###Code
opt = torch.optim.Adam(params, 1e-5)
###Output
_____no_output_____
###Markdown Define the simplest training workflow and run

Use the MONAI `SupervisedTrainer` and handlers to quickly set up a training workflow.

###Code
trainer = SupervisedTrainer(
    device=device,
    max_epochs=5,
    train_data_loader=train_loader,
    network=net,
    optimizer=opt,
    loss_function=loss,
    inferer=SimpleInferer(),
    key_train_metric={
        "train_acc": Accuracy(output_transform=lambda x: (x["pred"], x["label"]))
    },
    train_handlers=StatsHandler(tag_name="train_loss", output_transform=lambda x: x["loss"]),
)
###Output
_____no_output_____
###Markdown Define an Ignite handler to adjust the LR at runtime

###Code
class LrScheduler:
    def attach(self, engine: Engine) -> None:
        engine.add_event_handler(Events.EPOCH_COMPLETED, self)

    def __call__(self, engine: Engine) -> None:
        for i, param_group in enumerate(engine.optimizer.param_groups):
            if i == 0:
                param_group["lr"] *= 0.1
            elif i == 1:
                param_group["lr"] *= 0.5
        print("LR values of 3 parameter groups: ", [g["lr"] for g in engine.optimizer.param_groups])


LrScheduler().attach(trainer)
###Output
_____no_output_____
###Markdown Execute the training

###Code
trainer.run()
###Output
INFO:ignite.engine.engine.SupervisedTrainer:Engine run resuming from iteration 0, epoch 0 until 1 epochs
INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 1/157 -- train_loss: 1.8304
INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 2/157 -- train_loss: 1.7026
INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 3/157 -- train_loss: 1.6544
INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 4/157 -- train_loss: 1.5711
INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 5/157 --
train_loss: 1.4678 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 6/157 -- train_loss: 1.3817 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 7/157 -- train_loss: 1.3329 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 8/157 -- train_loss: 1.2512 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 9/157 -- train_loss: 1.1802 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 10/157 -- train_loss: 1.1098 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 11/157 -- train_loss: 1.0461 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 12/157 -- train_loss: 1.0530 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 13/157 -- train_loss: 0.9845 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 14/157 -- train_loss: 0.8910 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 15/157 -- train_loss: 0.9101 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 16/157 -- train_loss: 0.8881 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 17/157 -- train_loss: 0.7921 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 18/157 -- train_loss: 0.7147 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 19/157 -- train_loss: 0.6718 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 20/157 -- train_loss: 0.6844 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 21/157 -- train_loss: 0.6290 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 22/157 -- train_loss: 0.7280 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 23/157 -- train_loss: 0.6159 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 24/157 -- train_loss: 0.5863 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 25/157 -- train_loss: 0.5241 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 26/157 -- train_loss: 0.5349 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 27/157 -- 
train_loss: 0.4609 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 28/157 -- train_loss: 0.4156 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 29/157 -- train_loss: 0.4626 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 30/157 -- train_loss: 0.4041 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 31/157 -- train_loss: 0.3877 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 32/157 -- train_loss: 0.3813 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 33/157 -- train_loss: 0.4080 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 34/157 -- train_loss: 0.3451 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 35/157 -- train_loss: 0.3205 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 36/157 -- train_loss: 0.3162 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 37/157 -- train_loss: 0.3033 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 38/157 -- train_loss: 0.2810 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 39/157 -- train_loss: 0.2591 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 40/157 -- train_loss: 0.2244 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 41/157 -- train_loss: 0.2436 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 42/157 -- train_loss: 0.2618 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 43/157 -- train_loss: 0.2264 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 44/157 -- train_loss: 0.2437 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 45/157 -- train_loss: 0.2080 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 46/157 -- train_loss: 0.1706 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 47/157 -- train_loss: 0.2043 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 48/157 -- train_loss: 0.1685 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 49/157 
-- train_loss: 0.1606 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 50/157 -- train_loss: 0.2219 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 51/157 -- train_loss: 0.1919 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 52/157 -- train_loss: 0.1748 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 53/157 -- train_loss: 0.1301 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 54/157 -- train_loss: 0.1477 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 55/157 -- train_loss: 0.1473 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 56/157 -- train_loss: 0.1911 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 57/157 -- train_loss: 0.1643 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 58/157 -- train_loss: 0.1959 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 59/157 -- train_loss: 0.1822 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 60/157 -- train_loss: 0.1220 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 61/157 -- train_loss: 0.1198 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 62/157 -- train_loss: 0.1211 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 63/157 -- train_loss: 0.1026 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 64/157 -- train_loss: 0.0996 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 65/157 -- train_loss: 0.1122 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 66/157 -- train_loss: 0.1218 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 67/157 -- train_loss: 0.0756 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 68/157 -- train_loss: 0.1237 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 69/157 -- train_loss: 0.1331 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 70/157 -- train_loss: 0.1379 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 
71/157 -- train_loss: 0.0877 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 72/157 -- train_loss: 0.0932 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 73/157 -- train_loss: 0.0888 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 74/157 -- train_loss: 0.1115 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 75/157 -- train_loss: 0.0967 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 76/157 -- train_loss: 0.0718 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 77/157 -- train_loss: 0.0881 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 78/157 -- train_loss: 0.0954 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 79/157 -- train_loss: 0.1165 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 80/157 -- train_loss: 0.0679 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 81/157 -- train_loss: 0.0691 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 82/157 -- train_loss: 0.0555 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 83/157 -- train_loss: 0.0829 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 84/157 -- train_loss: 0.0530 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 85/157 -- train_loss: 0.0945 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 86/157 -- train_loss: 0.0960 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 87/157 -- train_loss: 0.0897 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 88/157 -- train_loss: 0.0553 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/1, Iter: 89/157 -- train_loss: 0.0509 ###Markdown Cleanup data directoryRemove directory if a temporary was used. 
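For reference, the `LrScheduler` handler defined above compounds a per-group multiplicative decay at the end of every epoch: group 0 is scaled by 0.1, group 1 by 0.5, and the default group is left unchanged. A plain-Python sketch of the resulting schedule, using this notebook's initial LR values of 1e-3, 1e-4, and 1e-5:

```python
# Reproduce the per-epoch multiplicative decay applied by LrScheduler.
lrs = [1e-3, 1e-4, 1e-5]   # initial LRs of the three parameter groups
decay = [0.1, 0.5, 1.0]    # per-epoch factors: group 0, group 1, default group
for epoch in range(5):     # max_epochs=5 in this notebook's trainer
    lrs = [lr * d for lr, d in zip(lrs, decay)]
    print(f"after epoch {epoch + 1}: {lrs}")
```

After five epochs the selected `class_layers` group has decayed by five orders of magnitude while the default group is untouched, which is the intended behavior for fine-tuning a pretrained backbone.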
###Code if directory is None: shutil.rmtree(root_dir) ###Output _____no_output_____ ###Markdown Appendix: layers of DenseNet 121 network ###Code print(net) ###Output DenseNet( (features): Sequential( (conv0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (norm0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu0): ReLU(inplace=True) (pool0): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (denseblock1): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(96, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition1): _Transition( (norm): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock2): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition2): _Transition( (norm): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock3): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, 
eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer17): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer18): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer19): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer20): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer21): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer22): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer23): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer24): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition3): _Transition( (norm): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock4): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (norm5): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (class_layers): Sequential( (relu): ReLU(inplace=True) (pool): AdaptiveAvgPool2d(output_size=1) (flatten): Flatten(start_dim=1, end_dim=-1) (out): Linear(in_features=1024, out_features=6, bias=True) ) ) 
###Markdown Layer-wise learning rate settingsIn this tutorial, we show how to select or filter network layers and set specific learning rate values for them, which is useful for transfer learning. MONAI provides a utility function for this requirement: `generate_param_groups`. For example:
```py
net = Unet(dimensions=3, in_channels=1, out_channels=3, channels=[2, 2, 2], strides=[1, 1, 1])
print(net)  # print out network components to select expected items
print(net.named_parameters())  # print out all the named parameters to filter out expected items
params = generate_param_groups(
    network=net,
    layer_matches=[lambda x: x.model[0], lambda x: "2.0.conv" in x[0]],
    match_types=["select", "filter"],
    lr_values=[1e-2, 1e-3],
)
optimizer = torch.optim.Adam(params, 1e-4)
```
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/layer_wise_learning_rate.ipynb) Setup environment ###Code !python -c "import monai" || pip install -q "monai-weekly[pillow, ignite, tqdm]" !python -c "import matplotlib" || pip install -q matplotlib %matplotlib inline from monai.transforms import ( AddChanneld, Compose, LoadImaged, ScaleIntensityd, EnsureTyped, ) from monai.optimizers import generate_param_groups from monai.networks.nets import DenseNet121 from monai.inferers import SimpleInferer from monai.handlers import StatsHandler, from_engine from monai.engines import SupervisedTrainer from monai.data import DataLoader from monai.config import print_config from monai.apps import MedNISTDataset import torch import matplotlib.pyplot as plt from ignite.engine import Engine, Events from ignite.metrics import Accuracy import tempfile import sys import shutil import os import logging ###Output _____no_output_____ ###Markdown Setup imports ###Code # Copyright 2020 MONAI Consortium # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. print_config() ###Output MONAI version: 0.6.0rc1+23.gc6793fd0 Numpy version: 1.20.3 Pytorch version: 1.9.0a0+c3d40fd MONAI flags: HAS_EXT = True, USE_COMPILED = False MONAI rev id: c6793fd0f316a448778d0047664aaf8c1895fe1c Optional dependencies: Pytorch Ignite version: 0.4.5 Nibabel version: 3.2.1 scikit-image version: 0.15.0 Pillow version: 7.0.0 Tensorboard version: 2.5.0 gdown version: 3.13.0 TorchVision version: 0.10.0a0 ITK version: 5.1.2 tqdm version: 4.53.0 lmdb version: 1.2.1 psutil version: 5.8.0 pandas version: 1.1.4 einops version: 0.3.0 For details about installing the optional dependencies, please visit: https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies ###Markdown Setup data directoryYou can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
###Code directory = os.environ.get("MONAI_DATA_DIRECTORY") root_dir = tempfile.mkdtemp() if directory is None else directory print(root_dir) ###Output _____no_output_____ ###Markdown Setup logging ###Code logging.basicConfig(stream=sys.stdout, level=logging.INFO) ###Output _____no_output_____ ###Markdown Create training experiment with MedNISTDataset and workflowThe MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions), [the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4), and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest). Set up pre-processing transforms ###Code transform = Compose( [ LoadImaged(keys="image"), AddChanneld(keys="image"), ScaleIntensityd(keys="image"), EnsureTyped(keys="image"), ] ) ###Output _____no_output_____ ###Markdown Create MedNISTDataset for training`MedNISTDataset` inherits from the MONAI `CacheDataset` and provides rich parameters to automatically download and extract the dataset; it acts as a normal PyTorch Dataset with a caching mechanism.
###Code train_ds = MedNISTDataset( root_dir=root_dir, transform=transform, section="training", download=True) # the dataset can work seamlessly with the pytorch native dataset loader, # but using monai.data.DataLoader has the additional benefits of multi-process # random seed handling and customized collate functions train_loader = DataLoader(train_ds, batch_size=300, shuffle=True, num_workers=10) ###Output _____no_output_____ ###Markdown Pick images from MedNISTDataset to visualize and check ###Code plt.subplots(3, 3, figsize=(8, 8)) for i in range(9): plt.subplot(3, 3, i + 1) plt.imshow(train_ds[i * 5000]["image"][0].detach().cpu(), cmap="gray") plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown Create training components - device, network, loss function ###Code device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") net = DenseNet121(pretrained=True, progress=False, spatial_dims=2, in_channels=1, out_channels=6).to(device) loss = torch.nn.CrossEntropyLoss() ###Output _____no_output_____ ###Markdown Set different learning rate values for layersPlease refer to the appendix at the end of this notebook for the layers of `DenseNet121`.1. Set LR=1e-3 for the selected `class_layers` block.2. Set LR=1e-4 for convolution layers, filtered by `conv.weight` appearing in the parameter name.3. Set LR=1e-5 for all other layers. ###Code params = generate_param_groups( network=net, layer_matches=[lambda x: x.class_layers, lambda x: "conv.weight" in x[0]], match_types=["select", "filter"], lr_values=[1e-3, 1e-4], ) ###Output _____no_output_____ ###Markdown Define the optimizer based on the parameter groups ###Code opt = torch.optim.Adam(params, 1e-5) ###Output _____no_output_____ ###Markdown Define the simplest training workflow and runUse the MONAI `SupervisedTrainer` to quickly set up a training workflow.
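The select/filter grouping above can be sanity-checked with plain PyTorch. The sketch below is a simplified illustration, not MONAI's actual `generate_param_groups` implementation, and it uses a hypothetical toy module (`Toy`, with `conv`, `norm`, and `class_layers` submodules standing in for DenseNet121) to reproduce the three-way partition by hand:

```python
import torch
from torch import nn

# Hypothetical toy module standing in for DenseNet121 (names chosen for illustration).
class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 4, kernel_size=3)   # parameters: conv.weight, conv.bias
        self.norm = nn.BatchNorm2d(4)                # parameters: norm.weight, norm.bias
        self.class_layers = nn.Linear(4, 6)          # parameters: class_layers.weight/.bias

net = Toy()

# 1) "select": every parameter of the chosen submodule -> lr=1e-3
selected = {id(p) for p in net.class_layers.parameters()}
group1 = [p for p in net.parameters() if id(p) in selected]
# 2) "filter": remaining parameters whose name contains "conv.weight" -> lr=1e-4
group2 = [p for n, p in net.named_parameters()
          if id(p) not in selected and "conv.weight" in n]
# 3) everything else falls through to the optimizer default lr
taken = selected | {id(p) for p in group2}
group3 = [p for p in net.parameters() if id(p) not in taken]

params = [{"params": group1, "lr": 1e-3}, {"params": group2, "lr": 1e-4}, {"params": group3}]
opt = torch.optim.Adam(params, 1e-5)  # default lr applies to group3
print([g["lr"] for g in opt.param_groups])
```

The group defined without an explicit `lr` resolves to the optimizer default `1e-5`, so the three groups end up at `1e-3`, `1e-4`, and `1e-5`, matching the three rules listed above.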
###Code trainer = SupervisedTrainer( device=device, max_epochs=5, train_data_loader=train_loader, network=net, optimizer=opt, loss_function=loss, inferer=SimpleInferer(), key_train_metric={ "train_acc": Accuracy( output_transform=from_engine(["pred", "label"])) }, train_handlers=StatsHandler( tag_name="train_loss", output_transform=from_engine(["loss"], first=True)), ) ###Output _____no_output_____ ###Markdown Define an Ignite handler to adjust the LR at runtime ###Code class LrScheduler: def attach(self, engine: Engine) -> None: engine.add_event_handler(Events.EPOCH_COMPLETED, self) def __call__(self, engine: Engine) -> None: for i, param_group in enumerate(engine.optimizer.param_groups): if i == 0: param_group["lr"] *= 0.1 elif i == 1: param_group["lr"] *= 0.5 print("LR values of 3 parameter groups: ", [ g["lr"] for g in engine.optimizer.param_groups]) LrScheduler().attach(trainer) ###Output _____no_output_____ ###Markdown Execute the training ###Code trainer.run() ###Output _____no_output_____ ###Markdown Cleanup data directoryRemove the directory if a temporary one was used.
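As an aside, the effect of the custom `LrScheduler` handler defined earlier can be traced with plain arithmetic, independent of the training run: after each of the 5 epochs, group 0's LR is multiplied by 0.1 and group 1's by 0.5, while group 2 keeps the optimizer default. A standalone sketch of that trajectory:

```python
# Standalone trace of the per-epoch LR decay applied by the LrScheduler handler:
# group 0 starts at 1e-3 (x0.1 per epoch), group 1 at 1e-4 (x0.5 per epoch),
# group 2 stays at the optimizer default of 1e-5.
lrs = [1e-3, 1e-4, 1e-5]
for epoch in range(1, 6):
    lrs = [lrs[0] * 0.1, lrs[1] * 0.5, lrs[2]]
    print(f"epoch {epoch}: LR values = {lrs}")
```

After the first epoch the groups sit at roughly `1e-4`, `5e-5`, and `1e-5`, so the initially fast-learning `class_layers` block cools off much quicker than the filtered convolution weights.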
###Code if directory is None: shutil.rmtree(root_dir) ###Output _____no_output_____ ###Markdown Appendix: layers of DenseNet 121 network ###Code print(net) ###Output DenseNet121( (features): Sequential( (conv0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (norm0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu0): ReLU(inplace=True) (pool0): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (denseblock1): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(96, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition1): _Transition( (norm): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock2): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition2): _Transition( (norm): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock3): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, 
eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer17): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer18): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer19): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer20): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer21): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer22): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer23): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer24): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition3): _Transition( (norm): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock4): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (norm5): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (class_layers): Sequential( (relu): ReLU(inplace=True) (pool): AdaptiveAvgPool2d(output_size=1) (flatten): Flatten(start_dim=1, end_dim=-1) (out): Linear(in_features=1024, out_features=6, bias=True) ) ) 
###Markdown
Layer-wise learning rate settings

In this tutorial, we introduce how to easily select or filter out network layers and set specific learning rate values for transfer learning. MONAI provides a utility function to achieve these requirements: `generate_param_groups`. For example:

```py
net = Unet(dimensions=3, in_channels=1, out_channels=3, channels=[2, 2, 2], strides=[1, 1, 1])
print(net)  # print out network components to select expected items
print(net.named_parameters())  # print out all the named parameters to filter out expected items
params = generate_param_groups(
    network=net,
    layer_matches=[lambda x: x.model[0], lambda x: "2.0.conv" in x[0]],
    match_types=["select", "filter"],
    lr_values=[1e-2, 1e-3],
)
optimizer = torch.optim.Adam(params, 1e-4)
```

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/layer_wise_learning_rate.ipynb)

Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[pillow, ignite, tqdm]"
!python -c "import matplotlib" || pip install -q matplotlib
%matplotlib inline

from monai.transforms import (
    AddChanneld,
    Compose,
    LoadImaged,
    ScaleIntensityd,
    ToTensord,
)
from monai.optimizers import generate_param_groups
from monai.networks.nets import DenseNet121
from monai.inferers import SimpleInferer
from monai.handlers import StatsHandler
from monai.engines import SupervisedTrainer
from monai.data import DataLoader
from monai.config import print_config
from monai.apps import MedNISTDataset
import torch
import matplotlib.pyplot as plt
from ignite.engine import Engine, Events
from ignite.metrics import Accuracy
import tempfile
import sys
import shutil
import os
import logging
###Output
_____no_output_____
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

print_config()
###Output
MONAI version: 0.5.3+106.g223ac7c0
Numpy version: 1.20.3
Pytorch version: 1.8.1
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 223ac7c04d95e54c7feb59127bffa8f436b17978
Optional dependencies:
Pytorch Ignite version: 0.4.4
Nibabel version: 3.2.1
scikit-image version: 0.18.1
Pillow version: 8.2.0
Tensorboard version: 2.5.0
gdown version: 3.13.0
TorchVision version: 0.9.1
ITK version: 5.1.2
tqdm version: 4.61.0
lmdb version: 1.2.1
psutil version: 5.8.0
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Setup data directory

You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output
_____no_output_____
###Markdown
Setup logging
###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output
_____no_output_____
###Markdown
Create training experiment with MedNISTDataset and workflow

The MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions), [the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4), and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).
Set up pre-processing transforms
###Code
transform = Compose(
    [
        LoadImaged(keys="image"),
        AddChanneld(keys="image"),
        ScaleIntensityd(keys="image"),
        ToTensord(keys="image"),
    ]
)
###Output
_____no_output_____
###Markdown
Create MedNISTDataset for training

`MedNISTDataset` inherits from MONAI `CacheDataset`; it provides rich parameters to automatically download and extract the dataset, and acts as a normal PyTorch Dataset with a cache mechanism.
###Code
train_ds = MedNISTDataset(
    root_dir=root_dir, transform=transform, section="training", download=True)
# the dataset can work seamlessly with the PyTorch native dataset loader,
# but using monai.data.DataLoader has additional benefits of multi-process
# random seeds handling, and the customized collate functions
train_loader = DataLoader(train_ds, batch_size=300, shuffle=True, num_workers=10)
###Output
_____no_output_____
###Markdown
Pick images from MedNISTDataset to visualize and check
###Code
plt.subplots(3, 3, figsize=(8, 8))
for i in range(9):
    plt.subplot(3, 3, i + 1)
    plt.imshow(train_ds[i * 5000]["image"][0].detach().cpu(), cmap="gray")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Create training components - device, network, loss function
###Code
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net = DenseNet121(pretrained=True, progress=False,
                  spatial_dims=2, in_channels=1, out_channels=6).to(device)
loss = torch.nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
Set different learning rate values for layers

Please refer to the appendix at the end of this notebook for the layers of `DenseNet121`.
1. Set LR=1e-3 for the selected `class_layers` block.
2. Set LR=1e-4 for convolution layers, based on the filter that matches `conv.weight` in the layer name.
3. Use LR=1e-5 for all other layers.
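Conceptually, each entry in `layer_matches` claims the parameters it matches, in order, and every parameter left over falls into a final group that uses the optimizer's default LR. A minimal, dependency-free sketch of this grouping idea (the `group_params` helper and the parameter names below are hypothetical stand-ins for illustration, not the MONAI implementation):

```python
def group_params(named_params, matchers, lr_values, default_lr):
    # One group per matcher/LR pair, plus a catch-all group at the default LR.
    groups = [{"names": [], "lr": lr} for lr in lr_values]
    rest = {"names": [], "lr": default_lr}
    for name, _ in named_params:
        for group, match in zip(groups, matchers):
            if match(name):
                group["names"].append(name)
                break
        else:  # no matcher claimed this parameter
            rest["names"].append(name)
    return groups + [rest]


# Hypothetical parameter names standing in for net.named_parameters()
named = [
    ("class_layers.out.weight", None),
    ("class_layers.out.bias", None),
    ("features.denseblock1.denselayer1.layers.conv1.weight", None),
    ("features.norm0.weight", None),
]
groups = group_params(
    named,
    matchers=[
        lambda n: n.startswith("class_layers"),          # "select" the classifier block
        lambda n: "conv" in n and n.endswith("weight"),  # "filter" conv weights
    ],
    lr_values=[1e-3, 1e-4],
    default_lr=1e-5,  # everything else keeps the optimizer default
)
for g in groups:
    print(g["lr"], g["names"])
```

Note the ordering: a parameter matched by an earlier rule is never reconsidered by later ones, which is why the `class_layers` weights stay at 1e-3 even though they would also match the conv-weight filter pattern if named accordingly.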
###Code
params = generate_param_groups(
    network=net,
    layer_matches=[lambda x: x.class_layers, lambda x: "conv.weight" in x[0]],
    match_types=["select", "filter"],
    lr_values=[1e-3, 1e-4],
)
###Output
_____no_output_____
###Markdown
Define the optimizer based on the parameter groups
###Code
opt = torch.optim.Adam(params, 1e-5)
###Output
_____no_output_____
###Markdown
Define the easiest training workflow and run

Use MONAI SupervisedTrainer handlers to quickly set up a training workflow.
###Code
trainer = SupervisedTrainer(
    device=device,
    max_epochs=5,
    train_data_loader=train_loader,
    network=net,
    optimizer=opt,
    loss_function=loss,
    inferer=SimpleInferer(),
    key_train_metric={
        "train_acc": Accuracy(
            output_transform=lambda x: (x["pred"], x["label"]))
    },
    train_handlers=StatsHandler(
        tag_name="train_loss", output_transform=lambda x: x["loss"]),
)
###Output
_____no_output_____
###Markdown
Define an ignite handler to adjust the LR at runtime
###Code
class LrScheduler:
    def attach(self, engine: Engine) -> None:
        engine.add_event_handler(Events.EPOCH_COMPLETED, self)

    def __call__(self, engine: Engine) -> None:
        for i, param_group in enumerate(engine.optimizer.param_groups):
            if i == 0:
                param_group["lr"] *= 0.1
            elif i == 1:
                param_group["lr"] *= 0.5
        print("LR values of 3 parameter groups: ", [
            g["lr"] for g in engine.optimizer.param_groups])


LrScheduler().attach(trainer)
###Output
_____no_output_____
###Markdown
Execute the training
###Code
trainer.run()
###Output
_____no_output_____
###Markdown
Cleanup data directory

Remove the directory if a temporary one was used.
###Code if directory is None: shutil.rmtree(root_dir) ###Output _____no_output_____ ###Markdown Appendix: layers of DenseNet 121 network ###Code print(net) ###Output DenseNet121( (features): Sequential( (conv0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (norm0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu0): ReLU(inplace=True) (pool0): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (denseblock1): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(96, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition1): _Transition( (norm): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock2): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition2): _Transition( (norm): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock3): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, 
eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) 
(denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer17): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer18): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer19): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer20): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer21): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer22): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer23): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer24): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (transition3): _Transition( (norm): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock4): _DenseBlock( (denselayer1): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer2): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer3): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer4): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer5): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer6): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer7): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer8): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer9): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer10): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer11): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer12): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer13): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer14): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer15): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (denselayer16): _DenseLayer( (layers): Sequential( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (norm5): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (class_layers): Sequential( (relu): ReLU(inplace=True) (pool): AdaptiveAvgPool2d(output_size=1) (flatten): Flatten(start_dim=1, end_dim=-1) (out): Linear(in_features=1024, out_features=6, bias=True) ) )
t81_558_class_14_04_ids_kdd99.ipynb
###Markdown T81-558: Applications of Deep Neural Networks**Module 14: Other Neural Network Techniques*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 14 Video Material* Part 14.1: What is AutoML [[Video]](https://www.youtube.com/watch?v=TFUysIR5AB0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_01_automl.ipynb)* Part 14.2: Using Denoising AutoEncoders in Keras [[Video]](https://www.youtube.com/watch?v=4bTSu6_fucc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_02_auto_encode.ipynb)* Part 14.3: Anomaly Detection in Keras [[Video]](https://www.youtube.com/watch?v=1ySn6h2A68I&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_03_anomaly.ipynb)* **Part 14.4: Training an Intrusion Detection System with KDD99** [[Video]](https://www.youtube.com/watch?v=VgyKQ5MTDFc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_04_ids_kdd99.ipynb)* Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](t81_558_class_14_05_new_tech.ipynb) Part 14.4: Training an Intrusion Detection System with KDD99The [KDD-99 dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html) is well known in the security field and serves as a near "hello world" benchmark for machine-learning intrusion detection systems. 
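KDD-99 frames intrusion detection as supervised classification: each connection record carries an `outcome` label that is either `normal.` or one of several attack types. As a minimal illustrative sketch (a toy frame, not the real dataset), the multi-class outcome can also be collapsed to a binary normal-vs-attack flag, a common simplification in the IDS literature, though this notebook keeps all classes:

```python
import pandas as pd

# Toy stand-in for a handful of KDD-99 outcome labels
df = pd.DataFrame({"outcome": ["normal.", "smurf.", "neptune.", "normal.", "back."]})

# Collapse the multi-class outcome into a binary normal-vs-attack flag
df["is_attack"] = (df["outcome"] != "normal.").astype(int)

print(df["is_attack"].tolist())  # [0, 1, 1, 0, 1]
```

The column name `outcome` matches the one assigned to the raw CSV below; everything else in the snippet is invented for illustration.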
Read in Raw KDD-99 Dataset ###Code import pandas as pd from tensorflow.keras.utils import get_file try: path = get_file('kddcup.data_10_percent.gz', origin='http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz') except Exception: print('Error downloading') raise print(path) # This file is a CSV, just no CSV extension or headers # Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html df = pd.read_csv(path, header=None) print("Read {} rows.".format(len(df))) # df = df.sample(frac=0.1, replace=False) # Uncomment this line to sample only 10% of the dataset df.dropna(inplace=True,axis=1) # For now, just drop columns that contain missing values (axis=1 drops columns, not rows) # The CSV file has no column headers, so add them df.columns = [ 'duration', 'protocol_type', 'service', 'flag', 'src_bytes', 'dst_bytes', 'land', 'wrong_fragment', 'urgent', 'hot', 'num_failed_logins', 'logged_in', 'num_compromised', 'root_shell', 'su_attempted', 'num_root', 'num_file_creations', 'num_shells', 'num_access_files', 'num_outbound_cmds', 'is_host_login', 'is_guest_login', 'count', 'srv_count', 'serror_rate', 'srv_serror_rate', 'rerror_rate', 'srv_rerror_rate', 'same_srv_rate', 'diff_srv_rate', 'srv_diff_host_rate', 'dst_host_count', 'dst_host_srv_count', 'dst_host_same_srv_rate', 'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate', 'dst_host_srv_diff_host_rate', 'dst_host_serror_rate', 'dst_host_srv_serror_rate', 'dst_host_rerror_rate', 'dst_host_srv_rerror_rate', 'outcome' ] # display 5 rows df[0:5] ###Output /Users/jheaton/.keras/datasets/kddcup.data_10_percent.gz Read 494021 rows. ###Markdown Analyzing a DatasetThe following script gives a high-level overview of a dataset: for each column it prints either the category distribution or, for high-cardinality columns, the count of unique values. 
###Code ENCODING = 'utf-8' def expand_categories(values): result = [] s = values.value_counts() t = float(len(values)) for v in s.index: result.append("{}:{}%".format(v,round(100*(s[v]/t),2))) return "[{}]".format(",".join(result)) def analyze(df): print() cols = df.columns.values total = float(len(df)) print("{} rows".format(int(total))) for col in cols: uniques = df[col].unique() unique_count = len(uniques) if unique_count>100: print("** {}:{} ({}%)".format(col,unique_count,int(((unique_count)/total)*100))) else: print("** {}:{}".format(col,expand_categories(df[col]))) expand_categories(df[col]) # Analyze KDD-99 import pandas as pd import os import numpy as np from sklearn import metrics from scipy.stats import zscore analyze(df) ###Output 494021 rows ** duration:2495 (0%) ** protocol_type:[icmp:57.41%,tcp:38.47%,udp:4.12%] ** service:[ecr_i:56.96%,private:22.45%,http:13.01%,smtp:1.97%,other:1.46%,domain_u:1.19%,ftp_data:0.96%,eco_i:0.33%,ftp:0.16%,finger:0.14%,urp_i:0.11%,telnet:0.1%,ntp_u:0.08%,auth:0.07%,pop_3:0.04%,time:0.03%,csnet_ns:0.03%,remote_job:0.02%,gopher:0.02%,imap4:0.02%,discard:0.02%,domain:0.02%,systat:0.02%,iso_tsap:0.02%,shell:0.02%,echo:0.02%,rje:0.02%,whois:0.02%,sql_net:0.02%,printer:0.02%,nntp:0.02%,courier:0.02%,sunrpc:0.02%,mtp:0.02%,netbios_ssn:0.02%,uucp_path:0.02%,bgp:0.02%,klogin:0.02%,uucp:0.02%,vmnet:0.02%,supdup:0.02%,ssh:0.02%,nnsp:0.02%,login:0.02%,hostnames:0.02%,efs:0.02%,daytime:0.02%,netbios_ns:0.02%,link:0.02%,ldap:0.02%,pop_2:0.02%,exec:0.02%,netbios_dgm:0.02%,http_443:0.02%,kshell:0.02%,name:0.02%,ctf:0.02%,netstat:0.02%,Z39_50:0.02%,IRC:0.01%,urh_i:0.0%,X11:0.0%,tim_i:0.0%,tftp_u:0.0%,red_i:0.0%,pm_dump:0.0%] ** flag:[SF:76.6%,S0:17.61%,REJ:5.44%,RSTR:0.18%,RSTO:0.12%,SH:0.02%,S1:0.01%,S2:0.0%,RSTOS0:0.0%,S3:0.0%,OTH:0.0%] ** src_bytes:3300 (0%) ** dst_bytes:10725 (2%) ** land:[0:100.0%,1:0.0%] ** wrong_fragment:[0:99.75%,3:0.2%,1:0.05%] ** urgent:[0:100.0%,1:0.0%,3:0.0%,2:0.0%] ** 
hot:[0:99.35%,2:0.44%,28:0.06%,1:0.05%,4:0.02%,6:0.02%,5:0.01%,3:0.01%,14:0.01%,30:0.01%,22:0.01%,19:0.0%,18:0.0%,24:0.0%,20:0.0%,7:0.0%,17:0.0%,12:0.0%,15:0.0%,16:0.0%,10:0.0%,9:0.0%] ** num_failed_logins:[0:99.99%,1:0.01%,2:0.0%,5:0.0%,4:0.0%,3:0.0%] ** logged_in:[0:85.18%,1:14.82%] ** num_compromised:[0:99.55%,1:0.44%,2:0.0%,4:0.0%,3:0.0%,6:0.0%,5:0.0%,7:0.0%,12:0.0%,9:0.0%,11:0.0%,767:0.0%,238:0.0%,16:0.0%,18:0.0%,275:0.0%,21:0.0%,22:0.0%,281:0.0%,38:0.0%,102:0.0%,884:0.0%,13:0.0%] ** root_shell:[0:99.99%,1:0.01%] ** su_attempted:[0:100.0%,2:0.0%,1:0.0%] ** num_root:[0:99.88%,1:0.05%,9:0.03%,6:0.03%,2:0.0%,5:0.0%,4:0.0%,3:0.0%,119:0.0%,7:0.0%,993:0.0%,268:0.0%,14:0.0%,16:0.0%,278:0.0%,39:0.0%,306:0.0%,54:0.0%,857:0.0%,12:0.0%] ** num_file_creations:[0:99.95%,1:0.04%,2:0.01%,4:0.0%,16:0.0%,9:0.0%,5:0.0%,7:0.0%,8:0.0%,28:0.0%,25:0.0%,12:0.0%,14:0.0%,15:0.0%,20:0.0%,21:0.0%,22:0.0%,10:0.0%] ** num_shells:[0:99.99%,1:0.01%,2:0.0%] ** num_access_files:[0:99.91%,1:0.09%,2:0.01%,3:0.0%,8:0.0%,6:0.0%,4:0.0%] ** num_outbound_cmds:[0:100.0%] ** is_host_login:[0:100.0%] ** is_guest_login:[0:99.86%,1:0.14%] ** count:490 (0%) ** srv_count:470 (0%) ** 
serror_rate:[0.0:81.94%,1.0:17.52%,0.99:0.06%,0.08:0.03%,0.05:0.03%,0.07:0.03%,0.06:0.03%,0.14:0.02%,0.04:0.02%,0.01:0.02%,0.09:0.02%,0.1:0.02%,0.03:0.02%,0.11:0.02%,0.13:0.02%,0.5:0.02%,0.12:0.02%,0.2:0.01%,0.25:0.01%,0.02:0.01%,0.17:0.01%,0.33:0.01%,0.15:0.01%,0.22:0.01%,0.18:0.01%,0.23:0.01%,0.16:0.01%,0.21:0.01%,0.19:0.0%,0.27:0.0%,0.98:0.0%,0.44:0.0%,0.29:0.0%,0.24:0.0%,0.97:0.0%,0.96:0.0%,0.31:0.0%,0.26:0.0%,0.67:0.0%,0.36:0.0%,0.65:0.0%,0.94:0.0%,0.28:0.0%,0.79:0.0%,0.95:0.0%,0.53:0.0%,0.81:0.0%,0.62:0.0%,0.85:0.0%,0.6:0.0%,0.64:0.0%,0.88:0.0%,0.68:0.0%,0.52:0.0%,0.66:0.0%,0.71:0.0%,0.93:0.0%,0.57:0.0%,0.63:0.0%,0.83:0.0%,0.78:0.0%,0.75:0.0%,0.51:0.0%,0.58:0.0%,0.56:0.0%,0.55:0.0%,0.3:0.0%,0.76:0.0%,0.86:0.0%,0.74:0.0%,0.35:0.0%,0.38:0.0%,0.54:0.0%,0.72:0.0%,0.84:0.0%,0.69:0.0%,0.61:0.0%,0.59:0.0%,0.42:0.0%,0.32:0.0%,0.82:0.0%,0.77:0.0%,0.7:0.0%,0.91:0.0%,0.92:0.0%,0.4:0.0%,0.73:0.0%,0.9:0.0%,0.34:0.0%,0.8:0.0%,0.89:0.0%,0.87:0.0%] ** srv_serror_rate:[0.0:82.12%,1.0:17.62%,0.03:0.03%,0.04:0.02%,0.05:0.02%,0.06:0.02%,0.02:0.02%,0.5:0.02%,0.08:0.01%,0.07:0.01%,0.25:0.01%,0.33:0.01%,0.17:0.01%,0.09:0.01%,0.1:0.01%,0.2:0.01%,0.11:0.01%,0.12:0.01%,0.14:0.01%,0.01:0.0%,0.67:0.0%,0.92:0.0%,0.18:0.0%,0.94:0.0%,0.95:0.0%,0.58:0.0%,0.88:0.0%,0.75:0.0%,0.19:0.0%,0.4:0.0%,0.76:0.0%,0.83:0.0%,0.91:0.0%,0.15:0.0%,0.22:0.0%,0.93:0.0%,0.85:0.0%,0.27:0.0%,0.86:0.0%,0.44:0.0%,0.35:0.0%,0.51:0.0%,0.36:0.0%,0.38:0.0%,0.21:0.0%,0.8:0.0%,0.9:0.0%,0.45:0.0%,0.16:0.0%,0.37:0.0%,0.23:0.0%] ** 
rerror_rate:[0.0:94.12%,1.0:5.46%,0.86:0.02%,0.87:0.02%,0.92:0.02%,0.25:0.02%,0.95:0.02%,0.9:0.02%,0.5:0.02%,0.91:0.02%,0.88:0.01%,0.96:0.01%,0.33:0.01%,0.2:0.01%,0.93:0.01%,0.94:0.01%,0.01:0.01%,0.89:0.01%,0.85:0.01%,0.99:0.01%,0.82:0.01%,0.77:0.01%,0.17:0.01%,0.97:0.01%,0.02:0.01%,0.98:0.01%,0.03:0.01%,0.8:0.01%,0.78:0.01%,0.76:0.01%,0.75:0.0%,0.79:0.0%,0.84:0.0%,0.14:0.0%,0.05:0.0%,0.73:0.0%,0.81:0.0%,0.06:0.0%,0.71:0.0%,0.83:0.0%,0.67:0.0%,0.56:0.0%,0.08:0.0%,0.04:0.0%,0.1:0.0%,0.09:0.0%,0.12:0.0%,0.07:0.0%,0.11:0.0%,0.69:0.0%,0.74:0.0%,0.64:0.0%,0.4:0.0%,0.72:0.0%,0.7:0.0%,0.6:0.0%,0.29:0.0%,0.22:0.0%,0.62:0.0%,0.65:0.0%,0.21:0.0%,0.68:0.0%,0.37:0.0%,0.19:0.0%,0.43:0.0%,0.58:0.0%,0.35:0.0%,0.24:0.0%,0.31:0.0%,0.23:0.0%,0.27:0.0%,0.28:0.0%,0.26:0.0%,0.36:0.0%,0.34:0.0%,0.66:0.0%,0.32:0.0%] ** srv_rerror_rate:[0.0:93.99%,1.0:5.69%,0.33:0.05%,0.5:0.04%,0.25:0.04%,0.2:0.03%,0.17:0.03%,0.14:0.01%,0.04:0.01%,0.03:0.01%,0.12:0.01%,0.02:0.01%,0.06:0.01%,0.05:0.01%,0.07:0.01%,0.4:0.01%,0.67:0.01%,0.08:0.01%,0.11:0.01%,0.29:0.01%,0.09:0.0%,0.1:0.0%,0.75:0.0%,0.6:0.0%,0.01:0.0%,0.22:0.0%,0.71:0.0%,0.86:0.0%,0.83:0.0%,0.73:0.0%,0.81:0.0%,0.88:0.0%,0.96:0.0%,0.92:0.0%,0.18:0.0%,0.43:0.0%,0.79:0.0%,0.93:0.0%,0.13:0.0%,0.27:0.0%,0.38:0.0%,0.94:0.0%,0.95:0.0%,0.37:0.0%,0.85:0.0%,0.8:0.0%,0.62:0.0%,0.82:0.0%,0.69:0.0%,0.21:0.0%,0.87:0.0%] ** 
same_srv_rate:[1.0:77.34%,0.06:2.27%,0.05:2.14%,0.04:2.06%,0.07:2.03%,0.03:1.93%,0.02:1.9%,0.01:1.77%,0.08:1.48%,0.09:1.01%,0.1:0.8%,0.0:0.73%,0.12:0.73%,0.11:0.67%,0.13:0.66%,0.14:0.51%,0.15:0.35%,0.5:0.29%,0.16:0.25%,0.17:0.17%,0.33:0.12%,0.18:0.1%,0.2:0.08%,0.19:0.07%,0.67:0.05%,0.25:0.04%,0.21:0.04%,0.99:0.03%,0.22:0.03%,0.24:0.02%,0.23:0.02%,0.4:0.02%,0.98:0.02%,0.75:0.02%,0.27:0.02%,0.26:0.01%,0.8:0.01%,0.29:0.01%,0.38:0.01%,0.86:0.01%,0.3:0.01%,0.31:0.01%,0.44:0.01%,0.83:0.01%,0.36:0.01%,0.28:0.01%,0.43:0.01%,0.6:0.01%,0.42:0.01%,0.97:0.01%,0.32:0.01%,0.35:0.01%,0.45:0.01%,0.47:0.01%,0.88:0.0%,0.48:0.0%,0.39:0.0%,0.52:0.0%,0.46:0.0%,0.37:0.0%,0.41:0.0%,0.89:0.0%,0.34:0.0%,0.92:0.0%,0.54:0.0%,0.53:0.0%,0.94:0.0%,0.95:0.0%,0.57:0.0%,0.96:0.0%,0.64:0.0%,0.71:0.0%,0.56:0.0%,0.62:0.0%,0.78:0.0%,0.9:0.0%,0.49:0.0%,0.91:0.0%,0.55:0.0%,0.65:0.0%,0.73:0.0%,0.58:0.0%,0.59:0.0%,0.93:0.0%,0.76:0.0%,0.51:0.0%,0.77:0.0%,0.82:0.0%,0.81:0.0%,0.74:0.0%,0.69:0.0%,0.79:0.0%,0.72:0.0%,0.7:0.0%,0.85:0.0%,0.68:0.0%,0.61:0.0%,0.63:0.0%,0.87:0.0%] ** diff_srv_rate:[0.0:77.33%,0.06:10.69%,0.07:5.83%,0.05:3.89%,0.08:0.66%,1.0:0.48%,0.04:0.19%,0.67:0.13%,0.5:0.12%,0.09:0.08%,0.6:0.06%,0.12:0.05%,0.1:0.04%,0.11:0.04%,0.14:0.03%,0.4:0.02%,0.13:0.02%,0.29:0.02%,0.01:0.02%,0.15:0.02%,0.03:0.02%,0.33:0.02%,0.17:0.02%,0.25:0.02%,0.75:0.01%,0.2:0.01%,0.18:0.01%,0.16:0.01%,0.19:0.01%,0.02:0.01%,0.22:0.01%,0.21:0.01%,0.27:0.01%,0.96:0.01%,0.31:0.01%,0.38:0.01%,0.24:0.01%,0.23:0.01%,0.43:0.0%,0.52:0.0%,0.95:0.0%,0.44:0.0%,0.53:0.0%,0.36:0.0%,0.8:0.0%,0.57:0.0%,0.42:0.0%,0.3:0.0%,0.26:0.0%,0.28:0.0%,0.56:0.0%,0.99:0.0%,0.54:0.0%,0.62:0.0%,0.37:0.0%,0.55:0.0%,0.35:0.0%,0.41:0.0%,0.47:0.0%,0.89:0.0%,0.32:0.0%,0.71:0.0%,0.58:0.0%,0.46:0.0%,0.39:0.0%,0.51:0.0%,0.45:0.0%,0.97:0.0%,0.83:0.0%,0.7:0.0%,0.69:0.0%,0.78:0.0%,0.74:0.0%,0.64:0.0%,0.73:0.0%,0.82:0.0%,0.88:0.0%,0.86:0.0%] ** 
srv_diff_host_rate:[0.0:92.99%,1.0:1.64%,0.12:0.31%,0.5:0.29%,0.67:0.29%,0.33:0.25%,0.11:0.24%,0.25:0.23%,0.1:0.22%,0.14:0.21%,0.17:0.21%,0.08:0.2%,0.15:0.2%,0.18:0.19%,0.2:0.19%,0.09:0.19%,0.4:0.19%,0.07:0.17%,0.29:0.17%,0.13:0.16%,0.22:0.16%,0.06:0.14%,0.02:0.1%,0.05:0.1%,0.01:0.08%,0.21:0.08%,0.19:0.08%,0.16:0.07%,0.75:0.07%,0.27:0.06%,0.04:0.06%,0.6:0.06%,0.3:0.06%,0.38:0.05%,0.43:0.05%,0.23:0.05%,0.03:0.03%,0.24:0.02%,0.36:0.02%,0.31:0.02%,0.8:0.02%,0.57:0.01%,0.44:0.01%,0.28:0.01%,0.26:0.01%,0.42:0.0%,0.45:0.0%,0.62:0.0%,0.83:0.0%,0.71:0.0%,0.56:0.0%,0.35:0.0%,0.32:0.0%,0.37:0.0%,0.41:0.0%,0.47:0.0%,0.86:0.0%,0.55:0.0%,0.54:0.0%,0.88:0.0%,0.64:0.0%,0.46:0.0%,0.7:0.0%,0.77:0.0%] ** dst_host_count:256 (0%) ** dst_host_srv_count:256 (0%) ** dst_host_same_srv_rate:101 (0%) ** dst_host_diff_srv_rate:101 (0%) ** dst_host_same_src_port_rate:101 (0%) ** dst_host_srv_diff_host_rate:[0.0:89.45%,0.02:2.38%,0.01:2.13%,0.04:1.35%,0.03:1.34%,0.05:0.94%,0.06:0.39%,0.07:0.31%,0.5:0.15%,0.08:0.14%,0.09:0.13%,0.15:0.09%,0.11:0.09%,0.16:0.08%,0.13:0.08%,0.1:0.08%,0.14:0.07%,1.0:0.07%,0.17:0.07%,0.2:0.07%,0.12:0.07%,0.18:0.07%,0.25:0.05%,0.22:0.05%,0.19:0.05%,0.21:0.05%,0.24:0.03%,0.23:0.02%,0.26:0.02%,0.27:0.02%,0.33:0.02%,0.29:0.02%,0.51:0.02%,0.4:0.01%,0.28:0.01%,0.3:0.01%,0.67:0.01%,0.52:0.01%,0.31:0.01%,0.32:0.01%,0.38:0.01%,0.53:0.0%,0.43:0.0%,0.44:0.0%,0.34:0.0%,0.6:0.0%,0.36:0.0%,0.57:0.0%,0.35:0.0%,0.54:0.0%,0.37:0.0%,0.56:0.0%,0.55:0.0%,0.42:0.0%,0.46:0.0%,0.45:0.0%,0.41:0.0%,0.48:0.0%,0.39:0.0%,0.8:0.0%,0.7:0.0%,0.47:0.0%,0.62:0.0%,0.75:0.0%,0.58:0.0%] ** 
dst_host_serror_rate:[0.0:80.93%,1.0:17.56%,0.01:0.74%,0.02:0.2%,0.03:0.09%,0.09:0.05%,0.04:0.04%,0.05:0.04%,0.07:0.03%,0.08:0.03%,0.06:0.02%,0.14:0.02%,0.15:0.02%,0.11:0.02%,0.13:0.02%,0.16:0.02%,0.1:0.02%,0.12:0.01%,0.18:0.01%,0.25:0.01%,0.2:0.01%,0.17:0.01%,0.33:0.01%,0.99:0.01%,0.19:0.01%,0.31:0.01%,0.27:0.01%,0.5:0.0%,0.22:0.0%,0.98:0.0%,0.35:0.0%,0.28:0.0%,0.53:0.0%,0.24:0.0%,0.96:0.0%,0.3:0.0%,0.26:0.0%,0.97:0.0%,0.29:0.0%,0.94:0.0%,0.42:0.0%,0.32:0.0%,0.56:0.0%,0.55:0.0%,0.95:0.0%,0.6:0.0%,0.23:0.0%,0.93:0.0%,0.34:0.0%,0.85:0.0%,0.89:0.0%,0.21:0.0%,0.92:0.0%,0.58:0.0%,0.43:0.0%,0.9:0.0%,0.57:0.0%,0.91:0.0%,0.49:0.0%,0.82:0.0%,0.36:0.0%,0.87:0.0%,0.45:0.0%,0.62:0.0%,0.65:0.0%,0.46:0.0%,0.38:0.0%,0.61:0.0%,0.47:0.0%,0.76:0.0%,0.81:0.0%,0.54:0.0%,0.64:0.0%,0.44:0.0%,0.48:0.0%,0.72:0.0%,0.39:0.0%,0.52:0.0%,0.51:0.0%,0.67:0.0%,0.84:0.0%,0.73:0.0%,0.4:0.0%,0.69:0.0%,0.79:0.0%,0.41:0.0%,0.68:0.0%,0.88:0.0%,0.77:0.0%,0.75:0.0%,0.7:0.0%,0.8:0.0%,0.59:0.0%,0.71:0.0%,0.37:0.0%,0.86:0.0%,0.66:0.0%,0.78:0.0%,0.74:0.0%,0.83:0.0%] ** dst_host_srv_serror_rate:[0.0:81.16%,1.0:17.61%,0.01:0.99%,0.02:0.14%,0.03:0.03%,0.04:0.02%,0.05:0.01%,0.06:0.01%,0.08:0.0%,0.5:0.0%,0.07:0.0%,0.1:0.0%,0.09:0.0%,0.11:0.0%,0.17:0.0%,0.14:0.0%,0.12:0.0%,0.96:0.0%,0.33:0.0%,0.67:0.0%,0.97:0.0%,0.25:0.0%,0.98:0.0%,0.4:0.0%,0.75:0.0%,0.48:0.0%,0.83:0.0%,0.16:0.0%,0.93:0.0%,0.69:0.0%,0.2:0.0%,0.91:0.0%,0.78:0.0%,0.95:0.0%,0.8:0.0%,0.92:0.0%,0.68:0.0%,0.29:0.0%,0.38:0.0%,0.88:0.0%,0.3:0.0%,0.32:0.0%,0.94:0.0%,0.57:0.0%,0.63:0.0%,0.62:0.0%,0.31:0.0%,0.85:0.0%,0.56:0.0%,0.81:0.0%,0.74:0.0%,0.86:0.0%,0.13:0.0%,0.23:0.0%,0.18:0.0%,0.64:0.0%,0.46:0.0%,0.52:0.0%,0.66:0.0%,0.6:0.0%,0.84:0.0%,0.55:0.0%,0.9:0.0%,0.15:0.0%,0.79:0.0%,0.82:0.0%,0.87:0.0%,0.47:0.0%,0.53:0.0%,0.45:0.0%,0.42:0.0%,0.24:0.0%] ** dst_host_rerror_rate:101 (0%) ** dst_host_srv_rerror_rate:101 (0%) ** 
outcome:[smurf.:56.84%,neptune.:21.7%,normal.:19.69%,back.:0.45%,satan.:0.32%,ipsweep.:0.25%,portsweep.:0.21%,warezclient.:0.21%,teardrop.:0.2%,pod.:0.05%,nmap.:0.05%,guess_passwd.:0.01%,buffer_overflow.:0.01%,land.:0.0%,warezmaster.:0.0%,imap.:0.0%,rootkit.:0.0%,loadmodule.:0.0%,ftp_write.:0.0%,multihop.:0.0%,phf.:0.0%,perl.:0.0%,spy.:0.0%] ###Markdown Encode the feature vectorEncode every row in the database. This is not instant! ###Code # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = f"{name}-{x}" df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) # Now encode the feature vector encode_numeric_zscore(df, 'duration') encode_text_dummy(df, 'protocol_type') encode_text_dummy(df, 'service') encode_text_dummy(df, 'flag') encode_numeric_zscore(df, 'src_bytes') encode_numeric_zscore(df, 'dst_bytes') encode_text_dummy(df, 'land') encode_numeric_zscore(df, 'wrong_fragment') encode_numeric_zscore(df, 'urgent') encode_numeric_zscore(df, 'hot') encode_numeric_zscore(df, 'num_failed_logins') encode_text_dummy(df, 'logged_in') encode_numeric_zscore(df, 'num_compromised') encode_numeric_zscore(df, 'root_shell') encode_numeric_zscore(df, 'su_attempted') encode_numeric_zscore(df, 'num_root') encode_numeric_zscore(df, 'num_file_creations') encode_numeric_zscore(df, 'num_shells') encode_numeric_zscore(df, 'num_access_files') encode_numeric_zscore(df, 'num_outbound_cmds') encode_text_dummy(df, 'is_host_login') encode_text_dummy(df, 'is_guest_login') encode_numeric_zscore(df, 'count') encode_numeric_zscore(df, 'srv_count') encode_numeric_zscore(df, 'serror_rate') encode_numeric_zscore(df, 'srv_serror_rate') 
encode_numeric_zscore(df, 'rerror_rate') encode_numeric_zscore(df, 'srv_rerror_rate') encode_numeric_zscore(df, 'same_srv_rate') encode_numeric_zscore(df, 'diff_srv_rate') encode_numeric_zscore(df, 'srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_count') encode_numeric_zscore(df, 'dst_host_srv_count') encode_numeric_zscore(df, 'dst_host_same_srv_rate') encode_numeric_zscore(df, 'dst_host_diff_srv_rate') encode_numeric_zscore(df, 'dst_host_same_src_port_rate') encode_numeric_zscore(df, 'dst_host_srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_serror_rate') encode_numeric_zscore(df, 'dst_host_srv_serror_rate') encode_numeric_zscore(df, 'dst_host_rerror_rate') encode_numeric_zscore(df, 'dst_host_srv_rerror_rate') # display 5 rows df.dropna(inplace=True,axis=1) df[0:5] # This is the numeric feature vector, as it goes to the neural net # Convert to numpy - Classification x_columns = df.columns.drop('outcome') x = df[x_columns].values dummies = pd.get_dummies(df['outcome']) # Classification outcomes = dummies.columns num_classes = len(outcomes) y = dummies.values df.groupby('outcome')['outcome'].count() ###Output _____no_output_____ ###Markdown Train the Neural Network ###Code import pandas as pd import io import requests import numpy as np import os from sklearn.model_selection import train_test_split from sklearn import metrics from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping # Create a test/train split. 
# 25% of the rows are held back for testing.

# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.25, random_state=42)

# Create neural net
model = Sequential()
model.add(Dense(10, input_dim=x.shape[1], activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3,
                        patience=5, verbose=1, mode='auto')
model.fit(x_train, y_train, validation_data=(x_test, y_test),
          callbacks=[monitor], verbose=2, epochs=1000)

# Measure accuracy
pred = model.predict(x_test)
pred = np.argmax(pred, axis=1)
y_eval = np.argmax(y_test, axis=1)
score = metrics.accuracy_score(y_eval, pred)
print("Validation score: {}".format(score))
###Output
Validation score: 0.9971418392628698
###Markdown
T81-558: Applications of Deep Neural Networks
**Module 14: Other Neural Network Techniques**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
Module 14 Video Material
* Part 14.1: What is AutoML [[Video]](https://www.youtube.com/watch?v=TFUysIR5AB0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_01_automl.ipynb)
* Part 14.2: Using Denoising AutoEncoders in Keras [[Video]](https://www.youtube.com/watch?v=4bTSu6_fucc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_02_auto_encode.ipynb)
* Part 14.3: Anomaly Detection in Keras [[Video]](https://www.youtube.com/watch?v=1ySn6h2A68I&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_03_anomaly.ipynb)
* **Part 14.4: Training an Intrusion Detection System with KDD99** [[Video]](https://www.youtube.com/watch?v=VgyKQ5MTDFc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_04_ids_kdd99.ipynb)
* Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](t81_558_class_14_05_new_tech.ipynb)

Part 14.4: Training an Intrusion Detection System with KDD99

The [KDD-99 dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html) is very famous in the security field and almost a "hello world" of intrusion detection systems in machine learning.
Read in Raw KDD-99 Dataset
###Code
import pandas as pd
from tensorflow.keras.utils import get_file

try:
    path = get_file('kddcup.data_10_percent.gz',
                    origin='http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz')
except:
    print('Error downloading')
    raise

print(path)

# This file is a CSV, just no CSV extension or headers
# Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html
df = pd.read_csv(path, header=None)

print("Read {} rows.".format(len(df)))
# df = df.sample(frac=0.1, replace=False)  # Uncomment this line to sample only 10% of the dataset
df.dropna(inplace=True, axis=1)  # For now, just drop columns with missing values

# The CSV file has no column heads, so add them
df.columns = [
    'duration', 'protocol_type', 'service', 'flag', 'src_bytes',
    'dst_bytes', 'land', 'wrong_fragment', 'urgent', 'hot',
    'num_failed_logins', 'logged_in', 'num_compromised', 'root_shell',
    'su_attempted', 'num_root', 'num_file_creations', 'num_shells',
    'num_access_files', 'num_outbound_cmds', 'is_host_login',
    'is_guest_login', 'count', 'srv_count', 'serror_rate',
    'srv_serror_rate', 'rerror_rate', 'srv_rerror_rate',
    'same_srv_rate', 'diff_srv_rate', 'srv_diff_host_rate',
    'dst_host_count', 'dst_host_srv_count', 'dst_host_same_srv_rate',
    'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate',
    'dst_host_srv_diff_host_rate', 'dst_host_serror_rate',
    'dst_host_srv_serror_rate', 'dst_host_rerror_rate',
    'dst_host_srv_rerror_rate', 'outcome'
]

# display 5 rows
df[0:5]
###Output
/Users/jheaton/.keras/datasets/kddcup.data_10_percent.gz
Read 494021 rows.
###Markdown
Analyzing a Dataset

The following script can be used to give a high-level overview of how a dataset appears.
###Code
ENCODING = 'utf-8'

def expand_categories(values):
    result = []
    s = values.value_counts()
    t = float(len(values))
    for v in s.index:
        result.append("{}:{}%".format(v, round(100 * (s[v] / t), 2)))
    return "[{}]".format(",".join(result))

def analyze(df):
    print()
    cols = df.columns.values
    total = float(len(df))
    print("{} rows".format(int(total)))
    for col in cols:
        uniques = df[col].unique()
        unique_count = len(uniques)
        if unique_count > 100:
            print("** {}:{} ({}%)".format(col, unique_count, int((unique_count / total) * 100)))
        else:
            print("** {}:{}".format(col, expand_categories(df[col])))

# Analyze KDD-99
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore

analyze(df)
###Output
494021 rows
** duration:2495 (0%)
** protocol_type:[icmp:57.41%,tcp:38.47%,udp:4.12%]
** service:[ecr_i:56.96%,private:22.45%,http:13.01%,smtp:1.97%,other:1.46%,domain_u:1.19%,ftp_data:0.96%,eco_i:0.33%,ftp:0.16%,finger:0.14%,urp_i:0.11%,telnet:0.1%,ntp_u:0.08%,auth:0.07%,pop_3:0.04%,time:0.03%,csnet_ns:0.03%,remote_job:0.02%,gopher:0.02%,imap4:0.02%,discard:0.02%,domain:0.02%,systat:0.02%,iso_tsap:0.02%,shell:0.02%,echo:0.02%,rje:0.02%,whois:0.02%,sql_net:0.02%,printer:0.02%,nntp:0.02%,courier:0.02%,sunrpc:0.02%,mtp:0.02%,netbios_ssn:0.02%,uucp_path:0.02%,bgp:0.02%,klogin:0.02%,uucp:0.02%,vmnet:0.02%,supdup:0.02%,ssh:0.02%,nnsp:0.02%,login:0.02%,hostnames:0.02%,efs:0.02%,daytime:0.02%,netbios_ns:0.02%,link:0.02%,ldap:0.02%,pop_2:0.02%,exec:0.02%,netbios_dgm:0.02%,http_443:0.02%,kshell:0.02%,name:0.02%,ctf:0.02%,netstat:0.02%,Z39_50:0.02%,IRC:0.01%,urh_i:0.0%,X11:0.0%,tim_i:0.0%,tftp_u:0.0%,red_i:0.0%,pm_dump:0.0%]
** flag:[SF:76.6%,S0:17.61%,REJ:5.44%,RSTR:0.18%,RSTO:0.12%,SH:0.02%,S1:0.01%,S2:0.0%,RSTOS0:0.0%,S3:0.0%,OTH:0.0%]
** src_bytes:3300 (0%)
** dst_bytes:10725 (2%)
** land:[0:100.0%,1:0.0%]
** wrong_fragment:[0:99.75%,3:0.2%,1:0.05%]
** urgent:[0:100.0%,1:0.0%,3:0.0%,2:0.0%]
**
hot:[0:99.35%,2:0.44%,28:0.06%,1:0.05%,4:0.02%,6:0.02%,5:0.01%,3:0.01%,14:0.01%,30:0.01%,22:0.01%,19:0.0%,18:0.0%,24:0.0%,20:0.0%,7:0.0%,17:0.0%,12:0.0%,15:0.0%,16:0.0%,10:0.0%,9:0.0%] ** num_failed_logins:[0:99.99%,1:0.01%,2:0.0%,5:0.0%,4:0.0%,3:0.0%] ** logged_in:[0:85.18%,1:14.82%] ** num_compromised:[0:99.55%,1:0.44%,2:0.0%,4:0.0%,3:0.0%,6:0.0%,5:0.0%,7:0.0%,12:0.0%,9:0.0%,11:0.0%,767:0.0%,238:0.0%,16:0.0%,18:0.0%,275:0.0%,21:0.0%,22:0.0%,281:0.0%,38:0.0%,102:0.0%,884:0.0%,13:0.0%] ** root_shell:[0:99.99%,1:0.01%] ** su_attempted:[0:100.0%,2:0.0%,1:0.0%] ** num_root:[0:99.88%,1:0.05%,9:0.03%,6:0.03%,2:0.0%,5:0.0%,4:0.0%,3:0.0%,119:0.0%,7:0.0%,993:0.0%,268:0.0%,14:0.0%,16:0.0%,278:0.0%,39:0.0%,306:0.0%,54:0.0%,857:0.0%,12:0.0%] ** num_file_creations:[0:99.95%,1:0.04%,2:0.01%,4:0.0%,16:0.0%,9:0.0%,5:0.0%,7:0.0%,8:0.0%,28:0.0%,25:0.0%,12:0.0%,14:0.0%,15:0.0%,20:0.0%,21:0.0%,22:0.0%,10:0.0%] ** num_shells:[0:99.99%,1:0.01%,2:0.0%] ** num_access_files:[0:99.91%,1:0.09%,2:0.01%,3:0.0%,8:0.0%,6:0.0%,4:0.0%] ** num_outbound_cmds:[0:100.0%] ** is_host_login:[0:100.0%] ** is_guest_login:[0:99.86%,1:0.14%] ** count:490 (0%) ** srv_count:470 (0%) ** 
serror_rate:[0.0:81.94%,1.0:17.52%,0.99:0.06%,0.08:0.03%,0.05:0.03%,0.07:0.03%,0.06:0.03%,0.14:0.02%,0.04:0.02%,0.01:0.02%,0.09:0.02%,0.1:0.02%,0.03:0.02%,0.11:0.02%,0.13:0.02%,0.5:0.02%,0.12:0.02%,0.2:0.01%,0.25:0.01%,0.02:0.01%,0.17:0.01%,0.33:0.01%,0.15:0.01%,0.22:0.01%,0.18:0.01%,0.23:0.01%,0.16:0.01%,0.21:0.01%,0.19:0.0%,0.27:0.0%,0.98:0.0%,0.44:0.0%,0.29:0.0%,0.24:0.0%,0.97:0.0%,0.96:0.0%,0.31:0.0%,0.26:0.0%,0.67:0.0%,0.36:0.0%,0.65:0.0%,0.94:0.0%,0.28:0.0%,0.79:0.0%,0.95:0.0%,0.53:0.0%,0.81:0.0%,0.62:0.0%,0.85:0.0%,0.6:0.0%,0.64:0.0%,0.88:0.0%,0.68:0.0%,0.52:0.0%,0.66:0.0%,0.71:0.0%,0.93:0.0%,0.57:0.0%,0.63:0.0%,0.83:0.0%,0.78:0.0%,0.75:0.0%,0.51:0.0%,0.58:0.0%,0.56:0.0%,0.55:0.0%,0.3:0.0%,0.76:0.0%,0.86:0.0%,0.74:0.0%,0.35:0.0%,0.38:0.0%,0.54:0.0%,0.72:0.0%,0.84:0.0%,0.69:0.0%,0.61:0.0%,0.59:0.0%,0.42:0.0%,0.32:0.0%,0.82:0.0%,0.77:0.0%,0.7:0.0%,0.91:0.0%,0.92:0.0%,0.4:0.0%,0.73:0.0%,0.9:0.0%,0.34:0.0%,0.8:0.0%,0.89:0.0%,0.87:0.0%] ** srv_serror_rate:[0.0:82.12%,1.0:17.62%,0.03:0.03%,0.04:0.02%,0.05:0.02%,0.06:0.02%,0.02:0.02%,0.5:0.02%,0.08:0.01%,0.07:0.01%,0.25:0.01%,0.33:0.01%,0.17:0.01%,0.09:0.01%,0.1:0.01%,0.2:0.01%,0.11:0.01%,0.12:0.01%,0.14:0.01%,0.01:0.0%,0.67:0.0%,0.92:0.0%,0.18:0.0%,0.94:0.0%,0.95:0.0%,0.58:0.0%,0.88:0.0%,0.75:0.0%,0.19:0.0%,0.4:0.0%,0.76:0.0%,0.83:0.0%,0.91:0.0%,0.15:0.0%,0.22:0.0%,0.93:0.0%,0.85:0.0%,0.27:0.0%,0.86:0.0%,0.44:0.0%,0.35:0.0%,0.51:0.0%,0.36:0.0%,0.38:0.0%,0.21:0.0%,0.8:0.0%,0.9:0.0%,0.45:0.0%,0.16:0.0%,0.37:0.0%,0.23:0.0%] ** 
rerror_rate:[0.0:94.12%,1.0:5.46%,0.86:0.02%,0.87:0.02%,0.92:0.02%,0.25:0.02%,0.95:0.02%,0.9:0.02%,0.5:0.02%,0.91:0.02%,0.88:0.01%,0.96:0.01%,0.33:0.01%,0.2:0.01%,0.93:0.01%,0.94:0.01%,0.01:0.01%,0.89:0.01%,0.85:0.01%,0.99:0.01%,0.82:0.01%,0.77:0.01%,0.17:0.01%,0.97:0.01%,0.02:0.01%,0.98:0.01%,0.03:0.01%,0.8:0.01%,0.78:0.01%,0.76:0.01%,0.75:0.0%,0.79:0.0%,0.84:0.0%,0.14:0.0%,0.05:0.0%,0.73:0.0%,0.81:0.0%,0.06:0.0%,0.71:0.0%,0.83:0.0%,0.67:0.0%,0.56:0.0%,0.08:0.0%,0.04:0.0%,0.1:0.0%,0.09:0.0%,0.12:0.0%,0.07:0.0%,0.11:0.0%,0.69:0.0%,0.74:0.0%,0.64:0.0%,0.4:0.0%,0.72:0.0%,0.7:0.0%,0.6:0.0%,0.29:0.0%,0.22:0.0%,0.62:0.0%,0.65:0.0%,0.21:0.0%,0.68:0.0%,0.37:0.0%,0.19:0.0%,0.43:0.0%,0.58:0.0%,0.35:0.0%,0.24:0.0%,0.31:0.0%,0.23:0.0%,0.27:0.0%,0.28:0.0%,0.26:0.0%,0.36:0.0%,0.34:0.0%,0.66:0.0%,0.32:0.0%] ** srv_rerror_rate:[0.0:93.99%,1.0:5.69%,0.33:0.05%,0.5:0.04%,0.25:0.04%,0.2:0.03%,0.17:0.03%,0.14:0.01%,0.04:0.01%,0.03:0.01%,0.12:0.01%,0.02:0.01%,0.06:0.01%,0.05:0.01%,0.07:0.01%,0.4:0.01%,0.67:0.01%,0.08:0.01%,0.11:0.01%,0.29:0.01%,0.09:0.0%,0.1:0.0%,0.75:0.0%,0.6:0.0%,0.01:0.0%,0.22:0.0%,0.71:0.0%,0.86:0.0%,0.83:0.0%,0.73:0.0%,0.81:0.0%,0.88:0.0%,0.96:0.0%,0.92:0.0%,0.18:0.0%,0.43:0.0%,0.79:0.0%,0.93:0.0%,0.13:0.0%,0.27:0.0%,0.38:0.0%,0.94:0.0%,0.95:0.0%,0.37:0.0%,0.85:0.0%,0.8:0.0%,0.62:0.0%,0.82:0.0%,0.69:0.0%,0.21:0.0%,0.87:0.0%] ** 
same_srv_rate:[1.0:77.34%,0.06:2.27%,0.05:2.14%,0.04:2.06%,0.07:2.03%,0.03:1.93%,0.02:1.9%,0.01:1.77%,0.08:1.48%,0.09:1.01%,0.1:0.8%,0.0:0.73%,0.12:0.73%,0.11:0.67%,0.13:0.66%,0.14:0.51%,0.15:0.35%,0.5:0.29%,0.16:0.25%,0.17:0.17%,0.33:0.12%,0.18:0.1%,0.2:0.08%,0.19:0.07%,0.67:0.05%,0.25:0.04%,0.21:0.04%,0.99:0.03%,0.22:0.03%,0.24:0.02%,0.23:0.02%,0.4:0.02%,0.98:0.02%,0.75:0.02%,0.27:0.02%,0.26:0.01%,0.8:0.01%,0.29:0.01%,0.38:0.01%,0.86:0.01%,0.3:0.01%,0.31:0.01%,0.44:0.01%,0.83:0.01%,0.36:0.01%,0.28:0.01%,0.43:0.01%,0.6:0.01%,0.42:0.01%,0.97:0.01%,0.32:0.01%,0.35:0.01%,0.45:0.01%,0.47:0.01%,0.88:0.0%,0.48:0.0%,0.39:0.0%,0.52:0.0%,0.46:0.0%,0.37:0.0%,0.41:0.0%,0.89:0.0%,0.34:0.0%,0.92:0.0%,0.54:0.0%,0.53:0.0%,0.94:0.0%,0.95:0.0%,0.57:0.0%,0.96:0.0%,0.64:0.0%,0.71:0.0%,0.56:0.0%,0.62:0.0%,0.78:0.0%,0.9:0.0%,0.49:0.0%,0.91:0.0%,0.55:0.0%,0.65:0.0%,0.73:0.0%,0.58:0.0%,0.59:0.0%,0.93:0.0%,0.76:0.0%,0.51:0.0%,0.77:0.0%,0.82:0.0%,0.81:0.0%,0.74:0.0%,0.69:0.0%,0.79:0.0%,0.72:0.0%,0.7:0.0%,0.85:0.0%,0.68:0.0%,0.61:0.0%,0.63:0.0%,0.87:0.0%] ** diff_srv_rate:[0.0:77.33%,0.06:10.69%,0.07:5.83%,0.05:3.89%,0.08:0.66%,1.0:0.48%,0.04:0.19%,0.67:0.13%,0.5:0.12%,0.09:0.08%,0.6:0.06%,0.12:0.05%,0.1:0.04%,0.11:0.04%,0.14:0.03%,0.4:0.02%,0.13:0.02%,0.29:0.02%,0.01:0.02%,0.15:0.02%,0.03:0.02%,0.33:0.02%,0.17:0.02%,0.25:0.02%,0.75:0.01%,0.2:0.01%,0.18:0.01%,0.16:0.01%,0.19:0.01%,0.02:0.01%,0.22:0.01%,0.21:0.01%,0.27:0.01%,0.96:0.01%,0.31:0.01%,0.38:0.01%,0.24:0.01%,0.23:0.01%,0.43:0.0%,0.52:0.0%,0.95:0.0%,0.44:0.0%,0.53:0.0%,0.36:0.0%,0.8:0.0%,0.57:0.0%,0.42:0.0%,0.3:0.0%,0.26:0.0%,0.28:0.0%,0.56:0.0%,0.99:0.0%,0.54:0.0%,0.62:0.0%,0.37:0.0%,0.55:0.0%,0.35:0.0%,0.41:0.0%,0.47:0.0%,0.89:0.0%,0.32:0.0%,0.71:0.0%,0.58:0.0%,0.46:0.0%,0.39:0.0%,0.51:0.0%,0.45:0.0%,0.97:0.0%,0.83:0.0%,0.7:0.0%,0.69:0.0%,0.78:0.0%,0.74:0.0%,0.64:0.0%,0.73:0.0%,0.82:0.0%,0.88:0.0%,0.86:0.0%] ** 
srv_diff_host_rate:[0.0:92.99%,1.0:1.64%,0.12:0.31%,0.5:0.29%,0.67:0.29%,0.33:0.25%,0.11:0.24%,0.25:0.23%,0.1:0.22%,0.14:0.21%,0.17:0.21%,0.08:0.2%,0.15:0.2%,0.18:0.19%,0.2:0.19%,0.09:0.19%,0.4:0.19%,0.07:0.17%,0.29:0.17%,0.13:0.16%,0.22:0.16%,0.06:0.14%,0.02:0.1%,0.05:0.1%,0.01:0.08%,0.21:0.08%,0.19:0.08%,0.16:0.07%,0.75:0.07%,0.27:0.06%,0.04:0.06%,0.6:0.06%,0.3:0.06%,0.38:0.05%,0.43:0.05%,0.23:0.05%,0.03:0.03%,0.24:0.02%,0.36:0.02%,0.31:0.02%,0.8:0.02%,0.57:0.01%,0.44:0.01%,0.28:0.01%,0.26:0.01%,0.42:0.0%,0.45:0.0%,0.62:0.0%,0.83:0.0%,0.71:0.0%,0.56:0.0%,0.35:0.0%,0.32:0.0%,0.37:0.0%,0.41:0.0%,0.47:0.0%,0.86:0.0%,0.55:0.0%,0.54:0.0%,0.88:0.0%,0.64:0.0%,0.46:0.0%,0.7:0.0%,0.77:0.0%] ** dst_host_count:256 (0%) ** dst_host_srv_count:256 (0%) ** dst_host_same_srv_rate:101 (0%) ** dst_host_diff_srv_rate:101 (0%) ** dst_host_same_src_port_rate:101 (0%) ** dst_host_srv_diff_host_rate:[0.0:89.45%,0.02:2.38%,0.01:2.13%,0.04:1.35%,0.03:1.34%,0.05:0.94%,0.06:0.39%,0.07:0.31%,0.5:0.15%,0.08:0.14%,0.09:0.13%,0.15:0.09%,0.11:0.09%,0.16:0.08%,0.13:0.08%,0.1:0.08%,0.14:0.07%,1.0:0.07%,0.17:0.07%,0.2:0.07%,0.12:0.07%,0.18:0.07%,0.25:0.05%,0.22:0.05%,0.19:0.05%,0.21:0.05%,0.24:0.03%,0.23:0.02%,0.26:0.02%,0.27:0.02%,0.33:0.02%,0.29:0.02%,0.51:0.02%,0.4:0.01%,0.28:0.01%,0.3:0.01%,0.67:0.01%,0.52:0.01%,0.31:0.01%,0.32:0.01%,0.38:0.01%,0.53:0.0%,0.43:0.0%,0.44:0.0%,0.34:0.0%,0.6:0.0%,0.36:0.0%,0.57:0.0%,0.35:0.0%,0.54:0.0%,0.37:0.0%,0.56:0.0%,0.55:0.0%,0.42:0.0%,0.46:0.0%,0.45:0.0%,0.41:0.0%,0.48:0.0%,0.39:0.0%,0.8:0.0%,0.7:0.0%,0.47:0.0%,0.62:0.0%,0.75:0.0%,0.58:0.0%] ** 
dst_host_serror_rate:[0.0:80.93%,1.0:17.56%,0.01:0.74%,0.02:0.2%,0.03:0.09%,0.09:0.05%,0.04:0.04%,0.05:0.04%,0.07:0.03%,0.08:0.03%,0.06:0.02%,0.14:0.02%,0.15:0.02%,0.11:0.02%,0.13:0.02%,0.16:0.02%,0.1:0.02%,0.12:0.01%,0.18:0.01%,0.25:0.01%,0.2:0.01%,0.17:0.01%,0.33:0.01%,0.99:0.01%,0.19:0.01%,0.31:0.01%,0.27:0.01%,0.5:0.0%,0.22:0.0%,0.98:0.0%,0.35:0.0%,0.28:0.0%,0.53:0.0%,0.24:0.0%,0.96:0.0%,0.3:0.0%,0.26:0.0%,0.97:0.0%,0.29:0.0%,0.94:0.0%,0.42:0.0%,0.32:0.0%,0.56:0.0%,0.55:0.0%,0.95:0.0%,0.6:0.0%,0.23:0.0%,0.93:0.0%,0.34:0.0%,0.85:0.0%,0.89:0.0%,0.21:0.0%,0.92:0.0%,0.58:0.0%,0.43:0.0%,0.9:0.0%,0.57:0.0%,0.91:0.0%,0.49:0.0%,0.82:0.0%,0.36:0.0%,0.87:0.0%,0.45:0.0%,0.62:0.0%,0.65:0.0%,0.46:0.0%,0.38:0.0%,0.61:0.0%,0.47:0.0%,0.76:0.0%,0.81:0.0%,0.54:0.0%,0.64:0.0%,0.44:0.0%,0.48:0.0%,0.72:0.0%,0.39:0.0%,0.52:0.0%,0.51:0.0%,0.67:0.0%,0.84:0.0%,0.73:0.0%,0.4:0.0%,0.69:0.0%,0.79:0.0%,0.41:0.0%,0.68:0.0%,0.88:0.0%,0.77:0.0%,0.75:0.0%,0.7:0.0%,0.8:0.0%,0.59:0.0%,0.71:0.0%,0.37:0.0%,0.86:0.0%,0.66:0.0%,0.78:0.0%,0.74:0.0%,0.83:0.0%] ** dst_host_srv_serror_rate:[0.0:81.16%,1.0:17.61%,0.01:0.99%,0.02:0.14%,0.03:0.03%,0.04:0.02%,0.05:0.01%,0.06:0.01%,0.08:0.0%,0.5:0.0%,0.07:0.0%,0.1:0.0%,0.09:0.0%,0.11:0.0%,0.17:0.0%,0.14:0.0%,0.12:0.0%,0.96:0.0%,0.33:0.0%,0.67:0.0%,0.97:0.0%,0.25:0.0%,0.98:0.0%,0.4:0.0%,0.75:0.0%,0.48:0.0%,0.83:0.0%,0.16:0.0%,0.93:0.0%,0.69:0.0%,0.2:0.0%,0.91:0.0%,0.78:0.0%,0.95:0.0%,0.8:0.0%,0.92:0.0%,0.68:0.0%,0.29:0.0%,0.38:0.0%,0.88:0.0%,0.3:0.0%,0.32:0.0%,0.94:0.0%,0.57:0.0%,0.63:0.0%,0.62:0.0%,0.31:0.0%,0.85:0.0%,0.56:0.0%,0.81:0.0%,0.74:0.0%,0.86:0.0%,0.13:0.0%,0.23:0.0%,0.18:0.0%,0.64:0.0%,0.46:0.0%,0.52:0.0%,0.66:0.0%,0.6:0.0%,0.84:0.0%,0.55:0.0%,0.9:0.0%,0.15:0.0%,0.79:0.0%,0.82:0.0%,0.87:0.0%,0.47:0.0%,0.53:0.0%,0.45:0.0%,0.42:0.0%,0.24:0.0%] ** dst_host_rerror_rate:101 (0%) ** dst_host_srv_rerror_rate:101 (0%) ** 
outcome:[smurf.:56.84%,neptune.:21.7%,normal.:19.69%,back.:0.45%,satan.:0.32%,ipsweep.:0.25%,portsweep.:0.21%,warezclient.:0.21%,teardrop.:0.2%,pod.:0.05%,nmap.:0.05%,guess_passwd.:0.01%,buffer_overflow.:0.01%,land.:0.0%,warezmaster.:0.0%,imap.:0.0%,rootkit.:0.0%,loadmodule.:0.0%,ftp_write.:0.0%,multihop.:0.0%,phf.:0.0%,perl.:0.0%,spy.:0.0%]
encode_numeric_zscore(df, 'rerror_rate') encode_numeric_zscore(df, 'srv_rerror_rate') encode_numeric_zscore(df, 'same_srv_rate') encode_numeric_zscore(df, 'diff_srv_rate') encode_numeric_zscore(df, 'srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_count') encode_numeric_zscore(df, 'dst_host_srv_count') encode_numeric_zscore(df, 'dst_host_same_srv_rate') encode_numeric_zscore(df, 'dst_host_diff_srv_rate') encode_numeric_zscore(df, 'dst_host_same_src_port_rate') encode_numeric_zscore(df, 'dst_host_srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_serror_rate') encode_numeric_zscore(df, 'dst_host_srv_serror_rate') encode_numeric_zscore(df, 'dst_host_rerror_rate') encode_numeric_zscore(df, 'dst_host_srv_rerror_rate') # display 5 rows df.dropna(inplace=True,axis=1) df[0:5] # This is the numeric feature vector, as it goes to the neural net # Convert to numpy - Classification x_columns = df.columns.drop('outcome') x = df[x_columns].values dummies = pd.get_dummies(df['outcome']) # Classification outcomes = dummies.columns num_classes = len(outcomes) y = dummies.values df.groupby('outcome')['outcome'].count() ###Output _____no_output_____ ###Markdown Train the Neural Network ###Code import pandas as pd import io import requests import numpy as np import os from sklearn.model_selection import train_test_split from sklearn import metrics from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping # Create a test/train split. 
25% test # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) # Create neural net model = Sequential() model.add(Dense(10, input_dim=x.shape[1], activation='relu')) model.add(Dense(50, input_dim=x.shape[1], activation='relu')) model.add(Dense(10, input_dim=x.shape[1], activation='relu')) model.add(Dense(1, kernel_initializer='normal')) model.add(Dense(y.shape[1],activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto') model.fit(x_train,y_train,validation_data=(x_test,y_test), callbacks=[monitor],verbose=2,epochs=1000) # Measure accuracy pred = model.predict(x_test) pred = np.argmax(pred,axis=1) y_eval = np.argmax(y_test,axis=1) score = metrics.accuracy_score(y_eval, pred) print("Validation score: {}".format(score)) ###Output Validation score: 0.9971418392628698 ###Markdown T81-558: Applications of Deep Neural Networks**Module 14: Other Neural Network Techniques*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). 
Module 14 Video Material* Part 14.1: What is AutoML [[Video]](https://www.youtube.com/watch?v=TFUysIR5AB0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_01_automl.ipynb)* Part 14.2: Using Denoising AutoEncoders in Keras [[Video]](https://www.youtube.com/watch?v=4bTSu6_fucc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_02_auto_encode.ipynb)* Part 14.3: Training an Intrusion Detection System with KDD99 [[Video]](https://www.youtube.com/watch?v=1ySn6h2A68I&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_03_anomaly.ipynb)* **Part 14.4: Anomaly Detection in Keras** [[Video]](https://www.youtube.com/watch?v=VgyKQ5MTDFc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_04_ids_kdd99.ipynb)* Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](t81_558_class_14_05_new_tech.ipynb) Part 14.4: Training an Intrusion Detection System with KDD99The [KDD-99 dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html) is very famous in the security field and almost a "hello world" of Intrusion Detection Systems (IDS) in machine learning. An intrusion detection system (IDS) is a program that monitors computers and network systems for malicious activity or policy violations. Any intrusion activity or violation is typically reported either to an administrator or collected centrally. IDS types range in scope from single computers to large networks. Although the KDD99 dataset is over 20 years old, it is still widely used to demonstrate Intrusion Detection Systems (IDS). KDD99 is the data set used for The Third International Knowledge Discovery and Data Mining Tools Competition, which was held in conjunction with KDD-99, The Fifth International Conference on Knowledge Discovery and Data Mining. 
The competition task was to build a network intrusion detector, a predictive model capable of distinguishing between "bad" connections, called intrusions or attacks, and "good" normal connections. This database contains a standard set of data to be audited, including a wide variety of intrusions simulated in a military network environment. Read in Raw KDD-99 DatasetThe following code reads the KDD99 CSV dataset into a Pandas data frame. The standard format of KDD99 does not include column names. Because of that, the program adds them. ###Code import pandas as pd from tensorflow.keras.utils import get_file try: path = get_file('kddcup.data_10_percent.gz', origin='http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz') except: print('Error downloading') raise print(path) # This file is a CSV, just no CSV extension or headers # Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html df = pd.read_csv(path, header=None) print("Read {} rows.".format(len(df))) # df = df.sample(frac=0.1, replace=False) # Uncomment this line to sample only 10% of the dataset df.dropna(inplace=True,axis=1) # For now, just drop NA's (rows with missing values) # The CSV file has no column heads, so add them df.columns = [ 'duration', 'protocol_type', 'service', 'flag', 'src_bytes', 'dst_bytes', 'land', 'wrong_fragment', 'urgent', 'hot', 'num_failed_logins', 'logged_in', 'num_compromised', 'root_shell', 'su_attempted', 'num_root', 'num_file_creations', 'num_shells', 'num_access_files', 'num_outbound_cmds', 'is_host_login', 'is_guest_login', 'count', 'srv_count', 'serror_rate', 'srv_serror_rate', 'rerror_rate', 'srv_rerror_rate', 'same_srv_rate', 'diff_srv_rate', 'srv_diff_host_rate', 'dst_host_count', 'dst_host_srv_count', 'dst_host_same_srv_rate', 'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate', 'dst_host_srv_diff_host_rate', 'dst_host_serror_rate', 'dst_host_srv_serror_rate', 'dst_host_rerror_rate', 'dst_host_srv_rerror_rate', 'outcome' ] # display 5 rows 
df[0:5] ###Output /Users/jheaton/.keras/datasets/kddcup.data_10_percent.gz Read 494021 rows. ###Markdown Analyzing a DatasetBefore we preprocess the KDD99 dataset, let's have a look at the individual columns and distributions. You can use the following script to give a high-level overview of how a dataset appears. ###Code import pandas as pd import os import numpy as np from sklearn import metrics from scipy.stats import zscore def expand_categories(values): result = [] s = values.value_counts() t = float(len(values)) for v in s.index: result.append("{}:{}%".format(v,round(100*(s[v]/t),2))) return "[{}]".format(",".join(result)) def analyze(df): print() cols = df.columns.values total = float(len(df)) print("{} rows".format(int(total))) for col in cols: uniques = df[col].unique() unique_count = len(uniques) if unique_count>100: print("** {}:{} ({}%)".format(col,unique_count,int(((unique_count)/total)*100))) else: print("** {}:{}".format(col,expand_categories(df[col]))) expand_categories(df[col]) ###Output _____no_output_____ ###Markdown The analysis looks at how many unique values are present. For example, duration, which is a numeric value, has 2495 unique values, and there is a 0% overlap. A text/categorical value such as protocol_type only has a few unique values, and the program shows the percentages of each. Columns with a large number of unique values do not have their item counts shown to save display space. 
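To see concretely what the expand_categories helper above produces, here is a minimal sketch on a toy Series (the helper body is repeated so the snippet runs on its own; the toy values are illustrative, not from KDD99):

```python
import pandas as pd

# Same logic as the expand_categories helper above, repeated here so the
# snippet is self-contained: summarize a column as "value:percent" pairs,
# most frequent first.
def expand_categories(values):
    result = []
    s = values.value_counts()
    t = float(len(values))
    for v in s.index:
        result.append("{}:{}%".format(v, round(100 * (s[v] / t), 2)))
    return "[{}]".format(",".join(result))

toy = pd.Series(['tcp', 'tcp', 'tcp', 'udp'])
summary = expand_categories(toy)
print(summary)  # [tcp:75.0%,udp:25.0%]
```

This is the same "[value:percent,...]" string the analyze output below shows for low-cardinality columns such as protocol_type.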
###Code # Analyze KDD-99 analyze(df) ###Output 494021 rows ** duration:2495 (0%) ** protocol_type:[icmp:57.41%,tcp:38.47%,udp:4.12%] ** service:[ecr_i:56.96%,private:22.45%,http:13.01%,smtp:1.97%,other:1.46%,domain_u:1.19%,ftp_data:0.96%,eco_i:0.33%,ftp:0.16%,finger:0.14%,urp_i:0.11%,telnet:0.1%,ntp_u:0.08%,auth:0.07%,pop_3:0.04%,time:0.03%,csnet_ns:0.03%,remote_job:0.02%,gopher:0.02%,imap4:0.02%,discard:0.02%,domain:0.02%,systat:0.02%,iso_tsap:0.02%,shell:0.02%,echo:0.02%,rje:0.02%,whois:0.02%,sql_net:0.02%,printer:0.02%,nntp:0.02%,courier:0.02%,sunrpc:0.02%,mtp:0.02%,netbios_ssn:0.02%,uucp_path:0.02%,bgp:0.02%,klogin:0.02%,uucp:0.02%,vmnet:0.02%,supdup:0.02%,ssh:0.02%,nnsp:0.02%,login:0.02%,hostnames:0.02%,efs:0.02%,daytime:0.02%,netbios_ns:0.02%,link:0.02%,ldap:0.02%,pop_2:0.02%,exec:0.02%,netbios_dgm:0.02%,http_443:0.02%,kshell:0.02%,name:0.02%,ctf:0.02%,netstat:0.02%,Z39_50:0.02%,IRC:0.01%,urh_i:0.0%,X11:0.0%,tim_i:0.0%,tftp_u:0.0%,red_i:0.0%,pm_dump:0.0%] ** flag:[SF:76.6%,S0:17.61%,REJ:5.44%,RSTR:0.18%,RSTO:0.12%,SH:0.02%,S1:0.01%,S2:0.0%,RSTOS0:0.0%,S3:0.0%,OTH:0.0%] ** src_bytes:3300 (0%) ** dst_bytes:10725 (2%) ** land:[0:100.0%,1:0.0%] ** wrong_fragment:[0:99.75%,3:0.2%,1:0.05%] ** urgent:[0:100.0%,1:0.0%,3:0.0%,2:0.0%] ** hot:[0:99.35%,2:0.44%,28:0.06%,1:0.05%,4:0.02%,6:0.02%,5:0.01%,3:0.01%,14:0.01%,30:0.01%,22:0.01%,19:0.0%,18:0.0%,24:0.0%,20:0.0%,7:0.0%,17:0.0%,12:0.0%,15:0.0%,16:0.0%,10:0.0%,9:0.0%] ** num_failed_logins:[0:99.99%,1:0.01%,2:0.0%,5:0.0%,4:0.0%,3:0.0%] ** logged_in:[0:85.18%,1:14.82%] ** num_compromised:[0:99.55%,1:0.44%,2:0.0%,4:0.0%,3:0.0%,6:0.0%,5:0.0%,7:0.0%,12:0.0%,9:0.0%,11:0.0%,767:0.0%,238:0.0%,16:0.0%,18:0.0%,275:0.0%,21:0.0%,22:0.0%,281:0.0%,38:0.0%,102:0.0%,884:0.0%,13:0.0%] ** root_shell:[0:99.99%,1:0.01%] ** su_attempted:[0:100.0%,2:0.0%,1:0.0%] ** num_root:[0:99.88%,1:0.05%,9:0.03%,6:0.03%,2:0.0%,5:0.0%,4:0.0%,3:0.0%,119:0.0%,7:0.0%,993:0.0%,268:0.0%,14:0.0%,16:0.0%,278:0.0%,39:0.0%,306:0.0%,54:0.0%,857:0.0%,12:0.0%] ** 
num_file_creations:[0:99.95%,1:0.04%,2:0.01%,4:0.0%,16:0.0%,9:0.0%,5:0.0%,7:0.0%,8:0.0%,28:0.0%,25:0.0%,12:0.0%,14:0.0%,15:0.0%,20:0.0%,21:0.0%,22:0.0%,10:0.0%] ** num_shells:[0:99.99%,1:0.01%,2:0.0%] ** num_access_files:[0:99.91%,1:0.09%,2:0.01%,3:0.0%,8:0.0%,6:0.0%,4:0.0%] ** num_outbound_cmds:[0:100.0%] ** is_host_login:[0:100.0%] ** is_guest_login:[0:99.86%,1:0.14%] ** count:490 (0%) ** srv_count:470 (0%) ** serror_rate:[0.0:81.94%,1.0:17.52%,0.99:0.06%,0.08:0.03%,0.05:0.03%,0.07:0.03%,0.06:0.03%,0.14:0.02%,0.04:0.02%,0.01:0.02%,0.09:0.02%,0.1:0.02%,0.03:0.02%,0.11:0.02%,0.13:0.02%,0.5:0.02%,0.12:0.02%,0.2:0.01%,0.25:0.01%,0.02:0.01%,0.17:0.01%,0.33:0.01%,0.15:0.01%,0.22:0.01%,0.18:0.01%,0.23:0.01%,0.16:0.01%,0.21:0.01%,0.19:0.0%,0.27:0.0%,0.98:0.0%,0.44:0.0%,0.29:0.0%,0.24:0.0%,0.97:0.0%,0.96:0.0%,0.31:0.0%,0.26:0.0%,0.67:0.0%,0.36:0.0%,0.65:0.0%,0.94:0.0%,0.28:0.0%,0.79:0.0%,0.95:0.0%,0.53:0.0%,0.81:0.0%,0.62:0.0%,0.85:0.0%,0.6:0.0%,0.64:0.0%,0.88:0.0%,0.68:0.0%,0.52:0.0%,0.66:0.0%,0.71:0.0%,0.93:0.0%,0.57:0.0%,0.63:0.0%,0.83:0.0%,0.78:0.0%,0.75:0.0%,0.51:0.0%,0.58:0.0%,0.56:0.0%,0.55:0.0%,0.3:0.0%,0.76:0.0%,0.86:0.0%,0.74:0.0%,0.35:0.0%,0.38:0.0%,0.54:0.0%,0.72:0.0%,0.84:0.0%,0.69:0.0%,0.61:0.0%,0.59:0.0%,0.42:0.0%,0.32:0.0%,0.82:0.0%,0.77:0.0%,0.7:0.0%,0.91:0.0%,0.92:0.0%,0.4:0.0%,0.73:0.0%,0.9:0.0%,0.34:0.0%,0.8:0.0%,0.89:0.0%,0.87:0.0%] ** srv_serror_rate:[0.0:82.12%,1.0:17.62%,0.03:0.03%,0.04:0.02%,0.05:0.02%,0.06:0.02%,0.02:0.02%,0.5:0.02%,0.08:0.01%,0.07:0.01%,0.25:0.01%,0.33:0.01%,0.17:0.01%,0.09:0.01%,0.1:0.01%,0.2:0.01%,0.11:0.01%,0.12:0.01%,0.14:0.01%,0.01:0.0%,0.67:0.0%,0.92:0.0%,0.18:0.0%,0.94:0.0%,0.95:0.0%,0.58:0.0%,0.88:0.0%,0.75:0.0%,0.19:0.0%,0.4:0.0%,0.76:0.0%,0.83:0.0%,0.91:0.0%,0.15:0.0%,0.22:0.0%,0.93:0.0%,0.85:0.0%,0.27:0.0%,0.86:0.0%,0.44:0.0%,0.35:0.0%,0.51:0.0%,0.36:0.0%,0.38:0.0%,0.21:0.0%,0.8:0.0%,0.9:0.0%,0.45:0.0%,0.16:0.0%,0.37:0.0%,0.23:0.0%] ** 
rerror_rate:[0.0:94.12%,1.0:5.46%,0.86:0.02%,0.87:0.02%,0.92:0.02%,0.25:0.02%,0.95:0.02%,0.9:0.02%,0.5:0.02%,0.91:0.02%,0.88:0.01%,0.96:0.01%,0.33:0.01%,0.2:0.01%,0.93:0.01%,0.94:0.01%,0.01:0.01%,0.89:0.01%,0.85:0.01%,0.99:0.01%,0.82:0.01%,0.77:0.01%,0.17:0.01%,0.97:0.01%,0.02:0.01%,0.98:0.01%,0.03:0.01%,0.8:0.01%,0.78:0.01%,0.76:0.01%,0.75:0.0%,0.79:0.0%,0.84:0.0%,0.14:0.0%,0.05:0.0%,0.73:0.0%,0.81:0.0%,0.06:0.0%,0.71:0.0%,0.83:0.0%,0.67:0.0%,0.56:0.0%,0.08:0.0%,0.04:0.0%,0.1:0.0%,0.09:0.0%,0.12:0.0%,0.07:0.0%,0.11:0.0%,0.69:0.0%,0.74:0.0%,0.64:0.0%,0.4:0.0%,0.72:0.0%,0.7:0.0%,0.6:0.0%,0.29:0.0%,0.22:0.0%,0.62:0.0%,0.65:0.0%,0.21:0.0%,0.68:0.0%,0.37:0.0%,0.19:0.0%,0.43:0.0%,0.58:0.0%,0.35:0.0%,0.24:0.0%,0.31:0.0%,0.23:0.0%,0.27:0.0%,0.28:0.0%,0.26:0.0%,0.36:0.0%,0.34:0.0%,0.66:0.0%,0.32:0.0%] ** srv_rerror_rate:[0.0:93.99%,1.0:5.69%,0.33:0.05%,0.5:0.04%,0.25:0.04%,0.2:0.03%,0.17:0.03%,0.14:0.01%,0.04:0.01%,0.03:0.01%,0.12:0.01%,0.02:0.01%,0.06:0.01%,0.05:0.01%,0.07:0.01%,0.4:0.01%,0.67:0.01%,0.08:0.01%,0.11:0.01%,0.29:0.01%,0.09:0.0%,0.1:0.0%,0.75:0.0%,0.6:0.0%,0.01:0.0%,0.22:0.0%,0.71:0.0%,0.86:0.0%,0.83:0.0%,0.73:0.0%,0.81:0.0%,0.88:0.0%,0.96:0.0%,0.92:0.0%,0.18:0.0%,0.43:0.0%,0.79:0.0%,0.93:0.0%,0.13:0.0%,0.27:0.0%,0.38:0.0%,0.94:0.0%,0.95:0.0%,0.37:0.0%,0.85:0.0%,0.8:0.0%,0.62:0.0%,0.82:0.0%,0.69:0.0%,0.21:0.0%,0.87:0.0%] ** 
same_srv_rate:[1.0:77.34%,0.06:2.27%,0.05:2.14%,0.04:2.06%,0.07:2.03%,0.03:1.93%,0.02:1.9%,0.01:1.77%,0.08:1.48%,0.09:1.01%,0.1:0.8%,0.0:0.73%,0.12:0.73%,0.11:0.67%,0.13:0.66%,0.14:0.51%,0.15:0.35%,0.5:0.29%,0.16:0.25%,0.17:0.17%,0.33:0.12%,0.18:0.1%,0.2:0.08%,0.19:0.07%,0.67:0.05%,0.25:0.04%,0.21:0.04%,0.99:0.03%,0.22:0.03%,0.24:0.02%,0.23:0.02%,0.4:0.02%,0.98:0.02%,0.75:0.02%,0.27:0.02%,0.26:0.01%,0.8:0.01%,0.29:0.01%,0.38:0.01%,0.86:0.01%,0.3:0.01%,0.31:0.01%,0.44:0.01%,0.83:0.01%,0.36:0.01%,0.28:0.01%,0.43:0.01%,0.6:0.01%,0.42:0.01%,0.97:0.01%,0.32:0.01%,0.35:0.01%,0.45:0.01%,0.47:0.01%,0.88:0.0%,0.48:0.0%,0.39:0.0%,0.52:0.0%,0.46:0.0%,0.37:0.0%,0.41:0.0%,0.89:0.0%,0.34:0.0%,0.92:0.0%,0.54:0.0%,0.53:0.0%,0.94:0.0%,0.95:0.0%,0.57:0.0%,0.96:0.0%,0.64:0.0%,0.71:0.0%,0.56:0.0%,0.62:0.0%,0.78:0.0%,0.9:0.0%,0.49:0.0%,0.91:0.0%,0.55:0.0%,0.65:0.0%,0.73:0.0%,0.58:0.0%,0.59:0.0%,0.93:0.0%,0.76:0.0%,0.51:0.0%,0.77:0.0%,0.82:0.0%,0.81:0.0%,0.74:0.0%,0.69:0.0%,0.79:0.0%,0.72:0.0%,0.7:0.0%,0.85:0.0%,0.68:0.0%,0.61:0.0%,0.63:0.0%,0.87:0.0%] ** diff_srv_rate:[0.0:77.33%,0.06:10.69%,0.07:5.83%,0.05:3.89%,0.08:0.66%,1.0:0.48%,0.04:0.19%,0.67:0.13%,0.5:0.12%,0.09:0.08%,0.6:0.06%,0.12:0.05%,0.1:0.04%,0.11:0.04%,0.14:0.03%,0.4:0.02%,0.13:0.02%,0.29:0.02%,0.01:0.02%,0.15:0.02%,0.03:0.02%,0.33:0.02%,0.17:0.02%,0.25:0.02%,0.75:0.01%,0.2:0.01%,0.18:0.01%,0.16:0.01%,0.19:0.01%,0.02:0.01%,0.22:0.01%,0.21:0.01%,0.27:0.01%,0.96:0.01%,0.31:0.01%,0.38:0.01%,0.24:0.01%,0.23:0.01%,0.43:0.0%,0.52:0.0%,0.95:0.0%,0.44:0.0%,0.53:0.0%,0.36:0.0%,0.8:0.0%,0.57:0.0%,0.42:0.0%,0.3:0.0%,0.26:0.0%,0.28:0.0%,0.56:0.0%,0.99:0.0%,0.54:0.0%,0.62:0.0%,0.37:0.0%,0.55:0.0%,0.35:0.0%,0.41:0.0%,0.47:0.0%,0.89:0.0%,0.32:0.0%,0.71:0.0%,0.58:0.0%,0.46:0.0%,0.39:0.0%,0.51:0.0%,0.45:0.0%,0.97:0.0%,0.83:0.0%,0.7:0.0%,0.69:0.0%,0.78:0.0%,0.74:0.0%,0.64:0.0%,0.73:0.0%,0.82:0.0%,0.88:0.0%,0.86:0.0%] ** 
srv_diff_host_rate:[0.0:92.99%,1.0:1.64%,0.12:0.31%,0.5:0.29%,0.67:0.29%,0.33:0.25%,0.11:0.24%,0.25:0.23%,0.1:0.22%,0.14:0.21%,0.17:0.21%,0.08:0.2%,0.15:0.2%,0.18:0.19%,0.2:0.19%,0.09:0.19%,0.4:0.19%,0.07:0.17%,0.29:0.17%,0.13:0.16%,0.22:0.16%,0.06:0.14%,0.02:0.1%,0.05:0.1%,0.01:0.08%,0.21:0.08%,0.19:0.08%,0.16:0.07%,0.75:0.07%,0.27:0.06%,0.04:0.06%,0.6:0.06%,0.3:0.06%,0.38:0.05%,0.43:0.05%,0.23:0.05%,0.03:0.03%,0.24:0.02%,0.36:0.02%,0.31:0.02%,0.8:0.02%,0.57:0.01%,0.44:0.01%,0.28:0.01%,0.26:0.01%,0.42:0.0%,0.45:0.0%,0.62:0.0%,0.83:0.0%,0.71:0.0%,0.56:0.0%,0.35:0.0%,0.32:0.0%,0.37:0.0%,0.41:0.0%,0.47:0.0%,0.86:0.0%,0.55:0.0%,0.54:0.0%,0.88:0.0%,0.64:0.0%,0.46:0.0%,0.7:0.0%,0.77:0.0%] ** dst_host_count:256 (0%) ** dst_host_srv_count:256 (0%) ** dst_host_same_srv_rate:101 (0%) ** dst_host_diff_srv_rate:101 (0%) ** dst_host_same_src_port_rate:101 (0%) ** dst_host_srv_diff_host_rate:[0.0:89.45%,0.02:2.38%,0.01:2.13%,0.04:1.35%,0.03:1.34%,0.05:0.94%,0.06:0.39%,0.07:0.31%,0.5:0.15%,0.08:0.14%,0.09:0.13%,0.15:0.09%,0.11:0.09%,0.16:0.08%,0.13:0.08%,0.1:0.08%,0.14:0.07%,1.0:0.07%,0.17:0.07%,0.2:0.07%,0.12:0.07%,0.18:0.07%,0.25:0.05%,0.22:0.05%,0.19:0.05%,0.21:0.05%,0.24:0.03%,0.23:0.02%,0.26:0.02%,0.27:0.02%,0.33:0.02%,0.29:0.02%,0.51:0.02%,0.4:0.01%,0.28:0.01%,0.3:0.01%,0.67:0.01%,0.52:0.01%,0.31:0.01%,0.32:0.01%,0.38:0.01%,0.53:0.0%,0.43:0.0%,0.44:0.0%,0.34:0.0%,0.6:0.0%,0.36:0.0%,0.57:0.0%,0.35:0.0%,0.54:0.0%,0.37:0.0%,0.56:0.0%,0.55:0.0%,0.42:0.0%,0.46:0.0%,0.45:0.0%,0.41:0.0%,0.48:0.0%,0.39:0.0%,0.8:0.0%,0.7:0.0%,0.47:0.0%,0.62:0.0%,0.75:0.0%,0.58:0.0%] ** 
dst_host_serror_rate:[0.0:80.93%,1.0:17.56%,0.01:0.74%,0.02:0.2%,0.03:0.09%,0.09:0.05%,0.04:0.04%,0.05:0.04%,0.07:0.03%,0.08:0.03%,0.06:0.02%,0.14:0.02%,0.15:0.02%,0.11:0.02%,0.13:0.02%,0.16:0.02%,0.1:0.02%,0.12:0.01%,0.18:0.01%,0.25:0.01%,0.2:0.01%,0.17:0.01%,0.33:0.01%,0.99:0.01%,0.19:0.01%,0.31:0.01%,0.27:0.01%,0.5:0.0%,0.22:0.0%,0.98:0.0%,0.35:0.0%,0.28:0.0%,0.53:0.0%,0.24:0.0%,0.96:0.0%,0.3:0.0%,0.26:0.0%,0.97:0.0%,0.29:0.0%,0.94:0.0%,0.42:0.0%,0.32:0.0%,0.56:0.0%,0.55:0.0%,0.95:0.0%,0.6:0.0%,0.23:0.0%,0.93:0.0%,0.34:0.0%,0.85:0.0%,0.89:0.0%,0.21:0.0%,0.92:0.0%,0.58:0.0%,0.43:0.0%,0.9:0.0%,0.57:0.0%,0.91:0.0%,0.49:0.0%,0.82:0.0%,0.36:0.0%,0.87:0.0%,0.45:0.0%,0.62:0.0%,0.65:0.0%,0.46:0.0%,0.38:0.0%,0.61:0.0%,0.47:0.0%,0.76:0.0%,0.81:0.0%,0.54:0.0%,0.64:0.0%,0.44:0.0%,0.48:0.0%,0.72:0.0%,0.39:0.0%,0.52:0.0%,0.51:0.0%,0.67:0.0%,0.84:0.0%,0.73:0.0%,0.4:0.0%,0.69:0.0%,0.79:0.0%,0.41:0.0%,0.68:0.0%,0.88:0.0%,0.77:0.0%,0.75:0.0%,0.7:0.0%,0.8:0.0%,0.59:0.0%,0.71:0.0%,0.37:0.0%,0.86:0.0%,0.66:0.0%,0.78:0.0%,0.74:0.0%,0.83:0.0%] ** dst_host_srv_serror_rate:[0.0:81.16%,1.0:17.61%,0.01:0.99%,0.02:0.14%,0.03:0.03%,0.04:0.02%,0.05:0.01%,0.06:0.01%,0.08:0.0%,0.5:0.0%,0.07:0.0%,0.1:0.0%,0.09:0.0%,0.11:0.0%,0.17:0.0%,0.14:0.0%,0.12:0.0%,0.96:0.0%,0.33:0.0%,0.67:0.0%,0.97:0.0%,0.25:0.0%,0.98:0.0%,0.4:0.0%,0.75:0.0%,0.48:0.0%,0.83:0.0%,0.16:0.0%,0.93:0.0%,0.69:0.0%,0.2:0.0%,0.91:0.0%,0.78:0.0%,0.95:0.0%,0.8:0.0%,0.92:0.0%,0.68:0.0%,0.29:0.0%,0.38:0.0%,0.88:0.0%,0.3:0.0%,0.32:0.0%,0.94:0.0%,0.57:0.0%,0.63:0.0%,0.62:0.0%,0.31:0.0%,0.85:0.0%,0.56:0.0%,0.81:0.0%,0.74:0.0%,0.86:0.0%,0.13:0.0%,0.23:0.0%,0.18:0.0%,0.64:0.0%,0.46:0.0%,0.52:0.0%,0.66:0.0%,0.6:0.0%,0.84:0.0%,0.55:0.0%,0.9:0.0%,0.15:0.0%,0.79:0.0%,0.82:0.0%,0.87:0.0%,0.47:0.0%,0.53:0.0%,0.45:0.0%,0.42:0.0%,0.24:0.0%] ** dst_host_rerror_rate:101 (0%) ** dst_host_srv_rerror_rate:101 (0%) ** 
outcome:[smurf.:56.84%,neptune.:21.7%,normal.:19.69%,back.:0.45%,satan.:0.32%,ipsweep.:0.25%,portsweep.:0.21%,warezclient.:0.21%,teardrop.:0.2%,pod.:0.05%,nmap.:0.05%,guess_passwd.:0.01%,buffer_overflow.:0.01%,land.:0.0%,warezmaster.:0.0%,imap.:0.0%,rootkit.:0.0%,loadmodule.:0.0%,ftp_write.:0.0%,multihop.:0.0%,phf.:0.0%,perl.:0.0%,spy.:0.0%] ###Markdown Encode the feature vectorWe use the same two functions provided earlier to preprocess the data. The first encodes Z-Scores, and the second creates dummy variables from categorical columns. ###Code # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = f"{name}-{x}" df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Again, just as we did for anomaly detection, we preprocess the data set. We convert all numeric values to Z-Score, and we translate all categorical to dummy variables. 
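As a quick sanity check of the two encoders, the sketch below applies them to a three-row toy frame (column names chosen for illustration; the helper bodies are repeated so the snippet is self-contained):

```python
import pandas as pd

# The same two encoders as above, repeated so this snippet runs on its own.
def encode_numeric_zscore(df, name, mean=None, sd=None):
    if mean is None:
        mean = df[name].mean()
    if sd is None:
        sd = df[name].std()
    df[name] = (df[name] - mean) / sd

def encode_text_dummy(df, name):
    dummies = pd.get_dummies(df[name])
    for x in dummies.columns:
        df[f"{name}-{x}"] = dummies[x]
    df.drop(name, axis=1, inplace=True)

toy = pd.DataFrame({'duration': [0.0, 10.0, 20.0],
                    'protocol_type': ['tcp', 'udp', 'tcp']})
encode_numeric_zscore(toy, 'duration')   # mean 10, sample std 10 -> -1, 0, 1
encode_text_dummy(toy, 'protocol_type')  # -> protocol_type-tcp, protocol_type-udp
print(toy.columns.tolist())
print(toy['duration'].tolist())          # [-1.0, 0.0, 1.0]
```

The numeric column is replaced in place by its z-scores, while the categorical column is dropped and replaced by one dummy column per category, which is exactly what happens to the full KDD99 frame in the cell below.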
###Code # Now encode the feature vector encode_numeric_zscore(df, 'duration') encode_text_dummy(df, 'protocol_type') encode_text_dummy(df, 'service') encode_text_dummy(df, 'flag') encode_numeric_zscore(df, 'src_bytes') encode_numeric_zscore(df, 'dst_bytes') encode_text_dummy(df, 'land') encode_numeric_zscore(df, 'wrong_fragment') encode_numeric_zscore(df, 'urgent') encode_numeric_zscore(df, 'hot') encode_numeric_zscore(df, 'num_failed_logins') encode_text_dummy(df, 'logged_in') encode_numeric_zscore(df, 'num_compromised') encode_numeric_zscore(df, 'root_shell') encode_numeric_zscore(df, 'su_attempted') encode_numeric_zscore(df, 'num_root') encode_numeric_zscore(df, 'num_file_creations') encode_numeric_zscore(df, 'num_shells') encode_numeric_zscore(df, 'num_access_files') encode_numeric_zscore(df, 'num_outbound_cmds') encode_text_dummy(df, 'is_host_login') encode_text_dummy(df, 'is_guest_login') encode_numeric_zscore(df, 'count') encode_numeric_zscore(df, 'srv_count') encode_numeric_zscore(df, 'serror_rate') encode_numeric_zscore(df, 'srv_serror_rate') encode_numeric_zscore(df, 'rerror_rate') encode_numeric_zscore(df, 'srv_rerror_rate') encode_numeric_zscore(df, 'same_srv_rate') encode_numeric_zscore(df, 'diff_srv_rate') encode_numeric_zscore(df, 'srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_count') encode_numeric_zscore(df, 'dst_host_srv_count') encode_numeric_zscore(df, 'dst_host_same_srv_rate') encode_numeric_zscore(df, 'dst_host_diff_srv_rate') encode_numeric_zscore(df, 'dst_host_same_src_port_rate') encode_numeric_zscore(df, 'dst_host_srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_serror_rate') encode_numeric_zscore(df, 'dst_host_srv_serror_rate') encode_numeric_zscore(df, 'dst_host_rerror_rate') encode_numeric_zscore(df, 'dst_host_srv_rerror_rate') # display 5 rows df.dropna(inplace=True,axis=1) df[0:5] # This is the numeric feature vector, as it goes to the neural net # Convert to numpy - Classification x_columns = 
df.columns.drop('outcome') x = df[x_columns].values dummies = pd.get_dummies(df['outcome']) # Classification outcomes = dummies.columns num_classes = len(outcomes) y = dummies.values ###Output _____no_output_____ ###Markdown We will attempt to predict what type of attack is underway. The outcome column specifies the attack type. A value of normal indicates that there is no attack underway. We display the outcomes; some attack types are much rarer than others. ###Code df.groupby('outcome')['outcome'].count() ###Output _____no_output_____ ###Markdown Train the Neural NetworkWe now train the neural network to classify the different KDD99 outcomes. The code provided here implements a relatively simple neural with two hidden layers. We train it with the provided KDD99 data. ###Code import pandas as pd import io import requests import numpy as np import os from sklearn.model_selection import train_test_split from sklearn import metrics from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping # Create a test/train split. 25% test # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) # Create neural net model = Sequential() model.add(Dense(10, input_dim=x.shape[1], activation='relu')) model.add(Dense(50, input_dim=x.shape[1], activation='relu')) model.add(Dense(10, input_dim=x.shape[1], activation='relu')) model.add(Dense(1, kernel_initializer='normal')) model.add(Dense(y.shape[1],activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto') model.fit(x_train,y_train,validation_data=(x_test,y_test), callbacks=[monitor],verbose=2,epochs=1000) ###Output WARNING: Logging before flag parsing goes to stderr. 
W0810 23:45:15.921615 140735657337728 deprecation.py:323] From /Users/jheaton/miniconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow_core/python/ops/math_grad.py:1366: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where W0810 23:45:16.019797 140735657337728 deprecation.py:323] From /Users/jheaton/miniconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py:468: BaseResourceVariable.constraint (from tensorflow.python.ops.resource_variable_ops) is deprecated and will be removed in a future version. Instructions for updating: Apply a constraint manually following the optimizer update step. ###Markdown We can now evaluate the neural network. As you can see, the neural network achieves a 99% accuracy rate. ###Code # Measure accuracy pred = model.predict(x_test) pred = np.argmax(pred,axis=1) y_eval = np.argmax(y_test,axis=1) score = metrics.accuracy_score(y_eval, pred) print("Validation score: {}".format(score)) ###Output Validation score: 0.9971418392628698 ###Markdown T81-558: Applications of Deep Neural Networks**Module 14: Other Neural Network Techniques*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). 
Module 14 Video Material* Part 14.1: What is AutoML [[Video]](https://www.youtube.com/watch?v=1mB_5iurqzw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_01_automl.ipynb)* Part 14.2: Using Denoising AutoEncoders in Keras [[Video]](https://www.youtube.com/watch?v=4bTSu6_fucc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_02_auto_encode.ipynb)* Part 14.3: Training an Intrusion Detection System with KDD99 [[Video]](https://www.youtube.com/watch?v=1ySn6h2A68I&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_03_anomaly.ipynb)* **Part 14.4: Anomaly Detection in Keras** [[Video]](https://www.youtube.com/watch?v=VgyKQ5MTDFc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_04_ids_kdd99.ipynb)* Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_05_new_tech.ipynb) Part 14.4: Training an Intrusion Detection System with KDD99The [KDD-99 dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html) is very famous in the security field and almost a "hello world" of Intrusion Detection Systems (IDS) in machine learning. An intrusion detection system (IDS) is a program that monitors computers and network systems for malicious activity or policy violations. Any intrusion activity or violation is typically reported to an administrator or collected centrally. IDS types range in scope from single computers to large networks. Although the KDD99 dataset is over 20 years old, it is still widely used to demonstrate Intrusion Detection Systems (IDS). 
KDD99 is the data set used for The Third International Knowledge Discovery and Data Mining Tools Competition, which was held in conjunction with KDD-99, The Fifth International Conference on Knowledge Discovery and Data Mining. The competition task was to build a network intrusion detector, a predictive model capable of distinguishing between "bad" connections, called intrusions or attacks, and "good" normal connections. This database contains a standard set of data to be audited, including various intrusions simulated in a military network environment. Read in Raw KDD-99 DatasetThe following code reads the KDD99 CSV dataset into a Pandas data frame. The standard format of KDD99 does not include column names. Because of that, the program adds them. ###Code import pandas as pd from tensorflow.keras.utils import get_file pd.set_option('display.max_columns', 6) pd.set_option('display.max_rows', 5) try: path = get_file('kdd-with-columns.csv', origin=\ 'https://github.com/jeffheaton/jheaton-ds2/raw/main/'\ 'kdd-with-columns.csv',archive_format=None) except: print('Error downloading') raise print(path) # Original file: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html df = pd.read_csv(path) print("Read {} rows.".format(len(df))) # df = df.sample(frac=0.1, replace=False) # Uncomment this line to # sample only 10% of the dataset df.dropna(inplace=True,axis=1) # For now, just drop NA's (rows with missing values) # display 5 rows pd.set_option('display.max_columns', 5) pd.set_option('display.max_rows', 5) df ###Output Downloading data from https://github.com/jeffheaton/jheaton-ds2/raw/main/kdd-with-columns.csv 68132864/68132668 [==============================] - 1s 0us/step 68141056/68132668 [==============================] - 1s 0us/step /root/.keras/datasets/kdd-with-columns.csv Read 494021 rows. ###Markdown Analyzing a DatasetBefore we preprocess the KDD99 dataset, let's look at the individual columns and distributions. 
You can use the following script to give a high-level overview of how a dataset appears. ###Code import pandas as pd import os import numpy as np from sklearn import metrics from scipy.stats import zscore def expand_categories(values): result = [] s = values.value_counts() t = float(len(values)) for v in s.index: result.append("{}:{}%".format(v,round(100*(s[v]/t),2))) return "[{}]".format(",".join(result)) def analyze(df): print() cols = df.columns.values total = float(len(df)) print("{} rows".format(int(total))) for col in cols: uniques = df[col].unique() unique_count = len(uniques) if unique_count>100: print("** {}:{} ({}%)".format(col,unique_count,\ int(((unique_count)/total)*100))) else: print("** {}:{}".format(col,expand_categories(df[col]))) ###Output _____no_output_____ ###Markdown The analysis looks at how many unique values are present. For example, duration, a numeric value, has 2495 unique values, which amounts to 0% of the total row count. A text/categorical value such as protocol_type only has a few unique values, and the program shows the percentages of each. Columns with many unique values do not have their item counts shown to save display space. 
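As a quick, self-contained sketch of what the `expand_categories` helper produces, here it is applied to a small made-up protocol column (toy data, not the KDD99 set):

```python
import pandas as pd

# Same helper as above: list each unique value with its percentage
# of the column, most frequent first.
def expand_categories(values):
    result = []
    s = values.value_counts()
    t = float(len(values))
    for v in s.index:
        result.append("{}:{}%".format(v, round(100 * (s[v] / t), 2)))
    return "[{}]".format(",".join(result))

# A made-up column with distinct counts so the ordering is unambiguous
protocol = pd.Series(["tcp", "tcp", "tcp", "udp", "udp", "icmp"])
print(expand_categories(protocol))
# -> [tcp:50.0%,udp:33.33%,icmp:16.67%]
```

This is the same `[value:percent%]` format that appears for categorical columns such as protocol_type in the output below.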
###Code # Analyze KDD-99 analyze(df) ###Output 494021 rows ** duration:2495 (0%) ** protocol_type:[icmp:57.41%,tcp:38.47%,udp:4.12%] ** service:[ecr_i:56.96%,private:22.45%,http:13.01%,smtp:1.97%,other:1.46%,domain_u:1.19%,ftp_data:0.96%,eco_i:0.33%,ftp:0.16%,finger:0.14%,urp_i:0.11%,telnet:0.1%,ntp_u:0.08%,auth:0.07%,pop_3:0.04%,time:0.03%,csnet_ns:0.03%,remote_job:0.02%,gopher:0.02%,imap4:0.02%,discard:0.02%,domain:0.02%,iso_tsap:0.02%,systat:0.02%,shell:0.02%,echo:0.02%,rje:0.02%,whois:0.02%,sql_net:0.02%,printer:0.02%,nntp:0.02%,courier:0.02%,sunrpc:0.02%,netbios_ssn:0.02%,mtp:0.02%,vmnet:0.02%,uucp_path:0.02%,uucp:0.02%,klogin:0.02%,bgp:0.02%,ssh:0.02%,supdup:0.02%,nnsp:0.02%,login:0.02%,hostnames:0.02%,efs:0.02%,daytime:0.02%,link:0.02%,netbios_ns:0.02%,pop_2:0.02%,ldap:0.02%,netbios_dgm:0.02%,exec:0.02%,http_443:0.02%,kshell:0.02%,name:0.02%,ctf:0.02%,netstat:0.02%,Z39_50:0.02%,IRC:0.01%,urh_i:0.0%,X11:0.0%,tim_i:0.0%,pm_dump:0.0%,tftp_u:0.0%,red_i:0.0%] ** flag:[SF:76.6%,S0:17.61%,REJ:5.44%,RSTR:0.18%,RSTO:0.12%,SH:0.02%,S1:0.01%,S2:0.0%,RSTOS0:0.0%,S3:0.0%,OTH:0.0%] ** src_bytes:3300 (0%) ** dst_bytes:10725 (2%) ** land:[0:100.0%,1:0.0%] ** wrong_fragment:[0:99.75%,3:0.2%,1:0.05%] ** urgent:[0:100.0%,1:0.0%,2:0.0%,3:0.0%] ** hot:[0:99.35%,2:0.44%,28:0.06%,1:0.05%,4:0.02%,6:0.02%,5:0.01%,3:0.01%,14:0.01%,30:0.01%,22:0.01%,19:0.0%,24:0.0%,18:0.0%,20:0.0%,7:0.0%,17:0.0%,12:0.0%,16:0.0%,10:0.0%,15:0.0%,9:0.0%] ** num_failed_logins:[0:99.99%,1:0.01%,2:0.0%,5:0.0%,4:0.0%,3:0.0%] ** logged_in:[0:85.18%,1:14.82%] ** num_compromised:[0:99.55%,1:0.44%,2:0.0%,4:0.0%,3:0.0%,6:0.0%,5:0.0%,7:0.0%,767:0.0%,12:0.0%,9:0.0%,884:0.0%,13:0.0%,38:0.0%,18:0.0%,11:0.0%,275:0.0%,281:0.0%,16:0.0%,238:0.0%,21:0.0%,22:0.0%,102:0.0%] ** root_shell:[0:99.99%,1:0.01%] ** su_attempted:[0:100.0%,1:0.0%,2:0.0%] ** num_root:[0:99.88%,1:0.05%,9:0.03%,6:0.03%,2:0.0%,5:0.0%,4:0.0%,3:0.0%,7:0.0%,993:0.0%,54:0.0%,306:0.0%,14:0.0%,39:0.0%,278:0.0%,268:0.0%,12:0.0%,857:0.0%,16:0.0%,119:0.0%] ** 
num_file_creations:[0:99.95%,1:0.04%,2:0.01%,4:0.0%,16:0.0%,5:0.0%,22:0.0%,25:0.0%,12:0.0%,8:0.0%,7:0.0%,21:0.0%,14:0.0%,10:0.0%,28:0.0%,9:0.0%,15:0.0%,20:0.0%] ** num_shells:[0:99.99%,1:0.01%,2:0.0%] ** num_access_files:[0:99.91%,1:0.09%,2:0.01%,3:0.0%,4:0.0%,6:0.0%,8:0.0%] ** num_outbound_cmds:[0:100.0%] ** is_host_login:[0:100.0%] ** is_guest_login:[0:99.86%,1:0.14%] ** count:490 (0%) ** srv_count:470 (0%) ** serror_rate:[0.0:81.94%,1.0:17.52%,0.99:0.06%,0.08:0.03%,0.05:0.03%,0.06:0.03%,0.07:0.03%,0.14:0.02%,0.04:0.02%,0.01:0.02%,0.09:0.02%,0.1:0.02%,0.03:0.02%,0.11:0.02%,0.13:0.02%,0.5:0.02%,0.12:0.02%,0.2:0.01%,0.25:0.01%,0.02:0.01%,0.17:0.01%,0.33:0.01%,0.15:0.01%,0.22:0.01%,0.18:0.01%,0.23:0.01%,0.16:0.01%,0.21:0.01%,0.19:0.0%,0.27:0.0%,0.98:0.0%,0.29:0.0%,0.44:0.0%,0.24:0.0%,0.97:0.0%,0.31:0.0%,0.96:0.0%,0.26:0.0%,0.79:0.0%,0.28:0.0%,0.36:0.0%,0.94:0.0%,0.95:0.0%,0.65:0.0%,0.67:0.0%,0.85:0.0%,0.64:0.0%,0.62:0.0%,0.6:0.0%,0.53:0.0%,0.81:0.0%,0.88:0.0%,0.93:0.0%,0.57:0.0%,0.55:0.0%,0.58:0.0%,0.56:0.0%,0.52:0.0%,0.51:0.0%,0.66:0.0%,0.75:0.0%,0.3:0.0%,0.83:0.0%,0.71:0.0%,0.78:0.0%,0.63:0.0%,0.68:0.0%,0.86:0.0%,0.54:0.0%,0.61:0.0%,0.74:0.0%,0.72:0.0%,0.69:0.0%,0.59:0.0%,0.38:0.0%,0.84:0.0%,0.35:0.0%,0.76:0.0%,0.42:0.0%,0.82:0.0%,0.77:0.0%,0.32:0.0%,0.7:0.0%,0.4:0.0%,0.73:0.0%,0.91:0.0%,0.92:0.0%,0.87:0.0%,0.8:0.0%,0.9:0.0%,0.34:0.0%,0.89:0.0%] ** srv_serror_rate:[0.0:82.12%,1.0:17.62%,0.03:0.03%,0.04:0.02%,0.05:0.02%,0.06:0.02%,0.02:0.02%,0.5:0.02%,0.08:0.01%,0.07:0.01%,0.25:0.01%,0.33:0.01%,0.17:0.01%,0.09:0.01%,0.1:0.01%,0.2:0.01%,0.12:0.01%,0.11:0.01%,0.14:0.01%,0.01:0.0%,0.67:0.0%,0.18:0.0%,0.92:0.0%,0.95:0.0%,0.94:0.0%,0.88:0.0%,0.19:0.0%,0.58:0.0%,0.75:0.0%,0.83:0.0%,0.76:0.0%,0.15:0.0%,0.91:0.0%,0.4:0.0%,0.85:0.0%,0.27:0.0%,0.22:0.0%,0.93:0.0%,0.16:0.0%,0.38:0.0%,0.36:0.0%,0.35:0.0%,0.45:0.0%,0.21:0.0%,0.44:0.0%,0.23:0.0%,0.51:0.0%,0.86:0.0%,0.9:0.0%,0.8:0.0%,0.37:0.0%] ** 
rerror_rate:[0.0:94.12%,1.0:5.46%,0.86:0.02%,0.87:0.02%,0.92:0.02%,0.25:0.02%,0.9:0.02%,0.95:0.02%,0.5:0.02%,0.91:0.02%,0.88:0.01%,0.96:0.01%,0.33:0.01%,0.2:0.01%,0.93:0.01%,0.94:0.01%,0.01:0.01%,0.89:0.01%,0.85:0.01%,0.99:0.01%,0.82:0.01%,0.77:0.01%,0.17:0.01%,0.97:0.01%,0.02:0.01%,0.98:0.01%,0.03:0.01%,0.78:0.01%,0.8:0.01%,0.76:0.01%,0.79:0.0%,0.84:0.0%,0.75:0.0%,0.14:0.0%,0.05:0.0%,0.73:0.0%,0.81:0.0%,0.83:0.0%,0.71:0.0%,0.06:0.0%,0.67:0.0%,0.56:0.0%,0.08:0.0%,0.04:0.0%,0.1:0.0%,0.12:0.0%,0.09:0.0%,0.07:0.0%,0.11:0.0%,0.69:0.0%,0.4:0.0%,0.64:0.0%,0.7:0.0%,0.72:0.0%,0.74:0.0%,0.6:0.0%,0.29:0.0%,0.62:0.0%,0.65:0.0%,0.21:0.0%,0.22:0.0%,0.37:0.0%,0.58:0.0%,0.68:0.0%,0.19:0.0%,0.43:0.0%,0.35:0.0%,0.36:0.0%,0.23:0.0%,0.26:0.0%,0.27:0.0%,0.28:0.0%,0.66:0.0%,0.31:0.0%,0.32:0.0%,0.34:0.0%,0.24:0.0%] ** srv_rerror_rate:[0.0:93.99%,1.0:5.69%,0.33:0.05%,0.5:0.04%,0.25:0.04%,0.2:0.03%,0.17:0.03%,0.14:0.01%,0.04:0.01%,0.03:0.01%,0.12:0.01%,0.06:0.01%,0.02:0.01%,0.05:0.01%,0.07:0.01%,0.4:0.01%,0.67:0.01%,0.08:0.01%,0.11:0.01%,0.29:0.01%,0.09:0.0%,0.1:0.0%,0.75:0.0%,0.6:0.0%,0.01:0.0%,0.71:0.0%,0.22:0.0%,0.83:0.0%,0.86:0.0%,0.18:0.0%,0.96:0.0%,0.79:0.0%,0.43:0.0%,0.92:0.0%,0.81:0.0%,0.88:0.0%,0.73:0.0%,0.69:0.0%,0.94:0.0%,0.62:0.0%,0.8:0.0%,0.85:0.0%,0.93:0.0%,0.82:0.0%,0.27:0.0%,0.37:0.0%,0.21:0.0%,0.38:0.0%,0.87:0.0%,0.95:0.0%,0.13:0.0%] ** 
same_srv_rate:[1.0:77.34%,0.06:2.27%,0.05:2.14%,0.04:2.06%,0.07:2.03%,0.03:1.93%,0.02:1.9%,0.01:1.77%,0.08:1.48%,0.09:1.01%,0.1:0.8%,0.0:0.73%,0.12:0.73%,0.11:0.67%,0.13:0.66%,0.14:0.51%,0.15:0.35%,0.5:0.29%,0.16:0.25%,0.17:0.17%,0.33:0.12%,0.18:0.1%,0.2:0.08%,0.19:0.07%,0.67:0.05%,0.25:0.04%,0.21:0.04%,0.99:0.03%,0.22:0.03%,0.24:0.02%,0.23:0.02%,0.4:0.02%,0.98:0.02%,0.75:0.02%,0.27:0.02%,0.26:0.01%,0.8:0.01%,0.29:0.01%,0.38:0.01%,0.86:0.01%,0.3:0.01%,0.31:0.01%,0.44:0.01%,0.36:0.01%,0.83:0.01%,0.28:0.01%,0.43:0.01%,0.42:0.01%,0.6:0.01%,0.97:0.01%,0.32:0.01%,0.35:0.01%,0.45:0.01%,0.47:0.01%,0.88:0.0%,0.48:0.0%,0.39:0.0%,0.46:0.0%,0.52:0.0%,0.37:0.0%,0.41:0.0%,0.89:0.0%,0.34:0.0%,0.92:0.0%,0.54:0.0%,0.53:0.0%,0.95:0.0%,0.94:0.0%,0.57:0.0%,0.56:0.0%,0.96:0.0%,0.64:0.0%,0.71:0.0%,0.62:0.0%,0.78:0.0%,0.9:0.0%,0.49:0.0%,0.55:0.0%,0.91:0.0%,0.65:0.0%,0.73:0.0%,0.58:0.0%,0.93:0.0%,0.59:0.0%,0.82:0.0%,0.51:0.0%,0.81:0.0%,0.76:0.0%,0.77:0.0%,0.79:0.0%,0.74:0.0%,0.85:0.0%,0.72:0.0%,0.7:0.0%,0.68:0.0%,0.69:0.0%,0.87:0.0%,0.63:0.0%,0.61:0.0%] ** diff_srv_rate:[0.0:77.33%,0.06:10.69%,0.07:5.83%,0.05:3.89%,0.08:0.66%,1.0:0.48%,0.04:0.19%,0.67:0.13%,0.5:0.12%,0.09:0.08%,0.6:0.06%,0.12:0.05%,0.1:0.04%,0.11:0.04%,0.14:0.03%,0.4:0.02%,0.13:0.02%,0.29:0.02%,0.01:0.02%,0.15:0.02%,0.03:0.02%,0.33:0.02%,0.25:0.02%,0.17:0.02%,0.75:0.01%,0.2:0.01%,0.18:0.01%,0.16:0.01%,0.19:0.01%,0.02:0.01%,0.22:0.01%,0.21:0.01%,0.27:0.01%,0.96:0.01%,0.31:0.01%,0.38:0.01%,0.24:0.01%,0.23:0.01%,0.43:0.0%,0.52:0.0%,0.44:0.0%,0.95:0.0%,0.36:0.0%,0.8:0.0%,0.53:0.0%,0.57:0.0%,0.42:0.0%,0.3:0.0%,0.26:0.0%,0.28:0.0%,0.56:0.0%,0.99:0.0%,0.54:0.0%,0.62:0.0%,0.37:0.0%,0.41:0.0%,0.35:0.0%,0.55:0.0%,0.47:0.0%,0.32:0.0%,0.46:0.0%,0.39:0.0%,0.58:0.0%,0.71:0.0%,0.89:0.0%,0.51:0.0%,0.45:0.0%,0.97:0.0%,0.73:0.0%,0.69:0.0%,0.78:0.0%,0.7:0.0%,0.74:0.0%,0.82:0.0%,0.86:0.0%,0.64:0.0%,0.83:0.0%,0.88:0.0%] ** 
srv_diff_host_rate:[0.0:92.99%,1.0:1.64%,0.12:0.31%,0.5:0.29%,0.67:0.29%,0.33:0.25%,0.11:0.24%,0.25:0.23%,0.1:0.22%,0.14:0.21%,0.17:0.21%,0.08:0.2%,0.15:0.2%,0.18:0.19%,0.2:0.19%,0.09:0.19%,0.4:0.19%,0.07:0.17%,0.29:0.17%,0.13:0.16%,0.22:0.16%,0.06:0.14%,0.02:0.1%,0.05:0.1%,0.01:0.08%,0.21:0.08%,0.19:0.08%,0.16:0.07%,0.75:0.07%,0.27:0.06%,0.04:0.06%,0.6:0.06%,0.3:0.06%,0.38:0.05%,0.43:0.05%,0.23:0.05%,0.03:0.03%,0.24:0.02%,0.36:0.02%,0.31:0.02%,0.8:0.02%,0.57:0.01%,0.44:0.01%,0.28:0.01%,0.26:0.01%,0.42:0.0%,0.45:0.0%,0.62:0.0%,0.83:0.0%,0.71:0.0%,0.56:0.0%,0.35:0.0%,0.32:0.0%,0.37:0.0%,0.47:0.0%,0.41:0.0%,0.86:0.0%,0.55:0.0%,0.64:0.0%,0.54:0.0%,0.46:0.0%,0.88:0.0%,0.7:0.0%,0.77:0.0%] ** dst_host_count:256 (0%) ** dst_host_srv_count:256 (0%) ** dst_host_same_srv_rate:101 (0%) ** dst_host_diff_srv_rate:101 (0%) ** dst_host_same_src_port_rate:101 (0%) ** dst_host_srv_diff_host_rate:[0.0:89.45%,0.02:2.38%,0.01:2.13%,0.04:1.35%,0.03:1.34%,0.05:0.94%,0.06:0.39%,0.07:0.31%,0.5:0.15%,0.08:0.14%,0.09:0.13%,0.15:0.09%,0.11:0.09%,0.16:0.08%,0.13:0.08%,0.1:0.08%,0.14:0.07%,1.0:0.07%,0.17:0.07%,0.2:0.07%,0.12:0.07%,0.18:0.07%,0.25:0.05%,0.22:0.05%,0.19:0.05%,0.21:0.05%,0.24:0.03%,0.23:0.02%,0.26:0.02%,0.27:0.02%,0.33:0.02%,0.29:0.02%,0.51:0.02%,0.4:0.01%,0.28:0.01%,0.3:0.01%,0.67:0.01%,0.52:0.01%,0.31:0.01%,0.32:0.01%,0.38:0.01%,0.53:0.0%,0.43:0.0%,0.44:0.0%,0.34:0.0%,0.6:0.0%,0.36:0.0%,0.57:0.0%,0.35:0.0%,0.54:0.0%,0.37:0.0%,0.56:0.0%,0.55:0.0%,0.42:0.0%,0.46:0.0%,0.39:0.0%,0.45:0.0%,0.41:0.0%,0.48:0.0%,0.62:0.0%,0.8:0.0%,0.58:0.0%,0.75:0.0%,0.7:0.0%,0.47:0.0%] ** 
dst_host_serror_rate:[0.0:80.93%,1.0:17.56%,0.01:0.74%,0.02:0.2%,0.03:0.09%,0.09:0.05%,0.04:0.04%,0.05:0.04%,0.07:0.03%,0.08:0.03%,0.06:0.02%,0.14:0.02%,0.15:0.02%,0.11:0.02%,0.13:0.02%,0.16:0.02%,0.1:0.02%,0.12:0.01%,0.18:0.01%,0.25:0.01%,0.2:0.01%,0.17:0.01%,0.33:0.01%,0.99:0.01%,0.19:0.01%,0.31:0.01%,0.27:0.01%,0.5:0.0%,0.22:0.0%,0.98:0.0%,0.35:0.0%,0.28:0.0%,0.24:0.0%,0.53:0.0%,0.96:0.0%,0.3:0.0%,0.94:0.0%,0.29:0.0%,0.26:0.0%,0.97:0.0%,0.42:0.0%,0.32:0.0%,0.6:0.0%,0.95:0.0%,0.56:0.0%,0.55:0.0%,0.23:0.0%,0.85:0.0%,0.93:0.0%,0.34:0.0%,0.89:0.0%,0.58:0.0%,0.21:0.0%,0.92:0.0%,0.57:0.0%,0.91:0.0%,0.9:0.0%,0.43:0.0%,0.82:0.0%,0.49:0.0%,0.36:0.0%,0.76:0.0%,0.47:0.0%,0.46:0.0%,0.62:0.0%,0.38:0.0%,0.45:0.0%,0.87:0.0%,0.61:0.0%,0.65:0.0%,0.41:0.0%,0.39:0.0%,0.44:0.0%,0.48:0.0%,0.52:0.0%,0.81:0.0%,0.77:0.0%,0.79:0.0%,0.73:0.0%,0.88:0.0%,0.69:0.0%,0.67:0.0%,0.54:0.0%,0.72:0.0%,0.68:0.0%,0.4:0.0%,0.64:0.0%,0.51:0.0%,0.84:0.0%,0.59:0.0%,0.7:0.0%,0.75:0.0%,0.8:0.0%,0.71:0.0%,0.83:0.0%,0.66:0.0%,0.74:0.0%,0.78:0.0%,0.86:0.0%,0.37:0.0%] ** dst_host_srv_serror_rate:[0.0:81.16%,1.0:17.61%,0.01:0.99%,0.02:0.14%,0.03:0.03%,0.04:0.02%,0.05:0.01%,0.06:0.01%,0.08:0.0%,0.5:0.0%,0.07:0.0%,0.1:0.0%,0.09:0.0%,0.11:0.0%,0.17:0.0%,0.96:0.0%,0.33:0.0%,0.14:0.0%,0.12:0.0%,0.67:0.0%,0.97:0.0%,0.25:0.0%,0.98:0.0%,0.48:0.0%,0.75:0.0%,0.83:0.0%,0.4:0.0%,0.69:0.0%,0.8:0.0%,0.2:0.0%,0.91:0.0%,0.93:0.0%,0.78:0.0%,0.95:0.0%,0.16:0.0%,0.57:0.0%,0.94:0.0%,0.31:0.0%,0.92:0.0%,0.62:0.0%,0.88:0.0%,0.63:0.0%,0.29:0.0%,0.56:0.0%,0.3:0.0%,0.38:0.0%,0.32:0.0%,0.85:0.0%,0.68:0.0%,0.23:0.0%,0.15:0.0%,0.47:0.0%,0.52:0.0%,0.6:0.0%,0.24:0.0%,0.79:0.0%,0.74:0.0%,0.82:0.0%,0.64:0.0%,0.18:0.0%,0.13:0.0%,0.45:0.0%,0.66:0.0%,0.9:0.0%,0.42:0.0%,0.46:0.0%,0.86:0.0%,0.87:0.0%,0.84:0.0%,0.55:0.0%,0.81:0.0%,0.53:0.0%] ** dst_host_rerror_rate:101 (0%) ** dst_host_srv_rerror_rate:101 (0%) ** 
outcome:[smurf.:56.84%,neptune.:21.7%,normal.:19.69%,back.:0.45%,satan.:0.32%,ipsweep.:0.25%,portsweep.:0.21%,warezclient.:0.21%,teardrop.:0.2%,pod.:0.05%,nmap.:0.05%,guess_passwd.:0.01%,buffer_overflow.:0.01%,land.:0.0%,warezmaster.:0.0%,imap.:0.0%,rootkit.:0.0%,loadmodule.:0.0%,ftp_write.:0.0%,multihop.:0.0%,phf.:0.0%,perl.:0.0%,spy.:0.0%] ###Markdown Encode the feature vectorWe use the same two functions provided earlier to preprocess the data. The first encodes Z-Scores, and the second creates dummy variables from categorical columns. ###Code # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Encode text values to dummy variables(i.e. [1,0,0], # [0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = f"{name}-{x}" df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Again, just as we did for anomaly detection, we preprocess the data set. We convert all numeric values to Z-Score and translate all categorical to dummy variables. ###Code # Now encode the feature vector pd.set_option('display.max_columns', 6) pd.set_option('display.max_rows', 5) for name in df.columns: if name == 'outcome': pass elif name in ['protocol_type','service','flag','land','logged_in', 'is_host_login','is_guest_login']: encode_text_dummy(df,name) else: encode_numeric_zscore(df,name) # display 5 rows df.dropna(inplace=True,axis=1) df[0:5] # Convert to numpy - Classification x_columns = df.columns.drop('outcome') x = df[x_columns].values dummies = pd.get_dummies(df['outcome']) # Classification outcomes = dummies.columns num_classes = len(outcomes) y = dummies.values ###Output _____no_output_____ ###Markdown We will attempt to predict what type of attack is underway. 
The outcome column specifies the attack type. A value of normal indicates that there is no attack underway. We display the outcomes; some attack types are much rarer than others. ###Code df.groupby('outcome')['outcome'].count() ###Output _____no_output_____ ###Markdown Train the Neural NetworkWe now train the neural network to classify the different KDD99 outcomes. The code provided here implements a relatively simple neural network with several hidden layers. We train it with the provided KDD99 data. ###Code import pandas as pd import io import requests import numpy as np import os from sklearn.model_selection import train_test_split from sklearn import metrics from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping # Create a test/train split. 25% test # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) # Create neural net model = Sequential() model.add(Dense(10, input_dim=x.shape[1], activation='relu')) model.add(Dense(50, input_dim=x.shape[1], activation='relu')) model.add(Dense(10, input_dim=x.shape[1], activation='relu')) model.add(Dense(1, kernel_initializer='normal')) model.add(Dense(y.shape[1],activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto', restore_best_weights=True) model.fit(x_train,y_train,validation_data=(x_test,y_test), callbacks=[monitor],verbose=2,epochs=1000) ###Output Epoch 1/1000 11579/11579 - 25s - loss: 0.1851 - val_loss: 0.0362 - 25s/epoch - 2ms/step Epoch 2/1000 11579/11579 - 23s - loss: 0.0360 - val_loss: 0.0275 - 23s/epoch - 2ms/step Epoch 3/1000 11579/11579 - 22s - loss: 0.0271 - val_loss: 0.0270 - 22s/epoch - 2ms/step Epoch 4/1000 11579/11579 - 22s - loss: 0.0244 - val_loss: 0.0221 - 22s/epoch - 2ms/step Epoch 5/1000 11579/11579 - 23s - loss: 0.0235 - 
val_loss: 0.0228 - 23s/epoch - 2ms/step Epoch 6/1000 11579/11579 - 22s - loss: 0.1370 - val_loss: 0.0210 - 22s/epoch - 2ms/step Epoch 7/1000 11579/11579 - 22s - loss: 0.0859 - val_loss: 0.0218 - 22s/epoch - 2ms/step Epoch 8/1000 11579/11579 - 22s - loss: 0.0507 - val_loss: 0.0192 - 22s/epoch - 2ms/step Epoch 9/1000 11579/11579 - 33s - loss: 0.0203 - val_loss: 0.0185 - 33s/epoch - 3ms/step Epoch 10/1000 11579/11579 - 21s - loss: 0.0180 - val_loss: 0.0185 - 21s/epoch - 2ms/step Epoch 11/1000 11579/11579 - 22s - loss: 0.0180 - val_loss: 0.0208 - 22s/epoch - 2ms/step Epoch 12/1000 11579/11579 - 22s - loss: 0.0166 - val_loss: 0.0165 - 22s/epoch - 2ms/step Epoch 13/1000 11579/11579 - 23s - loss: 0.0165 - val_loss: 0.0166 - 23s/epoch - 2ms/step Epoch 14/1000 11579/11579 - 24s - loss: 0.0158 - val_loss: 0.0152 - 24s/epoch - 2ms/step Epoch 15/1000 11579/11579 - 21s - loss: 0.0152 - val_loss: 0.0182 - 21s/epoch - 2ms/step Epoch 16/1000 11579/11579 - 21s - loss: 0.0160 - val_loss: 0.0149 - 21s/epoch - 2ms/step Epoch 17/1000 11579/11579 - 22s - loss: 0.0142 - val_loss: 0.0143 - 22s/epoch - 2ms/step Epoch 18/1000 11579/11579 - 22s - loss: 0.0139 - val_loss: 0.0153 - 22s/epoch - 2ms/step Epoch 19/1000 Restoring model weights from the end of the best epoch: 14. 11579/11579 - 23s - loss: 0.0141 - val_loss: 0.0152 - 23s/epoch - 2ms/step Epoch 19: early stopping ###Markdown We can now evaluate the neural network. As you can see, the neural network achieves a 99% accuracy rate. ###Code # Measure accuracy pred = model.predict(x_test) pred = np.argmax(pred,axis=1) y_eval = np.argmax(y_test,axis=1) score = metrics.accuracy_score(y_eval, pred) print("Validation score: {}".format(score)) ###Output Validation score: 0.9977005165740935 ###Markdown T81-558: Applications of Deep Neural Networks**Module 14: Other Neural Network Techniques*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. 
Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 14 Video Material* Part 14.1: What is AutoML [[Video]](https://www.youtube.com/watch?v=1mB_5iurqzw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_01_automl.ipynb)* Part 14.2: Using Denoising AutoEncoders in Keras [[Video]](https://www.youtube.com/watch?v=4bTSu6_fucc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_02_auto_encode.ipynb)* Part 14.3: Anomaly Detection in Keras [[Video]](https://www.youtube.com/watch?v=1ySn6h2A68I&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_03_anomaly.ipynb)* **Part 14.4: Training an Intrusion Detection System with KDD99** [[Video]](https://www.youtube.com/watch?v=VgyKQ5MTDFc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_04_ids_kdd99.ipynb)* Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_05_new_tech.ipynb) Part 14.4: Training an Intrusion Detection System with KDD99The [KDD-99 dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html) is very famous in the security field and almost a "hello world" of Intrusion Detection Systems (IDS) in machine learning. An intrusion detection system (IDS) is a program that monitors computers and network systems for malicious activity or policy violations. Any intrusion activity or violation is typically reported to an administrator or collected centrally. IDS types range in scope from single computers to large networks. 
Although the KDD99 dataset is over 20 years old, it is still widely used to demonstrate Intrusion Detection Systems (IDS). KDD99 is the data set used for The Third International Knowledge Discovery and Data Mining Tools Competition, which was held in conjunction with KDD-99, The Fifth International Conference on Knowledge Discovery and Data Mining. The competition task was to build a network intrusion detector, a predictive model capable of distinguishing between "bad" connections, called intrusions or attacks, and "good" normal connections. This database contains a standard set of data to be audited, including a wide variety of intrusions simulated in a military network environment. Read in Raw KDD-99 DatasetThe following code reads the KDD99 CSV dataset into a Pandas data frame. The standard format of KDD99 does not include column names. Because of that, the program adds them. ###Code import pandas as pd from tensorflow.keras.utils import get_file try: path = get_file('kddcup.data_10_percent.gz', origin= 'http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz') except: print('Error downloading') raise print(path) # This file is a CSV, just no CSV extension or headers # Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html df = pd.read_csv(path, header=None) print("Read {} rows.".format(len(df))) # df = df.sample(frac=0.1, replace=False) # Uncomment this line to # sample only 10% of the dataset df.dropna(inplace=True,axis=1) # For now, just drop NA's # (rows with missing values) # The CSV file has no column heads, so add them df.columns = [ 'duration', 'protocol_type', 'service', 'flag', 'src_bytes', 'dst_bytes', 'land', 'wrong_fragment', 'urgent', 'hot', 'num_failed_logins', 'logged_in', 'num_compromised', 'root_shell', 'su_attempted', 'num_root', 'num_file_creations', 'num_shells', 'num_access_files', 'num_outbound_cmds', 'is_host_login', 'is_guest_login', 'count', 'srv_count', 'serror_rate', 'srv_serror_rate', 'rerror_rate', 
'srv_rerror_rate', 'same_srv_rate', 'diff_srv_rate', 'srv_diff_host_rate', 'dst_host_count', 'dst_host_srv_count', 'dst_host_same_srv_rate', 'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate', 'dst_host_srv_diff_host_rate', 'dst_host_serror_rate', 'dst_host_srv_serror_rate', 'dst_host_rerror_rate', 'dst_host_srv_rerror_rate', 'outcome' ] pd.set_option('display.max_columns', 5) pd.set_option('display.max_rows', 5) # display 5 rows display(df[0:5]) ###Output C:\Users\jeffh\.keras\datasets\kddcup.data_10_percent.gz Read 494021 rows. ###Markdown Analyzing a DatasetBefore we preprocess the KDD99 dataset, let's have a look at the individual columns and distributions. You can use the following script to give a high-level overview of how a dataset appears. ###Code import pandas as pd import os import numpy as np from sklearn import metrics from scipy.stats import zscore def expand_categories(values): result = [] s = values.value_counts() t = float(len(values)) for v in s.index: result.append("{}:{}%".format(v,round(100*(s[v]/t),2))) return "[{}]".format(",".join(result)) def analyze(df): print() cols = df.columns.values total = float(len(df)) print("{} rows".format(int(total))) for col in cols: uniques = df[col].unique() unique_count = len(uniques) if unique_count>100: print("** {}:{} ({}%)".format(col,unique_count,int(((unique_count)/total)*100))) else: print("** {}:{}".format(col,expand_categories(df[col]))) ###Output _____no_output_____ ###Markdown The analysis looks at how many unique values are present. For example, duration, which is a numeric value, has 2495 unique values, amounting to 0% of the total row count. A text/categorical value such as protocol_type only has a few unique values, and the program shows the percentages of each. Columns with a large number of unique values do not have their item counts shown to save display space. 
###Code # Analyze KDD-99 analyze(df) ###Output 494021 rows ** duration:2495 (0%) ** protocol_type:[icmp:57.41%,tcp:38.47%,udp:4.12%] ** service:[ecr_i:56.96%,private:22.45%,http:13.01%,smtp:1.97%,other:1.46%,domain_u:1.19%,ftp_data:0.96%,eco_i:0.33%,ftp:0.16%,finger:0.14%,urp_i:0.11%,telnet:0.1%,ntp_u:0.08%,auth:0.07%,pop_3:0.04%,time:0.03%,csnet_ns:0.03%,remote_job:0.02%,gopher:0.02%,imap4:0.02%,domain:0.02%,discard:0.02%,iso_tsap:0.02%,systat:0.02%,shell:0.02%,echo:0.02%,rje:0.02%,whois:0.02%,sql_net:0.02%,printer:0.02%,courier:0.02%,nntp:0.02%,netbios_ssn:0.02%,mtp:0.02%,sunrpc:0.02%,klogin:0.02%,vmnet:0.02%,bgp:0.02%,uucp:0.02%,uucp_path:0.02%,ssh:0.02%,nnsp:0.02%,supdup:0.02%,hostnames:0.02%,login:0.02%,efs:0.02%,daytime:0.02%,link:0.02%,netbios_ns:0.02%,ldap:0.02%,pop_2:0.02%,netbios_dgm:0.02%,http_443:0.02%,exec:0.02%,name:0.02%,kshell:0.02%,ctf:0.02%,netstat:0.02%,Z39_50:0.02%,IRC:0.01%,urh_i:0.0%,X11:0.0%,tim_i:0.0%,tftp_u:0.0%,red_i:0.0%,pm_dump:0.0%] ** flag:[SF:76.6%,S0:17.61%,REJ:5.44%,RSTR:0.18%,RSTO:0.12%,SH:0.02%,S1:0.01%,S2:0.0%,RSTOS0:0.0%,S3:0.0%,OTH:0.0%] ** src_bytes:3300 (0%) ** dst_bytes:10725 (2%) ** land:[0:100.0%,1:0.0%] ** wrong_fragment:[0:99.75%,3:0.2%,1:0.05%] ** urgent:[0:100.0%,1:0.0%,3:0.0%,2:0.0%] ** hot:[0:99.35%,2:0.44%,28:0.06%,1:0.05%,4:0.02%,6:0.02%,5:0.01%,3:0.01%,14:0.01%,30:0.01%,22:0.01%,19:0.0%,18:0.0%,24:0.0%,20:0.0%,7:0.0%,17:0.0%,12:0.0%,15:0.0%,16:0.0%,10:0.0%,9:0.0%] ** num_failed_logins:[0:99.99%,1:0.01%,2:0.0%,5:0.0%,4:0.0%,3:0.0%] ** logged_in:[0:85.18%,1:14.82%] ** num_compromised:[0:99.55%,1:0.44%,2:0.0%,4:0.0%,3:0.0%,6:0.0%,5:0.0%,7:0.0%,12:0.0%,9:0.0%,11:0.0%,767:0.0%,238:0.0%,16:0.0%,18:0.0%,275:0.0%,21:0.0%,22:0.0%,281:0.0%,38:0.0%,102:0.0%,884:0.0%,13:0.0%] ** root_shell:[0:99.99%,1:0.01%] ** su_attempted:[0:100.0%,2:0.0%,1:0.0%] ** num_root:[0:99.88%,1:0.05%,9:0.03%,6:0.03%,2:0.0%,5:0.0%,4:0.0%,3:0.0%,119:0.0%,7:0.0%,993:0.0%,268:0.0%,14:0.0%,16:0.0%,278:0.0%,39:0.0%,306:0.0%,54:0.0%,857:0.0%,12:0.0%] ** 
num_file_creations:[0:99.95%,1:0.04%,2:0.01%,4:0.0%,16:0.0%,9:0.0%,5:0.0%,7:0.0%,8:0.0%,28:0.0%,25:0.0%,12:0.0%,14:0.0%,15:0.0%,20:0.0%,21:0.0%,22:0.0%,10:0.0%] ** num_shells:[0:99.99%,1:0.01%,2:0.0%] ** num_access_files:[0:99.91%,1:0.09%,2:0.01%,3:0.0%,8:0.0%,6:0.0%,4:0.0%] ** num_outbound_cmds:[0:100.0%] ** is_host_login:[0:100.0%] ** is_guest_login:[0:99.86%,1:0.14%] ** count:490 (0%) ** srv_count:470 (0%) ** serror_rate:[0.0:81.94%,1.0:17.52%,0.99:0.06%,0.08:0.03%,0.05:0.03%,0.07:0.03%,0.06:0.03%,0.14:0.02%,0.04:0.02%,0.01:0.02%,0.09:0.02%,0.1:0.02%,0.03:0.02%,0.11:0.02%,0.13:0.02%,0.5:0.02%,0.12:0.02%,0.2:0.01%,0.25:0.01%,0.02:0.01%,0.17:0.01%,0.33:0.01%,0.15:0.01%,0.22:0.01%,0.18:0.01%,0.23:0.01%,0.16:0.01%,0.21:0.01%,0.19:0.0%,0.27:0.0%,0.98:0.0%,0.44:0.0%,0.29:0.0%,0.24:0.0%,0.97:0.0%,0.96:0.0%,0.31:0.0%,0.26:0.0%,0.67:0.0%,0.36:0.0%,0.65:0.0%,0.94:0.0%,0.28:0.0%,0.79:0.0%,0.95:0.0%,0.53:0.0%,0.81:0.0%,0.62:0.0%,0.85:0.0%,0.6:0.0%,0.64:0.0%,0.88:0.0%,0.68:0.0%,0.52:0.0%,0.66:0.0%,0.71:0.0%,0.93:0.0%,0.57:0.0%,0.63:0.0%,0.83:0.0%,0.78:0.0%,0.75:0.0%,0.51:0.0%,0.58:0.0%,0.56:0.0%,0.55:0.0%,0.3:0.0%,0.76:0.0%,0.86:0.0%,0.74:0.0%,0.35:0.0%,0.38:0.0%,0.54:0.0%,0.72:0.0%,0.84:0.0%,0.69:0.0%,0.61:0.0%,0.59:0.0%,0.42:0.0%,0.32:0.0%,0.82:0.0%,0.77:0.0%,0.7:0.0%,0.91:0.0%,0.92:0.0%,0.4:0.0%,0.73:0.0%,0.9:0.0%,0.34:0.0%,0.8:0.0%,0.89:0.0%,0.87:0.0%] ** srv_serror_rate:[0.0:82.12%,1.0:17.62%,0.03:0.03%,0.04:0.02%,0.05:0.02%,0.06:0.02%,0.02:0.02%,0.5:0.02%,0.08:0.01%,0.07:0.01%,0.25:0.01%,0.33:0.01%,0.17:0.01%,0.09:0.01%,0.1:0.01%,0.2:0.01%,0.11:0.01%,0.12:0.01%,0.14:0.01%,0.01:0.0%,0.67:0.0%,0.92:0.0%,0.18:0.0%,0.94:0.0%,0.95:0.0%,0.58:0.0%,0.88:0.0%,0.75:0.0%,0.19:0.0%,0.4:0.0%,0.76:0.0%,0.83:0.0%,0.91:0.0%,0.15:0.0%,0.22:0.0%,0.93:0.0%,0.85:0.0%,0.27:0.0%,0.86:0.0%,0.44:0.0%,0.35:0.0%,0.51:0.0%,0.36:0.0%,0.38:0.0%,0.21:0.0%,0.8:0.0%,0.9:0.0%,0.45:0.0%,0.16:0.0%,0.37:0.0%,0.23:0.0%] ** 
rerror_rate:[0.0:94.12%,1.0:5.46%,0.86:0.02%,0.87:0.02%,0.92:0.02%,0.25:0.02%,0.95:0.02%,0.9:0.02%,0.5:0.02%,0.91:0.02%,0.88:0.01%,0.96:0.01%,0.33:0.01%,0.2:0.01%,0.93:0.01%,0.94:0.01%,0.01:0.01%,0.89:0.01%,0.85:0.01%,0.99:0.01%,0.82:0.01%,0.77:0.01%,0.17:0.01%,0.97:0.01%,0.02:0.01%,0.98:0.01%,0.03:0.01%,0.8:0.01%,0.78:0.01%,0.76:0.01%,0.75:0.0%,0.79:0.0%,0.84:0.0%,0.14:0.0%,0.05:0.0%,0.73:0.0%,0.81:0.0%,0.06:0.0%,0.71:0.0%,0.83:0.0%,0.67:0.0%,0.56:0.0%,0.08:0.0%,0.04:0.0%,0.1:0.0%,0.09:0.0%,0.12:0.0%,0.07:0.0%,0.11:0.0%,0.69:0.0%,0.74:0.0%,0.64:0.0%,0.4:0.0%,0.72:0.0%,0.7:0.0%,0.6:0.0%,0.29:0.0%,0.22:0.0%,0.62:0.0%,0.65:0.0%,0.21:0.0%,0.68:0.0%,0.37:0.0%,0.19:0.0%,0.43:0.0%,0.58:0.0%,0.35:0.0%,0.24:0.0%,0.31:0.0%,0.23:0.0%,0.27:0.0%,0.28:0.0%,0.26:0.0%,0.36:0.0%,0.34:0.0%,0.66:0.0%,0.32:0.0%] ** srv_rerror_rate:[0.0:93.99%,1.0:5.69%,0.33:0.05%,0.5:0.04%,0.25:0.04%,0.2:0.03%,0.17:0.03%,0.14:0.01%,0.04:0.01%,0.03:0.01%,0.12:0.01%,0.02:0.01%,0.06:0.01%,0.05:0.01%,0.07:0.01%,0.4:0.01%,0.67:0.01%,0.08:0.01%,0.11:0.01%,0.29:0.01%,0.09:0.0%,0.1:0.0%,0.75:0.0%,0.6:0.0%,0.01:0.0%,0.22:0.0%,0.71:0.0%,0.86:0.0%,0.83:0.0%,0.73:0.0%,0.81:0.0%,0.88:0.0%,0.96:0.0%,0.92:0.0%,0.18:0.0%,0.43:0.0%,0.79:0.0%,0.93:0.0%,0.13:0.0%,0.27:0.0%,0.38:0.0%,0.94:0.0%,0.95:0.0%,0.37:0.0%,0.85:0.0%,0.8:0.0%,0.62:0.0%,0.82:0.0%,0.69:0.0%,0.21:0.0%,0.87:0.0%] ** 
same_srv_rate:[1.0:77.34%,0.06:2.27%,0.05:2.14%,0.04:2.06%,0.07:2.03%,0.03:1.93%,0.02:1.9%,0.01:1.77%,0.08:1.48%,0.09:1.01%,0.1:0.8%,0.0:0.73%,0.12:0.73%,0.11:0.67%,0.13:0.66%,0.14:0.51%,0.15:0.35%,0.5:0.29%,0.16:0.25%,0.17:0.17%,0.33:0.12%,0.18:0.1%,0.2:0.08%,0.19:0.07%,0.67:0.05%,0.25:0.04%,0.21:0.04%,0.99:0.03%,0.22:0.03%,0.24:0.02%,0.23:0.02%,0.4:0.02%,0.98:0.02%,0.75:0.02%,0.27:0.02%,0.26:0.01%,0.8:0.01%,0.29:0.01%,0.38:0.01%,0.86:0.01%,0.3:0.01%,0.31:0.01%,0.44:0.01%,0.83:0.01%,0.36:0.01%,0.28:0.01%,0.43:0.01%,0.6:0.01%,0.42:0.01%,0.97:0.01%,0.32:0.01%,0.35:0.01%,0.45:0.01%,0.47:0.01%,0.88:0.0%,0.48:0.0%,0.39:0.0%,0.52:0.0%,0.46:0.0%,0.37:0.0%,0.41:0.0%,0.89:0.0%,0.34:0.0%,0.92:0.0%,0.54:0.0%,0.53:0.0%,0.94:0.0%,0.95:0.0%,0.57:0.0%,0.96:0.0%,0.64:0.0%,0.71:0.0%,0.56:0.0%,0.62:0.0%,0.78:0.0%,0.9:0.0%,0.49:0.0%,0.91:0.0%,0.55:0.0%,0.65:0.0%,0.73:0.0%,0.58:0.0%,0.59:0.0%,0.93:0.0%,0.76:0.0%,0.51:0.0%,0.77:0.0%,0.82:0.0%,0.81:0.0%,0.74:0.0%,0.69:0.0%,0.79:0.0%,0.72:0.0%,0.7:0.0%,0.85:0.0%,0.68:0.0%,0.61:0.0%,0.63:0.0%,0.87:0.0%] ** diff_srv_rate:[0.0:77.33%,0.06:10.69%,0.07:5.83%,0.05:3.89%,0.08:0.66%,1.0:0.48%,0.04:0.19%,0.67:0.13%,0.5:0.12%,0.09:0.08%,0.6:0.06%,0.12:0.05%,0.1:0.04%,0.11:0.04%,0.14:0.03%,0.4:0.02%,0.13:0.02%,0.29:0.02%,0.01:0.02%,0.15:0.02%,0.03:0.02%,0.33:0.02%,0.17:0.02%,0.25:0.02%,0.75:0.01%,0.2:0.01%,0.18:0.01%,0.16:0.01%,0.19:0.01%,0.02:0.01%,0.22:0.01%,0.21:0.01%,0.27:0.01%,0.96:0.01%,0.31:0.01%,0.38:0.01%,0.24:0.01%,0.23:0.01%,0.43:0.0%,0.52:0.0%,0.95:0.0%,0.44:0.0%,0.53:0.0%,0.36:0.0%,0.8:0.0%,0.57:0.0%,0.42:0.0%,0.3:0.0%,0.26:0.0%,0.28:0.0%,0.56:0.0%,0.99:0.0%,0.54:0.0%,0.62:0.0%,0.37:0.0%,0.55:0.0%,0.35:0.0%,0.41:0.0%,0.47:0.0%,0.89:0.0%,0.32:0.0%,0.71:0.0%,0.58:0.0%,0.46:0.0%,0.39:0.0%,0.51:0.0%,0.45:0.0%,0.97:0.0%,0.83:0.0%,0.7:0.0%,0.69:0.0%,0.78:0.0%,0.74:0.0%,0.64:0.0%,0.73:0.0%,0.82:0.0%,0.88:0.0%,0.86:0.0%] ** 
srv_diff_host_rate:[0.0:92.99%,1.0:1.64%,0.12:0.31%,0.5:0.29%,0.67:0.29%,0.33:0.25%,0.11:0.24%,0.25:0.23%,0.1:0.22%,0.14:0.21%,0.17:0.21%,0.08:0.2%,0.15:0.2%,0.18:0.19%,0.2:0.19%,0.09:0.19%,0.4:0.19%,0.07:0.17%,0.29:0.17%,0.13:0.16%,0.22:0.16%,0.06:0.14%,0.02:0.1%,0.05:0.1%,0.01:0.08%,0.21:0.08%,0.19:0.08%,0.16:0.07%,0.75:0.07%,0.27:0.06%,0.04:0.06%,0.6:0.06%,0.3:0.06%,0.38:0.05%,0.43:0.05%,0.23:0.05%,0.03:0.03%,0.24:0.02%,0.36:0.02%,0.31:0.02%,0.8:0.02%,0.57:0.01%,0.44:0.01%,0.28:0.01%,0.26:0.01%,0.42:0.0%,0.45:0.0%,0.62:0.0%,0.83:0.0%,0.71:0.0%,0.56:0.0%,0.35:0.0%,0.32:0.0%,0.37:0.0%,0.41:0.0%,0.47:0.0%,0.86:0.0%,0.55:0.0%,0.54:0.0%,0.88:0.0%,0.64:0.0%,0.46:0.0%,0.7:0.0%,0.77:0.0%] ** dst_host_count:256 (0%) ** dst_host_srv_count:256 (0%) ** dst_host_same_srv_rate:101 (0%) ** dst_host_diff_srv_rate:101 (0%) ** dst_host_same_src_port_rate:101 (0%) ** dst_host_srv_diff_host_rate:[0.0:89.45%,0.02:2.38%,0.01:2.13%,0.04:1.35%,0.03:1.34%,0.05:0.94%,0.06:0.39%,0.07:0.31%,0.5:0.15%,0.08:0.14%,0.09:0.13%,0.15:0.09%,0.11:0.09%,0.16:0.08%,0.13:0.08%,0.1:0.08%,0.14:0.07%,1.0:0.07%,0.17:0.07%,0.2:0.07%,0.12:0.07%,0.18:0.07%,0.25:0.05%,0.22:0.05%,0.19:0.05%,0.21:0.05%,0.24:0.03%,0.23:0.02%,0.26:0.02%,0.27:0.02%,0.33:0.02%,0.29:0.02%,0.51:0.02%,0.4:0.01%,0.28:0.01%,0.3:0.01%,0.67:0.01%,0.52:0.01%,0.31:0.01%,0.32:0.01%,0.38:0.01%,0.53:0.0%,0.43:0.0%,0.44:0.0%,0.34:0.0%,0.6:0.0%,0.36:0.0%,0.57:0.0%,0.35:0.0%,0.54:0.0%,0.37:0.0%,0.56:0.0%,0.55:0.0%,0.42:0.0%,0.46:0.0%,0.45:0.0%,0.41:0.0%,0.48:0.0%,0.39:0.0%,0.8:0.0%,0.7:0.0%,0.47:0.0%,0.62:0.0%,0.75:0.0%,0.58:0.0%] ** 
dst_host_serror_rate:[0.0:80.93%,1.0:17.56%,0.01:0.74%,0.02:0.2%,0.03:0.09%,0.09:0.05%,0.04:0.04%,0.05:0.04%,0.07:0.03%,0.08:0.03%,0.06:0.02%,0.14:0.02%,0.15:0.02%,0.11:0.02%,0.13:0.02%,0.16:0.02%,0.1:0.02%,0.12:0.01%,0.18:0.01%,0.25:0.01%,0.2:0.01%,0.17:0.01%,0.33:0.01%,0.99:0.01%,0.19:0.01%,0.31:0.01%,0.27:0.01%,0.5:0.0%,0.22:0.0%,0.98:0.0%,0.35:0.0%,0.28:0.0%,0.53:0.0%,0.24:0.0%,0.96:0.0%,0.3:0.0%,0.26:0.0%,0.97:0.0%,0.29:0.0%,0.94:0.0%,0.42:0.0%,0.32:0.0%,0.56:0.0%,0.55:0.0%,0.95:0.0%,0.6:0.0%,0.23:0.0%,0.93:0.0%,0.34:0.0%,0.85:0.0%,0.89:0.0%,0.21:0.0%,0.92:0.0%,0.58:0.0%,0.43:0.0%,0.9:0.0%,0.57:0.0%,0.91:0.0%,0.49:0.0%,0.82:0.0%,0.36:0.0%,0.87:0.0%,0.45:0.0%,0.62:0.0%,0.65:0.0%,0.46:0.0%,0.38:0.0%,0.61:0.0%,0.47:0.0%,0.76:0.0%,0.81:0.0%,0.54:0.0%,0.64:0.0%,0.44:0.0%,0.48:0.0%,0.72:0.0%,0.39:0.0%,0.52:0.0%,0.51:0.0%,0.67:0.0%,0.84:0.0%,0.73:0.0%,0.4:0.0%,0.69:0.0%,0.79:0.0%,0.41:0.0%,0.68:0.0%,0.88:0.0%,0.77:0.0%,0.75:0.0%,0.7:0.0%,0.8:0.0%,0.59:0.0%,0.71:0.0%,0.37:0.0%,0.86:0.0%,0.66:0.0%,0.78:0.0%,0.74:0.0%,0.83:0.0%] ** dst_host_srv_serror_rate:[0.0:81.16%,1.0:17.61%,0.01:0.99%,0.02:0.14%,0.03:0.03%,0.04:0.02%,0.05:0.01%,0.06:0.01%,0.08:0.0%,0.5:0.0%,0.07:0.0%,0.1:0.0%,0.09:0.0%,0.11:0.0%,0.17:0.0%,0.14:0.0%,0.12:0.0%,0.96:0.0%,0.33:0.0%,0.67:0.0%,0.97:0.0%,0.25:0.0%,0.98:0.0%,0.4:0.0%,0.75:0.0%,0.48:0.0%,0.83:0.0%,0.16:0.0%,0.93:0.0%,0.69:0.0%,0.2:0.0%,0.91:0.0%,0.78:0.0%,0.95:0.0%,0.8:0.0%,0.92:0.0%,0.68:0.0%,0.29:0.0%,0.38:0.0%,0.88:0.0%,0.3:0.0%,0.32:0.0%,0.94:0.0%,0.57:0.0%,0.63:0.0%,0.62:0.0%,0.31:0.0%,0.85:0.0%,0.56:0.0%,0.81:0.0%,0.74:0.0%,0.86:0.0%,0.13:0.0%,0.23:0.0%,0.18:0.0%,0.64:0.0%,0.46:0.0%,0.52:0.0%,0.66:0.0%,0.6:0.0%,0.84:0.0%,0.55:0.0%,0.9:0.0%,0.15:0.0%,0.79:0.0%,0.82:0.0%,0.87:0.0%,0.47:0.0%,0.53:0.0%,0.45:0.0%,0.42:0.0%,0.24:0.0%] ** dst_host_rerror_rate:101 (0%) ** dst_host_srv_rerror_rate:101 (0%) ** 
outcome:[smurf.:56.84%,neptune.:21.7%,normal.:19.69%,back.:0.45%,satan.:0.32%,ipsweep.:0.25%,portsweep.:0.21%,warezclient.:0.21%,teardrop.:0.2%,pod.:0.05%,nmap.:0.05%,guess_passwd.:0.01%,buffer_overflow.:0.01%,land.:0.0%,warezmaster.:0.0%,imap.:0.0%,rootkit.:0.0%,loadmodule.:0.0%,ftp_write.:0.0%,multihop.:0.0%,phf.:0.0%,perl.:0.0%,spy.:0.0%] ###Markdown Encode the feature vector We use the same two functions provided earlier to preprocess the data. The first encodes Z-Scores, and the second creates dummy variables from categorical columns. ###Code # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Encode text values to dummy variables (i.e. [1,0,0], # [0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = f"{name}-{x}" df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Again, just as we did for anomaly detection, we preprocess the data set. We convert all numeric values to Z-Scores, and we translate all categorical values to dummy variables. 
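Before applying them to the full KDD99 frame, the two encoders can be sanity-checked on a tiny, hypothetical frame (the data below is illustrative, not taken from the dataset):

```python
import pandas as pd

# Hypothetical three-row frame, just to illustrate the two encoders above
toy = pd.DataFrame({'duration': [0.0, 10.0, 20.0],
                    'protocol_type': ['tcp', 'udp', 'tcp']})

# Z-score: subtract the column mean, divide by the sample standard deviation
toy['duration'] = (toy['duration'] - toy['duration'].mean()) / toy['duration'].std()

# Dummy-encode the categorical column, mirroring encode_text_dummy
dummies = pd.get_dummies(toy['protocol_type'])
for x in dummies.columns:
    toy[f"protocol_type-{x}"] = dummies[x]
toy.drop('protocol_type', axis=1, inplace=True)

print(toy['duration'].tolist())  # [-1.0, 0.0, 1.0]
print(sorted(toy.columns))       # ['duration', 'protocol_type-tcp', 'protocol_type-udp']
```

Each numeric column ends up with mean 0 and unit variance, and each categorical column is replaced by one 0/1 indicator column per category, which is exactly the shape a dense input layer expects.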
###Code # Now encode the feature vector encode_numeric_zscore(df, 'duration') encode_text_dummy(df, 'protocol_type') encode_text_dummy(df, 'service') encode_text_dummy(df, 'flag') encode_numeric_zscore(df, 'src_bytes') encode_numeric_zscore(df, 'dst_bytes') encode_text_dummy(df, 'land') encode_numeric_zscore(df, 'wrong_fragment') encode_numeric_zscore(df, 'urgent') encode_numeric_zscore(df, 'hot') encode_numeric_zscore(df, 'num_failed_logins') encode_text_dummy(df, 'logged_in') encode_numeric_zscore(df, 'num_compromised') encode_numeric_zscore(df, 'root_shell') encode_numeric_zscore(df, 'su_attempted') encode_numeric_zscore(df, 'num_root') encode_numeric_zscore(df, 'num_file_creations') encode_numeric_zscore(df, 'num_shells') encode_numeric_zscore(df, 'num_access_files') encode_numeric_zscore(df, 'num_outbound_cmds') encode_text_dummy(df, 'is_host_login') encode_text_dummy(df, 'is_guest_login') encode_numeric_zscore(df, 'count') encode_numeric_zscore(df, 'srv_count') encode_numeric_zscore(df, 'serror_rate') encode_numeric_zscore(df, 'srv_serror_rate') encode_numeric_zscore(df, 'rerror_rate') encode_numeric_zscore(df, 'srv_rerror_rate') encode_numeric_zscore(df, 'same_srv_rate') encode_numeric_zscore(df, 'diff_srv_rate') encode_numeric_zscore(df, 'srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_count') encode_numeric_zscore(df, 'dst_host_srv_count') encode_numeric_zscore(df, 'dst_host_same_srv_rate') encode_numeric_zscore(df, 'dst_host_diff_srv_rate') encode_numeric_zscore(df, 'dst_host_same_src_port_rate') encode_numeric_zscore(df, 'dst_host_srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_serror_rate') encode_numeric_zscore(df, 'dst_host_srv_serror_rate') encode_numeric_zscore(df, 'dst_host_rerror_rate') encode_numeric_zscore(df, 'dst_host_srv_rerror_rate') # display 5 rows df.dropna(inplace=True,axis=1) df[0:5] # This is the numeric feature vector, as it goes to the neural net # Convert to numpy - Classification x_columns = 
df.columns.drop('outcome') x = df[x_columns].values dummies = pd.get_dummies(df['outcome']) # Classification outcomes = dummies.columns num_classes = len(outcomes) y = dummies.values ###Output _____no_output_____ ###Markdown We will attempt to predict what type of attack is underway. The outcome column specifies the attack type. A value of normal indicates that there is no attack underway. We display the outcomes; some attack types are much rarer than others. ###Code df.groupby('outcome')['outcome'].count() ###Output _____no_output_____ ###Markdown Train the Neural Network We now train the neural network to classify the different KDD99 outcomes. The code provided here implements a relatively simple neural network with several hidden layers. We train it with the provided KDD99 data. ###Code import pandas as pd import io import requests import numpy as np import os from sklearn.model_selection import train_test_split from sklearn import metrics from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping # Create a test/train split. 
25% test # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) # Create neural net model = Sequential() model.add(Dense(10, input_dim=x.shape[1], activation='relu')) model.add(Dense(50, input_dim=x.shape[1], activation='relu')) model.add(Dense(10, input_dim=x.shape[1], activation='relu')) model.add(Dense(1, kernel_initializer='normal')) model.add(Dense(y.shape[1],activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto', restore_best_weights=True) model.fit(x_train,y_train,validation_data=(x_test,y_test), callbacks=[monitor],verbose=2,epochs=1000) ###Output Train on 370515 samples, validate on 123506 samples Epoch 1/1000 370515/370515 - 24s - loss: 0.1942 - val_loss: 0.0408 Epoch 2/1000 370515/370515 - 24s - loss: 0.1164 - val_loss: 0.0293 Epoch 3/1000 370515/370515 - 24s - loss: 0.0780 - val_loss: 0.0414 Epoch 4/1000 370515/370515 - 24s - loss: 0.0524 - val_loss: 0.0251 Epoch 5/1000 370515/370515 - 24s - loss: 0.0248 - val_loss: 0.0250 Epoch 6/1000 370515/370515 - 24s - loss: 0.0224 - val_loss: 0.0220 Epoch 7/1000 370515/370515 - 24s - loss: 0.0211 - val_loss: 0.0217 Epoch 8/1000 370515/370515 - 25s - loss: 0.0203 - val_loss: 0.0198 Epoch 9/1000 370515/370515 - 24s - loss: 0.0197 - val_loss: 0.0202 Epoch 10/1000 370515/370515 - 24s - loss: 0.0195 - val_loss: 0.0206 Epoch 11/1000 370515/370515 - 25s - loss: 0.0186 - val_loss: 0.0194 Epoch 12/1000 370515/370515 - 24s - loss: 0.0177 - val_loss: 0.0187 Epoch 13/1000 370515/370515 - 25s - loss: 0.0176 - val_loss: 0.0180 Epoch 14/1000 370515/370515 - 25s - loss: 0.0171 - val_loss: 0.0212 Epoch 15/1000 370515/370515 - 25s - loss: 0.0178 - val_loss: 0.0200 Epoch 16/1000 370515/370515 - 25s - loss: 0.0163 - val_loss: 0.0153 Epoch 17/1000 370515/370515 - 25s - loss: 0.0157 - val_loss: 0.0153 Epoch 18/1000 370515/370515 - 24s - loss: 0.0154 
- val_loss: 0.0149 Epoch 19/1000 370515/370515 - 24s - loss: 0.0150 - val_loss: 0.0153 Epoch 20/1000 370515/370515 - 25s - loss: 0.0187 - val_loss: 0.0146 Epoch 21/1000 Restoring model weights from the end of the best epoch. 370515/370515 - 25s - loss: 0.0141 - val_loss: 0.0149 Epoch 00021: early stopping ###Markdown We can now evaluate the neural network. As you can see, the neural network achieves a 99% accuracy rate. ###Code # Measure accuracy pred = model.predict(x_test) pred = np.argmax(pred,axis=1) y_eval = np.argmax(y_test,axis=1) score = metrics.accuracy_score(y_eval, pred) print("Validation score: {}".format(score)) ###Output Validation score: 0.998234903567438 ###Markdown T81-558: Applications of Deep Neural Networks**Module 14: Other Neural Network Techniques*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). 
Module 14 Video Material* Part 14.1: What is AutoML [[Video]](https://www.youtube.com/watch?v=1mB_5iurqzw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_01_automl.ipynb)* Part 14.2: Using Denoising AutoEncoders in Keras [[Video]](https://www.youtube.com/watch?v=4bTSu6_fucc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_02_auto_encode.ipynb)* Part 14.3: Training an Intrusion Detection System with KDD99 [[Video]](https://www.youtube.com/watch?v=1ySn6h2A68I&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_03_anomaly.ipynb)* **Part 14.4: Anomaly Detection in Keras** [[Video]](https://www.youtube.com/watch?v=VgyKQ5MTDFc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_04_ids_kdd99.ipynb)* Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_05_new_tech.ipynb) Part 14.4: Training an Intrusion Detection System with KDD99 The [KDD-99 dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html) is very famous in the security field and almost a "hello world" of Intrusion Detection Systems (IDS) in machine learning. An intrusion detection system (IDS) is a program that monitors computers and network systems for malicious activity or policy violations. Any intrusion activity or violation is typically reported either to an administrator or collected centrally. IDS types range in scope from single computers to large networks. Although the KDD99 dataset is over 20 years old, it is still widely used to demonstrate Intrusion Detection Systems (IDS). 
KDD99 is the data set used for The Third International Knowledge Discovery and Data Mining Tools Competition, which was held in conjunction with KDD-99, The Fifth International Conference on Knowledge Discovery and Data Mining. The competition task was to build a network intrusion detector, a predictive model capable of distinguishing between "bad" connections, called intrusions or attacks, and "good" normal connections. This database contains a standard set of data to be audited, including a wide variety of intrusions simulated in a military network environment. Read in Raw KDD-99 Dataset The following code reads the KDD99 CSV dataset into a Pandas data frame. The standard format of KDD99 does not include column names. Because of that, the program adds them. ###Code import pandas as pd from tensorflow.keras.utils import get_file pd.set_option('display.max_columns', 6) pd.set_option('display.max_rows', 5) try: path = get_file('kdd-with-columns.csv', origin=\ 'https://github.com/jeffheaton/jheaton-ds2/raw/main/'\ 'kdd-with-columns.csv',archive_format=None) except: print('Error downloading') raise print(path) # Original file: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html df = pd.read_csv(path) print("Read {} rows.".format(len(df))) # df = df.sample(frac=0.1, replace=False) # Uncomment this line to # sample only 10% of the dataset df.dropna(inplace=True,axis=1) # For now, just drop NA's (rows with missing values) # display 5 rows pd.set_option('display.max_columns', 5) pd.set_option('display.max_rows', 5) df ###Output Downloading data from https://github.com/jeffheaton/jheaton-ds2/raw/main/kdd-with-columns.csv 68132864/68132668 [==============================] - 1s 0us/step 68141056/68132668 [==============================] - 1s 0us/step /root/.keras/datasets/kdd-with-columns.csv Read 494021 rows. ###Markdown Analyzing a Dataset Before we preprocess the KDD99 dataset, let's have a look at the individual columns and distributions. 
You can use the following script to give a high-level overview of how a dataset appears. ###Code import pandas as pd import os import numpy as np from sklearn import metrics from scipy.stats import zscore def expand_categories(values): result = [] s = values.value_counts() t = float(len(values)) for v in s.index: result.append("{}:{}%".format(v,round(100*(s[v]/t),2))) return "[{}]".format(",".join(result)) def analyze(df): print() cols = df.columns.values total = float(len(df)) print("{} rows".format(int(total))) for col in cols: uniques = df[col].unique() unique_count = len(uniques) if unique_count>100: print("** {}:{} ({}%)".format(col,unique_count,\ int(((unique_count)/total)*100))) else: print("** {}:{}".format(col,expand_categories(df[col]))) ###Output _____no_output_____ ###Markdown The analysis looks at how many unique values are present. For example, duration, which is a numeric value, has 2495 unique values; that is under 1% of the total rows, so the script reports 0%. A text/categorical value such as protocol_type only has a few unique values, and the program shows the percentages of each. Columns with a large number of unique values do not have their item counts shown to save display space. 
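To make the output format concrete, expand_categories can first be tried on a small made-up Series (the function is repeated here so the snippet runs standalone):

```python
import pandas as pd

# Same expand_categories as defined above, repeated so this cell is self-contained
def expand_categories(values):
    result = []
    s = values.value_counts()
    t = float(len(values))
    for v in s.index:
        result.append("{}:{}%".format(v, round(100 * (s[v] / t), 2)))
    return "[{}]".format(",".join(result))

# Hypothetical column: three tcp rows and one udp row
demo = pd.Series(['tcp', 'tcp', 'tcp', 'udp'])
print(expand_categories(demo))  # [tcp:75.0%,udp:25.0%]
```

Because value_counts sorts by frequency, the most common category always appears first in the bracketed summary, which is why normal-heavy columns such as outcome lead with their dominant value.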
###Code # Analyze KDD-99 analyze(df) ###Output 494021 rows ** duration:2495 (0%) ** protocol_type:[icmp:57.41%,tcp:38.47%,udp:4.12%] ** service:[ecr_i:56.96%,private:22.45%,http:13.01%,smtp:1.97%,other:1.46%,domain_u:1.19%,ftp_data:0.96%,eco_i:0.33%,ftp:0.16%,finger:0.14%,urp_i:0.11%,telnet:0.1%,ntp_u:0.08%,auth:0.07%,pop_3:0.04%,time:0.03%,csnet_ns:0.03%,remote_job:0.02%,gopher:0.02%,imap4:0.02%,discard:0.02%,domain:0.02%,iso_tsap:0.02%,systat:0.02%,shell:0.02%,echo:0.02%,rje:0.02%,whois:0.02%,sql_net:0.02%,printer:0.02%,nntp:0.02%,courier:0.02%,sunrpc:0.02%,netbios_ssn:0.02%,mtp:0.02%,vmnet:0.02%,uucp_path:0.02%,uucp:0.02%,klogin:0.02%,bgp:0.02%,ssh:0.02%,supdup:0.02%,nnsp:0.02%,login:0.02%,hostnames:0.02%,efs:0.02%,daytime:0.02%,link:0.02%,netbios_ns:0.02%,pop_2:0.02%,ldap:0.02%,netbios_dgm:0.02%,exec:0.02%,http_443:0.02%,kshell:0.02%,name:0.02%,ctf:0.02%,netstat:0.02%,Z39_50:0.02%,IRC:0.01%,urh_i:0.0%,X11:0.0%,tim_i:0.0%,pm_dump:0.0%,tftp_u:0.0%,red_i:0.0%] ** flag:[SF:76.6%,S0:17.61%,REJ:5.44%,RSTR:0.18%,RSTO:0.12%,SH:0.02%,S1:0.01%,S2:0.0%,RSTOS0:0.0%,S3:0.0%,OTH:0.0%] ** src_bytes:3300 (0%) ** dst_bytes:10725 (2%) ** land:[0:100.0%,1:0.0%] ** wrong_fragment:[0:99.75%,3:0.2%,1:0.05%] ** urgent:[0:100.0%,1:0.0%,2:0.0%,3:0.0%] ** hot:[0:99.35%,2:0.44%,28:0.06%,1:0.05%,4:0.02%,6:0.02%,5:0.01%,3:0.01%,14:0.01%,30:0.01%,22:0.01%,19:0.0%,24:0.0%,18:0.0%,20:0.0%,7:0.0%,17:0.0%,12:0.0%,16:0.0%,10:0.0%,15:0.0%,9:0.0%] ** num_failed_logins:[0:99.99%,1:0.01%,2:0.0%,5:0.0%,4:0.0%,3:0.0%] ** logged_in:[0:85.18%,1:14.82%] ** num_compromised:[0:99.55%,1:0.44%,2:0.0%,4:0.0%,3:0.0%,6:0.0%,5:0.0%,7:0.0%,767:0.0%,12:0.0%,9:0.0%,884:0.0%,13:0.0%,38:0.0%,18:0.0%,11:0.0%,275:0.0%,281:0.0%,16:0.0%,238:0.0%,21:0.0%,22:0.0%,102:0.0%] ** root_shell:[0:99.99%,1:0.01%] ** su_attempted:[0:100.0%,1:0.0%,2:0.0%] ** num_root:[0:99.88%,1:0.05%,9:0.03%,6:0.03%,2:0.0%,5:0.0%,4:0.0%,3:0.0%,7:0.0%,993:0.0%,54:0.0%,306:0.0%,14:0.0%,39:0.0%,278:0.0%,268:0.0%,12:0.0%,857:0.0%,16:0.0%,119:0.0%] ** 
num_file_creations:[0:99.95%,1:0.04%,2:0.01%,4:0.0%,16:0.0%,5:0.0%,22:0.0%,25:0.0%,12:0.0%,8:0.0%,7:0.0%,21:0.0%,14:0.0%,10:0.0%,28:0.0%,9:0.0%,15:0.0%,20:0.0%] ** num_shells:[0:99.99%,1:0.01%,2:0.0%] ** num_access_files:[0:99.91%,1:0.09%,2:0.01%,3:0.0%,4:0.0%,6:0.0%,8:0.0%] ** num_outbound_cmds:[0:100.0%] ** is_host_login:[0:100.0%] ** is_guest_login:[0:99.86%,1:0.14%] ** count:490 (0%) ** srv_count:470 (0%) ** serror_rate:[0.0:81.94%,1.0:17.52%,0.99:0.06%,0.08:0.03%,0.05:0.03%,0.06:0.03%,0.07:0.03%,0.14:0.02%,0.04:0.02%,0.01:0.02%,0.09:0.02%,0.1:0.02%,0.03:0.02%,0.11:0.02%,0.13:0.02%,0.5:0.02%,0.12:0.02%,0.2:0.01%,0.25:0.01%,0.02:0.01%,0.17:0.01%,0.33:0.01%,0.15:0.01%,0.22:0.01%,0.18:0.01%,0.23:0.01%,0.16:0.01%,0.21:0.01%,0.19:0.0%,0.27:0.0%,0.98:0.0%,0.29:0.0%,0.44:0.0%,0.24:0.0%,0.97:0.0%,0.31:0.0%,0.96:0.0%,0.26:0.0%,0.79:0.0%,0.28:0.0%,0.36:0.0%,0.94:0.0%,0.95:0.0%,0.65:0.0%,0.67:0.0%,0.85:0.0%,0.64:0.0%,0.62:0.0%,0.6:0.0%,0.53:0.0%,0.81:0.0%,0.88:0.0%,0.93:0.0%,0.57:0.0%,0.55:0.0%,0.58:0.0%,0.56:0.0%,0.52:0.0%,0.51:0.0%,0.66:0.0%,0.75:0.0%,0.3:0.0%,0.83:0.0%,0.71:0.0%,0.78:0.0%,0.63:0.0%,0.68:0.0%,0.86:0.0%,0.54:0.0%,0.61:0.0%,0.74:0.0%,0.72:0.0%,0.69:0.0%,0.59:0.0%,0.38:0.0%,0.84:0.0%,0.35:0.0%,0.76:0.0%,0.42:0.0%,0.82:0.0%,0.77:0.0%,0.32:0.0%,0.7:0.0%,0.4:0.0%,0.73:0.0%,0.91:0.0%,0.92:0.0%,0.87:0.0%,0.8:0.0%,0.9:0.0%,0.34:0.0%,0.89:0.0%] ** srv_serror_rate:[0.0:82.12%,1.0:17.62%,0.03:0.03%,0.04:0.02%,0.05:0.02%,0.06:0.02%,0.02:0.02%,0.5:0.02%,0.08:0.01%,0.07:0.01%,0.25:0.01%,0.33:0.01%,0.17:0.01%,0.09:0.01%,0.1:0.01%,0.2:0.01%,0.12:0.01%,0.11:0.01%,0.14:0.01%,0.01:0.0%,0.67:0.0%,0.18:0.0%,0.92:0.0%,0.95:0.0%,0.94:0.0%,0.88:0.0%,0.19:0.0%,0.58:0.0%,0.75:0.0%,0.83:0.0%,0.76:0.0%,0.15:0.0%,0.91:0.0%,0.4:0.0%,0.85:0.0%,0.27:0.0%,0.22:0.0%,0.93:0.0%,0.16:0.0%,0.38:0.0%,0.36:0.0%,0.35:0.0%,0.45:0.0%,0.21:0.0%,0.44:0.0%,0.23:0.0%,0.51:0.0%,0.86:0.0%,0.9:0.0%,0.8:0.0%,0.37:0.0%] ** 
rerror_rate:[0.0:94.12%,1.0:5.46%,0.86:0.02%,0.87:0.02%,0.92:0.02%,0.25:0.02%,0.9:0.02%,0.95:0.02%,0.5:0.02%,0.91:0.02%,0.88:0.01%,0.96:0.01%,0.33:0.01%,0.2:0.01%,0.93:0.01%,0.94:0.01%,0.01:0.01%,0.89:0.01%,0.85:0.01%,0.99:0.01%,0.82:0.01%,0.77:0.01%,0.17:0.01%,0.97:0.01%,0.02:0.01%,0.98:0.01%,0.03:0.01%,0.78:0.01%,0.8:0.01%,0.76:0.01%,0.79:0.0%,0.84:0.0%,0.75:0.0%,0.14:0.0%,0.05:0.0%,0.73:0.0%,0.81:0.0%,0.83:0.0%,0.71:0.0%,0.06:0.0%,0.67:0.0%,0.56:0.0%,0.08:0.0%,0.04:0.0%,0.1:0.0%,0.12:0.0%,0.09:0.0%,0.07:0.0%,0.11:0.0%,0.69:0.0%,0.4:0.0%,0.64:0.0%,0.7:0.0%,0.72:0.0%,0.74:0.0%,0.6:0.0%,0.29:0.0%,0.62:0.0%,0.65:0.0%,0.21:0.0%,0.22:0.0%,0.37:0.0%,0.58:0.0%,0.68:0.0%,0.19:0.0%,0.43:0.0%,0.35:0.0%,0.36:0.0%,0.23:0.0%,0.26:0.0%,0.27:0.0%,0.28:0.0%,0.66:0.0%,0.31:0.0%,0.32:0.0%,0.34:0.0%,0.24:0.0%] ** srv_rerror_rate:[0.0:93.99%,1.0:5.69%,0.33:0.05%,0.5:0.04%,0.25:0.04%,0.2:0.03%,0.17:0.03%,0.14:0.01%,0.04:0.01%,0.03:0.01%,0.12:0.01%,0.06:0.01%,0.02:0.01%,0.05:0.01%,0.07:0.01%,0.4:0.01%,0.67:0.01%,0.08:0.01%,0.11:0.01%,0.29:0.01%,0.09:0.0%,0.1:0.0%,0.75:0.0%,0.6:0.0%,0.01:0.0%,0.71:0.0%,0.22:0.0%,0.83:0.0%,0.86:0.0%,0.18:0.0%,0.96:0.0%,0.79:0.0%,0.43:0.0%,0.92:0.0%,0.81:0.0%,0.88:0.0%,0.73:0.0%,0.69:0.0%,0.94:0.0%,0.62:0.0%,0.8:0.0%,0.85:0.0%,0.93:0.0%,0.82:0.0%,0.27:0.0%,0.37:0.0%,0.21:0.0%,0.38:0.0%,0.87:0.0%,0.95:0.0%,0.13:0.0%] ** 
same_srv_rate:[1.0:77.34%,0.06:2.27%,0.05:2.14%,0.04:2.06%,0.07:2.03%,0.03:1.93%,0.02:1.9%,0.01:1.77%,0.08:1.48%,0.09:1.01%,0.1:0.8%,0.0:0.73%,0.12:0.73%,0.11:0.67%,0.13:0.66%,0.14:0.51%,0.15:0.35%,0.5:0.29%,0.16:0.25%,0.17:0.17%,0.33:0.12%,0.18:0.1%,0.2:0.08%,0.19:0.07%,0.67:0.05%,0.25:0.04%,0.21:0.04%,0.99:0.03%,0.22:0.03%,0.24:0.02%,0.23:0.02%,0.4:0.02%,0.98:0.02%,0.75:0.02%,0.27:0.02%,0.26:0.01%,0.8:0.01%,0.29:0.01%,0.38:0.01%,0.86:0.01%,0.3:0.01%,0.31:0.01%,0.44:0.01%,0.36:0.01%,0.83:0.01%,0.28:0.01%,0.43:0.01%,0.42:0.01%,0.6:0.01%,0.97:0.01%,0.32:0.01%,0.35:0.01%,0.45:0.01%,0.47:0.01%,0.88:0.0%,0.48:0.0%,0.39:0.0%,0.46:0.0%,0.52:0.0%,0.37:0.0%,0.41:0.0%,0.89:0.0%,0.34:0.0%,0.92:0.0%,0.54:0.0%,0.53:0.0%,0.95:0.0%,0.94:0.0%,0.57:0.0%,0.56:0.0%,0.96:0.0%,0.64:0.0%,0.71:0.0%,0.62:0.0%,0.78:0.0%,0.9:0.0%,0.49:0.0%,0.55:0.0%,0.91:0.0%,0.65:0.0%,0.73:0.0%,0.58:0.0%,0.93:0.0%,0.59:0.0%,0.82:0.0%,0.51:0.0%,0.81:0.0%,0.76:0.0%,0.77:0.0%,0.79:0.0%,0.74:0.0%,0.85:0.0%,0.72:0.0%,0.7:0.0%,0.68:0.0%,0.69:0.0%,0.87:0.0%,0.63:0.0%,0.61:0.0%] ** diff_srv_rate:[0.0:77.33%,0.06:10.69%,0.07:5.83%,0.05:3.89%,0.08:0.66%,1.0:0.48%,0.04:0.19%,0.67:0.13%,0.5:0.12%,0.09:0.08%,0.6:0.06%,0.12:0.05%,0.1:0.04%,0.11:0.04%,0.14:0.03%,0.4:0.02%,0.13:0.02%,0.29:0.02%,0.01:0.02%,0.15:0.02%,0.03:0.02%,0.33:0.02%,0.25:0.02%,0.17:0.02%,0.75:0.01%,0.2:0.01%,0.18:0.01%,0.16:0.01%,0.19:0.01%,0.02:0.01%,0.22:0.01%,0.21:0.01%,0.27:0.01%,0.96:0.01%,0.31:0.01%,0.38:0.01%,0.24:0.01%,0.23:0.01%,0.43:0.0%,0.52:0.0%,0.44:0.0%,0.95:0.0%,0.36:0.0%,0.8:0.0%,0.53:0.0%,0.57:0.0%,0.42:0.0%,0.3:0.0%,0.26:0.0%,0.28:0.0%,0.56:0.0%,0.99:0.0%,0.54:0.0%,0.62:0.0%,0.37:0.0%,0.41:0.0%,0.35:0.0%,0.55:0.0%,0.47:0.0%,0.32:0.0%,0.46:0.0%,0.39:0.0%,0.58:0.0%,0.71:0.0%,0.89:0.0%,0.51:0.0%,0.45:0.0%,0.97:0.0%,0.73:0.0%,0.69:0.0%,0.78:0.0%,0.7:0.0%,0.74:0.0%,0.82:0.0%,0.86:0.0%,0.64:0.0%,0.83:0.0%,0.88:0.0%] ** 
srv_diff_host_rate:[0.0:92.99%,1.0:1.64%,0.12:0.31%,0.5:0.29%,0.67:0.29%,0.33:0.25%,0.11:0.24%,0.25:0.23%,0.1:0.22%,0.14:0.21%,0.17:0.21%,0.08:0.2%,0.15:0.2%,0.18:0.19%,0.2:0.19%,0.09:0.19%,0.4:0.19%,0.07:0.17%,0.29:0.17%,0.13:0.16%,0.22:0.16%,0.06:0.14%,0.02:0.1%,0.05:0.1%,0.01:0.08%,0.21:0.08%,0.19:0.08%,0.16:0.07%,0.75:0.07%,0.27:0.06%,0.04:0.06%,0.6:0.06%,0.3:0.06%,0.38:0.05%,0.43:0.05%,0.23:0.05%,0.03:0.03%,0.24:0.02%,0.36:0.02%,0.31:0.02%,0.8:0.02%,0.57:0.01%,0.44:0.01%,0.28:0.01%,0.26:0.01%,0.42:0.0%,0.45:0.0%,0.62:0.0%,0.83:0.0%,0.71:0.0%,0.56:0.0%,0.35:0.0%,0.32:0.0%,0.37:0.0%,0.47:0.0%,0.41:0.0%,0.86:0.0%,0.55:0.0%,0.64:0.0%,0.54:0.0%,0.46:0.0%,0.88:0.0%,0.7:0.0%,0.77:0.0%] ** dst_host_count:256 (0%) ** dst_host_srv_count:256 (0%) ** dst_host_same_srv_rate:101 (0%) ** dst_host_diff_srv_rate:101 (0%) ** dst_host_same_src_port_rate:101 (0%) ** dst_host_srv_diff_host_rate:[0.0:89.45%,0.02:2.38%,0.01:2.13%,0.04:1.35%,0.03:1.34%,0.05:0.94%,0.06:0.39%,0.07:0.31%,0.5:0.15%,0.08:0.14%,0.09:0.13%,0.15:0.09%,0.11:0.09%,0.16:0.08%,0.13:0.08%,0.1:0.08%,0.14:0.07%,1.0:0.07%,0.17:0.07%,0.2:0.07%,0.12:0.07%,0.18:0.07%,0.25:0.05%,0.22:0.05%,0.19:0.05%,0.21:0.05%,0.24:0.03%,0.23:0.02%,0.26:0.02%,0.27:0.02%,0.33:0.02%,0.29:0.02%,0.51:0.02%,0.4:0.01%,0.28:0.01%,0.3:0.01%,0.67:0.01%,0.52:0.01%,0.31:0.01%,0.32:0.01%,0.38:0.01%,0.53:0.0%,0.43:0.0%,0.44:0.0%,0.34:0.0%,0.6:0.0%,0.36:0.0%,0.57:0.0%,0.35:0.0%,0.54:0.0%,0.37:0.0%,0.56:0.0%,0.55:0.0%,0.42:0.0%,0.46:0.0%,0.39:0.0%,0.45:0.0%,0.41:0.0%,0.48:0.0%,0.62:0.0%,0.8:0.0%,0.58:0.0%,0.75:0.0%,0.7:0.0%,0.47:0.0%] ** 
dst_host_serror_rate:[0.0:80.93%,1.0:17.56%,0.01:0.74%,0.02:0.2%,0.03:0.09%,0.09:0.05%,0.04:0.04%,0.05:0.04%,0.07:0.03%,0.08:0.03%,0.06:0.02%,0.14:0.02%,0.15:0.02%,0.11:0.02%,0.13:0.02%,0.16:0.02%,0.1:0.02%,0.12:0.01%,0.18:0.01%,0.25:0.01%,0.2:0.01%,0.17:0.01%,0.33:0.01%,0.99:0.01%,0.19:0.01%,0.31:0.01%,0.27:0.01%,0.5:0.0%,0.22:0.0%,0.98:0.0%,0.35:0.0%,0.28:0.0%,0.24:0.0%,0.53:0.0%,0.96:0.0%,0.3:0.0%,0.94:0.0%,0.29:0.0%,0.26:0.0%,0.97:0.0%,0.42:0.0%,0.32:0.0%,0.6:0.0%,0.95:0.0%,0.56:0.0%,0.55:0.0%,0.23:0.0%,0.85:0.0%,0.93:0.0%,0.34:0.0%,0.89:0.0%,0.58:0.0%,0.21:0.0%,0.92:0.0%,0.57:0.0%,0.91:0.0%,0.9:0.0%,0.43:0.0%,0.82:0.0%,0.49:0.0%,0.36:0.0%,0.76:0.0%,0.47:0.0%,0.46:0.0%,0.62:0.0%,0.38:0.0%,0.45:0.0%,0.87:0.0%,0.61:0.0%,0.65:0.0%,0.41:0.0%,0.39:0.0%,0.44:0.0%,0.48:0.0%,0.52:0.0%,0.81:0.0%,0.77:0.0%,0.79:0.0%,0.73:0.0%,0.88:0.0%,0.69:0.0%,0.67:0.0%,0.54:0.0%,0.72:0.0%,0.68:0.0%,0.4:0.0%,0.64:0.0%,0.51:0.0%,0.84:0.0%,0.59:0.0%,0.7:0.0%,0.75:0.0%,0.8:0.0%,0.71:0.0%,0.83:0.0%,0.66:0.0%,0.74:0.0%,0.78:0.0%,0.86:0.0%,0.37:0.0%] ** dst_host_srv_serror_rate:[0.0:81.16%,1.0:17.61%,0.01:0.99%,0.02:0.14%,0.03:0.03%,0.04:0.02%,0.05:0.01%,0.06:0.01%,0.08:0.0%,0.5:0.0%,0.07:0.0%,0.1:0.0%,0.09:0.0%,0.11:0.0%,0.17:0.0%,0.96:0.0%,0.33:0.0%,0.14:0.0%,0.12:0.0%,0.67:0.0%,0.97:0.0%,0.25:0.0%,0.98:0.0%,0.48:0.0%,0.75:0.0%,0.83:0.0%,0.4:0.0%,0.69:0.0%,0.8:0.0%,0.2:0.0%,0.91:0.0%,0.93:0.0%,0.78:0.0%,0.95:0.0%,0.16:0.0%,0.57:0.0%,0.94:0.0%,0.31:0.0%,0.92:0.0%,0.62:0.0%,0.88:0.0%,0.63:0.0%,0.29:0.0%,0.56:0.0%,0.3:0.0%,0.38:0.0%,0.32:0.0%,0.85:0.0%,0.68:0.0%,0.23:0.0%,0.15:0.0%,0.47:0.0%,0.52:0.0%,0.6:0.0%,0.24:0.0%,0.79:0.0%,0.74:0.0%,0.82:0.0%,0.64:0.0%,0.18:0.0%,0.13:0.0%,0.45:0.0%,0.66:0.0%,0.9:0.0%,0.42:0.0%,0.46:0.0%,0.86:0.0%,0.87:0.0%,0.84:0.0%,0.55:0.0%,0.81:0.0%,0.53:0.0%] ** dst_host_rerror_rate:101 (0%) ** dst_host_srv_rerror_rate:101 (0%) ** 
outcome:[smurf.:56.84%,neptune.:21.7%,normal.:19.69%,back.:0.45%,satan.:0.32%,ipsweep.:0.25%,portsweep.:0.21%,warezclient.:0.21%,teardrop.:0.2%,pod.:0.05%,nmap.:0.05%,guess_passwd.:0.01%,buffer_overflow.:0.01%,land.:0.0%,warezmaster.:0.0%,imap.:0.0%,rootkit.:0.0%,loadmodule.:0.0%,ftp_write.:0.0%,multihop.:0.0%,phf.:0.0%,perl.:0.0%,spy.:0.0%] ###Markdown Encode the feature vector We use the same two functions provided earlier to preprocess the data. The first encodes Z-Scores, and the second creates dummy variables from categorical columns. ###Code # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Encode text values to dummy variables (i.e. [1,0,0], # [0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = f"{name}-{x}" df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Again, just as we did for anomaly detection, we preprocess the data set. We convert all numeric values to Z-Scores, and we translate all categorical values to dummy variables. ###Code # Now encode the feature vector pd.set_option('display.max_columns', 6) pd.set_option('display.max_rows', 5) for name in df.columns: if name == 'outcome': pass elif name in ['protocol_type','service','flag','land','logged_in', 'is_host_login','is_guest_login']: encode_text_dummy(df,name) else: encode_numeric_zscore(df,name) # display 5 rows df.dropna(inplace=True,axis=1) df[0:5] # Convert to numpy - Classification x_columns = df.columns.drop('outcome') x = df[x_columns].values dummies = pd.get_dummies(df['outcome']) # Classification outcomes = dummies.columns num_classes = len(outcomes) y = dummies.values ###Output _____no_output_____ ###Markdown We will attempt to predict what type of attack is underway. 
The outcome column specifies the attack type. A value of normal indicates that there is no attack underway. We display the outcomes; some attack types are much rarer than others. ###Code df.groupby('outcome')['outcome'].count() ###Output _____no_output_____ ###Markdown Train the Neural Network We now train the neural network to classify the different KDD99 outcomes. The code provided here implements a relatively simple neural network with several hidden layers. We train it with the provided KDD99 data. ###Code import pandas as pd import io import requests import numpy as np import os from sklearn.model_selection import train_test_split from sklearn import metrics from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping # Create a test/train split. 25% test # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) # Create neural net model = Sequential() model.add(Dense(10, input_dim=x.shape[1], activation='relu')) model.add(Dense(50, input_dim=x.shape[1], activation='relu')) model.add(Dense(10, input_dim=x.shape[1], activation='relu')) model.add(Dense(1, kernel_initializer='normal')) model.add(Dense(y.shape[1],activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto', restore_best_weights=True) model.fit(x_train,y_train,validation_data=(x_test,y_test), callbacks=[monitor],verbose=2,epochs=1000) ###Output Epoch 1/1000 11579/11579 - 25s - loss: 0.1851 - val_loss: 0.0362 - 25s/epoch - 2ms/step Epoch 2/1000 11579/11579 - 23s - loss: 0.0360 - val_loss: 0.0275 - 23s/epoch - 2ms/step Epoch 3/1000 11579/11579 - 22s - loss: 0.0271 - val_loss: 0.0270 - 22s/epoch - 2ms/step Epoch 4/1000 11579/11579 - 22s - loss: 0.0244 - val_loss: 0.0221 - 22s/epoch - 2ms/step Epoch 5/1000 11579/11579 - 23s - loss: 0.0235 - 
val_loss: 0.0228 - 23s/epoch - 2ms/step Epoch 6/1000 11579/11579 - 22s - loss: 0.1370 - val_loss: 0.0210 - 22s/epoch - 2ms/step Epoch 7/1000 11579/11579 - 22s - loss: 0.0859 - val_loss: 0.0218 - 22s/epoch - 2ms/step Epoch 8/1000 11579/11579 - 22s - loss: 0.0507 - val_loss: 0.0192 - 22s/epoch - 2ms/step Epoch 9/1000 11579/11579 - 33s - loss: 0.0203 - val_loss: 0.0185 - 33s/epoch - 3ms/step Epoch 10/1000 11579/11579 - 21s - loss: 0.0180 - val_loss: 0.0185 - 21s/epoch - 2ms/step Epoch 11/1000 11579/11579 - 22s - loss: 0.0180 - val_loss: 0.0208 - 22s/epoch - 2ms/step Epoch 12/1000 11579/11579 - 22s - loss: 0.0166 - val_loss: 0.0165 - 22s/epoch - 2ms/step Epoch 13/1000 11579/11579 - 23s - loss: 0.0165 - val_loss: 0.0166 - 23s/epoch - 2ms/step Epoch 14/1000 11579/11579 - 24s - loss: 0.0158 - val_loss: 0.0152 - 24s/epoch - 2ms/step Epoch 15/1000 11579/11579 - 21s - loss: 0.0152 - val_loss: 0.0182 - 21s/epoch - 2ms/step Epoch 16/1000 11579/11579 - 21s - loss: 0.0160 - val_loss: 0.0149 - 21s/epoch - 2ms/step Epoch 17/1000 11579/11579 - 22s - loss: 0.0142 - val_loss: 0.0143 - 22s/epoch - 2ms/step Epoch 18/1000 11579/11579 - 22s - loss: 0.0139 - val_loss: 0.0153 - 22s/epoch - 2ms/step Epoch 19/1000 Restoring model weights from the end of the best epoch: 14. 11579/11579 - 23s - loss: 0.0141 - val_loss: 0.0152 - 23s/epoch - 2ms/step Epoch 19: early stopping ###Markdown We can now evaluate the neural network. As you can see, the neural network achieves a 99% accuracy rate. ###Code # Measure accuracy pred = model.predict(x_test) pred = np.argmax(pred,axis=1) y_eval = np.argmax(y_test,axis=1) score = metrics.accuracy_score(y_eval, pred) print("Validation score: {}".format(score)) ###Output Validation score: 0.9977005165740935 ###Markdown T81-558: Applications of Deep Neural Networks**Module 14: Other Neural Network Techniques*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. 
Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 14 Video Material* Part 14.1: What is AutoML [[Video]](https://www.youtube.com/watch?v=TFUysIR5AB0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_01_automl.ipynb)* Part 14.2: Using Denoising AutoEncoders in Keras [[Video]](https://www.youtube.com/watch?v=4bTSu6_fucc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_02_auto_encode.ipynb)* Part 14.3: Training an Intrusion Detection System with KDD99 [[Video]](https://www.youtube.com/watch?v=1ySn6h2A68I&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_03_anomaly.ipynb)* **Part 14.4: Anomaly Detection in Keras** [[Video]](https://www.youtube.com/watch?v=VgyKQ5MTDFc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_04_ids_kdd99.ipynb)* Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](t81_558_class_14_05_new_tech.ipynb) Part 14.4: Training an Intrusion Detection System with KDD99 The [KDD-99 dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html) is very famous in the security field and almost a "hello world" of Intrusion Detection Systems (IDS) in machine learning. An intrusion detection system (IDS) is a program that monitors computers and network systems for malicious activity or policy violations. Any intrusion activity or violation is typically reported either to an administrator or collected centrally. IDS types range in scope from single computers to large networks. Although the KDD99 dataset is over 20 years old, it is still widely used to demonstrate Intrusion Detection Systems (IDS). KDD99 is the data set used for The Third International Knowledge Discovery and Data Mining Tools Competition, which was held in conjunction with KDD-99, The Fifth International Conference on Knowledge Discovery and Data Mining. 
The competition task was to build a network intrusion detector, a predictive model capable of distinguishing between "bad" connections, called intrusions or attacks, and "good" normal connections. This database contains a standard set of data to be audited, including a wide variety of intrusions simulated in a military network environment. Read in Raw KDD-99 Dataset The following code reads the KDD99 CSV dataset into a Pandas data frame. The standard format of KDD99 does not include column names. Because of that, the program adds them. ###Code import pandas as pd from tensorflow.keras.utils import get_file pd.set_option('display.max_columns', 6) pd.set_option('display.max_rows', 5) try: path = get_file('kddcup.data_10_percent.gz', origin= 'http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz') except: print('Error downloading') raise print(path) # This file is a CSV, just no CSV extension or headers # Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html df = pd.read_csv(path, header=None) print("Read {} rows.".format(len(df))) # df = df.sample(frac=0.1, replace=False) # Uncomment this line to # sample only 10% of the dataset df.dropna(inplace=True,axis=1) # For now, just drop NA's # (rows with missing values) # The CSV file has no column heads, so add them df.columns = [ 'duration', 'protocol_type', 'service', 'flag', 'src_bytes', 'dst_bytes', 'land', 'wrong_fragment', 'urgent', 'hot', 'num_failed_logins', 'logged_in', 'num_compromised', 'root_shell', 'su_attempted', 'num_root', 'num_file_creations', 'num_shells', 'num_access_files', 'num_outbound_cmds', 'is_host_login', 'is_guest_login', 'count', 'srv_count', 'serror_rate', 'srv_serror_rate', 'rerror_rate', 'srv_rerror_rate', 'same_srv_rate', 'diff_srv_rate', 'srv_diff_host_rate', 'dst_host_count', 'dst_host_srv_count', 'dst_host_same_srv_rate', 'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate', 'dst_host_srv_diff_host_rate', 'dst_host_serror_rate', 'dst_host_srv_serror_rate',
'dst_host_rerror_rate', 'dst_host_srv_rerror_rate', 'outcome' ] # display 5 rows df[0:5] ###Output /Users/jheaton/.keras/datasets/kddcup.data_10_percent.gz Read 494021 rows. ###Markdown Analyzing a Dataset Before we preprocess the KDD99 dataset, let's have a look at the individual columns and distributions. You can use the following script to give a high-level overview of how a dataset appears. ###Code import pandas as pd import os import numpy as np from sklearn import metrics from scipy.stats import zscore def expand_categories(values): result = [] s = values.value_counts() t = float(len(values)) for v in s.index: result.append("{}:{}%".format(v,round(100*(s[v]/t),2))) return "[{}]".format(",".join(result)) def analyze(df): print() cols = df.columns.values total = float(len(df)) print("{} rows".format(int(total))) for col in cols: uniques = df[col].unique() unique_count = len(uniques) if unique_count>100: print("** {}:{} ({}%)".format(col,unique_count,int(((unique_count)/total)*100))) else: print("** {}:{}".format(col,expand_categories(df[col]))) ###Output _____no_output_____ ###Markdown The analysis looks at how many unique values are present. For example, duration, which is a numeric value, has 2495 unique values; that unique count rounds to 0% of the total number of rows. A text/categorical value such as protocol_type only has a few unique values, and the program shows the percentages of each. Columns with a large number of unique values do not have their item counts shown to save display space.
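As a quick sanity check, the same summarization idea can be exercised on a tiny made-up Series before running `analyze` on the full frame. This is a self-contained sketch; the toy protocol values are hypothetical, not KDD99 data.

```python
import pandas as pd

def expand_categories(values):
    # Same helper as above: list each unique value with its
    # percentage of the column, most frequent first.
    result = []
    s = values.value_counts()
    t = float(len(values))
    for v in s.index:
        result.append("{}:{}%".format(v, round(100 * (s[v] / t), 2)))
    return "[{}]".format(",".join(result))

# Hypothetical toy column
toy = pd.Series(["tcp", "tcp", "tcp", "udp", "udp", "icmp"])
print(expand_categories(toy))  # -> [tcp:50.0%,udp:33.33%,icmp:16.67%]
```

The ordering comes from `value_counts`, which sorts by frequency in descending order, so the dominant category always appears first.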
###Code # Analyze KDD-99 analyze(df) ###Output 494021 rows ** duration:2495 (0%) ** protocol_type:[icmp:57.41%,tcp:38.47%,udp:4.12%] ** service:[ecr_i:56.96%,private:22.45%,http:13.01%,smtp:1.97%,other:1.46%,domain_u:1.19%,ftp_data:0.96%,eco_i:0.33%,ftp:0.16%,finger:0.14%,urp_i:0.11%,telnet:0.1%,ntp_u:0.08%,auth:0.07%,pop_3:0.04%,time:0.03%,csnet_ns:0.03%,remote_job:0.02%,gopher:0.02%,imap4:0.02%,discard:0.02%,domain:0.02%,systat:0.02%,iso_tsap:0.02%,shell:0.02%,echo:0.02%,rje:0.02%,whois:0.02%,sql_net:0.02%,printer:0.02%,nntp:0.02%,courier:0.02%,sunrpc:0.02%,mtp:0.02%,netbios_ssn:0.02%,uucp_path:0.02%,bgp:0.02%,klogin:0.02%,uucp:0.02%,vmnet:0.02%,supdup:0.02%,ssh:0.02%,nnsp:0.02%,login:0.02%,hostnames:0.02%,efs:0.02%,daytime:0.02%,netbios_ns:0.02%,link:0.02%,ldap:0.02%,pop_2:0.02%,exec:0.02%,netbios_dgm:0.02%,http_443:0.02%,kshell:0.02%,name:0.02%,ctf:0.02%,netstat:0.02%,Z39_50:0.02%,IRC:0.01%,urh_i:0.0%,X11:0.0%,tim_i:0.0%,tftp_u:0.0%,red_i:0.0%,pm_dump:0.0%] ** flag:[SF:76.6%,S0:17.61%,REJ:5.44%,RSTR:0.18%,RSTO:0.12%,SH:0.02%,S1:0.01%,S2:0.0%,RSTOS0:0.0%,S3:0.0%,OTH:0.0%] ** src_bytes:3300 (0%) ** dst_bytes:10725 (2%) ** land:[0:100.0%,1:0.0%] ** wrong_fragment:[0:99.75%,3:0.2%,1:0.05%] ** urgent:[0:100.0%,1:0.0%,3:0.0%,2:0.0%] ** hot:[0:99.35%,2:0.44%,28:0.06%,1:0.05%,4:0.02%,6:0.02%,5:0.01%,3:0.01%,14:0.01%,30:0.01%,22:0.01%,19:0.0%,18:0.0%,24:0.0%,20:0.0%,7:0.0%,17:0.0%,12:0.0%,15:0.0%,16:0.0%,10:0.0%,9:0.0%] ** num_failed_logins:[0:99.99%,1:0.01%,2:0.0%,5:0.0%,4:0.0%,3:0.0%] ** logged_in:[0:85.18%,1:14.82%] ** num_compromised:[0:99.55%,1:0.44%,2:0.0%,4:0.0%,3:0.0%,6:0.0%,5:0.0%,7:0.0%,12:0.0%,9:0.0%,11:0.0%,767:0.0%,238:0.0%,16:0.0%,18:0.0%,275:0.0%,21:0.0%,22:0.0%,281:0.0%,38:0.0%,102:0.0%,884:0.0%,13:0.0%] ** root_shell:[0:99.99%,1:0.01%] ** su_attempted:[0:100.0%,2:0.0%,1:0.0%] ** num_root:[0:99.88%,1:0.05%,9:0.03%,6:0.03%,2:0.0%,5:0.0%,4:0.0%,3:0.0%,119:0.0%,7:0.0%,993:0.0%,268:0.0%,14:0.0%,16:0.0%,278:0.0%,39:0.0%,306:0.0%,54:0.0%,857:0.0%,12:0.0%] ** 
num_file_creations:[0:99.95%,1:0.04%,2:0.01%,4:0.0%,16:0.0%,9:0.0%,5:0.0%,7:0.0%,8:0.0%,28:0.0%,25:0.0%,12:0.0%,14:0.0%,15:0.0%,20:0.0%,21:0.0%,22:0.0%,10:0.0%] ** num_shells:[0:99.99%,1:0.01%,2:0.0%] ** num_access_files:[0:99.91%,1:0.09%,2:0.01%,3:0.0%,8:0.0%,6:0.0%,4:0.0%] ** num_outbound_cmds:[0:100.0%] ** is_host_login:[0:100.0%] ** is_guest_login:[0:99.86%,1:0.14%] ** count:490 (0%) ** srv_count:470 (0%) ** serror_rate:[0.0:81.94%,1.0:17.52%,0.99:0.06%,0.08:0.03%,0.05:0.03%,0.07:0.03%,0.06:0.03%,0.14:0.02%,0.04:0.02%,0.01:0.02%,0.09:0.02%,0.1:0.02%,0.03:0.02%,0.11:0.02%,0.13:0.02%,0.5:0.02%,0.12:0.02%,0.2:0.01%,0.25:0.01%,0.02:0.01%,0.17:0.01%,0.33:0.01%,0.15:0.01%,0.22:0.01%,0.18:0.01%,0.23:0.01%,0.16:0.01%,0.21:0.01%,0.19:0.0%,0.27:0.0%,0.98:0.0%,0.44:0.0%,0.29:0.0%,0.24:0.0%,0.97:0.0%,0.96:0.0%,0.31:0.0%,0.26:0.0%,0.67:0.0%,0.36:0.0%,0.65:0.0%,0.94:0.0%,0.28:0.0%,0.79:0.0%,0.95:0.0%,0.53:0.0%,0.81:0.0%,0.62:0.0%,0.85:0.0%,0.6:0.0%,0.64:0.0%,0.88:0.0%,0.68:0.0%,0.52:0.0%,0.66:0.0%,0.71:0.0%,0.93:0.0%,0.57:0.0%,0.63:0.0%,0.83:0.0%,0.78:0.0%,0.75:0.0%,0.51:0.0%,0.58:0.0%,0.56:0.0%,0.55:0.0%,0.3:0.0%,0.76:0.0%,0.86:0.0%,0.74:0.0%,0.35:0.0%,0.38:0.0%,0.54:0.0%,0.72:0.0%,0.84:0.0%,0.69:0.0%,0.61:0.0%,0.59:0.0%,0.42:0.0%,0.32:0.0%,0.82:0.0%,0.77:0.0%,0.7:0.0%,0.91:0.0%,0.92:0.0%,0.4:0.0%,0.73:0.0%,0.9:0.0%,0.34:0.0%,0.8:0.0%,0.89:0.0%,0.87:0.0%] ** srv_serror_rate:[0.0:82.12%,1.0:17.62%,0.03:0.03%,0.04:0.02%,0.05:0.02%,0.06:0.02%,0.02:0.02%,0.5:0.02%,0.08:0.01%,0.07:0.01%,0.25:0.01%,0.33:0.01%,0.17:0.01%,0.09:0.01%,0.1:0.01%,0.2:0.01%,0.11:0.01%,0.12:0.01%,0.14:0.01%,0.01:0.0%,0.67:0.0%,0.92:0.0%,0.18:0.0%,0.94:0.0%,0.95:0.0%,0.58:0.0%,0.88:0.0%,0.75:0.0%,0.19:0.0%,0.4:0.0%,0.76:0.0%,0.83:0.0%,0.91:0.0%,0.15:0.0%,0.22:0.0%,0.93:0.0%,0.85:0.0%,0.27:0.0%,0.86:0.0%,0.44:0.0%,0.35:0.0%,0.51:0.0%,0.36:0.0%,0.38:0.0%,0.21:0.0%,0.8:0.0%,0.9:0.0%,0.45:0.0%,0.16:0.0%,0.37:0.0%,0.23:0.0%] ** 
rerror_rate:[0.0:94.12%,1.0:5.46%,0.86:0.02%,0.87:0.02%,0.92:0.02%,0.25:0.02%,0.95:0.02%,0.9:0.02%,0.5:0.02%,0.91:0.02%,0.88:0.01%,0.96:0.01%,0.33:0.01%,0.2:0.01%,0.93:0.01%,0.94:0.01%,0.01:0.01%,0.89:0.01%,0.85:0.01%,0.99:0.01%,0.82:0.01%,0.77:0.01%,0.17:0.01%,0.97:0.01%,0.02:0.01%,0.98:0.01%,0.03:0.01%,0.8:0.01%,0.78:0.01%,0.76:0.01%,0.75:0.0%,0.79:0.0%,0.84:0.0%,0.14:0.0%,0.05:0.0%,0.73:0.0%,0.81:0.0%,0.06:0.0%,0.71:0.0%,0.83:0.0%,0.67:0.0%,0.56:0.0%,0.08:0.0%,0.04:0.0%,0.1:0.0%,0.09:0.0%,0.12:0.0%,0.07:0.0%,0.11:0.0%,0.69:0.0%,0.74:0.0%,0.64:0.0%,0.4:0.0%,0.72:0.0%,0.7:0.0%,0.6:0.0%,0.29:0.0%,0.22:0.0%,0.62:0.0%,0.65:0.0%,0.21:0.0%,0.68:0.0%,0.37:0.0%,0.19:0.0%,0.43:0.0%,0.58:0.0%,0.35:0.0%,0.24:0.0%,0.31:0.0%,0.23:0.0%,0.27:0.0%,0.28:0.0%,0.26:0.0%,0.36:0.0%,0.34:0.0%,0.66:0.0%,0.32:0.0%] ** srv_rerror_rate:[0.0:93.99%,1.0:5.69%,0.33:0.05%,0.5:0.04%,0.25:0.04%,0.2:0.03%,0.17:0.03%,0.14:0.01%,0.04:0.01%,0.03:0.01%,0.12:0.01%,0.02:0.01%,0.06:0.01%,0.05:0.01%,0.07:0.01%,0.4:0.01%,0.67:0.01%,0.08:0.01%,0.11:0.01%,0.29:0.01%,0.09:0.0%,0.1:0.0%,0.75:0.0%,0.6:0.0%,0.01:0.0%,0.22:0.0%,0.71:0.0%,0.86:0.0%,0.83:0.0%,0.73:0.0%,0.81:0.0%,0.88:0.0%,0.96:0.0%,0.92:0.0%,0.18:0.0%,0.43:0.0%,0.79:0.0%,0.93:0.0%,0.13:0.0%,0.27:0.0%,0.38:0.0%,0.94:0.0%,0.95:0.0%,0.37:0.0%,0.85:0.0%,0.8:0.0%,0.62:0.0%,0.82:0.0%,0.69:0.0%,0.21:0.0%,0.87:0.0%] ** 
same_srv_rate:[1.0:77.34%,0.06:2.27%,0.05:2.14%,0.04:2.06%,0.07:2.03%,0.03:1.93%,0.02:1.9%,0.01:1.77%,0.08:1.48%,0.09:1.01%,0.1:0.8%,0.0:0.73%,0.12:0.73%,0.11:0.67%,0.13:0.66%,0.14:0.51%,0.15:0.35%,0.5:0.29%,0.16:0.25%,0.17:0.17%,0.33:0.12%,0.18:0.1%,0.2:0.08%,0.19:0.07%,0.67:0.05%,0.25:0.04%,0.21:0.04%,0.99:0.03%,0.22:0.03%,0.24:0.02%,0.23:0.02%,0.4:0.02%,0.98:0.02%,0.75:0.02%,0.27:0.02%,0.26:0.01%,0.8:0.01%,0.29:0.01%,0.38:0.01%,0.86:0.01%,0.3:0.01%,0.31:0.01%,0.44:0.01%,0.83:0.01%,0.36:0.01%,0.28:0.01%,0.43:0.01%,0.6:0.01%,0.42:0.01%,0.97:0.01%,0.32:0.01%,0.35:0.01%,0.45:0.01%,0.47:0.01%,0.88:0.0%,0.48:0.0%,0.39:0.0%,0.52:0.0%,0.46:0.0%,0.37:0.0%,0.41:0.0%,0.89:0.0%,0.34:0.0%,0.92:0.0%,0.54:0.0%,0.53:0.0%,0.94:0.0%,0.95:0.0%,0.57:0.0%,0.96:0.0%,0.64:0.0%,0.71:0.0%,0.56:0.0%,0.62:0.0%,0.78:0.0%,0.9:0.0%,0.49:0.0%,0.91:0.0%,0.55:0.0%,0.65:0.0%,0.73:0.0%,0.58:0.0%,0.59:0.0%,0.93:0.0%,0.76:0.0%,0.51:0.0%,0.77:0.0%,0.82:0.0%,0.81:0.0%,0.74:0.0%,0.69:0.0%,0.79:0.0%,0.72:0.0%,0.7:0.0%,0.85:0.0%,0.68:0.0%,0.61:0.0%,0.63:0.0%,0.87:0.0%] ** diff_srv_rate:[0.0:77.33%,0.06:10.69%,0.07:5.83%,0.05:3.89%,0.08:0.66%,1.0:0.48%,0.04:0.19%,0.67:0.13%,0.5:0.12%,0.09:0.08%,0.6:0.06%,0.12:0.05%,0.1:0.04%,0.11:0.04%,0.14:0.03%,0.4:0.02%,0.13:0.02%,0.29:0.02%,0.01:0.02%,0.15:0.02%,0.03:0.02%,0.33:0.02%,0.17:0.02%,0.25:0.02%,0.75:0.01%,0.2:0.01%,0.18:0.01%,0.16:0.01%,0.19:0.01%,0.02:0.01%,0.22:0.01%,0.21:0.01%,0.27:0.01%,0.96:0.01%,0.31:0.01%,0.38:0.01%,0.24:0.01%,0.23:0.01%,0.43:0.0%,0.52:0.0%,0.95:0.0%,0.44:0.0%,0.53:0.0%,0.36:0.0%,0.8:0.0%,0.57:0.0%,0.42:0.0%,0.3:0.0%,0.26:0.0%,0.28:0.0%,0.56:0.0%,0.99:0.0%,0.54:0.0%,0.62:0.0%,0.37:0.0%,0.55:0.0%,0.35:0.0%,0.41:0.0%,0.47:0.0%,0.89:0.0%,0.32:0.0%,0.71:0.0%,0.58:0.0%,0.46:0.0%,0.39:0.0%,0.51:0.0%,0.45:0.0%,0.97:0.0%,0.83:0.0%,0.7:0.0%,0.69:0.0%,0.78:0.0%,0.74:0.0%,0.64:0.0%,0.73:0.0%,0.82:0.0%,0.88:0.0%,0.86:0.0%] ** 
srv_diff_host_rate:[0.0:92.99%,1.0:1.64%,0.12:0.31%,0.5:0.29%,0.67:0.29%,0.33:0.25%,0.11:0.24%,0.25:0.23%,0.1:0.22%,0.14:0.21%,0.17:0.21%,0.08:0.2%,0.15:0.2%,0.18:0.19%,0.2:0.19%,0.09:0.19%,0.4:0.19%,0.07:0.17%,0.29:0.17%,0.13:0.16%,0.22:0.16%,0.06:0.14%,0.02:0.1%,0.05:0.1%,0.01:0.08%,0.21:0.08%,0.19:0.08%,0.16:0.07%,0.75:0.07%,0.27:0.06%,0.04:0.06%,0.6:0.06%,0.3:0.06%,0.38:0.05%,0.43:0.05%,0.23:0.05%,0.03:0.03%,0.24:0.02%,0.36:0.02%,0.31:0.02%,0.8:0.02%,0.57:0.01%,0.44:0.01%,0.28:0.01%,0.26:0.01%,0.42:0.0%,0.45:0.0%,0.62:0.0%,0.83:0.0%,0.71:0.0%,0.56:0.0%,0.35:0.0%,0.32:0.0%,0.37:0.0%,0.41:0.0%,0.47:0.0%,0.86:0.0%,0.55:0.0%,0.54:0.0%,0.88:0.0%,0.64:0.0%,0.46:0.0%,0.7:0.0%,0.77:0.0%] ** dst_host_count:256 (0%) ** dst_host_srv_count:256 (0%) ** dst_host_same_srv_rate:101 (0%) ** dst_host_diff_srv_rate:101 (0%) ** dst_host_same_src_port_rate:101 (0%) ** dst_host_srv_diff_host_rate:[0.0:89.45%,0.02:2.38%,0.01:2.13%,0.04:1.35%,0.03:1.34%,0.05:0.94%,0.06:0.39%,0.07:0.31%,0.5:0.15%,0.08:0.14%,0.09:0.13%,0.15:0.09%,0.11:0.09%,0.16:0.08%,0.13:0.08%,0.1:0.08%,0.14:0.07%,1.0:0.07%,0.17:0.07%,0.2:0.07%,0.12:0.07%,0.18:0.07%,0.25:0.05%,0.22:0.05%,0.19:0.05%,0.21:0.05%,0.24:0.03%,0.23:0.02%,0.26:0.02%,0.27:0.02%,0.33:0.02%,0.29:0.02%,0.51:0.02%,0.4:0.01%,0.28:0.01%,0.3:0.01%,0.67:0.01%,0.52:0.01%,0.31:0.01%,0.32:0.01%,0.38:0.01%,0.53:0.0%,0.43:0.0%,0.44:0.0%,0.34:0.0%,0.6:0.0%,0.36:0.0%,0.57:0.0%,0.35:0.0%,0.54:0.0%,0.37:0.0%,0.56:0.0%,0.55:0.0%,0.42:0.0%,0.46:0.0%,0.45:0.0%,0.41:0.0%,0.48:0.0%,0.39:0.0%,0.8:0.0%,0.7:0.0%,0.47:0.0%,0.62:0.0%,0.75:0.0%,0.58:0.0%] ** 
dst_host_serror_rate:[0.0:80.93%,1.0:17.56%,0.01:0.74%,0.02:0.2%,0.03:0.09%,0.09:0.05%,0.04:0.04%,0.05:0.04%,0.07:0.03%,0.08:0.03%,0.06:0.02%,0.14:0.02%,0.15:0.02%,0.11:0.02%,0.13:0.02%,0.16:0.02%,0.1:0.02%,0.12:0.01%,0.18:0.01%,0.25:0.01%,0.2:0.01%,0.17:0.01%,0.33:0.01%,0.99:0.01%,0.19:0.01%,0.31:0.01%,0.27:0.01%,0.5:0.0%,0.22:0.0%,0.98:0.0%,0.35:0.0%,0.28:0.0%,0.53:0.0%,0.24:0.0%,0.96:0.0%,0.3:0.0%,0.26:0.0%,0.97:0.0%,0.29:0.0%,0.94:0.0%,0.42:0.0%,0.32:0.0%,0.56:0.0%,0.55:0.0%,0.95:0.0%,0.6:0.0%,0.23:0.0%,0.93:0.0%,0.34:0.0%,0.85:0.0%,0.89:0.0%,0.21:0.0%,0.92:0.0%,0.58:0.0%,0.43:0.0%,0.9:0.0%,0.57:0.0%,0.91:0.0%,0.49:0.0%,0.82:0.0%,0.36:0.0%,0.87:0.0%,0.45:0.0%,0.62:0.0%,0.65:0.0%,0.46:0.0%,0.38:0.0%,0.61:0.0%,0.47:0.0%,0.76:0.0%,0.81:0.0%,0.54:0.0%,0.64:0.0%,0.44:0.0%,0.48:0.0%,0.72:0.0%,0.39:0.0%,0.52:0.0%,0.51:0.0%,0.67:0.0%,0.84:0.0%,0.73:0.0%,0.4:0.0%,0.69:0.0%,0.79:0.0%,0.41:0.0%,0.68:0.0%,0.88:0.0%,0.77:0.0%,0.75:0.0%,0.7:0.0%,0.8:0.0%,0.59:0.0%,0.71:0.0%,0.37:0.0%,0.86:0.0%,0.66:0.0%,0.78:0.0%,0.74:0.0%,0.83:0.0%] ** dst_host_srv_serror_rate:[0.0:81.16%,1.0:17.61%,0.01:0.99%,0.02:0.14%,0.03:0.03%,0.04:0.02%,0.05:0.01%,0.06:0.01%,0.08:0.0%,0.5:0.0%,0.07:0.0%,0.1:0.0%,0.09:0.0%,0.11:0.0%,0.17:0.0%,0.14:0.0%,0.12:0.0%,0.96:0.0%,0.33:0.0%,0.67:0.0%,0.97:0.0%,0.25:0.0%,0.98:0.0%,0.4:0.0%,0.75:0.0%,0.48:0.0%,0.83:0.0%,0.16:0.0%,0.93:0.0%,0.69:0.0%,0.2:0.0%,0.91:0.0%,0.78:0.0%,0.95:0.0%,0.8:0.0%,0.92:0.0%,0.68:0.0%,0.29:0.0%,0.38:0.0%,0.88:0.0%,0.3:0.0%,0.32:0.0%,0.94:0.0%,0.57:0.0%,0.63:0.0%,0.62:0.0%,0.31:0.0%,0.85:0.0%,0.56:0.0%,0.81:0.0%,0.74:0.0%,0.86:0.0%,0.13:0.0%,0.23:0.0%,0.18:0.0%,0.64:0.0%,0.46:0.0%,0.52:0.0%,0.66:0.0%,0.6:0.0%,0.84:0.0%,0.55:0.0%,0.9:0.0%,0.15:0.0%,0.79:0.0%,0.82:0.0%,0.87:0.0%,0.47:0.0%,0.53:0.0%,0.45:0.0%,0.42:0.0%,0.24:0.0%] ** dst_host_rerror_rate:101 (0%) ** dst_host_srv_rerror_rate:101 (0%) ** 
outcome:[smurf.:56.84%,neptune.:21.7%,normal.:19.69%,back.:0.45%,satan.:0.32%,ipsweep.:0.25%,portsweep.:0.21%,warezclient.:0.21%,teardrop.:0.2%,pod.:0.05%,nmap.:0.05%,guess_passwd.:0.01%,buffer_overflow.:0.01%,land.:0.0%,warezmaster.:0.0%,imap.:0.0%,rootkit.:0.0%,loadmodule.:0.0%,ftp_write.:0.0%,multihop.:0.0%,phf.:0.0%,perl.:0.0%,spy.:0.0%] ###Markdown Encode the feature vector We use the same two functions provided earlier to preprocess the data. The first encodes Z-Scores, and the second creates dummy variables from categorical columns. ###Code # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Encode text values to dummy variables (i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = f"{name}-{x}" df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Again, just as we did for anomaly detection, we preprocess the data set. We convert all numeric values to Z-Scores, and we translate all categorical values to dummy variables.
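Before applying these two helpers to the full KDD99 frame, their effect can be verified on a tiny made-up DataFrame. This is a self-contained sketch; the three toy rows are hypothetical and stand in for the real columns.

```python
import pandas as pd

# Self-contained copies of the two helpers above
def encode_numeric_zscore(df, name, mean=None, sd=None):
    if mean is None:
        mean = df[name].mean()
    if sd is None:
        sd = df[name].std()
    df[name] = (df[name] - mean) / sd

def encode_text_dummy(df, name):
    dummies = pd.get_dummies(df[name])
    for x in dummies.columns:
        df[f"{name}-{x}"] = dummies[x]
    df.drop(name, axis=1, inplace=True)

# Hypothetical three-row frame, not KDD99 data
toy = pd.DataFrame({"duration": [0.0, 10.0, 20.0],
                    "protocol_type": ["tcp", "udp", "tcp"]})
encode_numeric_zscore(toy, "duration")   # mean 10, sample std 10
encode_text_dummy(toy, "protocol_type")  # replaced by one column per value
print(list(toy.columns))  # -> ['duration', 'protocol_type-tcp', 'protocol_type-udp']
print(toy["duration"].round(2).tolist())  # -> [-1.0, 0.0, 1.0]
```

Note that `encode_numeric_zscore` uses the sample standard deviation (pandas `std`, ddof=1), which is why [0, 10, 20] maps exactly to [-1, 0, 1].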
###Code # Now encode the feature vector encode_numeric_zscore(df, 'duration') encode_text_dummy(df, 'protocol_type') encode_text_dummy(df, 'service') encode_text_dummy(df, 'flag') encode_numeric_zscore(df, 'src_bytes') encode_numeric_zscore(df, 'dst_bytes') encode_text_dummy(df, 'land') encode_numeric_zscore(df, 'wrong_fragment') encode_numeric_zscore(df, 'urgent') encode_numeric_zscore(df, 'hot') encode_numeric_zscore(df, 'num_failed_logins') encode_text_dummy(df, 'logged_in') encode_numeric_zscore(df, 'num_compromised') encode_numeric_zscore(df, 'root_shell') encode_numeric_zscore(df, 'su_attempted') encode_numeric_zscore(df, 'num_root') encode_numeric_zscore(df, 'num_file_creations') encode_numeric_zscore(df, 'num_shells') encode_numeric_zscore(df, 'num_access_files') encode_numeric_zscore(df, 'num_outbound_cmds') encode_text_dummy(df, 'is_host_login') encode_text_dummy(df, 'is_guest_login') encode_numeric_zscore(df, 'count') encode_numeric_zscore(df, 'srv_count') encode_numeric_zscore(df, 'serror_rate') encode_numeric_zscore(df, 'srv_serror_rate') encode_numeric_zscore(df, 'rerror_rate') encode_numeric_zscore(df, 'srv_rerror_rate') encode_numeric_zscore(df, 'same_srv_rate') encode_numeric_zscore(df, 'diff_srv_rate') encode_numeric_zscore(df, 'srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_count') encode_numeric_zscore(df, 'dst_host_srv_count') encode_numeric_zscore(df, 'dst_host_same_srv_rate') encode_numeric_zscore(df, 'dst_host_diff_srv_rate') encode_numeric_zscore(df, 'dst_host_same_src_port_rate') encode_numeric_zscore(df, 'dst_host_srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_serror_rate') encode_numeric_zscore(df, 'dst_host_srv_serror_rate') encode_numeric_zscore(df, 'dst_host_rerror_rate') encode_numeric_zscore(df, 'dst_host_srv_rerror_rate') # display 5 rows df.dropna(inplace=True,axis=1) df[0:5] # This is the numeric feature vector, as it goes to the neural net # Convert to numpy - Classification x_columns = 
df.columns.drop('outcome') x = df[x_columns].values dummies = pd.get_dummies(df['outcome']) # Classification outcomes = dummies.columns num_classes = len(outcomes) y = dummies.values ###Output _____no_output_____ ###Markdown We will attempt to predict what type of attack is underway. The outcome column specifies the attack type. A value of normal indicates that there is no attack underway. We display the outcomes; some attack types are much rarer than others. ###Code df.groupby('outcome')['outcome'].count() ###Output _____no_output_____ ###Markdown Train the Neural Network We now train the neural network to classify the different KDD99 outcomes. The code provided here implements a relatively simple neural network with three hidden layers. We train it with the provided KDD99 data. ###Code import pandas as pd import numpy as np import os from sklearn.model_selection import train_test_split from sklearn import metrics from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping # Create a test/train split. 25% test # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) # Create neural net model = Sequential() model.add(Dense(10, input_dim=x.shape[1], activation='relu')) model.add(Dense(50, activation='relu')) model.add(Dense(10, activation='relu')) model.add(Dense(y.shape[1],activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto') model.fit(x_train,y_train,validation_data=(x_test,y_test), callbacks=[monitor],verbose=2,epochs=1000) ###Output WARNING: Logging before flag parsing goes to stderr.
W0810 23:45:15.921615 140735657337728 deprecation.py:323] From /Users/jheaton/miniconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow_core/python/ops/math_grad.py:1366: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where W0810 23:45:16.019797 140735657337728 deprecation.py:323] From /Users/jheaton/miniconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py:468: BaseResourceVariable.constraint (from tensorflow.python.ops.resource_variable_ops) is deprecated and will be removed in a future version. Instructions for updating: Apply a constraint manually following the optimizer update step. ###Markdown We can now evaluate the neural network. As you can see, the neural network achieves a 99% accuracy rate. ###Code # Measure accuracy pred = model.predict(x_test) pred = np.argmax(pred,axis=1) y_eval = np.argmax(y_test,axis=1) score = metrics.accuracy_score(y_eval, pred) print("Validation score: {}".format(score)) ###Output Validation score: 0.9971418392628698 ###Markdown T81-558: Applications of Deep Neural Networks**Module 14: Other Neural Network Techniques*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). 
Module 14 Video Material* Part 14.1: What is AutoML [[Video]](https://www.youtube.com/watch?v=TFUysIR5AB0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_01_automl.ipynb)* Part 14.2: Using Denoising AutoEncoders in Keras [[Video]](https://www.youtube.com/watch?v=4bTSu6_fucc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_02_auto_encode.ipynb)* Part 14.3: Training an Intrusion Detection System with KDD99 [[Video]](https://www.youtube.com/watch?v=1ySn6h2A68I&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_03_anomaly.ipynb)* **Part 14.4: Anomaly Detection in Keras** [[Video]](https://www.youtube.com/watch?v=VgyKQ5MTDFc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_04_ids_kdd99.ipynb)* Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_05_new_tech.ipynb) Part 14.4: Training an Intrusion Detection System with KDD99 The [KDD-99 dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html) is very famous in the security field and almost a "hello world" of Intrusion Detection Systems (IDS) in machine learning. An intrusion detection system (IDS) is a program that monitors computers and network systems for malicious activity or policy violations. Any intrusion activity or violation is typically reported either to an administrator or collected centrally. IDS types range in scope from single computers to large networks. Although the KDD99 dataset is over 20 years old, it is still widely used to demonstrate Intrusion Detection Systems (IDS).
KDD99 is the data set used for The Third International Knowledge Discovery and Data Mining Tools Competition, which was held in conjunction with KDD-99, the Fifth International Conference on Knowledge Discovery and Data Mining. The competition task was to build a network intrusion detector, a predictive model capable of distinguishing between "bad" connections, called intrusions or attacks, and "good" normal connections. This database contains a standard set of data to be audited, including a wide variety of intrusions simulated in a military network environment. Read in Raw KDD-99 Dataset The following code reads the KDD99 CSV dataset into a Pandas data frame. The standard format of KDD99 does not include column names. Because of that, the program adds them. ###Code import pandas as pd from tensorflow.keras.utils import get_file try: path = get_file('kddcup.data_10_percent.gz', origin= 'http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz') except: print('Error downloading') raise print(path) # This file is a CSV, just no CSV extension or headers # Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html df = pd.read_csv(path, header=None) print("Read {} rows.".format(len(df))) # df = df.sample(frac=0.1, replace=False) # Uncomment this line to # sample only 10% of the dataset df.dropna(inplace=True,axis=1) # For now, just drop NA's # (rows with missing values) # The CSV file has no column heads, so add them df.columns = [ 'duration', 'protocol_type', 'service', 'flag', 'src_bytes', 'dst_bytes', 'land', 'wrong_fragment', 'urgent', 'hot', 'num_failed_logins', 'logged_in', 'num_compromised', 'root_shell', 'su_attempted', 'num_root', 'num_file_creations', 'num_shells', 'num_access_files', 'num_outbound_cmds', 'is_host_login', 'is_guest_login', 'count', 'srv_count', 'serror_rate', 'srv_serror_rate', 'rerror_rate', 'srv_rerror_rate', 'same_srv_rate', 'diff_srv_rate', 'srv_diff_host_rate', 'dst_host_count', 'dst_host_srv_count',
'dst_host_same_srv_rate', 'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate', 'dst_host_srv_diff_host_rate', 'dst_host_serror_rate', 'dst_host_srv_serror_rate', 'dst_host_rerror_rate', 'dst_host_srv_rerror_rate', 'outcome' ] pd.set_option('display.max_columns', 5) pd.set_option('display.max_rows', 5) # display 5 rows display(df[0:5]) ###Output C:\Users\jeffh\.keras\datasets\kddcup.data_10_percent.gz Read 494021 rows. ###Markdown Analyzing a Dataset Before we preprocess the KDD99 dataset, let's have a look at the individual columns and distributions. You can use the following script to give a high-level overview of how a dataset appears. ###Code import pandas as pd import os import numpy as np from sklearn import metrics from scipy.stats import zscore def expand_categories(values): result = [] s = values.value_counts() t = float(len(values)) for v in s.index: result.append("{}:{}%".format(v,round(100*(s[v]/t),2))) return "[{}]".format(",".join(result)) def analyze(df): print() cols = df.columns.values total = float(len(df)) print("{} rows".format(int(total))) for col in cols: uniques = df[col].unique() unique_count = len(uniques) if unique_count>100: print("** {}:{} ({}%)".format(col,unique_count,int(((unique_count)/total)*100))) else: print("** {}:{}".format(col,expand_categories(df[col]))) ###Output _____no_output_____ ###Markdown The analysis looks at how many unique values are present. For example, duration, which is a numeric value, has 2495 unique values; that unique count rounds to 0% of the total number of rows. A text/categorical value such as protocol_type only has a few unique values, and the program shows the percentages of each. Columns with a large number of unique values do not have their item counts shown to save display space.
###Code # Analyze KDD-99 analyze(df) ###Output 494021 rows ** duration:2495 (0%) ** protocol_type:[icmp:57.41%,tcp:38.47%,udp:4.12%] ** service:[ecr_i:56.96%,private:22.45%,http:13.01%,smtp:1.97%,other:1.46%,domain_u:1.19%,ftp_data:0.96%,eco_i:0.33%,ftp:0.16%,finger:0.14%,urp_i:0.11%,telnet:0.1%,ntp_u:0.08%,auth:0.07%,pop_3:0.04%,time:0.03%,csnet_ns:0.03%,remote_job:0.02%,gopher:0.02%,imap4:0.02%,domain:0.02%,discard:0.02%,iso_tsap:0.02%,systat:0.02%,shell:0.02%,echo:0.02%,rje:0.02%,whois:0.02%,sql_net:0.02%,printer:0.02%,courier:0.02%,nntp:0.02%,netbios_ssn:0.02%,mtp:0.02%,sunrpc:0.02%,klogin:0.02%,vmnet:0.02%,bgp:0.02%,uucp:0.02%,uucp_path:0.02%,ssh:0.02%,nnsp:0.02%,supdup:0.02%,hostnames:0.02%,login:0.02%,efs:0.02%,daytime:0.02%,link:0.02%,netbios_ns:0.02%,ldap:0.02%,pop_2:0.02%,netbios_dgm:0.02%,http_443:0.02%,exec:0.02%,name:0.02%,kshell:0.02%,ctf:0.02%,netstat:0.02%,Z39_50:0.02%,IRC:0.01%,urh_i:0.0%,X11:0.0%,tim_i:0.0%,tftp_u:0.0%,red_i:0.0%,pm_dump:0.0%] ** flag:[SF:76.6%,S0:17.61%,REJ:5.44%,RSTR:0.18%,RSTO:0.12%,SH:0.02%,S1:0.01%,S2:0.0%,RSTOS0:0.0%,S3:0.0%,OTH:0.0%] ** src_bytes:3300 (0%) ** dst_bytes:10725 (2%) ** land:[0:100.0%,1:0.0%] ** wrong_fragment:[0:99.75%,3:0.2%,1:0.05%] ** urgent:[0:100.0%,1:0.0%,3:0.0%,2:0.0%] ** hot:[0:99.35%,2:0.44%,28:0.06%,1:0.05%,4:0.02%,6:0.02%,5:0.01%,3:0.01%,14:0.01%,30:0.01%,22:0.01%,19:0.0%,18:0.0%,24:0.0%,20:0.0%,7:0.0%,17:0.0%,12:0.0%,15:0.0%,16:0.0%,10:0.0%,9:0.0%] ** num_failed_logins:[0:99.99%,1:0.01%,2:0.0%,5:0.0%,4:0.0%,3:0.0%] ** logged_in:[0:85.18%,1:14.82%] ** num_compromised:[0:99.55%,1:0.44%,2:0.0%,4:0.0%,3:0.0%,6:0.0%,5:0.0%,7:0.0%,12:0.0%,9:0.0%,11:0.0%,767:0.0%,238:0.0%,16:0.0%,18:0.0%,275:0.0%,21:0.0%,22:0.0%,281:0.0%,38:0.0%,102:0.0%,884:0.0%,13:0.0%] ** root_shell:[0:99.99%,1:0.01%] ** su_attempted:[0:100.0%,2:0.0%,1:0.0%] ** num_root:[0:99.88%,1:0.05%,9:0.03%,6:0.03%,2:0.0%,5:0.0%,4:0.0%,3:0.0%,119:0.0%,7:0.0%,993:0.0%,268:0.0%,14:0.0%,16:0.0%,278:0.0%,39:0.0%,306:0.0%,54:0.0%,857:0.0%,12:0.0%] ** 
num_file_creations:[0:99.95%,1:0.04%,2:0.01%,4:0.0%,16:0.0%,9:0.0%,5:0.0%,7:0.0%,8:0.0%,28:0.0%,25:0.0%,12:0.0%,14:0.0%,15:0.0%,20:0.0%,21:0.0%,22:0.0%,10:0.0%] ** num_shells:[0:99.99%,1:0.01%,2:0.0%] ** num_access_files:[0:99.91%,1:0.09%,2:0.01%,3:0.0%,8:0.0%,6:0.0%,4:0.0%] ** num_outbound_cmds:[0:100.0%] ** is_host_login:[0:100.0%] ** is_guest_login:[0:99.86%,1:0.14%] ** count:490 (0%) ** srv_count:470 (0%) ** serror_rate:[0.0:81.94%,1.0:17.52%,0.99:0.06%,0.08:0.03%,0.05:0.03%,0.07:0.03%,0.06:0.03%,0.14:0.02%,0.04:0.02%,0.01:0.02%,0.09:0.02%,0.1:0.02%,0.03:0.02%,0.11:0.02%,0.13:0.02%,0.5:0.02%,0.12:0.02%,0.2:0.01%,0.25:0.01%,0.02:0.01%,0.17:0.01%,0.33:0.01%,0.15:0.01%,0.22:0.01%,0.18:0.01%,0.23:0.01%,0.16:0.01%,0.21:0.01%,0.19:0.0%,0.27:0.0%,0.98:0.0%,0.44:0.0%,0.29:0.0%,0.24:0.0%,0.97:0.0%,0.96:0.0%,0.31:0.0%,0.26:0.0%,0.67:0.0%,0.36:0.0%,0.65:0.0%,0.94:0.0%,0.28:0.0%,0.79:0.0%,0.95:0.0%,0.53:0.0%,0.81:0.0%,0.62:0.0%,0.85:0.0%,0.6:0.0%,0.64:0.0%,0.88:0.0%,0.68:0.0%,0.52:0.0%,0.66:0.0%,0.71:0.0%,0.93:0.0%,0.57:0.0%,0.63:0.0%,0.83:0.0%,0.78:0.0%,0.75:0.0%,0.51:0.0%,0.58:0.0%,0.56:0.0%,0.55:0.0%,0.3:0.0%,0.76:0.0%,0.86:0.0%,0.74:0.0%,0.35:0.0%,0.38:0.0%,0.54:0.0%,0.72:0.0%,0.84:0.0%,0.69:0.0%,0.61:0.0%,0.59:0.0%,0.42:0.0%,0.32:0.0%,0.82:0.0%,0.77:0.0%,0.7:0.0%,0.91:0.0%,0.92:0.0%,0.4:0.0%,0.73:0.0%,0.9:0.0%,0.34:0.0%,0.8:0.0%,0.89:0.0%,0.87:0.0%] ** srv_serror_rate:[0.0:82.12%,1.0:17.62%,0.03:0.03%,0.04:0.02%,0.05:0.02%,0.06:0.02%,0.02:0.02%,0.5:0.02%,0.08:0.01%,0.07:0.01%,0.25:0.01%,0.33:0.01%,0.17:0.01%,0.09:0.01%,0.1:0.01%,0.2:0.01%,0.11:0.01%,0.12:0.01%,0.14:0.01%,0.01:0.0%,0.67:0.0%,0.92:0.0%,0.18:0.0%,0.94:0.0%,0.95:0.0%,0.58:0.0%,0.88:0.0%,0.75:0.0%,0.19:0.0%,0.4:0.0%,0.76:0.0%,0.83:0.0%,0.91:0.0%,0.15:0.0%,0.22:0.0%,0.93:0.0%,0.85:0.0%,0.27:0.0%,0.86:0.0%,0.44:0.0%,0.35:0.0%,0.51:0.0%,0.36:0.0%,0.38:0.0%,0.21:0.0%,0.8:0.0%,0.9:0.0%,0.45:0.0%,0.16:0.0%,0.37:0.0%,0.23:0.0%] ** 
###Markdown T81-558: Applications of Deep Neural Networks**Module 14: Other Neural Network Techniques*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). 
Module 14 Video Material* Part 14.1: What is AutoML [[Video]](https://www.youtube.com/watch?v=TFUysIR5AB0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_01_automl.ipynb)* Part 14.2: Using Denoising AutoEncoders in Keras [[Video]](https://www.youtube.com/watch?v=4bTSu6_fucc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_02_auto_encode.ipynb)* Part 14.3: Anomaly Detection in Keras [[Video]](https://www.youtube.com/watch?v=1ySn6h2A68I&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_03_anomaly.ipynb)* **Part 14.4: Training an Intrusion Detection System with KDD99** [[Video]](https://www.youtube.com/watch?v=VgyKQ5MTDFc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_04_ids_kdd99.ipynb)* Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](t81_558_class_14_05_new_tech.ipynb) Part 14.4: Training an Intrusion Detection System with KDD99The [KDD-99 dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html) is very famous in the security field and almost a "hello world" of intrusion detection in machine learning. An intrusion detection system (IDS) is a program that monitors computers and network systems for malicious activity or policy violations. Any intrusion activity or violation is typically reported either to an administrator or collected centrally. IDS types range in scope from single computers to large networks. Although the KDD99 dataset is over 20 years old, it is still widely used to demonstrate intrusion detection systems. KDD99 is the data set used for the Third International Knowledge Discovery and Data Mining Tools Competition, which was held in conjunction with KDD-99, the Fifth International Conference on Knowledge Discovery and Data Mining. 
The competition task was to build a network intrusion detector, a predictive model capable of distinguishing between "bad" connections, called intrusions or attacks, and "good" normal connections. This database contains a standard set of data to be audited, including a wide variety of intrusions simulated in a military network environment. Read in Raw KDD-99 DatasetThe following code reads the KDD99 CSV dataset into a Pandas data frame. The standard format of KDD99 does not include column names. Because of that, the program adds them. ###Code import pandas as pd from tensorflow.keras.utils import get_file try: path = get_file('kddcup.data_10_percent.gz', origin= 'http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz') except: print('Error downloading') raise print(path) # This file is a CSV, just no CSV extension or headers # Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html df = pd.read_csv(path, header=None) print("Read {} rows.".format(len(df))) # df = df.sample(frac=0.1, replace=False) # Uncomment this line to # sample only 10% of the dataset df.dropna(inplace=True,axis=1) # For now, just drop NA's # (rows with missing values) # The CSV file has no column heads, so add them df.columns = [ 'duration', 'protocol_type', 'service', 'flag', 'src_bytes', 'dst_bytes', 'land', 'wrong_fragment', 'urgent', 'hot', 'num_failed_logins', 'logged_in', 'num_compromised', 'root_shell', 'su_attempted', 'num_root', 'num_file_creations', 'num_shells', 'num_access_files', 'num_outbound_cmds', 'is_host_login', 'is_guest_login', 'count', 'srv_count', 'serror_rate', 'srv_serror_rate', 'rerror_rate', 'srv_rerror_rate', 'same_srv_rate', 'diff_srv_rate', 'srv_diff_host_rate', 'dst_host_count', 'dst_host_srv_count', 'dst_host_same_srv_rate', 'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate', 'dst_host_srv_diff_host_rate', 'dst_host_serror_rate', 'dst_host_srv_serror_rate', 'dst_host_rerror_rate', 'dst_host_srv_rerror_rate', 'outcome' ] 
pd.set_option('display.max_columns', 5) pd.set_option('display.max_rows', 5) # display 5 rows display(df[0:5]) ###Output C:\Users\jeffh\.keras\datasets\kddcup.data_10_percent.gz Read 494021 rows. ###Markdown Analyzing a DatasetBefore we preprocess the KDD99 dataset, let's have a look at the individual columns and distributions. You can use the following script to give a high-level overview of how a dataset appears. ###Code import pandas as pd def expand_categories(values): result = [] s = values.value_counts() t = float(len(values)) for v in s.index: result.append("{}:{}%".format(v,round(100*(s[v]/t),2))) return "[{}]".format(",".join(result)) def analyze(df): print() cols = df.columns.values total = float(len(df)) print("{} rows".format(int(total))) for col in cols: uniques = df[col].unique() unique_count = len(uniques) if unique_count>100: print("** {}:{} ({}%)".format(col,unique_count,int(((unique_count)/total)*100))) else: print("** {}:{}".format(col,expand_categories(df[col]))) ###Output _____no_output_____ ###Markdown The analysis looks at how many unique values are present. For example, duration, which is a numeric value, has 2495 unique values, which amount to roughly 0% of the total rows. A text/categorical value such as protocol_type only has a few unique values, and the program shows the percentages of each. Columns with a large number of unique values do not have their item counts shown to save display space. 
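To see what expand_categories produces before running it on the full dataset, it can be exercised on a small, made-up Series (the data here is invented for illustration; the function is the same one defined above):

```python
import pandas as pd

# Same helper as in the cell above: summarize a column as
# "value:percent" pairs, most frequent first.
def expand_categories(values):
    result = []
    s = values.value_counts()
    t = float(len(values))
    for v in s.index:
        result.append("{}:{}%".format(v, round(100 * (s[v] / t), 2)))
    return "[{}]".format(",".join(result))

# Invented example data, not from KDD99.
protocol = pd.Series(["tcp", "tcp", "tcp", "udp"])
print(expand_categories(protocol))  # [tcp:75.0%,udp:25.0%]
```

This is the same "[value:percent,...]" format that appears in the analyze(df) output below.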
###Code # Analyze KDD-99 analyze(df) ###Output 494021 rows ** duration:2495 (0%) ** protocol_type:[icmp:57.41%,tcp:38.47%,udp:4.12%] ** service:[ecr_i:56.96%,private:22.45%,http:13.01%,smtp:1.97%,other:1.46%,domain_u:1.19%,ftp_data:0.96%,eco_i:0.33%,ftp:0.16%,finger:0.14%,urp_i:0.11%,telnet:0.1%,ntp_u:0.08%,auth:0.07%,pop_3:0.04%,time:0.03%,csnet_ns:0.03%,remote_job:0.02%,gopher:0.02%,imap4:0.02%,domain:0.02%,discard:0.02%,iso_tsap:0.02%,systat:0.02%,shell:0.02%,echo:0.02%,rje:0.02%,whois:0.02%,sql_net:0.02%,printer:0.02%,courier:0.02%,nntp:0.02%,netbios_ssn:0.02%,mtp:0.02%,sunrpc:0.02%,klogin:0.02%,vmnet:0.02%,bgp:0.02%,uucp:0.02%,uucp_path:0.02%,ssh:0.02%,nnsp:0.02%,supdup:0.02%,hostnames:0.02%,login:0.02%,efs:0.02%,daytime:0.02%,link:0.02%,netbios_ns:0.02%,ldap:0.02%,pop_2:0.02%,netbios_dgm:0.02%,http_443:0.02%,exec:0.02%,name:0.02%,kshell:0.02%,ctf:0.02%,netstat:0.02%,Z39_50:0.02%,IRC:0.01%,urh_i:0.0%,X11:0.0%,tim_i:0.0%,tftp_u:0.0%,red_i:0.0%,pm_dump:0.0%] ** flag:[SF:76.6%,S0:17.61%,REJ:5.44%,RSTR:0.18%,RSTO:0.12%,SH:0.02%,S1:0.01%,S2:0.0%,RSTOS0:0.0%,S3:0.0%,OTH:0.0%] ** src_bytes:3300 (0%) ** dst_bytes:10725 (2%) ** land:[0:100.0%,1:0.0%] ** wrong_fragment:[0:99.75%,3:0.2%,1:0.05%] ** urgent:[0:100.0%,1:0.0%,3:0.0%,2:0.0%] ** hot:[0:99.35%,2:0.44%,28:0.06%,1:0.05%,4:0.02%,6:0.02%,5:0.01%,3:0.01%,14:0.01%,30:0.01%,22:0.01%,19:0.0%,18:0.0%,24:0.0%,20:0.0%,7:0.0%,17:0.0%,12:0.0%,15:0.0%,16:0.0%,10:0.0%,9:0.0%] ** num_failed_logins:[0:99.99%,1:0.01%,2:0.0%,5:0.0%,4:0.0%,3:0.0%] ** logged_in:[0:85.18%,1:14.82%] ** num_compromised:[0:99.55%,1:0.44%,2:0.0%,4:0.0%,3:0.0%,6:0.0%,5:0.0%,7:0.0%,12:0.0%,9:0.0%,11:0.0%,767:0.0%,238:0.0%,16:0.0%,18:0.0%,275:0.0%,21:0.0%,22:0.0%,281:0.0%,38:0.0%,102:0.0%,884:0.0%,13:0.0%] ** root_shell:[0:99.99%,1:0.01%] ** su_attempted:[0:100.0%,2:0.0%,1:0.0%] ** num_root:[0:99.88%,1:0.05%,9:0.03%,6:0.03%,2:0.0%,5:0.0%,4:0.0%,3:0.0%,119:0.0%,7:0.0%,993:0.0%,268:0.0%,14:0.0%,16:0.0%,278:0.0%,39:0.0%,306:0.0%,54:0.0%,857:0.0%,12:0.0%] ** 
num_file_creations:[0:99.95%,1:0.04%,2:0.01%,4:0.0%,16:0.0%,9:0.0%,5:0.0%,7:0.0%,8:0.0%,28:0.0%,25:0.0%,12:0.0%,14:0.0%,15:0.0%,20:0.0%,21:0.0%,22:0.0%,10:0.0%] ** num_shells:[0:99.99%,1:0.01%,2:0.0%] ** num_access_files:[0:99.91%,1:0.09%,2:0.01%,3:0.0%,8:0.0%,6:0.0%,4:0.0%] ** num_outbound_cmds:[0:100.0%] ** is_host_login:[0:100.0%] ** is_guest_login:[0:99.86%,1:0.14%] ** count:490 (0%) ** srv_count:470 (0%) ** serror_rate:[0.0:81.94%,1.0:17.52%,0.99:0.06%,0.08:0.03%,0.05:0.03%,0.07:0.03%,0.06:0.03%,0.14:0.02%,0.04:0.02%,0.01:0.02%,0.09:0.02%,0.1:0.02%,0.03:0.02%,0.11:0.02%,0.13:0.02%,0.5:0.02%,0.12:0.02%,0.2:0.01%,0.25:0.01%,0.02:0.01%,0.17:0.01%,0.33:0.01%,0.15:0.01%,0.22:0.01%,0.18:0.01%,0.23:0.01%,0.16:0.01%,0.21:0.01%,0.19:0.0%,0.27:0.0%,0.98:0.0%,0.44:0.0%,0.29:0.0%,0.24:0.0%,0.97:0.0%,0.96:0.0%,0.31:0.0%,0.26:0.0%,0.67:0.0%,0.36:0.0%,0.65:0.0%,0.94:0.0%,0.28:0.0%,0.79:0.0%,0.95:0.0%,0.53:0.0%,0.81:0.0%,0.62:0.0%,0.85:0.0%,0.6:0.0%,0.64:0.0%,0.88:0.0%,0.68:0.0%,0.52:0.0%,0.66:0.0%,0.71:0.0%,0.93:0.0%,0.57:0.0%,0.63:0.0%,0.83:0.0%,0.78:0.0%,0.75:0.0%,0.51:0.0%,0.58:0.0%,0.56:0.0%,0.55:0.0%,0.3:0.0%,0.76:0.0%,0.86:0.0%,0.74:0.0%,0.35:0.0%,0.38:0.0%,0.54:0.0%,0.72:0.0%,0.84:0.0%,0.69:0.0%,0.61:0.0%,0.59:0.0%,0.42:0.0%,0.32:0.0%,0.82:0.0%,0.77:0.0%,0.7:0.0%,0.91:0.0%,0.92:0.0%,0.4:0.0%,0.73:0.0%,0.9:0.0%,0.34:0.0%,0.8:0.0%,0.89:0.0%,0.87:0.0%] ** srv_serror_rate:[0.0:82.12%,1.0:17.62%,0.03:0.03%,0.04:0.02%,0.05:0.02%,0.06:0.02%,0.02:0.02%,0.5:0.02%,0.08:0.01%,0.07:0.01%,0.25:0.01%,0.33:0.01%,0.17:0.01%,0.09:0.01%,0.1:0.01%,0.2:0.01%,0.11:0.01%,0.12:0.01%,0.14:0.01%,0.01:0.0%,0.67:0.0%,0.92:0.0%,0.18:0.0%,0.94:0.0%,0.95:0.0%,0.58:0.0%,0.88:0.0%,0.75:0.0%,0.19:0.0%,0.4:0.0%,0.76:0.0%,0.83:0.0%,0.91:0.0%,0.15:0.0%,0.22:0.0%,0.93:0.0%,0.85:0.0%,0.27:0.0%,0.86:0.0%,0.44:0.0%,0.35:0.0%,0.51:0.0%,0.36:0.0%,0.38:0.0%,0.21:0.0%,0.8:0.0%,0.9:0.0%,0.45:0.0%,0.16:0.0%,0.37:0.0%,0.23:0.0%] ** 
rerror_rate:[0.0:94.12%,1.0:5.46%,0.86:0.02%,0.87:0.02%,0.92:0.02%,0.25:0.02%,0.95:0.02%,0.9:0.02%,0.5:0.02%,0.91:0.02%,0.88:0.01%,0.96:0.01%,0.33:0.01%,0.2:0.01%,0.93:0.01%,0.94:0.01%,0.01:0.01%,0.89:0.01%,0.85:0.01%,0.99:0.01%,0.82:0.01%,0.77:0.01%,0.17:0.01%,0.97:0.01%,0.02:0.01%,0.98:0.01%,0.03:0.01%,0.8:0.01%,0.78:0.01%,0.76:0.01%,0.75:0.0%,0.79:0.0%,0.84:0.0%,0.14:0.0%,0.05:0.0%,0.73:0.0%,0.81:0.0%,0.06:0.0%,0.71:0.0%,0.83:0.0%,0.67:0.0%,0.56:0.0%,0.08:0.0%,0.04:0.0%,0.1:0.0%,0.09:0.0%,0.12:0.0%,0.07:0.0%,0.11:0.0%,0.69:0.0%,0.74:0.0%,0.64:0.0%,0.4:0.0%,0.72:0.0%,0.7:0.0%,0.6:0.0%,0.29:0.0%,0.22:0.0%,0.62:0.0%,0.65:0.0%,0.21:0.0%,0.68:0.0%,0.37:0.0%,0.19:0.0%,0.43:0.0%,0.58:0.0%,0.35:0.0%,0.24:0.0%,0.31:0.0%,0.23:0.0%,0.27:0.0%,0.28:0.0%,0.26:0.0%,0.36:0.0%,0.34:0.0%,0.66:0.0%,0.32:0.0%] ** srv_rerror_rate:[0.0:93.99%,1.0:5.69%,0.33:0.05%,0.5:0.04%,0.25:0.04%,0.2:0.03%,0.17:0.03%,0.14:0.01%,0.04:0.01%,0.03:0.01%,0.12:0.01%,0.02:0.01%,0.06:0.01%,0.05:0.01%,0.07:0.01%,0.4:0.01%,0.67:0.01%,0.08:0.01%,0.11:0.01%,0.29:0.01%,0.09:0.0%,0.1:0.0%,0.75:0.0%,0.6:0.0%,0.01:0.0%,0.22:0.0%,0.71:0.0%,0.86:0.0%,0.83:0.0%,0.73:0.0%,0.81:0.0%,0.88:0.0%,0.96:0.0%,0.92:0.0%,0.18:0.0%,0.43:0.0%,0.79:0.0%,0.93:0.0%,0.13:0.0%,0.27:0.0%,0.38:0.0%,0.94:0.0%,0.95:0.0%,0.37:0.0%,0.85:0.0%,0.8:0.0%,0.62:0.0%,0.82:0.0%,0.69:0.0%,0.21:0.0%,0.87:0.0%] ** 
same_srv_rate:[1.0:77.34%,0.06:2.27%,0.05:2.14%,0.04:2.06%,0.07:2.03%,0.03:1.93%,0.02:1.9%,0.01:1.77%,0.08:1.48%,0.09:1.01%,0.1:0.8%,0.0:0.73%,0.12:0.73%,0.11:0.67%,0.13:0.66%,0.14:0.51%,0.15:0.35%,0.5:0.29%,0.16:0.25%,0.17:0.17%,0.33:0.12%,0.18:0.1%,0.2:0.08%,0.19:0.07%,0.67:0.05%,0.25:0.04%,0.21:0.04%,0.99:0.03%,0.22:0.03%,0.24:0.02%,0.23:0.02%,0.4:0.02%,0.98:0.02%,0.75:0.02%,0.27:0.02%,0.26:0.01%,0.8:0.01%,0.29:0.01%,0.38:0.01%,0.86:0.01%,0.3:0.01%,0.31:0.01%,0.44:0.01%,0.83:0.01%,0.36:0.01%,0.28:0.01%,0.43:0.01%,0.6:0.01%,0.42:0.01%,0.97:0.01%,0.32:0.01%,0.35:0.01%,0.45:0.01%,0.47:0.01%,0.88:0.0%,0.48:0.0%,0.39:0.0%,0.52:0.0%,0.46:0.0%,0.37:0.0%,0.41:0.0%,0.89:0.0%,0.34:0.0%,0.92:0.0%,0.54:0.0%,0.53:0.0%,0.94:0.0%,0.95:0.0%,0.57:0.0%,0.96:0.0%,0.64:0.0%,0.71:0.0%,0.56:0.0%,0.62:0.0%,0.78:0.0%,0.9:0.0%,0.49:0.0%,0.91:0.0%,0.55:0.0%,0.65:0.0%,0.73:0.0%,0.58:0.0%,0.59:0.0%,0.93:0.0%,0.76:0.0%,0.51:0.0%,0.77:0.0%,0.82:0.0%,0.81:0.0%,0.74:0.0%,0.69:0.0%,0.79:0.0%,0.72:0.0%,0.7:0.0%,0.85:0.0%,0.68:0.0%,0.61:0.0%,0.63:0.0%,0.87:0.0%] ** diff_srv_rate:[0.0:77.33%,0.06:10.69%,0.07:5.83%,0.05:3.89%,0.08:0.66%,1.0:0.48%,0.04:0.19%,0.67:0.13%,0.5:0.12%,0.09:0.08%,0.6:0.06%,0.12:0.05%,0.1:0.04%,0.11:0.04%,0.14:0.03%,0.4:0.02%,0.13:0.02%,0.29:0.02%,0.01:0.02%,0.15:0.02%,0.03:0.02%,0.33:0.02%,0.17:0.02%,0.25:0.02%,0.75:0.01%,0.2:0.01%,0.18:0.01%,0.16:0.01%,0.19:0.01%,0.02:0.01%,0.22:0.01%,0.21:0.01%,0.27:0.01%,0.96:0.01%,0.31:0.01%,0.38:0.01%,0.24:0.01%,0.23:0.01%,0.43:0.0%,0.52:0.0%,0.95:0.0%,0.44:0.0%,0.53:0.0%,0.36:0.0%,0.8:0.0%,0.57:0.0%,0.42:0.0%,0.3:0.0%,0.26:0.0%,0.28:0.0%,0.56:0.0%,0.99:0.0%,0.54:0.0%,0.62:0.0%,0.37:0.0%,0.55:0.0%,0.35:0.0%,0.41:0.0%,0.47:0.0%,0.89:0.0%,0.32:0.0%,0.71:0.0%,0.58:0.0%,0.46:0.0%,0.39:0.0%,0.51:0.0%,0.45:0.0%,0.97:0.0%,0.83:0.0%,0.7:0.0%,0.69:0.0%,0.78:0.0%,0.74:0.0%,0.64:0.0%,0.73:0.0%,0.82:0.0%,0.88:0.0%,0.86:0.0%] ** 
srv_diff_host_rate:[0.0:92.99%,1.0:1.64%,0.12:0.31%,0.5:0.29%,0.67:0.29%,0.33:0.25%,0.11:0.24%,0.25:0.23%,0.1:0.22%,0.14:0.21%,0.17:0.21%,0.08:0.2%,0.15:0.2%,0.18:0.19%,0.2:0.19%,0.09:0.19%,0.4:0.19%,0.07:0.17%,0.29:0.17%,0.13:0.16%,0.22:0.16%,0.06:0.14%,0.02:0.1%,0.05:0.1%,0.01:0.08%,0.21:0.08%,0.19:0.08%,0.16:0.07%,0.75:0.07%,0.27:0.06%,0.04:0.06%,0.6:0.06%,0.3:0.06%,0.38:0.05%,0.43:0.05%,0.23:0.05%,0.03:0.03%,0.24:0.02%,0.36:0.02%,0.31:0.02%,0.8:0.02%,0.57:0.01%,0.44:0.01%,0.28:0.01%,0.26:0.01%,0.42:0.0%,0.45:0.0%,0.62:0.0%,0.83:0.0%,0.71:0.0%,0.56:0.0%,0.35:0.0%,0.32:0.0%,0.37:0.0%,0.41:0.0%,0.47:0.0%,0.86:0.0%,0.55:0.0%,0.54:0.0%,0.88:0.0%,0.64:0.0%,0.46:0.0%,0.7:0.0%,0.77:0.0%] ** dst_host_count:256 (0%) ** dst_host_srv_count:256 (0%) ** dst_host_same_srv_rate:101 (0%) ** dst_host_diff_srv_rate:101 (0%) ** dst_host_same_src_port_rate:101 (0%) ** dst_host_srv_diff_host_rate:[0.0:89.45%,0.02:2.38%,0.01:2.13%,0.04:1.35%,0.03:1.34%,0.05:0.94%,0.06:0.39%,0.07:0.31%,0.5:0.15%,0.08:0.14%,0.09:0.13%,0.15:0.09%,0.11:0.09%,0.16:0.08%,0.13:0.08%,0.1:0.08%,0.14:0.07%,1.0:0.07%,0.17:0.07%,0.2:0.07%,0.12:0.07%,0.18:0.07%,0.25:0.05%,0.22:0.05%,0.19:0.05%,0.21:0.05%,0.24:0.03%,0.23:0.02%,0.26:0.02%,0.27:0.02%,0.33:0.02%,0.29:0.02%,0.51:0.02%,0.4:0.01%,0.28:0.01%,0.3:0.01%,0.67:0.01%,0.52:0.01%,0.31:0.01%,0.32:0.01%,0.38:0.01%,0.53:0.0%,0.43:0.0%,0.44:0.0%,0.34:0.0%,0.6:0.0%,0.36:0.0%,0.57:0.0%,0.35:0.0%,0.54:0.0%,0.37:0.0%,0.56:0.0%,0.55:0.0%,0.42:0.0%,0.46:0.0%,0.45:0.0%,0.41:0.0%,0.48:0.0%,0.39:0.0%,0.8:0.0%,0.7:0.0%,0.47:0.0%,0.62:0.0%,0.75:0.0%,0.58:0.0%] ** 
dst_host_serror_rate:[0.0:80.93%,1.0:17.56%,0.01:0.74%,0.02:0.2%,0.03:0.09%,0.09:0.05%,0.04:0.04%,0.05:0.04%,0.07:0.03%,0.08:0.03%,0.06:0.02%,0.14:0.02%,0.15:0.02%,0.11:0.02%,0.13:0.02%,0.16:0.02%,0.1:0.02%,0.12:0.01%,0.18:0.01%,0.25:0.01%,0.2:0.01%,0.17:0.01%,0.33:0.01%,0.99:0.01%,0.19:0.01%,0.31:0.01%,0.27:0.01%,0.5:0.0%,0.22:0.0%,0.98:0.0%,0.35:0.0%,0.28:0.0%,0.53:0.0%,0.24:0.0%,0.96:0.0%,0.3:0.0%,0.26:0.0%,0.97:0.0%,0.29:0.0%,0.94:0.0%,0.42:0.0%,0.32:0.0%,0.56:0.0%,0.55:0.0%,0.95:0.0%,0.6:0.0%,0.23:0.0%,0.93:0.0%,0.34:0.0%,0.85:0.0%,0.89:0.0%,0.21:0.0%,0.92:0.0%,0.58:0.0%,0.43:0.0%,0.9:0.0%,0.57:0.0%,0.91:0.0%,0.49:0.0%,0.82:0.0%,0.36:0.0%,0.87:0.0%,0.45:0.0%,0.62:0.0%,0.65:0.0%,0.46:0.0%,0.38:0.0%,0.61:0.0%,0.47:0.0%,0.76:0.0%,0.81:0.0%,0.54:0.0%,0.64:0.0%,0.44:0.0%,0.48:0.0%,0.72:0.0%,0.39:0.0%,0.52:0.0%,0.51:0.0%,0.67:0.0%,0.84:0.0%,0.73:0.0%,0.4:0.0%,0.69:0.0%,0.79:0.0%,0.41:0.0%,0.68:0.0%,0.88:0.0%,0.77:0.0%,0.75:0.0%,0.7:0.0%,0.8:0.0%,0.59:0.0%,0.71:0.0%,0.37:0.0%,0.86:0.0%,0.66:0.0%,0.78:0.0%,0.74:0.0%,0.83:0.0%] ** dst_host_srv_serror_rate:[0.0:81.16%,1.0:17.61%,0.01:0.99%,0.02:0.14%,0.03:0.03%,0.04:0.02%,0.05:0.01%,0.06:0.01%,0.08:0.0%,0.5:0.0%,0.07:0.0%,0.1:0.0%,0.09:0.0%,0.11:0.0%,0.17:0.0%,0.14:0.0%,0.12:0.0%,0.96:0.0%,0.33:0.0%,0.67:0.0%,0.97:0.0%,0.25:0.0%,0.98:0.0%,0.4:0.0%,0.75:0.0%,0.48:0.0%,0.83:0.0%,0.16:0.0%,0.93:0.0%,0.69:0.0%,0.2:0.0%,0.91:0.0%,0.78:0.0%,0.95:0.0%,0.8:0.0%,0.92:0.0%,0.68:0.0%,0.29:0.0%,0.38:0.0%,0.88:0.0%,0.3:0.0%,0.32:0.0%,0.94:0.0%,0.57:0.0%,0.63:0.0%,0.62:0.0%,0.31:0.0%,0.85:0.0%,0.56:0.0%,0.81:0.0%,0.74:0.0%,0.86:0.0%,0.13:0.0%,0.23:0.0%,0.18:0.0%,0.64:0.0%,0.46:0.0%,0.52:0.0%,0.66:0.0%,0.6:0.0%,0.84:0.0%,0.55:0.0%,0.9:0.0%,0.15:0.0%,0.79:0.0%,0.82:0.0%,0.87:0.0%,0.47:0.0%,0.53:0.0%,0.45:0.0%,0.42:0.0%,0.24:0.0%] ** dst_host_rerror_rate:101 (0%) ** dst_host_srv_rerror_rate:101 (0%) ** 
outcome:[smurf.:56.84%,neptune.:21.7%,normal.:19.69%,back.:0.45%,satan.:0.32%,ipsweep.:0.25%,portsweep.:0.21%,warezclient.:0.21%,teardrop.:0.2%,pod.:0.05%,nmap.:0.05%,guess_passwd.:0.01%,buffer_overflow.:0.01%,land.:0.0%,warezmaster.:0.0%,imap.:0.0%,rootkit.:0.0%,loadmodule.:0.0%,ftp_write.:0.0%,multihop.:0.0%,phf.:0.0%,perl.:0.0%,spy.:0.0%] ###Markdown Encode the feature vectorWe use the same two functions provided earlier to preprocess the data. The first encodes Z-Scores, and the second creates dummy variables from categorical columns. ###Code # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Encode text values to dummy variables(i.e. [1,0,0], # [0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = f"{name}-{x}" df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Again, just as we did for anomaly detection, we preprocess the data set. We convert all numeric values to Z-Score, and we translate all categorical to dummy variables. 
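Before applying them to all 41 columns, the two helpers can be sanity-checked on a tiny frame (a sketch; the column values here are made up, only the column names come from KDD99):

```python
import pandas as pd

# Minimal sketch of the two preprocessing helpers used in this notebook.
def encode_numeric_zscore(df, name, mean=None, sd=None):
    if mean is None:
        mean = df[name].mean()
    if sd is None:
        sd = df[name].std()
    df[name] = (df[name] - mean) / sd

def encode_text_dummy(df, name):
    dummies = pd.get_dummies(df[name])
    for x in dummies.columns:
        df[f"{name}-{x}"] = dummies[x]
    df.drop(name, axis=1, inplace=True)

# Invented toy data: mean 10, sample std 10 for duration.
toy = pd.DataFrame({"duration": [0, 10, 20],
                    "protocol_type": ["tcp", "udp", "tcp"]})
encode_numeric_zscore(toy, "duration")   # -> [-1.0, 0.0, 1.0]
encode_text_dummy(toy, "protocol_type")  # -> protocol_type-tcp, protocol_type-udp
print(toy.columns.tolist())
# ['duration', 'protocol_type-tcp', 'protocol_type-udp']
```

Note that dummy encoding replaces the original categorical column with one indicator column per category, which is exactly what happens to protocol_type, service, and flag below.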
###Code # Now encode the feature vector encode_numeric_zscore(df, 'duration') encode_text_dummy(df, 'protocol_type') encode_text_dummy(df, 'service') encode_text_dummy(df, 'flag') encode_numeric_zscore(df, 'src_bytes') encode_numeric_zscore(df, 'dst_bytes') encode_text_dummy(df, 'land') encode_numeric_zscore(df, 'wrong_fragment') encode_numeric_zscore(df, 'urgent') encode_numeric_zscore(df, 'hot') encode_numeric_zscore(df, 'num_failed_logins') encode_text_dummy(df, 'logged_in') encode_numeric_zscore(df, 'num_compromised') encode_numeric_zscore(df, 'root_shell') encode_numeric_zscore(df, 'su_attempted') encode_numeric_zscore(df, 'num_root') encode_numeric_zscore(df, 'num_file_creations') encode_numeric_zscore(df, 'num_shells') encode_numeric_zscore(df, 'num_access_files') encode_numeric_zscore(df, 'num_outbound_cmds') encode_text_dummy(df, 'is_host_login') encode_text_dummy(df, 'is_guest_login') encode_numeric_zscore(df, 'count') encode_numeric_zscore(df, 'srv_count') encode_numeric_zscore(df, 'serror_rate') encode_numeric_zscore(df, 'srv_serror_rate') encode_numeric_zscore(df, 'rerror_rate') encode_numeric_zscore(df, 'srv_rerror_rate') encode_numeric_zscore(df, 'same_srv_rate') encode_numeric_zscore(df, 'diff_srv_rate') encode_numeric_zscore(df, 'srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_count') encode_numeric_zscore(df, 'dst_host_srv_count') encode_numeric_zscore(df, 'dst_host_same_srv_rate') encode_numeric_zscore(df, 'dst_host_diff_srv_rate') encode_numeric_zscore(df, 'dst_host_same_src_port_rate') encode_numeric_zscore(df, 'dst_host_srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_serror_rate') encode_numeric_zscore(df, 'dst_host_srv_serror_rate') encode_numeric_zscore(df, 'dst_host_rerror_rate') encode_numeric_zscore(df, 'dst_host_srv_rerror_rate') # display 5 rows df.dropna(inplace=True,axis=1) df[0:5] # This is the numeric feature vector, as it goes to the neural net # Convert to numpy - Classification x_columns = 
df.columns.drop('outcome') x = df[x_columns].values dummies = pd.get_dummies(df['outcome']) # Classification outcomes = dummies.columns num_classes = len(outcomes) y = dummies.values ###Output _____no_output_____ ###Markdown We will attempt to predict what type of attack is underway. The outcome column specifies the attack type. A value of normal indicates that there is no attack underway. We display the outcomes; some attack types are much rarer than others. ###Code df.groupby('outcome')['outcome'].count() ###Output _____no_output_____ ###Markdown Train the Neural NetworkWe now train the neural network to classify the different KDD99 outcomes. The code provided here implements a relatively simple neural network with three hidden layers. We train it with the provided KDD99 data. ###Code import pandas as pd import numpy as np from sklearn.model_selection import train_test_split from sklearn import metrics from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.callbacks import EarlyStopping # Create a test/train split. 
25% test # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) # Create neural net; input_dim is only needed on the first layer model = Sequential() model.add(Dense(10, input_dim=x.shape[1], activation='relu')) model.add(Dense(50, activation='relu')) model.add(Dense(10, activation='relu')) model.add(Dense(y.shape[1],activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto', restore_best_weights=True) model.fit(x_train,y_train,validation_data=(x_test,y_test), callbacks=[monitor],verbose=2,epochs=1000) ###Output Train on 370515 samples, validate on 123506 samples Epoch 1/1000 370515/370515 - 24s - loss: 0.1942 - val_loss: 0.0408 Epoch 2/1000 370515/370515 - 24s - loss: 0.1164 - val_loss: 0.0293 Epoch 3/1000 370515/370515 - 24s - loss: 0.0780 - val_loss: 0.0414 Epoch 4/1000 370515/370515 - 24s - loss: 0.0524 - val_loss: 0.0251 Epoch 5/1000 370515/370515 - 24s - loss: 0.0248 - val_loss: 0.0250 Epoch 6/1000 370515/370515 - 24s - loss: 0.0224 - val_loss: 0.0220 Epoch 7/1000 370515/370515 - 24s - loss: 0.0211 - val_loss: 0.0217 Epoch 8/1000 370515/370515 - 25s - loss: 0.0203 - val_loss: 0.0198 Epoch 9/1000 370515/370515 - 24s - loss: 0.0197 - val_loss: 0.0202 Epoch 10/1000 370515/370515 - 24s - loss: 0.0195 - val_loss: 0.0206 Epoch 11/1000 370515/370515 - 25s - loss: 0.0186 - val_loss: 0.0194 Epoch 12/1000 370515/370515 - 24s - loss: 0.0177 - val_loss: 0.0187 Epoch 13/1000 370515/370515 - 25s - loss: 0.0176 - val_loss: 0.0180 Epoch 14/1000 370515/370515 - 25s - loss: 0.0171 - val_loss: 0.0212 Epoch 15/1000 370515/370515 - 25s - loss: 0.0178 - val_loss: 0.0200 Epoch 16/1000 370515/370515 - 25s - loss: 0.0163 - val_loss: 0.0153 Epoch 17/1000 370515/370515 - 25s - loss: 0.0157 - val_loss: 0.0153 Epoch 18/1000 370515/370515 - 24s - loss: 0.0154 
- val_loss: 0.0149 Epoch 19/1000 370515/370515 - 24s - loss: 0.0150 - val_loss: 0.0153 Epoch 20/1000 370515/370515 - 25s - loss: 0.0187 - val_loss: 0.0146 Epoch 21/1000 Restoring model weights from the end of the best epoch. 370515/370515 - 25s - loss: 0.0141 - val_loss: 0.0149 Epoch 00021: early stopping ###Markdown We can now evaluate the neural network. As you can see, the neural network achieves a 99% accuracy rate. ###Code # Measure accuracy pred = model.predict(x_test) pred = np.argmax(pred,axis=1) y_eval = np.argmax(y_test,axis=1) score = metrics.accuracy_score(y_eval, pred) print("Validation score: {}".format(score)) ###Output Validation score: 0.998234903567438 ###Markdown T81-558: Applications of Deep Neural Networks**Module 14: Other Neural Network Techniques*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). 
Module 14 Video Material* Part 14.1: What is AutoML [[Video]](https://www.youtube.com/watch?v=TFUysIR5AB0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_01_automl.ipynb)* Part 14.2: Using Denoising AutoEncoders in Keras [[Video]](https://www.youtube.com/watch?v=4bTSu6_fucc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_02_auto_encode.ipynb)* Part 14.3: Anomaly Detection in Keras [[Video]](https://www.youtube.com/watch?v=1ySn6h2A68I&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_03_anomaly.ipynb)* **Part 14.4: Training an Intrusion Detection System with KDD99** [[Video]](https://www.youtube.com/watch?v=VgyKQ5MTDFc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_04_ids_kdd99.ipynb)* Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](t81_558_class_14_05_new_tech.ipynb) Part 14.4: Training an Intrusion Detection System with KDD99The [KDD-99 dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html) is very famous in the security field and almost a "hello world" of intrusion detection systems in machine learning.
Read in Raw KDD-99 Dataset ###Code import pandas as pd from tensorflow.keras.utils import get_file try: path = get_file('kddcup.data_10_percent.gz', origin='http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz') except: print('Error downloading') raise print(path) # This file is a CSV, just no CSV extension or headers # Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html df = pd.read_csv(path, header=None) print("Read {} rows.".format(len(df))) # df = df.sample(frac=0.1, replace=False) # Uncomment this line to sample only 10% of the dataset df.dropna(inplace=True,axis=1) # For now, just drop NA's (rows with missing values) # The CSV file has no column heads, so add them df.columns = [ 'duration', 'protocol_type', 'service', 'flag', 'src_bytes', 'dst_bytes', 'land', 'wrong_fragment', 'urgent', 'hot', 'num_failed_logins', 'logged_in', 'num_compromised', 'root_shell', 'su_attempted', 'num_root', 'num_file_creations', 'num_shells', 'num_access_files', 'num_outbound_cmds', 'is_host_login', 'is_guest_login', 'count', 'srv_count', 'serror_rate', 'srv_serror_rate', 'rerror_rate', 'srv_rerror_rate', 'same_srv_rate', 'diff_srv_rate', 'srv_diff_host_rate', 'dst_host_count', 'dst_host_srv_count', 'dst_host_same_srv_rate', 'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate', 'dst_host_srv_diff_host_rate', 'dst_host_serror_rate', 'dst_host_srv_serror_rate', 'dst_host_rerror_rate', 'dst_host_srv_rerror_rate', 'outcome' ] # display 5 rows df[0:5] ###Output /Users/jheaton/.keras/datasets/kddcup.data_10_percent.gz Read 494021 rows. ###Markdown Analyzing a DatasetThe following script can be used to give a high-level overview of how a dataset appears. 
###Code ENCODING = 'utf-8' def expand_categories(values): result = [] s = values.value_counts() t = float(len(values)) for v in s.index: result.append("{}:{}%".format(v,round(100*(s[v]/t),2))) return "[{}]".format(",".join(result)) def analyze(df): print() cols = df.columns.values total = float(len(df)) print("{} rows".format(int(total))) for col in cols: uniques = df[col].unique() unique_count = len(uniques) if unique_count>100: print("** {}:{} ({}%)".format(col,unique_count,int(((unique_count)/total)*100))) else: print("** {}:{}".format(col,expand_categories(df[col]))) expand_categories(df[col]) # Analyze KDD-99 import pandas as pd import os import numpy as np from sklearn import metrics from scipy.stats import zscore analyze(df) ###Output 494021 rows ** duration:2495 (0%) ** protocol_type:[icmp:57.41%,tcp:38.47%,udp:4.12%] ** service:[ecr_i:56.96%,private:22.45%,http:13.01%,smtp:1.97%,other:1.46%,domain_u:1.19%,ftp_data:0.96%,eco_i:0.33%,ftp:0.16%,finger:0.14%,urp_i:0.11%,telnet:0.1%,ntp_u:0.08%,auth:0.07%,pop_3:0.04%,time:0.03%,csnet_ns:0.03%,remote_job:0.02%,gopher:0.02%,imap4:0.02%,discard:0.02%,domain:0.02%,systat:0.02%,iso_tsap:0.02%,shell:0.02%,echo:0.02%,rje:0.02%,whois:0.02%,sql_net:0.02%,printer:0.02%,nntp:0.02%,courier:0.02%,sunrpc:0.02%,mtp:0.02%,netbios_ssn:0.02%,uucp_path:0.02%,bgp:0.02%,klogin:0.02%,uucp:0.02%,vmnet:0.02%,supdup:0.02%,ssh:0.02%,nnsp:0.02%,login:0.02%,hostnames:0.02%,efs:0.02%,daytime:0.02%,netbios_ns:0.02%,link:0.02%,ldap:0.02%,pop_2:0.02%,exec:0.02%,netbios_dgm:0.02%,http_443:0.02%,kshell:0.02%,name:0.02%,ctf:0.02%,netstat:0.02%,Z39_50:0.02%,IRC:0.01%,urh_i:0.0%,X11:0.0%,tim_i:0.0%,tftp_u:0.0%,red_i:0.0%,pm_dump:0.0%] ** flag:[SF:76.6%,S0:17.61%,REJ:5.44%,RSTR:0.18%,RSTO:0.12%,SH:0.02%,S1:0.01%,S2:0.0%,RSTOS0:0.0%,S3:0.0%,OTH:0.0%] ** src_bytes:3300 (0%) ** dst_bytes:10725 (2%) ** land:[0:100.0%,1:0.0%] ** wrong_fragment:[0:99.75%,3:0.2%,1:0.05%] ** urgent:[0:100.0%,1:0.0%,3:0.0%,2:0.0%] ** 
hot:[0:99.35%,2:0.44%,28:0.06%,1:0.05%,4:0.02%,6:0.02%,5:0.01%,3:0.01%,14:0.01%,30:0.01%,22:0.01%,19:0.0%,18:0.0%,24:0.0%,20:0.0%,7:0.0%,17:0.0%,12:0.0%,15:0.0%,16:0.0%,10:0.0%,9:0.0%] ** num_failed_logins:[0:99.99%,1:0.01%,2:0.0%,5:0.0%,4:0.0%,3:0.0%] ** logged_in:[0:85.18%,1:14.82%] ** num_compromised:[0:99.55%,1:0.44%,2:0.0%,4:0.0%,3:0.0%,6:0.0%,5:0.0%,7:0.0%,12:0.0%,9:0.0%,11:0.0%,767:0.0%,238:0.0%,16:0.0%,18:0.0%,275:0.0%,21:0.0%,22:0.0%,281:0.0%,38:0.0%,102:0.0%,884:0.0%,13:0.0%] ** root_shell:[0:99.99%,1:0.01%] ** su_attempted:[0:100.0%,2:0.0%,1:0.0%] ** num_root:[0:99.88%,1:0.05%,9:0.03%,6:0.03%,2:0.0%,5:0.0%,4:0.0%,3:0.0%,119:0.0%,7:0.0%,993:0.0%,268:0.0%,14:0.0%,16:0.0%,278:0.0%,39:0.0%,306:0.0%,54:0.0%,857:0.0%,12:0.0%] ** num_file_creations:[0:99.95%,1:0.04%,2:0.01%,4:0.0%,16:0.0%,9:0.0%,5:0.0%,7:0.0%,8:0.0%,28:0.0%,25:0.0%,12:0.0%,14:0.0%,15:0.0%,20:0.0%,21:0.0%,22:0.0%,10:0.0%] ** num_shells:[0:99.99%,1:0.01%,2:0.0%] ** num_access_files:[0:99.91%,1:0.09%,2:0.01%,3:0.0%,8:0.0%,6:0.0%,4:0.0%] ** num_outbound_cmds:[0:100.0%] ** is_host_login:[0:100.0%] ** is_guest_login:[0:99.86%,1:0.14%] ** count:490 (0%) ** srv_count:470 (0%) ** 
serror_rate:[0.0:81.94%,1.0:17.52%,0.99:0.06%,0.08:0.03%,0.05:0.03%,0.07:0.03%,0.06:0.03%,0.14:0.02%,0.04:0.02%,0.01:0.02%,0.09:0.02%,0.1:0.02%,0.03:0.02%,0.11:0.02%,0.13:0.02%,0.5:0.02%,0.12:0.02%,0.2:0.01%,0.25:0.01%,0.02:0.01%,0.17:0.01%,0.33:0.01%,0.15:0.01%,0.22:0.01%,0.18:0.01%,0.23:0.01%,0.16:0.01%,0.21:0.01%,0.19:0.0%,0.27:0.0%,0.98:0.0%,0.44:0.0%,0.29:0.0%,0.24:0.0%,0.97:0.0%,0.96:0.0%,0.31:0.0%,0.26:0.0%,0.67:0.0%,0.36:0.0%,0.65:0.0%,0.94:0.0%,0.28:0.0%,0.79:0.0%,0.95:0.0%,0.53:0.0%,0.81:0.0%,0.62:0.0%,0.85:0.0%,0.6:0.0%,0.64:0.0%,0.88:0.0%,0.68:0.0%,0.52:0.0%,0.66:0.0%,0.71:0.0%,0.93:0.0%,0.57:0.0%,0.63:0.0%,0.83:0.0%,0.78:0.0%,0.75:0.0%,0.51:0.0%,0.58:0.0%,0.56:0.0%,0.55:0.0%,0.3:0.0%,0.76:0.0%,0.86:0.0%,0.74:0.0%,0.35:0.0%,0.38:0.0%,0.54:0.0%,0.72:0.0%,0.84:0.0%,0.69:0.0%,0.61:0.0%,0.59:0.0%,0.42:0.0%,0.32:0.0%,0.82:0.0%,0.77:0.0%,0.7:0.0%,0.91:0.0%,0.92:0.0%,0.4:0.0%,0.73:0.0%,0.9:0.0%,0.34:0.0%,0.8:0.0%,0.89:0.0%,0.87:0.0%] ** srv_serror_rate:[0.0:82.12%,1.0:17.62%,0.03:0.03%,0.04:0.02%,0.05:0.02%,0.06:0.02%,0.02:0.02%,0.5:0.02%,0.08:0.01%,0.07:0.01%,0.25:0.01%,0.33:0.01%,0.17:0.01%,0.09:0.01%,0.1:0.01%,0.2:0.01%,0.11:0.01%,0.12:0.01%,0.14:0.01%,0.01:0.0%,0.67:0.0%,0.92:0.0%,0.18:0.0%,0.94:0.0%,0.95:0.0%,0.58:0.0%,0.88:0.0%,0.75:0.0%,0.19:0.0%,0.4:0.0%,0.76:0.0%,0.83:0.0%,0.91:0.0%,0.15:0.0%,0.22:0.0%,0.93:0.0%,0.85:0.0%,0.27:0.0%,0.86:0.0%,0.44:0.0%,0.35:0.0%,0.51:0.0%,0.36:0.0%,0.38:0.0%,0.21:0.0%,0.8:0.0%,0.9:0.0%,0.45:0.0%,0.16:0.0%,0.37:0.0%,0.23:0.0%] ** 
rerror_rate:[0.0:94.12%,1.0:5.46%,0.86:0.02%,0.87:0.02%,0.92:0.02%,0.25:0.02%,0.95:0.02%,0.9:0.02%,0.5:0.02%,0.91:0.02%,0.88:0.01%,0.96:0.01%,0.33:0.01%,0.2:0.01%,0.93:0.01%,0.94:0.01%,0.01:0.01%,0.89:0.01%,0.85:0.01%,0.99:0.01%,0.82:0.01%,0.77:0.01%,0.17:0.01%,0.97:0.01%,0.02:0.01%,0.98:0.01%,0.03:0.01%,0.8:0.01%,0.78:0.01%,0.76:0.01%,0.75:0.0%,0.79:0.0%,0.84:0.0%,0.14:0.0%,0.05:0.0%,0.73:0.0%,0.81:0.0%,0.06:0.0%,0.71:0.0%,0.83:0.0%,0.67:0.0%,0.56:0.0%,0.08:0.0%,0.04:0.0%,0.1:0.0%,0.09:0.0%,0.12:0.0%,0.07:0.0%,0.11:0.0%,0.69:0.0%,0.74:0.0%,0.64:0.0%,0.4:0.0%,0.72:0.0%,0.7:0.0%,0.6:0.0%,0.29:0.0%,0.22:0.0%,0.62:0.0%,0.65:0.0%,0.21:0.0%,0.68:0.0%,0.37:0.0%,0.19:0.0%,0.43:0.0%,0.58:0.0%,0.35:0.0%,0.24:0.0%,0.31:0.0%,0.23:0.0%,0.27:0.0%,0.28:0.0%,0.26:0.0%,0.36:0.0%,0.34:0.0%,0.66:0.0%,0.32:0.0%] ** srv_rerror_rate:[0.0:93.99%,1.0:5.69%,0.33:0.05%,0.5:0.04%,0.25:0.04%,0.2:0.03%,0.17:0.03%,0.14:0.01%,0.04:0.01%,0.03:0.01%,0.12:0.01%,0.02:0.01%,0.06:0.01%,0.05:0.01%,0.07:0.01%,0.4:0.01%,0.67:0.01%,0.08:0.01%,0.11:0.01%,0.29:0.01%,0.09:0.0%,0.1:0.0%,0.75:0.0%,0.6:0.0%,0.01:0.0%,0.22:0.0%,0.71:0.0%,0.86:0.0%,0.83:0.0%,0.73:0.0%,0.81:0.0%,0.88:0.0%,0.96:0.0%,0.92:0.0%,0.18:0.0%,0.43:0.0%,0.79:0.0%,0.93:0.0%,0.13:0.0%,0.27:0.0%,0.38:0.0%,0.94:0.0%,0.95:0.0%,0.37:0.0%,0.85:0.0%,0.8:0.0%,0.62:0.0%,0.82:0.0%,0.69:0.0%,0.21:0.0%,0.87:0.0%] ** 
same_srv_rate:[1.0:77.34%,0.06:2.27%,0.05:2.14%,0.04:2.06%,0.07:2.03%,0.03:1.93%,0.02:1.9%,0.01:1.77%,0.08:1.48%,0.09:1.01%,0.1:0.8%,0.0:0.73%,0.12:0.73%,0.11:0.67%,0.13:0.66%,0.14:0.51%,0.15:0.35%,0.5:0.29%,0.16:0.25%,0.17:0.17%,0.33:0.12%,0.18:0.1%,0.2:0.08%,0.19:0.07%,0.67:0.05%,0.25:0.04%,0.21:0.04%,0.99:0.03%,0.22:0.03%,0.24:0.02%,0.23:0.02%,0.4:0.02%,0.98:0.02%,0.75:0.02%,0.27:0.02%,0.26:0.01%,0.8:0.01%,0.29:0.01%,0.38:0.01%,0.86:0.01%,0.3:0.01%,0.31:0.01%,0.44:0.01%,0.83:0.01%,0.36:0.01%,0.28:0.01%,0.43:0.01%,0.6:0.01%,0.42:0.01%,0.97:0.01%,0.32:0.01%,0.35:0.01%,0.45:0.01%,0.47:0.01%,0.88:0.0%,0.48:0.0%,0.39:0.0%,0.52:0.0%,0.46:0.0%,0.37:0.0%,0.41:0.0%,0.89:0.0%,0.34:0.0%,0.92:0.0%,0.54:0.0%,0.53:0.0%,0.94:0.0%,0.95:0.0%,0.57:0.0%,0.96:0.0%,0.64:0.0%,0.71:0.0%,0.56:0.0%,0.62:0.0%,0.78:0.0%,0.9:0.0%,0.49:0.0%,0.91:0.0%,0.55:0.0%,0.65:0.0%,0.73:0.0%,0.58:0.0%,0.59:0.0%,0.93:0.0%,0.76:0.0%,0.51:0.0%,0.77:0.0%,0.82:0.0%,0.81:0.0%,0.74:0.0%,0.69:0.0%,0.79:0.0%,0.72:0.0%,0.7:0.0%,0.85:0.0%,0.68:0.0%,0.61:0.0%,0.63:0.0%,0.87:0.0%] ** diff_srv_rate:[0.0:77.33%,0.06:10.69%,0.07:5.83%,0.05:3.89%,0.08:0.66%,1.0:0.48%,0.04:0.19%,0.67:0.13%,0.5:0.12%,0.09:0.08%,0.6:0.06%,0.12:0.05%,0.1:0.04%,0.11:0.04%,0.14:0.03%,0.4:0.02%,0.13:0.02%,0.29:0.02%,0.01:0.02%,0.15:0.02%,0.03:0.02%,0.33:0.02%,0.17:0.02%,0.25:0.02%,0.75:0.01%,0.2:0.01%,0.18:0.01%,0.16:0.01%,0.19:0.01%,0.02:0.01%,0.22:0.01%,0.21:0.01%,0.27:0.01%,0.96:0.01%,0.31:0.01%,0.38:0.01%,0.24:0.01%,0.23:0.01%,0.43:0.0%,0.52:0.0%,0.95:0.0%,0.44:0.0%,0.53:0.0%,0.36:0.0%,0.8:0.0%,0.57:0.0%,0.42:0.0%,0.3:0.0%,0.26:0.0%,0.28:0.0%,0.56:0.0%,0.99:0.0%,0.54:0.0%,0.62:0.0%,0.37:0.0%,0.55:0.0%,0.35:0.0%,0.41:0.0%,0.47:0.0%,0.89:0.0%,0.32:0.0%,0.71:0.0%,0.58:0.0%,0.46:0.0%,0.39:0.0%,0.51:0.0%,0.45:0.0%,0.97:0.0%,0.83:0.0%,0.7:0.0%,0.69:0.0%,0.78:0.0%,0.74:0.0%,0.64:0.0%,0.73:0.0%,0.82:0.0%,0.88:0.0%,0.86:0.0%] ** 
srv_diff_host_rate:[0.0:92.99%,1.0:1.64%,0.12:0.31%,0.5:0.29%,0.67:0.29%,0.33:0.25%,0.11:0.24%,0.25:0.23%,0.1:0.22%,0.14:0.21%,0.17:0.21%,0.08:0.2%,0.15:0.2%,0.18:0.19%,0.2:0.19%,0.09:0.19%,0.4:0.19%,0.07:0.17%,0.29:0.17%,0.13:0.16%,0.22:0.16%,0.06:0.14%,0.02:0.1%,0.05:0.1%,0.01:0.08%,0.21:0.08%,0.19:0.08%,0.16:0.07%,0.75:0.07%,0.27:0.06%,0.04:0.06%,0.6:0.06%,0.3:0.06%,0.38:0.05%,0.43:0.05%,0.23:0.05%,0.03:0.03%,0.24:0.02%,0.36:0.02%,0.31:0.02%,0.8:0.02%,0.57:0.01%,0.44:0.01%,0.28:0.01%,0.26:0.01%,0.42:0.0%,0.45:0.0%,0.62:0.0%,0.83:0.0%,0.71:0.0%,0.56:0.0%,0.35:0.0%,0.32:0.0%,0.37:0.0%,0.41:0.0%,0.47:0.0%,0.86:0.0%,0.55:0.0%,0.54:0.0%,0.88:0.0%,0.64:0.0%,0.46:0.0%,0.7:0.0%,0.77:0.0%] ** dst_host_count:256 (0%) ** dst_host_srv_count:256 (0%) ** dst_host_same_srv_rate:101 (0%) ** dst_host_diff_srv_rate:101 (0%) ** dst_host_same_src_port_rate:101 (0%) ** dst_host_srv_diff_host_rate:[0.0:89.45%,0.02:2.38%,0.01:2.13%,0.04:1.35%,0.03:1.34%,0.05:0.94%,0.06:0.39%,0.07:0.31%,0.5:0.15%,0.08:0.14%,0.09:0.13%,0.15:0.09%,0.11:0.09%,0.16:0.08%,0.13:0.08%,0.1:0.08%,0.14:0.07%,1.0:0.07%,0.17:0.07%,0.2:0.07%,0.12:0.07%,0.18:0.07%,0.25:0.05%,0.22:0.05%,0.19:0.05%,0.21:0.05%,0.24:0.03%,0.23:0.02%,0.26:0.02%,0.27:0.02%,0.33:0.02%,0.29:0.02%,0.51:0.02%,0.4:0.01%,0.28:0.01%,0.3:0.01%,0.67:0.01%,0.52:0.01%,0.31:0.01%,0.32:0.01%,0.38:0.01%,0.53:0.0%,0.43:0.0%,0.44:0.0%,0.34:0.0%,0.6:0.0%,0.36:0.0%,0.57:0.0%,0.35:0.0%,0.54:0.0%,0.37:0.0%,0.56:0.0%,0.55:0.0%,0.42:0.0%,0.46:0.0%,0.45:0.0%,0.41:0.0%,0.48:0.0%,0.39:0.0%,0.8:0.0%,0.7:0.0%,0.47:0.0%,0.62:0.0%,0.75:0.0%,0.58:0.0%] ** 
dst_host_serror_rate:[0.0:80.93%,1.0:17.56%,0.01:0.74%,0.02:0.2%,0.03:0.09%,0.09:0.05%,0.04:0.04%,0.05:0.04%,0.07:0.03%,0.08:0.03%,0.06:0.02%,0.14:0.02%,0.15:0.02%,0.11:0.02%,0.13:0.02%,0.16:0.02%,0.1:0.02%,0.12:0.01%,0.18:0.01%,0.25:0.01%,0.2:0.01%,0.17:0.01%,0.33:0.01%,0.99:0.01%,0.19:0.01%,0.31:0.01%,0.27:0.01%,0.5:0.0%,0.22:0.0%,0.98:0.0%,0.35:0.0%,0.28:0.0%,0.53:0.0%,0.24:0.0%,0.96:0.0%,0.3:0.0%,0.26:0.0%,0.97:0.0%,0.29:0.0%,0.94:0.0%,0.42:0.0%,0.32:0.0%,0.56:0.0%,0.55:0.0%,0.95:0.0%,0.6:0.0%,0.23:0.0%,0.93:0.0%,0.34:0.0%,0.85:0.0%,0.89:0.0%,0.21:0.0%,0.92:0.0%,0.58:0.0%,0.43:0.0%,0.9:0.0%,0.57:0.0%,0.91:0.0%,0.49:0.0%,0.82:0.0%,0.36:0.0%,0.87:0.0%,0.45:0.0%,0.62:0.0%,0.65:0.0%,0.46:0.0%,0.38:0.0%,0.61:0.0%,0.47:0.0%,0.76:0.0%,0.81:0.0%,0.54:0.0%,0.64:0.0%,0.44:0.0%,0.48:0.0%,0.72:0.0%,0.39:0.0%,0.52:0.0%,0.51:0.0%,0.67:0.0%,0.84:0.0%,0.73:0.0%,0.4:0.0%,0.69:0.0%,0.79:0.0%,0.41:0.0%,0.68:0.0%,0.88:0.0%,0.77:0.0%,0.75:0.0%,0.7:0.0%,0.8:0.0%,0.59:0.0%,0.71:0.0%,0.37:0.0%,0.86:0.0%,0.66:0.0%,0.78:0.0%,0.74:0.0%,0.83:0.0%] ** dst_host_srv_serror_rate:[0.0:81.16%,1.0:17.61%,0.01:0.99%,0.02:0.14%,0.03:0.03%,0.04:0.02%,0.05:0.01%,0.06:0.01%,0.08:0.0%,0.5:0.0%,0.07:0.0%,0.1:0.0%,0.09:0.0%,0.11:0.0%,0.17:0.0%,0.14:0.0%,0.12:0.0%,0.96:0.0%,0.33:0.0%,0.67:0.0%,0.97:0.0%,0.25:0.0%,0.98:0.0%,0.4:0.0%,0.75:0.0%,0.48:0.0%,0.83:0.0%,0.16:0.0%,0.93:0.0%,0.69:0.0%,0.2:0.0%,0.91:0.0%,0.78:0.0%,0.95:0.0%,0.8:0.0%,0.92:0.0%,0.68:0.0%,0.29:0.0%,0.38:0.0%,0.88:0.0%,0.3:0.0%,0.32:0.0%,0.94:0.0%,0.57:0.0%,0.63:0.0%,0.62:0.0%,0.31:0.0%,0.85:0.0%,0.56:0.0%,0.81:0.0%,0.74:0.0%,0.86:0.0%,0.13:0.0%,0.23:0.0%,0.18:0.0%,0.64:0.0%,0.46:0.0%,0.52:0.0%,0.66:0.0%,0.6:0.0%,0.84:0.0%,0.55:0.0%,0.9:0.0%,0.15:0.0%,0.79:0.0%,0.82:0.0%,0.87:0.0%,0.47:0.0%,0.53:0.0%,0.45:0.0%,0.42:0.0%,0.24:0.0%] ** dst_host_rerror_rate:101 (0%) ** dst_host_srv_rerror_rate:101 (0%) ** 
outcome:[smurf.:56.84%,neptune.:21.7%,normal.:19.69%,back.:0.45%,satan.:0.32%,ipsweep.:0.25%,portsweep.:0.21%,warezclient.:0.21%,teardrop.:0.2%,pod.:0.05%,nmap.:0.05%,guess_passwd.:0.01%,buffer_overflow.:0.01%,land.:0.0%,warezmaster.:0.0%,imap.:0.0%,rootkit.:0.0%,loadmodule.:0.0%,ftp_write.:0.0%,multihop.:0.0%,phf.:0.0%,perl.:0.0%,spy.:0.0%] ###Markdown Encode the feature vectorEncode every row in the database. This is not instant! ###Code # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = f"{name}-{x}" df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) # Now encode the feature vector encode_numeric_zscore(df, 'duration') encode_text_dummy(df, 'protocol_type') encode_text_dummy(df, 'service') encode_text_dummy(df, 'flag') encode_numeric_zscore(df, 'src_bytes') encode_numeric_zscore(df, 'dst_bytes') encode_text_dummy(df, 'land') encode_numeric_zscore(df, 'wrong_fragment') encode_numeric_zscore(df, 'urgent') encode_numeric_zscore(df, 'hot') encode_numeric_zscore(df, 'num_failed_logins') encode_text_dummy(df, 'logged_in') encode_numeric_zscore(df, 'num_compromised') encode_numeric_zscore(df, 'root_shell') encode_numeric_zscore(df, 'su_attempted') encode_numeric_zscore(df, 'num_root') encode_numeric_zscore(df, 'num_file_creations') encode_numeric_zscore(df, 'num_shells') encode_numeric_zscore(df, 'num_access_files') encode_numeric_zscore(df, 'num_outbound_cmds') encode_text_dummy(df, 'is_host_login') encode_text_dummy(df, 'is_guest_login') encode_numeric_zscore(df, 'count') encode_numeric_zscore(df, 'srv_count') encode_numeric_zscore(df, 'serror_rate') encode_numeric_zscore(df, 'srv_serror_rate') 
encode_numeric_zscore(df, 'rerror_rate') encode_numeric_zscore(df, 'srv_rerror_rate') encode_numeric_zscore(df, 'same_srv_rate') encode_numeric_zscore(df, 'diff_srv_rate') encode_numeric_zscore(df, 'srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_count') encode_numeric_zscore(df, 'dst_host_srv_count') encode_numeric_zscore(df, 'dst_host_same_srv_rate') encode_numeric_zscore(df, 'dst_host_diff_srv_rate') encode_numeric_zscore(df, 'dst_host_same_src_port_rate') encode_numeric_zscore(df, 'dst_host_srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_serror_rate') encode_numeric_zscore(df, 'dst_host_srv_serror_rate') encode_numeric_zscore(df, 'dst_host_rerror_rate') encode_numeric_zscore(df, 'dst_host_srv_rerror_rate') # display 5 rows df.dropna(inplace=True,axis=1) df[0:5] # This is the numeric feature vector, as it goes to the neural net # Convert to numpy - Classification x_columns = df.columns.drop('outcome') x = df[x_columns].values dummies = pd.get_dummies(df['outcome']) # Classification outcomes = dummies.columns num_classes = len(outcomes) y = dummies.values df.groupby('outcome')['outcome'].count() ###Output _____no_output_____ ###Markdown Train the Neural Network ###Code import pandas as pd import io import requests import numpy as np import os from sklearn.model_selection import train_test_split from sklearn import metrics from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping # Create a test/train split. 
25% test # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) # Create neural net model = Sequential() model.add(Dense(10, input_dim=x.shape[1], kernel_initializer='normal', activation='relu')) model.add(Dense(50, input_dim=x.shape[1], kernel_initializer='normal', activation='relu')) model.add(Dense(10, input_dim=x.shape[1], kernel_initializer='normal', activation='relu')) model.add(Dense(1, kernel_initializer='normal')) model.add(Dense(y.shape[1],activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto') model.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor],verbose=2,epochs=1000) # Measure accuracy pred = model.predict(x_test) pred = np.argmax(pred,axis=1) y_eval = np.argmax(y_test,axis=1) score = metrics.accuracy_score(y_eval, pred) print("Validation score: {}".format(score)) ###Output Validation score: 0.9971418392628698 ###Markdown T81-558: Applications of Deep Neural Networks**Module 14: Other Neural Network Techniques*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). 
Module 14 Video Material* Part 14.1: What is AutoML [[Video]]() [[Notebook]](t81_558_class_14_01_automl.ipynb)* Part 14.2: Using Denoising AutoEncoders in Keras [[Video]]() [[Notebook]](t81_558_class_14_02_auto_encode.ipynb)* Part 14.3: Anomaly Detection in Keras [[Video]]() [[Notebook]](t81_558_class_14_03_anomaly.ipynb)* **Part 14.4: Training an Intrusion Detection System with KDD99** [[Video]]() [[Notebook]](t81_558_class_14_04_ids_kdd99.ipynb)* Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](t81_558_class_14_05_new_tech.ipynb) Part 14.4: Training an Intrusion Detection System with KDD99The [KDD-99 dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html) is very famous in the security field and almost a "hello world" of intrusion detection systems in machine learning. Read in Raw KDD-99 Dataset ###Code import pandas as pd from keras.utils.data_utils import get_file try: path = get_file('kddcup.data_10_percent.gz', origin='http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz') except: print('Error downloading') raise print(path) # This file is a CSV, just no CSV extension or headers # Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html df = pd.read_csv(path, header=None) print("Read {} rows.".format(len(df))) # df = df.sample(frac=0.1, replace=False) # Uncomment this line to sample only 10% of the dataset df.dropna(inplace=True,axis=1) # For now, just drop NA's (rows with missing values) # The CSV file has no column heads, so add them df.columns = [ 'duration', 'protocol_type', 'service', 'flag', 'src_bytes', 'dst_bytes', 'land', 'wrong_fragment', 'urgent', 'hot', 'num_failed_logins', 'logged_in', 'num_compromised', 'root_shell', 'su_attempted', 'num_root', 'num_file_creations', 'num_shells', 'num_access_files', 'num_outbound_cmds', 'is_host_login', 'is_guest_login', 'count', 'srv_count', 'serror_rate', 'srv_serror_rate', 'rerror_rate', 'srv_rerror_rate', 'same_srv_rate', 'diff_srv_rate',
'srv_diff_host_rate', 'dst_host_count', 'dst_host_srv_count', 'dst_host_same_srv_rate', 'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate', 'dst_host_srv_diff_host_rate', 'dst_host_serror_rate', 'dst_host_srv_serror_rate', 'dst_host_rerror_rate', 'dst_host_srv_rerror_rate', 'outcome' ] # display 5 rows df[0:5] ###Output Using TensorFlow backend. ###Markdown Analyzing a DatasetThe following script can be used to give a high-level overview of how a dataset appears. ###Code ENCODING = 'utf-8' def expand_categories(values): result = [] s = values.value_counts() t = float(len(values)) for v in s.index: result.append("{}:{}%".format(v,round(100*(s[v]/t),2))) return "[{}]".format(",".join(result)) def analyze(filename): print() print("Analyzing: {}".format(filename)) df = pd.read_csv(filename,encoding=ENCODING) cols = df.columns.values total = float(len(df)) print("{} rows".format(int(total))) for col in cols: uniques = df[col].unique() unique_count = len(uniques) if unique_count>100: print("** {}:{} ({}%)".format(col,unique_count,int(((unique_count)/total)*100))) else: print("** {}:{}".format(col,expand_categories(df[col]))) expand_categories(df[col]) # Analyze KDD-99 import tensorflow.contrib.learn as skflow import pandas as pd import os import numpy as np from sklearn import metrics from scipy.stats import zscore path = "./data/" filename_read = os.path.join(path,"auto-mpg.csv") ###Output _____no_output_____ ###Markdown Encode the feature vectorEncode every row in the database. This is not instant! 
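The encoding cell below calls two helper functions, `encode_numeric_zscore` and `encode_text_dummy`, that this older copy of the notebook never defines (the newer copy earlier in this file does). A self-contained sketch of their behavior on a toy frame, assuming they match those earlier definitions:

```python
import pandas as pd

# Replace a numeric column with its z-score (sketch of the helper
# defined in the newer copy of this notebook).
def encode_numeric_zscore(df, name, mean=None, sd=None):
    if mean is None:
        mean = df[name].mean()
    if sd is None:
        sd = df[name].std()
    df[name] = (df[name] - mean) / sd

# Replace a text column with one dummy (one-hot) column per category.
def encode_text_dummy(df, name):
    dummies = pd.get_dummies(df[name])
    for x in dummies.columns:
        df[f"{name}-{x}"] = dummies[x]
    df.drop(name, axis=1, inplace=True)

# Tiny demonstration frame with one numeric and one text column
demo = pd.DataFrame({"duration": [0.0, 10.0, 20.0],
                     "protocol_type": ["tcp", "udp", "tcp"]})
encode_numeric_zscore(demo, "duration")
encode_text_dummy(demo, "protocol_type")
print(list(demo.columns))
```

After encoding, `duration` is centered and scaled, and `protocol_type` has become one indicator column per observed protocol.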
###Code # Now encode the feature vector encode_numeric_zscore(df, 'duration') encode_text_dummy(df, 'protocol_type') encode_text_dummy(df, 'service') encode_text_dummy(df, 'flag') encode_numeric_zscore(df, 'src_bytes') encode_numeric_zscore(df, 'dst_bytes') encode_text_dummy(df, 'land') encode_numeric_zscore(df, 'wrong_fragment') encode_numeric_zscore(df, 'urgent') encode_numeric_zscore(df, 'hot') encode_numeric_zscore(df, 'num_failed_logins') encode_text_dummy(df, 'logged_in') encode_numeric_zscore(df, 'num_compromised') encode_numeric_zscore(df, 'root_shell') encode_numeric_zscore(df, 'su_attempted') encode_numeric_zscore(df, 'num_root') encode_numeric_zscore(df, 'num_file_creations') encode_numeric_zscore(df, 'num_shells') encode_numeric_zscore(df, 'num_access_files') encode_numeric_zscore(df, 'num_outbound_cmds') encode_text_dummy(df, 'is_host_login') encode_text_dummy(df, 'is_guest_login') encode_numeric_zscore(df, 'count') encode_numeric_zscore(df, 'srv_count') encode_numeric_zscore(df, 'serror_rate') encode_numeric_zscore(df, 'srv_serror_rate') encode_numeric_zscore(df, 'rerror_rate') encode_numeric_zscore(df, 'srv_rerror_rate') encode_numeric_zscore(df, 'same_srv_rate') encode_numeric_zscore(df, 'diff_srv_rate') encode_numeric_zscore(df, 'srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_count') encode_numeric_zscore(df, 'dst_host_srv_count') encode_numeric_zscore(df, 'dst_host_same_srv_rate') encode_numeric_zscore(df, 'dst_host_diff_srv_rate') encode_numeric_zscore(df, 'dst_host_same_src_port_rate') encode_numeric_zscore(df, 'dst_host_srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_serror_rate') encode_numeric_zscore(df, 'dst_host_srv_serror_rate') encode_numeric_zscore(df, 'dst_host_rerror_rate') encode_numeric_zscore(df, 'dst_host_srv_rerror_rate') outcomes = encode_text_index(df, 'outcome') num_classes = len(outcomes) # display 5 rows df.dropna(inplace=True,axis=1) df[0:5] # This is the numeric feature vector, as it goes to the neural 
net ###Output _____no_output_____ ###Markdown Train the Neural Network ###Code import pandas as pd import io import requests import numpy as np import os from sklearn.model_selection import train_test_split from sklearn import metrics from keras.models import Sequential from keras.layers.core import Dense, Activation from keras.callbacks import EarlyStopping # Break into X (predictors) & y (prediction) x, y = to_xy(df,'outcome') # Create a test/train split. 25% test # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) # Create neural net model = Sequential() model.add(Dense(10, input_dim=x.shape[1], kernel_initializer='normal', activation='relu')) model.add(Dense(50, input_dim=x.shape[1], kernel_initializer='normal', activation='relu')) model.add(Dense(10, input_dim=x.shape[1], kernel_initializer='normal', activation='relu')) model.add(Dense(1, kernel_initializer='normal')) model.add(Dense(y.shape[1],activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto') model.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor],verbose=2,epochs=1000) # Measure accuracy pred = model.predict(x_test) pred = np.argmax(pred,axis=1) y_eval = np.argmax(y_test,axis=1) score = metrics.accuracy_score(y_eval, pred) print("Validation score: {}".format(score)) ###Output Validation score: 0.9975547746668179
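A single accuracy number can mask per-class behavior on a dataset this skewed toward smurf and neptune traffic. A minimal sketch of a per-class breakdown with scikit-learn's `confusion_matrix`, using toy arrays in place of the `pred` and `y_eval` computed above:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy stand-ins for the argmax-ed predictions and labels computed above.
y_eval = np.array([0, 0, 1, 1, 2, 2, 2, 2])
pred = np.array([0, 0, 1, 2, 2, 2, 2, 2])

cm = confusion_matrix(y_eval, pred)
# Row i counts samples whose true class is i; the diagonal holds
# correct predictions, so dividing by row sums gives per-class recall.
per_class_acc = cm.diagonal() / cm.sum(axis=1)
print(per_class_acc)
```

Here class 1 is only half recovered even though overall accuracy is 7/8, which is exactly the kind of imbalance effect the KDD-99 class distribution invites.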
tutorials/basics/2_feature_engineering.ipynb
###Markdown Featurizer This notebook demonstrates how to use `pyTigerGraph` for common data processing and feature engineering tasks on graphs stored in `TigerGraph`. Connection to Database The `TigerGraphConnection` class represents a connection to the TigerGraph database. Under the hood, it stores the necessary information to communicate with the database. Please see its documentation for details https://pytigergraph.github.io/pyTigerGraph/GettingStarted/. ###Code from pyTigerGraph import TigerGraphConnection conn = TigerGraphConnection( host="http://127.0.0.1", # Change to your database server's address graphname="Cora", username="tigergraph", password="tigergraph", useCert=False ) # Graph schema and other information. print(conn.gsql("ls")) # Number of vertices for every vertex type conn.getVertexCount('*') # Number of vertices of a specific type conn.getVertexCount("Paper") # Number of edges for every type conn.getEdgeCount() # Number of edges of a specific type conn.getEdgeCount("Cite") ###Output _____no_output_____ ###Markdown Feature Engineering We added the graph algorithms to the workbench to perform feature engineering tasks. The useful functions for extracting features are as follows:1. `listAlgorithms()` function: If it gets a class of algorithms (e.g. Centrality) as input, it will print the available algorithms for the specified category; otherwise, it will print all available algorithms. 2. `installAlgorithm()` function: Gets the name of the algorithm as input and installs the algorithm if it is not already installed. 3. `runAlgorithm()` function: Gets the algorithm name, schema type (e.g. vertex/edge, vertex by default), attribute name (if the result needs to be stored as an attribute on the schema type), and a list of schema type names (the vertices/edges the attribute needs to be saved to; by default, all vertices/edges).
###Code f = conn.gds.featurizer() f.listAlgorithms() ###Output The list of the categories for available algorithms in the GDS (https://github.com/tigergraph/gsql-graph-algorithms): Centrality: pagerank: global: weigthed: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Centrality/pagerank/global/weighted/tg_pagerank_wt.gsql. unweighted: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Centrality/pagerank/global/unweighted/tg_pagerank.gsql. article_rank: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Centrality/article_rank/tg_article_rank.gsql. Betweenness: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Centrality/betweenness/tg_betweenness_cent.gsql. closeness: approximate: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Centrality/closeness/approximate/tg_closeness_cent_approx.gsql. exact: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Centrality/closeness/exact/tg_closeness_cent.gsql. degree: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Centrality/degree/tg_degree_cent.gsql. eigenvector: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Centrality/eigenvector/tg_eigenvector_cent.gsql. harmonic: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Centrality/harmonic/tg_harmonic_cent.gsql. Classification: maximal_independent_set: deterministic: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Classification/maximal_independent_set/deterministic/tg_maximal_indep_set.gsql. Community: connected_components: strongly_connected_components: standard: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Community/connected_components/strongly_connected_components/standard/tg_scc.gsql. 
k_core: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Community/k_core/tg_kcore.gsql. label_propagation: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Community/label_propagation/tg_label_prop.gsql. local_clustering_coefficient: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Community/local_clustering_coefficient/tg_lcc.gsql. louvain: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Community/louvain/tg_louvain.gsql. triangle_counting: fast: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Community/triangle_counting/fast/tg_tri_count_fast.gsql. Embeddings: FastRP: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/GraphML/Embeddings/FastRP/tg_fastRP.gsql. Path: bfs: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Path/bfs/tg_bfs.gsql. cycle_detection: count: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Path/cycle_detection/count/tg_cycle_detection_count.gsql. shortest_path: unweighted: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Path/shortest_path/unweighted/tg_shortest_ss_no_wt.gsql. Topological Link Prediction: common_neighbors: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Topological%20Link%20Prediction/common_neighbors/tg_common_neighbors.gsql. preferential_attachment: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Topological%20Link%20Prediction/preferential_attachment/tg_preferential_attachment.gsql. same_community: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Topological%20Link%20Prediction/same_community/tg_same_community.gsql. total_neighbors: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Topological%20Link%20Prediction/total_neighbors/tg_total_neighbors.gsql. 
###Markdown Examples of running graph algorithms from the GDS library In the following, one example from each class of algorithms is provided. Some algorithms generate a feature per vertex/edge, while others compute a number or summary statistic about the graph. For example, the common neighbors algorithm calculates the number of common neighbors between two vertices. Get PageRank as a feature PageRank is available in the GDS library as tg_pagerank under the class of centrality algorithms https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Centrality/pagerank/global/unweighted/tg_pagerank.gsql. ###Code f.installAlgorithm("tg_pagerank") params = {'v_type':'Paper','e_type':'Cite','max_change':0.001, 'max_iter': 25, 'damping': 0.85, 'top_k': 10, 'print_accum': True, 'result_attr':'','file_path':'','display_edges': False} f.runAlgorithm('tg_pagerank',params=params,feat_name="pagerank",timeout=2147480,sizeLimit = 2000000) ###Output Global schema change succeeded. ###Markdown Run Maximal Independent Set The Maximal Independent Set algorithm is available in the GDS library as tg_maximal_indep_set under the class of classification algorithms https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Classification/maximal_independent_set/deterministic/tg_maximal_indep_set.gsql. ###Code f.installAlgorithm("tg_maximal_indep_set") params = {'v_type': 'Paper', 'e_type': 'Cite','max_iter': 100,'print_accum': False,'file_path':''} f.runAlgorithm('tg_maximal_indep_set',params=params) ###Output _____no_output_____ ###Markdown Get Louvain as a feature The Louvain algorithm is available in the GDS library as tg_louvain under the class of community detection algorithms https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Community/louvain/tg_louvain.gsql.
###Code f.installAlgorithm(query_name='tg_louvain') params = {'v_type': 'Paper', 'e_type':['Cite','reverse_Cite'],'wt_attr':"",'max_iter':10,'result_attr':"cid",'file_path' :"",'print_info':True} f.runAlgorithm('tg_louvain',params,feat_name="cid") ###Output Global schema change succeeded. ###Markdown Get FastRP as a feature The FastRP algorithm is available in the GDS library as tg_fastRP under the class of graph embedding algorithms https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/GraphML/Embeddings/FastRP/tg_fastRP.gsql ###Code f.installAlgorithm("tg_fastRP") params = {'v_type': 'Paper', 'e_type': ['Cite','reverse_Cite'], 'weights': '1,1,2', 'beta': -0.85, 'k': 3, 'reduced_dim': 128, 'sampling_constant': 1, 'random_seed': 42, 'print_accum': False,'result_attr':"",'file_path' :""} f.runAlgorithm('tg_fastRP',params,feat_name ="fastrp_embedding") ###Output _____no_output_____ ###Markdown Run the Breadth-First Search algorithm from a single source node The Breadth-First Search algorithm is available in the GDS library as tg_bfs under the class of path algorithms https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Path/bfs/tg_bfs.gsql. ###Code f.installAlgorithm(query_name='tg_bfs') params = {'v_type': 'Paper', 'e_type':['Cite','reverse_Cite'],'max_hops':10,"v_start":("2180","Paper"), 'print_accum':False,'result_attr':"",'file_path' :"",'display_edges':False} f.runAlgorithm('tg_bfs',params,feat_name="bfs") ###Output Global schema change succeeded.
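tg_bfs labels every reachable vertex with its hop distance from the start vertex. The same traversal can be sketched in plain Python on a toy adjacency list (hypothetical data):

```python
from collections import deque

# Hop-distance labelling by breadth-first search on a toy undirected
# adjacency list (hypothetical data; tg_bfs writes the same label per vertex).
def bfs_hops(adj, start, max_hops=10):
    dist = {start: 0}
    frontier = deque([start])
    while frontier:
        v = frontier.popleft()
        if dist[v] == max_hops:
            continue  # do not expand beyond the hop limit
        for w in adj.get(v, []):
            if w not in dist:
                dist[w] = dist[v] + 1
                frontier.append(w)
    return dist

adj = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2, 5], 5: [4]}
hops = bfs_hops(adj, 1)
```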
###Markdown Calculate the number of common neighbors between two vertices The common neighbors algorithm is available in the GDS library as tg_common_neighbors under the class of Topological Link Prediction algorithms https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Topological%20Link%20Prediction/common_neighbors/tg_common_neighbors.gsql ###Code f.installAlgorithm(query_name='tg_common_neighbors') params={"a":("2180","Paper"),"b":("431","Paper"),"e_type":"Cite","print_res":True} f.runAlgorithm('tg_common_neighbors',params) ###Output _____no_output_____
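Outside the database, this quantity is just the size of the intersection of the two vertices' neighbor sets; a minimal sketch on toy data (hypothetical paper IDs):

```python
# Common-neighbor count between two vertices on a toy adjacency list
# (hypothetical paper IDs; tg_common_neighbors reports the same count).
def common_neighbors(adj, a, b):
    return len(set(adj.get(a, [])) & set(adj.get(b, [])))

adj = {"2180": ["12", "31", "47"], "431": ["31", "47", "96"]}
n_common = common_neighbors(adj, "2180", "431")
```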
notebooks/experiments_with_raga.ipynb
###Markdown Functions to work out probabilities from notations ###Code def old_all_octaves(file): with open(file) as f: return f.read().strip().replace("\n",",").split(",") def old_octaves(): return ['sa','lre','re','lga','ga','ma','mau','pa','lda','da','lni','ni', 'sA','lrE','rE','lgA','gA','mA','mAu','pA','ldA','dA','lnI','nI', 'SA','lRE','RE','lGA','GA','MA','MAu','PA','lDA','DA','lNI','NI'] def old_to_new(file): """ sa, lre, re ... -> sa, re_, re, ga_, ga... """ with open(file) as f: s = f.read() newoctave = all_saptak() for i, item in enumerate(old_octaves()): s = s.replace(item, newoctave[i]) return s def transiotion_hist_up(data): """ data is saragam string read from file """ saptak = all_saptak() +('',) data = data.strip().replace("\n",",,").split(",") hist = {i:{j:0 for j in saptak} for i in saptak} for i, s in enumerate(data[:-1]): hist[s][data[i+1]] += 1 return hist def remove_empty(data): for v in data.values(): del v[''] del data[''] return data def compute_prob(hist): def divide(a, b): return a/b if b > 0 else 0 hist = remove_empty(hist) probs = {} for k, v in hist.items(): probs[k] = {k1: divide(v1,sum(v.values())) for k1, v1 in v.items()} return probs def transiotion_hist_down(data): """ data is saragam string read from file """ saptak = all_saptak() +('',) data = data.strip().replace("\n",",,").split(",") hist = {i:{j:0 for j in saptak} for i in saptak} for i, s in enumerate(data[1:]): hist[s][data[i-1]] += 1 return hist def transiotion_prob_up(file): return compute_prob(transiotion_hist_up(file)) def transiotion_prob_down(file): return compute_prob(transiotion_hist_down(file)) def test_probs(): p1 = transiotion_prob_up(old_to_new("/home/vikrant/Downloads/Bhoop1.txt")) p2 = convert_to_transition(read_prob_from_file("/home/vikrant/Downloads/prob_matrix.txt")) for k in p1: v1 = p1[k] v2 = p2[k] for j in v1: assert abs(v1[j] - v2[j])<= 0.001 old_all_octaves("/home/vikrant/Downloads/AllOctaves.txt" ) help([].extend) 
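The helpers above count note-to-note transitions and normalise each row into probabilities. A compact, self-contained sketch of the same first-order Markov estimation on a toy note sequence (toy data, not the Bhoop notation files):

```python
from collections import defaultdict

# First-order transition probabilities estimated from a toy note sequence
# (toy data; transiotion_hist_up/compute_prob do the same over the notation files).
def transition_probs(seq):
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(seq, seq[1:]):
        counts[cur][nxt] += 1
    probs = {}
    for cur, row in counts.items():
        total = sum(row.values())
        probs[cur] = {nxt: c / total for nxt, c in row.items()}
    return probs

seq = ["Sa", "Re", "Ga", "Re", "Sa", "Re", "Ga", "Pa"]
p = transition_probs(seq)
```

Each row sums to one; for example, "Re" is followed by "Ga" two times out of three in the toy sequence.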
old_to_new("/home/vikrant/Downloads/Bhoop1.txt").strip().replace("\n",",").replace(",,",",").split(",") p1 = transiotion_prob_down(old_to_new("/home/vikrant/Downloads/Bhoop1.txt")) p2 = convert_to_transition(read_prob_from_file("/home/vikrant/Downloads/prob_matrix.txt")) len(p1) == len(p2) p1['Dha'] p2['Dha'] test_probs() %%file bhoop.csv SA,SA,Dha,Pa,Ga,Re,Sa,Re,Ga,Ga,Pa,Ga,Dha,Pa,Ga,Ga Ga,Pa,Dha,SA,RE,SA,Dha,Pa,SA,Pa,Dha,Pa,Ga,Re,Sa,Sa Ga,Ga,Pa,Dha,Pa,SA,SA,SA,Dha,Dha,SA,RE,GA,RE,SA,Dha GA,GA,RE,SA,RE,RE,SA,Dha,SA,Pa,Dha,Pa,Ga,Re,Sa,Sa Ga,Re,Ga,Ga,Sa,Re,Sa,Sa,Sa,Sa,Sa,dha,Sa,Re,Ga,Ga Pa,Ga,Pa,Pa,Dha,Dha,Pa,Pa,Ga,Pa,Dha,SA,Dha,Pa,Ga,Sa Pa,Ga,Ga,Re,Ga,Pa,SA,Dha,SA,SA,SA,SA,Dha,Re,SA,SA Dha,Dha,Dha,Dha,SA,RE,GA,RE,SA,SA,Dha,Pa,Dha,SA,Dha,Pa Ga,Re,Ga,Ga,Ga,Re,Pa,Ga,Dha,Pa,Dha,SA,Dha,Pa,Ga,Sa Sa,Re,Ga,Pa,Ga,Re,Sa,Sa,Re,Pa,Pa,Pa,Re,Ga,Ga,Re Ga,Ga,Pa,Ga,Re,Ga,Pa,Dha,SA,SA,SA,SA,Dha,Dha,Pa,Ga,Pa Dha,RE,SA,SA,Dha,Dha,Pa,Ga,Re,Ga,Pa,Dha,SA,Pa,Dha,SA,Dha,SA,Dha,Pa,Ga,Re,Sa Pa,Ga,Ga,Ga,Pa,Pa,SA,Dha,SA,SA,SA,SA,SA,RE,GA,RE,SA,SA SA,Dha,Dha,SA,SA,SA,RE,RE,Dha,SA,Pa,Dha,SA,SA,Dha,Dha,Pa Ga,Ga,Pa,Ga,Re,Ga,Pa,Dha,SA,SA,RE,GA,RE,SA,Dha,Pa,Dha,SA,Dha,Pa,Ga,Re,Ga,Pa,Ga,Re,Sa Sa,dha,dha,Sa dha,Sa,Re Sa,Re dha,Sa Sa,Re,Ga,Re,Ga,Sa,Re,dha,Sa Sa,Re,Ga,Re,Ga,Pa,Ga,Re,Pa,Ga,dha,dha,Sa Ga,Pa,Dha,Ga,Ga,Ga,Pa Ga,Pa,Dha,Pa,Ga,Re,Sa Ga,Pa,Dha,SA,SA,Dha,Pa,Ga,Re,Ga,Re,Pa,Ga,Re,Sa Ga,Re,Sa,Re,Ga,Pa,Dha,SA,Pa,Dha,SA,RE,GA,RE,SA Dha,SA,RE,SA,Dha,SA,Dha,Pa,Ga,Pa,Dha,Pa,Ga,Pa,Ga,Re,Sa,dha,dha,Sa with open("bhoop.csv") as f: data = f.read() a = aalap("Sa", 8, transiotion_prob_up(data), transiotion_prob_down(data)) transiotion_prob_down(data)['Ga'] take(a, 16) ###Output _____no_output_____ ###Markdown Aalap with nyaas ###Code def aalap_nyaas(initial, beats=8, nyaas = None, transition_up=None, transition_down=None): current = initial scale = all_saptak() yield initial while True: if current in nyaas: aroha = random.choice([True, False]) for i in range(beats): if aroha: current = 
get_next([transition_up[current][v] for v in scale]) else: current = get_next([transition_down[current][v] for v in scale]) yield current a = aalap_nyaas("Sa", beats=8, nyaas=['sa','Sa','SA','re','Re','RE','ga','Ga','GA'], transition_up=transiotion_prob_up(data), transition_down=transiotion_prob_down(data)) for i,item in enumerate(take(a, 32)): print(i+1, item) from collections import deque def search(seq, subseq, end=100): def compare(source, dest): for item in dest: return any(["".join(item).lower() in "".join(source).lower() for item in dest]) n = len(max(subseq, key=len)) window = deque(take(seq, n), n) for i in range(n, end): if compare(window, subseq): yield i-n window = deque(take(seq, n), n) else: window.append(next(seq)) def count(seq): return sum(1 for i in seq) a = aalap_nyaas("Sa", beats=8, nyaas=['sa','Sa','SA','re','Re','RE','ga','Ga','GA'], transition_up=transiotion_prob_up(data), transition_down=transiotion_prob_down(data)) pakad = [["dha","dha","sa"],["ga","re","pa","ga"],["dha","pa","ga","re"]] sum([count(search(a,pakad, 64)) for i in range(1000)])/1000 1024/16 def subset_prob(probs, start, end): subset = probs[start:end] newvalues = [v/sum(subset) for v in subset] return [0 for i in range(start)] + newvalues + [0 for i in range(end, len(probs))] subset_prob([0.1,0.2,0.3,0.1,0.2,0.2],0,3) def aalap_bounded(beats=8, top_bound = 5, transition_up=None, transition_down=None): initial = 'Sa' scale = all_saptak() yield initial current = initial index = scale.index(initial) if top_bound > 0: aroha = True else: aroha = False for i in range(beats): if aroha: current = get_next(subset_prob([transition_up[current][v] for v in scale], 0, index + top_bound)) if scale.index(current) == index + top_bound: print(current, scale.index(current), top_bound+index) aroha = False else: current = get_next(subset_prob([transition_down[current][v] for v in scale], 0, index + top_bound)) yield current a = aalap_bounded(beats=64, top_bound=12, 
transition_up=transiotion_prob_up(data), transition_down=transiotion_prob_down(data)) for i,j in enumerate(a): print(i, j) a = aalap("Sa",8,transiotion_prob_up(data), transiotion_prob_up(data)) take(a, 16) take(a, 32) def transition_probability(data): data = data.strip().replace("\n",",,").split(",") hist = {} for i, item in enumerate(data[:-1]): if item and data[i+1]: itemd = hist.get(item, {}) itemd[data[i+1]] = itemd.get(data[i+1], 0) + 1 hist[item] = itemd prob = {} for k in hist: total = sum(hist[k].values()) prob[k] = {j: v/total for j,v in hist[k].items()} return prob p = transition_probability(data) p.keys() p def sample(items, probs): r = random.random() #random.uniform()? index = 0 while(r >= 0 and index < len(probs)): r -= probs[index] index += 1 return items[index - 1] def aalap_(initial, probs): current = initial while True: yield current targets = [item for item in probs[current]] probability = [probs[current][item] for item in targets] current = sample(targets, probability) sample(list(p['Sa'].keys()), [p['Sa'][k] for k in p['Sa'].keys()]) a = aalap_("Sa", p) sum([count(search(a,pakad,32)) for i in range(1000)])/1000 %%file bhoop1.csv SA,SA,Dha,Pa,Ga,Re,Sa,Re,Ga,Ga,Pa,Ga,Dha,Pa,Ga,Ga Ga,Pa,Dha,SA,RE,SA,Dha,Pa,SA,Pa,Dha,Pa,Ga,Re,Sa,Sa Ga,Ga,Pa,Dha,Pa,SA,SA,SA,Dha,Dha,SA,RE,GA,RE,SA,Dha GA,GA,RE,SA,RE,RE,SA,Dha,SA,Pa,Dha,Pa,Ga,Re,Sa,Sa Ga,Re,Ga,Ga,Sa,Re,Sa,Sa,Sa,Sa,Sa,dha,Sa,Re,Ga,Ga Pa,Ga,Pa,Pa,Dha,Dha,Pa,Pa,Ga,Pa,Dha,SA,Dha,Pa,Ga,Sa Pa,Ga,Ga,Re,Ga,Pa,SA,Dha,SA,SA,SA,SA,Dha,Re,SA,SA Dha,Dha,Dha,Dha,SA,RE,GA,RE,SA,SA,Dha,Pa,Dha,SA,Dha,Pa Ga,Re,Ga,Ga,Ga,Re,Pa,Ga,Dha,Pa,Dha,SA,Dha,Pa,Ga,Sa Sa,Re,Ga,Pa,Ga,Re,Sa,Sa,Re,Pa,Pa,Pa,Re,Ga,Ga,Re Ga,GaPa,Ga,Re,Ga,Pa,Dha,SA,SA,SA,SA,Dha,Dha,Pa,Ga,Pa DhaRE,SA,SA,Dha,Dha,Pa,Ga,Re,GaPa,DhaSA,PaDha,SA,DhaSA,DhaPa,GaRe,Sa Pa,Ga,Ga,Ga,Pa,Pa,SA,Dha,SA,SA,SA,SA,SARE,GARE,SA,SA SA,Dha,Dha,SA,SA,SA,RE,RE,DhaSA,PaDha,SA,SA,Dha,Dha,Pa Ga,GaPa,Ga,Re,Ga,Pa,Dha,SA,SARE,GARE,SA,DhaPa,DhaSA,DhaPa,GaRe,GaPa,GaRe,Sa Sa,dha,dha,Sa 
dha,Sa,Re Sa,Re dha,Sa Sa,Re,Ga,Re,Ga,Sa,Re,dha,Sa Sa,Re,Ga,Re,Ga,Pa,Ga,Re,Pa,Ga,dha,dha,Sa Ga,Pa,Dha,Ga,Ga,Ga,Pa Ga,Pa,Dha,Pa,Ga,Re,Sa Ga,Pa,Dha,SA,SA,Dha,Pa,Ga,Re,Ga,Re,Pa,Ga,Re,Sa Ga,Re,Sa,Re,Ga,Pa,Dha,SA,Pa,Dha,SA,RE,GA,RE,SA Dha,SA,RE,SA,Dha,SA,Dha,Pa,Ga,Pa,Dha,Pa,Ga,Pa,Ga,Re,Sa,dha,dha,Sa bhoop1 = transition_probability(open("bhoop1.csv").read()) a = aalap_("Sa", bhoop1) sum([count(search(a,pakad,32)) for i in range(1000)])/1000 a = aalap_("Sa", bhoop1) take(a, 32) bhoop1 tune = """ SA,SA,Dha,Pa,Ga,Re,Sa,Re,Ga,Ga,Pa,Ga,Dha,Pa,Ga,Ga Ga,Pa,Dha,SA,RE,SA,Dha,Pa,SA,Pa,Dha,Pa,Ga,Re,Sa,Sa Ga,Ga,Pa,Dha,Pa,SA,SA,SA,Dha,Dha,SA,RE,GA,RE,SA,Dha GA,GA,RE,SA,RE,RE,SA,Dha,SA,Pa,Dha,Pa,Ga,Re,Sa,Sa Ga,Re,Ga,Ga,Sa,Re,Sa,Sa,Sa,Sa,Sa,dha,Sa,Re,Ga,Ga Pa,Ga,Pa,Pa,Dha,Dha,Pa,Pa,Ga,Pa,Dha,SA,Dha,Pa,Ga,Sa Pa,Ga,Ga,Re,Ga,Pa,SA,Dha,SA,SA,SA,SA,Dha,Re,SA,SA Dha,Dha,Dha,Dha,SA,RE,GA,RE,SA,SA,Dha,Pa,Dha,SA,Dha,Pa Ga,Re,Ga,Ga,Ga,Re,Pa,Ga,Dha,Pa,Dha,SA,Dha,Pa,Ga,Sa Sa,dha,dha,Sa dha,Sa,Re Sa,Re dha,Sa Sa,Re,Ga,Re,Ga,Sa,Re,dha,Sa Sa,Re,Ga,Re,Ga,Pa,Ga,Re,Pa,Ga,dha,dha,Sa Ga,Pa,Dha,Ga,Ga,Ga,Pa Ga,Pa,Dha,Pa,Ga,Re,Sa Ga,Pa,Dha,SA,SA,Dha,Pa,Ga,Re,Ga,Re,Pa,Ga,Re,Sa Ga,Re,Sa,Re,Ga,Pa,Dha,SA,Pa,Dha,SA,RE,GA,RE,SA Dha,SA,RE,SA,Dha,SA,Dha,Pa,Ga,Pa,Dha,Pa,Ga,Pa,Ga,Re,Sa,dha,dha,Sa """ tune = tune.strip().replace("\n",",").replace(",,",",").split(",") from matplotlib import pyplot scale = all_saptak() pyplot.plot([scale.index(s) for s in tune]) tune ###Output _____no_output_____
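The `sample` helper defined earlier draws from a categorical distribution by subtracting probabilities from a uniform draw; here is the equivalent cumulative-sum formulation, seeded for reproducibility (toy distribution, not the estimated raga probabilities):

```python
import random

# Categorical sampling by walking the cumulative distribution; the `sample`
# helper above does the same by subtracting probabilities from a uniform draw.
def sample_categorical(items, probs, rng):
    r = rng.random()
    cum = 0.0
    for item, p in zip(items, probs):
        cum += p
        if r < cum:
            return item
    return items[-1]  # guard against floating-point round-off

rng = random.Random(0)  # seeded so the experiment is reproducible
draws = [sample_categorical(["Sa", "Re", "Ga"], [0.2, 0.5, 0.3], rng)
         for _ in range(10000)]
freq = {k: draws.count(k) / len(draws) for k in ["Sa", "Re", "Ga"]}
```

With 10,000 draws the empirical frequencies land close to the target probabilities.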
examples/test.ipynb
###Markdown Examples for the implementation of different known distributions for the hmcparameter class ###Code class StateMultivarNormal(HMCParameter): def __init__(self, init_val, mu=0, sigma_inv=1): super().__init__(np.array(init_val)) self.mu = mu self.sigma_inv = sigma_inv def get_energy_grad(self): return np.dot(self.sigma_inv, (self.value - self.mu)) def energy(self, value): return np.dot((value - self.mu).transpose(), np.dot(self.sigma_inv, (value - self.mu))) / 2 def get_energy(self): return self.energy(self.value) def get_energy_for_value(self, value): return self.energy(value) class StateExpDist(HMCParameter): def __init__(self, init_val, gamma): super().__init__(np.array(init_val)) self.gamma = gamma def get_energy_grad(self, *args): return self.gamma def energy(self, value): if value <= 0: return np.inf else: return self.gamma * value def get_energy(self): return self.energy(self.value) def get_energy_for_value(self, value): return self.energy(value) class StateInvGamma(HMCParameter): def __init__(self, init_val, alpha, betta): super().__init__(np.array(init_val)) self.alpha = alpha self.betta = betta def get_energy_grad(self): return (self.alpha + 1) / self.value - self.betta / (self.value ** 2) def energy(self, value): if value <= 0: return np.inf else: return (self.alpha + 1) * np.log(value) + self.betta / value def get_energy(self): return self.energy(self.value) def get_energy_for_value(self, value): return self.energy(value) class StateLapDist(HMCParameter): def __init__(self, init_val): super().__init__(np.array(init_val)) def get_energy_grad(self): return 1 if self.value > 0 else -1 def energy(self, value): return abs(value) def get_energy(self): return self.energy(self.value) def get_energy_for_value(self, value): return self.energy(value) class StatebettaDist(HMCParameter): def __init__(self, init_val, alpha, betta): super().__init__(np.array(init_val)) self.alpha = alpha self.betta = betta def get_energy_grad(self): return (1 - self.alpha) / 
self.value + (self.betta - 1) / (1 - self.value) def energy(self, value): if value < 0 or value > 1: return np.inf else: return (1 - self.alpha) * np.log(value) + (1 - self.betta) * np.log(1 - value) def get_energy(self): return self.energy(self.value) def get_energy_for_value(self, value): return self.energy(value) ###Output _____no_output_____ ###Markdown Implementation of the default velocity parameter with a Gaussian distribution ###Code class VelParam(HMCParameter): def __init__(self, init_val): super().__init__(np.array(init_val)) dim = np.array(init_val).shape self.mu = np.zeros(dim) self.sigma = np.identity(dim[0]) def gen_init_value(self): self.value = multivariate_normal.rvs(self.mu, self.sigma) def get_energy_grad(self): return self.value def energy(self, value): return np.dot(value, value) / 2 def get_energy(self): return self.energy(self.value) def get_energy_for_value(self, value): return self.energy(value) ###Output _____no_output_____ ###Markdown Example of creating instances for the state and velocity and running the HMC algorithm for a multivariate Gaussian distribution ###Code state = StateMultivarNormal([1, 2, 3, 4, 5, 6], [2, 3, 4, 5, 6, 7], np.identity(6)) vel = VelParam(np.array([1, 1, 1, 1, 1, 1])) delta = 1 n = 10 m = 10000 hmc = HMC(state, vel, delta, n, m) # create an instance of the HMC class hmc.HMC() # Run the HMC algorithm res = np.array(hmc.get_samples()) # Getting the chain of samples for the state parameter # Plotting the chains for each dimension of the multivariate Gaussian plt.plot(res) plt.xlabel('iteration number') plt.ylabel('value') plt.show() # Looking at the samples for one variate as a histogram sns.distplot(res[:,3]) plt.show() # looking at the acceptance rate print('Acceptance rate: %f' %hmc.calc_acceptence_rate()) ###Output Acceptance rate: 0.770700 ###Markdown Method using $z[n]$ and random numbers (the current proposed method) ###Code %%prun print("N(P-1): {}".format(N*(P-1))) print() test[0].estimate(T=T, ord=mode, log=True, gctype='0') test[0].result()
###Output final mse: 0.006662085280325191 ###Markdown Method using $z[n]$ but not random numbers ###Code %%prun print("N(P-1): {}".format(N*(P-1))) print() test[1].estimate(T=T, ord=mode, log=True, gctype='1') test[1].result() ###Output final mse: 0.005586122013534853 ###Markdown Method using random numbers but not $z[n]$ ###Code %%prun print("N(P-1): {}".format(N*(P-1))) print() test[2].estimate(T=T, ord=mode, log=True, gctype='2') test[2].result() ###Output final mse: 0.08610414711112314 ###Markdown Method using neither $z[n]$ nor random numbers ###Code %%prun print("N(P-1): {}".format(N*(P-1))) print() test[3].estimate(T=T, ord=mode, log=True, gctype='3') test[3].result() ###Output final mse: 0.08037400751643581 ###Markdown Method using the irregular value $b[n] = \sum_{p \ge 2, p \notin S_n} w^t_p [n]$ ###Code %%prun print("N(P-1): {}".format(N*(P-1))) print() test[4].estimate(T=T, ord=mode, log=True, gctype='4') test[4].result() plt.figure(figsize=(14, 5.5)) plt.subplot(121) plt_CC(test[0].communication_cost, 'z[n]: O, rand: O', T, N, P) plt_CC(test[1].communication_cost, 'z[n]: O, rand: X', T, N, P) plt_CC(test[2].communication_cost, 'z[n]: X, rand: O', T, N, P) plt_CC(test[3].communication_cost, 'z[n]: X, rand: X', T, N, P) plt_CC(test[4].communication_cost, 'irregular', T, N, P) plt.grid() plt.subplot(122) plt_MSE(oamp.mse, 'OAMP', T, 'black') plt_MSE(test[0].mse, 'z[n]: O, rand: O', T) plt_MSE(test[1].mse, 'z[n]: O, rand: X', T) plt_MSE(test[2].mse, 'z[n]: X, rand: O', T) plt_MSE(test[3].mse, 'z[n]: X, rand: X', T) plt_MSE(test[4].mse, 'irregular', T) plt.grid() for n in range(N): arr = iidG.A[:, n].copy() print(M * np.var(arr)) np.mean(iidG.A) np.var(iidG.A) print(SNR) print(SNRdB) print(sigma) print(np.mean(iidG.A)) print(np.mean(x)) print(np.mean(iidG.A @ x)) print(np.var(iidG.A)) print(np.var(x)) print(np.var(iidG.A @ x)) 10 * np.log10(np.var(oamp.A @ oamp.x) / np.var(oamp.n)) np.var(oamp.n) oamp_noise = oamp.n.copy().flatten() plt.plot(oamp_noise) for i in range(5): sample = test[i].oamps Axvar = 0 nvar = 0 snr = 0 for p in range(P): Axvar
+= np.var(sample[p].A_p @ sample[p].x) n = sample[p].y - sample[p].A_p @ sample[p].x nvar += np.var(n) snr += 10*np.log10(np.var(sample[p].A_p @ sample[p].x) / np.var(n)) print(10*np.log10(np.var(sample[p].A_p @ sample[p].x) / np.var(n))) print(10*np.log10(Axvar / nvar)) print(snr / P) print('='*21) test_noise = [] for i in range(5): n = np.empty(0) for p in range(P): n = np.append(n, test[i].oamps[p].y - test[i].oamps[p].A_p @ x) test_noise.append(n.flatten()) plt.plot(test_noise[0]) plt.plot(test_noise[1]) plt.plot(test_noise[2]) plt.plot(test_noise[3]) plt.plot(test_noise[4]) for i in range(5): norm = np.linalg.norm(oamp_noise - test_noise[i]) print(norm) N = 4000 # number of columns alpha = 0.5 # compression ratio M = int(alpha*N) # number of rows rho = 0.2 # fraction of nonzero components SNR = 60 # signal-to-noise ratio SNRdB = 10**(0.1*SNR) kappa = 5 # condition number P = 2 # number of nodes T = 40 # number of iterations sim = 100 # number of trials xs = [bernouli_gaussian(N, rho) for _ in range(sim)] l = 5 color = ['tab:blue', 'tab:orange', 'tab:green', 'tab:red', 'tab:pink'] MSE_iidG_oamp = np.empty((sim, T+1)) MSE_iidG_test = np.empty((sim, 5, T+1)) CommCost_iidG_test = np.empty((sim, 5, T)) for i in tqdm(range(sim)): iidG = iidGaussian(M, N, m=0, v=1/M) Ax = iidG.A @ xs[i] sigma = np.var(Ax) / SNRdB noise = np.random.normal(0, sigma**0.5, (M, 1)) oamp = D_OAMP(iidG.A, xs[i], noise, 1) test = [D_Test(iidG.A, xs[i], noise, P) for _ in range(5)] oamp.estimate(T=T) MSE_iidG_oamp[i] = oamp.mse for j in range(5): test[j].estimate(T, gctype=str(j)) MSE_iidG_test[i, j] = test[j].mse CommCost_iidG_test[i, j] = test[j].communication_cost MSE_iidG_oamp_mean = np.mean(MSE_iidG_oamp, axis=0) MSE_iidG_test_mean = np.empty((5, T+1)) CommCost_iidG_test_mean = np.empty((5, T)) for i in range(5): MSE_iidG_test_mean[i] = np.mean(MSE_iidG_test[:, i], axis=0) CommCost_iidG_test_mean[i] = np.mean(CommCost_iidG_test[:, i], axis=0) plt.figure(figsize=(14, 5.5)) plt.subplot(121) plt.title('Communication Cost(i.i.d.
Gaussian)') plt_CC(CommCost_iidG_test_mean[0], 'z[n]: O, rand: O', T, N, P, color[0]) plt_CC(CommCost_iidG_test_mean[1], 'z[n]: O, rand: X', T, N, P, color[1]) plt_CC(CommCost_iidG_test_mean[2], 'z[n]: X, rand: O', T, N, P, color[2]) plt_CC(CommCost_iidG_test_mean[3], 'z[n]: X, rand: X', T, N, P, color[3]) plt_CC(CommCost_iidG_test_mean[4], 'irregular', T, N, P, color[4]) plt.grid() plt.subplot(122) plt.title('MSE(i.i.d. Gaussian)') plt_MSE(MSE_iidG_oamp_mean, 'OAMP', T, 'black') plt_MSE(MSE_iidG_test_mean[0], 'z[n]: O, rand: O', T, color[0]) plt_MSE(MSE_iidG_test_mean[1], 'z[n]: O, rand: X', T, color[1]) plt_MSE(MSE_iidG_test_mean[2], 'z[n]: X, rand: O', T, color[2]) plt_MSE(MSE_iidG_test_mean[3], 'z[n]: X, rand: X', T, color[3]) plt_MSE(MSE_iidG_test_mean[4], 'irregular', T, color[4]) plt.legend(loc="lower left") plt.grid() plt_CC(CommCost_iidG_test_mean[0], 'z[n]: O, rand: O', T, N, P, color[0]) plt_CC(CommCost_iidG_test_mean[1], 'z[n]: O, rand: X', T, N, P, color[1]) plt_CC(CommCost_iidG_test_mean[2], 'z[n]: X, rand: O', T, N, P, color[2]) plt_CC(CommCost_iidG_test_mean[3], 'z[n]: X, rand: X', T, N, P, color[3]) plt_CC(CommCost_iidG_test_mean[4], 'irregular', T, N, P, color[4]) plt.grid() plt_MSE(MSE_iidG_oamp_mean, 'OAMP', T, 'black') plt_MSE(MSE_iidG_test_mean[0], 'z[n]: O, rand: O', T, color[0]) plt_MSE(MSE_iidG_test_mean[1], 'z[n]: O, rand: X', T, color[1]) plt_MSE(MSE_iidG_test_mean[2], 'z[n]: X, rand: O', T, color[2]) plt_MSE(MSE_iidG_test_mean[3], 'z[n]: X, rand: X', T, color[3]) plt_MSE(MSE_iidG_test_mean[4], 'irregular', T, color[4]) plt.grid() MSE_UniInv_oamp = np.empty((sim, T+1)) MSE_UniInv_test = np.empty((sim, 5, T+1)) CommCost_UniInv_test = np.empty((sim, 5, T)) for i in tqdm(range(sim)): UniInv = UniInvar(M, N, kappa) Ax = UniInv.A @ xs[i] sigma = np.var(Ax) / SNRdB noise = np.random.normal(0, sigma**0.5, (M, 1)) oamp = D_OAMP(iidG.A, xs[i], noise, 1) test = [D_Test(iidG.A, xs[i], noise, P) for _ in range(5)] oamp.estimate(T=T) 
MSE_UniInv_oamp[i] = oamp.mse for j in range(5): test[j].estimate(T, gctype=str(j)) MSE_UniInv_test[i, j] = test[j].mse CommCost_UniInv_test[i, j] = test[j].communication_cost MSE_UniInv_oamp_mean = np.mean(MSE_UniInv_oamp, axis=0) MSE_UniInv_test_mean = np.empty((5, T+1)) CommCost_UniInv_test_mean = np.empty((5, T)) for i in range(5): MSE_UniInv_test_mean[i] = np.mean(MSE_UniInv_test[:, i], axis=0) CommCost_UniInv_test_mean[i] = np.mean(CommCost_UniInv_test[:, i], axis=0) plt.figure(figsize=(14, 5.5)) plt.subplot(121) plt.title('Communication Cost(Unitary Invariant Matrix)') plt_CC(CommCost_UniInv_test_mean[0], 'z[n]: O, rand: O', T, N, P, color[0]) plt_CC(CommCost_UniInv_test_mean[1], 'z[n]: O, rand: X', T, N, P, color[1]) plt_CC(CommCost_UniInv_test_mean[2], 'z[n]: X, rand: O', T, N, P, color[2]) plt_CC(CommCost_UniInv_test_mean[3], 'z[n]: X, rand: X', T, N, P, color[3]) plt_CC(CommCost_UniInv_test_mean[4], 'irregular', T, N, P, color[4]) plt.grid() plt.subplot(122) plt.title('MSE(Unitary Invariant Matrix)') plt_MSE(MSE_UniInv_oamp_mean, 'OAMP', T, 'black') plt_MSE(MSE_UniInv_test_mean[0], 'z[n]: O, rand: O', T, color[0]) plt_MSE(MSE_UniInv_test_mean[1], 'z[n]: O, rand: X', T, color[1]) plt_MSE(MSE_UniInv_test_mean[2], 'z[n]: X, rand: O', T, color[2]) plt_MSE(MSE_UniInv_test_mean[3], 'z[n]: X, rand: X', T, color[3]) plt_MSE(MSE_UniInv_test_mean[4], 'irregular', T, color[4]) plt.legend(loc="lower left") plt.grid() ###Output _____no_output_____ ###Markdown ###Code !pip install tfest # import os # os.chdir("..") import base64 import requests import matplotlib.pyplot as plt import numpy as np from scipy import signal import tfest def get_values_from_github(u_name, y_name): u_get = requests.get("https://raw.githubusercontent.com/giuliovv/bldc_project_work/master/data/tfest/" + u_name + ".csv").text y_get = requests.get("https://raw.githubusercontent.com/giuliovv/bldc_project_work/master/data/tfest/" + y_name + ".csv").text # Last value is empty u = 
np.array(u_get.split("\n")[:-1]).astype(float) y = np.array(y_get.split("\n")[:-1]).astype(float) return u, y s1 = signal.lti([1], [1, 1]) w, mag, phase = s1.bode() plt.figure() plt.semilogx(w, mag) # Bode magnitude plot plt.grid() plt.figure() plt.semilogx(w, phase) # Bode phase plot plt.grid() plt.show() t = np.linspace(0, 5, num=500) u = np.ones_like(t) tout, y, x = signal.lsim(s1, u, t) plt.plot(t, y) plt.xlabel('Time [s]') plt.ylabel('Amplitude') plt.title('Step response for 1. Order Lowpass') plt.grid() u, y = get_values_from_github("sin_sweep", "after_filter") te = tfest.tfest(u=u, y=y) te.estimate(nzeros=0, npoles=1, init_value=1, method="fft", time=10) te.get_transfer_function() te.plot_bode() te.plot() ###Output _____no_output_____ ###Markdown Example code Importing library ###Code import sys,os homelib = ".." if not homelib in sys.path: sys.path.append(homelib) from object_publisher import * ###Output _____no_output_____ ###Markdown Defining a class to be published in the CLI and the Flask web app The `publish` decorator is applied to methods to be used as sub-commands or URL handlers.In the CLI, every method is called with the following arguments.```$ exe <method> [parameters ...]```Method arguments without a default value are used as positional parameters, and arguments with a default value are used as optional parameters (`--`).In the Flask app, a method is called with the following URL access.```//[path spec]/```where the path spec is specified in the decorator:```@publish(flask={"path": ["args to be specified in path", ...]
})```The source of the request parameters is specified by:```@publish(flask={"params": ""})```where `` is one of:|value|notes||-|-||args|parameters are specified as field parameters in the URL.||form|parameters are specified as the content body of the request, with standard form representation.||json|parameters are specified as the content body of the request, with 'Content-type: application/json'| Example class`hello`, `world`, `test`, and `show_resource` are defined to be published.`not_published` is not published, so no interface is built for the `not_published` method.The docstring is used as the source of help in the CLI interface. ###Code class Test: def __init__(self): self.x = "X" self.y = "Y" @publish def hello(self): return "Hello" def not_published(self): return "private.: %s"%self.x @publish def world(self, name): """ World command Parameters ---------- name: str Name (for display) Returns ------- """ return name @publish(flask = {"params": "form"}) def test(self, name, kw=None): """ Test command A slightly more detailed description of the Test method. Parameters ---------- name: str Name (for display) kw: str Keyword for display Returns ------- """ return "%s-%s-%s"%(self.y, name, kw) @publish(flask = {"path": ["id", "attr"], "params": "json"}) def show_resource(self, id, attr, mustarg, someargs="test"): print("TestID: %s"%id) print("TestAttr: %s"%attr) print("MustArg: %s"%mustarg) print("SomeArgs: %s"%someargs) return {"TestID": id, "TestAttr": attr, "MustArg": mustarg, "SomeArgs": someargs} ###Output _____no_output_____ ###Markdown Typical usage First step: Create an object and call methods as on a normal library instance.A published method can be used as a normal method; no special handling is required to invoke it.
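To make the mechanics concrete, here is a minimal sketch of how a decorator like `publish` can attach interface metadata while working both bare and with keyword arguments. This is illustrative only and is deliberately named `publish_sketch` so it does not shadow the real object_publisher decorator:

```python
# Illustrative sketch of a metadata-attaching decorator, usable both bare
# (@publish_sketch) and with keywords (@publish_sketch(flask={...})).
# Hypothetical name; NOT the actual object_publisher implementation.
def publish_sketch(func=None, **meta):
    def wrap(f):
        f.__published__ = True
        f.__publish_meta__ = meta
        return f
    if func is not None:   # used bare: @publish_sketch
        return wrap(func)
    return wrap            # used with arguments: @publish_sketch(...)

class Demo:
    @publish_sketch
    def hello(self):
        return "Hello"

    @publish_sketch(flask={"params": "form"})
    def test(self, name):
        return name

    def hidden(self):
        return None

published = [n for n in ("hello", "test", "hidden")
             if getattr(getattr(Demo, n), "__published__", False)]
```

An interface builder can then walk the class, pick up only the marked methods, and read their metadata to decide routing and parameter sources.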
###Code obj = Test() display(obj.hello()) display(obj.not_published()) display(obj.world("me")) display(obj.show_resource("id1", "attr1", "must")) ###Output _____no_output_____ ###Markdown Second step: Call the object from the command line interface. Build the CLI interface object and run it with sys.argv or a parameter array.You can build the interface with one of two initialization parameters:1. specifying an instance by `object=` The same object is used throughout several `run` calls in the same process.2. specifying a class (and an optional allocator/deallocator) by `klass=, [allocator=, deallocator=]` A new object is created for every call of the `run` method. If an allocator / deallocator is not specified, a simple factory calling the default constructor is automatically generated. Global help ###Code try: CLI(object=obj).run(["-h"]) except SystemExit as e: pass ###Output usage: Test [-h] {hello,world,test,show_resource} ... positional arguments: {hello,world,test,show_resource} sub-command help hello world World command test Test command show_resource optional arguments: -h, --help show this help message and exit ###Markdown Sub-command help for the 'hello' method. ###Code try: CLI(klass=Test).run(["hello", "-h"]) except SystemExit as e: pass ###Output usage: Test hello [-h] optional arguments: -h, --help show this help message and exit ###Markdown Run the method via the "hello" subcommand. ###Code CLI(klass=Test).run(["hello"]) ###Output 'Hello' ###Markdown Sub-command help for the 'world' method.The docstring is used as the source of the parameter description. ###Code try: CLI(klass=Test).run(["world", "-h"]) except SystemExit as e: pass ###Output usage: Test world [-h] <name> positional arguments: <name> Name (for display) optional arguments: -h, --help show this help message and exit ###Markdown Run the method via the "world" subcommand.The `name` parameter is filled with the first sub-command argument because `name` is a positional parameter.
###Code CLI(object=obj).run(["world", "him"]) ###Output 'him' ###Markdown Sub-command help for the 'test' method.`name` is treated as a positional parameter because it has no default value, while `kw` is treated as an optional parameter because it has a default value (`None`). ###Code try: CLI(object=obj).run(["test", "-h"]) except SystemExit as e: pass ###Output usage: Test test [-h] [--kw <kw>] <name> positional arguments: <name> Name (for display) optional arguments: -h, --help show this help message and exit --kw <kw> Keyword for display ###Markdown Run the method via the "test" subcommand without any optional parameters. ###Code CLI(object=obj).run(["test", "its"]) ###Output 'Y-its-None' ###Markdown Run the method via the "test" subcommand with explicit optional parameters. ###Code CLI(klass=Test).run(["test", "its", "--kw", "TEST"]) ###Output 'Y-its-TEST' ###Markdown Sub-command help for the 'show_resource' method.This method has an explicit annotation for Flask mode, so its CLI parameters are treated differently from the web app's. ###Code try: CLI(klass=Test).run(["show_resource", "-h"]) except SystemExit as e: pass ###Output usage: Test show_resource [-h] [--someargs <someargs>] <id> <attr> <mustarg> positional arguments: <id> <attr> <mustarg> optional arguments: -h, --help show this help message and exit --someargs <someargs> ###Markdown Run the method via the "show_resource" subcommand. ###Code CLI(klass=Test).run(["show_resource", "test","ATTR","must!!!","--someargs","keyword"]) ###Output TestID: test TestAttr: ATTR MustArg: must!!! SomeArgs: keyword {'MustArg': 'must!!!', 'SomeArgs': 'keyword', 'TestAttr': 'ATTR', 'TestID': 'test'} ###Markdown Third step: Publish the object as a web service in Flask.**CAUTION**: After testing, you need to stop the notebook manually.After launching the web service, you can try it with the following commands in the terminal.1. Simple "GET" service.```$ curl http://localhost:9999/Test/hello```2. "GET" service with arguments```$ curl http://localhost:9999/Test/world?name=test```3.
"POST" service with form representation for arguments `@publish(flask={"params": "form"})` is used. The method is automatically guessed as "POST" because it has a "form"-type parameter. You can override it by explicitly specifying the method in the decorator.```$ curl -X POST --form name="Example user" http://localhost:9999/Test/test```4. "POST" service with JSON representation for arguments "`id`" and "`attr`" are specified as part of the path in the URL. `@publish(flask={"path": ["id", "attr"], "params": "json", "method": "POST"})` is used.```$ curl -X POST -d '{"mustarg": 1}' -H 'Content-type: application/json' http://localhost:9999/Test/trend/world/show_resource``` ###Code Flask(klass=Test).run(port=9999) class Test2(Test): def hello(self): print("hidden by Test2") return None @publish def world(self, you): print("It's you, %s"%you) return you obj2=Test2() obj2.hello() obj2.world("ME") CLI(object=obj2).run(["-h"]) CLI(klass=Test2).run(["world", "YOU"]) ###Output It's you, YOU 'YOU' ###Markdown We work on a 2D dataset ###Code n_channels = 1 n_classes = 4 # hyperparameters NUM_EPOCHS = 3 BATCH_SIZE = 64 lr = 0.001 # preprocessing data_transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean=[.5], std=[.5])]) # load the data DataClass = getattr(medmnist, 'OCTMNIST') train_dataset = DataClass(split='train', transform=data_transform, download=True) test_dataset = DataClass(split='test', transform=data_transform, download=True) pil_dataset = DataClass(split='train', download=True) # encapsulate data into dataloader form train_loader = data.DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle=True) train_loader_at_eval = data.DataLoader(dataset=train_dataset, batch_size=2*BATCH_SIZE, shuffle=False) test_loader = data.DataLoader(dataset=test_dataset, batch_size=2*BATCH_SIZE, shuffle=False) # define the model class Net(nn.Module): def __init__(self, in_channels, num_classes): super(Net, self).__init__() self.layer1 = nn.Sequential( nn.Conv2d(in_channels, 16, kernel_size=3),
nn.BatchNorm2d(16), nn.ReLU()) self.layer2 = nn.Sequential( nn.Conv2d(16, 16, kernel_size=3), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2)) self.layer3 = nn.Sequential( nn.Conv2d(16, 64, kernel_size=3), nn.BatchNorm2d(64), nn.ReLU()) self.layer4 = nn.Sequential( nn.Conv2d(64, 64, kernel_size=3), nn.BatchNorm2d(64), nn.ReLU()) self.layer5 = nn.Sequential( nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2)) self.fc = nn.Sequential( nn.Linear(64 * 4 * 4, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, num_classes)) def forward(self, x): x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.layer4(x) x = self.layer5(x) x = x.view(x.size(0), -1) x = self.fc(x) return x model = Net(in_channels=n_channels, num_classes=n_classes) # define the loss function criterion = nn.CrossEntropyLoss() # define the optimizer optimizer = optim.SGD(model.parameters(), lr=lr, momentum=0.9) # training loop for epoch in range(NUM_EPOCHS): train_correct, train_total, test_correct, test_total = 0, 0, 0, 0 model.train() # set the model to training mode for inputs, targets in tqdm(train_loader): optimizer.zero_grad() outputs = model(inputs) targets = targets.squeeze().long() loss = criterion(outputs, targets) loss.backward() optimizer.step() # evaluation def test(split): model.eval() # set the model to evaluation mode y_true = torch.tensor([]) y_score = torch.tensor([]) data_loader = train_loader_at_eval if split == 'train' else test_loader with torch.no_grad(): for inputs, targets in tqdm(data_loader): outputs = model(inputs) targets = targets.squeeze().long() outputs = outputs.softmax(dim=-1) targets = targets.float().resize_(len(targets), 1) y_true = torch.cat((y_true, targets), 0) y_score = torch.cat((y_score, outputs), 0) y_true = y_true.numpy() y_score = y_score.detach().numpy() evaluator = Evaluator('octmnist', split) metrics = evaluator.evaluate(y_score) print('%s auc: %.3f acc:%.3f' % (split, *metrics)) print('==> Evaluating ...') test('train') test('test') ###Output
==> Evaluating ... ###Markdown Papermill Test NotebookBelow are some papermill parameters and code to save some values to glue. The goal is to provide the parameters via papermill, read them back using glue, and return them all as part of the papermill API. ###Code !pip -q install nteract-scrapbook[all] import scrapbook as sb string num obj sb.glue("string", string) sb.glue("number", num) sb.glue("Object", obj) sb.glue("statusCode", 201) ###Output _____no_output_____ ###Markdown To do * substituents that have multiple points of attachment* second order enumeration (using enumerated fragments as seeds) Issues* problems with Si and Carboxylic Acid substituents: cannot canonicalize ###Code from smilescombine import Combiner substituents = ['(N(C)C)', '(O)', '(N)', '(S)', '(C)', '(F)'] combiner = Combiner('c1ccsc1', substituents, nmax=2, auto_placement=True) combiner.combine_substituents('./SMILES.csv') ###Output Skeleton SMILES: c1ccsc1 Number of vacant sites: 4 Numer of unique substituent permutations: 127 ###Markdown Test IMarkdown ```mermaidflowchart LR;P[Execute Markdown] --> Q[Store MIME-bundles in Attachments]A[Render Markdown] --> B[Create Placeholders] --> C[Replace Placeholders from Attachments]``` Patch IPyWidgets ###Code import ipywidgets as w import numpy as np from IPython.display import HTML def repr_widget_mimebundle(self, include, exclude): plaintext = repr(self) if len(plaintext) > 110: plaintext = plaintext[:110] + '…' data = { 'text/plain': plaintext, } if self._view_name is not None: # The 'application/vnd.jupyter.widget-view+json' mimetype has not been registered yet. # See the registration process and naming convention at # http://tools.ietf.org/html/rfc6838 # and the currently registered mimetypes at # http://www.iana.org/assignments/media-types/media-types.xhtml.
data['application/vnd.jupyter.widget-view+json'] = { 'version_major': 2, 'version_minor': 0, 'model_id': self._model_id } return data try: del w.Widget._ipython_display_ except AttributeError: pass w.Widget._repr_mimebundle_ = repr_widget_mimebundle ###Output _____no_output_____ ###Markdown The best thing about Markdown is the ability to include inspirational messages from space: ###Code message = HTML("""<a style="color:green">hello from Mars</a>""") message ###Output _____no_output_____ ###Markdown Receiving broadcast ...> {{ message }} ###Code slider = w.IntSlider(layout=w.Layout(display="inline-flex")) slider ###Output _____no_output_____ ###Markdown We can also include widgets!This is a slider — {{ slider }} And we can include raw text (as blocks, currently)! ###Code mat = [ [1, 2, 3], [4, 5, 6], [7, 8, 9] ] ###Output _____no_output_____ ###Markdown In the above example, the table has {{ len(mat) }} rows.The first row has {{ len(mat[0]) }} values, which sum to {{ sum(mat[0]) }} Expressions are also supported: ###Code def get_slider(): return slider ###Output _____no_output_____ ###Markdown Setting for Colab Edit->Notebook Setting->Enable GPU ###Code dir_of_file='MyDrive/Colab Notebooks/attributionpriors/examples' from google.colab import drive import os drive.mount('./gdrive') # Print the current working directory print("Current working directory: {0}".format(os.getcwd())) # Change the current working directory os.chdir('./gdrive/'+dir_of_file) # Print the current working directory print("Current working directory: {0}".format(os.getcwd())) import pkg_resources if 'shap' not in [i.key for i in pkg_resources.working_set]: !pip install shap ###Output _____no_output_____ ###Markdown Import library ###Code import sys sys.path.insert(0, '../') import numpy as np import pandas as pd import matplotlib.pyplot as plt import altair as alt import torch from torch.autograd import grad from torch.utils.data import Dataset, DataLoader, Subset from torch.utils.data.dataset 
import random_split import torchvision from torchvision import transforms import shap ###Output _____no_output_____ ###Markdown Demo ###Code class BinaryData(Dataset): def __init__(self, X, y=None, transform=None): self.X=X self.y=y self.transform=transform def __len__(self): return len(self.X) def __getitem__(self, index): sample=self.X[index,:] if self.transform is not None: sample=self.transform(sample) if self.y is not None: return sample, self.y[index] else: return sample batch_size=64 a_train=torch.empty(1000,1).uniform_(0,1) x_train=torch.bernoulli(a_train) x_train=torch.cat([x_train,x_train],axis=1) y_train=x_train[:,0] train_dataset=BinaryData(x_train, y_train) train_loader=DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, drop_last=True) #a_test=torch.empty(1000,2).uniform_(0,1) #x_test=torch.bernoulli(a_test) #y_test=x_test[:,0] #test_dataset=BinaryData(x_test, y_test) #test_loader=DataLoader(dataset=test_dataset, batch_size=64, shuffle=True, drop_last=True) class MLP(torch.nn.Module): def __init__(self): super(MLP,self).__init__() self.layers=torch.nn.Sequential(torch.nn.Linear(2,1), torch.nn.Sigmoid()) def forward(self,x): x=self.layers(x) return x def calculate_dependence(model): zero_arange=torch.tensor(np.concatenate([np.zeros(100).reshape(-1,1), np.arange(0,1,0.01).reshape(-1,1)],axis=1)).float().to(device) one_arange=torch.tensor(np.concatenate([np.ones(100).reshape(-1,1), np.arange(0,1,0.01).reshape(-1,1)],axis=1)).float().to(device) arange_zero=torch.tensor(np.concatenate([np.arange(0,1,0.01).reshape(-1,1), np.zeros(100).reshape(-1,1)],axis=1)).float().to(device) arange_one=torch.tensor(np.concatenate([np.arange(0,1,0.01).reshape(-1,1), np.ones(100).reshape(-1,1)],axis=1)).float().to(device) dep1=(model(one_arange)-model(zero_arange)).mean().detach().cpu().numpy().reshape(-1)[0] dep2=(model(arange_one)-model(arange_zero)).mean().detach().cpu().numpy().reshape(-1)[0] return dep2/dep1, dep1, dep2 device=torch.device('cuda') 
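The training loops below penalize expected-gradients attributions computed by `AttributionPriorExplainer`. As a reference point, here is a minimal NumPy sketch of that estimator: sample a reference and an interpolation coefficient, then average `(x - x') * grad f` over the interpolants. The toy model and helper names (`model_f`, `num_grad`, `expected_gradients`) are illustrative assumptions, and finite differences stand in for autograd; this is not the library's torch implementation.

```python
# Hedged sketch of the expected-gradients estimator used below.
import numpy as np

def model_f(x):
    # toy differentiable model: f(x) = x0^2 + 2*x1
    return x[0] ** 2 + 2 * x[1]

def num_grad(f, x, eps=1e-5):
    # central finite differences stand in for autograd here
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def expected_gradients(f, x, references, k=500, seed=0):
    # Monte-Carlo estimate of E[(x - x') * grad f(x' + alpha * (x - x'))]
    rng = np.random.default_rng(seed)
    total = np.zeros_like(x)
    for _ in range(k):
        ref = references[rng.integers(len(references))]
        alpha = rng.uniform()
        total += (x - ref) * num_grad(f, ref + alpha * (x - ref))
    return total / k

attr = expected_gradients(model_f, np.array([1.0, 1.0]), [np.zeros(2)])
# with a single zero reference this converges to integrated gradients:
# attr is approximately [1.0, 2.0], and attr.sum() is approximately
# f(x) - f(reference) = 3 (the completeness property)
```

Increasing `k` (the number of reference/interpolation samples per input, the same role as the `k` argument in the experiments below) reduces the variance of the estimate.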
convergence_list1_list_eg=[] convergence_list2_list_eg=[] convergence_list3_list_eg=[] for k in [1,2,3,4,5]: print('k =',k) model=MLP().to(device) with torch.no_grad(): model.layers[0].weight[0,0]=10 model.layers[0].weight[0,1]=10 model.layers[0].bias[0]=-6 x_zeros = torch.ones_like(x_train[:,:]) background_dataset = BinaryData(x_zeros) explainer = AttributionPriorExplainer(background_dataset, None, 64, k=k) optimizer = torch.optim.Adam(model.parameters(), lr=1e-2) bce_term = torch.nn.BCELoss() train_loss_list_mean_list=[] convergence_list1=[] convergence_list2=[] convergence_list3=[] for epoch in range(200): train_loss_list=[] for i, (x, y_true) in enumerate(train_loader): x, y_true= x.float().to(device), y_true.float().to(device) optimizer.zero_grad() y_pred=model(x) eg=explainer.attribution(model, x) eg_abs_mean=eg.abs().mean(0) loss=bce_term(y_pred, y_true.unsqueeze(1)) + eg_abs_mean[1] loss.backward(retain_graph=True) optimizer.step() train_loss_list.append(loss.item()) train_loss_list_mean=np.mean(train_loss_list) train_loss_list_mean_list.append(train_loss_list_mean) convergence_list1.append(calculate_dependence(model)[0]) convergence_list2.append(calculate_dependence(model)[1]) convergence_list3.append(calculate_dependence(model)[2]) convergence_list1_list_eg.append(convergence_list1) convergence_list2_list_eg.append(convergence_list2) convergence_list3_list_eg.append(convergence_list3) plt.figure(figsize=(6,6)) for k, convergence_list1 in enumerate(convergence_list1_list_eg): plt.plot((np.arange(len(convergence_list1))+1), convergence_list1,label=k+1) plt.xlim([40,75]) plt.ylim([-0.01,0.4]) plt.legend(title='k') plt.xlabel('epochs') plt.ylabel('fractional dependence on feature 2') plt.show() plt.figure(figsize=(6,6)) for k, convergence_list1 in enumerate(convergence_list1_list_eg): plt.plot((np.arange(len(convergence_list1))+1)*(k+1), convergence_list1,label=k+1) plt.xlim([0,350]) plt.ylim([-0.01,0.4]) plt.legend(title='k') plt.xlabel('gradient calls per 
training example') plt.ylabel('fractional dependence on feature 2') plt.show() ###Output _____no_output_____ ###Markdown example_usage ###Code feature_mean=0 feature_sigma=1 dummy_sigma=0.5 n_samples=1000 n_features=3 X=np.random.randn(n_samples,n_features)*feature_sigma+feature_mean X[:,2]=X[:,0]+np.random.randn(n_samples)*dummy_sigma output_mean=0 output_sigma=0.5 Y=X[:,0]-X[:,1]+np.random.randn(n_samples)*output_sigma+output_mean Y=Y.reshape([Y.shape[0],1]) data = pd.DataFrame({'Feature 0': X[:, 0], 'Feature 1': X[:, 1], 'Feature 2': X[:, 2], 'Outcome': Y.squeeze()}) alt.Chart(data).mark_point(filled=True).encode( x=alt.X(alt.repeat('column'), type='quantitative', scale=alt.Scale(domain=[-4, 4])), y=alt.Y('Outcome:Q', scale=alt.Scale(domain=[-6, 6])) ).properties( height=200, width=200 ).repeat( column=['Feature 0', 'Feature 1', 'Feature 2'] ).properties( title='The relationship between the outcome and the three features in our simulated data' ).configure_axis( labelFontSize=15, labelFontWeight=alt.FontWeight('lighter'), titleFontSize=15, titleFontWeight=alt.FontWeight('normal') ).configure_title( fontSize=18 ) class CustomDataset(Dataset): def __init__(self, x, y=None): self.x=x self.y=y def __len__(self): return len(self.x) def __getitem__(self, index): if self.y is not None: return self.x[index], self.y[index] else: return self.x[index] batch_size=20 dataset=CustomDataset(x=X,y=Y) train_dataset, test_dataset, valid_dataset=random_split(dataset, [int(n_samples*0.8), int(n_samples*0.1), int(n_samples*0.1)]) train_dataloader=DataLoader(dataset=train_dataset, batch_size=20, shuffle=True, drop_last=True) test_dataloader=DataLoader(dataset=test_dataset, batch_size=len(test_dataset), shuffle=True, drop_last=True) valid_dataloader=DataLoader(dataset=valid_dataset, batch_size=len(valid_dataset), shuffle=True, drop_last=True) class MLP(torch.nn.Module): def __init__(self): super(MLP,self).__init__() self.layers=torch.nn.Sequential(torch.nn.Linear(2,1), 
torch.nn.Sigmoid()) def forward(self,x): x=self.layers(x) return x class CustomModel(torch.nn.Module): def __init__(self): super(CustomModel,self).__init__() self.layers=torch.nn.Sequential(torch.nn.Linear(n_features,5), torch.nn.ReLU(), torch.nn.Linear(5,1)) def forward(self, x): return self.layers(x) ###Output _____no_output_____ ###Markdown train with an attribution prior ###Code device=torch.device('cuda') model=CustomModel().to(device) explainer = AttributionPriorExplainer(train_dataset[:][0], None, batch_size=batch_size, k=1) explainer_valid = AttributionPriorExplainer(valid_dataset[:][0], None, batch_size=100, k=1) optimizer=torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0, dampening=0) loss_func = torch.nn.MSELoss() batch_count=0 valid_loss_list=[] step_list=[] for epoch in range(15): for i, (x, y_true) in enumerate(train_dataloader): batch_count+=1 x, y_true= x.float().to(device), y_true.float().to(device) optimizer.zero_grad() y_pred=model(x) eg=explainer.attribution(model, x) loss=loss_func(y_pred, y_true) + 30*(eg*eg)[:,2].mean() loss.backward() optimizer.step() if batch_count%10==0: valid_loss=[] for i, (x, y_true) in enumerate(valid_dataloader): x, y_true= x.float().to(device), y_true.float().to(device) y_pred = model(x) loss=loss_func(y_pred, y_true) valid_loss.append(loss.item()) eg=explainer_valid.attribution(model,x) #print(eg.abs().mean(axis=0).detach().cpu()) valid_loss_list.append(np.mean(valid_loss)) step_list.append(batch_count) test_loss=[] for i, (x, y_true) in enumerate(test_dataloader): x, y_true= x.float().to(device), y_true.float().to(device) y_pred = model(x) loss=loss_func(y_pred, y_true) test_loss.append(loss.item()) print('MSE:',np.mean(test_loss)) data = pd.DataFrame({ 'Iteration': step_list, 'Validation Loss': valid_loss_list }) alt.Chart(data ).mark_line().encode(alt.X('Iteration:Q'), alt.Y('Validation Loss:Q', scale=alt.Scale(domain=[0.0, 2.5]))) ###Output _____no_output_____ ###Markdown using shap.GradientExplainer 
###Code explainer=shap.GradientExplainer(model=model, data=torch.Tensor(X).to(device)) shap_values=explainer.shap_values(torch.Tensor(X).to(device), nsamples=200) shap.summary_plot(shap_values, X) ###Output _____no_output_____ ###Markdown using implemented explainer ###Code explainer_temp = AttributionPriorExplainer(dataset, input_index=0, batch_size=5, k=200) temp_dataloader=DataLoader(dataset=dataset, batch_size=5, shuffle=True, drop_last=True) eg_list=[] x_list=[] for i, (x, y_true) in enumerate(temp_dataloader): x, y_true= x.float().to(device), y_true.float().to(device) eg_temp=explainer_temp.attribution(model, x) eg_list.append(eg_temp.detach().cpu().numpy()) x_list.append(x.detach().cpu().numpy()) eg_list_concat=np.concatenate(eg_list) x_list_concat=np.concatenate(x_list) shap.summary_plot(eg_list_concat, x_list_concat) ###Output _____no_output_____ ###Markdown train without an attribution prior ###Code device=torch.device('cuda') model=CustomModel().to(device) explainer = AttributionPriorExplainer(train_dataset, input_index=0, batch_size=batch_size, k=1) explainer_valid = AttributionPriorExplainer(valid_dataset, input_index=0, batch_size=100, k=1) optimizer=torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0, dampening=0) loss_func = torch.nn.MSELoss() batch_count=0 valid_loss_list=[] step_list=[] for epoch in range(15): for i, (x, y_true) in enumerate(train_dataloader): batch_count+=1 x, y_true= x.float().to(device), y_true.float().to(device) optimizer.zero_grad() y_pred=model(x) eg=explainer.attribution(model, x) loss=loss_func(y_pred, y_true)# + 30*(eg*eg)[:,2].mean() loss.backward() optimizer.step() if batch_count%10==0: valid_loss=[] for i, (x, y_true) in enumerate(valid_dataloader): x, y_true= x.float().to(device), y_true.float().to(device) y_pred = model(x) loss=loss_func(y_pred, y_true) valid_loss.append(loss.item()) eg=explainer_valid.attribution(model,x) #print(eg.abs().mean(axis=0).detach().cpu()) valid_loss_list.append(np.mean(valid_loss)) 
step_list.append(batch_count) test_loss=[] for i, (x, y_true) in enumerate(test_dataloader): x, y_true= x.float().to(device), y_true.float().to(device) y_pred = model(x) loss=loss_func(y_pred, y_true) test_loss.append(loss.item()) print('MSE:',np.mean(test_loss)) data = pd.DataFrame({ 'Iteration': step_list, 'Validation Loss': valid_loss_list }) alt.Chart(data ).mark_line().encode(alt.X('Iteration:Q'), alt.Y('Validation Loss:Q', scale=alt.Scale(domain=[0.0, 2.5]))) ###Output _____no_output_____ ###Markdown using shap.GradientExplainer ###Code explainer=shap.GradientExplainer(model=model, data=torch.Tensor(X).to(device)) shap_values=explainer.shap_values(torch.Tensor(X).to(device), nsamples=200) shap.summary_plot(shap_values, X) ###Output _____no_output_____ ###Markdown using implemented explainer ###Code explainer_temp = AttributionPriorExplainer(dataset, input_index=0, batch_size=5, k=200) temp_dataloader=DataLoader(dataset=dataset, batch_size=5, shuffle=True, drop_last=True) eg_list=[] x_list=[] for i, (x, y_true) in enumerate(temp_dataloader): x, y_true= x.float().to(device), y_true.float().to(device) eg_temp=explainer_temp.attribution(model, x) eg_list.append(eg_temp.detach().cpu().numpy()) x_list.append(x.detach().cpu().numpy()) eg_list_concat=np.concatenate(eg_list) x_list_concat=np.concatenate(x_list) shap.summary_plot(eg_list_concat, x_list_concat) ###Output _____no_output_____ ###Markdown MNIST download dataset ###Code batch_size=50 num_epochs=60 valid_size=5000 train_dataset=torchvision.datasets.MNIST('./data', train=True, download=True, transform=transforms.Compose([transforms.RandomRotation([-15,15], fill = (0,)), transforms.RandomAffine(degrees=0, translate=(4/28,4/28), fillcolor=0), transforms.ToTensor(), transforms.Normalize(mean=(0.5,), std=(1,)), ])) train_dataset=Subset(train_dataset,range(valid_size,len(train_dataset))) valid_dataset=torchvision.datasets.MNIST('./data', train=True, download=True, 
transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean=(0.5,), std=(1,)), ])) valid_dataset=Subset(valid_dataset,range(valid_size)) test_dataset=torchvision.datasets.MNIST('./data', train=False, download=True, transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean=(0.5,), std=(1,)), ])) train_dataloader=DataLoader(train_dataset, shuffle=True, drop_last=True, batch_size=batch_size) valid_dataloader=DataLoader(valid_dataset, shuffle=False, drop_last=True, batch_size=batch_size) test_dataloader=DataLoader(test_dataset, shuffle=False, drop_last=True, batch_size=batch_size) class MNISTModel(torch.nn.Module): def __init__(self): super(MNISTModel,self).__init__() layer1_conv=torch.nn.Conv2d(in_channels=1, out_channels=32, kernel_size=5, padding=int((5-1)/2));torch.nn.init.xavier_uniform_(layer1_conv.weight);torch.nn.init.zeros_(layer1_conv.bias); layer1_batchnorm=torch.nn.BatchNorm2d(num_features=32, momentum=0.1) layer1_activation=torch.nn.ReLU() layer1_maxpool=torch.nn.MaxPool2d(kernel_size=2, padding=0) layer2_conv=torch.nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5, padding=int((5-1)/2));torch.nn.init.xavier_uniform_(layer2_conv.weight);torch.nn.init.zeros_(layer2_conv.bias); layer2_batchnorm=torch.nn.BatchNorm2d(num_features=64, momentum=0.1) layer2_activation=torch.nn.ReLU() layer2_maxpool=torch.nn.MaxPool2d(kernel_size=2, padding=0) layer3_flatten=torch.nn.Flatten() layer3_fc=torch.nn.Linear(3136,1024);torch.nn.init.xavier_uniform_(layer3_fc.weight);torch.nn.init.zeros_(layer3_fc.bias); layer3_activation=torch.nn.ReLU() layer3_dropout=torch.nn.Dropout(p=0.5) layer4_fc=torch.nn.Linear(1024, 10) self.layers=torch.nn.Sequential(layer1_conv, layer1_batchnorm, layer1_activation, layer1_maxpool, layer2_conv, layer2_batchnorm, layer2_activation, layer2_maxpool, layer3_flatten, layer3_fc, layer3_activation, layer3_dropout, layer4_fc) #print(dir(self.layers)) #print(self.layers._get_name()) def forward(self,x): 
x=self.layers(x) return x device=torch.device('cuda') model=MNISTModel().to(device) len(train_dataset) optimizer = torch.optim.Adam(model.parameters(), lr=1e-4) scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer, gamma=0.95) #scheduler1 = torch.optim.lr_scheduler.StepLR(optimizer1, step_size=5, gamma=0.5) """ global_step=0 """ lamb=0.5 explainer=AttributionPriorExplainer(reference_dataset=train_dataset, input_index=0, batch_size=batch_size, k=5) loss_func=torch.nn.CrossEntropyLoss() for epoch in range(60): for i, (images, labels_true) in enumerate(train_dataloader): images, labels_true = images.to(device), labels_true.to(device) optimizer.zero_grad() labels_onehot_pred=model(images) labels_onehot_true=torch.nn.functional.one_hot(labels_true, num_classes=10) eg=explainer.attribution(model, images, valid_output=labels_onehot_true) eg_standardized=(eg-eg.mean(dim=(-1,-2,-3), keepdim=True))/\ (eg.std(dim=(-1,-2,-3),keepdim=True).clamp(max=1/np.sqrt(torch.numel(eg[0])))) loss=loss_func(labels_onehot_pred, labels_true) loss.backward(retain_graph=True) optimizer.step() """ global_step+=1 if (global_step*50)%60000==0: pass """ scheduler.step() break images.shape, labels_true.shape eg_standardized=(eg-eg.mean(dim=(-1,-2,-3), keepdim=True))/\ (eg.std(dim=(-1,-2,-3),keepdim=True).clamp(max=1/np.sqrt(torch.numel(eg[0])))) images.shape torch.nn.functional.one_hot?? eg_standardized[0].std() eg[0].mean() eg[0].std() np.sqrt() ###Output _____no_output_____ ###Markdown AttributionPriorExplainer ###Code """ https://github.com/suinleelab/attributionpriors/blob/master/attributionpriors/pytorch_ops.py https://github.com/slundberg/shap/blob/master/shap/explainers/_gradient.py Currently, in the case of one-hot encoded class output, ignore attribution for output indices that are not true. 
""" from torch.autograd import grad from torch.utils.data import Dataset, DataLoader class AttributionPriorExplainer(): def __init__(self, reference_dataset, input_index, batch_size, k): self.reference_dataloader=DataLoader(dataset=reference_dataset, batch_size=batch_size*k, shuffle=True, drop_last=True) self.reference_dataloader_iterator=iter(self.reference_dataloader) self.batch_size=batch_size self.k=k self.input_index=input_index def get_reference_data(self): try: reference_data=next(self.reference_dataloader_iterator) except: self.reference_dataloader_iterator=iter(self.reference_dataloader) reference_data=next(self.reference_dataloader_iterator) if self.input_index is None: return reference_data else: return reference_data[self.input_index] def interpolate_input_reference(self, input_data, reference_data): alpha=torch.empty(self.batch_size, self.k).uniform_(0,1).to(input_data.device) alpha=alpha.view(*([self.batch_size, self.k,]+[1]*len(input_data.shape[1:]))) input_reference_interpolated=(1-alpha)*reference_data+(alpha)*input_data.unsqueeze(1) return input_reference_interpolated def diff_input_reference(self, input_data, reference_data): return input_data.unsqueeze(1)-reference_data def get_grad(self, model, input_reference_interpolated, valid_output): input_reference_interpolated.requires_grad=True input_reference_interpolated_grad=torch.zeros(input_reference_interpolated.shape).float().to(input_reference_interpolated.device) for i in range(self.k): batch_input=input_reference_interpolated[:,i,] batch_output=model(batch_input) if valid_output is None: grad_out=grad(outputs=batch_output, inputs=batch_input, grad_outputs=torch.ones_like(batch_output).to(input_reference_interpolated.device), create_graph=True)[0] else: grad_out=grad(outputs=batch_output, inputs=batch_input, grad_outputs=valid_output, create_graph=True)[0] input_reference_interpolated_grad[:,i,]=grad_out return input_reference_interpolated_grad def attribution(self, model, input_data, 
valid_output=None): model_dtype=next(model.parameters()).dtype reference_data=self.get_reference_data().to(model_dtype).to(input_data.device) assert input_data.dtype==model_dtype assert input_data.shape[0]==self.batch_size assert input_data.shape[1:]==reference_data.shape[1:] assert input_data.device==next(model.parameters()).device reference_data=reference_data.view(self.batch_size, self.k, *reference_data.shape[1:]) input_reference_interpolated=self.interpolate_input_reference(input_data, reference_data) input_reference_diff=self.diff_input_reference(input_data, reference_data) input_reference_interpolated_grad=self.get_grad(model, input_reference_interpolated, valid_output) diff_interpolated_grad=input_reference_diff*input_reference_interpolated_grad expected_grad=diff_interpolated_grad.mean(axis=1) return expected_grad """ if list(batch_output.shape[1:])==[1]: # scalar output else: # vector output if grad_output is None: grad_out=grad(outputs=batch_output, inputs=batch_input, grad_outputs=torch.ones_like(batch_output).to(input_reference_interpolated.device), create_graph=True)[0] else: grad_out=grad(outputs=batch_output, inputs=batch_input, grad_outputs=grad_outputs.to(input_reference_interpolated.device), create_graph=True)[0] def gather_nd(self,params, indices): max_value = functools.reduce(operator.mul, list(params.size())) - 1 indices = indices.t().long() ndim = indices.size(0) idx = torch.zeros_like(indices[0]).long() m = 1 for i in range(ndim)[::-1]: idx += indices[i]*m m *= params.size(i) idx[idx < 0] = 0 idx[idx > max_value] = 0 return torch.take(params, idx) sample_indices=torch.arange(0,batch_output.size(0)).to(input_reference_interpolated.device) indices_tensor=torch.cat([sample_indices.unsqueeze(1), sparse_output.unsqueeze(1).to(input_reference_interpolated.device)],dim=1) batch_output=self.gather_nd(batch_output, indices_tensor) grad_out=grad(outputs=batch_output, inputs=batch_input, 
grad_outputs=torch.ones_like(batch_output).to(input_reference_interpolated.device), create_graph=True)[0] print('a',torch.ones_like(batch_output).to(input_reference_interpolated.device).shape) print('equal',np.all((grad_out==grad_out2).cpu().numpy())) """ optimizer1.param_groups[0]['amsgrad'] optimizer2 = torch.optim.Adam(model.parameters(), lr=1e-4) scheduler2 = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer2, gamma=0.95) #scheduler1 = torch.optim.lr_scheduler.StepLR(optimizer1, step_size=5, gamma=0.5) optimizer2.param_groups[0]['params'][-1] len(optimizer1.param_groups) scheduler1.get_last_lr() scheduler1.step() optimizer1.param_groups[0]['initial_lr'] optimizer1.param_groups[0]['lr'] test_dataset device=torch.device('cuda') convergence_list1_list_eg=[] convergence_list2_list_eg=[] convergence_list3_list_eg=[] for k in [1,2,3,4,5]: print('k =',k) model=MLP().to(device) with torch.no_grad(): model.layers[0].weight[0,0]=10 model.layers[0].weight[0,1]=10 model.layers[0].bias[0]=-6 x_zeros = torch.ones_like(x_train[:,:]) background_dataset = BinaryData(x_zeros) explainer = AttributionPriorExplainer(background_dataset, 64, k=k) optimizer = torch.optim.Adam(model.parameters(), lr=1e-2) bce_term = torch.nn.BCELoss() train_loss_list_mean_list=[] convergence_list1=[] convergence_list2=[] convergence_list3=[] for epoch in range(200): train_loss_list=[] for i, (x, y_true) in enumerate(train_loader): x, y_true= x.float().to(device), y_true.float().to(device) optimizer.zero_grad() y_pred=model(x) eg=explainer.attribution(model, x) eg_abs_mean=eg.abs().mean(0) loss=bce_term(y_pred, y_true.unsqueeze(1)) + eg_abs_mean[1] loss.backward(retain_graph=True) optimizer.step() train_loss_list.append(loss.item()) train_loss_list_mean=np.mean(train_loss_list) train_loss_list_mean_list.append(train_loss_list_mean) convergence_list1.append(calculate_dependence(model)[0]) convergence_list2.append(calculate_dependence(model)[1]) 
convergence_list3.append(calculate_dependence(model)[2]) convergence_list1_list_eg.append(convergence_list1) convergence_list2_list_eg.append(convergence_list2) convergence_list3_list_eg.append(convergence_list3) model(img),model(img).shape model(img).shape img.shape train_dataloader=DataLoader(dataset=train_dataset, batch_size=10) for img,label in train_dataloader: print(img) break torch.nn.MaxPool2d(kernel_size=2, padding='valid') torch.nn.MaxPool2d(kernel_size=2) train_dataloader=DataLoader(dataset=train_dataset, batch_size=10) loss_func=torch.nn.CrossEntropyLoss() for images, labels_true in train_dataloader: images=images labels_pred=model.forward(images) print(labels_pred.shape) loss_func(labels_pred, labels_true) import tensorflow as tf image = tf.constant(np.arange(1, 24+1, dtype=np.int32), shape=[2,2, 2, 3]) new_image = tf.image.per_image_standardization(image) np.var(new_image[0]) new_image np.var(new_image) torch.nn.Dropout? import torch torch.nn.Conv2d?? from __future__ import print_function import argparse import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms from torch.optim.lr_scheduler import StepLR class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 32, 3, 1) self.conv2 = nn.Conv2d(32, 64, 3, 1) self.dropout1 = nn.Dropout(0.25) self.dropout2 = nn.Dropout(0.5) self.fc1 = nn.Linear(9216, 128) self.fc2 = nn.Linear(128, 10) def forward(self, x): x = self.conv1(x) x = F.relu(x) x = self.conv2(x) x = F.relu(x) x = F.max_pool2d(x, 2) x = self.dropout1(x) x = torch.flatten(x, 1) x = self.fc1(x) x = F.relu(x) x = self.dropout2(x) x = self.fc2(x) output = F.log_softmax(x, dim=1) return output def train(args, model, device, train_loader, optimizer, epoch): model.train() for batch_idx, (data, target) in enumerate(train_loader): data, target = data.to(device), target.to(device) optimizer.zero_grad() output = model(data) loss = 
F.nll_loss(output, target) loss.backward() optimizer.step() if batch_idx % args.log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.item())) if args.dry_run: break def test(model, device, test_loader): model.eval() test_loss = 0 correct = 0 with torch.no_grad(): for data, target in test_loader: data, target = data.to(device), target.to(device) output = model(data) test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability correct += pred.eq(target.view_as(pred)).sum().item() test_loss /= len(test_loader.dataset) print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset))) transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ]) dataset1 = datasets.MNIST('../data', train=True, download=True, transform=transform) dataset2 = datasets.MNIST('../data', train=False, transform=transform) model = Net() train_loader = torch.utils.data.DataLoader(dataset1,batch_size=64) test_loader = torch.utils.data.DataLoader(dataset2, batch_size=1000) device=torch.device('cuda') for batch_idx, (data, target) in enumerate(train_loader): data, target = data.to(device), target.to(device) break data.shape def main(): # Training settings parser = argparse.ArgumentParser(description='PyTorch MNIST Example') parser.add_argument('--batch-size', type=int, default=64, metavar='N', help='input batch size for training (default: 64)') parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N', help='input batch size for testing (default: 1000)') parser.add_argument('--epochs', type=int, default=14, metavar='N', help='number of epochs to train (default: 14)') parser.add_argument('--lr', type=float, 
default=1.0, metavar='LR', help='learning rate (default: 1.0)') parser.add_argument('--gamma', type=float, default=0.7, metavar='M', help='Learning rate step gamma (default: 0.7)') parser.add_argument('--no-cuda', action='store_true', default=False, help='disables CUDA training') parser.add_argument('--dry-run', action='store_true', default=False, help='quickly check a single pass') parser.add_argument('--seed', type=int, default=1, metavar='S', help='random seed (default: 1)') parser.add_argument('--log-interval', type=int, default=10, metavar='N', help='how many batches to wait before logging training status') parser.add_argument('--save-model', action='store_true', default=False, help='For Saving the current Model') args = parser.parse_args() use_cuda = not args.no_cuda and torch.cuda.is_available() torch.manual_seed(args.seed) device = torch.device("cuda" if use_cuda else "cpu") train_kwargs = {'batch_size': args.batch_size} test_kwargs = {'batch_size': args.test_batch_size} if use_cuda: cuda_kwargs = {'num_workers': 1, 'pin_memory': True, 'shuffle': True} train_kwargs.update(cuda_kwargs) test_kwargs.update(cuda_kwargs) transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ]) dataset1 = datasets.MNIST('../data', train=True, download=True, transform=transform) dataset2 = datasets.MNIST('../data', train=False, transform=transform) train_loader = torch.utils.data.DataLoader(dataset1,**train_kwargs) test_loader = torch.utils.data.DataLoader(dataset2, **test_kwargs) model = Net().to(device) optimizer = optim.Adadelta(model.parameters(), lr=args.lr) scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma) for epoch in range(1, args.epochs + 1): train(args, model, device, train_loader, optimizer, epoch) test(model, device, test_loader) scheduler.step() if args.save_model: torch.save(model.state_dict(), "mnist_cnn.pt") ###Output _____no_output_____ ###Markdown AttributionPriorExplainer ###Code """ 
https://github.com/suinleelab/attributionpriors/blob/master/attributionpriors/pytorch_ops.py https://github.com/slundberg/shap/blob/master/shap/explainers/_gradient.py """ from torch.autograd import grad from torch.utils.data import Dataset, DataLoader class AttributionPriorExplainer(): def __init__(self, reference_dataset, batch_size, k): self.reference_dataloader=DataLoader(dataset=reference_dataset, batch_size=batch_size*k, shuffle=True, drop_last=True) self.reference_dataloader_iterator=iter(self.reference_dataloader) self.batch_size=batch_size self.k=k def get_reference_data(self): try: reference_data=next(self.reference_dataloader_iterator) except StopIteration: # restart the iterator once the reference set is exhausted self.reference_dataloader_iterator=iter(self.reference_dataloader) reference_data=next(self.reference_dataloader_iterator) return reference_data def interpolate_input_reference(self, input_data, reference_data): alpha=torch.empty(self.batch_size, self.k).uniform_(0,1).to(input_data.device) alpha=alpha.view(*([self.batch_size, self.k,]+[1]*len(input_data.shape[1:]))) input_reference_interpolated=(1-alpha)*reference_data+(alpha)*input_data.unsqueeze(1) return input_reference_interpolated def diff_input_reference(self, input_data, reference_data): return input_data.unsqueeze(1)-reference_data def get_grad(self, model, input_reference_interpolated): input_reference_interpolated.requires_grad=True input_reference_interpolated_grad=torch.zeros(input_reference_interpolated.shape).float().to(input_reference_interpolated.device) for i in range(self.k): batch_input=input_reference_interpolated[:,i,] batch_output=model(batch_input) grad_out=grad(outputs=batch_output, inputs=batch_input, grad_outputs=torch.ones_like(batch_output).to(input_reference_interpolated.device), create_graph=True)[0] input_reference_interpolated_grad[:,i,]=grad_out return input_reference_interpolated_grad def attribution(self, model, input_data): model_dtype=next(model.parameters()).dtype
reference_data=self.get_reference_data().to(model_dtype).to(input_data.device) assert input_data.dtype==model_dtype assert input_data.shape[0]==self.batch_size assert input_data.shape[1:]==reference_data.shape[1:] assert input_data.device==next(model.parameters()).device reference_data=reference_data.view(self.batch_size, self.k, *reference_data.shape[1:]) input_reference_interpolated=self.interpolate_input_reference(input_data, reference_data) input_reference_diff=self.diff_input_reference(input_data, reference_data) input_reference_interpolated_grad=self.get_grad(model, input_reference_interpolated) diff_interpolated_grad=input_reference_diff*input_reference_interpolated_grad expected_grad=diff_interpolated_grad.mean(axis=1) return expected_grad ###Output _____no_output_____ ###Markdown An example of how the wrapper works ###Code # let's add the module path to our PYTHONPATH import sys sys.path[0] = '../' import numpy as np test_matrix = np.loadtxt('../falcon/test.csv',delimiter=',') # literally two lines, MATLAB starts automatically import falcon result = falcon.nested_test(test_matrix) print(result) # recommended: quit the MATLAB engine when you're done falcon.quit_matlab() print('\n\n *** auto shutdown engine example *** \n\n') # if you don't want to have to remember to quit the MATLAB engine, # a context manager is available that shuts it down on exit: # `as eng` not required, use if you need direct access to MATLAB engine with falcon.AutoEngineShutdown() as eng: result = falcon.nested_test(test_matrix) print(result) ###Output {'binary': 1, 'sorting': 0, 'MEASURE': ['NODF'], 'nulls': matlab.double([]), 'ensNum': matlab.double([]), 'plot': 0, 'Matrix': {'Matrix': matlab.logical([[True,False,False,False,True],[False,True,False,True,True],[False,False,True,False,True],[True,True,False,False,False],[True,False,False,False,True]]), 'fill': 11.0, 'connectance': 0.44}, 'NestedConfig': {'DegreeMatrix':
matlab.logical([[True,False,True,False,True],[True,True,False,False,False],[True,False,False,True,False],[False,True,True,False,False],[True,True,False,False,False]]), 'Degreeindex_rows': matlab.double([[2.0,1.0,3.0,4.0,5.0]]), 'Degreeindex_cols': matlab.double([[5.0,1.0,2.0,3.0,4.0]])}, 'Bin_t1': {'EnsembleSize': 1000.0, 'SignificanceTable': matlab.double([[0.0,0.0,1.0,0.0,0.0]]), 'measures': [{'MEASURE': 'NODF', 'NANcount': 0.0, 'Measure': 15.0, 'pvalue': 0.644, 'pvalueCorrected': 0.0, 'Mean': 18.185833333333346, 'StandardDeviation': 9.715093589879967, 'sampleZscore': -0.3279261598315398, 'Median': 17.5, 'minimum': 0.0, 'maximum': 53.33333333333333, 'NormalisedTemperature': 0.8248178527241895, 'NestednessUpOrDown': 'Up'}]}, 'Bin_t2': {'EnsembleSize': 1000.0, 'SignificanceTable': matlab.double([[0.0,0.0,1.0,0.0,0.0]]), 'measures': [{'MEASURE': 'NODF', 'NANcount': 0.0, 'Measure': 15.0, 'pvalue': 0.911, 'pvalueCorrected': 0.0, 'Mean': 17.0975, 'StandardDeviation': 2.8530481302693786, 'sampleZscore': -0.7351786244846695, 'Median': 17.5, 'minimum': 7.5, 'maximum': 22.5, 'NormalisedTemperature': 0.877321245796169, 'NestednessUpOrDown': 'Up'}]}, 'Bin_t3': {'EnsembleSize': 1000.0, 'SignificanceTable': matlab.double([[0.0,0.0,1.0,0.0,0.0]]), 'measures': [{'MEASURE': 'NODF', 'NANcount': 0.0, 'Measure': 15.0, 'pvalue': 0.683, 'pvalueCorrected': 0.0, 'Mean': 19.557500000000008, 'StandardDeviation': 10.363722274273217, 'sampleZscore': -0.4397551265256782, 'Median': 17.916666666666664, 'minimum': 0.0, 'maximum': 63.33333333333333, 'NormalisedTemperature': 0.7669691934040647, 'NestednessUpOrDown': 'Up'}]}, 'Bin_t4': {'EnsembleSize': 1000.0, 'SignificanceTable': matlab.double([[0.0,0.0,1.0,0.0,0.0]]), 'measures': [{'MEASURE': 'NODF', 'NANcount': 0.0, 'Measure': 15.0, 'pvalue': 0.774, 'pvalueCorrected': 0.0, 'Mean': 24.693830128205132, 'StandardDeviation': 13.31029362735014, 'sampleZscore': -0.7282957385918326, 'Median': 23.333333333333332, 'minimum': 0.0, 'maximum': 
91.66666666666667, 'NormalisedTemperature': 0.6074391830721755, 'NestednessUpOrDown': 'Up'}]}, 'Bin_t5': {'EnsembleSize': 1000.0, 'SignificanceTable': matlab.double([[0.0,0.0,1.0,0.0,0.0]]), 'measures': [{'MEASURE': 'NODF', 'NANcount': 0.0, 'Measure': 15.0, 'pvalue': 0.737, 'pvalueCorrected': 0.0, 'Mean': 22.55299984737483, 'StandardDeviation': 12.645669729700424, 'sampleZscore': -0.5972795438137511, 'Median': 21.875, 'minimum': 0.0, 'maximum': 77.77777777777777, 'NormalisedTemperature': 0.6650999911989979, 'NestednessUpOrDown': 'Up'}]}, 'SignificanceTableSummary': matlab.double([[0.0,0.0,5.0,0.0,0.0]])} *** auto shutdown engine example *** {'binary': 1, 'sorting': 0, 'MEASURE': ['NODF'], 'nulls': matlab.double([]), 'ensNum': matlab.double([]), 'plot': 0, 'Matrix': {'Matrix': matlab.logical([[True,False,False,False,True],[False,True,False,True,True],[False,False,True,False,True],[True,True,False,False,False],[True,False,False,False,True]]), 'fill': 11.0, 'connectance': 0.44}, 'NestedConfig': {'DegreeMatrix': matlab.logical([[True,False,True,False,True],[True,True,False,False,False],[True,False,False,True,False],[False,True,True,False,False],[True,True,False,False,False]]), 'Degreeindex_rows': matlab.double([[2.0,1.0,3.0,4.0,5.0]]), 'Degreeindex_cols': matlab.double([[5.0,1.0,2.0,3.0,4.0]])}, 'Bin_t1': {'EnsembleSize': 1000.0, 'SignificanceTable': matlab.double([[0.0,0.0,1.0,0.0,0.0]]), 'measures': [{'MEASURE': 'NODF', 'NANcount': 0.0, 'Measure': 15.0, 'pvalue': 0.644, 'pvalueCorrected': 0.0, 'Mean': 18.185833333333346, 'StandardDeviation': 9.715093589879967, 'sampleZscore': -0.3279261598315398, 'Median': 17.5, 'minimum': 0.0, 'maximum': 53.33333333333333, 'NormalisedTemperature': 0.8248178527241895, 'NestednessUpOrDown': 'Up'}]}, 'Bin_t2': {'EnsembleSize': 1000.0, 'SignificanceTable': matlab.double([[0.0,0.0,1.0,0.0,0.0]]), 'measures': [{'MEASURE': 'NODF', 'NANcount': 0.0, 'Measure': 15.0, 'pvalue': 0.911, 'pvalueCorrected': 0.0, 'Mean': 17.0975, 
'StandardDeviation': 2.8530481302693786, 'sampleZscore': -0.7351786244846695, 'Median': 17.5, 'minimum': 7.5, 'maximum': 22.5, 'NormalisedTemperature': 0.877321245796169, 'NestednessUpOrDown': 'Up'}]}, 'Bin_t3': {'EnsembleSize': 1000.0, 'SignificanceTable': matlab.double([[0.0,0.0,1.0,0.0,0.0]]), 'measures': [{'MEASURE': 'NODF', 'NANcount': 0.0, 'Measure': 15.0, 'pvalue': 0.683, 'pvalueCorrected': 0.0, 'Mean': 19.557500000000008, 'StandardDeviation': 10.363722274273217, 'sampleZscore': -0.4397551265256782, 'Median': 17.916666666666664, 'minimum': 0.0, 'maximum': 63.33333333333333, 'NormalisedTemperature': 0.7669691934040647, 'NestednessUpOrDown': 'Up'}]}, 'Bin_t4': {'EnsembleSize': 1000.0, 'SignificanceTable': matlab.double([[0.0,0.0,1.0,0.0,0.0]]), 'measures': [{'MEASURE': 'NODF', 'NANcount': 0.0, 'Measure': 15.0, 'pvalue': 0.774, 'pvalueCorrected': 0.0, 'Mean': 24.693830128205132, 'StandardDeviation': 13.31029362735014, 'sampleZscore': -0.7282957385918326, 'Median': 23.333333333333332, 'minimum': 0.0, 'maximum': 91.66666666666667, 'NormalisedTemperature': 0.6074391830721755, 'NestednessUpOrDown': 'Up'}]}, 'Bin_t5': {'EnsembleSize': 1000.0, 'SignificanceTable': matlab.double([[0.0,0.0,1.0,0.0,0.0]]), 'measures': [{'MEASURE': 'NODF', 'NANcount': 0.0, 'Measure': 15.0, 'pvalue': 0.737, 'pvalueCorrected': 0.0, 'Mean': 22.55299984737483, 'StandardDeviation': 12.645669729700424, 'sampleZscore': -0.5972795438137511, 'Median': 21.875, 'minimum': 0.0, 'maximum': 77.77777777777777, 'NormalisedTemperature': 0.6650999911989979, 'NestednessUpOrDown': 'Up'}]}, 'SignificanceTableSummary': matlab.double([[0.0,0.0,5.0,0.0,0.0]])} ###Markdown Introduction to ZSE and Using Zeolite Frameworks Here I will show you how to load any zeolite framework as an ASE atoms object, and some of the basic operations ZSE can perform. zse.collections Every zeolite framework listed on the [IZA Database](https://asia.iza-structure.org/IZA-SC/ftc_table.php) as of January 2020 is included in ZSE.
\You don't have to use the structure files provided by ZSE, you can use your own as well. ###Code from zse.collections import * from ase.visualize import view ###Output _____no_output_____ ###Markdown collections.framework( ) The framework command calls zeolite structure files from an included database. \Just input the IZA framework code of the zeolite you want to use. ###Code z = framework('CHA') view(z) ###Output _____no_output_____ ###Markdown collections.get_all_fws( ) This command will return a list of every framework included in ZSE. \This is handy if you want to iterate through all the frameworks. ###Code # get list of all codes fws = get_all_fws() # iterate over list for some actions # in this case, I'll create a trajectory of every structure file traj = [framework(x) for x in fws] view(traj) ###Output _____no_output_____ ###Markdown collections.get_ring_sizes( ) This command will get the list of the ring sizes included in a particular framework. \Many other functions in ZSE rely on this information. ###Code # the CHA framework contains 8-, 6-, and 4-membered rings (MR) ring_sizes = get_ring_sizes('CHA') print(ring_sizes) ###Output [8 6 4] ###Markdown zse.utilities These are some basic utilities to get information about a framework. \Other functions in ZSE rely on these utilities. ###Code from zse.utilities import * ###Output _____no_output_____ ###Markdown utilities.get_tsites( ) This will provide the unique T-sites, their multiplicity, and an atom index for an example in your framework. ###Code z = framework('TON') tsites, tmults, tinds = get_tsites('TON') # print the information print('T-site \t Multiplicity \t Index') for ts,tm,ti in zip(tsites,tmults,tinds): print('{0} \t {1} \t\t {2}'.format(ts,tm,ti)) ###Output T-site Multiplicity Index T1 8 48 T2 8 56 T3 4 64 T4 4 68 ###Markdown utilities.get_osites( ) Same as above, but will get the information for each unique oxygen site.
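ZSE's own implementation isn't shown here, but the (site, multiplicity, index) triples returned by these functions can conceptually be expanded into per-atom labels, since each unique site occupies a consecutive block of atom indices in the tables above (T1 at 48-55, T2 at 56-63, and so on). A minimal pure-Python sketch of that expansion; the helper name `expand_site_labels` is hypothetical and not part of ZSE:

```python
def expand_site_labels(sites, mults, first_inds):
    # Expand (site, multiplicity, first-index) triples into an
    # index -> label dictionary, assuming each unique site occupies
    # a consecutive block of atom indices, as in the TON tables.
    labels = {}
    for site, mult, start in zip(sites, mults, first_inds):
        for offset in range(mult):
            labels[start + offset] = site
    return labels

# Mirror the TON T-site table printed above
labels = expand_site_labels(['T1', 'T2', 'T3', 'T4'], [8, 8, 4, 4], [48, 56, 64, 68])
print(labels[48], labels[63], labels[64])  # T1 T2 T3
```

The real `site_labels` utility builds the complete dictionary (T-sites and O-sites) for the atoms object directly.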
###Code z = framework('TON') osites, omults, oinds = get_osites('TON') # print the information print('O-site \t Multiplicity \t Index') for os,om,oi in zip(osites,omults,oinds): print('{0} \t {1} \t\t {2}'.format(os,om,oi)) ###Output O-site Multiplicity Index O1 8 0 O2 8 8 O3 8 16 O4 8 24 O5 8 32 O6 8 40 ###Markdown utilities.site_labels( ) This function will get all the site labels as a dictionary for your entire atoms object. \The dictionary key:value pairs are index:site_label. \This will work for atoms objects provided by ZSE, or if you have your own zeolite atoms object as well. Inputs **z** is the atoms object you want labels for \**'TON'** is the IZA framework code for the zeolite you are using ###Code z = framework('TON') labels = site_labels(z,'TON') # let's make sure we get results that match the last two functions # this should be a T1 print('atom index 48: ',labels[48]) # this should be an O4 print('atom index 24: ',labels[24]) ###Output atom index 48: T1 atom index 24: O4 ###Markdown A brief guide to FIt-SNE. Visualizing MNIST Author: Dmitry Kobak ###Code %matplotlib notebook import numpy as np import pylab as plt import seaborn as sns sns.set_style('ticks') # the path should point to the FIt-SNE directory import sys; sys.path.append('../') from fast_tsne import fast_tsne # Load MNIST data from sklearn.datasets import fetch_openml mnist = fetch_openml('mnist_784') X = mnist.data y = mnist.target.astype('int') print(X.shape) # Do PCA and keep 50 dimensions X = X - X.mean(axis=0) #U, s, V = np.linalg.svd(X, full_matrices=False) #X50 = np.dot(U, np.diag(s))[:,:50] X50 = X #X50 = np.dot(U, np.diag(s))[:,:] # 10 nice colors col = np.array(['#a6cee3','#1f78b4','#b2df8a','#33a02c','#fb9a99', '#e31a1c','#fdbf6f','#ff7f00','#cab2d6','#6a3d9a']) # Running t-SNE on the full PCA-reduced MNIST in the default way # This uses perplexity 30 and PCA initialization. # It runs for 750 iterations with learning rate N/12.
%time Z = fast_tsne(X50) plt.figure(figsize=(4,4)) plt.axis('equal') plt.scatter(Z[:,0], Z[:,1], c=col[y], s=2, edgecolors='none') sns.despine() plt.tight_layout() ###Output CPU times: user 18.3 s, sys: 26 s, total: 44.4 s Wall time: 3min 16s ###Markdown Using random initialization Note that this is generally a bad idea. See here for discussion: https://www.nature.com/articles/s41467-019-13056-x. ###Code %time Z1 = fast_tsne(X50, initialization='random', seed=1) %time Z2 = fast_tsne(X50, initialization='random', seed=2) plt.figure(figsize=(8, 4.5)) for i, (Z,title) in enumerate(zip([Z1,Z2], ['Random seed = 1', 'Random seed = 2'])): plt.subplot(1,2,i+1) plt.axis('equal') plt.scatter(Z[:,0], Z[:,1], c=col[y], s=2, edgecolors='none') plt.xticks([]) plt.yticks([]) plt.title(title) sns.despine(left=True, bottom=True) plt.tight_layout() ###Output _____no_output_____ ###Markdown The above uses learning rate N/12. If one uses a low learning rate, e.g. 200, which used to be the default value, then clusters sometimes get split into multiple subclusters.
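The N/12 scaling mentioned above can be written down as a simple rule; the sketch below is illustrative only (the floor at 200 is an assumption mirroring the classic default, not FIt-SNE's exact internal code):

```python
def default_learning_rate(n_samples, floor=200.0):
    # Scaling the learning rate with dataset size avoids the cluster
    # fragmentation seen with a fixed small value; small datasets
    # fall back to the classic value of 200.
    return max(floor, n_samples / 12.0)

print(default_learning_rate(70000))  # ~5833 for full MNIST
print(default_learning_rate(1000))   # 200 for small data
```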
###Code %time Z1 = fast_tsne(X50, initialization='random', seed=1, learning_rate=200) %time Z2 = fast_tsne(X50, initialization='random', seed=2, learning_rate=200) plt.figure(figsize=(8, 4.5)) for i,(Z,title) in enumerate(zip([Z1,Z2], ['Random seed = 1\nlearning rate = 200', 'Random seed = 2\nlearning rate = 200'])): plt.subplot(1,2,i+1) plt.axis('equal') plt.scatter(Z[:,0], Z[:,1], c=col[y], s=2, edgecolors='none') plt.xticks([]) plt.yticks([]) plt.title(title) sns.despine(left=True, bottom=True) plt.tight_layout() ###Output _____no_output_____ ###Markdown Changing perplexity ###Code %time Z1 = fast_tsne(X50, perplexity=3) %time Z2 = fast_tsne(X50, perplexity=300) plt.figure(figsize=(8, 4.5)) for i,(Z,title) in enumerate(zip([Z1,Z2], ['Perplexity 3', 'Perplexity 300'])): plt.subplot(1,2,i+1) plt.axis('equal') plt.scatter(Z[:,0], Z[:,1], c=col[y], s=2, edgecolors='none') plt.xticks([]) plt.yticks([]) plt.title(title) sns.despine(left=True, bottom=True) plt.tight_layout() ###Output _____no_output_____ ###Markdown Exaggeration Exaggeration is applied after the early exaggeration phase is over. By default, early exaggeration lasts 250 iterations, with coefficient 12.
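The schedule described above amounts to a piecewise rule: the attractive similarities are multiplied by the early coefficient for the first 250 iterations, then by the (optional) late coefficient. A minimal illustration; the function and parameter names are illustrative, not FIt-SNE's internals:

```python
def exaggeration_coeff(iteration, early_coeff=12.0, stop_early_iter=250, late_coeff=1.0):
    # Early exaggeration for the first `stop_early_iter` iterations,
    # then late exaggeration (1.0 means no late exaggeration).
    return early_coeff if iteration < stop_early_iter else late_coeff

print(exaggeration_coeff(0))                    # 12.0
print(exaggeration_coeff(500))                  # 1.0
print(exaggeration_coeff(500, late_coeff=4.0))  # 4.0, as with late_exag_coeff=4
```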
Large `df` corresponds to the Gaussian kernel. See https://ecmlpkdd2019.org/downloads/paper/327.pdf ###Code %time Z1 = fast_tsne(X50, df=100) %time Z2 = fast_tsne(X50, df=.5) plt.figure(figsize=(8, 4.5)) for i,(Z,title) in enumerate(zip([Z1,Z2], ['1/(1+x^2/100)^100 kernel', '1/(1+x^2/0.5)^0.5 kernel'])): plt.subplot(1,2,i+1) plt.axis('equal') plt.scatter(Z[:,0], Z[:,1], c=col[y], s=2, edgecolors='none') plt.xticks([]) plt.yticks([]) plt.title(title) sns.despine(left=True, bottom=True) plt.tight_layout() ###Output _____no_output_____ ###Markdown 1-dimensional embedding ###Code %time Z1 = fast_tsne(X50) %time Z2 = fast_tsne(X50, map_dims=1) plt.figure(figsize=(8,4.5)) plt.subplot(121) plt.axis('equal') plt.scatter(Z1[:,0], Z1[:,1], c=col[y], s=2, edgecolors='none') plt.xticks([]) plt.yticks([]) plt.title('2D') plt.subplot(122) plt.scatter(Z2[:,0], np.random.uniform(size=Z2.shape[0]), c=col[y], s=2, edgecolors='none') plt.ylim([-2,3]) plt.xticks([]) plt.yticks([]) plt.title('1D') sns.despine(left=True, bottom=True) plt.tight_layout() ###Output _____no_output_____ ###Markdown 1D embeddings with different output kernels ###Code %%time dfs = [100, 10, 1, .5, .2, .1, .05] Zs = [] for df in dfs: Z = fast_tsne(X50, map_dims=1, df=df) Zs.append(Z) plt.figure(figsize=(8,4.5)) ax1 = plt.subplot(121) for i,(df,Z) in enumerate(zip(dfs,Zs)): plt.scatter(Z[:,0], np.random.uniform(size=Z.shape[0])/2-i, c=col[y], s=1) plt.ylim([-6.5,1]) plt.yticks(np.array([-6,-5,-4,-3,-2,-1,0])+.25, ['df=.05','df=.1','df=.2','df=.5','df=1','df=10','df=100']) ax2 = plt.subplot(122) for i,(df,Z) in enumerate(zip(dfs,Zs)): plt.scatter((Z[:,0]-np.min(Z))/(np.max(Z)-np.min(Z)), np.random.uniform(size=Z.shape[0])/2-i, c=col[y], s=1) plt.ylim([-6.5,1]) plt.yticks(np.array([-6,-5,-4,-3,-2,-1,0])+.25, ['df=.05','df=.1','df=.2','df=.5','df=1','df=10','df=100']) plt.xticks([]) sns.despine(ax=ax1) sns.despine(ax=ax2, bottom=True) plt.tight_layout() ###Output _____no_output_____ ###Markdown Fixed sigma instead
of perplexity ###Code %time Z1 = fast_tsne(X50) %time Z2 = fast_tsne(X50, sigma=1e+6, K=10) plt.figure(figsize=(8, 4.5)) for i,(Z,title) in enumerate(zip([Z1,Z2], ['Perplexity=30 (3*30 neighbors used)', 'Fixed sigma=1e+6 with kNN=10'])): plt.subplot(1,2,i+1) plt.axis('equal') plt.scatter(Z[:,0], Z[:,1], c=col[y], s=2, edgecolors='none') plt.xticks([]) plt.yticks([]) plt.title(title) sns.despine(left=True, bottom=True) plt.tight_layout() ###Output _____no_output_____ ###Markdown Perplexity combination Is not useful for MNIST, but can be useful in other cases, see https://www.nature.com/articles/s41467-019-13056-x. ###Code %time Z1 = fast_tsne(X50) %time Z2 = fast_tsne(X50, perplexity_list=[3,30,300]) plt.figure(figsize=(8, 4.5)) for i,(Z,title) in enumerate(zip([Z1,Z2], ['Perplexity 30', 'Perplexity combination of 3, 30, 300'])): plt.subplot(1,2,i+1) plt.axis('equal') plt.scatter(Z[:,0], Z[:,1], c=col[y], s=2, edgecolors='none') plt.xticks([]) plt.yticks([]) plt.title(title) sns.despine(left=True, bottom=True) plt.tight_layout() ###Output _____no_output_____ ###Markdown VP tree vs ANNOY for kNN search ###Code %time Z1 = fast_tsne(X50) %time Z2 = fast_tsne(X50, knn_algo='vp-tree') plt.figure(figsize=(8, 4.5)) for i,(Z,title) in enumerate(zip([Z1,Z2], ['Annoy (approximate kNN)', 'VP tree (exact kNN)'])): plt.subplot(1,2,i+1) plt.axis('equal') plt.scatter(Z[:,0], Z[:,1], c=col[y], s=2, edgecolors='none') plt.xticks([]) plt.yticks([]) plt.title(title) sns.despine(left=True, bottom=True) plt.tight_layout() ###Output _____no_output_____ ###Markdown Barnes-Hut vs FFT to approximate repulsive forces during gradient descent Using a subsampled dataset here, to speed up Barnes-Hut.
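Why subsample for Barnes-Hut: per gradient iteration, the quadtree approximation of the repulsive forces scales roughly as O(N log N), while the FFT-based interpolation is close to linear in N. A back-of-the-envelope cost model, with constants omitted and purely for illustration:

```python
import math

def repulsion_cost(n, method):
    # Rough per-iteration scaling of the repulsive-force computation,
    # ignoring constant factors (illustrative, not a benchmark).
    if method == "barnes-hut":
        return n * math.log2(n)
    elif method == "fft":
        return float(n)
    raise ValueError(method)

# For 70k MNIST points, Barnes-Hut does roughly 16x more "work" per iteration
print(repulsion_cost(70000, "barnes-hut") / repulsion_cost(70000, "fft"))
```

In practice the constants differ too, which is why the notebook subsamples to 10k points before running Barnes-Hut.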
###Code # Subsampling np.random.seed(42) ind10k = np.random.choice(X.shape[0], 10000, replace=False) %time Z1 = fast_tsne(X50[ind10k,:]) %time Z2 = fast_tsne(X50[ind10k,:], nbody_algo='Barnes-Hut') plt.figure(figsize=(8, 4.5)) for i,(Z,title) in enumerate(zip([Z1,Z2], ['FFT interpolation', 'Barnes-Hut'])): plt.subplot(1,2,i+1) plt.axis('equal') plt.scatter(Z[:,0], Z[:,1], c=col[y[ind10k]], s=2, edgecolors='none') plt.xticks([]) plt.yticks([]) plt.title(title) sns.despine(left=True, bottom=True) plt.tight_layout() ###Output _____no_output_____ ###Markdown Exact t-SNE ###Code # Subsampling np.random.seed(42) ind2k = np.random.choice(X.shape[0], 2000, replace=False) %time Z1 = fast_tsne(X50[ind2k,:]) %time Z2 = fast_tsne(X50[ind2k,:], theta=0) plt.figure(figsize=(8, 4.5)) for i,(Z,title) in enumerate(zip([Z1,Z2], ['Annoy and FFT', 'Exact t-SNE'])): plt.subplot(1,2,i+1) plt.axis('equal') plt.scatter(Z[:,0], Z[:,1], c=col[y[ind2k]], s=5, edgecolors='none') plt.xticks([]) plt.yticks([]) plt.title(title) sns.despine(left=True, bottom=True) plt.tight_layout() ###Output _____no_output_____ ###Markdown Loading and saving input similarities ###Code %time Z1 = fast_tsne(X50, load_affinities = 'save') %time Z2 = fast_tsne(X50, load_affinities = 'load') plt.figure(figsize=(8, 4.5)) for i,(Z,title) in enumerate(zip([Z1,Z2], ['Similarities saved...', '...similarities loaded'])): plt.subplot(1,2,i+1) plt.axis('equal') plt.scatter(Z[:,0], Z[:,1], c=col[y], s=2, edgecolors='none') plt.xticks([]) plt.yticks([]) plt.title(title) sns.despine(left=True, bottom=True) plt.tight_layout() # And now for the exact t-SNE %time Z1 = fast_tsne(X50[ind2k,:], theta=0, load_affinities = 'save') %time Z2 = fast_tsne(X50[ind2k,:], theta=0, load_affinities = 'load') plt.figure(figsize=(8, 4.5)) for i,(Z,title) in enumerate(zip([Z1,Z2], ['Exact similarities saved...', '...exact similarities loaded'])): plt.subplot(1,2,i+1) plt.axis('equal') plt.scatter(Z[:,0], Z[:,1], c=col[y[ind2k]], s=5,
edgecolors='none') plt.xticks([]) plt.yticks([]) plt.title(title) sns.despine(left=True, bottom=True) plt.tight_layout() ###Output _____no_output_____ ###Markdown Import libraries ###Code %load_ext autoreload %autoreload 2 import pandas as pd import pandas_profiling import numpy as np df=pd.read_csv("test.csv",) ###Output _____no_output_____ ###Markdown Inline report without saving object ###Code profile = pandas_profiling.ProfileReport(df, check_correlation=True) profile profile = pandas_profiling.ProfileReport(df, check_correlation=False) profile ###Output /Applications/miniconda3/lib/python3.6/site-packages/matplotlib/font_manager.py:1238: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans. (prop.get_family(), self.defaultFamily[fontext])) ###Markdown time ###Code %timeit pandas_profiling.ProfileReport(df, check_correlation=False) %timeit pandas_profiling.ProfileReport(df, check_correlation=True) ###Output /Applications/miniconda3/lib/python3.6/site-packages/matplotlib/font_manager.py:1238: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans. (prop.get_family(), self.defaultFamily[fontext]))
coursera_ml/a2_w1_s3_SparkML_Splitting.ipynb
###Markdown This notebook is designed to run in an IBM Watson Studio default runtime (NOT the Watson Studio Apache Spark Runtime, as the default runtime with 1 vCPU is free of charge). Therefore, we install Apache Spark in local mode for test purposes only. Please don't use it in production. In case you are facing issues, please read the following two documents first: https://github.com/IBM/skillsnetwork/wiki/Environment-Setup and https://github.com/IBM/skillsnetwork/wiki/FAQ Then, please feel free to ask: https://coursera.org/learn/machine-learning-big-data-apache-spark/discussions/all Please make sure to follow the guidelines before asking a question: https://github.com/IBM/skillsnetwork/wiki/FAQ#im-feeling-lost-and-confused-please-help-me If running outside Watson Studio, this should work as well. In case you are running in an Apache Spark context outside Watson Studio, please remove the Apache Spark setup in the first notebook cells. ###Code from IPython.display import Markdown, display def printmd(string): display(Markdown('# <span style="color:red">'+string+'</span>')) if ('sc' in locals() or 'sc' in globals()): printmd('<<<<<!!!!! It seems that you are running in an IBM Watson Studio Apache Spark Notebook. Please run it in an IBM Watson Studio Default Runtime (without Apache Spark) !!!!!>>>>>') !pip install pyspark==2.4.5 try: from pyspark import SparkContext, SparkConf from pyspark.sql import SparkSession except ImportError as e: printmd('<<<<<!!!!!
Please restart your kernel after installing Apache Spark !!!!!>>>>>') sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]")) spark = SparkSession \ .builder \ .getOrCreate() ###Output _____no_output_____ ###Markdown In case you want to learn how ETL is done, please run the following notebook first and update the file name below accordingly: https://github.com/IBM/coursera/blob/master/coursera_ml/a2_w1_s3_ETL.ipynb ###Code # delete files from previous runs !rm -f hmp.parquet* # download the file containing the data in PARQUET format !wget https://github.com/IBM/coursera/raw/master/hmp.parquet # create a dataframe out of it df = spark.read.parquet('hmp.parquet') # register a corresponding query table df.createOrReplaceTempView('df') df_energy = spark.sql(""" select sqrt(sum(x*x)+sum(y*y)+sum(z*z)) as label, class from df group by class """) df_energy.createOrReplaceTempView('df_energy') df_join = spark.sql('select * from df inner join df_energy on df.class=df_energy.class') splits = df_join.randomSplit([0.8, 0.2]) df_train = splits[0] df_test = splits[1] df_train.count() df_test.count() from pyspark.ml.feature import VectorAssembler from pyspark.ml.feature import MinMaxScaler vectorAssembler = VectorAssembler(inputCols=["x","y","z"], outputCol="features") normalizer = MinMaxScaler(inputCol="features", outputCol="features_norm") from pyspark.ml.regression import LinearRegression lr = LinearRegression(maxIter=10, regParam=0.3, elasticNetParam=0.8) from pyspark.ml import Pipeline pipeline = Pipeline(stages=[vectorAssembler, normalizer, lr]) model = pipeline.fit(df_train) model.stages[2].summary.r2 # note: the next line refits the pipeline on the test split, so the reported r2 is in-sample for that split, not a held-out score model = pipeline.fit(df_test) model.stages[2].summary.r2 ###Output _____no_output_____
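`randomSplit` assigns each row to a split probabilistically, so the 80/20 proportions are only approximate, which is why the notebook calls `count()` on both splits afterwards. A minimal pure-Python sketch of the same assignment idea (`random_split` is a hypothetical helper for illustration, not part of pyspark):

```python
import random

def random_split(rows, weights, seed=42):
    """Assign each row to exactly one split, with probability proportional
    to the normalized weights, similar in spirit to DataFrame.randomSplit."""
    total = sum(weights)
    bounds, acc = [], 0.0
    for w in weights:
        acc += w / total
        bounds.append(acc)  # cumulative probability boundaries, ending at 1.0
    rng = random.Random(seed)
    splits = [[] for _ in weights]
    for row in rows:
        r = rng.random()
        for i, b in enumerate(bounds):
            if r < b:
                splits[i].append(row)
                break
        else:
            splits[-1].append(row)  # guard against float rounding in the last bound
    return splits

train, test = random_split(list(range(1000)), [0.8, 0.2])
print(len(train), len(test))  # roughly 800/200, not exactly
```

Because membership is decided per row, different seeds give slightly different split sizes, matching what `df_train.count()` and `df_test.count()` report in the notebook.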
docs/learning/beautiful-soup-web-scraping-first-steps.ipynb
###Markdown ###Code # print the first 500 characters of the fetched page print(r.text[0:500]) from bs4 import BeautifulSoup # initialize the parser soup = BeautifulSoup(r.text, 'html.parser') # find all span tags whose class attribute equals short-desc results = soup.find_all('span', attrs={'class':'short-desc'}) len(results) results[0:3] results[-1] first_result = results[0] first_result # get the strong tag within the first result first_result.find('strong') first_result.find('strong').text first_result.find('strong').text[0:-1] + ', 2017' first_result.contents first_result.contents[1] first_result.contents[1][1:-2] first_result.find('a').text[1:-1] first_result.find('a')['href'] records = [] for result in results: date = result.find('strong').text[0:-1] + ', 2017' lie = result.contents[1][1:-2] explanation = result.find('a').text[1:-1] url = result.find('a')['href'] records.append((date, lie, explanation, url)) len(records) records[0:3] import pandas as pd df = pd.DataFrame(records, columns=['date', 'lie', 'explanation', 'url']) df.head() df.tail() df['date'] = pd.to_datetime(df['date']) df.head() df.tail() df.to_csv('trump.csv', index=False, encoding='utf-8') df2 = pd.read_csv('trump.csv', parse_dates=['date'], encoding='utf-8') df2.head() ###Output _____no_output_____
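The cells above assume `r` already holds the fetched page (the `requests.get` call is not shown in this excerpt). The same `find_all`/`.contents` extraction pattern can be exercised offline against a small stand-in snippet; the markup below is hypothetical, shaped like the page's `short-desc` spans:

```python
from bs4 import BeautifulSoup

# Hypothetical stand-in markup mirroring the scraped page's structure:
# <span class="short-desc"><strong>DATE&nbsp;</strong>"LIE" <span class="short-truth"><a href=URL>(EXPLANATION)</a></span></span>
html = """
<span class="short-desc"><strong>Jan. 21&nbsp;</strong>“Claim one.”
<span class="short-truth"><a href="https://example.com/a">(fact check)</a></span></span>
<span class="short-desc"><strong>Jan. 23&nbsp;</strong>“Claim two.”
<span class="short-truth"><a href="https://example.com/b">(fact check)</a></span></span>
"""

soup = BeautifulSoup(html, "html.parser")
records = []
for result in soup.find_all("span", attrs={"class": "short-desc"}):
    date = result.find("strong").text.strip() + ", 2017"  # drop the trailing non-breaking space
    lie = result.contents[1].strip()[1:-1]                # text node between <strong> and the inner span, minus the quotes
    explanation = result.find("a").text[1:-1]             # strip the surrounding parentheses
    url = result.find("a")["href"]
    records.append((date, lie, explanation, url))

print(records[0])
```

Indexing `.contents[1]` works because each outer span's direct children are exactly the `strong` tag, the quoted text node, and the inner `short-truth` span, which is the same layout the notebook's loop relies on.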
Sentimental_Analysis.ipynb
###Markdown Using RNN ###Code # A simple RNN network to classify the emoji class from a input Sentence model = Sequential() model.add(SimpleRNN(64, input_shape=(10,50), return_sequences=True)) model.add(Dropout(0.5)) model.add(SimpleRNN(64, return_sequences=False)) model.add(Dropout(0.5)) model.add(Dense(5)) model.add(Activation('softmax')) model.summary() # Setting Loss, Optimizer of the Model model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Training of the Model hist = model.fit(embedding_matrix_train,Y_train, epochs = 50, batch_size=32,shuffle=True ) # prediction from the trained model pred = model.predict_classes(embedding_matrix_test) # Calculating the score of the algorithm float(sum(pred==Y_test))/embedding_matrix_test.shape[0] # printing the sentences with the predicted emoji and the labelled emoji for ix in range(embedding_matrix_test.shape[0]): if pred[ix] != Y_test[ix]: print(ix) print(test[0][ix],end=" ") print(emoji.emojize(emoji_dict[pred[ix]], use_aliases=True),end=" ") print(emoji.emojize(emoji_dict[Y_test[ix]], use_aliases=True)) # Predicting for Our random sentence x = ['i', 'do', 'think','this', 'class', 'is', 'very', 'interesting'] x_ = np.zeros((1,10,50)) for ix in range(len(x)): x_[0][ix] = embeddings_index[x[ix].lower()] model.predict_classes(x_) ###Output _____no_output_____ ###Markdown Using LSTM ###Code model = Sequential() model.add(LSTM(128, input_shape=(10,50), return_sequences=True)) model.add(Dropout(0.5)) model.add(LSTM(128, return_sequences=False)) model.add(Dropout(0.5)) model.add(Dense(5)) model.add(Activation('softmax')) model.summary() model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) hist = model.fit(embedding_matrix_train,Y_train, epochs = 50, batch_size=32,shuffle=True ) pred = model.predict_classes(embedding_matrix_test) float(sum(pred==Y_test))/embedding_matrix_test.shape[0] for ix in range(embedding_matrix_test.shape[0]): if pred[ix] != 
Y_test[ix]: print(ix) print(test[0][ix],end=" ") print(emoji.emojize(emoji_dict[pred[ix]], use_aliases=True),end=" ") print(emoji.emojize(emoji_dict[Y_test[ix]], use_aliases=True)) ###Output _____no_output_____ ###Markdown ***Information About Data*** ###Code print ("Number of Data Points :", data.shape[0]) print ("Number of Features /Variables:" ,data.shape[1]) ###Output Number of Data Points : 50000 Number of Features /Variables: 2 ###Markdown ***Data Preprocessing*** ###Code lmtzr = WordNetLemmatizer() stop_words = set(stopwords.words('english')) def remove_punctuation(s): s = ''.join([i for i in s if i not in frozenset(string.punctuation)]) return s def nlp_preprocessing(total_text, index, column): if type(total_text) is not int: string = "" for words in total_text.split(): # remove the special chars in review like '"#$@!%^&*()_+-~?>< etc. word = ("".join(e for e in words if e.isalpha())) # Conver all letters to lower-case word = word.lower() # stop-word removal if not word in stop_words: string += lmtzr.lemmatize(word) + " " data[column][index] = string start_time = time.clock() data['review'] = data['review'].apply(remove_punctuation) print("Time for Punctuation removal",(time.clock() - start_time)/60, "minutes") start_time = time.clock() for index, row in data.iterrows(): nlp_preprocessing(row['review'], index, 'review') print("Time for Preprocesing",(time.clock() - start_time)/60, "minutes") pd.to_hdf('movie_data.h5','data') ###Output _____no_output_____ ###Markdown ***New Starting*** ###Code data=pd.read_hdf('movie_data.h5','data') data.shape data.columns Y=data['sentiment'] ###Output _____no_output_____ ###Markdown ***BOW Model*** ###Code vectorizer=CountVectorizer() X=vectorizer.fit_transform(data['review']) X.shape X_train, X_test, y_train, y_test = train_test_split(X, Y,test_size=.2,random_state=42) print (X_train.shape, y_train.shape) print (X_test.shape, y_test.shape) ###Output (40000, 164728) (40000,) (10000, 164728) (10000,) ###Markdown ***RBF 
Kernel*** ###Code start_time = time.clock() model_svm = svm.SVC(C=1,gamma=1) model_svm.fit(X_train,y_train) y_pred= model_svm.predict(X_test) print((time.clock() - start_time)/60, "minutes") print("Accuracy",model_svm.score(X_test,y_test)) svm_pkl_model=open("svm_model_rbf_bow.pkl","wb") pickle.dump(model_svm,svm_pkl_model) svm_pkl_model.close() ###Output _____no_output_____ ###Markdown ***Linear Kernel*** ###Code start_time = time.clock() model_svm = svm.SVC(kernel='linear',C=1,gamma=1) model_svm.fit(X_train,y_train) print((time.clock() - start_time)/60, "minutes") print("Accuracy",model_svm.score(X_test,y_test)) svm_pkl_model=open("svm_model_linear_bow.pkl","wb") pickle.dump(model_svm,svm_pkl_model) svm_pkl_model.close() ###Output _____no_output_____ ###Markdown ***Tf-Idf Model*** ###Code tfidf_review_vectorizer_train1 = TfidfVectorizer() tfidf_review_features_train1= tfidf_review_vectorizer_train1.fit_transform(data['review']) X_train1, X_test1, y_train1, y_test1 = train_test_split(tfidf_review_features_train1, Y,) print (X_train1.shape, y_train1.shape) print (X_test1.shape, y_test1.shape) ###Output _____no_output_____ ###Markdown ***Linear kernel*** ###Code start_time = time.clock() model_svm = svm.SVC(kernel='linear',C=1,gamma=1) model_svm.fit(X_train1,y_train1) print((time.clock() - start_time)/60, "minutes") print("Accuracy:",model_svm.score(X_test1,y_test1)) svm_pkl_model=open("svm_model_linear_TfIDF.pkl","wb") pickle.dump(model_svm,svm_pkl_model) svm_pkl_model.close() ###Output _____no_output_____ ###Markdown ***Rbf Kernel*** ###Code start_time = time.clock() model_svm = svm.SVC(C=1,gamma=1) model_svm.fit(X_train1,y_train1) print((time.clock() - start_time)/60, "minutes") print("Accuracy:",model_svm.score(X_test1,y_test1)) svm_pkl_model=open("svm_model_rbf_TfIDF.pkl","wb") pickle.dump(model_svm,svm_pkl_model) svm_pkl_model.close() ###Output _____no_output_____ ###Markdown Naive bayes Model ***Tf-Idf Model*** ###Code start_time = time.clock() clf = 
MultinomialNB() clf.fit(X_train1, y_train1) print("Accuracy:",clf.score(X_test1,y_test1)) print((time.clock() - start_time)/60, "minutes") nb_pkl_model=open("nb_model_multinb.pkl","wb") pickle.dump(clf,nb_pkl_model) nb_pkl_model.close() ###Output _____no_output_____ ###Markdown ***Bow Model*** ###Code start_time = time.clock() clf = MultinomialNB() clf.fit(X_train, y_train) print("Accuracy:",clf.score(X_test,y_test)) print((time.clock() - start_time)/60, "minutes") nb_pkl_model=open("nb_model_multinb_Bow.pkl","wb") pickle.dump(clf,nb_pkl_model) nb_pkl_model.close() ###Output _____no_output_____ ###Markdown Dimensionality Reduction ***Latent Semantic analysis on BOW Model*** ###Code start_time = time.clock() svd = TruncatedSVD(n_components=1000, random_state=42) bow_X_svd=svd.fit_transform(X) print((time.clock() - start_time)/60, "minutes") svd.singular_values_.size bow_X_svd.size arr=svd.explained_variance_ratio_ plt.plot(np.cumsum(arr)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance'); plt.show() X_train, X_test, y_train, y_test = train_test_split(bow_X_svd, Y,test_size=.2,random_state=42) print (X_train.shape, y_train.shape) print (X_test.shape, y_test.shape) ###Output (40000, 1000) (40000,) (10000, 1000) (10000,) ###Markdown SVM-Linear model ###Code start_time = time.clock() model_svm = svm.SVC(kernel='linear',C=1,gamma=1) model_svm.fit(X_train,y_train) print((time.clock() - start_time)/60, "minutes") print("Accuracy:",model_svm.score(X_test,y_test)) svm_pkl_model=open("SVM_Bow_svd_linear.pkl","wb") pickle.dump(model_svm,svm_pkl_model) svm_pkl_model.close() ###Output _____no_output_____ ###Markdown SVM-RBf model ###Code start_time = time.clock() model_svm = svm.SVC(C=1,gamma=1) model_svm.fit(X_train,y_train) print((time.clock() - start_time)/60, "minutes") print("Accuracy:",model_svm.score(X_test,y_test)) svm_pkl_model=open("SVM_Bow_svd_rbf.pkl","wb") pickle.dump(model_svm,svm_pkl_model) svm_pkl_model.close() ###Output 
_____no_output_____ ###Markdown Naive Bayes-Multinomial ###Code start_time = time.clock() clf = MultinomialNB() clf.fit(X_train, y_train) print("Accuracy:",clf.score(X_test,y_test)) print((time.clock() - start_time)/60, "minutes") nb_pkl_model=open("nb_multinb_Bow_svd.pkl","wb") pickle.dump(clf,nb_pkl_model) nb_pkl_model.close() ###Output _____no_output_____ ###Markdown ***LSA on TF-Idf Model*** ###Code start_time = time.clock() svd = TruncatedSVD(n_components=1200, random_state=42) tfidf_X_svd=svd.fit_transform(tfidf_review_features_train1) print((time.clock() - start_time)/60, "minutes") arr=svd.explained_variance_ratio_ plt.plot(np.cumsum(arr)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance'); plt.show() X_train ###Output _____no_output_____ ###Markdown PCA ***PCA on Bow*** ###Code start_time = time.clock() pca = PCA(n_components=150) pca_X_bow=pca.fit_transform(tfidf_review_features_train1) print((time.clock() - start_time)/60, "minutes") arr=pca.explained_variance_ratio_ plt.plot(np.cumsum(arr)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance'); plt.show() ###Output _____no_output_____ ###Markdown Save result to Google Drive ###Code from google.colab import drive drive.mount('/gdrive', force_remount=True) ###Output Mounted at /gdrive ###Markdown Install Transformer and import libraries ###Code pip install transformers import tensorflow as tf import tensorflow_hub as hub import pandas as pd from sklearn.model_selection import train_test_split import numpy as np import re import unicodedata import nltk import keras from tqdm import tqdm import pickle from keras.models import Model import keras.backend as K from sklearn.metrics import confusion_matrix,f1_score,classification_report from keras.callbacks import ModelCheckpoint import itertools from sklearn.utils import shuffle import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Set up functions for pre-processing ###Code def 
unicode_to_ascii(s): return ''.join(c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn') def clean_stopwords_shortwords(w): stopwords_list=stopwords.words('english') words = w.split() clean_words = [word for word in words if (word not in stopwords_list) and len(word) > 2] return " ".join(clean_words) def preprocess_sentence(w): w = unicode_to_ascii(w.lower().strip()) w = re.sub(r"([?.!,¿])", r" ", w) w = re.sub(r'[" "]+', " ", w) w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w) w=clean_stopwords_shortwords(w) w=re.sub(r'@\w+', '',w) return w ###Output _____no_output_____ ###Markdown Uploading Data ###Code from google.colab import files uploaded = files.upload() import io data = pd.read_csv(io.BytesIO(uploaded['FinAccumulatedPreppedData.csv'])) data.head() ###Output _____no_output_____ ###Markdown Pre-process Data ###Code import nltk nltk.download("stopwords") data=data.dropna() # Drop NaN valuues, if any data=data.reset_index(drop=True) # Reset index after dropping the columns/rows with NaN values data = shuffle(data) # Shuffle the dataset print('Available labels: ',data.cat_val.unique()) # Print all the unique labels in the dataset data['text']=data['text'].map(preprocess_sentence) # Clean the text column using preprocess_sentence function defined above ###Output Available labels: ['M' 'G' 'R'] ###Markdown Set up Distil_Bert ###Code from transformers import BertTokenizer, TFBertModel, BertConfig, TFDistilBertForSequenceClassification, DistilBertTokenizer num_classes = len(data.cat_val.unique()) dbert_tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased") dbert_model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased', num_labels = num_classes) data['gt'] = data['cat_val'].map({'G':0,'M':1,'R':2}) data.head() sentences=data['text'] labels=data['gt'] len(sentences),len(labels) ###Output _____no_output_____ ###Markdown Checking the inputs / Masked Task ###Code input_ids=[] attention_masks=[] for sent 
in sentences: dbert_inp=dbert_tokenizer.encode_plus(sent, add_special_tokens = True, max_length = 64, pad_to_max_length = True, return_attention_mask = True) input_ids.append(dbert_inp['input_ids']) attention_masks.append(dbert_inp['attention_mask']) input_ids=np.asarray(input_ids) attention_masks=np.array(attention_masks) labels=np.array(labels) len(input_ids),len(attention_masks),len(labels) print('Preparing the pickle file.....') pickle_inp_path='/gdrive/Shared drives/COMP484-Project/ModellingFiles/dbert_inp.pkl' pickle_mask_path='/gdrive/Shared drives/COMP484-Project/ModellingFiles/dbert_mask.pkl' pickle_label_path='/gdrive/Shared drives/COMP484-Project/ModellingFiles/dbert_label.pkl' pickle.dump((input_ids),open(pickle_inp_path,'wb')) pickle.dump((attention_masks),open(pickle_mask_path,'wb')) pickle.dump((labels),open(pickle_label_path,'wb')) print('Pickle files saved as ',pickle_inp_path,pickle_mask_path,pickle_label_path) print('Loading the saved pickle files..') input_ids=pickle.load(open(pickle_inp_path, 'rb')) attention_masks=pickle.load(open(pickle_mask_path, 'rb')) labels=pickle.load(open(pickle_label_path, 'rb')) print('Input shape {} Attention mask shape {} Input label shape {}'.format(input_ids.shape,attention_masks.shape,labels.shape)) ###Output Loading the saved pickle files.. 
Input shape (1057, 64) Attention mask shape (1057, 64) Input label shape (1057,) ###Markdown Split data (Train, Test) ###Code train_inp,val_inp,train_label,val_label,train_mask,val_mask=train_test_split(input_ids,labels,attention_masks,test_size=0.2) print('Train inp shape {} Val input shape {}\nTrain label shape {} Val label shape {}\nTrain attention mask shape {} Val attention mask shape {}'.format(train_inp.shape,val_inp.shape,train_label.shape,val_label.shape,train_mask.shape,val_mask.shape)) ###Output Train inp shape (845, 64) Val input shape (212, 64) Train label shape (845,) Val label shape (212,) Train attention mask shape (845, 64) Val attention mask shape (212, 64) ###Markdown Compile the model ###Code log_dir='/gdrive/Shared drives/COMP484-Project/ModellingFiles/tb_dbert' model_save_path='/gdrive/Shared drives/COMP484-Project/ModellingFiles/models/dbert_model.h5' callbacks = [ tf.keras.callbacks.EarlyStopping(patience = 2), tf.keras.callbacks.ModelCheckpoint(filepath=model_save_path,save_weights_only=True,monitor='val_loss',mode='min',save_best_only=True), tf.keras.callbacks.TensorBoard(log_dir=log_dir)] print('\nDistil Bert Model',dbert_model.summary()) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy') optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5, epsilon=1e-08) dbert_model.compile(loss=loss,optimizer=optimizer,metrics=[metric]) ###Output Model: "tf_distil_bert_for_sequence_classification" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= distilbert (TFDistilBertMai multiple 66362880 nLayer) pre_classifier (Dense) multiple 590592 classifier (Dense) multiple 2307 dropout_95 (Dropout) multiple 0 ================================================================= Total params: 66,955,779 Trainable params: 66,955,779 Non-trainable params: 0 
_________________________________________________________________ Distil Bert Model None ###Markdown Train the model ###Code history=dbert_model.fit( [train_inp,train_mask], train_label, batch_size=32, epochs=10, validation_data=([val_inp,val_mask],val_label), callbacks=callbacks) ###Output Epoch 1/10 27/27 [==============================] - 338s 12s/step - loss: 0.1966 - accuracy: 0.9243 - val_loss: 1.0916 - val_accuracy: 0.6604 Epoch 2/10 27/27 [==============================] - 309s 11s/step - loss: 0.1720 - accuracy: 0.9337 - val_loss: 1.0214 - val_accuracy: 0.6792 Epoch 3/10 27/27 [==============================] - 324s 12s/step - loss: 0.1516 - accuracy: 0.9325 - val_loss: 0.8937 - val_accuracy: 0.7594 Epoch 4/10 27/27 [==============================] - 321s 12s/step - loss: 0.1543 - accuracy: 0.9349 - val_loss: 0.9656 - val_accuracy: 0.7311 Epoch 5/10 27/27 [==============================] - 325s 12s/step - loss: 0.1635 - accuracy: 0.9290 - val_loss: 0.9100 - val_accuracy: 0.6934 ###Markdown Analysis of model ###Code history_dict = history.history # print(history_dict.keys()) loss = history_dict['loss'] acc = history_dict['accuracy'] val_loss = history_dict['val_loss'] val_acc = history_dict['val_accuracy'] epoches = range(1, 6) fig = plt.figure(figsize=(20,10)) fig.tight_layout() plt.subplot(2,1,1) #nrow, ncol, index plt.plot(epoches, loss, 'blue', label = 'Training loss') plt.plot(epoches, val_loss, 'green', label = 'Validation loss') plt.title('Training and validation loss') plt.xlabel('Epoch') plt.ylabel('Loss') plt.legend() plt.subplot(2,1,2) #nrow, ncol, index plt.plot(epoches, acc, 'b', label = 'Training acc') plt.plot(epoches, val_acc, 'g', label = 'Validation acc') plt.title('Training and validation accuracy') plt.xlabel('Epoch') plt.ylabel('Loss') plt.legend() ###Output _____no_output_____ ###Markdown Analysis of Data ###Code import seaborn as sns sns.countplot(data.cat_val) plt.xlabel('Distribution of categories') ###Output 
/usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning ###Markdown Importing libraries ###Code import nltk import glob import os import numpy as np import string import pickle from gensim.models import Doc2Vec from gensim.models.doc2vec import LabeledSentence from tqdm import tqdm from sklearn import utils from sklearn.svm import LinearSVC from sklearn.neural_network import MLPClassifier from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score from matplotlib import pyplot from nltk import sent_tokenize from nltk.tokenize import word_tokenize from nltk.corpus import stopwords from nltk.stem.porter import PorterStemmer from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.naive_bayes import MultinomialNB from sklearn.naive_bayes import BernoulliNB from sklearn.naive_bayes import GaussianNB from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import accuracy_score from collections import Counter from collections import defaultdict X_train_text = [] Y_train = [] X_test_text =[] Y_test =[] Vocab = {} VocabFile = "aclImdb/imdb.vocab" ###Output _____no_output_____ ###Markdown Create Vocabulary Function ###Code def CreateVocab(): with open(VocabFile, encoding='latin-1') as f: words = f.read().splitlines() stop_words = set(stopwords.words('english')) i=0 for word in words: if word not in stop_words: Vocab[word] = i i+=1 print(len(Vocab)) ###Output _____no_output_____ ###Markdown Cleaning Data ###Code def clean_review(text): tokens = word_tokenize(text) tokens = [w.lower() for w in tokens] table = str.maketrans('', '', string.punctuation) stripped 
= [w.translate(table) for w in tokens] words = [word for word in stripped if word.isalpha()] stop_words = set(stopwords.words('english')) words = [w for w in words if not w in stop_words] return words ###Output _____no_output_____ ###Markdown Generating Word Matrices ###Code def BoWMatrix(docs): vectorizer = CountVectorizer(binary=True,vocabulary = Vocab) Doc_Term_matrix = vectorizer.fit_transform(docs) return Doc_Term_matrix def TfidfMatrix(docs): vectorizer = TfidfVectorizer(vocabulary = Vocab,norm = 'l1') Doc_Term_matrix = vectorizer.fit_transform(docs) return Doc_Term_matrix ###Output _____no_output_____ ###Markdown ROC Curve Function ###Code def ROC(Y_train, pred1, Y_test, pred2): fpr1, tpr1, thresholds1 = roc_curve(Y_train, pred1) pyplot.plot([0, 1], [0, 1], linestyle='--') pyplot.plot(fpr1, tpr1, marker='.', color='blue', label="Train", linewidth=1.0) fpr2, tpr2, thresholds2 = roc_curve(Y_test, pred2) pyplot.plot(fpr2, tpr2, marker='.', color='red', label="Test", linewidth=1.0) pyplot.legend() pyplot.show() def ROC2(X, pred, pred1, pred2): fpr, tpr, thresholds = roc_curve(X, pred) pyplot.plot([0, 1], [0, 1], linestyle='--') pyplot.plot(fpr, tpr, marker='.') fpr1, tpr1, thresholds1 = roc_curve(X, pred1) pyplot.plot([0, 1], [0, 1], linestyle='--') pyplot.plot(fpr1, tpr1, marker='.') fpr2, tpr2, thresholds2 = roc_curve(X, pred2) pyplot.plot([0, 1], [0, 1], linestyle='--') pyplot.plot(fpr2, tpr2, marker='.') pyplot.show() ###Output _____no_output_____ ###Markdown Naive Bayes Function ###Code def NB(X,Y_train,Xtest,Y_test,mtype): if mtype == "Bow": model = BernoulliNB() elif mtype == "Tfidf": model = MultinomialNB() else: model = GaussianNB() model.fit(X,Y_train) pred1 = model.predict(X) pred2 = model.predict(Xtest) acc1 = accuracy_score(Y_train,pred1) acc2 = accuracy_score(Y_test,pred2) print("NaiveBayes + " + mtype + " Train Accuracy: " + str(acc1*100) + "%") print("NaiveBayes + " + mtype + " Test Accuracy: " + str(acc2*100) + "%") prob1 = 
model.predict_proba(X) prob1 = prob1[:, 1] prob2 = model.predict_proba(Xtest) prob2 = prob2[:, 1] #ROC(Y_train, pred1, Y_test, pred2) ROC(Y_train, prob1, Y_test, prob2) ###Output _____no_output_____ ###Markdown Logistic Regression Function ###Code def LR(X,Y_train,Xtest,Y_test,mtype): model = LogisticRegression() model.fit(X,Y_train) pred1 = model.predict(X) pred2 = model.predict(Xtest) acc1 = accuracy_score(Y_train,pred1) acc2 = accuracy_score(Y_test,pred2) print("LogisticRegression + " + mtype + " Train Accuracy: " + str(acc1*100) + "%") print("LogisticRegression + " + mtype + " Test Accuracy: " + str(acc2*100) + "%") prob1 = model.predict_proba(X) prob1 = prob1[:, 1] prob2 = model.predict_proba(Xtest) prob2 = prob2[:, 1] #ROC(Y_train, pred1, Y_test, pred2) ROC(Y_train, prob1, Y_test, prob2) ###Output _____no_output_____ ###Markdown Random Forest Function ###Code def RF(X,Y_train,Xtest,Y_test,mtype): if mtype == "Bow": n = 400 md = 100 elif mtype == "Tfidf": n = 400 md = 100 else: n = 100 md = 10 model = RandomForestClassifier(n_estimators=n, bootstrap=True, max_depth=md, max_features='auto', min_samples_leaf=4, min_samples_split=10) model.fit(X,Y_train) pred1 = model.predict(X) pred2 = model.predict(Xtest) acc1 = accuracy_score(Y_train,pred1) acc2 = accuracy_score(Y_test,pred2) print("RandomForest + " + mtype + " Train Accuracy: " + str(acc1*100) + "%") print("RandomForest + " + mtype + " Test Accuracy: " + str(acc2*100) + "%") prob1 = model.predict_proba(X) prob1 = prob1[:, 1] prob2 = model.predict_proba(Xtest) prob2 = prob2[:, 1] #ROC(Y_train, pred1, Y_test, pred2) ROC(Y_train, prob1, Y_test, prob2) ###Output _____no_output_____ ###Markdown Support Vector Machine Function ###Code def SVM(X,Y_train,Xtest,Y_test,mtype): model = LinearSVC() model.fit(X,Y_train) pred1 = model.predict(X) pred2 = model.predict(Xtest) acc1 = accuracy_score(Y_train,pred1) acc2 = accuracy_score(Y_test,pred2) print("SVM + " + mtype + " Train Accuracy: " + str(acc1*100) + "%") print("SVM 
+ " + mtype + " Test Accuracy: " + str(acc2*100) + "%") ROC(Y_train, pred1, Y_test, pred2) ###Output _____no_output_____ ###Markdown Forward Feed Neural Network Function ###Code def NN(X,Y_train,Xtest,Y_test,mtype): model = MLPClassifier(hidden_layer_sizes=(10,10),activation='relu',max_iter=200) model.fit(X,Y_train) pred1 = model.predict(X) pred2 = model.predict(Xtest) acc1 = accuracy_score(Y_train,pred1) acc2 = accuracy_score(Y_test,pred2) print("FFN + " + mtype + " Train Accuracy: " + str(acc1*100) + "%") print("FFN + " + mtype + " Test Accuracy: " + str(acc2*100) + "%") prob1 = model.predict_proba(X) prob1 = prob1[:, 1] prob2 = model.predict_proba(Xtest) prob2 = prob2[:, 1] #ROC(Y_train, pred1, Y_test, pred2) ROC(Y_train, prob1, Y_test, prob2) ###Output _____no_output_____ ###Markdown Loading Data ###Code path1 = 'aclImdb/train/pos/*.txt' path2 = 'aclImdb/train/neg/*.txt' path3 = 'aclImdb/test/pos/*.txt' path4 = 'aclImdb/test/neg/*.txt' files1 = glob.glob(path1) files2 = glob.glob(path2) files3 = glob.glob(path3) files4 = glob.glob(path4) #Positive labels for i,filename in enumerate(files1): f = open(filename,"r+", encoding='latin-1') text = f.read() f.close() X_train_text.append(text) Y_train.append(1) #Neg labels for j,filename in enumerate(files2): f = open(filename,"r+", encoding='latin-1') text = f.read() f.close() X_train_text.append(text) Y_train.append(0) #Test labels + for k,filename in enumerate(files3): f = open(filename,"r+", encoding='latin-1') text = f.read() f.close() X_test_text.append(text) Y_test.append(1) #Test labels + for l,filename in enumerate(files4): f = open(filename,"r+", encoding='latin-1') text = f.read() f.close() X_test_text.append(text) Y_test.append(0) CreateVocab(); ###Output 89356 ###Markdown Generating Word Matrix for Test & Train Data ###Code def Getbowvec(X_train_text,Y_train,X_test_text,Y_test): X = BoWMatrix(X_train_text) Xtest = BoWMatrix(X_test_text) return X,Xtest def 
Gettfidfvec(X_train_text,Y_train,X_test_text,Y_test): X = TfidfMatrix(X_train_text) Xtest = TfidfMatrix(X_test_text) return X,Xtest ###Output _____no_output_____ ###Markdown Doc2Vec Representation ###Code ''' def LabelRev(reviews,label_string): result = [] prefix = label_string for i, t in enumerate(reviews): # print(t) result.append(LabeledSentence(t, [prefix + '_%s' % i])) return result LabelledXtrain = LabelRev(X_train_text,"review") LabelledXtest = LabelRev(X_test_text,"test") LabelledData = LabelledXtrain + LabelledXtest modeld2v = Doc2Vec(dm=1, min_count=2, alpha=0.065, min_alpha=0.065) modeld2v.build_vocab([x for x in tqdm(LabelledData)]) print("Training the Doc2Vec Model.....") for epoch in range(50): print("epoch : ",epoch) modeld2v.train(utils.shuffle([x for x in tqdm(LabelledData)]), total_examples=len(LabelledData), epochs=1) modeld2v.alpha -= 0.002 modeld2v.min_alpha = modeld2v.alpha print("Saving Doc2Vec1 Model....") modeld2v.save('doc2vec1.model') #print("Saving Doc2Vec Model....") #modeld2v.save('doc2vec.model') ''' def Doc2vec(X_train_text,Y_train,X_test_text,Y_test): model = Doc2Vec.load('doc2vec.model') #model = Doc2Vec.load('doc2vec1.model') X = [] Xtest =[] for i,l in enumerate(X_train_text): temp = "review" + "_" + str(i) X.append(model.docvecs[temp]) for i,l in enumerate(X_test_text): temp = "test" + "_" + str(i) Xtest.append(model.docvecs[temp]) return X,Xtest print("Bag of Words is being built...") X,Xtest = Getbowvec(X_train_text,Y_train,X_test_text,Y_test) print("Tf-idf is being built...") X1,Xtest1 = Gettfidfvec(X_train_text,Y_train,X_test_text,Y_test) print("Doc2Vec is being built...") X2,Xtest2 = Doc2vec(X_train_text,Y_train,X_test_text,Y_test) len(X[0]) ###Output (long numeric output omitted: a single 100-dimensional Doc2Vec feature vector)
###Markdown Applying Classification Algorithms ###Code print("Naive Bayes:") NB(X,Y_train,Xtest,Y_test,"Bow") NB(X1,Y_train,Xtest1,Y_test,"Tfidf") NB(X2,Y_train,Xtest2,Y_test,"Doc2Vec") print("Logistic Regression:") LR(X,Y_train,Xtest,Y_test,"Bow") LR(X1,Y_train,Xtest1,Y_test,"Tfidf") LR(X2,Y_train,Xtest2,Y_test,"Doc2Vec") print("Random Forest:") RF(X,Y_train,Xtest,Y_test,"Bow") RF(X1,Y_train,Xtest1,Y_test,"Tfidf") RF(X2,Y_train,Xtest2,Y_test,"Doc2Vec") print("SVM:") SVM(X,Y_train,Xtest,Y_test,"Bow") SVM(X1,Y_train,Xtest1,Y_test,"Tfidf") SVM(X2,Y_train,Xtest2,Y_test,"Doc2Vec") print("Neural Networks:") NN(X,Y_train,Xtest,Y_test,"Bow") NN(X1,Y_train,Xtest1,Y_test,"Tfidf") NN(X2,Y_train,Xtest2,Y_test,"Doc2Vec") ###Output Neural Networks: FFN + Bow Train Accuracy: 100.0% FFN + Bow Test Accuracy: 85.6% ###Markdown 
###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt import statsmodels.api as sm import seaborn as sns import scipy as sp import matplotlib as mpl !pip install wget import wget url1 = 'https://raw.githubusercontent.com/e9t/nsmc/master/ratings_train.txt' wget.download(url1) url2 = 'https://raw.githubusercontent.com/e9t/nsmc/master/ratings_test.txt' wget.download(url2) import codecs with codecs.open("ratings_train.txt", encoding='utf-8') as f: data = [line.split('\t') for line in f.read().splitlines()] data = data[1:] # exclude the header row from pprint import pprint pprint(data[0]) X = list(zip(*data))[1] y = np.array(list(zip(*data))[2], dtype=int) from sklearn.feature_extraction.text import CountVectorizer from sklearn.naive_bayes import MultinomialNB from sklearn.pipeline import Pipeline from sklearn.metrics import classification_report model1 = Pipeline([ ('vect', CountVectorizer()), ('mb', MultinomialNB()), ]) model1.fit(X, y) import codecs with codecs.open("ratings_test.txt", encoding='utf-8') as f: data_test = [line.split('\t') for line in f.read().splitlines()] data_test = data_test[1:] # exclude the header row X_test = list(zip(*data_test))[1] y_test = np.array(list(zip(*data_test))[2], dtype=int) print(classification_report(y_test, model1.predict(X_test))) from sklearn.feature_extraction.text import TfidfVectorizer model2 = Pipeline([ ('vect', TfidfVectorizer()), ('mb', MultinomialNB()), ]) model2.fit(X, y) print(classification_report(y_test, model2.predict(X_test))) !pip install konlpy from konlpy.tag import Okt pos_tagger = Okt() def tokenize_pos(doc): return ['/'.join(t) for t in pos_tagger.pos(doc)] model3 = Pipeline([ ('vect', CountVectorizer(tokenizer=tokenize_pos)), ('mb', MultinomialNB()), ]) model3.fit(X, y) print(classification_report(y_test, model3.predict(X_test))) model4 = Pipeline([ ('vect', TfidfVectorizer(tokenizer=tokenize_pos, ngram_range=(1, 2))), ('mb', MultinomialNB()), ]) model4.fit(X, y) print(classification_report(y_test, 
model4.predict(X_test))) ###Output precision recall f1-score support 0 0.86 0.87 0.87 24827 1 0.87 0.86 0.87 25173 accuracy 0.87 50000 macro avg 0.87 0.87 0.87 50000 weighted avg 0.87 0.87 0.87 50000
Certamen/P2/201473611-8- Pregunta_2.ipynb
###Markdown INF285/ILI285 Scientific Computing (Computación Científica) COP-1 Name: Felipe Montero Concha Rol (student ID): 201473611-8 Question 2 (a) ###Code def bisection_raiz (f,a,b,tol): while (b-a)/2 > tol: c = (a+b)/2 if f(c)==0: return c if f(a)*f(c)<0: b = c else: a = c return c bomba_funcion = lambda x : (x**10 - 10**x)/x # the function is undefined at x = 0 raiz_negativa = bisection_raiz(bomba_funcion,-2,-0.000000001,0.5*10**-7) raiz_positiva = bisection_raiz(bomba_funcion,0.00000000001,2,0.5*10**-8) print(bomba_funcion(raiz_negativa)) print(bomba_funcion(raiz_positiva)) print(raiz_negativa) print(raiz_positiva) import bitstring rep_bin = raiz_positiva + (2**-8) def f_new_rep(x,bits_mant): # modified floating-point representation algorithm rep_IEEE = bitstring.BitArray(float= x, length = 64) ceros_unos = rep_IEEE.bin signo = ceros_unos[0] exponente = ceros_unos[1:12] mantisa = ceros_unos[12:] print(exponente[0:9]) new_mantisa = mantisa[0:bits_mant] check_mant = mantisa[bits_mant:53] if check_mant[0] == "0": new_mantisa = new_mantisa elif check_mant[0] == "1": if "1" in check_mant[1:]: new_mantisa = sumar_1_bin(new_mantisa) elif "1" not in check_mant[1:]: if new_mantisa[-1] == "1": new_mantisa = sumar_1_bin(new_mantisa) print(new_mantisa) return f_new_rep(rep_bin,8) ###Output 011111111 01100000
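The cell above calls `sumar_1_bin`, which is not defined in this excerpt (it was presumably defined in an earlier cell of the exam notebook). Under that assumption, a minimal sketch of such a helper, which adds 1 to a fixed-width binary string (the round-up step for the truncated mantissa), could be:

```python
def sumar_1_bin(bits):
    # Add 1 to a fixed-width binary string, discarding any overflow carry,
    # e.g. '0110' -> '0111' and '1111' -> '0000'.
    n = int(bits, 2) + 1
    width = len(bits)
    return format(n, '0{}b'.format(width))[-width:]
```

Note this keeps the mantissa width constant; a full rounding routine would propagate an overflow carry into the exponent, which this sketch deliberately ignores.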
ch04_Building_Good_TrainingData/Chapter04_CleaningData.ipynb
###Markdown Generally losing data is bad and should be avoided. To help, we can impute data ###Code #### Mean Imputation from sklearn.preprocessing import Imputer imr = Imputer(missing_values='NaN', strategy='mean', axis = 0) imr = imr.fit(df) imputed_data = imr.transform(df.values) imputed_data ###Output _____no_output_____ ###Markdown Other options for strategy are median or most_frequent Categorical Data ###Code import pandas as pd df = pd.DataFrame([ ['green', 'M', 10.1, 'class1'], ['red', 'L', 13.5, 'class2'], ['blue', 'XL', 15.3, 'class1']]) df.columns = ['color', 'size', 'price', 'classlabel'] df size_mapping = { 'XL': 3, 'L' : 2, 'M' : 1 } df['size'] = df['size'].map(size_mapping) df import numpy as np class_mapping = {label:idx for idx,label in enumerate(np.unique(df['classlabel']))} class_mapping df['classlabel'] = df['classlabel'].map(class_mapping) df inv_class_map = {v: k for k, v in class_mapping.items()} df['classlabel'] = df['classlabel'].map(inv_class_map) df from sklearn.preprocessing import LabelEncoder class_le = LabelEncoder() y = class_le.fit_transform(df['classlabel'].values) y class_le.inverse_transform(y) ###Output _____no_output_____ ###Markdown One-hot encoding on nominal features ###Code X = df[['color', 'size', 'price']].values color_le = LabelEncoder() X[:, 0] = color_le.fit_transform(X[:,0]) X ## THE ABOVE IS BAD since it assumes an ordering of the colors ## Use one-hot encoding to create dummy features from sklearn.preprocessing import OneHotEncoder ohe = OneHotEncoder(categorical_features=[0]) ohe.fit_transform(X).toarray() ## can also use "get_dummies" to make dummy variables pd.get_dummies(df[["price", "color", "size"]]) ###Output _____no_output_____
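A caveat for readers running this today: `sklearn.preprocessing.Imputer` was deprecated in scikit-learn 0.20 and removed in 0.22. Assuming a current scikit-learn release, the same mean imputation is done with `sklearn.impute.SimpleImputer` (a minimal sketch on a toy array, not the book's `df`):

```python
import numpy as np
from sklearn.impute import SimpleImputer  # modern replacement for sklearn.preprocessing.Imputer

X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [5.0, 6.0]])

# strategy can also be 'median' or 'most_frequent', as noted above
imr = SimpleImputer(missing_values=np.nan, strategy='mean')
imputed_data = imr.fit_transform(X)  # NaN in column 0 becomes (1 + 5) / 2 = 3
```

The column-wise behavior of the old `axis=0` default is the only mode `SimpleImputer` supports, so the `axis` argument simply disappears.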
_doc/notebooks/sklearn/quantile_mlpregression.ipynb
###Markdown Quantile MLPRegressor[scikit-learn](http://scikit-learn.org/stable/) does not have a quantile regression for multi-layer perceptron. [mlinsights](http://www.xavierdupre.fr/app/mlinsights/helpsphinx/index.html) implements a version of it based on the *scikit-learn* model. The implementation overwrites method ``_backprop``. ###Code %matplotlib inline import warnings warnings.simplefilter("ignore") ###Output _____no_output_____ ###Markdown We generate some dummy data. ###Code import numpy X = numpy.random.random(1000) eps1 = (numpy.random.random(900) - 0.5) * 0.1 eps2 = (numpy.random.random(100)) * 10 eps = numpy.hstack([eps1, eps2]) X = X.reshape((1000, 1)) Y = X.ravel() * 3.4 + 5.6 + eps from sklearn.neural_network import MLPRegressor clr = MLPRegressor(hidden_layer_sizes=(30,), activation='tanh') clr.fit(X, Y) from mlinsights.mlmodel import QuantileMLPRegressor clq = QuantileMLPRegressor(hidden_layer_sizes=(30,), activation='tanh') clq.fit(X, Y) from pandas import DataFrame data= dict(X=X.ravel(), Y=Y, clr=clr.predict(X), clq=clq.predict(X)) df = DataFrame(data) df.head() import matplotlib.pyplot as plt fig, ax = plt.subplots(1, 1, figsize=(10, 4)) choice = numpy.random.choice(X.shape[0]-1, size=100) xx = X.ravel()[choice] yy = Y[choice] ax.plot(xx, yy, '.', label="data") xx = numpy.array([[0], [1]]) y1 = clr.predict(xx) y2 = clq.predict(xx) ax.plot(xx, y1, "--", label="L2") ax.plot(xx, y2, "--", label="L1") ax.set_title("Quantile (L1) vs Square (L2) for MLPRegressor") ax.legend(); ###Output _____no_output_____
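A quantile regressor is obtained by training against the pinball (quantile) loss instead of the squared error; for the median ($\tau = 0.5$) it reduces to half the absolute error, which is why the plot labels the two fits "L1" and "L2". The following is a minimal numpy sketch of that loss for intuition, not mlinsights' actual `_backprop` implementation:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau=0.5):
    # tau-quantile (pinball) loss: penalize under-prediction (d > 0) by tau*d
    # and over-prediction (d < 0) by (tau - 1)*d
    d = y_true - y_pred
    return np.mean(np.maximum(tau * d, (tau - 1.0) * d))
```

Because the loss is asymmetric for $\tau \neq 0.5$, minimizing it pulls the fit toward the $\tau$-quantile of the conditional distribution rather than its mean, which explains why the L1 fit above is far less sensitive to the large positive outliers in `eps2`.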
notebooks/MuscleSimulation.ipynb
###Markdown Muscle simulationMarcos Duarte, Renato Watanabe Let's simulate the 3-component Hill-type muscle model we described in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb) and illustrated below:Figure. A Hill-type muscle model with three components: two for the muscle, an active contractile element, $\mathsf{CE}$, and a passive elastic element in parallel, $\mathsf{PE}$, with the $\mathsf{CE}$, and one component for the tendon, an elastic element in series, $\mathsf{SE}$, with the muscle. $\mathsf{L_{MT}}$: muscle–tendon length, $\mathsf{L_T}$: tendon length, $\mathsf{L_M}$: muscle fiber length, $\mathsf{F_T}$: tendon force, $\mathsf{F_M}$: muscle force, and $α$: pennation angle.The following relationships are true for the model:$$ \begin{array}{l}L_{MT} = L_{T} + L_M\cos\alpha \\\\L_M = L_{CE} = L_{PE} \\\\\dot{L}_M = \dot{L}_{CE} = \dot{L}_{PE} \\\\F_{M} = F_{CE} + F_{PE} \end{array} $$If we assume that the muscle–tendon system is at equilibrium, that is, muscle, $F_{M}$, and tendon, $F_{T}$, forces are in equlibrium at all times, the following equation holds (and that a muscle can only pull):$$ F_{T} = F_{SE} = F_{M}\cos\alpha $$ Pennation angleThe pennation angle will vary during muscle activation; for instance, Kawakami et al. (1998) showed that the pennation angle of the medial gastrocnemius muscle can vary from 22$^o$ to 67$^o$ during activation. 
The most used approach is to assume that the muscle width (defined as the length of the perpendicular line between the lines of the muscle origin and insertion) remains constant (Scott & Winter, 1991):$$ w = L_{M,0} \sin\alpha_0 $$The pennation angle as a function of time will be given by:$$ \alpha = \sin^{-1} \left(\frac{w}{L_M}\right) $$The cosine of the pennation angle can be given by (if $L_M$ is known):$$ \cos \alpha = \frac{\sqrt{L_M^2-w^2}}{L_M} = \sqrt{1-\left(\frac{w}{L_M}\right)^2} $$or (if $L_M$ is not known):$$ \cos \alpha = \frac{L_{MT}-L_T}{L_M} = \frac{1}{\sqrt{1 + \left(\frac{w}{L_{MT}-L_T}\right)^2}} $$ Muscle forceIn general, the dependence of the force of the contractile element with its length and velocity and with the activation level are assumed independent of each other:$$ F_{CE}(a, L_{CE}, \dot{L}_{CE}) = a \: f_l(L_{CE}) \: f_v(\dot{L}_{CE}) \: F_{M0} $$where $f_l(L_M)$ and $f_v(\dot{L}_M)$ are mathematical functions describing the force-length and force-velocity relationships of the contractile element (typically these functions are normalized by $F_{M0}$, the maximum isometric (at zero velocity) muscle force, so we have to multiply the right side of the equation by $F_{M0}$). And for the muscle force:$$ F_{M}(a, L_M, \dot{L}_M) = \left[a \: f_l(L_M)f_v(\dot{L}_M) + F_{PE}(L_M)\right]F_{M0} $$This equation for the muscle force, with $a$, $L_{M}$, and $\dot{L}_{M}$ as state variables, can be used to simulate the dynamics of a muscle given an excitation and determine the muscle force and length. 
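The constant-width relation above maps directly to code. A minimal sketch (the default values for `lm0` and `alpha0` below are illustrative, not the values from this notebook's parameter file):

```python
import numpy as np

def pennation_angle(lm, lm0=0.09, alpha0=np.deg2rad(25)):
    """Pennation angle [rad] under the constant-width assumption w = lm0*sin(alpha0)."""
    w = lm0 * np.sin(alpha0)       # muscle width, fixed at its initial value
    return np.arcsin(w / lm)       # alpha = asin(w / L_M)
```

As the fiber shortens (smaller `lm`), the ratio `w/lm` grows and the pennation angle increases, consistent with the large activation-dependent changes reported by Kawakami et al. (1998).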
We can rearrange the equation, invert the expression for $f_v$, and integrate the resulting first-order ordinary differential equation (ODE) to obtain $L_M$:$$ \dot{L}_M = f_v^{-1}\left(\frac{F_{SE}(L_{MT}-L_M\cos\alpha)/\cos\alpha - F_{PE}(L_M)}{a f_l(L_M)}\right) $$This approach is the most commonly employed in the literature (see for example, [OpenSim](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Muscle+Model+Theory+and+Publications); McLean, Su, van den Bogert, 2003; Thelen, 2003; Nigg and Herzog, 2007). Although the equation for the muscle force doesn't have numerical singularities, the differential equation for muscle velocity has four ([OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)): When $a \rightarrow 0$; when $f_l(L_M) \rightarrow 0$; when $\alpha \rightarrow \pi/2$; and when $\partial f_v/\partial v \rightarrow 0 $. The following solutions can be employed to avoid the numerical singularities ([OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)): Adopt a minimum value for $a$; e.g., $a_{min}=0.01$; adopt a minimum value for $f_l(L_M)$; e.g., $f_l(0.1)$; adopt a maximum value for pennation angle; e.g., constrain $\alpha$ to $\cos\alpha > 0.1 \; (\alpha < 84.26^o)$; and make the slope of $f_V$ at and beyond maximum velocity different from zero (for both concentric and eccentric activations). We will adopt these solutions to avoid singularities in the simulation of muscle mechanics. A problem of imposing values on variables as described above is that we can make the ordinary differential equation numerically stiff, which will increase the computational cost of the numerical integration. A better solution would be to modify the model to not have these singularities (see [OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)).
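The clamping strategy described above can be sketched as a small helper; the numeric bounds are the ones quoted in the text, and the CE-length floor of $0.1\,L_{Mopt}$ matches the guard used in `vm_eq` later on ($L_{Mopt} = 0.093$ m is taken from the parameter file shown below):

```python
def clamp_states(a, lm, cos_alpha, lmopt=0.093, a_min=0.01, cos_min=0.1):
    """Keep activation, CE length and pennation cosine away from the ODE singularities."""
    a = max(a, a_min)                    # avoid a -> 0
    lm = max(lm, 0.1 * lmopt)            # avoid f_l(L_M) -> 0
    cos_alpha = max(cos_alpha, cos_min)  # avoid alpha -> pi/2 (i.e., alpha < ~84.26 deg)
    return a, lm, cos_alpha
```

These hard floors are what can make the ODE stiff: near a bound the right-hand side stops varying smoothly, forcing the integrator to take very small steps.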
SimulationLet's simulate muscle dynamics using the Thelen2003Muscle model we defined in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb). For the simulation of the Thelen2003Muscle, we simply have to integrate the equation:$$ V_M = (0.25+0.75a)\,V_{Mmax}\frac{\bar{F}_M-a\bar{f}_{l,CE}}{b} $$ where$$ b = \left\{ \begin{array}{l l l} a\bar{f}_{l,CE} + \bar{F}_M/A_f \quad & \text{if} \quad \bar{F}_M \leq a\bar{f}_{l,CE} & \text{(shortening)} \\ \\ \frac{(2+2/A_f)(a\bar{f}_{l,CE}\bar{f}_{CEmax} - \bar{F}_M)}{\bar{f}_{CEmax}-1} \quad & \text{if} \quad \bar{F}_M > a\bar{f}_{l,CE} & \text{(lengthening)} \end{array} \right.$$ The equation above already contains the terms for activation, $a$, and force-length dependence, $\bar{f}_{l,CE}$. The equation is too complicated to solve analytically, so we will solve it by numerical integration using the [`scipy.integrate.ode`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.ode.html) class of numeric integrators, particularly the `dopri5`, an explicit Runge-Kutta method of order (4)5 due to Dormand and Prince (a.k.a. ode45 in Matlab). We could run a simulation using [OpenSim](https://simtk.org/home/opensim); it would be faster, but for fun, let's program in Python. All the necessary functions for the Thelen2003Muscle model described in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb) were grouped in one file (module), `muscles.py`.
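The piecewise expression for $b$ translates almost line-for-line into code. A sketch using the values from this notebook's parameter file ($A_f = 0.25$, $\bar{f}_{CEmax} = 1.4$ for `fmlen`, $\bar{V}_{Mmax} = 10$ optimal lengths/s, $L_{Mopt} = 0.093$ m); the full `muscles.py` module additionally caps $b$ near the eccentric asymptote, which this sketch omits:

```python
def velo_fm(fm, a, fl, af=0.25, fmlen=1.4, vmmax=10.0, lmopt=0.093):
    """Thelen (2003) inverted force-velocity relation: normalized CE force -> muscle velocity [m/s]."""
    if fm <= a * fl:  # isometric or shortening
        b = a * fl + fm / af
    else:             # lengthening
        b = (2 + 2/af) * (a * fl * fmlen - fm) / (fmlen - 1)
    # vmmax is normalized (optimal lengths per second), so scale by lmopt to get m/s
    return (0.25 + 0.75*a) * vmmax * lmopt * (fm - a*fl) / b
```

The sign convention follows the model: negative velocity is shortening (concentric), zero is isometric, positive is lengthening (eccentric).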
Besides these functions, the module `muscles.py` contains a function for the muscle velocity, `vm_eq`, which will be called by the function that specifies the numerical integration, `lm_sol`; a standard way of performing numerical integration in scientific computing:```python def vm_eq(self, t, lm, lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0): """Equation for muscle velocity.""" if lm < 0.1*lmopt: lm = 0.1*lmopt a = self.activation(t) lmt = self.lmt_eq(t, lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fse = self.force_se(lt=lt, ltslack=ltslack) fpe = self.force_pe(lm=lm/lmopt) fl = self.force_l(lm=lm/lmopt) fce_t = fse/np.cos(alpha) - fpe vm = self.velo_fm(fm=fce_t, a=a, fl=fl) return vmdef lm_sol(self, fun, t0, t1, lm0, lmt0, ltslack, lmopt, alpha0, vmmax, fm0, show, axs): """Runge-Kutta (4)5 ODE solver for muscle length.""" if fun is None: fun = self.vm_eq f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(lm0, t0).set_f_params(lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0) suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) d = self.calc_data(f.t, f.y, lm0, lmt0, ltslack, lmopt, alpha0, fm0) data.append(d) warnings.resetwarnings() data = np.array(data) self.lm_data = data if show: self.lm_plot(data, axs) return data````muscles.py` also contains some auxiliary functions for entering data and for plotting the results. 
Let's import the necessary Python libraries and customize the environment in order to run some simulations using `muscles.py`: ###Code import numpy as np import matplotlib.pyplot as plt %matplotlib inline #%matplotlib nbagg import matplotlib matplotlib.rcParams['lines.linewidth'] = 3 matplotlib.rcParams['font.size'] = 13 matplotlib.rcParams['lines.markersize'] = 5 matplotlib.rc('axes', grid=False, labelsize=14, titlesize=16, ymargin=0.05) matplotlib.rc('legend', numpoints=1, fontsize=11) # import the muscles.py module import sys sys.path.insert(1, r'./../functions') import muscles ###Output _____no_output_____ ###Markdown The `muscles.py` module contains the class `Thelen2003()` which has the functions we want to use. For such, we need to create an instance of this class: ###Code ms = muscles.Thelen2003() ###Output _____no_output_____ ###Markdown Now, we need to enter the parameters and states for the simulation: we can load files with these values or enter as input parameters when calling the function (method) '`set_parameters()`' and '`set_states()`'. If nothing if inputed, these methods assume that the parameters and states are stored in the files '`muscle_parameter.txt`' and '`muscle_state.txt`' inside the directory '`./../data/`'. Let's use some of the parameters and states from an exercise of the chapter 4 of Nigg and Herzog (2006). ###Code ms.set_parameters() ms.set_states() ###Output The parameters were successfully loaded and are stored in the variable P. The states were successfully loaded and are stored in the variable S. 
###Markdown We can see the parameters and states: ###Code print('Parameters:\n', ms.P) print('States:\n', ms.S) ###Output Parameters: {'id': '', 'name': '', 'u_max': 1.0, 'u_min': 0.01, 't_act': 0.015, 't_deact': 0.05, 'lmopt': 0.093, 'alpha0': 0.0, 'fm0': 7400.0, 'gammal': 0.45, 'kpe': 5.0, 'epsm0': 0.6, 'vmmax': 10.0, 'fmlen': 1.4, 'af': 0.25, 'ltslack': 0.223, 'epst0': 0.04, 'kttoe': 3.0} States: {'id': '', 'name': '', 'lmt0': 0.31, 'lm0': 0.087, 'lt0': 0.223} ###Markdown We can plot the muscle-tendon forces considering these parameters and initial states: ###Code ms.muscle_plot(); ###Output _____no_output_____ ###Markdown Let's simulate an isometric activation (and since we didn't enter an activation level, $a=1$ will be used): ###Code def lmt_eq(t, lmt0): # isometric activation lmt = lmt0 return lmt ms.lmt_eq = lmt_eq data = ms.lm_sol() ###Output _____no_output_____ ###Markdown We can input a prescribed muscle-tendon length for the simulation: ###Code def lmt_eq(t, lmt0): # prescribed change in the muscle-tendon length if t < 1: lmt = lmt0 if 1 <= t < 2: lmt = lmt0 - 0.04*(t - 1) if t >= 2: lmt = lmt0 - 0.04 return lmt ms.lmt_eq = lmt_eq data = ms.lm_sol() ###Output _____no_output_____ ###Markdown Let's simulate a pennated muscle with an angle of $30^o$. 
We don't need to enter all parameters again, we can change only the parameter `alpha0`: ###Code ms.P['alpha0'] = 30*np.pi/180 print('New initial pennation angle:', ms.P['alpha0']) ###Output New initial pennation angle: 0.5235987755982988 ###Markdown Because the muscle length is now shortened by $\cos(30^o)$, we will also have to change the initial muscle-tendon length if we want to start with the tendon at its slack length: ###Code ms.S['lmt0'] = ms.S['lmt0'] - ms.S['lm0'] + ms.S['lm0']*np.cos(ms.P['alpha0']) print('New initial muscle-tendon length:', ms.S['lmt0']) data = ms.lm_sol() ###Output _____no_output_____ ###Markdown Here is a plot of the simulated pennation angle: ###Code plt.plot(data[:, 0], data[:, 9]*180/np.pi) plt.xlabel('Time (s)') plt.ylabel('Pennation angle $(^o)$') plt.show() ###Output _____no_output_____ ###Markdown Change back to the old values: ###Code ms.P['alpha0'] = 0 ms.S['lmt0'] = 0.313 ###Output _____no_output_____ ###Markdown We can change the initial states to show the role of the passive parallel element: ###Code ms.S = {'id': '', 'lt0': np.nan, 'lmt0': 0.323, 'lm0': 0.10, 'name': ''} ms.muscle_plot() ###Output _____no_output_____ ###Markdown Let's also change the excitation signal: ###Code def excitation(t, u_max=1, u_min=0.01, t0=1, t1=2): """Excitation signal, a hat signal.""" u = u_min if t >= t0 and t <= t1: u = u_max return u ms.excitation = excitation act = ms.activation_sol() ###Output _____no_output_____ ###Markdown And let's simulate an isometric contraction: ###Code def lmt_eq(t, lmt0): # isometric activation lmt = lmt0 return lmt ms.lmt_eq = lmt_eq data = ms.lm_sol() ###Output _____no_output_____ ###Markdown Let's use as excitation a train of pulses: ###Code def excitation(t, u_max=.5, u_min=0.01, t0=.2, t1=2): """Excitation signal, a train of square pulses.""" u = u_min ts = np.arange(1, 2.0, .1) #ts = np.delete(ts, np.arange(2, ts.size, 3)) if t >= ts[0] and t <= ts[1]: u = u_max elif t >= ts[2] and t <= ts[3]: u = u_max 
elif t >= ts[4] and t <= ts[5]: u = u_max elif t >= ts[6] and t <= ts[7]: u = u_max elif t >= ts[8] and t <= ts[9]: u = u_max return u ms.excitation = excitation act = ms.activation_sol() data = ms.lm_sol() ###Output _____no_output_____ ###Markdown References- Kawakami Y, Ichinose Y, Fukunaga T (1998) [Architectural and functional features of human triceps surae muscles during contraction](http://www.ncbi.nlm.nih.gov/pubmed/9688711). Journal of Applied Physiology, 85, 398–404. - McLean SG, Su A, van den Bogert AJ (2003) [Development and validation of a 3-D model to predict knee joint loading during dynamic movement](http://www.ncbi.nlm.nih.gov/pubmed/14986412). Journal of Biomechanical Engineering, 125, 864-74. - Nigg BM and Herzog W (2006) [Biomechanics of the Musculo-skeletal System](https://books.google.com.br/books?id=hOIeAQAAIAAJ&dq=editions:ISBN0470017678). 3rd Edition. Wiley. - Scott SH, Winter DA (1991) [A comparison of three muscle pennation assumptions and their effect on isometric and isotonic force](http://www.ncbi.nlm.nih.gov/pubmed/2037616). Journal of Biomechanics, 24, 163–167. - Thelen DG (2003) [Adjustment of muscle mechanics model parameters to simulate dynamic contractions in older adults](http://homepages.cae.wisc.edu/~thelen/pubs/jbme03.pdf). Journal of Biomechanical Engineering, 125(1):70–77. Module muscles.py ###Code # %load ./../functions/muscles.py """Muscle modeling and simulation.""" from __future__ import division, print_function import numpy as np from scipy.integrate import ode import warnings import configparser __author__ = 'Marcos Duarte, https://github.com/demotu/BMC' __version__ = 'muscles.py v.1 2015/03/01' class Thelen2003(): """ Thelen (2003) muscle model. 
""" def __init__(self, parameters=None, states=None): if parameters is not None: self.set_parameters(parameters) if states is not None: self.set_states(states) self.lm_data = [] self.act_data = [] def set_parameters(self, var=None): """Load and set parameters for the muscle model. """ if var is None: var = './../data/muscle_parameter.txt' if isinstance(var, str): self.P = self.config_parser(var, 'parameters') elif isinstance(var, dict): self.P = var else: raise ValueError('Wrong parameters!') print('The parameters were successfully loaded ' + 'and are stored in the variable P.') def set_states(self, var=None): """Load and set states for the muscle model. """ if var is None: var = './../data/muscle_state.txt' if isinstance(var, str): self.S = self.config_parser(var, 'states') elif isinstance(var, dict): self.S = var else: raise ValueError('Wrong states!') print('The states were successfully loaded ' + 'and are stored in the variable S.') def config_parser(self, filename, var): parser = configparser.ConfigParser() parser.optionxform = str # make option names case sensitive parser.read(filename) if not parser: raise ValueError('File %s not found!' %var) #if not 'Muscle' in parser.sections()[0]: # raise ValueError('Wrong %s file!' %var) var = {} for key, value in parser.items(parser.sections()[0]): if key.lower() in ['name', 'id']: var.update({key: value}) else: try: value = float(value) except ValueError: print('"%s" value "%s" was replaced by NaN.' %(key, value)) value = np.nan var.update({key: value}) return var def force_l(self, lm, gammal=None): """Thelen (2003) force of the contractile element vs. muscle length. 
Parameters ---------- lm : float normalized muscle fiber length gammal : float, optional (default from parameter file) shape factor Returns ------- fl : float normalized force of the muscle contractile element """ if gammal is None: gammal = self.P['gammal'] fl = np.exp(-(lm-1)**2/gammal) return fl def force_pe(self, lm, kpe=None, epsm0=None): """Thelen (2003) force of the muscle parallel element vs. muscle length. Parameters ---------- lm : float normalized muscle fiber length kpe : float, optional (default from parameter file) exponential shape factor epsm0 : float, optional (default from parameter file) passive muscle strain due to maximum isometric force Returns ------- fpe : float normalized force of the muscle parallel (passive) element """ if kpe is None: kpe = self.P['kpe'] if epsm0 is None: epsm0 = self.P['epsm0'] if lm <= 1: fpe = 0 else: fpe = (np.exp(kpe*(lm-1)/epsm0)-1)/(np.exp(kpe)-1) return fpe def force_se(self, lt, ltslack=None, epst0=None, kttoe=None): """Thelen (2003) force-length relationship of tendon vs. tendon length. 
Parameters ---------- lt : float tendon length (normalized or not) ltslack : float, optional (default from parameter file) tendon slack length (normalized or not) epst0 : float, optional (default from parameter file) tendon strain at the maximal isometric muscle force kttoe : float, optional (default from parameter file) linear scale factor Returns ------- fse : float normalized force of the tendon series element """ if ltslack is None: ltslack = self.P['ltslack'] if epst0 is None: epst0 = self.P['epst0'] if kttoe is None: kttoe = self.P['kttoe'] epst = (lt-ltslack)/ltslack fttoe = .33 # values from OpenSim Thelen2003Muscle epsttoe = .99*epst0*np.e**3/(1.66*np.e**3 - .67) ktlin = .67/(epst0 - epsttoe) # if epst <= 0: fse = 0 elif epst <= epsttoe: fse = fttoe/(np.exp(kttoe)-1)*(np.exp(kttoe*epst/epsttoe)-1) else: fse = ktlin*(epst-epsttoe) + fttoe return fse def velo_fm(self, fm, a, fl, lmopt=None, vmmax=None, fmlen=None, af=None): """Thelen (2003) velocity of the force-velocity relationship vs. CE force. 
Parameters ---------- fm : float normalized muscle force a : float muscle activation level fl : float normalized muscle force due to the force-length relationship lmopt : float, optional (default from parameter file) optimal muscle fiber length vmmax : float, optional (default from parameter file) normalized maximum muscle velocity for concentric activation fmlen : float, optional (default from parameter file) normalized maximum force generated at the lengthening phase af : float, optional (default from parameter file) shape factor Returns ------- vm : float velocity of the muscle """ if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fmlen is None: fmlen = self.P['fmlen'] if af is None: af = self.P['af'] if fm <= a*fl: # isometric and concentric activation if fm > 0: b = a*fl + fm/af else: b = a*fl else: # eccentric activation asyE_thresh = 0.95 # from OpenSim Thelen2003Muscle if fm < a*fl*fmlen*asyE_thresh: b = (2 + 2/af)*(a*fl*fmlen - fm)/(fmlen - 1) else: fm0 = a*fl*fmlen*asyE_thresh b = (2 + 2/af)*(a*fl*fmlen - fm0)/(fmlen - 1) vm = (0.25 + 0.75*a)*1*(fm - a*fl)/b vm = vm*vmmax*lmopt return vm def force_vm(self, vm, a, fl, lmopt=None, vmmax=None, fmlen=None, af=None): """Thelen (2003) force of the contractile element vs. muscle velocity. 
Parameters ---------- vm : float muscle velocity a : float muscle activation level fl : float normalized muscle force due to the force-length relationship lmopt : float, optional (default from parameter file) optimal muscle fiber length vmmax : float, optional (default from parameter file) normalized maximum muscle velocity for concentric activation fmlen : float, optional (default from parameter file) normalized normalized maximum force generated at the lengthening phase af : float, optional (default from parameter file) shape factor Returns ------- fvm : float normalized force of the muscle contractile element """ if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fmlen is None: fmlen = self.P['fmlen'] if af is None: af = self.P['af'] vmmax = vmmax*lmopt if vm <= 0: # isometric and concentric activation fvm = af*a*fl*(4*vm + vmmax*(3*a + 1))/(-4*vm + vmmax*af*(3*a + 1)) else: # eccentric activation fvm = a*fl*(af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + \ 8*vm*fmlen*(af + 1)) / \ (af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + 8*vm*(af + 1)) return fvm def lmt_eq(self, t, lmt0=None): """Equation for muscle-tendon length.""" if lmt0 is None: lmt0 = self.S['lmt0'] return lmt0 def vm_eq(self, t, lm, lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0): """Equation for muscle velocity.""" if lm < 0.1*lmopt: lm = 0.1*lmopt #lt0 = lmt0 - lm0*np.cos(alpha0) a = self.activation(t) lmt = self.lmt_eq(t, lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fse = self.force_se(lt=lt, ltslack=ltslack) fpe = self.force_pe(lm=lm/lmopt) fl = self.force_l(lm=lm/lmopt) fce_t = fse/np.cos(alpha) - fpe #if fce_t < 0: fce_t=0 vm = self.velo_fm(fm=fce_t, a=a, fl=fl) return vm def lm_sol(self, fun=None, t0=0, t1=3, lm0=None, lmt0=None, ltslack=None, lmopt=None, alpha0=None, vmmax=None, fm0=None, show=True, axs=None): """Runge-Kutta (4)5 ODE solver for muscle length.""" if lm0 is None: lm0 = self.S['lm0'] if lmt0 is None: 
lmt0 = self.S['lmt0'] if ltslack is None: ltslack = self.P['ltslack'] if alpha0 is None: alpha0 = self.P['alpha0'] if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fm0 is None: fm0 = self.P['fm0'] if fun is None: fun = self.vm_eq f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(lm0, t0).set_f_params(lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0) # suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) d = self.calc_data(f.t, np.max([f.y, 0.1*lmopt]), lm0, lmt0, ltslack, lmopt, alpha0, fm0) data.append(d) warnings.resetwarnings() data = np.array(data) self.lm_data = data if show: self.lm_plot(data, axs) return data def calc_data(self, t, lm, lm0, lmt0, ltslack, lmopt, alpha0, fm0): """Calculus of muscle-tendon variables.""" a = self.activation(t) lmt = self.lmt_eq(t, lmt0=lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fl = self.force_l(lm=lm/lmopt) fpe = self.force_pe(lm=lm/lmopt) fse = self.force_se(lt=lt, ltslack=ltslack) fce_t = fse/np.cos(alpha) - fpe vm = self.velo_fm(fm=fce_t, a=a, fl=fl, lmopt=lmopt) fm = self.force_vm(vm=vm, fl=fl, lmopt=lmopt, a=a) + fpe data = [t, lmt, lm, lt, vm, fm*fm0, fse*fm0, a*fl*fm0, fpe*fm0, alpha] return data def muscle_plot(self, a=1, axs=None): """Plot muscle-tendon relationships with length and velocity.""" try: import matplotlib.pyplot as plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=1, ncols=3, figsize=(9, 4)) lmopt = self.P['lmopt'] ltslack = self.P['ltslack'] vmmax = self.P['vmmax'] alpha0 = self.P['alpha0'] fm0 = self.P['fm0'] lm0 = self.S['lm0'] lmt0 = self.S['lmt0'] lt0 = self.S['lt0'] if np.isnan(lt0): lt0 = lmt0 - lm0*np.cos(alpha0) lm = np.linspace(0, 2, 101) lt = np.linspace(0, 1, 101)*0.05 + 1 vm = np.linspace(-1, 1, 101)*vmmax*lmopt fl = 
np.zeros(lm.size) fpe = np.zeros(lm.size) fse = np.zeros(lt.size) fvm = np.zeros(vm.size) fl_lm0 = self.force_l(lm0/lmopt) fpe_lm0 = self.force_pe(lm0/lmopt) fm_lm0 = fl_lm0 + fpe_lm0 ft_lt0 = self.force_se(lt0, ltslack)*fm0 for i in range(101): fl[i] = self.force_l(lm[i]) fpe[i] = self.force_pe(lm[i]) fse[i] = self.force_se(lt[i], ltslack=1) fvm[i] = self.force_vm(vm[i], a=a, fl=fl_lm0) lm = lm*lmopt lt = lt*ltslack fl = fl fpe = fpe fse = fse*fm0 fvm = fvm*fm0 xlim = self.margins(lm, margin=.05, minmargin=False) axs[0].set_xlim(xlim) ylim = self.margins([0, 2], margin=.05) axs[0].set_ylim(ylim) axs[0].plot(lm, fl, 'b', label='Active') axs[0].plot(lm, fpe, 'b--', label='Passive') axs[0].plot(lm, fl+fpe, 'b:', label='') axs[0].plot([lm0, lm0], [ylim[0], fm_lm0], 'k:', lw=2, label='') axs[0].plot([xlim[0], lm0], [fm_lm0, fm_lm0], 'k:', lw=2, label='') axs[0].plot(lm0, fm_lm0, 'o', ms=6, mfc='r', mec='r', mew=2, label='fl(LM0)') axs[0].legend(loc='best', frameon=True, framealpha=.5) axs[0].set_xlabel('Length [m]') axs[0].set_ylabel('Scale factor') axs[0].xaxis.set_major_locator(plt.MaxNLocator(4)) axs[0].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[0].set_title('Muscle F-L (a=1)') xlim = self.margins([0, np.min(vm), np.max(vm)], margin=.05, minmargin=False) axs[1].set_xlim(xlim) ylim = self.margins([0, fm0*1.2, np.max(fvm)*1.5], margin=.025) axs[1].set_ylim(ylim) axs[1].plot(vm, fvm, label='') axs[1].set_xlabel('$\mathbf{^{CON}}\;$ Velocity [m/s] $\;\mathbf{^{EXC}}$') axs[1].plot([0, 0], [ylim[0], fvm[50]], 'k:', lw=2, label='') axs[1].plot([xlim[0], 0], [fvm[50], fvm[50]], 'k:', lw=2, label='') axs[1].plot(0, fvm[50], 'o', ms=6, mfc='r', mec='r', mew=2, label='FM0(LM0)') axs[1].plot(xlim[0], fm0, '+', ms=10, mfc='r', mec='r', mew=2, label='') axs[1].text(vm[0], fm0, 'FM0') axs[1].legend(loc='upper right', frameon=True, framealpha=.5) axs[1].set_ylabel('Force [N]') axs[1].xaxis.set_major_locator(plt.MaxNLocator(4)) 
axs[1].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[1].set_title('Muscle F-V (a=1)') xlim = self.margins([lt0, ltslack, np.min(lt), np.max(lt)], margin=.05, minmargin=False) axs[2].set_xlim(xlim) ylim = self.margins([ft_lt0, 0, np.max(fse)], margin=.05) axs[2].set_ylim(ylim) axs[2].plot(lt, fse, label='') axs[2].set_xlabel('Length [m]') axs[2].plot([lt0, lt0], [ylim[0], ft_lt0], 'k:', lw=2, label='') axs[2].plot([xlim[0], lt0], [ft_lt0, ft_lt0], 'k:', lw=2, label='') axs[2].plot(lt0, ft_lt0, 'o', ms=6, mfc='r', mec='r', mew=2, label='FT(LT0)') axs[2].legend(loc='upper left', frameon=True, framealpha=.5) axs[2].set_ylabel('Force [N]') axs[2].xaxis.set_major_locator(plt.MaxNLocator(4)) axs[2].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[2].set_title('Tendon') plt.suptitle('Muscle-tendon mechanics', fontsize=18, y=1.03) plt.tight_layout(w_pad=.1) plt.show() def lm_plot(self, x, axs): """Plot results of actdyn_ode45 function. data = [t, lmt, lm, lt, vm, fm*fm0, fse*fm0, fl*fm0, fpe*fm0, alpha] """ try: import matplotlib.pyplot as plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=3, ncols=2, sharex=True, figsize=(10, 6)) axs[0, 0].plot(x[:, 0], x[:, 1], 'b', label='LMT') lmt = x[:, 2]*np.cos(x[:, 9]) + x[:, 3] if np.sum(x[:, 9]) > 0: axs[0, 0].plot(x[:, 0], lmt, 'g--', label=r'$LM \cos \alpha + LT$') else: axs[0, 0].plot(x[:, 0], lmt, 'g--', label=r'LM+LT') ylim = self.margins(x[:, 1], margin=.1) axs[0, 0].set_ylim(ylim) axs[0, 0].legend(framealpha=.5, loc='best') axs[0, 1].plot(x[:, 0], x[:, 3], 'b') #axs[0, 1].plot(x[:, 0], lt0*np.ones(len(x)), 'r') ylim = self.margins(x[:, 3], margin=.1) axs[0, 1].set_ylim(ylim) axs[1, 0].plot(x[:, 0], x[:, 2], 'b') #axs[1, 0].plot(x[:, 0], lmopt*np.ones(len(x)), 'r') ylim = self.margins(x[:, 2], margin=.1) axs[1, 0].set_ylim(ylim) axs[1, 1].plot(x[:, 0], x[:, 4], 'b') ylim = self.margins(x[:, 4], margin=.1) axs[1, 1].set_ylim(ylim) axs[2, 0].plot(x[:, 0], x[:, 5], 
'b', label='Muscle') axs[2, 0].plot(x[:, 0], x[:, 6], 'g--', label='Tendon') ylim = self.margins(x[:, [5, 6]], margin=.1) axs[2, 0].set_ylim(ylim) axs[2, 0].set_xlabel('Time (s)') axs[2, 0].legend(framealpha=.5, loc='best') axs[2, 1].plot(x[:, 0], x[:, 8], 'b', label='PE') ylim = self.margins(x[:, 8], margin=.1) axs[2, 1].set_ylim(ylim) axs[2, 1].set_xlabel('Time (s)') axs[2, 1].legend(framealpha=.5, loc='best') axs = axs.flatten() ylabel = ['$L_{MT}\,(m)$', '$L_{T}\,(m)$', '$L_{M}\,(m)$', '$V_{CE}\,(m/s)$', '$Force\,(N)$', '$Force\,(N)$'] for i, axi in enumerate(axs): axi.set_ylabel(ylabel[i], fontsize=14) axi.yaxis.set_major_locator(plt.MaxNLocator(4)) axi.yaxis.set_label_coords(-.2, 0.5) plt.suptitle('Simulation of muscle-tendon mechanics', fontsize=18, y=1.03) plt.tight_layout() plt.show() def penn_ang(self, lmt, lm, lt=None, lm0=None, alpha0=None): """Pennation angle. Parameters ---------- lmt : float muscle-tendon length lt : float, optional (default=None) tendon length lm : float, optional (default=None) muscle fiber length lm0 : float, optional (default from states file) initial muscle fiber length alpha0 : float, optional (default from parameter file) initial pennation angle Returns ------- alpha : float pennation angle """ if lm0 is None: lm0 = self.S['lm0'] if alpha0 is None: alpha0 = self.P['alpha0'] alpha = alpha0 if alpha0 != 0: w = lm0*np.sin(alpha0) if lm is not None: cosalpha = np.sqrt(1-(w/lm)**2) elif lmt is not None and lt is not None: cosalpha = 1/(np.sqrt(1 + (w/(lmt-lt))**2)) alpha = np.arccos(cosalpha) if alpha > 1.4706289: # np.arccos(0.1), 84.2608 degrees alpha = 1.4706289 return alpha def excitation(self, t, u_max=None, u_min=None, t0=0, t1=5): """Excitation signal, a square wave. 
Parameters ---------- t : float time instant [s] u_max : float (0 < u_max <= 1), optional (default from parameter file) maximum value for muscle excitation u_min : float (0 < u_min < 1), optional (default from parameter file) minimum value for muscle excitation t0 : float, optional (default=0) initial time instant for muscle excitation equals to u_max [s] t1 : float, optional (default=5) final time instant for muscle excitation equals to u_max [s] Returns ------- u : float (0 < u <= 1) excitation signal """ if u_max is None: u_max = self.P['u_max'] if u_min is None: u_min = self.P['u_min'] u = u_min if t >= t0 and t <= t1: u = u_max return u def activation_dyn(self, t, a, t_act=None, t_deact=None): """Thelen (2003) activation dynamics, the derivative of `a` at `t`. Parameters ---------- t : float time instant [s] a : float (0 <= a <= 1) muscle activation t_act : float, optional (default from parameter file) activation time constant [s] t_deact : float, optional (default from parameter file) deactivation time constant [s] Returns ------- adot : float derivative of `a` at `t` """ if t_act is None: t_act = self.P['t_act'] if t_deact is None: t_deact = self.P['t_deact'] u = self.excitation(t) if u > a: adot = (u - a)/(t_act*(0.5 + 1.5*a)) else: adot = (u - a)/(t_deact/(0.5 + 1.5*a)) return adot def activation_sol(self, fun=None, t0=0, t1=3, a0=0, u_min=None, t_act=None, t_deact=None, show=True, axs=None): """Runge-Kutta (4)5 ODE solver for activation dynamics. 
Parameters ---------- fun : function object, optional (default is None and `actdyn` is used) function with ODE to be solved t0 : float, optional (default=0) initial time instant for the simulation [s] t1 : float, optional (default=0) final time instant for the simulation [s] a0 : float, optional (default=0) initial muscle activation u_max : float (0 < u_max <= 1), optional (default from parameter file) maximum value for muscle excitation u_min : float (0 < u_min < 1), optional (default from parameter file) minimum value for muscle excitation t_act : float, optional (default from parameter file) activation time constant [s] t_deact : float, optional (default from parameter file) deactivation time constant [s] show : bool, optional (default = True) if True (1), plot data in matplotlib figure axs : a matplotlib.axes.Axes instance, optional (default = None) Returns ------- data : 2-d array array with columns [time, excitation, activation] """ if u_min is None: u_min = self.P['u_min'] if t_act is None: t_act = self.P['t_act'] if t_deact is None: t_deact = self.P['t_deact'] if fun is None: fun = self.activation_dyn f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(a0, t0).set_f_params(t_act, t_deact) # suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) data.append([f.t, self.excitation(f.t), np.max([f.y, u_min])]) warnings.resetwarnings() data = np.array(data) if show: self.actvation_plot(data, axs) self.act_data = data return data def activation(self, t=None): """Activation signal.""" data = self.act_data if t is not None and len(data): if t <= self.act_data[0, 0]: a = self.act_data[0, 2] elif t >= self.act_data[-1, 0]: a = self.act_data[-1, 2] else: a = np.interp(t, self.act_data[:, 0], self.act_data[:, 2]) else: a = 1 return a def actvation_plot(self, data, axs): """Plot results of actdyn_ode45 function.""" try: import matplotlib.pyplot as 
plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=1, ncols=1, figsize=(6, 4)) axs.plot(data[:, 0], data[:, 1], color=[1, 0, 0, .6], label='Excitation') axs.plot(data[:, 0], data[:, 2], color=[0, 0, 1, .6], label='Activation') axs.set_xlabel('Time [s]') axs.set_ylabel('Level') axs.legend() plt.title('Activation dynamics') plt.tight_layout() plt.show() def margins(self, x, margin=0.01, minmargin=True): """Calculate plot limits with extra margins. """ rang = np.nanmax(x) - np.nanmin(x) if rang < 0.001 and minmargin: rang = 0.001*np.nanmean(x)/margin if rang < 1: rang = 1 lim = [np.nanmin(x) - rang*margin, np.nanmax(x) + rang*margin] return lim ###Output _____no_output_____ ###Markdown Muscle simulationMarcos Duarte Let's simulate the 3-component Hill-type muscle model we described in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb) and illustrated below:Figure. A Hill-type muscle model with three components: two for the muscle, an active contractile element, $\mathsf{CE}$, and a passive elastic element in parallel, $\mathsf{PE}$, with the $\mathsf{CE}$, and one component for the tendon, an elastic element in series, $\mathsf{SE}$, with the muscle. 
$\mathsf{L_{MT}}$: muscle–tendon length, $\mathsf{L_T}$: tendon length, $\mathsf{L_M}$: muscle fiber length, $\mathsf{F_T}$: tendon force, $\mathsf{F_M}$: muscle force, and $α$: pennation angle.The following relationships are true for the model:$$ \begin{array}{l}L_{MT} = L_{T} + L_M\cos\alpha \\\\L_M = L_{CE} = L_{PE} \\\\\dot{L}_M = \dot{L}_{CE} = \dot{L}_{PE} \\\\F_{M} = F_{CE} + F_{PE} \end{array} $$If we assume that the muscle–tendon system is at equilibrium, that is, that the muscle force, $F_{M}$, and the tendon force, $F_{T}$, balance at all times, and that a muscle can only pull, the following equation holds:$$ F_{T} = F_{SE} = F_{M}\cos\alpha $$ Pennation angleThe pennation angle varies during muscle activation; for instance, Kawakami et al. (1998) showed that the pennation angle of the medial gastrocnemius muscle can vary from 22$^o$ to 67$^o$ during activation. The most common approach is to assume that the muscle width (defined as the length of the perpendicular line between the lines of the muscle origin and insertion) remains constant (Scott & Winter, 1991):$$ w = L_{M,0} \sin\alpha_0 $$The pennation angle as a function of time is then given by:$$ \alpha = \sin^{-1} \left(\frac{w}{L_M}\right) $$The cosine of the pennation angle can be given by (if $L_M$ is known):$$ \cos \alpha = \frac{\sqrt{L_M^2-w^2}}{L_M} = \sqrt{1-\left(\frac{w}{L_M}\right)^2} $$or (if $L_M$ is not known):$$ \cos \alpha = \frac{L_{MT}-L_T}{L_M} = \frac{1}{\sqrt{1 + \left(\frac{w}{L_{MT}-L_T}\right)^2}} $$ Muscle forceIn general, the dependencies of the contractile element force on its length, its velocity, and the activation level are assumed independent of each other:$$ F_{CE}(a, L_{CE}, \dot{L}_{CE}) = a \: f_l(L_{CE}) \: f_v(\dot{L}_{CE}) \: F_{M0} $$where $f_l(L_M)$ and $f_v(\dot{L}_M)$ are mathematical functions describing the force-length and force-velocity relationships of the contractile element (typically these functions are normalized by $F_{M0}$, the maximum
isometric (at zero velocity) muscle force, so we have to multiply the right side of the equation by $F_{M0}$). And for the muscle force:$$ F_{M}(a, L_M, \dot{L}_M) = \left[a \: f_l(L_M)f_v(\dot{L}_M) + F_{PE}(L_M)\right]F_{M0} $$This equation for the muscle force, with $a$, $L_{M}$, and $\dot{L}_{M}$ as state variables, can be used to simulate the dynamics of a muscle given an excitation and determine the muscle force and length. We can rearrange the equation, invert the expression for $f_v$, and integrate the resulting first-order ordinary differential equation (ODE) to obtain $L_M$:$$ \dot{L}_M = f_v^{-1}\left(\frac{F_{SE}(L_{MT}-L_M\cos\alpha)/\cos\alpha - F_{PE}(L_M)}{a f_l(L_M)}\right) $$This approach is the most commonly employed in the literature (see, for example, [OpenSim](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Muscle+Model+Theory+and+Publications); McLean, Su, van den Bogert, 2003; Thelen, 2003; Nigg and Herzog, 2006). Although the equation for the muscle force doesn't have numerical singularities, the differential equation for muscle velocity has four ([OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)): when $a \rightarrow 0$; when $f_l(L_M) \rightarrow 0$; when $\alpha \rightarrow \pi/2$; and when $\partial f_v/\partial v \rightarrow 0$. The following solutions can be employed to avoid these numerical singularities ([OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)): adopt a minimum value for $a$, e.g., $a_{min}=0.01$; adopt a minimum value for $f_l(L_M)$, e.g., $f_l(0.1)$; adopt a maximum value for the pennation angle, e.g., constrain $\alpha$ to $\cos\alpha > 0.1 \; (\alpha < 84.26^o)$; and make the slope of $f_V$ at and beyond maximum velocity different from zero (for both concentric and eccentric activations).
We will adopt these solutions to avoid singularities in the simulation of muscle mechanics. A problem with imposing values on variables as described above is that we can make the ordinary differential equation numerically stiff, which will increase the computational cost of the numerical integration. A better solution would be to modify the model so that it does not have these singularities (see [OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)). SimulationLet's simulate muscle dynamics using the Thelen2003Muscle model we defined in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb). For the simulation of the Thelen2003Muscle, we simply have to integrate the equation:$$ V_M = (0.25+0.75a)\,V_{Mmax}\frac{\bar{F}_M-a\bar{f}_{l,CE}}{b} $$ where$$ b = \left\{ \begin{array}{l l l} a\bar{f}_{l,CE} + \bar{F}_M/A_f \quad & \text{if} \quad \bar{F}_M \leq a\bar{f}_{l,CE} & \text{(shortening)} \\ \\ \frac{(2+2/A_f)(a\bar{f}_{l,CE}\bar{f}_{CEmax} - \bar{F}_M)}{\bar{f}_{CEmax}-1} \quad & \text{if} \quad \bar{F}_M > a\bar{f}_{l,CE} & \text{(lengthening)} \end{array} \right.$$ The equation above already contains the terms for activation, $a$, and force-length dependence, $\bar{f}_{l,CE}$. The equation is too complicated to solve analytically, so we will solve it by numerical integration using the [`scipy.integrate.ode`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.ode.html) class of numerical integrators, particularly `dopri5`, an explicit Runge-Kutta method of order (4)5 due to Dormand and Prince (a.k.a. ode45 in Matlab). We could run a simulation using [OpenSim](https://simtk.org/home/opensim); it would be faster, but for fun, let's program in Python.
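As a quick sanity check, the piecewise equation above can be sketched as a standalone Python function. This is a minimal illustration, not the full model: the default values `af=0.25`, `fmlen=1.4`, `vmmax=10.0`, and `lmopt=0.093` match the parameter file loaded later, and the guards against the singularities discussed above are omitted.

```python
def thelen_vm(fm, a, fl, af=0.25, fmlen=1.4, vmmax=10.0, lmopt=0.093):
    """Muscle velocity [m/s] from the normalized CE force (Thelen, 2003).

    fm : normalized muscle force, a : activation level,
    fl : normalized force-length factor (a*fl is the isometric force level).
    """
    if fm <= a*fl:  # shortening (concentric) branch
        b = a*fl + fm/af
    else:           # lengthening (eccentric) branch
        b = (2 + 2/af)*(a*fl*fmlen - fm)/(fmlen - 1)
    # V_M = (0.25 + 0.75a) * V_Mmax * (F_M - a*fl) / b, scaled to m/s by lmopt
    return (0.25 + 0.75*a)*vmmax*lmopt*(fm - a*fl)/b

print(thelen_vm(fm=0.5, a=1.0, fl=0.5))   # isometric: fm equals a*fl, so vm = 0.0
print(thelen_vm(fm=0.4, a=1.0, fl=0.5))   # fm below a*fl: shortening, vm < 0
```

Note that this sketch omits the additional guards (such as the eccentric force threshold near $\bar{f}_{CEmax}$) that the `velo_fm` method in `muscles.py` applies.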
All the necessary functions for the Thelen2003Muscle model described in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb) were grouped in one file (module), `muscles.py`. Besides these functions, the module `muscles.py` contains a function for the muscle velocity, `vm_eq`, which will be called by the function that specifies the numerical integration, `lm_sol`; a standard way of performing numerical integration in scientific computing:```python def vm_eq(self, t, lm, lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0): """Equation for muscle velocity.""" if lm < 0.1*lmopt: lm = 0.1*lmopt a = self.activation(t) lmt = self.lmt_eq(t, lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fse = self.force_se(lt=lt, ltslack=ltslack) fpe = self.force_pe(lm=lm/lmopt) fl = self.force_l(lm=lm/lmopt) fce_t = fse/np.cos(alpha) - fpe vm = self.velo_fm(fm=fce_t, a=a, fl=fl) return vmdef lm_sol(self, fun, t0, t1, lm0, lmt0, ltslack, lmopt, alpha0, vmmax, fm0, show, axs): """Runge-Kutta (4)5 ODE solver for muscle length.""" if fun is None: fun = self.vm_eq f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(lm0, t0).set_f_params(lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0) suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) d = self.calc_data(f.t, f.y, lm0, lmt0, ltslack, lmopt, alpha0, fm0) data.append(d) warnings.resetwarnings() data = np.array(data) self.lm_data = data if show: self.lm_plot(data, axs) return data````muscles.py` also contains some auxiliary functions for entering data and for plotting the results. 
Let's import the necessary Python libraries and customize the environment in order to run some simulations using `muscles.py`: ###Code import numpy as np import matplotlib.pyplot as plt %matplotlib inline #%matplotlib nbagg import matplotlib matplotlib.rcParams['lines.linewidth'] = 3 matplotlib.rcParams['font.size'] = 13 matplotlib.rcParams['lines.markersize'] = 5 matplotlib.rc('axes', grid=False, labelsize=14, titlesize=16, ymargin=0.05) matplotlib.rc('legend', numpoints=1, fontsize=11) # import the muscles.py module import sys sys.path.insert(1, r'./../functions') import muscles ###Output _____no_output_____ ###Markdown The `muscles.py` module contains the class `Thelen2003()`, which has the functions we want to use. To use them, we need to create an instance of this class: ###Code ms = muscles.Thelen2003() ###Output _____no_output_____ ###Markdown Now we need to enter the parameters and states for the simulation: we can load files with these values or pass them as arguments when calling the methods '`set_parameters()`' and '`set_states()`'. If nothing is entered, these methods assume that the parameters and states are stored in the files '`muscle_parameter.txt`' and '`muscle_state.txt`' inside the directory '`./../data/`'. Let's use some of the parameters and states from an exercise in chapter 4 of Nigg and Herzog (2006). ###Code ms.set_parameters() ms.set_states() ###Output The parameters were successfully loaded and are stored in the variable P. "lt0" value "" was replaced by NaN. The states were successfully loaded and are stored in the variable S.
###Markdown We can see the parameters and states: ###Code print('Parameters:\n', ms.P) print('States:\n', ms.S) ###Output Parameters: {'u_min': 0.01, 'ltslack': 0.223, 'id': '', 'lmopt': 0.093, 't_act': 0.015, 'epst0': 0.04, 'af': 0.25, 'gammal': 0.45, 'fmlen': 1.4, 'kttoe': 3.0, 'epsm0': 0.6, 'name': '', 'kpe': 5.0, 'alpha0': 0.0, 'u_max': 1.0, 'fm0': 7400.0, 't_deact': 0.05, 'vmmax': 10.0} States: {'lm0': 0.09, 'lmt0': 0.313, 'lt0': nan, 'id': '', 'name': ''} ###Markdown We can plot the muscle-tendon forces considering these parameters and initial states: ###Code ms.muscle_plot() ###Output _____no_output_____ ###Markdown Let's simulate an isometric activation (and since we didn't enter an activation level, $a=1$ will be used): ###Code def lmt_eq(t, lmt0): # isometric activation lmt = lmt0 return lmt ms.lmt_eq = lmt_eq data = ms.lm_sol() ###Output _____no_output_____ ###Markdown We can input a prescribed muscle-tendon length for the simulation: ###Code def lmt_eq(t, lmt0): # prescribed change in the muscle-tendon length if t < 1: lmt = lmt0 if 1 <= t < 2: lmt = lmt0 - 0.04*(t - 1) if t >= 2: lmt = lmt0 - 0.04 return lmt ms.lmt_eq = lmt_eq data = ms.lm_sol() ###Output _____no_output_____ ###Markdown Let's simulate a pennated muscle with an angle of $30^o$. 
We don't need to enter all the parameters again; we can change only the parameter `alpha0`: ###Code ms.P['alpha0'] = 30*np.pi/180 print('New initial pennation angle:', ms.P['alpha0']) ###Output New initial pennation angle: 0.5235987755982988 ###Markdown Because the muscle length is now shortened by $\cos(30^o)$, we will also have to change the initial muscle-tendon length if we want to start with the tendon at its slack length: ###Code ms.S['lmt0'] = ms.S['lmt0'] - ms.S['lm0'] + ms.S['lm0']*np.cos(ms.P['alpha0']) print('New initial muscle-tendon length:', ms.S['lmt0']) data = ms.lm_sol() ###Output _____no_output_____ ###Markdown Here is a plot of the simulated pennation angle: ###Code plt.plot(data[:, 0], data[:, 9]*180/np.pi) plt.xlabel('Time (s)') plt.ylabel('Pennation angle $(^o)$') plt.show() ###Output _____no_output_____ ###Markdown Change back to the old values: ###Code ms.P['alpha0'] = 0 ms.S['lmt0'] = 0.313 ###Output _____no_output_____ ###Markdown We can change the initial states to show the role of the passive parallel element: ###Code ms.S = {'id': '', 'lt0': np.nan, 'lmt0': 0.323, 'lm0': 0.10, 'name': ''} ms.muscle_plot() ###Output _____no_output_____ ###Markdown Let's also change the excitation signal: ###Code def excitation(t, u_max=1, u_min=0.01, t0=1, t1=2): """Excitation signal, a hat signal.""" u = u_min if t >= t0 and t <= t1: u = u_max return u ms.excitation = excitation act = ms.activation_sol() ###Output _____no_output_____ ###Markdown And let's simulate an isometric contraction: ###Code def lmt_eq(t, lmt0): # isometric activation lmt = lmt0 return lmt ms.lmt_eq = lmt_eq data = ms.lm_sol() ###Output _____no_output_____ ###Markdown Let's use a train of pulses as the excitation: ###Code def excitation(t, u_max=.5, u_min=0.01, t0=.2, t1=2): """Excitation signal, a train of square pulses.""" u = u_min ts = np.arange(1, 2.0, .1) #ts = np.delete(ts, np.arange(2, ts.size, 3)) if t >= ts[0] and t <= ts[1]: u = u_max elif t >= ts[2] and t <= ts[3]: u = u_max
elif t >= ts[4] and t <= ts[5]: u = u_max elif t >= ts[6] and t <= ts[7]: u = u_max elif t >= ts[8] and t <= ts[9]: u = u_max return u ms.excitation = excitation act = ms.activation_sol() data = ms.lm_sol() ###Output _____no_output_____ ###Markdown References- Kawakami Y, Ichinose Y, Fukunaga T (1998) [Architectural and functional features of human triceps surae muscles during contraction](http://www.ncbi.nlm.nih.gov/pubmed/9688711). Journal of Applied Physiology, 85, 398–404. - McLean SG, Su A, van den Bogert AJ (2003) [Development and validation of a 3-D model to predict knee joint loading during dynamic movement](http://www.ncbi.nlm.nih.gov/pubmed/14986412). Journal of Biomechanical Engineering, 125, 864-74. - Nigg BM and Herzog W (2006) [Biomechanics of the Musculo-skeletal System](https://books.google.com.br/books?id=hOIeAQAAIAAJ&dq=editions:ISBN0470017678). 3rd Edition. Wiley. - Scott SH, Winter DA (1991) [A comparison of three muscle pennation assumptions and their effect on isometric and isotonic force](http://www.ncbi.nlm.nih.gov/pubmed/2037616). Journal of Biomechanics, 24, 163–167. - Thelen DG (2003) [Adjustment of muscle mechanics model parameters to simulate dynamic contractions in older adults](http://homepages.cae.wisc.edu/~thelen/pubs/jbme03.pdf). Journal of Biomechanical Engineering, 125(1):70–77. Module muscles.py ###Code # %load ./../functions/muscles.py """Muscle modeling and simulation.""" from __future__ import division, print_function import numpy as np from scipy.integrate import ode import warnings import configparser __author__ = 'Marcos Duarte, https://github.com/demotu/BMC' __version__ = 'muscles.py v.1 2015/03/01' class Thelen2003(): """ Thelen (2003) muscle model. 
""" def __init__(self, parameters=None, states=None): if parameters is not None: self.set_parameters(parameters) if states is not None: self.set_states(states) self.lm_data = [] self.act_data = [] def set_parameters(self, var=None): """Load and set parameters for the muscle model. """ if var is None: var = './../data/muscle_parameter.txt' if isinstance(var, str): self.P = self.config_parser(var, 'parameters') elif isinstance(var, dict): self.P = var else: raise ValueError('Wrong parameters!') print('The parameters were successfully loaded ' + 'and are stored in the variable P.') def set_states(self, var=None): """Load and set states for the muscle model. """ if var is None: var = './../data/muscle_state.txt' if isinstance(var, str): self.S = self.config_parser(var, 'states') elif isinstance(var, dict): self.S = var else: raise ValueError('Wrong states!') print('The states were successfully loaded ' + 'and are stored in the variable S.') def config_parser(self, filename, var): parser = configparser.ConfigParser() parser.optionxform = str # make option names case sensitive parser.read(filename) if not parser: raise ValueError('File %s not found!' %var) #if not 'Muscle' in parser.sections()[0]: # raise ValueError('Wrong %s file!' %var) var = {} for key, value in parser.items(parser.sections()[0]): if key.lower() in ['name', 'id']: var.update({key: value}) else: try: value = float(value) except ValueError: print('"%s" value "%s" was replaced by NaN.' %(key, value)) value = np.nan var.update({key: value}) return var def force_l(self, lm, gammal=None): """Thelen (2003) force of the contractile element vs. muscle length. 
Parameters ---------- lm : float normalized muscle fiber length gammal : float, optional (default from parameter file) shape factor Returns ------- fl : float normalized force of the muscle contractile element """ if gammal is None: gammal = self.P['gammal'] fl = np.exp(-(lm-1)**2/gammal) return fl def force_pe(self, lm, kpe=None, epsm0=None): """Thelen (2003) force of the muscle parallel element vs. muscle length. Parameters ---------- lm : float normalized muscle fiber length kpe : float, optional (default from parameter file) exponential shape factor epsm0 : float, optional (default from parameter file) passive muscle strain due to maximum isometric force Returns ------- fpe : float normalized force of the muscle parallel (passive) element """ if kpe is None: kpe = self.P['kpe'] if epsm0 is None: epsm0 = self.P['epsm0'] if lm <= 1: fpe = 0 else: fpe = (np.exp(kpe*(lm-1)/epsm0)-1)/(np.exp(kpe)-1) return fpe def force_se(self, lt, ltslack=None, epst0=None, kttoe=None): """Thelen (2003) force-length relationship of tendon vs. tendon length. 
Parameters ---------- lt : float tendon length (normalized or not) ltslack : float, optional (default from parameter file) tendon slack length (normalized or not) epst0 : float, optional (default from parameter file) tendon strain at the maximal isometric muscle force kttoe : float, optional (default from parameter file) linear scale factor Returns ------- fse : float normalized force of the tendon series element """ if ltslack is None: ltslack = self.P['ltslack'] if epst0 is None: epst0 = self.P['epst0'] if kttoe is None: kttoe = self.P['kttoe'] epst = (lt-ltslack)/ltslack fttoe = .33 # values from OpenSim Thelen2003Muscle epsttoe = .99*epst0*np.e**3/(1.66*np.e**3 - .67) ktlin = .67/(epst0 - epsttoe) # if epst <= 0: fse = 0 elif epst <= epsttoe: fse = fttoe/(np.exp(kttoe)-1)*(np.exp(kttoe*epst/epsttoe)-1) else: fse = ktlin*(epst-epsttoe) + fttoe return fse def velo_fm(self, fm, a, fl, lmopt=None, vmmax=None, fmlen=None, af=None): """Thelen (2003) velocity of the force-velocity relationship vs. CE force. 
Parameters ---------- fm : float normalized muscle force a : float muscle activation level fl : float normalized muscle force due to the force-length relationship lmopt : float, optional (default from parameter file) optimal muscle fiber length vmmax : float, optional (default from parameter file) normalized maximum muscle velocity for concentric activation fmlen : float, optional (default from parameter file) normalized maximum force generated at the lengthening phase af : float, optional (default from parameter file) shape factor Returns ------- vm : float velocity of the muscle """ if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fmlen is None: fmlen = self.P['fmlen'] if af is None: af = self.P['af'] if fm <= a*fl: # isometric and concentric activation if fm > 0: b = a*fl + fm/af else: b = a*fl else: # eccentric activation asyE_thresh = 0.95 # from OpenSim Thelen2003Muscle if fm < a*fl*fmlen*asyE_thresh: b = (2 + 2/af)*(a*fl*fmlen - fm)/(fmlen - 1) else: fm0 = a*fl*fmlen*asyE_thresh b = (2 + 2/af)*(a*fl*fmlen - fm0)/(fmlen - 1) vm = (0.25 + 0.75*a)*1*(fm - a*fl)/b vm = vm*vmmax*lmopt return vm def force_vm(self, vm, a, fl, lmopt=None, vmmax=None, fmlen=None, af=None): """Thelen (2003) force of the contractile element vs. muscle velocity. 
Parameters ---------- vm : float muscle velocity a : float muscle activation level fl : float normalized muscle force due to the force-length relationship lmopt : float, optional (default from parameter file) optimal muscle fiber length vmmax : float, optional (default from parameter file) normalized maximum muscle velocity for concentric activation fmlen : float, optional (default from parameter file) normalized normalized maximum force generated at the lengthening phase af : float, optional (default from parameter file) shape factor Returns ------- fvm : float normalized force of the muscle contractile element """ if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fmlen is None: fmlen = self.P['fmlen'] if af is None: af = self.P['af'] vmmax = vmmax*lmopt if vm <= 0: # isometric and concentric activation fvm = af*a*fl*(4*vm + vmmax*(3*a + 1))/(-4*vm + vmmax*af*(3*a + 1)) else: # eccentric activation fvm = a*fl*(af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + \ 8*vm*fmlen*(af + 1)) / \ (af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + 8*vm*(af + 1)) return fvm def lmt_eq(self, t, lmt0=None): """Equation for muscle-tendon length.""" if lmt0 is None: lmt0 = self.S['lmt0'] return lmt0 def vm_eq(self, t, lm, lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0): """Equation for muscle velocity.""" if lm < 0.1*lmopt: lm = 0.1*lmopt #lt0 = lmt0 - lm0*np.cos(alpha0) a = self.activation(t) lmt = self.lmt_eq(t, lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fse = self.force_se(lt=lt, ltslack=ltslack) fpe = self.force_pe(lm=lm/lmopt) fl = self.force_l(lm=lm/lmopt) fce_t = fse/np.cos(alpha) - fpe #if fce_t < 0: fce_t=0 vm = self.velo_fm(fm=fce_t, a=a, fl=fl) return vm def lm_sol(self, fun=None, t0=0, t1=3, lm0=None, lmt0=None, ltslack=None, lmopt=None, alpha0=None, vmmax=None, fm0=None, show=True, axs=None): """Runge-Kutta (4)5 ODE solver for muscle length.""" if lm0 is None: lm0 = self.S['lm0'] if lmt0 is None: 
lmt0 = self.S['lmt0'] if ltslack is None: ltslack = self.P['ltslack'] if alpha0 is None: alpha0 = self.P['alpha0'] if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fm0 is None: fm0 = self.P['fm0'] if fun is None: fun = self.vm_eq f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(lm0, t0).set_f_params(lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0) # suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) d = self.calc_data(f.t, np.max([f.y, 0.1*lmopt]), lm0, lmt0, ltslack, lmopt, alpha0, fm0) data.append(d) warnings.resetwarnings() data = np.array(data) self.lm_data = data if show: self.lm_plot(data, axs) return data def calc_data(self, t, lm, lm0, lmt0, ltslack, lmopt, alpha0, fm0): """Calculus of muscle-tendon variables.""" a = self.activation(t) lmt = self.lmt_eq(t, lmt0=lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fl = self.force_l(lm=lm/lmopt) fpe = self.force_pe(lm=lm/lmopt) fse = self.force_se(lt=lt, ltslack=ltslack) fce_t = fse/np.cos(alpha) - fpe vm = self.velo_fm(fm=fce_t, a=a, fl=fl, lmopt=lmopt) fm = self.force_vm(vm=vm, fl=fl, lmopt=lmopt, a=a) + fpe data = [t, lmt, lm, lt, vm, fm*fm0, fse*fm0, a*fl*fm0, fpe*fm0, alpha] return data def muscle_plot(self, a=1, axs=None): """Plot muscle-tendon relationships with length and velocity.""" try: import matplotlib.pyplot as plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=1, ncols=3, figsize=(9, 4)) lmopt = self.P['lmopt'] ltslack = self.P['ltslack'] vmmax = self.P['vmmax'] alpha0 = self.P['alpha0'] fm0 = self.P['fm0'] lm0 = self.S['lm0'] lmt0 = self.S['lmt0'] lt0 = self.S['lt0'] if np.isnan(lt0): lt0 = lmt0 - lm0*np.cos(alpha0) lm = np.linspace(0, 2, 101) lt = np.linspace(0, 1, 101)*0.05 + 1 vm = np.linspace(-1, 1, 101)*vmmax*lmopt fl = 
np.zeros(lm.size) fpe = np.zeros(lm.size) fse = np.zeros(lt.size) fvm = np.zeros(vm.size) fl_lm0 = self.force_l(lm0/lmopt) fpe_lm0 = self.force_pe(lm0/lmopt) fm_lm0 = fl_lm0 + fpe_lm0 ft_lt0 = self.force_se(lt0, ltslack)*fm0 for i in range(101): fl[i] = self.force_l(lm[i]) fpe[i] = self.force_pe(lm[i]) fse[i] = self.force_se(lt[i], ltslack=1) fvm[i] = self.force_vm(vm[i], a=a, fl=fl_lm0) lm = lm*lmopt lt = lt*ltslack fl = fl fpe = fpe fse = fse*fm0 fvm = fvm*fm0 xlim = self.margins(lm, margin=.05, minmargin=False) axs[0].set_xlim(xlim) ylim = self.margins([0, 2], margin=.05) axs[0].set_ylim(ylim) axs[0].plot(lm, fl, 'b', label='Active') axs[0].plot(lm, fpe, 'b--', label='Passive') axs[0].plot(lm, fl+fpe, 'b:', label='') axs[0].plot([lm0, lm0], [ylim[0], fm_lm0], 'k:', lw=2, label='') axs[0].plot([xlim[0], lm0], [fm_lm0, fm_lm0], 'k:', lw=2, label='') axs[0].plot(lm0, fm_lm0, 'o', ms=6, mfc='r', mec='r', mew=2, label='fl(LM0)') axs[0].legend(loc='best', frameon=True, framealpha=.5) axs[0].set_xlabel('Length [m]') axs[0].set_ylabel('Scale factor') axs[0].xaxis.set_major_locator(plt.MaxNLocator(4)) axs[0].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[0].set_title('Muscle F-L (a=1)') xlim = self.margins([0, np.min(vm), np.max(vm)], margin=.05, minmargin=False) axs[1].set_xlim(xlim) ylim = self.margins([0, fm0*1.2, np.max(fvm)*1.5], margin=.025) axs[1].set_ylim(ylim) axs[1].plot(vm, fvm, label='') axs[1].set_xlabel('$\mathbf{^{CON}}\;$ Velocity [m/s] $\;\mathbf{^{EXC}}$') axs[1].plot([0, 0], [ylim[0], fvm[50]], 'k:', lw=2, label='') axs[1].plot([xlim[0], 0], [fvm[50], fvm[50]], 'k:', lw=2, label='') axs[1].plot(0, fvm[50], 'o', ms=6, mfc='r', mec='r', mew=2, label='FM0(LM0)') axs[1].plot(xlim[0], fm0, '+', ms=10, mfc='r', mec='r', mew=2, label='') axs[1].text(vm[0], fm0, 'FM0') axs[1].legend(loc='upper right', frameon=True, framealpha=.5) axs[1].set_ylabel('Force [N]') axs[1].xaxis.set_major_locator(plt.MaxNLocator(4)) 
axs[1].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[1].set_title('Muscle F-V (a=1)') xlim = self.margins([lt0, ltslack, np.min(lt), np.max(lt)], margin=.05, minmargin=False) axs[2].set_xlim(xlim) ylim = self.margins([ft_lt0, 0, np.max(fse)], margin=.05) axs[2].set_ylim(ylim) axs[2].plot(lt, fse, label='') axs[2].set_xlabel('Length [m]') axs[2].plot([lt0, lt0], [ylim[0], ft_lt0], 'k:', lw=2, label='') axs[2].plot([xlim[0], lt0], [ft_lt0, ft_lt0], 'k:', lw=2, label='') axs[2].plot(lt0, ft_lt0, 'o', ms=6, mfc='r', mec='r', mew=2, label='FT(LT0)') axs[2].legend(loc='upper left', frameon=True, framealpha=.5) axs[2].set_ylabel('Force [N]') axs[2].xaxis.set_major_locator(plt.MaxNLocator(4)) axs[2].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[2].set_title('Tendon') plt.suptitle('Muscle-tendon mechanics', fontsize=18, y=1.03) plt.tight_layout(w_pad=.1) plt.show() def lm_plot(self, x, axs): """Plot results of actdyn_ode45 function. data = [t, lmt, lm, lt, vm, fm*fm0, fse*fm0, fl*fm0, fpe*fm0, alpha] """ try: import matplotlib.pyplot as plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=3, ncols=2, sharex=True, figsize=(10, 6)) axs[0, 0].plot(x[:, 0], x[:, 1], 'b', label='LMT') lmt = x[:, 2]*np.cos(x[:, 9]) + x[:, 3] if np.sum(x[:, 9]) > 0: axs[0, 0].plot(x[:, 0], lmt, 'g--', label=r'$LM \cos \alpha + LT$') else: axs[0, 0].plot(x[:, 0], lmt, 'g--', label=r'LM+LT') ylim = self.margins(x[:, 1], margin=.1) axs[0, 0].set_ylim(ylim) axs[0, 0].legend(framealpha=.5, loc='best') axs[0, 1].plot(x[:, 0], x[:, 3], 'b') #axs[0, 1].plot(x[:, 0], lt0*np.ones(len(x)), 'r') ylim = self.margins(x[:, 3], margin=.1) axs[0, 1].set_ylim(ylim) axs[1, 0].plot(x[:, 0], x[:, 2], 'b') #axs[1, 0].plot(x[:, 0], lmopt*np.ones(len(x)), 'r') ylim = self.margins(x[:, 2], margin=.1) axs[1, 0].set_ylim(ylim) axs[1, 1].plot(x[:, 0], x[:, 4], 'b') ylim = self.margins(x[:, 4], margin=.1) axs[1, 1].set_ylim(ylim) axs[2, 0].plot(x[:, 0], x[:, 5], 
'b', label='Muscle') axs[2, 0].plot(x[:, 0], x[:, 6], 'g--', label='Tendon') ylim = self.margins(x[:, [5, 6]], margin=.1) axs[2, 0].set_ylim(ylim) axs[2, 0].set_xlabel('Time (s)') axs[2, 0].legend(framealpha=.5, loc='best') axs[2, 1].plot(x[:, 0], x[:, 8], 'b', label='PE') ylim = self.margins(x[:, 8], margin=.1) axs[2, 1].set_ylim(ylim) axs[2, 1].set_xlabel('Time (s)') axs[2, 1].legend(framealpha=.5, loc='best') axs = axs.flatten() ylabel = ['$L_{MT}\,(m)$', '$L_{T}\,(m)$', '$L_{M}\,(m)$', '$V_{CE}\,(m/s)$', '$Force\,(N)$', '$Force\,(N)$'] for i, axi in enumerate(axs): axi.set_ylabel(ylabel[i], fontsize=14) axi.yaxis.set_major_locator(plt.MaxNLocator(4)) axi.yaxis.set_label_coords(-.2, 0.5) plt.suptitle('Simulation of muscle-tendon mechanics', fontsize=18, y=1.03) plt.tight_layout() plt.show() def penn_ang(self, lmt, lm, lt=None, lm0=None, alpha0=None): """Pennation angle. Parameters ---------- lmt : float muscle-tendon length lt : float, optional (default=None) tendon length lm : float, optional (default=None) muscle fiber length lm0 : float, optional (default from states file) initial muscle fiber length alpha0 : float, optional (default from parameter file) initial pennation angle Returns ------- alpha : float pennation angle """ if lm0 is None: lm0 = self.S['lm0'] if alpha0 is None: alpha0 = self.P['alpha0'] alpha = alpha0 if alpha0 != 0: w = lm0*np.sin(alpha0) if lm is not None: cosalpha = np.sqrt(1-(w/lm)**2) elif lmt is not None and lt is not None: cosalpha = 1/(np.sqrt(1 + (w/(lmt-lt))**2)) alpha = np.arccos(cosalpha) if alpha > 1.4706289: # np.arccos(0.1), 84.2608 degrees alpha = 1.4706289 return alpha def excitation(self, t, u_max=None, u_min=None, t0=0, t1=5): """Excitation signal, a square wave. 
Parameters ---------- t : float time instant [s] u_max : float (0 < u_max <= 1), optional (default from parameter file) maximum value for muscle excitation u_min : float (0 < u_min < 1), optional (default from parameter file) minimum value for muscle excitation t0 : float, optional (default=0) initial time instant for muscle excitation equals to u_max [s] t1 : float, optional (default=5) final time instant for muscle excitation equals to u_max [s] Returns ------- u : float (0 < u <= 1) excitation signal """ if u_max is None: u_max = self.P['u_max'] if u_min is None: u_min = self.P['u_min'] u = u_min if t >= t0 and t <= t1: u = u_max return u def activation_dyn(self, t, a, t_act=None, t_deact=None): """Thelen (2003) activation dynamics, the derivative of `a` at `t`. Parameters ---------- t : float time instant [s] a : float (0 <= a <= 1) muscle activation t_act : float, optional (default from parameter file) activation time constant [s] t_deact : float, optional (default from parameter file) deactivation time constant [s] Returns ------- adot : float derivative of `a` at `t` """ if t_act is None: t_act = self.P['t_act'] if t_deact is None: t_deact = self.P['t_deact'] u = self.excitation(t) if u > a: adot = (u - a)/(t_act*(0.5 + 1.5*a)) else: adot = (u - a)/(t_deact/(0.5 + 1.5*a)) return adot def activation_sol(self, fun=None, t0=0, t1=3, a0=0, u_min=None, t_act=None, t_deact=None, show=True, axs=None): """Runge-Kutta (4)5 ODE solver for activation dynamics. 
Parameters ---------- fun : function object, optional (default is None and `actdyn` is used) function with ODE to be solved t0 : float, optional (default=0) initial time instant for the simulation [s] t1 : float, optional (default=0) final time instant for the simulation [s] a0 : float, optional (default=0) initial muscle activation u_max : float (0 < u_max <= 1), optional (default from parameter file) maximum value for muscle excitation u_min : float (0 < u_min < 1), optional (default from parameter file) minimum value for muscle excitation t_act : float, optional (default from parameter file) activation time constant [s] t_deact : float, optional (default from parameter file) deactivation time constant [s] show : bool, optional (default = True) if True (1), plot data in matplotlib figure axs : a matplotlib.axes.Axes instance, optional (default = None) Returns ------- data : 2-d array array with columns [time, excitation, activation] """ if u_min is None: u_min = self.P['u_min'] if t_act is None: t_act = self.P['t_act'] if t_deact is None: t_deact = self.P['t_deact'] if fun is None: fun = self.activation_dyn f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(a0, t0).set_f_params(t_act, t_deact) # suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) data.append([f.t, self.excitation(f.t), np.max([f.y, u_min])]) warnings.resetwarnings() data = np.array(data) if show: self.actvation_plot(data, axs) self.act_data = data return data def activation(self, t=None): """Activation signal.""" data = self.act_data if t is not None and len(data): if t <= self.act_data[0, 0]: a = self.act_data[0, 2] elif t >= self.act_data[-1, 0]: a = self.act_data[-1, 2] else: a = np.interp(t, self.act_data[:, 0], self.act_data[:, 2]) else: a = 1 return a def actvation_plot(self, data, axs): """Plot results of actdyn_ode45 function.""" try: import matplotlib.pyplot as 
plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=1, ncols=1, figsize=(6, 4)) axs.plot(data[:, 0], data[:, 1], color=[1, 0, 0, .6], label='Excitation') axs.plot(data[:, 0], data[:, 2], color=[0, 0, 1, .6], label='Activation') axs.set_xlabel('Time [s]') axs.set_ylabel('Level') axs.legend() plt.title('Activation dynamics') plt.tight_layout() plt.show() def margins(self, x, margin=0.01, minmargin=True): """Calculate plot limits with extra margins. """ rang = np.nanmax(x) - np.nanmin(x) if rang < 0.001 and minmargin: rang = 0.001*np.nanmean(x)/margin if rang < 1: rang = 1 lim = [np.nanmin(x) - rang*margin, np.nanmax(x) + rang*margin] return lim ###Output _____no_output_____ ###Markdown Muscle simulationMarcos Duarte, Renato Watanabe Let's simulate the 3-component Hill-type muscle model we described in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb) and illustrated below:Figure. A Hill-type muscle model with three components: two for the muscle, an active contractile element, $\mathsf{CE}$, and a passive elastic element in parallel, $\mathsf{PE}$, with the $\mathsf{CE}$, and one component for the tendon, an elastic element in series, $\mathsf{SE}$, with the muscle. 
$\mathsf{L_{MT}}$: muscle–tendon length, $\mathsf{L_T}$: tendon length, $\mathsf{L_M}$: muscle fiber length, $\mathsf{F_T}$: tendon force, $\mathsf{F_M}$: muscle force, and $α$: pennation angle.

The following relationships are true for the model:$$ \begin{array}{l}L_{MT} = L_{T} + L_M\cos\alpha \\\\L_M = L_{CE} = L_{PE} \\\\\dot{L}_M = \dot{L}_{CE} = \dot{L}_{PE} \\\\F_{M} = F_{CE} + F_{PE} \end{array} $$If we assume that the muscle–tendon system is at equilibrium, that is, the muscle, $F_{M}$, and tendon, $F_{T}$, forces are in equilibrium at all times, the following equation holds (note also that a muscle can only pull):$$ F_{T} = F_{SE} = F_{M}\cos\alpha $$

### Pennation angle

The pennation angle will vary during muscle activation; for instance, Kawakami et al. (1998) showed that the pennation angle of the medial gastrocnemius muscle can vary from 22$^o$ to 67$^o$ during activation. The most common approach is to assume that the muscle width (defined as the length of the perpendicular line between the lines of the muscle origin and insertion) remains constant (Scott & Winter, 1991):$$ w = L_{M,0} \sin\alpha_0 $$The pennation angle as a function of time will be given by:$$ \alpha = \sin^{-1} \left(\frac{w}{L_M}\right) $$The cosine of the pennation angle can be given by (if $L_M$ is known):$$ \cos \alpha = \frac{\sqrt{L_M^2-w^2}}{L_M} = \sqrt{1-\left(\frac{w}{L_M}\right)^2} $$or (if $L_M$ is not known):$$ \cos \alpha = \frac{L_{MT}-L_T}{L_M} = \frac{1}{\sqrt{1 + \left(\frac{w}{L_{MT}-L_T}\right)^2}} $$

### Muscle force

In general, the dependencies of the contractile element force on its length, its velocity, and the activation level are assumed independent of each other:$$ F_{CE}(a, L_{CE}, \dot{L}_{CE}) = a \: f_l(L_{CE}) \: f_v(\dot{L}_{CE}) \: F_{M0} $$where $f_l(L_M)$ and $f_v(\dot{L}_M)$ are mathematical functions describing the force-length and force-velocity relationships of the contractile element (typically these functions are normalized by $F_{M0}$, the maximum
isometric (at zero velocity) muscle force, so we have to multiply the right side of the equation by $F_{M0}$). And for the muscle force:$$ F_{M}(a, L_M, \dot{L}_M) = \left[a \: f_l(L_M)f_v(\dot{L}_M) + F_{PE}(L_M)\right]F_{M0} $$This equation for the muscle force, with $a$, $L_{M}$, and $\dot{L}_{M}$ as state variables, can be used to simulate the dynamics of a muscle given an excitation and determine the muscle force and length. We can rearrange the equation, invert the expression for $f_v$, and integrate the resulting first-order ordinary differential equation (ODE) to obtain $L_M$:$$ \dot{L}_M = f_v^{-1}\left(\frac{F_{SE}(L_{MT}-L_M\cos\alpha)/\cos\alpha - F_{PE}(L_M)}{a f_l(L_M)}\right) $$This approach is the most commonly employed in the literature (see, for example, [OpenSim](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Muscle+Model+Theory+and+Publications); McLean, Su, van den Bogert, 2003; Thelen, 2003; Nigg and Herzog, 2007). Although the equation for the muscle force doesn't have numerical singularities, the differential equation for muscle velocity has four ([OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)): when $a \rightarrow 0$; when $f_l(L_M) \rightarrow 0$; when $\alpha \rightarrow \pi/2$; and when $\partial f_v/\partial v \rightarrow 0 $. The following solutions can be employed to avoid these numerical singularities ([OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)): adopt a minimum value for $a$, e.g., $a_{min}=0.01$; adopt a minimum value for $f_l(L_M)$, e.g., $f_l(0.1)$; adopt a maximum value for the pennation angle, e.g., constrain $\alpha$ so that $\cos\alpha > 0.1 \; (\alpha < 84.26^o)$; and make the slope of $f_v$ at and beyond maximum velocity different from zero (for both concentric and eccentric activations).
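The constant-width assumption and the pennation-angle clamp described above can be sketched as follows. This is a minimal sketch mirroring the logic of `penn_ang` in `muscles.py`; the default values for `lm0` and `alpha0` are illustrative, not taken from the parameter file:

```python
import numpy as np

def pennation_angle(lm, lm0=0.093, alpha0=np.deg2rad(25), cos_min=0.1):
    """Pennation angle (rad) under the constant muscle-width assumption.

    cos_min enforces cos(alpha) > 0.1, i.e. alpha < 84.26 degrees.
    """
    if alpha0 == 0:
        return 0.0
    w = lm0 * np.sin(alpha0)                       # muscle width, assumed constant
    ratio = min(w / lm, np.sqrt(1 - cos_min**2))   # clamp for very short fibers
    return np.arccos(np.sqrt(1 - ratio**2))
```

At the initial length `lm0` the function returns `alpha0`; as the fiber shortens, the angle grows until the clamp takes over.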
We will adopt these solutions to avoid singularities in the simulation of muscle mechanics. A problem of imposing values on variables as described above is that we can make the ordinary differential equation numerically stiff, which will increase the computational cost of the numerical integration. A better solution would be to modify the model so that it does not have these singularities (see [OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)).

### Simulation

Let's simulate muscle dynamics using the Thelen2003Muscle model we defined in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb). For the simulation of the Thelen2003Muscle, we simply have to integrate the equation:$$ V_M = (0.25+0.75a)\,V_{Mmax}\frac{\bar{F}_M-a\bar{f}_{l,CE}}{b} $$ where$$ b = \left\{ \begin{array}{l l l} a\bar{f}_{l,CE} + \bar{F}_M/A_f \quad & \text{if} \quad \bar{F}_M \leq a\bar{f}_{l,CE} & \text{(shortening)} \\ \\ \frac{(2+2/A_f)(a\bar{f}_{l,CE}\bar{f}_{CEmax} - \bar{F}_M)}{\bar{f}_{CEmax}-1} \quad & \text{if} \quad \bar{F}_M > a\bar{f}_{l,CE} & \text{(lengthening)} \end{array} \right.$$ The equation above already contains the terms for activation, $a$, and force-length dependence, $\bar{f}_{l,CE}$. The equation is too complicated to solve analytically, so we will solve it by numerical integration using the [`scipy.integrate.ode`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.ode.html) class of numeric integrators, particularly `dopri5`, an explicit Runge-Kutta method of order (4)5 due to Dormand and Prince (a.k.a. ode45 in Matlab). We could run a simulation using [OpenSim](https://simtk.org/home/opensim); it would be faster, but for fun, let's program in Python.
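The piecewise equation above maps directly to code. This is a minimal sketch using the default values from the notebook's parameter file (`vmmax=10`, `af=0.25`, `fmlen=1.4`, i.e. $\bar{f}_{CEmax}$); it omits the extra guards the module applies near $\bar{F}_M \leq 0$ and the eccentric asymptote, and returns the velocity in optimal fiber lengths per second (the module multiplies by `lmopt` to get m/s):

```python
def thelen_velocity(fm, a, fl, vmmax=10.0, af=0.25, fmlen=1.4):
    """Normalized fiber velocity from the inverted Thelen (2003) F-V relation.

    fm and fl are forces normalized by the maximum isometric force.
    """
    if fm <= a * fl:    # shortening (concentric) or isometric
        b = a * fl + fm / af
    else:               # lengthening (eccentric)
        b = (2 + 2/af) * (a * fl * fmlen - fm) / (fmlen - 1)
    return (0.25 + 0.75*a) * vmmax * (fm - a * fl) / b
```

At full activation with `fl = 1`, a normalized force equal to `a*fl` gives zero velocity (isometric), smaller forces give negative (shortening) velocities, and larger forces give positive (lengthening) velocities.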
All the necessary functions for the Thelen2003Muscle model described in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb) were grouped in one file (module), `muscles.py`. Besides these functions, the module `muscles.py` contains a function for the muscle velocity, `vm_eq`, which will be called by the function that performs the numerical integration, `lm_sol`; a standard way of performing numerical integration in scientific computing:

```python
def vm_eq(self, t, lm, lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0):
    """Equation for muscle velocity."""
    if lm < 0.1*lmopt:
        lm = 0.1*lmopt
    a = self.activation(t)
    lmt = self.lmt_eq(t, lmt0)
    alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0)
    lt = lmt - lm*np.cos(alpha)
    fse = self.force_se(lt=lt, ltslack=ltslack)
    fpe = self.force_pe(lm=lm/lmopt)
    fl = self.force_l(lm=lm/lmopt)
    fce_t = fse/np.cos(alpha) - fpe
    vm = self.velo_fm(fm=fce_t, a=a, fl=fl)
    return vm

def lm_sol(self, fun, t0, t1, lm0, lmt0, ltslack, lmopt, alpha0, vmmax, fm0, show, axs):
    """Runge-Kutta (4)5 ODE solver for muscle length."""
    if fun is None:
        fun = self.vm_eq
    f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8)
    f.set_initial_value(lm0, t0).set_f_params(lm0, lmt0, lmopt, ltslack,
                                              alpha0, vmmax, fm0)
    # suppress Fortran warning
    warnings.filterwarnings("ignore", category=UserWarning)
    data = []
    while f.t < t1:
        f.integrate(t1, step=True)
        d = self.calc_data(f.t, f.y, lm0, lmt0, ltslack, lmopt, alpha0, fm0)
        data.append(d)
    warnings.resetwarnings()
    data = np.array(data)
    self.lm_data = data
    if show:
        self.lm_plot(data, axs)
    return data
```

`muscles.py` also contains some auxiliary functions for entering data and for plotting the results.
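The step-by-step integration pattern used in `lm_sol` (setting `nsteps=1` so that `dopri5` stops after each internal step, and silencing the resulting warning) can be tried on a toy ODE, $\dot{x} = -x/\tau$. This is a self-contained sketch of the pattern only; the names `dxdt` and `tau` are illustrative:

```python
import warnings

import numpy as np
from scipy.integrate import ode

def dxdt(t, x, tau):
    """Toy first-order decay: dx/dt = -x/tau."""
    return -x / tau

tau = 0.5
f = ode(dxdt).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8)
f.set_initial_value(1.0, 0.0).set_f_params(tau)

# with nsteps=1 the solver stops after every internal step and issues a
# UserWarning; silence it, as lm_sol does
warnings.filterwarnings('ignore', category=UserWarning)
data = []
while f.t < 1.0:
    f.integrate(1.0, step=True)   # advance one internal step
    data.append([f.t, f.y[0]])
warnings.resetwarnings()
data = np.array(data)
```

The intermediate states collected in `data` are what allow plotting the full trajectory, which is why the module prefers this stepwise loop over a single call to `integrate`.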
Let's import the necessary Python libraries and customize the environment in order to run some simulations using `muscles.py`: ###Code import numpy as np import matplotlib.pyplot as plt %matplotlib inline #%matplotlib nbagg import matplotlib matplotlib.rcParams['lines.linewidth'] = 3 matplotlib.rcParams['font.size'] = 13 matplotlib.rcParams['lines.markersize'] = 5 matplotlib.rc('axes', grid=False, labelsize=14, titlesize=16, ymargin=0.05) matplotlib.rc('legend', numpoints=1, fontsize=11) # import the muscles.py module import sys sys.path.insert(1, r'./../functions') import muscles ###Output _____no_output_____ ###Markdown The `muscles.py` module contains the class `Thelen2003()`, which has the functions we want to use. To use them, we need to create an instance of this class: ###Code ms = muscles.Thelen2003() ###Output _____no_output_____ ###Markdown Now we need to enter the parameters and states for the simulation: we can load files with these values or pass them as arguments when calling the methods `set_parameters()` and `set_states()`. If nothing is passed, these methods assume that the parameters and states are stored in the files '`muscle_parameter.txt`' and '`muscle_state.txt`' inside the directory '`./../data/`'. Let's use some of the parameters and states from an exercise in chapter 4 of Nigg and Herzog (2006). ###Code ms.set_parameters() ms.set_states() ###Output The parameters were successfully loaded and are stored in the variable P. The states were successfully loaded and are stored in the variable S.
###Markdown We can see the parameters and states: ###Code print('Parameters:\n', ms.P) print('States:\n', ms.S) ###Output Parameters: {'id': '', 'name': '', 'u_max': 1.0, 'u_min': 0.01, 't_act': 0.015, 't_deact': 0.05, 'lmopt': 0.093, 'alpha0': 0.0, 'fm0': 7400.0, 'gammal': 0.45, 'kpe': 5.0, 'epsm0': 0.6, 'vmmax': 10.0, 'fmlen': 1.4, 'af': 0.25, 'ltslack': 0.223, 'epst0': 0.04, 'kttoe': 3.0} States: {'id': '', 'name': '', 'lmt0': 0.31, 'lm0': 0.087, 'lt0': 0.223} ###Markdown We can plot the muscle-tendon forces considering these parameters and initial states: ###Code ms.muscle_plot(); ###Output _____no_output_____ ###Markdown Let's simulate an isometric activation (and since we didn't enter an activation level, $a=1$ will be used): ###Code def lmt_eq(t, lmt0): # isometric activation lmt = lmt0 return lmt ms.lmt_eq = lmt_eq data = ms.lm_sol() ###Output _____no_output_____ ###Markdown We can input a prescribed muscle-tendon length for the simulation: ###Code def lmt_eq(t, lmt0): # prescribed change in the muscle-tendon length if t < 1: lmt = lmt0 if 1 <= t < 2: lmt = lmt0 - 0.04*(t - 1) if t >= 2: lmt = lmt0 - 0.04 return lmt ms.lmt_eq = lmt_eq data = ms.lm_sol() ###Output _____no_output_____ ###Markdown Let's simulate a pennated muscle with an angle of $30^o$. 
We don't need to enter all parameters again; we can change only the parameter `alpha0`: ###Code ms.P['alpha0'] = 30*np.pi/180 print('New initial pennation angle:', ms.P['alpha0']) ###Output New initial pennation angle: 0.5235987755982988 ###Markdown Because the muscle length is now shortened by a factor of $\cos(30^o)$, we will also have to change the initial muscle-tendon length if we want to start with the tendon at its slack length: ###Code ms.S['lmt0'] = ms.S['lmt0'] - ms.S['lm0'] + ms.S['lm0']*np.cos(ms.P['alpha0']) print('New initial muscle-tendon length:', ms.S['lmt0']) data = ms.lm_sol() ###Output _____no_output_____ ###Markdown Here is a plot of the simulated pennation angle: ###Code plt.plot(data[:, 0], data[:, 9]*180/np.pi) plt.xlabel('Time (s)') plt.ylabel('Pennation angle $(^o)$') plt.show() ###Output _____no_output_____ ###Markdown Change back to the old values: ###Code ms.P['alpha0'] = 0 ms.S['lmt0'] = 0.313 ###Output _____no_output_____ ###Markdown We can change the initial states to show the role of the passive parallel element: ###Code ms.S = {'id': '', 'lt0': np.nan, 'lmt0': 0.323, 'lm0': 0.10, 'name': ''} ms.muscle_plot() ###Output _____no_output_____ ###Markdown Let's also change the excitation signal: ###Code def excitation(t, u_max=1, u_min=0.01, t0=1, t1=2): """Excitation signal, a hat signal.""" u = u_min if t >= t0 and t <= t1: u = u_max return u ms.excitation = excitation act = ms.activation_sol() ###Output _____no_output_____ ###Markdown And let's simulate an isometric contraction: ###Code def lmt_eq(t, lmt0): # isometric activation lmt = lmt0 return lmt ms.lmt_eq = lmt_eq data = ms.lm_sol() ###Output _____no_output_____ ###Markdown Let's use as excitation a train of pulses: ###Code def excitation(t, u_max=.5, u_min=0.01, t0=.2, t1=2): """Excitation signal, a train of square pulses.""" u = u_min ts = np.arange(1, 2.0, .1) #ts = np.delete(ts, np.arange(2, ts.size, 3)) if t >= ts[0] and t <= ts[1]: u = u_max elif t >= ts[2] and t <= ts[3]: u = u_max
elif t >= ts[4] and t <= ts[5]: u = u_max elif t >= ts[6] and t <= ts[7]: u = u_max elif t >= ts[8] and t <= ts[9]: u = u_max return u ms.excitation = excitation act = ms.activation_sol() data = ms.lm_sol() ###Output _____no_output_____ ###Markdown References- Kawakami Y, Ichinose Y, Fukunaga T (1998) [Architectural and functional features of human triceps surae muscles during contraction](http://www.ncbi.nlm.nih.gov/pubmed/9688711). Journal of Applied Physiology, 85, 398–404. - McLean SG, Su A, van den Bogert AJ (2003) [Development and validation of a 3-D model to predict knee joint loading during dynamic movement](http://www.ncbi.nlm.nih.gov/pubmed/14986412). Journal of Biomechanical Engineering, 125, 864-74. - Nigg BM and Herzog W (2006) [Biomechanics of the Musculo-skeletal System](https://books.google.com.br/books?id=hOIeAQAAIAAJ&dq=editions:ISBN0470017678). 3rd Edition. Wiley. - Scott SH, Winter DA (1991) [A comparison of three muscle pennation assumptions and their effect on isometric and isotonic force](http://www.ncbi.nlm.nih.gov/pubmed/2037616). Journal of Biomechanics, 24, 163–167. - Thelen DG (2003) [Adjustment of muscle mechanics model parameters to simulate dynamic contractions in older adults](http://homepages.cae.wisc.edu/~thelen/pubs/jbme03.pdf). Journal of Biomechanical Engineering, 125(1):70–77. Module muscles.py ###Code # %load ./../functions/muscles.py """Muscle modeling and simulation.""" from __future__ import division, print_function import numpy as np from scipy.integrate import ode import warnings import configparser __author__ = 'Marcos Duarte, https://github.com/demotu/BMC' __version__ = 'muscles.py v.1 2015/03/01' class Thelen2003(): """ Thelen (2003) muscle model. 
""" def __init__(self, parameters=None, states=None): if parameters is not None: self.set_parameters(parameters) if states is not None: self.set_states(states) self.lm_data = [] self.act_data = [] def set_parameters(self, var=None): """Load and set parameters for the muscle model. """ if var is None: var = './../data/muscle_parameter.txt' if isinstance(var, str): self.P = self.config_parser(var, 'parameters') elif isinstance(var, dict): self.P = var else: raise ValueError('Wrong parameters!') print('The parameters were successfully loaded ' + 'and are stored in the variable P.') def set_states(self, var=None): """Load and set states for the muscle model. """ if var is None: var = './../data/muscle_state.txt' if isinstance(var, str): self.S = self.config_parser(var, 'states') elif isinstance(var, dict): self.S = var else: raise ValueError('Wrong states!') print('The states were successfully loaded ' + 'and are stored in the variable S.') def config_parser(self, filename, var): parser = configparser.ConfigParser() parser.optionxform = str # make option names case sensitive parser.read(filename) if not parser: raise ValueError('File %s not found!' %var) #if not 'Muscle' in parser.sections()[0]: # raise ValueError('Wrong %s file!' %var) var = {} for key, value in parser.items(parser.sections()[0]): if key.lower() in ['name', 'id']: var.update({key: value}) else: try: value = float(value) except ValueError: print('"%s" value "%s" was replaced by NaN.' %(key, value)) value = np.nan var.update({key: value}) return var def force_l(self, lm, gammal=None): """Thelen (2003) force of the contractile element vs. muscle length. 
Parameters ---------- lm : float normalized muscle fiber length gammal : float, optional (default from parameter file) shape factor Returns ------- fl : float normalized force of the muscle contractile element """ if gammal is None: gammal = self.P['gammal'] fl = np.exp(-(lm-1)**2/gammal) return fl def force_pe(self, lm, kpe=None, epsm0=None): """Thelen (2003) force of the muscle parallel element vs. muscle length. Parameters ---------- lm : float normalized muscle fiber length kpe : float, optional (default from parameter file) exponential shape factor epsm0 : float, optional (default from parameter file) passive muscle strain due to maximum isometric force Returns ------- fpe : float normalized force of the muscle parallel (passive) element """ if kpe is None: kpe = self.P['kpe'] if epsm0 is None: epsm0 = self.P['epsm0'] if lm <= 1: fpe = 0 else: fpe = (np.exp(kpe*(lm-1)/epsm0)-1)/(np.exp(kpe)-1) return fpe def force_se(self, lt, ltslack=None, epst0=None, kttoe=None): """Thelen (2003) force-length relationship of tendon vs. tendon length. 
Parameters ---------- lt : float tendon length (normalized or not) ltslack : float, optional (default from parameter file) tendon slack length (normalized or not) epst0 : float, optional (default from parameter file) tendon strain at the maximal isometric muscle force kttoe : float, optional (default from parameter file) linear scale factor Returns ------- fse : float normalized force of the tendon series element """ if ltslack is None: ltslack = self.P['ltslack'] if epst0 is None: epst0 = self.P['epst0'] if kttoe is None: kttoe = self.P['kttoe'] epst = (lt-ltslack)/ltslack fttoe = .33 # values from OpenSim Thelen2003Muscle epsttoe = .99*epst0*np.e**3/(1.66*np.e**3 - .67) ktlin = .67/(epst0 - epsttoe) # if epst <= 0: fse = 0 elif epst <= epsttoe: fse = fttoe/(np.exp(kttoe)-1)*(np.exp(kttoe*epst/epsttoe)-1) else: fse = ktlin*(epst-epsttoe) + fttoe return fse def velo_fm(self, fm, a, fl, lmopt=None, vmmax=None, fmlen=None, af=None): """Thelen (2003) velocity of the force-velocity relationship vs. CE force. 
Parameters ---------- fm : float normalized muscle force a : float muscle activation level fl : float normalized muscle force due to the force-length relationship lmopt : float, optional (default from parameter file) optimal muscle fiber length vmmax : float, optional (default from parameter file) normalized maximum muscle velocity for concentric activation fmlen : float, optional (default from parameter file) normalized maximum force generated at the lengthening phase af : float, optional (default from parameter file) shape factor Returns ------- vm : float velocity of the muscle """ if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fmlen is None: fmlen = self.P['fmlen'] if af is None: af = self.P['af'] if fm <= a*fl: # isometric and concentric activation if fm > 0: b = a*fl + fm/af else: b = a*fl else: # eccentric activation asyE_thresh = 0.95 # from OpenSim Thelen2003Muscle if fm < a*fl*fmlen*asyE_thresh: b = (2 + 2/af)*(a*fl*fmlen - fm)/(fmlen - 1) else: fm0 = a*fl*fmlen*asyE_thresh b = (2 + 2/af)*(a*fl*fmlen - fm0)/(fmlen - 1) vm = (0.25 + 0.75*a)*1*(fm - a*fl)/b vm = vm*vmmax*lmopt return vm def force_vm(self, vm, a, fl, lmopt=None, vmmax=None, fmlen=None, af=None): """Thelen (2003) force of the contractile element vs. muscle velocity. 
Parameters ---------- vm : float muscle velocity a : float muscle activation level fl : float normalized muscle force due to the force-length relationship lmopt : float, optional (default from parameter file) optimal muscle fiber length vmmax : float, optional (default from parameter file) normalized maximum muscle velocity for concentric activation fmlen : float, optional (default from parameter file) normalized normalized maximum force generated at the lengthening phase af : float, optional (default from parameter file) shape factor Returns ------- fvm : float normalized force of the muscle contractile element """ if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fmlen is None: fmlen = self.P['fmlen'] if af is None: af = self.P['af'] vmmax = vmmax*lmopt if vm <= 0: # isometric and concentric activation fvm = af*a*fl*(4*vm + vmmax*(3*a + 1))/(-4*vm + vmmax*af*(3*a + 1)) else: # eccentric activation fvm = a*fl*(af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + \ 8*vm*fmlen*(af + 1)) / \ (af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + 8*vm*(af + 1)) return fvm def lmt_eq(self, t, lmt0=None): """Equation for muscle-tendon length.""" if lmt0 is None: lmt0 = self.S['lmt0'] return lmt0 def vm_eq(self, t, lm, lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0): """Equation for muscle velocity.""" if lm < 0.1*lmopt: lm = 0.1*lmopt #lt0 = lmt0 - lm0*np.cos(alpha0) a = self.activation(t) lmt = self.lmt_eq(t, lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fse = self.force_se(lt=lt, ltslack=ltslack) fpe = self.force_pe(lm=lm/lmopt) fl = self.force_l(lm=lm/lmopt) fce_t = fse/np.cos(alpha) - fpe #if fce_t < 0: fce_t=0 vm = self.velo_fm(fm=fce_t, a=a, fl=fl) return vm def lm_sol(self, fun=None, t0=0, t1=3, lm0=None, lmt0=None, ltslack=None, lmopt=None, alpha0=None, vmmax=None, fm0=None, show=True, axs=None): """Runge-Kutta (4)5 ODE solver for muscle length.""" if lm0 is None: lm0 = self.S['lm0'] if lmt0 is None: 
lmt0 = self.S['lmt0'] if ltslack is None: ltslack = self.P['ltslack'] if alpha0 is None: alpha0 = self.P['alpha0'] if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fm0 is None: fm0 = self.P['fm0'] if fun is None: fun = self.vm_eq f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(lm0, t0).set_f_params(lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0) # suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) d = self.calc_data(f.t, np.max([f.y, 0.1*lmopt]), lm0, lmt0, ltslack, lmopt, alpha0, fm0) data.append(d) warnings.resetwarnings() data = np.array(data) self.lm_data = data if show: self.lm_plot(data, axs) return data def calc_data(self, t, lm, lm0, lmt0, ltslack, lmopt, alpha0, fm0): """Calculus of muscle-tendon variables.""" a = self.activation(t) lmt = self.lmt_eq(t, lmt0=lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fl = self.force_l(lm=lm/lmopt) fpe = self.force_pe(lm=lm/lmopt) fse = self.force_se(lt=lt, ltslack=ltslack) fce_t = fse/np.cos(alpha) - fpe vm = self.velo_fm(fm=fce_t, a=a, fl=fl, lmopt=lmopt) fm = self.force_vm(vm=vm, fl=fl, lmopt=lmopt, a=a) + fpe data = [t, lmt, lm, lt, vm, fm*fm0, fse*fm0, a*fl*fm0, fpe*fm0, alpha] return data def muscle_plot(self, a=1, axs=None): """Plot muscle-tendon relationships with length and velocity.""" try: import matplotlib.pyplot as plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=1, ncols=3, figsize=(9, 4)) lmopt = self.P['lmopt'] ltslack = self.P['ltslack'] vmmax = self.P['vmmax'] alpha0 = self.P['alpha0'] fm0 = self.P['fm0'] lm0 = self.S['lm0'] lmt0 = self.S['lmt0'] lt0 = self.S['lt0'] if np.isnan(lt0): lt0 = lmt0 - lm0*np.cos(alpha0) lm = np.linspace(0, 2, 101) lt = np.linspace(0, 1, 101)*0.05 + 1 vm = np.linspace(-1, 1, 101)*vmmax*lmopt fl = 
np.zeros(lm.size) fpe = np.zeros(lm.size) fse = np.zeros(lt.size) fvm = np.zeros(vm.size) fl_lm0 = self.force_l(lm0/lmopt) fpe_lm0 = self.force_pe(lm0/lmopt) fm_lm0 = fl_lm0 + fpe_lm0 ft_lt0 = self.force_se(lt0, ltslack)*fm0 for i in range(101): fl[i] = self.force_l(lm[i]) fpe[i] = self.force_pe(lm[i]) fse[i] = self.force_se(lt[i], ltslack=1) fvm[i] = self.force_vm(vm[i], a=a, fl=fl_lm0) lm = lm*lmopt lt = lt*ltslack fl = fl fpe = fpe fse = fse*fm0 fvm = fvm*fm0 xlim = self.margins(lm, margin=.05, minmargin=False) axs[0].set_xlim(xlim) ylim = self.margins([0, 2], margin=.05) axs[0].set_ylim(ylim) axs[0].plot(lm, fl, 'b', label='Active') axs[0].plot(lm, fpe, 'b--', label='Passive') axs[0].plot(lm, fl+fpe, 'b:', label='') axs[0].plot([lm0, lm0], [ylim[0], fm_lm0], 'k:', lw=2, label='') axs[0].plot([xlim[0], lm0], [fm_lm0, fm_lm0], 'k:', lw=2, label='') axs[0].plot(lm0, fm_lm0, 'o', ms=6, mfc='r', mec='r', mew=2, label='fl(LM0)') axs[0].legend(loc='best', frameon=True, framealpha=.5) axs[0].set_xlabel('Length [m]') axs[0].set_ylabel('Scale factor') axs[0].xaxis.set_major_locator(plt.MaxNLocator(4)) axs[0].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[0].set_title('Muscle F-L (a=1)') xlim = self.margins([0, np.min(vm), np.max(vm)], margin=.05, minmargin=False) axs[1].set_xlim(xlim) ylim = self.margins([0, fm0*1.2, np.max(fvm)*1.5], margin=.025) axs[1].set_ylim(ylim) axs[1].plot(vm, fvm, label='') axs[1].set_xlabel('$\mathbf{^{CON}}\;$ Velocity [m/s] $\;\mathbf{^{EXC}}$') axs[1].plot([0, 0], [ylim[0], fvm[50]], 'k:', lw=2, label='') axs[1].plot([xlim[0], 0], [fvm[50], fvm[50]], 'k:', lw=2, label='') axs[1].plot(0, fvm[50], 'o', ms=6, mfc='r', mec='r', mew=2, label='FM0(LM0)') axs[1].plot(xlim[0], fm0, '+', ms=10, mfc='r', mec='r', mew=2, label='') axs[1].text(vm[0], fm0, 'FM0') axs[1].legend(loc='upper right', frameon=True, framealpha=.5) axs[1].set_ylabel('Force [N]') axs[1].xaxis.set_major_locator(plt.MaxNLocator(4)) 
axs[1].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[1].set_title('Muscle F-V (a=1)') xlim = self.margins([lt0, ltslack, np.min(lt), np.max(lt)], margin=.05, minmargin=False) axs[2].set_xlim(xlim) ylim = self.margins([ft_lt0, 0, np.max(fse)], margin=.05) axs[2].set_ylim(ylim) axs[2].plot(lt, fse, label='') axs[2].set_xlabel('Length [m]') axs[2].plot([lt0, lt0], [ylim[0], ft_lt0], 'k:', lw=2, label='') axs[2].plot([xlim[0], lt0], [ft_lt0, ft_lt0], 'k:', lw=2, label='') axs[2].plot(lt0, ft_lt0, 'o', ms=6, mfc='r', mec='r', mew=2, label='FT(LT0)') axs[2].legend(loc='upper left', frameon=True, framealpha=.5) axs[2].set_ylabel('Force [N]') axs[2].xaxis.set_major_locator(plt.MaxNLocator(4)) axs[2].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[2].set_title('Tendon') plt.suptitle('Muscle-tendon mechanics', fontsize=18, y=1.03) plt.tight_layout(w_pad=.1) plt.show() def lm_plot(self, x, axs): """Plot results of actdyn_ode45 function. data = [t, lmt, lm, lt, vm, fm*fm0, fse*fm0, fl*fm0, fpe*fm0, alpha] """ try: import matplotlib.pyplot as plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=3, ncols=2, sharex=True, figsize=(10, 6)) axs[0, 0].plot(x[:, 0], x[:, 1], 'b', label='LMT') lmt = x[:, 2]*np.cos(x[:, 9]) + x[:, 3] if np.sum(x[:, 9]) > 0: axs[0, 0].plot(x[:, 0], lmt, 'g--', label=r'$LM \cos \alpha + LT$') else: axs[0, 0].plot(x[:, 0], lmt, 'g--', label=r'LM+LT') ylim = self.margins(x[:, 1], margin=.1) axs[0, 0].set_ylim(ylim) axs[0, 0].legend(framealpha=.5, loc='best') axs[0, 1].plot(x[:, 0], x[:, 3], 'b') #axs[0, 1].plot(x[:, 0], lt0*np.ones(len(x)), 'r') ylim = self.margins(x[:, 3], margin=.1) axs[0, 1].set_ylim(ylim) axs[1, 0].plot(x[:, 0], x[:, 2], 'b') #axs[1, 0].plot(x[:, 0], lmopt*np.ones(len(x)), 'r') ylim = self.margins(x[:, 2], margin=.1) axs[1, 0].set_ylim(ylim) axs[1, 1].plot(x[:, 0], x[:, 4], 'b') ylim = self.margins(x[:, 4], margin=.1) axs[1, 1].set_ylim(ylim) axs[2, 0].plot(x[:, 0], x[:, 5], 
'b', label='Muscle') axs[2, 0].plot(x[:, 0], x[:, 6], 'g--', label='Tendon') ylim = self.margins(x[:, [5, 6]], margin=.1) axs[2, 0].set_ylim(ylim) axs[2, 0].set_xlabel('Time (s)') axs[2, 0].legend(framealpha=.5, loc='best') axs[2, 1].plot(x[:, 0], x[:, 8], 'b', label='PE') ylim = self.margins(x[:, 8], margin=.1) axs[2, 1].set_ylim(ylim) axs[2, 1].set_xlabel('Time (s)') axs[2, 1].legend(framealpha=.5, loc='best') axs = axs.flatten() ylabel = ['$L_{MT}\,(m)$', '$L_{T}\,(m)$', '$L_{M}\,(m)$', '$V_{CE}\,(m/s)$', '$Force\,(N)$', '$Force\,(N)$'] for i, axi in enumerate(axs): axi.set_ylabel(ylabel[i], fontsize=14) axi.yaxis.set_major_locator(plt.MaxNLocator(4)) axi.yaxis.set_label_coords(-.2, 0.5) plt.suptitle('Simulation of muscle-tendon mechanics', fontsize=18, y=1.03) plt.tight_layout() plt.show() def penn_ang(self, lmt, lm, lt=None, lm0=None, alpha0=None): """Pennation angle. Parameters ---------- lmt : float muscle-tendon length lt : float, optional (default=None) tendon length lm : float, optional (default=None) muscle fiber length lm0 : float, optional (default from states file) initial muscle fiber length alpha0 : float, optional (default from parameter file) initial pennation angle Returns ------- alpha : float pennation angle """ if lm0 is None: lm0 = self.S['lm0'] if alpha0 is None: alpha0 = self.P['alpha0'] alpha = alpha0 if alpha0 != 0: w = lm0*np.sin(alpha0) if lm is not None: cosalpha = np.sqrt(1-(w/lm)**2) elif lmt is not None and lt is not None: cosalpha = 1/(np.sqrt(1 + (w/(lmt-lt))**2)) alpha = np.arccos(cosalpha) if alpha > 1.4706289: # np.arccos(0.1), 84.2608 degrees alpha = 1.4706289 return alpha def excitation(self, t, u_max=None, u_min=None, t0=0, t1=5): """Excitation signal, a square wave. 
Parameters ---------- t : float time instant [s] u_max : float (0 < u_max <= 1), optional (default from parameter file) maximum value for muscle excitation u_min : float (0 < u_min < 1), optional (default from parameter file) minimum value for muscle excitation t0 : float, optional (default=0) initial time instant for muscle excitation equals to u_max [s] t1 : float, optional (default=5) final time instant for muscle excitation equals to u_max [s] Returns ------- u : float (0 < u <= 1) excitation signal """ if u_max is None: u_max = self.P['u_max'] if u_min is None: u_min = self.P['u_min'] u = u_min if t >= t0 and t <= t1: u = u_max return u def activation_dyn(self, t, a, t_act=None, t_deact=None): """Thelen (2003) activation dynamics, the derivative of `a` at `t`. Parameters ---------- t : float time instant [s] a : float (0 <= a <= 1) muscle activation t_act : float, optional (default from parameter file) activation time constant [s] t_deact : float, optional (default from parameter file) deactivation time constant [s] Returns ------- adot : float derivative of `a` at `t` """ if t_act is None: t_act = self.P['t_act'] if t_deact is None: t_deact = self.P['t_deact'] u = self.excitation(t) if u > a: adot = (u - a)/(t_act*(0.5 + 1.5*a)) else: adot = (u - a)/(t_deact/(0.5 + 1.5*a)) return adot def activation_sol(self, fun=None, t0=0, t1=3, a0=0, u_min=None, t_act=None, t_deact=None, show=True, axs=None): """Runge-Kutta (4)5 ODE solver for activation dynamics. 
Parameters ---------- fun : function object, optional (default is None and `actdyn` is used) function with ODE to be solved t0 : float, optional (default=0) initial time instant for the simulation [s] t1 : float, optional (default=0) final time instant for the simulation [s] a0 : float, optional (default=0) initial muscle activation u_max : float (0 < u_max <= 1), optional (default from parameter file) maximum value for muscle excitation u_min : float (0 < u_min < 1), optional (default from parameter file) minimum value for muscle excitation t_act : float, optional (default from parameter file) activation time constant [s] t_deact : float, optional (default from parameter file) deactivation time constant [s] show : bool, optional (default = True) if True (1), plot data in matplotlib figure axs : a matplotlib.axes.Axes instance, optional (default = None) Returns ------- data : 2-d array array with columns [time, excitation, activation] """ if u_min is None: u_min = self.P['u_min'] if t_act is None: t_act = self.P['t_act'] if t_deact is None: t_deact = self.P['t_deact'] if fun is None: fun = self.activation_dyn f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(a0, t0).set_f_params(t_act, t_deact) # suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) data.append([f.t, self.excitation(f.t), np.max([f.y, u_min])]) warnings.resetwarnings() data = np.array(data) if show: self.actvation_plot(data, axs) self.act_data = data return data def activation(self, t=None): """Activation signal.""" data = self.act_data if t is not None and len(data): if t <= self.act_data[0, 0]: a = self.act_data[0, 2] elif t >= self.act_data[-1, 0]: a = self.act_data[-1, 2] else: a = np.interp(t, self.act_data[:, 0], self.act_data[:, 2]) else: a = 1 return a def actvation_plot(self, data, axs): """Plot results of actdyn_ode45 function.""" try: import matplotlib.pyplot as 
plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=1, ncols=1, figsize=(6, 4)) axs.plot(data[:, 0], data[:, 1], color=[1, 0, 0, .6], label='Excitation') axs.plot(data[:, 0], data[:, 2], color=[0, 0, 1, .6], label='Activation') axs.set_xlabel('Time [s]') axs.set_ylabel('Level') axs.legend() plt.title('Activation dynamics') plt.tight_layout() plt.show() def margins(self, x, margin=0.01, minmargin=True): """Calculate plot limits with extra margins. """ rang = np.nanmax(x) - np.nanmin(x) if rang < 0.001 and minmargin: rang = 0.001*np.nanmean(x)/margin if rang < 1: rang = 1 lim = [np.nanmin(x) - rang*margin, np.nanmax(x) + rang*margin] return lim ###Output _____no_output_____ ###Markdown Muscle simulationMarcos Duarte, Renato Watanabe Let's simulate the 3-component Hill-type muscle model we described in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb) and illustrated below:Figure. A Hill-type muscle model with three components: two for the muscle, an active contractile element, $\mathsf{CE}$, and a passive elastic element in parallel, $\mathsf{PE}$, with the $\mathsf{CE}$, and one component for the tendon, an elastic element in series, $\mathsf{SE}$, with the muscle. 
$\mathsf{L_{MT}}$: muscle–tendon length, $\mathsf{L_T}$: tendon length, $\mathsf{L_M}$: muscle fiber length, $\mathsf{F_T}$: tendon force, $\mathsf{F_M}$: muscle force, and $α$: pennation angle. The following relationships are true for the model:$$ \begin{array}{l}L_{MT} = L_{T} + L_M\cos\alpha \\\\L_M = L_{CE} = L_{PE} \\\\\dot{L}_M = \dot{L}_{CE} = \dot{L}_{PE} \\\\F_{M} = F_{CE} + F_{PE} \end{array} $$If we assume that the muscle–tendon system is at equilibrium, that is, that the muscle, $F_{M}$, and tendon, $F_{T}$, forces are in equilibrium at all times, and that a muscle can only pull, the following equation holds:$$ F_{T} = F_{SE} = F_{M}\cos\alpha $$ Pennation angleThe pennation angle varies during muscle activation; for instance, Kawakami et al. (1998) showed that the pennation angle of the medial gastrocnemius muscle can vary from 22$^o$ to 67$^o$ during activation. The most commonly used approach is to assume that the muscle width (defined as the length of the perpendicular line between the lines of the muscle origin and insertion) remains constant (Scott & Winter, 1991):$$ w = L_{M,0} \sin\alpha_0 $$The pennation angle as a function of time is then given by:$$ \alpha = \sin^{-1} \left(\frac{w}{L_M}\right) $$The cosine of the pennation angle can be computed as (if $L_M$ is known):$$ \cos \alpha = \frac{\sqrt{L_M^2-w^2}}{L_M} = \sqrt{1-\left(\frac{w}{L_M}\right)^2} $$or (if $L_M$ is not known):$$ \cos \alpha = \frac{L_{MT}-L_T}{L_M} = \frac{1}{\sqrt{1 + \left(\frac{w}{L_{MT}-L_T}\right)^2}} $$ Muscle forceIn general, the dependencies of the contractile element force on its length, on its velocity, and on the activation level are assumed independent of each other:$$ F_{CE}(a, L_{CE}, \dot{L}_{CE}) = a \: f_l(L_{CE}) \: f_v(\dot{L}_{CE}) \: F_{M0} $$where $f_l(L_M)$ and $f_v(\dot{L}_M)$ are mathematical functions describing the force-length and force-velocity relationships of the contractile element (typically these functions are normalized by $F_{M0}$, the maximum 
isometric (at zero velocity) muscle force, so we have to multiply the right side of the equation by $F_{M0}$). And for the muscle force:$$ F_{M}(a, L_M, \dot{L}_M) = \left[a \: f_l(L_M)f_v(\dot{L}_M) + F_{PE}(L_M)\right]F_{M0} $$This equation for the muscle force, with $a$, $L_{M}$, and $\dot{L}_{M}$ as state variables, can be used to simulate the dynamics of a muscle given an excitation and determine the muscle force and length. We can rearrange the equation, invert the expression for $f_v$, and integrate the resulting first-order ordinary differential equation (ODE) to obtain $L_M$:$$ \dot{L}_M = f_v^{-1}\left(\frac{F_{SE}(L_{MT}-L_M\cos\alpha)/\cos\alpha - F_{PE}(L_M)}{a f_l(L_M)}\right) $$This approach is the most commonly employed in the literature (see, for example, [OpenSim](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Muscle+Model+Theory+and+Publications); McLean, Su, van den Bogert, 2003; Thelen, 2003; Nigg and Herzog, 2007). Although the equation for the muscle force doesn't have numerical singularities, the differential equation for muscle velocity has four ([OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)): when $a \rightarrow 0$; when $f_l(L_M) \rightarrow 0$; when $\alpha \rightarrow \pi/2$; and when $\partial f_v/\partial v \rightarrow 0 $. The following solutions can be employed to avoid the numerical singularities ([OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)): adopt a minimum value for $a$, e.g., $a_{min}=0.01$; adopt a minimum value for $f_l(L_M)$, e.g., $f_l(0.1)$; adopt a maximum value for the pennation angle, e.g., constrain $\alpha$ to $\cos\alpha > 0.1 \; (\alpha < 84.26^o)$; and make the slope of $f_V$ at and beyond maximum velocity different from zero (for both concentric and eccentric activations). 
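These guards can be collected in a small helper. The following is only a minimal sketch with hypothetical names; the module applies equivalent clamps inside its own methods (e.g., `lm = 0.1*lmopt` in `vm_eq` and the `alpha` limit in `penn_ang`):

```python
import numpy as np

# Minimal sketch of the singularity guards discussed above; the names are
# hypothetical, but the module applies equivalent clamps in its methods.
A_MIN = 0.01          # minimum activation, avoids a -> 0
LM_MIN = 0.1          # minimum normalized fiber length, avoids fl(LM) -> 0
COS_ALPHA_MIN = 0.1   # cos(alpha) > 0.1, i.e., alpha < 84.26 degrees

def guard_states(a, lm_norm, alpha):
    """Clamp activation, normalized fiber length, and pennation angle."""
    a = max(a, A_MIN)
    lm_norm = max(lm_norm, LM_MIN)
    alpha = min(alpha, np.arccos(COS_ALPHA_MIN))  # avoids alpha -> pi/2
    return a, lm_norm, alpha
```

For instance, `guard_states(0.0, 0.05, np.pi/2)` clips all three state variables back to their admissible ranges.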
We will adopt these solutions to avoid singularities in the simulation of muscle mechanics. A problem with imposing values on variables as described above is that it can make the ordinary differential equation numerically stiff, which will increase the computational cost of the numerical integration. A better solution would be to modify the model so that it does not have these singularities (see [OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)). SimulationLet's simulate muscle dynamics using the Thelen2003Muscle model we defined in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb). For the simulation of the Thelen2003Muscle, we simply have to integrate the equation:$$ V_M = (0.25+0.75a)\,V_{Mmax}\frac{\bar{F}_M-a\bar{f}_{l,CE}}{b} $$ where$$ b = \left\{ \begin{array}{l l l} a\bar{f}_{l,CE} + \bar{F}_M/A_f \quad & \text{if} \quad \bar{F}_M \leq a\bar{f}_{l,CE} & \text{(shortening)} \\ \\ \frac{(2+2/A_f)(a\bar{f}_{l,CE}\bar{f}_{CEmax} - \bar{F}_M)}{\bar{f}_{CEmax}-1} \quad & \text{if} \quad \bar{F}_M > a\bar{f}_{l,CE} & \text{(lengthening)} \end{array} \right.$$ The equation above already contains the terms for activation, $a$, and force-length dependence, $\bar{f}_{l,CE}$. The equation is too complicated to solve analytically, so we will solve it by numerical integration using the [`scipy.integrate.ode`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.ode.html) class of numeric integrators, particularly `dopri5`, an explicit Runge-Kutta method of order (4)5 due to Dormand and Prince (a.k.a. ode45 in Matlab). We could run a simulation using [OpenSim](https://simtk.org/home/opensim); it would be faster, but for fun, let's program in Python. 
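The piecewise expression above translates almost directly into code. The sketch below is a bare transcription for illustration only (the module's `velo_fm` additionally guards the eccentric asymptote); the default values match the parameters used later in this notebook ($A_f=0.25$, $\bar{f}_{CEmax}=1.4$, $V_{Mmax}=10$):

```python
def vm_thelen(fm_bar, a, fl_bar, vmmax=10.0, af=0.25, fmlen=1.4):
    """Normalized muscle velocity from the piecewise Thelen (2003) equation.

    Bare transcription of the equation above for illustration; fmlen plays
    the role of the normalized maximum eccentric force (f_CEmax).
    """
    if fm_bar <= a*fl_bar:   # shortening
        b = a*fl_bar + fm_bar/af
    else:                    # lengthening
        b = (2 + 2/af)*(a*fl_bar*fmlen - fm_bar)/(fmlen - 1)
    return (0.25 + 0.75*a)*vmmax*(fm_bar - a*fl_bar)/b
```

At the isometric point ($\bar{F}_M = a\bar{f}_{l,CE}$) the numerator vanishes and the velocity is zero; forces below that point give shortening (negative) velocities and forces above it give lengthening (positive) velocities.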
All the necessary functions for the Thelen2003Muscle model described in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb) were grouped in one file (module), `muscles.py`. Besides these functions, the module `muscles.py` contains a function for the muscle velocity, `vm_eq`, which will be called by the function that specifies the numerical integration, `lm_sol`; a standard way of performing numerical integration in scientific computing:```python def vm_eq(self, t, lm, lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0): """Equation for muscle velocity.""" if lm < 0.1*lmopt: lm = 0.1*lmopt a = self.activation(t) lmt = self.lmt_eq(t, lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fse = self.force_se(lt=lt, ltslack=ltslack) fpe = self.force_pe(lm=lm/lmopt) fl = self.force_l(lm=lm/lmopt) fce_t = fse/np.cos(alpha) - fpe vm = self.velo_fm(fm=fce_t, a=a, fl=fl) return vmdef lm_sol(self, fun, t0, t1, lm0, lmt0, ltslack, lmopt, alpha0, vmmax, fm0, show, axs): """Runge-Kutta (4)5 ODE solver for muscle length.""" if fun is None: fun = self.vm_eq f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(lm0, t0).set_f_params(lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0) suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) d = self.calc_data(f.t, f.y, lm0, lmt0, ltslack, lmopt, alpha0, fm0) data.append(d) warnings.resetwarnings() data = np.array(data) self.lm_data = data if show: self.lm_plot(data, axs) return data````muscles.py` also contains some auxiliary functions for entering data and for plotting the results. 
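As a small standalone example, the constant-width pennation relation from the Pennation angle section (the core of the module's `penn_ang`) can be sketched as follows; this is a minimal sketch, since the module also covers the case where only $L_{MT}$ and $L_T$ are known and clamps the angle:

```python
import numpy as np

def pennation_angle(lm, lm0, alpha0):
    """Pennation angle from the constant muscle-width assumption.

    Minimal sketch of the relation used by penn_ang; the full method also
    handles the (lmt, lt) case and clamps alpha near pi/2.
    """
    w = lm0*np.sin(alpha0)   # constant muscle width (Scott & Winter, 1991)
    return np.arcsin(w/lm)   # alpha = asin(w/LM)
```

With the fibers at their initial length, the initial pennation angle is recovered: `pennation_angle(lm0, lm0, alpha0)` returns `alpha0` (up to floating-point precision), and a zero initial angle stays zero for any fiber length.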
Let's import the necessary Python libraries and customize the environment in order to run some simulations using `muscles.py`: ###Code import numpy as np import matplotlib.pyplot as plt %matplotlib inline #%matplotlib nbagg import matplotlib matplotlib.rcParams['lines.linewidth'] = 3 matplotlib.rcParams['font.size'] = 13 matplotlib.rcParams['lines.markersize'] = 5 matplotlib.rc('axes', grid=False, labelsize=14, titlesize=16, ymargin=0.05) matplotlib.rc('legend', numpoints=1, fontsize=11) # import the muscles.py module import sys sys.path.insert(1, r'./../functions') import muscles ###Output _____no_output_____ ###Markdown The `muscles.py` module contains the class `Thelen2003()`, which contains the functions we want to use. To use them, we need to create an instance of this class: ###Code ms = muscles.Thelen2003() ###Output _____no_output_____ ###Markdown Now, we need to enter the parameters and states for the simulation: we can load files with these values or pass them as arguments when calling the methods '`set_parameters()`' and '`set_states()`'. If nothing is passed, these methods assume that the parameters and states are stored in the files '`muscle_parameter.txt`' and '`muscle_state.txt`' inside the directory '`./../data/`'. Let's use some of the parameters and states from an exercise in chapter 4 of Nigg and Herzog (2006). ###Code ms.set_parameters() ms.set_states() ###Output The parameters were successfully loaded and are stored in the variable P. The states were successfully loaded and are stored in the variable S. 
###Markdown We can see the parameters and states: ###Code print('Parameters:\n', ms.P) print('States:\n', ms.S) ###Output Parameters: {'id': '', 'name': '', 'u_max': 1.0, 'u_min': 0.01, 't_act': 0.015, 't_deact': 0.05, 'lmopt': 0.093, 'alpha0': 0.0, 'fm0': 7400.0, 'gammal': 0.45, 'kpe': 5.0, 'epsm0': 0.6, 'vmmax': 10.0, 'fmlen': 1.4, 'af': 0.25, 'ltslack': 0.223, 'epst0': 0.04, 'kttoe': 3.0} States: {'id': '', 'name': '', 'lmt0': 0.31, 'lm0': 0.087, 'lt0': 0.223} ###Markdown We can plot the muscle-tendon forces considering these parameters and initial states: ###Code ms.muscle_plot(); ###Output _____no_output_____ ###Markdown Let's simulate an isometric activation (and since we didn't enter an activation level, $a=1$ will be used): ###Code def lmt_eq(t, lmt0): # isometric activation lmt = lmt0 return lmt ms.lmt_eq = lmt_eq data = ms.lm_sol() ###Output _____no_output_____ ###Markdown We can input a prescribed muscle-tendon length for the simulation: ###Code def lmt_eq(t, lmt0): # prescribed change in the muscle-tendon length if t < 1: lmt = lmt0 if 1 <= t < 2: lmt = lmt0 - 0.04*(t - 1) if t >= 2: lmt = lmt0 - 0.04 return lmt ms.lmt_eq = lmt_eq data = ms.lm_sol() ###Output _____no_output_____ ###Markdown Let's simulate a pennated muscle with an angle of $30^o$. 
We don't need to enter all the parameters again; we can change only the parameter `alpha0`: ###Code ms.P['alpha0'] = 30*np.pi/180 print('New initial pennation angle:', ms.P['alpha0']) ###Output New initial pennation angle: 0.5235987755982988 ###Markdown Because the muscle's contribution to the muscle-tendon length is now shortened by a factor of $\cos(30^o)$, we will also have to change the initial muscle-tendon length if we want to start with the tendon at its slack length: ###Code ms.S['lmt0'] = ms.S['lmt0'] - ms.S['lm0'] + ms.S['lm0']*np.cos(ms.P['alpha0']) print('New initial muscle-tendon length:', ms.S['lmt0']) data = ms.lm_sol() ###Output _____no_output_____ ###Markdown Here is a plot of the simulated pennation angle: ###Code plt.plot(data[:, 0], data[:, 9]*180/np.pi) plt.xlabel('Time (s)') plt.ylabel('Pennation angle $(^o)$') plt.show() ###Output _____no_output_____ ###Markdown Change back to the old values: ###Code ms.P['alpha0'] = 0 ms.S['lmt0'] = 0.313 ###Output _____no_output_____ ###Markdown We can change the initial states to show the role of the passive parallel element: ###Code ms.S = {'id': '', 'lt0': np.nan, 'lmt0': 0.323, 'lm0': 0.10, 'name': ''} ms.muscle_plot() ###Output _____no_output_____ ###Markdown Let's also change the excitation signal: ###Code def excitation(t, u_max=1, u_min=0.01, t0=1, t1=2): """Excitation signal, a hat signal.""" u = u_min if t >= t0 and t <= t1: u = u_max return u ms.excitation = excitation act = ms.activation_sol() ###Output _____no_output_____ ###Markdown And let's simulate an isometric contraction: ###Code def lmt_eq(t, lmt0): # isometric activation lmt = lmt0 return lmt ms.lmt_eq = lmt_eq data = ms.lm_sol() ###Output _____no_output_____ ###Markdown Let's use a train of pulses as excitation: ###Code def excitation(t, u_max=.5, u_min=0.01, t0=.2, t1=2): """Excitation signal, a train of square pulses.""" u = u_min ts = np.arange(1, 2.0, .1) #ts = np.delete(ts, np.arange(2, ts.size, 3)) if t >= ts[0] and t <= ts[1]: u = u_max elif t >= ts[2] and t <= ts[3]: u = u_max 
elif t >= ts[4] and t <= ts[5]: u = u_max elif t >= ts[6] and t <= ts[7]: u = u_max elif t >= ts[8] and t <= ts[9]: u = u_max return u ms.excitation = excitation act = ms.activation_sol() data = ms.lm_sol() ###Output _____no_output_____ ###Markdown References- Kawakami Y, Ichinose Y, Fukunaga T (1998) [Architectural and functional features of human triceps surae muscles during contraction](http://www.ncbi.nlm.nih.gov/pubmed/9688711). Journal of Applied Physiology, 85, 398–404. - McLean SG, Su A, van den Bogert AJ (2003) [Development and validation of a 3-D model to predict knee joint loading during dynamic movement](http://www.ncbi.nlm.nih.gov/pubmed/14986412). Journal of Biomechanical Engineering, 125, 864-74. - Nigg BM and Herzog W (2006) [Biomechanics of the Musculo-skeletal System](https://books.google.com.br/books?id=hOIeAQAAIAAJ&dq=editions:ISBN0470017678). 3rd Edition. Wiley. - Scott SH, Winter DA (1991) [A comparison of three muscle pennation assumptions and their effect on isometric and isotonic force](http://www.ncbi.nlm.nih.gov/pubmed/2037616). Journal of Biomechanics, 24, 163–167. - Thelen DG (2003) [Adjustment of muscle mechanics model parameters to simulate dynamic contractions in older adults](http://homepages.cae.wisc.edu/~thelen/pubs/jbme03.pdf). Journal of Biomechanical Engineering, 125(1):70–77. Module muscles.py ###Code # %load ./../functions/muscles.py """Muscle modeling and simulation.""" from __future__ import division, print_function import numpy as np from scipy.integrate import ode import warnings import configparser __author__ = 'Marcos Duarte, https://github.com/demotu/BMC' __version__ = 'muscles.py v.1 2015/03/01' class Thelen2003(): """ Thelen (2003) muscle model. 
""" def __init__(self, parameters=None, states=None): if parameters is not None: self.set_parameters(parameters) if states is not None: self.set_states(states) self.lm_data = [] self.act_data = [] def set_parameters(self, var=None): """Load and set parameters for the muscle model. """ if var is None: var = './../data/muscle_parameter.txt' if isinstance(var, str): self.P = self.config_parser(var, 'parameters') elif isinstance(var, dict): self.P = var else: raise ValueError('Wrong parameters!') print('The parameters were successfully loaded ' + 'and are stored in the variable P.') def set_states(self, var=None): """Load and set states for the muscle model. """ if var is None: var = './../data/muscle_state.txt' if isinstance(var, str): self.S = self.config_parser(var, 'states') elif isinstance(var, dict): self.S = var else: raise ValueError('Wrong states!') print('The states were successfully loaded ' + 'and are stored in the variable S.') def config_parser(self, filename, var): parser = configparser.ConfigParser() parser.optionxform = str # make option names case sensitive parser.read(filename) if not parser: raise ValueError('File %s not found!' %var) #if not 'Muscle' in parser.sections()[0]: # raise ValueError('Wrong %s file!' %var) var = {} for key, value in parser.items(parser.sections()[0]): if key.lower() in ['name', 'id']: var.update({key: value}) else: try: value = float(value) except ValueError: print('"%s" value "%s" was replaced by NaN.' %(key, value)) value = np.nan var.update({key: value}) return var def force_l(self, lm, gammal=None): """Thelen (2003) force of the contractile element vs. muscle length. 
Parameters ---------- lm : float normalized muscle fiber length gammal : float, optional (default from parameter file) shape factor Returns ------- fl : float normalized force of the muscle contractile element """ if gammal is None: gammal = self.P['gammal'] fl = np.exp(-(lm-1)**2/gammal) return fl def force_pe(self, lm, kpe=None, epsm0=None): """Thelen (2003) force of the muscle parallel element vs. muscle length. Parameters ---------- lm : float normalized muscle fiber length kpe : float, optional (default from parameter file) exponential shape factor epsm0 : float, optional (default from parameter file) passive muscle strain due to maximum isometric force Returns ------- fpe : float normalized force of the muscle parallel (passive) element """ if kpe is None: kpe = self.P['kpe'] if epsm0 is None: epsm0 = self.P['epsm0'] if lm <= 1: fpe = 0 else: fpe = (np.exp(kpe*(lm-1)/epsm0)-1)/(np.exp(kpe)-1) return fpe def force_se(self, lt, ltslack=None, epst0=None, kttoe=None): """Thelen (2003) force-length relationship of tendon vs. tendon length. 
Parameters ---------- lt : float tendon length (normalized or not) ltslack : float, optional (default from parameter file) tendon slack length (normalized or not) epst0 : float, optional (default from parameter file) tendon strain at the maximal isometric muscle force kttoe : float, optional (default from parameter file) linear scale factor Returns ------- fse : float normalized force of the tendon series element """ if ltslack is None: ltslack = self.P['ltslack'] if epst0 is None: epst0 = self.P['epst0'] if kttoe is None: kttoe = self.P['kttoe'] epst = (lt-ltslack)/ltslack fttoe = .33 # values from OpenSim Thelen2003Muscle epsttoe = .99*epst0*np.e**3/(1.66*np.e**3 - .67) ktlin = .67/(epst0 - epsttoe) # if epst <= 0: fse = 0 elif epst <= epsttoe: fse = fttoe/(np.exp(kttoe)-1)*(np.exp(kttoe*epst/epsttoe)-1) else: fse = ktlin*(epst-epsttoe) + fttoe return fse def velo_fm(self, fm, a, fl, lmopt=None, vmmax=None, fmlen=None, af=None): """Thelen (2003) velocity of the force-velocity relationship vs. CE force. 
Parameters ---------- fm : float normalized muscle force a : float muscle activation level fl : float normalized muscle force due to the force-length relationship lmopt : float, optional (default from parameter file) optimal muscle fiber length vmmax : float, optional (default from parameter file) normalized maximum muscle velocity for concentric activation fmlen : float, optional (default from parameter file) normalized maximum force generated at the lengthening phase af : float, optional (default from parameter file) shape factor Returns ------- vm : float velocity of the muscle """ if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fmlen is None: fmlen = self.P['fmlen'] if af is None: af = self.P['af'] if fm <= a*fl: # isometric and concentric activation if fm > 0: b = a*fl + fm/af else: b = a*fl else: # eccentric activation asyE_thresh = 0.95 # from OpenSim Thelen2003Muscle if fm < a*fl*fmlen*asyE_thresh: b = (2 + 2/af)*(a*fl*fmlen - fm)/(fmlen - 1) else: fm0 = a*fl*fmlen*asyE_thresh b = (2 + 2/af)*(a*fl*fmlen - fm0)/(fmlen - 1) vm = (0.25 + 0.75*a)*1*(fm - a*fl)/b vm = vm*vmmax*lmopt return vm def force_vm(self, vm, a, fl, lmopt=None, vmmax=None, fmlen=None, af=None): """Thelen (2003) force of the contractile element vs. muscle velocity. 
Parameters ---------- vm : float muscle velocity a : float muscle activation level fl : float normalized muscle force due to the force-length relationship lmopt : float, optional (default from parameter file) optimal muscle fiber length vmmax : float, optional (default from parameter file) normalized maximum muscle velocity for concentric activation fmlen : float, optional (default from parameter file) normalized maximum force generated at the lengthening phase af : float, optional (default from parameter file) shape factor Returns ------- fvm : float normalized force of the muscle contractile element """ if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fmlen is None: fmlen = self.P['fmlen'] if af is None: af = self.P['af'] vmmax = vmmax*lmopt if vm <= 0: # isometric and concentric activation fvm = af*a*fl*(4*vm + vmmax*(3*a + 1))/(-4*vm + vmmax*af*(3*a + 1)) else: # eccentric activation fvm = a*fl*(af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + \ 8*vm*fmlen*(af + 1)) / \ (af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + 8*vm*(af + 1)) return fvm def lmt_eq(self, t, lmt0=None): """Equation for muscle-tendon length.""" if lmt0 is None: lmt0 = self.S['lmt0'] return lmt0 def vm_eq(self, t, lm, lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0): """Equation for muscle velocity.""" if lm < 0.1*lmopt: lm = 0.1*lmopt #lt0 = lmt0 - lm0*np.cos(alpha0) a = self.activation(t) lmt = self.lmt_eq(t, lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fse = self.force_se(lt=lt, ltslack=ltslack) fpe = self.force_pe(lm=lm/lmopt) fl = self.force_l(lm=lm/lmopt) fce_t = fse/np.cos(alpha) - fpe #if fce_t < 0: fce_t=0 vm = self.velo_fm(fm=fce_t, a=a, fl=fl) return vm def lm_sol(self, fun=None, t0=0, t1=3, lm0=None, lmt0=None, ltslack=None, lmopt=None, alpha0=None, vmmax=None, fm0=None, show=True, axs=None): """Runge-Kutta (4)5 ODE solver for muscle length.""" if lm0 is None: lm0 = self.S['lm0'] if lmt0 is None: 
lmt0 = self.S['lmt0'] if ltslack is None: ltslack = self.P['ltslack'] if alpha0 is None: alpha0 = self.P['alpha0'] if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fm0 is None: fm0 = self.P['fm0'] if fun is None: fun = self.vm_eq f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(lm0, t0).set_f_params(lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0) # suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) d = self.calc_data(f.t, np.max([f.y, 0.1*lmopt]), lm0, lmt0, ltslack, lmopt, alpha0, fm0) data.append(d) warnings.resetwarnings() data = np.array(data) self.lm_data = data if show: self.lm_plot(data, axs) return data def calc_data(self, t, lm, lm0, lmt0, ltslack, lmopt, alpha0, fm0): """Calculus of muscle-tendon variables.""" a = self.activation(t) lmt = self.lmt_eq(t, lmt0=lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fl = self.force_l(lm=lm/lmopt) fpe = self.force_pe(lm=lm/lmopt) fse = self.force_se(lt=lt, ltslack=ltslack) fce_t = fse/np.cos(alpha) - fpe vm = self.velo_fm(fm=fce_t, a=a, fl=fl, lmopt=lmopt) fm = self.force_vm(vm=vm, fl=fl, lmopt=lmopt, a=a) + fpe data = [t, lmt, lm, lt, vm, fm*fm0, fse*fm0, a*fl*fm0, fpe*fm0, alpha] return data def muscle_plot(self, a=1, axs=None): """Plot muscle-tendon relationships with length and velocity.""" try: import matplotlib.pyplot as plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=1, ncols=3, figsize=(9, 4)) lmopt = self.P['lmopt'] ltslack = self.P['ltslack'] vmmax = self.P['vmmax'] alpha0 = self.P['alpha0'] fm0 = self.P['fm0'] lm0 = self.S['lm0'] lmt0 = self.S['lmt0'] lt0 = self.S['lt0'] if np.isnan(lt0): lt0 = lmt0 - lm0*np.cos(alpha0) lm = np.linspace(0, 2, 101) lt = np.linspace(0, 1, 101)*0.05 + 1 vm = np.linspace(-1, 1, 101)*vmmax*lmopt fl = 
np.zeros(lm.size) fpe = np.zeros(lm.size) fse = np.zeros(lt.size) fvm = np.zeros(vm.size) fl_lm0 = self.force_l(lm0/lmopt) fpe_lm0 = self.force_pe(lm0/lmopt) fm_lm0 = fl_lm0 + fpe_lm0 ft_lt0 = self.force_se(lt0, ltslack)*fm0 for i in range(101): fl[i] = self.force_l(lm[i]) fpe[i] = self.force_pe(lm[i]) fse[i] = self.force_se(lt[i], ltslack=1) fvm[i] = self.force_vm(vm[i], a=a, fl=fl_lm0) lm = lm*lmopt lt = lt*ltslack fl = fl fpe = fpe fse = fse*fm0 fvm = fvm*fm0 xlim = self.margins(lm, margin=.05, minmargin=False) axs[0].set_xlim(xlim) ylim = self.margins([0, 2], margin=.05) axs[0].set_ylim(ylim) axs[0].plot(lm, fl, 'b', label='Active') axs[0].plot(lm, fpe, 'b--', label='Passive') axs[0].plot(lm, fl+fpe, 'b:', label='') axs[0].plot([lm0, lm0], [ylim[0], fm_lm0], 'k:', lw=2, label='') axs[0].plot([xlim[0], lm0], [fm_lm0, fm_lm0], 'k:', lw=2, label='') axs[0].plot(lm0, fm_lm0, 'o', ms=6, mfc='r', mec='r', mew=2, label='fl(LM0)') axs[0].legend(loc='best', frameon=True, framealpha=.5) axs[0].set_xlabel('Length [m]') axs[0].set_ylabel('Scale factor') axs[0].xaxis.set_major_locator(plt.MaxNLocator(4)) axs[0].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[0].set_title('Muscle F-L (a=1)') xlim = self.margins([0, np.min(vm), np.max(vm)], margin=.05, minmargin=False) axs[1].set_xlim(xlim) ylim = self.margins([0, fm0*1.2, np.max(fvm)*1.5], margin=.025) axs[1].set_ylim(ylim) axs[1].plot(vm, fvm, label='') axs[1].set_xlabel('$\mathbf{^{CON}}\;$ Velocity [m/s] $\;\mathbf{^{EXC}}$') axs[1].plot([0, 0], [ylim[0], fvm[50]], 'k:', lw=2, label='') axs[1].plot([xlim[0], 0], [fvm[50], fvm[50]], 'k:', lw=2, label='') axs[1].plot(0, fvm[50], 'o', ms=6, mfc='r', mec='r', mew=2, label='FM0(LM0)') axs[1].plot(xlim[0], fm0, '+', ms=10, mfc='r', mec='r', mew=2, label='') axs[1].text(vm[0], fm0, 'FM0') axs[1].legend(loc='upper right', frameon=True, framealpha=.5) axs[1].set_ylabel('Force [N]') axs[1].xaxis.set_major_locator(plt.MaxNLocator(4)) 
axs[1].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[1].set_title('Muscle F-V (a=1)') xlim = self.margins([lt0, ltslack, np.min(lt), np.max(lt)], margin=.05, minmargin=False) axs[2].set_xlim(xlim) ylim = self.margins([ft_lt0, 0, np.max(fse)], margin=.05) axs[2].set_ylim(ylim) axs[2].plot(lt, fse, label='') axs[2].set_xlabel('Length [m]') axs[2].plot([lt0, lt0], [ylim[0], ft_lt0], 'k:', lw=2, label='') axs[2].plot([xlim[0], lt0], [ft_lt0, ft_lt0], 'k:', lw=2, label='') axs[2].plot(lt0, ft_lt0, 'o', ms=6, mfc='r', mec='r', mew=2, label='FT(LT0)') axs[2].legend(loc='upper left', frameon=True, framealpha=.5) axs[2].set_ylabel('Force [N]') axs[2].xaxis.set_major_locator(plt.MaxNLocator(4)) axs[2].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[2].set_title('Tendon') plt.suptitle('Muscle-tendon mechanics', fontsize=18, y=1.03) plt.tight_layout(w_pad=.1) plt.show() def lm_plot(self, x, axs): """Plot results of actdyn_ode45 function. data = [t, lmt, lm, lt, vm, fm*fm0, fse*fm0, fl*fm0, fpe*fm0, alpha] """ try: import matplotlib.pyplot as plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=3, ncols=2, sharex=True, figsize=(10, 6)) axs[0, 0].plot(x[:, 0], x[:, 1], 'b', label='LMT') lmt = x[:, 2]*np.cos(x[:, 9]) + x[:, 3] if np.sum(x[:, 9]) > 0: axs[0, 0].plot(x[:, 0], lmt, 'g--', label=r'$LM \cos \alpha + LT$') else: axs[0, 0].plot(x[:, 0], lmt, 'g--', label=r'LM+LT') ylim = self.margins(x[:, 1], margin=.1) axs[0, 0].set_ylim(ylim) axs[0, 0].legend(framealpha=.5, loc='best') axs[0, 1].plot(x[:, 0], x[:, 3], 'b') #axs[0, 1].plot(x[:, 0], lt0*np.ones(len(x)), 'r') ylim = self.margins(x[:, 3], margin=.1) axs[0, 1].set_ylim(ylim) axs[1, 0].plot(x[:, 0], x[:, 2], 'b') #axs[1, 0].plot(x[:, 0], lmopt*np.ones(len(x)), 'r') ylim = self.margins(x[:, 2], margin=.1) axs[1, 0].set_ylim(ylim) axs[1, 1].plot(x[:, 0], x[:, 4], 'b') ylim = self.margins(x[:, 4], margin=.1) axs[1, 1].set_ylim(ylim) axs[2, 0].plot(x[:, 0], x[:, 5], 
'b', label='Muscle') axs[2, 0].plot(x[:, 0], x[:, 6], 'g--', label='Tendon') ylim = self.margins(x[:, [5, 6]], margin=.1) axs[2, 0].set_ylim(ylim) axs[2, 0].set_xlabel('Time (s)') axs[2, 0].legend(framealpha=.5, loc='best') axs[2, 1].plot(x[:, 0], x[:, 8], 'b', label='PE') ylim = self.margins(x[:, 8], margin=.1) axs[2, 1].set_ylim(ylim) axs[2, 1].set_xlabel('Time (s)') axs[2, 1].legend(framealpha=.5, loc='best') axs = axs.flatten() ylabel = ['$L_{MT}\,(m)$', '$L_{T}\,(m)$', '$L_{M}\,(m)$', '$V_{CE}\,(m/s)$', '$Force\,(N)$', '$Force\,(N)$'] for i, axi in enumerate(axs): axi.set_ylabel(ylabel[i], fontsize=14) axi.yaxis.set_major_locator(plt.MaxNLocator(4)) axi.yaxis.set_label_coords(-.2, 0.5) plt.suptitle('Simulation of muscle-tendon mechanics', fontsize=18, y=1.03) plt.tight_layout() plt.show() def penn_ang(self, lmt, lm, lt=None, lm0=None, alpha0=None): """Pennation angle. Parameters ---------- lmt : float muscle-tendon length lt : float, optional (default=None) tendon length lm : float, optional (default=None) muscle fiber length lm0 : float, optional (default from states file) initial muscle fiber length alpha0 : float, optional (default from parameter file) initial pennation angle Returns ------- alpha : float pennation angle """ if lm0 is None: lm0 = self.S['lm0'] if alpha0 is None: alpha0 = self.P['alpha0'] alpha = alpha0 if alpha0 != 0: w = lm0*np.sin(alpha0) if lm is not None: cosalpha = np.sqrt(1-(w/lm)**2) elif lmt is not None and lt is not None: cosalpha = 1/(np.sqrt(1 + (w/(lmt-lt))**2)) alpha = np.arccos(cosalpha) if alpha > 1.4706289: # np.arccos(0.1), 84.2608 degrees alpha = 1.4706289 return alpha def excitation(self, t, u_max=None, u_min=None, t0=0, t1=5): """Excitation signal, a square wave. 
Parameters ---------- t : float time instant [s] u_max : float (0 < u_max <= 1), optional (default from parameter file) maximum value for muscle excitation u_min : float (0 < u_min < 1), optional (default from parameter file) minimum value for muscle excitation t0 : float, optional (default=0) initial time instant for muscle excitation equals to u_max [s] t1 : float, optional (default=5) final time instant for muscle excitation equals to u_max [s] Returns ------- u : float (0 < u <= 1) excitation signal """ if u_max is None: u_max = self.P['u_max'] if u_min is None: u_min = self.P['u_min'] u = u_min if t >= t0 and t <= t1: u = u_max return u def activation_dyn(self, t, a, t_act=None, t_deact=None): """Thelen (2003) activation dynamics, the derivative of `a` at `t`. Parameters ---------- t : float time instant [s] a : float (0 <= a <= 1) muscle activation t_act : float, optional (default from parameter file) activation time constant [s] t_deact : float, optional (default from parameter file) deactivation time constant [s] Returns ------- adot : float derivative of `a` at `t` """ if t_act is None: t_act = self.P['t_act'] if t_deact is None: t_deact = self.P['t_deact'] u = self.excitation(t) if u > a: adot = (u - a)/(t_act*(0.5 + 1.5*a)) else: adot = (u - a)/(t_deact/(0.5 + 1.5*a)) return adot def activation_sol(self, fun=None, t0=0, t1=3, a0=0, u_min=None, t_act=None, t_deact=None, show=True, axs=None): """Runge-Kutta (4)5 ODE solver for activation dynamics. 
Parameters ---------- fun : function object, optional (default is None and `actdyn` is used) function with ODE to be solved t0 : float, optional (default=0) initial time instant for the simulation [s] t1 : float, optional (default=0) final time instant for the simulation [s] a0 : float, optional (default=0) initial muscle activation u_max : float (0 < u_max <= 1), optional (default from parameter file) maximum value for muscle excitation u_min : float (0 < u_min < 1), optional (default from parameter file) minimum value for muscle excitation t_act : float, optional (default from parameter file) activation time constant [s] t_deact : float, optional (default from parameter file) deactivation time constant [s] show : bool, optional (default = True) if True (1), plot data in matplotlib figure axs : a matplotlib.axes.Axes instance, optional (default = None) Returns ------- data : 2-d array array with columns [time, excitation, activation] """ if u_min is None: u_min = self.P['u_min'] if t_act is None: t_act = self.P['t_act'] if t_deact is None: t_deact = self.P['t_deact'] if fun is None: fun = self.activation_dyn f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(a0, t0).set_f_params(t_act, t_deact) # suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) data.append([f.t, self.excitation(f.t), np.max([f.y, u_min])]) warnings.resetwarnings() data = np.array(data) if show: self.actvation_plot(data, axs) self.act_data = data return data def activation(self, t=None): """Activation signal.""" data = self.act_data if t is not None and len(data): if t <= self.act_data[0, 0]: a = self.act_data[0, 2] elif t >= self.act_data[-1, 0]: a = self.act_data[-1, 2] else: a = np.interp(t, self.act_data[:, 0], self.act_data[:, 2]) else: a = 1 return a def actvation_plot(self, data, axs): """Plot results of actdyn_ode45 function.""" try: import matplotlib.pyplot as 
plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=1, ncols=1, figsize=(6, 4)) axs.plot(data[:, 0], data[:, 1], color=[1, 0, 0, .6], label='Excitation') axs.plot(data[:, 0], data[:, 2], color=[0, 0, 1, .6], label='Activation') axs.set_xlabel('Time [s]') axs.set_ylabel('Level') axs.legend() plt.title('Activation dynamics') plt.tight_layout() plt.show() def margins(self, x, margin=0.01, minmargin=True): """Calculate plot limits with extra margins. """ rang = np.nanmax(x) - np.nanmin(x) if rang < 0.001 and minmargin: rang = 0.001*np.nanmean(x)/margin if rang < 1: rang = 1 lim = [np.nanmin(x) - rang*margin, np.nanmax(x) + rang*margin] return lim ###Output _____no_output_____ ###Markdown Muscle simulationMarcos Duarte Let's simulate the 3-component Hill-type muscle model we described in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb) and illustrated below:Figure. A Hill-type muscle model with three components: two for the muscle, an active contractile element, $\mathsf{CE}$, and a passive elastic element in parallel, $\mathsf{PE}$, with the $\mathsf{CE}$, and one component for the tendon, an elastic element in series, $\mathsf{SE}$, with the muscle. 
$\mathsf{L_{MT}}$: muscle–tendon length, $\mathsf{L_T}$: tendon length, $\mathsf{L_M}$: muscle fiber length, $\mathsf{F_T}$: tendon force, $\mathsf{F_M}$: muscle force, and $α$: pennation angle.

The following relationships hold for the model:

$$ \begin{array}{l}L_{MT} = L_{T} + L_M\cos\alpha \\\\L_M = L_{CE} = L_{PE} \\\\\dot{L}_M = \dot{L}_{CE} = \dot{L}_{PE} \\\\F_{M} = F_{CE} + F_{PE} \end{array} $$

If we assume that the muscle–tendon system is at equilibrium, that is, the muscle force, $F_{M}$, and the tendon force, $F_{T}$, are in equilibrium at all times (and that a muscle can only pull), the following equation holds:

$$ F_{T} = F_{SE} = F_{M}\cos\alpha $$

Pennation angle

The pennation angle varies during muscle activation; for instance, Kawakami et al. (1998) showed that the pennation angle of the medial gastrocnemius muscle can vary from 22$^o$ to 67$^o$ during activation. The most common approach is to assume that the muscle width (defined as the length of the perpendicular line between the lines of the muscle origin and insertion) remains constant (Scott & Winter, 1991):

$$ w = L_{M,0} \sin\alpha_0 $$

The pennation angle as a function of time is then given by:

$$ \alpha = \sin^{-1} \left(\frac{w}{L_M}\right) $$

The cosine of the pennation angle can be computed as (if $L_M$ is known):

$$ \cos \alpha = \frac{\sqrt{L_M^2-w^2}}{L_M} = \sqrt{1-\left(\frac{w}{L_M}\right)^2} $$

or (if $L_M$ is not known):

$$ \cos \alpha = \frac{L_{MT}-L_T}{L_M} = \frac{1}{\sqrt{1 + \left(\frac{w}{L_{MT}-L_T}\right)^2}} $$

Muscle force

In general, the dependence of the force of the contractile element on its length, its velocity, and the activation level is assumed separable:

$$ F_{CE}(a, L_{CE}, \dot{L}_{CE}) = a \: f_l(L_{CE}) \: f_v(\dot{L}_{CE}) \: F_{M0} $$

where $f_l(L_M)$ and $f_v(\dot{L}_M)$ are mathematical functions describing the force-length and force-velocity relationships of the contractile element (typically these functions are normalized by $F_{M0}$, the maximum
isometric (at zero velocity) muscle force, so we have to multiply the right side of the equation by $F_{M0}$). And for the muscle force:

$$ F_{M}(a, L_M, \dot{L}_M) = \left[a \: f_l(L_M)f_v(\dot{L}_M) + F_{PE}(L_M)\right]F_{M0} $$

This equation for the muscle force, with $a$, $L_{M}$, and $\dot{L}_{M}$ as state variables, can be used to simulate the dynamics of a muscle given an excitation and to determine the muscle force and length. We can rearrange the equation, invert the expression for $f_v$, and integrate the resulting first-order ordinary differential equation (ODE) to obtain $L_M$:

$$ \dot{L}_M = f_v^{-1}\left(\frac{F_{SE}(L_{MT}-L_M\cos\alpha)/\cos\alpha - F_{PE}(L_M)}{a f_l(L_M)}\right) $$

This approach is the one most commonly employed in the literature (see, for example, [OpenSim](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Muscle+Model+Theory+and+Publications); McLean, Su, van den Bogert, 2003; Thelen, 2003; Nigg and Herzog, 2006). Although the equation for the muscle force doesn't have numerical singularities, the differential equation for muscle velocity has four ([OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)): when $a \rightarrow 0$; when $f_l(L_M) \rightarrow 0$; when $\alpha \rightarrow \pi/2$; and when $\partial f_v/\partial v \rightarrow 0$. The following solutions can be employed to avoid these numerical singularities ([OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)):

- adopt a minimum value for $a$; e.g., $a_{min}=0.01$;
- adopt a minimum value for $f_l(L_M)$; e.g., $f_l(0.1)$;
- adopt a maximum value for the pennation angle; e.g., constrain $\alpha$ to $\cos\alpha > 0.1 \; (\alpha < 84.26^o)$;
- make the slope of $f_v$ at and beyond maximum velocity different from zero (for both concentric and eccentric activations).
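These guards amount to simple clamps on the state variables. The snippet below is only an illustration (the constant names and the function `clamp_state` are not part of `muscles.py`, which applies equivalent limits inside its own methods):

```python
import numpy as np

# Illustrative singularity guards; muscles.py enforces equivalent
# limits internally rather than through a helper like this one.
A_MIN = 0.01          # minimum activation
LM_NORM_MIN = 0.1     # f_l is evaluated at no less than 0.1 of lmopt
COS_ALPHA_MIN = 0.1   # cos(alpha) > 0.1, i.e., alpha < ~84.26 degrees


def clamp_state(a, lm_norm, alpha):
    """Clamp activation, normalized fiber length, and pennation angle."""
    a = max(a, A_MIN)
    lm_norm = max(lm_norm, LM_NORM_MIN)
    alpha = min(alpha, np.arccos(COS_ALPHA_MIN))
    return a, lm_norm, alpha


# A fully relaxed, very short, fully pennated state gets pulled back
# inside the numerically safe region:
a, lm_norm, alpha = clamp_state(0.0, 0.05, np.pi/2)
```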
We will adopt these solutions to avoid singularities in the simulation of muscle mechanics. A problem with imposing values on variables as described above is that it can make the ordinary differential equation numerically stiff, which will increase the computational cost of the numerical integration. A better solution would be to modify the model so that it does not have these singularities (see [OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)).

Simulation

Let's simulate muscle dynamics using the Thelen2003Muscle model we defined in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb). For the simulation of the Thelen2003Muscle, we simply have to integrate the equation:

$$ V_M = (0.25+0.75a)\,V_{Mmax}\frac{\bar{F}_M-a\bar{f}_{l,CE}}{b} $$

where

$$ b = \left\{ \begin{array}{l l l} a\bar{f}_{l,CE} + \bar{F}_M/A_f \quad & \text{if} \quad \bar{F}_M \leq a\bar{f}_{l,CE} & \text{(shortening)} \\ \\ \frac{(2+2/A_f)(a\bar{f}_{l,CE}\bar{f}_{CEmax} - \bar{F}_M)}{\bar{f}_{CEmax}-1} \quad & \text{if} \quad \bar{F}_M > a\bar{f}_{l,CE} & \text{(lengthening)} \end{array} \right.$$

The equation above already contains the terms for activation, $a$, and force-length dependence, $\bar{f}_{l,CE}$. The equation is too complicated to solve analytically, so we will solve it by numerical integration using the [`scipy.integrate.ode`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.ode.html) class of numeric integrators, particularly `dopri5`, an explicit Runge-Kutta method of order (4)5 due to Dormand and Prince (a.k.a. ode45 in Matlab). We could run a simulation using [OpenSim](https://simtk.org/home/opensim); it would be faster, but for fun, let's program in Python.
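The inverted force-velocity relation above can be sketched as a standalone function. This is a simplified rendition of the module's `velo_fm` (parameter defaults follow the parameter file used later; the guard that `velo_fm` applies near the eccentric asymptote, where $b \rightarrow 0$, is omitted here for brevity):

```python
def thelen_vm(fm, a, fl, vmmax=10.0, af=0.25, fmlen=1.4):
    """Thelen (2003) muscle velocity, in optimal fiber lengths per second.

    fm: normalized CE force; a: activation; fl: force-length factor.
    af is A_f and fmlen is f_CEmax in the equations above. Unlike the
    module's velo_fm, no guard is applied as fm approaches a*fl*fmlen.
    """
    if fm <= a*fl:   # isometric and concentric activation (shortening)
        b = a*fl + fm/af
    else:            # eccentric activation (lengthening)
        b = (2 + 2/af)*(a*fl*fmlen - fm)/(fmlen - 1)
    return (0.25 + 0.75*a)*vmmax*(fm - a*fl)/b


# When the CE force equals a*fl, the contraction is isometric (vm = 0):
v_iso = thelen_vm(fm=0.8, a=1.0, fl=0.8)
```

As expected from the equation, forces below $a\bar{f}_{l,CE}$ give negative (shortening) velocities and forces above it give positive (lengthening) velocities.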
All the necessary functions for the Thelen2003Muscle model described in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb) were grouped in one file (module), `muscles.py`. Besides these functions, the module `muscles.py` contains a function for the muscle velocity, `vm_eq`, which is called by the function that performs the numerical integration, `lm_sol`; this is a standard way of performing numerical integration in scientific computing:

```python
def vm_eq(self, t, lm, lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0):
    """Equation for muscle velocity."""
    if lm < 0.1*lmopt:
        lm = 0.1*lmopt
    a = self.activation(t)
    lmt = self.lmt_eq(t, lmt0)
    alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0)
    lt = lmt - lm*np.cos(alpha)
    fse = self.force_se(lt=lt, ltslack=ltslack)
    fpe = self.force_pe(lm=lm/lmopt)
    fl = self.force_l(lm=lm/lmopt)
    fce_t = fse/np.cos(alpha) - fpe
    vm = self.velo_fm(fm=fce_t, a=a, fl=fl)
    return vm


def lm_sol(self, fun, t0, t1, lm0, lmt0, ltslack, lmopt, alpha0, vmmax, fm0, show, axs):
    """Runge-Kutta (4)5 ODE solver for muscle length."""
    if fun is None:
        fun = self.vm_eq
    f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8)
    f.set_initial_value(lm0, t0).set_f_params(lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0)
    # suppress Fortran warning
    warnings.filterwarnings("ignore", category=UserWarning)
    data = []
    while f.t < t1:
        f.integrate(t1, step=True)
        d = self.calc_data(f.t, f.y, lm0, lmt0, ltslack, lmopt, alpha0, fm0)
        data.append(d)
    warnings.resetwarnings()
    data = np.array(data)
    self.lm_data = data
    if show:
        self.lm_plot(data, axs)
    return data
```

`muscles.py` also contains some auxiliary functions for entering data and for plotting the results.
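The step-wise `dopri5` driving pattern used in `lm_sol` can be exercised on a toy ODE; here exponential decay, $\dot{y} = -2y$, chosen only to keep the example self-contained:

```python
import warnings

import numpy as np
from scipy.integrate import ode


def dydt(t, y):
    """Toy ODE: exponential decay with rate 2 (analytic solution exp(-2*t))."""
    return -2.0*y

f = ode(dydt).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8)
f.set_initial_value(1.0, 0.0)

# nsteps=1 makes each integrate() call advance a single internal step;
# the resulting "excess work" warning is suppressed, as in muscles.py.
warnings.filterwarnings("ignore", category=UserWarning)
data = []
while f.t < 1.0:
    f.integrate(1.0, step=True)
    data.append([f.t, f.y[0]])
warnings.resetwarnings()
data = np.array(data)
```

The solution stored in `data` closely tracks the analytic result $e^{-2t}$, which is a quick sanity check before trusting the same loop with the muscle equations.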
Let's import the necessary Python libraries and customize the environment in order to run some simulations using `muscles.py`:

###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
#%matplotlib nbagg
import matplotlib
matplotlib.rcParams['lines.linewidth'] = 3
matplotlib.rcParams['font.size'] = 13
matplotlib.rcParams['lines.markersize'] = 5
matplotlib.rc('axes', grid=False, labelsize=14, titlesize=16, ymargin=0.05)
matplotlib.rc('legend', numpoints=1, fontsize=11)
# import the muscles.py module
import sys
sys.path.insert(1, r'./../functions')
import muscles
###Output
_____no_output_____
###Markdown
The `muscles.py` module contains the class `Thelen2003()`, which has the functions we want to use. To use them, we need to create an instance of this class:
###Code
ms = muscles.Thelen2003()
###Output
_____no_output_____
###Markdown
Now we need to enter the parameters and states for the simulation: we can load files with these values or pass them as input arguments when calling the methods `set_parameters()` and `set_states()`. If nothing is passed, these methods assume that the parameters and states are stored in the files `muscle_parameter.txt` and `muscle_state.txt` inside the directory `./../data/`. Let's use some of the parameters and states from an exercise in chapter 4 of Nigg and Herzog (2006).
###Code
ms.set_parameters()
ms.set_states()
###Output
The parameters were successfully loaded and are stored in the variable P.
"lt0" value "" was replaced by NaN.
The states were successfully loaded and are stored in the variable S.
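For reference, a parameter file that `set_parameters()` could read might look like the sketch below. The section name `[Muscle]` and the overall layout are assumptions (only the option names and values come from the loaded parameters), and the parsing mirrors the module's `config_parser`:

```python
import configparser

# Hypothetical parameter file in INI form; the '[Muscle]' section name and
# this exact layout are assumptions, not taken from the actual data file.
TEXT = """[Muscle]
name =
id =
fm0 = 7400.0
lmopt = 0.093
ltslack = 0.223
alpha0 = 0.0
"""

parser = configparser.ConfigParser()
parser.optionxform = str  # keep option names case sensitive, as muscles.py does
parser.read_string(TEXT)

P = {}
for key, value in parser.items(parser.sections()[0]):
    if key.lower() in ['name', 'id']:
        P[key] = value             # free-text fields stay as strings
    else:
        try:
            P[key] = float(value)  # numeric fields become floats
        except ValueError:
            P[key] = float('nan')  # unparsable values are replaced by NaN
```

This reproduces the behavior reported in the output above, where an empty `lt0` value in the state file was replaced by NaN.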
###Markdown
We can see the parameters and states:
###Code
print('Parameters:\n', ms.P)
print('States:\n', ms.S)
###Output
Parameters:
 {'u_min': 0.01, 'ltslack': 0.223, 'id': '', 'lmopt': 0.093, 't_act': 0.015, 'epst0': 0.04, 'af': 0.25, 'gammal': 0.45, 'fmlen': 1.4, 'kttoe': 3.0, 'epsm0': 0.6, 'name': '', 'kpe': 5.0, 'alpha0': 0.0, 'u_max': 1.0, 'fm0': 7400.0, 't_deact': 0.05, 'vmmax': 10.0}
States:
 {'lm0': 0.09, 'lmt0': 0.313, 'lt0': nan, 'id': '', 'name': ''}
###Markdown
We can plot the muscle-tendon forces considering these parameters and initial states:
###Code
ms.muscle_plot()
###Output
_____no_output_____
###Markdown
Let's simulate an isometric activation (and since we didn't enter an activation level, $a=1$ will be used):
###Code
def lmt_eq(t, lmt0):  # isometric activation
    lmt = lmt0
    return lmt

ms.lmt_eq = lmt_eq
data = ms.lm_sol()
###Output
_____no_output_____
###Markdown
We can input a prescribed muscle-tendon length for the simulation:
###Code
def lmt_eq(t, lmt0):  # prescribed change in the muscle-tendon length
    if t < 1:
        lmt = lmt0
    if 1 <= t < 2:
        lmt = lmt0 - 0.04*(t - 1)
    if t >= 2:
        lmt = lmt0 - 0.04
    return lmt

ms.lmt_eq = lmt_eq
data = ms.lm_sol()
###Output
_____no_output_____
###Markdown
Let's simulate a pennated muscle with an angle of $30^o$.
We don't need to enter all parameters again; we can change only the parameter `alpha0`:

###Code
ms.P['alpha0'] = 30*np.pi/180
print('New initial pennation angle:', ms.P['alpha0'])
###Output
New initial pennation angle: 0.5235987755982988
###Markdown
Because the muscle length along the tendon is now shortened by a factor $\cos(30^o)$, we will also have to change the initial muscle-tendon length if we want to start with the tendon at its slack length:
###Code
ms.S['lmt0'] = ms.S['lmt0'] - ms.S['lm0'] + ms.S['lm0']*np.cos(ms.P['alpha0'])
print('New initial muscle-tendon length:', ms.S['lmt0'])
data = ms.lm_sol()
###Output
_____no_output_____
###Markdown
Here is a plot of the simulated pennation angle:
###Code
plt.plot(data[:, 0], data[:, 9]*180/np.pi)
plt.xlabel('Time (s)')
plt.ylabel('Pennation angle $(^o)$')
plt.show()
###Output
_____no_output_____
###Markdown
Change back to the old values:
###Code
ms.P['alpha0'] = 0
ms.S['lmt0'] = 0.313
###Output
_____no_output_____
###Markdown
We can change the initial states to show the role of the passive parallel element:
###Code
ms.S = {'id': '', 'lt0': np.nan, 'lmt0': 0.323, 'lm0': 0.10, 'name': ''}
ms.muscle_plot()
###Output
_____no_output_____
###Markdown
Let's also change the excitation signal:
###Code
def excitation(t, u_max=1, u_min=0.01, t0=1, t1=2):
    """Excitation signal, a hat signal."""
    u = u_min
    if t >= t0 and t <= t1:
        u = u_max
    return u

ms.excitation = excitation
act = ms.activation_sol()
###Output
_____no_output_____
###Markdown
And let's simulate an isometric contraction:
###Code
def lmt_eq(t, lmt0):  # isometric activation
    lmt = lmt0
    return lmt

ms.lmt_eq = lmt_eq
data = ms.lm_sol()
###Output
_____no_output_____
###Markdown
Let's use as excitation a train of pulses:
###Code
def excitation(t, u_max=.5, u_min=0.01, t0=.2, t1=2):
    """Excitation signal, a train of square pulses."""
    u = u_min
    ts = np.arange(1, 2.0, .1)
    #ts = np.delete(ts, np.arange(2, ts.size, 3))
    if t >= ts[0] and t <= ts[1]:
        u = u_max
    elif t >= ts[2] and t <= ts[3]:
        u = u_max
    elif t >= ts[4] and t <= ts[5]:
        u = u_max
    elif t >= ts[6] and t <= ts[7]:
        u = u_max
    elif t >= ts[8] and t <= ts[9]:
        u = u_max
    return u

ms.excitation = excitation
act = ms.activation_sol()
data = ms.lm_sol()
###Output
_____no_output_____
###Markdown
References

- Kawakami Y, Ichinose Y, Fukunaga T (1998) [Architectural and functional features of human triceps surae muscles during contraction](http://www.ncbi.nlm.nih.gov/pubmed/9688711). Journal of Applied Physiology, 85, 398–404.
- McLean SG, Su A, van den Bogert AJ (2003) [Development and validation of a 3-D model to predict knee joint loading during dynamic movement](http://www.ncbi.nlm.nih.gov/pubmed/14986412). Journal of Biomechanical Engineering, 125, 864–874.
- Nigg BM and Herzog W (2006) [Biomechanics of the Musculo-skeletal System](https://books.google.com.br/books?id=hOIeAQAAIAAJ&dq=editions:ISBN0470017678). 3rd Edition. Wiley.
- Scott SH, Winter DA (1991) [A comparison of three muscle pennation assumptions and their effect on isometric and isotonic force](http://www.ncbi.nlm.nih.gov/pubmed/2037616). Journal of Biomechanics, 24, 163–167.
- Thelen DG (2003) [Adjustment of muscle mechanics model parameters to simulate dynamic contractions in older adults](http://homepages.cae.wisc.edu/~thelen/pubs/jbme03.pdf). Journal of Biomechanical Engineering, 125(1), 70–77.

Module muscles.py

###Code
# %load ./../functions/muscles.py
"""Muscle modeling and simulation."""

from __future__ import division, print_function
import numpy as np
from scipy.integrate import ode
import warnings
import configparser

__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'
__version__ = 'muscles.py v.1 2015/03/01'


class Thelen2003():
    """
    Thelen (2003) muscle model.
""" def __init__(self, parameters=None, states=None): if parameters is not None: self.set_parameters(parameters) if states is not None: self.set_states(states) self.lm_data = [] self.act_data = [] def set_parameters(self, var=None): """Load and set parameters for the muscle model. """ if var is None: var = './../data/muscle_parameter.txt' if isinstance(var, str): self.P = self.config_parser(var, 'parameters') elif isinstance(var, dict): self.P = var else: raise ValueError('Wrong parameters!') print('The parameters were successfully loaded ' + 'and are stored in the variable P.') def set_states(self, var=None): """Load and set states for the muscle model. """ if var is None: var = './../data/muscle_state.txt' if isinstance(var, str): self.S = self.config_parser(var, 'states') elif isinstance(var, dict): self.S = var else: raise ValueError('Wrong states!') print('The states were successfully loaded ' + 'and are stored in the variable S.') def config_parser(self, filename, var): parser = configparser.ConfigParser() parser.optionxform = str # make option names case sensitive parser.read(filename) if not parser: raise ValueError('File %s not found!' %var) #if not 'Muscle' in parser.sections()[0]: # raise ValueError('Wrong %s file!' %var) var = {} for key, value in parser.items(parser.sections()[0]): if key.lower() in ['name', 'id']: var.update({key: value}) else: try: value = float(value) except ValueError: print('"%s" value "%s" was replaced by NaN.' %(key, value)) value = np.nan var.update({key: value}) return var def force_l(self, lm, gammal=None): """Thelen (2003) force of the contractile element vs. muscle length. 
Parameters ---------- lm : float normalized muscle fiber length gammal : float, optional (default from parameter file) shape factor Returns ------- fl : float normalized force of the muscle contractile element """ if gammal is None: gammal = self.P['gammal'] fl = np.exp(-(lm-1)**2/gammal) return fl def force_pe(self, lm, kpe=None, epsm0=None): """Thelen (2003) force of the muscle parallel element vs. muscle length. Parameters ---------- lm : float normalized muscle fiber length kpe : float, optional (default from parameter file) exponential shape factor epsm0 : float, optional (default from parameter file) passive muscle strain due to maximum isometric force Returns ------- fpe : float normalized force of the muscle parallel (passive) element """ if kpe is None: kpe = self.P['kpe'] if epsm0 is None: epsm0 = self.P['epsm0'] if lm <= 1: fpe = 0 else: fpe = (np.exp(kpe*(lm-1)/epsm0)-1)/(np.exp(kpe)-1) return fpe def force_se(self, lt, ltslack=None, epst0=None, kttoe=None): """Thelen (2003) force-length relationship of tendon vs. tendon length. 
Parameters ---------- lt : float tendon length (normalized or not) ltslack : float, optional (default from parameter file) tendon slack length (normalized or not) epst0 : float, optional (default from parameter file) tendon strain at the maximal isometric muscle force kttoe : float, optional (default from parameter file) linear scale factor Returns ------- fse : float normalized force of the tendon series element """ if ltslack is None: ltslack = self.P['ltslack'] if epst0 is None: epst0 = self.P['epst0'] if kttoe is None: kttoe = self.P['kttoe'] epst = (lt-ltslack)/ltslack fttoe = .33 # values from OpenSim Thelen2003Muscle epsttoe = .99*epst0*np.e**3/(1.66*np.e**3 - .67) ktlin = .67/(epst0 - epsttoe) # if epst <= 0: fse = 0 elif epst <= epsttoe: fse = fttoe/(np.exp(kttoe)-1)*(np.exp(kttoe*epst/epsttoe)-1) else: fse = ktlin*(epst-epsttoe) + fttoe return fse def velo_fm(self, fm, a, fl, lmopt=None, vmmax=None, fmlen=None, af=None): """Thelen (2003) velocity of the force-velocity relationship vs. CE force. 
Parameters ---------- fm : float normalized muscle force a : float muscle activation level fl : float normalized muscle force due to the force-length relationship lmopt : float, optional (default from parameter file) optimal muscle fiber length vmmax : float, optional (default from parameter file) normalized maximum muscle velocity for concentric activation fmlen : float, optional (default from parameter file) normalized maximum force generated at the lengthening phase af : float, optional (default from parameter file) shape factor Returns ------- vm : float velocity of the muscle """ if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fmlen is None: fmlen = self.P['fmlen'] if af is None: af = self.P['af'] if fm <= a*fl: # isometric and concentric activation if fm > 0: b = a*fl + fm/af else: b = a*fl else: # eccentric activation asyE_thresh = 0.95 # from OpenSim Thelen2003Muscle if fm < a*fl*fmlen*asyE_thresh: b = (2 + 2/af)*(a*fl*fmlen - fm)/(fmlen - 1) else: fm0 = a*fl*fmlen*asyE_thresh b = (2 + 2/af)*(a*fl*fmlen - fm0)/(fmlen - 1) vm = (0.25 + 0.75*a)*1*(fm - a*fl)/b vm = vm*vmmax*lmopt return vm def force_vm(self, vm, a, fl, lmopt=None, vmmax=None, fmlen=None, af=None): """Thelen (2003) force of the contractile element vs. muscle velocity. 
Parameters ---------- vm : float muscle velocity a : float muscle activation level fl : float normalized muscle force due to the force-length relationship lmopt : float, optional (default from parameter file) optimal muscle fiber length vmmax : float, optional (default from parameter file) normalized maximum muscle velocity for concentric activation fmlen : float, optional (default from parameter file) normalized normalized maximum force generated at the lengthening phase af : float, optional (default from parameter file) shape factor Returns ------- fvm : float normalized force of the muscle contractile element """ if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fmlen is None: fmlen = self.P['fmlen'] if af is None: af = self.P['af'] vmmax = vmmax*lmopt if vm <= 0: # isometric and concentric activation fvm = af*a*fl*(4*vm + vmmax*(3*a + 1))/(-4*vm + vmmax*af*(3*a + 1)) else: # eccentric activation fvm = a*fl*(af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + \ 8*vm*fmlen*(af + 1)) / \ (af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + 8*vm*(af + 1)) return fvm def lmt_eq(self, t, lmt0=None): """Equation for muscle-tendon length.""" if lmt0 is None: lmt0 = self.S['lmt0'] return lmt0 def vm_eq(self, t, lm, lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0): """Equation for muscle velocity.""" if lm < 0.1*lmopt: lm = 0.1*lmopt #lt0 = lmt0 - lm0*np.cos(alpha0) a = self.activation(t) lmt = self.lmt_eq(t, lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fse = self.force_se(lt=lt, ltslack=ltslack) fpe = self.force_pe(lm=lm/lmopt) fl = self.force_l(lm=lm/lmopt) fce_t = fse/np.cos(alpha) - fpe #if fce_t < 0: fce_t=0 vm = self.velo_fm(fm=fce_t, a=a, fl=fl) return vm def lm_sol(self, fun=None, t0=0, t1=3, lm0=None, lmt0=None, ltslack=None, lmopt=None, alpha0=None, vmmax=None, fm0=None, show=True, axs=None): """Runge-Kutta (4)5 ODE solver for muscle length.""" if lm0 is None: lm0 = self.S['lm0'] if lmt0 is None: 
lmt0 = self.S['lmt0'] if ltslack is None: ltslack = self.P['ltslack'] if alpha0 is None: alpha0 = self.P['alpha0'] if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fm0 is None: fm0 = self.P['fm0'] if fun is None: fun = self.vm_eq f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(lm0, t0).set_f_params(lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0) # suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) d = self.calc_data(f.t, np.max([f.y, 0.1*lmopt]), lm0, lmt0, ltslack, lmopt, alpha0, fm0) data.append(d) warnings.resetwarnings() data = np.array(data) self.lm_data = data if show: self.lm_plot(data, axs) return data def calc_data(self, t, lm, lm0, lmt0, ltslack, lmopt, alpha0, fm0): """Calculus of muscle-tendon variables.""" a = self.activation(t) lmt = self.lmt_eq(t, lmt0=lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fl = self.force_l(lm=lm/lmopt) fpe = self.force_pe(lm=lm/lmopt) fse = self.force_se(lt=lt, ltslack=ltslack) fce_t = fse/np.cos(alpha) - fpe vm = self.velo_fm(fm=fce_t, a=a, fl=fl, lmopt=lmopt) fm = self.force_vm(vm=vm, fl=fl, lmopt=lmopt, a=a) + fpe data = [t, lmt, lm, lt, vm, fm*fm0, fse*fm0, a*fl*fm0, fpe*fm0, alpha] return data def muscle_plot(self, a=1, axs=None): """Plot muscle-tendon relationships with length and velocity.""" try: import matplotlib.pyplot as plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=1, ncols=3, figsize=(9, 4)) lmopt = self.P['lmopt'] ltslack = self.P['ltslack'] vmmax = self.P['vmmax'] alpha0 = self.P['alpha0'] fm0 = self.P['fm0'] lm0 = self.S['lm0'] lmt0 = self.S['lmt0'] lt0 = self.S['lt0'] if np.isnan(lt0): lt0 = lmt0 - lm0*np.cos(alpha0) lm = np.linspace(0, 2, 101) lt = np.linspace(0, 1, 101)*0.05 + 1 vm = np.linspace(-1, 1, 101)*vmmax*lmopt fl = 
np.zeros(lm.size) fpe = np.zeros(lm.size) fse = np.zeros(lt.size) fvm = np.zeros(vm.size) fl_lm0 = self.force_l(lm0/lmopt) fpe_lm0 = self.force_pe(lm0/lmopt) fm_lm0 = fl_lm0 + fpe_lm0 ft_lt0 = self.force_se(lt0, ltslack)*fm0 for i in range(101): fl[i] = self.force_l(lm[i]) fpe[i] = self.force_pe(lm[i]) fse[i] = self.force_se(lt[i], ltslack=1) fvm[i] = self.force_vm(vm[i], a=a, fl=fl_lm0) lm = lm*lmopt lt = lt*ltslack fl = fl fpe = fpe fse = fse*fm0 fvm = fvm*fm0 xlim = self.margins(lm, margin=.05, minmargin=False) axs[0].set_xlim(xlim) ylim = self.margins([0, 2], margin=.05) axs[0].set_ylim(ylim) axs[0].plot(lm, fl, 'b', label='Active') axs[0].plot(lm, fpe, 'b--', label='Passive') axs[0].plot(lm, fl+fpe, 'b:', label='') axs[0].plot([lm0, lm0], [ylim[0], fm_lm0], 'k:', lw=2, label='') axs[0].plot([xlim[0], lm0], [fm_lm0, fm_lm0], 'k:', lw=2, label='') axs[0].plot(lm0, fm_lm0, 'o', ms=6, mfc='r', mec='r', mew=2, label='fl(LM0)') axs[0].legend(loc='best', frameon=True, framealpha=.5) axs[0].set_xlabel('Length [m]') axs[0].set_ylabel('Scale factor') axs[0].xaxis.set_major_locator(plt.MaxNLocator(4)) axs[0].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[0].set_title('Muscle F-L (a=1)') xlim = self.margins([0, np.min(vm), np.max(vm)], margin=.05, minmargin=False) axs[1].set_xlim(xlim) ylim = self.margins([0, fm0*1.2, np.max(fvm)*1.5], margin=.025) axs[1].set_ylim(ylim) axs[1].plot(vm, fvm, label='') axs[1].set_xlabel('$\mathbf{^{CON}}\;$ Velocity [m/s] $\;\mathbf{^{EXC}}$') axs[1].plot([0, 0], [ylim[0], fvm[50]], 'k:', lw=2, label='') axs[1].plot([xlim[0], 0], [fvm[50], fvm[50]], 'k:', lw=2, label='') axs[1].plot(0, fvm[50], 'o', ms=6, mfc='r', mec='r', mew=2, label='FM0(LM0)') axs[1].plot(xlim[0], fm0, '+', ms=10, mfc='r', mec='r', mew=2, label='') axs[1].text(vm[0], fm0, 'FM0') axs[1].legend(loc='upper right', frameon=True, framealpha=.5) axs[1].set_ylabel('Force [N]') axs[1].xaxis.set_major_locator(plt.MaxNLocator(4)) 
axs[1].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[1].set_title('Muscle F-V (a=1)') xlim = self.margins([lt0, ltslack, np.min(lt), np.max(lt)], margin=.05, minmargin=False) axs[2].set_xlim(xlim) ylim = self.margins([ft_lt0, 0, np.max(fse)], margin=.05) axs[2].set_ylim(ylim) axs[2].plot(lt, fse, label='') axs[2].set_xlabel('Length [m]') axs[2].plot([lt0, lt0], [ylim[0], ft_lt0], 'k:', lw=2, label='') axs[2].plot([xlim[0], lt0], [ft_lt0, ft_lt0], 'k:', lw=2, label='') axs[2].plot(lt0, ft_lt0, 'o', ms=6, mfc='r', mec='r', mew=2, label='FT(LT0)') axs[2].legend(loc='upper left', frameon=True, framealpha=.5) axs[2].set_ylabel('Force [N]') axs[2].xaxis.set_major_locator(plt.MaxNLocator(4)) axs[2].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[2].set_title('Tendon') plt.suptitle('Muscle-tendon mechanics', fontsize=18, y=1.03) plt.tight_layout(w_pad=.1) plt.show() def lm_plot(self, x, axs): """Plot results of actdyn_ode45 function. data = [t, lmt, lm, lt, vm, fm*fm0, fse*fm0, fl*fm0, fpe*fm0, alpha] """ try: import matplotlib.pyplot as plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=3, ncols=2, sharex=True, figsize=(10, 6)) axs[0, 0].plot(x[:, 0], x[:, 1], 'b', label='LMT') lmt = x[:, 2]*np.cos(x[:, 9]) + x[:, 3] if np.sum(x[:, 9]) > 0: axs[0, 0].plot(x[:, 0], lmt, 'g--', label=r'$LM \cos \alpha + LT$') else: axs[0, 0].plot(x[:, 0], lmt, 'g--', label=r'LM+LT') ylim = self.margins(x[:, 1], margin=.1) axs[0, 0].set_ylim(ylim) axs[0, 0].legend(framealpha=.5, loc='best') axs[0, 1].plot(x[:, 0], x[:, 3], 'b') #axs[0, 1].plot(x[:, 0], lt0*np.ones(len(x)), 'r') ylim = self.margins(x[:, 3], margin=.1) axs[0, 1].set_ylim(ylim) axs[1, 0].plot(x[:, 0], x[:, 2], 'b') #axs[1, 0].plot(x[:, 0], lmopt*np.ones(len(x)), 'r') ylim = self.margins(x[:, 2], margin=.1) axs[1, 0].set_ylim(ylim) axs[1, 1].plot(x[:, 0], x[:, 4], 'b') ylim = self.margins(x[:, 4], margin=.1) axs[1, 1].set_ylim(ylim) axs[2, 0].plot(x[:, 0], x[:, 5], 
'b', label='Muscle') axs[2, 0].plot(x[:, 0], x[:, 6], 'g--', label='Tendon') ylim = self.margins(x[:, [5, 6]], margin=.1) axs[2, 0].set_ylim(ylim) axs[2, 0].set_xlabel('Time (s)') axs[2, 0].legend(framealpha=.5, loc='best') axs[2, 1].plot(x[:, 0], x[:, 8], 'b', label='PE') ylim = self.margins(x[:, 8], margin=.1) axs[2, 1].set_ylim(ylim) axs[2, 1].set_xlabel('Time (s)') axs[2, 1].legend(framealpha=.5, loc='best') axs = axs.flatten() ylabel = ['$L_{MT}\,(m)$', '$L_{T}\,(m)$', '$L_{M}\,(m)$', '$V_{CE}\,(m/s)$', '$Force\,(N)$', '$Force\,(N)$'] for i, axi in enumerate(axs): axi.set_ylabel(ylabel[i], fontsize=14) axi.yaxis.set_major_locator(plt.MaxNLocator(4)) axi.yaxis.set_label_coords(-.2, 0.5) plt.suptitle('Simulation of muscle-tendon mechanics', fontsize=18, y=1.03) plt.tight_layout() plt.show() def penn_ang(self, lmt, lm, lt=None, lm0=None, alpha0=None): """Pennation angle. Parameters ---------- lmt : float muscle-tendon length lt : float, optional (default=None) tendon length lm : float, optional (default=None) muscle fiber length lm0 : float, optional (default from states file) initial muscle fiber length alpha0 : float, optional (default from parameter file) initial pennation angle Returns ------- alpha : float pennation angle """ if lm0 is None: lm0 = self.S['lm0'] if alpha0 is None: alpha0 = self.P['alpha0'] alpha = alpha0 if alpha0 != 0: w = lm0*np.sin(alpha0) if lm is not None: cosalpha = np.sqrt(1-(w/lm)**2) elif lmt is not None and lt is not None: cosalpha = 1/(np.sqrt(1 + (w/(lmt-lt))**2)) alpha = np.arccos(cosalpha) if alpha > 1.4706289: # np.arccos(0.1), 84.2608 degrees alpha = 1.4706289 return alpha def excitation(self, t, u_max=None, u_min=None, t0=0, t1=5): """Excitation signal, a square wave. 
Parameters ---------- t : float time instant [s] u_max : float (0 < u_max <= 1), optional (default from parameter file) maximum value for muscle excitation u_min : float (0 < u_min < 1), optional (default from parameter file) minimum value for muscle excitation t0 : float, optional (default=0) initial time instant for muscle excitation equals to u_max [s] t1 : float, optional (default=5) final time instant for muscle excitation equals to u_max [s] Returns ------- u : float (0 < u <= 1) excitation signal """ if u_max is None: u_max = self.P['u_max'] if u_min is None: u_min = self.P['u_min'] u = u_min if t >= t0 and t <= t1: u = u_max return u def activation_dyn(self, t, a, t_act=None, t_deact=None): """Thelen (2003) activation dynamics, the derivative of `a` at `t`. Parameters ---------- t : float time instant [s] a : float (0 <= a <= 1) muscle activation t_act : float, optional (default from parameter file) activation time constant [s] t_deact : float, optional (default from parameter file) deactivation time constant [s] Returns ------- adot : float derivative of `a` at `t` """ if t_act is None: t_act = self.P['t_act'] if t_deact is None: t_deact = self.P['t_deact'] u = self.excitation(t) if u > a: adot = (u - a)/(t_act*(0.5 + 1.5*a)) else: adot = (u - a)/(t_deact/(0.5 + 1.5*a)) return adot def activation_sol(self, fun=None, t0=0, t1=3, a0=0, u_min=None, t_act=None, t_deact=None, show=True, axs=None): """Runge-Kutta (4)5 ODE solver for activation dynamics. 
Parameters ---------- fun : function object, optional (default is None and `actdyn` is used) function with ODE to be solved t0 : float, optional (default=0) initial time instant for the simulation [s] t1 : float, optional (default=0) final time instant for the simulation [s] a0 : float, optional (default=0) initial muscle activation u_max : float (0 < u_max <= 1), optional (default from parameter file) maximum value for muscle excitation u_min : float (0 < u_min < 1), optional (default from parameter file) minimum value for muscle excitation t_act : float, optional (default from parameter file) activation time constant [s] t_deact : float, optional (default from parameter file) deactivation time constant [s] show : bool, optional (default = True) if True (1), plot data in matplotlib figure axs : a matplotlib.axes.Axes instance, optional (default = None) Returns ------- data : 2-d array array with columns [time, excitation, activation] """ if u_min is None: u_min = self.P['u_min'] if t_act is None: t_act = self.P['t_act'] if t_deact is None: t_deact = self.P['t_deact'] if fun is None: fun = self.activation_dyn f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(a0, t0).set_f_params(t_act, t_deact) # suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) data.append([f.t, self.excitation(f.t), np.max([f.y, u_min])]) warnings.resetwarnings() data = np.array(data) if show: self.actvation_plot(data, axs) self.act_data = data return data def activation(self, t=None): """Activation signal.""" data = self.act_data if t is not None and len(data): if t <= self.act_data[0, 0]: a = self.act_data[0, 2] elif t >= self.act_data[-1, 0]: a = self.act_data[-1, 2] else: a = np.interp(t, self.act_data[:, 0], self.act_data[:, 2]) else: a = 1 return a def actvation_plot(self, data, axs): """Plot results of actdyn_ode45 function.""" try: import matplotlib.pyplot as 
plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=1, ncols=1, figsize=(6, 4)) axs.plot(data[:, 0], data[:, 1], color=[1, 0, 0, .6], label='Excitation') axs.plot(data[:, 0], data[:, 2], color=[0, 0, 1, .6], label='Activation') axs.set_xlabel('Time [s]') axs.set_ylabel('Level') axs.legend() plt.title('Activation dynamics') plt.tight_layout() plt.show() def margins(self, x, margin=0.01, minmargin=True): """Calculate plot limits with extra margins. """ rang = np.nanmax(x) - np.nanmin(x) if rang < 0.001 and minmargin: rang = 0.001*np.nanmean(x)/margin if rang < 1: rang = 1 lim = [np.nanmin(x) - rang*margin, np.nanmax(x) + rang*margin] return lim ###Output _____no_output_____ ###Markdown Muscle simulation> Marcos Duarte > Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/)) > Federal University of ABC, Brazil Let's simulate the 3-component Hill-type muscle model we described in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb) and illustrated below:Figure. A Hill-type muscle model with three components: two for the muscle, an active contractile element, $\mathsf{CE}$, and a passive elastic element in parallel, $\mathsf{PE}$, with the $\mathsf{CE}$, and one component for the tendon, an elastic element in series, $\mathsf{SE}$, with the muscle. 
$\mathsf{L_{MT}}$: muscle–tendon length, $\mathsf{L_T}$: tendon length, $\mathsf{L_M}$: muscle fiber length, $\mathsf{F_T}$: tendon force, $\mathsf{F_M}$: muscle force, and $α$: pennation angle.The following relationships are true for the model:\begin{equation}\begin{array}{l}L_{MT} = L_{T} + L_M\cos\alpha \\\\L_M = L_{CE} = L_{PE} \\\\\dot{L}_M = \dot{L}_{CE} = \dot{L}_{PE} \\\\F_{M} = F_{CE} + F_{PE} \end{array}\label{}\end{equation}If we assume that the muscle–tendon system is at equilibrium, that is, muscle, $F_{M}$, and tendon, $F_{T}$, forces are in equilibrium at all times, the following equation holds (and that a muscle can only pull):\begin{equation}F_{T} = F_{SE} = F_{M}\cos\alpha\label{}\end{equation} Pennation angleThe pennation angle will vary during muscle activation; for instance, Kawakami et al. (1998) showed that the pennation angle of the medial gastrocnemius muscle can vary from 22$^o$ to 67$^o$ during activation. The most used approach is to assume that the muscle width (defined as the length of the perpendicular line between the lines of the muscle origin and insertion) remains constant (Scott & Winter, 1991):\begin{equation}w = L_{M,0} \sin\alpha_0\label{}\end{equation}The pennation angle as a function of time will be given by:\begin{equation}\alpha = \sin^{-1} \left(\dfrac{w}{L_M}\right)\label{}\end{equation}The cosine of the pennation angle can be given by (if $L_M$ is known):\begin{equation}\cos \alpha = \dfrac{\sqrt{L_M^2-w^2}}{L_M} = \sqrt{1-\left(\dfrac{w}{L_M}\right)^2}\label{}\end{equation}or (if $L_M$ is not known):\begin{equation}\cos \alpha = \dfrac{L_{MT}-L_T}{L_M} = \dfrac{1}{\sqrt{1 + \left(\dfrac{w}{L_{MT}-L_T}\right)^2}}\label{}\end{equation} Muscle forceIn general, the dependence of the force of the contractile element with its length and velocity and with the activation level are assumed independent of each other:\begin{equation}F_{CE}(a, L_{CE}, \dot{L}_{CE}) = a \: f_l(L_{CE}) \: f_v(\dot{L}_{CE}) \: 
F_{M0}\label{}\end{equation}where $f_l(L_M)$ and $f_v(\dot{L}_M)$ are mathematical functions describing the force-length and force-velocity relationships of the contractile element (typically these functions are normalized by $F_{M0}$, the maximum isometric (at zero velocity) muscle force, so we have to multiply the right side of the equation by $F_{M0}$). And for the muscle force:\begin{equation}F_{M}(a, L_M, \dot{L}_M) = \left[a \: f_l(L_M)f_v(\dot{L}_M) + F_{PE}(L_M)\right]F_{M0}\label{}\end{equation}This equation for the muscle force, with $a$, $L_{M}$, and $\dot{L}_{M}$ as state variables, can be used to simulate the dynamics of a muscle given an excitation and determine the muscle force and length. We can rearrange the equation, invert the expression for $f_v$, and integrate the resulting first-order ordinary differential equation (ODE) to obtain $L_M$:\begin{equation}\dot{L}_M = f_v^{-1}\left(\dfrac{F_{SE}(L_{MT}-L_M\cos\alpha)/\cos\alpha - F_{PE}(L_M)}{a f_l(L_M)}\right)\label{}\end{equation}This approach is the most commonly employed in the literature (see, for example, [OpenSim](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Muscle+Model+Theory+and+Publications); McLean, Su, van den Bogert, 2003; Thelen, 2003; Nigg and Herzog, 2006). Although the equation for the muscle force doesn't have numerical singularities, the differential equation for muscle velocity has four ([OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)): when $a \rightarrow 0$; when $f_l(L_M) \rightarrow 0$; when $\alpha \rightarrow \pi/2$; and when $\partial f_v/\partial v \rightarrow 0$. 
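The ODE above is assembled from the individual muscle-tendon curves. Here is a minimal sketch of that assembly; the curves `force_se`, `force_pe`, `force_l`, and `fv_inv` below are deliberately simplistic toy stand-ins (the actual Thelen (2003) curves are implemented in `muscles.py`), and the comments mark where the singularities enter:

```python
import numpy as np

# Toy stand-ins for the muscle-tendon curves (illustrative only; the
# actual Thelen (2003) curves are implemented in muscles.py).
force_se = lambda lt: 30.0*max(lt - 1.0, 0.0)      # tendon (lt normalized by slack length)
force_pe = lambda lm: 4.0*max(lm - 1.0, 0.0)       # parallel (passive) element
force_l  = lambda lm: np.exp(-(lm - 1.0)**2/0.45)  # force-length curve
fv_inv   = lambda fbar: fbar - 1.0                 # toy inverse force-velocity
                                                   # (fbar = 1 -> zero velocity)

def lm_dot(lm, lmt, a, alpha):
    """Right-hand side of the ODE for the (normalized) muscle fiber length."""
    cosa = np.cos(alpha)                  # singular as alpha -> pi/2
    lt = lmt - lm*cosa                    # tendon length
    fce = force_se(lt)/cosa - force_pe(lm)
    # the denominator is singular as a -> 0 or as f_l(L_M) -> 0
    return fv_inv(fce/(a*force_l(lm)))
```

With full activation and the contractile-element force exactly balancing the tendon force, the normalized force ratio is 1 and the toy `fv_inv` returns zero velocity; a slack tendon gives a negative (shortening) velocity.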
The following solutions can be employed to avoid the numerical singularities ([OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)): adopt a minimum value for $a$ (e.g., $a_{min}=0.01$); adopt a minimum value for $f_l(L_M)$ (e.g., $f_l(0.1)$); adopt a maximum value for the pennation angle (e.g., constrain $\alpha$ such that $\cos\alpha > 0.1$, i.e., $\alpha < 84.26^o$); and make the slope of $f_V$ at and beyond maximum velocity different from zero (for both concentric and eccentric activations). We will adopt these solutions to avoid singularities in the simulation of muscle mechanics. A problem of imposing values on variables as described above is that we can make the ordinary differential equation numerically stiff, which will increase the computational cost of the numerical integration. A better solution would be to modify the model so that it does not have these singularities (see [OpenSim Millard 2012 Muscle Models](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Millard+2012+Muscle+Models)). Simulation Let's simulate muscle dynamics using the Thelen2003Muscle model we defined in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb). For the simulation of the Thelen2003Muscle, we simply have to integrate the equation:\begin{equation}V_M = (0.25+0.75a)\,V_{Mmax}\frac{\bar{F}_M-a\bar{f}_{l,CE}}{b}\label{}\end{equation}where\begin{equation} b = \left\{ \begin{array}{l l l} a\bar{f}_{l,CE} + \bar{F}_M/A_f \quad & \text{if} \quad \bar{F}_M \leq a\bar{f}_{l,CE} & \text{(shortening)} \\ \\ \dfrac{(2+2/A_f)(a\bar{f}_{l,CE}\bar{f}_{CEmax} - \bar{F}_M)}{\bar{f}_{CEmax}-1} \quad & \text{if} \quad \bar{F}_M > a\bar{f}_{l,CE} & \text{(lengthening)} \end{array} \right.\label{}\end{equation}The equation above already contains the terms for activation, $a$, and force-length dependence, $\bar{f}_{l,CE}$. 
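The piecewise equation above can be coded directly. This is a minimal sketch only: it omits the guard that `velo_fm` in `muscles.py` applies near the eccentric force asymptote, and the parameter defaults are the values loaded from the parameter file in this notebook:

```python
def thelen_vm(fm_bar, a, fl_bar, vmmax=10.0, fmlen=1.4, af=0.25, lmopt=0.093):
    """Thelen (2003) muscle fiber velocity [m/s] from normalized CE force.

    fm_bar: normalized muscle force; a: activation level; fl_bar: value of
    the normalized force-length curve at the current fiber length.
    """
    if fm_bar <= a*fl_bar:   # isometric or shortening (concentric)
        b = a*fl_bar + fm_bar/af
    else:                    # lengthening (eccentric)
        b = (2 + 2/af)*(a*fl_bar*fmlen - fm_bar)/(fmlen - 1)
    vm_bar = (0.25 + 0.75*a)*(fm_bar - a*fl_bar)/b   # normalized velocity
    return vm_bar*vmmax*lmopt                        # de-normalize to m/s
```

A quick check: with $\bar{F}_M = a\bar{f}_{l,CE}$ the numerator vanishes and the velocity is zero (isometric); forces below that give negative (shortening) velocities, forces above it give positive (lengthening) ones, and at zero force with full activation the function returns $-V_{Mmax}$.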
The equation is too complicated to solve analytically; we will solve it by numerical integration using the [`scipy.integrate.ode`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.ode.html) class of numeric integrators, particularly `dopri5`, an explicit Runge-Kutta method of order (4)5 due to Dormand and Prince (a.k.a. ode45 in Matlab). We could run a simulation using [OpenSim](https://simtk.org/home/opensim); it would be faster, but for fun, let's program in Python. All the necessary functions for the Thelen2003Muscle model described in [Muscle modeling](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleModeling.ipynb) were grouped in one file (module), `muscles.py`. Besides these functions, the module `muscles.py` contains a function for the muscle velocity, `vm_eq`, which will be called by the function that specifies the numerical integration, `lm_sol`; a standard way of performing numerical integration in scientific computing:```python def vm_eq(self, t, lm, lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0): """Equation for muscle velocity.""" if lm < 0.1*lmopt: lm = 0.1*lmopt a = self.activation(t) lmt = self.lmt_eq(t, lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fse = self.force_se(lt=lt, ltslack=ltslack) fpe = self.force_pe(lm=lm/lmopt) fl = self.force_l(lm=lm/lmopt) fce_t = fse/np.cos(alpha) - fpe vm = self.velo_fm(fm=fce_t, a=a, fl=fl) return vm def lm_sol(self, fun, t0, t1, lm0, lmt0, ltslack, lmopt, alpha0, vmmax, fm0, show, axs): """Runge-Kutta (4)5 ODE solver for muscle length.""" if fun is None: fun = self.vm_eq f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(lm0, t0).set_f_params(lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0) # suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) d = self.calc_data(f.t, f.y, lm0, lmt0, 
ltslack, lmopt, alpha0, fm0) data.append(d) warnings.resetwarnings() data = np.array(data) self.lm_data = data if show: self.lm_plot(data, axs) return data````muscles.py` also contains some auxiliary functions for entering data and for plotting the results. Let's import the necessary Python libraries and customize the environment in order to run some simulations using `muscles.py`: ###Code import numpy as np %matplotlib inline import matplotlib.pyplot as plt import matplotlib matplotlib.rcParams['lines.linewidth'] = 3 matplotlib.rcParams['font.size'] = 13 matplotlib.rcParams['lines.markersize'] = 5 matplotlib.rc('axes', grid=False, labelsize=14, titlesize=16, ymargin=0.05) matplotlib.rc('legend', numpoints=1, fontsize=11) # import the muscles.py module import sys sys.path.insert(1, r'./../functions') import muscles ###Output _____no_output_____ ###Markdown The `muscles.py` module contains the class `Thelen2003()` which has the functions we want to use. To use them, we need to create an instance of this class: ###Code ms = muscles.Thelen2003() ###Output _____no_output_____ ###Markdown Now we need to enter the parameters and states for the simulation: we can load files with these values or pass them as arguments when calling the methods `set_parameters()` and `set_states()`. If nothing is entered, these methods assume that the parameters and states are stored in the files '`muscle_parameter.txt`' and '`muscle_state.txt`' inside the directory '`./../data/`'. Let's use some of the parameters and states from an exercise in chapter 4 of Nigg and Herzog (2006). ###Code ms.set_parameters() ms.set_states() ###Output The parameters were successfully loaded and are stored in the variable P. The states were successfully loaded and are stored in the variable S. 
###Markdown We can see the parameters and states: ###Code print('Parameters:\n', ms.P) print('States:\n', ms.S) ###Output Parameters: {'id': '', 'name': '', 'u_max': 1.0, 'u_min': 0.01, 't_act': 0.015, 't_deact': 0.05, 'lmopt': 0.093, 'alpha0': 0.0, 'fm0': 7400.0, 'gammal': 0.45, 'kpe': 5.0, 'epsm0': 0.6, 'vmmax': 10.0, 'fmlen': 1.4, 'af': 0.25, 'ltslack': 0.223, 'epst0': 0.04, 'kttoe': 3.0} States: {'id': '', 'name': '', 'lmt0': 0.31, 'lm0': 0.087, 'lt0': 0.223} ###Markdown We can plot the muscle-tendon forces considering these parameters and initial states: ###Code ms.muscle_plot(); ###Output _____no_output_____ ###Markdown Let's simulate an isometric activation (and since we didn't enter an activation level, $a=1$ will be used): ###Code def lmt_eq(t, lmt0): # isometric activation lmt = lmt0 return lmt ms.lmt_eq = lmt_eq data = ms.lm_sol() ###Output _____no_output_____ ###Markdown We can input a prescribed muscle-tendon length for the simulation: ###Code def lmt_eq(t, lmt0): # prescribed change in the muscle-tendon length if t < 1: lmt = lmt0 if 1 <= t < 2: lmt = lmt0 - 0.04*(t - 1) if t >= 2: lmt = lmt0 - 0.04 return lmt ms.lmt_eq = lmt_eq data = ms.lm_sol() ###Output _____no_output_____ ###Markdown Let's simulate a pennated muscle with an angle of $30^o$. 
We don't need to enter all parameters again; we can change only the parameter `alpha0`: ###Code ms.P['alpha0'] = 30*np.pi/180 print('New initial pennation angle:', ms.P['alpha0']) ###Output New initial pennation angle: 0.5235987755982988 ###Markdown Because the muscle's contribution to the muscle-tendon length is now shortened by the factor $\cos(30^o)$, we will also have to change the initial muscle-tendon length if we want to start with the tendon at its slack length: ###Code ms.S['lmt0'] = ms.S['lmt0'] - ms.S['lm0'] + ms.S['lm0']*np.cos(ms.P['alpha0']) print('New initial muscle-tendon length:', ms.S['lmt0']) data = ms.lm_sol() ###Output _____no_output_____ ###Markdown Here is a plot of the simulated pennation angle: ###Code plt.plot(data[:, 0], data[:, 9]*180/np.pi) plt.xlabel('Time (s)') plt.ylabel('Pennation angle $(^o)$') plt.show() ###Output _____no_output_____ ###Markdown Change back to the old values: ###Code ms.P['alpha0'] = 0 ms.S['lmt0'] = 0.313 ###Output _____no_output_____ ###Markdown We can change the initial states to show the role of the passive parallel element: ###Code ms.S = {'id': '', 'lt0': np.nan, 'lmt0': 0.323, 'lm0': 0.10, 'name': ''} ms.muscle_plot(); ###Output _____no_output_____ ###Markdown Let's also change the excitation signal: ###Code def excitation(t, u_max=1, u_min=0.01, t0=1, t1=2): """Excitation signal, a hat signal.""" u = u_min if t >= t0 and t <= t1: u = u_max return u ms.excitation = excitation act = ms.activation_sol() ###Output _____no_output_____ ###Markdown And let's simulate an isometric contraction: ###Code def lmt_eq(t, lmt0): # isometric activation lmt = lmt0 return lmt ms.lmt_eq = lmt_eq data = ms.lm_sol() ###Output _____no_output_____ ###Markdown Let's use as excitation a train of pulses: ###Code def excitation(t, u_max=.5, u_min=0.01, t0=.2, t1=2): """Excitation signal, a train of square pulses.""" u = u_min ts = np.arange(1, 2.0, .1) #ts = np.delete(ts, np.arange(2, ts.size, 3)) if t >= ts[0] and t <= ts[1]: u = u_max elif t >= ts[2] and t <= ts[3]: u = u_max 
elif t >= ts[4] and t <= ts[5]: u = u_max elif t >= ts[6] and t <= ts[7]: u = u_max elif t >= ts[8] and t <= ts[9]: u = u_max return u ms.excitation = excitation act = ms.activation_sol() data = ms.lm_sol() ###Output _____no_output_____ ###Markdown References- Kawakami Y, Ichinose Y, Fukunaga T (1998) [Architectural and functional features of human triceps surae muscles during contraction](http://www.ncbi.nlm.nih.gov/pubmed/9688711). Journal of Applied Physiology, 85, 398–404. - McLean SG, Su A, van den Bogert AJ (2003) [Development and validation of a 3-D model to predict knee joint loading during dynamic movement](http://www.ncbi.nlm.nih.gov/pubmed/14986412). Journal of Biomechanical Engineering, 125, 864-74. - Nigg BM and Herzog W (2006) [Biomechanics of the Musculo-skeletal System](https://books.google.com.br/books?id=hOIeAQAAIAAJ&dq=editions:ISBN0470017678). 3rd Edition. Wiley. - Scott SH, Winter DA (1991) [A comparison of three muscle pennation assumptions and their effect on isometric and isotonic force](http://www.ncbi.nlm.nih.gov/pubmed/2037616). Journal of Biomechanics, 24, 163–167. - Thelen DG (2003) [Adjustment of muscle mechanics model parameters to simulate dynamic contractions in older adults](http://homepages.cae.wisc.edu/~thelen/pubs/jbme03.pdf). Journal of Biomechanical Engineering, 125(1):70–77. Module muscles.py ###Code # %load ./../functions/muscles.py """Muscle modeling and simulation.""" import numpy as np from scipy.integrate import ode import warnings import configparser __author__ = 'Marcos Duarte, https://github.com/demotu/BMC' __version__ = 'muscles.py v.1 2015/03/01' class Thelen2003(): """ Thelen (2003) muscle model. """ def __init__(self, parameters=None, states=None): if parameters is not None: self.set_parameters(parameters) if states is not None: self.set_states(states) self.lm_data = [] self.act_data = [] def set_parameters(self, var=None): """Load and set parameters for the muscle model. 
""" if var is None: var = './../data/muscle_parameter.txt' if isinstance(var, str): self.P = self.config_parser(var, 'parameters') elif isinstance(var, dict): self.P = var else: raise ValueError('Wrong parameters!') print('The parameters were successfully loaded ' + 'and are stored in the variable P.') def set_states(self, var=None): """Load and set states for the muscle model. """ if var is None: var = './../data/muscle_state.txt' if isinstance(var, str): self.S = self.config_parser(var, 'states') elif isinstance(var, dict): self.S = var else: raise ValueError('Wrong states!') print('The states were successfully loaded ' + 'and are stored in the variable S.') def config_parser(self, filename, var): parser = configparser.ConfigParser() parser.optionxform = str # make option names case sensitive parser.read(filename) if not parser: raise ValueError('File %s not found!' %var) #if not 'Muscle' in parser.sections()[0]: # raise ValueError('Wrong %s file!' %var) var = {} for key, value in parser.items(parser.sections()[0]): if key.lower() in ['name', 'id']: var.update({key: value}) else: try: value = float(value) except ValueError: print('"%s" value "%s" was replaced by NaN.' %(key, value)) value = np.nan var.update({key: value}) return var def force_l(self, lm, gammal=None): """Thelen (2003) force of the contractile element vs. muscle length. Parameters ---------- lm : float normalized muscle fiber length gammal : float, optional (default from parameter file) shape factor Returns ------- fl : float normalized force of the muscle contractile element """ if gammal is None: gammal = self.P['gammal'] fl = np.exp(-(lm-1)**2/gammal) return fl def force_pe(self, lm, kpe=None, epsm0=None): """Thelen (2003) force of the muscle parallel element vs. muscle length. 
Parameters ---------- lm : float normalized muscle fiber length kpe : float, optional (default from parameter file) exponential shape factor epsm0 : float, optional (default from parameter file) passive muscle strain due to maximum isometric force Returns ------- fpe : float normalized force of the muscle parallel (passive) element """ if kpe is None: kpe = self.P['kpe'] if epsm0 is None: epsm0 = self.P['epsm0'] if lm <= 1: fpe = 0 else: fpe = (np.exp(kpe*(lm-1)/epsm0)-1)/(np.exp(kpe)-1) return fpe def force_se(self, lt, ltslack=None, epst0=None, kttoe=None): """Thelen (2003) force-length relationship of tendon vs. tendon length. Parameters ---------- lt : float tendon length (normalized or not) ltslack : float, optional (default from parameter file) tendon slack length (normalized or not) epst0 : float, optional (default from parameter file) tendon strain at the maximal isometric muscle force kttoe : float, optional (default from parameter file) linear scale factor Returns ------- fse : float normalized force of the tendon series element """ if ltslack is None: ltslack = self.P['ltslack'] if epst0 is None: epst0 = self.P['epst0'] if kttoe is None: kttoe = self.P['kttoe'] epst = (lt-ltslack)/ltslack fttoe = .33 # values from OpenSim Thelen2003Muscle epsttoe = .99*epst0*np.e**3/(1.66*np.e**3 - .67) ktlin = .67/(epst0 - epsttoe) # if epst <= 0: fse = 0 elif epst <= epsttoe: fse = fttoe/(np.exp(kttoe)-1)*(np.exp(kttoe*epst/epsttoe)-1) else: fse = ktlin*(epst-epsttoe) + fttoe return fse def velo_fm(self, fm, a, fl, lmopt=None, vmmax=None, fmlen=None, af=None): """Thelen (2003) velocity of the force-velocity relationship vs. CE force. 
Parameters ---------- fm : float normalized muscle force a : float muscle activation level fl : float normalized muscle force due to the force-length relationship lmopt : float, optional (default from parameter file) optimal muscle fiber length vmmax : float, optional (default from parameter file) normalized maximum muscle velocity for concentric activation fmlen : float, optional (default from parameter file) normalized maximum force generated at the lengthening phase af : float, optional (default from parameter file) shape factor Returns ------- vm : float velocity of the muscle """ if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fmlen is None: fmlen = self.P['fmlen'] if af is None: af = self.P['af'] if fm <= a*fl: # isometric and concentric activation if fm > 0: b = a*fl + fm/af else: b = a*fl else: # eccentric activation asyE_thresh = 0.95 # from OpenSim Thelen2003Muscle if fm < a*fl*fmlen*asyE_thresh: b = (2 + 2/af)*(a*fl*fmlen - fm)/(fmlen - 1) else: fm0 = a*fl*fmlen*asyE_thresh b = (2 + 2/af)*(a*fl*fmlen - fm0)/(fmlen - 1) vm = (0.25 + 0.75*a)*1*(fm - a*fl)/b vm = vm*vmmax*lmopt return vm def force_vm(self, vm, a, fl, lmopt=None, vmmax=None, fmlen=None, af=None): """Thelen (2003) force of the contractile element vs. muscle velocity. 
Parameters ---------- vm : float muscle velocity a : float muscle activation level fl : float normalized muscle force due to the force-length relationship lmopt : float, optional (default from parameter file) optimal muscle fiber length vmmax : float, optional (default from parameter file) normalized maximum muscle velocity for concentric activation fmlen : float, optional (default from parameter file) normalized normalized maximum force generated at the lengthening phase af : float, optional (default from parameter file) shape factor Returns ------- fvm : float normalized force of the muscle contractile element """ if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fmlen is None: fmlen = self.P['fmlen'] if af is None: af = self.P['af'] vmmax = vmmax*lmopt if vm <= 0: # isometric and concentric activation fvm = af*a*fl*(4*vm + vmmax*(3*a + 1))/(-4*vm + vmmax*af*(3*a + 1)) else: # eccentric activation fvm = a*fl*(af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + \ 8*vm*fmlen*(af + 1)) / \ (af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + 8*vm*(af + 1)) return fvm def lmt_eq(self, t, lmt0=None): """Equation for muscle-tendon length.""" if lmt0 is None: lmt0 = self.S['lmt0'] return lmt0 def vm_eq(self, t, lm, lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0): """Equation for muscle velocity.""" if lm < 0.1*lmopt: lm = 0.1*lmopt #lt0 = lmt0 - lm0*np.cos(alpha0) a = self.activation(t) lmt = self.lmt_eq(t, lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fse = self.force_se(lt=lt, ltslack=ltslack) fpe = self.force_pe(lm=lm/lmopt) fl = self.force_l(lm=lm/lmopt) fce_t = fse/np.cos(alpha) - fpe #if fce_t < 0: fce_t=0 vm = self.velo_fm(fm=fce_t, a=a, fl=fl) return vm def lm_sol(self, fun=None, t0=0, t1=3, lm0=None, lmt0=None, ltslack=None, lmopt=None, alpha0=None, vmmax=None, fm0=None, show=True, axs=None): """Runge-Kutta (4)5 ODE solver for muscle length.""" if lm0 is None: lm0 = self.S['lm0'] if lmt0 is None: 
lmt0 = self.S['lmt0'] if ltslack is None: ltslack = self.P['ltslack'] if alpha0 is None: alpha0 = self.P['alpha0'] if lmopt is None: lmopt = self.P['lmopt'] if vmmax is None: vmmax = self.P['vmmax'] if fm0 is None: fm0 = self.P['fm0'] if fun is None: fun = self.vm_eq f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(lm0, t0).set_f_params(lm0, lmt0, lmopt, ltslack, alpha0, vmmax, fm0) # suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) d = self.calc_data(f.t, np.max([f.y, 0.1*lmopt]), lm0, lmt0, ltslack, lmopt, alpha0, fm0) data.append(d) warnings.resetwarnings() data = np.array(data) self.lm_data = data if show: self.lm_plot(data, axs) return data def calc_data(self, t, lm, lm0, lmt0, ltslack, lmopt, alpha0, fm0): """Calculus of muscle-tendon variables.""" a = self.activation(t) lmt = self.lmt_eq(t, lmt0=lmt0) alpha = self.penn_ang(lmt=lmt, lm=lm, lm0=lm0, alpha0=alpha0) lt = lmt - lm*np.cos(alpha) fl = self.force_l(lm=lm/lmopt) fpe = self.force_pe(lm=lm/lmopt) fse = self.force_se(lt=lt, ltslack=ltslack) fce_t = fse/np.cos(alpha) - fpe vm = self.velo_fm(fm=fce_t, a=a, fl=fl, lmopt=lmopt) fm = self.force_vm(vm=vm, fl=fl, lmopt=lmopt, a=a) + fpe data = [t, lmt, lm, lt, vm, fm*fm0, fse*fm0, a*fl*fm0, fpe*fm0, alpha] return data def muscle_plot(self, a=1, axs=None): """Plot muscle-tendon relationships with length and velocity.""" try: import matplotlib.pyplot as plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=1, ncols=3, figsize=(9, 4)) lmopt = self.P['lmopt'] ltslack = self.P['ltslack'] vmmax = self.P['vmmax'] alpha0 = self.P['alpha0'] fm0 = self.P['fm0'] lm0 = self.S['lm0'] lmt0 = self.S['lmt0'] lt0 = self.S['lt0'] if np.isnan(lt0): lt0 = lmt0 - lm0*np.cos(alpha0) lm = np.linspace(0, 2, 101) lt = np.linspace(0, 1, 101)*0.05 + 1 vm = np.linspace(-1, 1, 101)*vmmax*lmopt fl = 
np.zeros(lm.size) fpe = np.zeros(lm.size) fse = np.zeros(lt.size) fvm = np.zeros(vm.size) fl_lm0 = self.force_l(lm0/lmopt) fpe_lm0 = self.force_pe(lm0/lmopt) fm_lm0 = fl_lm0 + fpe_lm0 ft_lt0 = self.force_se(lt0, ltslack)*fm0 for i in range(101): fl[i] = self.force_l(lm[i]) fpe[i] = self.force_pe(lm[i]) fse[i] = self.force_se(lt[i], ltslack=1) fvm[i] = self.force_vm(vm[i], a=a, fl=fl_lm0) lm = lm*lmopt lt = lt*ltslack fl = fl fpe = fpe fse = fse*fm0 fvm = fvm*fm0 xlim = self.margins(lm, margin=.05, minmargin=False) axs[0].set_xlim(xlim) ylim = self.margins([0, 2], margin=.05) axs[0].set_ylim(ylim) axs[0].plot(lm, fl, 'b', label='Active') axs[0].plot(lm, fpe, 'b--', label='Passive') axs[0].plot(lm, fl+fpe, 'b:', label='') axs[0].plot([lm0, lm0], [ylim[0], fm_lm0], 'k:', lw=2, label='') axs[0].plot([xlim[0], lm0], [fm_lm0, fm_lm0], 'k:', lw=2, label='') axs[0].plot(lm0, fm_lm0, 'o', ms=6, mfc='r', mec='r', mew=2, label='fl(LM0)') axs[0].legend(loc='best', frameon=True, framealpha=.5) axs[0].set_xlabel('Length [m]') axs[0].set_ylabel('Scale factor') axs[0].xaxis.set_major_locator(plt.MaxNLocator(4)) axs[0].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[0].set_title('Muscle F-L (a=1)') xlim = self.margins([0, np.min(vm), np.max(vm)], margin=.05, minmargin=False) axs[1].set_xlim(xlim) ylim = self.margins([0, fm0*1.2, np.max(fvm)*1.5], margin=.025) axs[1].set_ylim(ylim) axs[1].plot(vm, fvm, label='') axs[1].set_xlabel('$\mathbf{^{CON}}\;$ Velocity [m/s] $\;\mathbf{^{EXC}}$') axs[1].plot([0, 0], [ylim[0], fvm[50]], 'k:', lw=2, label='') axs[1].plot([xlim[0], 0], [fvm[50], fvm[50]], 'k:', lw=2, label='') axs[1].plot(0, fvm[50], 'o', ms=6, mfc='r', mec='r', mew=2, label='FM0(LM0)') axs[1].plot(xlim[0], fm0, '+', ms=10, mfc='r', mec='r', mew=2, label='') axs[1].text(vm[0], fm0, 'FM0') axs[1].legend(loc='upper right', frameon=True, framealpha=.5) axs[1].set_ylabel('Force [N]') axs[1].xaxis.set_major_locator(plt.MaxNLocator(4)) 
axs[1].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[1].set_title('Muscle F-V (a=1)') xlim = self.margins([lt0, ltslack, np.min(lt), np.max(lt)], margin=.05, minmargin=False) axs[2].set_xlim(xlim) ylim = self.margins([ft_lt0, 0, np.max(fse)], margin=.05) axs[2].set_ylim(ylim) axs[2].plot(lt, fse, label='') axs[2].set_xlabel('Length [m]') axs[2].plot([lt0, lt0], [ylim[0], ft_lt0], 'k:', lw=2, label='') axs[2].plot([xlim[0], lt0], [ft_lt0, ft_lt0], 'k:', lw=2, label='') axs[2].plot(lt0, ft_lt0, 'o', ms=6, mfc='r', mec='r', mew=2, label='FT(LT0)') axs[2].legend(loc='upper left', frameon=True, framealpha=.5) axs[2].set_ylabel('Force [N]') axs[2].xaxis.set_major_locator(plt.MaxNLocator(4)) axs[2].yaxis.set_major_locator(plt.MaxNLocator(4)) axs[2].set_title('Tendon') plt.suptitle('Muscle-tendon mechanics', fontsize=18, y=1.03) plt.tight_layout(w_pad=.1) plt.show() return axs def lm_plot(self, x, axs=None): """Plot results of actdyn_ode45 function. data = [t, lmt, lm, lt, vm, fm*fm0, fse*fm0, fl*fm0, fpe*fm0, alpha] """ try: import matplotlib.pyplot as plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=3, ncols=2, sharex=True, figsize=(10, 6)) axs[0, 0].plot(x[:, 0], x[:, 1], 'b', label='LMT') lmt = x[:, 2]*np.cos(x[:, 9]) + x[:, 3] if np.sum(x[:, 9]) > 0: axs[0, 0].plot(x[:, 0], lmt, 'g--', label=r'$LM \cos \alpha + LT$') else: axs[0, 0].plot(x[:, 0], lmt, 'g--', label=r'LM+LT') ylim = self.margins(x[:, 1], margin=.1) axs[0, 0].set_ylim(ylim) axs[0, 0].legend(framealpha=.5, loc='best') axs[0, 1].plot(x[:, 0], x[:, 3], 'b') #axs[0, 1].plot(x[:, 0], lt0*np.ones(len(x)), 'r') ylim = self.margins(x[:, 3], margin=.1) axs[0, 1].set_ylim(ylim) axs[1, 0].plot(x[:, 0], x[:, 2], 'b') #axs[1, 0].plot(x[:, 0], lmopt*np.ones(len(x)), 'r') ylim = self.margins(x[:, 2], margin=.1) axs[1, 0].set_ylim(ylim) axs[1, 1].plot(x[:, 0], x[:, 4], 'b') ylim = self.margins(x[:, 4], margin=.1) axs[1, 1].set_ylim(ylim) axs[2, 
0].plot(x[:, 0], x[:, 5], 'b', label='Muscle') axs[2, 0].plot(x[:, 0], x[:, 6], 'g--', label='Tendon') ylim = self.margins(x[:, [5, 6]], margin=.1) axs[2, 0].set_ylim(ylim) axs[2, 0].set_xlabel('Time (s)') axs[2, 0].legend(framealpha=.5, loc='best') axs[2, 1].plot(x[:, 0], x[:, 8], 'b', label='PE') ylim = self.margins(x[:, 8], margin=.1) axs[2, 1].set_ylim(ylim) axs[2, 1].set_xlabel('Time (s)') axs[2, 1].legend(framealpha=.5, loc='best') ylabel = ['$L_{MT}\,(m)$', '$L_{T}\,(m)$', '$L_{M}\,(m)$', '$V_{CE}\,(m/s)$', '$Force\,(N)$', '$Force\,(N)$'] for i, axi in enumerate(axs.flat): axi.set_ylabel(ylabel[i], fontsize=14) axi.yaxis.set_major_locator(plt.MaxNLocator(4)) axi.yaxis.set_label_coords(-.2, 0.5) plt.suptitle('Simulation of muscle-tendon mechanics', fontsize=18, y=1.03) plt.tight_layout() plt.show() return axs def penn_ang(self, lmt, lm, lt=None, lm0=None, alpha0=None): """Pennation angle. Parameters ---------- lmt : float muscle-tendon length lt : float, optional (default=None) tendon length lm : float, optional (default=None) muscle fiber length lm0 : float, optional (default from states file) initial muscle fiber length alpha0 : float, optional (default from parameter file) initial pennation angle Returns ------- alpha : float pennation angle """ if lm0 is None: lm0 = self.S['lm0'] if alpha0 is None: alpha0 = self.P['alpha0'] alpha = alpha0 if alpha0 != 0: w = lm0*np.sin(alpha0) if lm is not None: cosalpha = np.sqrt(1-(w/lm)**2) elif lmt is not None and lt is not None: cosalpha = 1/(np.sqrt(1 + (w/(lmt-lt))**2)) alpha = np.arccos(cosalpha) if alpha > 1.4706289: # np.arccos(0.1), 84.2608 degrees alpha = 1.4706289 return alpha def excitation(self, t, u_max=None, u_min=None, t0=0, t1=5): """Excitation signal, a square wave. 
Parameters ---------- t : float time instant [s] u_max : float (0 < u_max <= 1), optional (default from parameter file) maximum value for muscle excitation u_min : float (0 < u_min < 1), optional (default from parameter file) minimum value for muscle excitation t0 : float, optional (default=0) initial time instant for muscle excitation equals to u_max [s] t1 : float, optional (default=5) final time instant for muscle excitation equals to u_max [s] Returns ------- u : float (0 < u <= 1) excitation signal """ if u_max is None: u_max = self.P['u_max'] if u_min is None: u_min = self.P['u_min'] u = u_min if t >= t0 and t <= t1: u = u_max return u def activation_dyn(self, t, a, t_act=None, t_deact=None): """Thelen (2003) activation dynamics, the derivative of `a` at `t`. Parameters ---------- t : float time instant [s] a : float (0 <= a <= 1) muscle activation t_act : float, optional (default from parameter file) activation time constant [s] t_deact : float, optional (default from parameter file) deactivation time constant [s] Returns ------- adot : float derivative of `a` at `t` """ if t_act is None: t_act = self.P['t_act'] if t_deact is None: t_deact = self.P['t_deact'] u = self.excitation(t) if u > a: adot = (u - a)/(t_act*(0.5 + 1.5*a)) else: adot = (u - a)/(t_deact/(0.5 + 1.5*a)) return adot def activation_sol(self, fun=None, t0=0, t1=3, a0=0, u_min=None, t_act=None, t_deact=None, show=True, axs=None): """Runge-Kutta (4)5 ODE solver for activation dynamics. 
Parameters ---------- fun : function object, optional (default is None and `activation_dyn` is used) function with ODE to be solved t0 : float, optional (default=0) initial time instant for the simulation [s] t1 : float, optional (default=3) final time instant for the simulation [s] a0 : float, optional (default=0) initial muscle activation u_min : float (0 < u_min < 1), optional (default from parameter file) minimum value for muscle excitation t_act : float, optional (default from parameter file) activation time constant [s] t_deact : float, optional (default from parameter file) deactivation time constant [s] show : bool, optional (default = True) if True (1), plot data in matplotlib figure axs : a matplotlib.axes.Axes instance, optional (default = None) Returns ------- data : 2-d array array with columns [time, excitation, activation] """ if u_min is None: u_min = self.P['u_min'] if t_act is None: t_act = self.P['t_act'] if t_deact is None: t_deact = self.P['t_deact'] if fun is None: fun = self.activation_dyn f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.005, atol=1e-8) f.set_initial_value(a0, t0).set_f_params(t_act, t_deact) # suppress Fortran warning warnings.filterwarnings("ignore", category=UserWarning) data = [] while f.t < t1: f.integrate(t1, step=True) data.append([f.t, self.excitation(f.t), np.max([f.y, u_min])]) warnings.resetwarnings() data = np.array(data) if show: self.activation_plot(data, axs) self.act_data = data return data def activation(self, t=None): """Activation signal.""" data = self.act_data if t is not None and len(data): if t <= self.act_data[0, 0]: a = self.act_data[0, 2] elif t >= self.act_data[-1, 0]: a = self.act_data[-1, 2] else: a = np.interp(t, self.act_data[:, 0], self.act_data[:, 2]) else: a = 1 return a def activation_plot(self, data, axs=None): """Plot results of activation_sol function.""" try: import matplotlib.pyplot 
as plt except ImportError: print('matplotlib is not available.') return if axs is None: _, axs = plt.subplots(nrows=1, ncols=1, figsize=(6, 4)) axs.plot(data[:, 0], data[:, 1], color=[1, 0, 0, .6], label='Excitation') axs.plot(data[:, 0], data[:, 2], color=[0, 0, 1, .6], label='Activation') axs.set_xlabel('Time [s]') axs.set_ylabel('Level') axs.legend() plt.title('Activation dynamics') plt.tight_layout() plt.show() return axs def margins(self, x, margin=0.01, minmargin=True): """Calculate plot limits with extra margins. """ rang = np.nanmax(x) - np.nanmin(x) if rang < 0.001 and minmargin: rang = 0.001*np.nanmean(x)/margin if rang < 1: rang = 1 lim = [np.nanmin(x) - rang*margin, np.nanmax(x) + rang*margin] return lim ###Output _____no_output_____
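The first-order activation dynamics implemented in `activation_dyn` above can also be exercised standalone. Below is a minimal sketch that integrates the same Thelen (2003) equations with `scipy.integrate.solve_ivp` instead of the class's `ode`/`dopri5` machinery; the constants `T_ACT` and `T_DEACT` are illustrative demo values, not necessarily those in the parameter file:

```python
import numpy as np
from scipy.integrate import solve_ivp

T_ACT, T_DEACT = 0.015, 0.050  # illustrative activation/deactivation time constants [s]

def excitation(t, u_max=1.0, u_min=0.01, t0=0.0, t1=1.0):
    """Square-wave excitation: u_max while t0 <= t <= t1, else u_min."""
    return u_max if t0 <= t <= t1 else u_min

def adot(t, y):
    """Thelen (2003) activation dynamics: activation rises faster than it decays."""
    a = y[0]
    u = excitation(t)
    if u > a:
        return [(u - a) / (T_ACT * (0.5 + 1.5 * a))]
    return [(u - a) / (T_DEACT / (0.5 + 1.5 * a))]

sol = solve_ivp(adot, t_span=(0, 2), y0=[0.0], max_step=0.005)
a = sol.y[0]
```

Plotting `sol.t` against `a` shows the square excitation being low-pass filtered into a smooth activation trace, matching what `activation_sol` produces.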
examples/notebooks/anime.ipynb
###Markdown Processing Anime Data Background We will use pyjanitor to showcase how to conveniently chain methods together to perform data cleaning in one shot. We first define and register a series of dataframe methods with pandas_flavor. Then we chain the dataframe methods together with pyjanitor methods to complete the data cleaning process. The example below shows a one-shot script followed by a step-by-step detail of each part of the method chain. We have adapted a [TidyTuesday analysis](https://github.com/rfordatascience/tidytuesday/blob/master/data/2019/2019-04-23/readme.md) that was originally performed in R. The original text from TidyTuesday will be shown in blockquotes. Here is a description of the Anime data set that we will use. >This week's data comes from [Tam Nguyen](https://github.com/tamdrashtri) and [MyAnimeList.net via Kaggle](https://www.kaggle.com/aludosan/myanimelist-anime-dataset-as-20190204). [According to Wikipedia](https://en.wikipedia.org/wiki/MyAnimeList) - "MyAnimeList, often abbreviated as MAL, is an anime and manga social networking and social cataloging application website. The site provides its users with a list-like system to organize and score anime and manga. It facilitates finding users who share similar tastes and provides a large database on anime and manga. The site claims to have 4.4 million anime and 775,000 manga entries. In 2015, the site received 120 million visitors a month.">>Anime without rankings or popularity scores were excluded. Producers, genre, and studio were converted from lists to tidy observations, so there will be repetitions of shows with multiple producers, genres, etc. The raw data is also uploaded.>>Lots of interesting ways to explore the data this week! 
Import libraries and load data ###Code # Import pyjanitor and pandas import janitor import pandas as pd import pandas_flavor as pf # Supress user warnings when we try overwriting our custom pandas flavor functions import warnings warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown One-Shot ###Code filename = 'https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-23/raw_anime.csv' df = pd.read_csv(filename) @pf.register_dataframe_method def str_remove(df, column_name: str, pat: str, *args, **kwargs): """Wrapper around df.str.replace""" df[column_name] = df[column_name].str.replace(pat, "", *args, **kwargs) return df @pf.register_dataframe_method def str_trim(df, column_name: str, *args, **kwargs): """Wrapper around df.str.strip""" df[column_name] = df[column_name].str.strip(*args, **kwargs) return df @pf.register_dataframe_method def explode(df: pd.DataFrame, column_name: str, sep: str): """ For rows with a list of values, this function will create new rows for each value in the list """ df["id"] = df.index wdf = ( pd.DataFrame(df[column_name].str.split(sep).fillna("").tolist()) .stack() .reset_index() ) # exploded_column = column_name wdf.columns = ["id", "depth", column_name] ## plural form to singular form # wdf[column_name] = wdf[column_name].apply(lambda x: x.strip()) # trim wdf.drop("depth", axis=1, inplace=True) return pd.merge(df, wdf, on="id", suffixes=("_drop", "")).drop( columns=["id", column_name + "_drop"] ) @pf.register_dataframe_method def str_word( df, column_name: str, start: int = None, stop: int = None, pat: str = " ", *args, **kwargs ): """ Wrapper around `df.str.split` with additional `start` and `end` arguments to select a slice of the list of words. """ df[column_name] = df[column_name].str.split(pat).str[start:stop] return df @pf.register_dataframe_method def str_join(df, column_name: str, sep: str, *args, **kwargs): """ Wrapper around `df.str.join` Joins items in a list. 
""" df[column_name] = df[column_name].str.join(sep) return df @pf.register_dataframe_method def str_slice( df, column_name: str, start: int = None, stop: int = None, *args, **kwargs ): """ Wrapper around `df.str.slice """ df[column_name] = df[column_name].str[start:stop] return df clean_df = ( df.str_remove(column_name="producers", pat="\[|\]") .explode(column_name="producers", sep=",") .str_remove(column_name="producers", pat="'") .str_trim("producers") .str_remove(column_name="genre", pat="\[|\]") .explode(column_name="genre", sep=",") .str_remove(column_name="genre", pat="'") .str_trim(column_name="genre") .str_remove(column_name="studio", pat="\[|\]") .explode(column_name="studio", sep=",") .str_remove(column_name="studio", pat="'") .str_trim(column_name="studio") .str_remove(column_name="aired", pat="\{|\}|'from':\s*|'to':\s*") .str_word(column_name="aired", start=0, stop=2, pat=",") .str_join(column_name="aired", sep=",") .deconcatenate_column( column_name="aired", new_column_names=["start_date", "end_date"], sep="," ) .remove_columns(column_names=["aired"]) .str_remove(column_name="start_date", pat="'") .str_slice(column_name="start_date", start=0, stop=10) .str_remove(column_name="end_date", pat="'") .str_slice(column_name="end_date", start=0, stop=11) .to_datetime("start_date", format="%Y-%m-%d", errors="coerce") .to_datetime("end_date", format="%Y-%m-%d", errors="coerce") .fill_empty(columns=["rank", "popularity"], value=0) .filter_on("rank != 0 & popularity != 0") ) clean_df.head() ###Output _____no_output_____ ###Markdown Multi-Step >Data Dictionary>>Heads up the dataset is about 97 mb - if you want to free up some space, drop the synopsis and background, they are long strings, or broadcast, premiered, related as they are redundant or less useful.>>|variable |class |description ||:--------------|:---------|:-----------||animeID |double | Anime ID (as in https://myanimelist.net/anime/animeID) ||name |character |anime title - extracted from the site. 
||title_english |character | title in English (sometimes is different, sometimes is missing) ||title_japanese |character | title in Japanese (if Anime is Chinese or Korean, the title, if available, in the respective language) ||title_synonyms |character | other variants of the title ||type |character | anime type (e.g. TV, Movie, OVA) ||source |character | source of anime (i.e original, manga, game, music, visual novel etc.) ||producers |character | producers ||genre |character | genre ||studio |character | studio ||episodes |double | number of episodes ||status |character | Aired or not aired ||airing |logical | True/False is still airing ||start_date |double | Start date (ymd) ||end_date |double | End date (ymd) ||duration |character | Per episode duration or entire duration, text string ||rating |character | Age rating ||score |double | Score (higher = better) ||scored_by |double | Number of users that scored ||rank |double | Rank - weight according to MyAnimeList formula ||popularity |double | based on how many members/users have the respective anime in their list ||members |double | number members that added this anime in their list ||favorites |double | number members that favorites these in their list ||synopsis |character | long string with anime synopsis ||background |character | long string with production background and other things ||premiered |character | anime premiered on season/year ||broadcast |character | when is (regularly) broadcasted ||related |character | dictionary: related animes, series, games etc. 
Step 0: Load data ###Code filename = 'https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-23/raw_anime.csv' df = pd.read_csv(filename) df.head(3).T ###Output _____no_output_____ ###Markdown Step 1: Clean `producers` column The first step tries to clean up the `producers` column by removing some brackets ('[]') and trimming off some empty spaces>```>clean_df % > Producers> mutate(producers = str_remove(producers, "\\["), producers = str_remove(producers, "\\]"))>```What is mutate? This [link](https://pandas.pydata.org/pandas-docs/stable/getting_started/comparison/comparison_with_r.html) compares R's `mutate` to pandas' `df.assign`. However, `df.assign` returns a new DataFrame whereas `mutate` adds a new variable while preserving the previous ones. Therefore, for this example, I will treat `mutate` as similar to `df['col'] = X`. As we can see, this looks like a list of items, but in string form ###Code # Let's see what we're trying to remove df.loc[df['producers'].str.contains("\[", na=False), 'producers'].head() ###Output _____no_output_____ ###Markdown Let's use pandas flavor to create a custom method for just removing some strings so we don't have to use str.replace so many times. ###Code @pf.register_dataframe_method def str_remove(df, column_name: str, pat: str, *args, **kwargs): """ Wrapper around df.str.replace The function will loop through regex patterns and remove them from the desired column. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the string removal action is to be made. :param pat: A regex pattern to match and remove. """ if not isinstance(pat, str): raise TypeError( f"Pattern should be a valid regex pattern. 
Received pattern: {pat} with dtype: {type(pat)}" ) df[column_name] = df[column_name].str.replace(pat, "", *args, **kwargs) return df clean_df = ( df .str_remove(column_name='producers', pat='\[|\]') ) ###Output _____no_output_____ ###Markdown With brackets removed. ###Code clean_df['producers'].head() ###Output _____no_output_____ ###Markdown Brackets are removed. Now the next part>```> separate_rows(producers, sep = ",") %>% >```It seems like separate rows will go through each value of the column, and if the value is a list, will create a new row for each value in the list with the remaining column values being the same. This is commonly known as an `explode` method but it is not yet implemented in pandas. We will need a function for this (code adopted from [here](https://qiita.com/rikima/items/c10e27d8b7495af4c159)). ###Code @pf.register_dataframe_method def explode(df: pd.DataFrame, column_name: str, sep: str): """ For rows with a list of values, this function will create new rows for each value in the list :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the string removal action is to be made. :param sep: The delimiter. Example delimiters include `|`, `, `, `,` etc. """ df["id"] = df.index wdf = ( pd.DataFrame(df[column_name].str.split(sep).fillna("").tolist()) .stack() .reset_index() ) # exploded_column = column_name wdf.columns = ["id", "depth", column_name] ## plural form to singular form # wdf[column_name] = wdf[column_name].apply(lambda x: x.strip()) # trim wdf.drop("depth", axis=1, inplace=True) return pd.merge(df, wdf, on="id", suffixes=("_drop", "")).drop( columns=["id", column_name + "_drop"] ) clean_df = ( clean_df .explode(column_name='producers', sep=',') ) ###Output _____no_output_____ ###Markdown Now every producer is its own row. 
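As a side note, pandas 0.25+ ships a built-in `DataFrame.explode` that covers the same need as the custom stack-based helper above. Here is a minimal sketch on a made-up toy frame:

```python
import pandas as pd

df = pd.DataFrame({"name": ["Anime A", "Anime B"],
                   "producers": ["Aniplex, Madhouse", "Bones"]})

# Split the comma-separated string into a list, then give each item its own row
out = (df.assign(producers=df["producers"].str.split(","))
         .explode("producers"))
out["producers"] = out["producers"].str.strip()  # trim the leftover whitespace
```

The custom `explode` registered above keeps the notebook self-contained, but the built-in method is the idiomatic choice on recent pandas versions.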
###Code clean_df['producers'].head() ###Output _____no_output_____ ###Markdown Now remove single quotes and a bit of trimming>``` mutate(producers = str_remove(producers, "\\'"), producers = str_remove(producers, "\\'"), producers = str_trim(producers)) %>% ``` ###Code clean_df = ( clean_df .str_remove(column_name='producers', pat='\'') ) ###Output _____no_output_____ ###Markdown We'll make another custom function for trimming whitespace. ###Code @pf.register_dataframe_method def str_trim(df, column_name: str, *args, **kwargs): """Remove trailing and leading characters, in a given column""" df[column_name] = df[column_name].str.strip(*args, **kwargs) return df clean_df = clean_df.str_trim('producers') ###Output _____no_output_____ ###Markdown Finally, here is our cleaned `producers` column. ###Code clean_df['producers'].head() ###Output _____no_output_____ ###Markdown Step 2: Clean `genre` and `studio` Columns Let's do the same process for columns `Genre` and `Studio`>```> Genre mutate(genre = str_remove(genre, "\\["), genre = str_remove(genre, "\\]")) %>% separate_rows(genre, sep = ",") %>% mutate(genre = str_remove(genre, "\\'"), genre = str_remove(genre, "\\'"), genre = str_trim(genre)) %>% > Studio mutate(studio = str_remove(studio, "\\["), studio = str_remove(studio, "\\]")) %>% separate_rows(studio, sep = ",") %>% mutate(studio = str_remove(studio, "\\'"), studio = str_remove(studio, "\\'"), studio = str_trim(studio)) %>% ``` ###Code clean_df = ( clean_df # Perform operation for genre. .str_remove(column_name='genre', pat='\[|\]') .explode(column_name='genre', sep=',') .str_remove(column_name='genre', pat='\'') .str_trim(column_name='genre') # Now do it for studio .str_remove(column_name='studio', pat='\[|\]') .explode(column_name='studio', sep=',') .str_remove(column_name='studio', pat='\'') .str_trim(column_name='studio') ) ###Output _____no_output_____ ###Markdown Resulting cleaned columns. 
###Code clean_df[['genre', 'studio']].head() ###Output _____no_output_____ ###Markdown Step 3: Clean `aired` column The `aired` column has something a little different. In addition to the usual removing some strings and whitespace trimming, we want to separate the values into two separate columns `start_date` and `end_date`>```r> Aired mutate(aired = str_remove(aired, "\\{"), aired = str_remove(aired, "\\}"), aired = str_remove(aired, "'from': "), aired = str_remove(aired, "'to': "), aired = word(aired, start = 1, 2, sep = ",")) %>% separate(aired, into = c("start_date", "end_date"), sep = ",") %>% mutate(start_date = str_remove_all(start_date, "'"), start_date = str_sub(start_date, 1, 10), end_date = str_remove_all(start_date, "'"), end_date = str_sub(end_date, 1, 10)) %>% mutate(start_date = lubridate::ymd(start_date), end_date = lubridate::ymd(end_date)) %>%``` We will create some custom wrapper functions to emulate R's `word` and use pyjanitor's `deconcatenate_column`. ###Code # Currently looks like this clean_df['aired'].head() @pf.register_dataframe_method def str_word( df, column_name: str, start: int = None, stop: int = None, pat: str = " ", *args, **kwargs ): """ Wrapper around `df.str.split` with additional `start` and `end` arguments to select a slice of the list of words. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the split action is to be made. :param start: optional An `int` for the start index of the slice :param stop: optinal An `int` for the end index of the slice :param pat: String or regular expression to split on. If not specified, split on whitespace. """ df[column_name] = df[column_name].str.split(pat).str[start:stop] return df @pf.register_dataframe_method def str_join(df, column_name: str, sep: str, *args, **kwargs): """ Wrapper around `df.str.join` Joins items in a list. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the split action is to be made. 
:param sep: The delimiter. Example delimiters include `|`, `, `, `,` etc. """ df[column_name] = df[column_name].str.join(sep) return df @pf.register_dataframe_method def str_slice( df, column_name: str, start: int = None, stop: int = None, *args, **kwargs ): """ Wrapper around `df.str.slice Slices strings. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the split action is to be made. :param start: 'int' indicating start of slice. :param stop: 'int' indicating stop of slice. """ df[column_name] = df[column_name].str[start:stop] return df clean_df = ( clean_df.str_remove(column_name="aired", pat="\{|\}|'from':\s*|'to':\s*") .str_word(column_name="aired", start=0, stop=2, pat=",") .str_join(column_name="aired", sep=",") # .add_columns({'start_date': clean_df['aired'][0]}) .deconcatenate_column( column_name="aired", new_column_names=["start_date", "end_date"], sep="," ) .remove_columns(column_names=["aired"]) .str_remove(column_name="start_date", pat="'") .str_slice(column_name="start_date", start=0, stop=10) .str_remove(column_name="end_date", pat="'") .str_slice(column_name="end_date", start=0, stop=11) .to_datetime("start_date", format="%Y-%m-%d", errors="coerce") .to_datetime("end_date", format="%Y-%m-%d", errors="coerce") ) # Resulting 'start_date' and 'end_date' columns with 'aired' column removed clean_df[['start_date', 'end_date']].head() ###Output _____no_output_____ ###Markdown Step 4: Filter out unranked and unpopular series Finally, let's drop the unranked or unpopular series with pyjanitor's `filter_on`. 
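Equivalently, the fill-then-filter step can be sketched in plain pandas with `fillna` and `query` (toy data made up for the demo):

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "b", "c"],
                   "rank": [1.0, None, 3.0],
                   "popularity": [10.0, 5.0, None]})

# Replace missing ranks/popularities with 0, then keep only rows where both are nonzero
out = (df.fillna({"rank": 0, "popularity": 0})
         .query("rank != 0 and popularity != 0"))
```

pyjanitor's `fill_empty` and `filter_on` do the same thing while keeping the method-chaining style.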
###Code # First fill any NA values with 0 and then filter != 0 clean_df = clean_df.fill_empty(column_names=["rank", "popularity"], value=0).filter_on( "rank != 0 & popularity != 0" ) ###Output _____no_output_____ ###Markdown End Result ###Code clean_df.head() ###Output _____no_output_____ ###Markdown Processing Anime Data BackgroundWe will use pyjanitor to showcase how to conveniently chain methods together to perform data cleaning in one shot. We We first define and register a series of dataframe methods with pandas_flavor. Then we chain the dataframe methods together with pyjanitor methods to complete the data cleaning process. The below example shows a one-shot script followed by a step-by-step detail of each part of the method chain.We have adapted a [TidyTuesday analysis](https://github.com/rfordatascience/tidytuesday/blob/master/data/2019/2019-04-23/readme.md) that was originally performed in R. The original text from TidyTuesday will be shown in blockquotes.Note: TidyTuesday is based on the principles discussed and made popular by Hadley Wickham in his paper [Tidy Data](https://www.jstatsoft.org/index.php/jss/article/view/v059i10/v59i10.pdf).*The original text from TidyTuesday will be shown in blockquotes.*Here is a description of the Anime data set that we will use. >This week's data comes from [Tam Nguyen](https://github.com/tamdrashtri) and [MyAnimeList.net via Kaggle](https://www.kaggle.com/aludosan/myanimelist-anime-dataset-as-20190204). [According to Wikipedia](https://en.wikipedia.org/wiki/MyAnimeList) - "MyAnimeList, often abbreviated as MAL, is an anime and manga social networking and social cataloging application website. The site provides its users with a list-like system to organize and score anime and manga. It facilitates finding users who share similar tastes and provides a large database on anime and manga. The site claims to have 4.4 million anime and 775,000 manga entries. 
In 2015, the site received 120 million visitors a month.">>Anime without rankings or popularity scores were excluded. Producers, genre, and studio were converted from lists to tidy observations, so there will be repetitions of shows with multiple producers, genres, etc. The raw data is also uploaded.>>Lots of interesting ways to explore the data this week! Import libraries and load data ###Code # Import pyjanitor and pandas import janitor import pandas as pd import pandas_flavor as pf # Supress user warnings when we try overwriting our custom pandas flavor functions import warnings warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown One-Shot ###Code filename = 'https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-23/raw_anime.csv' df = pd.read_csv(filename) @pf.register_dataframe_method def str_remove(df, column_name: str, pat: str, *args, **kwargs): """Wrapper around df.str.replace""" df[column_name] = df[column_name].str.replace(pat, "", *args, **kwargs) return df @pf.register_dataframe_method def str_trim(df, column_name: str, *args, **kwargs): """Wrapper around df.str.strip""" df[column_name] = df[column_name].str.strip(*args, **kwargs) return df @pf.register_dataframe_method def explode(df: pd.DataFrame, column_name: str, sep: str): """ For rows with a list of values, this function will create new rows for each value in the list """ df["id"] = df.index wdf = ( pd.DataFrame(df[column_name].str.split(sep).fillna("").tolist()) .stack() .reset_index() ) # exploded_column = column_name wdf.columns = ["id", "depth", column_name] ## plural form to singular form # wdf[column_name] = wdf[column_name].apply(lambda x: x.strip()) # trim wdf.drop("depth", axis=1, inplace=True) return pd.merge(df, wdf, on="id", suffixes=("_drop", "")).drop( columns=["id", column_name + "_drop"] ) @pf.register_dataframe_method def str_word( df, column_name: str, start: int = None, stop: int = None, pat: str = " ", *args, **kwargs ): 
""" Wrapper around `df.str.split` with additional `start` and `end` arguments to select a slice of the list of words. """ df[column_name] = df[column_name].str.split(pat).str[start:stop] return df @pf.register_dataframe_method def str_join(df, column_name: str, sep: str, *args, **kwargs): """ Wrapper around `df.str.join` Joins items in a list. """ df[column_name] = df[column_name].str.join(sep) return df @pf.register_dataframe_method def str_slice( df, column_name: str, start: int = None, stop: int = None, *args, **kwargs ): """ Wrapper around `df.str.slice """ df[column_name] = df[column_name].str[start:stop] return df clean_df = ( df.str_remove(column_name="producers", pat="\[|\]") .explode(column_name="producers", sep=",") .str_remove(column_name="producers", pat="'") .str_trim("producers") .str_remove(column_name="genre", pat="\[|\]") .explode(column_name="genre", sep=",") .str_remove(column_name="genre", pat="'") .str_trim(column_name="genre") .str_remove(column_name="studio", pat="\[|\]") .explode(column_name="studio", sep=",") .str_remove(column_name="studio", pat="'") .str_trim(column_name="studio") .str_remove(column_name="aired", pat="\{|\}|'from':\s*|'to':\s*") .str_word(column_name="aired", start=0, stop=2, pat=",") .str_join(column_name="aired", sep=",") .deconcatenate_column( column_name="aired", new_column_names=["start_date", "end_date"], sep="," ) .remove_columns(column_names=["aired"]) .str_remove(column_name="start_date", pat="'") .str_slice(column_name="start_date", start=0, stop=10) .str_remove(column_name="end_date", pat="'") .str_slice(column_name="end_date", start=0, stop=11) .to_datetime("start_date", format="%Y-%m-%d", errors="coerce") .to_datetime("end_date", format="%Y-%m-%d", errors="coerce") .fill_empty(columns=["rank", "popularity"], value=0) .filter_on("rank != 0 & popularity != 0") ) clean_df.head() ###Output _____no_output_____ ###Markdown Multi-Step >Data Dictionary>>Heads up the dataset is about 97 mb - if you want to free up 
some space, drop the synopsis and background, they are long strings, or broadcast, premiered, related as they are redundant or less useful.>>|variable |class |description ||:--------------|:---------|:-----------||animeID |double | Anime ID (as in https://myanimelist.net/anime/animeID) ||name |character |anime title - extracted from the site. ||title_english |character | title in English (sometimes is different, sometimes is missing) ||title_japanese |character | title in Japanese (if Anime is Chinese or Korean, the title, if available, in the respective language) ||title_synonyms |character | other variants of the title ||type |character | anime type (e.g. TV, Movie, OVA) ||source |character | source of anime (i.e. original, manga, game, music, visual novel etc.) ||producers |character | producers ||genre |character | genre ||studio |character | studio ||episodes |double | number of episodes ||status |character | Aired or not aired ||airing |logical | True/False is still airing ||start_date |double | Start date (ymd) ||end_date |double | End date (ymd) ||duration |character | Per episode duration or entire duration, text string ||rating |character | Age rating ||score |double | Score (higher = better) ||scored_by |double | Number of users that scored ||rank |double | Rank - weighted according to the MyAnimeList formula ||popularity |double | based on how many members/users have the respective anime in their list ||members |double | number of members that added this anime to their list ||favorites |double | number of members that favorited it ||synopsis |character | long string with anime synopsis ||background |character | long string with production background and other things ||premiered |character | season/year the anime premiered ||broadcast |character | when it is (regularly) broadcast ||related |character | dictionary: related animes, series, games etc. 
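As the note above suggests, the long-string and redundant columns can be dropped immediately after loading to save memory. A minimal sketch on a toy frame — the column names come from the data dictionary, but the values here are invented for illustration:

```python
import pandas as pd

# Toy stand-in for the raw anime frame (values are made up).
raw = pd.DataFrame({
    "name": ["Cowboy Bebop"],
    "synopsis": ["a long synopsis string ..."],
    "background": ["a long background string ..."],
    "broadcast": ["Saturdays at 01:00 (JST)"],
    "premiered": ["Spring 1998"],
    "related": ["{'Adaptation': [...]}"],
})

# Drop the heavy / redundant columns up front to save memory.
slim = raw.drop(columns=["synopsis", "background", "broadcast", "premiered", "related"])
```

On the real 97 MB download, the same `drop` call right after `pd.read_csv` noticeably shrinks the in-memory frame.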
Step 0: Load data ###Code filename = 'https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-23/raw_anime.csv' df = pd.read_csv(filename) df.head(3).T ###Output _____no_output_____ ###Markdown Step 1: Clean `producers` column The first step tries to clean up the `producers` column by removing the brackets ('[]') and trimming off some empty spaces>```>clean_df <- raw_df %>% > # Producers> mutate(producers = str_remove(producers, "\\["), producers = str_remove(producers, "\\]"))>```What is mutate? This [link](https://pandas.pydata.org/pandas-docs/stable/getting_started/comparison/comparison_with_r.html) compares R's `mutate` to pandas' `df.assign`.However, `df.assign` returns a new DataFrame, whereas `mutate` adds a new variable while preserving the existing ones.Therefore, for this example, I will treat `mutate` as similar to `df['col'] = X` As we can see, this looks like a list of items, but in string form ###Code # Let's see what we're trying to remove df.loc[df['producers'].str.contains(r"\[", na=False), 'producers'].head() ###Output _____no_output_____ ###Markdown Let's use pandas_flavor to create a custom method for removing substrings so we don't have to call str.replace so many times. ###Code @pf.register_dataframe_method def str_remove(df, column_name: str, pat: str, *args, **kwargs): """ Wrapper around df.str.replace Removes all matches of a regex pattern from the desired column. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the string removal action is to be made. :param pat: A regex pattern to match and remove. """ if not isinstance(pat, str): raise TypeError( f"Pattern should be a valid regex pattern. 
Received pattern: {pat} with dtype: {type(pat)}" ) df[column_name] = df[column_name].str.replace(pat, "", *args, **kwargs) return df clean_df = ( df .str_remove(column_name='producers', pat='\[|\]') ) ###Output _____no_output_____ ###Markdown With brackets removed. ###Code clean_df['producers'].head() ###Output _____no_output_____ ###Markdown Brackets are removed. Now the next part>```> separate_rows(producers, sep = ",") %>% >```It seems like separate rows will go through each value of the column, and if the value is a list, will create a new row for each value in the list with the remaining column values being the same. This is commonly known as an `explode` method but it is not yet implemented in pandas. We will need a function for this (code adopted from [here](https://qiita.com/rikima/items/c10e27d8b7495af4c159)). ###Code @pf.register_dataframe_method def explode(df: pd.DataFrame, column_name: str, sep: str): """ For rows with a list of values, this function will create new rows for each value in the list :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the string removal action is to be made. :param sep: The delimiter. Example delimiters include `|`, `, `, `,` etc. """ df["id"] = df.index wdf = ( pd.DataFrame(df[column_name].str.split(sep).fillna("").tolist()) .stack() .reset_index() ) # exploded_column = column_name wdf.columns = ["id", "depth", column_name] ## plural form to singular form # wdf[column_name] = wdf[column_name].apply(lambda x: x.strip()) # trim wdf.drop("depth", axis=1, inplace=True) return pd.merge(df, wdf, on="id", suffixes=("_drop", "")).drop( columns=["id", column_name + "_drop"] ) clean_df = ( clean_df .explode(column_name='producers', sep=',') ) ###Output _____no_output_____ ###Markdown Now every producer is its own row. 
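As an aside, pandas 0.25 and later ship a built-in `DataFrame.explode`, so on recent versions the same separate-rows step can be written without the custom helper. A sketch on toy data (not the real anime frame):

```python
import pandas as pd

# Toy data mimicking the comma-separated producers strings.
toy = pd.DataFrame({"title": ["A", "B"], "producers": ["X, Y", "Z"]})

tidy = (
    toy.assign(producers=toy["producers"].str.split(","))      # string -> list
       .explode("producers")                                   # one row per producer
       .assign(producers=lambda d: d["producers"].str.strip()) # trim whitespace
       .reset_index(drop=True)
)
# tidy["producers"] is now ["X", "Y", "Z"]
```

The custom `explode` above remains useful on older pandas versions and keeps the split-then-expand step inside the method chain.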
###Code clean_df['producers'].head() ###Output _____no_output_____ ###Markdown Now remove single quotes and a bit of trimming>``` mutate(producers = str_remove(producers, "\\'"), producers = str_remove(producers, "\\'"), producers = str_trim(producers)) %>% ``` ###Code clean_df = ( clean_df .str_remove(column_name='producers', pat='\'') ) ###Output _____no_output_____ ###Markdown We'll make another custom function for trimming whitespace. ###Code @pf.register_dataframe_method def str_trim(df, column_name: str, *args, **kwargs): """Remove trailing and leading characters, in a given column""" df[column_name] = df[column_name].str.strip(*args, **kwargs) return df clean_df = clean_df.str_trim('producers') ###Output _____no_output_____ ###Markdown Finally, here is our cleaned `producers` column. ###Code clean_df['producers'].head() ###Output _____no_output_____ ###Markdown Step 2: Clean `genre` and `studio` Columns Let's do the same process for columns `Genre` and `Studio`>```> Genre mutate(genre = str_remove(genre, "\\["), genre = str_remove(genre, "\\]")) %>% separate_rows(genre, sep = ",") %>% mutate(genre = str_remove(genre, "\\'"), genre = str_remove(genre, "\\'"), genre = str_trim(genre)) %>% > Studio mutate(studio = str_remove(studio, "\\["), studio = str_remove(studio, "\\]")) %>% separate_rows(studio, sep = ",") %>% mutate(studio = str_remove(studio, "\\'"), studio = str_remove(studio, "\\'"), studio = str_trim(studio)) %>% ``` ###Code clean_df = ( clean_df # Perform operation for genre. .str_remove(column_name='genre', pat='\[|\]') .explode(column_name='genre', sep=',') .str_remove(column_name='genre', pat='\'') .str_trim(column_name='genre') # Now do it for studio .str_remove(column_name='studio', pat='\[|\]') .explode(column_name='studio', sep=',') .str_remove(column_name='studio', pat='\'') .str_trim(column_name='studio') ) ###Output _____no_output_____ ###Markdown Resulting cleaned columns. 
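One caveat about the `str_remove` wrapper used throughout these steps: since pandas 2.0, `Series.str.replace` no longer treats the pattern as a regular expression by default, so patterns like `\[|\]` need an explicit `regex=True`. A sketch:

```python
import pandas as pd

s = pd.Series(["['Madhouse', 'Bandai Visual']"])

# Pass regex=True explicitly so \[|\]|' is interpreted as a pattern
# rather than a literal string (the default changed in pandas 2.0).
cleaned = s.str.replace(r"\[|\]|'", "", regex=True).str.strip()
```

On older pandas versions the wrapper works as written; on 2.0+ the extra keyword can simply be forwarded through `*args, **kwargs`.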
###Code clean_df[['genre', 'studio']].head() ###Output _____no_output_____ ###Markdown Step 3: Clean `aired` column The `aired` column needs something a little different. In addition to the usual string removal and whitespace trimming, we want to separate the values into two separate columns, `start_date` and `end_date`>```r> # Aired> mutate(aired = str_remove(aired, "\\{"), aired = str_remove(aired, "\\}"), aired = str_remove(aired, "'from': "), aired = str_remove(aired, "'to': "), aired = word(aired, start = 1, 2, sep = ",")) %>% separate(aired, into = c("start_date", "end_date"), sep = ",") %>% mutate(start_date = str_remove_all(start_date, "'"), start_date = str_sub(start_date, 1, 10), end_date = str_remove_all(start_date, "'"), end_date = str_sub(end_date, 1, 10)) %>% mutate(start_date = lubridate::ymd(start_date), end_date = lubridate::ymd(end_date)) %>%``` We will create some custom wrapper functions to emulate R's `word` and use pyjanitor's `deconcatenate_column`. ###Code # Currently looks like this clean_df['aired'].head() @pf.register_dataframe_method def str_word( df, column_name: str, start: int = None, stop: int = None, pat: str = " ", *args, **kwargs ): """ Wrapper around `df.str.split` with additional `start` and `stop` arguments to select a slice of the list of words. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the split action is to be made. :param start: optional An `int` for the start index of the slice :param stop: optional An `int` for the end index of the slice :param pat: String or regular expression to split on. If not specified, split on whitespace. """ df[column_name] = df[column_name].str.split(pat).str[start:stop] return df @pf.register_dataframe_method def str_join(df, column_name: str, sep: str, *args, **kwargs): """ Wrapper around `df.str.join` Joins items in a list. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the split action is to be made. 
:param sep: The delimiter. Example delimiters include `|`, `, `, `,` etc. """ df[column_name] = df[column_name].str.join(sep) return df @pf.register_dataframe_method def str_slice( df, column_name: str, start: int = None, stop: int = None, *args, **kwargs ): """ Wrapper around `df.str.slice` Slices strings. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the split action is to be made. :param start: `int` indicating start of slice. :param stop: `int` indicating stop of slice. """ df[column_name] = df[column_name].str[start:stop] return df clean_df = ( clean_df.str_remove(column_name="aired", pat="\{|\}|'from':\s*|'to':\s*") .str_word(column_name="aired", start=0, stop=2, pat=",") .str_join(column_name="aired", sep=",") # .add_columns({'start_date': clean_df['aired'][0]}) .deconcatenate_column( column_name="aired", new_column_names=["start_date", "end_date"], sep="," ) .remove_columns(column_names=["aired"]) .str_remove(column_name="start_date", pat="'") .str_slice(column_name="start_date", start=0, stop=10) .str_remove(column_name="end_date", pat="'") .str_slice(column_name="end_date", start=0, stop=11) .to_datetime("start_date", format="%Y-%m-%d", errors="coerce") .to_datetime("end_date", format="%Y-%m-%d", errors="coerce") ) # Resulting 'start_date' and 'end_date' columns with 'aired' column removed clean_df[['start_date', 'end_date']].head() ###Output _____no_output_____ ###Markdown Step 4: Filter out unranked and unpopular series Finally, let's drop the unranked or unpopular series with pyjanitor's `filter_on`. 
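For readers without pyjanitor, the same fill-and-filter step can be sketched in plain pandas — toy values here, not the real frame:

```python
import pandas as pd

toy = pd.DataFrame({
    "name": ["ranked", "unranked", "unpopular"],
    "rank": [1.0, None, 3.0],
    "popularity": [5.0, 7.0, 0.0],
})

kept = (
    toy.fillna({"rank": 0, "popularity": 0})    # stands in for fill_empty(..., value=0)
       .query("rank != 0 and popularity != 0")  # stands in for filter_on(...)
)
# Only the fully ranked and popular row survives.
```

`fillna` with a dict and `query` mirror `fill_empty` and `filter_on` closely enough that the chain reads almost the same.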
###Code # First fill any NA values with 0 and then filter != 0 clean_df = clean_df.fill_empty(column_names=["rank", "popularity"], value=0).filter_on( "rank != 0 & popularity != 0" ) ###Output _____no_output_____ ###Markdown End Result ###Code clean_df.head() ###Output _____no_output_____ ###Markdown Processing Anime Data BackgroundWe will use pyjanitor to showcase how to conveniently chain methods together to perform data cleaning in one shot. We first define and register a series of dataframe methods with pandas_flavor. Then we chain the dataframe methods together with pyjanitor methods to complete the data cleaning process. The example below shows a one-shot script followed by a step-by-step walkthrough of each part of the method chain.We have adapted a [TidyTuesday analysis](https://github.com/rfordatascience/tidytuesday/blob/master/data/2019/2019-04-23/readme.md) that was originally performed in R. Note: TidyTuesday is based on the principles discussed and made popular by Hadley Wickham in his paper [Tidy Data](https://www.jstatsoft.org/index.php/jss/article/view/v059i10/v59i10.pdf).*The original text from TidyTuesday will be shown in blockquotes.*Here is a description of the Anime data set that we will use. >This week's data comes from [Tam Nguyen](https://github.com/tamdrashtri) and [MyAnimeList.net via Kaggle](https://www.kaggle.com/aludosan/myanimelist-anime-dataset-as-20190204). [According to Wikipedia](https://en.wikipedia.org/wiki/MyAnimeList) - "MyAnimeList, often abbreviated as MAL, is an anime and manga social networking and social cataloging application website. The site provides its users with a list-like system to organize and score anime and manga. It facilitates finding users who share similar tastes and provides a large database on anime and manga. The site claims to have 4.4 million anime and 775,000 manga entries. 
In 2015, the site received 120 million visitors a month.">>Anime without rankings or popularity scores were excluded. Producers, genre, and studio were converted from lists to tidy observations, so there will be repetitions of shows with multiple producers, genres, etc. The raw data is also uploaded.>>Lots of interesting ways to explore the data this week! Import libraries and load data ###Code # Import pyjanitor and pandas import janitor import pandas as pd import pandas_flavor as pf # Supress user warnings when we try overwriting our custom pandas flavor functions import warnings warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown One-Shot ###Code filename = 'https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-23/raw_anime.csv' df = pd.read_csv(filename) @pf.register_dataframe_method def str_remove(df, column_name: str, pat: str, *args, **kwargs): """Wrapper around df.str.replace""" df[column_name] = df[column_name].str.replace(pat, "", *args, **kwargs) return df @pf.register_dataframe_method def str_trim(df, column_name: str, *args, **kwargs): """Wrapper around df.str.strip""" df[column_name] = df[column_name].str.strip(*args, **kwargs) return df @pf.register_dataframe_method def explode(df: pd.DataFrame, column_name: str, sep: str): """ For rows with a list of values, this function will create new rows for each value in the list """ df["id"] = df.index wdf = ( pd.DataFrame(df[column_name].str.split(sep).fillna("").tolist()) .stack() .reset_index() ) # exploded_column = column_name wdf.columns = ["id", "depth", column_name] ## plural form to singular form # wdf[column_name] = wdf[column_name].apply(lambda x: x.strip()) # trim wdf.drop("depth", axis=1, inplace=True) return pd.merge(df, wdf, on="id", suffixes=("_drop", "")).drop( columns=["id", column_name + "_drop"] ) @pf.register_dataframe_method def str_word( df, column_name: str, start: int = None, stop: int = None, pat: str = " ", *args, **kwargs ): 
""" Wrapper around `df.str.split` with additional `start` and `end` arguments to select a slice of the list of words. """ df[column_name] = df[column_name].str.split(pat).str[start:stop] return df @pf.register_dataframe_method def str_join(df, column_name: str, sep: str, *args, **kwargs): """ Wrapper around `df.str.join` Joins items in a list. """ df[column_name] = df[column_name].str.join(sep) return df @pf.register_dataframe_method def str_slice( df, column_name: str, start: int = None, stop: int = None, *args, **kwargs ): """ Wrapper around `df.str.slice """ df[column_name] = df[column_name].str[start:stop] return df clean_df = ( df.str_remove(column_name="producers", pat="\[|\]") .explode(column_name="producers", sep=",") .str_remove(column_name="producers", pat="'") .str_trim("producers") .str_remove(column_name="genre", pat="\[|\]") .explode(column_name="genre", sep=",") .str_remove(column_name="genre", pat="'") .str_trim(column_name="genre") .str_remove(column_name="studio", pat="\[|\]") .explode(column_name="studio", sep=",") .str_remove(column_name="studio", pat="'") .str_trim(column_name="studio") .str_remove(column_name="aired", pat="\{|\}|'from':\s*|'to':\s*") .str_word(column_name="aired", start=0, stop=2, pat=",") .str_join(column_name="aired", sep=",") .deconcatenate_column( column_name="aired", new_column_names=["start_date", "end_date"], sep="," ) .remove_columns(column_names=["aired"]) .str_remove(column_name="start_date", pat="'") .str_slice(column_name="start_date", start=0, stop=10) .str_remove(column_name="end_date", pat="'") .str_slice(column_name="end_date", start=0, stop=11) .to_datetime("start_date", format="%Y-%m-%d", errors="coerce") .to_datetime("end_date", format="%Y-%m-%d", errors="coerce") .fill_empty(columns=["rank", "popularity"], value=0) .filter_on("rank != 0 & popularity != 0") ) clean_df.head() ###Output _____no_output_____ ###Markdown Multi-Step >Data Dictionary>>Heads up the dataset is about 97 mb - if you want to free up 
some space, drop the synopsis and background, they are long strings, or broadcast, premiered, related as they are redundant or less useful.>>|variable |class |description ||:--------------|:---------|:-----------||animeID |double | Anime ID (as in https://myanimelist.net/anime/animeID) ||name |character |anime title - extracted from the site. ||title_english |character | title in English (sometimes is different, sometimes is missing) ||title_japanese |character | title in Japanese (if Anime is Chinese or Korean, the title, if available, in the respective language) ||title_synonyms |character | other variants of the title ||type |character | anime type (e.g. TV, Movie, OVA) ||source |character | source of anime (i.e original, manga, game, music, visual novel etc.) ||producers |character | producers ||genre |character | genre ||studio |character | studio ||episodes |double | number of episodes ||status |character | Aired or not aired ||airing |logical | True/False is still airing ||start_date |double | Start date (ymd) ||end_date |double | End date (ymd) ||duration |character | Per episode duration or entire duration, text string ||rating |character | Age rating ||score |double | Score (higher = better) ||scored_by |double | Number of users that scored ||rank |double | Rank - weight according to MyAnimeList formula ||popularity |double | based on how many members/users have the respective anime in their list ||members |double | number members that added this anime in their list ||favorites |double | number members that favorites these in their list ||synopsis |character | long string with anime synopsis ||background |character | long string with production background and other things ||premiered |character | anime premiered on season/year ||broadcast |character | when is (regularly) broadcasted ||related |character | dictionary: related animes, series, games etc. 
Step 0: Load data ###Code filename = 'https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-23/raw_anime.csv' df = pd.read_csv(filename) df.head(3).T ###Output _____no_output_____ ###Markdown Step 1: Clean `producers` column The first step tries to clean up the `producers` column by removing some brackets ('[]') and trim off some empty spaces>```>clean_df % > Producers> mutate(producers = str_remove(producers, "\\["), producers = str_remove(producers, "\\]"))>```What is mutate? This [link](https://pandas.pydata.org/pandas-docs/stable/getting_started/comparison/comparison_with_r.html) compares R's `mutate` to be similar to pandas' `df.assign`.However, `df.assign` returns a new DataFrame whereas `mutate` adds a new variable while preserving the previous ones.Therefore, for this example, I will compare `mutate` to be similar to `df['col'] = X` As we can see, this is looks like a list of items but in string form ###Code # Let's see what we trying to remove df.loc[df['producers'].str.contains("\[", na=False), 'producers'].head() ###Output _____no_output_____ ###Markdown Let's use pandas flavor to create a custom method for just removing some strings so we don't have to use str.replace so many times. ###Code @pf.register_dataframe_method def str_remove(df, column_name: str, pat: str, *args, **kwargs): """ Wrapper around df.str.replace The function will loop through regex patterns and remove them from the desired column. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the string removal action is to be made. :param pat: A regex pattern to match and remove. """ if not isinstance(pat, str): raise TypeError( f"Pattern should be a valid regex pattern. 
Received pattern: {pat} with dtype: {type(pat)}" ) df[column_name] = df[column_name].str.replace(pat, "", *args, **kwargs) return df clean_df = ( df .str_remove(column_name='producers', pat='\[|\]') ) ###Output _____no_output_____ ###Markdown With brackets removed. ###Code clean_df['producers'].head() ###Output _____no_output_____ ###Markdown Brackets are removed. Now the next part>```> separate_rows(producers, sep = ",") %>% >```It seems like separate rows will go through each value of the column, and if the value is a list, will create a new row for each value in the list with the remaining column values being the same. This is commonly known as an `explode` method but it is not yet implemented in pandas. We will need a function for this (code adopted from [here](https://qiita.com/rikima/items/c10e27d8b7495af4c159)). ###Code @pf.register_dataframe_method def explode(df: pd.DataFrame, column_name: str, sep: str): """ For rows with a list of values, this function will create new rows for each value in the list :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the string removal action is to be made. :param sep: The delimiter. Example delimiters include `|`, `, `, `,` etc. """ df["id"] = df.index wdf = ( pd.DataFrame(df[column_name].str.split(sep).fillna("").tolist()) .stack() .reset_index() ) # exploded_column = column_name wdf.columns = ["id", "depth", column_name] ## plural form to singular form # wdf[column_name] = wdf[column_name].apply(lambda x: x.strip()) # trim wdf.drop("depth", axis=1, inplace=True) return pd.merge(df, wdf, on="id", suffixes=("_drop", "")).drop( columns=["id", column_name + "_drop"] ) clean_df = ( clean_df .explode(column_name='producers', sep=',') ) ###Output _____no_output_____ ###Markdown Now every producer is its own row. 
###Code clean_df['producers'].head() ###Output _____no_output_____ ###Markdown Now remove single quotes and a bit of trimming>``` mutate(producers = str_remove(producers, "\\'"), producers = str_remove(producers, "\\'"), producers = str_trim(producers)) %>% ``` ###Code clean_df = ( clean_df .str_remove(column_name='producers', pat='\'') ) ###Output _____no_output_____ ###Markdown We'll make another custom function for trimming whitespace. ###Code @pf.register_dataframe_method def str_trim(df, column_name: str, *args, **kwargs): """Remove trailing and leading characters, in a given column""" df[column_name] = df[column_name].str.strip(*args, **kwargs) return df clean_df = clean_df.str_trim('producers') ###Output _____no_output_____ ###Markdown Finally, here is our cleaned `producers` column. ###Code clean_df['producers'].head() ###Output _____no_output_____ ###Markdown Step 2: Clean `genre` and `studio` Columns Let's do the same process for columns `Genre` and `Studio`>```> Genre mutate(genre = str_remove(genre, "\\["), genre = str_remove(genre, "\\]")) %>% separate_rows(genre, sep = ",") %>% mutate(genre = str_remove(genre, "\\'"), genre = str_remove(genre, "\\'"), genre = str_trim(genre)) %>% > Studio mutate(studio = str_remove(studio, "\\["), studio = str_remove(studio, "\\]")) %>% separate_rows(studio, sep = ",") %>% mutate(studio = str_remove(studio, "\\'"), studio = str_remove(studio, "\\'"), studio = str_trim(studio)) %>% ``` ###Code clean_df = ( clean_df # Perform operation for genre. .str_remove(column_name='genre', pat='\[|\]') .explode(column_name='genre', sep=',') .str_remove(column_name='genre', pat='\'') .str_trim(column_name='genre') # Now do it for studio .str_remove(column_name='studio', pat='\[|\]') .explode(column_name='studio', sep=',') .str_remove(column_name='studio', pat='\'') .str_trim(column_name='studio') ) ###Output _____no_output_____ ###Markdown Resulting cleaned columns. 
###Code clean_df[['genre', 'studio']].head() ###Output _____no_output_____ ###Markdown Step 3: Clean `aired` column The `aired` column has something a little different. In addition to the usual removing some strings and whitespace trimming, we want to separate the values into two separate columns `start_date` and `end_date`>```r> Aired mutate(aired = str_remove(aired, "\\{"), aired = str_remove(aired, "\\}"), aired = str_remove(aired, "'from': "), aired = str_remove(aired, "'to': "), aired = word(aired, start = 1, 2, sep = ",")) %>% separate(aired, into = c("start_date", "end_date"), sep = ",") %>% mutate(start_date = str_remove_all(start_date, "'"), start_date = str_sub(start_date, 1, 10), end_date = str_remove_all(start_date, "'"), end_date = str_sub(end_date, 1, 10)) %>% mutate(start_date = lubridate::ymd(start_date), end_date = lubridate::ymd(end_date)) %>%``` We will create some custom wrapper functions to emulate R's `word` and use pyjanitor's `deconcatenate_column`. ###Code # Currently looks like this clean_df['aired'].head() @pf.register_dataframe_method def str_word( df, column_name: str, start: int = None, stop: int = None, pat: str = " ", *args, **kwargs ): """ Wrapper around `df.str.split` with additional `start` and `end` arguments to select a slice of the list of words. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the split action is to be made. :param start: optional An `int` for the start index of the slice :param stop: optinal An `int` for the end index of the slice :param pat: String or regular expression to split on. If not specified, split on whitespace. """ df[column_name] = df[column_name].str.split(pat).str[start:stop] return df @pf.register_dataframe_method def str_join(df, column_name: str, sep: str, *args, **kwargs): """ Wrapper around `df.str.join` Joins items in a list. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the split action is to be made. 
:param sep: The delimiter. Example delimiters include `|`, `, `, `,` etc. """ df[column_name] = df[column_name].str.join(sep) return df @pf.register_dataframe_method def str_slice( df, column_name: str, start: int = None, stop: int = None, *args, **kwargs ): """ Wrapper around `df.str.slice Slices strings. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the split action is to be made. :param start: 'int' indicating start of slice. :param stop: 'int' indicating stop of slice. """ df[column_name] = df[column_name].str[start:stop] return df clean_df = ( clean_df.str_remove(column_name="aired", pat="\{|\}|'from':\s*|'to':\s*") .str_word(column_name="aired", start=0, stop=2, pat=",") .str_join(column_name="aired", sep=",") # .add_columns({'start_date': clean_df['aired'][0]}) .deconcatenate_column( column_name="aired", new_column_names=["start_date", "end_date"], sep="," ) .remove_columns(column_names=["aired"]) .str_remove(column_name="start_date", pat="'") .str_slice(column_name="start_date", start=0, stop=10) .str_remove(column_name="end_date", pat="'") .str_slice(column_name="end_date", start=0, stop=11) .to_datetime("start_date", format="%Y-%m-%d", errors="coerce") .to_datetime("end_date", format="%Y-%m-%d", errors="coerce") ) # Resulting 'start_date' and 'end_date' columns with 'aired' column removed clean_df[['start_date', 'end_date']].head() ###Output _____no_output_____ ###Markdown Step 4: Filter out unranked and unpopular series Finally, let's drop the unranked or unpopular series with pyjanitor's `filter_on`. 
###Code # First fill any NA values with 0 and then filter != 0 clean_df = clean_df.fill_empty(column_names=["rank", "popularity"], value=0).filter_on( "rank != 0 & popularity != 0" ) ###Output _____no_output_____ ###Markdown End Result ###Code clean_df.head() ###Output _____no_output_____ ###Markdown Processing Anime Data BackgroundWe will use pyjanitor to showcase how to conveniently chain methods together to perform data cleaning in one shot. We We first define and register a series of dataframe methods with pandas_flavor. Then we chain the dataframe methods together with pyjanitor methods to complete the data cleaning process. The below example shows a one-shot script followed by a step-by-step detail of each part of the method chain.We have adapted a [TidyTuesday analysis](https://github.com/rfordatascience/tidytuesday/blob/master/data/2019/2019-04-23/readme.md) that was originally performed in R. The original text from TidyTuesday will be shown in blockquotes.Note: TidyTuesday is based on the principles discussed and made popular by Hadley Wickham in his paper [Tidy Data](https://www.jstatsoft.org/index.php/jss/article/view/v059i10/v59i10.pdf).*The original text from TidyTuesday will be shown in blockquotes.*Here is a description of the Anime data set that we will use. >This week's data comes from [Tam Nguyen](https://github.com/tamdrashtri) and [MyAnimeList.net via Kaggle](https://www.kaggle.com/aludosan/myanimelist-anime-dataset-as-20190204). [According to Wikipedia](https://en.wikipedia.org/wiki/MyAnimeList) - "MyAnimeList, often abbreviated as MAL, is an anime and manga social networking and social cataloging application website. The site provides its users with a list-like system to organize and score anime and manga. It facilitates finding users who share similar tastes and provides a large database on anime and manga. The site claims to have 4.4 million anime and 775,000 manga entries. 
In 2015, the site received 120 million visitors a month.">>Anime without rankings or popularity scores were excluded. Producers, genre, and studio were converted from lists to tidy observations, so there will be repetitions of shows with multiple producers, genres, etc. The raw data is also uploaded.>>Lots of interesting ways to explore the data this week! Import libraries and load data ###Code # Import pyjanitor and pandas import janitor import pandas as pd import pandas_flavor as pf # Supress user warnings when we try overwriting our custom pandas flavor functions import warnings warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown One-Shot ###Code filename = 'https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-23/raw_anime.csv' df = pd.read_csv(filename) @pf.register_dataframe_method def str_remove(df, column_name: str, pat: str, *args, **kwargs): """Wrapper around df.str.replace""" df[column_name] = df[column_name].str.replace(pat, "", *args, **kwargs) return df @pf.register_dataframe_method def str_trim(df, column_name: str, *args, **kwargs): """Wrapper around df.str.strip""" df[column_name] = df[column_name].str.strip(*args, **kwargs) return df @pf.register_dataframe_method def explode(df: pd.DataFrame, column_name: str, sep: str): """ For rows with a list of values, this function will create new rows for each value in the list """ df["id"] = df.index wdf = ( pd.DataFrame(df[column_name].str.split(sep).fillna("").tolist()) .stack() .reset_index() ) # exploded_column = column_name wdf.columns = ["id", "depth", column_name] # plural form to singular form # wdf[column_name] = wdf[column_name].apply(lambda x: x.strip()) # trim wdf.drop("depth", axis=1, inplace=True) return pd.merge(df, wdf, on="id", suffixes=("_drop", "")).drop( columns=["id", column_name + "_drop"] ) @pf.register_dataframe_method def str_word( df, column_name: str, start: int = None, stop: int = None, pat: str = " ", *args, **kwargs ): 
""" Wrapper around `df.str.split` with additional `start` and `end` arguments to select a slice of the list of words. """ df[column_name] = df[column_name].str.split(pat).str[start:stop] return df @pf.register_dataframe_method def str_join(df, column_name: str, sep: str, *args, **kwargs): """ Wrapper around `df.str.join` Joins items in a list. """ df[column_name] = df[column_name].str.join(sep) return df @pf.register_dataframe_method def str_slice( df, column_name: str, start: int = None, stop: int = None, *args, **kwargs ): """ Wrapper around `df.str.slice """ df[column_name] = df[column_name].str[start:stop] return df clean_df = ( df.str_remove(column_name="producers", pat="\[|\]") .explode(column_name="producers", sep=",") .str_remove(column_name="producers", pat="'") .str_trim("producers") .str_remove(column_name="genre", pat="\[|\]") .explode(column_name="genre", sep=",") .str_remove(column_name="genre", pat="'") .str_trim(column_name="genre") .str_remove(column_name="studio", pat="\[|\]") .explode(column_name="studio", sep=",") .str_remove(column_name="studio", pat="'") .str_trim(column_name="studio") .str_remove(column_name="aired", pat="\{|\}|'from':\s*|'to':\s*") .str_word(column_name="aired", start=0, stop=2, pat=",") .str_join(column_name="aired", sep=",") .deconcatenate_column( column_name="aired", new_column_names=["start_date", "end_date"], sep="," ) .remove_columns(column_names=["aired"]) .str_remove(column_name="start_date", pat="'") .str_slice(column_name="start_date", start=0, stop=10) .str_remove(column_name="end_date", pat="'") .str_slice(column_name="end_date", start=0, stop=11) .to_datetime("start_date", format="%Y-%m-%d", errors="coerce") .to_datetime("end_date", format="%Y-%m-%d", errors="coerce") .fill_empty(columns=["rank", "popularity"], value=0) .filter_on("rank != 0 & popularity != 0") ) clean_df.head() ###Output _____no_output_____ ###Markdown Multi-Step >Data Dictionary>>Heads up the dataset is about 97 mb - if you want to free up 
some space, drop the synopsis and background, they are long strings, or broadcast, premiered, related as they are redundant or less useful.>>|variable |class |description ||:--------------|:---------|:-----------||animeID |double | Anime ID (as in https://myanimelist.net/anime/animeID) ||name |character |anime title - extracted from the site. ||title_english |character | title in English (sometimes is different, sometimes is missing) ||title_japanese |character | title in Japanese (if Anime is Chinese or Korean, the title, if available, in the respective language) ||title_synonyms |character | other variants of the title ||type |character | anime type (e.g. TV, Movie, OVA) ||source |character | source of anime (i.e original, manga, game, music, visual novel etc.) ||producers |character | producers ||genre |character | genre ||studio |character | studio ||episodes |double | number of episodes ||status |character | Aired or not aired ||airing |logical | True/False is still airing ||start_date |double | Start date (ymd) ||end_date |double | End date (ymd) ||duration |character | Per episode duration or entire duration, text string ||rating |character | Age rating ||score |double | Score (higher = better) ||scored_by |double | Number of users that scored ||rank |double | Rank - weight according to MyAnimeList formula ||popularity |double | based on how many members/users have the respective anime in their list ||members |double | number members that added this anime in their list ||favorites |double | number members that favorites these in their list ||synopsis |character | long string with anime synopsis ||background |character | long string with production background and other things ||premiered |character | anime premiered on season/year ||broadcast |character | when is (regularly) broadcasted ||related |character | dictionary: related animes, series, games etc. 
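The note at the top of the data dictionary suggests dropping the long-string and redundant columns to save memory; a minimal sketch of that trim (the tiny frame here is a made-up stand-in — only the column names come from the dictionary above):

```python
import pandas as pd

# Tiny stand-in frame containing the heavy columns named in the data dictionary
df = pd.DataFrame({
    "name": ["Cowboy Bebop"],
    "synopsis": ["a very long string ..."],
    "background": ["another very long string ..."],
    "broadcast": ["Saturdays at 01:00 (JST)"],
    "premiered": ["Spring 1998"],
    "related": ["{'Adaptation': [...]}"],
})

# Drop the long-string and redundant columns in one call
heavy_cols = ["synopsis", "background", "broadcast", "premiered", "related"]
slim = df.drop(columns=heavy_cols)
print(list(slim.columns))  # → ['name']
```

On the real 97 MB frame the same `df.drop(columns=...)` call applies unchanged.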
Step 0: Load data ###Code filename = 'https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-23/raw_anime.csv' df = pd.read_csv(filename) df.head(3).T ###Output _____no_output_____ ###Markdown Step 1: Clean `producers` column The first step tries to clean up the `producers` column by removing some brackets ('[]') and trimming off some empty spaces>```>clean_df % > Producers> mutate(producers = str_remove(producers, "\\["), producers = str_remove(producers, "\\]"))>```What is mutate? This [link](https://pandas.pydata.org/pandas-docs/stable/getting_started/comparison/comparison_with_r.html) compares R's `mutate` to be similar to pandas' `df.assign`. However, `df.assign` returns a new DataFrame whereas `mutate` adds a new variable while preserving the previous ones. Therefore, for this example, I will treat `mutate` as similar to `df['col'] = X` As we can see, this looks like a list of items but in string form ###Code # Let's see what we're trying to remove df.loc[df['producers'].str.contains("\[", na=False), 'producers'].head() ###Output _____no_output_____ ###Markdown Let's use pandas flavor to create a custom method for just removing some strings so we don't have to use str.replace so many times. ###Code @pf.register_dataframe_method def str_remove(df, column_name: str, pat: str, *args, **kwargs): """ Wrapper around df.str.replace The function will loop through regex patterns and remove them from the desired column. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the string removal action is to be made. :param pat: A regex pattern to match and remove. """ if not isinstance(pat, str): raise TypeError( f"Pattern should be a valid regex pattern.
Received pattern: {pat} with dtype: {type(pat)}" ) df[column_name] = df[column_name].str.replace(pat, "", *args, **kwargs) return df clean_df = ( df .str_remove(column_name='producers', pat='\[|\]') ) ###Output _____no_output_____ ###Markdown With brackets removed. ###Code clean_df['producers'].head() ###Output _____no_output_____ ###Markdown Brackets are removed. Now the next part>```> separate_rows(producers, sep = ",") %>% >```It seems like separate rows will go through each value of the column, and if the value is a list, will create a new row for each value in the list with the remaining column values being the same. This is commonly known as an `explode` method but it is not yet implemented in pandas. We will need a function for this (code adapted from [here](https://qiita.com/rikima/items/c10e27d8b7495af4c159)). ###Code @pf.register_dataframe_method def explode(df: pd.DataFrame, column_name: str, sep: str): """ For rows with a list of values, this function will create new rows for each value in the list :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the string removal action is to be made. :param sep: The delimiter. Example delimiters include `|`, `, `, `,` etc. """ df["id"] = df.index wdf = ( pd.DataFrame(df[column_name].str.split(sep).fillna("").tolist()) .stack() .reset_index() ) # exploded_column = column_name wdf.columns = ["id", "depth", column_name] # plural form to singular form # wdf[column_name] = wdf[column_name].apply(lambda x: x.strip()) # trim wdf.drop("depth", axis=1, inplace=True) return pd.merge(df, wdf, on="id", suffixes=("_drop", "")).drop( columns=["id", column_name + "_drop"] ) clean_df = ( clean_df .explode(column_name='producers', sep=',') ) ###Output _____no_output_____ ###Markdown Now every producer is its own row.
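For what it's worth, pandas later grew a built-in `DataFrame.explode` (version 0.25+), so on newer pandas the split-then-expand step above can be written without a custom helper; a sketch on a toy column:

```python
import pandas as pd

# Toy frame with a comma-separated list column, like the raw `producers`
toy = pd.DataFrame({"title": ["A", "B"],
                    "producers": ["Aniplex, Dentsu", "Madhouse"]})

exploded = (
    toy.assign(producers=toy["producers"].str.split(","))   # string -> list
       .explode("producers")                                # one row per item
       .assign(producers=lambda d: d["producers"].str.strip())  # trim spaces
)
print(exploded["producers"].tolist())  # → ['Aniplex', 'Dentsu', 'Madhouse']
```

The custom `explode` above remains useful on the pandas version the notebook was written against.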
###Code clean_df['producers'].head() ###Output _____no_output_____ ###Markdown Now remove single quotes and a bit of trimming>``` mutate(producers = str_remove(producers, "\\'"), producers = str_remove(producers, "\\'"), producers = str_trim(producers)) %>% ``` ###Code clean_df = ( clean_df .str_remove(column_name='producers', pat='\'') ) ###Output _____no_output_____ ###Markdown We'll make another custom function for trimming whitespace. ###Code @pf.register_dataframe_method def str_trim(df, column_name: str, *args, **kwargs): """Remove trailing and leading characters, in a given column""" df[column_name] = df[column_name].str.strip(*args, **kwargs) return df clean_df = clean_df.str_trim('producers') ###Output _____no_output_____ ###Markdown Finally, here is our cleaned `producers` column. ###Code clean_df['producers'].head() ###Output _____no_output_____ ###Markdown Step 2: Clean `genre` and `studio` Columns Let's do the same process for columns `Genre` and `Studio`>```> Genre mutate(genre = str_remove(genre, "\\["), genre = str_remove(genre, "\\]")) %>% separate_rows(genre, sep = ",") %>% mutate(genre = str_remove(genre, "\\'"), genre = str_remove(genre, "\\'"), genre = str_trim(genre)) %>% > Studio mutate(studio = str_remove(studio, "\\["), studio = str_remove(studio, "\\]")) %>% separate_rows(studio, sep = ",") %>% mutate(studio = str_remove(studio, "\\'"), studio = str_remove(studio, "\\'"), studio = str_trim(studio)) %>% ``` ###Code clean_df = ( clean_df # Perform operation for genre. .str_remove(column_name='genre', pat='\[|\]') .explode(column_name='genre', sep=',') .str_remove(column_name='genre', pat='\'') .str_trim(column_name='genre') # Now do it for studio .str_remove(column_name='studio', pat='\[|\]') .explode(column_name='studio', sep=',') .str_remove(column_name='studio', pat='\'') .str_trim(column_name='studio') ) ###Output _____no_output_____ ###Markdown Resulting cleaned columns. 
###Code clean_df[['genre', 'studio']].head() ###Output _____no_output_____ ###Markdown Step 3: Clean `aired` column The `aired` column has something a little different. In addition to the usual string removal and whitespace trimming, we want to separate the values into two separate columns `start_date` and `end_date`>```r> Aired mutate(aired = str_remove(aired, "\\{"), aired = str_remove(aired, "\\}"), aired = str_remove(aired, "'from': "), aired = str_remove(aired, "'to': "), aired = word(aired, start = 1, 2, sep = ",")) %>% separate(aired, into = c("start_date", "end_date"), sep = ",") %>% mutate(start_date = str_remove_all(start_date, "'"), start_date = str_sub(start_date, 1, 10), end_date = str_remove_all(start_date, "'"), end_date = str_sub(end_date, 1, 10)) %>% mutate(start_date = lubridate::ymd(start_date), end_date = lubridate::ymd(end_date)) %>%``` We will create some custom wrapper functions to emulate R's `word` and use pyjanitor's `deconcatenate_column`. ###Code # Currently looks like this clean_df['aired'].head() @pf.register_dataframe_method def str_word( df, column_name: str, start: int = None, stop: int = None, pat: str = " ", *args, **kwargs ): """ Wrapper around `df.str.split` with additional `start` and `stop` arguments to select a slice of the list of words. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the split action is to be made. :param start: optional An `int` for the start index of the slice :param stop: optional An `int` for the end index of the slice :param pat: String or regular expression to split on. If not specified, split on whitespace. """ df[column_name] = df[column_name].str.split(pat).str[start:stop] return df @pf.register_dataframe_method def str_join(df, column_name: str, sep: str, *args, **kwargs): """ Wrapper around `df.str.join` Joins items in a list. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the join action is to be made.
:param sep: The delimiter. Example delimiters include `|`, `, `, `,` etc. """ df[column_name] = df[column_name].str.join(sep) return df @pf.register_dataframe_method def str_slice( df, column_name: str, start: int = None, stop: int = None, *args, **kwargs ): """ Wrapper around `df.str.slice Slices strings. :param df: A pandas DataFrame. :param column_name: A `str` indicating which column the split action is to be made. :param start: 'int' indicating start of slice. :param stop: 'int' indicating stop of slice. """ df[column_name] = df[column_name].str[start:stop] return df clean_df = ( clean_df.str_remove(column_name="aired", pat="\{|\}|'from':\s*|'to':\s*") .str_word(column_name="aired", start=0, stop=2, pat=",") .str_join(column_name="aired", sep=",") # .add_columns({'start_date': clean_df['aired'][0]}) .deconcatenate_column( column_name="aired", new_column_names=["start_date", "end_date"], sep="," ) .remove_columns(column_names=["aired"]) .str_remove(column_name="start_date", pat="'") .str_slice(column_name="start_date", start=0, stop=10) .str_remove(column_name="end_date", pat="'") .str_slice(column_name="end_date", start=0, stop=11) .to_datetime("start_date", format="%Y-%m-%d", errors="coerce") .to_datetime("end_date", format="%Y-%m-%d", errors="coerce") ) # Resulting 'start_date' and 'end_date' columns with 'aired' column removed clean_df[['start_date', 'end_date']].head() ###Output _____no_output_____ ###Markdown Step 4: Filter out unranked and unpopular series Finally, let's drop the unranked or unpopular series with pyjanitor's `filter_on`. ###Code # First fill any NA values with 0 and then filter != 0 clean_df = clean_df.fill_empty(column_names=["rank", "popularity"], value=0).filter_on( "rank != 0 & popularity != 0" ) ###Output _____no_output_____ ###Markdown End Result ###Code clean_df.head() ###Output _____no_output_____
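As an aside: the whole remove/split/slice dance in Step 3 could also be collapsed into a single `str.extract` with named groups. A hedged sketch (the regex is written against the value format shown above, not validated against the full dataset):

```python
import pandas as pd

# Sample values in the same shape as the raw `aired` column
aired = pd.Series(["{'from': '1998-04-03', 'to': '1999-04-24'}",
                   "{'from': '2001-07-20', 'to': None}"])

# Named groups become column names; the 'to' part is optional for open-ended runs
pattern = (r"'from':\s*'(?P<start_date>\d{4}-\d{2}-\d{2})'"
           r".*?'to':\s*(?:'(?P<end_date>\d{4}-\d{2}-\d{2})')?")
dates = aired.str.extract(pattern)
dates = dates.apply(lambda col: pd.to_datetime(col, errors="coerce"))
print(dates)
```

A missing end date simply comes out as `NaT`, which matches the `errors="coerce"` behavior used in the pipeline above.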
other/deprecated/tmp9.ipynb
###Markdown Load the data set * There are 1,225,029 training images and 117,703 test images in total. * 14,951 landmarks in total ###Code import os import shutil import time import numpy as np import pandas as pd import requests train = pd.read_csv('./data/train.csv') print('Train:\t\t', train.shape) ###Output Train: (1225029, 3) ###Markdown Download Images ###Code def fetch_image(url): """ Get image from given url """ response = requests.get(url, stream=True) with open('./data/image_9.jpg', 'wb') as out_file: shutil.copyfileobj(response.raw, out_file) del response # Download images to ./data/tmp9/ urls = train['url'].values t0 = time.time() # Loop through urls to download images for idx in range(900000, 1000000): url = urls[idx] # Check if already downloaded if os.path.exists('./data/tmp9/' + str(idx) + '.jpg'): continue # Get image from url fetch_image(url) os.rename('./data/image_9.jpg', './data/tmp9/'+ str(idx) + '.jpg') # Helpful information if idx % 10000 == 0: t = time.time() - t0 print('\nProcess: {:9d}'.format(idx), ' Used time: {} s'.format(np.round(t, 0))) t0 = time.time() if idx % 125 == 0: print('=', end='') ###Output Process: 900000 Used time: 0.0 s ================================================================================ Process: 910000 Used time: 4563.0 s ================================================================================ Process: 920000 Used time: 4607.0 s ================================================================================ Process: 930000 Used time: 4859.0 s ================================================================================ Process: 940000 Used time: 4302.0 s ================================================================================ Process: 950000 Used time: 4640.0 s ================================================================================ Process: 960000 Used time: 4409.0 s ================================================================================ Process: 970000 Used time: 4534.0 s ================================================================================ Process: 980000
Used time: 4536.0 s ================================================================================ Process: 990000 Used time: 4381.0 s ================================================================================
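The download loop above will hang on a stalled connection and crash on a bad URL (no timeout, no error handling), and it round-trips every file through a shared temp name. A hedged sketch of a more defensive variant — written with stdlib `urllib` so it stands alone (the notebook itself uses `requests`), keeping the `./data/tmp9/<idx>.jpg` layout as an assumption:

```python
import urllib.request
from pathlib import Path

def dest_path(idx: int, root: str = "./data/tmp9") -> Path:
    """Target file for image number `idx`, mirroring the naming in the loop above."""
    return Path(root) / f"{idx}.jpg"

def fetch_image(url: str, dest: Path, timeout: float = 10.0) -> bool:
    """Download `url` straight to `dest`; return False on any failure instead of raising."""
    if dest.exists():            # already downloaded -> nothing to do
        return True
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            dest.parent.mkdir(parents=True, exist_ok=True)
            dest.write_bytes(resp.read())
        return True
    except Exception:            # bad URL, timeout, HTTP error, ...
        return False

print(dest_path(42))  # e.g. data/tmp9/42.jpg on POSIX
```

Writing directly to the per-index path also removes the `os.rename` step, so two concurrent runs cannot clobber each other's temp file.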
Supervised_Learning/titanic_survival_exploration.ipynb
###Markdown Lab: Titanic Survival Exploration with Decision Trees Getting Started In this lab, you will see how decision trees work by implementing a decision tree in sklearn. We'll start by loading the dataset and displaying some of its rows. ###Code # Import libraries necessary for this project import numpy as np import pandas as pd from IPython.display import display # Allows the use of display() for DataFrames # Pretty display for notebooks %matplotlib inline # Set a random seed import random random.seed(42) # Load the dataset in_file = 'titanic_data.csv' full_data = pd.read_csv(in_file) # Print the first few entries of the RMS Titanic data display(full_data.head()) ###Output _____no_output_____ ###Markdown Recall that these are the various features present for each passenger on the ship:- **Survived**: Outcome of survival (0 = No; 1 = Yes)- **Pclass**: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)- **Name**: Name of passenger- **Sex**: Sex of the passenger- **Age**: Age of the passenger (Some entries contain `NaN`)- **SibSp**: Number of siblings and spouses of the passenger aboard- **Parch**: Number of parents and children of the passenger aboard- **Ticket**: Ticket number of the passenger- **Fare**: Fare paid by the passenger- **Cabin**: Cabin number of the passenger (Some entries contain `NaN`)- **Embarked**: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton) Since we're interested in the outcome of survival for each passenger or crew member, we can remove the **Survived** feature from this dataset and store it as its own separate variable `outcomes`. We will use these outcomes as our prediction targets. Run the code cell below to remove **Survived** as a feature of the dataset and store it in `outcomes`.
###Code # Store the 'Survived' feature in a new variable and remove it from the dataset outcomes = full_data['Survived'] features_raw = full_data.drop('Survived', axis = 1) # Show the new dataset with 'Survived' removed display(features_raw.head()) ###Output _____no_output_____ ###Markdown The very same sample of the RMS Titanic data now shows the **Survived** feature removed from the DataFrame. Note that `data` (the passenger data) and `outcomes` (the outcomes of survival) are now *paired*. That means for any passenger `data.loc[i]`, they have the survival outcome `outcomes[i]`. Preprocessing the data Now, let's do some data preprocessing. First, we'll remove the names of the passengers, and then one-hot encode the features. **Question:** Why would it be a terrible idea to one-hot encode the data without removing the names? ###Code # Removing the names features_no_names = features_raw.drop(['Name'], axis=1) # One-hot encoding features = pd.get_dummies(features_no_names) ###Output _____no_output_____ ###Markdown And now we'll fill in any blanks with zeroes. ###Code features = features.fillna(0.0) display(features.head()) ###Output _____no_output_____ ###Markdown (TODO) Training the model Now we're ready to train a model in sklearn. First, let's split the data into training and testing sets. Then we'll train the model on the training set. ###Code from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(features, outcomes, test_size=0.2, random_state=42) # Import the classifier from sklearn from sklearn.tree import DecisionTreeClassifier # TODO: Define the classifier, and fit it to the data model = DecisionTreeClassifier() model.fit(X_train,y_train) ###Output _____no_output_____ ###Markdown Testing the model Now, let's see how our model does; let's calculate the accuracy over both the training and the testing set.
###Code # Making predictions y_train_pred = model.predict(X_train) y_test_pred = model.predict(X_test) # Calculate the accuracy from sklearn.metrics import accuracy_score train_accuracy = accuracy_score(y_train, y_train_pred) test_accuracy = accuracy_score(y_test, y_test_pred) print('The training accuracy is', train_accuracy) print('The test accuracy is', test_accuracy) ###Output The training accuracy is 1.0 The test accuracy is 0.815642458101 ###Markdown Exercise: Improving the model Ok, high training accuracy and a lower testing accuracy. We may be overfitting a bit. So now it's your turn to shine! Train a new model, and try to specify some parameters in order to improve the testing accuracy, such as:- `max_depth`- `min_samples_leaf`- `min_samples_split` You can use your intuition, trial and error, or even better, feel free to use Grid Search! **Challenge:** Try to get to 85% accuracy on the testing set. If you'd like a hint, take a look at the solutions notebook next. ###Code # TODO: Train the model # Import the classifier from sklearn from sklearn.tree import DecisionTreeClassifier # TODO: Define the classifier, and fit it to the data model = DecisionTreeClassifier(max_depth = 9, min_samples_leaf = 5, min_samples_split = 12) model.fit(X_train,y_train) # TODO: Make predictions # Making predictions y_train_pred = model.predict(X_train) y_test_pred = model.predict(X_test) # TODO: Calculate the accuracy # Calculate the accuracy from sklearn.metrics import accuracy_score train_accuracy = accuracy_score(y_train, y_train_pred) test_accuracy = accuracy_score(y_test, y_test_pred) print('The training accuracy is', train_accuracy) print('The test accuracy is', test_accuracy) ###Output The training accuracy is 0.880617977528 The test accuracy is 0.854748603352
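The exercise above suggests Grid Search as the better alternative to hand-tuning; one way to wire it up is sketched below on synthetic data (so it runs without the Titanic CSV — the grid values are illustrative, not the ones known to reach 85%):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the Titanic features so the snippet runs on its own
X, y = make_classification(n_samples=600, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# Grid over the three parameters named in the exercise
param_grid = {
    "max_depth": [3, 6, 9],
    "min_samples_leaf": [1, 5, 10],
    "min_samples_split": [2, 6, 12],
}
search = GridSearchCV(DecisionTreeClassifier(random_state=42), param_grid, cv=5)
search.fit(X_tr, y_tr)
print(search.best_params_)
print("test accuracy:", search.score(X_te, y_te))
```

`GridSearchCV` refits the best estimator on the full training split, so `search.score` evaluates the winning parameter combination directly.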
_notebooks/2021-05-31-missing-values.ipynb
###Markdown Missing values in scikit-learn ###Code #code adapted from https://github.com/thomasjpfan/ml-workshop-intermediate-1-of-2 ###Output _____no_output_____ ###Markdown SimpleImputer ###Code from sklearn.impute import SimpleImputer import numpy as np import sklearn sklearn.set_config(display='diagram') import pandas as pd url = 'https://raw.githubusercontent.com/davidrkearney/Kearney_Data_Science/master/_notebooks/df_panel_fix.csv' df = pd.read_csv(url, error_bad_lines=False) df import pandas as pd import sklearn from sklearn.datasets import fetch_openml from sklearn.model_selection import train_test_split df.columns sklearn.set_config(display='diagram') X, y = df.drop(['it', 'Unnamed: 0'], axis = 1), df['it'] X = X.select_dtypes(include='number') X _ = X.hist(figsize=(30, 15), layout=(5, 8)) df.isnull().sum() ###Output _____no_output_____ ###Markdown Default uses mean ###Code imputer = SimpleImputer() imputer.fit_transform(X) df.isnull().sum() ###Output _____no_output_____ ###Markdown Add indicator! 
###Code imputer = SimpleImputer(add_indicator=True) imputer.fit_transform(X) df.isnull().sum() ###Output _____no_output_____ ###Markdown Other strategies ###Code imputer = SimpleImputer(strategy='median') imputer.fit_transform(X) imputer = SimpleImputer(strategy='most_frequent') imputer.fit_transform(X) ###Output _____no_output_____ ###Markdown Categorical data ###Code import pandas as pd imputer = SimpleImputer(strategy='constant', fill_value='sk_missing') imputer.fit_transform(df) ###Output _____no_output_____ ###Markdown pandas categorical ###Code df['a'] = df['a'].astype('category') df df.dtypes imputer.fit_transform(df) # %load solutions/03-ex01-solutions.py from sklearn.datasets import fetch_openml cancer = fetch_openml(data_id=15, as_frame=True) X, y = cancer.data, cancer.target X.shape X.isna().sum() imputer = SimpleImputer(add_indicator=True) X_imputed = imputer.fit_transform(X) X_imputed.shape from sklearn.linear_model import LogisticRegression from sklearn.preprocessing import StandardScaler from sklearn.pipeline import make_pipeline from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split( X, y, random_state=42, stratify=y ) log_reg = make_pipeline( SimpleImputer(add_indicator=True), StandardScaler(), LogisticRegression(random_state=0) ) log_reg.fit(X_train, y_train) log_reg.score(X_test, y_test) ###Output _____no_output_____ ###Markdown HistGradientBoosting Native support for missing values ###Code from sklearn.experimental import enable_hist_gradient_boosting from sklearn.ensemble import HistGradientBoostingClassifier hist = HistGradientBoostingClassifier(random_state=42) hist.fit(X_train, y_train) hist.score(X_test, y_test) ###Output _____no_output_____ ###Markdown Grid searching the imputer ###Code from sklearn.model_selection import GridSearchCV from sklearn.pipeline import Pipeline from sklearn.ensemble import RandomForestClassifier iris = pd.read_csv('data/iris_w_missing.csv') iris.head() X = 
iris.drop('target', axis='columns') y = iris['target'] X_train, X_test, y_train, y_test = train_test_split( X, y, random_state=0, stratify=y ) pipe = Pipeline([ ('imputer', SimpleImputer(add_indicator=True)), ('rf', RandomForestClassifier(random_state=42)) ]) ###Output _____no_output_____ ###Markdown scikit-learn uses `get_params` to find names ###Code pipe.get_params() ###Output _____no_output_____ ###Markdown Is it better to add the indicator? ###Code from sklearn.model_selection import GridSearchCV params = { 'imputer__add_indicator': [True, False] } grid_search = GridSearchCV(pipe, param_grid=params, verbose=1) grid_search.fit(X_train, y_train) grid_search.best_params_ grid_search.best_score_ grid_search.score(X_test, y_test) ###Output _____no_output_____ ###Markdown Compare to `make_pipeline` ###Code from sklearn.pipeline import make_pipeline pipe2 = make_pipeline(SimpleImputer(add_indicator=True), RandomForestClassifier(random_state=42)) pipe2.get_params() ###Output _____no_output_____ ###Markdown Which imputer to use? ###Code from sklearn.impute import KNNImputer from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import RandomForestRegressor from sklearn.experimental import enable_iterative_imputer from sklearn.impute import IterativeImputer params = { 'imputer': [ SimpleImputer(strategy='median', add_indicator=True), SimpleImputer(strategy='mean', add_indicator=True), KNNImputer(add_indicator=True), IterativeImputer(estimator=RandomForestRegressor(random_state=42), random_state=42, add_indicator=True)] } search_cv = GridSearchCV(pipe, param_grid=params, verbose=1, n_jobs=-1) search_cv.fit(X_train, y_train) search_cv.best_params_ search_cv.best_score_ search_cv.score(X_test, y_test) ###Output _____no_output_____
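To try the same imputer shoot-out without the iris CSV (or the `fetch_openml` download), the comparison can be reproduced on synthetic data with values knocked out at random; a self-contained sketch:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer, SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=400, n_features=6, random_state=0)

# Knock out ~10% of the entries at random to simulate missing data
rng = np.random.RandomState(0)
X_missing = X.copy()
X_missing[rng.rand(*X.shape) < 0.10] = np.nan

results = {}
for imputer in (SimpleImputer(strategy="mean", add_indicator=True),
                SimpleImputer(strategy="median", add_indicator=True),
                KNNImputer(add_indicator=True)):
    pipe = make_pipeline(imputer, RandomForestClassifier(random_state=0))
    name = type(imputer).__name__ + "/" + str(imputer.get_params().get("strategy"))
    results[name] = cross_val_score(pipe, X_missing, y, cv=3).mean()
print(results)
```

Keeping the imputer inside the pipeline matters: it is refit on each training fold, so no information from the validation fold leaks into the imputed values.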
config.ipynb
###Markdown Network Configuration ###Code class Config(): data_root = '' # data path save_root = None # intermediate saves # network parameters img_size = 224 # input image size batch_size = 96 # batch size lr = 0.045 # learning rate lr_decay = 0.98 # learning rate decay ratio upd_freq = 1 # learning rate update frequency momentum = 0.9 # momentum weight_decay = 0.00004 # weight decay max_epoch = 500 # max epoch # other parameters num_workers = 8 # number of threads plot_freq = 2500 # plot frequency ###Output _____no_output_____ ###Markdown Setup your environment Install anaconda Anaconda is a distribution of Python for scientific computing that aims to simplify package management and deployment. To install anaconda on windows: 1. Download anaconda from https://www.anaconda.com/products/individualwindows. 2. Double click the downloaded file. 3. Proceed with the installation. Install Git bash Git Bash is an application for Microsoft Windows environments which provides an emulation layer for a Git command line experience. To install git bash on windows: 1. Download Git Bash from https://gitforwindows.org. 2. Double click the downloaded file. 3. Proceed with the installation. git configuration Global config: ###Code !git config --global user.name "FIRST_NAME LAST_NAME" !git config --global user.email "[email protected]" ###Output _____no_output_____ ###Markdown Local config: ###Code !git config user.name "FIRST_NAME LAST_NAME" !git config user.email "[email protected]" ###Output _____no_output_____ ###Markdown *Note that you can use git config for further customization* Track an existing project To start tracking an existing project use the following commands ###Code !cd yourproject !git init .
!git add * !git commit -m "commit message" ###Output _____no_output_____ ###Markdown To make it available on GitHub, create a new repository on GitHub and then run ###Code !git remote add origin https://github.com/username/new_repo ###Output _____no_output_____ ###Markdown Virtual environments We will use conda. Conda is a very powerful package manager that excels at managing dependencies and offers an easy way to create and use virtual environments for your projects. It is used primarily in the Data Science world. To create a virtual environment run ###Code !conda create --name myenv ###Output _____no_output_____ ###Markdown To activate your virtual environment run ###Code !conda activate myenv ###Output _____no_output_____ ###Markdown List your virtual environments ###Code !conda env list ###Output _____no_output_____ ###Markdown List packages installed in your virtual environment: ###Code !conda list -n myenv ###Output _____no_output_____ ###Markdown Network Configuration ###Code # camera parameters wb_stat = { 'slope': -0.933335, 'const': -1.249542, 'std': 0.063877, 'min': -1.020177, 'max': -0.042540 } noise_stat = { 'slope': 2.385028, 'const': 4.055563, 'std': 0.100832, 'min': -8.373093, 'max': -7.049319 } d65_wb = [2.408501, 1.000000, 1.499017, 1.000000] cam_matrix = [[1.4462, -0.8273, -0.1252], [-0.1818, 1.0996, 0.0918], [0.0193, 0.044, 0.7474]] amp_range = (1, 20) # dataset paths r2r_path = '/home/lab/Documents/ssd/r2rSet' fivek_path = '/home/lab/Documents/ssd/fivekNight' class r2rNetConf(): data_root = r2r_path save_root = None # network parameters r2r_size = 128 # input size for r2rNet batch_size = 8 lr = 1e-4 lr_decay = 0.95 upd_freq = 20 # learning rate update frequency max_epoch = 1500 # camera parameters d65_wb = d65_wb cam_matrix = cam_matrix # other parameters num_workers = 8 save_epoch = 1000 # epoch to be saved class mainConf(): data_root = fivek_path save_root = None # network parameters att_size = (256, 192) # input size for att module isp_size = (640,
480) # input size for isp module batch_size = 4 lr = 1e-4 # camera parameters wb_stat = wb_stat noise_stat = noise_stat amp_range = amp_range # other parameters num_workers = 8 plot_freq = 100 # plot frequency save_freq = 2500 # save frequency class valConf(): data_root = fivek_path # network parameters att_size = (256, 192) isp_size = (768, 576) # camera parameters wb_stat = wb_stat noise_stat = noise_stat amp_range = amp_range ###Output _____no_output_____ ###Markdown Network Configuration ###Code class Config(): # data path data_root = '/trainSets/Sony' save_root = None # intermediate saves # network parameters img_size = 512 # input image size batch_size = 1 # batch size lr = 1e-4 # learning rate lr_decay = 0.1 # learning rate decay ratio upd_freq = 2000 # learning rate update frequency max_epoch = 4000 # max epoch # other parameters num_workers = 8 # number of threads save_freq = 100 # save frequency val_freq = 50 # validation frequency ###Output _____no_output_____ ###Markdown Import libraries needed to display files and handle dates and times. ###Code from pathlib import Path from IPython.display import FileLink from IPython.display import IFrame from os.path import splitext from datetime import timedelta from datetime import datetime ###Output _____no_output_____ ###Markdown Configure and simplify the Natural Language Toolkit (NLTK) Portuguese treebank.
###Code import nltk from nltk import tokenize from nltk.corpus import floresta def simplify_tag(t): if "+" in t: return t[t.index("+")+1:] else: return t twords = nltk.corpus.floresta.tagged_words() twords = [(w.lower(),simplify_tag(t)) for (w,t) in twords] ###Output _____no_output_____ ###Markdown Insert some missing tagged prepositions. ###Code twords.insert(0,('da','prp')) twords.insert(0,('de','prp')) twords.insert(0,('di','prp')) twords.insert(0,('do','prp')) twords.insert(0,('du','prp')) ###Output _____no_output_____ ###Markdown Define ```title_pos_tag``` function that is similar to ```title``` built-in function but doesn't capitalize ```text``` input string conjunctions and prepositions. It is useful when titling proper names. ###Code def title_pos_tag(text): def pos_tag_portuguese(tokens): for index in range(len(tokens)): for word in twords: token = tokens[index].lower() if word[0] == token: tag = word[1] tokens[index] = (token, tag) break return tokens tokens = tokenize.word_tokenize(text, language='portuguese') tagged = pos_tag_portuguese(tokens) new_text = '' for index in range(len(tagged)): token = tagged[index] if isinstance(token, tuple): word = token[0] tag = token[1] # n: substantivo # prop: nome próprio # art: artigo # pron: pronome # pron-pers: pronome pessoal # pron-det: pronome determinativo # pron-indp: substantivo/pron-indp # adj: adjetivo # n-adj: substantivo/adjetivo # v: verbo # v-fin: verbo finitivo # v-inf: verbo infinitivo # v-pcp: verbo particípio # v-ger: verbo gerúndio # num: numeral # prp: preposição # adj: adjetivo # conj: conjunção # conj-s: conjunção subordinativa # conj-c: conjunção coordenativa # intj: interjeição # adv: advérbio # xxx: outro if 'conj' in tag or \ 'prp' in tag: new_text = new_text + ' ' + word.lower() else: new_text = new_text + ' ' + word.capitalize() else: new_text = new_text + ' ' + token.capitalize() new_text = new_text.strip() # return (new_text, tagged) # uncomment this line if is desired to retriev the 
tags as well return new_text ###Output _____no_output_____ ###Markdown Create a ```showJSON``` object that shows the content of ```json_data``` as an expandable JSON tree (*credits to [David Caldwell](https://github.com/caldwell/renderjson)*) ###Code import uuid, json from IPython.display import HTML, display class showJSON(object): def __init__(self, json_data): if isinstance(json_data, dict): self.json_str = json.dumps(json_data) else: self.json_str = json_data self.uuid = str(uuid.uuid4()) def _ipython_display_(self): htmlstr = """ <html> <head> <style> .renderjson a {{ text-decoration: none; }} .renderjson .disclosure {{ color: crimson; font-size: 150%; }} .renderjson .syntax {{ color: grey; }} .renderjson .string {{ color: red; }} .renderjson .number {{ color: blue; }} .renderjson .boolean {{ color: plum; }} .renderjson .key {{ color: blue; }} .renderjson .keyword {{ color: lightgoldenrodyellow; }} .renderjson .object.syntax {{ color: seagreen; }} .renderjson .array.syntax {{ color: salmon; }} </style> </head> <body> <div id="{0}" style="height: 600px; width:100%;"></div> <script> require(["renderjson.js"], function() {{ renderjson.set_show_to_level('all'); document.getElementById('{0}').appendChild(renderjson({1})) }}); </script> </body> </html> """.format(self.uuid, self.json_str) display(HTML(htmlstr)) ###Output _____no_output_____ ###Markdown Define the week names list. ###Code week_names = ('segunda','terça','quarta','quinta','sexta','sábado','domingo') ###Output _____no_output_____ ###Markdown Setup your environment Install anaconda Anaconda is a distribution of python for scientific computing that aims to simplify package management and deployment.To install anaconda on windows: 1. Download anaconda from https://www.anaconda.com/products/individual 2. Double click the downloaded file. 3. Proceed with the installation.
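Once the installer finishes, you can confirm which interpreter you are running from any notebook or script. This is a generic sanity check (not part of the original tutorial); the exact version string differs per install, but Anaconda builds usually name the packager in it:

```python
import sys

# Version string of the running interpreter; Anaconda builds typically
# append packager information after the version number.
print(sys.version)

# Full path of the interpreter binary; for an Anaconda install this
# points inside the Anaconda directory.
print(sys.executable)
```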
Install Git bash Git Bash is an application for Microsoft Windows environments which provides an emulation layer for a Git command line experience.To install git bash on windows: 1. Download Git Bash from https://git-scm.com/downloads 2. Double click the downloaded file. 3. Proceed with the installation. git configuration Global config: ###Code !git config --global user.name "FIRST_NAME LAST_NAME" !git config --global user.email "[email protected]" ###Output _____no_output_____ ###Markdown Local config: ###Code !git config user.name "FIRST_NAME LAST_NAME" !git config user.email "[email protected]" ###Output _____no_output_____ ###Markdown *Note that you can use git config for further customization* Track an existing project To start tracking an existing project use the following commands ###Code !cd yourproject !git init . !git add * !git commit -m "commit message" ###Output _____no_output_____
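`conda list` inspects an environment from the shell; the same kind of check can be done from inside Python for whichever environment the interpreter was launched from, using only the standard library (`importlib.metadata`, Python 3.8+). This is a sketch added for illustration, not part of the original tutorial:

```python
from importlib import metadata

# Collect (name, version) for every distribution visible to this interpreter.
installed = sorted(
    (dist.metadata["Name"], dist.version)
    for dist in metadata.distributions()
    if dist.metadata["Name"] is not None  # skip distributions with broken metadata
)
for name, version in installed[:5]:  # print the first few, alphabetically
    print(f"{name}=={version}")
```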
notebooks/C0.3-lsf-metrics.ipynb
###Markdown 0.0 Imports ###Code import pandas as pd import numpy as np import re import seaborn as sns import umap.umap_ as umap from matplotlib import pyplot as plt from IPython.display import HTML from sklearn import preprocessing as pp from sklearn import cluster as c from sklearn import metrics as m from plotly import express as px from yellowbrick.cluster import KElbowVisualizer, SilhouetteVisualizer ###Output _____no_output_____ ###Markdown 0.1. Helper Functions ###Code def jupyter_settings(): %matplotlib inline %pylab inline plt.style.use( 'bmh' ) plt.rcParams['figure.figsize'] = [25,12] plt.rcParams['font.size'] = 24 display( HTML( '<style>.container {width:100% !important;}</style>')) pd.options.display.max_columns = None pd.options.display.max_rows = None pd.set_option( 'display.expand_frame_repr', False ) sns.set() jupyter_settings() ###Output Populating the interactive namespace from numpy and matplotlib ###Markdown 0.2. Load Dataset ###Code # load data df_raw = pd.read_csv('../data/raw/Ecommerce.csv') df_raw.head() ###Output _____no_output_____ ###Markdown 1.0. Data Description 1.1. Rename Columns ###Code df1 = df_raw.copy() df1.columns # Rename Columns cols_new = ['invoice_no','stock_code','description','quantity','invoice_date','unit_price','customer_id','country'] df1.columns = cols_new df1.sample() df_raw.sample() ###Output _____no_output_____ ###Markdown 1.2. Data Dimensions ###Code print( 'Number of rows: {}'.format ( df1.shape[0] ) ) print( 'Number of cols: {}'.format ( df1.shape[1] ) ) ###Output Number of rows: 541909 Number of cols: 8 ###Markdown 1.3. Data Types ###Code df1.dtypes ###Output _____no_output_____ ###Markdown 1.4. Check NA ###Code df1.isna().sum() ###Output _____no_output_____ ###Markdown 1.5. Replace NA ###Code # Remove NA df1 = df1.dropna( subset = ['description','customer_id']) print( 'Removed data: {:.2f}'.format( 1 - (df1.shape[0]/ df_raw.shape[0]) )) df1.shape df1.isna().sum() ###Output _____no_output_____ ###Markdown 1.6.
Change Dtypes ###Code df1.dtypes # Invoice Date df1['invoice_date'] = pd.to_datetime( df1['invoice_date'], format = '%d-%b-%y') # Customer Id df1['customer_id'] = df1['customer_id'].astype(int) df1.head() df1['invoice_date'] = pd.to_datetime( df1['invoice_date'], format = '%d-%b-%y') # Customer Id df1.head() df1.dtypes ###Output _____no_output_____ ###Markdown 1.7. Descriptive Statistics ###Code num_attributes = df1.select_dtypes( include = [ 'int64', 'float64'] ) cat_attributes = df1.select_dtypes( exclude = [ 'int64', 'float64','datetime64[ns]']) ###Output _____no_output_____ ###Markdown 1.7.1 Numerical Attributes ###Code # Central tendency - mean, median ct1 = pd.DataFrame(num_attributes.apply( np.mean )).T ct2 = pd.DataFrame(num_attributes.apply( np.median )).T # Dispersion - standard deviation, minimum, maximum, range, skew, kurtosis d1 = pd.DataFrame( num_attributes.apply( np.std ) ).T d2 = pd.DataFrame( num_attributes.apply( np.min ) ).T d3 = pd.DataFrame( num_attributes.apply( np.max ) ).T d4 = pd.DataFrame( num_attributes.apply( lambda x: x.max( ) - x.min() ) ).T d5 = pd.DataFrame( num_attributes.apply( lambda x: x.skew( ) ) ).T d6 = pd.DataFrame( num_attributes.apply( lambda x: x.kurtosis() ) ).T # Concatenate m = pd.concat( [d2, d3, d4, ct1, ct2, d1, d5, d6] ).T.reset_index() m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'mediana', 'std', 'skew', 'kurtosis'] m ###Output _____no_output_____ ###Markdown 1.7.1.1 Numerical Attributes - Investigating - Negative quantity (may be product returns) - Unit price equal to zero (may be promotions) 1.7.2 Categorical Attributes ###Code cat_attributes.head() ###Output _____no_output_____ ###Markdown Invoice_No ###Code # Problem: some invoices contain letters as well as digits # Identification df_letter_invoices = df1.loc[df1['invoice_no'].apply( lambda x : bool( re.search( '[^0-9]+', x ) ) ), :] print('Total number of invoices: {}'.format( len( df_letter_invoices ))) print('Total number of negative quantity: {}'.format(
len(df_letter_invoices[ df_letter_invoices['quantity'] < 0]))) ###Output Total number of invoices: 8905 Total number of negative quantity: 8905 ###Markdown Stock Code ###Code # Check stock codes made up of letters only df1.loc[df1['stock_code'].apply( lambda x : bool( re.search( '^[a-zA-Z]+$', x ) ) ) ,'stock_code'].unique() # Action: ## 1. Remove stock_code in ['POST', 'D', 'M', 'PADS', 'DOT', 'CRUK'] ###Output _____no_output_____ ###Markdown Description ###Code df1.head() # Action: delete the description column ###Output _____no_output_____ ###Markdown Country ###Code df1['country'].unique() df1['country'].value_counts( normalize = True) #df1[['custumer_id','country']].drop_duplicates().groupby( 'country') ###Output _____no_output_____ ###Markdown 2.0. Variable Filtering ###Code df2 = df1.copy() df2.dtypes # === Numerical attributes ==== df2 = df2.loc[df2['unit_price'] >= 0.04, :] # === Categorical attributes ==== df2 = df2[~df2['stock_code'].isin( ['POST', 'D', 'M', 'PADS', 'DOT', 'CRUK'] )] # description df2 = df2.drop( columns='description', axis=1 ) # map df2 = df2[~df2['country'].isin( ['European Community', 'Unspecified' ] ) ] # quantity df2_returns = df2.loc[df2['quantity'] < 0, :] df2_purchases = df2.loc[df2['quantity'] >= 0, :] ###Output _____no_output_____ ###Markdown 3.0. Feature Engineering ###Code df3 = df2.copy() ###Output _____no_output_____ ###Markdown 3.1.
Feature Creation ###Code # Data Reference df_ref = df3.drop( ['invoice_no', 'stock_code', 'quantity', 'invoice_date', 'unit_price', 'country'], axis=1 ).drop_duplicates( ignore_index=True ) # Gross Revenue = quantity * unit price df2_purchases.loc[:, 'gross_revenue'] = df2_purchases.loc[:, 'quantity'] * df2_purchases.loc[:, 'unit_price'] # Monetary df_monetary = df2_purchases.loc[:, ['customer_id', 'gross_revenue']].groupby( 'customer_id' ).sum().reset_index() df_ref = pd.merge( df_ref, df_monetary, on='customer_id', how='left' ) # Recency - days since last purchase df_recency = df2_purchases.loc[:, ['customer_id', 'invoice_date']].groupby( 'customer_id' ).max().reset_index() df_recency['recency_days'] = ( df2['invoice_date'].max() - df_recency['invoice_date'] ).dt.days df_recency = df_recency[['customer_id', 'recency_days']].copy() df_ref = pd.merge( df_ref, df_recency, on='customer_id', how='left' ) # Frequency df_frequ = df2_purchases[['customer_id','invoice_no']].drop_duplicates().groupby('customer_id').count().reset_index() df_ref = pd.merge ( df_ref, df_frequ, on = 'customer_id', how = 'left') # Average Ticket df_avg_ticket = df2_purchases[['customer_id', 'gross_revenue']].groupby('customer_id').mean().reset_index().rename ( columns = {'gross_revenue':'avg_ticket'} ) df_ref = pd.merge( df_ref, df_avg_ticket, on='customer_id', how ='left') df_ref.isna().sum() df_ref.head() df3 = df_ref.copy() ###Output _____no_output_____ ###Markdown 4.0. Exploratory Data Analysis ###Code df4 = df3.dropna() df4.isna().sum() df4.head() ###Output _____no_output_____ ###Markdown 5.0. Data Preparation ###Code df5 = df4.copy() df5.head() ## Standard Scaler ss = pp.StandardScaler() df5['gross_revenue'] = ss.fit_transform ( df5[['gross_revenue']]) df5['recency_days'] = ss.fit_transform ( df5[['recency_days']]) df5['invoice_no'] = ss.fit_transform ( df5[['invoice_no']]) df5['avg_ticket'] = ss.fit_transform ( df5[['avg_ticket']]) ###Output _____no_output_____ ###Markdown 6.0.
Feature Selection ###Code df6 = df5.copy() ###Output _____no_output_____ ###Markdown 7.0. Hyperparameter Fine-tuning ###Code df7 = df6.copy() X = df6.drop( columns=['customer_id'] ) # How many clusters do we need (elbow method) clusters = [ 2 , 3 , 4 , 5 , 6, 7 ] kmeans = KElbowVisualizer( c.KMeans(), k = clusters, timings = False) kmeans.fit( X ) kmeans.show(); ###Output _____no_output_____ ###Markdown 7.2. Silhouette Score ###Code kmeans = KElbowVisualizer( c.KMeans(), k = clusters,metric = 'silhouette' , timings = False) kmeans.fit( X ) kmeans.show(); ###Output _____no_output_____ ###Markdown 7.3 Silhouette Analysis ###Code fig, ax = plt.subplots( 3, 2, figsize = (25,18)) for k in clusters: km = c.KMeans( n_clusters = k,init= 'random', n_init = 10, max_iter = 100, random_state = 42) q, mod = divmod( k , 2) visualizer = SilhouetteVisualizer( km, colors = 'yellowbrick', ax = ax[q-1][mod]) visualizer.fit( X ); visualizer.finalize() ###Output _____no_output_____ ###Markdown 8.0. Model Training ###Code df8 = df7.copy() ###Output _____no_output_____ ###Markdown 8.1. K-Means ###Code # Model Definition k = 4 kmeans = c.KMeans( init = 'random', n_clusters = k, n_init = 10, max_iter = 300, random_state = 42 ) # Model Training kmeans.fit(X) # Clustering labels = kmeans.labels_ ###Output _____no_output_____ ###Markdown 8.2. Cluster Validation ###Code # WSS ( Within-Cluster Sum of Squares ) print('WSS value : {}'.format(kmeans.inertia_)) ## SS ( Silhouette Score ) print('SS value : {}'.format(m.silhouette_score ( X, labels, metric = 'euclidean'))) ###Output _____no_output_____ ###Markdown 9.0. Cluster Analysis ###Code df9 = df8.copy() df9['cluster'] = labels df9.head() ###Output _____no_output_____ ###Markdown 9.1.
Visualization Inspection ###Code visualizer = SilhouetteVisualizer(kmeans, colors ='yellowbrick') visualizer.fit( X ) visualizer.finalize() #fig = px.scatter_3d( df9, x = 'recency_days', y = 'invoice_no', z = 'gross_revenue', color = 'cluster') #fig.show() ###Output _____no_output_____ ###Markdown 9.2. 2D Plot ###Code df_viz = df9.drop( columns = 'customer_id', axis = 1) sns.pairplot(df_viz, hue = 'cluster') ###Output _____no_output_____ ###Markdown 9.3 UMAP ###Code X.head() reducer = umap.UMAP( n_neighbors = 20, random_state= 42) embedding = reducer.fit_transform( X ) # embedding df_viz['embedding_x'] = embedding[:, 0] df_viz['embedding_y'] = embedding[:, 1] # plot UMAP sns.scatterplot ( x = 'embedding_x', y = 'embedding_y', hue= 'cluster', palette = sns.color_palette( 'hls', n_colors= len(df_viz['cluster'].unique())), data= df_viz) ###Output _____no_output_____ ###Markdown 9.4. Cluster Profile ###Code # Number of customers df_cluster = df9[['customer_id','cluster']].groupby( 'cluster' ).count().reset_index() df_cluster['perc_customer'] = 100*(df_cluster['customer_id']/df_cluster['customer_id'].sum()) # Average gross revenue df_avg_gross_revenue = df9[['gross_revenue', 'cluster']].groupby('cluster').mean().reset_index() df_cluster = pd.merge( df_cluster, df_avg_gross_revenue, how = 'inner', on = 'cluster') # Average recency days df_avg_recency_days = df9[['recency_days', 'cluster']].groupby('cluster').mean().reset_index() df_cluster = pd.merge( df_cluster, df_avg_recency_days, how = 'inner', on = 'cluster') # Average invoice_no df_avg_invoice_no = df9[['invoice_no', 'cluster']].groupby('cluster').mean().reset_index() df_cluster = pd.merge( df_cluster, df_avg_invoice_no, how = 'inner', on = 'cluster') # Average Ticket df_ticket = df9[['avg_ticket','cluster']].groupby( 'cluster' ).mean().reset_index() df_cluster = pd.merge( df_cluster, df_ticket, how = 'inner', on = 'cluster') df_cluster.head() ###Output _____no_output_____ ###Markdown Cluster 01: ( Candidate for
Insider ) - Number of customers: 6 (0.13% of customers) - Average recency: 7 days - Average purchases: 89 purchases - Average revenue: $182,181.00. Cluster 02: - Number of customers: 31 (0.7% of customers) - Average recency: 14 days - Average purchases: 53 purchases - Average revenue: $40,543.00. Cluster 03: - Number of customers: 4,335 (99% of customers) - Average recency: 92 days - Average purchases: 5 purchases - Average revenue: $1,372.57. 10.0. Deploy to Production ###Code df10 = df9.copy() ###Output _____no_output_____
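The recency/frequency/monetary logic from section 3.1 is easy to lose inside the pandas calls; the same computation in plain Python makes it explicit. The customer ids, dates and amounts below are invented for illustration, not taken from the notebook's dataset:

```python
from datetime import date

# (customer_id, invoice_no, invoice_date, gross_revenue) -- made-up purchases
purchases = [
    (1, "A1", date(2011, 12, 1), 50.0),
    (1, "A2", date(2011, 12, 8), 30.0),
    (2, "B1", date(2011, 10, 9), 20.0),
]
reference_day = max(p[2] for p in purchases)  # like df2['invoice_date'].max()

rfm = {}
for cust, invoice, day, revenue in purchases:
    entry = rfm.setdefault(cust, {"last": day, "invoices": set(), "monetary": 0.0})
    entry["last"] = max(entry["last"], day)
    entry["invoices"].add(invoice)   # frequency = number of distinct invoices
    entry["monetary"] += revenue     # monetary = summed gross revenue

for cust, entry in rfm.items():
    recency = (reference_day - entry["last"]).days
    print(cust, recency, len(entry["invoices"]), entry["monetary"])
# customer 1 -> recency 0, 2 invoices, 80.0; customer 2 -> recency 60, 1 invoice, 20.0
```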
notebooks/exercises/function_exercises.ipynb
###Markdown Exercises on functions For background, please read the [functions](../07/functions) introduction. This function is not properly defined, and will give a `SyntaxError`. Fix it, then run the call below to confirm it is working. ###Code function subtract(p, q): r = p - q return r subtract(5, 10) ###Output _____no_output_____ ###Markdown This function also gives a `SyntaxError`. Fix and run. ###Code def add(first_value, second_value) return first_value + second_value add(2, 3) ###Output _____no_output_____ ###Markdown Here is another error. Fix and run. ###Code def cube(a_variable): return a_variable ** 2 cube(3) ###Output _____no_output_____ ###Markdown Why does the second cell below give you an error? ###Code def add_then_multiply(a, b): added = a + b return added * b result = add_then_multiply(10, 4) result + added ###Output _____no_output_____ ###Markdown Write a function called "increase" that* accepts two arguments* the body multiplies the first argument by 2 and the second by 3, adds the two resulting numbers, and returns the result.If your function is right, the cell below should return 13. ###Code # Your function here increase(2, 3) ###Output _____no_output_____ ###Markdown This function will run, but it probably doesn't give you the result you expect. Fix to give the result you expect and run. ###Code def divide(p, q): # Give result of dividing p by q result = p / q divide(10, 2) ###Output _____no_output_____ ###Markdown Remember that, in function world, the function can see the variables at the top level, but it cannot change what value the top-level variables point to.Read the code below, but do not run it.Predict what you will see. Will it be an error? If not, what value will you see? Run it to see if you're right.
###Code a = 12 def my_function(b): return a * b my_function(4) ###Output _____no_output_____ ###Markdown This is now going into deeper water.We know that, in function world, the function can see the variables at the top level, but it cannot change what value the top-level variables point to.Read the code the below, but do not run it.Predict what you will see. Will it be an error? If not, what value will you see?Run it to see if you're right. See if you can explain why you are right. Call your instructor over to check your explanation. ###Code a = 12 def function_two(b): a = b return a * b function_two(4) ###Output _____no_output_____
notebooks/Import_urine_spectra_with_concentration.ipynb
###Markdown Import urine spectra with concentration Install project packages ###Code %%bash pip install -e ../. ###Output Obtaining file:///data/ar1220/MscProjectNMR Installing collected packages: MscProjectNMR Attempting uninstall: MscProjectNMR Found existing installation: MscProjectNMR 0 Uninstalling MscProjectNMR-0: Successfully uninstalled MscProjectNMR-0 Running setup.py develop for MscProjectNMR Successfully installed MscProjectNMR-0 ###Markdown Install required python modules ###Code %%bash pip install -r ../requirements.txt ###Output _____no_output_____ ###Markdown Import functions ###Code import tensorflow as tf import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.utils import shuffle from sklearn.model_selection import train_test_split, cross_validate import joblib import os from tfrecords import write_tfrecords_concentrations, write_tfrecords_concentrations_single tf.__version__ ###Output _____no_output_____ ###Markdown I. Import data I.1. Large dataset with independent metabolites (10 000 samples) I.1.a load the data ###Code filename_spectrum_large = '../data/concentration_data/Large_sample/Spectra_Mixt1.txt' filename_concentrations_large = '../data/concentration_data/Large_sample/Concentrations_Mix1.txt' data_spectrum_large = np.loadtxt(filename_spectrum_large, dtype=float) data_concentrations_large = np.loadtxt(filename_concentrations_large, delimiter='\t', dtype=float, usecols=range(1,data_spectrum_large.shape[1]+1)) #Convert into dataframes df_spectrum_large = pd.DataFrame(data_spectrum_large).T df_concentrations_large = pd.DataFrame(data_concentrations_large).T ###Output _____no_output_____ ###Markdown I.1.b Normalize the input Define minimum and maximum value of spectrum ###Code min_val = -50 max_val = 20000 norm_df_spectrum_large = (df_spectrum_large - min_val)/(max_val - min_val) ###Output _____no_output_____ ###Markdown I.1.c.
Standardise the output ###Code #Import mean concentration data and metabolites filename_mean_concentrations = '../data/concentration_data/normal_urine.txt' mean_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=1, skiprows=1) sd_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=2, skiprows=1) stand_df_concentrations_large = (df_concentrations_large - mean_concentrations)/sd_concentrations print(norm_df_spectrum_large.shape) print(stand_df_concentrations_large.shape) ###Output (10000, 10000) (10000, 48) ###Markdown I.1.d. Shuffle the data ###Code norm_df_spectrum_large, stand_df_concentrations_large = shuffle(norm_df_spectrum_large, stand_df_concentrations_large) ###Output _____no_output_____ ###Markdown I.1.e. Split data into train and validation datasets ###Code X_train_large, X_test_large, y_train_large, y_test_large = train_test_split(norm_df_spectrum_large, stand_df_concentrations_large, test_size=0.2) ###Output _____no_output_____ ###Markdown I.1.f. Convert into tf.data ###Code train_dataset_large = tf.data.Dataset.from_tensor_slices((X_train_large, y_train_large)) val_dataset_large = tf.data.Dataset.from_tensor_slices((X_test_large, y_test_large)) print(train_dataset_large.element_spec) print(val_dataset_large.element_spec) train_dataset_large_single = [tf.data.Dataset.from_tensor_slices((X_train_large, y_train_large[y_train_large.columns[i]])) for i in range(48)] val_dataset_large_single = [tf.data.Dataset.from_tensor_slices((X_test_large, y_test_large[y_test_large.columns[i]])) for i in range(48)] print(train_dataset_large_single[0].element_spec) print(val_dataset_large_single[0].element_spec) ###Output (TensorSpec(shape=(10000,), dtype=tf.float64, name=None), TensorSpec(shape=(), dtype=tf.float64, name=None)) (TensorSpec(shape=(10000,), dtype=tf.float64, name=None), TensorSpec(shape=(), dtype=tf.float64, name=None)) ###Markdown I.1.g. 
Write tf.Record ###Code write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Large_sample/train', dataset=train_dataset_large, number=32) write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Large_sample/validation', dataset=val_dataset_large, number=8) for i in range(48): if not os.path.exists('../data/tfrecords/Concentrations_data/Large_sample_single/metabolite_{}/train'.format(i)): os.makedirs('../data/tfrecords/Concentrations_data/Large_sample_single/metabolite_{}/train'.format(i)) if not os.path.exists('../data/tfrecords/Concentrations_data/Large_sample_single/metabolite_{}/validation'.format(i)): os.makedirs('../data/tfrecords/Concentrations_data/Large_sample_single/metabolite_{}/validation'.format(i)) for i in range(48): write_tfrecords_concentrations_single('../data/tfrecords/Concentrations_data/Large_sample_single/metabolite_{}/train'.format(i), dataset=train_dataset_large_single[i], number=32) write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Large_sample_single/metabolite_{}/validation'.format(i), dataset=val_dataset_large_single[i], number=8) ###Output _____no_output_____ ###Markdown I.2 Large dataset with correlated metabolites (10 000 samples) I.2.a load the data ###Code filename_spectrum_large_corr = '../data/concentration_data/Large_correlated/Spectra_Mixt1.txt' filename_concentrations_large_corr = '../data/concentration_data/Large_correlated/Concentrations_Mix1.txt' data_spectrum_large_corr = np.loadtxt(filename_spectrum_large_corr, dtype=float) data_concentrations_large_corr = np.loadtxt(filename_concentrations_large_corr, delimiter='\t', dtype=float, usecols=range(1,data_spectrum_large_corr.shape[1]+1)) #Convert into dataframes df_spectrum_large_corr = pd.DataFrame(data_spectrum_large_corr).T df_concentrations_large_corr = pd.DataFrame(data_concentrations_large_corr).T ###Output _____no_output_____ ###Markdown I.2.b Normalize the input ###Code norm_df_spectrum_large_corr = 
(df_spectrum_large_corr - min_val)/(max_val - min_val) ###Output _____no_output_____ ###Markdown I.2.c Standardise the output ###Code #Import mean concentration data and metabolites filename_mean_concentrations = '../data/concentration_data/normal_urine.txt' mean_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=1, skiprows=1) sd_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=2, skiprows=1) stand_df_concentrations_large_corr = (df_concentrations_large_corr - mean_concentrations)/sd_concentrations print(norm_df_spectrum_large_corr.shape) print(stand_df_concentrations_large_corr.shape) ###Output (10000, 10000) (10000, 48) ###Markdown I.2.d Shuffle the data ###Code norm_df_spectrum_large_corr, stand_df_concentrations_large_corr = shuffle(norm_df_spectrum_large_corr, stand_df_concentrations_large_corr) ###Output _____no_output_____ ###Markdown I.2.e Split data into train and validation datasets ###Code X_train_large_corr, X_test_large_corr, y_train_large_corr, y_test_large_corr = train_test_split(norm_df_spectrum_large_corr, stand_df_concentrations_large_corr, test_size=0.2) ###Output _____no_output_____ ###Markdown I.2.f Convert into tf.data ###Code train_dataset_large_corr = tf.data.Dataset.from_tensor_slices((X_train_large_corr, y_train_large_corr)) val_dataset_large_corr = tf.data.Dataset.from_tensor_slices((X_test_large_corr, y_test_large_corr)) print(train_dataset_large_corr.element_spec) print(val_dataset_large_corr.element_spec) train_dataset_large_corr_single = [tf.data.Dataset.from_tensor_slices((X_train_large_corr, y_train_large_corr[y_train_large_corr.columns[i]])) for i in range(48)] val_dataset_large_corr_single = [tf.data.Dataset.from_tensor_slices((X_test_large_corr, y_test_large_corr[y_test_large_corr.columns[i]])) for i in range(48)] print(train_dataset_large_corr_single[0].element_spec) print(val_dataset_large_corr_single[0].element_spec) ###Output 
(TensorSpec(shape=(10000,), dtype=tf.float64, name=None), TensorSpec(shape=(), dtype=tf.float64, name=None)) (TensorSpec(shape=(10000,), dtype=tf.float64, name=None), TensorSpec(shape=(), dtype=tf.float64, name=None)) ###Markdown I.2.g Write tf.Record ###Code write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Large_correlated/train', dataset=train_dataset_large_corr, number=32) write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Large_correlated/validation', dataset=val_dataset_large_corr, number=8) import os for i in range(48): if not os.path.exists('../data/tfrecords/Concentrations_data/Large_corr_single/metabolite_{}/train'.format(i)): os.makedirs('../data/tfrecords/Concentrations_data/Large_corr_single/metabolite_{}/train'.format(i)) if not os.path.exists('../data/tfrecords/Concentrations_data/Large_corr_single/metabolite_{}/validation'.format(i)): os.makedirs('../data/tfrecords/Concentrations_data/Large_corr_single/metabolite_{}/validation'.format(i)) for i in range(48): write_tfrecords_concentrations_single('../data/tfrecords/Concentrations_data/Large_corr_single/metabolite_{}/train'.format(i), dataset=train_dataset_large_corr_single[i], number=32) write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Large_corr_single/metabolite_{}/validation'.format(i), dataset=val_dataset_large_corr_single[i], number=8) ###Output _____no_output_____ ###Markdown I.3 Small dataset with independent metabolites (1000 samples) I.3.a load the data ###Code filename_spectrum_small = '../data/concentration_data/Small_sample/Spectra_Mixt1.txt' filename_concentrations_small = '../data/concentration_data/Small_sample/Concentrations_Mix1.txt' data_spectrum_small = np.loadtxt(filename_spectrum_small, dtype=float) data_concentrations_small = np.loadtxt(filename_concentrations_small, delimiter='\t', dtype=float, usecols=range(1,data_spectrum_small.shape[1]+1)) #Convert into dataframes df_spectrum_small = 
pd.DataFrame(data_spectrum_small).T df_concentrations_small = pd.DataFrame(data_concentrations_small).T ###Output _____no_output_____ ###Markdown I.3.b Normalize the input ###Code norm_df_spectrum_small = (df_spectrum_small - min_val)/(max_val - min_val) ###Output _____no_output_____ ###Markdown I.3.c Standardise the output ###Code #Import mean concentration data and metabolites filename_mean_concentrations = '../data/concentration_data/normal_urine.txt' mean_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=1, skiprows=1) sd_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=2, skiprows=1) stand_df_concentrations_small = (df_concentrations_small - mean_concentrations)/sd_concentrations print(norm_df_spectrum_small.shape) print(stand_df_concentrations_small.shape) ###Output (1000, 10000) (1000, 48) ###Markdown I.3.d Shuffle the data ###Code norm_df_spectrum_small, stand_df_concentrations_small = shuffle(norm_df_spectrum_small, stand_df_concentrations_small) ###Output _____no_output_____ ###Markdown I.3.e Split data into train and validation datasets ###Code X_train_small, X_test_small, y_train_small, y_test_small = train_test_split(norm_df_spectrum_small, stand_df_concentrations_small, test_size=0.2) ###Output _____no_output_____ ###Markdown I.3.f Convert into tf.data ###Code train_dataset_small = tf.data.Dataset.from_tensor_slices((X_train_small, y_train_small)) val_dataset_small = tf.data.Dataset.from_tensor_slices((X_test_small, y_test_small)) print(train_dataset_small.element_spec) print(val_dataset_small.element_spec) ###Output (TensorSpec(shape=(10000,), dtype=tf.float64, name=None), TensorSpec(shape=(48,), dtype=tf.float64, name=None)) (TensorSpec(shape=(10000,), dtype=tf.float64, name=None), TensorSpec(shape=(48,), dtype=tf.float64, name=None)) ###Markdown I.3.g Write tf.Record ###Code 
write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Small_sample/train', dataset=train_dataset_small, number=8) write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Small_sample/validation', dataset=val_dataset_small, number=2) ###Output _____no_output_____ ###Markdown I.4 Small dataset with correlated metabolites (1000 samples) I.4.a load the data ###Code filename_spectrum_small_corr = '../data/concentration_data/Small_correlated/Spectra_Mixt1.txt' filename_concentrations_small_corr = '../data/concentration_data/Small_correlated/Concentrations_Mix1.txt' data_spectrum_small_corr = np.loadtxt(filename_spectrum_small_corr, dtype=float) data_concentrations_small_corr = np.loadtxt(filename_concentrations_small_corr, delimiter='\t', dtype=float, usecols=range(1,data_spectrum_small_corr.shape[1]+1)) #Convert into dataframes df_spectrum_small_corr = pd.DataFrame(data_spectrum_small_corr).T df_concentrations_small_corr = pd.DataFrame(data_concentrations_small_corr).T ###Output _____no_output_____ ###Markdown I.4.b Normalize the input ###Code norm_df_spectrum_small_corr = (df_spectrum_small_corr - min_val)/(max_val - min_val) ###Output _____no_output_____ ###Markdown I.4.c Standardise the output ###Code #Import mean concentration data and metabolites filename_mean_concentrations = '../data/concentration_data/normal_urine.txt' mean_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=1, skiprows=1) sd_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=2, skiprows=1) stand_df_concentrations_small_corr = (df_concentrations_small_corr - mean_concentrations)/sd_concentrations print(norm_df_spectrum_small_corr.shape) print(stand_df_concentrations_small_corr.shape) ###Output (1000, 10000) (1000, 48) ###Markdown I.4.d Shuffle the data ###Code norm_df_spectrum_small_corr, stand_df_concentrations_small_corr = shuffle(norm_df_spectrum_small_corr, 
stand_df_concentrations_small_corr) ###Output _____no_output_____ ###Markdown I.4.e Split data into train and validation datasets ###Code X_train_small_corr, X_test_small_corr, y_train_small_corr, y_test_small_corr = train_test_split(norm_df_spectrum_small_corr, stand_df_concentrations_small_corr, test_size=0.2) ###Output _____no_output_____ ###Markdown I.4.f Convert into tf.data ###Code train_dataset_small_corr = tf.data.Dataset.from_tensor_slices((X_train_small_corr, y_train_small_corr)) val_dataset_small_corr = tf.data.Dataset.from_tensor_slices((X_test_small_corr, y_test_small_corr)) print(train_dataset_small_corr.element_spec) print(val_dataset_small_corr.element_spec) ###Output (TensorSpec(shape=(10000,), dtype=tf.float64, name=None), TensorSpec(shape=(48,), dtype=tf.float64, name=None)) (TensorSpec(shape=(10000,), dtype=tf.float64, name=None), TensorSpec(shape=(48,), dtype=tf.float64, name=None)) ###Markdown I.4.g Write tf.Record ###Code write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Small_correlated/train', dataset=train_dataset_small_corr, number=8) write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Small_correlated/validation', dataset=val_dataset_small_corr, number=2) ###Output _____no_output_____ ###Markdown I.5 Extra small dataset with independent metabolites (100 samples) I.5.a load the data ###Code filename_spectrum_xsmall = '../data/concentration_data/Extra_small_sample/Spectra_Mixt1.txt' filename_concentrations_xsmall = '../data/concentration_data/Extra_small_sample/Concentrations_Mix1.txt' data_spectrum_xsmall = np.loadtxt(filename_spectrum_xsmall, dtype=float) data_concentrations_xsmall = np.loadtxt(filename_concentrations_xsmall, delimiter='\t', dtype=float, usecols=range(1,data_spectrum_xsmall.shape[1]+1)) #Convert into dataframes df_spectrum_xsmall = pd.DataFrame(data_spectrum_xsmall).T df_concentrations_xsmall = pd.DataFrame(data_concentrations_xsmall).T ###Output _____no_output_____ ###Markdown I.5.b 
Normalize the input ###Code norm_df_spectrum_xsmall = (df_spectrum_xsmall - min_val)/(max_val - min_val) ###Output _____no_output_____ ###Markdown I.5.c Standardise the output ###Code #Import mean concentration data and metabolites filename_mean_concentrations = '../data/concentration_data/normal_urine.txt' mean_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=1, skiprows=1) sd_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=2, skiprows=1) stand_df_concentrations_xsmall = (df_concentrations_xsmall - mean_concentrations)/sd_concentrations print(norm_df_spectrum_xsmall.shape) print(stand_df_concentrations_xsmall.shape) ###Output (100, 10000) (100, 48) ###Markdown I.5.d Shuffle the data ###Code norm_df_spectrum_xsmall, stand_df_concentrations_xsmall = shuffle(norm_df_spectrum_xsmall, stand_df_concentrations_xsmall) ###Output _____no_output_____ ###Markdown I.5.e Split data into train and validation datasets ###Code X_train_xsmall, X_test_xsmall, y_train_xsmall, y_test_xsmall = train_test_split(norm_df_spectrum_xsmall, stand_df_concentrations_xsmall, test_size=0.2) ###Output _____no_output_____ ###Markdown I.5.f Convert into tf.data ###Code train_dataset_xsmall = tf.data.Dataset.from_tensor_slices((X_train_xsmall, y_train_xsmall)) val_dataset_xsmall = tf.data.Dataset.from_tensor_slices((X_test_xsmall, y_test_xsmall)) print(train_dataset_xsmall.element_spec) print(val_dataset_xsmall.element_spec) ###Output (TensorSpec(shape=(10000,), dtype=tf.float64, name=None), TensorSpec(shape=(48,), dtype=tf.float64, name=None)) (TensorSpec(shape=(10000,), dtype=tf.float64, name=None), TensorSpec(shape=(48,), dtype=tf.float64, name=None)) ###Markdown I.5.g Write tf.Record ###Code write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Extra_small_sample/train', dataset=train_dataset_xsmall, number=4) 
write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Extra_small_sample/validation', dataset=val_dataset_xsmall, number=1) ###Output _____no_output_____ ###Markdown I.6 Extra small dataset with correlated metabolites (100 samples) I.6.a load the data ###Code filename_spectrum_xsmall_corr = '../data/concentration_data/Extra_small_correlated/Spectra_Mixt1.txt' filename_concentrations_xsmall_corr = '../data/concentration_data/Extra_small_correlated/Concentrations_Mix1.txt' data_spectrum_xsmall_corr = np.loadtxt(filename_spectrum_xsmall_corr, dtype=float) data_concentrations_xsmall_corr = np.loadtxt(filename_concentrations_xsmall_corr, delimiter='\t', dtype=float, usecols=range(1,data_spectrum_xsmall_corr.shape[1]+1)) #Convert into dataframes df_spectrum_xsmall_corr = pd.DataFrame(data_spectrum_xsmall_corr).T df_concentrations_xsmall_corr = pd.DataFrame(data_concentrations_xsmall_corr).T ###Output _____no_output_____ ###Markdown I.6.b Normalize the input ###Code norm_df_spectrum_xsmall_corr = (df_spectrum_xsmall_corr - min_val)/(max_val - min_val) ###Output _____no_output_____ ###Markdown I.6.c Standardise the output ###Code #Import mean concentration data and metabolites filename_mean_concentrations = '../data/concentration_data/normal_urine.txt' mean_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=1, skiprows=1) sd_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=2, skiprows=1) stand_df_concentrations_xsmall_corr = (df_concentrations_xsmall_corr - mean_concentrations)/sd_concentrations print(norm_df_spectrum_xsmall_corr.shape) print(stand_df_concentrations_xsmall_corr.shape) ###Output (100, 10000) (100, 48) ###Markdown I.6.d Shuffle the data ###Code norm_df_spectrum_xsmall_corr, stand_df_concentrations_xsmall_corr = shuffle(norm_df_spectrum_xsmall_corr, stand_df_concentrations_xsmall_corr) ###Output _____no_output_____ ###Markdown I.6.e Split data into train 
and validation datasets ###Code X_train_xsmall_corr, X_test_xsmall_corr, y_train_xsmall_corr, y_test_xsmall_corr = train_test_split(norm_df_spectrum_xsmall_corr, stand_df_concentrations_xsmall_corr, test_size=0.2) ###Output _____no_output_____ ###Markdown I.6.f Convert into tf.data ###Code train_dataset_xsmall_corr = tf.data.Dataset.from_tensor_slices((X_train_xsmall_corr, y_train_xsmall_corr)) val_dataset_xsmall_corr = tf.data.Dataset.from_tensor_slices((X_test_xsmall_corr, y_test_xsmall_corr)) print(train_dataset_xsmall_corr.element_spec) print(val_dataset_xsmall_corr.element_spec) ###Output (TensorSpec(shape=(10000,), dtype=tf.float64, name=None), TensorSpec(shape=(48,), dtype=tf.float64, name=None)) (TensorSpec(shape=(10000,), dtype=tf.float64, name=None), TensorSpec(shape=(48,), dtype=tf.float64, name=None)) ###Markdown I.6.g Write tf.Record ###Code write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Extra_small_correlated/train', dataset=train_dataset_xsmall_corr, number=4) write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Extra_small_correlated/validation', dataset=val_dataset_xsmall_corr, number=1) ###Output _____no_output_____ ###Markdown I.7 Test dataset with independent metabolites (1000 samples) I.7.a load the data ###Code filename_spectrum_test = '../data/concentration_data/Test_independent/Spectra_Mixt1.txt' filename_concentrations_test = '../data/concentration_data/Test_independent/Concentrations_Mix1.txt' data_spectrum_test = np.loadtxt(filename_spectrum_test, dtype=float) data_concentrations_test = np.loadtxt(filename_concentrations_test, delimiter='\t', dtype=float, usecols=range(1,data_spectrum_test.shape[1]+1)) #Convert into dataframes df_spectrum_test = pd.DataFrame(data_spectrum_test).T df_concentrations_test = pd.DataFrame(data_concentrations_test).T ###Output _____no_output_____ ###Markdown I.7.b Normalize the input ###Code norm_df_spectrum_test = (df_spectrum_test - min_val)/(max_val - min_val) 
###Output _____no_output_____ ###Markdown I.7.c Standardise the output ###Code #Import mean concentration data and metabolites filename_mean_concentrations = '../data/concentration_data/normal_urine.txt' mean_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=1, skiprows=1) sd_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=2, skiprows=1) stand_df_concentrations_test = (df_concentrations_test - mean_concentrations)/sd_concentrations print(norm_df_spectrum_test.shape) print(stand_df_concentrations_test.shape) ###Output (1000, 10000) (1000, 48) ###Markdown I.7.d Shuffle the data ###Code norm_df_spectrum_test, stand_df_concentrations_test = shuffle(norm_df_spectrum_test, stand_df_concentrations_test) ###Output _____no_output_____ ###Markdown I.7.e Convert into tf.data ###Code test_dataset = tf.data.Dataset.from_tensor_slices((norm_df_spectrum_test, stand_df_concentrations_test)) print(test_dataset.element_spec) ###Output (TensorSpec(shape=(10000,), dtype=tf.float64, name=None), TensorSpec(shape=(48,), dtype=tf.float64, name=None)) ###Markdown I.7.f Write tf.Record ###Code write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Test_independent', dataset=test_dataset, number=10) ###Output _____no_output_____ ###Markdown I.8 Test dataset with correlated metabolites (1000 samples) I.8.a load the data ###Code filename_spectrum_test_corr = '../data/concentration_data/Test_correlated/Spectra_Mixt1.txt' filename_concentrations_test_corr = '../data/concentration_data/Test_correlated/Concentrations_Mix1.txt' data_spectrum_test_corr = np.loadtxt(filename_spectrum_test_corr, dtype=float) data_concentrations_test_corr = np.loadtxt(filename_concentrations_test_corr, delimiter='\t', dtype=float, usecols=range(1,data_spectrum_test_corr.shape[1]+1)) #Convert into dataframes df_spectrum_test_corr = pd.DataFrame(data_spectrum_test_corr).T df_concentrations_test_corr = 
pd.DataFrame(data_concentrations_test_corr).T ###Output _____no_output_____ ###Markdown I.8.b Normalize the input ###Code norm_df_spectrum_test_corr = (df_spectrum_test_corr - min_val)/(max_val - min_val) ###Output _____no_output_____ ###Markdown I.8.c Standardise the output ###Code #Import mean concentration data and metabolites filename_mean_concentrations = '../data/concentration_data/normal_urine.txt' mean_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=1, skiprows=1) sd_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=2, skiprows=1) stand_df_concentrations_test_corr = (df_concentrations_test_corr - mean_concentrations)/sd_concentrations print(norm_df_spectrum_test_corr.shape) print(stand_df_concentrations_test_corr.shape) ###Output (1000, 10000) (1000, 48) ###Markdown I.8.d Shuffle the data ###Code norm_df_spectrum_test_corr, stand_df_concentrations_test_corr = shuffle(norm_df_spectrum_test_corr, stand_df_concentrations_test_corr) ###Output _____no_output_____ ###Markdown I.8.e Convert into tf.data ###Code test_corr_dataset = tf.data.Dataset.from_tensor_slices((norm_df_spectrum_test_corr, stand_df_concentrations_test_corr)) print(test_corr_dataset.element_spec) ###Output (TensorSpec(shape=(10000,), dtype=tf.float64, name=None), TensorSpec(shape=(48,), dtype=tf.float64, name=None)) ###Markdown I.8.f Write tf.Record ###Code write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Test_correlated', dataset=test_corr_dataset, number=10) ###Output _____no_output_____ ###Markdown I.9 Test abnormal urine dataset with independent metabolites (1000 samples) I.9.a load the data ###Code filename_spectrum_abn_test = '../data/concentration_data/Test_abnormal/Spectra_Mix2.txt' filename_concentrations_abn_test = '../data/concentration_data/Test_abnormal/Concentrations_Mix2.txt' data_spectrum_abn_test = np.loadtxt(filename_spectrum_abn_test, dtype=float) 
data_concentrations_abn_test = np.loadtxt(filename_concentrations_abn_test, delimiter='\t', dtype=float, usecols=range(1,data_spectrum_abn_test.shape[1]+1)) #Convert into dataframes df_spectrum_abn_test = pd.DataFrame(data_spectrum_abn_test).T df_concentrations_abn_test = pd.DataFrame(data_concentrations_abn_test).T ###Output _____no_output_____ ###Markdown I.9.b Normalize the input ###Code norm_df_spectrum_abn_test = (df_spectrum_abn_test - min_val)/(max_val - min_val) ###Output _____no_output_____ ###Markdown I.9.c Standardise the output ###Code #Import mean concentration data and metabolites filename_mean_concentrations = '../data/concentration_data/normal_urine.txt' mean_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=1, skiprows=1) sd_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=2, skiprows=1) stand_df_concentrations_abn_test = (df_concentrations_abn_test - mean_concentrations)/sd_concentrations print(norm_df_spectrum_abn_test.shape) print(stand_df_concentrations_abn_test.shape) ###Output (1000, 10000) (1000, 48) ###Markdown I.9.d Shuffle the data ###Code norm_df_spectrum_abn_test, stand_df_concentrations_abn_test = shuffle(norm_df_spectrum_abn_test, stand_df_concentrations_abn_test) ###Output _____no_output_____ ###Markdown I.9.e Convert into tf.data ###Code abn_test_dataset = tf.data.Dataset.from_tensor_slices((norm_df_spectrum_abn_test, stand_df_concentrations_abn_test)) print(abn_test_dataset.element_spec) ###Output (TensorSpec(shape=(10000,), dtype=tf.float64, name=None), TensorSpec(shape=(48,), dtype=tf.float64, name=None)) ###Markdown I.9.f Write tf.Record ###Code write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Test_abnormal', dataset=abn_test_dataset, number=10) ###Output _____no_output_____ ###Markdown I.10 Test abnormal urine dataset with correlated metabolites (1000 samples) I.10.a load the data ###Code 
filename_spectrum_abn_test_corr = '../data/concentration_data/Test_abnormal_corr/Spectra_Mix2.txt' filename_concentrations_abn_test_corr = '../data/concentration_data/Test_abnormal_corr/Concentrations_Mix2.txt' data_spectrum_abn_test_corr = np.loadtxt(filename_spectrum_abn_test_corr, dtype=float) data_concentrations_abn_test_corr = np.loadtxt(filename_concentrations_abn_test_corr, delimiter='\t', dtype=float, usecols=range(1,data_spectrum_abn_test_corr.shape[1]+1)) #Convert into dataframes df_spectrum_abn_test_corr = pd.DataFrame(data_spectrum_abn_test_corr).T df_concentrations_abn_test_corr = pd.DataFrame(data_concentrations_abn_test_corr).T ###Output _____no_output_____ ###Markdown I.10.b Normalize the input ###Code norm_df_spectrum_abn_test_corr = (df_spectrum_abn_test_corr - min_val)/(max_val - min_val) ###Output _____no_output_____ ###Markdown I.10.c Standardise the output ###Code #Import mean concentration data and metabolites filename_mean_concentrations = '../data/concentration_data/normal_urine.txt' mean_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=1, skiprows=1) sd_concentrations = np.loadtxt(filename_mean_concentrations, delimiter='\t', dtype=float, usecols=2, skiprows=1) stand_df_concentrations_abn_test_corr = (df_concentrations_abn_test_corr - mean_concentrations)/sd_concentrations print(norm_df_spectrum_abn_test_corr.shape) print(stand_df_concentrations_abn_test_corr.shape) ###Output (1000, 10000) (1000, 48) ###Markdown I.10.d Shuffle the data ###Code norm_df_spectrum_abn_test_corr, stand_df_concentrations_abn_test_corr = shuffle(norm_df_spectrum_abn_test_corr, stand_df_concentrations_abn_test_corr) ###Output _____no_output_____ ###Markdown I.10.e Convert into tf.data ###Code abn_test_corr_dataset = tf.data.Dataset.from_tensor_slices((norm_df_spectrum_abn_test_corr, stand_df_concentrations_abn_test_corr)) print(abn_test_corr_dataset.element_spec) ###Output (TensorSpec(shape=(10000,), dtype=tf.float64, 
name=None), TensorSpec(shape=(48,), dtype=tf.float64, name=None)) ###Markdown I.10.f Write tf.Record ###Code write_tfrecords_concentrations('../data/tfrecords/Concentrations_data/Test_abnormal_corr', dataset=abn_test_corr_dataset, number=10) ###Output _____no_output_____
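The per-dataset sections above (I.3–I.10) repeat the same normalize → standardize → shuffle → split → write-TFRecord sequence, with only the file paths and shard counts changing. A consolidated sketch of those steps follows; the helper names, the shard-naming scheme, and the `Example` feature keys are illustrative assumptions, and the notebook's own `write_tfrecords_concentrations` (defined earlier) may differ:

```python
import numpy as np

def normalize_spectra(spectra, min_val, max_val):
    # Min-max scale spectra into [0, 1] using corpus-wide extrema,
    # mirroring (df - min_val) / (max_val - min_val) in the cells above.
    return (np.asarray(spectra, dtype=float) - min_val) / (max_val - min_val)

def standardize_concentrations(conc, mean, sd):
    # Z-score concentrations against the reference (normal urine) statistics,
    # mirroring (df - mean_concentrations) / sd_concentrations above.
    return (np.asarray(conc, dtype=float) - mean) / sd

def shard_path(prefix, shard, n_shards):
    # Illustrative shard-naming scheme, e.g. 'train_0-of-8.tfrecord'.
    return f"{prefix}_{shard}-of-{n_shards}.tfrecord"

def write_sharded_tfrecords(prefix, dataset, n_shards):
    # Hypothetical sharded writer in the spirit of write_tfrecords_concentrations:
    # split a tf.data.Dataset of (spectrum, concentrations) pairs into n_shards files.
    import tensorflow as tf  # imported lazily; the pure helpers above need only numpy
    for shard in range(n_shards):
        with tf.io.TFRecordWriter(shard_path(prefix, shard, n_shards)) as writer:
            for spectrum, conc in dataset.shard(n_shards, shard):
                features = tf.train.Features(feature={
                    "spectrum": tf.train.Feature(
                        float_list=tf.train.FloatList(value=spectrum.numpy())),
                    "concentrations": tf.train.Feature(
                        float_list=tf.train.FloatList(value=conc.numpy())),
                })
                writer.write(tf.train.Example(features=features).SerializeToString())
```

Factoring the steps out this way would let each dataset section be a single call with its own paths and shard count instead of a repeated block.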
module4/assignment_regression_classification_4.ipynb
###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [X] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [x] Do train/validate/test split with the Tanzania Waterpumps data.- [X] Do one-hot encoding. For example, in addition to `quantity`, you could try `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.)- [X] Use scikit-learn for logistic regression.- [X] Get your validation accuracy score.- [X] Get and plot your coefficients.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [X] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." 
One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. To visualize this dataset, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this problem, you may want to use the parameter `logistic=True`You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from the previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... 
Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` Pipelines[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors. 
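The pipeline idea above can be sketched for this kind of mixed categorical/numeric data. A minimal example — the column names, the tiny synthetic frame, and the use of scikit-learn's own `OneHotEncoder` (rather than `category_encoders`) are illustrative assumptions:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Illustrative feature lists -- substitute the Tanzania Waterpumps columns.
categorical = ["quantity"]
numeric = ["gps_height"]

# Encoding and scaling live inside the pipeline, so fitting on the training
# split cannot leak validation/test statistics.
preprocess = ColumnTransformer([
    ("onehot", OneHotEncoder(handle_unknown="ignore"), categorical),
    ("scale", StandardScaler(), numeric),
])

pipeline = Pipeline([
    ("preprocess", preprocess),
    ("model", LogisticRegression(max_iter=1000)),
])

# Tiny synthetic example: one fit/predict call covers encoding, scaling, and the model.
train = pd.DataFrame({"quantity": ["dry", "enough", "dry", "enough"],
                      "gps_height": [100, 1500, 200, 1400]})
target = ["non functional", "functional", "non functional", "functional"]
pipeline.fit(train, target)
preds = pipeline.predict(train)
```

With this structure, `pipeline.score(X_val, y_val)` or a grid search over both preprocessing and model parameters becomes a single call.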
Reading- [ ] [Why is logistic regression considered a linear model?](https://www.quora.com/Why-is-logistic-regression-considered-a-linear-model)- [ ] [Training, Validation, and Testing Data Sets](https://end-to-end-machine-learning.teachable.com/blog/146320/training-validation-testing-data-sets)- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code import os, sys in_colab = 'google.colab' in sys.modules # If you're in Colab... if in_colab: # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Install required python packages !pip install -r requirements.txt # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) # make a validation set from train set from sklearn.model_selection import train_test_split X_train = train_features y_train = train_labels['status_group'] X_train, X_val, y_train, y_val = train_test_split( X_train, y_train, train_size=0.8, test_size=0.2, random_state=42) X_train.shape, X_val.shape, y_train.shape, y_val.shape # Make numeric subsets X_train_numeric = X_train.select_dtypes('number') X_val_numeric = X_val.select_dtypes('number') X_train_numeric.isnull().sum() # Check cardinality of categorical features X_train.describe(exclude='number').T.sort_values(by='unique') X_train.management.value_counts() X_train.scheme_management.value_counts() # strange that the unique values are so similiar between these two features, # yet their value counts are markedly different. X_train.management_group.value_counts() X_train[(X_train['management'] == 'wug') & (X_train['scheme_management'] != 'WUG')].head() # It's not clear why the management of the pumps seems inconsistently coded. # But management_group and management seems to agree (see count for parastatal # in each) so I'm going to use 'management'. 
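One quick way to quantify how far two categorical encodings agree is a cross-tabulation. A small sketch with made-up data — in the notebook, `X_train['management']` and `X_train['scheme_management']` would be the real inputs:

```python
import pandas as pd

# Hypothetical stand-in for X_train[['management', 'scheme_management']].
df = pd.DataFrame({
    "management": ["wug", "wug", "vwc", "wug"],
    "scheme_management": ["WUG", "VWC", "VWC", "WUG"],
})

# Rows: one encoding; columns: the other (lower-cased to align the labels).
# Off-diagonal mass shows where the two codings disagree.
agreement = pd.crosstab(df["management"], df["scheme_management"].str.lower())

# Fraction of rows where the two codings match (case-insensitively).
match_rate = (df["management"] == df["scheme_management"].str.lower()).mean()
```

A low match rate would support the observation above that the two management columns are inconsistently coded, and help justify keeping only one of them.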
import seaborn as sns # Subset non-numerical X_train_categorical = X_train.select_dtypes(exclude='number') X_val_categorical = X_val.select_dtypes(exclude='number') categorical_features = [] excluded = ['recorded_by', 'extraction_type_class', 'extraction_type_group', 'quantity_group', 'source_type', 'source_class'] for i in X_train_categorical.columns.drop(excluded): if X_train[i].nunique() < 22: if X_train[i].isnull().sum() == 0: categorical_features.append(i) else: print(i, X_train[i].isnull().sum()) # Not using features with nulls in our regression. print(categorical_features) # categorical_features.append('scheme_management') # #Let's see if this improves our model, even leaving in the NaNs from sklearn.linear_model import LogisticRegressionCV, LogisticRegression from category_encoders import OneHotEncoder numerical_features = X_train_numeric.columns.drop('id').tolist() print(numerical_features) features = categorical_features + numerical_features X_train_subset = X_train[features] X_val_subset = X_val[features] encoder = OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train_subset) X_val_encoded = encoder.transform(X_val_subset) from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_val_scaled = scaler.transform(X_val_encoded) # transform only: reuse training statistics model1 = LogisticRegression(n_jobs=-1) model2 = LogisticRegressionCV(n_jobs=-1) model1.fit(X_train_scaled, y_train) model2.fit(X_train_scaled, y_train) print("LR Validation accuracy", model1.score(X_val_scaled, y_val)) print("LRCV Validation accuracy", model2.score(X_val_scaled, y_val)) model1.coef_, model2.coef_ # I don't know what plotting these coefficients would entail. X_train_subset.head(100) # remove 'recorded_by', 'extraction_type_class', 'extraction_type_group', 'quantity_group', 'source_type', 'source_class' # Look at cases where the model failed on training data. 
# I'm aware I'm in danger of introducing overfitting by doing this, but it # must be done as part of the iterative process. X_train_subset_copy = X_train_subset.copy() X_train_subset_copy['prediction'] = model2.predict(X_train_scaled) X_train_subset_copy['y_train'] = y_train X_train_subset_copy.head() failed = X_train_subset_copy[X_train_subset_copy['prediction'] != y_train] failed.head(40) ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Get and plot your coefficients.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. 
For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this problem, you may want to use the parameter `logistic=True`You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from the previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... 
Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` Pipelines[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors. Reading- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code # If you're in Colab... 
import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) from sklearn.model_selection import train_test_split X_train, X_val, y_train, y_val = train_test_split( train_features, train_labels['status_group'], train_size = 0.80, test_size = 0.20, stratify = train_labels['status_group'], random_state=42 ) X_train.describe(exclude='number') from sklearn.linear_model import LogisticRegressionCV import category_encoders as ce from sklearn.preprocessing import StandardScaler categorical_features = ['quantity', 'payment', 'source'] numeric_features = X_train.select_dtypes('number').columns.drop('id').tolist() features = categorical_features + numeric_features X_train_subset = X_train[features] X_val_subset = X_val[features] encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train_subset) 
X_val_encoded = encoder.transform(X_val_subset) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_val_scaled = scaler.transform(X_val_encoded) model = LogisticRegressionCV(n_jobs = -1) model.fit(X_train_scaled, y_train) print('Validation Accuracy', model.score(X_val_scaled, y_val)) X_test_subset = test_features[features] X_test_encoded = encoder.transform(X_test_subset) X_test_scaled = scaler.transform(X_test_encoded) assert all(X_test_encoded.columns == X_train_encoded.columns) y_pred = model.predict(X_test_scaled) submission = sample_submission.copy() submission['status_group'] = y_pred submission.to_csv('submission-01.csv', index=False) if in_colab: from google.colab import files files.download('submission-01.csv') ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Do one-hot encoding. For example, in addition to `quantity`, you could try `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.)- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Get and plot your coefficients.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. 
If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.

Stretch Goals

Doing

- [ ] Add your own stretch goal(s) !
- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)
- [ ] Make exploratory visualizations.
- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).

Exploratory visualizations

Visualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. To visualize this dataset, you may want to create a new column to represent the target as a number, 0 or 1. For example:

```python
train['functional'] = (train['status_group']=='functional').astype(int)
```

You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)

- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")
- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)

You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. 
For this problem, you may want to use the parameter `logistic=True`. You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty.

High-cardinality categoricals

This code from the previous assignment demonstrates how to replace less frequent values with 'OTHER':

```python
# Reduce cardinality for NEIGHBORHOOD feature ...

# Get a list of the top 10 neighborhoods
top10 = train['NEIGHBORHOOD'].value_counts()[:10].index

# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
```

Pipelines

[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:

> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:
> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.
> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.
> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors. 
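The encode → scale → classify sequence described above can be chained into one estimator. A minimal sketch with toy data standing in for the waterpumps features (scikit-learn only here; the notebook itself uses `category_encoders` for encoding, and the column names merely echo the real dataset):

```python
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy stand-in for the waterpumps data: one categorical + one numeric feature
X = pd.DataFrame({
    'quantity': ['enough', 'dry', 'enough', 'dry', 'seasonal', 'enough'],
    'population': [120, 0, 450, 10, 80, 300],
})
y = ['functional', 'non functional', 'functional',
     'non functional', 'functional', 'functional']

# One-hot encode the categorical column, scale the numeric one...
preprocess = make_column_transformer(
    (OneHotEncoder(handle_unknown='ignore'), ['quantity']),
    (StandardScaler(), ['population']),
)

# ...then chain the preprocessing and the model into a single estimator
pipeline = make_pipeline(preprocess, LogisticRegression())

# One fit call runs every step; score applies them in order before predicting
pipeline.fit(X, y)
print(pipeline.score(X, y))
```

Calling `fit` once fits every transformer and the final classifier; `predict` and `score` reuse the fitted transformers, so there is no chance of fitting the scaler on validation data by accident.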
Reading

- [ ] [Why is logistic regression considered a linear model?](https://www.quora.com/Why-is-logistic-regression-considered-a-linear-model)
- [ ] [Training, Validation, and Testing Data Sets](https://end-to-end-machine-learning.teachable.com/blog/146320/training-validation-testing-data-sets)
- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)
- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)
- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)
- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites).

###Code import os, sys in_colab = 'google.colab' in sys.modules # If you're in Colab... if in_colab: # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Install required python packages !pip install -r requirements.txt # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) from sklearn.model_selection import train_test_split import category_encoders as ce from sklearn.preprocessing import MinMaxScaler from sklearn.linear_model import LogisticRegressionCV from sklearn.metrics import accuracy_score X_train, X_val, y_train, y_val = train_test_split(train_features, train_labels, test_size=0.25) encoder = ce.OneHotEncoder(use_cat_names=True) encoded = encoder.fit_transform(X_train.basin) X_train = pd.concat([X_train, encoded], axis=1) encoded = encoder.transform(X_val.basin) X_val = pd.concat([X_val, encoded], axis=1) model = LogisticRegressionCV(n_jobs=-1) features = ['amount_tsh', 'gps_height', 'longitude', 'latitude', 'num_private', 'population', 'construction_year', 'basin_Lake Victoria', 'basin_Internal', 'basin_Ruvuma / Southern Coast', 'basin_Lake Tanganyika', 'basin_Lake Nyasa', 'basin_Rufiji', 'basin_Wami / Ruvu', 'basin_Pangani', 'basin_Lake Rukwa'] target = 'status_group' X_train = X_train[features] y_train = y_train[target] model.fit(X_train, y_train) y_pred = model.predict(X_val[features]) accuracy_score(y_val[target], y_pred) scaler = MinMaxScaler() X_train_scaled = scaler.fit_transform(X_train) X_val_scaled = scaler.transform(X_val) train_labels.dtypes ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [X] Watch Aaron Gallant's [video 
1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.
- [X] Do train/validate/test split with the Tanzania Waterpumps data.
- [X] Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)
- [X] Use scikit-learn for logistic regression.
- [X] Get your validation accuracy score.
- [X] Get and plot your coefficients.
- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)
- [ ] Commit your notebook to your fork of the GitHub repo.

> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.

Stretch Goals

Doing

- [ ] Add your own stretch goal(s) !
- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)
- [ ] Make exploratory visualizations.
- [X] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).

Exploratory visualizations

Visualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:

```python
train['functional'] = (train['status_group']=='functional').astype(int)
```

You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)

- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")
- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)

You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this problem, you may want to use the parameter `logistic=True`. You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty.

High-cardinality categoricals

This code from the previous assignment demonstrates how to replace less frequent values with 'OTHER':

```python
# Reduce cardinality for NEIGHBORHOOD feature ...

# Get a list of the top 10 neighborhoods
top10 = train['NEIGHBORHOOD'].value_counts()[:10].index

# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
```

Pipelines

[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:

> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. 
Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors. Reading- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code # If you're in Colab... import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) # import pandas_profiling # train_features.profile_report() train_features.head() # Splitting data into training, validation, and test sets from sklearn.model_selection import train_test_split X_train, X_val, y_train, y_val = train_test_split(train_features, train_labels['status_group'], train_size = .8, test_size = .2, stratify = train_labels['status_group']) X_train, X_test, y_train, y_test = train_test_split(X_train, y_train, train_size = .75, test_size = .25, stratify = y_train) # Checking sizes of features in each set print(X_train.shape) print(X_val.shape) print(X_test.shape) # Checking sizes of targets in each set print(y_train.shape) print(y_val.shape) print(y_test.shape) X_train.isnull().sum() train_total = X_train.copy() train_total['target'] = y_train train_total['target'].replace('non functional',0, inplace = True) train_total['target'].replace('functional needs repair', 1, inplace = True) train_total['target'].replace('functional',2,inplace = True) # train_total['target'].unique() train_total['target'] = train_total['target'].astype(int) print(train_total['target'].dtypes) train_total.head() # Plotting categorical features vs. 
target import seaborn as sns import matplotlib.pyplot as plt for col in sorted(train_total.columns): if train_total[col].nunique() < 20: sns.catplot(x = col, y = 'target', data = train_total, kind = 'bar', color = 'grey') plt.xticks(rotation='vertical') plt.show() # Plotting numerical features vs. target numeric = train_total.select_dtypes('number') for col in sorted(numeric.columns): sns.lmplot(x=col, y='target', data=train_total, scatter_kws=dict(alpha=0.005)) plt.show() import category_encoders as ce from sklearn.preprocessing import StandardScaler categorical_features = ['management', 'extraction_type', 'payment_type', 'permit', 'quantity', 'quality_group', 'source', 'public_meeting'] numeric_features = X_train.select_dtypes('number').columns.drop('id').tolist() features = categorical_features + numeric_features X_train_subset = X_train[features] X_val_subset = X_val[features] encoder = ce.OneHotEncoder(use_cat_names = True) X_train_encoded = encoder.fit_transform(X_train_subset) X_val_encoded = encoder.transform(X_val_subset) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_val_scaled = scaler.transform(X_val_encoded) # Want to encode: 'management', 'extraction_type', 'payment_type', 'permit', 'region_code', 'quantity', 'quality_group', 'source', 'public_meeting' from sklearn.linear_model import LogisticRegression model = LogisticRegression(max_iter = 5000) model.fit(X_train_scaled, y_train) print('Validation accuracy: ', model.score(X_val_scaled, y_val)) functional_coefficients = pd.Series(model.coef_[0], X_train_encoded.columns) plt.figure(figsize = (15,15)) functional_coefficients.sort_values().plot.barh(); model.classes_ X_train.head() # Defining feature engineering function def engineer(df): import datetime import numpy as np # Making age variable df.date_recorded = pd.to_datetime(df.date_recorded) df.construction_year.replace(0,np.NaN, inplace = True) mean_year = np.nanmean(df.construction_year) 
df.construction_year.replace(np.NaN,mean_year, inplace = True) df['age'] = df.date_recorded.dt.year - df.construction_year # Adding month_recorded feature for month in range(1,13): df[f'month_recorded_{month}'] = (df.date_recorded.dt.month==month) # Making region code categorical instead of numeric df['region_code'] = pd.Categorical(df.region_code) return df cardinality= [[],[]] for column in X_train.select_dtypes(exclude = 'number').columns: cardinality[0].append(column) cardinality[1].append(X_train[column].nunique()) cardinality = pd.DataFrame(cardinality).T cardinality cardinality_50 = cardinality[cardinality[1]>51] cardinality_50 # applying feature engineering function X_train = engineer(X_train).copy() X_val = engineer(X_val).copy() # Defining features as numeric features or categorical features with cardinality < 51 features = X_train.drop(list(cardinality_50[0]), axis = 'columns').columns # Subsetting to only features X_train_subset = X_train[features].copy() X_val_subset = X_val[features].copy() # One-hot encoding encoder = ce.OneHotEncoder(use_cat_names = True) X_train_encoded = encoder.fit_transform(X_train_subset) X_val_encoded = encoder.transform(X_val_subset) # standardizing scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_val_scaled = scaler.transform(X_val_encoded) features pd.DataFrame(X_train_scaled).dtypes.describe() # Using select K best features to make model from sklearn.feature_selection import f_classif, SelectKBest for k in range(1, len(pd.DataFrame(X_train_scaled).columns)+1): print(f'{k} features') selector = SelectKBest(score_func = f_classif, k = k) selector.fit(X_train_scaled, y_train) X_train_selected = selector.transform(X_train_scaled) X_val_selected = selector.transform(X_val_scaled) model = LogisticRegression(max_iter = 5000, n_jobs = -1) model.fit(X_train_selected, y_train) # Calculating and displaying model stats print('Train accuracy: ', model.score(X_train_selected, y_train)) print('Validation 
accuracy: ', model.score(X_val_selected, y_val)) print() # fitting best model selector = SelectKBest(score_func = f_classif, k = 143) selector.fit(X_train_scaled, y_train) X_train_selected = selector.transform(X_train_scaled) X_val_selected = selector.transform(X_val_scaled) model = LogisticRegression(max_iter = 5000, n_jobs = -1) model.fit(X_train_selected, y_train) # Getting output file test_features = engineer(test_features).copy() test_features_subset = test_features[features] test_features_encoded = encoder.transform(test_features_subset) test_features_scaled = scaler.transform(test_features_encoded) test_features_selected = selector.transform(test_features_scaled) y_pred = model.predict(test_features_selected) y_pred.shape y_pred = pd.Series(y_pred) submission = y_pred.to_frame() submission.head() submission['status_group'] = submission[0] submission.head() submission[0] = test_features['id'] submission.head() submission.columns = ['id', 'status_group'] submission.head() submission.to_csv('submission4.csv', index=False) from google.colab import files # Just try again if you get this error: # TypeError: Failed to fetch # https://github.com/googlecolab/colabtools/issues/337 files.download('submission4.csv') submission.shape ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. 
Accept the rules of the competition.
- [ ] Do train/validate/test split with the Tanzania Waterpumps data.
- [ ] Begin with baselines for classification.
- [ ] Use scikit-learn for logistic regression.
- [ ] Get your validation accuracy score.
- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)
- [ ] Commit your notebook to your fork of the GitHub repo.

---

Stretch Goals

- [ ] Add your own stretch goal(s) !
- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)
- [ ] Make exploratory visualizations.
- [ ] Do one-hot encoding. For example, you could try `quantity`, `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.)
- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).
- [ ] Get and plot your coefficients.
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).

---

Data Dictionary

Features

Your goal is to predict the operating condition of a waterpoint for each record in the dataset. 
You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the 
waterpoint is not operational

---

Generate a submission

Your code to generate a submission file may look like this:

```python
# estimator is your model or pipeline, which you've fit on X_train

# X_test is your pandas dataframe or numpy array,
# with the same number of rows, in the same order, as test_features.csv,
# and the same number of columns, in the same order, as X_train
y_pred = estimator.predict(X_test)

# Makes a dataframe with two columns, id and status_group,
# and writes to a csv file, without the index
sample_submission = pd.read_csv('sample_submission.csv')
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('your-submission-filename.csv', index=False)
```

If you're working locally, the csv file is saved in the same directory as your notebook.

If you're using Google Colab, you can use this code to download your submission csv file.

```python
from google.colab import files
files.download('your-submission-filename.csv')
```

---

###Code !pip install kaggle # import os, sys # in_colab = 'google.colab' in sys.modules # # If you're in Colab... # if in_colab: # # Pull files from Github repo # os.chdir('/content') # !git init . # !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git # !git pull origin master # # Install required python packages # !pip install -r requirements.txt # # Change into directory for module # os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') # # Read the Tanzania Waterpumps data # # train_features.csv : the training set features # # train_labels.csv : the training set labels # # test_features.csv : the test set features # # sample_submission.csv : a sample submission file in the correct format # import pandas as pd # train_features = pd.read_csv('../data/waterpumps/train_features.csv') # train_labels = pd.read_csv('../data/waterpumps/train_labels.csv') # test_features = pd.read_csv('../data/waterpumps/test_features.csv') # sample_submission = pd.read_csv('../data/waterpumps/sample_submission.csv') # assert train_features.shape == (59400, 40) # assert train_labels.shape == (59400, 2) # assert test_features.shape == (14358, 40) # assert sample_submission.shape == (14358, 2) !kaggle competitions download -c ds8-predictive-modeling-challenge import pandas as pd import zipfile zf = zipfile.ZipFile('ds8-predictive-modeling-challenge.zip') # available files in the container print (zf.namelist()) # zipped_df = pd.read_csv('ds8-predictive-modeling-challenge.zip') train_features = pd.read_csv(zf.open('train_features.csv')) train_labels = pd.read_csv(zf.open('train_labels.csv')) test_features = pd.read_csv(zf.open('test_features.csv')) sample_submission = pd.read_csv(zf.open('sample_submission.csv')) assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) from sklearn.model_selection import train_test_split train_labels['status_group'] = train_labels['status_group'].replace('functional','2').replace('functional needs repair','1').replace('non functional','0').astype(int) sample_submission['status_group'] = sample_submission['status_group'].replace('functional','2').replace('functional needs repair','1').replace('non functional','0').astype(int) my_train_features, my_val_features = 
train_test_split(train_features, random_state=7) my_train_labels, my_val_labels = train_test_split(train_labels, random_state=7) my_train_features.shape, my_val_features.shape, my_train_labels.shape, my_val_labels.shape my_train_features.head(2), my_val_features.head(2), my_train_labels.head(2), my_val_labels.head(2) import numpy as np my_train_features.describe() my_train_features.describe(exclude=[np.number]).T my_train_labels.describe(include='all') my_train_features.isnull().sum() target = 'status_group' y_train = my_train_labels[target] y_train.value_counts(normalize=True) y_train.mode()[0] majority_class = y_train.mode()[0] y_pred = [majority_class] * len(y_train) sum(abs(y_pred - y_train)) / len(y_train) # How much we got wrong from sklearn.metrics import accuracy_score baseline_accuracy = accuracy_score(y_train, y_pred) baseline_accuracy y_val = my_val_labels[target] y_pred = [majority_class] * len(y_val) accuracy_score(y_pred, y_val) # The majority-class baseline predicts the same class for every test row y_pred = [majority_class] * len(test_features) submission = sample_submission.copy() submission['status_group'] = y_pred submission.to_csv('your-submission-filename.csv', index=False) ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment
- [ ] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.
- [ ] Do train/validate/test split with the Tanzania Waterpumps data.
- [ ] Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)
- [ ] Use scikit-learn for logistic regression.
- [ ] Get your validation accuracy score.
- [ ] Get and plot your coefficients.
- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. 
Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. 
(If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).) You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this problem, you may want to use the parameter `logistic=True`. You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricals This code from the previous assignment demonstrates how to replace less frequent values with 'OTHER':

```python
# Reduce cardinality for NEIGHBORHOOD feature ...
# Get a list of the top 10 neighborhoods
top10 = train['NEIGHBORHOOD'].value_counts()[:10].index
# At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'
train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
```

Pipelines [Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors. 
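The chained fit/predict behavior described above can be sketched in a few lines. This is a minimal, self-contained example on synthetic data; the arrays, step names, and feature count are invented for illustration and are not part of the waterpumps assignment:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic two-class data, just to make the sketch runnable
rng = np.random.RandomState(42)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

# Scaling and classification chained into one estimator
pipeline = Pipeline([
    ('scale', StandardScaler()),
    ('model', LogisticRegression()),
])
pipeline.fit(X_train, y_train)        # one fit call runs every step in order
score = pipeline.score(X_val, y_val)  # the scaler's transform is applied automatically
print(score)
```

Because the scaler is fit inside the pipeline, cross-validating this estimator refits the scaler on each training fold, which is exactly the leakage safety the quoted guide describes.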
Reading- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code # If you're in Colab... import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) train_features.describe(exclude='number').T.sort_values(by='unique') # Function for cleaning: keep the numeric features (minus id and population), plus the chosen categoricals when requested
def clean_X(X, categories=False):
    # Build the feature list from numerics first, so it exists whether or not
    # categoricals are requested
    numeric_features = X.select_dtypes('number').columns.drop('id').tolist()
    features = numeric_features
    if categories:
        features = cats_to_use + features  # cats_to_use is defined below, before the first call
    X = X[features]
    X = X.drop(columns='population')
    # X['latitude'] = pd.cut(X['latitude'], 1000)
    # X['longitude'] = pd.cut(X['longitude'], 1000)
    return X
cats_to_use = [ 'quantity', 'permit' ] train_features = clean_X(train_features, categories=True) train_features.head() from sklearn.model_selection import train_test_split X_train = train_features y_train = train_labels['status_group'] X_train, X_val, y_train, y_val = train_test_split( X_train, y_train, train_size=0.80, test_size=0.20, stratify=y_train ) from sklearn.linear_model import LogisticRegression import category_encoders as ce from sklearn.preprocessing import StandardScaler encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train) X_val_encoded = encoder.transform(X_val) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) 
X_val_scaled = scaler.transform(X_val_encoded) model = LogisticRegression(max_iter=5000, n_jobs=-1) model.fit(X_train_scaled, y_train) model.score(X_val_scaled, y_val) # test_features_numeric = test_features.select_dtypes('number').drop(columns='id') # y_pred = model.predict(test_features_numeric) # y_pred # submission = sample_submission.copy() # submission['status_group'] = y_pred # submission.to_csv('JohnM_Kaggle_Base.csv', index=False) from sklearn.metrics import accuracy_score y_pred = model.predict(X_val_scaled) accuracy_score(y_val, y_pred) %matplotlib inline import matplotlib.pyplot as plt functional_coefficients = pd.Series( model.coef_[0], X_train_encoded.columns ) plt.figure(figsize=(10,10)) functional_coefficients.sort_values().plot.barh(); test_features = clean_X(test_features, categories=True) # Reuse the encoder and scaler already fit on the training data; refitting them on the test set would produce mismatched columns and scaling test_encoded = encoder.transform(test_features) test_scaled = scaler.transform(test_encoded) y_pred = model.predict(test_scaled) submission = sample_submission.copy() submission['status_group'] = y_pred submission.to_csv('JohnM_Kaggle_Base.csv', index=False) from google.colab import files files.download('JohnM_Kaggle_Base.csv') ###Output _____no_output_____ ###Code ''' # If you're in Colab... import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') ''' # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) train_features.head() ###Output _____no_output_____ ###Markdown Variable Exploration**Are there any variables that we wouldn't want to look at? If so, let's make sure they are coded in now so we can take them from train, validation, and test sets** ###Code train_features.dtypes ###Output _____no_output_____ ###Markdown **A few items that don't have physical/environmental bearing on the pump**
* permit
* recorded_by
* quantity_group seems to be somewhat redundant with quantity, so will be dropped
* basin, subvillage, region will covary with region_code most likely, and will be dropped
* source and source_type have a lot of redundancy
* waterpoint_type_group and waterpoint_type have some redundancy; the group has a lower cat_no (fewer categories), so will be used
* lga, ward, public_meeting
* scheme_name is interesting and may warrant further analysis; scheme_management is set for categorization already

drop_list = ['permit', 'recorded_by', 'quantity_group', 'basin', 'source', 'waterpoint_type', 'lga', 'ward', 'public_meeting', 'scheme_name']

**Some feature generation might make analysis more intuitive**
* construction_year to relative_age, by counting backward from the most recent year

**Iterative approach**
* first predict failure, then use surrounding likelihood of failure to re-weight the logistic model

###Code drop_list = ['permit', 'recorded_by', 'quantity_group', 'basin', 'source', 'waterpoint_type', 'lga', 'ward', 'public_meeting', 'scheme_name'] train_labels.status_group.value_counts() ###Output _____no_output_____ ###Code # If you're in Colab... 
import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) # split training dataset into training and validation from sklearn.model_selection import train_test_split X_train = train_features y_train = train_labels['status_group'] #split training data into training data and validation data, 80/20 split #stratify = y_train ensures there are roughly the same proportion of functional, nonfunctional, and functional needs repair wells in training and validation datasets X_train, X_val, y_train, y_val = train_test_split( X_train, y_train, train_size=0.80, test_size=0.20, stratify=y_train, random_state=42 ) X_train.shape, X_val.shape, y_train.shape, y_val.shape # train model on all numerical variables, and quantity, payment, & source categorical variables import category_encoders as ce from sklearn.preprocessing import 
StandardScaler # scaler gives all features the same magnitude from sklearn.linear_model import LogisticRegression categorical_features = ['quantity', 'payment', 'source'] numeric_features = X_train.select_dtypes('number').columns.drop('id').tolist() # don't need IDs, may throw dataset off features = categorical_features + numeric_features X_train_subset = X_train[features] # create subset of training data using features designated above X_val_subset = X_val[features] # create subset of validation data using features designated above encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train_subset) # create subset of subset training data with encoded categorical features, categorical features designated above X_val_encoded = encoder.transform(X_val_subset) # create subset of subset validation data with encoded categorical features, categorical features designated above scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_val_scaled = scaler.transform(X_val_encoded) model = LogisticRegression(max_iter=5000, n_jobs=-1) model.fit(X_train_scaled, y_train) # fit model using training data print('\n','Validation Accuracy', model.score(X_val_scaled, y_val)) # plug in validation data to check accuracy of model # create subset, encode, and scale test dataset X_test_subset = test_features[features] X_test_encoded = encoder.transform(X_test_subset) X_test_scaled = scaler.transform(X_test_encoded) assert all(X_test_encoded.columns == X_train_encoded.columns) # create submission csv file y_pred = model.predict(X_test_scaled) # predict on X_test_scaled with the trained model submission = sample_submission.copy() submission['status_group'] = y_pred #y_pred is predictions for test set submission.to_csv('submission-01.csv', index=False) # export csv file if in_colab: from google.colab import files # Just try again if you get this error: # TypeError: Failed to fetch # https://github.com/googlecolab/colabtools/issues/337 
files.download('submission-01.csv') ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.--- Stretch Goals- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do one-hot encoding. For example, you could try `quantity`, `basin`, `extraction_type_class`, and more. 
(But remember it may not work with high cardinality categoricals.)- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Get and plot your coefficients.- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).--- Data Dictionary Features Your goal is to predict the operating condition of a waterpoint for each record in the dataset. You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- 
`source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint Labels There are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the waterpoint is not operational--- Generate a submission Your code to generate a submission file may look like this:

```python
# estimator is your model or pipeline, which you've fit on X_train
# X_test is your pandas dataframe or numpy array, with the same number of rows,
# in the same order, as test_features.csv, and the same number of columns,
# in the same order, as X_train
y_pred = estimator.predict(X_test)
# Makes a dataframe with two columns, id and status_group, and writes to a csv file, without the index
sample_submission = pd.read_csv('sample_submission.csv')
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('your-submission-filename.csv', index=False)
```

If you're working locally, the csv file is saved in the same directory as your notebook. If you're using Google Colab, you can use this code to download your submission csv file.

```python
from google.colab import files
files.download('your-submission-filename.csv')
```

--- Create DFs ###Code import os, sys in_colab = 'google.colab' in sys.modules # If you're in Colab... if in_colab: # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Install required python packages !pip install -r requirements.txt # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') # Read the Tanzania Waterpumps data # train_features.csv : the training set features # train_labels.csv : the training set labels # test_features.csv : the test set features # sample_submission.csv : a sample submission file in the correct format import pandas as pd train_features = pd.read_csv('../data/waterpumps/train_features.csv') train_labels = pd.read_csv('../data/waterpumps/train_labels.csv') test_features = pd.read_csv('../data/waterpumps/test_features.csv') sample_submission = pd.read_csv('../data/waterpumps/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) train_features.head() train_labels.head() train_labels['status_group'].value_counts() train_features.describe() # Merge to create training data (include target) train = pd.merge(train_features, train_labels, on='id') train.head() ###Output _____no_output_____ ###Markdown Train/test/val split ###Code import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.linear_model import LogisticRegressionCV, LogisticRegression, Ridge from sklearn.preprocessing import StandardScaler from sklearn.metrics import accuracy_score from sklearn.model_selection import train_test_split import numpy as np my_train, my_val = train_test_split(train, random_state=42) my_train['status_group'].describe() target = 'status_group' y_train = my_train[target] y_train.value_counts(normalize=True) y_val = my_val[target] y_val.value_counts(normalize=True) # majority class train majority_class = y_train.mode()[0] y_pred = [majority_class]*len(y_train) accuracy_score(y_train, y_pred) y_pred_val = [majority_class]*len(y_val) accuracy_score(y_val,y_pred_val) ###Output _____no_output_____ ###Markdown Crappy baseline submission ###Code baseline_submission = 
sample_submission.copy() baseline_submission.to_csv('baseline_submission.csv', index=False) ###Output _____no_output_____ ###Markdown Logistic Regression model ###Code my_train.columns[1:40] features = my_train.columns[1:40] X_train = my_train[features] X_val = my_val[features] features #TOO many features print(my_train.shape) my_train.select_dtypes(exclude='number').describe().T.sort_values(by='unique', ascending=False).head(10) new_features = my_train.columns[[1,2,3,4,6,7,9,10,12,13,14,15,17,18,20,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39]] numeric_features = ['amount_tsh', 'gps_height', 'longitude', 'latitude', 'num_private', 'region_code', 'district_code', 'population', 'construction_year'] X_train = my_train[new_features] X_val = my_val[new_features] X_train.shape, X_val.shape encoder = ce.OneHotEncoder(use_cat_names=True) imputer = SimpleImputer() scaler = StandardScaler() X_train_encoded = encoder.fit_transform(X_train) X_train_imputed = imputer.fit_transform(X_train_encoded) X_train_scaled = scaler.fit_transform(X_train_imputed) X_train_imputed.shape X_val_encoded = encoder.transform(X_val) X_val_imputed = imputer.transform(X_val_encoded) X_val_scaled = scaler.transform(X_val_imputed) model = LogisticRegression(solver='lbfgs', multi_class='multinomial', max_iter=1000, random_state=42) model.fit(X_train_scaled, y_train) model.score(X_val_scaled, y_val) # From previous model with different features model.score(X_val_scaled, y_val) log_reg_cv = LogisticRegressionCV(cv=5, n_jobs=-1, random_state=42, multi_class='multinomial') log_reg_cv.fit(X_train_scaled, y_train) print(f'Logistic Regression CV: {log_reg_cv.score(X_val_scaled, y_val)}') log_reg_cv.score(X_val_scaled, y_val) ###Output _____no_output_____ ###Markdown Final ###Code X_test = test_features[new_features] X_test_encoded = encoder.transform(X_test) X_test_imputed = imputer.transform(X_test_encoded) X_test_scaled = scaler.transform(X_test_imputed) y_pred = model.predict(X_test_scaled) 
submission = sample_submission.copy() submission['status_group'] = y_pred submission.to_csv('better.csv', index=False) ###Output _____no_output_____ ###Markdown Ignore ###Code # Just for testing def code_impute_scale_log(X_train, X_val, y_train, y_val): # Encode encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train) X_val_encoded = encoder.transform(X_val) # Impute imputer = SimpleImputer() X_train_imputed = imputer.fit_transform(X_train_encoded) X_val_imputed = imputer.transform(X_val_encoded) # Scale scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_imputed) X_val_scaled = scaler.transform(X_val_imputed) #Regression log_reg = LogisticRegression(solver='lbfgs', multi_class='multinomial', max_iter=1000, random_state=42) log_reg.fit(X_train_scaled, y_train) print(f'Logistic Regression: {log_reg.score(X_val_scaled, y_val)}') log_reg_cv = LogisticRegressionCV(cv=5, n_jobs=-1, random_state=42, multi_class='multinomial') log_reg_cv.fit(X_train_scaled, y_train) print(f'Logistic Regression CV: {log_reg_cv.score(X_val_scaled, y_val)}') code_impute_scale_log(X_train, X_val, y_train, y_val) ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. 
Accept the rules of the competition.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.--- Stretch Goals- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do one-hot encoding. For example, you could try `quantity`, `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.)- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Get and plot your coefficients.- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).--- Data Dictionary FeaturesYour goal is to predict the operating condition of a waterpoint for each record in the dataset.
You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the 
waterpoint is not operational--- Generate a submissionYour code to generate a submission file may look like this:```python estimator is your model or pipeline, which you've fit on X_train X_test is your pandas dataframe or numpy array, with the same number of rows, in the same order, as test_features.csv, and the same number of columns, in the same order, as X_trainy_pred = estimator.predict(X_test) Makes a dataframe with two columns, id and status_group, and writes to a csv file, without the indexsample_submission = pd.read_csv('sample_submission.csv')submission = sample_submission.copy()submission['status_group'] = y_predsubmission.to_csv('your-submission-filename.csv', index=False)```If you're working locally, the csv file is saved in the same directory as your notebook.If you're using Google Colab, you can use this code to download your submission csv file.```pythonfrom google.colab import filesfiles.download('your-submission-filename.csv')```--- ###Code import os, sys in_colab = 'google.colab' in sys.modules # If you're in Colab... if in_colab: # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Install required python packages !pip install -r requirements.txt # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') # Read the Tanzania Waterpumps data # train_features.csv : the training set features # train_labels.csv : the training set labels # test_features.csv : the test set features # sample_submission.csv : a sample submission file in the correct format import pandas as pd train_features = pd.read_csv('../data/waterpumps/train_features.csv') train_labels = pd.read_csv('../data/waterpumps/train_labels.csv') test_features = pd.read_csv('../data/waterpumps/test_features.csv') sample_submission = pd.read_csv('../data/waterpumps/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Get and plot your coefficients.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. 
The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features.
For this problem, you may want to use the parameter `logistic=True`You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from the previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` Pipelines[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors. 
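The pipeline idea above can be sketched end to end. This is a minimal, self-contained example on toy arrays (not the waterpumps data); the steps mirror the impute → scale → logistic-regression sequence used elsewhere in this notebook:

```python
# Sketch: chain imputation, scaling, and the classifier into one sklearn
# Pipeline, so a single fit/predict runs the whole sequence in order and
# cross-validation cannot leak held-out statistics into the transformers.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Toy data: two well-separated classes, with one missing value.
X = np.array([[0.0, 0.0], [0.0, np.nan], [5.0, 4.0], [5.0, 5.0]] * 10)
y = np.array([0, 0, 1, 1] * 10)

pipe = make_pipeline(SimpleImputer(), StandardScaler(),
                     LogisticRegression(solver='lbfgs'))
pipe.fit(X, y)          # one call fits imputer, scaler, and model in order
print(pipe.score(X, y))
```

The same `pipe` object can be passed straight to `cross_val_score` or a grid search, which is where the leakage-safety benefit shows up.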
Reading- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code # If you're in Colab... import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) ###Output _____no_output_____ ###Code # If you're in Colab...
import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) # import block pd.set_option('display.max_columns', None) import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns plt.style.use('dark_background') import numpy as np from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression, LogisticRegressionCV from sklearn.preprocessing import StandardScaler from sklearn.feature_selection import f_regression, SelectKBest from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score, accuracy_score import category_encoders as ce # Changing date feature to datetime format train_features.dtypes # Train-validation split X_train = train_features.copy() y_train = train_labels['status_group'].copy() X_train, X_val, y_train, y_val = train_test_split( X_train, 
y_train, train_size = 0.80, test_size = 0.20, stratify = y_train, random_state = 42 ) X_train.shape, X_val.shape, y_train.shape, y_val.shape for col in X_train.select_dtypes('object').columns: print(col) # Fill the frame with the mode for object feature for column in X_train.select_dtypes('object').columns: X_train[column].fillna(X_train[column].mode()[0], inplace=True) X_val[column].fillna(X_train[column].mode()[0], inplace=True) test_features[column].fillna(X_train[column].mode()[0], inplace=True) # Fill the frame with the mean for numeric features for column in X_train.select_dtypes('number').columns: X_train[column].fillna(X_train[column].mean(), inplace=True) X_val[column].fillna(X_train[column].mean(), inplace=True) test_features[column].fillna(X_train[column].mean(), inplace=True) X_val.isna().sum() # Add feature - pump age X_train['pump_age'] = 2013 - X_train['construction_year'] X_val['pump_age'] = 2013 - X_val['construction_year'] test_features['pump_age'] = 2013 - test_features['construction_year'] test_features['latitude'].head() # Add feature - Distance from Dodoma (Dodoma is at about latitude -6.1630, longitude 35.7516; the latitude is south of the equator, so it must be negative) X_train['dodomadistance'] = (((X_train['latitude']-(-6.1630))**2)+((X_train['longitude']-(35.7516))**2))**0.5 X_val['dodomadistance'] = (((X_val['latitude']-(-6.1630))**2)+((X_val['longitude']-(35.7516))**2))**0.5 test_features['dodomadistance'] = (((test_features['latitude']-(-6.1630))**2)+((test_features['longitude']-(35.7516))**2))**0.5 test_features.dodomadistance.max() # Mapping the ys to integers for the encoder mapdict = { 'functional': 1, 'non functional': -1, 'functional needs repair': 0 } y_train = y_train.map(mapdict) y_val = y_val.map(mapdict) # Using category encoder to establish feature rank categoryfeatures = X_train.select_dtypes(include = 'object').columns encoder = ce.cat_boost.CatBoostEncoder() X_train_encoded = encoder.fit_transform(X_train, y_train) X_val_encoded = encoder.transform(X_val) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded)
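The cell above fits the encoder and scaler on the training split only and then reuses the fitted transforms on validation data. That fit-on-train, transform-on-val pattern can be isolated in a tiny sketch (toy numbers, not the waterpumps data):

```python
# Sketch: fit preprocessing statistics on the training split only, then
# apply the SAME fitted transform to validation data, so no validation
# statistics leak into the model.
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
X_val = np.array([[10.0]])

scaler = StandardScaler().fit(X_train)   # learns mean/std from train only
X_train_scaled = scaler.transform(X_train)
X_val_scaled = scaler.transform(X_val)   # reuses TRAIN statistics

print(scaler.mean_)      # [2.5], the train mean
print(X_val_scaled)      # 10 scaled by train mean/std, not val's own
```

Calling `fit` (or `fit_transform`) again on the validation set would recompute the statistics from validation data, which is exactly the leakage to avoid.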
X_val_scaled = scaler.transform(X_val_encoded) model = LogisticRegressionCV() model.fit(X_train_scaled, y_train) print('Validation Accuracy', model.score(X_val_scaled, y_val)) # Selecting best features selector = SelectKBest(score_func=f_regression, k = 42) X_train_selected = selector.fit_transform(X_train_scaled, y_train) #X_test_selected = selector.transform(X_test) X_train_selected.shape#, #X_test_selected.shape # List selected features all_names = X_train.columns selected_mask = selector.get_support() selected_names = all_names[selected_mask] for name in selected_names: print(name) # Cat-boosted logistic with best features X_train_subset = X_train[selected_names] X_val_subset = X_val[selected_names] encoder2 = ce.cat_boost.CatBoostEncoder() X_train_encoded2 = encoder2.fit_transform(X_train_subset, y_train) X_val_encoded2 = encoder2.transform(X_val_subset) scaler2 = StandardScaler() X_train_scaled2 = scaler2.fit_transform(X_train_encoded2) X_val_scaled2 = scaler2.transform(X_val_encoded2) model2 = LogisticRegressionCV() model2.fit(X_train_scaled2, y_train) print('Validation Accuracy', model2.score(X_val_scaled2, y_val)) # Transforming the test data X_test_subset = test_features[selected_names] X_test_encoded = encoder2.transform(X_test_subset) X_test_scaled = scaler2.transform(X_test_encoded) assert all(X_test_encoded.columns == X_train_encoded2.columns) X_test_encoded.isna().sum() # Predicting the test data y_test_pred = model2.predict(X_test_scaled) # Unmapping the prediction y_test_pred = pd.Series(y_test_pred) unmapdict = {value: key for key, value in mapdict.items()} y_test_pred = y_test_pred.map(unmapdict) # Formatting submission submission = sample_submission.copy() submission['status_group'] = y_test_pred submission.to_csv('submission-02.csv', index = False) ###Output _____no_output_____ ###Code import os, sys in_colab = 'google.colab' in sys.modules # If you're in Colab... if in_colab: # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Install required python packages !pip install -r requirements.txt # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') # Read the Tanzania Waterpumps data # train_features.csv : the training set features # train_labels.csv : the training set labels # test_features.csv : the test set features # sample_submission.csv : a sample submission file in the correct format import pandas as pd train_features = pd.read_csv('../data/waterpumps/train_features.csv') train_labels = pd.read_csv('../data/waterpumps/train_labels.csv') test_features = pd.read_csv('../data/waterpumps/test_features.csv') sample_submission = pd.read_csv('../data/waterpumps/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) train_features.head() train_features.shape train_labels.head() test_features.head() sample_submission.head() train = pd.merge(train_features,train_labels) train.head() train['status_group'].replace({'functional':2,'non functional':0,'functional needs repair':1},inplace=True) train.head() train.describe(exclude='number').T.sort_values(by='unique') train = train.drop(['lga','date_recorded','funder','ward','installer','scheme_name','subvillage','wpt_name'],axis=1) train.describe(exclude='number').T.sort_values(by='unique') from sklearn.model_selection import train_test_split splittedTrain, splittedValidate = train_test_split(train, random_state=84) splittedTrain.shape, splittedValidate.shape # Determine the baseline, the most common status_group target = 'status_group' y_splittedTrain = splittedTrain[target] y_splittedTrain.value_counts(normalize=True) y_splittedTrain.mode()[0] import category_encoders as ce encoder = ce.OneHotEncoder(use_cat_names=True) splittedTrain = encoder.fit_transform(splittedTrain) splittedTrain.head() splittedValidate = encoder.transform(splittedValidate) splittedValidate.head() features = splittedTrain.columns.drop(['id', 'status_group']) 
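The cell above drops the high-cardinality columns by hand before one-hot encoding. A dependency-free sketch of doing that with a cardinality threshold instead — using pandas `get_dummies` here rather than the notebook's `category_encoders` `OneHotEncoder`, and a toy frame with an illustrative threshold:

```python
# Sketch: keep only low-cardinality categoricals before one-hot encoding,
# since one-hot on thousands of unique values explodes the column count.
# The threshold (3) and the toy data are illustrative.
import pandas as pd

df = pd.DataFrame({
    'basin': ['A', 'B', 'A', 'C'],          # 3 unique values -> encode
    'wpt_name': ['w1', 'w2', 'w3', 'w4'],   # all unique -> leave alone
    'population': [10, 20, 30, 40],         # numeric, untouched
})

cat_cols = df.select_dtypes('object').columns
low_card = [c for c in cat_cols if df[c].nunique() <= 3]
encoded = pd.get_dummies(df, columns=low_card)
print(list(encoded.columns))
```

On the real data, a threshold like 50 with `train.select_dtypes('object')` plays the same role as the manual `drop` list.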
target = 'status_group' x_splittedTrain = splittedTrain[features] y_splittedTrain = splittedTrain[target] x_splittedValidate = splittedValidate[features] y_splittedValidate = splittedValidate[target] from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver='lbfgs') log_reg.fit(x_splittedTrain, y_splittedTrain) print('Validation Accuracy', log_reg.score(x_splittedValidate, y_splittedValidate)) log_reg.predict(x_splittedTrain) features from sklearn.feature_selection import f_regression, SelectKBest selector = SelectKBest(score_func=f_regression, k=15) x_splittedTrain_selected = selector.fit_transform(x_splittedTrain,y_splittedTrain) x_splittedValidate_selected = selector.transform(x_splittedValidate) x_splittedTrain_selected.shape all_names = x_splittedTrain.columns selected_mask = selector.get_support() selected_names = all_names[selected_mask] print(selected_names) x_splittedTrain_selected import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.linear_model import LogisticRegressionCV from sklearn.preprocessing import StandardScaler target = 'status_group' features = ['region', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked'] X_train = my_train[features] y_train = my_train[target] X_val = my_val[features] y_val = my_val[target] encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train) X_val_encoded = encoder.transform(X_val) imputer = SimpleImputer() X_train_imputed = imputer.fit_transform(X_train_encoded) X_val_imputed = imputer.transform(X_val_encoded) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_imputed) X_val_scaled = scaler.transform(X_val_imputed) ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [X] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) 
(9 minutes) to learn about the mathematics of Logistic Regression.- [X] Do train/validate/test split with the Tanzania Waterpumps data.- [X] Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)- [X] Use scikit-learn for logistic regression.- [X] Get your validation accuracy score.- [X] Get and plot your coefficients.- [X] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [X] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values)- [X] Make exploratory visualizations.- [X] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. 
For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this problem, you may want to use the parameter `logistic=True`You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from the previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` Pipelines[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. 
Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors. Reading- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). Setup ###Code # If you're in Colab... import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) ###Output _____no_output_____ ###Markdown Profile the training data FeaturesYour goal is to predict the operating condition of a waterpoint for each record in the dataset. You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is 
managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the waterpoint is not operational ###Code import pandas_profiling train_features.profile_report() ###Output _____no_output_____ ###Markdown Begin with baselines for classification ###Code # Determine majority class y_train = train_labels['status_group'] y_train.value_counts(normalize=True) # Guess the majority class for every prediction majority_class = y_train.mode()[0] y_pred = [majority_class] * len(y_train) len(y_pred) # Accuracy of majority class baseline = frequency of majority class from sklearn.metrics import accuracy_score accuracy_score(y_train, y_pred) ###Output _____no_output_____ ###Markdown Do train|validate|test split ###Code from sklearn.model_selection import train_test_split X_train = train_features y_train = train_labels['status_group'] X_train, X_val, y_train, y_val = train_test_split( X_train, y_train, train_size=0.80, test_size=0.20, stratify=y_train, random_state=42 ) X_train.shape, X_val.shape, y_train.shape, y_val.shape ###Output _____no_output_____ ###Markdown Use scikit-learn for logistic regression Begin with baselines: fast, first models ###Code # Drop non-numeric features X_train_numeric = X_train.select_dtypes('number') X_val_numeric = X_val.select_dtypes('number') # Check for nulls and drop before fitting 
X_train_numeric.isnull().sum(), X_val_numeric.isnull().sum() from sklearn.linear_model import LogisticRegression model = LogisticRegression() model.fit(X_train_numeric, y_train) # Evaluate on validation data y_pred = model.predict(X_val_numeric) accuracy_score(y_val, y_pred) # Same as above in one line model.score(X_val_numeric, y_val) # What predictions does a logistic regression return? print(y_pred) pd.Series(y_pred).value_counts(normalize=True) ###Output _____no_output_____ ###Markdown Do one-hot encoding of categorical features ###Code # Check "cardinality" of categorical features X_train.describe(exclude='number').T.sort_values(by='unique').head(10) # Explore 'quantity' feature X_train['quantity'].value_counts(dropna=False) # Recombine X_train and y_train for exploratory data analysis train = X_train.copy() train['status_group'] = y_train # Now do groupby train.groupby('quantity')['status_group'].value_counts(normalize=True) import matplotlib.pyplot as plt import seaborn as sns train['functional'] = train['status_group'] == 'functional' sns.catplot(x='quantity', y='functional', data=train, kind='bar'); ###Output _____no_output_____ ###Markdown Do one-hot encoding and scale features ###Code import category_encoders as ce from sklearn.preprocessing import StandardScaler categorical_features = ['quantity'] numeric_features = X_train.select_dtypes('number').columns.drop('id').tolist() features = categorical_features + numeric_features X_train_subset = X_train[features] X_val_subset = X_val[features] encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train_subset) X_val_encoded = encoder.transform(X_val_subset) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_val_scaled = scaler.transform(X_val_encoded) model = LogisticRegression() model.fit(X_train_scaled, y_train) print('Validation Accuracy:', model.score(X_val_scaled, y_val)) ###Output _____no_output_____ ###Markdown Get and plot 
coefficients ###Code for i, status in enumerate(model.classes_): coefficients = pd.Series(model.coef_[i], X_train_encoded.columns) coefficients.sort_values().plot.barh() plt.title(status) plt.show() ###Output _____no_output_____ ###Markdown Submit to predictive modeling competition ###Code # put test_features through the same paces as train and validate X_test_subset = test_features[features] X_test_encoded = encoder.transform(X_test_subset) X_test_scaled = scaler.transform(X_test_encoded) assert all(X_test_encoded.columns == X_train_encoded.columns) y_pred = model.predict(X_test_scaled) submission = sample_submission.copy() submission['status_group'] = y_pred submission.to_csv('submission-01.csv', index=False) !head submission-01.csv ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Get and plot your coefficients.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. 
The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. 
For this problem, you may want to use the parameter `logistic=True`You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from the previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` Pipelines[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors. 
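The chained steps described above (impute, scale, fit) can be expressed with `make_pipeline`. This is a minimal, self-contained sketch on a tiny synthetic numeric dataset, not the assignment's actual data; in the waterpumps notebooks the same pattern would wrap the encoding, imputing, and scaling steps already used cell by cell (a one-hot encoder could be added as the first pipeline step for categorical columns).

```python
# Minimal Pipeline sketch: impute -> scale -> logistic regression.
# The data here is synthetic and only illustrates the mechanics.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(42)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[rng.rand(200, 3) < 0.1] = np.nan  # sprinkle in some missing values

pipeline = make_pipeline(SimpleImputer(), StandardScaler(),
                         LogisticRegression(solver='lbfgs'))

# One fit call runs the whole sequence in order. The imputer and scaler are
# fit on the training rows only, so no statistics leak from held-out data.
pipeline.fit(X[:150], y[:150])
acc = pipeline.score(X[150:], y[150:])
print('Validation Accuracy:', acc)
```

Because the transformers live inside the estimator, the same `pipeline` object can be passed to `cross_val_score` or a grid search without re-fitting steps by hand.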
Reading- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code # If you're in Colab... import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) import sklearn from sklearn.metrics import accuracy_score from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegressionCV import matplotlib.pyplot as plt import seaborn as sns import category_encoders as ce from sklearn.preprocessing import StandardScaler train_features.head() train_labels.head() y_train = train_labels['status_group'] y_train.value_counts(normalize=True) maj_class = y_train.mode()[0] y_pred = [maj_class] * len(y_train) print(len(y_pred)) #Baseline ACCURACY accuracy_score(y_train, y_pred) X_train = train_features y_train = train_labels['status_group'] print(X_train.shape) print(y_train.shape) X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, train_size=0.8, test_size=0.2, stratify=y_train, random_state=666) X_train.shape, X_val.shape, y_train.shape, y_val.shape print(y_train.value_counts(normalize=True)) print(y_val.value_counts(normalize=True)) #fast first model X_train_num = X_train.select_dtypes('number') X_val_num = X_val.select_dtypes('number') X_train_num.isna().sum() model = LogisticRegressionCV(n_jobs=-1) model.fit(X_train_num, y_train) y_pred = model.predict(X_val_num) accuracy_score(y_val, y_pred) #Pick feature to one hot encode X_train.describe(exclude='number').T.sort_values(by='unique') X_train['extraction_type_class'].value_counts(normalize=True) train = X_train.copy() train['status_group'] = y_train 
train.groupby('extraction_type_class')['status_group'].value_counts(normalize=True) train['functional'] = (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='extraction_type_class', y='functional', data=train, kind='bar', color='blue', ) cat_feature = ['extraction_type_class'] num_feature = X_train.select_dtypes('number').columns.drop('id').tolist() features = cat_feature + num_feature X_train_sub = X_train[features] X_val_sub = X_val[features] encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train_sub) X_val_encoded = encoder.transform(X_val_sub) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_val_scaled = scaler.transform(X_val_encoded) model = LogisticRegressionCV(n_jobs=-1) model.fit(X_train_scaled, y_train) print('Val Accuracy: {}'.format(model.score(X_val_scaled, y_val))) def ohe(cat_feature, X_train, X_val, y_train, y_val): num_ft = X_train.select_dtypes('number').columns.drop('id').tolist() features = [cat_feature] + num_ft X_train_sub = X_train[features] X_val_sub = X_val[features] encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train_sub) X_val_encoded = encoder.transform(X_val_sub) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_val_scaled = scaler.transform(X_val_encoded) model = LogisticRegressionCV(n_jobs=-1) model.fit(X_train_scaled, y_train) print('Val Accuracy: {}'.format(model.score(X_val_scaled, y_val))) ohe('waterpoint_type_group', X_train, X_val, y_train, y_val) #waterpoint_type = 0.6332070707070707 #quantity_group = 0.6483585858585859 #quantity = 0.6483585858585859 from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=110, max_depth=19, random_state=3) clf.fit(X_train_scaled, y_train) clf.score(X_val_scaled, y_val) print(test_features.shape) test_features.head() X_test_subset = test_features[features] 
X_test_encoded = encoder.transform(X_test_subset) X_test_scaled = scaler.transform(X_test_encoded) assert all(X_test_encoded.columns == X_train_encoded.columns) y_pred = clf.predict(X_test_scaled) submission = sample_submission.copy() submission['status_group'] = y_pred submission.to_csv('RG_Submission_kaggle_01.csv', index=False) !head RG_Submission_kaggle_01.csv if in_colab: from google.colab import files files.download('RG_Submission_kaggle_01.csv') ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.--- Stretch Goals- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do one-hot encoding. 
For example, you could try `quantity`, `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.)- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Get and plot your coefficients.- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).--- Data Dictionary FeaturesYour goal is to predict the operating condition of a waterpoint for each record in the dataset. You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of 
water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the waterpoint is not operational--- Generate a submissionYour code to generate a submission file may look like this:```python estimator is your model or pipeline, which you've fit on X_train X_test is your pandas dataframe or numpy array, with the same number of rows, in the same order, as test_features.csv, and the same number of columns, in the same order, as X_trainy_pred = estimator.predict(X_test) Makes a dataframe with two columns, id and status_group, and writes to a csv file, without the indexsample_submission = pd.read_csv('sample_submission.csv')submission = sample_submission.copy()submission['status_group'] = y_predsubmission.to_csv('your-submission-filename.csv', index=False)```If you're working locally, the csv file is saved in the same directory as your notebook.If you're using Google Colab, you can use this code to download your submission csv file.```pythonfrom google.colab import filesfiles.download('your-submission-filename.csv')```--- ###Code import os, sys in_colab = 'google.colab' in sys.modules # If you're in Colab... if in_colab: # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Install required python packages !pip install -r requirements.txt # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. 
Use numpy.ptp instead. import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') # Read the Tanzania Waterpumps data # train_features.csv : the training set features # train_labels.csv : the training set labels # test_features.csv : the test set features # sample_submission.csv : a sample submission file in the correct format import pandas as pd train_features = pd.read_csv('../data/waterpumps/train_features.csv') train_labels = pd.read_csv('../data/waterpumps/train_labels.csv') test_features = pd.read_csv('../data/waterpumps/test_features.csv') sample_submission = pd.read_csv('../data/waterpumps/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) train_features.head() train_labels.head() train = pd.merge(train_features, train_labels, how='inner', on='id') train.head() train.shape status_ranks = {'non functional': 1, 'functional needs repair': 2, 'functional': 3} train.status_group = train.status_group.map(status_ranks) train.head() from sklearn.model_selection import train_test_split my_train, my_val = train_test_split(train, random_state=42) my_train.shape, my_val.shape my_train[:5] my_train.describe() target = 'status_group' y_train = my_train[target] y_train.value_counts(normalize=True) y_train.mode()[0] majority_class = y_train.mode()[0] y_pred = [majority_class] * len(y_train) sum(abs(y_pred - y_train)) / len(y_train) from sklearn.metrics import accuracy_score accuracy_score(y_train, y_pred) my_val.describe() y_val = my_val[target] y_pred = [majority_class] * len(y_val) accuracy_score(y_pred, y_val) my_train.describe() # 1. Import estimator class from sklearn.linear_model import LinearRegression # 2. Instantiate this class linear_reg = LinearRegression() # 3. 
Arrange X feature matrices (already did y target vectors) features = ['population', 'construction_year', 'amount_tsh'] X_train = my_train[features] X_val = my_val[features] # Impute missing values from sklearn.impute import SimpleImputer imputer = SimpleImputer() X_train_imputed = imputer.fit_transform(X_train) X_val_imputed = imputer.transform(X_val) # 4. Fit the model linear_reg.fit(X_train_imputed, y_train) # 5. Apply the model to new data. # The predictions look like this ... linear_reg.predict(X_val_imputed) pd.Series(linear_reg.coef_, features) from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver='lbfgs') log_reg.fit(X_train_imputed, y_train) print('Validation Accuracy', log_reg.score(X_val_imputed, y_val)) log_reg.predict(X_val_imputed) log_reg.predict_proba(X_val_imputed) log_reg.coef_ pd.Series(log_reg.coef_[0], features) log_reg.intercept_ import numpy as np def sigmoid(x): return 1 / (1 + np.e**(-x)) import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.linear_model import LogisticRegressionCV from sklearn.preprocessing import StandardScaler target = 'status_group' features = ['amount_tsh', 'gps_height', 'installer', 'waterpoint_type', 'source', 'source_type', 'population', 'scheme_management', 'region_code', 'district_code', 'construction_year', 'payment', 'management', 'funder'] X_train = my_train[features] y_train = my_train[target] X_val = my_val[features] y_val = my_val[target] encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train) X_val_encoded = encoder.transform(X_val) imputer = SimpleImputer() X_train_imputed = imputer.fit_transform(X_train_encoded) X_val_imputed = imputer.transform(X_val_encoded) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_imputed) X_val_scaled = scaler.transform(X_val_imputed) model = LogisticRegressionCV(cv=5, n_jobs=-1, random_state=42) model.fit(X_train_scaled, y_train) 
model.score(X_val_scaled, y_val) X_test = test_features[features] X_test_encoded = encoder.transform(X_test) X_test_imputed = imputer.transform(X_test_encoded) X_test_scaled = scaler.transform(X_test_imputed) y_pred = model.predict(X_test_scaled) print(y_pred) submission = test_features[['id']].copy() submission['status_group'] = y_pred submission.describe() submission.head() submission_ranks = {1: 'non functional', 2: 'functional needs repair', 3: 'functional'} submission.status_group = submission.status_group.map(submission_ranks) submission.head() submission.to_csv('mugilchoi-submission-02.csv', index=False) from google.colab import files files.download('mugilchoi-submission-02.csv') ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [X] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [X] Do train/validate/test split with the Tanzania Waterpumps data.- [X] Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)- [X] Use scikit-learn for logistic regression.- [X] Get your validation accuracy score.- [X] Get and plot your coefficients.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [X] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. 
If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this problem, you may want to use the parameter `logistic=True`You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. 
High-cardinality categoricalsThis code from the previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` Pipelines[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors. 
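The manual encode → impute → scale → fit sequence used earlier in this notebook can be chained into a single `Pipeline` object. A minimal runnable sketch on a tiny synthetic stand-in for the waterpumps data (scikit-learn's own `OneHotEncoder` stands in for `category_encoders`; the column names and values here are illustrative, not the real dataset):

```python
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.linear_model import LogisticRegression

# Tiny synthetic stand-in for the waterpumps data (real columns differ)
train = pd.DataFrame({
    'quantity': ['enough', 'dry', 'enough', 'insufficient', 'dry', 'enough'],
    'gps_height': [1390, None, 686, 263, 0, 1062],
    'status_group': ['functional', 'non functional', 'functional',
                     'non functional', 'non functional', 'functional'],
})
X, y = train[['quantity', 'gps_height']], train['status_group']

preprocess = ColumnTransformer([
    # One-hot encode the categorical column (sklearn's encoder stands in
    # for category_encoders.OneHotEncoder used elsewhere in this notebook)
    ('cat', OneHotEncoder(handle_unknown='ignore'), ['quantity']),
    # Impute missing values, then scale, for the numeric column
    ('num', Pipeline([('impute', SimpleImputer()),
                      ('scale', StandardScaler())]), ['gps_height']),
])

pipeline = Pipeline([
    ('preprocess', preprocess),
    ('model', LogisticRegression(solver='lbfgs')),
])

# One fit / one predict replaces four separate fit_transform/transform calls
pipeline.fit(X, y)
print(pipeline.score(X, y))
```

Because the encoder, imputer, and scaler are fit inside the pipeline, calling `pipeline.predict(test_features[features])` later applies exactly the transforms learned from the training data, with no chance of fitting them on test rows.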
Reading- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code # If you're in Colab... import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) from sklearn.model_selection import train_test_split X_train = train_features y_train = train_labels['status_group'] X_train, X_val, y_train, y_val = train_test_split( X_train, y_train, train_size=0.80, test_size=0.20, stratify=y_train, random_state=42) X_train.shape, X_val.shape, y_train.shape, y_val.shape y_train.value_counts(normalize=True) y_val.value_counts(normalize=True) X_train_numeric = X_train.select_dtypes('number') X_val_numeric = X_val.select_dtypes('number') from sklearn.metrics import accuracy_score y_pred = model.predict(X_val_numeric) accuracy_score(y_val, y_pred) ## cardinality check X_train.describe(exclude='number').T.sort_values(by='unique') X_train['water_quality'].value_counts(dropna=False) X_train['source_type'].value_counts(dropna=False) train = X_train.copy() train['status_group'] = y_train train.groupby('water_quality')['status_group'].value_counts(normalize=True) ## Visualization %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns # this adds a 'functional' col that marks 1 if 'functional' train['functional'] = (train['status_group']=='functional').astype(int) # plot sns.catplot(x='water_quality', y='functional', data=train, kind='bar'); ## validation import category_encoders as ce from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LogisticRegression categorical_features = ['water_quality'] numeric_features = 
X_train.select_dtypes('number').columns.drop('id').tolist() features = categorical_features + numeric_features X_train_subset = X_train[features] X_val_subset = X_val[features] encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train_subset) X_val_encoded = encoder.transform(X_val_subset) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_val_scaled = scaler.transform(X_val_encoded) model = LogisticRegression() model.fit(X_train_scaled, y_train) print('Validation Accuracy', model.score(X_val_scaled, y_val)); model.coef_[0] functional_coefficients = pd.Series( model.coef_[0], X_train_encoded.columns ) plt.figure(figsize=(15,15)) functional_coefficients.sort_values().plot.barh(); ###Output _____no_output_____ ###Markdown ###Code import os, sys in_colab = 'google.colab' in sys.modules # If you're in Colab... if in_colab: # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Install required python packages !pip install -r requirements.txt # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') # Read the Tanzania Waterpumps data # train_features.csv : the training set features # train_labels.csv : the training set labels # test_features.csv : the test set features # sample_submission.csv : a sample submission file in the correct format import pandas as pd train_features = pd.read_csv('../data/waterpumps/train_features.csv') train_labels = pd.read_csv('../data/waterpumps/train_labels.csv') test_features = pd.read_csv('../data/waterpumps/test_features.csv') sample_submission = pd.read_csv('../data/waterpumps/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.--- Stretch Goals- [ ] Add your own stretch goal(s) !- [ ] Clean the data. 
For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do one-hot encoding. For example, you could try `quantity`, `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.)- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Get and plot your coefficients.- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).--- Data Dictionary FeaturesYour goal is to predict the operating condition of a waterpoint for each record in the dataset. You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction 
the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the waterpoint is not operational--- Generate a submissionYour code to generate a submission file may look like this:```python estimator is your model or pipeline, which you've fit on X_train X_test is your pandas dataframe or numpy array, with the same number of rows, in the same order, as test_features.csv, and the same number of columns, in the same order, as X_trainy_pred = estimator.predict(X_test) Makes a dataframe with two columns, id and status_group, and writes to a csv file, without the indexsample_submission = pd.read_csv('sample_submission.csv')submission = sample_submission.copy()submission['status_group'] = y_predsubmission.to_csv('your-submission-filename.csv', index=False)```If you're working locally, the csv file is saved in the same directory as your notebook.If you're using Google Colab, you can use this code to download your submission csv file.```pythonfrom google.colab import filesfiles.download('your-submission-filename.csv')```--- ###Code import numpy as np import pandas as pd # checking if this is a functioning dataframe; it is train_features; # checking what datatypes we have, see if there's # 
anything categorical we need to worry about train_features.dtypes; #checking for NaN's, there's a lot, but we'll deal with them later train_features.isna().sum(); # takes the features of a train and test set in a pandas dataframe. # Remove the target column before putting the features in here. # returns the feature columns as a pandas dataframe one-hot-encoded # specify which of a test or train set you are putting in! def one_hot_encode(train_features, test_features, max_cardinality=50): # necessary imports import pandas as pd import numpy as np import category_encoders as ce # create the one_hot_encoder encoder = ce.OneHotEncoder(use_cat_names = True) #determine which categorical features are appropriate to one-hot-encode numerics = train_features.select_dtypes(include='number').columns.tolist() categoricals = train_features.select_dtypes(exclude='number').columns.tolist() low_cardinality = [col for col in categoricals if train_features[col].nunique() <= max_cardinality] features = numerics + low_cardinality #create a shortened dataframe of numeric features and categorical features with low cardinality train_features = train_features[features] test_features = test_features[features] # ALWAYS .fit_transform on TRAIN data set # ALWAYS .transform on TEST data set train_features_encoded = encoder.fit_transform(train_features) test_features_encoded = encoder.transform(test_features) return train_features_encoded, test_features_encoded # calling the one-hot-encode function on our train and test features dataframes train_features_encoded, test_features_encoded = one_hot_encode(train_features, test_features) # train, val, test split from sklearn.model_selection import train_test_split train_x, val_x = train_test_split(train_features_encoded, random_state = 20392) train_y, val_y = train_test_split(train_labels, random_state = 20392) train_x.shape, val_x.shape, train_y.shape, val_y.shape # making sure the x and y sets match up correctly train_x.head(); train_y.head(); # 
getting a sense of how much accuracy our baseline model should have train_labels['status_group'].value_counts(normalize=True) # takes the column of a train_labels dataframe # prints the accuracy of the baseline # returns the baseline list def baseline_Logistic(labels): # making a list of the same size as 'labels' # with the mode as the only possible entry most_common_label = labels.mode()[0] pred_baseline = [most_common_label] * len(labels) #calculate baseline accuracy wrong_count = 0 for i in range(len(pred_baseline)): if pred_baseline[i] != labels[i]: wrong_count += 1 acc = 1 - wrong_count / len(pred_baseline) print('Accuracy of Baseline: ', acc*100, '%') return pred_baseline # calling the baseline function to get the baseline accuracy pred_baseline = baseline_Logistic(train_labels['status_group']) #on to Logistic Regression! from sklearn.linear_model import LogisticRegression logistic_reg = LogisticRegression(solver='lbfgs', multi_class='multinomial', max_iter=10000) logistic_reg.fit(train_x, train_y['status_group']) print('Validation Accuracy', logistic_reg.score(val_x, val_y['status_group'])) # generate submission # estimator is your model or pipeline, which you've fit on X_train # X_test is your pandas dataframe or numpy array, # with the same number of rows, in the same order, as test_features.csv, # and the same number of columns, in the same order, as X_train # y_pred is an array, we make it into a column in our submission dataframe next y_pred = logistic_reg.predict(test_features_encoded) # Makes a dataframe with two columns, id and status_group, # and writes to a csv file, without the index submission = sample_submission.copy() submission['status_group'] = y_pred submission.to_csv('Curtis-McKendrick-Pipes.csv', index=False) # If you're working locally, the csv file is saved in the same directory as your notebook. # If you're using Google Colab, you can use this code to download your submission csv file. 
# from google.colab import files # files.download('module4/Curtis-McKendrick-Pipes.csv') # make sure the y_pred entries overwrite the status_group column # of the submission correctly y_pred[:5] submission.head() ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Get and plot your coefficients.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." 
One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this problem, you may want to use the parameter `logistic=True`You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from the previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... 
Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` Pipelines[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors. Reading- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code # If you're in Colab... 
import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) y_train = train_labels['status_group'] y_train.tail() majority_class = y_train.mode()[0] majority_class len(y_train) y_train.value_counts() ###Output _____no_output_____ ###Markdown Train/validate/test split with the Tanzania Waterpumps data. 
###Code from sklearn.model_selection import train_test_split X_train = train_features y_train = train_labels['status_group'] X_train, X_val, y_train, y_val = train_test_split( X_train, y_train, train_size=0.80, test_size=0.20, stratify=y_train, random_state=42) ###Output _____no_output_____ ###Markdown Do one-hot encoding and logistic regression ###Code X_train.describe(exclude='number').T.sort_values(by='unique') import category_encoders as ce from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LogisticRegression categorical_features = ['quantity'] numeric_features = X_train.select_dtypes('number').columns.drop('id').tolist() features = categorical_features + numeric_features X_train_subset = X_train[features] X_val_subset = X_val[features] encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train_subset) X_val_encoded = encoder.transform(X_val_subset) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_val_scaled = scaler.transform(X_val_encoded) model = LogisticRegression() model.fit(X_train_scaled, y_train) print('Validation Accuracy', model.score(X_val_scaled, y_val)) ###Output /usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning. FutureWarning) /usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:469: FutureWarning: Default multi_class will be changed to 'auto' in 0.22. Specify the multi_class option to silence this warning. 
"this warning.", FutureWarning) ###Markdown Get & plot coefficients ###Code import matplotlib.pyplot as plt functional_coefficients = pd.Series( model.coef_[0], X_train_encoded.columns ) plt.figure(figsize=(10, 10)) functional_coefficients.sort_values().plot.barh(); ###Output _____no_output_____ ###Markdown Submission ###Code X_test_subset = test_features[features] X_test_encoded = encoder.transform(X_test_subset) X_test_scaled = scaler.transform(X_test_encoded) assert all(X_test_encoded.columns == X_train_encoded.columns) y_pred = model.predict(X_test_scaled) submission = sample_submission.copy() submission['status_group'] = y_pred submission.to_csv('submission-01.csv', index=False) !head submission-01.csv if in_colab: from google.colab import files files.download('submission-01.csv') ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Get and plot your coefficients.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. 
The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizations Visualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```python train['functional'] = (train['status_group']=='functional').astype(int)``` You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).) You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features.
For this problem, you may want to use the parameter `logistic=True`. You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricals This code from the previous assignment demonstrates how to replace less frequent values with 'OTHER'```python # Reduce cardinality for NEIGHBORHOOD feature ... # Get a list of the top 10 neighborhoods top10 = train['NEIGHBORHOOD'].value_counts()[:10].index # At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER' train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER' test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` Pipelines [Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors.
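The chained steps the User Guide describes can be sketched as a small scikit-learn Pipeline. This is a minimal illustration on toy data — the column names and values below are made up, not the waterpumps dataset — and it uses scikit-learn's own `OneHotEncoder` and `ColumnTransformer` for self-containment rather than `category_encoders`:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy data: one categorical and one numeric feature (illustrative only)
X = pd.DataFrame({
    'quantity': ['enough', 'dry', 'enough', 'seasonal', 'dry', 'enough'],
    'population': [120, 0, 450, 80, 15, 300],
})
y = ['functional', 'non functional', 'functional',
     'functional', 'non functional', 'functional']

# Encode the categorical column and scale the numeric column
preprocess = ColumnTransformer([
    ('onehot', OneHotEncoder(handle_unknown='ignore'), ['quantity']),
    ('scale', StandardScaler(), ['population']),
])

# One fit/predict call covers encoding, scaling, and the model, and the
# transformers are fit only on the data passed to fit (no leakage)
pipeline = Pipeline([
    ('preprocess', preprocess),
    ('model', LogisticRegression(solver='lbfgs')),
])
pipeline.fit(X, y)
print(pipeline.score(X, y))
```

The same `pipeline` object can then be passed anywhere a single estimator is expected, e.g. to `cross_val_score` or `GridSearchCV`.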
Reading- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code # If you're in Colab... import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) ###Output _____no_output_____ ###Markdown Do train/validate/test split with the Tanzania Waterpumps data. ###Code train_labels.head() ###Output _____no_output_____ ###Markdown Baseline Model A baseline for classification can be the most common class in the training dataset. Logistic regression predicts the probability of an event occurring. ###Code y_train= train_labels['status_group'] # determine the majority class (y_train.value_counts(normalize= True)*100).round(2) ###Output _____no_output_____ ###Markdown Baseline prediction = 54.31 ###Code # accuracy for the classification is the frequency of the most common label # check how accurate the model is # guess this majority class for every prediction majority_class= y_train.mode()[0] y_pred= [majority_class]*len(y_train) print(len(y_pred)) # Accuracy of majority class baseline = frequency of the majority class from sklearn.metrics import accuracy_score accuracy_score(y_train, y_pred) train_features.head() X_train = train_features X_train.shape , y_train.shape X_test= test_features sample_submission.head() y_test = sample_submission['status_group'] X_test.shape, y_test.shape # standardize the data # split the data into train and validate data from sklearn.model_selection import train_test_split X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, stratify= y_train, test_size=0.2, random_state= 44 ) X_train.shape , X_val.shape , y_train.shape, y_val.shape ###Output
_____no_output_____ ###Markdown Use scikit-learn for logistic regression. ###Code # drop the non-numeric features X_train_numeric= X_train.select_dtypes('number') print(X_train_numeric.shape) print(y_train.shape) X_train_numeric.head() # Look for NaNs X_train_numeric.isna().sum() from sklearn.linear_model import LogisticRegressionCV # Instantiate it model= LogisticRegressionCV(solver= 'lbfgs', multi_class= 'auto', n_jobs= -1,max_iter= 1000) # Fit it model.fit(X_train_numeric, y_train) import sklearn sklearn.__version__ ###Output _____no_output_____ ###Markdown Get your validation accuracy score. ###Code # evaluate on validation data X_val_numeric = X_val.select_dtypes('number') y_pred= model.predict(X_val_numeric) acc= accuracy_score(y_val, y_pred) print(f'Accuracy score for just numeric features: {acc: .2f}') # didn't beat the baseline prediction X_train.isna().sum() ###Output _____no_output_____ ###Markdown Simple and fast Baseline Model with subset of columns ###Code # Just the numeric columns with no missing values train_subset= X_train.select_dtypes('number').dropna(axis=1) ###Output _____no_output_____ ###Markdown Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)
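To see why high cardinality is a problem for one-hot encoding: each unique value becomes its own indicator column, so the width of the encoded matrix grows with the number of unique values. A toy sketch with pandas (the columns and values here are illustrative, not the real waterpumps data):

```python
import pandas as pd

# Toy frame: one low-cardinality and one high-cardinality column
df = pd.DataFrame({
    'quantity': ['enough', 'dry', 'enough', 'seasonal'],  # 3 unique values
    'funder': ['a', 'b', 'c', 'd'],                       # every row unique
})

# get_dummies creates one indicator column per unique value per column
encoded = pd.get_dummies(df)
print(encoded.columns.tolist())
```

Here `quantity` contributes 3 columns and `funder` contributes 4. A real column with thousands of unique values would blow the encoded matrix up to thousands of columns, which is why the cells below start from `quantity` alone.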
###Code # check the cardinality of the data X_train.describe(exclude='number').T.sort_values(by='unique') X_train['quantity'].value_counts() # combine X_train and y_train for exploratory data visualisation train = X_train.copy() train['status_group']= y_train train.groupby('quantity')['status_group'].value_counts(dropna = True, normalize=True) %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns train['functional']= (train['status_group']== 'functional').astype(int) train[['status_group' , 'functional']] sns.catplot(x = 'quantity', y= 'functional', data = train, kind= 'bar', color= 'gray') # a feature with only one unique value (or too many) is not useful for the model import category_encoders as ce from sklearn.preprocessing import StandardScaler # Use both the numeric features and a categorical feature (quantity) categorical_features= ['quantity'] numeric_features = X_train.select_dtypes('number').columns.drop('id').tolist() # combine the features features = categorical_features + numeric_features # create subset of numeric and categorical features X_train_subset = X_train[features] X_val_subset= X_val[features] # do the one-hot encoding encoder = ce.OneHotEncoder(use_cat_names = True) X_train_encoded = encoder.fit_transform(X_train_subset) X_val_encoded = encoder.transform(X_val_subset) # Standardize the data scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_val_scaled = scaler.transform(X_val_encoded) # Instantiate the model model = LogisticRegressionCV(multi_class = 'auto', n_jobs = -1, ) # fit the model model.fit(X_train_scaled, y_train) # Print the accuracy output print(f'validation score:{model.score(X_val_scaled, y_val):.2f}') ###Output /usr/local/lib/python3.6/dist-packages/sklearn/model_selection/_split.py:1978: FutureWarning: The default value of cv will change from 3 to 5 in version 0.22. Specify it explicitly to silence this warning. warnings.warn(CV_WARNING, FutureWarning) ###Markdown Get and plot your coefficients.
###Code # we have coefficient for each variable(column) # model.coef[0] for the 0th class functional #model.coef[1] for the 1st class needs to repair coefficient= pd.Series(model.coef_[0], X_train_encoded.columns) plt.figure(figsize= (10,7)) coefficient.sort_values().plot.barh(); ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Do one-hot encoding. For example, in addition to `quantity`, you could try `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.)- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Get and plot your coefficients.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. 
For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. To visualize this dataset, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this problem, you may want to use the parameter `logistic=True`You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from the previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... 
Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` Pipelines[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors. 
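The "joint parameter selection" bullet above can be sketched with `GridSearchCV` wrapped around a pipeline: parameters of a step are addressed as `<step name>__<parameter>`. A minimal sketch on synthetic data (the step names and the grid of `C` values here are illustrative, not tuned for the waterpumps problem):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic binary classification data
rng = np.random.RandomState(0)
X = rng.normal(size=(60, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

pipe = Pipeline([
    ('scale', StandardScaler()),
    ('model', LogisticRegression(solver='lbfgs')),
])

# Grid search reaches into the 'model' step via the double-underscore syntax;
# each candidate refits the scaler on each training fold, avoiding leakage
grid = GridSearchCV(pipe, param_grid={'model__C': [0.1, 1.0, 10.0]}, cv=3)
grid.fit(X, y)
print(grid.best_params_)
```

`grid.best_estimator_` is itself a fitted pipeline, so the winning preprocessing and model travel together into prediction.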
Reading- [ ] [Why is logistic regression considered a linear model?](https://www.quora.com/Why-is-logistic-regression-considered-a-linear-model)- [ ] [Training, Validation, and Testing Data Sets](https://end-to-end-machine-learning.teachable.com/blog/146320/training-validation-testing-data-sets)- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code import os, sys in_colab = 'google.colab' in sys.modules # If you're in Colab... if in_colab: # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Install required python packages !pip install -r requirements.txt # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.--- Stretch Goals- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." 
One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do one-hot encoding. For example, you could try `quantity`, `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.)- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Get and plot your coefficients.- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).--- Data Dictionary FeaturesYour goal is to predict the operating condition of a waterpoint for each record in the dataset. You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- 
`payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the waterpoint is not operational--- Generate a submissionYour code to generate a submission file may look like this:```python estimator is your model or pipeline, which you've fit on X_train X_test is your pandas dataframe or numpy array, with the same number of rows, in the same order, as test_features.csv, and the same number of columns, in the same order, as X_trainy_pred = estimator.predict(X_test) Makes a dataframe with two columns, id and status_group, and writes to a csv file, without the indexsample_submission = pd.read_csv('sample_submission.csv')submission = sample_submission.copy()submission['status_group'] = y_predsubmission.to_csv('your-submission-filename.csv', index=False)```If you're working locally, the csv file is saved in the same directory as your notebook.If you're using Google Colab, you can use this code to download your submission csv file.```pythonfrom google.colab import filesfiles.download('your-submission-filename.csv')```--- ###Code import os, sys in_colab = 'google.colab' in sys.modules # If you're in Colab... if in_colab: # Pull files from Github repo os.chdir('/content') !git init . 
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Install required python packages !pip install -r requirements.txt # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') # Read the Tanzania Waterpumps data # train_features.csv : the training set features # train_labels.csv : the training set labels # test_features.csv : the test set features # sample_submission.csv : a sample submission file in the correct format import pandas as pd train_features = pd.read_csv('../data/waterpumps/train_features.csv') train_labels = pd.read_csv('../data/waterpumps/train_labels.csv') test_features = pd.read_csv('../data/waterpumps/test_features.csv') sample_submission = pd.read_csv('../data/waterpumps/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) #I'm gonna need to train/test split this train_features.head() train_labels.head() train_labels['status_group'].value_counts() #the predictions for this data are what get submitted test_features.head() #if I don't make a model I like better, I can export and submit this sample_submission.head() ###Output _____no_output_____ ###Markdown Checking out some categorical variables ###Code train_features['funder'].value_counts(dropna=False) train_features['installer'].value_counts(dropna=False) train_features['wpt_name'].value_counts(dropna=False) train_features['basin'].value_counts(dropna=False) train_features['region'].value_counts(dropna=False) train_features['lga'].value_counts(dropna=False) train_features['ward'].value_counts(dropna=False) 
train_features['scheme_management'].value_counts(dropna=False) train_features['scheme_name'].value_counts(dropna=False) train_features['permit'].value_counts(dropna=False) #the zeros here are definitely gonna distort the regression train_features['construction_year'].value_counts(dropna=False) train_features['extraction_type'].value_counts(dropna=False) train_features['extraction_type_group'].value_counts(dropna=False) #this looks promising for one hot! train_features['extraction_type_class'].value_counts(dropna=False) train_features['management'].value_counts(dropna=False) train_features['management_group'].value_counts(dropna=False) train_features['payment'].value_counts(dropna=False) #are free pumps different from pay? train_features['payment_type'].value_counts(dropna=False) train_features['water_quality'].value_counts(dropna=False) train_features['quality_group'].value_counts(dropna=False) train_features['quantity'].value_counts(dropna=False) #they're the same train_features['quantity_group'].value_counts(dropna=False) train_features['source'].value_counts(dropna=False) train_features['source_type'].value_counts(dropna=False) train_features['source_class'].value_counts(dropna=False) train_features['waterpoint_type'].value_counts(dropna=False) train_features['waterpoint_type_group'].value_counts(dropna=False) ###Output _____no_output_____ ###Markdown thank god that's over ###Code #merge labels and features before doing train/test split bigtrain = pd.merge(train_features,train_labels,how='inner',on='id') bigtrain.head() #let's not feature engineer yet from sklearn.model_selection import train_test_split train, val = train_test_split(bigtrain, random_state=42) train.shape, val.shape target = 'status_group' y_val = val[target] y_train = train[target] y_train.value_counts(normalize=True) #baseline model (all functional) = 54% accurate #for now, some random-ish numeric features #data are uncleaned and thus noisy features =
['population','construction_year','region_code','amount_tsh'] X_train = train[features] X_val = val[features] from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver='lbfgs') log_reg.fit(X_train, y_train) print('Validation Accuracy', log_reg.score(X_val, y_val)) #BARELY BETTER... BUT BETTER ###Output Validation Accuracy 0.55993265993266 ###Markdown Generate a submissionYour code to generate a submission file may look like this:```python estimator is your model or pipeline, which you've fit on X_train X_test is your pandas dataframe or numpy array, with the same number of rows, in the same order, as test_features.csv, and the same number of columns, in the same order, as X_trainy_pred = estimator.predict(X_test) Makes a dataframe with two columns, id and status_group, and writes to a csv file, without the indexsample_submission = pd.read_csv('sample_submission.csv')submission = sample_submission.copy()submission['status_group'] = y_predsubmission.to_csv('your-submission-filename.csv', index=False)```If you're working locally, the csv file is saved in the same directory as your notebook.If you're using Google Colab, you can use this code to download your submission csv file.```pythonfrom google.colab import filesfiles.download('your-submission-filename.csv')```--- ###Code y_pred = log_reg.predict(test_features[features]) y_pred sampledf = pd.DataFrame([test_features.id,y_pred]).transpose() sampledf.head() sampledf.to_csv('first-submission.csv', index=False) from google.colab import files files.download('first-submission.csv') from google.colab import drive drive.mount('/content/drive') ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic 
Regression.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Get and plot your coefficients.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. 
For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this problem, you may want to use the parameter `logistic=True`You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from the previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` Pipelines[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. 
Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors. Reading- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code # If you're in Colab... import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) from sklearn.model_selection import train_test_split xtrain, xval, ytrain, yval= train_test_split(train_features,train_labels) xtrain.shape, xval.shape, ytrain.shape, yval.shape def cleanUp(df): df=df.drop(columns=['id','num_private','amount_tsh']) df.date_recorded=pd.to_datetime(df.date_recorded) df.longitude=df.longitude.replace({0:35.15315303394668}) df.latitude=df.latitude.replace({0:-5.699241279889469}) df.construction_year=df.construction_year.replace({0:2010}) df.population=df.population.replace({0:281.60745437348936}) df.region_code=df.region_code.astype(str) df.district_code=df.district_code.astype(str) features=['basin','district_code', 'extraction_type', 'extraction_type_class', 'extraction_type_group', 'management', 'management_group','payment','payment_type','permit', 'public_meeting', 'quality_group', 'quantity', 'quantity_group', 'recorded_by', 'region', 'region_code', 'scheme_management', 'source', 'source_class', 'source_type', 'water_quality', 'waterpoint_type', 'waterpoint_type_group', 'gps_height', 'longitude', 'latitude', 'population', 'construction_year'] df=df[features] return df def cleanuptarget(y): y=y.status_group return y xtrain=cleanUp(xtrain) xval=cleanUp(xval) xtest=cleanUp(test_features) ytrain=cleanuptarget(ytrain) yval=cleanuptarget(yval) #xtrain=xtrain.drop(columns=['id','num_private', "amount_tsh"]) yval #xval=xval.drop(columns=['id','num_private','amount_tsh']) 
#xtrain.date_recorded=pd.to_datetime(xtrain.date_recorded)
#xval.date_recorded=pd.to_datetime(xval.date_recorded)
#xtrain.population.replace({0:np.nan}).mean()
import numpy as np
# xtrain.longitude=xtrain.longitude.replace({0:np.nan})
# xtrain.longitude=xtrain.longitude.replace({np.nan:35.15315303394668})
# xtrain.latitude=xtrain.latitude.replace({0:np.nan})
# xtrain.latitude.mean()
# xtrain.latitude=xtrain.latitude.replace({np.nan:-5.699241279889469})
# xtrain.latitude
# xtrain.construction_year=xtrain.construction_year.replace({0:np.NaN})
# xtrain.construction_year=xtrain.construction_year.replace({np.nan:2010})
# xtrain
# Determine majority class
# yval.status_group.value_counts()
ytrain.value_counts(normalize=True)
# So a baseline that always predicts the majority class would be about 54 percent accurate.
# totaltrain=xtrain.copy()
# totaltrain['target']=ytrain.status_group
# totaltrain.groupby('region').target.describe()
categorical_features=xtrain.select_dtypes(exclude="number").columns
numeric_features=xtrain.select_dtypes('number').columns
numeric_features
categorical_features
# xtrain['functional_number']=ytrain.status_group.copy().replace({'non functional':0,
#                                                                 'functional needs repair':1,
#                                                                 'functional':2})
# xtrain.head()
import seaborn as sns
import matplotlib.pyplot as plt
# Keep only categorical features with 30 or fewer unique values,
# so one-hot encoding stays manageable.
low_cardinality_cat_feat=[]
for col in sorted(categorical_features):
    if xtrain[col].nunique()<=30:
        # sns.catplot(x=col, y='functional_number', data=xtrain, kind='bar', color='blue')
        # plt.show()
        low_cardinality_cat_feat.append(col)
low_cardinality_cat_feat
features=list(low_cardinality_cat_feat)+list(numeric_features)
features
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegressionCV
model=LogisticRegressionCV(n_jobs=-1, cv=5, multi_class='auto')
model.fit(xtrain[numeric_features],ytrain)
ypred=model.predict(xval[numeric_features])
accuracy_score(yval, ypred)
import category_encoders as ce
from sklearn.preprocessing import MinMaxScaler
xtrain_subset=xtrain[features]
xval_subset=xval[features]
encoder=ce.OneHotEncoder()
xtrain_encoded=encoder.fit_transform(xtrain_subset)
xval_encoded=encoder.transform(xval_subset)
scaler=MinMaxScaler()
xtrain_scaled=scaler.fit_transform(xtrain_encoded)
xval_scaled=scaler.transform(xval_encoded)
model=LogisticRegressionCV(n_jobs=-1, multi_class='auto', cv=5, max_iter=1000)
model.fit(xtrain_scaled, ytrain)
print("Validation Accuracy", model.score(xval_scaled,yval))
# The test set must go through the same encoder and scaler as the training data
# before predicting.
xtest_subset=xtest[features]
xtest_encoded=encoder.transform(xtest_subset)
xtest_scaled=scaler.transform(xtest_encoded)
ypred=model.predict(xtest_scaled)
sample_submission.status_group=ypred
from sklearn.feature_selection import f_regression, SelectKBest
def targetNumberizer(y):
    y=y.replace({'non functional':0,
                 'functional needs repair':1,
                 'functional':2})
    return y
yvalnumb=targetNumberizer(yval)
ytrainnumb=targetNumberizer(ytrain)
for k in range(1, len(features)+1):
    print(f'{k} features')
    selector=SelectKBest(score_func=f_regression, k=k)
    x_train_selected=selector.fit_transform(xtrain_scaled,ytrainnumb)
    x_val_selected=selector.transform(xval_scaled)
    model=LogisticRegressionCV(n_jobs=-1, multi_class='auto', cv=5, max_iter=1000)
    model.fit(x_train_selected, ytrainnumb)
    print("Validation accuracy", model.score(x_val_selected, yvalnumb))
###Output 1 features ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Get and plot your coefficients.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage.
Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1.
(If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)

You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this problem, you may want to use the parameter `logistic=True`. You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty.

High-cardinality categoricals

This code from the previous assignment demonstrates how to replace less frequent values with 'OTHER':

```python
# Reduce cardinality for NEIGHBORHOOD feature ...
# Get a list of the top 10 neighborhoods
top10 = train['NEIGHBORHOOD'].value_counts()[:10].index
# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
```

Pipelines

[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:

> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:
> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.
> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.
> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors.
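The quoted points can be made concrete with a small end-to-end sketch. This is a hedged illustration on synthetic data, not the waterpumps dataset: the column names `quantity` and `gps_height` merely mimic two of its features, and the toy target is deterministic by construction. One `fit`/`predict` call on the raw frame runs encoding, scaling, and model fitting, and cross-validation re-fits the transformers on each training fold, which is exactly the "safety" point above.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy data standing in for features like 'quantity' (categorical) and
# 'gps_height' (numeric); the 0/1 target mimics a functional label and is
# (by construction) a function of 'quantity' alone.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    'quantity': rng.choice(['dry', 'enough', 'seasonal'], size=200),
    'gps_height': rng.normal(1000, 300, size=200),
})
y = (X['quantity'] == 'enough').astype(int)

# One-hot encode the categorical column, scale the numeric one.
preprocess = ColumnTransformer([
    ('cat', OneHotEncoder(handle_unknown='ignore'), ['quantity']),
    ('num', StandardScaler(), ['gps_height']),
])

pipeline = Pipeline([
    ('preprocess', preprocess),
    ('model', LogisticRegression()),
])

# cross_val_score re-fits the encoder and scaler on each training fold, so no
# statistics leak from the held-out fold into preprocessing.
scores = cross_val_score(pipeline, X, y, cv=5)
print(scores.mean())  # close to 1.0 here, since the toy target depends only on 'quantity'
```

Because the pipeline is a single estimator, it can also be dropped straight into `GridSearchCV` to tune, say, the model's `C` and the encoder's settings jointly.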
Reading- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code # If you're in Colab... import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) train_features.head().T import pandas_profiling train_features.profile_report() ##Accuracy from sklearn.metrics import accuracy_score ###Output _____no_output_____ ###Markdown Train Test ###Code from sklearn.model_selection import train_test_split X_train = train_features y_train = train_labels['status_group'] X_train, X_val, y_train, y_val = train_test_split( X_train, y_train, train_size=0.80, test_size=0.20, stratify=y_train, random_state=42 ) X_train.shape, X_val.shape, y_train.shape, y_val.shape import sklearn; sklearn.__version__ X_train_numeric = X_train.select_dtypes('number') X_val_numeric = X_val.select_dtypes('number') X_train_numeric.isnull().sum() from sklearn.linear_model import LogisticRegression model = LogisticRegression(max_iter=5000, n_jobs=-1) model.fit(X_train_numeric, y_train) y_pred = model.predict(X_val_numeric) accuracy_score(y_val, y_pred) model.score(X_val_numeric, y_val) y_pred ###Output _____no_output_____ ###Markdown One Hot Encoding ###Code X_train.describe(exclude='number').T.sort_values(by='unique') source_class,quantity,extraction_type X_train['quantity'].value_counts(dropna=False) X_train['source_class'].value_counts(dropna=False) X_train['extraction_type'].value_counts(dropna=False) train = X_train.copy() train['status_group'] = y_train train.groupby('source_class')['status_group'].value_counts(normalize=True) %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns 
# This will give an error: # sns.catplot(x='quantity', y='status_group', data=train, kind='bar') train['functional'] = (train['status_group']=='functional').astype(int) sns.catplot(x='source_class', y='functional', data=train, kind='bar'); import category_encoders as ce from sklearn.preprocessing import StandardScaler categorical_features = ['quantity','source_class','extraction_type','basin','water_quality'] numeric_features = X_train.select_dtypes('number').columns.drop('id').tolist() features = categorical_features + numeric_features X_train_subset = X_train[features] X_val_subset = X_val[features] encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train_subset) X_val_encoded = encoder.transform(X_val_subset) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_val_scaled = scaler.transform(X_val_encoded) model = LogisticRegression() model.fit(X_train_scaled, y_train) print('Validation Accuracy', model.score(X_val_scaled, y_val)) # X_val_subset.head() X_train_encoded.head() functional_coefficients = pd.Series( model.coef_[0], X_train_encoded.columns ) plt.figure(figsize=(10, 10)) functional_coefficients.sort_values().plot.barh(); X_test_subset = test_features[features] X_test_encoded = encoder.transform(X_test_subset) X_test_scaled = scaler.transform(X_test_encoded) assert all(X_test_encoded.columns == X_train_encoded.columns) y_pred = model.predict(X_test_scaled) submission = sample_submission.copy() submission['status_group'] = y_pred submission.to_csv('submission-01.csv', index=False) ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] Do train/validate/test split with the 
Tanzania Waterpumps data.- [ ] Do one-hot encoding. For example, in addition to `quantity`, you could try `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.)- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Get and plot your coefficients.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. To visualize this dataset, you may want to create a new column to represent the target as a number, 0 or 1.
For example:

```python
train['functional'] = (train['status_group']=='functional').astype(int)
```

You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)

- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")
- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)

You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this problem, you may want to use the parameter `logistic=True`. You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty.

High-cardinality categoricals

This code from the previous assignment demonstrates how to replace less frequent values with 'OTHER':

```python
# Reduce cardinality for NEIGHBORHOOD feature ...
# Get a list of the top 10 neighborhoods
top10 = train['NEIGHBORHOOD'].value_counts()[:10].index
# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
```

Pipelines

[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:

> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification.
Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors. Reading- [ ] [Why is logistic regression considered a linear model?](https://www.quora.com/Why-is-logistic-regression-considered-a-linear-model)- [ ] [Training, Validation, and Testing Data Sets](https://end-to-end-machine-learning.teachable.com/blog/146320/training-validation-testing-data-sets)- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code import os, sys in_colab = 'google.colab' in sys.modules # If you're in Colab... if in_colab: # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Install required python packages !pip install -r requirements.txt # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') warnings.filterwarnings(action='ignore', category=FutureWarning, module='sklearn') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') #assert checks if statements after it is true or false. if false it throws an error. assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) train_features.head() #import libraries import numpy as np import seaborn as sns import plotly.express as px from sklearn.linear_model import LogisticRegression import matplotlib.pyplot as plt train_labels.head() ###Output _____no_output_____ ###Markdown Determine Majority class ###Code y_train = train_labels['status_group'] y_train.value_counts(normalize = True) #Guess the majority class for every prediction majority_class = y_train.mode()[0] y_pred = [majority_class] * len(y_train) print(len(y_pred)) #Baseline accuracy if we guessed the majority class for every prediction from sklearn.metrics import accuracy_score accuracy_score(y_train, y_pred) ###Output _____no_output_____ ###Markdown Do Train/validate/test split ###Code from sklearn.model_selection import train_test_split X_train = train_features y_train = train_labels['status_group'] #Randomly takes sample data X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, train_size=0.80, test_size=0.20, stratify= y_train, random_state=42) #Show shapes X_train.shape, X_val.shape, y_train.shape, y_val.shape #drop non-numeric features X_train_numeric = X_train.select_dtypes('number') X_val_numeric = X_val.select_dtypes('number') #any nulls? 
X_train_numeric.isnull().sum()
# fit logistic regression on train data
from sklearn.linear_model import LogisticRegressionCV
model = LogisticRegressionCV(n_jobs = -1)
model.fit(X_train_numeric, y_train)
from sklearn.metrics import accuracy_score
y_pred = model.predict(X_val_numeric)
accuracy_score(y_val, y_pred)
model.score(X_val_numeric, y_val)  # same validation accuracy via model.score
###Output _____no_output_____ ###Markdown That accuracy is not much better than the baseline (probably because we used a random train_test_split instead of splitting the data by time) ###Code y_pred
y_pred_proba = model.predict_proba(X_val_numeric)
y_pred_proba
# Recombine X_train and y_train, for exploratory data analysis
train = X_train.copy()
train['status_group'] = y_train
# Now do groupby...
train.groupby('quantity')['status_group'].value_counts(normalize=True)
train['functional'] = (train['status_group']=='functional').astype(int)
train[['status_group', 'functional']]
# plot the relationship between quantity and the target
sns.catplot(x='quantity', y='functional', data=train, kind='bar')
plt.title('% of Waterpumps Functional, by Water Quantity');
# Predict on the test set using the same numeric features the model was fit on
X_test_numeric = test_features.select_dtypes('number')
y_pred = model.predict(X_test_numeric)
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.shape, y_pred.shape
submission.to_csv('submission-01.csv', index=False)
!head submission-01.csv
if in_colab:
    from google.colab import files
    # Just try again if you get this error:
    # TypeError: Failed to fetch
    # https://github.com/googlecolab/colabtools/issues/337
    files.download('submission-01.csv')
!pip install kaggle
from google.colab import drive
drive.mount('/content/drive')
%env KAGGLE_CONFIG_DIR=/content/drive/My Drive/
###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4
Assignment- [ ] Watch Aaron's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.--- Stretch Goals- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do one-hot encoding. For example, you could try `quantity`, `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.)- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Get and plot your coefficients.- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).--- Data Dictionary

Features

Your goal is to predict the operating condition of a waterpoint for each record in the dataset.
You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the 
waterpoint is not operational

---

Generate a submission

Your code to generate a submission file may look like this:

```python
# estimator is your model or pipeline, which you've fit on X_train
# X_test is your pandas dataframe or numpy array,
# with the same number of rows, in the same order, as test_features.csv,
# and the same number of columns, in the same order, as X_train
y_pred = estimator.predict(X_test)
# Makes a dataframe with two columns, id and status_group,
# and writes to a csv file, without the index
sample_submission = pd.read_csv('sample_submission.csv')
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('your-submission-filename.csv', index=False)
```

If you're working locally, the csv file is saved in the same directory as your notebook. If you're using Google Colab, you can use this code to download your submission csv file.

```python
from google.colab import files
files.download('your-submission-filename.csv')
```

--- ###Code import os, sys in_colab = 'google.colab' in sys.modules # If you're in Colab... if in_colab: # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Install required python packages !pip install -r requirements.txt # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') # Read the Tanzania Waterpumps data # train_features.csv : the training set features # train_labels.csv : the training set labels # test_features.csv : the test set features # sample_submission.csv : a sample submission file in the correct format import pandas as pd train_features = pd.read_csv('../data/waterpumps/train_features.csv') train_labels = pd.read_csv('../data/waterpumps/train_labels.csv') test_features = pd.read_csv('../data/waterpumps/test_features.csv') sample_submission = pd.read_csv('../data/waterpumps/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) #If I guess functional on the training data, I'll be right train_labels['status_group'].value_counts(normalize=True) train_labels = pd.get_dummies(train_labels, 'status_group') train = pd.merge(train_labels, train_features) train.head() from sklearn.model_selection import train_test_split # Of the training data I have to work with, I need to split it into # train and validation sets to test my data. my_train, my_val = train_test_split(train, random_state=333) my_train.head() my_train.shape, my_val.shape target = 'status_group_functional' y_train = my_train[target] y_train.value_counts(normalize=True) y_train.mode()[0] from sklearn.metrics import accuracy_score majority_class = y_train.mode()[0] y_pred = [majority_class] * len(y_train) accuracy_score(y_train,y_pred) y_val = my_val[target] y_pred = [majority_class] * len(y_val) #So my random sample was pretty darn close. 54.28 compared to 54.377 # Judging by some of the other accuracies I'm seeing in Kaggle, I can just submit # this and be done with it haha accuracy_score(y_pred, y_val) from sklearn.linear_model import LogisticRegression from sklearn.impute import SimpleImputer #Note! 
Make sure to specify a solver explicitly log_reg = LogisticRegression(solver='lbfgs') features = ['population','construction_year','region_code'] X_train = my_train[features] X_val = my_val[features] #Fixing missing values imputer = SimpleImputer() X_train_imputed = imputer.fit_transform(X_train) X_val_imputed = imputer.transform(X_val) #Fitting the model now log_reg.fit(X_train_imputed, y_train) #Predicting print(log_reg.predict(X_val_imputed)) #Checking accuracy. No need for accuracy_score here, because .score() is built in # Wow, barely a dent! log_reg.score(X_val_imputed, y_val) ###Output _____no_output_____ ###Markdown Submission Code ###Code sample_submission.head() test_features.head() # submission = pd.read_csv('sample_submission.csv') # submission = submission.copy() # submission['status_group'] = y_pred # submission.to_csv('your-submission-filename.csv', index=False) y_pred_for_test = ['functional'] * len(test_features) test_submission = test_features.copy() y_pred_for_test test_submission['status_group'] = y_pred_for_test test_submission.head() test_submission = test_submission.filter(['id','status_group']) test_submission test_submission.to_csv('baseline_submission.csv',index=False) ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Get and plot your coefficients.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage.
Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features.
(If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this problem, you may want to use the parameter `logistic=True`. You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from the previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` Pipelines[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors.
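The quoted Pipeline advantages can be sketched with a minimal, self-contained example. The toy arrays and step names below are made up for illustration; they are not the waterpumps data:

```python
# A small sketch of chaining impute -> scale -> classify in one Pipeline.
# X_toy / y_toy are made-up data, not the waterpumps dataset.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X_toy = np.array([[1.0, 2.0], [np.nan, 3.0], [2.0, np.nan],
                  [3.0, 1.0], [4.0, 5.0], [5.0, 4.0]])
y_toy = np.array([0, 0, 0, 1, 1, 1])

pipe = Pipeline([
    ('impute', SimpleImputer()),                    # fill NaNs with column means
    ('scale', StandardScaler()),                    # standardize features
    ('model', LogisticRegression(solver='lbfgs')),  # classifier at the end
])

# One fit call runs the whole sequence; in cross-validation, the imputer and
# scaler are re-fit on each training fold only, which avoids leakage.
pipe.fit(X_toy, y_toy)
print(pipe.score(X_toy, y_toy))
```

Passing `pipe` to `cross_val_score` or `GridSearchCV` then tunes and validates the preprocessing together with the model, which is the "joint parameter selection" and "safety" point above.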
Reading- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code # If you're in Colab... import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) ''' Do train/validate/test split with the Tanzania Waterpumps data. Do one-hot encoding. 
(Remember it may not work with high cardinality categoricals.) Use scikit-learn for logistic regression. Get your validation accuracy score. Get and plot your coefficients. Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue Submit Predictions button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.) Commit your notebook to your fork of the GitHub repo. ''' from sklearn.model_selection import train_test_split X_train, X_val, y_train, y_val = train_test_split(train_features, train_labels, test_size=0.2, random_state=1) X_test = test_features X_train.head() #find cardinality X_train.describe(exclude = 'number').T.sort_values(by = 'unique') categorical_features = ['source_class', 'source_type', 'extraction_type_class' ] numeric_features = X_train.select_dtypes('number').columns.drop('id').tolist() features = categorical_features + numeric_features X_train[features].head(5) !pip install category_encoders import category_encoders as ce #one hot encode all string columns encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train[features]) X_val_encoded = encoder.transform(X_val[features]) X_val_encoded.head() y_val.head() from sklearn.linear_model import LogisticRegression from sklearn.metrics import accuracy_score #instantiate and fit the model log_reg = LogisticRegression(solver='lbfgs') log_reg.fit(X_train_encoded, y_train['status_group']) #make predictions y_pred = log_reg.predict(X_val_encoded) accuracy_score(y_val['status_group'], y_pred) # .score() needs the encoded features and the label column log_reg.score(X_val_encoded, y_val['status_group']) ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of
Logistic Regression.- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.--- Stretch Goals- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do one-hot encoding. For example, you could try `quantity`, `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.)- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Get and plot your coefficients.- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).--- Data Dictionary FeaturesYour goal is to predict the operating condition of a waterpoint for each record in the dataset.
You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the 
waterpoint is not operational--- Generate a submissionYour code to generate a submission file may look like this:```python estimator is your model or pipeline, which you've fit on X_train X_test is your pandas dataframe or numpy array, with the same number of rows, in the same order, as test_features.csv, and the same number of columns, in the same order, as X_trainy_pred = estimator.predict(X_test) Makes a dataframe with two columns, id and status_group, and writes to a csv file, without the indexsample_submission = pd.read_csv('sample_submission.csv')submission = sample_submission.copy()submission['status_group'] = y_predsubmission.to_csv('your-submission-filename.csv', index=False)```If you're working locally, the csv file is saved in the same directory as your notebook.If you're using Google Colab, you can use this code to download your submission csv file.```pythonfrom google.colab import filesfiles.download('your-submission-filename.csv')```--- Import data ###Code import os, sys in_colab = 'google.colab' in sys.modules # If you're in Colab... if in_colab: # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Install required python packages !pip install -r requirements.txt # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') # Read the Tanzania Waterpumps data # train_features.csv : the training set features # train_labels.csv : the training set labels # test_features.csv : the test set features # sample_submission.csv : a sample submission file in the correct format import pandas as pd train_features = pd.read_csv('../data/waterpumps/train_features.csv') train_labels = pd.read_csv('../data/waterpumps/train_labels.csv') test_features = pd.read_csv('../data/waterpumps/test_features.csv') sample_submission = pd.read_csv('../data/waterpumps/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) ###Output _____no_output_____ ###Markdown Splitting into train and validate ###Code train_features.shape train_labels.shape train_features.head() train_labels.head() all_values = pd.merge(train_features, train_labels, how='inner', on='id') all_values.head() from sklearn.model_selection import train_test_split my_train, my_val = train_test_split(all_values, random_state=69) my_train.head() target = 'status_group' y_train = my_train['status_group'] y_val = my_val['status_group'] my_train = my_train.drop('status_group', axis=1) my_val = my_val.drop('status_group', axis=1) ###Output _____no_output_____ ###Markdown Baseline (Mode) ###Code y_train.value_counts(normalize=True) y_train.mode()[0] majority_class = y_train.mode()[0] y_pred = [majority_class] * len(y_train) from sklearn.metrics import accuracy_score accuracy_score(y_train, y_pred) y_pred = [majority_class] * len(y_val) accuracy_score(y_val, y_pred) ###Output _____no_output_____ ###Markdown Logistic regression first attempt ###Code import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.linear_model import LogisticRegressionCV from sklearn.preprocessing import StandardScaler
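As an aside (a sketch added here, not part of the original notebook): scikit-learn's `DummyClassifier` reproduces the hand-computed majority-class baseline above in a single estimator, shown on made-up labels:

```python
# Sketch: DummyClassifier with strategy='most_frequent' predicts the majority
# class, matching the manual mode()-based baseline above. Toy data only.
import numpy as np
from sklearn.dummy import DummyClassifier

X_toy = np.zeros((5, 1))  # features are ignored by this strategy
y_toy = np.array(['functional', 'functional', 'functional',
                  'non functional', 'functional needs repair'])

baseline = DummyClassifier(strategy='most_frequent')
baseline.fit(X_toy, y_toy)
print(baseline.score(X_toy, y_toy))  # majority-class accuracy: 3/5 = 0.6
```

Because it is an ordinary estimator, this baseline can also be dropped into a pipeline or cross-validation for an apples-to-apples comparison with the real model.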
my_train.describe(include='object') high_cardinal = ['funder', 'installer', 'wpt_name', 'subvillage', 'lga', 'ward', 'scheme_name'] X_train = my_train.drop(high_cardinal, axis=1) X_val = my_val.drop(high_cardinal, axis=1) encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train) X_val_encoded = encoder.transform(X_val) imputer = SimpleImputer() X_train_imputed = imputer.fit_transform(X_train_encoded) X_val_imputed = imputer.transform(X_val_encoded) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_imputed) X_val_scaled = scaler.transform(X_val_imputed) model = LogisticRegressionCV(cv=5, n_jobs=-1, random_state=69) model.fit(X_train_scaled, y_train) model.score(X_val_scaled, y_val) # estimator is your model or pipeline, which you've fit on X_train # X_test is your pandas dataframe or numpy array, # with the same number of rows, in the same order, as test_features.csv, # and the same number of columns, in the same order, as X_train X_test = test_features.drop(high_cardinal, axis=1) X_test_encoded = encoder.transform(X_test) X_test_imputed = imputer.transform(X_test_encoded) X_test_scaled = scaler.transform(X_test_imputed) y_pred = model.predict(X_test_scaled) # Makes a dataframe with two columns, id and status_group, # and writes to a csv file, without the index sample_submission = pd.read_csv('../data/waterpumps/sample_submission.csv') submission = sample_submission.copy() submission['status_group'] = y_pred submission.to_csv('not_quite_there.csv', index=False) # If you're working locally, the csv file is saved in the same directory as your notebook. # If you're using Google Colab, you can use this code to download your submission csv file. 
from google.colab import files files.download('not_quite_there.csv') y_pred[:5] ###Output _____no_output_____ ###Markdown Logistic regression attempt 2 (more thought on feature selection) ###Code my_train.describe(include='all') ###Output _____no_output_____ ###Markdown I am looking for columns to drop. Here is the list:- date_recorded- funder- installer- wpt_name- subvillage- lga- ward- scheme_name- num_private- recorded_by- extraction_type_group- payment_type- quantity_group- quality_group- source_type- source_class- waterpoint_type_group Many of these are redundant, or just broader categories of other features. I may play around with swapping these in and out, or engineering some features. ###Code my_train['water_quality'].value_counts() my_train['quality_group'].value_counts() drop_list = ['date_recorded', 'funder', 'installer', 'wpt_name', 'subvillage', 'lga', 'ward', 'scheme_name', 'num_private', 'recorded_by', 'extraction_type_group', 'payment_type', 'quantity_group', 'quality_group', 'source_type', 'source_class', 'waterpoint_type_group'] X_train = my_train.drop(drop_list, axis=1) X_val = my_val.drop(drop_list, axis=1) encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train) X_val_encoded = encoder.transform(X_val) imputer = SimpleImputer() X_train_imputed = imputer.fit_transform(X_train_encoded) X_val_imputed = imputer.transform(X_val_encoded) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_imputed) X_val_scaled = scaler.transform(X_val_imputed) # This model is worse than just dropping the high cardinality columns model = LogisticRegressionCV(cv=5, n_jobs=-1, random_state=69) model.fit(X_train_scaled, y_train) model.score(X_val_scaled, y_val) ###Output /usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:469: FutureWarning: Default multi_class will be changed to 'auto' in 0.22.
Specify the multi_class option to silence this warning. "this warning.", FutureWarning) ###Markdown Trying with just a couple of the high-cardinality columns (installer and funder) ###Code columns_list = [] for header in my_train.columns: columns_list.append(header) columns_list.remove('funder') columns_list.remove('installer') X_train = my_train.drop(columns_list, axis=1) X_val = my_val.drop(columns_list, axis=1) encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train) X_val_encoded = encoder.transform(X_val) imputer = SimpleImputer() X_train_imputed = imputer.fit_transform(X_train_encoded) X_val_imputed = imputer.transform(X_val_encoded) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_imputed) X_val_scaled = scaler.transform(X_val_imputed) # The worst model so far, but still not too bad. # Could we assume that 60% of the time we can identify bad wells based on who made it and who funded it? # missing values may have impacted this too much model = LogisticRegressionCV(cv=5, n_jobs=-1, random_state=69) model.fit(X_train_scaled, y_train) model.score(X_val_scaled, y_val) ###Output /usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:469: FutureWarning: Default multi_class will be changed to 'auto' in 0.22. Specify the multi_class option to silence this warning. "this warning.", FutureWarning) ###Markdown Now we will try SelectKBest and see what happens ###Code from sklearn.feature_selection import SelectKBest ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [X] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [X] Do train/validate/test split with the Tanzania Waterpumps data.- [X] Do one-hot encoding.
For example, in addition to `quantity`, you could try `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.)- [X] Use scikit-learn for logistic regression.- [X] Get your validation accuracy score.- [X] Get and plot your coefficients.- [X] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [X] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. To visualize this dataset, you may want to create a new column to represent the target as a number, 0 or 1.
For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this problem, you may want to use the parameter `logistic=True`. You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from the previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` Pipelines[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification.
Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors. Reading- [ ] [Why is logistic regression considered a linear model?](https://www.quora.com/Why-is-logistic-regression-considered-a-linear-model)- [ ] [Training, Validation, and Testing Data Sets](https://end-to-end-machine-learning.teachable.com/blog/146320/training-validation-testing-data-sets)- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code import os, sys in_colab = 'google.colab' in sys.modules # If you're in Colab... if in_colab: # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Install required python packages !pip install -r requirements.txt # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) #combining train features and train labels for plotting df = train_features.merge(train_labels,on='id') #creating a new status to help in plotting of categorical columns #df['status']=df['status_group'].replace({'functional':1,'non functional':2,'functional needs repair':3}) import numpy as np import seaborn as sns import matplotlib.pyplot as plt from sklearn.linear_model import LogisticRegression,LogisticRegressionCV from sklearn.preprocessing import StandardScaler,MinMaxScaler from sklearn.feature_selection import SelectKBest import category_encoders as ce from sklearn.model_selection import train_test_split,StratifiedKFold from sklearn.metrics import accuracy_score from lightgbm import LGBMClassifier for col in df.select_dtypes(exclude='number'): if(df[col].nunique()<20): g = sns.FacetGrid(col='status_group',data=df) g.map(sns.countplot,col); plt.show() for col in df.select_dtypes(include='number'): g = sns.FacetGrid(col='status_group',data=df) g.map(plt.hist,col); plt.show() # for col in cat_features: # print(X_train[col].value_counts(dropna=False)) #quantity_group and quantity are same #remove scheme_management as it is similar to management plus it has null values #Fillna in public_meeeting and permit with majority values which is True #removing waterpoint_type first, later check if it can be replaced with waterpoint_type_group #keeping quality_group and later maybe check for water_quality #payment_type and 
payment are same #keeping extraction_type_class and removing extraction_type_group and extraction_type cat_features = ['source_class', 'management_group', 'quantity', 'waterpoint_type', 'quality_group', 'source_type','extraction_type', 'payment', 'basin', 'source', 'management', 'region','district_code'] num_features =['amount_tsh', 'gps_height', 'longitude', 'latitude', 'num_private', 'population','construction_year','public_meeting', 'permit',] features = cat_features+num_features target = 'status_group' #Plotting the functional status of the train labels train_labels.status_group.value_counts(normalize=True).plot(kind='barh'); X_train,X_test,Y_train,Y_test = train_test_split(train_features,train_labels,test_size=0.20,stratify=train_labels['status_group']) X_train['public_meeting'] = X_train['public_meeting'].fillna(True).astype(int) X_test['public_meeting'] = X_test['public_meeting'].fillna(True).astype(int) X_train['permit'] = X_train['permit'].fillna(True).astype(int) X_test['permit'] = X_test['permit'].fillna(True).astype(int) X_train['district_code'] = X_train['district_code'].astype('str') X_test['district_code'] = X_test['district_code'].astype('str') X_train.describe() #Code for Feature Selection, scaling and model prediction #Encoding Cardinal Columns encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train[features]) X_test_encoded = encoder.transform(X_test[features]) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_test_scaled = scaler.transform(X_test_encoded) # X_train_scaled = (X_train_encoded) # X_test_scaled = (X_test_encoded) model = LogisticRegression(multi_class='multinomial',max_iter=1500,solver='lbfgs',verbose=True) model.fit(X_train_scaled,Y_train[target]) y_pred = model.predict(X_test_scaled) accuracy_score(Y_test[target],y_pred) #preparing target for lgbm model train_target = (Y_train[target].replace({'functional':0,'non functional':1,'functional needs 
repair':2})).reset_index(drop=True) test_target = (Y_test[target].replace({'functional':0,'non functional':1,'functional needs repair':2})).reset_index(drop=True) skf = StratifiedKFold(5,shuffle=True,random_state=3) for trainindex,testindex in skf.split(X_train_scaled,train_target): lgbm = LGBMClassifier(objective='multiclass',n_estimators=1500,learning_rate = 0.05,is_provide_training_metric=True) X_train,y_train = X_train_scaled[trainindex],train_target.iloc[trainindex] X_test,y_test = X_train_scaled[testindex],train_target.iloc[testindex] #print(X_train.shape,Y_train.shape,X_test.shape,Y_test.shape) lgbm.fit(X_train,y_train,eval_metric='softmax',verbose=500,eval_set=(X_test,y_test)) predict = lgbm.predict(X_test) #print(predict[:5]) print(accuracy_score(y_test,predict)) #Getting accuracy score for X_test y_pred = lgbm.predict(X_test_scaled) accuracy_score(test_target,y_pred) #plotting sample submission before prediction sample_submission.status_group.value_counts(normalize=True).plot(kind='barh'); #making the same changes to test features as done to train features test_features['public_meeting'] = test_features['public_meeting'].fillna(True).astype(int) test_features['permit'] = test_features['permit'].fillna(True).astype(int) test_features['district_code'] = test_features['district_code'].astype('str') LGBMMODEL = True #preparing test features for fitting to model test_encoded = encoder.transform(test_features[features]) test_scaled = scaler.transform(test_encoded) if(LGBMMODEL): test_predictions = lgbm.predict(test_scaled) sample_submission['status_group']=test_predictions sample_submission['status_group']=sample_submission['status_group'].replace({0:'functional',1:'non functional',2:'functional needs repair'}) else: test_predictions = model.predict(test_scaled) sample_submission['status_group']=test_predictions #Code for selecting the best features # for k in range(1,(X_train_scaled.shape[1])): # selector = SelectKBest(k=k) # X_train_transformed = 
selector.fit_transform(X_train_scaled,Y_train[target]) # X_test_formed = selector.transform(X_test_scaled) # model = LogisticRegression(multi_class='multinomial',solver='lbfgs',max_iter=1000) # model.fit(X_train_transformed,Y_train[target]) # ypred_train = model.predict(X_train_transformed) # y_pred = model.predict(X_test_formed) # print(f'{k}: Train Accuracy : {accuracy_score(Y_train[target],ypred_train)} Test Accuracy : {accuracy_score(Y_test[target],y_pred)}') #plotting the submission after prediction sample_submission.status_group.value_counts(normalize=True).plot(kind='barh'); #preparing the csv file for submission sample_submission.to_csv('kaggle-submission-5.csv', index=False) #Making the dataframe and Plotting the Model coefficients model_coefficients = pd.DataFrame(dict(zip(train_labels['status_group'].unique().tolist(),model.coef_)),index=np.array(test_encoded.columns)) #Model Coefficients plotting for status functional fig = plt.figure(figsize=(12,20)) model_coefficients['functional'].plot(kind='barh'); plt.title("Coefficients for status Functional"); #Model Coefficients plotting for status non functional fig = plt.figure(figsize=(12,20)) model_coefficients['non functional'].plot(kind='barh'); plt.title("Coefficients for Non Functional"); #Model Coefficients plotting for status functional needs repair fig = plt.figure(figsize=(12,20)) model_coefficients['functional needs repair'].plot(kind='barh'); plt.title("Coefficients for Needs Repair Status"); sample_submission.shape ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [X] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. 
Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.--- Stretch Goals- [ ] Add your own stretch goal(s)!- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do one-hot encoding. For example, you could try `quantity`, `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.)- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Get and plot your coefficients.- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).--- Data Dictionary FeaturesYour goal is to predict the operating condition of a waterpoint for each record in the dataset. 
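One checklist item above — "Begin with baselines for classification" — can be sketched as a majority-class baseline: predict the most frequent label for every row and score it. A minimal illustration (the tiny `labels` series here is made up; in the notebook it would be `train_labels['status_group']`):

```python
import pandas as pd

# Hypothetical stand-in for train_labels['status_group']
labels = pd.Series(['functional', 'functional', 'non functional',
                    'functional needs repair', 'functional'])

# Predict the most frequent class for every row and measure accuracy
majority_class = labels.mode()[0]
baseline_accuracy = (labels == majority_class).mean()
print(majority_class, baseline_accuracy)  # functional 0.6
```

Any model worth keeping should beat this number on the validation set.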
You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the 
waterpoint is not operational--- Generate a submissionYour code to generate a submission file may look like this:

```python
# estimator is your model or pipeline, which you've fit on X_train

# X_test is your pandas dataframe or numpy array,
# with the same number of rows, in the same order, as test_features.csv,
# and the same number of columns, in the same order, as X_train
y_pred = estimator.predict(X_test)

# Makes a dataframe with two columns, id and status_group,
# and writes to a csv file, without the index
sample_submission = pd.read_csv('sample_submission.csv')
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('your-submission-filename.csv', index=False)
```

If you're working locally, the csv file is saved in the same directory as your notebook.If you're using Google Colab, you can use this code to download your submission csv file.

```python
from google.colab import files
files.download('your-submission-filename.csv')
```

--- ###Code import os, sys in_colab = 'google.colab' in sys.modules # If you're in Colab... if in_colab: # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Install required python packages !pip install -r requirements.txt # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings('ignore') # Read the Tanzania Waterpumps data # train_features.csv : the training set features # train_labels.csv : the training set labels # test_features.csv : the test set features # sample_submission.csv : a sample submission file in the correct format import pandas as pd train_features = pd.read_csv('../data/waterpumps/train_features.csv') train_labels = pd.read_csv('../data/waterpumps/train_labels.csv') test_features = pd.read_csv('../data/waterpumps/test_features.csv') sample_submission = pd.read_csv('../data/waterpumps/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) train_labels["status_group"] = train_labels["status_group"].replace({"non functional": -1, "functional needs repair": 0, "functional": 1}) train_labels.head(10) import category_encoders as ce from sklearn.linear_model import LinearRegression, LogisticRegressionCV from sklearn.impute import SimpleImputer features = ["lga", "population", "management", "permit", "gps_height", "region", "basin", "payment", "longitude", "latitude", "waterpoint_type", "extraction_type", "water_quality", "quantity", "source"] features=["waterpoint_type", "extraction_type", "quantity_group", "basin", "quality_group", "source_type", "lga", "management_group"] X_train = train_features[features] X_test = test_features[features] y_train = train_labels["status_group"] encoder = ce.OneHotEncoder(use_cat_names=True) X_train = encoder.fit_transform(X_train) X_test = encoder.transform(X_test) features_new = X_train.columns imputer = SimpleImputer() X_train_imputed = imputer.fit_transform(X_train) X_test_imputed = imputer.transform(X_test) model = LogisticRegressionCV(verbose=0) model.fit(X_train, y_train) print(model.score(X_train, train_labels["status_group"])) y_pred = model.predict(X_test_imputed) submission = sample_submission.copy() 
submission["status_group"] = y_pred submission.to_csv('rosie-lasota-submission.csv', index=False) ###Output _____no_output_____ ###Markdown Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron Gallant's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Get and plot your coefficients.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." 
One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:

```python
train['functional'] = (train['status_group']=='functional').astype(int)
```

You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this problem, you may want to use the parameter `logistic=True`. You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from the previous assignment demonstrates how to replace less frequent values with 'OTHER':

```python
# Reduce cardinality for NEIGHBORHOOD feature ...

# Get a list of the top 10 neighborhoods
top10 = train['NEIGHBORHOOD'].value_counts()[:10].index

# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
```

Pipelines[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors. Reading- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites). ###Code # If you're in Colab... 
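The Pipeline benefits quoted above can be shown with a short sketch. This is illustrative only — the imputer/scaler/classifier chain is one plausible sequence for this dataset, not the assignment's required solution, and the toy data is made up:

```python
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Chain imputation, scaling, and the classifier into one estimator,
# so a single fit/predict call runs the whole sequence and
# cross-validation cannot leak statistics from held-out folds.
pipe = Pipeline([
    ('impute', SimpleImputer(strategy='median')),
    ('scale', StandardScaler()),
    ('model', LogisticRegression(max_iter=1000)),
])

# Made-up numeric features with a missing value
X = pd.DataFrame({'gps_height': [1390.0, None, 686.0, 263.0],
                  'population': [109, 280, 250, 58]})
y = pd.Series([1, 0, 1, 0])

pipe.fit(X, y)
print(pipe.predict(X))
```

Because the pipeline is a single estimator, it can be dropped into `cross_val_score` or `GridSearchCV` as-is.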
import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python packages: # category_encoders, version >= 2.0 # pandas-profiling, version >= 2.0 # plotly, version >= 4.0 !pip install --upgrade category_encoders pandas-profiling plotly # Pull files from Github repo os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master # Change into directory for module os.chdir('module4') # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) ###Output _____no_output_____ ###Markdown feature engineering ###Code train_with_labels = train_features.merge(train_labels) # import pandas_profiling # train_features.profile_report() extraction = ['extraction_type', 'extraction_type_group', 'extraction_type_class'] management = ['management', 'management_group'] payment = ['payment', 'payment_type'] quality = ['quantity', 'quantity_group'] region = ['region', 'region_code'] source = ['source', 'source_type', 'source_class'] waterpoint = ['waterpoint_type', 'waterpoint_type_group'] for cat in extraction: print(pd.crosstab(train_with_labels[cat], train_with_labels['status_group'], normalize='index').sort_values('functional', ascending=False)) # let's go with first # extraction_type for cat in management: 
print(pd.crosstab(train_with_labels[cat], train_with_labels['status_group'], normalize='index').sort_values('functional', ascending=False)) # first again # management for cat in payment: print(pd.crosstab(train_with_labels[cat], train_with_labels['status_group'], normalize='index').sort_values('functional', ascending=False)) # i'll just have to try with each one and see which one is better all(train_with_labels['payment'] == train_with_labels['payment_type']) for cat in quality: print(pd.crosstab(train_with_labels[cat], train_with_labels['status_group'], normalize='index').sort_values('functional', ascending=False)) # repeat just use quality all(train_with_labels['quantity'] == train_with_labels['quantity_group']) for cat in region: print(pd.crosstab(train_with_labels[cat], train_with_labels['status_group'], normalize='index').sort_values('functional', ascending=False)) # region code, but maybe map so it's linear? for cat in source: print(pd.crosstab(train_with_labels[cat], train_with_labels['status_group'], normalize='index').sort_values('functional', ascending=False)) # go with source for cat in waterpoint: print(pd.crosstab(train_with_labels[cat], train_with_labels['status_group'], normalize='index').sort_values('functional', ascending=False)) # waterpoint_type # columns to drop drop = [ # kept # # low cardinality # 'basin', # keep because low cardinality # 'quality_group', # keep because low cardinality # 'quantity', # keep because low cardinality # 'scheme_management', # keep because low cardinality # 'water_quality', # keep because low cardinality # # think it was better than alternatives # 'extraction_type', # don't drop because i think this is the most useful of the extraction_type ones # 'management', # keep because i thought it was more useful than management_group # 'payment', # keep because i don't think there's much of a difference between payment, and payment_type # 'region_code', # keep because i preferred over region # 'source', # keep because preferred to source_class, source_type # 'waterpoint_type', # keep because preferred over waterpoint_type_group # # kept because number # 'district_code', # keep because numeric # 'gps_height', # keep because number and i think there's a relationship # dropped # dates 'construction_year', # drop because has missing and is date 'date_recorded', # drop because date # had alternatives 'extraction_type_class', # didn't think was as useful as extraction_type 'extraction_type_group', # same with class 'management_group', # drop because i preferred management 'payment_type', # drop because payment is kinda the same thing 'region', # drop because i preferred region code 'waterpoint_type_group', # drop because prefer waterpoint_type 'source_class', # drop because source is better 'source_type', # drop because source is better # useless 'id', # drop because doesn't mean anything 'latitude', # doesn't mean anything 'longitude', # same as latitude 'recorded_by', # drop because it's only 1 value 'quantity_group', # drop because just a repeat of quantity 'permit', # drop because missing values, 5%, and i don't think it would add anything, didn't notice a relationship 'public_meeting', # drop because missing values, 5%, and too many trues 'population', # drop because too many 0 values, which i assume are missing? 'amount_tsh', # drop because too many 0, so data is too skewed 'num_private', # drop because too many 0 # too many missing 'scheme_name', # drop because too many missing 47% # high cardinality 'wpt_name', # drop because high cardinality 'subvillage', # drop because too high cardinality 'ward', # drop because too high cardinality 'funder', # drop because high cardinality and missing values, 6% # 'installer', # same as funder, too many values and missing values, 6% 'lga', # drop because too many values, maybe could encode somehow ] features = train_features.drop(drop, axis=1) features.dtypes.sort_values() columns_to_use = features.columns.to_list() columns_to_use ###Output _____no_output_____ ###Markdown Do train/validate/test split with the Tanzania Waterpumps data. ###Code from sklearn.model_selection import train_test_split train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') X_train = train_features y_train = train_labels['status_group'] X_train, X_val, y_train, y_val = train_test_split( X_train, y_train, train_size = 0.80, stratify = y_train, random_state = 69 ) ###Output _____no_output_____ ###Markdown Do one-hot encoding. (Remember it may not work with high cardinality categoricals.) ###Code cats = X_train[columns_to_use].select_dtypes('object').columns.to_list() cats cats = X_train[cats].columns[X_train[cats].nunique() < 25].tolist() X_train[cats].nunique() import category_encoders as ce from sklearn.preprocessing import StandardScaler, MinMaxScaler, LabelEncoder import bisect # installer label_encoder = LabelEncoder() # train on train installer X_train['installer'] = label_encoder.fit_transform(X_train['installer'].fillna(X_train['installer'].mode()[0])) # change classes to have other in there? 
classes = label_encoder.classes_.tolist() bisect.insort_left(classes, 'other') label_encoder.classes_ = classes # map validation data to change any unseen installers to 'other' X_val['installer'] = X_val['installer'].map(lambda s: 'other' if s not in label_encoder.classes_ else s) # tranform X_val['installer'] = label_encoder.transform(X_val['installer']) numeric_features = X_train[columns_to_use].select_dtypes(['number', 'bool']).columns.tolist() features = cats + numeric_features + ['installer'] X_train_subset = X_train[features] X_val_subset = X_val[features] encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train_subset) X_val_encoded = encoder.transform(X_val_subset) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_val_scaled = scaler.transform(X_val_encoded) features ###Output _____no_output_____ ###Markdown Use scikit-learn for logistic regression. ###Code from sklearn.linear_model import LogisticRegressionCV from sklearn.ensemble import RandomForestClassifier from sklearn.feature_selection import f_regression, SelectKBest import time y_int = y_train.map({'functional': 3, 'functional needs repair': 2, 'non functional': 1}) start = time.time() model = RandomForestClassifier(n_jobs = -1, n_estimators = 1000) model.fit(X_train_scaled, y_train) print('score ', model.score(X_val_scaled, y_val)) print('time ', (time.time() - start) / 60, ' mins') ###Output score 0.7697811447811448 time 0.631817889213562 mins
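The `bisect.insort_left` workaround above handles installers that appear only in the validation split. An alternative sketch of the same idea — map categories unseen at training time to a sentinel before encoding (the column values and the `'other'` sentinel here are illustrative):

```python
import pandas as pd

def collapse_unseen(train_col: pd.Series, other_col: pd.Series, fill='other') -> pd.Series:
    """Replace values in other_col that never occur in train_col with a sentinel,
    so an encoder fit on the training split can transform the other split."""
    seen = set(train_col.dropna().unique())
    return other_col.where(other_col.isin(seen), fill)

train_installer = pd.Series(['DWE', 'Gov', 'DWE', 'RWE'])
val_installer = pd.Series(['DWE', 'UNICEF', 'Gov'])
print(collapse_unseen(train_installer, val_installer).tolist())
# ['DWE', 'other', 'Gov']
```

This avoids mutating the encoder's `classes_` after fitting, and the same helper works for the test set.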
python/examples/notebooks/tsp_simple_cuts_generic.ipynb
###Markdown Travelling Salesman Problem with subtour eliminationThis example shows how to solve a TSP by eliminating subtours using:1. amplpy (defining the subtour elimination constraint in AMPL and instantiating it appropriately)2. ampls (adding cuts directly from the solver callback) Options ###Code SOLVER = "xpress" SOLVER_OPTIONS = ['outlev=1'] USE_CALLBAKCS = True PLOTSUBTOURS = True TSP_FILE = "../tsp/a280.tsp" import sys sys.path.append('D:/Development/ampl/solvers-private/build/vs64/bin') ###Output _____no_output_____ ###Markdown Imports ###Code # Import utilities from amplpy import AMPL, DataFrame # pip install amplpy if SOLVER == "gurobi": import amplpy_gurobi as ampls # pip install ampls-gurobi elif SOLVER == "cplex": import amplpy_cplex as ampls # pip install ampls- elif SOLVER == "xpress": import amplpy_xpress as ampls # pip install ampls-gurobi import tsplib95 as tsp # pip install tsplib95 import matplotlib.pyplot as plt # pip install matplotlib import matplotlib.colors as colors from time import time plt.rcParams['figure.dpi'] = 200 ###Output _____no_output_____ ###Markdown Register jupyter magics for AMPL ###Code from amplpy import register_magics register_magics('_ampl_cells') # Store %%ampl cells in the list _ampl_cells ###Output _____no_output_____ ###Markdown Define TSP model in AMPL ###Code %%ampl set NODES ordered; param hpos {NODES}; param vpos {NODES}; set PAIRS := {i in NODES, j in NODES: ord(i) < ord(j)}; param distance {(i,j) in PAIRS} := sqrt((hpos[j]-hpos[i])**2 + (vpos[j]-vpos[i])**2); var X {PAIRS} binary; minimize Tour_Length: sum {(i,j) in PAIRS} distance[i,j] * X[i,j]; subject to Visit_All {i in NODES}: sum {(i, j) in PAIRS} X[i,j] + sum {(j, i) in PAIRS} X[j,i] = 2; ###Output _____no_output_____ ###Markdown Function to load TSP data files and return a dictionary of (nodeid : coordinate) ###Code def getDictFromTspFile(tspFile): p = tsp.load(tspFile) if not p.is_depictable: print("Problem is not depictable!") # Amendments as we need 
the nodes lexicographically ordered nnodes = len(list(p.get_nodes())) i = 0 while nnodes>1: nnodes = nnodes/10 i+=1 formatString = f"{{:0{i}d}}" nodes = {formatString.format(value) : p.node_coords[index+1] for index, value in enumerate(p.get_nodes())} return nodes ###Output _____no_output_____ ###Markdown Create AMPL object with amplpy and load model and data ###Code # Get the model from the cell above tsp_model = _ampl_cells[0] # Load model in AMPL ampl = AMPL() ampl.eval(tsp_model) ampl.option["solver"] = SOLVER ampl.option[SOLVER + "_options"] = ' '.join(SOLVER_OPTIONS) # Set problem data from tsp file nodes = getDictFromTspFile(TSP_FILE) # Pass them to AMPL using a dataframe df = DataFrame(index=[('NODES')], columns=['hpos', 'vpos']) df.setValues(nodes) ampl.setData(df, "NODES") # Set some globals that never change during the execution of the problem NODES = set(nodes.keys()) CPOINTS = {node : complex(coordinate[0], coordinate[1]) for (node, coordinate) in nodes.items()} ###Output _____no_output_____ ###Markdown Define some helper functions to plot the tours ###Code def plotTours(tours: list, points_coordinate: dict): # Plot all the tours in the list each with a different color colors = ['b', 'g', 'c', 'm', 'y', 'k'] for i, tour in enumerate(tours): tourCoordinates = [points_coordinate[point.strip("'")] for point in tour] color = colors[i % len(colors)] plot_all(tourCoordinates, color = color) plt.show() def plot_all(tour, alpha=1, color=None): # Plot the tour as lines between circles plotline(list(tour) + [tour[0]], alpha=alpha, color=color) plotline([tour[0]], 's', alpha=alpha, color=color) def plotline(points, style='o-', alpha=1, color=None): "Plot a list of points (complex numbers) in the 2-D plane." X, Y = XY(points) if color: plt.plot(X, Y, style, alpha=alpha, color=color) else: plt.plot(X, Y, style, alpha=alpha) def XY(points): "Given a list of points, return two lists: X coordinates, and Y coordinates." 
return [p.real for p in points], [p.imag for p in points] ###Output _____no_output_____ ###Markdown Define some helper functions to help with the graphs (e.g. get the subtour given a set of arcs) ###Code # Graphs helper routines def trasverse(node, arcs: set, allnodes: set, subtour = None) -> list: # Traverses all the arcs in the set arcs, starting from node # and returns the tour if not subtour: subtour = list() # Find arcs involving the current node myarcs = [(i,j) for (i,j) in arcs if node == i or node == j] if len(myarcs) == 0: return # Append the current node to the current subtour subtour.append(node) # Use the first arc found myarc = myarcs[0] # Find destination (or origin) node destination = next(i for i in myarc if i != node) # Remove from arcs and nodes to visit arcs.remove(myarc) if node in allnodes: allnodes.remove(node) trasverse(destination, arcs, allnodes, subtour) return subtour def findSubTours(arcs: set, allnodes: set): """Find all the subtours defined by a set of arcs and return them as a list of lists """ subtours = list() allnodes = allnodes.copy() while len(allnodes) > 0: l = trasverse(next(iter(allnodes)), arcs, allnodes) subtours.append(l) return subtours ###Output _____no_output_____ ###Markdown AMPLPY implementation of sub-tours elimination ###Code def amplSubTourElimination(ampl: AMPL): # Add the constraint and the needed parameters subToursAMPL = """param nSubtours >= 0 integer, default 0; set SUB {1..nSubtours} within NODES; subject to Subtour_Elimination {k in 1..nSubtours}: sum {i in SUB[k], j in NODES diff SUB[k]} if (i, j) in PAIRS then X[i, j] else X[j, i] >= 2;""" ampl.eval(subToursAMPL) nSubtoursParam = ampl.getParameter("nSubtours") SubtoursSet = ampl.getSet("SUB") allsubtours = list() while True: # Repeat until the solution contains only one tour ampl.solve() # Get solution ARCS = ampl.getData("{(i,j) in PAIRS : X[i,j] > 0} X[i,j];") ARCS = set([(i, j) for (i, j, k) in ARCS.toList()]) subtours = findSubTours(ARCS, NODES) # If we 
have only one tour, the solution is valid if len(subtours) <= 1: break print(f"Found {len(subtours)} subtours, plotting them and adding cuts") if PLOTSUBTOURS: plotTours(subtours, CPOINTS) # Else add the current tours to the list of subtours allsubtours.extend(subtours) # And add those to the constraints by assigning the values to # the parameter and the set nSubtoursParam.set(len(allsubtours)) for (i, tour) in enumerate(allsubtours): SubtoursSet[i+1].setValues(tour) ###Output _____no_output_____ ###Markdown ampls callbacks implementation of subtours elimination ###Code # Callback class that actually adds the cuts if subtours are found in a solution class MyCallback(ampls.GenericCallback): def __init__(self): # Constructor, simply sets the iteration number to 0 super().__init__() self.iteration = 0 def run(self): try: # For each solution if self.getAMPLWhere() == ampls.Where.MIPSOL: self.iteration += 1 print(f"\nIteration {self.iteration}: Finding subtours") sol = self.getSolutionVector() arcs = [xvars[i] for i, value in enumerate(sol) if value > 0] subTours = findSubTours(set(arcs), set(vertices)) if len(subTours) == 1: print("No subtours detected. 
Not adding any cut") return 0 print(f"Adding {len(subTours)} cuts") if PLOTSUBTOURS: plotTours(subTours, CPOINTS) for subTour in subTours: st1 = set(subTour) nst1 = set(vertices) - st1 externalArcs = [(i, j) if i < j else (j, i) for i in st1 for j in nst1] varsExternalArcs = [xinverse[i, j] for (i, j) in externalArcs] coeffs = [1 for i in range(len(varsExternalArcs))] if PLOTSUBTOURS: print("Adding cut for subtour:", st1) self.addLazyIndices(varsExternalArcs, coeffs, ampls.CutDirection.GE, 2) if len(subTours) == 2: return 0 print("Continue solving") return 0 except Exception as e: print('Error:', e) return 1 # Global variables to store entities needed by the callbacks # that never change xvars = None xinverse = None vertices = None def solverSubTourElimination(ampl: AMPL, solver, solver_options): global xvars, xinverse, vertices # Export the model using ampls model = ampl.exportModel(solver, solver_options) model.enableLazyConstraints() # Get the global maps between solver vars and AMPL entities varMap = model.getVarMapFiltered("X") #print("varMap:", varMap) inverse = model.getVarMapInverse() xvars = {index: ampls.var2tuple(var)[1:] for var, index in varMap.items()} xinverse = {ampls.var2tuple(var)[1:]: index for index, var in inverse.items()} vertices = list(sorted(set([x[0] for x in xvars.values()] + [x[1] for x in xvars.values()]))) # Assign the callback callback = MyCallback() model.setCallback(callback) print("Start optimization") # Start the optimization model.optimize() # Import the solution back to AMPL ampl.importSolution(model) ###Output _____no_output_____ ###Markdown Script running the optimization ###Code t0 = time() if not USE_CALLBAKCS: amplSubTourElimination(ampl) else: solverSubTourElimination(ampl, SOLVER, SOLVER_OPTIONS) ###Output _____no_output_____ ###Markdown Get the solution, print it and display it ###Code # Get the solution into ARCS ARCS = ampl.getData("{(i,j) in PAIRS : X[i,j] > 0} X[i,j];") ARCS = set([(i,j) for (i,j,k) in 
ARCS.toList()]) # Display it tours = findSubTours(ARCS, NODES) for st in tours: print(st) plotTours(tours, CPOINTS) ampl.getValue('Tour_Length') time()-t0 ###Output _____no_output_____
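The subtour-detection logic above can be sanity-checked in isolation; the sketch below re-implements the same traversal idea iteratively (the arc set is a made-up example, not solver output), so it runs without AMPL or a solver:

```python
# Minimal, self-contained re-implementation of the subtour-detection idea:
# repeatedly pick an unvisited node and walk its incident arcs until the
# tour closes, collecting one subtour per connected component.
def find_subtours(arcs, nodes):
    arcs = set(arcs)          # work on a copy; arcs are consumed while walking
    remaining = set(nodes)
    subtours = []
    while remaining:
        node = next(iter(remaining))
        tour = []
        while True:
            remaining.discard(node)
            incident = [(i, j) for (i, j) in arcs if node in (i, j)]
            if not incident:
                break
            tour.append(node)
            i, j = incident[0]
            arcs.remove((i, j))
            node = j if node == i else i
        subtours.append(tour)
    return subtours

# Two disjoint triangles -> two subtours, so cuts would be needed.
arcs = {(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6)}
print(find_subtours(arcs, {1, 2, 3, 4, 5, 6}))
```

A solution whose arcs form a single Hamiltonian cycle would come back as one subtour, which is exactly the termination test used in both elimination loops above.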
lipschitz_estimates/random_forest_iris_lipschitz_estimates.ipynb
###Markdown Imports and Paths ###Code from IPython.display import display, HTML from lime.lime_tabular import LimeTabularExplainer from pprint import pprint from scipy.spatial.distance import pdist, squareform from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier, export_graphviz from sklearn.ensemble import RandomForestClassifier from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler, MinMaxScaler from sklearn.model_selection import train_test_split from sklearn.metrics import f1_score, confusion_matrix from sklearn.utils.multiclass import unique_labels from sklearn import metrics from sklearn.metrics import classification_report from sklearn.metrics.pairwise import cosine_similarity from scipy import spatial %matplotlib inline import glob import matplotlib.pyplot as plt import matplotlib.ticker as ticker import numpy as np import pandas as pd import pathlib import sklearn import seaborn as sns import statsmodels import eli5 import lime import shap shap.initjs() ###Output _____no_output_____ ###Markdown 1. Predictive Models Load and preprocess data. Train/test split = 0.80/0.20 ###Code # Set the seed for experimentations and interpretations. 
np.random.seed(111) project_path = pathlib.Path.cwd().parent.parent.parent modelling_result_path = str(project_path) + '/datasets/modelling-results/' plots_path = str(project_path) + '/plots/' # print(project_path) from sklearn.datasets import load_iris iris = load_iris() train, test, labels_train, labels_test = train_test_split(iris.data, iris.target, train_size=0.80) x_testset = test feature_names = iris.feature_names target_names = iris.target_names total_targets = len(target_names) # total number of unique target names unique_targets = np.unique(iris.target) # LIME only takes integer targets_labels = dict(zip(unique_targets, target_names)) print("Feature names", feature_names) print("Target names", target_names) print("Number of uniques label or target names", unique_targets) print("Target labels as unique target (key) with target names (value)", targets_labels) print("Training record", train[0:1]) print("Label for training record", labels_train[0:1]) ###Output Training record [[4.6 3.6 1. 0.2]] Label for training record [0] ###Markdown Train and evaluate models.Train Random Forest model so these can be used as black box models when evaluating explanations methods. Fit Random Forest ###Code rf = RandomForestClassifier(n_estimators=500, class_weight='balanced_subsample') rf.fit(train, labels_train) ###Output _____no_output_____ ###Markdown Predict using random forest model ###Code labels_pred_rf = rf.predict(test) score_rf = metrics.accuracy_score(labels_test, labels_pred_rf) print("\nRandom Forest accuracy score.", score_rf) predict_proba_rf = rf.predict_proba(test[:5]) print("\nRandom Forest predict probabilities\n\n", predict_proba_rf) predict_rf = rf.predict(test[:5]) print("\nRandom Forest predictions", predict_rf) ###Output Random Forest accuracy score. 0.9 Random Forest predict probabilities [[1. 0. 0. ] [1. 0. 0. ] [0. 0.002 0.998] [0. 0.11 0.89 ] [0. 
0.034 0.966]] Random Forest predictions [0 0 2 2 2] ###Markdown Classification report of random forest ###Code report_rf = classification_report(labels_test, labels_pred_rf, target_names=target_names) print("Random Forest classification report.") print(report_rf) ###Output Random Forest classification report. precision recall f1-score support setosa 1.00 1.00 1.00 10 versicolor 0.83 0.71 0.77 7 virginica 0.86 0.92 0.89 13 accuracy 0.90 30 macro avg 0.90 0.88 0.89 30 weighted avg 0.90 0.90 0.90 30 ###Markdown Classification report of random forest displayed as dataframe ###Code report_rf = classification_report(labels_test, labels_pred_rf, target_names=target_names, output_dict=True) report_rf = pd.DataFrame(report_rf).transpose().round(2) report_rf = report_rf.iloc[:total_targets,:-1] display(report_rf) ###Output _____no_output_____ ###Markdown Average F1-score of random forest model ###Code avg_f1_rf = report_rf['f1-score'].mean() print("Random Forest average f1-score", avg_f1_rf) ###Output Random Forest average f1-score 0.8866666666666667 ###Markdown Confusion matrix of random forest model ###Code matrix_rf = confusion_matrix(labels_test, labels_pred_rf) matrix_rf = pd.DataFrame(matrix_rf, columns=target_names).transpose() matrix_rf.columns = target_names display(matrix_rf) ###Output _____no_output_____ ###Markdown Combine confusion matrix and classification report of random forest model ###Code matrix_report_rf = pd.concat([matrix_rf, report_rf], axis=1) display(matrix_report_rf) ###Output _____no_output_____ ###Markdown Saving confusion matrix and classification report of random forest model into CSV. CSV can be used to draw tables in LaTeX easily. 
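As a sanity check before exporting, the per-class F1 scores averaged earlier can be re-derived from the confusion matrix alone. A small sketch on a toy 3-class matrix (chosen to be consistent with the supports printed above, but illustrative only — it is not read from the notebook's state):

```python
import numpy as np

# Toy 3-class confusion matrix: rows = true class, columns = predicted class.
cm = np.array([[10, 0, 0],
               [0, 5, 2],
               [0, 1, 12]])

tp = np.diag(cm).astype(float)
precision = tp / cm.sum(axis=0)   # column sums = predicted counts per class
recall = tp / cm.sum(axis=1)      # row sums = true counts per class (support)
f1 = 2 * precision * recall / (precision + recall)

print(np.round(f1, 2))            # per-class F1
print(round(f1.mean(), 2))        # unweighted (macro-style) average F1
```

Recomputing the F1 column this way and comparing it against the report is a cheap consistency check on the combined `matrix_report_rf` table before it goes to CSV.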
###Code filename = 'iris_matrix_report_rf.csv' matrix_report_rf.to_csv(modelling_result_path + filename, index=True) ###Output _____no_output_____ ###Markdown Extract target names for prediction of random forest model ###Code labels_names_pred_rf = [] for label in labels_pred_rf: labels_names_pred_rf.append(targets_labels[label]) print("Random Forest predicted targets and their names.\n") print(labels_pred_rf) print(labels_names_pred_rf) ###Output Random Forest predicted targets and their names. [0 0 2 2 2 0 0 2 2 1 2 0 1 2 2 0 2 1 0 2 1 2 1 1 2 0 0 2 0 2] ['setosa', 'setosa', 'virginica', 'virginica', 'virginica', 'setosa', 'setosa', 'virginica', 'virginica', 'versicolor', 'virginica', 'setosa', 'versicolor', 'virginica', 'virginica', 'setosa', 'virginica', 'versicolor', 'setosa', 'virginica', 'versicolor', 'virginica', 'versicolor', 'versicolor', 'virginica', 'setosa', 'setosa', 'virginica', 'setosa', 'virginica'] ###Markdown 2. Explanation Models a. Interpreting models using LIME LIME util functions ###Code def lime_explanations(index, x_testset, explainer, model, unique_targets, class_predictions): instance = x_testset[index] exp = explainer.explain_instance(instance, model.predict_proba, labels=unique_targets, top_labels=None, num_features=len(x_testset[index]), num_samples=6000) # Array class_predictions contains predicted class labels exp_vector_predicted_class = exp.as_map()[class_predictions[index]] return (exp_vector_predicted_class, exp.score), exp def explanation_to_dataframe(index, x_testset, explainer, model, unique_targets, class_predictions, dataframe): feature_imp_tuple, exp = lime_explanations(index, x_testset, explainer, model, unique_targets, class_predictions) exp_val = tuple(sorted(feature_imp_tuple[0])) data = dict((x, y) for x, y in exp_val) list_val = list(data.values()) list_val.append(feature_imp_tuple[1]) dataframe.loc[index] = list_val return dataframe, exp """ Define LIME Explainer """ explainer_lime = LimeTabularExplainer(train, mode 
= 'classification', training_labels = labels_train, feature_names=feature_names, verbose=False, class_names=target_names, feature_selection='auto', discretize_continuous=True) from tqdm import tqdm col_names = list(feature_names) col_names.append('lime_score') ###Output _____no_output_____ ###Markdown Interpret random forest model for all test instances using LIME ###Code explanations_lime_rf = pd.DataFrame(columns=col_names) for index in tqdm(range(0,len(test))): explanations_lime_rf, exp = explanation_to_dataframe(index, test, explainer_lime, rf, # random forest model unique_targets, labels_pred_rf, # random forest predictions explanations_lime_rf) print("LIME explanations on random forest.") display(explanations_lime_rf.head()) display(explanations_lime_rf.iloc[:,:-1].head(1)) ###Output LIME explanations on random forest. ###Markdown b. Interpreting models using SHAP SHAP util functions ###Code def shapvalue_to_dataframe(test, labels_pred, shap_values, feature_names): exp_shap_array = [] for test_index in range(0, len(test)): label_pred = labels_pred[test_index] exp_shap_array.append(shap_values[label_pred][test_index]) df_exp_shap = pd.DataFrame(exp_shap_array) df_exp_shap.columns = feature_names return df_exp_shap ###Output _____no_output_____ ###Markdown Interpret random forest model for all test instances using SHAP ###Code shap_values_rf = shap.TreeExplainer(rf).shap_values(test) shap.summary_plot(shap_values_rf, test, feature_names=feature_names) ###Output _____no_output_____ ###Markdown Extracting SHAP values as explanations**_shap_values_** returns 3D array in a form of (num_classes, num_test_instance, num_features) e.g. for iris dataset the 3D array shape would be (3, 30, 4) Extract explanations (SHAP values) of random forest predictions. 
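The extraction that `shapvalue_to_dataframe` performs — keeping, for each test row, the SHAP vector of the class the model actually predicted — can be illustrated on a tiny synthetic stand-in with the same (num_classes, num_test_instances, num_features) layout (the numbers and sizes here are made up for illustration):

```python
import numpy as np

# Synthetic stand-in for shap_values: 3 classes, 4 test rows, 2 features.
rng = np.random.RandomState(0)
shap_values = [rng.randn(4, 2) for _ in range(3)]   # list of per-class arrays
labels_pred = [0, 2, 1, 2]                          # predicted class per test row

# For each test row i, keep the SHAP vector of its predicted class c.
per_prediction = np.array([shap_values[c][i] for i, c in enumerate(labels_pred)])
print(per_prediction.shape)   # one explanation vector per test row
```

This collapses the per-class structure down to a single (num_test_instances, num_features) matrix, which is the shape the Lipschitz computation below expects.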
###Code explanations_shap_rf = shapvalue_to_dataframe(test, labels_pred_rf, shap_values_rf, feature_names) display(explanations_shap_rf.head()) display(explanations_shap_rf.iloc[:,:].head(1)) ###Output _____no_output_____ ###Markdown 3. Local lipschitz estimation as a stability measure Local lipschitz estimation util functions ###Code def norm(Xs, x0, norm=2): # https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html norm = np.linalg.norm(x0 - Xs, norm) # /np.linalg.norm(b[0] - b, 2) return norm def neighborhood_with_euclidean(x_points, anchor_index, radius): # http://mathonline.wikidot.com/open-and-closed-balls-in-euclidean-space x_i = x_points[anchor_index] x_js = x_points.tolist() dist = (x_i - x_js)**2 dist = np.sum(dist, axis=1) dist = np.sqrt(dist) neighborhood_indices = [] for index in range(0, len(dist)): if dist[index] < radius: neighborhood_indices.append(index) return neighborhood_indices def neighborhood_with_KDTree(x_points, anchor_index, radius): # https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.query_ball_point.html tree = spatial.KDTree(x_points) neighborhood_indices = tree.query_ball_point(x_points[anchor_index], radius * np.sqrt(len(x_points[anchor_index]))) return neighborhood_indices ###Output _____no_output_____ ###Markdown Local Lipschitz of explanation methods ###Code def lipschitz_formula(nearby_points, nearby_points_exp, anchorX, anchorX_exp): anchorX_norm2 = np.apply_along_axis(norm, 1, nearby_points, anchorX) anchorX_exp_norm2 = np.apply_along_axis(norm, 1, nearby_points_exp, anchorX_exp) anchorX_avg_norm2 = anchorX_exp_norm2/anchorX_norm2 anchorX_LC_argmax = np.argmax(anchorX_avg_norm2) return anchorX_avg_norm2, anchorX_LC_argmax def lipschitz_estimate(anchorX, x_points, explanations_x_points, anchor_index, neighborhood_indices): # extract anchor point explanations anchorX_exp = explanations_x_points[anchor_index] # extract anchor point neighborhood's explanations nearby_points = 
x_points[neighborhood_indices] nearby_points_exp = explanations_x_points[neighborhood_indices] # find local lipschitz estimate (lc) anchorX_avg_norm2, anchorX_LC_argmax = lipschitz_formula(nearby_points, nearby_points_exp, anchorX, anchorX_exp) return anchorX_avg_norm2, anchorX_LC_argmax def find_lipschitz_estimates(x_points, x_points_lime_exp, x_points_shap_exp, radii): # https://docs.scipy.org/doc/numpy/reference/generated/numpy.apply_along_axis.html # https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.argmax.html # https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.query_ball_point.html instances = [] anchor_x_index = [] lc_coefficient_lime = [] x_deviation_index_lime = [] x_deviation_index_shap = [] lc_coefficient_shap = [] radiuses = [] neighborhood_size = [] for radius in radii: for anchor_index in range(0, len(x_points)): # define neighorbood of around anchor point using radius and KDTree # neighborhood_indices = neighborhood_with_KDTree(x_points, anchor_index, radius) # define neighorbood of around anchor point using radius and Euclidean Distance neighborhood_indices = neighborhood_with_euclidean(x_points, anchor_index, radius) # remove anchor index to remove anchor point and append neighborhood_size neighborhood_indices.remove(anchor_index) neighborhood_size.append(len(neighborhood_indices)) # append radius (it is useful column when apply filtering based on radius) radiuses.append(radius) # extract anchor point and its original index anchorX = x_points[anchor_index] instances.append(anchorX) anchor_x_index.append(anchor_index) if len(neighborhood_indices) != 0: # find local lipschitz estimate (lc) LIME anchorX_avg_norm2, anchorX_LC_argmax = lipschitz_estimate(anchorX, x_points, x_points_lime_exp, anchor_index, neighborhood_indices) lc_coefficient_lime.append(anchorX_avg_norm2[anchorX_LC_argmax]) # find deviation point from anchor point LIME explanations deviation_point_index = neighborhood_indices[anchorX_LC_argmax] 
x_deviation_index_lime.append(deviation_point_index) # find local lipschitz estimate (lc) SHAP anchorX_avg_norm2, anchorX_LC_argmax = lipschitz_estimate(anchorX, x_points, x_points_shap_exp, anchor_index, neighborhood_indices) lc_coefficient_shap.append(anchorX_avg_norm2[anchorX_LC_argmax]) # find deviation point from anchor point LIME explanations deviation_point_index = neighborhood_indices[anchorX_LC_argmax] x_deviation_index_shap.append(deviation_point_index) else: lc_coefficient_lime.append(-1) x_deviation_index_lime.append('NaN') lc_coefficient_shap.append(-1) x_deviation_index_shap.append('NaN') # columns_lipschitz will be reused so to avoid confusion naming convention should remain similar columns_lipschitz = ['instance', 'anchor_x_index', 'lc_coefficient_lime', 'x_deviation_index_lime', 'lc_coefficient_shap', 'x_deviation_index_shap', 'radiuses', 'neighborhood_size'] zippedList = list(zip(instances, anchor_x_index, lc_coefficient_lime, x_deviation_index_lime, lc_coefficient_shap, x_deviation_index_shap, radiuses, neighborhood_size)) return zippedList, columns_lipschitz ###Output _____no_output_____ ###Markdown Set instances, explanations and epsilon choices ###Code X = pd.DataFrame(test) display(X.head().values) x_points = X.copy().values radii = [1.00] # radii = [0.75, 1.00, 1.25] ###Output _____no_output_____ ###Markdown Lipschitz estimations Predictive model: random forest Explanation methods: LIME, SHAP ###Code print("LIME generated explanations") X_lime_exp = explanations_lime_rf.iloc[:,:-1].copy() display(X_lime_exp.head()) print("SHAP generated explanations") X_shap_exp = explanations_shap_rf.iloc[:,:].copy() display(X_shap_exp.head()) x_points_lime_exp = X_lime_exp.copy().values x_points_shap_exp = X_shap_exp.copy().values zippedList, columns_lipschitz = find_lipschitz_estimates(x_points, x_points_lime_exp, x_points_shap_exp, radii) rf_lipschitz = pd.DataFrame(zippedList, columns=columns_lipschitz) display(rf_lipschitz) ###Output 
_____no_output_____ ###Markdown 4. Results a. Selecting anchor point or point of interest to demonstrate resultsHere the selection is made based on max 'lc_coefficient_lime' just to take an example point. Anchor point ###Code highest_deviation_example = rf_lipschitz.loc[rf_lipschitz['lc_coefficient_lime'].idxmax()] display(highest_deviation_example) print("Anchor Point") anchor_point_index = highest_deviation_example["anchor_x_index"] anchor_point = highest_deviation_example['instance'] print(anchor_point) ###Output _____no_output_____ ###Markdown Deviation point with respect to LIME explanation ###Code print("\nDeviation Point with respect to LIME explanation") deviation_point_lime_index = highest_deviation_example["x_deviation_index_lime"] deviation_point_lime = rf_lipschitz['instance'][deviation_point_lime_index] print(deviation_point_lime) ###Output Deviation Point with respect to LIME explanation [5. 3.4 1.6 0.4] ###Markdown Deviation point with respect to SHAP explanation ###Code print("\nDeviation Point with respect to SHAP explanation") deviation_point_shap_index = highest_deviation_example["x_deviation_index_shap"] deviation_point_shap = rf_lipschitz['instance'][deviation_point_shap_index] print(deviation_point_shap) ###Output Deviation Point with respect to SHAP explanation [5.5 3.5 1.3 0.2] ###Markdown Anchor point and deviation point LIME explanation ###Code print("Anchor Point LIME explanation") anchor_point_lime_exp = x_points_lime_exp[anchor_point_index] anchor_point_lime_exp = [ round(elem, 3) for elem in anchor_point_lime_exp ] print(anchor_point_lime_exp) print("\nDeviation Point LIME explanation") deviation_point_lime_exp = x_points_lime_exp[deviation_point_lime_index] deviation_point_lime_exp = [ round(elem, 3) for elem in deviation_point_lime_exp ] print(deviation_point_lime_exp) ###Output Anchor Point LIME explanation [0.029, 0.015, 0.428, 0.433] Deviation Point LIME explanation [0.004, 0.008, -0.039, -0.07] ###Markdown Anchor point and 
deviation point SHAP explanation ###Code print("Anchor Point SHAP explanation") anchor_point_shap_exp = x_points_shap_exp[anchor_point_index] anchor_point_shap_exp = [ round(elem, 3) for elem in anchor_point_shap_exp ] print(anchor_point_shap_exp) print("\nDeviation Point SHAP explanation") deviation_point_shap_exp = x_points_shap_exp[deviation_point_shap_index] deviation_point_shap_exp = [ round(elem, 3) for elem in deviation_point_shap_exp ] print(deviation_point_shap_exp) ###Output Anchor Point SHAP explanation [0.067, 0.003, 0.294, 0.302] Deviation Point SHAP explanation [-0.023, 0.003, 0.312, 0.319] ###Markdown b. Preparing results for box plots Predictive model: random forest Epsilon: 1.00 Explanation methods: LIME, SHAP Evaluation: Lipschitz estimations as stability ###Code epsilon1 = rf_lipschitz.loc[rf_lipschitz['neighborhood_size'] > 0] epsilon1 = epsilon1[epsilon1['radiuses'] == 1.00] display(epsilon1.head()) epsilon1_lc_lime_aggre = np.mean(epsilon1['lc_coefficient_lime']) epsilon1_lc_shap_aggre = np.mean(epsilon1['lc_coefficient_shap']) print("\nLIME, epsilon 1.00, Aggregated L(x) = ", epsilon1_lc_lime_aggre) print("SHAP, epsilon 1.00, Aggregated L(x) = ", epsilon1_lc_shap_aggre) lc_lime_df = epsilon1.loc[:, ['lc_coefficient_lime']] lc_lime_df.rename(columns={'lc_coefficient_lime': 'Lipschitz Estimates'}, inplace=True) lc_lime_df['method'] = 'LIME' lc_lime_df['Dataset'] = 'Iris' lc_shap_df = epsilon1.loc[:, ['lc_coefficient_shap']] lc_shap_df.rename(columns={'lc_coefficient_shap': 'Lipschitz Estimates'}, inplace=True) lc_shap_df['method'] = 'SHAP' lc_shap_df['Dataset'] = 'Iris' ###Output _____no_output_____ ###Markdown 5. 
Visualize Results Highest deviation example and corresponding LIME and SHAP examples ###Code print(feature_names) print('\nAnchor Point in worst deviation case') print(anchor_point) print(anchor_point_lime_exp) print(anchor_point_shap_exp) print('\nDeviation Point in worst deviation case') print(deviation_point) print(deviation_point_lime_exp) print(deviation_point_shap_exp) ###Output ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'] Anchor Point in worst deviation case [5.1 3.4 1.5 0.2] [0.029, 0.015, 0.428, 0.433] [0.067, 0.003, 0.294, 0.302] Deviation Point in worst deviation case [5.9 3.2 4.8 1.8] [0.004, 0.008, -0.039, -0.07] [-0.023, 0.003, 0.312, 0.319] ###Markdown Final plot to explain deviation as unstability in explanations ###Code # Some example data to display x = np.linspace(0, 2 * np.pi, 400) y = np.sin(x ** 2) fig, axs = plt.subplots(2, 4) fig.set_size_inches(28.5, 14.5) # position axs[0, 0] axs[0, 0].set_title('Feature Value') colors = [["#3DE8F7","w"],[ "#3DE8F7","w"], [ "#3DE8F7","w"], [ "#3DE8F7","w"]] anchor_point_dict = dict(zip(feature_names, anchor_point)) anchor_point_df = pd.DataFrame.from_dict(anchor_point_dict, orient='index').reset_index() table = axs[0, 0].table( cellText = anchor_point_df.values, loc = 'center', cellColours = colors, colWidths=[0.3] * 2) table.set_fontsize(12) table.scale(1.5,6) cellDict = table.get_celld() cellDict[(0,1)].set_width(0.15) cellDict[(1,1)].set_width(0.15) cellDict[(2,1)].set_width(0.15) cellDict[(3,1)].set_width(0.15) axs[0, 0].axis('off') axs[0, 0].axis('tight') # position axs[0, 1] axs[0, 1].set_title('Explanation') x = feature_names[::-1] y = np.array(anchor_point_shap_exp[::-1]) # anchor_point_shap_exp # print(x, y) width = 0.75 # the width of the bars ind = np.arange(len(y)) # the x locations for the groups above_threshold = np.maximum(y - threshold, 0) below_threshold = np.minimum(y, threshold) axs[0, 1].barh(x, below_threshold, width, color="#FF4D4D") # below 
threshold value axs[0, 1].barh(x, above_threshold, width, color="#3DE8F7", left=below_threshold) # above threshold value axs[0, 1].set_yticks(ind+width/2) # position axs[0, 2] axs[0, 2].set_title('Feature Value') colors = [["#3DE8F7","w"],[ "#3DE8F7","w"], [ "#3DE8F7","w"], [ "#3DE8F7","w"]] anchor_point_dict = dict(zip(feature_names, anchor_point)) anchor_point_df = pd.DataFrame.from_dict(anchor_point_dict, orient='index').reset_index() table = axs[0, 2].table( cellText = anchor_point_df.values, loc = 'center', cellColours = colors, colWidths=[0.3] * 2) table.set_fontsize(12) table.scale(1.5,6) cellDict = table.get_celld() cellDict[(0,1)].set_width(0.15) cellDict[(1,1)].set_width(0.15) cellDict[(2,1)].set_width(0.15) cellDict[(3,1)].set_width(0.15) axs[0, 2].axis('off') axs[0, 2].axis('tight') # position axs[0, 3] axs[0, 3].set_title('Explanation') x = feature_names[::-1] y = np.array(anchor_point_lime_exp[::-1]) # # anchor_point_lime_exp # print(x, y) width = 0.75 # the width of the bars ind = np.arange(len(y)) # the x locations for the groups above_threshold = np.maximum(y - threshold, 0) below_threshold = np.minimum(y, threshold) # ax.barh(ind, y, width, color="#3DE8F7") axs[0, 3].barh(x, below_threshold, width, color="#FF4D4D") # below threshold value axs[0, 3].barh(x, above_threshold, width, color="#3DE8F7", left=below_threshold) # above threshold value axs[0, 3].set_yticks(ind+width/2) # position axs[1, 0] axs[1, 0].set_title('Feature Value') colors = [["#FF4D4D","w"],[ "#3DE8F7","w"], [ "#3DE8F7","w"], [ "#3DE8F7","w"]] deviation_point_dict = dict(zip(feature_names, deviation_point_shap)) # deviation_point_shap deviation_point_df = pd.DataFrame.from_dict(deviation_point_dict, orient='index').reset_index() table = axs[1, 0].table( cellText = deviation_point_df.values, loc = 'center', cellColours = colors, colWidths=[0.3] * 2) table.set_fontsize(12) table.scale(1.5,6) cellDict = table.get_celld() cellDict[(0,1)].set_width(0.15) cellDict[(1,1)].set_width(0.15) 
cellDict[(2,1)].set_width(0.15) cellDict[(3,1)].set_width(0.15) axs[1, 0].axis('off') axs[1, 0].axis('tight') # position axs[1, 1] axs[1, 1].set_title('Explanation') x = feature_names[::-1] y = np.array(deviation_point_shap_exp[::-1]) # deviation_point_shap_exp # print(x, y) width = 0.75 # the width of the bars ind = np.arange(len(y)) # the x locations for the groups above_threshold = np.maximum(y - threshold, 0) below_threshold = np.minimum(y, threshold) # ax.barh(ind, y, width, color="#3DE8F7") axs[1, 1].barh(x, below_threshold, width, color="#FF4D4D") # below threshold value axs[1, 1].barh(x, above_threshold, width, color="#3DE8F7", left=below_threshold) # above threshold value axs[1, 1].set_yticks(ind+width/2) # position axs[1, 2] axs[1, 2].set_title('Feature Value') colors = [["#3DE8F7","w"],[ "#3DE8F7","w"], [ "#FF4D4D","w"], [ "#FF4D4D","w"]] deviation_point_dict = dict(zip(feature_names, deviation_point_lime)) # deviation_point_lime deviation_point_df = pd.DataFrame.from_dict(deviation_point_dict, orient='index').reset_index() table = axs[1, 2].table( cellText = deviation_point_df.values, loc = 'center', cellColours = colors, colWidths=[0.3] * 2) table.set_fontsize(12) table.scale(1.5,6) cellDict = table.get_celld() cellDict[(0,1)].set_width(0.15) cellDict[(1,1)].set_width(0.15) cellDict[(2,1)].set_width(0.15) cellDict[(3,1)].set_width(0.15) axs[1, 2].axis('off') axs[1, 2].axis('tight') # position axs[1, 3] axs[1, 3].set_title('Explanation') x = feature_names[::-1] y = np.array(deviation_point_lime_exp[::-1]) # deviation_point_lime_exp # print(x,y) width = 0.75 # the width of the bars ind = np.arange(len(y)) # the x locations for the groups above_threshold = np.maximum(y - threshold, 0) below_threshold = np.minimum(y, threshold) # ax.barh(ind, y, width, color="#3DE8F7") axs[1, 3].barh(x, below_threshold, width, color="#FF4D4D") # below threshold value axs[1, 3].barh(x, above_threshold, width, color="#3DE8F7", left=below_threshold) # above threshold value 
axs[1, 3].set_yticks(ind+width/2) # for ax in axs.flat: # ax.set(xlabel='x-label', ylabel='y-label') # # Hide x labels and tick labels for top plots and y ticks for right plots. # for ax in axs.flat: # ax.label_outer() # fig.suptitle('(a) SHAP (L=0.2)', fontsize=16) fig.text(0.3, 0.04, '(a) SHAP (L=0.20)', ha='center', fontsize=20, fontstyle='italic') fig.text(0.7, 0.04, '(b) LIME (L=2.80)', ha='center', fontsize=20, fontstyle='italic') fig.savefig(plots_path + 'experiments_figure1.png') ###Output _____no_output_____ ###Markdown 1. Visualize anchor point and corresponding LIME explanation ###Code ''' anchor point ''' anchor_point_dict = dict(zip(feature_names, anchor_point)) # print(anchor_point_dict) anchor_point_columns = ['Feature', 'Value'] colors = [["#3DE8F7","w"],[ "#3DE8F7","w"], [ "#3DE8F7","w"], [ "#3DE8F7","w"]] anchor_point_df = pd.DataFrame.from_dict(anchor_point_dict, orient='index').reset_index() fig, ax = plt.subplots() table = ax.table(cellText = anchor_point_df.values, # colLabels = anchor_point_df.columns, loc = 'center', cellColours = colors, colWidths=[0.3] * 2) table.set_fontsize(10) table.scale(1,4) cellDict = table.get_celld() cellDict[(0,1)].set_width(0.15) cellDict[(1,1)].set_width(0.15) cellDict[(2,1)].set_width(0.15) cellDict[(3,1)].set_width(0.15) ax.axis('off') ax.axis('tight') fig.patch.set_visible(False) fig.tight_layout() plt.title('Feature Value') ''' corresponding LIME explanation ''' x = feature_names[::-1] print(x) y = np.array(anchor_point_lime_exp[::-1]) # anchor_x_maximise_lc_exp_lime print(y) fig, ax = plt.subplots() width = 0.75 # the width of the bars ind = np.arange(len(y)) # the x locations for the groups # split it up above_threshold = np.maximum(y - threshold, 0) below_threshold = np.minimum(y, threshold) # ax.barh(ind, y, width, color="#3DE8F7") ax.barh(x, below_threshold, width, color="#FF4D4D") # below threshold value ax.barh(x, above_threshold, width, color="#3DE8F7", left=below_threshold) # above threshold value 
ax.set_yticks(ind+width/2) ###Output ['petal width (cm)', 'petal length (cm)', 'sepal width (cm)', 'sepal length (cm)'] [0.433 0.428 0.015 0.029] ###Markdown 2. Visualize anchor point and corresponding SHAP explanation ###Code ''' anchor point ''' anchor_point_dict = dict(zip(feature_names, anchor_point)) colors = [["#3DE8F7","w"],[ "#3DE8F7","w"], [ "#3DE8F7","w"], [ "#3DE8F7","w"]] anchor_point_df = pd.DataFrame.from_dict(anchor_point_dict, orient='index').reset_index() fig, ax = plt.subplots() table = ax.table(cellText = anchor_point_df.values, # colLabels = anchor_point_df.columns, loc = 'center', cellColours = colors, colWidths=[0.3] * 2) table.set_fontsize(10) table.scale(1,4) cellDict = table.get_celld() cellDict[(0,1)].set_width(0.15) cellDict[(1,1)].set_width(0.15) cellDict[(2,1)].set_width(0.15) cellDict[(3,1)].set_width(0.15) ax.axis('off') ax.axis('tight') fig.patch.set_visible(False) fig.tight_layout() plt.title('Feature Value') ''' corresponding LIME explanation ''' x = feature_names[::-1] print(x) y = np.array(anchor_point_shap_exp[::-1]) # anchor_x_maximise_lc_exp_lime print(y) fig, ax = plt.subplots() width = 0.75 # the width of the bars ind = np.arange(len(y)) # the x locations for the groups # split it up above_threshold = np.maximum(y - threshold, 0) below_threshold = np.minimum(y, threshold) # ax.barh(ind, y, width, color="#3DE8F7") ax.barh(x, below_threshold, width, color="#FF4D4D") # below threshold value ax.barh(x, above_threshold, width, color="#3DE8F7", left=below_threshold) # above threshold value ax.set_yticks(ind+width/2) plt.title('Explanation') ###Output ['petal width (cm)', 'petal length (cm)', 'sepal width (cm)', 'sepal length (cm)'] [0.302 0.294 0.003 0.067] ###Markdown 3. 
Visualize deviation point and corresponding LIME explanation ###Code ''' anchor point ''' deviation_point_dict = dict(zip(feature_names, deviation_point)) # print(anchor_point_dict) deviation_point_columns = ['Feature', 'Value'] colors = [["#3DE8F7","w"],[ "#3DE8F7","w"], [ "#FF4D4D","w"], [ "#FF4D4D","w"]] deviation_point_df = pd.DataFrame.from_dict(deviation_point_dict, orient='index').reset_index() # deviation_point_df.rename(columns={'index': 'Feature', 0: 'Value' }, inplace=True) fig, ax = plt.subplots() table = ax.table(cellText = deviation_point_df.values, # colLabels = deviation_point_df.columns, loc = 'center', cellColours = colors, colWidths=[0.3] * 2) table.set_fontsize(10) table.scale(1,4) cellDict = table.get_celld() cellDict[(0,1)].set_width(0.15) cellDict[(1,1)].set_width(0.15) cellDict[(2,1)].set_width(0.15) cellDict[(3,1)].set_width(0.15) ax.axis('off') ax.axis('tight') fig.patch.set_visible(False) fig.tight_layout() plt.title('Feature Value') ''' corresponding LIME explanation ''' x = feature_names[::-1] print(x) y = np.array(deviation_point_lime_exp[::-1]) # anchor_x_maximise_lc_exp_lime print(y) fig, ax = plt.subplots() width = 0.75 # the width of the bars ind = np.arange(len(y)) # the x locations for the groups # split it up above_threshold = np.maximum(y - threshold, 0) below_threshold = np.minimum(y, threshold) # ax.barh(ind, y, width, color="#3DE8F7") ax.barh(x, below_threshold, width, color="#FF4D4D") # below threshold value ax.barh(x, above_threshold, width, color="#3DE8F7", left=below_threshold) # above threshold value ax.set_yticks(ind+width/2) plt.title('Explanation') # for key, cell in cellDict.items(): # print (str(key[0])+", "+ str(key[1])+"\t"+str(cell.get_text())) ###Output _____no_output_____ ###Markdown 4. 
Visualize deviation point and corresponding SHAP explanation ###Code ''' anchor point ''' deviation_point_dict = dict(zip(feature_names, deviation_point)) # print(anchor_point_dict) deviation_point_columns = ['Feature', 'Value'] colors = [["#3DE8F7","w"],[ "#3DE8F7","w"], [ "#3DE8F7","w"], [ "#3DE8F7","w"]] deviation_point_df = pd.DataFrame.from_dict(deviation_point_dict, orient='index').reset_index() # deviation_point_df.rename(columns={'index': 'Feature', 0: 'Value' }, inplace=True) fig, ax = plt.subplots() table = ax.table(cellText = deviation_point_df.values, # colLabels = deviation_point_df.columns, loc = 'center', cellColours = colors, colWidths=[0.3] * 2) table.set_fontsize(10) table.scale(1,4) cellDict = table.get_celld() cellDict[(0,1)].set_width(0.15) cellDict[(1,1)].set_width(0.15) cellDict[(2,1)].set_width(0.15) cellDict[(3,1)].set_width(0.15) ax.axis('off') ax.axis('tight') fig.patch.set_visible(False) fig.tight_layout() plt.title('Feature Value') ''' corresponding LIME explanation ''' x = feature_names[::-1] print(x) y = np.array(deviation_point_shap_exp[::-1]) # anchor_x_maximise_lc_exp_lime print(y) fig, ax = plt.subplots() width = 0.75 # the width of the bars ind = np.arange(len(y)) # the x locations for the groups # split it up above_threshold = np.maximum(y - threshold, 0) below_threshold = np.minimum(y, threshold) # ax.barh(ind, y, width, color="#3DE8F7") ax.barh(x, below_threshold, width, color="#FF4D4D") # below threshold value ax.barh(x, above_threshold, width, color="#3DE8F7", left=below_threshold) # above threshold value ax.set_yticks(ind+width/2) plt.title('Explanation') ###Output ['petal width (cm)', 'petal length (cm)', 'sepal width (cm)', 'sepal length (cm)'] [0.302 0.294 0.003 0.067] ###Markdown Visualize lipschitz estimations for all test instances ###Code df = lc_lime_df.append(lc_shap_df) ax = sns.boxplot(x='method', y="Lipschitz Estimates", data=df) ax = sns.boxplot(x="Dataset", y="Lipschitz Estimates", hue="method", data=df) 
sns.despine(offset=10, trim=True) ###Output _____no_output_____ ###Markdown LIME visualizations by single points ###Code explainer_lime = LimeTabularExplainer(train, mode = 'classification', training_labels = labels_train, feature_names=feature_names, verbose=False, class_names=target_names, feature_selection='auto', discretize_continuous=True) x_instance = test[anchor_index] LR_exp_lime = explainer_lime.explain_instance(x_instance, LR_iris.predict_proba, labels=np.unique(iris.target), top_labels=None, num_features=len(x_instance), num_samples=6000) LR_exp_lime.show_in_notebook() x_instance = test[similar_point_index] LR_exp_lime = explainer_lime.explain_instance(x_instance, LR_iris.predict_proba, labels=np.unique(iris.target), top_labels=None, num_features=len(x_instance), num_samples=6000) LR_exp_lime.show_in_notebook() i = np.random.randint(0, test.shape[0]) i = 0 LR_exp_lime_map = LR_exp_lime.as_map() # pprint(LR_exp_lime_map) print('Predicted class for i:', labels_pred_lr[i]) LR_exp_lime_list = LR_exp_lime.as_list(label=labels_pred_lr[i]) # pprint(LR_exp_lime_list) ###Output _____no_output_____ ###Markdown Conclusions ###Code lr_lime_iris = [2.657, 3.393, 1.495] rf_lime_iris = [3.010, 3.783, 1.767] lr_shap_iris = [2.716, 3.512, 1.463] rf_shap_iris = [1.969, 3.546, 2.136] find_min_vector = np.array([lr_lime_iris, rf_lime_iris, lr_shap_iris, rf_shap_iris]) np.amin(find_min_vector, axis=0) from sklearn.linear_model import Ridge import numpy as np n_samples, n_features = 10, 5 rng = np.random.RandomState(0) y = rng.randn(n_samples) X = rng.randn(n_samples, n_features) clf = Ridge(alpha=1.0) clf.fit(X, y) ###Output _____no_output_____ ###Markdown Debugging Space ###Code """ Use Euclidean distance to define neighborhood points """ display(X.head()) points = X.values epsilon = 0.75 * np.sqrt(len(points[0])) dist = (points[0] - points[1:])**2 dist = np.sum(dist, axis=1) dist = np.sqrt(dist) print(dist) neighborhood_indices = [] for index in range(0, len(dist)): if 
dist[index] < epsilon: neighborhood_indices.append(index) print(neighborhood_indices) ###Output _____no_output_____
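The index-collecting loop above can be expressed in a vectorized way. The sketch below is an assumption-laden equivalent (it presumes the same `points` array layout, with the first row as the reference point, and the same `epsilon`); the helper name `neighborhood_indices` is invented here:

```python
import numpy as np

def neighborhood_indices(points, epsilon):
    """Indices of points[1:] within Euclidean distance epsilon of points[0].

    Vectorized equivalent of the explicit loop: compute all distances at
    once, then select the indices that fall inside the epsilon ball.
    """
    dist = np.sqrt(np.sum((points[0] - points[1:]) ** 2, axis=1))
    return np.where(dist < epsilon)[0].tolist()
```

This trades the explicit `for`/`if` for a single boolean mask, which is both faster and easier to read for large point sets.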
1-LaneLines/others/P1-original.ipynb
###Markdown Self-Driving Car Engineer Nanodegree Project: **Finding Lane Lines on the Road** ***In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the IPython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/322/view) for this project.---Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**--- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. 
You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**--- Your output should look something like this (above) after detecting line segments using the helper functions below Your goal is to connect/average/extrapolate line segments to get output like this **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** Import Packages ###Code #importing some useful packages import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 %matplotlib inline ###Output _____no_output_____ ###Markdown Read in an Image ###Code #reading in an image image = mpimg.imread('test_images/solidWhiteRight.jpg') #printing out some stats and plotting print('This image is:', type(image), 'with dimensions:', image.shape) plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray') ###Output This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3) ###Markdown Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**`cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images `cv2.cvtColor()` to grayscale or change color `cv2.imwrite()` to output images to file `cv2.bitwise_and()` to apply a mask to an image**Check out the OpenCV 
documentation to learn about these and discover even more awesome functionality!** Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson! ###Code import math def grayscale(img): """Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale (assuming your grayscaled image is called 'gray') you should call plt.imshow(gray, cmap='gray')""" return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Or use BGR2GRAY if you read an image with cv2.imread() # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def canny(img, low_threshold, high_threshold): """Applies the Canny transform""" return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): """Applies a Gaussian Noise kernel""" return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): """ Applies an image mask. Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. `vertices` should be a numpy array of integer points. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e. 
3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def draw_lines(img, lines, color=[255, 0, 0], thickness=2): """ NOTE: this is the function you might want to use as a starting point once you want to average/extrapolate the line segments you detect to map out the full extent of the lane (going from the result shown in raw-lines-example.mp4 to that shown in P1_example.mp4). Think about things like separating line segments by their slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left line vs. the right line. Then, you can average the position of each of the lines and extrapolate to the top and bottom of the lane. This function draws `lines` with `color` and `thickness`. Lines are drawn on the image inplace (mutates the image). If you want to make the lines semi-transparent, think about combining this function with the weighted_img() function below """ for line in lines: for x1,y1,x2,y2 in line: cv2.line(img, (x1, y1), (x2, y2), color, thickness) def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap): """ `img` should be the output of a Canny transform. Returns an image with hough lines drawn. """ lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) draw_lines(line_img, lines) return line_img, lines # Python 3 has support for cool math symbols. def weighted_img(img, initial_img, α=0.8, β=1., γ=0.): """ `img` is the output of the hough_lines(), An image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. 
The result image is computed as follows: initial_img * α + img * β + γ NOTE: initial_img and img must be the same shape! """ return cv2.addWeighted(initial_img, α, img, β, γ) ###Output _____no_output_____ ###Markdown Test ImagesBuild your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.** ###Code import os os.listdir("test_images/") ###Output _____no_output_____ ###Markdown Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters. ###Code # TODO: Build your pipeline that will draw lane lines on the test_images # then save them to the test_images_output directory. grey = grayscale(image) # blur kernel = 3 img_blur = gaussian_blur(grey, kernel) low_thre = 1 high_thre = 25 edges = canny(img_blur, low_thre, high_thre) w = image.shape[1] h= image.shape[0] #vertices = np.array([[(0.15*w, 0.4*h), (0.35*w, 0.4*h), (0.5*w, 0.3*h), (0.9*w, h)]], dtype=np.int32) vertices = np.array([[(100,h),(490, 285), (900,h)]], dtype=np.int32) # 4 points to create the boundry masked_img = region_of_interest(edges, vertices) rho = 1 theta = np.pi/180 thre = 100 min_len = 100 max_gap = 50 line_img, lines = hough_lines(masked_img, rho, theta, thre, min_len, max_gap) # lines.shape= (4, 1, 4) img = weighted_img(line_img, image) #img = weighted_img(line_img, image) plt.imshow(img) #print(lines) ###Output _____no_output_____ ###Markdown Test on VideosYou know what's cooler than drawing lanes over images? 
Drawing lanes over video!We can test our solution on two provided videos:`solidWhiteRight.mp4``solidYellowLeft.mp4`**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.****If you get an error that looks like this:**```NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download()```**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.** ###Code # Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): # NOTE: The output you return should be a color image (3 channel) for processing video below # TODO: put your pipeline here, # you should return the final output (image where lines are drawn on lanes) return result ###Output _____no_output_____ ###Markdown Let's try the one with the solid white lane on the right first ... ###Code white_output = 'test_videos_output/solidWhiteRight.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5) clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! 
%time white_clip.write_videofile(white_output, audio=False) ###Output _____no_output_____ ###Markdown Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice. ###Code HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(white_output)) ###Output _____no_output_____ ###Markdown Improve the draw_lines() function**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".****Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky! 
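One possible way to realize the slope-based averaging/extrapolation described above is sketched below. This is not the reference solution: the 0.5 slope cutoff, the flattened `(x1, y1, x2, y2)` segment format (inside `draw_lines` the Hough output is nested one level deeper), and the function name `average_lane_lines` are all assumptions made for illustration.

```python
import numpy as np

def average_lane_lines(lines, y_bottom, y_top):
    """Separate Hough segments by slope sign, average slope/intercept
    per side, and extrapolate each side to y_bottom and y_top.
    Returns up to two (x1, y1, x2, y2) tuples, one per lane line."""
    left, right = [], []
    for x1, y1, x2, y2 in lines:
        if x2 == x1:
            continue  # skip vertical segments (undefined slope)
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < 0.5:
            continue  # drop nearly horizontal segments (assumed noise)
        intercept = y1 - slope * x1
        # image y grows downward, so the left lane has negative slope
        (left if slope < 0 else right).append((slope, intercept))
    result = []
    for side in (left, right):
        if not side:
            continue  # no segments detected on this side
        slope, intercept = np.mean(side, axis=0)
        result.append((int((y_bottom - intercept) / slope), y_bottom,
                       int((y_top - intercept) / slope), y_top))
    return result
```

Inside `draw_lines`, the two returned segments would then be rendered with `cv2.line` using a larger thickness, giving one solid line per side.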
###Code yellow_output = 'test_videos_output/solidYellowLeft.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5) clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4') yellow_clip = clip2.fl_image(process_image) %time yellow_clip.write_videofile(yellow_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(yellow_output)) ###Output _____no_output_____ ###Markdown Writeup and SubmissionIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. Optional ChallengeTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! 
###Code challenge_output = 'test_videos_output/challenge.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5) clip3 = VideoFileClip('test_videos/challenge.mp4') challenge_clip = clip3.fl_image(process_image) %time challenge_clip.write_videofile(challenge_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(challenge_output)) ###Output _____no_output_____
tutorials/DAG-Creation-And-Submission.ipynb
###Markdown DAG Creation and Submission Launch this tutorial in a Jupyter Notebook on Binder: [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/htcondor/htcondor-python-bindings-tutorials/master?urlpath=lab/tree/DAG-Creation-And-Submission.ipynb)In this tutorial, we will learn how to use `htcondor.dags` to create and submit an HTCondor DAGMan workflow.Our goal will be to create an image of the Mandelbrot set.This is a perfect problem for high-throughput computing because each point in the image can be calculated completely independently of any other point, so we are free to divide the image creation up into patches, each created by a single HTCondor job.DAGMan will enter the picture to coordinate stitching the image patches we create back into a single image. Making a Mandelbrot set image locallyWe'll use `goatbrot` (https://github.com/beejjorgensen/goatbrot) to make the image.`goatbrot` can be run from the command line, and takes a series of options to specify which part of the Mandelbrot set to draw, as well as the properties of the image itself.`goatbrot` options:- `-i 1000` The number of iterations.- `-c 0,0` The center point of the image region.- `-w 3` The width of the image region.- `-s 1000,1000` The pixel dimensions of the image.- `-o test.ppm` The name of the output file to generate.We can run a shell command from Jupyter by prefixing it with a `!`: ###Code ! ./goatbrot -i 10 -c 0,0 -w 3 -s 500,500 -o test.ppm ! convert test.ppm test.png ###Output _____no_output_____ ###Markdown Let's take a look at the test image. It won't be very good, because we didn't run for very many iterations.We'll use HTCondor to produce a better image! ###Code from IPython.display import Image Image('test.png') ###Output _____no_output_____ ###Markdown What is the workflow? 
We can parallelize this calculation by drawing rectangular sub-regions of the full region ("tiles") we want and stitching them together into a single image using `montage`.Let's draw this out as a graph, showing how data (image patches) will flow through the system.(Don't worry about this code, unless you want to know how to make dot diagrams in Python!) ###Code from graphviz import Digraph import itertools num_tiles_per_side = 2 dot = Digraph() dot.node('montage') for x, y in itertools.product(range(num_tiles_per_side), repeat = 2): n = f'tile_{x}-{y}' dot.node(n) dot.edge(n, 'montage') dot ###Output _____no_output_____ ###Markdown Since we can chop the image up however we'd like, we have as many tiles per side as we'd like (try changing `num_tiles_per_side` above).The "shape" of the DAG is the same: there is a "layer" of `goatbrot` jobs that calculate tiles, which all feed into `montage`.Now that we know the structure of the problem, we can start describing it to HTCondor. Describing `goatbrot` as an HTCondor jobWe describe a job using a `Submit` object. It corresponds to the submit *file* used by the command line tools.It mostly behaves like a standard Python dictionary, where the keys and values correspond to submit descriptors. 
###Code import htcondor tile_description = htcondor.Submit( executable = 'goatbrot', # the program we want to run arguments = '-i 10000 -c $(x),$(y) -w $(w) -s 500,500 -o tile_$(tile_x)-$(tile_y).ppm', # the arguments to pass to the executable log = 'mandelbrot.log', # the HTCondor job event log output = 'goatbrot.out.$(tile_x)_$(tile_y)', # stdout from the job goes here error = 'goatbrot.err.$(tile_x)_$(tile_y)', # stderr from the job goes here request_cpus = '1', # resource requests; we don't need much per job for this problem request_memory = '128MB', request_disk = '1GB', ) print(tile_description) ###Output _____no_output_____ ###Markdown Notice the heavy use of macros like `$(x)` to specify the tile.Those aren't built-in submit macros; instead, we will plan on passing their values in through **vars**.Vars will let us customize each individual job in the tile layer by filling in those macros individually.Each job will receive a dictionary of macro values; our next goal is to make a list of those dictionaries.We will do this using a function that takes the number of tiles per side as an argument.As mentioned above, the **structure** of the DAG is the same no matter how "wide" the tile layer is.This is why we define a function to produce the tile vars instead of just calculating them once: we can vary the width of the DAG by passing different arguments to `make_tile_vars`.More customizations could be applied to make different images (for example, you could make it possible to set the center point of the image). ###Code def make_tile_vars(num_tiles_per_side, width = 3): width_per_tile = width / num_tiles_per_side centers = [ width_per_tile * (n + 0.5 - (num_tiles_per_side / 2)) for n in range(num_tiles_per_side) ] vars = [] for (tile_y, y), (tile_x, x) in itertools.product(enumerate(centers), repeat = 2): var = dict( w = width_per_tile, x = x, y = -y, # image coordinates vs. 
Cartesian coordinates tile_x = str(tile_x).rjust(5, '0'), tile_y = str(tile_y).rjust(5, '0'), ) vars.append(var) return vars tile_vars = make_tile_vars(2) for var in tile_vars: print(var) ###Output _____no_output_____ ###Markdown If we want to increase the number of tiles per side, we just pass in a larger number.Because the `tile_description` is **parameterized** in terms of these variables, it will work the same way no matter what we pass in as `vars`. ###Code tile_vars = make_tile_vars(4) for var in tile_vars: print(var) ###Output _____no_output_____ ###Markdown Describing montage as an HTCondor jobNow we can write the `montage` job description. The problem is that the arguments and input files depend on how many tiles we have, which we don't know ahead-of-time.We'll take the brute-force approach of just writing a function that takes the tile `vars` we made in the previous section and uses them to build the `montage` job description.Note that some of the work of building up the submit description is done in Python.This is a major advantage of communicating with HTCondor via Python: you can do the hard work in Python instead of in submit language!One area for possible improvement here is to remove the duplication of the format of the input file names, which is repeated here from when it was first used in the `goatbrot` submit object. When building a larger, more complicated workflow, it is important to reduce duplication of information to make it easier to modify the workflow in the future. 
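One hedged sketch of that deduplication: centralize the tile filename pattern in a small helper (the name `tile_filename` is invented here, not part of the tutorial), and derive both the goatbrot `-o` argument (via submit macros) and the montage input list from it, so the pattern lives in exactly one place:

```python
def tile_filename(tile_x, tile_y):
    """Single source of truth for the tile image filename pattern."""
    return f'tile_{tile_x}-{tile_y}.ppm'

# goatbrot's -o argument can reference the pattern with submit macros,
# which HTCondor expands per-job from the vars:
goatbrot_output = tile_filename('$(tile_x)', '$(tile_y)')

# ...and montage's input list is derived from the tile vars with the same helper:
def montage_inputs(tile_vars):
    return [tile_filename(d['tile_x'], d['tile_y']) for d in tile_vars]
```

With this in place, changing the filename format requires editing only `tile_filename`, and the two submit descriptions cannot drift apart.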
###Code def make_montage_description(tile_vars): num_tiles_per_side = int(len(tile_vars) ** .5) input_files = [f'tile_{d["tile_x"]}-{d["tile_y"]}.ppm' for d in tile_vars] return htcondor.Submit( executable = '/usr/bin/montage', arguments = f'{" ".join(input_files)} -mode Concatenate -tile {num_tiles_per_side}x{num_tiles_per_side} mandelbrot.png', transfer_input_files = ', '.join(input_files), log = 'mandelbrot.log', output = 'montage.out', error = 'montage.err', request_cpus = '1', request_memory = '128MB', request_disk = '1GB', ) montage_description = make_montage_description(make_tile_vars(2)) print(montage_description) ###Output _____no_output_____ ###Markdown Describing the DAG using `htcondor.dags`Now that we have the job descriptions, all we have to do is use `htcondor.dags` to tell DAGMan about the dependencies between them.`htcondor.dags` is a subpackage of the HTCondor Python bindings that lets you write DAG descriptions using a higher-level language than raw DAG description file syntax.Incidentally, it also lets you use Python to drive the creation process, increasing your flexibility.**Important Concept:** the code from `dag = dags.DAG()` onwards only defines the **topology** (or **structure**) of the DAG. The `tile` layer can be flexibly grown or shrunk by adjusting the `tile_vars` without changing the topology, and this can be clearly expressed in the code.The `tile_vars` are driving the creation of the DAG. Try changing `num_tiles_per_side` to some other value! 
###Code from htcondor import dags num_tiles_per_side = 2 # create the tile vars early, since we need to pass them to multiple places later tile_vars = make_tile_vars(num_tiles_per_side) dag = dags.DAG() # create the tile layer, passing in the submit description for a tile job and the tile vars tile_layer = dag.layer( name = 'tile', submit_description = tile_description, vars = tile_vars, ) # create the montage "layer" (it only has one job in it, so no need for vars) # note that the submit description is created "on the fly"! montage_layer = tile_layer.child_layer( name = 'montage', submit_description = make_montage_description(tile_vars), ) ###Output _____no_output_____ ###Markdown We can get a textual description of the DAG structure by calling the `describe` method: ###Code print(dag.describe()) ###Output _____no_output_____ ###Markdown Write the DAG to diskWe still need to write the DAG to disk to get DAGMan to work with it.We also need to move some files around so that the jobs know where to find them. ###Code from pathlib import Path import shutil dag_dir = (Path.cwd() / 'mandelbrot-dag').absolute() # blow away any old files shutil.rmtree(dag_dir, ignore_errors = True) # make the magic happen! 
dag_file = dags.write_dag(dag, dag_dir) # the submit files are expecting goatbrot to be next to them, so copy it into the dag directory shutil.copy2('goatbrot', dag_dir) print(f'DAG directory: {dag_dir}') print(f'DAG description file: {dag_file}') ###Output _____no_output_____ ###Markdown Submit the DAG via the Python bindingsNow that we have written out the DAG description file, we can submit it for execution using the standard Python bindings submit mechanism.The `Submit` class has a static method which can read a DAG description and generate a corresponding `Submit` object: ###Code dag_submit = htcondor.Submit.from_dag(str(dag_file), {'force': 1}) print(dag_submit) ###Output _____no_output_____ ###Markdown Now we can enter the DAG directory and submit the DAGMan job, which will execute the graph: ###Code import os os.chdir(dag_dir) schedd = htcondor.Schedd() with schedd.transaction() as txn: cluster_id = dag_submit.queue(txn) print(f"DAGMan job cluster is {cluster_id}") os.chdir('..') ###Output _____no_output_____ ###Markdown Let's wait for the DAGMan job to complete by reading it's event log: ###Code dag_job_log = f"{dag_file}.dagman.log" print(f"DAG job log file is {dag_job_log}") # read events from the log, waiting forever for the next event dagman_job_events = htcondor.JobEventLog(str(dag_job_log)).events(None) # this event stream only contains the events for the DAGMan job itself, not the jobs it submits for event in dagman_job_events: print(event) # stop waiting when we see the terminate event if event.type is htcondor.JobEventType.JOB_TERMINATED and event.cluster == cluster_id: break ###Output _____no_output_____ ###Markdown Let's look at the final image! 
###Code Image(dag_dir / "mandelbrot.png") ###Output _____no_output_____ ###Markdown DAG Creation and Submission Launch this tutorial in a Jupyter Notebook on Binder: [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/htcondor/htcondor-python-bindings-tutorials/master?urlpath=lab/tree/DAG-Creation-And-Submission.ipynb)In this tutorial, we will learn how to use `htcondor.dags` to create and submit an HTCondor DAGMan workflow.Our goal will be to create an image of the Mandelbrot set.This is a perfect problem for high-throughput computing because each point in the image can be calculated completely independently of any other point, so we are free to divide the image creation up into patches, each created by a single HTCondor job.DAGMan will enter the picture to coordinate stitching the image patches we create back into a single image. Making a Mandelbrot set image locallyWe'll use `goatbrot` (https://github.com/beejjorgensen/goatbrot) to make the image.`goatbrot` can be run from the command line, and takes a series of options to specify which part of the Mandelbrot set to draw, as well as the properties of the image itself.`goatbrot` options:- `-i 1000` The number of iterations.- `-c 0,0` The center point of the image region.- `-w 3` The width of the image region.- `-s 1000,1000` The pixel dimensions of the image.- `-o test.ppm` The name of the output file to generate.We can run a shell command from Jupyter by prefixing it with a `!`: ###Code ! ./goatbrot -i 10 -c 0,0 -w 3 -s 500,500 -o test.ppm ! convert test.ppm test.png ###Output _____no_output_____ ###Markdown Let's take a look at the test image. It won't be very good, because we didn't run for very many iterations.We'll use HTCondor to produce a better image! ###Code from IPython.display import Image Image('test.png') ###Output _____no_output_____ ###Markdown What is the workflow? 
We can parallelize this calculation by drawing rectangular sub-regions of the full region ("tiles") we want and stitching them together into a single image using `montage`.Let's draw this out as a graph, showing how data (image patches) will flow through the system.(Don't worry about this code, unless you want to know how to make dot diagrams in Python!) ###Code from graphviz import Digraph import itertools num_tiles_per_side = 2 dot = Digraph() dot.node('montage') for x, y in itertools.product(range(num_tiles_per_side), repeat = 2): n = f'tile_{x}-{y}' dot.node(n) dot.edge(n, 'montage') dot ###Output _____no_output_____ ###Markdown Since we can chop the image up however we'd like, we have as many tiles per side as we'd like (try changing `num_tiles_per_side` above).The "shape" of the DAG is the same: there is a "layer" of `goatbrot` jobs that calculate tiles, which all feed into `montage`.Now that we know the structure of the problem, we can start describing it to HTCondor. Describing `goatbrot` as an HTCondor jobWe describe a job using a `Submit` object. It corresponds to the submit *file* used by the command line tools.It mostly behaves like a standard Python dictionary, where the keys and values correspond to submit descriptors. 
###Code import htcondor tile_description = htcondor.Submit( executable = 'goatbrot', # the program we want to run arguments = '-i 10000 -c $(x),$(y) -w $(w) -s 500,500 -o tile_$(tile_x)-$(tile_y).ppm', # the arguments to pass to the executable log = 'mandelbrot.log', # the HTCondor job event log output = 'goatbrot.out.$(tile_x)_$(tile_y)', # stdout from the job goes here error = 'goatbrot.err.$(tile_x)_$(tile_y)', # stderr from the job goes here request_cpus = '1', # resource requests; we don't need much per job for this problem request_memory = '128MB', request_disk = '1GB', ) print(tile_description) ###Output _____no_output_____ ###Markdown Notice the heavy use of macros like `$(x)` to specify the tile.Those aren't built-in submit macros; instead, we will plan on passing their values in through **vars**.Vars will let us customize each individual job in the tile layer by filling in those macros individually.Each job will recieve a dictionary of macro values; our next goal is to make a list of those dictionaries.We will do this using a function that takes the number of tiles per side as an argument.As mentioned above, the **structure** of the DAG is the same no matter how "wide" the tile layer is.This is why we define a function to produce the tile vars instead of just calculating them once: we can vary the width of the DAG by passing different arguments to `make_tile_vars`.More customizations could be applied to make different images (for example, you could make it possible to set the center point of the image). ###Code def make_tile_vars(num_tiles_per_side, width = 3): width_per_tile = width / num_tiles_per_side centers = [ width_per_tile * (n + 0.5 - (num_tiles_per_side / 2)) for n in range(num_tiles_per_side) ] vars = [] for (tile_y, y), (tile_x, x) in itertools.product(enumerate(centers), repeat = 2): var = dict( w = width_per_tile, x = x, y = -y, # image coordinates vs. 
Cartesian coordinates tile_x = str(tile_x).rjust(5, '0'), tile_y = str(tile_y).rjust(5, '0'), ) vars.append(var) return vars tile_vars = make_tile_vars(2) for var in tile_vars: print(var) ###Output _____no_output_____ ###Markdown If we want to increase the number of tiles per side, we just pass in a larger number.Because the `tile_description` is **parameterized** in terms of these variables, it will work the same way no matter what we pass in as `vars`. ###Code tile_vars = make_tile_vars(4) for var in tile_vars: print(var) ###Output _____no_output_____ ###Markdown Describing montage as an HTCondor jobNow we can write the `montage` job description. The problem is that the arguments and input files depend on how many tiles we have, which we don't know ahead of time.We'll take the brute-force approach of just writing a function that takes the tile `vars` we made in the previous section and uses them to build the `montage` job description.Note that some of the work of building up the submit description is done in Python.This is a major advantage of communicating with HTCondor via Python: you can do the hard work in Python instead of in submit language!One area for possible improvement here is to remove the duplication of the format of the input file names, which is repeated here from when it was first used in the `goatbrot` submit object. When building a larger, more complicated workflow, it is important to reduce duplication of information to make it easier to modify the workflow in the future.
###Code def make_montage_description(tile_vars): num_tiles_per_side = int(len(tile_vars) ** .5) input_files = [f'tile_{d["tile_x"]}-{d["tile_y"]}.ppm' for d in tile_vars] return htcondor.Submit( executable = '/usr/bin/montage', arguments = f'{" ".join(input_files)} -mode Concatenate -tile {num_tiles_per_side}x{num_tiles_per_side} mandelbrot.png', transfer_input_files = ', '.join(input_files), log = 'mandelbrot.log', output = 'montage.out', error = 'montage.err', request_cpus = '1', request_memory = '128MB', request_disk = '1GB', ) montage_description = make_montage_description(make_tile_vars(2)) print(montage_description) ###Output _____no_output_____ ###Markdown Describing the DAG using `htcondor.dags`Now that we have the job descriptions, all we have to do is use `htcondor.dags` to tell DAGMan about the dependencies between them.`htcondor.dags` is a subpackage of the HTCondor Python bindings that lets you write DAG descriptions using a higher-level language than raw DAG description file syntax.Incidentally, it also lets you use Python to drive the creation process, increasing your flexibility.**Important Concept:** the code from `dag = dags.DAG()` onwards only defines the **topology** (or **structure**) of the DAG. The `tile` layer can be flexibly grown or shrunk by adjusting the `tile_vars` without changing the topology, and this can be clearly expressed in the code.The `tile_vars` are driving the creation of the DAG. Try changing `num_tiles_per_side` to some other value! 
###Code from htcondor import dags num_tiles_per_side = 2 # create the tile vars early, since we need to pass them to multiple places later tile_vars = make_tile_vars(num_tiles_per_side) dag = dags.DAG() # create the tile layer, passing in the submit description for a tile job and the tile vars tile_layer = dag.layer( name = 'tile', submit_description = tile_description, vars = tile_vars, ) # create the montage "layer" (it only has one job in it, so no need for vars) # note that the submit description is created "on the fly"! montage_layer = tile_layer.child_layer( name = 'montage', submit_description = make_montage_description(tile_vars), ) ###Output _____no_output_____ ###Markdown We can get a textual description of the DAG structure by calling the `describe` method: ###Code print(dag.describe()) ###Output _____no_output_____ ###Markdown Write the DAG to diskWe still need to write the DAG to disk to get DAGMan to work with it.We also need to move some files around so that the jobs know where to find them. ###Code from pathlib import Path import shutil dag_dir = (Path.cwd() / 'mandelbrot-dag').absolute() # blow away any old files shutil.rmtree(dag_dir, ignore_errors = True) # make the magic happen! 
dag_file = dags.write_dag(dag, dag_dir) # the submit files are expecting goatbrot to be next to them, so copy it into the dag directory shutil.copy2('goatbrot', dag_dir) print(f'DAG directory: {dag_dir}') print(f'DAG description file: {dag_file}') ###Output _____no_output_____ ###Markdown Submit the DAG via the Python bindingsNow that we have written out the DAG description file, we can submit it for execution using the standard Python bindings submit mechanism.The `Submit` class has a static method which can read a DAG description and generate a corresponding `Submit` object: ###Code dag_submit = htcondor.Submit.from_dag(str(dag_file), {'force': 1}) print(dag_submit) ###Output _____no_output_____ ###Markdown Now we can enter the DAG directory and submit the DAGMan job, which will execute the graph: ###Code import os os.chdir(dag_dir) schedd = htcondor.Schedd() with schedd.transaction() as txn: cluster_id = dag_submit.queue(txn) print(f"DAGMan job cluster is {cluster_id}") os.chdir('..') ###Output _____no_output_____ ###Markdown Let's wait for the DAGMan job to complete by reading its event log: ###Code dag_job_log = f"{dag_file}.dagman.log" print(f"DAG job log file is {dag_job_log}") # read events from the log, waiting forever for the next event dagman_job_events = htcondor.JobEventLog(str(dag_job_log)).events(None) # this event stream only contains the events for the DAGMan job itself, not the jobs it submits for event in dagman_job_events: print(event) # stop waiting when we see the terminate event if event.type is htcondor.JobEventType.JOB_TERMINATED and event.cluster == cluster_id: break ###Output _____no_output_____ ###Markdown Let's look at the final image! ###Code Image(dag_dir / "mandelbrot.png") ###Output _____no_output_____
Python Pandas Tutorials 07.ipynb
###Markdown Tutorial 7 - Group By (Split Apply Combine) ###Code import pandas as pd df = pd.read_csv('sample_data_tutorial_07.csv') df # Let's group this DataFrame by city: g = df.groupby('city') g # The previous command groups the rows using the cities as the "keys" and the rest as the values for city, city_df in g: print(city) print(city_df) # Selecting a specific group: g.get_group('paris') # Getting the maximum value in each group g.max() # Getting the summary statistics g.describe() %matplotlib inline g.plot() # The first command is what makes Matplotlib render inside the Jupyter Notebook # See the pandas "GroupBy" documentation for more details!!! ###Output _____no_output_____
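Beyond `max()` and `describe()`, the `agg` method computes several statistics per group in one call. Here is a quick sketch with made-up data (the column names below are illustrative, not taken from the tutorial's CSV file):

```python
import pandas as pd

# Hypothetical data shaped like the tutorial's city file
df = pd.DataFrame({
    'city': ['paris', 'paris', 'tokyo', 'tokyo'],
    'temperature': [20, 24, 28, 30],
})

# Several statistics per group in a single call
summary = df.groupby('city')['temperature'].agg(['min', 'max', 'mean'])
print(summary)
```

Each row of `summary` is one group and each column is one of the requested statistics.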
Credit Report Complaint topic modelling.ipynb
###Markdown Credit Report Complaint topic modelling This is a Natural Language Processing (NLP) based solution to identify key topics present in credit reporting specific customer complaints.This sample notebook shows you how to deploy Credit Report Complaint topic modelling using Amazon SageMaker.> **Note**: This is a reference notebook and it cannot run unless you make changes suggested in the notebook. Pre-requisites:1. **Note**: This notebook contains elements which render correctly in Jupyter interface. Open this notebook from an Amazon SageMaker Notebook Instance or Amazon SageMaker Studio.1. Ensure that the IAM role used has **AmazonSageMakerFullAccess**1. To deploy this ML model successfully, ensure that: 1. Either your IAM role has these three permissions and you have authority to make AWS Marketplace subscriptions in the AWS account used: 1. **aws-marketplace:ViewSubscriptions** 1. **aws-marketplace:Unsubscribe** 1. **aws-marketplace:Subscribe** 2. or your AWS account has a subscription to Credit Report Complaint topic modelling. If so, skip step: [Subscribe to the model package](1.-Subscribe-to-the-model-package) Contents:1. [Subscribe to the model package](1.-Subscribe-to-the-model-package)2. [Create an endpoint and perform real-time inference](2.-Create-an-endpoint-and-perform-real-time-inference) 1. [Create an endpoint](A.-Create-an-endpoint) 2. [Create input payload](B.-Create-input-payload) 3. [Perform real-time inference](C.-Perform-real-time-inference) 4. [Visualize output](D.-Visualize-output) 5. [Delete the endpoint](E.-Delete-the-endpoint)3. [Perform batch inference](3.-Perform-batch-inference) 4. [Clean-up](4.-Clean-up) 1. [Delete the model](A.-Delete-the-model) 2. [Unsubscribe to the listing (optional)](B.-Unsubscribe-to-the-listing-(optional)) Usage instructionsYou can run this notebook one cell at a time (by pressing Shift+Enter to run a cell). 1. Subscribe to the model package To subscribe to the model package:1.
Open the model package listing page Credit Report Complaint topic modelling 1. On the AWS Marketplace listing, click on the **Continue to subscribe** button.1. On the **Subscribe to this software** page, review and click on **"Accept Offer"** if you and your organization agrees with EULA, pricing, and support terms. 1. Once you click on **Continue to configuration button** and then choose a **region**, you will see a **Product Arn** displayed. This is the model package ARN that you need to specify while creating a deployable model using Boto3. Copy the ARN corresponding to your region and specify the same in the following cell. ###Code model_package_arn='' #put your ARN here import base64 import json import uuid from sagemaker import ModelPackage import sagemaker as sage from sagemaker import get_execution_role from sagemaker import ModelPackage from urllib.parse import urlparse import boto3 from IPython.display import Image from PIL import Image as ImageEdit import urllib.request import numpy as np role = get_execution_role() print(role) sagemaker_session = sage.Session() bucket=sagemaker_session.default_bucket() bucket ###Output _____no_output_____ ###Markdown 2. Create an endpoint and perform real-time inference If you want to understand how real-time inference with Amazon SageMaker works, see [Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html). ###Code model_name='topic-model' content_type='text/csv' real_time_inference_instance_type='ml.m5.large' batch_transform_inference_instance_type='ml.m5.large' ###Output _____no_output_____ ###Markdown A. Create an endpoint ###Code def predict_wrapper(endpoint, session): return sage.predictor.Predictor(endpoint, session,content_type) #create a deployable model from the model package. 
model = ModelPackage(role=role, model_package_arn=model_package_arn, sagemaker_session=sagemaker_session, predictor_cls=predict_wrapper) #Deploy the model predictor = model.deploy(1, real_time_inference_instance_type, endpoint_name=model_name) ###Output _____no_output_____ ###Markdown Once endpoint has been created, you would be able to perform real-time inference. B. Create input payload ###Code file_name = 'sample_input.csv' ###Output _____no_output_____ ###Markdown C. Perform real-time inference ###Code !aws sagemaker-runtime invoke-endpoint \ --endpoint-name $model_name \ --body fileb://$file_name \ --content-type $content_type \ --region $sagemaker_session.boto_region_name \ output.csv ###Output _____no_output_____ ###Markdown D. Visualize output ###Code import pandas as pd pd.read_csv("output.csv",header=None) ###Output _____no_output_____ ###Markdown E. Delete the endpoint Now that you have successfully performed a real-time inference, you do not need the endpoint any more. You can terminate the endpoint to avoid being charged. ###Code predictor=sage.predictor.Predictor(model_name, sagemaker_session,content_type) predictor.delete_endpoint(delete_endpoint_config=True) ###Output _____no_output_____ ###Markdown 3. Perform batch inference In this section, you will perform batch inference using multiple input payloads together. If you are not familiar with batch transform, and want to learn more, see these links:1. [How it works](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-batch-transform.html)2. 
[How to run a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html) ###Code #upload the batch-transform job input files to S3 #maintain this folder structure, which means create a folder named "Input" and put the sample_input.csv file inside it transform_input_folder = "Input" transform_input = sagemaker_session.upload_data(transform_input_folder, key_prefix=model_name) print("Transform input uploaded to " + transform_input) #Run the batch-transform job transformer = model.transformer(1, batch_transform_inference_instance_type) transformer.transform(transform_input, content_type=content_type) transformer.wait() import os s3_conn = boto3.client("s3") with open('output.csv', 'wb') as f: s3_conn.download_fileobj(bucket, os.path.basename(transformer.output_path)+'/sample_input.csv.out', f) print("Output file loaded from bucket") pd.read_csv("output.csv",header=None) ###Output _____no_output_____ ###Markdown 4. Clean-up A. Delete the model ###Code model.delete_model() ###Output _____no_output_____
xgboostLab.ipynb
###Markdown XGBoost Lab ReflectionsLet's go back to thinking about a few algorithms we worked on. Decision treesWe began our exploration of decision trees with a mountain bike example:![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/dtree77.png)Here is roughly what we did by hand.1. We determined that if we couldn't ask any questions, we would say the person mountain biked since they mountain biked 9 times and didn't 5 times. So our error rate was 5 out of 14 or roughly 36%2. Next, if we could ask one question we determined that the question should be about Outlook. Now our error rate was 4 out of 14 or 29%3. Then we determined the next question to ask and reduced the error rate more. And then the next question ...In some sense, the algorithm is additive. We start with zero questions with whatever error rate. Add a question and reduce the error rate. Add another question and reduce the rate. And so on.**Additive** is the key word. Let's look at an example, from [Gradient boosting: Distance to target](https://explained.ai/gradient-boosting/L2-loss.html) by Terence Parr and Jeremy Howard. They ask us to imagine writing the formula for *y* that matches this plot:![](https://explained.ai/gradient-boosting/images/L2-loss/L2-loss_additive_2.svg)Like the decision tree example above, our first approximation might be simple, perhaps just the y-intercept:$$y = 30$$as shown in the leftmost picture below. ![](https://explained.ai/gradient-boosting/images/L2-loss/L2-loss_additive_3.svg)Next, we may want to add in the slope of the line and get$$y = 30 + x$$and get the middle graph above. Finally, we add in the squiggle:$$y = 30 + x + \sin(x)$$We have decomposed a complex task into subtasks, each refining the previous approximation. So, again, we have an additive algorithm.This approach shouldn't be surprising to us since this is how we typically develop programs. We get some skeleton code working and then incrementally add to it.
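We can check this additivity numerically. The sketch below (my own illustration, not part of the lab) rebuilds a target curve in three additive stages and shows that each stage shrinks the remaining error:

```python
import numpy as np

x = np.linspace(0, 20, 200)
y = 30 + x + np.sin(x)            # the curve we want to recover

f0 = np.full_like(x, y.mean())    # stage 0: just a constant
f1 = f0 + (x - x.mean())          # stage 1: add the slope
f2 = f1 + np.sin(x)               # stage 2: add the squiggle

# mean absolute error left after each additive stage
errors = [np.abs(y - f).mean() for f in (f0, f1, f2)]
print(errors)
```

Each entry in `errors` is smaller than the one before it: every additive refinement eats away at what the previous stages left behind.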
BoostingBoosting algorithms work in a similar additive fashion. We first develop a simple model that roughly classifies the data. Next, we add another simple model that is focused on ameliorating the errors of the first. And then we add another and another.$$boosting=model_1 + model_2 + model_3 + ... + model_n$$ How boosting differs from bagging and pastingWith bagging and pasting we created a number of decision trees each of which was trained on different data. **One tree did not influence the construction of another.** Thus, each classifier was independent of the others. BoostingBoosting is different. Imagine that we create one decision tree classifier. Let's call it Classifier 1. Classifier 1 doesn't perform with 100% accuracy. Next we create a second decision tree classifier and as part of its training data we will use the instances that Classifier got wrong. Now Classifier 2 isn't perfect either and there will be some instances that both Classifier 1 and Classifier 2 got wrong, and, you guessed it, we will use those instances as part of the training data for Classifier 3. 400 ClassifiersSuppose we created 400 classifiers using the bagging algorithm. Since each classifier is independent of the others, we can run those 400 in parallel. Now think about boosting for a moment. Can we run those in parallel? Think about it for 1. second2. seconds3. seconds4. seconds5. secondsSince one classifier is dependent on the errors of the others it seems like we couldn't run them in parallel and training 400 classifiers sequentially seems impractical. This is true in general with boosting algorithms but as we will see XGBoost is different. 
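The sequential idea described above can be sketched in a few lines (my own illustration of the concept, not how XGBoost itself is implemented): each new decision stump is trained with extra weight on the instances the previous stump got wrong, and the ensemble votes at the end.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)
weights = np.ones(len(y)) / len(y)   # start with uniform weights

stumps = []
for _ in range(5):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=weights)
    wrong = stump.predict(X) != y
    weights[wrong] *= 2              # the next stump focuses on these mistakes
    weights /= weights.sum()
    stumps.append(stump)

# Majority vote across the sequentially trained stumps
votes = np.mean([s.predict(X) for s in stumps], axis=0)
ensemble_accuracy = np.mean((votes > 0.5) == y)
print(ensemble_accuracy)
```

Notice that the loop body depends on the previous iteration's mistakes, which is exactly why these five stumps cannot be trained in parallel.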
Gradient BoostingSuppose I am interested in taking my camper van ![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/travato2.png)to White Horse Road Dispersed Camping in Utah.![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/wildHorse.png)And to get there from my home in Santa Fe, I am using an old school paper map.![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/map.png)A route will be something like.$$route = road_0 + road_1 + road_2 + ... + road_n$$To get to White Horse Road, it looks like my best bet is to start by taking I25 to Albuquerque.$$route = i25 $$Now the difference between where I am and where I want to go is Albuquerque to White Horse. So I performed an action and now my new problem is dealing with this new problem of getting from Albuquerque to White HorseFrom Albuquerque I can take 550 to Farmington$$route = i25 + US550$$and from there take 491 to Monticello Utah$$route = i25 + US550+ US491$$and so on.There are some similarities between this old school mapping and gradient boosting. In gradient boosting we start with a poor model (in our case, we decided to go to Albuquerque). Then we are going to look at the difference between what we want and where we are-- and then take the next step, the delta $\Delta$. Let's look at a simple example of classification of one feature *x* to predict a label *y*. We will label our prediction $\hat{y}$. For gradient boosting our formula is$$\hat{y}=f_0(x) + \Delta_1(x) + \Delta_2(x) + ... + \Delta_m(x)$$Where $\Delta_1$ is the first improvement, $\Delta_2$ the second and so on.Gradient Boosting is an ensemble method, meaning that it is built with a number of sub-classifiers. So perhaps a better Utah analogy is that I hitchhike from here to Albuquerque with one person (one 'classifier'), then go to Framington with another and so on.This is the rough intuition of gradient boosting. 
In any gradient algorithm there is a parameter called *learning rate* and in a sense it is how big of steps we can take. Suppose we are hiking on a mountain in Utah and suddenly we are fogged in and can't see a thing. We want to get back to our van in the valley.In my 2D Utah it looks like this:![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/gradient1.png)The purple dot is us near the top of the mountain and the burnt orange dot is our van. So our algorithm is```WHILE NOT AT VAN OR NOT MOVING: take one step to the left. IF we are lower than when we started: stay here at the new location ELSE go back to starting point and go one step to the right IF we are lower than when we started: stay here at the new location ELSE go back to starting point```We repeat the above procedure and get to the state shown on the right above. If we take a step to the right or left we go uphill so we are stuck. We hit what is called a local minima and local minima are a problem with all gradient descent algorithms.Perhaps the one step was too small an increment. So let's say we have a rope. You stay where we are and hold one end of the rope and I walk until I reach the end of the rope. Based on the angle of the rope, we see if I am lower or not and we move accordingly. Now we jump over that local minima and reach a state that looks like the following image on the left. We don't know it, but we are almost to the van!![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/gradient2.png)We use the rope technique again but this time I jump over the location of our van since I am not at the end of the rope yet and am in the position shown on the right. The learning rate was too large. (Now I am sounding like the three bears tale!)The one step was our learning rate as was our rope technique and you can see that selecting a good one is crucial. 
Loss FunctionFor both these examples, one thing we needed was a measure for how far away are we from our goal. Are we better or worse? For the fog on a mountain example, the loss function was our altitude and we are trying to reduce the loss -- the altitude. Two more examples One Dimensional Team Frisbee GolfHere is my representation of our 1D golf game. The hole is the green circle on the right and our frisbee's location is shown with the lovely pink circle on the left. Let $y$ be the actual distance between the two and $x$ what I see standing by the frisbee--off in the one dimensional distance I see the hole. ![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/golf1.png)It is player zero's turn and she estimates the distance to be 70 yards.$$f_0(x) = 70$$ She flings the frisbee and ...![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/golf2.png)Now it is player two's turn. He is only concerned with the difference, the $\Delta_1$ --the current position of the frisbee and the location of the hole. He estimates it to be 20 yards$$\Delta_1(x) = 20$$So far we have flung the frisbee$$\hat{y}= f+0(x) + \Delta_1(x) = 70 + 20 = 90$$![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/golf3.png)Now it is player two's turn. She estimates the distance remaining ($\Delta_2$) to be 15 yards...![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/golf4.png)And she overshot. Player three estimates the remaining distance to be -5 yards and ...![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/golf5.png)Notice that each player is not concerned with the original problem. She is just concerned with the **residual** --- meaning what is remaining based on the previous players' results.The formula is $$\hat{y} = f_0(x) + \Delta_1(x) + \Delta_2(x) + ... 
+ \Delta_m(x)$$$$=f_0(x) + \sum_{m=1}^M{\Delta_m(x)}$$So the first classifier works on the original problem but all the rest work on the residual. Expenditures on Makeup and ClothesOk, I have exhausted my creativity, so even though I am not keen on this example, let's go back to predicting a young lady's expenditure on makeup based on what she spends on clothes. And just for readability I am going to have the feature, clothes, represented by *x* and what we want to predict, the makeup, *y* ###Code import pandas as pd from pandas import DataFrame makeup = [3000, 5000, 12000, 2000, 7000, 15000, 5000, 6000, 8000, 10000] clothes = [7000, 8000, 25000, 5000, 12000, 30000, 10000, 15000, 20000, 18000] ladies = ['Ms A','Ms B','Ms C','Ms D','Ms E','Ms F','Ms G','Ms H','Ms I','Ms J',] monthly = DataFrame({'x': clothes, 'y': makeup}, index= ladies) monthly ###Output _____no_output_____ ###Markdown And for our first prediction $f_0$ let's predict just the average value: ###Code monthly['f0'] = monthly.y.mean() monthly ###Output _____no_output_____ ###Markdown and the differences between our predictions and the actual values ###Code monthly['y-f0'] = monthly.y - monthly.f0 monthly ###Output _____no_output_____ ###Markdown That $y-f_0$ is the residual: what is left over, or how far the first classifier was off. The residual is what the second classifier is trying to predict.Next, we are going to create a classifier Δ1 that predicts $y - f_0$ from x. Let's say my next classifier makes the wacky prediction$$(x - 10000)$$ ###Code monthly['Δ1'] = (monthly['x'] - 10000) monthly['y-f1'] = monthly['y-f0'] - monthly['Δ1'] monthly ###Output _____no_output_____ ###Markdown And the next classifier will try to predict $y-f_1$ based on x.If you understand all these examples, from Utah to Makeup, you have a pretty good intuition on how Gradient Boosting works. XGBoostYou may recall that in the first few videos, we mentioned that XGBoost was one of the state-of-the-art algorithms.
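Before moving on to XGBoost itself: in real gradient boosting each Δ in the makeup example would not be hand-picked; it would be a small regression tree fit to the current residual. A minimal sketch using the same spending numbers (illustrative only -- XGBoost does this internally with many refinements):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

clothes = np.array([7000, 8000, 25000, 5000, 12000, 30000, 10000, 15000, 20000, 18000])
makeup = np.array([3000, 5000, 12000, 2000, 7000, 15000, 5000, 6000, 8000, 10000])
X = clothes.reshape(-1, 1)

f0 = np.full(len(makeup), makeup.mean())   # first prediction: the average
residual = makeup - f0

# Delta_1: a shallow tree trained to predict the residual from x
delta1 = DecisionTreeRegressor(max_depth=2).fit(X, residual)
f1 = f0 + delta1.predict(X)

mae0 = np.abs(makeup - f0).mean()
mae1 = np.abs(makeup - f1).mean()
print(mae0, mae1)   # the error shrinks after one boosting step
```

The tree fit to the residual plays exactly the role of the wacky hand-made Δ1, except it is learned from the data.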
The Kaggle competition winners are dominated by deep learning and XGBoost solutions.>I only use XGBoost (Liberty Mutual Property Inspection, Winner's Interview: Qingchen Wang)> As the winner of an increasing amount of Kaggle competitions XGBoost showed us again to be a great all-around algorithm worth having in your toolbox (Dato Winner's Interview, 1st Place, Mad Professors)> The only supervised learning method I used was gradient boosting as implemented in the excellent xgboost package (Recruit Coupon Purchase Winner's Interview, 2nd place, Halla Yang)We are going to start our exploration of XGBoost using the Iris dataset, which we have used before. ###Code from IPython.display import YouTubeVideo YouTubeVideo('1jLIRJwfZhg') ###Output _____no_output_____ ###Markdown This reminds me of a section of the *Hitchhiker's Guide to the Galaxy* by Douglas Adams, where Marvin, the robot, is asked to bring two hitchhikers to the bridge and he says:> Here I am, brain the size of a planet, and they ask me to take you to the bridge. Call that job satisfaction, 'cause I don'tXGBoost is an extremely powerful state-of-the-art algorithm and we are using it on a toy example. Oh well. GPU!We are going to be running this code on a Graphics Processing Unit, GPU, a graphics card.To do so, under the runtime menu above, select **Change Runtime Type** and select **GPU**That's it! Now let's check out what GPU we are using: ###Code !nvidia-smi ###Output Thu Mar 25 16:33:36 2021 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.56 Driver Version: 460.32.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M.
| |===============================+======================+======================| | 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 | | N/A 57C P8 10W / 70W | 0MiB / 15109MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ ###Markdown It is a Tesla T4, which has 320 tensor cores.Now let's load the database The Iris Data Set Load the dataset: ###Code import pandas as pd from sklearn.model_selection import train_test_split iris = pd.read_csv('https://raw.githubusercontent.com/zacharski/ml-class/master/data/iris.csv') iris_train, iris_test = train_test_split(iris, test_size = 0.2) train_X = iris_train[['Sepal Length', 'Sepal Width', 'Petal Length', 'Petal Width']] train_y = iris_train['Class'] test_X = iris_test[['Sepal Length', 'Sepal Width', 'Petal Length', 'Petal Width']] test_y = iris_test['Class'] ###Output _____no_output_____ ###Markdown Create an instance of the XGBoost classifierWe are going to create an XGBoost classifier with gpu support. ###Code from xgboost import XGBClassifier params = { "n_estimators": 400, 'tree_method':'gpu_hist', 'predictor':'gpu_predictor' } model = XGBClassifier(**params) model ###Output _____no_output_____ ###Markdown Let's take a look at those parameters.* **n_estimators** the number of classifiers in the boost ensemble. The default is 100.* **tree_method** the tree construction algorithm that is used. `gpu_hist` is a distributed histogram approach (see the [original paper](https://arxiv.org/pdf/1603.02754.pdf))* **predictor** the prediction algorithm to use. 
`gpu_predictor` means use the gpu!* **max_depth** the depth of the decision trees. The default of 3 is used here. The trees for any ensemble method are typically very shallow. Fitting model to the data ###Code model.fit(train_X, train_y) ###Output _____no_output_____ ###Markdown evaluate modelFinally let's evaluate the model ###Code from sklearn.metrics import accuracy_score iris_predictions = model.predict(test_X) accuracy_score(test_y, iris_predictions) ###Output _____no_output_____ ###Markdown We ran a state-of-the-art algorithm on a GPU. Yay us!Now we are going to back up quite a bit. Bagging and PastingWith bagging and pasting we created a number of decision trees each of which was trained on different data. One tree did not influence the construction of another. Each classifier was independent of the others. BoostingBoosting is different. Imagine that we create one decision tree classifier. Let's call it Classifier 1. Classifier 1 doesn't perform with 100% accuracy. Next we create a second decision tree classifier and as part of its training data we will use the instances that Classifier got wrong. Now Classifier 2 isn't perfect either and there will be some instances that both Classifier 1 and Classifier 2 got wrong, and, you guessed it, we will use those instances as part of the training data for Classifier 3. 400 ClassifiersSuppose we created 400 classifiers using the bagging algorithm. Since each classifier is independent of the others, we can run those 400 in parallel. Now think about boosting for a moment. Can we run those in parallel?Since one classifier is dependent on the errors of the others it seems like we couldn't run them in parallel and doing 400 classifiers in series seems impractical. Fortunately for us, XGBoost has parallelized training! The task - The Adult DatasetLet's try a bit larger dataset, the [Adult Dataset](http://archive.ics.uci.edu/ml/datasets/Adult). The webpage describes the problem. 
We are trying to predict whether someone makes more than $50,000 a year based on a number of features. The data folder contains both training data `adult.data` and test data `adult.test`. Prepare the data. ###Code colNames = ['age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'wage'] adult = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data', names=colNames) adult ###Output _____no_output_____ ###Markdown divide features and labelslet's create 2 DataFrames, one for the features and one for the labels ###Code adult_features = adult.drop('wage', axis=1) adult_labels = adult['wage'] adult_features ###Output _____no_output_____ ###Markdown Now let's one hot encode the features; the solution below uses pandas' `get_dummies` to do it. ###Code #TODO adultSparse = pd.get_dummies(adult, columns=['age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country']) adultSparse ###Output _____no_output_____ ###Markdown Fantastic! Let's go ahead and divide this up into training and test sets. (Notice that this is a bit different than we have been doing it.) ###Code from sklearn.model_selection import train_test_split adult_train_features, adult_test_features, adult_train_labels, adult_test_labels = train_test_split(adultSparse, adult_labels, test_size = 0.7) adult_train_features = adult_train_features.drop(['wage'], axis=1) ###Output _____no_output_____ ###Markdown You may have noticed that we put a whopping 70% of the data in the test set.
We did this because when we are just playing with things to gain an understanding we don't want to wait hours for a result.Create an XGBoost classifier called model with the parameters:* `tree_method: gpu_hist`* `predictor: gpu_predictor` ###Code ## TO DO call it model from xgboost import XGBClassifier params = { 'tree_method':'gpu_hist', 'predictor':'gpu_predictor' } model = XGBClassifier(**params) model ###Output _____no_output_____ ###Markdown Now let's say we want to find the best hyperparameter values for * n_estimators -- let's try 50, 100, 150, 200* max_depth -- let's try 2, 4, 6, 8Go ahead and create the `param_grid` ###Code model.fit(adult_train_features, adult_train_labels) from sklearn.model_selection import GridSearchCV, KFold seed=42 n_estimators=[50,100,150,200] max_depth=[2,4,6,8] param_grid=dict(n_estimators=n_estimators, max_depth=max_depth) grid = GridSearchCV(estimator=model, param_grid=param_grid, cv=KFold(n_splits=5, shuffle=True, random_state=seed), verbose=10) grid_results = grid.fit(adult_train_features, adult_train_labels) print("Best: {0}, using {1}".format(grid_results.best_score_, grid_results.best_params_)) means = grid_results.cv_results_['mean_test_score'] stds = grid_results.cv_results_['std_test_score'] params = grid_results.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print('{0} ({1}) with: {2}'.format(mean, stdev, param)) print("it is done") ###Output [Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
###Markdown Oh my goodness, that took forever. Here are the results: Best: 0.867629204420567, using {'max_depth': 6, 'n_estimators': 200} Time Constraint: Even with a GPU it is going to take a long time to do an exhaustive search of which parameters are best. There are 16 possible combinations. We may want 5-fold cross validation. That is 80 fits, each of which builds on average over 100 trees. And we have around 20,000 instances in our training data. Let's pick a random smaller set of combinations to test. Let's say we want the search algorithm to select 5 combinations of hyperparameters (`param_comb`) at random. ###Code # TO DO from sklearn.model_selection import StratifiedKFold from sklearn.model_selection import RandomizedSearchCV param_comb = 5 folds=5 skf = StratifiedKFold(n_splits=folds, shuffle = True, random_state = 1001) random_search = RandomizedSearchCV(model, param_distributions=param_grid, n_iter=param_comb, n_jobs=-1, cv=skf.split(adult_train_features,adult_train_labels), verbose=3) ###Output _____no_output_____ ###Markdown Let's fit the model (this will take a while) ###Code %%time #grid_result = random_search.fit(adult_train_features, adult_train_labels) #This is what I did above and it took 40 minutes ###Output CPU times: user 3 µs, sys: 0 ns, total: 3 µs Wall time: 6.91 µs ###Markdown Now let's see what the best parameters are, make predictions on our test data, and check accuracy... ###Code from sklearn.metrics import accuracy_score grid_results.best_params_ predictions = random_search.best_estimator_.predict(adult_test_features) accuracy_score(adult_test_labels, predictions) ###Output _____no_output_____ ###Markdown This ends our first look at XGBoost. ###Code ###Output _____no_output_____
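The fit counting in the Time Constraint discussion can be spelled out with a short stdlib-only sketch (no sklearn or GPU needed; the grid values are the ones tried above):

```python
from itertools import product
import random

n_estimators = [50, 100, 150, 200]
max_depth = [2, 4, 6, 8]

# Exhaustive grid search tries every pairing of the two lists.
all_combos = list(product(n_estimators, max_depth))
print(len(all_combos))       # 16 combinations

# With 5-fold cross validation, that is 16 * 5 = 80 fits.
print(len(all_combos) * 5)   # 80

# Randomized search instead samples just param_comb of the combinations.
random.seed(42)
param_comb = 5
sampled = random.sample(all_combos, param_comb)
print(len(sampled))          # 5
```

This is the trade-off `RandomizedSearchCV` makes: a fraction of the fits, at the cost of possibly missing the true best combination.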
Day_009_correlation_example.ipynb
###Markdown The following code demonstrates how to use numpy in Python to compute the correlation coefficient between two sets of data, and how to inspect the scatter plot ###Code import numpy as np np.random.seed(1) import matplotlib import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown Positive correlation ###Code # Randomly generate 1000 numbers between 0 and 50 x = np.random.randint(0, 50, 1000) # Positively correlated, with some noise added y = x + np.random.normal(0, 10, 1000) # We can use numpy's built-in function to compute the correlation coefficient np.corrcoef(x, y) plt.scatter(x, y) ###Output _____no_output_____ ###Markdown Negative correlation ###Code # Randomly generate 1000 numbers between 0 and 50 x = np.random.randint(0, 50, 1000) # Negatively correlated, with some noise added y = 100 - x + np.random.normal(0, 5, 1000) np.corrcoef(x, y) plt.scatter(x, y) ###Output _____no_output_____ ###Markdown Weak correlation ###Code x = np.random.randint(0, 50, 1000) y = np.random.randint(0, 50, 1000) np.corrcoef(x, y) plt.scatter(x, y) ###Output _____no_output_____
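The value `np.corrcoef` reports can also be computed by hand from the definition of Pearson's r, namely cov(x, y) divided by the product of the standard deviations. A minimal sketch (the data is regenerated here with the same seed, so it mirrors the positive-correlation cell above):

```python
import numpy as np

np.random.seed(1)
x = np.random.randint(0, 50, 1000).astype(float)
y = x + np.random.normal(0, 10, 1000)

# Pearson r from its definition: covariance over the product of std devs.
# The 1/n factors cancel, so population vs sample normalization does not matter.
r_manual = ((x - x.mean()) * (y - y.mean())).mean() / (x.std() * y.std())
r_numpy = np.corrcoef(x, y)[0, 1]
print(abs(r_manual - r_numpy) < 1e-9)  # the two computations agree
```

For this noise level the coefficient lands around 0.8, consistent with the visibly positive scatter.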
05_deep_q_learning/dqn_per/Modularized/.ipynb_checkpoints/dqn_per-checkpoint.ipynb
###Markdown Deep Q-Network With Prioritized Experience Replay (DQN_PER)--- 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline from agent_dqn_per import Agent ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) agent = Agent(state_size=8, action_size=4, seed=0) def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) 
torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -192.83 Episode 200 Average Score: -136.72 Episode 300 Average Score: -42.177 Episode 400 Average Score: -9.269 Episode 500 Average Score: -4.236 Episode 600 Average Score: 17.57 Episode 700 Average Score: -4.75 Episode 800 Average Score: 56.98 Episode 890 Average Score: 200.58 Environment solved in 790 episodes! Average Score: 200.58
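The exploration schedule inside `dqn()` can be examined on its own. This standalone sketch reproduces just the epsilon bookkeeping (multiplicative decay with a floor), with no environment or agent required:

```python
def epsilon_schedule(n_episodes, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
    """Return the epsilon value used at each episode, as dqn() computes it."""
    eps, schedule = eps_start, []
    for _ in range(n_episodes):
        schedule.append(eps)
        eps = max(eps_end, eps_decay * eps)  # decay, but never below eps_end
    return schedule

schedule = epsilon_schedule(2000)
print(schedule[0])   # 1.0 -- fully random actions at first
print(schedule[-1])  # 0.01 -- floored at eps_end long before episode 2000
```

With `eps_decay=0.995` the floor of 0.01 is reached after roughly 920 episodes, so in the 890-episode run above the agent was still decaying epsilon almost the whole time.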
Operations_and_Expressions.ipynb
###Markdown ###Code ###Output _____no_output_____ ###Markdown Boolean Operators ###Code #Booleans represent one of two values: True or False print(11>10) print(11==10) print(10>11) a=10 b=9 print(a>b) print(a==a) print(b>a) print(bool("Hello")) print(bool(15)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool("")) #an empty string evaluates to False def myFunction():return True print(myFunction()) def myFunction():return True if myFunction(): print("YES") else: print("NO") ###Output _____no_output_____ ###Markdown You try ###Code a=6 b=7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #division-quotient print(10%5) #modulo division print(10%3) #modulo division print(10//3) #floor division a=60 #0011 1100 b=13 #0000 1101 print(a&b) print(a|b) print(a^b) print(a<<2) print(a>>2) x=6 x+=3 #x=x+3 print(x) x%=3 print(x) a=True b=False print (a and b) print (a or b) print (not(a and b)) print (not(a or b)) #negation print(a is b) print (a is not b) ###Output _____no_output_____ ###Markdown Boolean Operator ###Code x=1 y=2 print(x>y) print(10>11) print(10==10) print(10!=11) #using bool() function print(bool("Hello")) print(bool(15)) print(bool(1)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) ###Output True True True False False False False ###Markdown Functions can return Boolean ###Code def myFunction(): return True print(myFunction()) def yourFunction(): return False if yourFunction(): print("Yes!") else: print("No!") ###Output _____no_output_____ ###Markdown You Try!
###Code a=6 b=7 print(a==b) print(a!=a) ###Output False False ###Markdown Arithmetic Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #modulo division, remainder print(10//5) #floor division print(10//3) #floor division print(10%3) #3 x 3 = 9 ###Output 15 5 50 2.0 0 2 3 1 ###Markdown Bitwise Operators ###Code a = 60 # 0011 1100 b = 13 # 0000 1101 print(a&b) print(a|b) print(a^b) print(~a) print(a<<1) # 0111 1000 print(a<<2) # 1111 0000 print(b>>1) # 1 0000 0110 print(b>>2) # 0000 0011 carry flag bit = 01 ###Output 12 61 49 -61 120 240 6 3 ###Markdown Python Assignment Operators ###Code a+=3 # Same As a = a + 3 # Same As a = a + 3, a = 63 print(a) ###Output 63 ###Markdown Logical Operators ###Code #and logical operator a = True b = False print(a and b) print(not(a and b)) print(a or b) print(not(a or b)) print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code a = 7 b = 6 print(10>9) print(10<9) print(a>b) ###Output True False True ###Markdown bool() function ###Code print(bool("Maria")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) ###Output True True False False False True ###Markdown Functions can return a Boolean ###Code def my_Function(): return True print(my_Function()) if my_Function(): print("True") else: print("False") ###Output True ###Markdown Application 1 ###Code print(a==b) print(a<b) ###Output _____no_output_____ ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10/3) print(10*5) print(10%5) print(10//3) #10/3 = 3.3333 print(10**2) ###Output _____no_output_____ ###Markdown Bitwise Operators ###Code c = 60 # binary 0011 1100 d = 13 # binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output _____no_output_____ ###Markdown Logical Operators ###Code h = True l = False h and l h or l not (h or l) ###Output _____no_output_____ ###Markdown Application 2 ###Code #Python Assignment Operators x = 100 x+=3 # Same as x = x +3, x = 
100+3=103 print(x) ###Output _____no_output_____ ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control Structure If statement ###Code if a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is equal to b ###Markdown Else Statement ###Code a= 10 b =10 if a>b: print("a is greater than b") elif a>b: print("a is greater than b") else: print("a is equal to b") ###Output _____no_output_____ ###Markdown Short Hand If Statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If...Else Statement ###Code a = 10 b = 9 print("a is greater than b") if a>b else print('b is greater than a') ###Output a is greater than b ###Markdown And ###Code if a>b and b==b: print("both conditions are True") ###Output both conditions are True ###Markdown Or ###Code if a<b or b==b: print("the condition is True") ###Output _____no_output_____ ###Markdown Nested If ###Code x = int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output _____no_output_____ ###Markdown Loop Statement For Loop ###Code week = ['Sunday',"Monday",'Tuesday', "Wednesday","Thursday","Friday","Saturday"] for x in week: print(x) ###Output _____no_output_____ ###Markdown The break statement ###Code #to display Sunday to Wednesday using For loop for x in week: print(x) if x=="Wednesday": break #to display only Wednesday using break statement for x in week: if x=="Wednesday": break print(x) ###Output _____no_output_____ ###Markdown While Statement ###Code i =1 while i<6: print(i) i+=1 #same as i = i +1 ###Output _____no_output_____ ###Markdown Application 3- Create a python program that displays no.3 using break statement ###Code i =1 
while i<6: if i==3: break i+=1 print(i) ###Output _____no_output_____ ###Markdown Boolean Operators ###Code a = 7 b = 6 print(10>9) print(10<9) ###Output True False ###Markdown Bool() Function ###Code #if True print(bool("True")) print(bool(1)) #if False print(bool(0)) print(bool(False)) ###Output True True False False ###Markdown Functions can return a Boolean ###Code def my_Function(): return True print(my_Function()) ###Output True ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10/3) print(10*5) print(10%5) print(10//3) #10/3 = 3.3333 print(10**2) ###Output 15 5 3.3333333333333335 50 0 3 100 ###Markdown Bitwise Operators ###Code c = 60 #Binary 0011 1100 d = 13 #Binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code h = True l = False h and l h or l not (h or l) ###Output _____no_output_____ ###Markdown Application 2 ###Code x = 100 x += 3 #Same as x = x + 3, x = 100 + 3 = 103 print(x) ###Output 103 ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control StructureIf Statement ###Code if a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Else Statement ###Code a = 10 b = 10 if a>b: print("a is greater than b") elif a<b: print("a is less than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If Statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If... 
Else Statement ###Code a = 10 b = 9 print("a is greater than b") if a>b else print('b is greater than a') ###Output a is greater than b ###Markdown And ###Code if a>b and b==b: print("both conditions are True") ###Output both conditions are True ###Markdown Or ###Code if a<b or b==b: print("the condition is True") ###Output the condition is True ###Markdown Nested If ###Code x = int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output 48 x is above 10 and also above 20 and also above 30 and also above 40 but not above 50 ###Markdown For Loop ###Code week = ['Sunday', "Monday", 'Tuesday', "Wednesday", 'Thursday', "Friday", 'Saturday'] for x in week: print(x) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown The Break Statement ###Code #to display Sunday to Wednesday using For loop for x in week: print(x) if x=="Wednesday": break #to display only Wednesday using break statement for x in week: if x=="Wednesday": break print(x) ###Output Wednesday ###Markdown While Statement ###Code i = 1 while i<6: print(i) i += 1 #same as i = i + 1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a Python that displays no.3 using break statement ###Code i = 1 while i<6: if i==3: break i += 1 print(i) ###Output 3 ###Markdown Boolean Operators ###Code print (10>9) print (10==9) print (10<9) a =10 b= 9 c= 8 c = print (a>b) print (bool ("Hello!")) print (bool(15)) print (bool(True)) print (bool(False)) print (bool(1)) print (bool(0)) print (bool(None)) print (bool([])) print (bool([{}])) def myFunction():return True print (myFunction()) def myFunction():return True if myFunction(): print ("Yes") else: print ("No") def myFunction():return True if myFunction(): print ("True") else: print ("False") print (10>9) a=6 b=7 print (a==b) print (6==6) print (a!=a) ###Output True False True 
False ###Markdown Python Operators ###Code print (10+5) print (10-5) print (10*5) print (10/5) print (10%5) print (10//3) print (10**2) print (10**2) ###Output 15 5 50 2.0 0 3 100 100 ###Markdown Bitwise Operators ###Code a = 60 #0011 1100 b = 13 print (a^b) print (a & b) print (a | b) print (~a) print (a<<2) print (a>>2) #0000 1111 ###Output 49 12 61 -61 240 15 ###Markdown Assignment Operator ###Code x = 2 x+=3 #Same As x=x+3 print (x) x ###Output 5 ###Markdown Logical Operator ###Code a = 5 b = 6 print (a>b and a==a) print (a<b or b==a) ###Output False True ###Markdown Identity Operator ###Code print (a is b) print (a is not b ) ###Output False True ###Markdown Boolean OperatorBooleans represent one of two values.True or False ###Code print(10>9) print(10==9) print(9>10) print(bool(True)) print(bool(False)) print(bool(1)) print(bool(0)) print(bool([])) print(bool(None)) def myFunction():return True print(myFunction()) def myFunction():return False print(myFunction()) #Boolean answer of a function def myFunction():return True if myFunction(): print("Yes") else: print("No") #Boolean answer of a function def myFuntion():return False if myFunction(): print("Yes!") else: print("No!") ###Output Yes! ###Markdown You Try! 
###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) a=6 b=7 print(a==b) print(a!=a) ###Output False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10//5) #Floor division print(10/3) print(10//3) #Floor division print(10%3) #Modulo print(10**2) #concatenation ###Output 15 5 50 2.0 2 3.3333333333333335 3 1 100 ###Markdown Python Bitwise Operators ###Code a= 60 #0011 1100 , 0111 1000, 1111 0000 b= 13 #0000 1101 print (a & b) print (a | b) print (a << 1) print (a << 2) print (a >> 1) #0011 1100, 0001 1110 a+=2 #Same as a=a + 2, a=60+2, a = 62 print(a) ###Output 62 ###Markdown Logical Operators ###Code a = True b = False print(a and b) print(a or b) a = 60 b = 13 (a>b) and (a<b) a = 60 b = 13 (a>b) and (a<b) (a>b) or (a<b) not(a>b) ###Output _____no_output_____ ###Markdown Identity Operators ###Code print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operator ###Code x = 1 y = 2 print(x>y) print(10>11) print(10==10) print(10!=11) #using bool() function print(bool("Hello")) print(bool(15)) print(bool(1)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) ###Output True True True True False False False False ###Markdown Functions can return Boolean ###Code def myFunction(): return False print(myFunction()) def yourFunction(): return False if yourFunction(): print("Yes!") else: print("No") ###Output No ###Markdown You Try! 
###Code a = 6 b = 7 print(a==b) print(a!=a) ###Output False False ###Markdown Arithmetic Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #modulo division, remainder print(10//5) #floor division print(10//3) #floor division print(10%3) #3 x 3 = 9 + 1 ###Output 15 5 50 2.0 0 2 3 1 ###Markdown Bitwise Operators ###Code a = 60 # 0011 1100 b = 13 # 0000 1101 print(a&b) print(a|b) print(a^b) print(~a) print(a<<1) # 0111 1000 print(a<<2) #1111 0000 print(b>>1) # 1 0000 0110 print(b>>2) # 0000 0011 carry flag bit = 01 ###Output 12 61 49 -61 120 240 6 3 ###Markdown Python Assignment Operators ###Code a+=3 # Same As a = a + 3 #Same As a = 60 + 3, a=63 print(a) ###Output 63 ###Markdown Logical Operators ###Code #and logical operator a = True b = False print(a and b) print(a or b) print(not(a or b)) print(not(a and b)) print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code a=7 b=6 print(10>9) print(10<9) print(a>b) ###Output True False True ###Markdown Bool() function ###Code print(bool("Alfred")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) ###Output True True False False False True ###Markdown Functions can return a Boolean ###Code def my_Function(): return True print(my_Function()) if my_Function(): print("True") else: print("False") print (10>9) a=6 b=7 print(a==b) print(a!=a) a<b a>b a=b ###Output True False False ###Markdown Phython Operators ###Code print(10+5) print(10-5) print(10/3) print(10*5) print(10%5) print(10//3) #10/3=3.3333 ###Output 15 5 3.3333333333333335 50 0 3 ###Markdown Bitwise Operators ###Code c=60 # binary 0011 1100 d=13 # binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code h = True l = False h and l h or l not (h or l) ###Output _____no_output_____ ###Markdown Application 2 ###Code #Phython Assignment Operators x = 100 x+=3 #Same as x = x +3, x = 100+3=103 print(x) ###Output 103 
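Each arithmetic operator seen above has an augmented-assignment form like the `x += 3` in Application 2. A small sketch (values chosen for illustration) showing that every `x op= v` is shorthand for `x = x op v`:

```python
x = 100
x += 3    # x = x + 3   -> 103
x -= 3    # x = x - 3   -> 100
x *= 2    # x = x * 2   -> 200
x //= 3   # x = x // 3  -> 66 (floor division)
x %= 7    # x = x % 7   -> 3  (remainder of 66 / 7)
print(x)  # 3
```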
###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control Structure if statement ###Code if a>b: print("a is greater than b") ###Output _____no_output_____ ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print*("a is greater than b") ###Output _____no_output_____ ###Markdown Short hand If Statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand if...Else Statement ###Code a = 6 b = 9 print("a is greater than b") if a>b else print("b is greater than a") ###Output b is greater than a ###Markdown And ###Code if a<b and b==b: print("the condition is True") ###Output the condition is True ###Markdown Nested if ###Code x = int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") else: print("but not above 40") ###Output 20 x is above 10 ###Markdown Loop Statement For Loop ###Code week =["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday"] for x in week: print(x): Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Output _____no_output_____ ###Markdown The Break Statement ###Code #to display Sunday to Wednesday using for loop for x in week: print(x) if x=="Wednesday": break #to display only Wednesday using break statement for x in week: if x=="Wednesday": break print(x) ###Output _____no_output_____ ###Markdown While Statement ###Code i=1 while i<6: print(i) i+=1 #same as i = i+1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a phython prgram that displays no.3 using break statement ###Code i=1 while i<6: if i==3: break i+=1 print(i) ###Output 2 3 ###Markdown Boolean Operators ###Code print (10>9) print (10==9) a=10 b=9 c=8 c= print (a>b) c print(bool("hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(1)) print(bool(0)) print(bool(None)) print(bool([])) def myFunction(): return True 
print(myFunction()) def myFunction(): return True if myFunction(): print("True") else: print("False") print(10>9) a=6 # 0000 0110 b=7 # 0000 0111 print(a==b) print(6==6) print(a!=a) ###Output True False True False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) print(10//3) print(10**2) ###Output 15 5 50 2.0 0 3 100 ###Markdown Bitwise Operators ###Code a = 60 #0011 1100 b = 13 print(a^b) print(~a) print(a<<2) print(a>>2) # 0000 1111 ###Output 49 -61 240 15 ###Markdown Assignment Operators ###Code x = 2 x+=3 #Same As x = x+3 print(x) x ###Output 5 ###Markdown Logical Operators ###Code a = 5 b = 6 print(a>b and a==a) print(a<b or b==a) ###Output False True ###Markdown Identity Operator ###Code print(a is b) print(a is not b) ###Output False True ###Markdown ***Boolean Operator*** ###Code x=1 y=2 y x=1 y=2 print(y) x=1 y=2 print(x>y) print(10>11) print(10==10) print(10!=11) #using bool() function print(bool("Hello")) print(bool(15)) print(bool(1)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) ###Output True True True True False False False False ###Markdown ***Functions can return Boolean*** ###Code def myFunction(): return True print(myFunction()) def myFunction(): return False print(myFunction()) def yourFunction(): return True if yourFunction(): print("Yes!") else: Print("No!") def yourFunction(): return False if yourFunction(): print("Yes!") else: print("No!") ###Output No! 
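The `bool()` calls above all follow Python's truthiness rules: zero of any numeric type, empty containers, the empty string, `None`, and `False` are falsy, and everything else is truthy. A compact sketch (the sample values are illustrative):

```python
# Falsy values: numeric zero, empty containers, empty string, None, False.
falsy = [0, 0.0, '', [], {}, (), set(), None, False]
# Truthy values: everything else, including negative numbers.
truthy = [1, -1, 0.5, 'hi', [0], {'k': 1}, True]

print(all(bool(v) is False for v in falsy))   # True
print(all(bool(v) is True for v in truthy))   # True
```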
###Markdown ***You must try!*** ###Code print(10>8) a=5 b=2 print(a==b) print(a!=a) ###Output True False False ###Markdown ***Arithmetic*** ***Operators*** ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #modulo division, remainder print(10//5) #floor division print(10//3) #floor division print(10%3) #3X3=9+1 ###Output 15 5 50 2.0 0 2 3 1 ###Markdown ***Bitwise*** ***Operators*** ###Code a=50 # 0011 1100 b=14 # 0000 1101 print(a&b) print(a|b) print(a^b) print(~a) print(a<<1) #0111 1000 print(a<<2) #1111 0000 print(a>>1) #1 0000 0110 print(a>>2) #0000 0011 ###Output 2 62 60 -51 100 200 25 12 ###Markdown ***Phyton Assignment Operators*** ###Code a+=3 #Same As a=a+3 #Same As a=50+4, a=54 print(a) ###Output 53 ###Markdown ***Logical Operators*** ###Code #and logical operator a=True b=False print(a and b) print(not(a and b)) print(a or b) print(not(a or b)) print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Functions ###Code print (10>9) print (10<9) print (10==9) a = 10 b = 9 print (a>b) print (a<b) print (a==b) ###Output True False False ###Markdown Bool() Function ###Code print (bool(1)) print (bool(0)) print (bool("Michael")) print (bool(None)) print (bool(0)) def my_Function(): return True if my_Function(): print ("Yes") else: print ("No") print (my_Function()) print (10>9) a=6 b=7 print (a==b) print (a!=a) ###Output True False False ###Markdown Python Operators ###Code print (10+5) print (10-5) print (10*5) print (10/5) print (10%5) print (10//5) print (10**5) print(2+5) print(2-5) print(2*5) print(5/2) print(5//2) print(5**2) ###Output 7 -3 10 2.5 2 25 ###Markdown Bitwise Operators ###Code a = 60 b = 13 print(a&b) print(a|b) print(a^b) ###Output 12 61 49 ###Markdown Assignment Operators ###Code a += 3 #Same as a = a + 3 print(a) ###Output 63 ###Markdown Logical Operators ###Code x = True y = False print(x and y) print(x or y) not(print(x and y)) ###Output False True False ###Markdown Identity Operators ###Code print(x is 
y) print(x is x) print(x is not y) ###Output False True True ###Markdown Boolean Operators ###Code #Booleans represent one of two values : True or False print(10>9) print(10==9) print(9>10) a= 10 b= 9 print(a>b) print(a==a) print(b>a) print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) #allows you evaluate and gives False in Return def myFunction(): return True print(myFunction()) def myFunction():return False if myFunction(): print("Yes") else: print("No") ###Output No ###Markdown You Try! ###Code a= 6 a=7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #division-quotient print(10%5) #modulo division print(10%3) #modulo division print(10//3) #floor division print(10**2) #concatenation a = 60 b = 13 print(a & b) print(a|b) print(a^b) print(a<<2) print(a>>2) x=6 x+=3 print(x) x%=3 print(x) a= True b= False print(a and b) print(a or b) print(not(a and b)) print(not(a or b)) #negation print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code print(10>9) print(10<9) print(10==9) a = 10 b = 9 print(a>b) print(a<b) print(a==b) ###Output True False False ###Markdown Bool () Function ###Code print(bool(1)) print(bool(0)) print(bool("Maria")) print(bool(None)) print(bool([])) ###Output True False True False False ###Markdown Functions can Return a Boolean ###Code def my_Function(): return False print(my_Function()) if my_Function(): print("Yes!") else: print("No!") ###Output False No! 
###Markdown Application 1 ###Code print (10>9) a = 6 b = 7 print(a==b) print(a!=a) ###Output True False False ###Markdown Python Operators ###Code print(2+5) print(2-5) print(2*5) print(5/2) print(5//2) print(5**2) ###Output 7 -3 10 2.5 2 25 ###Markdown Bitwise Operators ###Code a = 60 b = 13 print(a&b) print(a|b) print(a^b) ###Output 12 61 49 ###Markdown Assignment Operators ###Code a += 3 #Same as a = a+3 print(a) ###Output 66 ###Markdown Logical Operators ###Code x = True y = False print(x and y) print(x or y) print(not(x and y)) ###Output False True True ###Markdown Identity Operators ###Code print(x is y) print(x is x) print(x is not y) ###Output False True True ###Markdown Boolean Operators ###Code #Booleans represents one or two: True or False print(10>9) print(10==9) print(9>10) a = 10 b = 9 print(a>b) print(a==a) print(b>a) a b>a a>b a==a b>a print(bool("hello")) print(bool(16)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool({})) #allows you to evaluate and gives False in return print(bool([])) #allows you to evaluate and gives False in return def myFunction(): return True if myFunction(): print("Yes") else: print("No") def myFunction(): return False if myFunction(): print("Yes") else: print("No") a=6 b=7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #Division - Quotient print(10%5) #Modulo Division print(10%4) print(10//3) #Floor Division print(10**2) #Concatenation a = 60 #0011 1100 b = 13 #0000 1101 print(a & b) print(a | b) print(a ^ b) print(~a) print(~b) print(a<<2) print(b<<2) print(a>>2) print(b>>2) x=6 x+=3 # x=x+3 print(x) x%=6 # x=6% 3 print(x) a = True b = False print(a and b) print(a or b) print(not(a and b)) #Negation of result print(a is b) print(a is not b) ###Output False True ###Markdown **Boolean Operators** ###Code print(10>9) print(10==9) print(10!=9) print(10<9) ###Output _____no_output_____ ###Markdown **bool() Fnction** ###Code print(bool("Maria")) print(bool(19)) 
print(bool([])) print(bool(0)) print(bool(1)) print(bool(None)) print(bool(False)) ###Output _____no_output_____ ###Markdown **Function can return a Boolean** ###Code def myFunction(): return False print(myFunction()) def myFunction(): return True if myFunction(): print("Yes!") else: print("No!") ###Output _____no_output_____ ###Markdown **Application 1** ###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) a=60 b=13 print(a>b) if a<b: print("a is less than b") else: print("a is greater than b") ###Output _____no_output_____ ###Markdown **Python Operators** ###Code print(10+3) print(10-3) print(10*3) print(10%3) print(10/3) ###Output _____no_output_____ ###Markdown **Bitwise Operators** ###Code # a = 60, binary 0011 1100 # b = 13, binary 0000 1101 print(a&b) print(a|b) print(a^b) print(a<<1) print(a<<2) ###Output _____no_output_____ ###Markdown **Application 2** ###Code #Assignment Operators x=2 x+=3 #same as x=x+3 print(x) x-=3 #same as x=x-3 print(x) x*=3 #same as x=x*3 print(x) x/=3 #same as x=x/3 print(x) x%=3 #same as x=x%3 print(x) ###Output _____no_output_____ ###Markdown **Logical Operators** ###Code k = True l = False print(k and l) print(k or l) print(not(k or l)) ###Output _____no_output_____ ###Markdown **Identity Operators** ###Code k is l k is not l ###Output _____no_output_____ ###Markdown Control Structure **If Statement** ###Code v = 2 z = 1 if 1<2: print("1 is less than 2") ###Output _____no_output_____ ###Markdown **Elif Statement** ###Code if v<z: print("v is less than z") elif v>z: print("v is greater than z") ###Output _____no_output_____ ###Markdown **Else Statement** ###Code number = int(input()) #to know if the number is positive or negative if number>0: print("Positive") elif number<0: print("Negative") else: print("number is equal to zero") ###Output _____no_output_____ ###Markdown **Application 3** ###Code #Develop a Python program that will accept if the person is entitled to vote or not age = int(input()) if age>=18: print("You are 
qualified to vote") else: print("You are not qualified to vote") ###Output _____no_output_____ ###Markdown **Nested If...Else** ###Code u = int(input()) if u>10: print("u is above 10") if u>20: print("u is above 20") if u>30: print("u is above 30") if u>40: print("u is above 40") else: print("u is below 40") if u>50: print("u is above 50") else: print("u is below 50") ###Output _____no_output_____ ###Markdown **Loop Structure** ###Code week = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] season = ["Rainy", "Sunny"] for x in week: for y in season: print(y,x) ###Output _____no_output_____ ###Markdown **Break Statement** ###Code for x in week: print(x) if x == "Thursday": break ###Output _____no_output_____ ###Markdown **While Loop** ###Code i=1 while i<6: print(i) i+=1 ###Output _____no_output_____ ###Markdown **Application 4** ###Code #Create a Python program that displays numbers from 1 to 4 using while loop statement n=1 while n<5: print(n) n+=1 ###Output _____no_output_____ ###Markdown **Application 5** ###Code #Create a Python program that displays 4 numbers using while loop and break statement r=1 while r<=4: if r==4: print(r) r+=1 ###Output _____no_output_____ ###Markdown Intro Python ###Code x= "Sally" # This is a type of string x = int(4.585) x = "John" y = 'Ana' print(type(x)) print(type(y)) x, y, z = 'one', 'two', 'three' print(x) print(y) print(z) x=y=z='four' print(x) print(y) print(z) x = "enjoying" # This is type of string print("Python programming is" " " + x) x = "enjoying" y = 'Python programming is ' print(y+ x) x = 6 y = 7 y%=x #This is the same as y = y%x print(y) x = float(6) y = 7 x is not y ###Output _____no_output_____ ###Markdown Boolean Operator ###Code #Booleans represent one or two of the values: True or False print(10 > 9) print(10 == 9) print(9 > 10) a = 10 b = 9 print(a > b) print(a == a) print(b > a) print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) 
print(bool(0)) print(bool([])) def myFunction(): return True print(myFunction()) if myFunction(): print("Yes") else: print("No") a = 6 b = 7 print(a == b) print(a != a) print(10 + 5) print(10 - 5) print(10 * 5) print(10 / 5) # division - quotient print(10 % 5) # modulo division - remainder print(10 % 3) # modulo division print(10 // 3) # floor division - whole number print(10 ** 2) # exponentiation (power) a = 60 # 0011 1100 b = 13 # 0000 1101 print(a & b) print(a | b) print(a ^ b) print(a << 2) print(a >> 2) x = 6 x += 3 print(x) x %= 3 print(x) a = True b = False print(a and b) print(a or b) print(not(a and b)) print(not(a or b)) # negation print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code a = 7 b = 6 print(10>9) print(10<9) print(a>b) ###Output True False True ###Markdown bool() function ###Code print(bool("Maria")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) ###Output True True False False False True ###Markdown Functions can return a Boolean ###Code def my_Function(): return True print(my_Function()) if my_Function(): print("True") else: print("False") ###Output True ###Markdown Application 1 ###Code print(a==b) print(a<b) ###Output False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10/3) print(10*5) print(10%5) print(10//3) #10/3 = 3.3333 print(10**2) ###Output 15 5 3.3333333333333335 50 0 3 100 ###Markdown Bitwise operators ###Code c = 60 # binary 0011 1100 d = 13 # binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code h = True l = False h and l h or l not (h or l) ###Output _____no_output_____ ###Markdown Application 2 ###Code #Python Assignment Operators x = 100 x+=3 # Same as x = x + 3, x = 100+3 = 103 print(x) ###Output 103 ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control Structure If Statement ###Code if a>b: print("a is greater than b")
###Output False ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Else Statement ###Code a= 10 b =10 if a>b: print("a is greater than b") elif a>b: print("a is greater than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If Statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If...Else Statement ###Code a = 10 b = 9 print("a is greater than b") if a>b else print('b is greater than a') ###Output a is greater than b ###Markdown And ###Code if a>b and b==b: print("both conditions are True") ###Output both conditions are True ###Markdown Or ###Code if a<b or b==b: print("the condition is True") ###Output the condition is True ###Markdown Nested If ###Code x = int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output 48 x is above 10 and also above 20 and also above 30 and also above 40 but not above 50 ###Markdown Loop Statement For Loop ###Code week = ['Sunday',"Monday",'Tuesday', "Wednesday","Thursday","Friday","Saturday"] for x in week: print(x) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown The break statement ###Code #to display Sunday to Wednesday using For loop for x in week: print(x) if x=="Wednesday": break #to display only Wednesday using break statement for x in week: if x=="Wednesday": break print(x) ###Output Wednesday ###Markdown While Statement ###Code i =1 while i<6: print(i) i+=1 #same as i = i +1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a python program that displays no.3 using break statement ###Code i =1 while i<6: if i==3: break i+=1 print(i) ###Output 3 ###Markdown Operations and Expressions Boolean Operators ###Code print(10>9) print(10==9) print 
(10<9) a=10 b=9 print(a>b) print(bool("Hello")) print(bool(15)) print(bool(False)) print(bool(1)) print(bool(0)) print(bool(None)) print(bool({})) print(bool([])) def myFunction():return True print(myFunction()) def myFunction():return True if myFunction(): print("Yes") else: print("No") print(10>9) a=6 b=7 print(a==b) print(6==6) print(a!=a) ###Output True False True False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) print(10//3) print(10**2) ###Output 15 5 50 2.0 0 3 100 ###Markdown Bitwise Operators ###Code a=60 #0011 1100 b=13 #0000 1101 print(a&b) print(a^b) print(~a) print(a<<2)#1111 0000 print(a>>2)#0000 1111 ###Output 12 49 -61 240 15 ###Markdown Assignment Operator ###Code x=2 x+=3 print(x) #same as x=x+3 ###Output 5 ###Markdown Logical Operators ###Code a=5 b=6 print(a>b and a==a) a<b or b==a ###Output False ###Markdown Identity Operator ###Code print(a is b) a is not b ###Output False ###Markdown Operations and Expressions Boolean Operations ###Code a = 1 b = 2 print(a>b) print(a<b) print(a==a) print(a!=b) ###Output False True True True ###Markdown bool() Function ###Code print(bool(15)) print(bool(True)) print(bool(1)) print(bool(False)) print(bool(0)) print(bool(None)) print(bool([])) ###Output True True True False False False False ###Markdown Functions Return a Boolean ###Code def myFunction(): return True print(myFunction()) def myFunction(): return False if myFunction(): print("True") else: print("False") ###Output False ###Markdown Relational Operation ###Code print(10>9) print(10<9) print() ###Output True False ###Markdown Arithmetic Operator ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #modulo division that shows a remainder after dividing print(10//3) #floor division: 10/3 = 3.33..., kept as the whole number 3 print(10**5) #exponentiation: 10 raised to the power 5 ###Output 15 5 50 2.0 0 3 100000 ###Markdown Bitwise Operators ###Code a = 60 #0011 1100 b = 13 #0000
1101 print(a & b) print(a|b) print(a^b) print(a<<2)#1111 0000 = 240 print(a>>1)#0001 1110 = 30 ###Output 12 61 49 240 30 ###Markdown Assignment Operators ###Code a+=3 #Same as a=a+3, a=60+3, a=63 print(a) ###Output 63 ###Markdown Logical Operators ###Code a = True b = False print(a and b) print(a or b) print(not (a or b)) a>b and b<a a = 60 b = 13 print(a>b and b>a) print(a==a or b==b) print(not(a==a or b==b)) ###Output False True False ###Markdown Identity Operators ###Code print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code #Booleans represent one of two values: True or False print(10 > 9) print(10 == 9) print(9 > 10) a = 10 b = 9 print(a > b) print(a == a) print(b > a) a>b print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) #allows you to evaluate and gives False in return def myFunction():return True print(myFunction()) def myFunction():return True print(myFunction()) if myFunction(): print("Yes") else: print("No") a=6 b=7 print(a == b) print(a != a) print(10 + 5) print(10 - 5) print(10 * 5) print(10 / 5) a=60 #0011 1100 b=13 #0000 1101 print(a&b) print(a|b) print(a^b) print(a<<2) print(b>>2) x = 6 x += 3 # x=x+3 print (x) x %= 3 # x = 9%3, remainder 0 print(x) a = True b = False print(a and b) print(a or b) print(not(a and b)) print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code #Booleans represent one of two values:True or False print(10>9) print(10==9) print(9>10) a=10 b=9 print(a>b) print(a==a) print(b>a) b>a print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) def myFunction():return True if myFunction(): print("Yes") else: print("No") a=6 b=7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #division-quotient print(10%5) #modulo division print(10%3) #modulo division print(10//3) #floor division a=60 #0011 1100 0000
1111 b=13 #0000 1101 print(a & b) print(a|b) print(a^b) print(a<<2) print(a>>2) x=6 x+=3 #x=x+3 print(x) x%=3 #x=9%3, remainder 0 print(x) a=True b=False print(a and b) print(a or b) print(not(a and b)) print(not(a or b)) #negation print(a is b) print(a is not b) ###Output _____no_output_____ ###Markdown Boolean Operators ###Code print(10>9) print(10==9) a=10 b=9 print(a>b) print(bool("Hello")) print(bool(3)) print(bool(False)) print(bool(0)) print(bool(None)) print(bool([])) print(bool(1)) def myFunction():return True print(myFunction()) def myFunction():return True if myFunction(): print("True") else: print("False") print(10>9) a=6 # 0000 0110 b=7 # 0000 0111 print(a==b) print(a!=a) print(6==6) ###Output True False False True ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10/5) print(10*5) print(10%5) print(10//3) print(10**2) ###Output 15 5 2.0 50 0 3 100 ###Markdown Bitwise Operation ###Code a=60 #0011 1100 b=13 # 0000 1101 print(a^b) print(~a) print(a<<2) print(a>>2) #0000 1111 ###Output 49 -61 240 15 ###Markdown Assignment Operators ###Code x=2 x+=3 #Same as x=x+3 print(x) x ###Output 5 ###Markdown Logical Operators ###Code a=6 b=7 print( a>b and b==b) print( a>b or b==a) ###Output False False ###Markdown Identity Operator ###Code print(a is a) print(a is not b) print(a is not a) ###Output True True False ###Markdown Boolean Operators ###Code a=7 b=8 10>9 ###Output _____no_output_____ ###Markdown bool() function ###Code print(bool("Maria")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) ###Output True True False False False True ###Markdown Functions can return a Boolean ###Code def My_Function(): return False #return True print(My_Function()) #print(My_Function()) if My_Function(): print("true") else: print("false") ###Output false ###Markdown Application 1 ###Code a=6 b=7 print(10>9) print(a==b) print(a!=b) ###Output True False True ###Markdown Python ###Code print(10+5) print(10-5) print(10/5)
#print(int(10/5)) print(10*5) print(10%5) print(10//3) print(10**2) ###Output 15 5 2.0 50 0 3 100 ###Markdown Bitwise Operators ###Code c = 60 # binary 0011 1100 d = 13 # binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code h = True I = False h and I h or I not(h or I) ###Output _____no_output_____ ###Markdown Python Assignment Operators ###Code #x += 3 # x = x + 3 #x -= 3 # x = x - 3 #x *= 3 # x = x * 3 #x /= 3 # x = x / 3 #x %= 3 # x = x % 3 ###Output _____no_output_____ ###Markdown Identity Operators ###Code h is I h is not I ###Output _____no_output_____ ###Markdown Control Structure if statement ###Code if a>b: print("a is greater than b") ###Output _____no_output_____ ###Markdown Elif Statement ###Code a=10 b=10 if a<b: print("a is less than b") elif a>b: print("a is greater than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand if statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand else statement ###Code a=10 b=9 print("a is greater than b") if a>b else print("b is greater than a") ###Output a is greater than b ###Markdown And ###Code if a>b and b==b: print("both conditions are true") ###Output both conditions are true ###Markdown Or ###Code if a<b or b==b: print("the condition is true") ###Output the condition is true ###Markdown Nested if ###Code x = 31 if x > 10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") else: print("but not above 40") ###Output x is above 10 and also above 20 and also above 30 but not above 40 ###Markdown Loop Statement for loop ###Code week ###Output _____no_output_____ ###Markdown the break statement ###Code #to display sunday to wednesday using for loop for x in week: print(x) if x=="Wednesday": break #to display only wednesday using the break statement for x in week: if x=="Wednesday": break print(x) ###Output
_____no_output_____ ###Markdown While statement ###Code i = 1 while i<6: print(i) i+=1 # i = i + 1 ###Output 1 2 3 4 5 ###Markdown Application 3 - create a Python program that displays no.3 using break statement ###Code i = 3 while i<6: print(i) i+=1 break ###Output 3 ###Markdown Bool() Function ###Code print(bool(0)) print(bool(1)) print(bool("Maria")) print(bool(None)) print(bool([])) ###Output False True True False False ###Markdown Function can return a boolean ###Code def my_function(): return True print(my_function()) if my_function(): print("Yes!!!") else: print("No") ###Output True Yes!!! ###Markdown Application 1 ###Code print(10>9) a = 6 b = 7 print(a==b) print(a!=a) ###Output True False False ###Markdown Python Operators ###Code print(2+5) print(2-5) print(2*5) print(5/2) print(5%2) print(5//2) print(5**2) ###Output 7 -3 10 2.5 1 2 25 ###Markdown Bitwise Operators ###Code a = 60 b = 13 print(a&b) print(a|b) print(a^b) ###Output 12 61 49 ###Markdown Assignment Operators ###Code a +=3 #Same as a=a+3 print(a) ###Output 63 ###Markdown Logical Operators ###Code x = True y = False print(x and y) print(x or y) print(not((x and y))) ###Output False True True ###Markdown Identity Operators ###Code print(x is y) print(x is x) print(x is not y) ###Output False True True ###Markdown Boolean Operators ###Code a=10 print(10>9) print(10==9) print(10<9) print(a>9) ###Output True False False True ###Markdown bool() Function ###Code print(bool(1)) print(bool("Marina")) print(bool("Adamson")) print(bool(0)) print(bool(None)) print(bool([])) ###Output True True True False False False ###Markdown Functions can Return a Boolean ###Code def myFunction(): return True print(myFunction()) if myFunction(): print("True") else: print("False") ###Output True ###Markdown Application 1 ###Code a=6 b=7 print(a==b) print(a!=a) ###Output False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(int(10/5)) print(10%5) print(10/3) print(10//3)
print(float(10**2)) ###Output 15 5 50 2 0 3.3333333333333335 3 100.0 ###Markdown Bitwise Operators ###Code a = 60 #0011 1100 b = 13 print(a&b) print(a|b) print(a^b) print(a<<1) print(a>>1) ###Output 12 61 49 120 30 ###Markdown Application 2 ###Code #Python Assignment Operators x = 5 x += 3 print(x) x -= 3 print(x) x *= 3 print(x) x /= 3 print(x) x %= 3 print(x) ###Output 2.0 ###Markdown Logical Operators ###Code s = True t = False s and t not(s and t) ###Output _____no_output_____ ###Markdown Identity Operators ###Code s is t s is not t ###Output _____no_output_____ ###Markdown Control Structure if Statement ###Code g = 100 h = 50 if g>h: print("g is greater than h") ###Output _____no_output_____ ###Markdown Elif Statement ###Code g = 100 h = 100 if g>h: print("g is greater than h") elif g<h: print("g is less than h") ###Output _____no_output_____ ###Markdown Else Statement ###Code g = 100 h = 100 if g>h: print("g is greater than h") elif g<h: print("g is less than h") else: print("g is equal to h") ###Output g is equal to h ###Markdown Short Hand If Statement ###Code if g==h: print('g is equal to h') ###Output g is equal to h ###Markdown Short Hand If...Else Statement ###Code print("G") if g>h else print ("H") ###Output H ###Markdown Nested If...Else ###Code x = 31 if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") else: print("but not above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output x is above 10 and also above 20 and also above 30 but not above 40 but not above 50 ###Markdown Application 3 ###Code age = int(input()) if age>=18: print("You are qualified to vote") else: print("You are not qualified to vote") ###Output 18 You are qualified to vote ###Markdown Loop Statement For Loop ###Code week = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"] season = ['rainy','sunny'] for x in week: for y in season:
print(y,x) ###Output rainy Sunday sunny Sunday rainy Monday sunny Monday rainy Tuesday sunny Tuesday rainy Wednesday sunny Wednesday rainy Thursday sunny Thursday rainy Friday sunny Friday rainy Saturday sunny Saturday ###Markdown The break statement ###Code for x in week: if x == "Friday": break print(x) #If you want to display Sunday to Friday for x in week: print(x) if x=="Friday": break ###Output Sunday Monday Tuesday Wednesday Thursday Friday ###Markdown Application 4 ###Code for x in week: if x == "Friday": print(x) break ###Output Friday ###Markdown While loop ###Code i = 1 while i<6: print(i) i+=1 #same as i=i+1 ###Output 1 2 3 4 5 ###Markdown Application 5 ###Code i = 1 while i<6: if i==2: print(i) break i+=1 #same as i=i+1 ###Output 2 ###Markdown Boolean Operators ###Code print(10>9) print(10==9) print(10!=9) print(10<9) ###Output True False True False ###Markdown bool() function ###Code print(bool("Hello")) print(bool(15)) print(bool([])) print(bool(0)) print(bool(1)) print(bool(None)) print(bool(False)) ###Output True True False False True False False ###Markdown Function can return a Boolean ###Code def myFunction(): return True print(myFunction()) def myFunction(): return False print(myFunction()) def myFunction(): return False if myFunction(): print("Yes!") else: print("No!") ###Output No! 
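The for-loop and break patterns above can be combined into a simple linear search: scan a list and stop as soon as the target is found. This is a minimal sketch; the variable names (`position`, `i`) are illustrative and not from the notebook.

```python
# Linear search with the for-loop + break pattern shown above.
week = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']
position = -1                      # -1 means "not found yet"
for i in range(len(week)):
    if week[i] == 'Thursday':
        position = i
        break                      # stop scanning once the target is found
print(position)  # 4
```

Using break here avoids walking the rest of the list after the answer is already known.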
###Markdown Application 1 ###Code a=6 b=7 print(a==b) print(a!=a) print(a>b) if a<b: print("a is less than b") else: print("a is greater than b") ###Output False False False a is less than b ###Markdown Python Operators ###Code print(10+3) print(10-3) print(10*3) print(10//3) print(10%3) print(10/3) ###Output 13 7 30 3 1 3.3333333333333335 ###Markdown Bitwise Operators ###Code a=60 b=13 print(a==b) print(a!=a) print(a>b) if a<b: print("a is less than b") else: print("a is greater than b") #a= 60, binary 0011 1100 #b= 13, binary 0000 1101 print(a&b) print(a|b) print(a^b) print(a<<1) print(a<<2) print(a>>b) ###Output 12 61 49 120 240 0 ###Markdown Application 2 ###Code #Assignment Operators x=5 x +=3 #the same as x=x+3 print(x) x -=3 #the same as x=x-3 x *=3 #the same as x=x*3 x /=3 #the same as x=x/3 x %=3 #the same as x=x%3 ###Output 8 ###Markdown Logical Operators ###Code k= True l= False print(k and l) print(k or l) print(not(k or l)) ###Output False True False ###Markdown Identity Operators ###Code print(k is l) print(k is not l) ###Output False True ###Markdown Control Structure If Statement ###Code v = 1 z = 2 if 1<2: print("1 is less than 2") ###Output 1 is less than 2 ###Markdown Elif Statement ###Code if v<z: print("v is less than z") elif v>z: print("v is greater than z") ###Output v is less than z ###Markdown Else Statement ###Code if v<z: print("v is less than z") elif v>z: print("v is greater than z") else: print("v is equal to z") number = int(input()) #to know if the number is positive or negative if number>0: print("positive") elif number<0: print("negative") else: print("number is equal to zero") ###Output 0 number is equal to zero ###Markdown Application 3 - Develop a Python program that will accept if a person is entitled to vote or not ###Code age = int(input()) if age>=18: print("You are qualified to vote") else: print("You are not qualified to vote") ###Output 20 You are qualified to vote ###Markdown Nested If...Else ###Code u = 41 if u>10: 
print("u is above 10") if u>20: print("u is above 20") else: print("u is above 10 and below 20") u = 11 if u>10: print("u is above 10") if u>20: print("u is above 20") else: print("u is above 10 and below 20") u = int(input()) if u>10: print("u is above 10") if u>20: print("u is above 20") if u>30: print("u is above 30") if u>40: print("u is above 40") if u>50: print("u is above 50") else: print("u is below 50") ###Output 49 u is above 10 u is above 20 u is above 30 u is above 40 u is below 50 ###Markdown Loop Structure ###Code week = ['Sunday','Monday','Tuesday','Wednesday','Thursday','Friday','Saturday'] season =["rainy","sunny"] for x in week: for y in season: print(y,x) ###Output rainy Sunday sunny Sunday rainy Monday sunny Monday rainy Tuesday sunny Tuesday rainy Wednesday sunny Wednesday rainy Thursday sunny Thursday rainy Friday sunny Friday rainy Saturday sunny Saturday ###Markdown The break statement ###Code for x in week: if x == 'Thursday': break print(x) for x in week: if x == 'Thursday': break print(x) #To display Sunday to Thursday for x in week: print(x) if x == 'Thursday': break ###Output Sunday Monday Tuesday Wednesday Thursday ###Markdown While loop ###Code i=1 while i<6: print(i) i+=1 ###Output 1 2 3 4 5 ###Markdown Application 4 - Create a Python program that displays numbers from 1 to 4 using while loop statement ###Code j=1 while j<=4: print(j) j+=1 ###Output 1 2 3 4 ###Markdown Application 5 - Create a Python program that displays only the number 4 using a while loop statement ###Code j=1 while j<=4: if j==4: print(j) j+=1 ###Output 4 ###Markdown Boolean Operator ###Code x=1 y=2 print(x>y) print(10>11) print(10==10) #using bool ###Output _____no_output_____ ###Markdown Functions using return boolean ###Code def myfunction():return True print(myfunction()) def yourfunction():return True if yourfunction(): print("Yes!") else: print("No!") ###Output Yes!
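The voting check from Application 3 can be combined with the Boolean-returning-function idea above: put the comparison in a function, then use the function directly in an if statement. A minimal sketch; the function name `is_qualified_to_vote` is illustrative.

```python
def is_qualified_to_vote(age):
    # A comparison already evaluates to True or False, so return it directly.
    return age >= 18

if is_qualified_to_vote(20):
    print("You are qualified to vote")
else:
    print("You are not qualified to vote")
```

This keeps the eligibility rule in one place, so changing the age threshold later means editing a single line.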
###Markdown Boolean Operator ###Code print (10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False ###Markdown Arithmetic Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) print(10//5) print(10**5) ###Output 15 5 50 2.0 0 2 100000 ###Markdown Bitwise Operators ###Code a=60 #0011 1100 b=13 #0000 1101 print(a&b) print(a|b) print(a^b) print(~a) print(a<<1) #0111 1000 print(a<<2) #1111 0000 print(b>>1) print(b>>2) ###Output 12 61 49 -61 120 240 6 3 ###Markdown Python Assignment Operators ###Code a+=3 #Same As a = a + 3 #Same As a = 60 + 3, a = 63 print(a) ###Output 63 ###Markdown Logical Operators ###Code #and logical operator a= True b= False print(a and b) print(not(a and b)) print(a or b) print(not(a or b)) print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code a=1 b=2 print(a>b) print(a<b) print(a==a) print(a!=b) ###Output False True True True ###Markdown bool() function ###Code print(bool(15)) print(bool(True)) print(bool(1)) print(bool(False)) print(bool(0)) print(bool(None)) print(bool([])) ###Output True True True False False False False ###Markdown Functions return a Boolean ###Code def myFunction(): return True print(myFunction()) def myFunction(): return False if myFunction(): print("True") else: print("False") ###Output False ###Markdown Relational Operator ###Code print(10>9) print(10<9) print(10==9) ###Output True False False ###Markdown Arithmetic Operator ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) print(10//3) #floor division, 3.33 print(10**2) #exponentiation ###Output 15 5 50 2.0 0 3 100 ###Markdown Bitwise Operators ###Code a=60 #0011 1100 b=13 #0000 1101 print(a & b) print(a | b) print(a^b) print(a<<1) print(a>>1) ###Output 12 61 49 120 30 ###Markdown Assignment Operator ###Code a+=3 #Same As a = a + 3 print(a) ###Output 4 ###Markdown Logical Operator ###Code a = True b = False print(a and b) print(a or b) print(not(a or b)) print(a>b and b>a)
print(a==a or b==b) print(not(a==a or b==b)) ###Output False True False ###Markdown Identity Operators ###Code print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code #Booleans represent one of two values: True or False print(10>9) print(10==9) print(9>10) a=10 b=9 print(a>b) print(a==b) print(b>a) print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) #Allows you to evaluate and gives False in return def myFunction():return False if myFunction(): print("Yes") else: print("No") ###Output No ###Markdown You Try! ###Code a=6 b=7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #division-quotient print(10%5) #modulo division print(10%3) #modulo division print(10//3) #floor division print(10**2) #exponentiation a=60 #0011 1100 b=13 #0000 1101 print(a & b) print(a | b) print(a^b) print(a<<2) print(a>>2) x=6 x+=3 #x = x+3 print(x) x%=3 print(x) a=True b=False print(a and b) print(a or b) print(not(a and b)) print(not(a or b)) print(a is b) print(a is not b) ###Output _____no_output_____ ###Markdown Boolean Operators ###Code print(10>9) print(10<9) print(10==9) a = 10 b = 9 print(a>b) print(a<b) print(a==b) ###Output True False False ###Markdown Bool() Function ###Code print(bool(1)) print(bool(0)) print(bool("Matthew")) print(bool(None)) print(bool([])) ###Output True False True False False ###Markdown Functions can Return a Boolean ###Code def my_Function(): return False print(my_Function()) if my_Function(): print("Yes!") else: print("No!") ###Output False No!
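The identity operators above behave differently from `==`: `==` compares values, while `is` compares identity (whether two names refer to the same object). With the booleans used in the examples the two often coincide, so this sketch uses lists, where the difference is visible; the variable names are illustrative.

```python
x = [1, 2, 3]
y = [1, 2, 3]   # a second, separate list with equal contents
z = x           # z is another name for the very same object as x

print(x == y)   # True:  equal values
print(x is y)   # False: two distinct list objects
print(x is z)   # True:  same object
```

A practical rule of thumb: use `==` for value comparisons and reserve `is` for identity checks such as `x is None`.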
###Markdown Application 1 ###Code print(10>9) a = 6 b = 7 print(a==b) print(a!=a) ###Output True False False ###Markdown Python Operators ###Code print(2+5) print(2-5) print(2*5) print(5/2) print(5//2) print(5**2) ###Output 7 -3 10 2.5 2 25 ###Markdown Bitwise Operators ###Code a=60 b=13 print(a&b) print(a|b) print(a^b) ###Output 12 61 49 ###Markdown Assignment Operators ###Code a += 3 #Same as a = a + 3 print(a) ###Output 63 ###Markdown Logical Operators ###Code x = True y = False print(x and y) print(x or y) print (not(x and y)) ###Output False True True ###Markdown Identity Operator ###Code print(x is y) print(x is x) print(x is not y) ###Output _____no_output_____ ###Markdown Boolean Operators ###Code print(10>9) print(10==9) print(10!=9) print(10<9) ###Output True False True False ###Markdown bool() Function ###Code print(bool("Lance")) print(bool(239)) print(bool()) print(bool(0)) print(bool(1)) print(bool(None)) print(bool(False)) ###Output True True False False True False False ###Markdown Function can return Boolean ###Code def myFunction(): return True print(myFunction()) if myFunction(): print("Yes!") else: print("No!") ###Output Yes!
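The scattered `bool()` calls above all follow one fixed rule, which can be checked in a single sketch: zero, `None`, `False`, and empty containers are falsy; everything else is truthy. The list contents here are illustrative examples, not from the notebook.

```python
# Everything Python treats as falsy: zero (int or float), None, False,
# and empty containers (string, list, dict).
falsy_values = [0, 0.0, None, False, "", [], {}]
truthy_values = [1, -5, "Lance", [0]]   # note: a list holding 0 is still truthy

all_falsy = all(bool(v) is False for v in falsy_values)
all_truthy = all(bool(v) is True for v in truthy_values)
print(all_falsy, all_truthy)  # True True
```

This is why `if my_list:` is a common idiom for "the list is non-empty".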
###Markdown Application 1 ###Code print(10>9) a=60 b=13 print(a==b) print(a!=b) ###Output True False True ###Markdown Python Operators ###Code print(10+3) print(10 - 3) print(10*3) print(10/3) print(10 % 3) print(10//3) ###Output 13 7 30 3.3333333333333335 1 3 ###Markdown Python Operators ###Code print(10+3) print(10-3) print(10*3) print(10//3) print(10%3) print(10/3) ###Output 13 7 30 3 1 3.3333333333333335 ###Markdown Bitwise Operators ###Code #a=60, binary 0011 1100 #b=13, binary 0000 1101 print(a&b) print(a|b) print(a^b) print(a<<b) print(a>>b) ###Output 12 61 49 491520 0 ###Markdown Application 2 Assignment Operators ###Code a+=3 #Same As a = a+3 a-=3 #Same As a = a-3 a*=3 #Same As a = a*3 a/=3 #Same As a = a/3 a%=3 #Same As a = a%3 ###Output _____no_output_____ ###Markdown Logical Operators ###Code k = True l = False print(k and l) print(k or l) print(not(k or l)) ###Output False True False ###Markdown Identity Operators ###Code print(k is l) print(k is not l) ###Output False True ###Markdown Control Structure ###Code v = 1 z = 1 if v<z: print("1 is less than 2") ###Output _____no_output_____ ###Markdown Elif Statement ###Code if v<z: print("v is less than z") elif v>z: print("v is greater than z") ###Output v is less than z ###Markdown Else Statement ###Code number = int(input()) #to know if the number is positive or negative if number>0: print("positive") elif number<0: print("negative") else: print("0") ###Output 33 positive ###Markdown Application 3 - Develop a Python program that will accept if a person is entitled to vote or not ###Code age = int(input()) if age >= 18: print("You are qualified to vote") else: print("You are not qualified to vote") ###Output 12 You are not qualified to vote ###Markdown Nested If..
Else ###Code u=int(input()) if u>10: print("u is above 10") if u>20: print("u is above 20") if u>30: print("u is above 30") if u>40: print("u is above 40") if u>50: print("u is above 50") else: print("u is less than 50") ###Output 50 u is above 10 u is above 20 u is above 30 u is above 40 u is less than 50 ###Markdown Loop Structure ###Code week=["Sunday","Monday", "Tuesday","Wednesday","Thursday","Friday","Saturday"] season=["rainy", "sunny"] for x in week: for y in season: print(y,x) ###Output rainy Sunday sunny Sunday rainy Monday sunny Monday rainy Tuesday sunny Tuesday rainy Wednesday sunny Wednesday rainy Thursday sunny Thursday rainy Friday sunny Friday rainy Saturday sunny Saturday ###Markdown The break statement ###Code for x in week: if x == "Thursday": break print(x) for x in week: if x == "Thursday": break print(x) # To display Sunday to Thursday for x in week: print(x) if x=="Thursday": break ###Output Sunday Monday Tuesday Wednesday Thursday ###Markdown While loop ###Code i=1 while i<6: print(i) i+=1 ###Output 1 2 3 4 5 ###Markdown Application 4 Create a Python program that displays numbers from 1 to 4 using while loop statement ###Code j = 1 while j<=4: print(j) j+=1 ###Output 1 2 3 4 ###Markdown Application 5 Create a Python program that displays number 4 using while loop statement ###Code j = 1 while j<=4: if j==4: print(j) j+=1 ###Output 4 ###Markdown Boolean Operators ###Code print(10 > 9) print(10 == 9) print(10!=9) print(10<9) ###Output True False True False ###Markdown Bool() Function ###Code print(bool("Maria")) print(bool(19)) print(bool([])) print(bool(0)) print(bool(1)) print(bool(None)) print(bool(False)) ###Output True True False False True False False ###Markdown Function can return a Boolean ###Code def myFunction(): return False print(myFunction()) def myFunction(): return True if myFunction(): print("Yes!!!") else: print("No!!!") ###Output Yes!!!
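The counting while-loops in the applications above print each number; the same loop shape can also accumulate a running total. A minimal sketch combining the while loop with the `+=` assignment operator covered earlier:

```python
total = 0
n = 1
while n <= 4:
    total += n   # same as total = total + n
    n += 1       # advance the counter so the loop terminates
print(total)     # 1 + 2 + 3 + 4 = 10
```

Forgetting the `n += 1` line would make the condition `n <= 4` stay true forever, which is the classic infinite-loop mistake with while statements.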
###Markdown Application 1 ###Code print(10>9) a = 6 b = 7 print(a==b) print(a!=b) a = 60 b = 13 print(a>b) if a < b: print("a is less than b") else: print("a is greater than b") ###Output True a is greater than b ###Markdown Python Operators ###Code print(10+3) print(10-3) print(10*3) print(10%3) print(10/3) ###Output 13 7 30 1 3.3333333333333335 ###Markdown Bitwise Operators ###Code a = 60 b = 13 print(a%b) print(a|b) print(a^b) print(a<<1) print(a<<2) ###Output 8 61 49 120 240 ###Markdown Application 2 ###Code #Assignment Operators x = 2 x += 3 print(x) x -= 3 print(x) x *= 3 print(x) x /= 3 print(x) x %= 3 print(x) ###Output 5 2 6 2.0 2.0 ###Markdown Logical Operators ###Code k = True l = False print (k and l) print (k or l) print (not(k or l)) ###Output False True False ###Markdown Control StructureIf statement ###Code v = 2 z = 1 if 1<2: print ("1 is less than 2") ###Output 1 is less than 2 ###Markdown Elif Statement ###Code if v < z: print("v is less than v") elif v > z: print("v is greater than z") ###Output v is greater than z ###Markdown Else Statement ###Code number = int(input()) if number>0: print("Positive") elif number<0: print("Negative") else: print("number is equal to zero") ###Output 15 Positive ###Markdown Application 3 ###Code age = int(input()) if age >= 18: print("You are qualified to vote!") else: print("You are not qualified to vote") ###Output 26 You are qualified to vote! ###Markdown Nested If ... 
Else ###Code u = int(input()) if u >10: print("u is above 10") if u>20: print("u is above 20") if u >30: print("u is above 30") if u>40: print("u is above 40") else: print("u is below 40") if u>50: print("u is above 50") else: print("u is below 50") ###Output 42 u is above 10 u is above 20 u is above 30 u is above 40 ###Markdown Loop Structure ###Code week = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] season = ['Rainy', 'Sunny'] for x in week: for y in season: print(y,x) ###Output Rainy Sunday Sunny Sunday Rainy Monday Sunny Monday Rainy Tuesday Sunny Tuesday Rainy Wednesday Sunny Wednesday Rainy Thursday Sunny Thursday Rainy Friday Sunny Friday Rainy Saturday Sunny Saturday ###Markdown Break Statement ###Code for x in week: print(x) if x == "Thursday": break ###Output Sunday Monday Tuesday Wednesday Thursday ###Markdown While loop ###Code i = 1 while i<=6: print(i) i+=1 ###Output 1 2 3 4 5 6 ###Markdown Application 4 ###Code n=1 while n<5: print(n) break ###Output 1 ###Markdown Application 5 ###Code r=1 while r<=4: if r==4: print(r) r+=1 ###Output 4 ###Markdown Operations and Expressions Boolean Operators ###Code a = 1 b = 2 print(a>b) print(a<b) print(a==a) print(a!=b) ###Output False True True True ###Markdown bool() function ###Code print(bool(15)) print(bool(True)) print(bool(1)) print(bool(False)) print(bool(0)) print(bool(None)) print(bool([])) ###Output True True True False False False False ###Markdown Functions return a Boolean ###Code def myFunction(): return True print(myFunction()) def myFunction(): return True if myFunction(): print("True") else: print("False") print(10>0) print(10<9) print(10==9) ###Output True False False ###Markdown Arithmetic Operator ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #modulo division that shows the remainder after division print(10//3) #floor division, 3.33 print(10**2) #exponentiation ###Output 15 5 50 2.0 0 3 100 ###Markdown Bitwise Operators ###Code a = 60
#0011 1100 b = 13 #0000 1101 print(a & b) print(a | b) print(a^b) print(a<<2)#0011 1100 0111 1000-120 print(a>>1)#0001 1110-30 carry bit-0 ###Output 12 61 49 240 30 ###Markdown Assignment Operator ###Code a+=3 #Same As a = a + 3, a =60+3, a+63 print(a) ###Output 63 ###Markdown Logical Operator ###Code a = 60 b = 13 print(a>b and b>a) print(a==a or b==b) print(not(a==a or b==b)) ###Output False True False ###Markdown Identity Operators ###Code print(a is b) print(a is not b) ###Output False True ###Markdown ###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) ###Output _____no_output_____ ###Markdown arithmetic operator ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #moodulo division, remainder print(10//5) #floor division print(10//3) #floor division print(10%3) #3 x 3 = 9 + 1 ###Output _____no_output_____ ###Markdown bitwise operator ###Code a=60 #0011 1100 b=13 #0000 1101 print(a&b) print(a|b) print(a^b) print(-a) print(a<<1) #0111 1000 print(a<<2) #0111 1000 print(b>>1) #1 0000 0110 print(b>>2) #0000 0011 carry flag bit = 1 ###Output _____no_output_____ ###Markdown python operators ###Code a+=3 #Same as a= a + 3 #Same as a= 60 + 3, a=63 print(a) ###Output _____no_output_____ ###Markdown logical operators ###Code #and logical operator a=True b=False print(a and b) print(not( a and b)) print(a or b) print(not a or b) a is b a is not b ###Output False True True False ###Markdown Boolean Operators ###Code a = 7 b =6 print(10>9) print(10<9) print(a>b) print(bool("Maria")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) def my_Function(): return True print(my_Function()) if my_Function(): print("True") else: print("False") print(a==b) print(a<b) print(10+5) print(10-5) print(10/3) print(10*5) print(10%5) print(10//3) #10/3 = 3.3333 print(10**2) c = 60 # binary 0011 1100 d = 13 # binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) h = True l = False h and l h or l not (h or l) ###Output _____no_output_____ 
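One behavior the `and`/`or` cells above do not surface is short-circuit evaluation: Python stops as soon as the result is known, and the operators return one of their operands rather than a strict `True`/`False`. A minimal sketch (the `noisy` helper is hypothetical, added only to make evaluation visible):

```python
def noisy(value):
    # Prints when evaluated, so we can see whether Python reached it
    print("evaluating", value)
    return value

False and noisy(True)   # right operand never evaluated: nothing is printed
True or noisy(False)    # right operand never evaluated: nothing is printed

# `and`/`or` return one of their operands, not necessarily a bool:
print(0 or "fallback")  # 0 is falsy, so the second operand is returned
print("x" and 42)       # "x" is truthy, so the second operand is returned
```

This is why a guard such as `x != 0 and 10/x > 1` is safe: the division is never attempted when `x` is zero.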
###Markdown Application 2 ###Code x = 100 x+=3 # Same as x = x +3, x = 100+3=103 print(x) h is l h is not l if a>b: print("a is greater than b") if a<b: print("a is less than b") elif a>b: print("a is greater than b") a= 10 b =10 if a>b: print("a is greater than b") elif a>b: print("a is greater than b") else: print("a is equal to b") if a==b: print("a is equal to b") a = 10 b = 9 print("a is greater than b") if a>b else print('b is greater than a') if a>b and b==b: print("both conditions are True") if a<b or b==b: print("the condition is True") x = int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also above 50") else: print("but not above 50") week = ['Sunday',"Monday",'Tuesday', "Wednesday","Thursday","Friday","Saturday"] for x in week: print(x) for x in week: print(x) if x=="Wednesday": break for x in week: if x=="Wednesday": break print(x) i =1 while i<6: print(i) i+=1 #same as i = i +1 i =1 while i<6: if i==3: break i+=1 print(i) ###Output 3 ###Markdown Boolean Operators ###Code #Booleans represent one of two values: True or False print (12>5) print(13==5) print (16<6) a = 10 b = 9 print (a>b) print (a==b) print (a<b) #without print, only the last expression's value is displayed a>b b>a a==a print (bool("HELLO")) print (bool(15)) print (bool(True)) #bool() evaluates a value and returns True or False print (bool(False)) print (bool(None)) print (bool(0)) print (bool(" ")) #Functions can Return a Boolean def myfunction(): return True print(myfunction()) def myfunction(): return True print(myfunction()) if myfunction(): print ("Yes") #note that print should always be indented so that it won't raise an error else: print ("No") def myfunction(): return False print(myfunction()) if myfunction(): print ("Yes") #note that print should always be indented so that it won't raise an error else: print ("No") #TRY a = 6 b = 7 print (a==b) print
(a!=a) print (10+5) print (10-5) print (10*5) print (10/5) #division - quotient print (10%5) #modulo - remainder print (10//3) #floor division print (10**2) #exponentiation #application in bits a = 60 #0011 1100 b = 13 #0000 1101 print(a & b) print(a | b) print (a^b) print(a<<2) print (a>>2) #python assignment operator x = 6 x+=3 #x=x+3 print(x) x%=3 #9%3, remainder 0 print (x) a = True b = False print (a and b) print (a or b) print (not(a and b)) #not = opposite print (not(a or b)) #negation print (a is b) print (a is not b) ###Output False True ###Markdown Boolean Operators ###Code #Booleans represent one of two values: True or False print(10>0) print(10==3) print(9>10) a = 10 b = 9 print(a>b) print(a==a) print(b>a) b>a print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) #bool() evaluates a value and returns True or False def myFunction():return True if myFunction(): print("Yes") else: print("NO") def myFunction():return False if myFunction(): print("Yes") else: print("NO") ###Output NO ###Markdown You Try!
###Code a = 6 b = 7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #division - quotient print(10%5) #modulo division print(10%3) #modulo division print(10//3) #floor division print(10**2) #concatenation a = 60 #0011 1100 0000 1111 b = 13 #0000 1101 print(a&b) print(a|b) print(a^b) print(a<<2) print(a>>2) x=6 x+=3 #x=x+3 print(x) x%=3 #x=6%3 ; remainder 0 print(x) a = True b = False a and b a or b a = True b = False print(a and b) print(a or b) print(not(a and b)) print(not(a or b)) #negation print(a is b ) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code a=7 b=6 print(10>9) print(10<9) print(a>b) ###Output True False False ###Markdown bool() function ###Code print(bool("Denzel")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) print(bool(-1)) ###Output True True False False False True True ###Markdown Functions can return a Boolean ###Code def my_Function(): return True print (my_Function()) if my_Function(): print("True") else: print("False") ###Output True ###Markdown Application 1 ###Code print(a==b) print(a>b) ###Output False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10/3) print(10*5) print(10%5) print(10//3) #10/3=3.3333 print(10**2) ###Output 15 5 3.3333333333333335 50 0 3 100 ###Markdown Bitwise operators ###Code c = 60 # binary 0011 1100 d = 13 # binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code h = True l = False h and l h or l not (h or l) ###Output _____no_output_____ ###Markdown Application 2 ###Code #Python Assignment Operators x=3 x+=3 #similar to x=x+3 print(x) x-=3 #similar to x=x-3 print(x) x*=3 #similar to x=x*3 print(x) x/=3 #similar to x=x/3 print(x) x%=3 #similar to x=x%3 print(x) ###Output 6 3 9 3.0 0.0 ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control Structure If statement ###Code if h is l: print("a is 
greater than b") ###Output _____no_output_____ ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is less than b ###Markdown Else Statement ###Code a=10 b=10 if a>b: print("a greater than b") elif a>b: print("a is greater than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If Statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If...Else Statement ###Code a=6 b=9 print("a is greater than b") if a>b else print("b is greater than a") ###Output b is greater than a ###Markdown And ###Code if a and b: print("Yes") ###Output Yes ###Markdown Or ###Code if a<b and b==b: print("The condition is true") ###Output The condition is true ###Markdown Nested if ###Code x=int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output 49 x is above 10 and also above 20 and also above 30 and also above 40 but not above 50 ###Markdown Loop Statement For Loop ###Code week = ['Sunday','Monday','Tuesday','Wednesday',"Thursday","Friday","Saturday"] for x in week: print(x) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown The break statement ###Code #to display Sunday to Wednesday using For Loop for x in week: print(x) if x=="Wednesday": break #to display only Wednesday using break statement for x in week: if x=="Wednesday": break print(x) #to display only Wednesday using break statement for x in week: if x=="Wednesday": break print(x) ###Output Wednesday ###Markdown While Statement ###Code i=1 while i<6: print(i) i+=1 #same as i=i+1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a python program that displays no.3 using break statement ###Code i=1 while i<6: if i==3: break i+=1 print(i) ###Output 3 ###Markdown Operations and Expressions 
Boolean Operator ###Code x=1 y=2 print(x>y) print(y) print(10>11) print(10==10) #Using bool() function print(bool("Hello")) print(bool(15)) print(bool("false")) print(bool(0)) ###Output True True True False ###Markdown Function can return Boolean ###Code def myFunction():return True print(myFunction()) ###Output True ###Markdown You try ###Code print (10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False ###Markdown Arithmetic operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #modulo division, reminder print(10//5) #floor division print(10//3) #floor division ###Output 15 5 50 2.0 0 2 3 ###Markdown Bitwise Operators ###Code a=60 #0011 1100 b=13 #0000 1101 print(a&b) print(a|b) print(a^b) print(~a) print(a<<1) #0011 1100 print(a<<2) #0011 1100 print(b>>1) #0000 1101 print(b>>2) ###Output 12 61 49 -61 120 240 6 3 ###Markdown Python Assignment Operators ###Code a++3 #Same Asa=a+3 #Same Asa=60, a=63 print(a) ###Output 60 ###Markdown Logical Operator ###Code #and logical operator a=True b=False print(a and b) print(a or b) print(not(a or b)) print(a is b) a is not b ###Output False ###Markdown ###Code print("10>9") print(10>9) print("10<9") print(10<9) print("10==9") print(10==9) a = 100 b = 90 print(a>b) print(a<b) print(a==b) ###Output True False False ###Markdown Bool Function ###Code print(bool("Hello")) ## True print(bool(15)) ## True print(bool(0)) ## False print(bool(1)) ## True print(bool(None)) ## True print(bool([])) ## True ###Output True True False True False False ###Markdown Functions can return a Boolean ###Code def myFunction(): return False print(myFunction()) if myFunction(): print("Yes") else: print("No") ###Output No ###Markdown Application 1 ###Code print(10>9) a = 6 b = 7 print(a==b) print(a!=a) ###Output True False False ###Markdown Python Operators ###Code print(10 + 9) print(10 - 9) print(10 * 2) ## Multiplication print(10 ** 2) ## Exponent print(10 / 2) ## Division : returns float print(10 / 3) ## Division 
: returns float print(10 // 3) ## Floor/Integer Division : returns int ###Output 19 1 20 100 5.0 3.3333333333333335 3 ###Markdown Python Bitwise Operators ###Code a = 60 b = 13 print(a & b) print(a | b) ###Output 12 61 ###Markdown Python Assignment Operators ###Code print(a) a=+3 ## [X] a = 3 print(a) a+=3 ## [O] a = a + 3 print(a) ###Output 63 3 ###Markdown Python Logical Operators ###Code a = True b = False print(a and b) print(a or b) print(not(a or b)) ###Output False True False ###Markdown Python Identity Operators ###Code print(a is b) print(a is not b) ###Output False True ###Markdown Operations and Expressions Boolean Operators ###Code a = 1 b = 2 print(a>b) print(a<b) print(a==a) print(a!=b) ###Output False True True True ###Markdown bool() function ###Code print(bool(15)) print(bool(True)) print(bool(1)) print(bool(False)) print(bool(0)) print(bool(None)) print(bool([])) ###Output True True True False False False False ###Markdown Functions Return a Boolean ###Code def myFunction(): return True print(myFunction()) def myFunction(): return False if myFunction(): print("True") else: print("False") ###Output False ###Markdown Arithmetic Operator ###Code print(10>9) print(10<9) print(10==9) print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #module division that shows the remainder after division print(10//3) #floor division, 3.33 print(10**2) #concatenation ###Output 15 5 50 2.0 0 3 100 ###Markdown Bitwise Operators ###Code a = 60 #0011 1100 b = 13 #0000 1101 print(a & b) print(a | b) print(a^b) print(a<<2) #0011 1100 0111 1000 #shift to the left print(a>>1) #0001 1110 carry bit -0 #shift to the right ###Output 12 61 49 240 30 ###Markdown Assignment Operator ###Code a+=3 #Same As a=a+3, a=60+3, a=63 print(a) ###Output 63 ###Markdown Logical Operator ###Code a=True b=False print(a and b) print(a or b) print(not(a or b)) a=60 b=13 print(a>b and b>a) print(a==a or b==b) print(not(a==a or b==b)) ###Output False True False ###Markdown Identity Operators 
###Code print(a is b) print(a is not b) ###Output False True ###Markdown Operations and Expressions Boolean Operators ###Code a = 1 b = 2 print(a>b) print(a<b) print(a==a) print(a!=b) ###Output False True True True ###Markdown Bool() Function ###Code print(bool(15)) print(bool(True)) print(bool(1)) print(bool(False)) print(bool(0)) print(bool(None)) print(bool([])) ###Output True True True False False False False ###Markdown Functions return a Boolean ###Code def myFunction(): return True print(myFunction()) def myFunction(): return False if myFunction(): print("True") else: print("False") def myFunction(): return True if myFunction(): print("True") else: print("False") ###Output True ###Markdown Relation Operator ###Code print(10>9) print(10<9) print(10==9) ###Output True False False ###Markdown Arithmetic Operator ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) # modulo division that shows the remainder after division print(10//3) #floor division, 3.33 print(10**2) #concatenation ###Output 15 5 50 2.0 0 3 100 ###Markdown Bitwise Operators ###Code a = 60 #0011 1100 b = 13 #0000 1101 print(a & b) print(a | b) print(a^b) print(a<<1) #0011 1100 0111 1000 -120 print(a<<2) print(a>>1) #0001 1110-30 carry bit-0 ###Output 12 61 49 120 240 30 ###Markdown Assignment Operator ###Code a+=3 #Same As a = a+3, a = 60+3, a=63 print(a) ###Output 63 ###Markdown Logical Operator ###Code a = True b = False print(a and b) print(a or b) print(not(a or b)) a>b and b<a a = 60 b = 13 print(a>b and b>a) print(a==a or b==b) print(not(a==a or b==b)) ###Output False True False ###Markdown Identity Operators ###Code print(a is b) print(a is not b) ###Output False True ###Markdown Operations and Expresions ###Code print(86==67) print(10<9) 86==67 10<9 a=532 b=564 print(a==b) print(a>b) print(a<b) print(bool(52)) print(bool("anything")) print(bool(False)) print(bool(0)) print(bool({})) print(bool(None)) def yes(): return True print(yes()) if yes(): print("aye") else: 
print("nah") a=6 b=7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #division print(10%5) #modulo division(remainder) print(10//5) #floor division(whole no.) a=60 #111100 b=13 #001101 print(a|b) print(a&b) x=1 x+=14 print(x) a=534 b=341 print(a is b) print(a is not b) ###Output False True ###Markdown Operations and Expressions Boolean Operators ###Code a = 10 b = 9 c = 8 print (a>b) c = print (a>b) print(10>9) print(10==9) print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(1)) print(bool(0)) print(bool(None)) print(bool([])) def myFunction(): return True print(myFunction()) def myFunction(): return True if myFunction(): print("True!") else: print("False") print(10>9) a=6 #0000 0110 b=7 #0000 0111 print(a==b) print(6==6) print(a!=a) ###Output True False True False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) print(10//3) print(10**2) ###Output 15 5 50 2.0 0 3 100 ###Markdown Bitwise Operators ###Code a= 60 # 0011 1100 b= 13 print(a|b) print(a^b) print(~a) print(a<<2) print(a>>2) # 0000 1111 ###Output 61 49 -61 240 15 ###Markdown Assignment Operators Logical Operators ###Code a = 6 b = 5 a<b and a==a print(a>b and a==a) print(a<b or b==a) ###Output True False ###Markdown Identity Operators ###Code print(a is b) a is not b ###Output False ###Markdown Boolean Operators ###Code a = 7 b =6 print(5>9) print(5<9) print(a>b) ###Output False True True ###Markdown bool() function ###Code print(bool("Machew")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) ###Output True True False False False True ###Markdown Functions can return a Boolean ###Code def my_Function(): return False #def= Definition of object print(my_Function()) if my_Function(): print("True") else: print("False") ###Output False ###Markdown Application 1 ###Code print(a==b) print(a<b) ###Output False False ###Markdown Python Operators ###Code print(2+5) print(2-5) 
print(2/3) print(2*5) print(2%5) #Remainder print(2//3) #without decimal print(2**2) #Raise ###Output 7 -3 0.6666666666666666 10 2 0 4 ###Markdown Bitwise operators ###Code c = 60 # binary 0011 1100 d = 13 # binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code h = True l = False h and l h or l not (h or l) ###Output _____no_output_____ ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Application 2 Python Assignment Operators ###Code x = 100 x+=11 # Same as x = x +3, x = 100+3=103 print(x) ###Output 111 ###Markdown Control Structure If Statement ###Code if a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Else Statement ###Code a= 10 b =10 if a>b: print("a is greater than b") elif a>b: print("a is greater than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If Statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If...Else Statement ###Code a = 10 b = 9 print("a is greater than b") if a>b else print('b is greater than a') ###Output a is greater than b ###Markdown And ###Code if a>b and b==b: print("both conditions are True") ###Output both conditions are True ###Markdown Or ###Code if a<b or b==b: print("the condition is True") ###Output the condition is True ###Markdown Nested If ###Code x = int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output 40 x is above 10 and also above 20 and also above 30 ###Markdown Loop Statement For Loop ###Code week = ['Sunday',"Monday",'Tuesday', "Wednesday","Thursday","Friday","Saturday"] for x in week: print(x) 
###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown The break statement ###Code #to display Sunday to Wednesday using For loop for x in week: print(x) if x=="Wednesday": break #to display only Wednesday using break statement for x in week: if x=="Wednesday": break print(x) ###Output Wednesday ###Markdown While Statement ###Code i =1 while i<6: print(i) i+=1 #same as i = i +1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a python program that displays no.3 using break statement ###Code i =0 while i<6: i+=1 if i==3: break print(i) ###Output 3 ###Markdown Boolean Operator ###Code x = 1 y = 2 print(x>y) print(10>11) print(10==10) print(10!=11) #using bool() function print(bool('Hello')) print(bool(15)) print(bool(1)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) ###Output True True True True False False False False ###Markdown Functions can return Boolean ###Code def myFunction(): return False print(myFunction()) def yourFunction():return False if yourFunction(): print("Yes!") else: print("No") ###Output No ###Markdown You Try! 
###Code a = 6 b = 7 print(a==b) print(a!=a) ###Output False False ###Markdown Arithmetic Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #modulo division, remainder print(10//5) #floor division print(10//3) #floor division print(10%3) #3 x 3 = 9 + 1 ###Output 15 5 50 2.0 0 2 3 1 ###Markdown Bitwise Operators ###Code a = 60 # 0011 1100 b = 13 # 0000 1101 print(a&b) print(a|b) print(a^b) print(~a) print(a<<1) #0111 1000 print(a<<2) #1111 0000 print(b>>1) #1 0000 0110 print(b>>2) #0000 0110 carry flag bit = 01 ###Output 12 61 49 -61 120 240 6 3 ###Markdown Python Assignment Operators ###Code a+=3#Same As a = a + 3 #Same As a = 60 + 3, a = 63 print(a) ###Output 63 ###Markdown Logical Operators ###Code #and logical operator a = True b = False print(a and b) print(not(a and b)) print(a or b) print(not(a or b)) print(a is b) print(a is not b) ###Output False True ###Markdown Boolean OperatorsBolean represent one of two values: True or False ###Code print(10>9) print(10<9) print(10==9) print(10!=9) print(bool(True)) print(bool(False)) print(bool(1)) print(bool(0)) print(bool([])) print(bool(None)) def myFunction(): return True print(myFunction()) #Boolean answer of a function def myFunction(): return True if myFunction(): print("YES!") else: print("NO!") a=6 b=7 print(a>b) print(a==b) print(a!=b) ###Output False False True ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10//5) #floor division print(10/3) #quotient print(10//3) #floor division print(10%3) #modulo print(10**2) #concatenation ###Output 15 5 50 2.0 2 3.3333333333333335 3 1 100 ###Markdown Python Bitwise Operators ###Code a = 60 #0011 1100 (0111 1000 , 1111 0000 , 0001 1110) b = 13 print(a&b) print(a|b) print(a<<1) # 0111 1000 print(a<<2) # 1111 0000 print(a>>1) # 0001 1110 a&b a|b a+=2 #same as a=a+2, a=60+2, a=62 print(a) ###Output 62 ###Markdown Logical Operator ###Code a = True b = True print(a and b) print(a or b) print(bool(a and b)) 
a = 60 b = 13 print((a>b) and (a<b)) (a>b) or (a<b) not(a>b) ###Output False ###Markdown Identity Operator ###Code print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code print(10>9) print(10==9) print(a>b) a=10 b=9 c=8 c = print(a>b) c print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(1)) print(bool(0)) print(bool(None)) print(bool([])) def myFunction(): return True print(myFunction()) def myFunction(): return True if myFunction(): print("Yes") else: print("No") print(10>9) a=6 b=7 print(a==b) print(a!=b) print(6==6) print(a!=a) ###Output True False True True False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) print(10//3) print(10**2) ###Output 15 5 50 2.0 0 3 100 ###Markdown Bitwise Operators ###Code a=60 #0011 1100 b=13 print(a^b) print(~a) print(a<<2) print(a>>2) # 0000 1111 ###Output 49 -61 240 15 ###Markdown Assignment Operators ###Code x=2 x+=3 #Same As x = x + 3 print(x) x ###Output 5 ###Markdown Logical Operators ###Code a=5 b=6 print(a>b and a==a) print(a<b or b==a) ###Output False True ###Markdown Identity Operator ###Code print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code print(10>9) print(9>10) print(10==9) a = 10 b = 9 print(a>b) print(b>a) print(a==b) ###Output True False ###Markdown Bool () Function ###Code print(bool(1)) print(bool(0)) print(bool(None)) print(bool("")) print(bool("a")) print(bool()) ###Output True False False False True False ###Markdown Functions can return Boolean ###Code def my_function(): return False print(my_function()) if my_function == True: print("Yes!") else: print("No!") ###Output False No! 
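In the cell above, the condition `if my_function == True:` compares the function object itself to `True`, which is always `False`, so the branch taken never depends on what the function returns. A minimal sketch of the difference (hypothetical function name):

```python
def my_function():
    return True

# Comparing the function object itself to True is always False:
print(my_function == True)
# Calling the function compares its return value instead:
print(my_function() == True)
# Idiomatic: use the call result directly as the condition
if my_function():
    print("Yes!")
else:
    print("No!")
```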
###Markdown Application 1 ###Code print(10>9) a= 6 b= 7 print(a==b) print(a!=a) ###Output True False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10/3) print(10*5) print(10//3) print(10%3) print(10**2) ###Output 15 5 3.3333333333333335 50 3 1 100 ###Markdown Bitwise Operators ###Code a = 60 b = 13 print (a&b) print(a|b) print(a^b) ###Output 12 61 49 ###Markdown Assignment Operators ###Code a += 3 print (a) ###Output 63 ###Markdown Logical Operators ###Code x = True y = False print(x and y) print(x or y) print(not(x and y)) ###Output False True True ###Markdown Identity Operators ###Code x is y x is x print(x is not y) ###Output True ###Markdown Boolean Operators ###Code a = 7 b = 6 print(10>9) print(10<9) print(a>b) ###Output True False True ###Markdown Bool () Function ###Code print(bool("Rockwel")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) ###Output True True False False False True ###Markdown Function can return a Boolean ###Code def my_Function(): return True print(my_Function()) if my_Function(): print("True") else: print("False") ###Output True ###Markdown Application 1 ###Code print (10>9) a=6 b=7 print(a==b) print(a!=b) ###Output True False True ###Markdown Phyton Operators ###Code print(10+5) print(10-5) print(10/3) print(10*5) print(10%5) print(10//3) #10/3 = 3.3333 print(10**2) ###Output 15 5 3.3333333333333335 50 0 3 100 ###Markdown Bitwise Operators ###Code c = 60 # binary 0011 1100 d = 13 # binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code h = True l = False h and l h or l not(h or l) ###Output _____no_output_____ ###Markdown Application 2 ###Code [] #Phython Assignment Oprators x = 100 x +=3 # Same as x = x +3, x = 100+3=103 print(x) ###Output 103 ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control Structure If Statement ###Code if a>b: print("a is greater than b") 
###Output _____no_output_____ ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is less than b ###Markdown Else Statement ###Code a = 10 b = 10 if a<b: print("a is less than b") elif a>b: print("a is greater than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If Statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If...Else Statement ###Code a = 10 b = 9 print("a is greater than b") if a>b else print('b is greater than a') ###Output a is greater than b ###Markdown And ###Code if a>b and b==b: print("Both conditions are true") ###Output Both conditions are true ###Markdown Or ###Code if a<b or b==b: print("The condition is True") ###Output The condition is True ###Markdown Nested If ###Code x = int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output 48 x is above 10 and also above 20 and also above 30 and also above 40 but not above 50 ###Markdown Loop Statement For Loop ###Code week = ['Sunday',"Monday",'Tuesday',"Wednesday","Thursday","Friday","Saturday"] for x in week: print(x) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown The Break Statement ###Code #to display Sunday to Wednesday using For loop for x in week: print(x) if x=="Wednesday": break #to display only Wednesday using break statement for x in week: if x=="Wednesday": break print(x) ###Output Wednesday ###Markdown While Statement ###Code i =1 while i<6: print(i) i+=1 #same as i = i+1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a phyton program that displays no. 
3 using break statement ###Code i =1 while i<6: if i==3: break i+=1 print(i) ###Output 3 ###Markdown Boolean Operators ###Code a = 7 b = 6 print(10>9) print(10<9) print(a>b) ###Output True False True ###Markdown Bool () Function ###Code print(bool("Xienina")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) ###Output True True False False False True ###Markdown Function can return a Boolean ###Code def my_Function(): return True print(my_Function()) if my_Function(): print("True") else: print("False") ###Output True ###Markdown Application 1 ###Code print (10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10/3) print(10*5) print(10%5) print(10//3) #10/3 = 3.3333 print(10**2) ###Output 15 5 3.3333333333333335 50 0 3 100 ###Markdown Bitwise Operators ###Code c = 60 # binary 0011 1100 d = 13 # binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code h = True l = False h and l h or l not(h or l) ###Output _____no_output_____ ###Markdown Application 2 ###Code #Assignment Operators (augmented assignments are statements, so they cannot go inside print()) x = 3 x+=3 print(x) x-=3 print(x) x*=3 print(x) x/=3 print(x) x%=3 print(x) x = 100 x +=3 # Same as x = x +3, x = 100+3=103 print(x) ###Output 103 ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control Structure If Statement ###Code if a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Else Statement ###Code a = 10 b = 10 if a<b: print("a is less than b") elif a>b: print("a is greater than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If Statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If...Else Statement ###Code a = 10 b = 9 print("a
is greater than b") if a>b else print('b is greater than a') ###Output a is greater than b ###Markdown And ###Code if a>b and b==b: print("Both conditions are true") ###Output Both conditions are true ###Markdown Or ###Code if a<b or b==b: print("The condition is True") ###Output The condition is True ###Markdown Nested If ###Code x = int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output 93 x is above 10 and also above 20 and also above 30 and also above 40 and also above 50 ###Markdown Loop Statement For Loop ###Code week = ['Sunday',"Monday",'Tuesday',"Wednesday","Thursday","Friday","Saturday"] for x in week: print(x) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown The Break Statement ###Code #to display Sunday to Wednesday using For loop for x in week: print(x) if x=="Wednesday": break #to display only Wednesday using break statement for x in week: if x=="Wednesday": break print(x) ###Output Wednesday ###Markdown While Statement ###Code i =1 while i<6: print(i) i+=1 #same as i = i+1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a Python program that displays no. 3 using break statement ###Code i =1 while i<6: if i==3: break i+=1 print(i) ###Output 3 ###Markdown Boolean Operators ###Code x=1 y=2 print(x>y) print(10>11) print(10==10) ###Output False False True ###Markdown functions using return boolean ###Code def myfunction():return True print(myfunction()) def yourfunction():return True if yourfunction(): print("Yes!") print("No!") ###Output Yes! No! 
###Markdown You Try ###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False ###Markdown Arithmetic Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #modulo division, remainder print(10//5) #floor division print(10//3) #floor division print(10%3) #3 x 3 = 9 + 1 ###Output 15 5 50 2.0 0 2 3 1 ###Markdown Bitwise Operators ###Code a=60 #0011 1100 b=13 #0000 1101 print (a&b) print(a|b) print(a^b) print(~a) print(a<<1) #0111 1000 print(a<<2) #1111 1000 print(b>>1) #1000 0110 print(b>>2) #0000 0011 ###Output 12 61 49 -61 120 240 6 ###Markdown Python Assignment Operators ###Code a+=3 # Same As a = a + 3 # Same As a = 60 + 3, a = 63 print(a) ###Output 63 ###Markdown Logical Operators ###Code #and logical operator a = True b = False print(a and b) print(not(a and b)) print(a or b) print(not(a or b)) print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code #Boolean represents one of two values: True or False print(10>9) print(10==9) print(10<9) a = 10 b = 9 print(a>b) print(a==a) print(a<b) print(bool("hello")) print(bool("15")) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) #allows you to evaluate and gives you def myFunction(): return True print(myFunction()) if myFunction(): print("Yes!") else: print("No!") print(10>9) a=6 b=7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #division - quotient print(10%5) #modulo division print(10%3) #modulo division print(10//3) #floor division print(10**2) #concatenation a = 60 #0011 1100 0000 1111 b = 13 #0000 1101 print(a & b) print(a | b) print(a ^ b) print(~a) print(a<<2) print(a>>2) print(b<<2) print(b>>2) x = 6 x += 3 #x = x+3 print(x) x%=3 #x = x%3, remainder 0 print(x) a=True b=False print(a and b) print(a or b) print(not(a and b)) print(not(a or b)) #negation print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code a = 7 b = 6 print(10>9) print(10<9) print(a>b) 
###Output True False True ###Markdown bool() function ###Code print(bool("Chelsea")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) ###Output True True False False False True ###Markdown Functions can return a Boolean ###Code def my_Function(): return False print(my_Function()) if my_Function(): print("True") else: print("False") ###Output False ###Markdown Application 1 ###Code print(10>9) g=6 f=7 print(g==f) print(g!=f) print(a==b) print(a!=b) ###Output False True ###Markdown Python Operators ###Code print(10+5) print(10-5) print(int(10/5)) print(10*5) print(10%5) print(10//3) #floor division: 10//3 = 3 print(10**2) #power ###Output 15 5 2 50 0 3 100 ###Markdown Bitwise Operators ###Code c = 60 #binary 0011 1100 d = 13 #binary 0000 1101 print(c&d) print(c|d) print(c^d) print(d<<2) ###Output 12 61 49 52 ###Markdown Logical Operators ###Code h = True l = False h and l h or l not(h or l) ###Output _____no_output_____ ###Markdown Application 2 ###Code #Python Assignment Operators x = 8 x += 3 print(x) x -= 3 print(x) x *= 3 print(x) x /= 3 print(x) x %= 3 print(x) #Python Assignment Operators x = 100 x+=3 #Same as x = x +3, x = 100 + 3 = 103 print(x) ###Output 103 ###Markdown Identity Operator ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control Structure If Statement ###Code if a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Else Statement ###Code a = 10 b = 10 if a<b: print("a is less than b") elif a>b: print("a is greater than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If Statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If...
Else Statement ###Code a = 10 b = 9 print("a is greater than b") if a>b else print("b is greater than a") ###Output a is greater than b ###Markdown And - both conditions are true ###Code if a>b and b==b: print("Both conditions are TRUE") ###Output Both conditions are TRUE ###Markdown Or ###Code if a>b or b==b: print("the condition is TRUE") ###Output the condition is TRUE ###Markdown Nested If ###Code x = int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") else: print("but not above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output 48 x is above 10 and also above 20 and also above 30 and also above 40 ###Markdown LOOP Statement For Loop ###Code week = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'] for x in week: print(x) ###Output Sun Mon Tue Wed Thu Fri Sat ###Markdown The break statement ###Code #to display Sunday to Wednesday for x in week: print(x) if x=="Wed": break #to display only Wednesday using break statement for x in week: if x=="Wed": break print(x) ###Output Wed ###Markdown While Statement ###Code i = 1 while i<6: print(i) i+=1 #same as i = i + 1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a Python program that displays no.
3 using break statement ###Code i = 1 while i<6: print(i) i+=1 #same as i = i + 1 if i==3: break print(i) i = 1 while i<6: if i==3: break i+=1 print(i) ###Output 2 3 ###Markdown Boolean OperatorsBooleans represent one of two values: True or False ###Code print(10>9) print(10<9) print(10==9) print(10!=9) print(bool(True)) print(bool(False)) print(bool(1)) print(bool(0)) print(bool([])) print(bool(None)) def myFunction():return False print(myFunction()) #Boolean answer of a function def myFunction():return True if myFunction(): print("Yes!") else: print("No!") a=6 b=7 print(a>b) print(a==b) ###Output False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10//5) #floor division print(10/3) #quotient print(10//3) #floor division print(10%3) #modulo print(10**2) #concatenation a=60 #0011 1100 , 0111 1000 , 1111 0000 b=13 print(a&b) print(a|b) print(a<<1) print(a<<2) print(a>>1) #0011 1100 , 0001 1110 a+=2 #same as a = a+2, a=60+2, a=62 print(a) ###Output 62 ###Markdown Logical Operators ###Code a = 60 b = 13 (a>b) and (a<b) (a>b) or (a<b) not(a>b) ###Output _____no_output_____ ###Markdown Identity Operator ###Code print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code a = 7 b = 10 print(10>9) ###Output True ###Markdown Bool ###Code print(bool("Neil")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(True)) print(bool(False)) ###Output True True False False True False ###Markdown Functions can return a boolean ###Code def my_Function(): return True print(my_Function()) if my_Function(): print("True") else: print("False") ###Output True ###Markdown Application 1 ###Code a = 6 b = 7 print(10>9) print(a==b) print(a!=a) ###Output True False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(int(10/5)) print(10*5) print(10%5) print(10//3) ###Output 15 5 2 50 0 3 ###Markdown Bitwise Operators ###Code #a=60, binary 0011 1100 #b=15, binary 0000 1111 print(a&b) 
print(a|b) print(a^b) print(a<<b) print(a>>b) ###Output 6 7 1 768 0 ###Markdown Application 2Assignment Operators ###Code a+=5 #Same As a = a+5 a-=5 #Same As a = a-5 a*=5 #Same As a = a*5 a/=5 #Same As a = a/5 a%=5 #Same As a = a%5 ###Output _____no_output_____ ###Markdown Logical Operators ###Code k = True l = False print(k and l) print(k or l) print(not(k or l)) ###Output False True False ###Markdown Identity Operators ###Code print(k is l) print(k is not l) ###Output False True ###Markdown Control StructureIf Statement ###Code v = 1 z = 1 if v<z: print("1 is less than 2") ###Output _____no_output_____ ###Markdown elif Statement ###Code if v<z: print("v is less than z") elif v!=z: print("v is not z") ###Output _____no_output_____ ###Markdown else Statement ###Code number = int(input()) if number>0: print("Positive") elif number<0: print("negative") else: print("number is equal to zero") ###Output 3 Positive ###Markdown Application 3 - Develop a Python program that will accept if a person is entitled to vote or not ###Code age = int(input()) if age >= 18: print("You are qualifed to vote") else: print("you are not qualified to vote") ###Output 18 You are qualifed to vote ###Markdown Nested if...else ###Code u = int(input()) if u>10: print("u is above 10") if u>20: print("u is above 20") if u>30: print("u is above 30") if u>40: print("u is above 40") if u>50: print("u is above 50") else: print("u is below 50") ###Output 3 ###Markdown Loop Structure ###Code week = ['Sunday','Monday','Tuesday','Wednesday','Thursday','Friday','Saturday'] Season=['rainy','sunny'] for x in week: for y in Season: print(y,x) ###Output rainy Sunday sunny Sunday rainy Monday sunny Monday rainy Tuesday sunny Tuesday rainy Wednesday sunny Wednesday rainy Thursday sunny Thursday rainy Friday sunny Friday rainy Saturday sunny Saturday ###Markdown Break Statement ###Code for x in week: print(x) if x == 'Thursday': break ###Output Sunday Monday Tuesday Wednesday Thursday ###Markdown While Loop 
###Code i=1 while i<=5: print(i) i+=1 ###Output 1 2 3 4 5 ###Markdown Application 4 - Create a Python program thatdisplays numbers from 1 to 7 ###Code j=1 while j<=7: print(j) j+=1 ###Output 1 2 3 4 5 6 7 ###Markdown Application 5 - Create a Python program that displays number 7 using while and break statement ###Code j=1 while j<=7: j+=1 if j == 7: print(j) ###Output 7 ###Markdown Boolean Operators ###Code print(10>9) print(10<9) print(10==9) a=10 b=9 print(a>b) print(a<b) print(a==b) ###Output True False False ###Markdown Bool Function ###Code print(bool(5)) print(bool("Maria")) print(bool(0)) print(bool(1)) print(bool(None)) print(bool([])) ###Output True True False True False False ###Markdown Functions can return a Boolean ###Code def myFunction(): return False print(myFunction()) if myFunction(): print("YES!") else: print("NO!") ###Output NO! ###Markdown Application 1 ###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False ###Markdown Python Operators ###Code print(10+9) print(10-9) print(10*2) print(10/2) print(10**2) ###Output 19 1 20 5.0 100 ###Markdown Python Bitwise Operators ###Code a=60 b=13 print(a&b) print(a|b) ###Output 12 61 ###Markdown Python Assignment Operators ###Code a+=3 print(a) ###Output 75 ###Markdown Logical Operators ###Code a=True b=False print(a and b) print(not(a or b)) ###Output False False ###Markdown Identity Operators ###Code a is b a is not b ###Output _____no_output_____ ###Markdown Boolean Operators ###Code print(10>9) print(10==9) print(10!=9) print(10<9) ###Output True False True False ###Markdown bool() Function ###Code print(bool("Maria")) print(bool(19)) print(bool([])) print(bool(0)) print(bool(1)) print(bool(None)) print(bool(False)) ###Output True True False False True False False ###Markdown Function can return a Boolean ###Code def myFunction(): return False print(myFunction()) def myFunction(): return True if myFunction(): print("Yes!") else: print("No!") ###Output Yes! 
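The `bool()` calls above all follow one rule worth stating explicitly: zero, `None`, and empty containers are falsy; everything else is truthy. A quick sketch (the sample list is my own choice, not from the notebook):

```python
# Truthiness of assorted values, per Python's bool() rules:
# zero, None, and empty containers are False; everything else is True.
samples = ["Maria", "", 0, 1, None, [], [0]]
truth = [bool(s) for s in samples]
print(truth)  # prints [True, False, False, True, False, False, True]
```

Note that `[0]` is truthy even though it contains a falsy element: only the container's emptiness matters.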
###Markdown Application 1 ###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) a=60 b=13 print(a>b) if a<b: print("a is less than b") else: print("a is greater than b") ###Output True a is greater than b ###Markdown Python Operators ###Code print(10+3) print(10-3) print(10*3) print(10%3) print(10/3) ###Output 13 7 30 1 3.3333333333333335 ###Markdown Bitwise Operators ###Code # a = 60, binary 0011 1100 # b = 13, binary 0000 1101 print(a&b) print(a|b) print(a^b) print(a<<1) print(a<<2) ###Output 12 61 49 120 240 ###Markdown Application 2 ###Code #Assignment Operators x=2 x+=3 #same as x=x+3 print(x) x-=3 #same as x=x-3 print(x) x*=3 #same as x=x*3 print(x) x/=3 #same as x=x/3 print(x) x%=3 #same as x=x%3 print(x) ###Output 5 2 6 2.0 2.0 ###Markdown Logical Operators ###Code k = True l = False print(k and l) print(k or l) print(not(k or l)) ###Output False True False ###Markdown Identity Operators ###Code k is l k is not l ###Output _____no_output_____ ###Markdown **Control Structure** If Statements ###Code v = 2 z = 1 if 1<2: print("1 is less than 2") ###Output 1 is less than 2 ###Markdown Elif Statement ###Code if v<z: print("v is less than z") elif v>z: print("v is greater than z") ###Output v is greater than z ###Markdown Else Statement ###Code number = int(input()) #to know if the number is positive or negative if number>0: print("Positive") elif number<0: print("Negative") else: print("number is equal to zero") ###Output _____no_output_____ ###Markdown Boolean Operators ###Code print(10>9) print(10==9) print(10<9) ###Output True False False ###Markdown bool() function ###Code print(bool("Maria")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) ###Output True True False False False True ###Markdown Functions can return a Boolean ###Code def my_Function(): return False print(my_Function()) if my_Function(): print("True") else: print("False") ###Output False ###Markdown Application 1 ###Code print(10>9) a=6 b=7 print(a==b) 
print(a!=a) ###Output True False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(int(10/5)) print(10*5) print(10%5) print(10//3) #floor division: 10//3 = 3 print(10**2) #power (**) ###Output 15 5 2 50 0 3 100 ###Markdown Bitwise Operators & = and | = or ^ = xor ~ = complement << = binary left shift >> = binary right shift ###Code c=60 #binary 0011 1100 d=13 #binary 0000 1101 print(c&d) print(c|d) print(c^d) print(c<<2) print(d<<2) print(c>>d) ###Output 12 61 49 240 52 0 ###Markdown Logical Operators ###Code h= True l= False h and l h or l not (h or l) ###Output _____no_output_____ ###Markdown Application 2 Python Assignment Operators ###Code x=4 x+=3 #same as x=x+3 print(x) x-=3 print(x) x*=3 print(x) x/=3 print(x) x%=3 print(x) ###Output 7 4 12 4.0 1.0 ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control Structure If statement ###Code a = 7 b = 6 if a>b: print("Haha") ###Output Haha ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Else Statement ###Code a = 10 b = 10 if a<b: print("a is less than b") elif a>b: print("a is greater than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If Statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If...Else Statement ###Code a = 10 b = 5 print("a is greater than b") if a>b else print("b is greater than a") ###Output a is greater than b ###Markdown And ###Code if a>b and b<a: print("Both conditions are TRUE") ###Output Both conditions are TRUE ###Markdown Or ###Code if a<b or b<a: print("the condition is true") ###Output the condition is true ###Markdown Nested If ###Code x = int(input()) if x>10: print("above ten") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also above 50") else: print("but not above 50")
###Output 52 above ten and also above 20 and also above 30 and also above 40 and also above 50 ###Markdown Loop Statement For Loop ###Code week = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"] for x in week: print(x) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown Break Statement ###Code #Sunday to Wednesday using For loop for x in week: print(x) if x=="Wednesday": break #Only Wednesday for x in week: if x=="Wednesday": break print(x) ###Output Wednesday ###Markdown While Statement ###Code i = 1 while i<6: print(i) i+=1 #same as i = i + 1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a python program that displays no.3 using break statement ###Code i = 1 while i<6: if i==3: break i+=1 print(i) ###Output 3 ###Markdown Boolean Operators ###Code #Booleans represent one of of two values: True or False print(10>9) print(10==9) print(9>10) a=10 b=9 print(a>b) print(a==a) print(b>a) print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) #allows you to evaluate and gives False in return def myFunction():return True print(myFunction()) def myFunction():return False if myFunction(): print("Yes") else: print("No") ###Output No ###Markdown You Try! 
###Code a=6 b=7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #division - quotient print(10%5) #modulo division print(10%3) #modulo division print(10//3) #floor division print(10**2) #concatenation a = 60 #0011 1100 b = 13 #0000 1101 print(a & b) print(a | b) print(a ^ b) print(a<<2) print(a>>2) x = 6 x+=3 #x = x+3 print(x) x%=3 #x = 6%3, remainder 0 print(x) a = True b = False print(a and b) print(a or b) print(not(a and b)) print(not(a or b)) #negation print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operator ###Code x=1 y=2 print(x>y) print(10>11) print(10==10) print(10!=11) #using bool()function print(bool('Hi')) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) ###Output True True True False False False False ###Markdown Boomlean Operator ###Code a = 1 b = 2 print(a>b) print(a<b) print(a==a) print(a!=b) ###Output False True True True ###Markdown Bool() function ###Code print(bool(15)) print(bool(True)) print(bool(1)) print(bool(False)) print(bool(0)) print(bool(None)) print(bool([])) ###Output True True True False False False False ###Markdown Functions return a Boolean ###Code def myFunction(): return True print(myFunction()) def myFunction(): return False if myFunction(): print("True") else: print("False") ###Output False ###Markdown Relational Operator ###Code print(10>9) print(10<9) print() ###Output True False ###Markdown Arithmetic Operator ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #modulo division that shows a remainder after divding print(10//3) #floor division, 3.33 (making whole number instead of remaider) print(10**5) #concatenation (means how 10 is multipliied) ###Output 15 5 50 2.0 0 3 100000 ###Markdown Bitwise Operator ###Code a = 60 #0011 1100 b = 13 #0000 1101 print(a & b) print(a|b) print(a^b) print(a<<2)#0011 1100 0111 print(a>>1)#0011 1110-30 ###Output 12 61 49 240 30 ###Markdown Assigment Operators ###Code a+=3 
#Same as a = a + 3, a = 60 + 3, a = 63 print(a) ###Output 63 ###Markdown Logical Operators ###Code a = True b = False print(a and b) print(a or b) print(not(a or b)) a>b and b<a a = 60 b = 13 print(a>b and b>a) print(a==a or b==b) print(not(a==a and b==b)) ###Output False True False ###Markdown Identity Operators ###Code print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operator ###Code a = 10 b = 9 #print(a>b) #print(a<b) #print(a==b) print(bool(a)) print(bool(None)) print(bool(False)) print(bool(True)) ###Output True False False True ###Markdown Functions can return a Boolean ###Code def daniel(): return False print(daniel()) if daniel(): print("W") else: print("L") ###Output L ###Markdown Bitwise operator ###Code D = 60 #0011 1100 R = 13 D&R print(D|R) print(D^R) print(D<<2) # 0011 1100 -> 1111 0000 ###Output 61 49 240 ###Markdown Logical operator ###Code v = True n = False v and n #conjunction = false v or n #disjunction = true not (v or n) #negation disjunction = false ###Output _____no_output_____ ###Markdown Assignment Operator ###Code logic = 100 logic*= 3 print(logic) ###Output 300 ###Markdown Identity Operator ###Code logic is daniel #false daniel is not logic #true ###Output _____no_output_____ ###Markdown Control Structure If statement ###Code if daniel is logic: print("omsim") else: print("no") ###Output no ###Markdown Elif statement ###Code daniel = 1 logic = 1 if daniel == logic: print("yes") elif daniel > logic: #false print("cap") else: #false print("no") ###Output yes ###Markdown Short Hand If Statement ###Code print ("yes") if daniel==logic else print("no") ###Output yes ###Markdown And ###Code if daniel==logic and daniel==daniel: print("yes") ###Output yes ###Markdown Nested if ###Code a = 49 if a>10: print("a is greater than 10") if a>20: print("a is greater than 20") if a>30: print("a is greater than 30") if a>40: print("a is greater than 40") if a>50: print("a is greater than 50") else: print("a is less than 50") 
###Output a is greater than 10 a is greater than 20 a is greater than 30 a is greater than 40 a is less than 50 ###Markdown For Loop ###Code week=['Sunday', 'Monday','Tuesday','Wednesday','Thursday','Friday','Saturday'] for x in week: print(x) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown Break ###Code week=['Sunday', 'Monday','Tuesday','Wednesday','Thursday','Friday','Saturday'] for x in week: print(x) if x=="Wednesday": break week=['Sunday', 'Monday','Tuesday','Wednesday','Thursday','Friday','Saturday'] for x in week: if x=="Wednesday": break print(x) ###Output Wednesday ###Markdown While Statement ###Code a = 1 while a<6: print(a) a+=1 b=1 while b<6: if b==3: break b+=1 print(b) ###Output 3 ###Markdown Create a program that displays ###Code c = 0 while c<10: c+=1 print("hello " + str(c)) ###Output hello 1 hello 2 hello 3 hello 4 hello 5 hello 6 hello 7 hello 8 hello 9 hello 10 ###Markdown displays integers less than 10 but not less than 3 ###Code for g in range(11): if g>3 and g<10: print(g) ###Output 4 5 6 7 8 9 ###Markdown ###Code print(10>9) print(10<9) print(10==9) a=10 b=9 print(a>b) print(a<b) print(a==b) ###Output True False False ###Markdown Bool Function ###Code print(bool(5)) print(bool("Maria")) print(bool(0)) print(bool(1)) print(bool(None)) print(bool([])) ###Output True True False True False False ###Markdown Functions can return a Boolean ###Code def myFunction(): return True print(myFunction()) True if myFunction(): print("yes") else: print("no") ###Output yes ###Markdown Application 1 ###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False ###Markdown Python Operators ###Code print(10%5) print(10//3) print(10**2) print(10+9) print(10-9) print(10*2) print(10//2) print(10*2) ###Output 19 1 20 5 20 ###Markdown Python Bitwise Operators ###Code a=60 b=13 print(a&b) print(a|b) ###Output 12 61 ###Markdown Python Assignment Operators ###Code a+3 #The same as a = a+3 a+=3 print(a) ###Output 69 
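The `a+=3` cells above generalize: every arithmetic operator has an augmented-assignment form that updates the variable in place. A sketch chaining them (the starting value 10 is my own choice):

```python
# Each augmented assignment below is shorthand for the comment on its right.
x = 10
x += 3   # x = x + 3   -> 13
x -= 1   # x = x - 1   -> 12
x *= 2   # x = x * 2   -> 24
x //= 5  # x = x // 5  -> 4 (floor division keeps an int)
x **= 2  # x = x ** 2  -> 16
x %= 7   # x = x % 7   -> 2
print(x)  # prints 2
```

Note that `x /= 3` (true division) would turn `x` into a float, which is why the `2.0` results appear in the outputs above.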
###Markdown Logical Operators ###Code a = True b = False a and b not(a or b) ###Output _____no_output_____ ###Markdown Identity Operators ###Code a is b a is not b ###Output _____no_output_____ ###Markdown Boolean Operators ###Code #Booleans represent one of two values: True or False print(10>9) print(10==9) print(9>10) a = 10 b = 9 print(a>b) print(a==a) print(b>a) print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) #allows you to evaulate and gives False in return def myFunction(): return True print(myFunction()) def myFunction(): return False if myFunction(): print("Yes") else: print("No") ###Output No ###Markdown You Try! ###Code a = 6 b = 7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #division - quotient print(10%5) #modulo division print(10%3) #modulo division print(10//3) #floor division print(10**2) #concatenation a = 60 #0011 1100 0000 1111 b = 13 # 0000 1101 print(a & b) print(a | b) print(a^b) print(a<<2) print(a>>2) x = 6 x+=3 #x = x+3 print(x) x%=3 #x = 6% 3, remainder 0 print(x) a = True b = False print(a and b) print(a or b) print(not(a and b)) print(not(a or b)) #negation print(a is b) print(a is not b) ###Output False True ###Markdown Boomlean Operator ###Code a = 1 b = 2 print(a>b) print(a<b) print(a==a) print(a!=b) ###Output False True True True ###Markdown Bool() function ###Code print(bool(15)) print(bool(True)) print(bool(1)) print(bool(False)) print(bool(0)) print(bool(None)) print(bool([])) ###Output True True True False False False False ###Markdown Functions return a Boolean ###Code def myFunction(): return True print(myFunction()) def myFunction(): return False if myFunction(): print("True") else: print("False") ###Output False ###Markdown Relational Operator ###Code print(10>9) print(10<9) print() ###Output True False ###Markdown Arithmetic Operator ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #modulo division that 
shows the remainder after dividing print(10//3) #floor division: 3.33 becomes the whole number 3 print(10**5) #exponentiation (10 multiplied by itself 5 times) ###Output 15 5 50 2.0 0 3 100000 ###Markdown Bitwise Operator ###Code a = 60 #0011 1100 b = 13 #0000 1101 print(a & b) print(a|b) print(a^b) print(a<<2) #1111 0000 = 240 print(a>>1) #0001 1110 = 30 ###Output 12 61 49 240 30 ###Markdown Assignment Operators ###Code a+=3 #Same as a = a + 3, a = 60 + 3, a = 63 print(a) ###Output 63 ###Markdown Logical Operators ###Code a = True b = False print(a and b) print(a or b) print(not(a or b)) a>b and b<a a = 60 b = 13 print(a>b and b>a) print(a==a or b==b) print(not(a==a and b==b)) ###Output False True False ###Markdown Identity Operators ###Code print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code #Booleans represent one of two values: True or False print(10>9) print(10==9) print(9>10) a=10 b=9 print(a>b) print(a==a) print(b>a) b==a print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) def Function():return True print(Function()) def Function(): return False if Function(): print("YES") else: print("NO") ###Output NO ###Markdown Try ###Code a=6 b=7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #Division print(10%6) #Prints remainder (Modulo Division) print(10//6) #Floor Division - Whole Number only print(10**2) #Raise to a certain power a=60 #0011 1100 b=13 #0000 1101 print(a&b) print(a|b) print(a^b) print(a<<2) print(a>>2) x=6 x+=3 print(x) x-=3 print(x) x%=3 print(x) a=True b=False print(a and b) print(a or b) print(not(a and b)) print(not(a or b)) print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code a = 7 b = 6 print(10>9) print(10<9) print(a>b) ###Output True False True ###Markdown Bool () Function ###Code print(bool("Jose")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False))
print(bool(True)) ###Output True True False False False True ###Markdown Function can return a Boolean ###Code def my_Function(): return True print(my_Function()) if my_Function(): print("True") else: print("False") ###Output True ###Markdown Application 1: ###Code print (10>9) a=6 b=7 print(a==b) print(a!=b) ###Output True False True ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10/3) print(10*5) print(10//3) #10/3 = 3.3333 print(10**2) ###Output 15 5 3.3333333333333335 50 3 100 ###Markdown Bitwise Operators ###Code c = 60 # binary 0011 1100 d = 13 # binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code h = True l = False h and l h or l not(h or l) ###Output _____no_output_____ ###Markdown Application 2 ###Code [] #Python Assignment Operators x = 100 x += 3 # same as x= x+3, x = 100+3=103 print(x) ###Output 103 ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown **Control** **structure** If Statement ###Code if a>b: print("a is greater than b") ###Output _____no_output_____ ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is less than b ###Markdown Else Statement ###Code a = 10 b = 10 if a<b: print("a is less than b") elif a>b: print(" a is greater than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short hand If..else statement ###Code a= 10 a= 9 print("a is greater than b") if a>b else print('b is greater than a') ###Output b is greater than a ###Markdown And ###Code if a>b and b==b: print(" Both conditions are true") ###Output _____no_output_____ ###Markdown Or ###Code if a<b or b==b: print("The condition is true") ###Output The condition is true ###Markdown Nested if ###Code if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also 
above 50") else: print("but not above 50") ###Output x is above 10 and also above 20 and also above 30 and also above 40 and also above 50 ###Markdown **Loop** **Statement** ###Code week=['Sunday',"Monday",'Tuesday',"Wednesday","Thursday","Friday","Saturday"] for x in week: print(x) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown The break Statement ###Code #to display Sunday to Wednesday using for loop for x in week: print(x) if x=="Wednesday": break ###Output Sunday Monday Tuesday Wednesday ###Markdown While Statement ###Code i =1 while i<6: print(i) i+=1 #same as i = i+1 ###Output 1 2 3 4 5 ###Markdown Application 3 ###Code i = 1 while i<6: if i==3: break i+=1 print(i) ###Output 2 3 ###Markdown Operations and Expressions Boolean Operator ###Code x=1 y=2 print(x>y) print(10>11) print(10==10) #equality symbol print(10!=11) #not equal #using bool() function print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) ###Output True True True False False False False ###Markdown Functions can return Boolean ###Code def myFunction(): return False print(myFunction()) def yourFunction(): return False if yourFunction(): print("Yes!") else: print("No") ###Output No ###Markdown You Try! ###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False ###Markdown Arithmetic Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #modulo division, remainder print(10//5) #floor division, whole number print(10//3) #floor division print(10%3) #3 x 3 = 9 + 1 ###Output 15 5 50 2.0 0 2 3 1 ###Markdown Bitwise Operators ###Code a=60 #0011 1100 b=13 #0000 1101 print(a&b) print(a|b) print(a^b) print(~a) print(a<<1) #0111 1000 print(a<<2) #1111 0000 print(b>>1) #0000 0110 print(b>>2) #0000 0011 carry flag bit=01 ###Output 12 61 49 -61 120 240 6 3 ###Markdown Python Assignment ###Code a+=3 #Same As a=a+3 #same As a=60+3.
a=63 print(a) ###Output 63 ###Markdown Logical Operators ###Code #and logical operator a=True b=False print(a and b) print(not(a and b)) print(a or b) print(not(a or b)) print(a is b) print(a is not b) ###Output False True ###Markdown Application 3 ###Code #Develop a Python program that will accept if the person is entitled to vote or not age = int(input()) if age>=18: print("You are qualified to vote") else: print("You are not qualified to vote") ###Output _____no_output_____ ###Markdown Nested If...Else ###Code u = int(input()) if u>10: print("u is above 10") if u>20: print("u is above 20") if u>30: print("u is above 30") if u>40: print("u is above 40") else: print("u is below 40") if u>50: print("u is above 50") else: print("u is below 50") ###Output _____no_output_____ ###Markdown Loop Structure ###Code week = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] season = ["Rainy", "Sunny"] for x in week: for y in season: print(y,x) ###Output _____no_output_____ ###Markdown Break Statement ###Code for x in week: print(x) if x == "Thursday": break ###Output _____no_output_____ ###Markdown While Loop ###Code i=1 while i<6: print(i) i+=1 ###Output _____no_output_____ ###Markdown Application 4 ###Code #Create a Python program that displays numbers from 1 to 4 using while loop statement n=1 while n<5: print(n) n+=1 ###Output _____no_output_____ ###Markdown Application 5 ###Code #Create a Python program that displays 4 numbers using while loop and break statement r=1 while r<=4: if r==4: print(r) r+=1 ###Output _____no_output_____ ###Markdown Boolean Operators ###Code print(10>9) print(10==9) print(10<9) ###Output True False False ###Markdown bool() function ###Code print(bool("Maria")) print(bool(19)) print(bool([])) print(bool(0)) print(bool(1)) print(bool(None)) print(bool(False)) ###Output True True False False True False False ###Markdown Function can return a Boolean ###Code def myFunction(): return True print(myFunction()) if 
myFunction(): print("Yes!") else: print("No!") ###Output Yes! ###Markdown Application 1 ###Code print(10>9) a=60 b=13 print(a==b) print(a!=b) print(a>b) if a<b: print("a is less than b") else: print("a is greater than b") ###Output False a is less than b ###Markdown Python Operators ###Code print(10+3) print(10-3) print(10*3) print(10//3) print(10%3) print(10/3) ###Output 13 7 30 3 1 3.3333333333333335 ###Markdown Bitwise Operation ###Code # a = 60, binary 0011 1100 # b = 13, binary 0000 1101 print(a&b) print(a|b) print(a^b) print(a<<2) ###Output 12 61 49 240 ###Markdown Application 2 ###Code #Assignment Operators x = 2 print(x) x +=3 # is the same as x = x+3, 2+3=5 print(x) x -=3 # is the same as x = x-3 print(x) x *=3 # is the same as x = x*3 print(x) x /=3 # is the same as x = x/3 print(x) x %=3 # is the same as x = x%3 print(x) ###Output 2 5 2 6 2.0 2.0 ###Markdown Logical Operator ###Code k, l = True, False print(k and l) print(k or l) print(not(k or l)) ###Output False True False ###Markdown Identity Operators ###Code k is l k is not l ###Output _____no_output_____ ###Markdown Control Structure If Statement ###Code v = 1 z = 2 if v<z: print("1 is less than 2") ###Output 1 is less than 2 ###Markdown Elif ###Code if v<z: print("1 is less than 2") elif v>z: print("v is greater than z") ###Output _____no_output_____ ###Markdown Else Statement ###Code number = int(input()) #to know if the number is positive or negative if number>0: print("positive") elif number<0: print("negative") else: print("v is equal to z") ###Output -5 negative ###Markdown Application - Develop a Python Program that will accept if a person is entitled to vote or not ###Code age = int(input()) if age>=18: print("You are qualified to vote") elif age<0: print("You have inputted an invalid age") else: print("You are not qualified to vote") ###Output -15 You have inputted an invalid age ###Markdown Nested If... 
Else ###Code u = int(input()) if u>10: print("u is above 10") if u>20: print("u is above 20") if u>30: print("u is above 30") if u>40: print("u is above 40") if u>50: print("u is above 50") else: print("u is below 50") ###Output 49 u is above 10 u is above 20 u is above 30 u is above 40 u is below 50 ###Markdown Loop Statement ###Code week = ["Sunday", 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] season = ['rainy', 'sunny'] for x in week: for y in season: print(y, x) ###Output rainy Sunday sunny Sunday rainy Monday sunny Monday rainy Tuesday sunny Tuesday rainy Wednesday sunny Wednesday rainy Thursday sunny Thursday rainy Friday sunny Friday rainy Saturday sunny Saturday ###Markdown The Break Statement ###Code for x in week: if x == "Thursday": break print(x) #To display Sunday to Thursday for x in week: if x == "Thursday": break print(x) ###Output Sunday Monday Tuesday Wednesday ###Markdown While Loop ###Code i= 1 while i<6: print(i) i +=1 #i+1 Assignment operator ###Output 1 2 3 4 5 ###Markdown Application 4 - Create a Python Program that displays numbers from 1 to 4 using while loop ###Code j= 1 while j<5: print(j) j +=1 ###Output 1 2 3 4 ###Markdown Application 5 - Create a Python Program that displays numbers from 1 to 4 using while loop and using break statement ###Code j= 1 while j<=4: if j ==4: print(j) j +=1 ###Output 4 ###Markdown Boolean Operators ###Code #Booleans represent one or two values: True or False print(10>9) print(10==9) print(9>10) a = 10 b = 9 print(a>b) print(a==a) print(b>a) b>a print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) #allows you to evaluate and gives False in return def myFunction(): return True print(myFunction()) def myFunction(): return False print(myFunction()) if myFunction(): print("Yes") else: print("No") ###Output False No ###Markdown You Try! 
###Code a = 6 b = 7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #division print(10%5) #modulo division print(10%3) #modulo division print(10//3) #floor division print(10**2) #concatenation a = 60 #0011 1100 b = 13 #0000 1101 print(a & b) print(a|b) print(a^b) print(a<<2) print(a>>2) x=6 x+=3 #x=x+3 print(x) x%=3 #x=6% remainder 0 print(x) a = True b = False print(a and b) print(a or b) print(not(a and b)) print(not(a or b)) #negation print(a is b) print(a is not b) ###Output False True ###Markdown Operations and Expressions Boolean Operators ###Code #booleans represent one of the two values: True and False print(10>9) print(10==9) print(9>10) a=10 b=9 print(a>b) print(a==b) print(b>a) print(bool("hello")) print(bool(15)) print(bool(False)) print(bool(None)) print(bool({})) print(bool(0)) print(bool([])) def myFunction(): return True print(myFunction()) if myFunction(): print("Yes") else: print("No") a=6 b=7 print(a==b) print(a!=a) a = 60 #0011 1100 b = 13 #0000 1101 print(a & b) print(a|b) print(a^b) print(a<<2) print(a>>2) x= 6 x+= 3 print(x) x%= 3 print(x) a = True b = False print(a and b) print(a or b) print(not(a and b)) print(not(a or b)) print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code a=10 print(10>9) print(10<9) print(10==9) print(a>9) ###Output True False False True ###Markdown bool() Function ###Code print(bool(1)) print(bool("Maria")) print(bool('Ana')) print(bool(0)) print(bool(None)) print(bool([])) ###Output True True True False False False ###Markdown Functions can Return a Boolean ###Code def myFunction(): return True print(myFunction()) if myFunction(): print("True") else: print("False") ###Output True ###Markdown Application 1 ###Code ###Output _____no_output_____ ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(int(10/5)) print(10%5) print(10/3) print(10//3) print(float(10**2)) ###Output 15 5 50 2 0 3.3333333333333335 3 100.0 ###Markdown Bitwise 
Operators ###Code a = 60 #0011 1100 b = 13 print(a&b) print(a|b) print(a^b) print(a<<1) print(a>>1) ###Output 12 61 49 120 30 ###Markdown Application 2 ###Code #Python Assignment Operators ###Output _____no_output_____ ###Markdown Logical Operators ###Code s = True t = False not(s and t) ###Output _____no_output_____ ###Markdown Identity Operators ###Code s is t s is not t ###Output _____no_output_____ ###Markdown Control Structure If Statement ###Code g = 200 h = 100 if g>h: print("g is greater than h") ###Output g is greater than h ###Markdown Elif Statement ###Code if g>h: print("g is greater than h") elif g<h: print("g is less than h") ###Output g is greater than h ###Markdown Else Statement ###Code if g>h: print("g is greater than h") elif g<h: print("g is less than h") else: print("g is equal to h") ###Output g is equal to h ###Markdown Short Hand If Statement... ###Code if g==h: print('g is equal to h') ###Output g is equal to h ###Markdown Short Hand If.. Else Statement ###Code print("G") if g>h else print("H") ###Output G ###Markdown Nested If... Else ###Code x = int(input()) if x>10: print("x is above 10 ") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") else: print("but not above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output 41 x is above 10 and also above 20 and also above 30 and also above 40 ###Markdown Application 3- Write a program (using if else) that determines if the input age is qualified to vote or not. The qualifying age is 18 years old and above. 
###Code age=int(input()) if age>=18: print("You are qualified to vote") else: print("You are not qualified to vote") ###Output 24 You are qualified to vote ###Markdown Loop Statement For Loop ###Code week = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', "Thursday", "Friday", "Saturday"] season = ['rainy', 'sunny'] for x in week: for y in season: print(y,x) ###Output rainy Sunday sunny Sunday rainy Monday sunny Monday rainy Tuesday sunny Tuesday rainy Wednesday sunny Wednesday rainy Thursday sunny Thursday rainy Friday sunny Friday rainy Saturday sunny Saturday ###Markdown The break statement ###Code for x in week: if x =="Friday": break print(x) # If you want to display Sunday to Friday for x in week: print(x) if x=="Friday": break ###Output Sunday Monday Tuesday Wednesday Thursday Friday ###Markdown Application 4 - To display only "Friday" ###Code for x in week: if x=="Friday": print(x) break ###Output Friday ###Markdown While loop ###Code i = 1 while i<6: print(i) i+=1 #same as i = i +1 ###Output 1 2 3 4 5 ###Markdown Application 5 - To display only no.2 using break statement ###Code i = 1 while i<6: if i==2: print(i) break i+=1 ###Output 2 ###Markdown Boolean Operators ###Code print(10>9) print(10==9) print(10>9) a=6 b=7 print(a==b) print(a!=b) ###Output True False True ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) print(10//3) print(10**2) ###Output 15 5 50 2.0 0 3 100 ###Markdown Bitwise Operators ###Code a=60 b=13 (a^b) ###Output _____no_output_____ ###Markdown Assignment Operators ###Code x=2 x+=3 #Same As x=x+3 print(x) ###Output 5 ###Markdown Logical Operators ###Code a =5 b = 6 print(a>b and a==a) print(a<b or b==a) ###Output False True ###Markdown Identity Operators ###Code print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code a=10 print(10>9) print(10<9) print(10==9) print(a>9) ###Output True False False True ###Markdown Bool() Function ###Code print(bool(1)) 
print(bool("Maria")) print(bool('Ana')) print(bool(0)) print(bool(None)) print(bool([])) ###Output True True True False False False ###Markdown Functions can Return a Boolean ###Code def myFunction(): return False print(myFunction()) if myFunction(): print("False") else: print("True") ###Output True ###Markdown Application 1 ###Code a=6 b=7 print(a==b) print(a!=a) ###Output False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) print(int(10/5)) print(10/3) print(10//3) print(10**5) print(float(10**5)) ###Output 15 5 50 2.0 0 2 3.3333333333333335 3 100000 100000.0 ###Markdown Bitwise Operators ###Code a=60 b=13 print(a&b) print(a|b) print(a^b) print(a<<1) print(a>>1) ###Output 12 61 49 120 30 ###Markdown Application 2 ###Code x=5 y=5 z=5 a=5 b=5 x += 3 y -= 3 z *= 3 a /= 3 b %= 3 print(x) print(y) print(z) print(a) print(b) ###Output 8 2 15 1.6666666666666667 2 ###Markdown Logical Operators ###Code s=True t=False not(s and t) ###Output _____no_output_____ ###Markdown Identity Operators ###Code s is t s is not t ###Output _____no_output_____ ###Markdown Control Structure If Statement ###Code g=100 h=50 if g>h: print("g is greater than h") ###Output g is greater than h ###Markdown Elif Statement ###Code a=30 b=50 if a>b: print("a is greater than b") elif a<b: print("a is less than b") ###Output a is less than b ###Markdown Else Statement ###Code x=20 y=20 if x>y: print("x is greater than y") elif x<y: print("x is less than y") else: print("x is equal to y") ###Output x is equal to y ###Markdown Short Hand If.. Else Statement ###Code print("G") if g>h else print("H") ###Output G ###Markdown Nested If.. 
Else ###Code x=41 if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output x is above 10 and also above 20 and also above 30 and also above 40 but not above 50 ###Markdown Application 3 - Write a program that determines if the input age is qualified to vote or not. The qualifying age is 18 years old and above. ###Code age=int(input("Your age: ")) if age>17: print("You are qualified to vote!") else: print("You're not allowed to vote!") ###Output Your age: 18 You are qualified to vote! ###Markdown Loop Statement For Loop ###Code week=['Sunday','Monday','Tuesday','Wednesday','Thursday','Friday','Saturday',] season=['rainy','sunny'] for x in week: for y in season: print(y,x) ###Output rainy Sunday sunny Sunday rainy Monday sunny Monday rainy Tuesday sunny Tuesday rainy Wednesday sunny Wednesday rainy Thursday sunny Thursday rainy Friday sunny Friday rainy Saturday sunny Saturday ###Markdown Break Statement ###Code for x in week: if x=="Friday": break print(x) #To stop at Friday ###Output Sunday Monday Tuesday Wednesday Thursday Friday ###Markdown Application 4 - To display only "Friday" ###Code for x in week: if x=="Friday": print(x) break ###Output Friday ###Markdown While Loop ###Code i=1 while i<6: print(i) i+=1 ###Output 1 2 3 4 5 ###Markdown Application 5 - To display only no.2 using break statement ###Code i=1 while i<6: if i==2: print(i) break i+=1 ###Output 2 ###Markdown Operations and Expressions Boolean Operators ###Code a=1 b=2 print(a>b) print(a<b) print(a==a) print(a!=b) ###Output False True True True ###Markdown bool() function ###Code print(bool(15)) print(bool(True)) print(bool(1)) print(bool(False)) print(bool(0)) print(bool(None)) print(bool([])) ###Output True True True False False False False ###Markdown Functions return a Boolean ###Code def myFunction(): return True print(myFunction()) def 
myFunction(): return True if myFunction(): print("True") else: print("False") ###Output True ###Markdown Relational Operator ###Code print(10>9) print(10<9) print(10==9) ###Output True False False ###Markdown Arithmetic Operator ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%2) #modulo division that shows the remainder after dividing print(10//3) #floor division, 3.33 print(10**2) #exponentiation ###Output 15 5 50 2.0 0 3 100 ###Markdown Bitwise Operators ###Code a=60 #0011 1100 b=13 #0000 1101 print(a&b) print(a|b) print(a^b) print(a<<1) #0011 1100 0111 1000 - 120 print(a>>1) #0001 1110 carry bit-0 ###Output 12 61 49 120 30 ###Markdown Assignment Operator ###Code a+=3 #Same as a=a+3, a=60+3, a=63 print(a) ###Output 63 ###Markdown Logical Operator ###Code a = 60 b = 13 print(a>b and b>a) print(a==a or b==b) print(not(a==a or b==b)) ###Output False True False ###Markdown Identity Operators ###Code print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operator ###Code x = 1 y = 2 print(x>y) print(x<y) print(10==10) print(10!=11) #using bool()function print(bool("Hello")) print(bool(15)) print(bool(1)) ###Output True True True ###Markdown Functions can return Boolean ###Code def myfunction():return True print(myfunction()) False ###Output True ###Markdown For you ###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False ###Markdown Arithmetic Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #modulo division, remainder print(10//5) #floor division print(10//3) #floor division print(10%3) #3 x 3 = 9 + 1 ###Output 15 5 50 2.0 0 2 3 1 ###Markdown Bitwise Operator ###Code a=60 #0011 1100 b=13 #0000 1101 print(a&b) print(a|b) print(a^b) print(-a) print(a<<1) #0111 1000 print(a>>2) #0000 1111 print(b<<1) #0001 1010 print(b>>1) #0000 0110 carry flag bit=1 ###Output 12 61 49 -60 120 15 26 6 ###Markdown Logical Operators ###Code #and logical operator a=True b=False print(a and b) print(not(a 
and b)) print(a or b) print(not a or b) print(a is b) print(a is not b) ###Output False True True False False True ###Markdown Boolean Operators ###Code print (12>6) print (12==6) a = 12 b = 6 print (a>b) print(bool("Good day!")) print(bool(10)) print(bool(False)) print(bool([])) def myFunction(): return True print (myFunction()) def myFunction():return False if myFunction(): print("True") else: print("False") print (12>6) a = 6 # 0000 0110 b = 7 # 0000 0111 print (a==b) print (a!=a) print (7==7) ###Output True False False True ###Markdown Python Operators ###Code print(12+6) print(12-6) print(12*6) print(12/6) print(6%12) print(12//6) print(12**6) ###Output 18 6 72 2.0 6 2 2985984 ###Markdown Bitwise Operation ###Code a=45 # 0010 1101 b=18 # 0001 0010 print (a^b) print (~a) print (a<<5) ###Output 63 -46 1440 ###Markdown Assignment Operators ###Code x=17 x+=26 # Same as x=x+26 print (x) ###Output 43 ###Markdown Logical Operators ###Code a = 18 b = 22 print (a>b and b==b) print (b>a and a==b) ###Output False False ###Markdown Identity Operator ###Code print(b is b) print(b is not b) print(b is not x) ###Output True False True ###Markdown **Boolean Operators** ###Code a = 7 b = 6 print(10>9) print(10<9) ###Output True False ###Markdown **Bool () function** ###Code #if true print(bool("Brendon")) print(bool(1)) #if false print(bool(0)) print(bool(False)) ###Output True True False False ###Markdown **Functions can return a Boolean** ###Code def my_Function(): return True print (my_Function()) if my_Function(): print ("True") else: print ("False") ###Output True ###Markdown **Python Operators** ###Code print(15+5) print(15-5) print(15/3) print(15*5) print(15%5) print(15/3) print(15**5) ###Output 20 10 5.0 75 0 5.0 759375 ###Markdown **Bitwise Operators** ###Code c = 60 #Binary 0011 1100 d = 13 #Binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown **Logical Operators** ###Code h = True g = False h and g h or g not (h or g) ###Output 
_____no_output_____ ###Markdown **Application 2** ###Code x = 100 x += 3 #Same as x = x + 3, x = 100 + 3 = 103 print(x) ###Output 103 ###Markdown **Identity Operators** ###Code h is 1 h is not 1 ###Output _____no_output_____ ###Markdown Control Structure **If Statement** ###Code if a>b: print("a is greater than b") ###Output a is greater than b ###Markdown **Elif Statement** ###Code if a>b: print("a is greater than b") elif a<b: print("a is lesser than b") ###Output a is greater than b ###Markdown **Else Statement** ###Code a = 10 b = 10 if a>b: print("a is greater than b") elif a<b: print("a is lesser than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown **Short Hand If Statement** ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown **Short hand if...else statement** ###Code a = 10 b = 9 print("a is greater than b") if a>b else print("b is greater than a") ###Output a is greater than b ###Markdown **And** ###Code if a<b and b==b: print("both conditions are True") ###Output _____no_output_____ ###Markdown **Or** ###Code if a<b and b==b: print("The Condition is True") ###Output _____no_output_____ ###Markdown **Nested If Statement** ###Code x = int(input()) if x>10: print("x is above 10") if x>20: print("x is above 20") if x>30: print("x is above 30") if x>40: print("x is above 40") if x>50: print("x is above 50") else: print("but not above 50") ###Output 30 x is above 10 x is above 20 ###Markdown **For Loop Statement** ###Code week = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] for x in week: print(x) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown **The Break Statement** ###Code #to display only Friday using break statement for x in week: if x == "Friday": break print(x) ###Output Friday ###Markdown **While Statement** ###Code i =1 while i < 6: print(i) i += 1 #same as i = i + 1 ###Output 1 2 3 4 5 ###Markdown **Application 3: Create a Program that 
displays no.3 using break statement** ###Code i = 1 while i < 6: if i==3: break i += 1 print(i) ###Output 3 ###Markdown Boolean Operators Booleans represent two values: True and False ###Code print(10>9) print(10==9) print(10!=9) print(10<9) print(bool(True)) print(bool(False)) print(bool(1)) print(bool(0)) print(bool([])) print(bool(None)) def myFunction():return True print(myFunction()) #Boolean answer of a function def myFunction():return True if myFunction(): print("Yes!") else: print("No!") a=6 b=7 print(a==b) print(a!=a) ###Output False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10//5) #floor division print(10//3) #floor division print(10%3) #modulo print(10**2) #exponentiation a = 60 #0011 1100, 0111 1000 b = 13 print(a&b) print(a|b) print(a<<1) a+=2 #same as a = a+2, a=60, a=62 print(a) ###Output 62 ###Markdown Logical Operators ###Code a = True b = False print(a and b) print(a or b) a = 60 b = 13 (a>b) and (a<b) (a>b) or (a<b) not(a>b) ###Output _____no_output_____ ###Markdown Identity Operator ###Code print(a is b) print(a is not b) ###Output False True ###Markdown Operations and Expressions Boolean Operators ###Code a=int(10) b=int(9) print(10>9) print(10==9) print(10<9) print(a>b) print(a==b) print(a<b) print(bool("Hi")) print(bool(99)) print(bool(True)) print(bool(False)) print(bool()) print(bool(0)) def myfunction(): return True print(myfunction()) def myfunction(): return True if(myfunction()): print("Yes") else: print("No") print(10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) print(10//3) print(10**2) ###Output 15 5 50 2.0 0 3 100 ###Markdown Bitwise Operators ###Code a=60 #0011 1100 b=13 # print(a^b) print(~a) print(a<<2) #1111 0000 print(a>>2) #0000 1111 ###Output 49 -61 240 15 ###Markdown Assignment Operators ###Code x=2 x+=3 #Same as x = x+3 print(x) x ###Output 5 ###Markdown Logical 
Operators ###Code x=5 print(x<3 and x>5) a=5 b=6 print(a>b and a==b) a<b or a==b ###Output False ###Markdown Identity Operators ###Code print(a is b) a is not b ###Output False ###Markdown Boolean Operators ###Code print(10>9) print(10<9) print(10==9) a=10 b=9 print(a>b) print(a<b) print(a==b) ###Output True False False ###Markdown Bool Function ###Code print(bool(5)) print(bool("Maria")) print(bool(0)) print(bool(1)) print(bool(None)) print(bool([])) ###Output True True False True False False ###Markdown Functions can return a boolean ###Code def myFunction(): return True print(myFunction()) if myFunction(): print("yes") else: print("No") ###Output yes ###Markdown Application 1 ###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False ###Markdown Python Operators ###Code print(10+9) print(10-9) print(10*2) print(10/2) print(10**2) ###Output 19 1 20 5.0 100 ###Markdown Python Bitwise Operators ###Code a=60 b=13 print(a&b) print(a|b) ###Output 12 61 ###Markdown Python Assignment Operators ###Code a+3 a+=3 print(a) ###Output 63 ###Markdown Logical Operators ###Code a=True b=False a and b not(a or b) ###Output _____no_output_____ ###Markdown Identity Operator ###Code a is b a is not b print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code ##Booleans represent one of two values: True or False print(10>9) print (10==9) a = 5 b = 10 print(a>b) print(b>a) print(bool("Hello")) print(bool()) print(bool(8)) def myFunction(): return False if myFunction(): print("Yes") else: print("No") a = 6 b = 7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #division - quotient print(10%5) #modula division print(10%3) #modula division print(10//3) #floor division print(10**2) #concatenation a = 60 #0011 1100 b = 13 #0000 1101 print(a & b) print(a | b) print(a ^ b) print(a<<2) print(a>>2) x = 6 x += 3 #x = X+3 print(x) a = True b = False print(a and b) print(a or b) print(not(a and b)) print(not(a or b)) 
#negation print(a is b) print(a is not b) ###Output False True ###Markdown Operations and Expressions Boolean Operators ###Code print (6>9) print (6==9) print (6!=9) print (6<9) ###Output False False True True ###Markdown bool() function ###Code print (bool("Tinola")) print (bool(5)) print (bool([])) print (bool(0)) print (bool(1)) print (bool(None)) print (bool(False)) ###Output True True False False True False False ###Markdown Function returning a boolean ###Code def HerFunction(): return True print (HerFunction()) if HerFunction(): print ("good") else: print ("bad") ###Output good ###Markdown bool logic ###Code print (5>3) q=23 w=45 print (q==w) print (q!=w) ###Output True False True ###Markdown Operators ###Code print(69+2) print(69-2) print(69*2) print(69/2) print(69%2) print(69//2) ###Output 71 67 138 34.5 1 34 ###Markdown Bitwise operators ###Code print(q&w) print(q|w) print(q^w) print(q<<w) print(q>>w) #q=23 // 00010111 #w=45 // 00101101 ###Output 5 63 58 809240558043136 0 ###Markdown Assignment operators ###Code q+=3 q-=3 q*=3 q/=3 q%=3 ###Output _____no_output_____ ###Markdown Logical operators ###Code a=True s=False print (a and s) print (a or s) print (not(a or s)) print (not(a and s)) ###Output False True False True ###Markdown Identity operators ###Code print (a is s) print (a is not s) ###Output False True ###Markdown **Control structures** If ###Code z = 4 x = 6 if z<x: print ("z is less than x") ###Output z is less than x ###Markdown Elif ###Code if z>x: print ("z is more than x") elif z<x: print ("z is less than x") elif z==x: print ("z and x are equal") ###Output z is less than x ###Markdown Else ###Code num = int(input()) if num>0: print ("happy and positive") elif num<0: print ("bitter and negative") else: print ("neutral and equal") ###Output -0 neutral and equal ###Markdown **voter qualification program** ###Code age = int(input()) if age>=18: print ("you can vote") else: print ("you cannot vote") ###Output 18 you can vote ###Markdown nested 
if else ###Code e=int(input()) if e>1: print ("e is more than 1") if e>10: print ("e is more than 10") if e>20: print ("e is more than 20") else: print ("e is less than 20") ###Output 25 e is more than 1 e is more than 10 e is more than 20 ###Markdown **loop structure** ###Code color = ["yellow", "blue", "red"] shade = ["light", "dark"] for a in color: for b in shade: print (b,a) ###Output light yellow dark yellow light blue dark blue light red dark red ###Markdown break ###Code for a in color: print (a) if a=="blue": break ###Output yellow blue ###Markdown while loop ###Code x=1 while x<=5: print(x) x+=1 ###Output 1 2 3 4 5 ###Markdown Boolean Operators ###Code print(10>9) print(10==9) print(10!=9) print(10<9) ###Output True False True False ###Markdown bool() function ###Code print(bool("Maria")) print(bool(19)) print(bool([])) print(bool(0)) print(bool(1)) print(bool(None)) print(bool(False)) ###Output True True False False True False False ###Markdown Function can return a Boolean ###Code def myFunction(): return True print(myFunction()) if myFunction(): print("Yes!") else: print("No!") ###Output Yes! 
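###Markdown A boolean-returning function can also take a parameter, so the same test can be reused instead of repeating the comparison. A small sketch in the spirit of the voting application above (the helper name `is_qualified_to_vote` is ours, not part of the exercises):

```python
def is_qualified_to_vote(age):
    # Return the boolean result of the comparison directly,
    # instead of writing if/else around explicit True and False.
    return age >= 18

# The returned boolean works anywhere a condition is expected.
for age in [15, 18, 24]:
    if is_qualified_to_vote(age):
        print(age, "is qualified to vote")
    else:
        print(age, "is not qualified to vote")
```

The comparison `age >= 18` already evaluates to `True` or `False`, so returning it directly keeps the function to a single line.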
###Markdown Application 1 ###Code a=60 b=13 print(a>b) if a<b: print("a is less than b") else: print("a is greater than b") ###Output True a is greater than b ###Markdown Python Operators ###Code print(10+3) print(10-3) print(10*3) print(10//3) print(10%3) print(10/3) ###Output 13 7 30 3 1 3.3333333333333335 ###Markdown Bitwise Operators ###Code # a = 60 , binary 0011 1100 #b = 13, binary 0000 1101 print(a&b) print(a|b) print(a^b) print(a<<1) print(a<<2) ###Output 12 61 49 120 240 ###Markdown Application 2 ###Code #Assignment Operators x = 2 x+=3 # the same as x = x+3 , x = 2+3=5 print(x) ###Output 5 ###Markdown Logical Operators ###Code k = True l = False print(k and l) print(k or l) print(not(k or l)) ###Output False True False ###Markdown Identity Operators ###Code k is l k is not l ###Output _____no_output_____ ###Markdown Control Structure If Statement ###Code v = 1 z = 1 if 1<2: print("1 is less than 2") ###Output 1 is less than 2 ###Markdown Elif Statement ###Code if v<z: print("v is less than z") elif v>z: print("v is greater than z") ###Output v is greater than z ###Markdown Else Statement ###Code number = int(input()) # to know if the number is positive or negative if number>0: print("positive") elif number<0: print("negative") else: print("number is equal to zero") ###Output 5 positive ###Markdown Application 3 - Develop a Python program that will accept if a person is entitled to vote or not ###Code age = int(input()) if age>=18: print("You are qualified to vote") else: print("You are not qualfied to vote") ###Output 15 You are not qualfied to vote ###Markdown Nested If... 
Else ###Code u = int(input()) if u>10: print("u is above 10") if u>20: print("u is above 20") if u>30: print("u is above 30") if u>40: print("u is above 40") if u>50: print("u is above 50") else: print("u is below 50") ###Output 49 u is above 10 u is above 20 u is above 30 u is above 40 u is below 50 ###Markdown Loop Structure ###Code week = ['Sunday','Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] season = ["rainy","sunny"] for x in week: for y in season: print(y,x) ###Output rainy Sunday sunny Sunday rainy Monday sunny Monday rainy Tuesday sunny Tuesday rainy Wednesday sunny Wednesday rainy Thursday sunny Thursday rainy Friday sunny Friday rainy Saturday sunny Saturday ###Markdown The break statement ###Code for x in week: if x == "Thursday": break print(x) for x in week: if x == "Thursday": break print(x) # To display Sunday to Thursday for x in week: print(x) if x=="Thursday": break ###Output Sunday Monday Tuesday Wednesday Thursday ###Markdown While loop ###Code i=1 while i<6: print(i) i+=1 ###Output 1 2 3 4 5 ###Markdown Application 4 - Create a Python program that displays numbers from 1 to 4 using While loop statement ###Code j = 1 while j<=4: print(j) j+=1 ###Output 1 2 3 4 ###Markdown Application 5 Create a Python program that displays number 4 using While loop statement and break statement ###Code j = 1 while j<=4: if j==4: print(j) j+=1 ###Output 4 ###Markdown BOOLEAN OPERATORS ###Code print(10>9) print(10<9) print(10==9) print(10!=9) a=10 b=9 print(a>b) print(a<b) print(a==b) print(a!=b) print(bool(5)) print(bool("Ria")) print(bool(0)) print(bool(1)) print(bool(None)) print(bool([])) ###Output True True False True False False ###Markdown FUNCTIONS CAN RETURN A BOOLEAN ###Code def myFunction(): return False print(myFunction()) if myFunction(): print("yes") else: print("no") ###Output no ###Markdown Application 1 ###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False ###Markdown PYTHON OPERATORS ###Code 
print(10+9) print(10-9) print(10*2) print(10/2) print(10**2) ###Output 19 1 20 5.0 100 ###Markdown PYTHON BITWISE OPERATIONS ###Code a=60 b=13 print(a&b) print(b|a) ###Output 12 61 ###Markdown PYTHON ASSIGNMENT OPERATORS ###Code a+=3 #The same as a=a+3 print(a) ###Output 63 ###Markdown LOGICAL OPERATORS ###Code a = True b = False a and b not (a or b) ###Output _____no_output_____ ###Markdown Boolean Operators ###Code print(10>9) print(10==9) print(10!=9) print(10<9) ###Output True False True False ###Markdown bool() Function ###Code print(bool("Maria")) print(bool(19)) print(bool([])) print(bool(0)) print(bool(1)) print(bool(None)) print(bool(False)) ###Output True True False False True False False ###Markdown Function can return a Boolean ###Code def myFunction(): return False print(myFunction()) def myFunction(): return True if myFunction(): print("Yes!") else: print("No!") ###Output Yes! ###Markdown Application 1 ###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) a=60 b=13 print(a>b) if a<b: print("a is less than b") else: print("a is greater than b") ###Output True a is greater than b ###Markdown Python Operators ###Code print(10+3) print(10-3) print(10*3) print(10%3) print(10/3) ###Output 13 7 30 1 3.3333333333333335 ###Markdown Bitwise Operators ###Code c = 60 # binary 0011 1100 d = 13 # binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code k = True l = False print(k and l) print(k or l) print(not(k or l)) ###Output False True False ###Markdown Application 2 ###Code #Assignment Operators x=2 x+=3 #same as x=x+3 print(x) x-=3 #same as x=x-3 print(x) x*=3 #same as x=x*3 print(x) x/=3 #same as x=x/3 print(x) x%=3 #same as x=x%3 print(x) ###Output 5 2 6 2.0 2.0 ###Markdown Identity Operators ###Code k is l k is not l ###Output _____no_output_____ ###Markdown **Control Structure**If Statements ###Code v = 2 z = 1 if 1<2: print("1 is less than 2") ###Output 1 is less than 2 ###Markdown Elif Statement 
###Code if v<z: print("v is less than z") elif v>z: print("v is greater than z") ###Output v is greater than z ###Markdown Else Statement ###Code number = int(input()) #to know if the number is positive or negative if number>0: print("Positive") elif number<0: print("Negative") else: print("number is equal to zero") ###Output 0 number is equal to zero ###Markdown Short Hand If Statement ###Code i = 10 if i < 15: print("i is less than 15") ###Output i is less than 15 ###Markdown Short Hand If...Else Statement ###Code a = 10 b = 9 print("a is greater than b") if a>b else print('b is greater than a') if a>b and b==b: print("Both conditions are true") if a<b or b==b: print("The condition is True") ###Output The condition is True ###Markdown Nested If ###Code x = int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output 49 x is above 10 and also above 20 and also above 30 and also above 40 but not above 50 ###Markdown **Loop Statement** For Loop ###Code week = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] season = ["Rainy", "Sunny"] for x in week: for y in season: print(y,x) ###Output Rainy Sunday Sunny Sunday Rainy Monday Sunny Monday Rainy Tuesday Sunny Tuesday Rainy Wednesday Sunny Wednesday Rainy Thursday Sunny Thursday Rainy Friday Sunny Friday Rainy Saturday Sunny Saturday ###Markdown Break Statement ###Code for x in week: print(x) if x == "Thursday": break for x in week: if x=="Wednesday": break print(x) ###Output Wednesday ###Markdown While Loop ###Code i=1 while i<6: print(i) i+=1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a Python program that displays no. 
3 using break statement ###Code i =1 while i<4: if i==3: break i+=1 print(i) ###Output 3 ###Markdown Application 4 ###Code #Create a Python program that displays numbers from 1 to 4 using while loop statement n=1 while n<5: print(n) n+=1 ###Output 1 2 3 4 ###Markdown Application 5 ###Code #Create a Python program that displays 4 numbers using while loop and break statement r=1 while r<=4: if r==4: print(r) r+=1 ###Output 4 ###Markdown **BOOLEAN OPERATORS** ###Code a = 10 b = 9 c = 8 print (10 > 9) print (10 == 9) print (10 < 9) print (a) print (a > b) c = print (a > b) c ##true print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(1)) ##false print(bool(False)) print(bool(0)) print(bool(None)) print(bool([])) def myFunction(): return True print(myFunction()) def myFunction(): return True if myFunction(): print("Yes/True") else: print("No/False") print(10>9) a = 6 #0000 0110 b = 7 #0000 0111 print(a == b) print(a != b) ###Output True False True ###Markdown **PHYTHON OPERATORS** ###Code print(10 + 5) print(10 - 5) print(10 * 5) print(10 / 5) print(10 % 5) print(10 // 3) print(10 ** 2) ###Output 15 5 50 2.0 0 3 100 ###Markdown **BITWISE OPERATORS** ###Code a = 60 #0011 1100 b = 13 print (a^b) print (~a) print (a<<2) print (a>>2) #0000 1111 ###Output 49 -61 240 15 ###Markdown **ASSIGNMENT OPERATORS** ###Code x = 2 x += 3 #Same As x = x+3 print(x) x ###Output 5 ###Markdown **LOGICAL OPERATORS** ###Code a = 5 b = 6 print(a>b and a==a) print(a<b or b==a) ###Output False True ###Markdown **IDENTITY OPERATORS** ###Code print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code #Booleans represent one of two values. 
True or False print(10>9) print(10==9) print(9>10) a = 10 b = 9 print(a>b) print(a==b) print(b>a) a print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) #allows you to evaluate and gives False in return def myFunction():return True print(myFunction()) def myFunction():return False print(myFunction()) if myFunction(): print("Yes") else: print("No") ###Output False No ###Markdown You Try! ###Code a = 6 b = 7 print(a==b) print(a!=a) ###Output False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) #division - quotient print(10%3) #modulo division print(10//3) #floor division print(10**2) #exponentiation ###Output 15 5 50 2.0 1 3 100 ###Markdown Python Bitwise Operators ###Code a = 60 #0011 1100 b = 13 #0000 1101 print(a & b) print(a|b) print(a^b) print(~a) print(a<<2) print(a>>2) print(b>>2) ###Output 12 61 49 -61 240 15 3 ###Markdown Python Assignment Operators ###Code x = 6 x+=3 #x=x+3 print(x) x%=3 #x=9%3, remainder 0 print(x) ###Output 9 0 ###Markdown Logic Operator ###Code a = True b = False print(a and b) print(a or b) print(not(a and b)) #negation ###Output False True True ###Markdown Identity Operator ###Code a is b a is not b ###Output _____no_output_____ ###Markdown Boolean Operators ###Code #Booleans represent one of two values: True or False print(10>9) print(10==9) print(9>10) a = 10 b = 9 print(a>b) print(a==a) print(b>a) print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) #allows you to evaluate and gives False in return. def myFunction(): return True print(myFunction()) def myFunction(): return False print(myFunction()) if myFunction(): print("Yes!") else: print("No!") ###Output False No! ###Markdown You Try! 
###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #division - quotient print(10%5) #modulo division print(10%3) #modulo division print(10//3) #floor division print(10**2) #exponentiation a = 60 #0011 1100 0000 1111 b = 13 #0000 1101 print(a & b) print(a | b) print(a ^ b) print(~a) print(a<<2) print(a>>2) print(b<<2) print(b>>2) x = 6 x += 3 #x = x+3 print(x) x%=3 #x = x%3, remainder 0 print(x) a=True b=False print(a and b) print(a or b) print(not(a and b)) print(not(a or b)) #negation print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Expressions ###Code print(10>9) print(10<9) print(10==9) a = 10 b = 9 print(a>b) print(a<b) print(a==b) ###Output True False False ###Markdown Bool() Function ###Code print(bool(1)) print(bool(0)) print(bool("Maria")) print(bool(None)) print(bool([])) ###Output True False True False False ###Markdown Functions can Return a Boolean ###Code def my_Function(): return True print(my_Function()) if my_Function(): print("Yes!") else: print("No!") ###Output True Yes! 
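The pattern above — a function that returns `True` or `False` and is then used directly in an `if` — generalizes to any predicate. A minimal sketch (the `is_even` name is an illustration, not from the notebook):

```python
# A predicate function returns a boolean that can drive an if statement directly.
def is_even(n):
    # The comparison itself already evaluates to True or False,
    # so there is no need for "if ...: return True else: return False".
    return n % 2 == 0

print(is_even(10))  # True
print(is_even(7))   # False

if is_even(10):
    print("10 is even")
else:
    print("10 is odd")
```

Because `n % 2 == 0` is itself a boolean expression, returning it directly is the idiomatic form.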
###Markdown Python Operators ###Code print(2+5) print(2-5) print(2*5) print(5/2) print(5//2) print(5**2) ###Output 7 -3 10 2.5 2 25 ###Markdown Bitwise Operators ###Code a = 60 b = 13 print(a&b) print(a|b) print(a^b) ###Output 12 61 49 ###Markdown Assignment Operators ###Code a +=3 # Same as a = a+3 print(a) ###Output 63 ###Markdown Logical Operators ###Code x = True y = False print(x and y) print(x or y) print(not(x and y)) ###Output False True True ###Markdown Identity Operators ###Code x is y x is x print(x is not y) ###Output _____no_output_____ ###Markdown BOOLEAN OPERATOR ###Code print(12>9) print(12==9) print(12<9) a = 12 b = 9 print(a>b) print(a==a) a= 5 b= 4 print(a==b) print(a!=a) print(12+6) print(12-6) print(12*6) print(12/6) #division - quotient print(12%6) a= 60 #0011 1100 b= 13 #0000 1101 print(a &b) print(a|b) print(a^b) print(~a) print(a << 2) print(a >> 2) x = 6 x+= 3 #x= x+3 print(x) x%=3 print(x) a= True b= False print(a and b) print(a or b) print(not(a and b)) def myFunction() :return True print(myFunction()) def myFunction() :return True print(myFunction()) if myFunction() : print("Yes") else : print("No") a is b a is not b ###Output _____no_output_____ ###Markdown Functions can return Boolean ###Code def myFunction():return False print(myFunction()) def myFunction():return False if myFunction(): print("Yes!") else: print("No") ###Output No ###Markdown You Try! 
###Code a=6 b=7 print(a==b) print(a!=a) ###Output False False ###Markdown Arithmetic Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5)#modulo division, remainder print(10//5)#floor division print(10//3)#floor division print(10%3)#3*3=9, remainder 1 ###Output 15 5 50 2.0 0 2 3 1 ###Markdown Bitwise Operators ###Code a=60 #0011 1100 b=13 #0000 1101 print(a&b) print(a|b) print(a^b) print(~a) print(a<<1) #0111 1000 print(a<<2) #1111 0000 print(b>>1) #0000 0110 print(b>>2) #0000 0011 carry flag bit=01 ###Output 12 61 49 -61 120 240 6 3 ###Markdown Python Assignment Operators ###Code a+=3 #Same As a = a + 3 #Same As a = 60 + 3, a = 63 print(a) ###Output 63 ###Markdown Logical Operators ###Code #and logical operator a=True b=False print(a and b) print(not(a and b)) print(a or b) print(not(a or b)) print(a is b) print( a is not b) ###Output False True ###Markdown Boolean Operators ###Code #Booleans represent one of two values: True or False print(10>9) print(10==9) print(9>10) a = 10 b = 9 print(a>b) print(a==a) print(b>a) print(b>a) print(bool("hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) #allows you to evaluate and give False in return def myFunction():return True print(myFunction()) def myFunction():return True if (myFunction()): print("Yes") else: print("No") ###Output Yes ###Markdown You Try! 
###Code a=6 b=7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #division-quotient print(10%5) #modulo division print(10%3) print(10//3) #floor division print(10**2) #exponentiation a=60 #0011 1100 b=13 #0000 1101 print(a & b) print(a | b) print(a^b) print(a<<2) print(b>>a) x=6 x+=3 #x=x+3 print(x) x%=3 #x=x%3, remainder 0 print(x) a=True b=False print(a and b) print(a or b) print(not(a and b)) #negation print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operator ###Code x=1 y=2 print(x>y) print(10>11) print(10==10) print(10!=11) #using bool() function print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) ###Output True True True False False False False ###Markdown Functions can return Boolean ###Code def myFunction(): return False print(myFunction()) def yourFunction(): return False if yourFunction(): print("Yes!") else: print("No") ###Output No ###Markdown You Try ###Code a=6 b=7 print(a==b) print(a!=a) ###Output False False ###Markdown Arithmetic Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #modulo division, remainder print(10//5) #floor division, whole number print(10//3) #floor division print(10%3) #3 x 3 = 9, remainder 1 ###Output 15 5 50 2.0 0 2 3 1 ###Markdown Bitwise Operators ###Code a=60 #0011 1100 b=13 #0000 1101 print(a&b) print(a|b) print(a^b) print(-a) print(a<<1) #0111 1000 print(a<<2) #1111 0000 print(b>>1) #0000 0110 print(b>>2) #0000 0011 carry flag bit =01 ###Output 12 61 49 -60 120 240 6 3 ###Markdown Python Assignment Operators ###Code a+=3 # Same As a = a +3 # Same As a = 60 + 3, a = 63 print(a) ###Output 63 ###Markdown Logical Operators ###Code #and logical operator a= True b= False print(a and b) print(not(a and b)) print(a or b) print(not(a or b)) print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code a = 7 b = 6 print(10>9) 
False True ###Markdown bool()function ###Code print(bool("Maria")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) ###Output True True False False False True ###Markdown Functions can return a Boolean ###Code def my_Function(): return True print(my_Function()) if my_Function(): print("True") else: print("False") ###Output True ###Markdown Application 1 ###Code print(a==b) print(a<b) ###Output False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10/3) print(10*5) print(10%5) print(10//3) #10/3 = 3.3333 print(10**2) ###Output 15 5 3.3333333333333335 50 0 3 100 ###Markdown Bitwirse Operators ###Code c = 60 # binary 0011 1100 d = 13 # binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code h = True l = False h and l h or l not (h or l) ###Output _____no_output_____ ###Markdown Application 2 ###Code #Python Assignment Operators x = 100 x += 3 # Same as x = x +3, x = 100+3=103 print(x) ###Output 103 ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control Structure If Statement ###Code if a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Else Statement ###Code a= 10 b =10 if a>b: print("a is greater than b") elif a>b: print("a is greater than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If Statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If...Else Statement ###Code a = 10 b = 9 print("a is greater than b") if a>b else print('b is greater than a') ###Output a is greater than b ###Markdown And ###Code if a>b and b==b: print("both conditions are True") ###Output both conditions are True ###Markdown Or ###Code if a<b or b==b: print("the conditions is True") 
###Output the conditions is True ###Markdown Nested If ###Code x = int(input()) if x > 10: print("x is above 10") if x > 20: print("and also above 20") if x > 30: print("and also above 30") if x > 40: print("and also above 40") if x > 50: print("and also above 50") else: print("but not above 50") ###Output 48 x is above 10 and also above 20 and also above 30 and also above 40 but not above 50 ###Markdown Loop Statement For Loop ###Code week = ['Sunday',"Monday",'Tuesday', "Wednesday","Thursday","Friday","Saturday"] for x in week: print(x) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown The break statement ###Code #to display Sunday to Wednesday using For loop for x in week: print(x) if x == "Wednesday": break #to display only Wednesday using break statement for x in week: if x == "Wednesday": break print(x) ###Output Wednesday ###Markdown While Statement ###Code i = 1 while i < 6: print(i) i += 1 #same as i = i + 1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a python program that displays no.3 using break statement ###Code i = 1 while i < 6: if i == 3: break i += 1 print(i) ###Output 3 ###Markdown Boolean Operators ###Code a = 7 b = 6 print(10>9) print(10<9) print(a>b) ###Output True False True ###Markdown Bool () Function ###Code print(bool("Lawrence")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) ###Output True True False False False True ###Markdown Function can return a Boolean ###Code def my_Function(): return True print(my_Function()) if my_Function(): print("True") else: print("False") ###Output True ###Markdown Application 1 ###Code print (10>9) a=6 b=7 print(a==b) print(a!=b) ###Output True False True ###Markdown Phyton Operators ###Code print(10+5) print(10-5) print(10/3) print(10*5) print(10%5) print(10//3) #10/3 = 3.3333 print(10**2) ###Output 15 5 3.3333333333333335 50 0 3 100 ###Markdown Bitwise Operators ###Code c = 60 # binary 0011 1100 d = 13 # binary 0000 1101 c&d 
print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code h = True l = False h and l h or l not(h or l) ###Output _____no_output_____ ###Markdown Application 2 ###Code #Python Assignment Operators x = 100 x +=3 # Same as x = x +3, x = 100+3=103 print(x) ###Output 103 ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control Structure If Statement ###Code if a>b: print("a is greater than b") ###Output _____no_output_____ ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is less than b ###Markdown Else Statement ###Code a = 10 b = 10 if a<b: print("a is less than b") elif a>b: print("a is greater than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If Statement ###Code if a==b: print("a is equal to b") ###Output _____no_output_____ ###Markdown Short Hand If...Else Statement ###Code a = 10 b = 9 print("a is greater than b") if a>b else print('b is greater than a') ###Output a is greater than b ###Markdown And ###Code if a>b and b==b: print("Both conditions are true") ###Output Both conditions are true ###Markdown Or ###Code if a<b or b==b: print("The condition is True") ###Output The condition is True ###Markdown Nested If ###Code x = int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output 48 x is above 10 and also above 20 and also above 30 and also above 40 but not above 50 ###Markdown Loop Statement For Loop ###Code week = ['Sunday',"Monday",'Tuesday',"Wednesday","Thursday","Friday","Saturday"] for x in week: print(x) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown The Break Statement ###Code #to display Sunday to Wednesday using For loop for x in week: print(x) if 
x=="Wednesday": break #to display only Wednesday using break statement for x in week: if x=="Wednesday": break print(x) ###Output Wednesday ###Markdown While Statement ###Code i =1 while i<6: print(i) i+=1 #same as i = i+1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a Python program that displays no. 3 using break statement ###Code i =1 while i<6: if i==3: break i+=1 print(i) ###Output 3 ###Markdown Boolean Operators ###Code a = 7 b = 6 print(10>9) print(10<9) print(a>b) ###Output True False True ###Markdown bool() function ###Code print(bool("Maria")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) ###Output True True False False False True ###Markdown Functions can return a Boolean ###Code def my_Function(): return False print(my_Function()) if my_Function(): print("True") else: print("False") ###Output False ###Markdown Application 1 ###Code print(a==b) a<b ###Output False ###Markdown Python Operators ###Code print(10+5) print(10-5) print((10/3)) print(10%5) print(10*5) print(10//5) print(10**2) ###Output 15 5 3.3333333333333335 0 50 2 100 ###Markdown Bitwise Operators ###Code c = 60 # binary 0011 1100 d = 13 # binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code h = True l = False h and l h or l not (h or l) ###Output _____no_output_____ ###Markdown Application 2 ###Code #Python Assignment Operators x = 100 x+=3 #Same as x = x+3, x = 100+3 print(x) ###Output 103 ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control Structure If statement ###Code if a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Else Statement ###Code a = 10 b = 10 if a<b: print("a is less than b") elif a>b: print("a is greater than b") else: print("a is equal to b") 
###Output a is equal to b ###Markdown Short Hand If Statement ###Code if a==b: print("a is equal to b ") ###Output a is equal to b ###Markdown Short Hand If...Else Statement ###Code a = 10 b = 9 print("a is greater than b") if a>b else print("b is greater than a") ###Output a is greater than b ###Markdown And ###Code if a>b and b==b: print("both conditions are true") ###Output _____no_output_____ ###Markdown Or ###Code if a<b or b==b: print("the condition are True") ###Output the condition are True ###Markdown Nested if ###Code x = int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") else: print("but not above 40") ###Output 45 x is above 10 and also above 20 and also above 30 and also above 40 ###Markdown Loop Statement For Loop ###Code week = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"] for x in week: print(x) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown The break statement ###Code #to display Sunday to Wednesday using For loop for x in week: print(x) if x=="Wednesday": break #to display only Wednesday using break statement for x in week: if x=="Wednesday": break print(x) ###Output _____no_output_____ ###Markdown While Statement ###Code i = 1 while i<6: print(i) i+=1 #same as i = i+1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a python program that displays no.3 using break statement ###Code i=1 while i<6: if i==3: break i+=1 print(i) ###Output 3 ###Markdown Boolean Operators ###Code print(10>9) print(10<9) print(10==9) a=10 b=9 print(a>b) print(a<b) print(a==b) ###Output _____no_output_____ ###Markdown Bool Function ###Code print(bool(5)) print(bool("Maria")) print(bool(0)) print(bool("None")) print(bool(None)) print(bool([])) ###Output True True False True False False ###Markdown Function can return in boolean ###Code def myFunction(): return False print(myFunction()) if myFunction(): 
print("Yes") else: print("No") ###Output _____no_output_____ ###Markdown Application 1 ###Code a=6 b=7 print(10>9) print(a==b) print(a!=a) ###Output True False False ###Markdown Python Operators ###Code print(10+9) print(10-9) print(10*2) print(10/2) print(10**2) ###Output _____no_output_____ ###Markdown Python Bitwise Operators ###Code a= 60 b= 13 print(a&b) print(a|b) ###Output 12 61 ###Markdown Python Assignment Operators ###Code a+=3 print(a) ###Output 3 ###Markdown Identity Operators ###Code a is b a is not b print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code #Booleans represent one of two values: True or False print(10>9) print(10==9) print(9>10) a=10 b=9 print(a>b) print(a==a) print(b>a) b>a print(bool("hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) #allows you to evaluate and gives False in return def myFunction():return True if myFunction(): print("Yes") else: print("No") def myFunction():return False if myFunction(): print("Yes") else: print("No") ###Output No ###Markdown You Try! 
###Code a=6 b=7 print(a==b) print(a!=a) print(10+5) print(10-5) print(10*5) print(10/5) #division - Quotient print(10%5) #modulo division print(10%3) #modulo division print(10//3) # Floor division print(10**2) #exponentiation a=60 #0011 1100 b=13 #0000 1101 print(a & b) #AND print(a|b) #OR print(a^b) #XOR print(a<<2) print(a>>2) x=6 x+=3 #same as x=x+3 print(x) x%=3 #x=9%3, remainder 0 print(x) a=True b= False a and b a or b a=True b= False print(a and b) print(a or b) print(not(a and b)) print(not(a or b)) #negation a is b a is not b print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operator ###Code x=1 y=2 print(x>y) print(10>11) print(10==10) print(10!=11) #using bool() functions print(bool("Hello")) print(bool(15)) print(bool(1)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) ###Output True True True True False False False False ###Markdown Functions can return Boolean ###Code def myFunction():return False print(myFunction()) def yourFunction():return False if yourFunction(): print("Yes!") else: print("No") ###Output No ###Markdown You Try! 
###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False ###Markdown Arithmetic Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5)#modulo division, remainder print(10//5)#floor division print(10//3)#floor division print(10%3)#3x3=9, remainder 1 ###Output 15 5 50 2.0 0 2 3 1 ###Markdown Bitwise Operators ###Code a=60 #0011 1100 b=13 #0000 1101 print(a&b) print(a|b) print(a^b) print(~a) print(a<<1) # 0111 1000 print(a<<2) # 1111 0000 print(b>>1) # 0000 0110 print(b>>2) # 0000 0011 carry flag bit=01 ###Output 12 61 49 -61 120 240 6 3 ###Markdown Python Assignment Operators ###Code a+=3 #Same As a=a+3 #Same As a=60+3, a=63 print(a) ###Output 63 ###Markdown Logical Operators ###Code #and logical operator a=True b=False print(a and b) print(not(a and b)) print(a or b) print(not(a or b)) print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code a=10 b=9 c=8 print (10 > 9) print (10 == 9) print (10 < 9) print (a) print (a > b) c = print (a >b) c ##true print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(1)) ##false print(bool(False)) print(bool(0)) print(bool(None)) print(bool([])) def myFunction() : return True print(myFunction()) def myFunction() : return True if myFunction(): print("Yes/True") else: print("No/False") print(10>9) a = 6 #0000 0110 b = 7 #0000 0111 print(a==b) print(a!= b) ###Output True False True ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) print(10//3) print(10**2) ###Output 15 5 50 2.0 0 3 100 ###Markdown Bitwise Operators ###Code a = 60 #0011 1100 b = 13 print(a^b) print(-a) print(a<<2) print(a>>2) #0000 1111 ###Output 49 -60 240 15 ###Markdown Assignment Operator ###Code x=2 x+=3 #Same As x = x+3 print(x) x ###Output 5 ###Markdown Logical Operator ###Code a = 5 b = 6 print(a>b and a==a) print(a<b or b==a) ###Output False True ###Markdown Identity Operator ###Code print(a is b) print(a is not b) ###Output False 
True ###Markdown ###Code #Operations and Expressions in Python #Boolean Operators #Booleans represent one of two values: True or False print(10 > 9) # 10 is greater than 9 so TRUE print(10 == 9) # 10 is NOT equal to 9 so FALSE ("==" means equality, used for comparing) print(10 < 9) # 10 is NOT less than 9 so FALSE print(10 != 9) #Boolean Function #Evaluate value and give you either TRUE or False in return print(bool(True)) print(bool(False)) print(bool(1)) print(bool(0)) print(bool([])) #Empty cell print(bool(None)) print(bool("Hello")) #Functions can Return a Boolean def myFunction(): #myFunction can be changed to anything you like return True #return can either be True or False print(myFunction()) def Goodmorning(): return False print(Goodmorning()) #Boolean answer of a function def myFunction(): return True if myFunction(): print("Yes") else: print("No") print(10>9) a=6 b=7 print(a==b) print(a!=a) print(a>b) print(b>a) #Python Operators print(11+11) print(28-8) print(2*4) print(4/2) #with decimal point print(4//2) #whole number "//" is called floor division print(3//2) #will not show the decimal print(3/2) print(4%3) #This is modulo, it gives you the remainder print(2**3) #exponentiation #Python Bitwise Operators #In Sci-Cal, press "mode" then BASE-N (CONVERTING BINARY TO DECIMAL AND VICE VERSA) A = 60 #0011 1100 B = 13 # print(A&B) #Binary AND (1-0 is 0, 0-1 is 0, 0-0 is 0, 1-1 is 1) print(A|B) #Binary OR print(A<<1) #Binary Left Shift (0011 1100 becomes 0111 1000) the shifting depends on the number print(A>>1) #Binary Right Shift (0011 1100 becomes 0001 1110) #Python Assignment Operators A = 2 A+=2 #same as A = A + 2 print(A) #Logical Operators a = 60 b = 13 print(a and b) print((a>b) and (a<b)) print((a>b) or (a<b)) print(not(a>b)) #Identity Operators a = 60 b = 13 A = 60 B = 60 print(a is b) print(A is B) print(a is not b) print(A is not B) ###Output False True True False ###Markdown Boolean Operators ###Code a=7 b=6 print(10>9) 
print(10<9) print(a>b) ###Output True False True ###Markdown bool() function ###Code print(bool("Maria")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) ###Output True True False False False True ###Markdown Functions can return a Boolean ###Code def my_Function(): return True print(my_Function()) if my_Function(): print("True") else: print("False") ###Output True ###Markdown Application 1 ###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10/3) print(10*5) print(10%5) print(10//3) #10/3=3.3333 print(10**2) ###Output 15 5 3.3333333333333335 50 0 3 100 ###Markdown Bitwise operators ###Code c=60 # binary 0011 1100 d=13 # binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code h=True l=False h and l h or l not(h or l) ###Output _____no_output_____ ###Markdown Application 2 ###Code #Python Assignment Operators #x+=3 Same As x=x+3 #x-=3 Same As x=x-3 #x*=3 Same As x=x*3 #x/=3 Same As x=x/3 #x%=3 Same As x=x%3 x=100 x+=3 #Same as x=x+3, x=100+3 print(x) ###Output 103 ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control Structure If Statement ###Code if a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Else Statement ###Code a=10 b=10 if a<b: print("a is less than b") elif a>b: print("a is greater than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If Statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If...Else Statement ###Code a=10 b=9 print("a is greater than b") if a>b else print("b is greater than a") ###Output a is greater than b ###Markdown And ###Code if a>b and b==b: print("both conditions are True") ###Output both conditions 
are True ###Markdown Or ###Code if a<b or b==b: print("the conditions is True") ###Output the conditions is True ###Markdown Nested If ###Code x=int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output 48 x is above 10 and also above 20 and also above 30 and also above 40 but not above 50 ###Markdown Loop Statement For Loop ###Code week=['Sunday',"Monday",'Tuesday',"Wednesday","Thursday","Friday","Saturday"] for x in week: print(x) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown The Break statement ###Code #to display Sunday to Wednesday using for loop for x in week: print(x) if x=="Wednesday": break #to display only Wednesday using break statement for x in week: if x=="Wednesday": break print(x) ###Output Wednesday ###Markdown While Statement ###Code i=1 while i<6: print(i) i+=1 #same as i=i+1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a python program that displays no.3 using break statement ###Code i=1 while i<6: i+=1 #same as i=i+1 if i==3: break print(i) ###Output 3 ###Markdown Boolean Operators ###Code a = 7 b =6 print(10>9) print(10<9) print(a>b) ###Output True False True ###Markdown Bool () Function ###Code print(bool("Maria")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) ###Output True True False False False True ###Markdown Functions can return a Boolean ###Code def my_Function(): return True print(my_Function()) if my_Function(): print("True") else: print("False") ###Output True ###Markdown Application 1 ###Code print(a==b) print(a<b) ###Output False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10/3) print(10*5) print(10%5) print(10//3) #10/3 = 3.3333 print(10**2) ###Output 15 5 3.3333333333333335 50 0 3 100 ###Markdown Bitwise operators ###Code c = 60 # binary 0011 1100 d = 
13 # binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code h = True l = False h and l h or l not (h or l) ###Output _____no_output_____ ###Markdown Application 2 Assignment : Python Operators ###Code x = 100 x+=3 # Same as x = x +3, x = 100+3=103 print(x) ###Output 103 ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control Structure If Statement ###Code if a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Elif statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Else statement ###Code a= 10 b =10 if a>b: print("a is greater than b") elif a>b: print("a is greater than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short hand if statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short hand if else statement ###Code a = 10 b = 9 print("a is greater than b") if a>b else print('b is greater than a') ###Output a is greater than b ###Markdown And ###Code if a>b and b==b: print("both conditions are True") ###Output both conditions are True ###Markdown Or ###Code if a<b or b==b: print("the condition is True") ###Output the condition is True ###Markdown Nested if ###Code x = int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output 48 x is above 10 and also above 20 and also above 30 and also above 40 but not above 50 ###Markdown Loop statement For loop ###Code week = ['Sunday',"Monday",'Tuesday', "Wednesday","Thursday","Friday","Saturday"] for x in week: print(x) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown The break statement ###Code #to display Sunday to Wednesday using For loop for x in week: 
print(x) if x=="Wednesday": break #to display only Wednesday using break statement for x in week: if x=="Wednesday": break print(x) ###Output Wednesday ###Markdown While statement ###Code i =1 while i<6: print(i) i+=1 #same as i = i +1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a python program that displays no.3 using break statement ###Code i =1 while i<6: if i==3: break i+=1 print(i) ###Output 3 ###Markdown ###Code print(10 > 9) print(10 == 9) print(9 > 10) a = 10 b = 9 print(a > b) print(a == a) print(b > a) a>b print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([])) def myFunction():return True print(myFunction()) def myFunction():return True print(myFunction()) if myFunction(): print("Yes") else: print("No") a=6 b=7 print(a == b) print(a != a) print(10 + 5) print(10 - 5) print(10 * 5) print(10 / 5) a=60 #0011 1100 b=13 #0000 1101 print(a&b) print(a|b) print(a^b) print(a<<2) print(b>>2) x = 6 x += 3 print (x) x %= 3 print(x) a = True b = False print(a and b) print(a or b) print(not(a and b)) print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code print(10>9) print(10 == 9) print(10<9) print(10!=9) ###Output True False False True ###Markdown Bool() function ###Code print(bool("Zaldy")) print(bool("4")) print(bool()) print(bool(0)) print(bool("1")) print(bool(None)) print(bool(False)) ###Output True True False False True False False ###Markdown Function can return a Boolean ###Code def myFunction(): return False print(myFunction()) def myFunction(): return True if myFunction(): print("Yes") else: print("No") ###Output Yes ###Markdown Application 1 ###Code print(10>9) a = 6 b = 7 print(a==b) print(a!=a) print(10>9) a = 6 b = 7 if a<b: print("a is less than b") else: print("a is greater than b") ###Output True a is less than b ###Markdown Python Operators ###Code print(10+3) print(10-3) print(10*3) print(10/3) print(10%3) print(10//3) ###Output 13 7 30 
3.3333333333333335 1 3 ###Markdown Bitwise Operators ###Code a = 60 b = 13 print(a&b) print(a|b) print(a^b) print(a<<1) print(a<<2) ###Output 12 61 49 120 240 ###Markdown Assignment Operators: Application 2 ###Code a = 5 b = 6 c = 7 d = 8 e = 9 a+=3 b-=3 c*=3 d/=3 e%=3 print(a) print(b) print(c) print(d) print(e) ###Output 8 3 21 2.6666666666666665 0 ###Markdown Logical Operators ###Code k = True l = False print(k and l) print(k or l) print(not(k or l)) ###Output False True False ###Markdown Identity Operators ###Code print(k is l) print(k is not l) ###Output False True ###Markdown Control Structure If Statement ###Code v = 1 z = 2 if v<z: print("1 is less than 2") ###Output 1 is less than 2 ###Markdown Elif Statement ###Code if v<z: print("v is less than z") elif v>z: print("v is greater than z") ###Output v is less than z ###Markdown Else Statement ###Code v = 1 z = 1 if v<z: print("v is less than z") elif v>z: print("v is greater than z") else: print("v is equal to z") number = int(input()) if number>0: print("positive") elif number < 0: print ("negative") else: print ("number is equal to zero") ###Output 3 positive ###Markdown Application 3 - Develop a python program that will accept if a person is entitled to vote or not ###Code age = int(input()) if age >= 18: print("You are qualified to vote") elif age<18: print("You are not qualified to vote") ###Output 4 You are not qualified to vote ###Markdown Nested If...Else ###Code u = 41 if u> 10: print("u is above 10") if u > 20: print ("u is above 20") if u > 30: print ("u is above 30") if u > 40: print ("u is above 40") if u > 50: print ("u is above 50") ###Output u is above 10 u is above 20 u is above 30 u is above 40 ###Markdown Loop Structure ###Code week = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] season = ["rainy","sunny"] for x in week: for y in season: print(y,x) ###Output rainy Sunday sunny Sunday rainy Monday sunny Monday rainy Tuesday sunny Tuesday rainy Wednesday 
sunny Wednesday rainy Thursday sunny Thursday rainy Friday sunny Friday rainy Saturday sunny Saturday ###Markdown The break statement ###Code for x in week: print (x) if x == "Thursday": break ###Output Sunday Monday Tuesday Wednesday Thursday ###Markdown While loop ###Code i = 1 while i < 6: print (i) i += 1 ###Output 1 2 3 4 5 ###Markdown Application 4 - Create a python program that displays number from 1 to 4: ###Code j = 1 while j<5: print(j) j+=1 ###Output 1 2 3 4 ###Markdown Application 4 - Create a python program that displays number 4 using while loop statement and break statement ###Code j = 1 while j<5: if j >=4: print(j) if j == 4: break j+=1 ###Output 4 ###Markdown Boolean Operators ###Code print(20>7) print(20==7) print(20!=7) print(20<7) ###Output True False True False ###Markdown bool() Function ###Code print(bool("Zep")) print(bool(19)) print(bool([])) print(bool(0)) print(bool(1)) print(bool(None)) print(bool(False)) ###Output True True False False True False False ###Markdown Function can return a Boolean ###Code def myFunction(): return True print(myFunction()) if myFunction(): print("YES!") else: print("NO!") ###Output YES! 
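The loop, break, and boolean-function cells above combine naturally: a function's True/False result can steer a loop, just like `myFunction()` steers the if/else. A minimal sketch of that combination (the `is_weekend` helper is illustrative, not part of the original notebook):

```python
week = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']

def is_weekend(day):
    # returns a Boolean, just like myFunction() above
    return day in ('Saturday', 'Sunday')

# keep only the days for which the boolean function returns False
weekdays = []
for day in week:
    if not is_weekend(day):
        weekdays.append(day)

print(weekdays)
```

The same pattern works with `break`: the loop can stop as soon as the function first returns True.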
###Markdown Application 1 ###Code print(20>2) a=30 b=17 print(a==b) print(a!=b) ###Output True False True ###Markdown Python Operators ###Code print(10+2) print(10 - 2) print(10*2) print(10/2) print(10 % 2) print(10//2) ###Output 12 8 20 5.0 0 5 ###Markdown Bitwise Operators ###Code #a=30, binary 0001 1110 #b=17, binary 0001 0001 print(a&b) print(a|b) print(a^b) print(a<<b) print(a>>b) ###Output 16 31 15 3932160 0 ###Markdown Application 2 Assignment Operators ###Code a+=5 #Same As a = a+5 a-=5 #Same As a = a-5 a*=5 #Same As a = a*5 a/=5 #Same As a = a/5 a%=5 #Same As a = a%5 ###Output _____no_output_____ ###Markdown Logical Operators ###Code k = True l = False print(k and l) print(k or l) print(not(k or l)) ###Output False True False ###Markdown Identity Operators ###Code print(k is l) print(k is not l) ###Output False True ###Markdown Control Structure if Statement ###Code v = 1 z = 1 if v<z: print("1 is less than 2") ###Output _____no_output_____ ###Markdown elif Statement ###Code if v<z: print("v is less than z") elif v!=z: print("v is not z") ###Output _____no_output_____ ###Markdown else Statement ###Code number = int(input()) if number>0: print("Positive") elif number<0: print("negative") else: print("number is equal to zero") ###Output 7 Positive ###Markdown Application 3 - Develop a Python program that will accept if a person is entitled to vote or not ###Code age = int(input()) if age >= 18: print("You are qualified to vote") else: print("you are not qualified to vote") ###Output 17 you are not qualified to vote ###Markdown Nested if...else ###Code u = int(input()) if u>10: print("u is above 10") if u>20: print("u is above 20") if u>30: print("u is above 30") if u>40: print("u is above 40") if u>50: print("u is above 50") else: print("u is below 50") ###Output 33 u is above 10 u is above 20 u is above 30 ###Markdown Loop Structure ###Code week = ['Sunday','Monday','Tuesday','Wednesday','Thursday','Friday','Saturday'] Season=['rainy','sunny'] for x in week:
for y in Season: print(y,x) ###Output rainy Sunday sunny Sunday rainy Monday sunny Monday rainy Tuesday sunny Tuesday rainy Wednesday sunny Wednesday rainy Thursday sunny Thursday rainy Friday sunny Friday rainy Saturday sunny Saturday ###Markdown Break Statement ###Code for x in week: print(x) if x == 'Thursday': break ###Output Sunday Monday Tuesday Wednesday Thursday ###Markdown While Loop ###Code i=1 while i<=5: print(i) i+=1 ###Output 1 2 3 4 5 ###Markdown Application 4 - Create a Python program that displays numbers from 1 to 7 ###Code j=1 while j<=7: print(j) j+=1 ###Output 1 2 3 4 5 6 7 ###Markdown Application 5 - Create a Python program that displays number 7 using while and break statement ###Code j=1 while j<=7: j+=1 if j == 7: print(j) ###Output 7 ###Markdown BOOLEAN OPERATORS ###Code a = 10 b = 9 c = 8 print (10 > 9) print (10 == 9) print (10 < 9) print (a) print (a > b) c = print (a > b) c ##true print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(1)) ##false print(bool(False)) print(bool(0)) print(bool(None)) print(bool([])) def myFunction(): return True print(myFunction()) def myFunction (): return True if myFunction (): print("Yes/True") else: print("No/False") print(10>9) a = 6 #0000 0110 b = 7 #0000 0111 print(a == b) print(a != b) ###Output True False True ###Markdown Python Operators ###Code print(10 + 5) print(10 - 5) print(10 * 5) print(10 / 5) print(10 % 5) print(10 // 3) print(10 ** 2) ###Output 15 5 50 2.0 0 3 100 ###Markdown Boolean OperatorsBooleans represent one of two values: True or False ###Code print(10>9) print(10<9) print(10==9) print(10!=9) print(10>10) print(10<10) print(10==10) print(10!=10) print(bool(True)) print(bool(False)) print(bool(1.1)) print(bool(1)) print(bool(0)) print(bool(-1)) print(bool(-1.1)) print(bool(None)) print(bool("")) print(bool(())) print(bool({})) print(bool([])) def myFunction(): return True print(myFunction()) def myFunction1(): return False print(myFunction1()) def myFunction(): return
True if myFunction(): print("Yes!") else: print("No!") def myFunction1(): return False if myFunction1(): print("Yes!") else: print("No!") a=6 b=7 print(a==b) #is it equal? print(a!=b) #is it not equal? print(a>b) #is it greater than? print(a<b) #is it less than? ###Output False True False True ###Markdown Python Operators ###Code print(10+5) #addition print(10-5) #subtraction print(10*5) #multiplication print(10/5) #division print(10/4) print(10/3) print(10//5) #floor division print(10//4) print(10//3) print(10%5) #modulo print(10%4) print(10**5) #exponentiation, 10^5 print(10**4) #10^4 ###Output 15 5 50 2.0 2.5 3.3333333333333335 2 2 3 0 2 100000 10000 ###Markdown Python Bitwise Operators ###Code #MSB #LSB a = 60 #0011 1100, if a(<<1) then 0111 1000, if a(<<2) then 1111 0000 b = 13 #0000 1101, if a(>>1) then 0001 1110, if a(>>2) then 0000 1111 print(a&b) #Operator copies a bit, to the result, if it exists in both operands #0000 1100 if the bit is set in both operands print(a|b) #It copies a bit, if it exists in either operand.
#0011 1101 if the bit is in either of the two operands print(a^b) #It copies the bit, if it is set in one operand but not both #0011 0001 if the bit is in only one operand #print(a~b) #It is unary and has the effect of 'flipping' bits print(a<<1) #The left operand's value is moved left by the number of bits specified by the right operand print(a<<2) print(a>>1) #The left operand's value is moved right by the number of bits specified by the right operand print(a>>2) ###Output 12 61 49 120 240 30 15 ###Markdown Python Assignment Operators ###Code a=60 #no need to rewrite if the value is the same as before b=13 a+=2 #60+2=62 print(a) ###Output 62 ###Markdown Logical Operators ###Code a=True b=False print(a and b) print(a or b) print(not(a and b)) a=bool(60) b=bool(13) print(a and b) print(a or b) print(not(a and b)) a=60 b=13 (a>b) and (a<b) (a>b) or (a>b) not(a>b) ###Output False True ###Markdown Identity Operators ###Code a=60 b=13 print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code #Booleans represent one of the two values: True and False print(10>9) print(10==9) print(9>10) a=10 b=9 print(a>b) print(a==b) print(b>a) print(bool('Hello')) print(bool(15)) print(bool(False)) print(bool(None)) print(bool({})) print(bool(0)) print(bool([])) #empty and null values evaluate to False def myFunction(): return True print(myFunction()) def myFunction(): return True print(myFunction()) if (myFunction()): print('Yes') else: print('No') ###Output True Yes ###Markdown You Try!
###Code A = 6 B = 7 print(A==B) print(A!=A) print(10+5) print(10-5) print(10*5) print(10/5) #division - quotient print(10%5) #modulo - remainder print(10//3) #floor division print(10**2) #exponentiation a = 60 #0011 1100 b = 13 #0000 1101 print(a & b) print(a | b) print(a ^ b) print(a<<2) print(a>>2) x = 6 x+=3 # x= x+3 print(x) x%=3 #x= 9%3 print(x) a= True b= False print(a and b) print(a or b) print(not(a and b)) print(not(a or b)) #negation print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code print(10>9) print(10<9) print(10==9) a=10 b=9 print(a>b) print(a<b) print(a==b) ###Output True False False ###Markdown Bool Function ###Code print(bool(5)) print(bool("Maria")) print(bool(0)) print(bool(1)) print(bool(None)) print(bool([])) ###Output True True False True False False ###Markdown Functions can return a Boolean ###Code def myFunction(): return True print(myFunction()) if myFunction(): print("yes") else: print("no") def myFunction(): return False print(myFunction()) if myFunction(): print("yes") else: print("no") ###Output no ###Markdown Application 1 ###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False ###Markdown Python Operators ###Code print(10+9) print(10-9) print(10*2) print(10/2) print(10**2) ###Output 19 1 20 5.0 100 ###Markdown Python Bitwise Operators ###Code a = 60 b = 13 print(a&b) print(a|b) ###Output 12 61 ###Markdown Logical Operators ###Code a = True b = False print(a and b) print(not(a or b)) ###Output False False ###Markdown Identity Operators ###Code a is b a is not b ###Output _____no_output_____ ###Markdown Boolean Operators ###Code print(10>9) print(10==9) print(10!=9) print(10<9) ###Output True False True False ###Markdown bool() Function ###Code print(bool("Maria")) print(bool(19)) print(bool([])) print(bool(0)) print(bool(1)) print(bool(None)) print(bool(False)) ###Output True True False False True False False ###Markdown Function can return a Boolean ###Code def myFunction():
return False print(myFunction()) if myFunction(): print("Yes!") else: print("No!") ###Output False No! ###Markdown Application 1 ###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) a=60 b=13 print(a>b) if a>b: print("a is greater than b") else: print("a is less than b") ###Output True a is greater than b ###Markdown Python Operators ###Code print(10+3) print(10-3) print(10*3) print(10//3) print(10%3) print(10/3) ###Output 13 7 30 3 1 3.3333333333333335 ###Markdown Bitwise Operation ###Code #a=60, binary 0011 1100 #b=13, binary 0000 1101 print(a&b) print(a|b) print(a^b) print(a<<1) ###Output 12 61 49 120 ###Markdown Assignment Operators Application 2 ###Code x=2 x+=3 print(x) ###Output 5 ###Markdown Logical Operators ###Code k = True l = False print (k and l) print (k or l) print (not(k or l)) ###Output False True False ###Markdown Identity Operators ###Code k is l k is not l ###Output _____no_output_____ ###Markdown Control Structure If Statement ###Code v = 1 z = 2 if 1<2: print("1 is less than 2") ###Output 1 is less than 2 ###Markdown Elif Statement ###Code number= int(input()) if number>0: print("positive") elif number<0: print("negative") else: print("number is equal to 0") age=int(input()) if age>=18: print("You are qualified to vote") else: print("You are not qualified to vote") ###Output 15 You are not qualified to vote ###Markdown Nested If...Else ###Code u= int(input()) if u>10: print("u is above 10") if u>20: print("u is above 20") if u>30: print("u is above 30") if u>40: print("u is above 40") else: print("u is below 40") if u>50: print("u is above 50") else: print("u is below 50") ###Output 11 u is above 10 u is below 40 u is below 50 ###Markdown Loop Structure ###Code week = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] season= ["rainy", "sunny"] for x in week: for y in season: print(y,x) ###Output rainy Sunday sunny Sunday rainy Monday sunny Monday rainy Tuesday sunny Tuesday rainy Wednesday sunny Wednesday rainy Thursday
sunny Thursday rainy Friday sunny Friday rainy Saturday sunny Saturday ###Markdown The break statement ###Code for x in week: if x == 'Thursday': break print(x) for x in week: print(x) if x=="Thursday": break ###Output Sunday Monday Tuesday Wednesday Thursday ###Markdown While Loop ###Code i=1 while i<6: print(i) i+=1 ###Output 1 2 3 4 5 ###Markdown Application 4 ###Code j = 1 while j<=4: print(j) j+=1 ###Output 1 2 3 4 ###Markdown Application 5 ###Code j=1 while j<=4: j+=1 if j==4: print(j) ###Output 4 ###Markdown Boolean Operators ###Code print(10>9) print(10==9) a = 10 b = 9 c = 8 c= print(a>b) print(bool("Hello")) print(bool(15)) print(bool(True)) print(bool(False)) print(bool(1)) print(bool(0)) print(bool(None)) print(bool([])) print(bool({})) def myFunction(): return True print(myFunction()) def myFunction(): return True if myFunction(): print("Yes") else: print("No") print (10>9) a=6 # 0000 0110 b=7 # 0000 0111 print (a==b) print (6==6) print (a!=a) ###Output True False True False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) print(10//3) print(10**2) ###Output 15 5 50 2.0 0 3 100 ###Markdown Bitwise Operators ###Code a = 60 #0011 1100 b = 13 print (a | b) print(a^b) print(~a) print(a<<2) print(a>>2) # 0000 1111 ###Output 61 49 -61 240 15 ###Markdown Assignment Operators ###Code x = 2 x+=3 #Same As x = x + 3 print(x) x ###Output 5 ###Markdown Logical Operators ###Code a = 5 b = 6 print(a>b and a==a) print(a<b or b==a) ###Output False True ###Markdown Identity Operator ###Code print(a is b) print(a is not b) ###Output False True ###Markdown Boolean Operators ###Code a = 7 b = 6 print(10 > 9) print(10 < 9) print(a > b) bool() function print(bool("Nemuel")) print(bool(1)) print(bool(0)) print(bool(None)) print(bool(False)) print(bool(True)) def my_Function1(): return True def my_Function2(): return False print(my_Function1()) print(my_Function2()) if my_Function1: print("True") else: print("False") if 
my_Function2: print("False") else: print("True") ###Output True False ###Markdown Application 1 ###Code print(a==b) print(a!=a) ###Output False False ###Markdown Python Operators ###Code print(10+5) print(10-5) print(int(10/5)) print(10*5) print(10%5) print(10//5) print(10/3) print(10//3) #10/3 = 3.3333 print(10**2) ###Output 15 5 2 50 0 2 3.3333333333333335 3 100 ###Markdown Bitwise Operators ###Code c = 60 #binary 0011 1100 d = 13 #binary 0000 1101 c&d print(c|d) print(c^d) print(d<<1) #0001 1010 print(d<<2) #0011 0100 ###Output 61 49 26 ###Markdown Logical Operators ###Code h = True l = False h and l h or l not(h or l) ###Output _____no_output_____ ###Markdown Application 2 ###Code #Python Assignment Operators x = 100 x += 3 #Same as x = x + 3, x = 100 + 3 = 103 #x -= 3 #Same as x = x - 3 #x *= 3 #Same as x = x * 3 #x /= 3 #Same as x = x / 3 #x %= 3 #Same as x = x % 3 print(x) ###Output 103 ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control Structure If Statement ###Code if a > b: print("a is greater than b") ###Output a is greater than b ###Markdown Elif Statement ###Code a = 10 b = 10 if a < b: print("a is less than b") elif a > b: print("a is greater than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If Statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If...Else Statement ###Code a = 6 b = 9 print("a is greater than b") if a > b else print("b is greater than a") ###Output b is greater than a ###Markdown And ###Code if b>a and a==a: print("both conditions are True") ###Output both conditions are True ###Markdown Or ###Code if a<b or b==b: print("the condition is True") ###Output the condition is True ###Markdown Nested If ###Code x = 48 print(x) if x > 10: print("x is above 10") if x > 20: print("and also above 20") if x > 30: print('and also above 30') if x > 40: print("and also above 40") if x > 50: print("and also 
above 50") else: print("but not above 40") ###Output 48 x is above 10 and also above 20 and also above 30 and also above 40 ###Markdown Week List for loop ###Code week = ['sunday','monday','tuesday','wednesday','thursday','friday','saturday',] for x in week: print(x.title()) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown The break statement ###Code # to display Sunday to Wednesday using a For Loop for x in week: print(x.title()) if x=="wednesday": break # to display only Wednesday using break statement for x in week: if x =="wednesday": break print(x) ###Output wednesday ###Markdown While Statement ###Code i = 1 while i < 6: print(i) i+=1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a python program that displays no. 3 using break statement. ###Code i = 1 while i<6: i+=1 if i==3: break print(i) ###Output 3 ###Markdown Boolean Operators ###Code #Booleans represent one of two values: True or False print(10==9) print(10!=9) print(10>5) print(10<5) a=10 b=9 print(a>b) print(a==a) print(b>a) print(bool('Hello')) print(bool()) ###Output True False ###Markdown Functions Can Return Boolean ###Code def myFunction(): return True print(myFunction()) def myFunction():return True if myFunction(): print('Yes') else: print('No') ###Output Yes ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10*5) print(10/5) ###Output 15 5 50 2.0 ###Markdown Python Bitwise Operators ###Code a=60 #(00111100) # The bin() function is used to print in binary format. 
b=13 #(00001101) print(a|b) print(bin(a|b)) a=60 #(00111100) b=13 #(00001101) print(a^b) print(bin(a^b)) ###Output 49 0b110001 ###Markdown Logical Operators ###Code x = 5 print(x > 3 and x < 10) x = 5 print(x > 3 or x > 10) print(not(x > 3 or x > 10)) ###Output True False ###Markdown Python Assignment Operators ###Code a=5 a+=1 print(a) b=2 b-=2 print(b) ###Output 6 0 ###Markdown Identity Operators ###Code a=1 b=2 print(a is b) print( a is not b) a='Happy' b= 'Sad' print(a is b) print(a is not b) ###Output False True ###Markdown BooleanOperator ###Code print(10>9) print(10<9) print(10==9) a=10 b=9 print(a>b) print(a<b) print(a==b) ###Output True False False ###Markdown BoolFunction ###Code print(bool(5)) print(bool("timmy")) print(bool(0)) print(bool(1)) print(bool(None)) def myFunction(): return True print(myFunction()) if myFunction(): print("yes") else: print("no") ###Output yes ###Markdown Application1 ###Code print(10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False ###Markdown PythonOperators ###Code print(10+9) print(10-9) print(10*2) print(10/2) print(10**2) ###Output 19 1 20 5.0 100 ###Markdown PythonBitwiseOperators ###Code a = 60 b = 13 print(a & b) print(a | b) ###Output 12 61 ###Markdown PythonAssignmentOperators ###Code a += 3 print (a) ###Output 69 ###Markdown LogicalOperators ###Code a = True b = False a and b ###Output _____no_output_____ ###Markdown Boolean Operators ###Code a = 7 b = 6 print(10>9) print(10<9) ###Output True False ###Markdown Bool () function ###Code #if True print(bool("Kiel")) print(bool(1)) #if False print(bool(0)) print(bool(False)) ###Output True True False False ###Markdown Functions can return a Boolean ###Code def my_Function(): return True print(my_Function()) if my_Function(): print("True") else: print("False") ###Output True ###Markdown Python Operators ###Code print(10+5) print(10-5) print(10/3) print(10*5) print(10%5) print(10//3) #10/3 = 3.3333 print(10**2) ###Output 15 5 3.3333333333333335 50 0 3 100 
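Floor division and modulo, used in several cells above, are tied together by an identity worth knowing: for positive integers, `a == b * (a // b) + (a % b)`. A small check (the variable names are just for illustration):

```python
a, b = 10, 3
q = a // b  # floor division: 10 // 3 -> 3
r = a % b   # modulo (remainder): 10 % 3 -> 1
print(q, r)            # -> 3 1
print(a == b * q + r)  # the identity holds -> True
```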
###Markdown Bitwise Operators ###Code c = 60 #Binary 0011 1100 d = 13 #Binary 0000 1101 c&d print(c|d) print(c^d) print(d<<2) ###Output 61 49 52 ###Markdown Logical Operators ###Code h = True l = False h and l h or l not (h or l) ###Output _____no_output_____ ###Markdown Application 2 ###Code x = 100 x += 3 #Same as x = x + 3, x = 100 + 3 = 103 print(x) ###Output 103 ###Markdown Identity Operators ###Code h is l h is not l ###Output _____no_output_____ ###Markdown Control Structure If Statement ###Code if a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Elif Statement ###Code if a<b: print("a is less than b") elif a>b: print("a is greater than b") ###Output a is greater than b ###Markdown Else Statement ###Code a = 10 b = 10 if a>b: print("a is greater than b") elif a<b: print("a is less than b") else: print("a is equal to b") ###Output a is equal to b ###Markdown Short Hand If Statement ###Code if a==b: print("a is equal to b") ###Output a is equal to b ###Markdown Short hand If... 
Else Statement ###Code a = 10 b = 9 print("a is greater than b") if a>b else print('b is greater than a') ###Output a is greater than b ###Markdown And ###Code if a>b and b==b: print("both conditions are True") ###Output both conditions are True ###Markdown Or ###Code if a<b or b==b: print("the condition is True") ###Output the condition is True ###Markdown Nested If ###Code x = int(input()) if x>10: print("x is above 10") if x>20: print("and also above 20") if x>30: print("and also above 30") if x>40: print("and also above 40") if x>50: print("and also above 50") else: print("but not above 50") ###Output 48 x is above 10 and also above 20 and also above 30 and also above 40 but not above 50 ###Markdown For Loop ###Code week = ['Sunday', "Monday", 'Tuesday', "Wednesday", 'Thursday', "Friday", 'Saturday'] for x in week: print(x) ###Output Sunday Monday Tuesday Wednesday Thursday Friday Saturday ###Markdown The Break Statement ###Code #to display Sunday to Wednesday using For loop for x in week: print(x) if x=="Wednesday": break #to display only Wednesday using break statement for x in week: if x=="Wednesday": break print(x) ###Output Wednesday ###Markdown While Statement ###Code i =1 while i < 6: print(i) i += 1 # same as i = i + 1 ###Output 1 2 3 4 5 ###Markdown Application 3 - Create a Python program that displays no.3 using break statement ###Code i = 1 while i < 6: if i==3: break i += 1 print(i) ###Output 3 ###Markdown Operations and Expressions ###Code print('Hello 0') print('Hello 1') print('Hello 2') print('Hello 3') print('Hello 4') print('Hello 5') print('Hello 6') print('Hello 7') print('Hello 8') print('Hello 9') print('Hello 10') print (10>9) a=6 b=7 print(a==b) print(a!=a) ###Output True False False
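The repeated `print('Hello 0')` … `print('Hello 10')` calls in the last cell can be condensed with the `for` loop and `range()` covered earlier. A sketch producing the same eleven lines (the `messages` list is only there to make the loop explicit):

```python
# same output as the eleven separate print calls, 'Hello 0' through 'Hello 10'
messages = ['Hello ' + str(i) for i in range(11)]
for message in messages:
    print(message)
```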
notes/05a_linear_systems_direct.ipynb
###Markdown Direct methods for solving linear systems Recall the prototypal PDE problem introduced in Lecture 08:$$-u_{xx}(x) = f(x)\quad\mathrm{ in }\ \Omega = (0, 1)$$$$u(x) = 0, \quad\mathrm{ on }\ \partial\Omega = \{0, 1\}$$The physical interpretation of this problem is related to the modelling of an elastic string, which occupies at rest the space $[0,1]$ and is fixed at the two extremes. The unknown $u(x)$ represents the displacement of the string at the point $x$, and the right-hand side models a prescribed force $f(x)$ on the string.For the numerical discretization of the problem, we consider a **Finite Difference (FD) Approximation**. Let $n$ be an integer, and consider a uniform subdivision of the interval $(0,1)$ using $n$ equispaced points, denoted by $\{x_i\}_{i=0}^n$ . Moreover, let $u_i$ be the FD approximation of $u(x_i)$, and similarly $f_i \approx f(x_i)$.In order to formulate the discrete problem, we consider a FD approximation of the left-hand side, as follows:$$-u_{xx}(x_i) \approx \frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2}$$where $h = \frac{1}{n-1}$ is the size of each subinterval $(x_i, x_{i+1})$.The problem that we need to solve is$$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$$$\frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2} = f_i \qquad\qquad\qquad i=1, \ldots, n-1,\qquad\qquad\qquad(P)$$$$u_i = 0 \qquad\qquad\qquad\qquad i=n.$$Then, let us collect all the unknowns $\{u_i\}_{i=0}^n$ in a vector $\mathbf{u}$. Then, (P) is a linear system$$A \mathbf{u} = \mathbf{f}.$$In this exercise we will show how to use direct methods to solve linear systems, and in particular we will discuss the **LU** and **Cholesky** decompositions that you have studied in Lecture 07.First of all, let us define $n$ and $\{x_i\}_{i=0}^n$. ###Code %matplotlib inline from numpy import * from matplotlib.pyplot import * n = 33 h = 1./(n-1) x=linspace(0,1,n) ###Output _____no_output_____ ###Markdown Let us define the left-hand side matrix $A$.
###Code a = -ones((n-1,)) # Offdiagonal entries b = 2*ones((n,)) # Diagonal entries A = (diag(a, -1) + diag(b, 0) + diag(a, +1)) A /= h**2 print(A) print(linalg.cond(A)) ###Output [[ 2048. -1024. 0. ... 0. 0. 0.] [-1024. 2048. -1024. ... 0. 0. 0.] [ 0. -1024. 2048. ... 0. 0. 0.] ... [ 0. 0. 0. ... 2048. -1024. 0.] [ 0. 0. 0. ... -1024. 2048. -1024.] [ 0. 0. 0. ... 0. -1024. 2048.]] 467.8426288390652 ###Markdown Moreover, let us choose $$f(x) = x (1-x)$$so that the solution $u(x)$ can be computed analytically as$$u(x) = u_{\mathrm{ex}}(x) = \frac{x^4}{12} - \frac{x^3}{6} +\frac{x}{12}$$The right hand side $\mathbf{f}$ then is easily assembled as: ###Code f = x*(1.-x) ###Output _____no_output_____ ###Markdown We still need to impose the boundary conditions at $x=0$ and $x=1$, which read$$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$and$$u_i = 0 \qquad\qquad\qquad\qquad i=n,$$These conditions are associated with the first (last, respectively) row of the linear system.Then we can solve the linear system and compare the FD approximation of $u$ to the exact solution $u_{\mathrm{ex}}$. ###Code # Change first row of the matrix A A[0,:] = 0 A[:,0] = 0 A[0,0] = 1 f[0] = 0 # Change last row of the matrix A A[-1,:] = 0 A[:,-1] = 0 A[-1,-1] = 1 f[-1] = 0 # Solve the linear system using numpy A1 = A.copy() u = linalg.solve(A1, f) u_ex = (x**4)/12. - (x**3)/6. + x/12. # Plot the FD and exact solution _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown LU decompositionWe want to implement our linear solver using an **LU decomposition** (without pivoting)$$A = LU$$LU decomposition can be computed as in the following function. 
###Code def LU(A): A = A.copy() N=len(A) for k in range(N-1): if (abs(A[k,k]) < 1e-15): raise RuntimeError("Null pivot") A[k+1:N,k] /= A[k,k] for j in range(k+1,N): A[k+1:N,j] -= A[k+1:N,k]*A[k,j] L=tril(A) for i in range(N): L[i,i]=1.0 U = triu(A) return L, U L, U = LU(A) ###Output _____no_output_____ ###Markdown Once $L$ and $U$ have been computed, the system$$A\mathbf{u}=\mathbf{f}$$can be solved in **two steps**: first solve$$L\mathbf{w}=\mathbf{f},$$where $L$ is a **lower triangular matrix**, and then solve$$U\mathbf{u}=\mathbf{w}$$where $U$ is an **upper triangular matrix**.These two systems can be easily solved by forward (backward, respectively) substitution. ###Code def L_solve(L,rhs): x = zeros_like(rhs) N = len(L) x[0] = rhs[0]/L[0,0] for i in range(1,N): x[i] = (rhs[i] - dot(L[i, 0:i], x[0:i]))/L[i,i] return x def U_solve(U,rhs): x = zeros_like(rhs) N=len(U) x[-1] = rhs[-1]/U[-1,-1] for i in reversed(range(N-1)): x[i] = (rhs[i] -dot(U[i, i+1:N], x[i+1:N]))/U[i,i] return x ###Output _____no_output_____ ###Markdown Now let's solve the system $$A\mathbf{u}=\mathbf{f}$$and compare the solution with respect to the exact solution. ###Code w = L_solve(L,f) u = U_solve(U,w) _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown Try to compute the solution $u(x)$ with different forcing terms and compare with the exact solution **without recomputing the LU decomposition** ###Code # YOUR CODE HERE ###Output _____no_output_____ ###Markdown Cholesky decompositionFor symmetric and positive definite matrices, the Cholesky decomposition may be preferred since it reduces the number of flops for computing the LU decomposition by a factor of 2.The Cholesky decomposition seeks an upper triangular matrix $H$ (with all positive elements on the diagonal) such that$$A = H^T H$$An implementation of the Cholesky decomposition is provided in the following function. We can use it to solve the linear system by forward and backward substitution.
###Code def cholesky(A): A = A.copy() N = len(A) for k in range(N-1): A[k,k] = sqrt(A[k,k]) A[k+1:N,k] = A[k+1:N,k]/A[k,k] for j in range(k+1,N): A[j:N,j] = A[j:N,j] - A[j:N,k]*A[j,k] A[-1,-1] = sqrt(A[-1,-1]) L=tril(A) return L, L.transpose() HT, H = cholesky(A) y = L_solve(HT,f) u = U_solve(H,y) _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown Direct methods for solving linear systems Recall the prototypal PDE problem introduced in Lecture 08:$$-u_{xx}(x) = f(x)\quad\mathrm{ in }\ \Omega = (0, 1)$$$$u(x) = 0, \quad\mathrm{ on }\ \partial\Omega = \{0, 1\}$$The physical interpretation of this problem is related to the modelling of an elastic string, which occupies at rest the space $[0,1]$ and is fixed at the two extremes. The unknown $u(x)$ represents the displacement of the string at the point $x$, and the right-hand side models a prescribed force $f(x)$ on the string.For the numerical discretization of the problem, we consider a **Finite Difference (FD) Approximation**. Let $n$ be an integer, and consider a uniform subdivision of the interval $(0,1)$ using $n$ equispaced points, denoted by $\{x_i\}_{i=0}^n$ . Moreover, let $u_i$ be the FD approximation of $u(x_i)$, and similarly $f_i \approx f(x_i)$.In order to formulate the discrete problem, we consider a FD approximation of the left-hand side, as follows:$$-u_{xx}(x_i) \approx \frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2}$$where $h = \frac{1}{n-1}$ is the size of each subinterval $(x_i, x_{i+1})$.The problem that we need to solve is$$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$$$\frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2} = f_i \qquad\qquad\qquad i=1, \ldots, n-1,\qquad\qquad\qquad(P)$$$$u_i = 0 \qquad\qquad\qquad\qquad i=n.$$Then, let us collect all the unknowns $\{u_i\}_{i=0}^n$ in a vector $\mathbf{u}$.
Then, (P) is a linear system$$A \mathbf{u} = \mathbf{f}.$$In this exercise we will show how to use direct methods to solve linear systems, and in particular we will discuss the **LU** and **Cholesky** decompositions that you have studied in Lecture 07.First of all, let us define $n$ and $\{x_i\}_{i=0}^n$. ###Code %matplotlib inline from numpy import * from matplotlib.pyplot import * n = 33 h = 1./(n-1) x=linspace(0,1,n) ###Output _____no_output_____ ###Markdown Let us define the left-hand side matrix $A$. ###Code a = -ones((n-1,)) # Offdiagonal entries b = 2*ones((n,)) # Diagonal entries A = (diag(a, -1) + diag(b, 0) + diag(a, +1)) A /= h**2 print(A) print(linalg.cond(A)) ###Output [[ 2048. -1024. 0. ... 0. 0. 0.] [-1024. 2048. -1024. ... 0. 0. 0.] [ 0. -1024. 2048. ... 0. 0. 0.] ... [ 0. 0. 0. ... 2048. -1024. 0.] [ 0. 0. 0. ... -1024. 2048. -1024.] [ 0. 0. 0. ... 0. -1024. 2048.]] 467.84262883905507 ###Markdown Moreover, let us choose $$f(x) = x (1-x)$$so that the solution $u(x)$ can be computed analytically as$$u(x) = u_{\mathrm{ex}}(x) = \frac{x^4}{12} - \frac{x^3}{6} +\frac{x}{12}$$The right hand side $\mathbf{f}$ then is easily assembled as: ###Code f = x*(1.-x) ###Output _____no_output_____ ###Markdown We still need to impose the boundary conditions at $x=0$ and $x=1$, which read$$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$and$$u_i = 0 \qquad\qquad\qquad\qquad i=n,$$These conditions are associated with the first (last, respectively) row of the linear system.Then we can solve the linear system and compare the FD approximation of $u$ to the exact solution $u_{\mathrm{ex}}$. ###Code # Change first row of the matrix A A[0,:] = 0 A[:,0] = 0 A[0,0] = 1 f[0] = 0 # Change last row of the matrix A A[-1,:] = 0 A[:,-1] = 0 A[-1,-1] = 1 f[-1] = 0 # Solve the linear system using numpy A1 = A.copy() u = linalg.solve(A1, f) u_ex = (x**4)/12. - (x**3)/6. + x/12.
# Plot the FD and exact solution _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown LU decompositionWe want to implement our linear solver using an **LU decomposition** (without pivoting)$$A = LU$$LU decomposition can be computed as in the following function. ###Code def LU(A): A = A.copy() N=len(A) for k in range(N-1): if (abs(A[k,k]) < 1e-15): raise RuntimeError("Null pivot") A[k+1:N,k] /= A[k,k] for j in range(k+1,N): A[k+1:N,j] -= A[k+1:N,k]*A[k,j] L=tril(A) for i in range(N): L[i,i]=1.0 U = triu(A) return L, U L, U = LU(A) ###Output _____no_output_____ ###Markdown Once $L$ and $U$ have been computed, the system$$A\mathbf{u}=\mathbf{f}$$can be solved in **two steps**: first solve$$L\mathbf{w}=\mathbf{f},$$where $L$ is a **lower triangular matrix**, and then solve$$U\mathbf{u}=\mathbf{w}$$where $U$ is an **upper triangular matrix**.These two systems can be easily solved by forward (backward, respectively) substitution. ###Code def L_solve(L,rhs): x = zeros_like(rhs) N = len(L) x[0] = rhs[0]/L[0,0] for i in range(1,N): x[i] = (rhs[i] - dot(L[i, 0:i], x[0:i]))/L[i,i] return x def U_solve(U,rhs): x = zeros_like(rhs) N=len(U) x[-1] = rhs[-1]/U[-1,-1] for i in reversed(range(N-1)): x[i] = (rhs[i] -dot(U[i, i+1:N], x[i+1:N]))/U[i,i] return x ###Output _____no_output_____ ###Markdown Now let's solve the system $$A\mathbf{u}=\mathbf{f}$$and compare the solution with respect to the exact solution. 
###Code w = L_solve(L,f) u = U_solve(U,w) _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown try to compute the solution $u(x)$ with different forcing terms and compare with the exact solution **without recomputing the LU decomposition** ###Code # YOUR CODE HERE ###Output _____no_output_____ ###Markdown Cholesky decompositionFor symmetric and positive define matrices, the Cholesky decomposition may be preferred since it reduces the number of flops for computing the LU decomposition by a factor of 2.The Cholesky decomposotion seeks an upper triangular matrix $H$ (with all positive elements on the diagonal) such that$$A = H^T H$$An implementation of the Cholesky decomposition is provided in the following function. We can use it to solve the linear system by forward and backward substitution. ###Code def cholesky(A): A = A.copy() N = len(A) for k in range(N-1): A[k,k] = sqrt(A[k,k]) A[k+1:N,k] = A[k+1:N,k]/A[k,k] for j in range(k+1,N): A[j:N,j] = A[j:N,j] - A[j:N,k]*A[j,k] A[-1,-1] = sqrt(A[-1,-1]) L=tril(A) return L, L.transpose() HT, H = cholesky(A) y = L_solve(HT,f) u = U_solve(H,y) _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown Direct methods for solving linear systems Recall the prototypal PDE problem introduce in the Lecture 08:$$-u_{xx}(x) = f(x)\quad\mathrm{ in }\ \Omega = (0, 1)$$$$u(x) = 0, \quad\mathrm{ on }\ \partial\Omega = \{0, 1\}$$The physical interpretation of this problem is related to the modelling of an elastic string, which occupies at rest the space $[0,1]$ and is fixed at the two extremes. The unknown $u(x)$ represents the displacement of the string at the point $x$, and the right-hand side models a prescribed force $f(x)$ on the string.For the numerical discretization of the problem, we consider a **Finite Difference (FD) Approximation**. Let $n$ be an integer, a consider a uniform subdivision of the interval $(0,1)$ using $n$ equispaced points, denoted by $\{x_i\}_{i=0}^n$ . 
Moreover, let $u_i$ be the FD approximation of $u(x_i)$, and similarly $f_i \approx f(x_i)$.In order to formulate the discrete problem, we consider a FD approximation of the left-hand side, as follows:$$-u_{xx}(x_i) \approx \frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2}$$being $h = \frac{1}{n-1}$ the size of each subinterval $(x_i, x_{i+1})$.The problem that we need to solve is$$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$$$\frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2} = f_i \qquad\qquad\qquad i=1, \ldots, n-1,\qquad\qquad\qquad(P)$$$$u_i = 0 \qquad\qquad\qquad\qquad i=n.$$Then, let us collect al the unknowns $\{u_i\}_{i=0}^n$ in a vector $\mathbf{u}$. Then, (P) is a linear system$$A \mathbf{u} = \mathbf{f}.$$In this exercise we will show how to use direct methods to solve linear systems, and in particular we will discuss the **LU** and **Cholesky** decompositions that you have studied in Lecture 07.First of all, let use define $n$ and $\{x_i\}_{i=0}^n$. ###Code %matplotlib inline from numpy import * from matplotlib.pyplot import * n = 33 h = 1./(n-1) x=linspace(0,1,n) ###Output _____no_output_____ ###Markdown Let us define the left-hand side matrix $A$. ###Code a = -ones((n-1,)) # Offdiagonal entries b = 2*ones((n,)) # Diagonal entries A = (diag(a, -1) + diag(b, 0) + diag(a, +1)) A /= h**2 print(A) print(linalg.cond(A)) ###Output [[ 2048. -1024. 0. ... 0. 0. 0.] [-1024. 2048. -1024. ... 0. 0. 0.] [ 0. -1024. 2048. ... 0. 0. 0.] ... [ 0. 0. 0. ... 2048. -1024. 0.] [ 0. 0. 0. ... -1024. 2048. -1024.] [ 0. 0. 0. ... 0. -1024. 
2048.]] 467.8426288390642 ###Markdown Moreover, let us choose $$f(x) = x (1-x)$$so that the solution $u(x)$ can be computed analytically as$$u(x) = u_{\mathrm{ex}}(x) = \frac{x^4}{12} - \frac{x^3}{6} +\frac{x}{12}$$The right hand side $\mathbf{f}$ then is easily assembled as: ###Code f = x*(1.-x) ###Output _____no_output_____ ###Markdown We still need to impose the boundary conditions at $x=0$ and $x=1$, which read$$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$and$$u_i = 0 \qquad\qquad\qquad\qquad i=n,$$These conditions are associated with the first (last, respectively) row of the linear system.Then we can solve the linear system and compare the FD approximation of $u$ to the exact solution $u_{\mathrm{ex}}$. ###Code # Change first row of the matrix A A[0,:] = 0 A[:,0] = 0 A[0,0] = 1 f[0] = 0 # Change last row of the matrix A A[-1,:] = 0 A[:,-1] = 0 A[-1,-1] = 1 f[-1] = 0 # Solve the linear system using numpy A1 = A.copy() ## This function performs the LU-factorization of A1 ## and then solves the Upper and Lower system of linear equations u = linalg.solve(A1, f) u_ex = (x**4)/12. - (x**3)/6. + x/12. # Plot the FD and exact solution _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown LU decompositionWe want to implement our linear solver using an **LU decomposition** (without pivoting)$$A = LU$$LU decomposition can be computed as in the following function. 
###Code def LU(A): A = A.copy() N=len(A) for k in range(N-1): if (abs(A[k,k]) < 1e-15): raise RuntimeError("Null pivot") A[k+1:N,k] /= A[k,k] for j in range(k+1,N): A[k+1:N,j] -= A[k+1:N,k]*A[k,j] L=tril(A) for i in range(N): L[i,i]=1.0 U = triu(A) return L, U L, U = LU(A) ###Output _____no_output_____ ###Markdown Once $L$ and $U$ have been computed, the system$$A\mathbf{u}=\mathbf{f}$$can be solved in **two steps**: first solve$$L\mathbf{w}=\mathbf{f},$$where $L$ is a **lower triangular matrix**, and then solve$$U\mathbf{u}=\mathbf{w}$$where $U$ is an **upper triangular matrix**.These two systems can be easily solved by forward (backward, respectively) substitution. ###Code def L_solve(L,rhs): x = zeros_like(rhs) N = len(L) x[0] = rhs[0]/L[0,0] for i in range(1,N): x[i] = (rhs[i] - dot(L[i, 0:i], x[0:i]))/L[i,i] return x def U_solve(U,rhs): x = zeros_like(rhs) N=len(U) x[-1] = rhs[-1]/U[-1,-1] for i in reversed(range(N-1)): x[i] = (rhs[i] -dot(U[i, i+1:N], x[i+1:N]))/U[i,i] return x ###Output _____no_output_____ ###Markdown Now let's solve the system $$A\mathbf{u}=\mathbf{f}$$and compare the solution with respect to the exact solution. ###Code w = L_solve(L,f) u = U_solve(U,w) _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown try to compute the solution $u(x)$ with different forcing terms and compare with the exact solution **without recomputing the LU decomposition** ###Code # TODO ###Output _____no_output_____ ###Markdown Cholesky decompositionFor symmetric and positive define matrices, the Cholesky decomposition may be preferred since it reduces the number of flops for computing the LU decomposition by a factor of 2.The Cholesky decomposotion seeks an upper triangular matrix $H$ (with all positive elements on the diagonal) such that$$A = H^T H$$An implementation of the Cholesky decomposition is provided in the following function. We can use it to solve the linear system by forward and backward substitution. 
###Code def cholesky(A): A = A.copy() N = len(A) for k in range(N-1): A[k,k] = sqrt(A[k,k]) A[k+1:N,k] = A[k+1:N,k]/A[k,k] for j in range(k+1,N): A[j:N,j] = A[j:N,j] - A[j:N,k]*A[j,k] A[-1,-1] = sqrt(A[-1,-1]) L=tril(A) return L, L.transpose() HT, H = cholesky(A) y = L_solve(HT,f) u = U_solve(H,y) _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown Direct methods for solving linear systems Recall the prototypal PDE problem introduce in the Lecture 08:$$-u_{xx}(x) = f(x)\quad\mathrm{ in }\ \Omega = (0, 1)$$$$u(x) = 0, \quad\mathrm{ on }\ \partial\Omega = \{0, 1\}$$The physical interpretation of this problem is related to the modelling of an elastic string, which occupies at rest the space $[0,1]$ and is fixed at the two extremes. The unknown $u(x)$ represents the displacement of the string at the point $x$, and the right-hand side models a prescribed force $f(x)$ on the string.For the numerical discretization of the problem, we consider a **Finite Difference (FD) Approximation**. Let $n$ be an integer, a consider a uniform subdivision of the interval $(0,1)$ using $n$ equispaced points, denoted by $\{x_i\}_{i=0}^n$ . Moreover, let $u_i$ be the FD approximation of $u(x_i)$, and similarly $f_i \approx f(x_i)$.In order to formulate the discrete problem, we consider a FD approximation of the left-hand side, as follows:$$-u_{xx}(x_i) \approx \frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2}$$being $h = \frac{1}{n-1}$ the size of each subinterval $(x_i, x_{i+1})$.The problem that we need to solve is$$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$$$\frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2} = f_i \qquad\qquad\qquad i=1, \ldots, n-1,\qquad\qquad\qquad(P)$$$$u_i = 0 \qquad\qquad\qquad\qquad i=n.$$Then, let us collect al the unknowns $\{u_i\}_{i=0}^n$ in a vector $\mathbf{u}$. 
Then, (P) is a linear system$$A \mathbf{u} = \mathbf{f}.$$In this exercise we will show how to use direct methods to solve linear systems, and in particular we will discuss the **LU** and **Cholesky** decompositions that you have studied in Lecture 07.First of all, let use define $n$ and $\{x_i\}_{i=0}^n$. ###Code %matplotlib inline from numpy import * from matplotlib.pyplot import * n = 33 h = 1./(n-1) x=linspace(0,1,n) ###Output _____no_output_____ ###Markdown Let us define the left-hand side matrix $A$. ###Code a = -ones((n-1,)) # Offdiagonal entries b = 2*ones((n,)) # Diagonal entries A = (diag(a, -1) + diag(b, 0) + diag(a, +1)) A /= h**2 print(A) print(linalg.cond(A)) ###Output [[ 2048. -1024. 0. ... 0. 0. 0.] [-1024. 2048. -1024. ... 0. 0. 0.] [ 0. -1024. 2048. ... 0. 0. 0.] ... [ 0. 0. 0. ... 2048. -1024. 0.] [ 0. 0. 0. ... -1024. 2048. -1024.] [ 0. 0. 0. ... 0. -1024. 2048.]] 467.84262883905507 ###Markdown Moreover, let us choose $$f(x) = x (1-x)$$so that the solution $u(x)$ can be computed analytically as$$u(x) = u_{\mathrm{ex}}(x) = \frac{x^4}{12} - \frac{x^3}{6} +\frac{x}{12}$$The right hand side $\mathbf{f}$ then is easily assembled as: ###Code f = x*(1.-x) ###Output _____no_output_____ ###Markdown We still need to impose the boundary conditions at $x=0$ and $x=1$, which read$$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$and$$u_i = 0 \qquad\qquad\qquad\qquad i=n,$$These conditions are associated with the first (last, respectively) row of the linear system.Then we can solve the linear system and compare the FD approximation of $u$ to the exact solution $u_{\mathrm{ex}}$. ###Code # Change first row of the matrix A A[0,:] = 0 A[:,0] = 0 A[0,0] = 1 f[0] = 0 # Change last row of the matrix A A[-1,:] = 0 A[:,-1] = 0 A[-1,-1] = 1 f[-1] = 0 # Solve the linear system using numpy A1 = A.copy() u = linalg.solve(A1, f) u_ex = (x**4)/12. - (x**3)/6. + x/12. 
# Plot the FD and exact solution _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown LU decompositionWe want to implement our linear solver using an **LU decomposition** (without pivoting)$$A = LU$$LU decomposition can be computed as in the following function. ###Code def LU(A): A = A.copy() N=len(A) for k in range(N-1): if (abs(A[k,k]) < 1e-15): raise RuntimeError("Null pivot") A[k+1:N,k] /= A[k,k] for j in range(k+1,N): A[k+1:N,j] -= A[k+1:N,k]*A[k,j] L=tril(A) for i in range(N): L[i,i]=1.0 U = triu(A) return L, U L, U = LU(A) ###Output _____no_output_____ ###Markdown Once $L$ and $U$ have been computed, the system$$A\mathbf{u}=\mathbf{f}$$can be solved in **two steps**: first solve$$L\mathbf{w}=\mathbf{f},$$where $L$ is a **lower triangular matrix**, and then solve$$U\mathbf{u}=\mathbf{w}$$where $U$ is an **upper triangular matrix**.These two systems can be easily solved by forward (backward, respectively) substitution. ###Code def L_solve(L,rhs): x = zeros_like(rhs) N = len(L) x[0] = rhs[0]/L[0,0] for i in range(1,N): x[i] = (rhs[i] - dot(L[i, 0:i], x[0:i]))/L[i,i] return x def U_solve(U,rhs): x = zeros_like(rhs) N = len(U) - 1 x[N] = rhs[N]/U[N,N] # alternative: x[-1] = rhs[-1]/U[-1,-1] for i in range(N-1,0,-1): # alternative: reversed(range(N-1)) x[i] = (rhs[i] - dot(U[i, i+1:N], x[i+1:N]))/U[i,i] return x ###Output _____no_output_____ ###Markdown Now let's solve the system $$A\mathbf{u}=\mathbf{f}$$and compare the solution with respect to the exact solution. 
###Code w = L_solve(L,f) u = U_solve(U,w) _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown try to compute the solution $u(x)$ with different forcing terms and compare with the exact solution **without recomputing the LU decomposition** ###Code # YOUR CODE HERE ###Output _____no_output_____ ###Markdown Cholesky decompositionFor symmetric and positive define matrices, the Cholesky decomposition may be preferred since it reduces the number of flops for computing the LU decomposition by a factor of 2.The Cholesky decomposotion seeks an upper triangular matrix $H$ (with all positive elements on the diagonal) such that$$A = H^T H$$An implementation of the Cholesky decomposition is provided in the following function. We can use it to solve the linear system by forward and backward substitution. ###Code def cholesky(A): A = A.copy() N = len(A) for k in range(N-1): A[k,k] = sqrt(A[k,k]) A[k+1:N,k] = A[k+1:N,k]/A[k,k] for j in range(k+1,N): A[j:N,j] = A[j:N,j] - A[j:N,k]*A[j,k] A[-1,-1] = sqrt(A[-1,-1]) L=tril(A) return L, L.transpose() HT, H = cholesky(A) y = L_solve(HT,f) u = U_solve(H,y) _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown Direct methods for solving linear systems Recall the prototypal PDE problem introduce in the Lecture 08:$$-u_{xx}(x) = f(x)\quad\mathrm{ in }\ \Omega = (0, 1)$$$$u(x) = 0, \quad\mathrm{ on }\ \partial\Omega = \{0, 1\}$$The physical interpretation of this problem is related to the modelling of an elastic string, which occupies at rest the space $[0,1]$ and is fixed at the two extremes. The unknown $u(x)$ represents the displacement of the string at the point $x$, and the right-hand side models a prescribed force $f(x)$ on the string.For the numerical discretization of the problem, we consider a **Finite Difference (FD) Approximation**. Let $n$ be an integer, a consider a uniform subdivision of the interval $(0,1)$ using $n$ equispaced points, denoted by $\{x_i\}_{i=0}^n$ . 
Moreover, let $u_i$ be the FD approximation of $u(x_i)$, and similarly $f_i \approx f(x_i)$.In order to formulate the discrete problem, we consider a FD approximation of the left-hand side, as follows:$$-u_{xx}(x_i) \approx \frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2}$$being $h = \frac{1}{n-1}$ the size of each subinterval $(x_i, x_{i+1})$.The problem that we need to solve is$$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$$$\frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2} = f_i \qquad\qquad\qquad i=1, \ldots, n-1,\qquad\qquad\qquad(P)$$$$u_i = 0 \qquad\qquad\qquad\qquad i=n.$$Then, let us collect al the unknowns $\{u_i\}_{i=0}^n$ in a vector $\mathbf{u}$. Then, (P) is a linear system$$A \mathbf{u} = \mathbf{f}.$$In this exercise we will show how to use direct methods to solve linear systems, and in particular we will discuss the **LU** and **Cholesky** decompositions that you have studied in Lecture 07.First of all, let use define $n$ and $\{x_i\}_{i=0}^n$. ###Code %matplotlib inline from numpy import * from matplotlib.pyplot import * n = 33 h = 1./(n-1) x=linspace(0,1,n) ###Output _____no_output_____ ###Markdown Let us define the left-hand side matrix $A$. ###Code a = -ones((n-1,)) # Offdiagonal entries b = 2*ones((n,)) # Diagonal entries A = (diag(a, -1) + diag(b, 0) + diag(a, +1)) A /= h**2 print(A) print(linalg.cond(A)) ###Output [[ 2048. -1024. 0. ..., 0. 0. 0.] [-1024. 2048. -1024. ..., 0. 0. 0.] [ 0. -1024. 2048. ..., 0. 0. 0.] ..., [ 0. 0. 0. ..., 2048. -1024. 0.] [ 0. 0. 0. ..., -1024. 2048. -1024.] [ 0. 0. 0. ..., 0. -1024. 
2048.]] 467.842628839 ###Markdown Moreover, let us choose $$f(x) = x (1-x)$$so that the solution $u(x)$ can be computed analytically as$$u(x) = u_{\mathrm{ex}}(x) = \frac{x^4}{12} - \frac{x^3}{6} +\frac{x}{12}$$The right hand side $\mathbf{f}$ then is easily assembled as: ###Code f = x*(1.-x) ###Output _____no_output_____ ###Markdown We still need to impose the boundary conditions at $x=0$ and $x=1$, which read$$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$and$$u_i = 0 \qquad\qquad\qquad\qquad i=n,$$These conditions are associated with the first (last, respectively) row of the linear system.Then we can solve the linear system and compare the FD approximation of $u$ to the exact solution $u_{\mathrm{ex}}$. ###Code # Change first row of the matrix A A[0,:] = 0 A[:,0] = 0 A[0,0] = 1 f[0] = 0 # Change last row of the matrix A A[-1,:] = 0 A[:,-1] = 0 A[-1,-1] = 1 f[-1] = 0 # Solve the linear system using numpy A1 = A.copy() u = linalg.solve(A1, f) u_ex = (x**4)/12. - (x**3)/6. + x/12. # Plot the FD and exact solution _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown LU decompositionWe want to implement our linear solver using an **LU decomposition** (without pivoting)$$A = LU$$LU decomposition can be computed as in the following function. ###Code def LU(A): A = A.copy() N=len(A) for k in range(N-1): if (abs(A[k,k]) < 1e-15): raise RuntimeError("Null pivot") A[k+1:N,k] /= A[k,k] for j in range(k+1,N): A[k+1:N,j] -= A[k+1:N,k]*A[k,j] L=tril(A) for i in range(N): L[i,i]=1.0 U = triu(A) return L, U L, U = LU(A) ###Output _____no_output_____ ###Markdown Once $L$ and $U$ have been computed, the system$$A\mathbf{u}=\mathbf{f}$$can be solved in **two steps**: first solve$$L\mathbf{w}=\mathbf{f},$$where $L$ is a **lower triangular matrix**, and then solve$$U\mathbf{u}=\mathbf{w}$$where $U$ is an **upper triangular matrix**.These two systems can be easily solved by forward (backward, respectively) substitution. 
###Code def L_solve(L,rhs): x = zeros_like(rhs) N = len(L) x[0] = rhs[0]/L[0,0] for i in range(1,N): x[i] = (rhs[i] - dot(L[i, 0:i], x[0:i]))/L[i,i] return x def U_solve(U,rhs): x = zeros_like(rhs) N=len(U) x[-1] = rhs[-1]/U[-1,-1] for i in reversed(range(N-1)): x[i] = (rhs[i] -dot(U[i, i+1:N], x[i+1:N]))/U[i,i] return x ###Output _____no_output_____ ###Markdown Now let's solve the system $$A\mathbf{u}=\mathbf{f}$$and compare the solution with respect to the exact solution. ###Code w = L_solve(L,f) u = U_solve(U,w) _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown try to compute the solution $u(x)$ with different forcing terms and compare with the exact solution **without recomputing the LU decomposition** ###Code # YOUR CODE HERE ###Output _____no_output_____ ###Markdown Cholesky decompositionFor symmetric and positive define matrices, the Cholesky decomposition may be preferred since it reduces the number of flops for computing the LU decomposition by a factor of 2.The Cholesky decomposotion seeks an upper triangular matrix $H$ (with all positive elements on the diagonal) such that$$A = H^T H$$An implementation of the Cholesky decomposition is provided in the following function. We can use it to solve the linear system by forward and backward substitution. 
###Code def cholesky(A): A = A.copy() N = len(A) for k in range(N-1): A[k,k] = sqrt(A[k,k]) A[k+1:N,k] = A[k+1:N,k]/A[k,k] for j in range(k+1,N): A[j:N,j] = A[j:N,j] - A[j:N,k]*A[j,k] A[-1,-1] = sqrt(A[-1,-1]) L=tril(A) return L, L.transpose() HT, H = cholesky(A) y = L_solve(HT,f) u = U_solve(H,y) _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown Direct methods for solving linear systems Recall the prototypal PDE problem introduce in the Lecture 08:$$-u_{xx}(x) = f(x)\quad\mathrm{ in }\ \Omega = (0, 1)$$$$u(x) = 0, \quad\mathrm{ on }\ \partial\Omega = \{0, 1\}$$The physical interpretation of this problem is related to the modelling of an elastic string, which occupies at rest the space $[0,1]$ and is fixed at the two extremes. The unknown $u(x)$ represents the displacement of the string at the point $x$, and the right-hand side models a prescribed force $f(x)$ on the string.For the numerical discretization of the problem, we consider a **Finite Difference (FD) Approximation**. Let $n$ be an integer, a consider a uniform subdivision of the interval $(0,1)$ using $n$ equispaced points, denoted by $\{x_i\}_{i=0}^n$ . Moreover, let $u_i$ be the FD approximation of $u(x_i)$, and similarly $f_i \approx f(x_i)$.In order to formulate the discrete problem, we consider a FD approximation of the left-hand side, as follows:$$-u_{xx}(x_i) \approx \frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2}$$being $h = \frac{1}{n-1}$ the size of each subinterval $(x_i, x_{i+1})$.The problem that we need to solve is$$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$$$\frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2} = f_i \qquad\qquad\qquad i=1, \ldots, n-1,\qquad\qquad\qquad(P)$$$$u_i = 0 \qquad\qquad\qquad\qquad i=n.$$Then, let us collect al the unknowns $\{u_i\}_{i=0}^n$ in a vector $\mathbf{u}$. 
Then, (P) is a linear system$$A \mathbf{u} = \mathbf{f}.$$In this exercise we will show how to use direct methods to solve linear systems, and in particular we will discuss the **LU** and **Cholesky** decompositions that you have studied in Lecture 07.First of all, let use define $n$ and $\{x_i\}_{i=0}^n$. ###Code %matplotlib inline from numpy import * from matplotlib.pyplot import * n = 33 h = 1./(n-1) x=linspace(0,1,n) ###Output _____no_output_____ ###Markdown Let us define the left-hand side matrix $A$. ###Code a = -ones((n-1,)) # Offdiagonal entries b = 2*ones((n,)) # Diagonal entries A = (diag(a, -1) + diag(b, 0) + diag(a, +1)) A /= h**2 print(A) print(linalg.cond(A)) ###Output [[ 2048. -1024. 0. ... 0. 0. 0.] [-1024. 2048. -1024. ... 0. 0. 0.] [ 0. -1024. 2048. ... 0. 0. 0.] ... [ 0. 0. 0. ... 2048. -1024. 0.] [ 0. 0. 0. ... -1024. 2048. -1024.] [ 0. 0. 0. ... 0. -1024. 2048.]] 467.8426288390652 ###Markdown Moreover, let us choose $$f(x) = x (1-x)$$so that the solution $u(x)$ can be computed analytically as$$u(x) = u_{\mathrm{ex}}(x) = \frac{x^4}{12} - \frac{x^3}{6} +\frac{x}{12}$$The right hand side $\mathbf{f}$ then is easily assembled as: ###Code f = x*(1.-x) ###Output _____no_output_____ ###Markdown We still need to impose the boundary conditions at $x=0$ and $x=1$, which read$$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$and$$u_i = 0 \qquad\qquad\qquad\qquad i=n,$$These conditions are associated with the first (last, respectively) row of the linear system.Then we can solve the linear system and compare the FD approximation of $u$ to the exact solution $u_{\mathrm{ex}}$. ###Code # Change first row of the matrix A A[0,:] = 0 A[:,0] = 0 A[0,0] = 1 f[0] = 0 # Change last row of the matrix A A[-1,:] = 0 A[:,-1] = 0 A[-1,-1] = 1 f[-1] = 0 # Solve the linear system using numpy A1 = A.copy() u = linalg.solve(A1, f) u_ex = (x**4)/12. - (x**3)/6. + x/12. 
# Plot the FD and exact solution _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown LU decompositionWe want to implement our linear solver using an **LU decomposition** (without pivoting)$$A = LU$$LU decomposition can be computed as in the following function. ###Code def LU(A): A = A.copy() N=len(A) for k in range(N-1): if (abs(A[k,k]) < 1e-15): raise RuntimeError("Null pivot") A[k+1:N,k] /= A[k,k] for j in range(k+1,N): A[k+1:N,j] -= A[k+1:N,k]*A[k,j] L=tril(A) for i in range(N): L[i,i]=1.0 U = triu(A) return L, U L, U = LU(A) ###Output _____no_output_____ ###Markdown Once $L$ and $U$ have been computed, the system$$A\mathbf{u}=\mathbf{f}$$can be solved in **two steps**: first solve$$L\mathbf{w}=\mathbf{f},$$where $L$ is a **lower triangular matrix**, and then solve$$U\mathbf{u}=\mathbf{w}$$where $U$ is an **upper triangular matrix**.These two systems can be easily solved by forward (backward, respectively) substitution. ###Code def L_solve(L,rhs): x = zeros_like(rhs) N = len(L) x[0] = rhs[0]/L[0,0] for i in range(1,N): x[i] = (rhs[i] - dot(L[i, 0:i], x[0:i]))/L[i,i] return x def U_solve(U,rhs): x = zeros_like(rhs) N=len(U) x[-1] = rhs[-1]/U[-1,-1] for i in reversed(range(N-1)): x[i] = (rhs[i] -dot(U[i, i+1:N], x[i+1:N]))/U[i,i] return x ###Output _____no_output_____ ###Markdown Now let's solve the system $$A\mathbf{u}=\mathbf{f}$$and compare the solution with respect to the exact solution. 
###Code w = L_solve(L,f) u = U_solve(U,w) _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown try to compute the solution $u(x)$ with different forcing terms and compare with the exact solution **without recomputing the LU decomposition** ###Code # YOUR CODE HERE ###Output _____no_output_____ ###Markdown Cholesky decompositionFor symmetric and positive define matrices, the Cholesky decomposition may be preferred since it reduces the number of flops for computing the LU decomposition by a factor of 2.The Cholesky decomposotion seeks an upper triangular matrix $H$ (with all positive elements on the diagonal) such that$$A = H^T H$$An implementation of the Cholesky decomposition is provided in the following function. We can use it to solve the linear system by forward and backward substitution. ###Code def cholesky(A): A = A.copy() N = len(A) for k in range(N-1): A[k,k] = sqrt(A[k,k]) A[k+1:N,k] = A[k+1:N,k]/A[k,k] for j in range(k+1,N): A[j:N,j] = A[j:N,j] - A[j:N,k]*A[j,k] A[-1,-1] = sqrt(A[-1,-1]) L=tril(A) return L, L.transpose() HT, H = cholesky(A) y = L_solve(HT,f) u = U_solve(H,y) _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown Direct methods for solving linear systems Recall the prototypal PDE problem introduce in the Lecture 08:$$-u_{xx}(x) = f(x)\quad\mathrm{ in }\ \Omega = (0, 1)$$$$u(x) = 0, \quad\mathrm{ on }\ \partial\Omega = \{0, 1\}$$The physical interpretation of this problem is related to the modelling of an elastic string, which occupies at rest the space $[0,1]$ and is fixed at the two extremes. The unknown $u(x)$ represents the displacement of the string at the point $x$, and the right-hand side models a prescribed force $f(x)$ on the string.For the numerical discretization of the problem, we consider a **Finite Difference (FD) Approximation**. Let $n$ be an integer, a consider a uniform subdivision of the interval $(0,1)$ using $n$ equispaced points, denoted by $\{x_i\}_{i=0}^n$ . 
Moreover, let $u_i$ be the FD approximation of $u(x_i)$, and similarly $f_i \approx f(x_i)$.In order to formulate the discrete problem, we consider a FD approximation of the left-hand side, as follows:$$-u_{xx}(x_i) \approx \frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2}$$being $h = \frac{1}{n-1}$ the size of each subinterval $(x_i, x_{i+1})$.The problem that we need to solve is$$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$$$\frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2} = f_i \qquad\qquad\qquad i=1, \ldots, n-1,\qquad\qquad\qquad(P)$$$$u_i = 0 \qquad\qquad\qquad\qquad i=n.$$Then, let us collect al the unknowns $\{u_i\}_{i=0}^n$ in a vector $\mathbf{u}$. Then, (P) is a linear system$$A \mathbf{u} = \mathbf{f}.$$In this exercise we will show how to use direct methods to solve linear systems, and in particular we will discuss the **LU** and **Cholesky** decompositions that you have studied in Lecture 07.First of all, let use define $n$ and $\{x_i\}_{i=0}^n$. ###Code %matplotlib inline from numpy import * from matplotlib.pyplot import * n = 33 h = 1./(n-1) x=linspace(0,1,n) ###Output _____no_output_____ ###Markdown Let us define the left-hand side matrix $A$. ###Code a = -ones((n-1,)) # Offdiagonal entries b = 2*ones((n,)) # Diagonal entries A = (diag(a, -1) + diag(b, 0) + diag(a, +1)) A /= h**2 print(A) print(linalg.cond(A)) ###Output [[ 2048. -1024. 0. ..., 0. 0. 0.] [-1024. 2048. -1024. ..., 0. 0. 0.] [ 0. -1024. 2048. ..., 0. 0. 0.] ..., [ 0. 0. 0. ..., 2048. -1024. 0.] [ 0. 0. 0. ..., -1024. 2048. -1024.] [ 0. 0. 0. ..., 0. -1024. 
2048.]] 467.842628839 ###Markdown Moreover, let us choose $$f(x) = x (1-x)$$so that the solution $u(x)$ can be computed analytically as$$u(x) = u_{\mathrm{ex}}(x) = \frac{x^4}{12} - \frac{x^3}{6} +\frac{x}{12}$$The right hand side $\mathbf{f}$ then is easily assembled as: ###Code f = x*(1.-x) ###Output _____no_output_____ ###Markdown We still need to impose the boundary conditions at $x=0$ and $x=1$, which read$$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$and$$u_i = 0 \qquad\qquad\qquad\qquad i=n,$$These conditions are associated with the first (last, respectively) row of the linear system.Then we can solve the linear system and compare the FD approximation of $u$ to the exact solution $u_{\mathrm{ex}}$. ###Code # Change first row of the matrix A A[0,:] = 0 A[:,0] = 0 A[0,0] = 1 f[0] = 0 # Change last row of the matrix A A[-1,:] = 0 A[:,-1] = 0 A[-1,-1] = 1 f[-1] = 0 # Solve the linear system using numpy A1 = A.copy() u = linalg.solve(A1, f) u_ex = (x**4)/12. - (x**3)/6. + x/12. # Plot the FD and exact solution _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown LU decompositionWe want to implement our linear solver using an **LU decomposition** (without pivoting)$$A = LU$$LU decomposition can be computed as in the following function. ###Code def LU(A): A = A.copy() N=len(A) for k in range(N-1): if (abs(A[k,k]) < 1e-15): raise RuntimeError("Null pivot") A[k+1:N,k] /= A[k,k] for j in range(k+1,N): A[k+1:N,j] -= A[k+1:N,k]*A[k,j] L=tril(A) for i in range(N): L[i,i]=1.0 U = triu(A) return L, U L, U = LU(A) ###Output _____no_output_____ ###Markdown Once $L$ and $U$ have been computed, the system$$A\mathbf{u}=\mathbf{f}$$can be solved in **two steps**: first solve$$L\mathbf{w}=\mathbf{f},$$where $L$ is a **lower triangular matrix**, and then solve$$U\mathbf{u}=\mathbf{w}$$where $U$ is an **upper triangular matrix**.These two systems can be easily solved by forward (backward, respectively) substitution. 
###Code def L_solve(L,rhs): x = zeros_like(rhs) N = len(L) x[0] = rhs[0]/L[0,0] for i in range(1,N): x[i] = (rhs[i] - dot(L[i, 0:i], x[0:i]))/L[i,i] return x def U_solve(U,rhs): x = zeros_like(rhs) N=len(U) x[-1] = rhs[-1]/U[-1,-1] for i in reversed(range(N-1)): x[i] = (rhs[i] -dot(U[i, i+1:N], x[i+1:N]))/U[i,i] return x ###Output _____no_output_____ ###Markdown Now let's solve the system $$A\mathbf{u}=\mathbf{f}$$and compare the solution with respect to the exact solution. ###Code w = L_solve(L,f) u = U_solve(U,w) _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown try to compute the solution $u(x)$ with different forcing terms and compare with the exact solution **without recomputing the LU decomposition** ###Code # YOUR CODE HERE ###Output _____no_output_____ ###Markdown Cholesky decompositionFor symmetric and positive define matrices, the Cholesky decomposition may be preferred since it reduces the number of flops for computing the LU decomposition by a factor of 2.The Cholesky decomposotion seeks an upper triangular matrix $H$ (with all positive elements on the diagonal) such that$$A = H^T H$$An implementation of the Cholesky decomposition is provided in the following function. We can use it to solve the linear system by forward and backward substitution. 
###Code def cholesky(A): A = A.copy() N = len(A) for k in range(N-1): A[k,k] = sqrt(A[k,k]) A[k+1:N,k] = A[k+1:N,k]/A[k,k] for j in range(k+1,N): A[j:N,j] = A[j:N,j] - A[j:N,k]*A[j,k] A[-1,-1] = sqrt(A[-1,-1]) L=tril(A) return L, L.transpose() HT, H = cholesky(A) y = L_solve(HT,f) u = U_solve(H,y) _ = plot(x,u,'ro') _ = plot(x,u_ex) ###Output _____no_output_____ ###Markdown Direct methods for solving linear systems Recall the prototypal PDE problem introduce in the Lecture 08:$$-u_{xx}(x) = f(x)\quad\mathrm{ in }\ \Omega = (0, 1)$$$$u(x) = 0, \quad\mathrm{ on }\ \partial\Omega = \{0, 1\}$$The physical interpretation of this problem is related to the modelling of an elastic string, which occupies at rest the space $[0,1]$ and is fixed at the two extremes. The unknown $u(x)$ represents the displacement of the string at the point $x$, and the right-hand side models a prescribed force $f(x)$ on the string.For the numerical discretization of the problem, we consider a **Finite Difference (FD) Approximation**. Let $n$ be an integer, a consider a uniform subdivision of the interval $(0,1)$ using $n$ equispaced points, denoted by $\{x_i\}_{i=0}^n$ . Moreover, let $u_i$ be the FD approximation of $u(x_i)$, and similarly $f_i \approx f(x_i)$.In order to formulate the discrete problem, we consider a FD approximation of the left-hand side, as follows:$$-u_{xx}(x_i) \approx \frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2}$$being $h = \frac{1}{n-1}$ the size of each subinterval $(x_i, x_{i+1})$.The problem that we need to solve is$$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$$$\frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2} = f_i \qquad\qquad\qquad i=1, \ldots, n-1,\qquad\qquad\qquad(P)$$$$u_i = 0 \qquad\qquad\qquad\qquad i=n.$$Then, let us collect al the unknowns $\{u_i\}_{i=0}^n$ in a vector $\mathbf{u}$. 
Then, (P) is a linear system $$A \mathbf{u} = \mathbf{f}.$$ In this exercise we will show how to use direct methods to solve linear systems, and in particular we will discuss the **LU** and **Cholesky** decompositions that you have studied in Lecture 07.

First of all, let us define $n$ and $\{x_i\}_{i=0}^n$.
###Code
%matplotlib inline
from numpy import *
from matplotlib.pyplot import *

n = 33
h = 1./(n-1)
x = linspace(0,1,n)
###Output
_____no_output_____
###Markdown
Let us define the left-hand side matrix $A$.
###Code
a = -ones((n-1,))  # Offdiagonal entries
b = 2*ones((n,))   # Diagonal entries
A = (diag(a, -1) + diag(b, 0) + diag(a, +1))
A /= h**2
print(A)
print(linalg.cond(A))
###Output
[[ 2048. -1024. 0. ..., 0. 0. 0.]
 [-1024. 2048. -1024. ..., 0. 0. 0.]
 [ 0. -1024. 2048. ..., 0. 0. 0.]
 ...,
 [ 0. 0. 0. ..., 2048. -1024. 0.]
 [ 0. 0. 0. ..., -1024. 2048. -1024.]
 [ 0. 0. 0. ..., 0. -1024. 2048.]]
467.842628839
###Markdown
Moreover, let us choose $$f(x) = x (1-x)$$ so that the solution $u(x)$ can be computed analytically as $$u(x) = u_{\mathrm{ex}}(x) = \frac{x^4}{12} - \frac{x^3}{6} +\frac{x}{12}$$ The right-hand side $\mathbf{f}$ is then easily assembled as:
###Code
f = x*(1.-x)
###Output
_____no_output_____
###Markdown
We still need to impose the boundary conditions at $x=0$ and $x=1$, which read $$u_i = 0 \qquad\qquad\qquad\qquad i=0,$$ and $$u_i = 0 \qquad\qquad\qquad\qquad i=n.$$ These conditions are associated with the first (respectively, last) row of the linear system. Then we can solve the linear system and compare the FD approximation of $u$ to the exact solution $u_{\mathrm{ex}}$.
###Code
# Change first row of the matrix A
A[0,:] = 0
A[:,0] = 0
A[0,0] = 1
f[0] = 0

# Change last row of the matrix A
A[-1,:] = 0
A[:,-1] = 0
A[-1,-1] = 1
f[-1] = 0

# Solve the linear system using numpy
A1 = A.copy()
u = linalg.solve(A1, f)
u_ex = (x**4)/12. - (x**3)/6. + x/12.
# Plot the FD and exact solution
_ = plot(x,u,'ro')
_ = plot(x,u_ex)
###Output
_____no_output_____
###Markdown
LU decomposition

We want to implement our linear solver using an **LU decomposition** (without pivoting) $$A = LU$$ The LU decomposition can be computed as in the following function.
###Code
def LU(A):
    A = A.copy()
    N = len(A)
    for k in range(N-1):
        if (abs(A[k,k]) < 1e-15):
            raise RuntimeError("Null pivot")
        A[k+1:N,k] /= A[k,k]
        for j in range(k+1,N):
            A[k+1:N,j] -= A[k+1:N,k]*A[k,j]
    L = tril(A)
    for i in range(N):
        L[i,i] = 1.0
    U = triu(A)
    return L, U

L, U = LU(A)
###Output
_____no_output_____
###Markdown
Once $L$ and $U$ have been computed, the system $$A\mathbf{u}=\mathbf{f}$$ can be solved in **two steps**: first solve $$L\mathbf{w}=\mathbf{f},$$ where $L$ is a **lower triangular matrix**, and then solve $$U\mathbf{u}=\mathbf{w}$$ where $U$ is an **upper triangular matrix**. These two systems can be easily solved by forward (respectively, backward) substitution.
###Code
def L_solve(L, rhs):
    x = zeros_like(rhs)
    N = len(L)
    x[0] = rhs[0]/L[0,0]
    for i in range(1, N):
        x[i] = (rhs[i] - dot(L[i, 0:i], x[0:i]))/L[i,i]
    return x

def U_solve(U, rhs):
    x = zeros_like(rhs)
    N = len(U)
    x[-1] = rhs[-1]/U[-1,-1]
    for i in reversed(range(N-1)):
        # The slice must run to N (not N-1), otherwise the last column
        # of U is skipped during the back substitution
        x[i] = (rhs[i] - dot(U[i, i+1:N], x[i+1:N]))/U[i,i]
    return x
###Output
_____no_output_____
###Markdown
Now let's solve the system $$A\mathbf{u}=\mathbf{f}$$ and compare the solution with the exact solution.
###Code
w = L_solve(L, f)
u = U_solve(U, w)
_ = plot(x, u, 'ro')
_ = plot(x, u_ex)
###Output
_____no_output_____
###Markdown
Try to compute the solution $u(x)$ with different forcing terms and compare with the exact solution **without recomputing the LU decomposition**.
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Cholesky decomposition

For symmetric and positive definite matrices, the Cholesky decomposition may be preferred, since it reduces the number of flops needed to compute the factorization by a factor of 2. The Cholesky decomposition seeks an upper triangular matrix $H$ (with all positive elements on the diagonal) such that $$A = H^T H$$ An implementation of the Cholesky decomposition is provided in the following function. We can use it to solve the linear system by forward and backward substitution.
###Code
def cholesky(A):
    A = A.copy()
    N = len(A)
    for k in range(N-1):
        A[k,k] = sqrt(A[k,k])
        A[k+1:N,k] = A[k+1:N,k]/A[k,k]
        for j in range(k+1,N):
            A[j:N,j] = A[j:N,j] - A[j:N,k]*A[j,k]
    A[-1,-1] = sqrt(A[-1,-1])
    L = tril(A)
    return L, L.transpose()

HT, H = cholesky(A)
y = L_solve(HT, f)
u = U_solve(H, y)
_ = plot(x, u, 'ro')
_ = plot(x, u_ex)
###Output
_____no_output_____
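The "different forcing terms" exercise above hinges on factoring once and then reusing $L$ and $U$ for each new right-hand side. A self-contained sketch of that idea follows; the helper names `lu_nopivot` and `solve_lu` are illustrative, not from the notebook.

```python
import numpy as np

def lu_nopivot(A):
    # LU factorization without pivoting: returns unit-lower L and upper U
    A = A.astype(float).copy()
    N = len(A)
    for k in range(N - 1):
        A[k+1:N, k] /= A[k, k]
        for j in range(k + 1, N):
            A[k+1:N, j] -= A[k+1:N, k] * A[k, j]
    return np.tril(A, -1) + np.eye(N), np.triu(A)

def solve_lu(L, U, f):
    # Forward substitution (L w = f) followed by backward substitution (U u = w)
    N = len(f)
    w = np.zeros(N)
    for i in range(N):
        w[i] = (f[i] - L[i, :i] @ w[:i]) / L[i, i]
    u = np.zeros(N)
    for i in reversed(range(N)):
        u[i] = (w[i] - U[i, i+1:] @ u[i+1:]) / U[i, i]
    return u

# Same FD matrix as in the notebook, including the boundary-condition rows
n = 33
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
A = (np.diag(-np.ones(n-1), -1) + np.diag(2*np.ones(n)) + np.diag(-np.ones(n-1), 1)) / h**2
A[0, :] = 0; A[:, 0] = 0; A[0, 0] = 1
A[-1, :] = 0; A[:, -1] = 0; A[-1, -1] = 1

L, U = lu_nopivot(A)                                # factor once...
for forcing in (x * (1 - x), np.sin(np.pi * x)):    # ...reuse for each forcing term
    f = forcing.copy()
    f[0] = f[-1] = 0
    u = solve_lu(L, U, f)
    print(np.linalg.norm(A @ u - f))                # residual of each solve
```

The factorization dominates the cost ($O(n^3)$), while each extra solve is only $O(n^2)$, which is the whole point of keeping $L$ and $U$ around.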
Classification/DenseNet/Code/DenseNet.ipynb
###Markdown
Use the "batch normalization, activation, convolution" structure from the improved version of ResNet.
###Code
import time
import torch
from torch import nn, optim
import torch.nn.functional as F

import d2lzh_pytorch as d2l
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def conv_block(in_channels, out_channels):
    blk = nn.Sequential(
        nn.BatchNorm2d(in_channels),
        nn.ReLU(),
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
    )
    return blk

class DenseBlock(nn.Module):
    def __init__(self, num_convs, in_channels, out_channels):
        super(DenseBlock, self).__init__()
        net = []
        for i in range(num_convs):
            in_c = in_channels + i * out_channels
            net.append(conv_block(in_c, out_channels))
        self.net = nn.ModuleList(net)
        self.out_channels = in_channels + num_convs * out_channels

    def forward(self, X):
        for blk in self.net:
            Y = blk(X)
            X = torch.cat((X, Y), dim=1)  # concatenate input and output along the channel dimension
        return X

blk = DenseBlock(2, 3, 10)
X = torch.rand(4, 3, 8, 8)
Y = blk(X)
Y.shape

def transition_block(in_channels, out_channels):
    blk = nn.Sequential(
        nn.BatchNorm2d(in_channels),
        nn.ReLU(),
        nn.Conv2d(in_channels, out_channels, kernel_size=1),
        nn.AvgPool2d(kernel_size=2, stride=2)
    )
    return blk

net = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
)

num_channels, growth_rate = 64, 32  # num_channels is the current number of channels
num_convs_in_dense_blocks = [4, 4, 4, 4]

for i, num_convs in enumerate(num_convs_in_dense_blocks):
    DB = DenseBlock(num_convs, num_channels, growth_rate)
    net.add_module("DenseBlock_%d" % i, DB)
    num_channels = DB.out_channels
    if i != len(num_convs_in_dense_blocks) - 1:
        net.add_module("transition_block_%d" % i, transition_block(num_channels, num_channels // 2))
        num_channels = num_channels // 2

net.add_module("BN", nn.BatchNorm2d(num_channels))
net.add_module("relu", nn.ReLU())
net.add_module("global_avg_pool", d2l.GlobalAvgPool2d())
net.add_module("fc", nn.Sequential(d2l.FlattenLayer(), nn.Linear(num_channels, 10)))

X = torch.rand((1, 1, 96, 96))
for name, layer in net.named_children():
    X = layer(X)
    print(name, 'output shape:\t', X.shape)

batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=96)
lr, num_epochs = 0.001, 5
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
d2l.train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)
###Output
training on cuda
epoch 1, loss 0.4603, train acc 0.837, test acc 0.848, time 76.2 sec
epoch 2, loss 0.2746, train acc 0.898, test acc 0.891, time 75.8 sec
epoch 3, loss 0.2320, train acc 0.915, test acc 0.858, time 76.5 sec
epoch 4, loss 0.2104, train acc 0.922, test acc 0.894, time 75.7 sec
epoch 5, loss 0.1905, train acc 0.930, test acc 0.904, time 75.4 sec
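The channel bookkeeping in the loop above can be checked with plain arithmetic: each dense block grows the channel count by `num_convs * growth_rate` (concatenation), and each transition block halves it. The helper below is a hypothetical standalone sketch of that accounting (no torch needed), mirroring the notebook's configuration.

```python
def densenet_channels(init_channels=64, growth_rate=32, blocks=(4, 4, 4, 4)):
    # Track the channel count after each dense block (+ optional transition)
    c = init_channels
    trace = []
    for i, num_convs in enumerate(blocks):
        c += num_convs * growth_rate   # DenseBlock concatenates num_convs outputs
        if i != len(blocks) - 1:
            c //= 2                    # transition_block halves the channels
        trace.append(c)
    return trace

print(densenet_channels())  # channel count after each of the four stages
```

The last value is the `num_channels` that ends up feeding the final `nn.Linear` layer above.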
Copy_of_quora.ipynb
###Markdown
Quora Data Framework New
###Code
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from wordcloud import WordCloud as wc
from nltk.corpus import stopwords
import matplotlib.pylab as pylab
import matplotlib.pyplot as plt
from pandas import get_dummies
import matplotlib as mpl
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib
import warnings
from sklearn.ensemble import RandomForestClassifier
import sklearn
import string
import scipy
import numpy
import nltk
import json
import sys
import csv
import os

nltk.download('averaged_perceptron_tagger')
nltk.download("stopwords")
###Output
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /home/rahul/nltk_data...
[nltk_data] Package averaged_perceptron_tagger is already up-to-
[nltk_data] date!
[nltk_data] Downloading package stopwords to /home/rahul/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
###Markdown
Versions of the different libraries
###Code
print('matplotlib: {}'.format(matplotlib.__version__))
print('sklearn: {}'.format(sklearn.__version__))
print('scipy: {}'.format(scipy.__version__))
print('seaborn: {}'.format(sns.__version__))
print('pandas: {}'.format(pd.__version__))
print('numpy: {}'.format(np.__version__))
print('Python: {}'.format(sys.version))
###Output
matplotlib: 2.2.2
sklearn: 0.20.2
scipy: 1.1.0
seaborn: 0.9.0
pandas: 0.23.4
numpy: 1.15.1
Python: 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
###Markdown
Getting all the data from nltk stopwords
###Code
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.corpus import stopwords

data = "All work and no play makes jack dull boy. All work and no play makes jack a dull boy."
###Output
_____no_output_____
###Markdown
Print the tokenized data
###Code
print(word_tokenize(data))
print(sent_tokenize(data))

# stopWords=set(stopwords.words('english'))
# words=word_tokenize(data)
# wordsFiltered=[]
# for w in words:
#     if w in stopWords:
#         wordsFiltered.append(w)
# print(wordsFiltered)

from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize

words = ["game", "gaming", "gamed", "games"]
ps = PorterStemmer()
for word in words:
    print(ps.stem(word))

from nltk.tokenize import PunktSentenceTokenizer

sentences = nltk.sent_tokenize(data)
for sent in sentences:  # renamed from `set`, which shadows the builtin
    print(nltk.pos_tag(nltk.word_tokenize(sent)))
###Output
['All', 'work', 'and', 'no', 'play', 'makes', 'jack', 'dull', 'boy', '.', 'All', 'work', 'and', 'no', 'play', 'makes', 'jack', 'a', 'dull', 'boy', '.']
['All work and no play makes jack dull boy.', 'All work and no play makes jack a dull boy.']
game
game
game
game
[('All', 'DT'), ('work', 'NN'), ('and', 'CC'), ('no', 'DT'), ('play', 'NN'), ('makes', 'VBZ'), ('jack', 'NN'), ('dull', 'JJ'), ('boy', 'NN'), ('.', '.')]
[('All', 'DT'), ('work', 'NN'), ('and', 'CC'), ('no', 'DT'), ('play', 'NN'), ('makes', 'VBZ'), ('jack', 'RP'), ('a', 'DT'), ('dull', 'JJ'), ('boy', 'NN'), ('.', '.')]
###Markdown
Set the seaborn plotting style
###Code
sns.set(style='white', context='notebook', palette="deep")
###Output
_____no_output_____
###Markdown
EDA

Below are different exploration techniques that can be used to explore the dataset.
###Code
train = pd.read_csv('/home/rahul/Desktop/Link to rahul_environment/Projects/Machine_Learning Projects/Quora_DataFramework/train.csv')
test = pd.read_csv('/home/rahul/Desktop/Link to rahul_environment/Projects/Machine_Learning Projects/Quora_DataFramework/test.csv')
print('shape of the train', train.shape)
print('shape of the test', test.shape)
train.size   # size of the training set
type(train)  # object type
train.describe()
# describe gives summary statistics of the data
train.sample(5)
###Output
shape of the train (1306122, 3)
shape of the test (375806, 2)
###Markdown
Data Cleaning

Check whether any null elements are present (sum of the null values per column).
###Code
train.isnull().sum()

# If there were null values, we would drop the corresponding rows here
print('Before dropping the items', train.shape)
train = train.dropna()
print('After dropping', train.shape)
###Output
Before dropping the items (1306122, 3)
After dropping (1306122, 3)
###Markdown
Find the unique values of the target with the commands below:
###Code
train_target = train['target'].values
np.unique(train_target)
train.head(5)
train.tail(5)
train.describe()
###Output
_____no_output_____
###Markdown
** Data preprocessing refers to the transformations applied to our data before feeding it to the algorithm. It is a technique used to convert raw data into a clean data set; whenever data is gathered from different sources, it is collected in a raw format that is not feasible for analysis. There are plenty of data preprocessing steps, and we list some of them in general (not just for Quora): removing the ID column, sampling (without replacement), balancing an unbalanced dataset (with undersampling and SMOTE), introducing missing values and treating them (replacing by average values), noise filtering, data discretization, normalization and standardization, PCA analysis, feature selection (filter, embedded, wrapper), etc. Now we will perform some queries on the dataset**
###Code
train.where(train['target']==1).count()
train[train['target']>1]
train.where(train['target']==1).head(5)
###Output
_____no_output_____
###Markdown
** Imbalanced datasets are relevant primarily in the context of supervised machine learning involving two or more classes.
Imbalance means that the number of data points available for the different classes differs: if there are two classes, balanced data would mean 50% of the points for each class. For most machine learning techniques, a little imbalance is not a problem. So, if there are 60% points for one class and 40% for the other class, it should not cause any significant performance degradation. Only when the class imbalance is high, e.g. 90% points for one class and 10% for the other, standard optimization criteria or performance measures may not be as effective and would need modification. Now we will explore the questions.**
###Code
question = train['question_text']
i = 0
for q in question[:5]:
    i = i + 1
    print("Question came from the Quora Data_set==" + q)

train["num_words"] = train["question_text"].apply(lambda x: len(str(x).split()))
###Output
Question came from the Quora Data_set==How did Quebec nationalists see their province as a nation in the 1960s?
Question came from the Quora Data_set==Do you have an adopted dog, how would you encourage people to adopt and not shop?
Question came from the Quora Data_set==Why does velocity affect time? Does velocity affect space geometry?
Question came from the Quora Data_set==How did Otto von Guericke used the Magdeburg hemispheres?
Question came from the Quora Data_set==Can I convert montra helicon D to a mountain bike by just changing the tyres?
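In the same `apply`/`lambda` spirit as the `num_words` feature above, further count-based features can be derived. The sketch below is standalone on a toy frame, since `train.csv` is not loaded here, and the tiny stop-word set is a stand-in for nltk's full English list.

```python
import pandas as pd

eng_stopwords = {"the", "a", "an", "is", "to", "of"}  # tiny stand-in for nltk's list

df = pd.DataFrame({"question_text": [
    "How did Quebec nationalists see their province as a nation?",
    "Why does velocity affect time?",
]})

# Count-based features derived from the raw question text
df["num_words"] = df["question_text"].apply(lambda x: len(str(x).split()))
df["num_chars"] = df["question_text"].apply(lambda x: len(str(x)))
df["num_stopwords"] = df["question_text"].apply(
    lambda x: sum(w.lower() in eng_stopwords for w in str(x).split()))

print(df[["num_words", "num_chars", "num_stopwords"]])
```

The same pattern extends to counts of punctuation, title-cased words, or unique words, each of which can then be compared across the two target classes.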
###Markdown
Some Feature Engineering
###Code
eng_stopwords = set(stopwords.words("english"))
print(len(eng_stopwords))
print(eng_stopwords)
###Output
_____no_output_____
###Code
print(train.columns)
train.head()

# Count plot
ax = sns.countplot(x='target', hue='target', data=train, linewidth=5, edgecolor=sns.color_palette("dark", 3))
plt.title('Is the data set imbalanced?')
plt.savefig('targetsetimbalance')  # save before plt.show(), which clears the figure
plt.show()

ax = train['target'].value_counts().plot.pie(explode=[0, 0.1], autopct='%1.1f%%', shadow=True)
ax.set_title('target')
ax.set_ylabel('')
plt.savefig('targetdiagramforpie')
plt.show()

# cf=RandomForestClassifier(n_estimators=)
###Output
Index(['qid', 'question_text', 'target', 'num_words'], dtype='object')
###Markdown
Histogram
###Code
f, ax = plt.subplots(1, 2, figsize=(20, 10))
train[train['target']==0].num_words.plot.hist(ax=ax[0], bins=20, edgecolor='black', color='red')
ax[0].set_title('target=0')
x1 = list(range(0, 85, 5))

f, ax = plt.subplots(1, 2, figsize=(18, 8))
train[['target', 'num_words']].groupby(['target']).mean().plot.bar(ax=ax[0])  # .plot.bar, not .plot().bar
ax[0].set_title('num vs target')
sns.countplot('num_words', hue='target', data=train, ax=ax[1])
ax[1].set_title('num_words: target=0 vs target=1')
plt.show()
###Markdown
Histograms of all numeric columns
###Code
train.hist(figsize=(15, 20))
plt.figure()
###Output
_____no_output_____
###Markdown
Making the violin plot
###Code
sns.violinplot(data=train, x='target', y='num_words')
plt.savefig('violinplot')
###Output
/usr/local/lib/python3.6/dist-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
Making the KDE plot
###Code
sns.FacetGrid(train, hue="target", size=5).map(sns.kdeplot, "num_words").add_legend()
plt.savefig('facetgrid-target')
plt.show()
###Output
/home/rahul/.local/lib/python3.6/site-packages/seaborn/axisgrid.py:230: UserWarning: The `size` paramter has been renamed to `height`; please update your code.
warnings.warn(msg, UserWarning)
###Markdown
Box Plot
###Code
train['num_words'].loc[train['num_words']>60] = 60  # clip the long tail at 60 words
axes = sns.boxplot(x='target', y='num_words', data=train)
axes.set_xlabel('Target', fontsize=12)
axes.set_title("No of words in each class", fontsize=15)
plt.savefig('target-numwords')
plt.show()

# Generate a word cloud from all the question texts
# eng_stopwords=set(stopwords.words("english"))
# def generate_wordcloud(text):
#     wordcloud = wc(relative_scaling=1.0, stopwords=eng_stopwords).generate(text)
#     fig, ax = plt.subplots(1, 1, figsize=(10, 10))
#     ax.imshow(wordcloud, interpolation='bilinear')
#     ax.axis("off")
#     ax.margins(x=0, y=0)
#     plt.show()
# text = ' '.join(train.question_text)
# generate_wordcloud(text)
###Output
_____no_output_____
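The imports at the top of the notebook (`train_test_split`, `RandomForestClassifier`) are never actually used above. A minimal baseline in that direction might look like the sketch below; the corpus and labels are hypothetical toy data, not the real Quora set, and TF-IDF + random forest is just one possible (untuned) choice.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical toy corpus standing in for the Quora questions
texts = ["free money now", "win a prize today", "how does gravity work",
         "what is the capital of France"] * 25
labels = [1, 1, 0, 0] * 25  # 1 = insincere (toy labels)

# Vectorize the text, split with stratification, fit, and score
X = TfidfVectorizer().fit_transform(texts)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.25, random_state=0, stratify=labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(acc)
```

On the real, highly imbalanced data, accuracy alone would be misleading; `classification_report` (already imported above) gives per-class precision and recall instead.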
dataset_preparation.ipynb
###Markdown **Dataset Download** ###Code !pip install kaggle from google.colab import files files.upload() !mkdir ~/.kaggle !cp kaggle.json ~/.kaggle/ !chmod 600 ~/.kaggle/kaggle.json !kaggle competitions download -c tensorflow-speech-recognition-challenge !7z x train.7z labels = [ 'zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine' ] train_audio_path = './train/audio/' all_wave = [] all_label = [] for label in tqdm(labels): waves = [f for f in os.listdir(train_audio_path + label) if f.endswith('.wav')] i = 0 for wav in waves: i += 1 samples, sample_rate = librosa.load(train_audio_path + '/' + label + '/' + wav, sr = 16000) samples = librosa.resample(samples, sample_rate, 8192) if(len(samples)== 8192) : all_wave.append(samples) all_label.append(label) if i== 10: break pro_wave = np.array(all_wave).reshape(-1,8192,1) pro_label = np.array(all_label) np.save('waves.npy', pro_wave) np.save('labels.npy', pro_label) np.random.shuffle(pro_wave) !wget http://images.cocodataset.org/zips/val2017.zip !unzip val2017.zip from PIL import Image image = Image.open('/content/val2017/000000000139.jpg') print(image.format) print(image.size) print(image.mode) plt.imshow(image) train_img_path = './val2017/' imgs = [f for f in os.listdir(train_img_path) if f.endswith('.jpg')] # all_img = np.empty((len(imgs), 128, 128, 3)) all_img = [] for i in tqdm(range(len(imgs))): temp = Image.open(train_img_path + imgs[i]) if(temp.mode == 'RGB'): temp = temp.resize((128, 128), Image.ANTIALIAS) temp = np.array(temp) temp = temp.astype(np.float32) / 255.0 all_img.append(temp) import random random.seed(1) all_img = all_img + random.sample(all_img, 10) random.shuffle(all_img) images = np.array(all_img) np.save('img_dataset.npy', images) ###Output _____no_output_____ ###Markdown *Test Dataset* ###Code class DataGenerator(object): def __init__(self, batch_size=32, seed=999): np.random.seed(seed) self.batch_size = batch_size self.audio_data = np.load('waves.npy') 
        np.random.shuffle(self.audio_data)
        self.img_data = np.load('img_dataset.npy')
        # Note: the images were already scaled to [0, 1] before being saved,
        # so they must not be divided by 255 a second time here
        np.random.shuffle(self.img_data)

    def sample_batch(self):
        img_batch = np.random.choice(self.img_data.shape[0], self.batch_size, replace=False)
        img_batch = self.img_data[img_batch]
        audio_batch = np.random.choice(self.audio_data.shape[0], self.batch_size, replace=False)
        audio_batch = self.audio_data[audio_batch]
        return img_batch, audio_batch

generator = DataGenerator()
###Output
_____no_output_____
###Markdown
1. Dataset Preparation

Overview

In this phase, a startups dataset will be created and prepared for further feature analysis. Different features will be created here by combining information from the CSV files we have available: *acquisitions.csv*, *investments.csv*, *rounds.csv* and *companies.csv*.

Load all available data from CSV general files
###Code
# All imports here
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import preprocessing
from datetime import datetime
from dateutil import relativedelta
%matplotlib inline

# Let's start by importing our csv files into dataframes
df_companies = pd.read_csv('data/companies.csv')
df_acquisitions = pd.read_csv('data/acquisitions.csv')
df_investments = pd.read_csv('data/investments.csv')
df_rounds = pd.read_csv('data/rounds.csv')
###Output
_____no_output_____
###Markdown
Start the main dataset from USA companies in companies.csv

We'll use only USA-based companies in the analysis, since companies from other countries have a large amount of missing data.
###Code
# Our final dataset will be stored in 'startups_USA'
startups_USA = df_companies[df_companies['country_code'] == 'USA']
startups_USA.head()
###Output
_____no_output_____
###Markdown
Extract company category features

Now that we have a first version of our dataset, we'll expand the category_list attribute into dummy variables for categories.
###Code from operator import methodcaller

def split_categories(categories):
    #get a unique list of the categories
    splitted_categories = list(categories.astype('str').unique())
    #split each category by |
    splitted_categories = map(methodcaller("split", "|"), splitted_categories)
    #flatten the list of sub categories
    splitted_categories = [item for sublist in splitted_categories for item in sublist]
    return splitted_categories

def explore_categories(categories, top_n_categories):
    cat = split_categories(categories)
    print('There are in total {} different categories'.format(len(cat)))
    prob = pd.Series(cat).value_counts()
    print(prob.head())

    #select first <top_n_categories>
    mask = prob > prob.iloc[top_n_categories]
    head_prob = prob.loc[mask].sum()
    tail_prob = prob.loc[~mask].sum()
    total_sum = prob.sum()
    prob = prob.loc[mask]
    prob2 = pd.DataFrame({'top '+str(top_n_categories)+' categories': head_prob, 'others': tail_prob}, index=[0])

    fig, axs = plt.subplots(2, 1, figsize=(15, 6))
    prob.plot(kind='bar', ax=axs[0])
    prob2.plot(kind='bar', ax=axs[1])
    for bar in axs[1].patches:
        height = bar.get_height()
        axs[1].text(bar.get_x() + bar.get_width()/2., 0.50*height,
                    '%.2f' % (float(height)/float(total_sum)*100) + "%", ha='center', va='top')
    fig.tight_layout()
    plt.xticks(rotation=90)
    plt.show()

explore_categories(startups_USA['category_list'], top_n_categories=50) ###Output There are in total 60813 different categories Software 2326 Mobile 2119 Social Media 1246 Enterprise Software 1141 E-Commerce 1137 dtype: int64 ###Markdown Since there are too many categories, we'll select only the top 50 most frequent ones. We see from the chart above that with these 50 (out of 60813) categories we cover 46% of the companies.
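The 46% coverage figure comes from comparing the frequency mass of the top categories against the total. The computation can be sketched with toy counts (toy numbers, not the real data):

```python
import pandas as pd

# Toy category frequencies standing in for the real value_counts() result
counts = pd.Series({"Software": 5, "Mobile": 3, "E-Commerce": 1, "Biotech": 1})

top_n = 2
top = counts.sort_values(ascending=False).head(top_n)
coverage = top.sum() / counts.sum()  # fraction of category labels covered by the top N
print(list(top.index), coverage)  # ['Software', 'Mobile'] 0.8
```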
###Code def expand_top_categories_into_dummy_variables(df):
    cat = df['category_list'].astype('str')
    cat_count = cat.str.split('|').apply(lambda x: pd.Series(x).value_counts()).sum()

    #Get a dummy dataset for categories
    dummies = cat.str.get_dummies(sep='|')

    #Select the 50 most frequent categories
    top50categories = list(cat_count.sort_values(ascending=False).index[:50])

    #Create a dataframe with the 50 top categories to be concatenated later to the complete dataframe
    categories_df = dummies[top50categories]
    categories_df = categories_df.add_prefix('Category_')

    return pd.concat([df, categories_df], axis=1, ignore_index=False)

startups_USA = expand_top_categories_into_dummy_variables(startups_USA)
startups_USA.head() ###Output _____no_output_____ ###Markdown So we have now added 50 more category columns to our dataset. Analyzing total funding and funding round features ###Code startups_USA['funding_rounds'].hist(bins=range(1,10))
plt.title("Histogram of the number of funding rounds")
plt.ylabel('Number of companies')
plt.xlabel('Number of funding rounds')
#funding_total_usd
#funding_rounds
plt.subplot()
startups_USA[startups_USA['funding_total_usd'] != '-']. \
    set_index('name')['funding_total_usd'] \
    .astype(float) \
    .sort_values(ascending=False)\
    [:30].plot(kind='barh', figsize=(5,7))
plt.gca().invert_yaxis()
plt.title('Companies with highest total funding')
plt.ylabel('Companies')
plt.xlabel('Total amount of funding (USD)') ###Output _____no_output_____ ###Markdown Analyzing date variables Extract investment rounds features Here, we'll extract from the rounds.csv file the number of rounds and the total amount invested for each different type of investment.
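The per-type aggregation described here boils down to a two-key groupby over rounds. A minimal sketch with toy data (the real notebook works on rounds.csv); named aggregation is used, which also avoids the dict-of-lists `.agg()` syntax that newer pandas versions no longer accept:

```python
import pandas as pd

# Toy rounds table standing in for rounds.csv
rounds = pd.DataFrame({
    "company_permalink": ["/a", "/a", "/b", "/a"],
    "funding_round_type": ["seed", "seed", "seed", "venture"],
    "raised_amount_usd": [1e5, 2e5, 5e4, 1e6],
})

# One column per statistic, for each (company, funding type) pair
agg = rounds.groupby(["company_permalink", "funding_round_type"]).agg(
    total_usd=("raised_amount_usd", "sum"),
    n_rounds=("raised_amount_usd", "count"),
)
print(agg.loc[("/a", "seed"), "n_rounds"])   # 2
print(agg.loc[("/a", "seed"), "total_usd"])  # 300000.0
```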
###Code # Investment types
df_rounds['funding_round_type'].value_counts()

import warnings
warnings.filterwarnings('ignore')

#Iterate over each kind of funding type, and add two new features for each into the dataframe
def add_dummy_for_funding_type(df, aggr_rounds, funding_type):
    funding_df = aggr_rounds.loc[aggr_rounds.index.get_level_values('funding_round_type') == funding_type].reset_index()
    funding_df.columns = ['company_permalink', funding_type, funding_type+'_funding_total_usd', funding_type+'_funding_rounds']
    funding_df = funding_df.drop(columns=[funding_type])

    new_df = pd.merge(df, funding_df, on='company_permalink', how='left')
    new_df = new_df.fillna(0)
    return new_df

def expand_investment_rounds(df, df_rounds):
    #Prepare an aggregated rounds dataframe grouped by company and funding type
    #(the old dict-of-lists .agg() syntax was removed in pandas 1.0)
    rounds_agg = df_rounds.groupby(['company_permalink', 'funding_round_type'])['raised_amount_usd'].agg(['sum', 'count'])

    #Get available unique funding types
    funding_types = list(rounds_agg.index.levels[1])

    #Prepare the dataframe where all the dummy features for each funding type will be added (number of rounds and total sum for each type)
    rounds_df = df[['permalink']]
    rounds_df = rounds_df.rename(columns={'permalink': 'company_permalink'})

    #For each funding type, add two more columns to rounds_df
    for funding_type in funding_types:
        rounds_df = add_dummy_for_funding_type(rounds_df, rounds_agg, funding_type)

    #remove the company_permalink variable, since it's already available in the companies dataframe
    rounds_df = rounds_df.drop(columns=['company_permalink'])
    #set rounds_df to have the same index as the other dataframes
    rounds_df.index = df.index

    return pd.concat([df, rounds_df], axis=1, ignore_index=False)

startups_USA = expand_investment_rounds(startups_USA, df_rounds)
startups_USA.head() ###Output _____no_output_____ ###Markdown Change dataset index We'll set the company id (**permalink** attribute) as the
index for the dataset. This simple change will make it easier to attach new features to the dataset. ###Code startups_USA = startups_USA.set_index('permalink') ###Output _____no_output_____ ###Markdown Extract acquisitions features Here, we'll extract the number of acquisitions made by each company in our dataset. ###Code import warnings
warnings.filterwarnings('ignore')

def extract_feature_number_of_acquisitions(df, df_acquisitions):
    #Count one row per acquisition made by each acquirer
    #(.size() replaces the removed dict-of-lists .agg() syntax)
    number_of_acquisitions = df_acquisitions.groupby('acquirer_permalink').size().rename('number_of_acquisitions')
    number_of_acquisitions.index.name = 'permalink'

    new_df = df.join(number_of_acquisitions)
    new_df['number_of_acquisitions'] = new_df['number_of_acquisitions'].fillna(0)
    return new_df

startups_USA = extract_feature_number_of_acquisitions(startups_USA, df_acquisitions) ###Output _____no_output_____ ###Markdown Extract investments feature Here, we'll extract the number of investments made by each company in our dataset. Note: this is not the number of times someone invested in the startup; it is the number of times each startup has made an investment in another company.
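The two counts described here (total investments and distinct investees per investor) can be sketched on toy data with `count`/`nunique` (toy table, not the real investments.csv):

```python
import pandas as pd

# Toy investments table: investor /x invested 3 times in 2 distinct companies
inv = pd.DataFrame({
    "investor_permalink": ["/x", "/x", "/x", "/y"],
    "company_permalink":  ["/a", "/a", "/b", "/a"],
})

per_investor = inv.groupby("investor_permalink")["company_permalink"].agg(
    number_of_investments="count",
    number_of_unique_investments="nunique",
)
print(per_investor.loc["/x"].tolist())  # [3, 2]
```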
###Code import warnings
warnings.filterwarnings('ignore')

def extract_feature_number_of_investments(df, df_investments):
    #Total number of investments each investor has made
    number_of_investments = df_investments.groupby('investor_permalink').size().rename('number_of_investments')

    #Number of distinct companies each investor has invested in
    number_of_unique_investments = df_investments.groupby('investor_permalink')['company_permalink'].nunique().rename('number_of_unique_investments')

    new_df = df.join(number_of_investments)
    new_df['number_of_investments'] = new_df['number_of_investments'].fillna(0)

    new_df = new_df.join(number_of_unique_investments)
    new_df['number_of_unique_investments'] = new_df['number_of_unique_investments'].fillna(0)
    return new_df

startups_USA = extract_feature_number_of_investments(startups_USA, df_investments) ###Output _____no_output_____ ###Markdown Extract average number of investors and amount invested per round Here we'll extract two more features: the average number of investors that participated in each round of investment, and the average amount invested across all the investment rounds a startup had. ###Code import warnings
warnings.filterwarnings('ignore')

def extract_feature_avg_investors_per_round(df, investments):
    #Investors per (company, round), then the mean across each company's rounds
    investors_per_round = investments.groupby(['company_permalink', 'funding_round_permalink'])['investor_permalink'].count()
    number_of_investors_per_round = investors_per_round.groupby('company_permalink').mean().rename('number_of_investors_per_round')

    new_df = df.join(number_of_investors_per_round)
    new_df['number_of_investors_per_round'] = new_df['number_of_investors_per_round'].fillna(-1)
    return new_df

def extract_feature_avg_amount_invested_per_round(df, investments):
    investmentsdf = investments.copy()
    investmentsdf['raised_amount_usd'] = investmentsdf['raised_amount_usd'].astype(float)

    #Amount raised per (company, round), then the mean across each company's rounds
    amount_per_round = investmentsdf.groupby(['company_permalink', 'funding_round_permalink'])['raised_amount_usd'].mean()
    avg_amount_invested_per_round = amount_per_round.groupby('company_permalink').mean().rename('avg_amount_invested_per_round')

    new_df = df.join(avg_amount_invested_per_round)
    new_df['avg_amount_invested_per_round'] = new_df['avg_amount_invested_per_round'].fillna(-1)
    return new_df

startups_USA = extract_feature_avg_investors_per_round(startups_USA, df_investments)
startups_USA = extract_feature_avg_amount_invested_per_round(startups_USA, df_investments)
startups_USA.head() ###Output _____no_output_____ ###Markdown Drop useless
features Here we'll drop name, homepage_url, category_list, region, city and country_code. We'll also move status to the end of the dataframe. ###Code #drop features
startups_USA = startups_USA.drop(columns=['name', 'homepage_url', 'category_list', 'region', 'city', 'country_code'])

#move status to the end of the dataframe
cols = list(startups_USA)
cols.append(cols.pop(cols.index('status')))
startups_USA = startups_USA[cols]  #.ix was removed from pandas; plain column selection reorders ###Output _____no_output_____ ###Markdown Normalize numeric variables Here we'll rescale all the numeric variables to the same range (0 to 1). ###Code def normalize_numeric_features(df, columns_to_scale=None):
    min_max_scaler = preprocessing.MinMaxScaler()
    startups_normalized = df.copy()

    #Convert '-' to zeros in funding_total_usd
    startups_normalized['funding_total_usd'] = startups_normalized['funding_total_usd'].replace('-', 0)

    #scale numeric features
    startups_normalized[columns_to_scale] = min_max_scaler.fit_transform(startups_normalized[columns_to_scale])
    return startups_normalized

columns_to_scale = list(startups_USA.filter(regex=(".*(funding_rounds|funding_total_usd)|(number_of|avg_).*")).columns)
startups_USA = normalize_numeric_features(startups_USA, columns_to_scale) ###Output _____no_output_____ ###Markdown Normalize date variables Here we'll convert dates to ages in months up to the first day of 2017. ###Code def date_to_age_in_months(date):
    if date != date or date == 0: #NaN != NaN, so this catches missing dates
        return 0
    date1 = datetime.strptime(date, '%Y-%m-%d')
    date2 = datetime.strptime('2017-01-01', '%Y-%m-%d') #get age until 01/01/2017
    delta = relativedelta.relativedelta(date2, date1)
    return delta.years * 12 + delta.months

def normalize_date_variables(df):
    date_vars = ['founded_at', 'first_funding_at', 'last_funding_at']
    for var in date_vars:
        df[var] = df[var].map(date_to_age_in_months)
    df = normalize_numeric_features(df, date_vars)
    return df

startups_USA = normalize_date_variables(startups_USA) ###Output _____no_output_____ ###Markdown Extract state_code features ###Code def
explore_states(states, top_n_states):
    print('There are in total {} different states'.format(len(states.unique())))
    prob = pd.Series(states).value_counts()
    print(prob.head())

    #select first <top_n_states>
    mask = prob > prob.iloc[top_n_states]
    head_prob = prob.loc[mask].sum()
    tail_prob = prob.loc[~mask].sum()
    total_sum = prob.sum()
    prob = prob.loc[mask]
    prob2 = pd.DataFrame({'top '+str(top_n_states)+' states': head_prob, 'others': tail_prob}, index=[0])

    fig, axs = plt.subplots(2, 1, figsize=(15, 6))
    prob.plot(kind='bar', ax=axs[0])
    prob2.plot(kind='bar', ax=axs[1])
    for bar in axs[1].patches:
        height = bar.get_height()
        axs[1].text(bar.get_x() + bar.get_width()/2., 0.50*height,
                    '%.2f' % (float(height)/float(total_sum)*100) + "%", ha='center', va='top')
    fig.tight_layout()
    plt.xticks(rotation=90)
    plt.show()

explore_states(startups_USA['state_code'], top_n_states=15) ###Output There are in total 54 different states CA 12900 NY 3952 MA 2542 TX 1995 FL 1295 Name: state_code, dtype: int64 ###Markdown As we did for the categories variable, in order to decrease the number of features in our dataset, let's select just the top 15 most frequent states (which already cover 82% of our companies). ###Code def expand_top_states_into_dummy_variables(df):
    states = df['state_code'].astype('str')

    #Get a dummy dataset for states
    dummies = pd.get_dummies(states)

    #select the most frequent states
    top15states = list(states.value_counts().sort_values(ascending=False).index[:15])

    #Create a dataframe with the 15 top states to be concatenated later to the complete dataframe
    states_df = dummies[top15states]
    states_df = states_df.add_prefix('State_')

    new_df = pd.concat([df, states_df], axis=1, ignore_index=False)
    new_df = new_df.drop(columns=['state_code'])
    return new_df

startups_USA = expand_top_states_into_dummy_variables(startups_USA) ###Output _____no_output_____ ###Markdown Move status to the end of dataframe and save to file ###Code cols = list(startups_USA)
cols.append(cols.pop(cols.index('status')))
startups_USA = startups_USA[cols]  #.ix was removed from pandas; plain column selection reorders
startups_USA.to_csv('data/startups_pre_processed.csv')
startups_USA.head() ###Output _____no_output_____ ###Markdown Prerequisites* Download dataset from https://shapenet.cs.stanford.edu/media/modelnet40_normal_resampled.zip and unzip.* Install [PCL](https://pointclouds.org/downloads/), [NumPy](https://numpy.org/install/), [Open3D](http://www.open3d.org/), and [pyquaternion](http://kieranwynn.github.io/pyquaternion/). 1. Prepare random uniform downsampled indices of points Change `root` in `gen_downsample_index.py` according to where the dataset is located. Run it for both splits (train and test). Note that this step is not a prerequisite for the following data preparation steps. ###Code %cd my_dataloader
!python gen_downsample_index.py
%cd .. ###Output _____no_output_____ ###Markdown 2. Optional: convert txt file into PCD file for LRF computation The txt file can also be parsed and used directly for the LRF computation. ###Code import open3d as o3d
import numpy as np
import glob

all_files = glob.glob('modelnet40_normal_resampled/*/*.txt')
for file in all_files:
    pc = []
    with open(file) as f:
        for line in f:
            l = line.split(',')
            pc.append([float(l[0]), float(l[1]), float(l[2])])
    pc = np.array(pc)

    # Create PointCloud object
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(pc)
    o3d.io.write_point_cloud(file[:-3] + "pcd", pcd) ###Output _____no_output_____ ###Markdown 3. Compute LRF The PCL implementation of FLARE can be used for the computation of LRFs ([example usage of FLARE in PCL](https://github.com/PointCloudLibrary/pcl/blob/master/test/features/test_flare_estimation.cpp)). See the [flare](flare) folder in order to use it; all point cloud file names should be saved in `all_pcd_files.txt` before using it. Save the computed LRFs in a file such that each row is `x_0 x_1 x_2 y_0 y_1 y_2 z_0 z_1 z_2` (split by whitespace), with the file name ending in `_rot.txt`. For instance, if the input file is lamp/lamp_0007.pcd, LRFs should be saved in lamp/lamp_0007_rot.txt. 4. 
Convert LRF orientation to quaternion and find the indices of invalid points ###Code from pyquaternion import Quaternion

all_lrf_files = [f[:-4] + "_rot.txt" for f in all_files]
for f in all_lrf_files:
    file = open(f).readlines()
    quat = np.zeros((len(file), 4))
    wrong_ids = []
    for i, line in enumerate(file[1:]): # assuming that there is a header row in the file
        lrf = line.split(" ")
        vecs = np.zeros((3, 3))
        vecs[0] = [float(lrf[0]), float(lrf[1]), float(lrf[2])] # x
        vecs[1] = [float(lrf[3]), float(lrf[4]), float(lrf[5])] # y
        vecs[2] = [float(lrf[6]), float(lrf[7]), float(lrf[8][:-1])] # z - don't take last char since it is \n
        rotation = vecs.transpose()
        try:
            potential = Quaternion._from_matrix(rotation, rtol=1e-03, atol=1e-03).q
            if all(p < 1e+20 for p in potential):
                quat[i] = potential
            else:
                quat[i] = np.array([0,0,0,0])
                wrong_ids.append(i)
        except:
            quat[i] = np.array([0,0,0,0])
            wrong_ids.append(i)
    np.savetxt(f[:-8] + '.qua', quat, encoding='ascii')
    np.savetxt(f[:-8] + '.idx', np.array(wrong_ids), encoding='ascii') ###Output _____no_output_____
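For reference, the rotation-matrix-to-quaternion step that pyquaternion performs can be sketched in plain NumPy. `matrix_to_quaternion` below is a hypothetical helper covering only the trace-based, w-dominant branch; the library additionally handles the other branches and validates that the matrix is a rotation:

```python
import numpy as np

def matrix_to_quaternion(R):
    """Convert a proper 3x3 rotation matrix to a (w, x, y, z) quaternion.

    Trace-based formula; assumes trace(R) > -1 so that w != 0.
    """
    w = 0.5 * np.sqrt(1.0 + np.trace(R))
    x = (R[2, 1] - R[1, 2]) / (4.0 * w)
    y = (R[0, 2] - R[2, 0]) / (4.0 * w)
    z = (R[1, 0] - R[0, 1]) / (4.0 * w)
    return np.array([w, x, y, z])

# 90 degree rotation about the z axis
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
print(matrix_to_quaternion(Rz))  # approx [0.7071, 0, 0, 0.7071]
```

A quaternion produced this way is unit-norm by construction, which matches the notebook's sanity check that the components stay finite.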